* [PATCH v9 00/28] arm64: Dom0 ITS emulation
@ 2017-05-11 17:53 Andre Przywara
  2017-05-11 17:53 ` [PATCH v9 01/28] ARM: GICv3: setup number of LPI bits for a GICv3 guest Andre Przywara
                   ` (28 more replies)
  0 siblings, 29 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

this is a reworked version, addressing comments on v8.
After some discussions we came to the conclusion that properly addressing
all the issues we found requires a more serious rework of the VGIC:
* Protecting a struct pending_irq with the VGIC VCPU lock will not be
sufficient; instead we need a per-IRQ lock.
* The proper way to prevent a struct pending_irq for an LPI from being
freed while it is still in use is reference counting.
However it has been decided that these changes (which have been partially
drafted in an RFC series[1] already) are quite intrusive and need more
time to get into proper shape. To not lose the effort already spent on the
ITS review, this series does *not* build on those changes, but instead
tries to address all other issues mentioned in the last review round.
The VGIC rework is then expected to be piled on top of this series.

Besides some smaller changes like adding ASSERTs or comments, there is a new
patch 15/28 to allow access to a struct pending_irq directly from a device
(instead of going via the LPI number). Patch 20/28 got reworked to cover a
corner case where we must avoid putting the same LPI number twice into
different LRs. Those two changes required a slight reordering of some of the
later patches (mostly the ITS command emulation).

For a detailed changelog see below.

Cheers,
Andre

----------------------------------
This series adds support for emulation of an ARM GICv3 ITS interrupt
controller. For hardware which relies on the ITS to provide interrupts for
its peripherals this code is needed to get a machine booted into Dom0 at
all. ITS emulation for DomUs is only really useful with PCI passthrough,
which is not yet available for ARM. It is expected that this feature
will be co-developed with the ITS DomU code. However this code drop
already takes DomU emulation into account, to keep later architectural
changes to a minimum.

This is a technical preview version to allow early testing of the feature.
Things not (properly) addressed in this release:
- There is only support for Dom0 at the moment. DomU support is only really
useful with PCI passthrough, which is not there yet for ARM.
- The MOVALL command is not emulated. In our case there is really nothing
to do here. We might need to revisit this in the future for DomU support.
- The INVALL command might need some rework to be more efficient. Currently
we iterate over all mapped LPIs, which might take a bit longer.
- Indirect tables are not supported. This affects both the host and the
virtual side.
- The ITS tables inside (Dom0) guest memory cannot easily be protected
at the moment (without restricting access to Xen as well). So for now
we trust Dom0 not to touch this memory (which the spec forbids as well).
- With malicious guests (DomUs) there is the possibility of an interrupt
storm triggered by a device. We would need to investigate what that means
for Xen and whether there is a nice way to prevent it. Disabling the LPI on
the host side would require command queuing, which has its downsides when
issued at runtime.
- Dom0 should make sure that the ITS resources (number of LPIs, devices,
events) later handed to a DomU are really limited, as a large number of
them could mean much time spent in Xen to initialize, free or handle them.
It is expected that the toolstack sets up a tailored ITS with just enough
resources to accommodate the needs of the actual passed-through device(s).
- The command queue locking is currently suboptimal and should be made more
fine-grained in the future, if possible.
- Provide support for running with an IOMMU, to map the doorbell page
to all devices.


Some generic design principles:

* The current GIC code statically allocates structures for each supported
IRQ (both for the host and the guest), which is not feasible to copy for
the ITS with its potentially millions of LPI interrupts.
So we refrain from introducing the ITS as a first class Xen interrupt
controller, nor do we hold struct irq_desc's or struct pending_irq's
for each possible LPI.
Fortunately LPIs are only interesting to guests, so we get away with
storing only the virtual IRQ number and the guest VCPU for each allocated
host LPI, which can be stashed into one uint64_t. This data is stored in
a two-level table, which is both memory efficient and quick to access.
We hook into the existing IRQ handling and VGIC code to avoid accessing
the normal structures, providing alternative methods for getting the
needed information (priority, is enabled?) for LPIs.
Whenever a guest maps a device, we allocate the maximum required number
of struct pending_irq's, so that any triggering LPI can find its data
structure. Upon the guest actually mapping the LPI, a pointer to the
corresponding pending_irq is entered into a radix tree, so that it can
be quickly looked up.

* On the guest side we (later will) have to deal with malicious guests
trying to hog Xen with mapping requests for a lot of LPIs, for instance.
As the ITS actually uses system memory for storing status information,
we use this memory (which the guest has to provide) to naturally limit
a guest. Whenever we need information from any of the ITS tables, we
temporarily map them (which is cheap on arm64) and copy the required data.

* An obvious approach to handling some guest ITS commands would be to
propagate them to the host, for instance to map devices and LPIs and
to enable or disable LPIs.
However this (later with DomU support) will create an attack vector, as
a malicious guest could try to fill the host command queue with
propagated commands.
So we try to avoid this situation: Dom0 sending a device mapping (MAPD)
command is the only time we allow queuing commands to the host ITS command
queue, as this seems to be the only reliable way of getting the
required information at the moment. At that same time we already map all
events to LPIs and enable them. This avoids sending commands later at
runtime, as we can deal with mappings and LPI enabling/disabling
internally.

To accommodate the tech preview nature of this feature at the moment, there
is a Kconfig option to enable it. Also it is supported on arm64 only, which
will most likely not change in the future.
This leads to some hideous constructs like an #ifdef'ed header file with
empty function stubs to accommodate arm32 and non-ITS builds, which share
some generic code paths with the ITS emulation.
The number of supported LPIs can be limited on the command line, in case
the number reported by the hardware is too high. As Xen cannot foresee how
many interrupts the guests will need, we cater for as many as possible.
The command line parameter is called max-lpi-bits and expresses the number
of bits required to hold an interrupt ID. It defaults to 20, or to the
number of bits supported by the hardware, whichever is lower.

This code boots Dom0 on an ARM Fast Model with ITS support. I tried to
address the issues seen by people running the previous versions on real
hardware, though I couldn't verify this myself.
So any testing, bug reports (and possibly even fixes) are very welcome.

The code can also be found on the its/v9 branch here:
git://linux-arm.org/xen-ap.git
http://www.linux-arm.org/git?p=xen-ap.git;a=shortlog;h=refs/heads/its/v9

Cheers,
Andre

Changelog v8 ... v9:
- [01/28]: initialize number of interrupt IDs for DomUs also
- [02/28]: move priority reading back up front
- [03/28]: enumerate all call sites in commit message, add ASSERTs,
           add "unlikely" hints, avoid skipping ASSERTs, add comment to
           irq_to_pending() definition
- [04/28]: explain expectation of device state while destroying domain
- [05/28]: document case of invalid LPI, change dummy priority to 0xff
- [08/28]: check cross page boundary condition early in function
- [10/28]: initialize status and lr member as well
- [11/28]: check lpi_vcpu_id to cover all virtual CPUs
- [12/28]: add spin lock ASSERT
- [13/28]: introduce types for our ITS table entries, fix error messages
- [14/28]: use new ITS table entry types
- [15/28]: new patch to introduce pending_irq lookup function
- [17/28]: verify size of collection table entry
- [18/28]: use new pending_irq lookup function
- [19/28]: use new pending_irq lookup function, collection table type and
           vgic_init_pending_irq, add Dom0 ASSERT and unmap devices for DomUs
- [20/28]: document PRISTINE_LPI flag, fix typo, avoid double insertion of
           the same LPI into different LRs
- [21/28]: use new pending_irq lookup function, avoid explict LPI number
           parameter
- [22/28]: add physical affinity TODO, use new table type and pending_irq
           lookup function, fix error message
- [24/28]: use pending_irq lookup function, drop explicit LPI number parameter
- [25/28]: drop explicit LPI number parameter
- [27/28]: use new ITS table entry type

Changelog v7 ... v8:
- drop list parameter and rename to gicv3_its_make_hwdom_dt_nodes()
- remove rebase artifacts
- add irq_enter/irq_exit() calls
- propagates number of host LPIs and number of event IDs to Dom0
- add proper coverage of all addresses in ITS MMIO handler
- avoid vcmd_lock for CBASER writes
- fix missing irqsave/irqrestore on VGIC VCPU lock
- move struct pending_irq use under the VGIC VCPU lock
- protect gic_raise_guest_irq() against NULL pending_irq
- improve device and collection table entry size documentation
- count number of ITSes to increase mmio_count
- rework MAPD, DISCARD, MAPTI and MOVI to take proper locks
- properly rollback failing MAPD and MAPTI calls
- rework functions to update property table
- return error on vgic_access_guest_memory crossing page boundary
- make sure CREADR access is atomic

Changelog v5 ... v6:
- reordered patches to allow splitting the series
- introduced functions later to avoid warnings on intermediate builds
- refactored common code changes into separate patches
- dropped GENMASK_ULL and BIT_ULL (both patches and their usage later)
- rework locking in MMIO register reads and writes
- protect new code from being executed without an ITS being configured
- fix vgic_access_guest_memory (now a separate patch)
- some more comments and TODOs

Changelog v4 ... v5:
- adding many comments
- spinlock asserts
- rename r_host_lpis to max_host_lpi_ids
- remove max_its_device_bits command line
- add warning on high number of LPIs
- avoid potential leak on host MAPD
- properly handle nr_events rounding
- remove unmap_all_devices(), replace with ASSERT
- add barriers for (lockless) host LPI lookups
- add proper locking in ITS and redist MMIO register handling
- rollback failing device mapping
- fix various printks
- add vgic_access_guest_memory() and use it
- (getting rid of page mapping functions and helpers)
- drop table mapping / unmapping on redist/ITS enable/disable
- minor reworks in functions as per review comments
- fix ITS enablement check
- move lpi_to_pending() and lpi_get_priority() to vgic_ops
- move do_LPI() to gic_hw_ops
- whitespace and hard tabs fixes
- introduce ITS domain init function (and use it for the rbtree)
- enable IRQs around do_LPI
- implement TODOs for later optimizations
- add "v" prefix to variables holding virtual properties
- provide locked and normal versions of read/write_itte
- only CLEAR LPI if not already guest visible (plus comment)
- update LPI property on MAPTI
- store vcpu_id in pending_irq for LPIs (helps INVALL)
- improve INVALL implementation to only cover LPIs on this VCPU
- improve virtual BASE register initialization
- limit number of virtual LPIs to 24 bits (Linux bug at 32??)
- only inject LPIs if redistributor is actually enabled

Changelog v3 .. v4:
- make HAS_ITS depend on EXPERT
- introduce new patch 02 to initialize host ITS early
- fix cmd_lock init position
- introduce warning on high number of LPI allocations
- various int -> unsigned fixes
- adding and improving comments
- rate limit ITS command queue full msg
- drop unneeded checks
- validate against allowed number of device IDs
- avoid memory leaks when removing devices
- improve algorithm for finding free host LPI
- convert unmap_all_devices from goto to while loop
- add message on remapping ITS device
- name virtual device / event IDs properly
- use atomic read when reading ITT entry

Changelog v2 .. v3:
- preallocate struct pending_irq's
- map ITS and redistributor tables only on demand
- store property, enable and pending bit in struct pending_irq
- improve error checking and handling
- add comments

Changelog v1 .. v2:
- clean up header file inclusion
- rework host ITS table allocation: observe attributes, many fixes
- remove patch 1 to export __flush_dcache_area, use existing function instead
- use number of LPIs internally instead of number of bits
- keep host_its_list as private as possible
- keep struct its_devices private
- rework gicv3_its_map_guest_devices
- fix rbtree issues
- more error handling and propagation
- cope with GICv4 implementations (but no virtual LPI features!)
- abstract host and guest ITSes by using doorbell addresses
- join per-redistributor variables into one per-CPU structure
- fix data types (unsigned int)
- many minor bug fixes

(Rough) changelog RFC-v2 .. v1:
- split host ITS driver into gic-v3-lpi.c and gic-v3-its.c part
- rename virtual ITS driver file to vgic-v3-its.c
- use macros and named constants for all magic numbers
- use atomic accessors for accessing the host LPI data
- remove leftovers from connecting virtual and host ITSes
- bail out if host ITS is disabled in the DT
- rework map/unmap_guest_pages():
    - split off p2m part as get/put_guest_pages (to be done on allocation)
    - get rid of vmap, using map_domain_page() instead
- delay allocation of virtual tables until actual LPI/ITS enablement
- properly size both virtual and physical tables upon allocation
- fix put_domain() locking issues in physdev_op and LPI handling code
- add and extend comments in various areas
- fix lotsa coding style and white space issues, including comment style
- add locking to data structures not yet covered
- fix various locking issues
- use an rbtree to deal with ITS devices (instead of a list)
- properly handle memory attributes for ITS tables
- handle cacheable/non-cacheable ITS table mappings
- sanitize guest provided ITS/LPI table attributes
- fix breakage on non-GICv2 compatible host GICv3 controllers
- add command line parameters on top of Kconfig options
- properly wait for an ITS to become quiescent before enabling it
- handle host ITS command queue errors
- actually wait for host ITS command completion (READR==WRITER)
- fix ARM32 compilation
- various patch splits and reorderings

Andre Przywara (27):
  ARM: GICv3: setup number of LPI bits for a GICv3 guest
  ARM: VGIC: move irq_to_pending() calls under the VGIC VCPU lock
  ARM: GIC: Add checks for NULL pointer pending_irq's
  ARM: GICv3: introduce separate pending_irq structs for LPIs
  ARM: GICv3: forward pending LPIs to guests
  ARM: GICv3: enable ITS and LPIs on the host
  ARM: vGICv3: handle virtual LPI pending and property tables
  ARM: vGICv3: re-use vgic_reg64_check_access
  ARM: GIC: export and extend vgic_init_pending_irq()
  ARM: VGIC: add vcpu_id to struct pending_irq
  ARM: vGIC: advertise LPI support
  ARM: vITS: add command handling stub and MMIO emulation
  ARM: vITS: introduce translation table walks
  ARM: vITS: provide access to struct pending_irq
  ARM: vITS: handle INT command
  ARM: vITS: handle MAPC command
  ARM: vITS: handle CLEAR command
  ARM: vITS: handle MAPD command
  ARM: GICv3: handle unmapped LPIs
  ARM: vITS: handle MAPTI command
  ARM: vITS: handle MOVI command
  ARM: vITS: handle DISCARD command
  ARM: vITS: handle INV command
  ARM: vITS: handle INVALL command
  ARM: vITS: increase mmio_count for each ITS
  ARM: vITS: create and initialize virtual ITSes for Dom0
  ARM: vITS: create ITS subnodes for Dom0 DT

Vijaya Kumar K (1):
  ARM: introduce vgic_access_guest_memory()

 xen/arch/arm/gic-v2.c            |    7 +
 xen/arch/arm/gic-v3-its.c        |  222 ++++++
 xen/arch/arm/gic-v3-lpi.c        |  104 +++
 xen/arch/arm/gic-v3.c            |   29 +-
 xen/arch/arm/gic.c               |   98 ++-
 xen/arch/arm/vgic-v2.c           |   15 +
 xen/arch/arm/vgic-v3-its.c       | 1404 +++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/vgic-v3.c           |  307 ++++++++-
 xen/arch/arm/vgic.c              |   94 ++-
 xen/include/asm-arm/domain.h     |   12 +-
 xen/include/asm-arm/event.h      |    3 +
 xen/include/asm-arm/gic.h        |    2 +
 xen/include/asm-arm/gic_v3_its.h |   50 ++
 xen/include/asm-arm/vgic-emul.h  |    9 +
 xen/include/asm-arm/vgic.h       |   18 +-
 xen/include/public/arch-arm.h    |    4 +
 16 files changed, 2334 insertions(+), 44 deletions(-)

-- 
2.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v9 01/28] ARM: GICv3: setup number of LPI bits for a GICv3 guest
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-11 18:34   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 02/28] ARM: VGIC: move irq_to_pending() calls under the VGIC VCPU lock Andre Przywara
                   ` (27 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

The host supports a certain number of LPI identifiers, as stored in
the GICD_TYPER register.
Store this number from the hardware register in vgic_v3_hw to allow
injecting the very same number into a guest (Dom0).
DomUs get a fixed limited number for now. We may want to revisit this
when we get proper DomU ITS support.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic-v3.c         | 6 +++++-
 xen/arch/arm/vgic-v3.c        | 9 ++++++++-
 xen/include/asm-arm/domain.h  | 1 +
 xen/include/asm-arm/vgic.h    | 3 ++-
 xen/include/public/arch-arm.h | 4 ++++
 5 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index a559e5e..29c8964 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1579,6 +1579,7 @@ static int __init gicv3_init(void)
 {
     int res, i;
     uint32_t reg;
+    unsigned int intid_bits;
 
     if ( !cpu_has_gicv3 )
     {
@@ -1622,8 +1623,11 @@ static int __init gicv3_init(void)
                i, r->base, r->base + r->size);
     }
 
+    reg = readl_relaxed(GICD + GICD_TYPER);
+    intid_bits = GICD_TYPE_ID_BITS(reg);
+
     vgic_v3_setup_hw(dbase, gicv3.rdist_count, gicv3.rdist_regions,
-                     gicv3.rdist_stride);
+                     gicv3.rdist_stride, intid_bits);
     gicv3_init_v2();
 
     spin_lock_init(&gicv3.lock);
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index d10757a..25e16dc 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -57,18 +57,21 @@ static struct {
     unsigned int nr_rdist_regions;
     const struct rdist_region *regions;
     uint32_t rdist_stride; /* Re-distributor stride */
+    unsigned int intid_bits;  /* Number of interrupt ID bits */
 } vgic_v3_hw;
 
 void vgic_v3_setup_hw(paddr_t dbase,
                       unsigned int nr_rdist_regions,
                       const struct rdist_region *regions,
-                      uint32_t rdist_stride)
+                      uint32_t rdist_stride,
+                      unsigned int intid_bits)
 {
     vgic_v3_hw.enabled = 1;
     vgic_v3_hw.dbase = dbase;
     vgic_v3_hw.nr_rdist_regions = nr_rdist_regions;
     vgic_v3_hw.regions = regions;
     vgic_v3_hw.rdist_stride = rdist_stride;
+    vgic_v3_hw.intid_bits = intid_bits;
 }
 
 static struct vcpu *vgic_v3_irouter_to_vcpu(struct domain *d, uint64_t irouter)
@@ -1482,6 +1485,8 @@ static int vgic_v3_domain_init(struct domain *d)
 
             first_cpu += size / d->arch.vgic.rdist_stride;
         }
+
+        d->arch.vgic.intid_bits = vgic_v3_hw.intid_bits;
     }
     else
     {
@@ -1497,6 +1502,8 @@ static int vgic_v3_domain_init(struct domain *d)
         d->arch.vgic.rdist_regions[0].base = GUEST_GICV3_GICR0_BASE;
         d->arch.vgic.rdist_regions[0].size = GUEST_GICV3_GICR0_SIZE;
         d->arch.vgic.rdist_regions[0].first_cpu = 0;
+
+        d->arch.vgic.intid_bits = GUEST_GICV3_GICD_INTID_BITS;
     }
 
     ret = vgic_v3_its_init_domain(d);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 6de8082..7c3829d 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -111,6 +111,7 @@ struct arch_domain
         uint32_t rdist_stride;              /* Re-Distributor stride */
         struct rb_root its_devices;         /* Devices mapped to an ITS */
         spinlock_t its_devices_lock;        /* Protects the its_devices tree */
+        unsigned int intid_bits;
 #endif
     } vgic;
 
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 544867a..df75064 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -346,7 +346,8 @@ struct rdist_region;
 void vgic_v3_setup_hw(paddr_t dbase,
                       unsigned int nr_rdist_regions,
                       const struct rdist_region *regions,
-                      uint32_t rdist_stride);
+                      uint32_t rdist_stride,
+                      unsigned int intid_bits);
 #endif
 
 #endif /* __ASM_ARM_VGIC_H__ */
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index bd974fb..033dcee 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -400,6 +400,10 @@ typedef uint64_t xen_callback_t;
 #define GUEST_GICV3_GICD_BASE      xen_mk_ullong(0x03001000)
 #define GUEST_GICV3_GICD_SIZE      xen_mk_ullong(0x00010000)
 
+/* TODO: Should this number be a tool stack decision? */
+/* The number of interrupt ID bits a guest (not Dom0) sees. */
+#define GUEST_GICV3_GICD_INTID_BITS     16
+
 #define GUEST_GICV3_RDIST_STRIDE   xen_mk_ullong(0x00020000)
 #define GUEST_GICV3_RDIST_REGIONS  1
 
-- 
2.9.0




* [PATCH v9 02/28] ARM: VGIC: move irq_to_pending() calls under the VGIC VCPU lock
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
  2017-05-11 17:53 ` [PATCH v9 01/28] ARM: GICv3: setup number of LPI bits for a GICv3 guest Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-20  0:34   ` Stefano Stabellini
  2017-05-11 17:53 ` [PATCH v9 03/28] ARM: GIC: Add checks for NULL pointer pending_irq's Andre Przywara
                   ` (26 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

So far irq_to_pending() is just a convenience function to lookup
statically allocated arrays. This will change with LPIs, which are
more dynamic.
The proper answer to the issue of preventing stale pointers is
ref-counting, which requires more substantial changes and will be
introduced with a later rework.
For now move the irq_to_pending() calls that are used with LPIs under the
VGIC VCPU lock, and only use the returned pointer while holding the lock.
This prevents the memory from being freed while we use it.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic.c  | 5 ++++-
 xen/arch/arm/vgic.c | 4 +++-
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index da19130..dcb1783 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -402,10 +402,13 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, struct pending_irq *n)
 
 void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
 {
-    struct pending_irq *p = irq_to_pending(v, virtual_irq);
+    struct pending_irq *p;
     unsigned long flags;
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+    p = irq_to_pending(v, virtual_irq);
+
     if ( !list_empty(&p->lr_queue) )
         list_del_init(&p->lr_queue);
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 83569b0..d30f324 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -466,7 +466,7 @@ void vgic_clear_pending_irqs(struct vcpu *v)
 void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
 {
     uint8_t priority;
-    struct pending_irq *iter, *n = irq_to_pending(v, virq);
+    struct pending_irq *iter, *n;
     unsigned long flags;
     bool running;
 
@@ -474,6 +474,8 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
+    n = irq_to_pending(v, virq);
+
     /* vcpu offline */
     if ( test_bit(_VPF_down, &v->pause_flags) )
     {
-- 
2.9.0




* [PATCH v9 03/28] ARM: GIC: Add checks for NULL pointer pending_irq's
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
  2017-05-11 17:53 ` [PATCH v9 01/28] ARM: GICv3: setup number of LPI bits for a GICv3 guest Andre Przywara
  2017-05-11 17:53 ` [PATCH v9 02/28] ARM: VGIC: move irq_to_pending() calls under the VGIC VCPU lock Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-12 14:19   ` Julien Grall
  2017-05-20  1:25   ` Stefano Stabellini
  2017-05-11 17:53 ` [PATCH v9 04/28] ARM: GICv3: introduce separate pending_irq structs for LPIs Andre Przywara
                   ` (25 subsequent siblings)
  28 siblings, 2 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

For LPIs the struct pending_irq's are dynamically allocated and the
pointers will be stored in a radix tree. Since an LPI can be "unmapped"
at any time, teach the VGIC how to deal with irq_to_pending() returning
a NULL pointer.
In this case we just do nothing, or clean up the LR if the virtual LPI
number was still in one.

Those are all call sites for irq_to_pending(), as per:
"git grep irq_to_pending", and their evaluations:
(PROTECTED means: added NULL check and bailing out)

    xen/arch/arm/gic.c:
gic_route_irq_to_guest(): only called for SPIs, added ASSERT()
gic_remove_irq_from_guest(): only called for SPIs, added ASSERT()
gic_remove_from_queues(): PROTECTED, called within VCPU VGIC lock
gic_raise_inflight_irq(): PROTECTED, called under VCPU VGIC lock
gic_raise_guest_irq(): PROTECTED, called under VCPU VGIC lock
gic_update_one_lr(): PROTECTED, called under VCPU VGIC lock

    xen/arch/arm/vgic.c:
vgic_migrate_irq(): not called for LPIs (virtual IRQs), added ASSERT()
arch_move_irqs(): not iterating over LPIs, added ASSERT()
vgic_disable_irqs(): not called for LPIs, added ASSERT()
vgic_enable_irqs(): not called for LPIs, added ASSERT()
vgic_vcpu_inject_irq(): PROTECTED, moved under VCPU VGIC lock

    xen/include/asm-arm/event.h:
local_events_need_delivery_nomask(): only called for a PPI, added ASSERT()

    xen/include/asm-arm/vgic.h:
(prototype)

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic.c          | 34 ++++++++++++++++++++++++++++++----
 xen/arch/arm/vgic.c         | 24 ++++++++++++++++++++++++
 xen/include/asm-arm/event.h |  3 +++
 3 files changed, 57 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index dcb1783..46bb306 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -148,6 +148,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
     /* Caller has already checked that the IRQ is an SPI */
     ASSERT(virq >= 32);
     ASSERT(virq < vgic_num_irqs(d));
+    ASSERT(!is_lpi(virq));
 
     vgic_lock_rank(v_target, rank, flags);
 
@@ -184,6 +185,7 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
     ASSERT(spin_is_locked(&desc->lock));
     ASSERT(test_bit(_IRQ_GUEST, &desc->status));
     ASSERT(p->desc == desc);
+    ASSERT(!is_lpi(virq));
 
     vgic_lock_rank(v_target, rank, flags);
 
@@ -408,9 +410,13 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
     p = irq_to_pending(v, virtual_irq);
-
-    if ( !list_empty(&p->lr_queue) )
+    /*
+     * If an LPI has been removed meanwhile, it has been cleaned up
+     * already, so nothing to remove here.
+     */
+    if ( likely(p) && !list_empty(&p->lr_queue) )
         list_del_init(&p->lr_queue);
+
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
@@ -418,6 +424,10 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
 {
     struct pending_irq *n = irq_to_pending(v, virtual_irq);
 
+    /* If an LPI has been removed meanwhile, there is nothing left to raise. */
+    if ( unlikely(!n) )
+        return;
+
     ASSERT(spin_is_locked(&v->arch.vgic.lock));
 
     if ( list_empty(&n->lr_queue) )
@@ -437,20 +447,25 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
 {
     int i;
     unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
+    struct pending_irq *p = irq_to_pending(v, virtual_irq);
 
     ASSERT(spin_is_locked(&v->arch.vgic.lock));
 
+    if ( unlikely(!p) )
+        /* An unmapped LPI does not need to be raised. */
+        return;
+
     if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
     {
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
-            gic_set_lr(i, irq_to_pending(v, virtual_irq), GICH_LR_PENDING);
+            gic_set_lr(i, p, GICH_LR_PENDING);
             return;
         }
     }
 
-    gic_add_to_lr_pending(v, irq_to_pending(v, virtual_irq));
+    gic_add_to_lr_pending(v, p);
 }
 
 static void gic_update_one_lr(struct vcpu *v, int i)
@@ -465,6 +480,17 @@ static void gic_update_one_lr(struct vcpu *v, int i)
     gic_hw_ops->read_lr(i, &lr_val);
     irq = lr_val.virq;
     p = irq_to_pending(v, irq);
+    /* An LPI might have been unmapped, in which case we just clean up here. */
+    if ( unlikely(!p) )
+    {
+        ASSERT(is_lpi(irq));
+
+        gic_hw_ops->clear_lr(i);
+        clear_bit(i, &this_cpu(lr_mask));
+
+        return;
+    }
+
     if ( lr_val.state & GICH_LR_ACTIVE )
     {
         set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index d30f324..8a5d93b 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -242,6 +242,9 @@ bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
     unsigned long flags;
     struct pending_irq *p = irq_to_pending(old, irq);
 
+    /* This will never be called for an LPI, as we don't migrate them. */
+    ASSERT(!is_lpi(irq));
+
     /* nothing to do for virtual interrupts */
     if ( p->desc == NULL )
         return true;
@@ -291,6 +294,9 @@ void arch_move_irqs(struct vcpu *v)
     struct vcpu *v_target;
     int i;
 
+    /* We don't migrate LPIs at the moment. */
+    ASSERT(!is_lpi(vgic_num_irqs(d) - 1));
+
     for ( i = 32; i < vgic_num_irqs(d); i++ )
     {
         v_target = vgic_get_target_vcpu(v, i);
@@ -310,6 +316,9 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
     int i = 0;
     struct vcpu *v_target;
 
+    /* LPIs will never be disabled via this function. */
+    ASSERT(!is_lpi(32 * n + 31));
+
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
         v_target = vgic_get_target_vcpu(v, irq);
@@ -352,6 +361,9 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
     struct vcpu *v_target;
     struct domain *d = v->domain;
 
+    /* LPIs will never be enabled via this function. */
+    ASSERT(!is_lpi(32 * n + 31));
+
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
         v_target = vgic_get_target_vcpu(v, irq);
@@ -432,6 +444,12 @@ bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
     return true;
 }
 
+/*
+ * Returns the pointer to the struct pending_irq belonging to the given
+ * interrupt.
+ * This can return NULL if called for an LPI which has been unmapped
+ * meanwhile.
+ */
 struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
 {
     struct pending_irq *n;
@@ -475,6 +493,12 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
     n = irq_to_pending(v, virq);
+    /* If an LPI has been removed, there is nothing to inject here. */
+    if ( unlikely(!n) )
+    {
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        return;
+    }
 
     /* vcpu offline */
     if ( test_bit(_VPF_down, &v->pause_flags) )
diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
index 5330dfe..caefa50 100644
--- a/xen/include/asm-arm/event.h
+++ b/xen/include/asm-arm/event.h
@@ -19,6 +19,9 @@ static inline int local_events_need_delivery_nomask(void)
     struct pending_irq *p = irq_to_pending(current,
                                            current->domain->arch.evtchn_irq);
 
+    /* Does not work for LPIs. */
+    ASSERT(!is_lpi(current->domain->arch.evtchn_irq));
+
     /* XXX: if the first interrupt has already been delivered, we should
      * check whether any other interrupts with priority higher than the
      * one in GICV_IAR are in the lr_pending queue or in the LR
-- 
2.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 108+ messages in thread

* [PATCH v9 04/28] ARM: GICv3: introduce separate pending_irq structs for LPIs
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (2 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 03/28] ARM: GIC: Add checks for NULL pointer pending_irq's Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-12 14:22   ` Julien Grall
  2017-05-22 21:52   ` Stefano Stabellini
  2017-05-11 17:53 ` [PATCH v9 05/28] ARM: GICv3: forward pending LPIs to guests Andre Przywara
                   ` (24 subsequent siblings)
  28 siblings, 2 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

For the same reason that allocating a struct irq_desc for each
possible LPI is not an option, having a struct pending_irq for each LPI
is also not feasible. We only care about mapped LPIs, so we can get away
with having struct pending_irq's only for them.
Maintain a per-domain radix tree in which we store the pointer to the
respective pending_irq, indexed by the virtual LPI number.
The memory for the actual structures has already been allocated per
device, at device mapping time.
Teach the existing VGIC functions to find the right pointer when being
given a virtual LPI number.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v2.c       |  8 ++++++++
 xen/arch/arm/vgic-v3.c       | 30 ++++++++++++++++++++++++++++++
 xen/arch/arm/vgic.c          |  2 ++
 xen/include/asm-arm/domain.h |  2 ++
 xen/include/asm-arm/vgic.h   |  2 ++
 5 files changed, 44 insertions(+)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index dc9f95b..0587569 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -702,10 +702,18 @@ static void vgic_v2_domain_free(struct domain *d)
     /* Nothing to be cleanup for this driver */
 }
 
+static struct pending_irq *vgic_v2_lpi_to_pending(struct domain *d,
+                                                  unsigned int vlpi)
+{
+    /* Dummy function, no LPIs on a VGICv2. */
+    BUG();
+}
+
 static const struct vgic_ops vgic_v2_ops = {
     .vcpu_init   = vgic_v2_vcpu_init,
     .domain_init = vgic_v2_domain_init,
     .domain_free = vgic_v2_domain_free,
+    .lpi_to_pending = vgic_v2_lpi_to_pending,
     .max_vcpus = 8,
 };
 
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 25e16dc..44d2b50 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1454,6 +1454,9 @@ static int vgic_v3_domain_init(struct domain *d)
     d->arch.vgic.nr_regions = rdist_count;
     d->arch.vgic.rdist_regions = rdist_regions;
 
+    rwlock_init(&d->arch.vgic.pend_lpi_tree_lock);
+    radix_tree_init(&d->arch.vgic.pend_lpi_tree);
+
     /*
      * Domain 0 gets the hardware address.
      * Guests get the virtual platform layout.
@@ -1535,14 +1538,41 @@ static int vgic_v3_domain_init(struct domain *d)
 static void vgic_v3_domain_free(struct domain *d)
 {
     vgic_v3_its_free_domain(d);
+    /*
+     * It is expected that at this point all actual ITS devices have been
+     * cleaned up already. The struct pending_irq's, for which the pointers
+     * have been stored in the radix tree, are allocated and freed by device.
+     * On device unmapping all the entries are removed from the tree and
+     * the backing memory is freed.
+     */
+    radix_tree_destroy(&d->arch.vgic.pend_lpi_tree, NULL);
     xfree(d->arch.vgic.rdist_regions);
 }
 
+/*
+ * Looks up a virtual LPI number in our tree of mapped LPIs. This will return
+ * the corresponding struct pending_irq, which we also use to store the
+ * enabled and pending bit plus the priority.
+ * Returns NULL if an LPI cannot be found (or no LPIs are supported).
+ */
+static struct pending_irq *vgic_v3_lpi_to_pending(struct domain *d,
+                                                  unsigned int lpi)
+{
+    struct pending_irq *pirq;
+
+    read_lock(&d->arch.vgic.pend_lpi_tree_lock);
+    pirq = radix_tree_lookup(&d->arch.vgic.pend_lpi_tree, lpi);
+    read_unlock(&d->arch.vgic.pend_lpi_tree_lock);
+
+    return pirq;
+}
+
 static const struct vgic_ops v3_ops = {
     .vcpu_init   = vgic_v3_vcpu_init,
     .domain_init = vgic_v3_domain_init,
     .domain_free = vgic_v3_domain_free,
     .emulate_reg  = vgic_v3_emulate_reg,
+    .lpi_to_pending = vgic_v3_lpi_to_pending,
     /*
      * We use both AFF1 and AFF0 in (v)MPIDR. Thus, the max number of CPU
      * that can be supported is up to 4096(==256*16) in theory.
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 8a5d93b..bf6fb60 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -457,6 +457,8 @@ struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
      * are used for SPIs; the rests are used for per cpu irqs */
     if ( irq < 32 )
         n = &v->arch.vgic.pending_irqs[irq];
+    else if ( is_lpi(irq) )
+        n = v->domain->arch.vgic.handler->lpi_to_pending(v->domain, irq);
     else
         n = &v->domain->arch.vgic.pending_irqs[irq - 32];
     return n;
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 7c3829d..3d8e84c 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -111,6 +111,8 @@ struct arch_domain
         uint32_t rdist_stride;              /* Re-Distributor stride */
         struct rb_root its_devices;         /* Devices mapped to an ITS */
         spinlock_t its_devices_lock;        /* Protects the its_devices tree */
+        struct radix_tree_root pend_lpi_tree; /* Stores struct pending_irq's */
+        rwlock_t pend_lpi_tree_lock;        /* Protects the pend_lpi_tree */
         unsigned int intid_bits;
 #endif
     } vgic;
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index df75064..c9075a9 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -134,6 +134,8 @@ struct vgic_ops {
     void (*domain_free)(struct domain *d);
     /* vGIC sysreg/cpregs emulate */
     bool (*emulate_reg)(struct cpu_user_regs *regs, union hsr hsr);
+    /* lookup the struct pending_irq for a given LPI interrupt */
+    struct pending_irq *(*lpi_to_pending)(struct domain *d, unsigned int vlpi);
     /* Maximum number of vCPU supported */
     const unsigned int max_vcpus;
 };
-- 
2.9.0



* [PATCH v9 05/28] ARM: GICv3: forward pending LPIs to guests
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (3 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 04/28] ARM: GICv3: introduce separate pending_irq structs for LPIs Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-12 14:55   ` Julien Grall
  2017-05-22 22:03   ` Stefano Stabellini
  2017-05-11 17:53 ` [PATCH v9 06/28] ARM: GICv3: enable ITS and LPIs on the host Andre Przywara
                   ` (23 subsequent siblings)
  28 siblings, 2 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Upon receiving an LPI on the host, we need to find the right VCPU and
virtual IRQ number to get this IRQ injected.
Iterate our two-level LPI table to find this information quickly when
the host takes an LPI. Call the existing injection function to let the
GIC emulation deal with this interrupt.
We also enhance struct pending_irq to cache the pending bit and the
priority information for LPIs: reading the information from there is
faster than accessing the property table in guest memory. The new field
also reuses existing padding, so it does not require more memory.
This introduces do_LPI() as a hardware gic_ops callback, plus a vgic_ops
function to retrieve the (cached) priority value of an LPI.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic-v2.c            |  7 ++++
 xen/arch/arm/gic-v3-lpi.c        | 71 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/gic-v3.c            |  1 +
 xen/arch/arm/gic.c               |  8 ++++-
 xen/arch/arm/vgic-v2.c           |  7 ++++
 xen/arch/arm/vgic-v3.c           | 18 ++++++++++
 xen/arch/arm/vgic.c              |  7 +++-
 xen/include/asm-arm/domain.h     |  3 +-
 xen/include/asm-arm/gic.h        |  2 ++
 xen/include/asm-arm/gic_v3_its.h |  8 +++++
 xen/include/asm-arm/vgic.h       |  2 ++
 11 files changed, 131 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 270a136..ffbe47c 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -1217,6 +1217,12 @@ static int __init gicv2_init(void)
     return 0;
 }
 
+static void gicv2_do_LPI(unsigned int lpi)
+{
+    /* No LPIs in a GICv2 */
+    BUG();
+}
+
 const static struct gic_hw_operations gicv2_ops = {
     .info                = &gicv2_info,
     .init                = gicv2_init,
@@ -1244,6 +1250,7 @@ const static struct gic_hw_operations gicv2_ops = {
     .make_hwdom_madt     = gicv2_make_hwdom_madt,
     .map_hwdom_extra_mappings = gicv2_map_hwdown_extra_mappings,
     .iomem_deny_access   = gicv2_iomem_deny_access,
+    .do_LPI              = gicv2_do_LPI,
 };
 
 /* Set up the GIC */
diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index 292f2d0..44f6315 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -136,6 +136,77 @@ uint64_t gicv3_get_redist_address(unsigned int cpu, bool use_pta)
         return per_cpu(lpi_redist, cpu).redist_id << 16;
 }
 
+/*
+ * Handle incoming LPIs, which are a bit special, because they are potentially
+ * numerous and also only get injected into guests. Treat them specially here,
+ * by just looking up their target vCPU and virtual LPI number and hand it
+ * over to the injection function.
+ * Please note that LPIs are edge-triggered only, also have no active state,
+ * so spurious interrupts on the host side are no issue (we can just ignore
+ * them).
+ * Also a guest cannot expect that firing interrupts that haven't been
+ * fully configured yet will reach the CPU, so we don't need to care about
+ * this special case.
+ */
+void gicv3_do_LPI(unsigned int lpi)
+{
+    struct domain *d;
+    union host_lpi *hlpip, hlpi;
+    struct vcpu *vcpu;
+
+    irq_enter();
+
+    /* EOI the LPI already. */
+    WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
+
+    /* Find out if a guest mapped something to this physical LPI. */
+    hlpip = gic_get_host_lpi(lpi);
+    if ( !hlpip )
+        goto out;
+
+    hlpi.data = read_u64_atomic(&hlpip->data);
+
+    /*
+     * Unmapped events are marked with an invalid LPI ID. We can safely
+     * ignore them, as they have no further state and no-one can expect
+     * to see them if they have not been mapped.
+     */
+    if ( hlpi.virt_lpi == INVALID_LPI )
+        goto out;
+
+    d = rcu_lock_domain_by_id(hlpi.dom_id);
+    if ( !d )
+        goto out;
+
+    /* Make sure we don't step beyond the vcpu array. */
+    if ( hlpi.vcpu_id >= d->max_vcpus )
+    {
+        rcu_unlock_domain(d);
+        goto out;
+    }
+
+    vcpu = d->vcpu[hlpi.vcpu_id];
+
+    /* Check if the VCPU is ready to receive LPIs. */
+    if ( vcpu->arch.vgic.flags & VGIC_V3_LPIS_ENABLED )
+        /*
+         * TODO: Investigate what to do here for potential interrupt storms.
+         * As we keep all host LPIs enabled, for disabling LPIs we would need
+         * to queue a ITS host command, which we avoid so far during a guest's
+         * runtime. Also re-enabling would trigger a host command upon the
+         * guest sending a command, which could be an attack vector for
+         * hogging the host command queue.
+         * See the thread around here for some background:
+         * https://lists.xen.org/archives/html/xen-devel/2016-12/msg00003.html
+         */
+        vgic_vcpu_inject_irq(vcpu, hlpi.virt_lpi);
+
+    rcu_unlock_domain(d);
+
+out:
+    irq_exit();
+}
+
 static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
 {
     uint64_t val;
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index 29c8964..8140c5f 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1674,6 +1674,7 @@ static const struct gic_hw_operations gicv3_ops = {
     .make_hwdom_dt_node  = gicv3_make_hwdom_dt_node,
     .make_hwdom_madt     = gicv3_make_hwdom_madt,
     .iomem_deny_access   = gicv3_iomem_deny_access,
+    .do_LPI              = gicv3_do_LPI,
 };
 
 static int __init gicv3_dt_preinit(struct dt_device_node *node, const void *data)
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 46bb306..fd3fa05 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -734,7 +734,13 @@ void gic_interrupt(struct cpu_user_regs *regs, int is_fiq)
             do_IRQ(regs, irq, is_fiq);
             local_irq_disable();
         }
-        else if (unlikely(irq < 16))
+        else if ( is_lpi(irq) )
+        {
+            local_irq_enable();
+            gic_hw_ops->do_LPI(irq);
+            local_irq_disable();
+        }
+        else if ( unlikely(irq < 16) )
         {
             do_sgi(regs, irq);
         }
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 0587569..df91940 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -709,11 +709,18 @@ static struct pending_irq *vgic_v2_lpi_to_pending(struct domain *d,
     BUG();
 }
 
+static int vgic_v2_lpi_get_priority(struct domain *d, unsigned int vlpi)
+{
+    /* Dummy function, no LPIs on a VGICv2. */
+    BUG();
+}
+
 static const struct vgic_ops vgic_v2_ops = {
     .vcpu_init   = vgic_v2_vcpu_init,
     .domain_init = vgic_v2_domain_init,
     .domain_free = vgic_v2_domain_free,
     .lpi_to_pending = vgic_v2_lpi_to_pending,
+    .lpi_get_priority = vgic_v2_lpi_get_priority,
     .max_vcpus = 8,
 };
 
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 44d2b50..87f58f6 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1567,12 +1567,30 @@ static struct pending_irq *vgic_v3_lpi_to_pending(struct domain *d,
     return pirq;
 }
 
+/* Retrieve the priority of an LPI from its struct pending_irq. */
+static int vgic_v3_lpi_get_priority(struct domain *d, uint32_t vlpi)
+{
+    struct pending_irq *p = vgic_v3_lpi_to_pending(d, vlpi);
+
+    /*
+     * Cope with the case where this function is called with an invalid LPI.
+     * It is expected that a caller will bail out handling this LPI at a
+     * later point in time, but for the sake of this function let us return
+     * some value here and avoid a NULL pointer dereference.
+     */
+    if ( !p )
+        return 0xff;
+
+    return p->lpi_priority;
+}
+
 static const struct vgic_ops v3_ops = {
     .vcpu_init   = vgic_v3_vcpu_init,
     .domain_init = vgic_v3_domain_init,
     .domain_free = vgic_v3_domain_free,
     .emulate_reg  = vgic_v3_emulate_reg,
     .lpi_to_pending = vgic_v3_lpi_to_pending,
+    .lpi_get_priority = vgic_v3_lpi_get_priority,
     /*
      * We use both AFF1 and AFF0 in (v)MPIDR. Thus, the max number of CPU
      * that can be supported is up to 4096(==256*16) in theory.
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index bf6fb60..c29ad5e 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -226,10 +226,15 @@ struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
 
 static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
 {
-    struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
+    struct vgic_irq_rank *rank;
     unsigned long flags;
     int priority;
 
+    /* LPIs don't have a rank, also store their priority separately. */
+    if ( is_lpi(virq) )
+        return v->domain->arch.vgic.handler->lpi_get_priority(v->domain, virq);
+
+    rank = vgic_rank_irq(v, virq);
     vgic_lock_rank(v, rank, flags);
     priority = rank->priority[virq & INTERRUPT_RANK_MASK];
     vgic_unlock_rank(v, rank, flags);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 3d8e84c..ebaea35 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -260,7 +260,8 @@ struct arch_vcpu
 
         /* GICv3: redistributor base and flags for this vCPU */
         paddr_t rdist_base;
-#define VGIC_V3_RDIST_LAST  (1 << 0)        /* last vCPU of the rdist */
+#define VGIC_V3_RDIST_LAST      (1 << 0)        /* last vCPU of the rdist */
+#define VGIC_V3_LPIS_ENABLED    (1 << 1)
         uint8_t flags;
     } vgic;
 
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 836a103..42963c0 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -366,6 +366,8 @@ struct gic_hw_operations {
     int (*map_hwdom_extra_mappings)(struct domain *d);
     /* Deny access to GIC regions */
     int (*iomem_deny_access)(const struct domain *d);
+    /* Handle LPIs, which require special handling */
+    void (*do_LPI)(unsigned int lpi);
 };
 
 void register_gic_ops(const struct gic_hw_operations *ops);
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index 29559a3..7470779 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -134,6 +134,8 @@ void gicv3_its_dt_init(const struct dt_device_node *node);
 
 bool gicv3_its_host_has_its(void);
 
+void gicv3_do_LPI(unsigned int lpi);
+
 int gicv3_lpi_init_rdist(void __iomem * rdist_base);
 
 /* Initialize the host structures for LPIs and the host ITSes. */
@@ -175,6 +177,12 @@ static inline bool gicv3_its_host_has_its(void)
     return false;
 }
 
+static inline void gicv3_do_LPI(unsigned int lpi)
+{
+    /* We don't enable LPIs without an ITS. */
+    BUG();
+}
+
 static inline int gicv3_lpi_init_rdist(void __iomem * rdist_base)
 {
     return -ENODEV;
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index c9075a9..7efa164 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -72,6 +72,7 @@ struct pending_irq
 #define GIC_INVALID_LR         (uint8_t)~0
     uint8_t lr;
     uint8_t priority;
+    uint8_t lpi_priority;       /* Caches the priority if this is an LPI. */
     /* inflight is used to append instances of pending_irq to
      * vgic.inflight_irqs */
     struct list_head inflight;
@@ -136,6 +137,7 @@ struct vgic_ops {
     bool (*emulate_reg)(struct cpu_user_regs *regs, union hsr hsr);
     /* lookup the struct pending_irq for a given LPI interrupt */
     struct pending_irq *(*lpi_to_pending)(struct domain *d, unsigned int vlpi);
+    int (*lpi_get_priority)(struct domain *d, uint32_t vlpi);
     /* Maximum number of vCPU supported */
     const unsigned int max_vcpus;
 };
-- 
2.9.0



* [PATCH v9 06/28] ARM: GICv3: enable ITS and LPIs on the host
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (4 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 05/28] ARM: GICv3: forward pending LPIs to guests Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-11 17:53 ` [PATCH v9 07/28] ARM: vGICv3: handle virtual LPI pending and property tables Andre Przywara
                   ` (22 subsequent siblings)
  28 siblings, 0 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Now that the host part of the ITS code is in place, we can enable the
ITS and also LPIs on each redistributor to get the show rolling.
At this point there would be no LPIs mapped, as guests don't know about
the ITS yet.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/gic-v3-its.c |  4 ++++
 xen/arch/arm/gic-v3.c     | 18 ++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index 07280b3..aebc257 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -505,6 +505,10 @@ static int gicv3_its_init_single_its(struct host_its *hw_its)
         return -ENOMEM;
     writeq_relaxed(0, hw_its->its_base + GITS_CWRITER);
 
+    /* Now enable interrupt translation and command processing on that ITS. */
+    reg = readl_relaxed(hw_its->its_base + GITS_CTLR);
+    writel_relaxed(reg | GITS_CTLR_ENABLE, hw_its->its_base + GITS_CTLR);
+
     return 0;
 }
 
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index 8140c5f..d539d6c 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -620,6 +620,21 @@ static int gicv3_enable_redist(void)
     return 0;
 }
 
+/* Enable LPIs on this redistributor (only useful when the host has an ITS). */
+static bool gicv3_enable_lpis(void)
+{
+    uint32_t val;
+
+    val = readl_relaxed(GICD_RDIST_BASE + GICR_TYPER);
+    if ( !(val & GICR_TYPER_PLPIS) )
+        return false;
+
+    val = readl_relaxed(GICD_RDIST_BASE + GICR_CTLR);
+    writel_relaxed(val | GICR_CTLR_ENABLE_LPIS, GICD_RDIST_BASE + GICR_CTLR);
+
+    return true;
+}
+
 static int __init gicv3_populate_rdist(void)
 {
     int i;
@@ -731,11 +746,14 @@ static int gicv3_cpu_init(void)
     if ( gicv3_enable_redist() )
         return -ENODEV;
 
+    /* If the host has any ITSes, enable LPIs now. */
     if ( gicv3_its_host_has_its() )
     {
         ret = gicv3_its_setup_collection(smp_processor_id());
         if ( ret )
             return ret;
+        if ( !gicv3_enable_lpis() )
+            return -EBUSY;
     }
 
     /* Set priority on PPI and SGI interrupts */
-- 
2.9.0



* [PATCH v9 07/28] ARM: vGICv3: handle virtual LPI pending and property tables
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (5 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 06/28] ARM: GICv3: enable ITS and LPIs on the host Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-12 15:23   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 08/28] ARM: introduce vgic_access_guest_memory() Andre Przywara
                   ` (21 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Allow a guest to provide the address and size for the memory regions
it has reserved for the GICv3 pending and property tables.
We sanitise the various fields of the respective redistributor
registers.
The MMIO read and write accesses are protected by locks, to avoid any
changing of the property or pending table address while a redistributor
is live and also to protect the non-atomic vgic_reg64_extract() function
on the MMIO read side.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/vgic-v3.c       | 164 +++++++++++++++++++++++++++++++++++++++----
 xen/include/asm-arm/domain.h |   5 ++
 2 files changed, 157 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 87f58f6..5166f9c 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -233,12 +233,29 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
         goto read_reserved;
 
     case VREG64(GICR_PROPBASER):
-        /* LPI's not implemented */
-        goto read_as_zero_64;
+        if ( !v->domain->arch.vgic.has_its )
+            goto read_as_zero_64;
+        if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
+
+        vgic_lock(v);
+        *r = vgic_reg64_extract(v->domain->arch.vgic.rdist_propbase, info);
+        vgic_unlock(v);
+        return 1;
 
     case VREG64(GICR_PENDBASER):
-        /* LPI's not implemented */
-        goto read_as_zero_64;
+    {
+        unsigned long flags;
+
+        if ( !v->domain->arch.vgic.has_its )
+            goto read_as_zero_64;
+        if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
+
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
+        *r = vgic_reg64_extract(v->arch.vgic.rdist_pendbase, info);
+        *r &= ~GICR_PENDBASER_PTZ;       /* WO, reads as 0 */
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        return 1;
+    }
 
     case 0x0080:
         goto read_reserved;
@@ -335,11 +352,95 @@ read_unknown:
     return 1;
 }
 
+static uint64_t vgic_sanitise_field(uint64_t reg, uint64_t field_mask,
+                                    int field_shift,
+                                    uint64_t (*sanitise_fn)(uint64_t))
+{
+    uint64_t field = (reg & field_mask) >> field_shift;
+
+    field = sanitise_fn(field) << field_shift;
+
+    return (reg & ~field_mask) | field;
+}
+
+/* We want to avoid outer shareable. */
+static uint64_t vgic_sanitise_shareability(uint64_t field)
+{
+    switch ( field )
+    {
+    case GIC_BASER_OuterShareable:
+        return GIC_BASER_InnerShareable;
+    default:
+        return field;
+    }
+}
+
+/* Avoid any inner non-cacheable mapping. */
+static uint64_t vgic_sanitise_inner_cacheability(uint64_t field)
+{
+    switch ( field )
+    {
+    case GIC_BASER_CACHE_nCnB:
+    case GIC_BASER_CACHE_nC:
+        return GIC_BASER_CACHE_RaWb;
+    default:
+        return field;
+    }
+}
+
+/* Non-cacheable or same-as-inner are OK. */
+static uint64_t vgic_sanitise_outer_cacheability(uint64_t field)
+{
+    switch ( field )
+    {
+    case GIC_BASER_CACHE_SameAsInner:
+    case GIC_BASER_CACHE_nC:
+        return field;
+    default:
+        return GIC_BASER_CACHE_nC;
+    }
+}
+
+static uint64_t sanitize_propbaser(uint64_t reg)
+{
+    reg = vgic_sanitise_field(reg, GICR_PROPBASER_SHAREABILITY_MASK,
+                              GICR_PROPBASER_SHAREABILITY_SHIFT,
+                              vgic_sanitise_shareability);
+    reg = vgic_sanitise_field(reg, GICR_PROPBASER_INNER_CACHEABILITY_MASK,
+                              GICR_PROPBASER_INNER_CACHEABILITY_SHIFT,
+                              vgic_sanitise_inner_cacheability);
+    reg = vgic_sanitise_field(reg, GICR_PROPBASER_OUTER_CACHEABILITY_MASK,
+                              GICR_PROPBASER_OUTER_CACHEABILITY_SHIFT,
+                              vgic_sanitise_outer_cacheability);
+
+    reg &= ~GICR_PROPBASER_RES0_MASK;
+
+    return reg;
+}
+
+static uint64_t sanitize_pendbaser(uint64_t reg)
+{
+    reg = vgic_sanitise_field(reg, GICR_PENDBASER_SHAREABILITY_MASK,
+                              GICR_PENDBASER_SHAREABILITY_SHIFT,
+                              vgic_sanitise_shareability);
+    reg = vgic_sanitise_field(reg, GICR_PENDBASER_INNER_CACHEABILITY_MASK,
+                              GICR_PENDBASER_INNER_CACHEABILITY_SHIFT,
+                              vgic_sanitise_inner_cacheability);
+    reg = vgic_sanitise_field(reg, GICR_PENDBASER_OUTER_CACHEABILITY_MASK,
+                              GICR_PENDBASER_OUTER_CACHEABILITY_SHIFT,
+                              vgic_sanitise_outer_cacheability);
+
+    reg &= ~GICR_PENDBASER_RES0_MASK;
+
+    return reg;
+}
+
 static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
                                           uint32_t gicr_reg,
                                           register_t r)
 {
     struct hsr_dabt dabt = info->dabt;
+    uint64_t reg;
 
     switch ( gicr_reg )
     {
@@ -370,36 +471,75 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
         goto write_impl_defined;
 
     case VREG64(GICR_SETLPIR):
-        /* LPI is not implemented */
+        /* LPIs without an ITS are not implemented */
         goto write_ignore_64;
 
     case VREG64(GICR_CLRLPIR):
-        /* LPI is not implemented */
+        /* LPIs without an ITS are not implemented */
         goto write_ignore_64;
 
     case 0x0050:
         goto write_reserved;
 
     case VREG64(GICR_PROPBASER):
-        /* LPI is not implemented */
-        goto write_ignore_64;
+        if ( !v->domain->arch.vgic.has_its )
+            goto write_ignore_64;
+        if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
+
+        vgic_lock(v);
+
+        /*
+         * Writing PROPBASER with any redistributor having LPIs enabled
+         * is UNPREDICTABLE.
+         */
+        if ( !(v->domain->arch.vgic.rdists_enabled) )
+        {
+            reg = v->domain->arch.vgic.rdist_propbase;
+            vgic_reg64_update(&reg, r, info);
+            reg = sanitize_propbaser(reg);
+            v->domain->arch.vgic.rdist_propbase = reg;
+        }
+
+        vgic_unlock(v);
+
+        return 1;
 
     case VREG64(GICR_PENDBASER):
-        /* LPI is not implemented */
-        goto write_ignore_64;
+    {
+        unsigned long flags;
+
+        if ( !v->domain->arch.vgic.has_its )
+            goto write_ignore_64;
+        if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
+
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+        /* Writing PENDBASER with LPIs enabled is UNPREDICTABLE. */
+        if ( !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
+        {
+            reg = v->arch.vgic.rdist_pendbase;
+            vgic_reg64_update(&reg, r, info);
+            reg = sanitize_pendbaser(reg);
+            v->arch.vgic.rdist_pendbase = reg;
+        }
+
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+
+        return 1;
+    }
 
     case 0x0080:
         goto write_reserved;
 
     case VREG64(GICR_INVLPIR):
-        /* LPI is not implemented */
+        /* LPIs without an ITS are not implemented */
         goto write_ignore_64;
 
     case 0x00A8:
         goto write_reserved;
 
     case VREG64(GICR_INVALLR):
-        /* LPI is not implemented */
+        /* LPIs without an ITS are not implemented */
         goto write_ignore_64;
 
     case 0x00B8:
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index ebaea35..b2d98bb 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -109,11 +109,15 @@ struct arch_domain
         } *rdist_regions;
         int nr_regions;                     /* Number of rdist regions */
         uint32_t rdist_stride;              /* Re-Distributor stride */
+        unsigned long nr_lpis;
+        uint64_t rdist_propbase;
         struct rb_root its_devices;         /* Devices mapped to an ITS */
         spinlock_t its_devices_lock;        /* Protects the its_devices tree */
         struct radix_tree_root pend_lpi_tree; /* Stores struct pending_irq's */
         rwlock_t pend_lpi_tree_lock;        /* Protects the pend_lpi_tree */
         unsigned int intid_bits;
+        bool rdists_enabled;                /* Is any redistributor enabled? */
+        bool has_its;
 #endif
     } vgic;
 
@@ -260,6 +264,7 @@ struct arch_vcpu
 
         /* GICv3: redistributor base and flags for this vCPU */
         paddr_t rdist_base;
+        uint64_t rdist_pendbase;
 #define VGIC_V3_RDIST_LAST      (1 << 0)        /* last vCPU of the rdist */
 #define VGIC_V3_LPIS_ENABLED    (1 << 1)
         uint8_t flags;
-- 
2.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 108+ messages in thread

* [PATCH v9 08/28] ARM: introduce vgic_access_guest_memory()
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (6 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 07/28] ARM: vGICv3: handle virtual LPI pending and property tables Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-12 15:30   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 09/28] ARM: vGICv3: re-use vgic_reg64_check_access Andre Przywara
                   ` (20 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

From: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>

This function allows copying a chunk of data to and from guest physical
memory. It looks up the associated page in the guest's p2m tree
and maps this page temporarily for the duration of the access.
This function was originally written by Vijaya as part of an earlier series:
https://patchwork.kernel.org/patch/8177251

Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic.c        | 50 ++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/vgic.h |  3 +++
 2 files changed, 53 insertions(+)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index c29ad5e..66adeb4 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -20,6 +20,7 @@
 #include <xen/bitops.h>
 #include <xen/lib.h>
 #include <xen/init.h>
+#include <xen/domain_page.h>
 #include <xen/softirq.h>
 #include <xen/irq.h>
 #include <xen/sched.h>
@@ -620,6 +621,55 @@ void vgic_free_virq(struct domain *d, unsigned int virq)
 }
 
 /*
+ * Temporarily map one physical guest page and copy data to or from it.
+ * The data to be copied cannot cross a page boundary.
+ */
+int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
+                             uint32_t size, bool_t is_write)
+{
+    struct page_info *page;
+    uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
+    p2m_type_t p2mt;
+    void *p;
+
+    /* Do not cross a page boundary. */
+    if ( size > (PAGE_SIZE - offset) )
+    {
+        printk(XENLOG_G_ERR "d%d: vITS: memory access would cross page boundary\n",
+               d->domain_id);
+        return -EINVAL;
+    }
+
+    page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
+    if ( !page )
+    {
+        printk(XENLOG_G_ERR "d%d: vITS: Failed to get table entry\n",
+               d->domain_id);
+        return -EINVAL;
+    }
+
+    if ( !p2m_is_ram(p2mt) )
+    {
+        put_page(page);
+        printk(XENLOG_G_ERR "d%d: vITS: memory used by the ITS should be RAM.\n",
+               d->domain_id);
+        return -EINVAL;
+    }
+
+    p = __map_domain_page(page);
+
+    if ( is_write )
+        memcpy(p + offset, buf, size);
+    else
+        memcpy(buf, p + offset, size);
+
+    unmap_domain_page(p);
+    put_page(page);
+
+    return 0;
+}
+
+/*
  * Local variables:
  * mode: C
  * c-file-style: "BSD"
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 7efa164..6b17802 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -313,6 +313,9 @@ extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
 int vgic_v2_init(struct domain *d, int *mmio_count);
 int vgic_v3_init(struct domain *d, int *mmio_count);
 
+int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
+                             uint32_t size, bool_t is_write);
+
 extern int domain_vgic_register(struct domain *d, int *mmio_count);
 extern int vcpu_vgic_free(struct vcpu *v);
 extern bool vgic_to_sgi(struct vcpu *v, register_t sgir,
-- 
2.9.0



* [PATCH v9 09/28] ARM: vGICv3: re-use vgic_reg64_check_access
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (7 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 08/28] ARM: introduce vgic_access_guest_memory() Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-11 17:53 ` [PATCH v9 10/28] ARM: GIC: export and extend vgic_init_pending_irq() Andre Przywara
                   ` (19 subsequent siblings)
  28 siblings, 0 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

vgic_reg64_check_access() checks for a valid access width of a 64-bit
MMIO register, which is useful beyond just the current GICv3 emulation.
Move this function to vgic-emul.h so it can be easily reused.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/vgic-v3.c          | 9 ---------
 xen/include/asm-arm/vgic-emul.h | 9 +++++++++
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 5166f9c..38c123c 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -161,15 +161,6 @@ static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
     }
 }
 
-static inline bool vgic_reg64_check_access(struct hsr_dabt dabt)
-{
-    /*
-     * 64 bits registers can be accessible using 32-bit and 64-bit unless
-     * stated otherwise (See 8.1.3 ARM IHI 0069A).
-     */
-    return ( dabt.size == DABT_DOUBLE_WORD || dabt.size == DABT_WORD );
-}
-
 static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
                                          uint32_t gicr_reg,
                                          register_t *r)
diff --git a/xen/include/asm-arm/vgic-emul.h b/xen/include/asm-arm/vgic-emul.h
index 184a1f0..e52fbaa 100644
--- a/xen/include/asm-arm/vgic-emul.h
+++ b/xen/include/asm-arm/vgic-emul.h
@@ -12,6 +12,15 @@
 #define VRANGE32(start, end) start ... end + 3
 #define VRANGE64(start, end) start ... end + 7
 
+/*
+ * 64 bits registers can be accessible using 32-bit and 64-bit unless
+ * stated otherwise (See 8.1.3 ARM IHI 0069A).
+ */
+static inline bool vgic_reg64_check_access(struct hsr_dabt dabt)
+{
+    return ( dabt.size == DABT_DOUBLE_WORD || dabt.size == DABT_WORD );
+}
+
 #endif /* __ASM_ARM_VGIC_EMUL_H__ */
 
 /*
-- 
2.9.0



* [PATCH v9 10/28] ARM: GIC: export and extend vgic_init_pending_irq()
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (8 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 09/28] ARM: vGICv3: re-use vgic_reg64_check_access Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-16 12:26   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 11/28] ARM: VGIC: add vcpu_id to struct pending_irq Andre Przywara
                   ` (18 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

For LPIs we will later want to allocate struct pending_irqs dynamically.
So besides needing to initialize the struct from there, we also need
to clean it up and re-initialize it later on.
Export vgic_init_pending_irq() and extend it to be reusable.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic.c        | 4 +++-
 xen/include/asm-arm/vgic.h | 1 +
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 66adeb4..27d6b51 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -61,11 +61,13 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq)
     return vgic_get_rank(v, rank);
 }
 
-static void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
+void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
 {
     INIT_LIST_HEAD(&p->inflight);
     INIT_LIST_HEAD(&p->lr_queue);
     p->irq = virq;
+    p->status = 0;
+    p->lr = GIC_INVALID_LR;
 }
 
 static void vgic_rank_init(struct vgic_irq_rank *rank, uint8_t index,
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 6b17802..e2111a5 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -302,6 +302,7 @@ extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
+extern void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 extern struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq);
 extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int s);
-- 
2.9.0



* [PATCH v9 11/28] ARM: VGIC: add vcpu_id to struct pending_irq
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (9 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 10/28] ARM: GIC: export and extend vgic_init_pending_irq() Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-16 12:31   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 12/28] ARM: vGIC: advertise LPI support Andre Przywara
                   ` (17 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

The target CPU for an LPI is encoded in the interrupt translation table
entry, so it can't easily be derived from just an LPI number (short of
walking *all* tables and finding the matching LPI).
To avoid this in case we need to know the VCPU (for the INVALL command,
for instance), put the VCPU ID in the struct pending_irq, so that it is
easily accessible.
We use the remaining 8 bits of padding space for that to avoid enlarging
the size of struct pending_irq. The number of VCPUs is limited to 127
at the moment anyway, which we also confirm with a BUILD_BUG_ON.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic.c        | 3 +++
 xen/include/asm-arm/vgic.h | 1 +
 2 files changed, 4 insertions(+)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 27d6b51..97a2cf2 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -63,6 +63,9 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq)
 
 void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
 {
+    /* The lpi_vcpu_id field must be big enough to hold a VCPU ID. */
+    BUILD_BUG_ON(BIT(sizeof(p->lpi_vcpu_id) * 8) < MAX_VIRT_CPUS);
+
     INIT_LIST_HEAD(&p->inflight);
     INIT_LIST_HEAD(&p->lr_queue);
     p->irq = virq;
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index e2111a5..02732db 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -73,6 +73,7 @@ struct pending_irq
     uint8_t lr;
     uint8_t priority;
     uint8_t lpi_priority;       /* Caches the priority if this is an LPI. */
+    uint8_t lpi_vcpu_id;        /* The VCPU for an LPI. */
     /* inflight is used to append instances of pending_irq to
      * vgic.inflight_irqs */
     struct list_head inflight;
-- 
2.9.0



* [PATCH v9 12/28] ARM: vGIC: advertise LPI support
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (10 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 11/28] ARM: VGIC: add vcpu_id to struct pending_irq Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-16 13:03   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 13/28] ARM: vITS: add command handling stub and MMIO emulation Andre Przywara
                   ` (16 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

To let a guest know about the availability of virtual LPIs, set the
respective bits in the virtual GIC registers and let a guest control
the LPI enable bit.
Only report the LPI capability if the host has initialized at least
one ITS.
This removes a "TBD" comment, as we now populate the processor number
in the GICR_TYPER register.
Advertise 24 bits worth of LPIs to the guest.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 65 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 38c123c..6dbdb2e 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -170,8 +170,19 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
     switch ( gicr_reg )
     {
     case VREG32(GICR_CTLR):
-        /* We have not implemented LPI's, read zero */
-        goto read_as_zero_32;
+    {
+        unsigned long flags;
+
+        if ( !v->domain->arch.vgic.has_its )
+            goto read_as_zero_32;
+        if ( dabt.size != DABT_WORD ) goto bad_width;
+
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
+        *r = vgic_reg32_extract(!!(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED),
+                                info);
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        return 1;
+    }
 
     case VREG32(GICR_IIDR):
         if ( dabt.size != DABT_WORD ) goto bad_width;
@@ -183,16 +194,20 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
         uint64_t typer, aff;
 
         if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
-        /* TBD: Update processor id in [23:8] when ITS support is added */
         aff = (MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 3) << 56 |
                MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 2) << 48 |
                MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 1) << 40 |
                MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 0) << 32);
         typer = aff;
+        /* We use the VCPU ID as the redistributor ID in bits[23:8] */
+        typer |= (v->vcpu_id & 0xffff) << 8;
 
         if ( v->arch.vgic.flags & VGIC_V3_RDIST_LAST )
             typer |= GICR_TYPER_LAST;
 
+        if ( v->domain->arch.vgic.has_its )
+            typer |= GICR_TYPER_PLPIS;
+
         *r = vgic_reg64_extract(typer, info);
 
         return 1;
@@ -426,6 +441,28 @@ static uint64_t sanitize_pendbaser(uint64_t reg)
     return reg;
 }
 
+static void vgic_vcpu_enable_lpis(struct vcpu *v)
+{
+    uint64_t reg = v->domain->arch.vgic.rdist_propbase;
+    unsigned int nr_lpis = BIT((reg & 0x1f) + 1);
+
+    /* rdists_enabled is protected by the domain lock. */
+    ASSERT(spin_is_locked(&v->domain->arch.vgic.lock));
+
+    if ( nr_lpis < LPI_OFFSET )
+        nr_lpis = 0;
+    else
+        nr_lpis -= LPI_OFFSET;
+
+    if ( !v->domain->arch.vgic.rdists_enabled )
+    {
+        v->domain->arch.vgic.nr_lpis = nr_lpis;
+        v->domain->arch.vgic.rdists_enabled = true;
+    }
+
+    v->arch.vgic.flags |= VGIC_V3_LPIS_ENABLED;
+}
+
 static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
                                           uint32_t gicr_reg,
                                           register_t r)
@@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
     switch ( gicr_reg )
     {
     case VREG32(GICR_CTLR):
-        /* LPI's not implemented */
-        goto write_ignore_32;
+    {
+        unsigned long flags;
+
+        if ( !v->domain->arch.vgic.has_its )
+            goto write_ignore_32;
+        if ( dabt.size != DABT_WORD ) goto bad_width;
+
+        vgic_lock(v);                   /* protects rdists_enabled */
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+        /* LPIs can only be enabled once, but never disabled again. */
+        if ( (r & GICR_CTLR_ENABLE_LPIS) &&
+             !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
+            vgic_vcpu_enable_lpis(v);
+
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        vgic_unlock(v);
+
+        return 1;
+    }
 
     case VREG32(GICR_IIDR):
         /* RO */
@@ -1058,6 +1113,11 @@ static int vgic_v3_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
         typer = ((ncpus - 1) << GICD_TYPE_CPUS_SHIFT |
                  DIV_ROUND_UP(v->domain->arch.vgic.nr_spis, 32));
 
+        if ( v->domain->arch.vgic.has_its )
+        {
+            typer |= GICD_TYPE_LPIS;
+            irq_bits = v->domain->arch.vgic.intid_bits;
+        }
         typer |= (irq_bits - 1) << GICD_TYPE_ID_BITS_SHIFT;
 
         *r = vgic_reg32_extract(typer, info);
-- 
2.9.0



* [PATCH v9 13/28] ARM: vITS: add command handling stub and MMIO emulation
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (11 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 12/28] ARM: vGIC: advertise LPI support Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-16 15:24   ` Julien Grall
                     ` (2 more replies)
  2017-05-11 17:53 ` [PATCH v9 14/28] ARM: vITS: introduce translation table walks Andre Przywara
                   ` (15 subsequent siblings)
  28 siblings, 3 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Emulate the memory-mapped ITS registers and provide a stub to introduce
the ITS command handling framework (but without actually emulating any
commands at this time).
This fixes a misnomer in our virtual ITS structure, where the spec
confusingly uses ID_bits in GITS_TYPER to denote the number of event IDs
(in contrast to GICD_TYPER, where it means the number of LPIs).

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c       | 526 ++++++++++++++++++++++++++++++++++++++-
 xen/include/asm-arm/gic_v3_its.h |   3 +
 2 files changed, 528 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 065ffe2..e3bd1f6 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -19,6 +19,16 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+/*
+ * Locking order:
+ *
+ * its->vcmd_lock                        (protects the command queue)
+ *     its->its_lock                     (protects the translation tables)
+ *         d->its_devices_lock           (protects the device RB tree)
+ *             v->vgic.lock              (protects the struct pending_irq)
+ *                 d->pend_lpi_tree_lock (protects the radix tree)
+ */
+
 #include <xen/bitops.h>
 #include <xen/config.h>
 #include <xen/domain_page.h>
@@ -43,7 +53,7 @@
 struct virt_its {
     struct domain *d;
     unsigned int devid_bits;
-    unsigned int intid_bits;
+    unsigned int evid_bits;
     spinlock_t vcmd_lock;       /* Protects the virtual command buffer, which */
     uint64_t cwriter;           /* consists of CWRITER and CREADR and those   */
     uint64_t creadr;            /* shadow variables cwriter and creadr. */
@@ -53,6 +63,7 @@ struct virt_its {
     uint64_t baser_dev, baser_coll;     /* BASER0 and BASER1 for the guest */
     unsigned int max_collections;
     unsigned int max_devices;
+    /* Changing "enabled" requires holding *both* the vcmd_lock and its_lock. */
     bool enabled;
 };
 
@@ -67,6 +78,12 @@ struct vits_itte
     uint16_t pad;
 };
 
+typedef uint16_t coll_table_entry_t;
+typedef uint64_t dev_table_entry_t;
+
+#define GITS_BASER_RO_MASK       (GITS_BASER_TYPE_MASK | \
+                                  (31UL << GITS_BASER_ENTRY_SIZE_SHIFT))
+
 int vgic_v3_its_init_domain(struct domain *d)
 {
     spin_lock_init(&d->arch.vgic.its_devices_lock);
@@ -80,6 +97,513 @@ void vgic_v3_its_free_domain(struct domain *d)
     ASSERT(RB_EMPTY_ROOT(&d->arch.vgic.its_devices));
 }
 
+/**************************************
+ * Functions that handle ITS commands *
+ **************************************/
+
+static uint64_t its_cmd_mask_field(uint64_t *its_cmd, unsigned int word,
+                                   unsigned int shift, unsigned int size)
+{
+    return (le64_to_cpu(its_cmd[word]) >> shift) & (BIT(size) - 1);
+}
+
+#define its_cmd_get_command(cmd)        its_cmd_mask_field(cmd, 0,  0,  8)
+#define its_cmd_get_deviceid(cmd)       its_cmd_mask_field(cmd, 0, 32, 32)
+#define its_cmd_get_size(cmd)           its_cmd_mask_field(cmd, 1,  0,  5)
+#define its_cmd_get_id(cmd)             its_cmd_mask_field(cmd, 1,  0, 32)
+#define its_cmd_get_physical_id(cmd)    its_cmd_mask_field(cmd, 1, 32, 32)
+#define its_cmd_get_collection(cmd)     its_cmd_mask_field(cmd, 2,  0, 16)
+#define its_cmd_get_target_addr(cmd)    its_cmd_mask_field(cmd, 2, 16, 32)
+#define its_cmd_get_validbit(cmd)       its_cmd_mask_field(cmd, 2, 63,  1)
+#define its_cmd_get_ittaddr(cmd)        (its_cmd_mask_field(cmd, 2, 8, 44) << 8)
+
+#define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
+#define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
+
+/*
+ * Must be called with the vcmd_lock held.
+ * TODO: Investigate whether we can be smarter here and don't need to hold
+ * the lock all of the time.
+ */
+static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
+{
+    paddr_t addr = its->cbaser & GENMASK(51, 12);
+    uint64_t command[4];
+
+    ASSERT(spin_is_locked(&its->vcmd_lock));
+
+    if ( its->cwriter >= ITS_CMD_BUFFER_SIZE(its->cbaser) )
+        return -1;
+
+    while ( its->creadr != its->cwriter )
+    {
+        int ret;
+
+        ret = vgic_access_guest_memory(d, addr + its->creadr,
+                                       command, sizeof(command), false);
+        if ( ret )
+            return ret;
+
+        switch ( its_cmd_get_command(command) )
+        {
+        case GITS_CMD_SYNC:
+            /* We handle ITS commands synchronously, so we ignore SYNC. */
+            break;
+        default:
+            gdprintk(XENLOG_WARNING, "vGITS: unhandled ITS command %lu\n",
+                     its_cmd_get_command(command));
+            break;
+        }
+
+        write_u64_atomic(&its->creadr, (its->creadr + ITS_CMD_SIZE) %
+                         ITS_CMD_BUFFER_SIZE(its->cbaser));
+
+        if ( ret )
+            gdprintk(XENLOG_WARNING,
+                     "vGITS: ITS command error %d while handling command %lu\n",
+                     ret, its_cmd_get_command(command));
+    }
+
+    return 0;
+}
+
+/*****************************
+ * ITS registers read access *
+ *****************************/
+
+/* Identifying as an ARM IP, using "X" as the product ID. */
+#define GITS_IIDR_VALUE                 0x5800034c
+
+static int vgic_v3_its_mmio_read(struct vcpu *v, mmio_info_t *info,
+                                 register_t *r, void *priv)
+{
+    struct virt_its *its = priv;
+    uint64_t reg;
+
+    switch ( info->gpa & 0xffff )
+    {
+    case VREG32(GITS_CTLR):
+    {
+        /*
+         * We try to avoid waiting for the command queue lock and report
+         * non-quiescent if that lock is already taken.
+         */
+        bool have_cmd_lock;
+
+        if ( info->dabt.size != DABT_WORD ) goto bad_width;
+
+        have_cmd_lock = spin_trylock(&its->vcmd_lock);
+        spin_lock(&its->its_lock);
+        if ( its->enabled )
+            reg = GITS_CTLR_ENABLE;
+        else
+            reg = 0;
+
+        if ( have_cmd_lock && its->cwriter == its->creadr )
+            reg |= GITS_CTLR_QUIESCENT;
+
+        spin_unlock(&its->its_lock);
+        if ( have_cmd_lock )
+            spin_unlock(&its->vcmd_lock);
+
+        *r = vgic_reg32_extract(reg, info);
+        break;
+    }
+    case VREG32(GITS_IIDR):
+        if ( info->dabt.size != DABT_WORD ) goto bad_width;
+        *r = vgic_reg32_extract(GITS_IIDR_VALUE, info);
+        break;
+    case VREG64(GITS_TYPER):
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        reg = GITS_TYPER_PHYSICAL;
+        reg |= (sizeof(struct vits_itte) - 1) << GITS_TYPER_ITT_SIZE_SHIFT;
+        reg |= (its->evid_bits - 1) << GITS_TYPER_IDBITS_SHIFT;
+        reg |= (its->devid_bits - 1) << GITS_TYPER_DEVIDS_SHIFT;
+        *r = vgic_reg64_extract(reg, info);
+        break;
+    case VRANGE32(0x0018, 0x001C):
+        goto read_reserved;
+    case VRANGE32(0x0020, 0x003C):
+        goto read_impl_defined;
+    case VRANGE32(0x0040, 0x007C):
+        goto read_reserved;
+    case VREG64(GITS_CBASER):
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+        spin_lock(&its->its_lock);
+        *r = vgic_reg64_extract(its->cbaser, info);
+        spin_unlock(&its->its_lock);
+        break;
+    case VREG64(GITS_CWRITER):
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        reg = its->cwriter;
+        *r = vgic_reg64_extract(reg, info);
+        break;
+    case VREG64(GITS_CREADR):
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        reg = its->creadr;
+        *r = vgic_reg64_extract(reg, info);
+        break;
+    case VRANGE64(0x0098, 0x00F8):
+        goto read_reserved;
+    case VREG64(GITS_BASER0):           /* device table */
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+        spin_lock(&its->its_lock);
+        *r = vgic_reg64_extract(its->baser_dev, info);
+        spin_unlock(&its->its_lock);
+        break;
+    case VREG64(GITS_BASER1):           /* collection table */
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+        spin_lock(&its->its_lock);
+        *r = vgic_reg64_extract(its->baser_coll, info);
+        spin_unlock(&its->its_lock);
+        break;
+    case VRANGE64(GITS_BASER2, GITS_BASER7):
+        goto read_as_zero_64;
+    case VRANGE32(0x0140, 0xBFFC):
+        goto read_reserved;
+    case VRANGE32(0xC000, 0xFFCC):
+        goto read_impl_defined;
+    case VRANGE32(0xFFD0, 0xFFE4):
+        goto read_impl_defined;
+    case VREG32(GITS_PIDR2):
+        if ( info->dabt.size != DABT_WORD ) goto bad_width;
+        *r = vgic_reg32_extract(GIC_PIDR2_ARCH_GICv3, info);
+        break;
+    case VRANGE32(0xFFEC, 0xFFFC):
+        goto read_impl_defined;
+    default:
+        printk(XENLOG_G_ERR
+               "%pv: vGITS: unhandled read r%d offset %#04lx\n",
+               v, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
+        return 0;
+    }
+
+    return 1;
+
+read_as_zero_64:
+    if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+    *r = 0;
+
+    return 1;
+
+read_impl_defined:
+    printk(XENLOG_G_DEBUG
+           "%pv: vGITS: RAZ on implementation defined register offset %#04lx\n",
+           v, info->gpa & 0xffff);
+    *r = 0;
+    return 1;
+
+read_reserved:
+    printk(XENLOG_G_DEBUG
+           "%pv: vGITS: RAZ on reserved register offset %#04lx\n",
+           v, info->gpa & 0xffff);
+    *r = 0;
+    return 1;
+
+bad_width:
+    printk(XENLOG_G_ERR "vGITS: bad read width %d r%d offset %#04lx\n",
+           info->dabt.size, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
+    domain_crash_synchronous();
+
+    return 0;
+}
+
+/******************************
+ * ITS registers write access *
+ ******************************/
+
+static unsigned int its_baser_table_size(uint64_t baser)
+{
+    unsigned int ret, page_size[4] = {SZ_4K, SZ_16K, SZ_64K, SZ_64K};
+
+    ret = page_size[(baser >> GITS_BASER_PAGE_SIZE_SHIFT) & 3];
+
+    return ret * ((baser & GITS_BASER_SIZE_MASK) + 1);
+}
+
+static unsigned int its_baser_nr_entries(uint64_t baser)
+{
+    unsigned int entry_size = GITS_BASER_ENTRY_SIZE(baser);
+
+    return its_baser_table_size(baser) / entry_size;
+}
+
+/* Must be called with the ITS lock held. */
+static bool vgic_v3_verify_its_status(struct virt_its *its, bool status)
+{
+    ASSERT(spin_is_locked(&its->its_lock));
+
+    if ( !status )
+        return false;
+
+    if ( !(its->cbaser & GITS_VALID_BIT) ||
+         !(its->baser_dev & GITS_VALID_BIT) ||
+         !(its->baser_coll & GITS_VALID_BIT) )
+    {
+        printk(XENLOG_G_WARNING "d%d tried to enable ITS without having the tables configured.\n",
+               its->d->domain_id);
+        return false;
+    }
+
+    return true;
+}
+
+static void sanitize_its_base_reg(uint64_t *reg)
+{
+    uint64_t r = *reg;
+
+    /* Avoid outer shareable. */
+    switch ( (r >> GITS_BASER_SHAREABILITY_SHIFT) & 0x03 )
+    {
+    case GIC_BASER_OuterShareable:
+        r &= ~GITS_BASER_SHAREABILITY_MASK;
+        r |= GIC_BASER_InnerShareable << GITS_BASER_SHAREABILITY_SHIFT;
+        break;
+    default:
+        break;
+    }
+
+    /* Avoid any inner non-cacheable mapping. */
+    switch ( (r >> GITS_BASER_INNER_CACHEABILITY_SHIFT) & 0x07 )
+    {
+    case GIC_BASER_CACHE_nCnB:
+    case GIC_BASER_CACHE_nC:
+        r &= ~GITS_BASER_INNER_CACHEABILITY_MASK;
+        r |= GIC_BASER_CACHE_RaWb << GITS_BASER_INNER_CACHEABILITY_SHIFT;
+        break;
+    default:
+        break;
+    }
+
+    /* Only allow non-cacheable or same-as-inner. */
+    switch ( (r >> GITS_BASER_OUTER_CACHEABILITY_SHIFT) & 0x07 )
+    {
+    case GIC_BASER_CACHE_SameAsInner:
+    case GIC_BASER_CACHE_nC:
+        break;
+    default:
+        r &= ~GITS_BASER_OUTER_CACHEABILITY_MASK;
+        r |= GIC_BASER_CACHE_nC << GITS_BASER_OUTER_CACHEABILITY_SHIFT;
+        break;
+    }
+
+    *reg = r;
+}
+
+static int vgic_v3_its_mmio_write(struct vcpu *v, mmio_info_t *info,
+                                  register_t r, void *priv)
+{
+    struct domain *d = v->domain;
+    struct virt_its *its = priv;
+    uint64_t reg;
+    uint32_t reg32;
+
+    switch ( info->gpa & 0xffff )
+    {
+    case VREG32(GITS_CTLR):
+    {
+        uint32_t ctlr;
+
+        if ( info->dabt.size != DABT_WORD ) goto bad_width;
+
+        /*
+         * We need to take the vcmd_lock to prevent a guest from disabling
+         * the ITS while commands are still processed.
+         */
+        spin_lock(&its->vcmd_lock);
+        spin_lock(&its->its_lock);
+        ctlr = its->enabled ? GITS_CTLR_ENABLE : 0;
+        reg32 = ctlr;
+        vgic_reg32_update(&reg32, r, info);
+
+        if ( ctlr ^ reg32 )
+            its->enabled = vgic_v3_verify_its_status(its,
+                                                     reg32 & GITS_CTLR_ENABLE);
+        spin_unlock(&its->its_lock);
+        spin_unlock(&its->vcmd_lock);
+        return 1;
+    }
+
+    case VREG32(GITS_IIDR):
+        goto write_ignore_32;
+    case VREG32(GITS_TYPER):
+        goto write_ignore_32;
+    case VRANGE32(0x0018, 0x001C):
+        goto write_reserved;
+    case VRANGE32(0x0020, 0x003C):
+        goto write_impl_defined;
+    case VRANGE32(0x0040, 0x007C):
+        goto write_reserved;
+    case VREG64(GITS_CBASER):
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        spin_lock(&its->its_lock);
+        /* Changing base registers with the ITS enabled is UNPREDICTABLE. */
+        if ( its->enabled )
+        {
+            spin_unlock(&its->its_lock);
+            gdprintk(XENLOG_WARNING,
+                     "vGITS: tried to change CBASER with the ITS enabled.\n");
+            return 1;
+        }
+
+        reg = its->cbaser;
+        vgic_reg64_update(&reg, r, info);
+        sanitize_its_base_reg(&reg);
+
+        its->cbaser = reg;
+        its->creadr = 0;
+        spin_unlock(&its->its_lock);
+
+        return 1;
+
+    case VREG64(GITS_CWRITER):
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        spin_lock(&its->vcmd_lock);
+        reg = ITS_CMD_OFFSET(its->cwriter);
+        vgic_reg64_update(&reg, r, info);
+        its->cwriter = ITS_CMD_OFFSET(reg);
+
+        if ( its->enabled )
+            if ( vgic_its_handle_cmds(d, its) )
+                gdprintk(XENLOG_WARNING, "error handling ITS commands\n");
+
+        spin_unlock(&its->vcmd_lock);
+
+        return 1;
+
+    case VREG64(GITS_CREADR):
+        goto write_ignore_64;
+
+    case VRANGE32(0x0098, 0x00FC):
+        goto write_reserved;
+    case VREG64(GITS_BASER0):           /* device table */
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        spin_lock(&its->its_lock);
+
+        /*
+         * Changing base registers with the ITS enabled is UNPREDICTABLE,
+         * we choose to ignore it, but warn.
+         */
+        if ( its->enabled )
+        {
+            spin_unlock(&its->its_lock);
+            gdprintk(XENLOG_WARNING, "vGITS: tried to change BASER with the ITS enabled.\n");
+
+            return 1;
+        }
+
+        reg = its->baser_dev;
+        vgic_reg64_update(&reg, r, info);
+
+        /* We don't support indirect tables for now. */
+        reg &= ~(GITS_BASER_RO_MASK | GITS_BASER_INDIRECT);
+        reg |= (sizeof(dev_table_entry_t) - 1) << GITS_BASER_ENTRY_SIZE_SHIFT;
+        reg |= GITS_BASER_TYPE_DEVICE << GITS_BASER_TYPE_SHIFT;
+        sanitize_its_base_reg(&reg);
+
+        if ( reg & GITS_VALID_BIT )
+        {
+            its->max_devices = its_baser_nr_entries(reg);
+            if ( its->max_devices > BIT(its->devid_bits) )
+                its->max_devices = BIT(its->devid_bits);
+        }
+        else
+            its->max_devices = 0;
+
+        its->baser_dev = reg;
+        spin_unlock(&its->its_lock);
+        return 1;
+    case VREG64(GITS_BASER1):           /* collection table */
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        spin_lock(&its->its_lock);
+        /*
+         * Changing base registers with the ITS enabled is UNPREDICTABLE,
+         * we choose to ignore it, but warn.
+         */
+        if ( its->enabled )
+        {
+            spin_unlock(&its->its_lock);
+            gdprintk(XENLOG_WARNING, "vGITS: tried to change BASER with the ITS enabled.\n");
+            return 1;
+        }
+
+        reg = its->baser_coll;
+        vgic_reg64_update(&reg, r, info);
+        /* No indirect tables for the collection table. */
+        reg &= ~(GITS_BASER_RO_MASK | GITS_BASER_INDIRECT);
+        reg |= (sizeof(coll_table_entry_t) - 1) << GITS_BASER_ENTRY_SIZE_SHIFT;
+        reg |= GITS_BASER_TYPE_COLLECTION << GITS_BASER_TYPE_SHIFT;
+        sanitize_its_base_reg(&reg);
+
+        if ( reg & GITS_VALID_BIT )
+            its->max_collections = its_baser_nr_entries(reg);
+        else
+            its->max_collections = 0;
+        its->baser_coll = reg;
+        spin_unlock(&its->its_lock);
+        return 1;
+    case VRANGE64(GITS_BASER2, GITS_BASER7):
+        goto write_ignore_64;
+    case VRANGE32(0x0140, 0xBFFC):
+        goto write_reserved;
+    case VRANGE32(0xC000, 0xFFCC):
+        goto write_impl_defined;
+    case VRANGE32(0xFFD0, 0xFFE4):      /* IMPDEF identification registers */
+        goto write_impl_defined;
+    case VREG32(GITS_PIDR2):
+        goto write_ignore_32;
+    case VRANGE32(0xFFEC, 0xFFFC):      /* IMPDEF identification registers */
+        goto write_impl_defined;
+    default:
+        printk(XENLOG_G_ERR
+               "%pv: vGITS: unhandled write r%d offset %#04lx\n",
+               v, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
+        return 0;
+    }
+
+    return 1;
+
+write_ignore_64:
+    if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+    return 1;
+
+write_ignore_32:
+    if ( info->dabt.size != DABT_WORD ) goto bad_width;
+    return 1;
+
+write_impl_defined:
+    printk(XENLOG_G_DEBUG
+           "%pv: vGITS: WI on implementation defined register offset %#04lx\n",
+           v, info->gpa & 0xffff);
+    return 1;
+
+write_reserved:
+    printk(XENLOG_G_DEBUG
+           "%pv: vGITS: WI on reserved register offset %#04lx\n",
+           v, info->gpa & 0xffff);
+    return 1;
+
+bad_width:
+    printk(XENLOG_G_ERR "vGITS: bad write width %d r%d offset %#08lx\n",
+           info->dabt.size, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
+
+    domain_crash_synchronous();
+
+    return 0;
+}
+
+static const struct mmio_handler_ops vgic_its_mmio_handler = {
+    .read  = vgic_v3_its_mmio_read,
+    .write = vgic_v3_its_mmio_write,
+};
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index 7470779..40f4ef5 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -35,6 +35,7 @@
 #define GITS_BASER5                     0x128
 #define GITS_BASER6                     0x130
 #define GITS_BASER7                     0x138
+#define GITS_PIDR2                      GICR_PIDR2
 
 /* Register bits */
 #define GITS_VALID_BIT                  BIT(63)
@@ -57,6 +58,7 @@
 #define GITS_TYPER_ITT_SIZE_MASK        (0xfUL << GITS_TYPER_ITT_SIZE_SHIFT)
 #define GITS_TYPER_ITT_SIZE(r)          ((((r) & GITS_TYPER_ITT_SIZE_MASK) >> \
                                                  GITS_TYPER_ITT_SIZE_SHIFT) + 1)
+#define GITS_TYPER_PHYSICAL             (1U << 0)
 
 #define GITS_BASER_INDIRECT             BIT(62)
 #define GITS_BASER_INNER_CACHEABILITY_SHIFT        59
@@ -76,6 +78,7 @@
                         (((reg >> GITS_BASER_ENTRY_SIZE_SHIFT) & 0x1f) + 1)
 #define GITS_BASER_SHAREABILITY_SHIFT   10
 #define GITS_BASER_PAGE_SIZE_SHIFT      8
+#define GITS_BASER_SIZE_MASK            0xff
 #define GITS_BASER_SHAREABILITY_MASK   (0x3ULL << GITS_BASER_SHAREABILITY_SHIFT)
 #define GITS_BASER_OUTER_CACHEABILITY_MASK   (0x7ULL << GITS_BASER_OUTER_CACHEABILITY_SHIFT)
 #define GITS_BASER_INNER_CACHEABILITY_MASK   (0x7ULL << GITS_BASER_INNER_CACHEABILITY_SHIFT)
-- 
2.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 108+ messages in thread

* [PATCH v9 14/28] ARM: vITS: introduce translation table walks
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (12 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 13/28] ARM: vITS: add command handling stub and MMIO emulation Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-16 15:57   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 15/28] ARM: vITS: provide access to struct pending_irq Andre Przywara
                   ` (14 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

The ITS stores the target (v)CPU and the (virtual) LPI number in tables.
Introduce functions to walk those tables and translate a device ID /
event ID pair into a pair of virtual LPI and vCPU.
We map those tables on demand - which is cheap on arm64 - and copy the
respective entries before using them, to avoid the guest tampering with
them in the meantime.

To allow compiling without warnings, we declare two functions as
non-static for the moment, which two later patches will fix.
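The device-table walk described above reduces to two pieces of bit arithmetic: decoding the ITT base address and size out of a device-table entry, and indexing into the ITT with the event ID. A minimal standalone sketch of that encoding follows; it mirrors the DEV_TABLE_* helpers in the patch, but the 8-byte ITTE size and all names here are illustrative assumptions, not the patch's exact code:

```c
#include <stdint.h>

/* Mask covering bits h..l of a 64-bit word, like Xen's GENMASK. */
#define MASK64(h, l)  ((~0ULL >> (63 - (h))) & (~0ULL << (l)))

#define INVALID_ADDR  (~0ULL)

/* Device table entry: ITT address in bits [51:8], (event bits - 1) in [4:0]. */
static uint64_t dev_table_entry(uint64_t itt_addr, unsigned int evid_bits)
{
    return (itt_addr & MASK64(51, 8)) | ((evid_bits - 1) & MASK64(4, 0));
}

/* Resolve the address of the ITTE for one event ID, or fail the bounds check. */
static uint64_t itte_address(uint64_t dte, uint32_t evid, uint64_t itte_size)
{
    uint64_t nr_events = 1ULL << ((dte & MASK64(4, 0)) + 1);

    if ( evid >= nr_events )
        return INVALID_ADDR;

    return (dte & MASK64(51, 8)) + evid * itte_size;
}
```

With an ITT at 0x40000000 covering 8 event-ID bits, event 3 lands at offset 3 * 8, while event 256 falls outside the table.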

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c | 183 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 183 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index e3bd1f6..12ec5f1 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -81,6 +81,7 @@ struct vits_itte
 typedef uint16_t coll_table_entry_t;
 typedef uint64_t dev_table_entry_t;
 
+#define UNMAPPED_COLLECTION      ((coll_table_entry_t)~0)
 #define GITS_BASER_RO_MASK       (GITS_BASER_TYPE_MASK | \
                                   (31UL << GITS_BASER_ENTRY_SIZE_SHIFT))
 
@@ -97,6 +98,188 @@ void vgic_v3_its_free_domain(struct domain *d)
     ASSERT(RB_EMPTY_ROOT(&d->arch.vgic.its_devices));
 }
 
+/*
+ * The physical address is encoded slightly differently depending on
+ * the used page size: the highest four bits are stored in the lowest
+ * four bits of the field for 64K pages.
+ */
+static paddr_t get_baser_phys_addr(uint64_t reg)
+{
+    if ( reg & BIT(9) )
+        return (reg & GENMASK(47, 16)) |
+                ((reg & GENMASK(15, 12)) << 36);
+    else
+        return reg & GENMASK(47, 12);
+}
+
+/*
+ * Our collection table encoding:
+ * Just contains the 16-bit VCPU ID of the respective vCPU.
+ */
+
+/* Must be called with the ITS lock held. */
+static struct vcpu *get_vcpu_from_collection(struct virt_its *its,
+                                             uint16_t collid)
+{
+    paddr_t addr = get_baser_phys_addr(its->baser_coll);
+    coll_table_entry_t vcpu_id;
+    int ret;
+
+    ASSERT(spin_is_locked(&its->its_lock));
+
+    if ( collid >= its->max_collections )
+        return NULL;
+
+    ret = vgic_access_guest_memory(its->d,
+                                   addr + collid * sizeof(coll_table_entry_t),
+                                   &vcpu_id, sizeof(vcpu_id), false);
+    if ( ret )
+        return NULL;
+
+    if ( vcpu_id == UNMAPPED_COLLECTION || vcpu_id >= its->d->max_vcpus )
+        return NULL;
+
+    return its->d->vcpu[vcpu_id];
+}
+
+/*
+ * Our device table encodings:
+ * Contains the guest physical address of the Interrupt Translation Table in
+ * bits [51:8], and the size of it is encoded as the number of bits minus one
+ * in the lowest 5 bits of the word.
+ */
+#define DEV_TABLE_ITT_ADDR(x) ((x) & GENMASK(51, 8))
+#define DEV_TABLE_ITT_SIZE(x) (BIT(((x) & GENMASK(4, 0)) + 1))
+#define DEV_TABLE_ENTRY(addr, bits)                     \
+        (((addr) & GENMASK(51, 8)) | (((bits) - 1) & GENMASK(4, 0)))
+
+/*
+ * Lookup the address of the Interrupt Translation Table associated with
+ * that device ID.
+ * TODO: add support for walking indirect tables.
+ */
+static int its_get_itt(struct virt_its *its, uint32_t devid,
+                       dev_table_entry_t *itt)
+{
+    paddr_t addr = get_baser_phys_addr(its->baser_dev);
+
+    if ( devid >= its->max_devices )
+        return -EINVAL;
+
+    return vgic_access_guest_memory(its->d,
+                                    addr + devid * sizeof(dev_table_entry_t),
+                                    itt, sizeof(*itt), false);
+}
+
+/*
+ * Lookup the address of the Interrupt Translation Table associated with
+ * a device ID and return the address of the ITTE belonging to the event ID
+ * (which is an index into that table).
+ */
+static paddr_t its_get_itte_address(struct virt_its *its,
+                                    uint32_t devid, uint32_t evid)
+{
+    dev_table_entry_t itt;
+    int ret;
+
+    ret = its_get_itt(its, devid, &itt);
+    if ( ret )
+        return INVALID_PADDR;
+
+    if ( evid >= DEV_TABLE_ITT_SIZE(itt) ||
+         DEV_TABLE_ITT_ADDR(itt) == INVALID_PADDR )
+        return INVALID_PADDR;
+
+    return DEV_TABLE_ITT_ADDR(itt) + evid * sizeof(struct vits_itte);
+}
+
+/*
+ * Queries the collection and device tables to get the vCPU and virtual
+ * LPI number for a given guest event. This first accesses the guest memory
+ * to resolve the address of the ITTE, then reads the ITTE entry at this
+ * address and puts the result in vcpu_ptr and vlpi_ptr.
+ * Must be called with the ITS lock held.
+ */
+static bool read_itte_locked(struct virt_its *its, uint32_t devid,
+                             uint32_t evid, struct vcpu **vcpu_ptr,
+                             uint32_t *vlpi_ptr)
+{
+    paddr_t addr;
+    struct vits_itte itte;
+    struct vcpu *vcpu;
+
+    ASSERT(spin_is_locked(&its->its_lock));
+
+    addr = its_get_itte_address(its, devid, evid);
+    if ( addr == INVALID_PADDR )
+        return false;
+
+    if ( vgic_access_guest_memory(its->d, addr, &itte, sizeof(itte), false) )
+        return false;
+
+    vcpu = get_vcpu_from_collection(its, itte.collection);
+    if ( !vcpu )
+        return false;
+
+    *vcpu_ptr = vcpu;
+    *vlpi_ptr = itte.vlpi;
+    return true;
+}
+
+/*
+ * This function takes care of the locking by taking the its_lock itself,
+ * so the caller must not hold that lock. The lock is dropped before returning.
+ */
+bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
+               struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
+{
+    bool ret;
+
+    spin_lock(&its->its_lock);
+    ret = read_itte_locked(its, devid, evid, vcpu_ptr, vlpi_ptr);
+    spin_unlock(&its->its_lock);
+
+    return ret;
+}
+
+/*
+ * Queries the collection and device tables to translate the device ID and
+ * event ID and find the appropriate ITTE. The given collection ID and the
+ * virtual LPI number are then stored into that entry.
+ * If vcpu_ptr is provided, returns the VCPU belonging to that collection.
+ * Must be called with the ITS lock held.
+ */
+bool write_itte_locked(struct virt_its *its, uint32_t devid,
+                       uint32_t evid, uint32_t collid, uint32_t vlpi,
+                       struct vcpu **vcpu_ptr)
+{
+    paddr_t addr;
+    struct vits_itte itte;
+
+    ASSERT(spin_is_locked(&its->its_lock));
+
+    if ( collid >= its->max_collections )
+        return false;
+
+    if ( vlpi >= its->d->arch.vgic.nr_lpis )
+        return false;
+
+    addr = its_get_itte_address(its, devid, evid);
+    if ( addr == INVALID_PADDR )
+        return false;
+
+    itte.collection = collid;
+    itte.vlpi = vlpi;
+
+    if ( vgic_access_guest_memory(its->d, addr, &itte, sizeof(itte), true) )
+        return false;
+
+    if ( vcpu_ptr )
+        *vcpu_ptr = get_vcpu_from_collection(its, collid);
+
+    return true;
+}
+
 /**************************************
  * Functions that handle ITS commands *
  **************************************/
-- 
2.9.0



* [PATCH v9 15/28] ARM: vITS: provide access to struct pending_irq
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (13 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 14/28] ARM: vITS: introduce translation table walks Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-17 15:35   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 16/28] ARM: vITS: handle INT command Andre Przywara
                   ` (13 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

For each device we allocate one struct pending_irq for each virtual
event (MSI).
Provide a helper function that returns a pointer to the right struct
when given a virtual deviceID/eventID pair.
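Once the its_device has been found in the rbtree, the lookup is plain array indexing: one pending_irq per event, plus host LPIs allocated in fixed-size blocks. A toy model of the block arithmetic used by get_host_lpi() follows; LPI_BLOCK = 32 and all names are assumptions for illustration, not the driver's exact layout:

```c
#include <stdint.h>

#define LPI_BLOCK    32U
#define INVALID_LPI  0U

/* Host LPIs are handed out in blocks of LPI_BLOCK consecutive numbers,
 * so an event ID splits into a block index and an offset within it. */
static uint32_t host_lpi_from_event(const uint32_t *blocks,
                                    uint32_t nr_events, uint32_t eventid)
{
    if ( eventid >= nr_events )
        return INVALID_LPI;

    return blocks[eventid / LPI_BLOCK] + (eventid % LPI_BLOCK);
}

/* Two hypothetical blocks starting at host LPIs 8192 and 8256. */
static const uint32_t demo_blocks[2] = { 8192, 8256 };
```

Event 0 maps into the first block, event 33 becomes offset 1 into the second block, and any event ID at or beyond nr_events is rejected.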

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic-v3-its.c        | 69 ++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/gic_v3_its.h |  4 +++
 2 files changed, 73 insertions(+)

diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index aebc257..fd6a394 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -800,6 +800,75 @@ out:
     return ret;
 }
 
+/* Must be called with the its_device_lock held. */
+static struct its_device *get_its_device(struct domain *d, paddr_t vdoorbell,
+                                         uint32_t vdevid)
+{
+    struct rb_node *node = d->arch.vgic.its_devices.rb_node;
+    struct its_device *dev;
+
+    ASSERT(spin_is_locked(&d->arch.vgic.its_devices_lock));
+
+    while ( node )
+    {
+        int cmp;
+
+        dev = rb_entry(node, struct its_device, rbnode);
+        cmp = compare_its_guest_devices(dev, vdoorbell, vdevid);
+
+        if ( !cmp )
+            return dev;
+
+        if ( cmp > 0 )
+            node = node->rb_left;
+        else
+            node = node->rb_right;
+    }
+
+    return NULL;
+}
+
+static uint32_t get_host_lpi(struct its_device *dev, uint32_t eventid)
+{
+    uint32_t host_lpi = INVALID_LPI;
+
+    if ( dev && (eventid < dev->eventids) )
+        host_lpi = dev->host_lpi_blocks[eventid / LPI_BLOCK] +
+                                       (eventid % LPI_BLOCK);
+
+    return host_lpi;
+}
+
+static struct pending_irq *get_event_pending_irq(struct domain *d,
+                                                 paddr_t vdoorbell_address,
+                                                 uint32_t vdevid,
+                                                 uint32_t veventid,
+                                                 uint32_t *host_lpi)
+{
+    struct its_device *dev;
+    struct pending_irq *pirq = NULL;
+
+    spin_lock(&d->arch.vgic.its_devices_lock);
+    dev = get_its_device(d, vdoorbell_address, vdevid);
+    if ( dev && veventid < dev->eventids )
+    {
+        pirq = &dev->pend_irqs[veventid];
+        if ( host_lpi )
+            *host_lpi = get_host_lpi(dev, veventid);
+    }
+    spin_unlock(&d->arch.vgic.its_devices_lock);
+
+    return pirq;
+}
+
+struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
+                                                    paddr_t vdoorbell_address,
+                                                    uint32_t vdevid,
+                                                    uint32_t veventid)
+{
+    return get_event_pending_irq(d, vdoorbell_address, vdevid, veventid, NULL);
+}
+
 /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
 void gicv3_its_dt_init(const struct dt_device_node *node)
 {
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index 40f4ef5..d162e89 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -169,6 +169,10 @@ int gicv3_its_map_guest_device(struct domain *d,
 int gicv3_allocate_host_lpi_block(struct domain *d, uint32_t *first_lpi);
 void gicv3_free_host_lpi_block(uint32_t first_lpi);
 
+struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
+                                                    paddr_t vdoorbell_address,
+                                                    uint32_t vdevid,
+                                                    uint32_t veventid);
 #else
 
 static inline void gicv3_its_dt_init(const struct dt_device_node *node)
-- 
2.9.0



* [PATCH v9 16/28] ARM: vITS: handle INT command
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (14 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 15/28] ARM: vITS: provide access to struct pending_irq Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-17 16:17   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 17/28] ARM: vITS: handle MAPC command Andre Przywara
                   ` (12 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

The INT command sets a given LPI identified by a DeviceID/EventID pair
as pending and thus triggers it to be injected.
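Every ITS command occupies eight 64-bit doublewords; INT only consumes the DeviceID (bits [63:32] of doubleword 0) and EventID (bits [31:0] of doubleword 1), with the opcode in bits [7:0] of doubleword 0. A sketch of the field extraction in the spirit of its_cmd_mask_field() follows; the field positions come from the GICv3 ITS specification, while the helper and variable names are illustrative:

```c
#include <stdint.h>

#define FIELD_MASK(bits)  (((bits) < 64) ? ((1ULL << (bits)) - 1) : ~0ULL)

/* Extract 'bits' bits starting at 'shift' from doubleword 'word'. */
static uint64_t cmd_mask_field(const uint64_t *cmd, unsigned int word,
                               unsigned int shift, unsigned int bits)
{
    return (cmd[word] >> shift) & FIELD_MASK(bits);
}

#define cmd_get_deviceid(cmd)  cmd_mask_field(cmd, 0, 32, 32)
#define cmd_get_eventid(cmd)   cmd_mask_field(cmd, 1,  0, 32)

/* A hypothetical INT command (opcode 0x03): DeviceID 0x10, EventID 5. */
static const uint64_t demo_int_cmd[8] = { (0x10ULL << 32) | 0x03, 5 };
```

Decoding the demo command yields DeviceID 0x10 and EventID 5, which the handler then feeds into read_itte() to find the vCPU/vLPI pair.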

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 12ec5f1..f9379c9 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -300,6 +300,24 @@ static uint64_t its_cmd_mask_field(uint64_t *its_cmd, unsigned int word,
 #define its_cmd_get_validbit(cmd)       its_cmd_mask_field(cmd, 2, 63,  1)
 #define its_cmd_get_ittaddr(cmd)        (its_cmd_mask_field(cmd, 2, 8, 44) << 8)
 
+static int its_handle_int(struct virt_its *its, uint64_t *cmdptr)
+{
+    uint32_t devid = its_cmd_get_deviceid(cmdptr);
+    uint32_t eventid = its_cmd_get_id(cmdptr);
+    struct vcpu *vcpu;
+    uint32_t vlpi;
+
+    if ( !read_itte(its, devid, eventid, &vcpu, &vlpi) )
+        return -1;
+
+    if ( vlpi == INVALID_LPI )
+        return -1;
+
+    vgic_vcpu_inject_irq(vcpu, vlpi);
+
+    return 0;
+}
+
 #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
 #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
 
@@ -329,6 +347,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
 
         switch ( its_cmd_get_command(command) )
         {
+        case GITS_CMD_INT:
+            ret = its_handle_int(its, command);
+            break;
         case GITS_CMD_SYNC:
             /* We handle ITS commands synchronously, so we ignore SYNC. */
             break;
-- 
2.9.0



* [PATCH v9 17/28] ARM: vITS: handle MAPC command
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (15 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 16/28] ARM: vITS: handle INT command Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-17 17:22   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 18/28] ARM: vITS: handle CLEAR command Andre Przywara
                   ` (11 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

The MAPC command associates a given collection ID with a given
redistributor, thus mapping collections to VCPUs.
We just store the vcpu_id in the collection table for that.
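Because the collection table is a flat array of 16-bit vCPU IDs in guest memory, MAPC reduces to writing one entry, with an all-ones value marking an unmapped collection. A toy in-memory model of that encoding follows; a plain array stands in for the guest-memory accessor, and all names are illustrative:

```c
#include <stdint.h>

typedef uint16_t coll_entry_t;
#define UNMAPPED  ((coll_entry_t)~0)

/* MAPC with V=1 stores the vCPU ID; with V=0 it stores the sentinel. */
static void set_collection(coll_entry_t *table, uint16_t collid,
                           coll_entry_t vcpu)
{
    table[collid] = vcpu;
}

/* Returns the vCPU ID, or -1 when the collection is unmapped or invalid. */
static int vcpu_from_collection(const coll_entry_t *table, uint16_t collid,
                                unsigned int max_vcpus)
{
    coll_entry_t v = table[collid];

    if ( v == UNMAPPED || v >= max_vcpus )
        return -1;

    return v;
}

static coll_entry_t demo_table[4] = { UNMAPPED, UNMAPPED, UNMAPPED, UNMAPPED };

static int demo(void)
{
    int a, b;

    set_collection(demo_table, 2, 1);          /* MAPC collid=2 -> vCPU 1 */
    a = vcpu_from_collection(demo_table, 2, 4);
    set_collection(demo_table, 2, UNMAPPED);   /* MAPC with V=0: unmap */
    b = vcpu_from_collection(demo_table, 2, 4);

    return a == 1 && b == -1;
}
```

Mapping then unmapping collection 2 shows both halves of the command's semantics in one round trip.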

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c | 47 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index f9379c9..8f1c217 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -118,6 +118,27 @@ static paddr_t get_baser_phys_addr(uint64_t reg)
  */
 
 /* Must be called with the ITS lock held. */
+static int its_set_collection(struct virt_its *its, uint16_t collid,
+                              coll_table_entry_t vcpu_id)
+{
+    paddr_t addr;
+
+    /* The collection table entry must be able to store a VCPU ID. */
+    BUILD_BUG_ON(BIT(sizeof(coll_table_entry_t) * 8) < MAX_VIRT_CPUS);
+
+    addr = get_baser_phys_addr(its->baser_coll);
+
+    ASSERT(spin_is_locked(&its->its_lock));
+
+    if ( collid >= its->max_collections )
+        return -ENOENT;
+
+    return vgic_access_guest_memory(its->d,
+                                    addr + collid * sizeof(coll_table_entry_t),
+                                    &vcpu_id, sizeof(vcpu_id), true);
+}
+
+/* Must be called with the ITS lock held. */
 static struct vcpu *get_vcpu_from_collection(struct virt_its *its,
                                              uint16_t collid)
 {
@@ -318,6 +339,29 @@ static int its_handle_int(struct virt_its *its, uint64_t *cmdptr)
     return 0;
 }
 
+static int its_handle_mapc(struct virt_its *its, uint64_t *cmdptr)
+{
+    uint32_t collid = its_cmd_get_collection(cmdptr);
+    uint64_t rdbase = its_cmd_mask_field(cmdptr, 2, 16, 44);
+
+    if ( collid >= its->max_collections )
+        return -1;
+
+    if ( rdbase >= its->d->max_vcpus )
+        return -1;
+
+    spin_lock(&its->its_lock);
+
+    if ( its_cmd_get_validbit(cmdptr) )
+        its_set_collection(its, collid, rdbase);
+    else
+        its_set_collection(its, collid, UNMAPPED_COLLECTION);
+
+    spin_unlock(&its->its_lock);
+
+    return 0;
+}
+
 #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
 #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
 
@@ -350,6 +394,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
         case GITS_CMD_INT:
             ret = its_handle_int(its, command);
             break;
+        case GITS_CMD_MAPC:
+            ret = its_handle_mapc(its, command);
+            break;
         case GITS_CMD_SYNC:
             /* We handle ITS commands synchronously, so we ignore SYNC. */
             break;
-- 
2.9.0



* [PATCH v9 18/28] ARM: vITS: handle CLEAR command
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (16 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 17/28] ARM: vITS: handle MAPC command Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-17 17:45   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 19/28] ARM: vITS: handle MAPD command Andre Przywara
                   ` (10 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

This introduces the ITS command handler for the CLEAR command, which
clears the pending state of an LPI.
This removes a not-yet injected, but already queued IRQ from a VCPU.
As read_itte() now gets used, we can make it static.
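The key subtlety is the race the handler tolerates: once the LPI has been made visible in a list register, CLEAR has no effect, just as on real hardware. A toy model of that state check follows; the two flags mirror GIC_IRQ_GUEST_QUEUED and GIC_IRQ_GUEST_VISIBLE, but the names and return convention here are illustrative:

```c
enum {
    QUEUED  = 1 << 0,   /* pending, waiting for a list register */
    VISIBLE = 1 << 1,   /* already presented to the guest in an LR */
};

/* Returns 1 if the pending state was removed, 0 if the IRQ was already
 * visible to the guest (a benign race: the CLEAR is simply too late). */
static int clear_pending(unsigned int *status)
{
    if ( *status & VISIBLE )
        return 0;

    *status &= ~QUEUED;
    return 1;
}

/* Helper returning the resulting status for a given starting state. */
static unsigned int demo_clear(unsigned int status)
{
    clear_pending(&status);
    return status;
}
```

A merely queued IRQ loses its pending state, while one already visible in an LR is left untouched.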

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c | 59 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 57 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 8f1c217..8a200e9 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -52,6 +52,7 @@
  */
 struct virt_its {
     struct domain *d;
+    paddr_t doorbell_address;
     unsigned int devid_bits;
     unsigned int evid_bits;
     spinlock_t vcmd_lock;       /* Protects the virtual command buffer, which */
@@ -251,8 +252,8 @@ static bool read_itte_locked(struct virt_its *its, uint32_t devid,
  * This function takes care of the locking by taking the its_lock itself, so
  * a caller shall not hold this. Before returning, the lock is dropped again.
  */
-bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
-               struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
+static bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
+                      struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
 {
     bool ret;
 
@@ -362,6 +363,57 @@ static int its_handle_mapc(struct virt_its *its, uint64_t *cmdptr)
     return 0;
 }
 
+/* CLEAR removes the pending state from an LPI. */
+static int its_handle_clear(struct virt_its *its, uint64_t *cmdptr)
+{
+    uint32_t devid = its_cmd_get_deviceid(cmdptr);
+    uint32_t eventid = its_cmd_get_id(cmdptr);
+    struct pending_irq *p;
+    struct vcpu *vcpu;
+    uint32_t vlpi;
+    unsigned long flags;
+    int ret = -1;
+
+    spin_lock(&its->its_lock);
+
+    /* Translate the DevID/EvID pair into a vCPU/vLPI pair. */
+    if ( !read_itte_locked(its, devid, eventid, &vcpu, &vlpi) )
+        goto out_unlock;
+
+    p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
+                                        devid, eventid);
+    /* Protect against an invalid LPI number. */
+    if ( unlikely(!p) )
+        goto out_unlock;
+
+    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
+
+    /*
+     * If the LPI is already visible on the guest, it is too late to
+     * clear the pending state. However this is a benign race that can
+     * happen on real hardware, too: If the LPI has already been forwarded
+     * to a CPU interface, a CLEAR request reaching the redistributor has
+     * no effect on that LPI anymore. Since LPIs are edge triggered and
+     * have no active state, we don't need to care about this here.
+     */
+    if ( !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+    {
+        /* Remove a pending, but not yet injected guest IRQ. */
+        clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
+        list_del_init(&p->inflight);
+        list_del_init(&p->lr_queue);
+    }
+
+    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
+    ret = 0;
+
+out_unlock:
+    spin_unlock(&its->its_lock);
+
+    return ret;
+}
+
 #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
 #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
 
@@ -391,6 +443,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
 
         switch ( its_cmd_get_command(command) )
         {
+        case GITS_CMD_CLEAR:
+            ret = its_handle_clear(its, command);
+            break;
         case GITS_CMD_INT:
             ret = its_handle_int(its, command);
             break;
-- 
2.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v9 19/28] ARM: vITS: handle MAPD command
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (17 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 18/28] ARM: vITS: handle CLEAR command Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-17 18:07   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs Andre Przywara
                   ` (9 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

The MAPD command maps a device by associating a memory region for
storing ITEs with a certain device ID. Since the command features a
valid bit, MAPD also provides the "unmap" functionality, which we handle
here as well.
We store the given guest physical address in the device table, and, if
this command comes from Dom0, tell the host ITS driver about this new
mapping, so it can issue the corresponding host MAPD command and create
the required tables. We take care of rolling back actions should one
step fail.
Upon unmapping a device we make sure we clean up all associated
resources and release the memory again.
We use our existing guest memory access function to find the right ITT
entry and store the mapping there (in guest memory).

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic-v3-its.c        |  18 +++++
 xen/arch/arm/gic-v3-lpi.c        |  18 +++++
 xen/arch/arm/vgic-v3-its.c       | 145 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/gic_v3_its.h |   5 ++
 4 files changed, 186 insertions(+)

diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index fd6a394..be4c3e0 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -869,6 +869,24 @@ struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
     return get_event_pending_irq(d, vdoorbell_address, vdevid, veventid, NULL);
 }
 
+int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
+                             uint32_t vdevid, uint32_t veventid)
+{
+    uint32_t host_lpi = INVALID_LPI;
+
+    if ( !get_event_pending_irq(d, vdoorbell_address, vdevid, veventid,
+                                &host_lpi) )
+        return -EINVAL;
+
+    if ( host_lpi == INVALID_LPI )
+        return -EINVAL;
+
+    gicv3_lpi_update_host_entry(host_lpi, d->domain_id,
+                                INVALID_VCPU_ID, INVALID_LPI);
+
+    return 0;
+}
+
 /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
 void gicv3_its_dt_init(const struct dt_device_node *node)
 {
diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index 44f6315..d427539 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -207,6 +207,24 @@ out:
     irq_exit();
 }
 
+void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
+                                 unsigned int vcpu_id, uint32_t virt_lpi)
+{
+    union host_lpi *hlpip, hlpi;
+
+    ASSERT(host_lpi >= LPI_OFFSET);
+
+    host_lpi -= LPI_OFFSET;
+
+    hlpip = &lpi_data.host_lpis[host_lpi / HOST_LPIS_PER_PAGE][host_lpi % HOST_LPIS_PER_PAGE];
+
+    hlpi.virt_lpi = virt_lpi;
+    hlpi.dom_id = domain_id;
+    hlpi.vcpu_id = vcpu_id;
+
+    write_u64_atomic(&hlpip->data, hlpi.data);
+}
+
 static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
 {
     uint64_t val;
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 8a200e9..731fe0c 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -175,6 +175,21 @@ static struct vcpu *get_vcpu_from_collection(struct virt_its *its,
 #define DEV_TABLE_ENTRY(addr, bits)                     \
         (((addr) & GENMASK(51, 8)) | (((bits) - 1) & GENMASK(4, 0)))
 
+/* Set the address of an ITT for a given device ID. */
+static int its_set_itt_address(struct virt_its *its, uint32_t devid,
+                               paddr_t itt_address, uint32_t nr_bits)
+{
+    paddr_t addr = get_baser_phys_addr(its->baser_dev);
+    dev_table_entry_t itt_entry = DEV_TABLE_ENTRY(itt_address, nr_bits);
+
+    if ( devid >= its->max_devices )
+        return -ENOENT;
+
+    return vgic_access_guest_memory(its->d,
+                                    addr + devid * sizeof(dev_table_entry_t),
+                                    &itt_entry, sizeof(itt_entry), true);
+}
+
 /*
  * Lookup the address of the Interrupt Translation Table associated with
  * that device ID.
@@ -414,6 +429,133 @@ out_unlock:
     return ret;
 }
 
+/* Must be called with the ITS lock held. */
+static int its_discard_event(struct virt_its *its,
+                             uint32_t vdevid, uint32_t vevid)
+{
+    struct pending_irq *p;
+    unsigned long flags;
+    struct vcpu *vcpu;
+    uint32_t vlpi;
+
+    ASSERT(spin_is_locked(&its->its_lock));
+
+    if ( !read_itte_locked(its, vdevid, vevid, &vcpu, &vlpi) )
+        return -ENOENT;
+
+    if ( vlpi == INVALID_LPI )
+        return -ENOENT;
+
+    /* Lock this VCPU's VGIC to make sure nobody is using the pending_irq. */
+    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
+
+    /* Remove the pending_irq from the tree. */
+    write_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
+    p = radix_tree_delete(&its->d->arch.vgic.pend_lpi_tree, vlpi);
+    write_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
+
+    if ( !p )
+    {
+        spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
+
+        return -ENOENT;
+    }
+
+    /* Cleanup the pending_irq and disconnect it from the LPI. */
+    list_del_init(&p->inflight);
+    list_del_init(&p->lr_queue);
+    vgic_init_pending_irq(p, INVALID_LPI);
+
+    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
+
+    /* Remove the corresponding host LPI entry */
+    return gicv3_remove_guest_event(its->d, its->doorbell_address,
+                                    vdevid, vevid);
+}
+
+static int its_unmap_device(struct virt_its *its, uint32_t devid)
+{
+    dev_table_entry_t itt;
+    uint64_t evid;
+    int ret;
+
+    spin_lock(&its->its_lock);
+
+    ret = its_get_itt(its, devid, &itt);
+    if ( ret )
+    {
+        spin_unlock(&its->its_lock);
+        return ret;
+    }
+
+    /*
+     * For DomUs we would need to bound the number of events per
+     * device, otherwise looping over all events below can take too
+     * long for a guest. This ASSERT can be removed once that case
+     * is covered.
+     */
+    ASSERT(is_hardware_domain(its->d));
+
+    for ( evid = 0; evid < DEV_TABLE_ITT_SIZE(itt); evid++ )
+        /* Don't care about errors here, clean up as much as possible. */
+        its_discard_event(its, devid, evid);
+
+    spin_unlock(&its->its_lock);
+
+    return 0;
+}
+
+static int its_handle_mapd(struct virt_its *its, uint64_t *cmdptr)
+{
+    /* size and devid get validated by the functions called below. */
+    uint32_t devid = its_cmd_get_deviceid(cmdptr);
+    unsigned int size = its_cmd_get_size(cmdptr) + 1;
+    bool valid = its_cmd_get_validbit(cmdptr);
+    paddr_t itt_addr = its_cmd_get_ittaddr(cmdptr);
+    int ret;
+
+    /* Sanitize the number of events. */
+    if ( valid && (size > its->evid_bits) )
+        return -1;
+
+    if ( !valid )
+        /* Discard all events and remove pending LPIs. */
+        its_unmap_device(its, devid);
+
+    /*
+     * There is no easy and clean way for Xen to know the ITS device ID of a
+     * particular (PCI) device, so we have to rely on the guest telling
+     * us about it. For *now* we are just using the device ID *Dom0* uses,
+     * because the driver there has the actual knowledge.
+     * Eventually this will be replaced with a dedicated hypercall to
+     * announce pass-through of devices.
+     */
+    if ( is_hardware_domain(its->d) )
+    {
+        /*
+         * Dom0's ITSes are mapped 1:1, so both addresses are the same.
+         * Also the device IDs are equal.
+         */
+        ret = gicv3_its_map_guest_device(its->d, its->doorbell_address, devid,
+                                         its->doorbell_address, devid,
+                                         BIT(size), valid);
+        if ( ret && valid )
+            return ret;
+    }
+
+    spin_lock(&its->its_lock);
+
+    if ( valid )
+        ret = its_set_itt_address(its, devid, itt_addr, size);
+    else
+        ret = its_set_itt_address(its, devid, INVALID_PADDR, 1);
+
+    spin_unlock(&its->its_lock);
+
+    return ret;
+}
+
 #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
 #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
 
@@ -452,6 +594,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
         case GITS_CMD_MAPC:
             ret = its_handle_mapc(its, command);
             break;
+        case GITS_CMD_MAPD:
+            ret = its_handle_mapd(its, command);
+            break;
         case GITS_CMD_SYNC:
             /* We handle ITS commands synchronously, so we ignore SYNC. */
             break;
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index d162e89..6f94e65 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -173,6 +173,11 @@ struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
                                                     paddr_t vdoorbell_address,
                                                     uint32_t vdevid,
                                                     uint32_t veventid);
+int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
+                                     uint32_t vdevid, uint32_t veventid);
+void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
+                                 unsigned int vcpu_id, uint32_t virt_lpi);
+
 #else
 
 static inline void gicv3_its_dt_init(const struct dt_device_node *node)
-- 
2.9.0



* [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (18 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 19/28] ARM: vITS: handle MAPD command Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-17 18:37   ` Julien Grall
  2017-05-20  1:25   ` Stefano Stabellini
  2017-05-11 17:53 ` [PATCH v9 21/28] ARM: vITS: handle MAPTI command Andre Przywara
                   ` (8 subsequent siblings)
  28 siblings, 2 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

When LPIs get unmapped by a guest, they might still be in some LR of
some VCPU. Nevertheless we remove the corresponding pending_irq
(possibly freeing it), and detect this case (irq_to_pending() returns
NULL) when the LR gets cleaned up later.
However a *new* LPI may get mapped with the same number while the old
LPI is *still* in some LR. To avoid getting the wrong state, we mark
every newly mapped LPI as PRISTINE, which means: has never been in an
LR before. If we detect the LPI in an LR anyway, it must have been an
older one, which we can simply retire.
Before inserting such a PRISTINE LPI into an LR, we must make sure that
it's not already in another LR, as the architecture forbids two
interrupts with the same virtual IRQ number on one CPU.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic.c         | 55 +++++++++++++++++++++++++++++++++++++++++-----
 xen/include/asm-arm/vgic.h |  6 +++++
 2 files changed, 56 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index fd3fa05..8bf0578 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -375,6 +375,8 @@ static inline void gic_set_lr(int lr, struct pending_irq *p,
 {
     ASSERT(!local_irq_is_enabled());
 
+    clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status);
+
     gic_hw_ops->update_lr(lr, p, state);
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
@@ -442,12 +444,41 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
 #endif
 }
 
+/*
+ * Find an unused LR to insert an IRQ into. If this new interrupt is a
+ * PRISTINE LPI, scan the other LRs to avoid inserting the same IRQ twice.
+ */
+static int gic_find_unused_lr(struct vcpu *v, struct pending_irq *p, int lr)
+{
+    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
+    unsigned long *lr_mask = (unsigned long *) &this_cpu(lr_mask);
+    struct gic_lr lr_val;
+
+    ASSERT(spin_is_locked(&v->arch.vgic.lock));
+
+    if ( test_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
+    {
+        int used_lr = 0;
+
+        while ( (used_lr = find_next_bit(lr_mask, nr_lrs, used_lr)) < nr_lrs )
+        {
+            gic_hw_ops->read_lr(used_lr, &lr_val);
+            if ( lr_val.virq == p->irq )
+                return used_lr;
+        }
+    }
+
+    lr = find_next_zero_bit(lr_mask, nr_lrs, lr);
+
+    return lr;
+}
+
 void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
         unsigned int priority)
 {
-    int i;
-    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
     struct pending_irq *p = irq_to_pending(v, virtual_irq);
+    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
+    int i = nr_lrs;
 
     ASSERT(spin_is_locked(&v->arch.vgic.lock));
 
@@ -457,7 +488,8 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
 
     if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
     {
-        i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
+        i = gic_find_unused_lr(v, p, 0);
+
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
             gic_set_lr(i, p, GICH_LR_PENDING);
@@ -509,7 +541,17 @@ static void gic_update_one_lr(struct vcpu *v, int i)
     }
     else if ( lr_val.state & GICH_LR_PENDING )
     {
-        int q __attribute__ ((unused)) = test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
+        int q __attribute__ ((unused));
+
+        if ( test_and_clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
+        {
+            gic_hw_ops->clear_lr(i);
+            clear_bit(i, &this_cpu(lr_mask));
+
+            return;
+        }
+
+        q = test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
 #ifdef GIC_DEBUG
         if ( q )
             gdprintk(XENLOG_DEBUG, "trying to inject irq=%d into d%dv%d, when it is already pending in LR%d\n",
@@ -521,6 +563,9 @@ static void gic_update_one_lr(struct vcpu *v, int i)
         gic_hw_ops->clear_lr(i);
         clear_bit(i, &this_cpu(lr_mask));
 
+        if ( test_and_clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
+            return;
+
         if ( p->desc != NULL )
             clear_bit(_IRQ_INPROGRESS, &p->desc->status);
         clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
@@ -591,7 +636,7 @@ static void gic_restore_pending_irqs(struct vcpu *v)
     inflight_r = &v->arch.vgic.inflight_irqs;
     list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
     {
-        lr = find_next_zero_bit(&this_cpu(lr_mask), nr_lrs, lr);
+        lr = gic_find_unused_lr(v, p, lr);
         if ( lr >= nr_lrs )
         {
             /* No more free LRs: find a lower priority irq to evict */
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 02732db..3fc4ceb 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -60,12 +60,18 @@ struct pending_irq
      * vcpu while it is still inflight and on an GICH_LR register on the
      * old vcpu.
      *
+     * GIC_IRQ_GUEST_PRISTINE_LPI: the IRQ is a newly mapped LPI, which
+     * has never been in an LR before. This means that any trace of an
+     * LPI with the same number in an LR must be from an older LPI, which
+     * has been unmapped before.
+     *
      */
 #define GIC_IRQ_GUEST_QUEUED   0
 #define GIC_IRQ_GUEST_ACTIVE   1
 #define GIC_IRQ_GUEST_VISIBLE  2
 #define GIC_IRQ_GUEST_ENABLED  3
 #define GIC_IRQ_GUEST_MIGRATING   4
+#define GIC_IRQ_GUEST_PRISTINE_LPI  5
     unsigned long status;
     struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
     unsigned int irq;
-- 
2.9.0



* [PATCH v9 21/28] ARM: vITS: handle MAPTI command
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (19 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-18 14:04   ` Julien Grall
  2017-05-22 23:39   ` Stefano Stabellini
  2017-05-11 17:53 ` [PATCH v9 22/28] ARM: vITS: handle MOVI command Andre Przywara
                   ` (7 subsequent siblings)
  28 siblings, 2 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

The MAPTI command associates a DeviceID/EventID pair with an LPI/CPU
pair and actually instantiates LPI interrupts.
We connect the already allocated host LPI to this virtual LPI, so that
any triggering LPI on the host can be quickly forwarded to a guest.
Beside entering the VCPU and the virtual LPI number in the respective
host LPI entry, we also initialize and add the already allocated
struct pending_irq to our radix tree, so that we can now easily find it
by its virtual LPI number.
We also read the property table to update the enabled bit and the
priority for our new LPI, as we might have missed this during an earlier
INVALL call (which only checks mapped LPIs).
Since write_itte_locked() now sees its first usage, we change the
declaration to static.
---
 xen/arch/arm/gic-v3-its.c        |  28 +++++++++
 xen/arch/arm/vgic-v3-its.c       | 124 ++++++++++++++++++++++++++++++++++++++-
 xen/include/asm-arm/gic_v3_its.h |   3 +
 3 files changed, 152 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index be4c3e0..8a50f7d 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -887,6 +887,34 @@ int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
     return 0;
 }
 
+/*
+ * Connects the event ID for an already assigned device to the given VCPU/vLPI
+ * pair. The corresponding physical LPI is already mapped on the host side
+ * (when assigning the physical device to the guest), so we just connect the
+ * target VCPU/vLPI pair to that interrupt to inject it properly if it fires.
+ * Returns a pointer to the already allocated struct pending_irq that is
+ * meant to be used by that event.
+ */
+struct pending_irq *gicv3_assign_guest_event(struct domain *d,
+                                             paddr_t vdoorbell_address,
+                                             uint32_t vdevid, uint32_t veventid,
+                                             struct vcpu *v, uint32_t virt_lpi)
+{
+    struct pending_irq *pirq;
+    uint32_t host_lpi = 0;
+
+    pirq = get_event_pending_irq(d, vdoorbell_address, vdevid, veventid,
+                                 &host_lpi);
+
+    if ( !pirq || !host_lpi )
+        return NULL;
+
+    gicv3_lpi_update_host_entry(host_lpi, d->domain_id,
+                                v ? v->vcpu_id : INVALID_VCPU_ID, virt_lpi);
+
+    return pirq;
+}
+
 /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
 void gicv3_its_dt_init(const struct dt_device_node *node)
 {
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 731fe0c..c5c0e5e 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -286,9 +286,9 @@ static bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
  * If vcpu_ptr is provided, returns the VCPU belonging to that collection.
  * Must be called with the ITS lock held.
  */
-bool write_itte_locked(struct virt_its *its, uint32_t devid,
-                       uint32_t evid, uint32_t collid, uint32_t vlpi,
-                       struct vcpu **vcpu_ptr)
+static bool write_itte_locked(struct virt_its *its, uint32_t devid,
+                              uint32_t evid, uint32_t collid, uint32_t vlpi,
+                              struct vcpu **vcpu_ptr)
 {
     paddr_t addr;
     struct vits_itte itte;
@@ -429,6 +429,33 @@ out_unlock:
     return ret;
 }
 
+/*
+ * For a given virtual LPI read the enabled bit and priority from the virtual
+ * property table and update the virtual IRQ's state in the given pending_irq.
+ * Must be called with the respective VGIC VCPU lock held.
+ */
+static int update_lpi_property(struct domain *d, struct pending_irq *p)
+{
+    paddr_t addr;
+    uint8_t property;
+    int ret;
+
+    addr = d->arch.vgic.rdist_propbase & GENMASK(51, 12);
+
+    ret = vgic_access_guest_memory(d, addr + p->irq - LPI_OFFSET,
+                                   &property, sizeof(property), false);
+    if ( ret )
+        return ret;
+
+    p->lpi_priority = property & LPI_PROP_PRIO_MASK;
+    if ( property & LPI_PROP_ENABLED )
+        set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
+    else
+        clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
+
+    return 0;
+}
+
 /* Must be called with the ITS lock held. */
 static int its_discard_event(struct virt_its *its,
                              uint32_t vdevid, uint32_t vevid)
@@ -556,6 +583,93 @@ static int its_handle_mapd(struct virt_its *its, uint64_t *cmdptr)
     return ret;
 }
 
+static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
+{
+    uint32_t devid = its_cmd_get_deviceid(cmdptr);
+    uint32_t eventid = its_cmd_get_id(cmdptr);
+    uint32_t intid = its_cmd_get_physical_id(cmdptr), _intid;
+    uint16_t collid = its_cmd_get_collection(cmdptr);
+    struct pending_irq *pirq;
+    struct vcpu *vcpu = NULL;
+    int ret = -1;
+
+    if ( its_cmd_get_command(cmdptr) == GITS_CMD_MAPI )
+        intid = eventid;
+
+    spin_lock(&its->its_lock);
+    /*
+     * Check whether there is a valid existing mapping. If yes, the
+     * behaviour is unpredictable and we choose to ignore this command.
+     * This makes sure we start with a pristine pending_irq below.
+     */
+    if ( read_itte_locked(its, devid, eventid, &vcpu, &_intid) &&
+         _intid != INVALID_LPI )
+    {
+        spin_unlock(&its->its_lock);
+        return -1;
+    }
+
+    /* Enter the mapping in our virtual ITS tables. */
+    if ( !write_itte_locked(its, devid, eventid, collid, intid, &vcpu) )
+    {
+        spin_unlock(&its->its_lock);
+        return -1;
+    }
+
+    spin_unlock(&its->its_lock);
+
+    /*
+     * Connect this virtual LPI to the corresponding host LPI, which is
+     * determined by the same device ID and event ID on the host side.
+     * This returns us the corresponding, still unused pending_irq.
+     */
+    pirq = gicv3_assign_guest_event(its->d, its->doorbell_address,
+                                    devid, eventid, vcpu, intid);
+    if ( !pirq )
+        goto out_remove_mapping;
+
+    vgic_init_pending_irq(pirq, intid);
+
+    /*
+     * Now read the guest's property table to initialize our cached state.
+     * It can't fire at this time, because it is not known to the host yet.
+     * We don't need the VGIC VCPU lock here, because the pending_irq isn't
+     * in the radix tree yet.
+     */
+    ret = update_lpi_property(its->d, pirq);
+    if ( ret )
+        goto out_remove_host_entry;
+
+    pirq->lpi_vcpu_id = vcpu->vcpu_id;
+    /*
+     * Mark this LPI as new, so any older (now unmapped) LPI in any LR
+     * can be easily recognised as such.
+     */
+    set_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &pirq->status);
+
+    /*
+     * Now insert the pending_irq into the domain's LPI tree, so that
+     * it becomes live.
+     */
+    write_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
+    ret = radix_tree_insert(&its->d->arch.vgic.pend_lpi_tree, intid, pirq);
+    write_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
+
+    if ( !ret )
+        return 0;
+
+out_remove_host_entry:
+    gicv3_remove_guest_event(its->d, its->doorbell_address, devid, eventid);
+
+out_remove_mapping:
+    spin_lock(&its->its_lock);
+    write_itte_locked(its, devid, eventid,
+                      UNMAPPED_COLLECTION, INVALID_LPI, NULL);
+    spin_unlock(&its->its_lock);
+
+    return ret;
+}
+
 #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
 #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
 
@@ -597,6 +711,10 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
         case GITS_CMD_MAPD:
             ret = its_handle_mapd(its, command);
             break;
+        case GITS_CMD_MAPI:
+        case GITS_CMD_MAPTI:
+            ret = its_handle_mapti(its, command);
+            break;
         case GITS_CMD_SYNC:
             /* We handle ITS commands synchronously, so we ignore SYNC. */
             break;
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index 6f94e65..9c08cee 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -175,6 +175,9 @@ struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
                                                     uint32_t veventid);
 int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
                                      uint32_t vdevid, uint32_t veventid);
+struct pending_irq *gicv3_assign_guest_event(struct domain *d, paddr_t doorbell,
+                                             uint32_t devid, uint32_t eventid,
+                                             struct vcpu *v, uint32_t virt_lpi);
 void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
                                  unsigned int vcpu_id, uint32_t virt_lpi);
 
-- 
2.9.0



* [PATCH v9 22/28] ARM: vITS: handle MOVI command
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (20 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 21/28] ARM: vITS: handle MAPTI command Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-18 14:17   ` Julien Grall
  2017-05-23  0:28   ` Stefano Stabellini
  2017-05-11 17:53 ` [PATCH v9 23/28] ARM: vITS: handle DISCARD command Andre Przywara
                   ` (6 subsequent siblings)
  28 siblings, 2 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

The MOVI command moves the interrupt affinity from one redistributor
(read: VCPU) to another.
Migration of "live" LPIs is not yet implemented; for now we just store
the changed affinity in the host LPI structure and in our virtual ITTE.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic-v3-its.c        | 30 ++++++++++++++++++++
 xen/arch/arm/gic-v3-lpi.c        | 15 ++++++++++
 xen/arch/arm/vgic-v3-its.c       | 59 ++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/gic_v3_its.h |  4 +++
 4 files changed, 108 insertions(+)

diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index 8a50f7d..f00597e 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -915,6 +915,36 @@ struct pending_irq *gicv3_assign_guest_event(struct domain *d,
     return pirq;
 }
 
+/* Changes the target VCPU for a given host LPI assigned to a domain. */
+int gicv3_lpi_change_vcpu(struct domain *d, paddr_t vdoorbell,
+                          uint32_t vdevid, uint32_t veventid,
+                          unsigned int vcpu_id)
+{
+    uint32_t host_lpi;
+    struct its_device *dev;
+
+    spin_lock(&d->arch.vgic.its_devices_lock);
+    dev = get_its_device(d, vdoorbell, vdevid);
+    if ( dev )
+        host_lpi = get_host_lpi(dev, veventid);
+    else
+        host_lpi = 0;
+    spin_unlock(&d->arch.vgic.its_devices_lock);
+
+    if ( !host_lpi )
+        return -ENOENT;
+
+    /*
+     * TODO: This just changes the virtual affinity; the physical LPI
+     * still stays on the same physical CPU.
+     * Consider moving the physical affinity to the pCPU running the new
+     * vCPU. However this requires scheduling a host ITS command.
+     */
+    gicv3_lpi_update_host_vcpuid(host_lpi, vcpu_id);
+
+    return 0;
+}
+
 /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
 void gicv3_its_dt_init(const struct dt_device_node *node)
 {
diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index d427539..6af5ad9 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -225,6 +225,21 @@ void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
     write_u64_atomic(&hlpip->data, hlpi.data);
 }
 
+int gicv3_lpi_update_host_vcpuid(uint32_t host_lpi, unsigned int vcpu_id)
+{
+    union host_lpi *hlpip;
+
+    ASSERT(host_lpi >= LPI_OFFSET);
+
+    host_lpi -= LPI_OFFSET;
+
+    hlpip = &lpi_data.host_lpis[host_lpi / HOST_LPIS_PER_PAGE][host_lpi % HOST_LPIS_PER_PAGE];
+
+    write_u16_atomic(&hlpip->vcpu_id, vcpu_id);
+
+    return 0;
+}
+
 static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
 {
     uint64_t val;
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index c5c0e5e..ef7c78f 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -670,6 +670,59 @@ out_remove_mapping:
     return ret;
 }
 
+static int its_handle_movi(struct virt_its *its, uint64_t *cmdptr)
+{
+    uint32_t devid = its_cmd_get_deviceid(cmdptr);
+    uint32_t eventid = its_cmd_get_id(cmdptr);
+    uint16_t collid = its_cmd_get_collection(cmdptr);
+    unsigned long flags;
+    struct pending_irq *p;
+    struct vcpu *ovcpu, *nvcpu;
+    uint32_t vlpi;
+    int ret = -1;
+
+    spin_lock(&its->its_lock);
+    /* Check for a mapped LPI and get the LPI number. */
+    if ( !read_itte_locked(its, devid, eventid, &ovcpu, &vlpi) )
+        goto out_unlock;
+
+    if ( vlpi == INVALID_LPI )
+        goto out_unlock;
+
+    /* Check the new collection ID and get the new VCPU pointer */
+    nvcpu = get_vcpu_from_collection(its, collid);
+    if ( !nvcpu )
+        goto out_unlock;
+
+    p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
+                                        devid, eventid);
+    if ( unlikely(!p) )
+        goto out_unlock;
+
+    spin_lock_irqsave(&ovcpu->arch.vgic.lock, flags);
+
+    /* Update our cached vcpu_id in the pending_irq. */
+    p->lpi_vcpu_id = nvcpu->vcpu_id;
+
+    spin_unlock_irqrestore(&ovcpu->arch.vgic.lock, flags);
+
+    /* Now store the new collection in the translation table. */
+    if ( !write_itte_locked(its, devid, eventid, collid, vlpi, &nvcpu) )
+        goto out_unlock;
+
+    spin_unlock(&its->its_lock);
+
+    /* TODO: lookup currently-in-guest virtual IRQs and migrate them? */
+
+    return gicv3_lpi_change_vcpu(its->d, its->doorbell_address,
+                                 devid, eventid, nvcpu->vcpu_id);
+
+out_unlock:
+    spin_unlock(&its->its_lock);
+
+    return ret;
+}
+
 #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
 #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
 
@@ -715,6 +768,12 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
         case GITS_CMD_MAPTI:
             ret = its_handle_mapti(its, command);
             break;
+        case GITS_CMD_MOVALL:
+            gdprintk(XENLOG_G_INFO, "vGITS: ignoring MOVALL command\n");
+            break;
+        case GITS_CMD_MOVI:
+            ret = its_handle_movi(its, command);
+            break;
         case GITS_CMD_SYNC:
             /* We handle ITS commands synchronously, so we ignore SYNC. */
             break;
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index 9c08cee..82d788c 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -178,8 +178,12 @@ int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
 struct pending_irq *gicv3_assign_guest_event(struct domain *d, paddr_t doorbell,
                                              uint32_t devid, uint32_t eventid,
                                              struct vcpu *v, uint32_t virt_lpi);
+int gicv3_lpi_change_vcpu(struct domain *d, paddr_t doorbell,
+                          uint32_t devid, uint32_t eventid,
+                          unsigned int vcpu_id);
 void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
                                  unsigned int vcpu_id, uint32_t virt_lpi);
+int gicv3_lpi_update_host_vcpuid(uint32_t host_lpi, unsigned int vcpu_id);
 
 #else
 
-- 
2.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 108+ messages in thread
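[Editorial note] The MOVI flow in its_handle_movi() above — check that the event is mapped, check that the new collection resolves to a VCPU, then retarget — can be sketched as a minimal standalone model. All names and table shapes below are hypothetical simplifications for illustration, not the Xen data structures:

```c
#include <assert.h>
#include <stdint.h>

#define INVALID_LPI 0

/* Hypothetical, simplified interrupt translation table entry. */
struct itte {
    uint32_t vlpi;    /* INVALID_LPI means "event not mapped" */
    uint16_t collid;  /* collection this event is routed to */
};

struct toy_its {
    struct itte table[4][4];  /* indexed by [devid][eventid] */
    int coll_to_vcpu[8];      /* collection ID -> VCPU ID, -1 = unmapped */
};

/* Mirrors the its_handle_movi() flow: validate the event, validate the
 * new collection, then store the new routing. Returns 0 on success. */
static int toy_movi(struct toy_its *its, uint32_t devid, uint32_t eventid,
                    uint16_t collid)
{
    struct itte *e = &its->table[devid][eventid];

    if (e->vlpi == INVALID_LPI)         /* no LPI mapped for this event */
        return -1;
    if (its->coll_to_vcpu[collid] < 0)  /* unmapped target collection */
        return -1;

    e->collid = collid;                 /* store the new collection */
    return 0;
}
```

In the real code the same two failure paths bail out under the ITS lock before the cached `lpi_vcpu_id` and the host-side routing are updated.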

* [PATCH v9 23/28] ARM: vITS: handle DISCARD command
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (21 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 22/28] ARM: vITS: handle MOVI command Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-18 14:23   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 24/28] ARM: vITS: handle INV command Andre Przywara
                   ` (5 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

The DISCARD command drops the connection between a DeviceID/EventID
pair and its LPI/collection mapping.
We mark the respective structure entries as not allocated and make
sure that any queued IRQs are removed.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index ef7c78f..f7a8d77 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -723,6 +723,27 @@ out_unlock:
     return ret;
 }
 
+static int its_handle_discard(struct virt_its *its, uint64_t *cmdptr)
+{
+    uint32_t devid = its_cmd_get_deviceid(cmdptr);
+    uint32_t eventid = its_cmd_get_id(cmdptr);
+    int ret;
+
+    spin_lock(&its->its_lock);
+
+    /* Remove from the radix tree and remove the host entry. */
+    ret = its_discard_event(its, devid, eventid);
+
+    /* Remove from the guest's ITTE. */
+    if ( ret || write_itte_locked(its, devid, eventid,
+                                  UNMAPPED_COLLECTION, INVALID_LPI, NULL) )
+        ret = -1;
+
+    spin_unlock(&its->its_lock);
+
+    return ret;
+}
+
 #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
 #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
 
@@ -755,6 +776,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
         case GITS_CMD_CLEAR:
             ret = its_handle_clear(its, command);
             break;
+        case GITS_CMD_DISCARD:
+            ret = its_handle_discard(its, command);
+            break;
         case GITS_CMD_INT:
             ret = its_handle_int(its, command);
             break;
-- 
2.9.0


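[Editorial note] The two-step teardown in its_handle_discard() — drop the host-side state first, then invalidate the guest's ITTE — can be modeled in a few lines. The structures and names below are made up for illustration; the real code operates on the radix tree and the ITS translation tables:

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

#define INVALID_LPI          0
#define UNMAPPED_COLLECTION  0xffff

/* Hypothetical per-event state: the guest-visible mapping plus a flag
 * standing in for the host-side pending_irq / host LPI entry. */
struct toy_event {
    uint32_t vlpi;
    uint16_t collid;
    bool host_mapped;
};

/* Mirrors its_handle_discard(): tear down the host-side state first
 * (its_discard_event()), then mark the guest's ITTE as unmapped. */
static int toy_discard(struct toy_event *e)
{
    if (!e->host_mapped)
        return -1;           /* its_discard_event() would fail */

    e->host_mapped = false;  /* drop radix tree / host LPI entry */
    e->vlpi = INVALID_LPI;   /* write_itte(..., INVALID_LPI) */
    e->collid = UNMAPPED_COLLECTION;
    return 0;
}
```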

* [PATCH v9 24/28] ARM: vITS: handle INV command
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (22 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 23/28] ARM: vITS: handle DISCARD command Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-23  0:01   ` Stefano Stabellini
  2017-05-11 17:53 ` [PATCH v9 25/28] ARM: vITS: handle INVALL command Andre Przywara
                   ` (4 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

The INV command instructs the ITS to update the configuration data for
a given LPI by re-reading its entry from the property table.
The priority value requires no special handling, but enabling or
disabling an LPI has visible effects: we remove disabled virtual LPIs
from their VCPUs or push newly enabled ones to them, and also check the
virtual pending bit when an LPI gets enabled.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index f7a8d77..6cfb560 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -456,6 +456,73 @@ static int update_lpi_property(struct domain *d, struct pending_irq *p)
     return 0;
 }
 
+/*
+ * Checks whether an LPI that got enabled or disabled needs to change
+ * something in the VGIC (added or removed from the LR or queues).
+ * Must be called with the VCPU VGIC lock held.
+ */
+static void update_lpi_vgic_status(struct vcpu *v, struct pending_irq *p)
+{
+    ASSERT(spin_is_locked(&v->arch.vgic.lock));
+
+    if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
+    {
+        if ( !list_empty(&p->inflight) &&
+             !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+            gic_raise_guest_irq(v, p->irq, p->lpi_priority);
+    }
+    else
+    {
+        clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
+        list_del_init(&p->lr_queue);
+    }
+}
+
+static int its_handle_inv(struct virt_its *its, uint64_t *cmdptr)
+{
+    struct domain *d = its->d;
+    uint32_t devid = its_cmd_get_deviceid(cmdptr);
+    uint32_t eventid = its_cmd_get_id(cmdptr);
+    struct pending_irq *p;
+    unsigned long flags;
+    struct vcpu *vcpu;
+    uint32_t vlpi;
+    int ret = -1;
+
+    spin_lock(&its->its_lock);
+
+    /* Translate the event into a vCPU/vLPI pair. */
+    if ( !read_itte_locked(its, devid, eventid, &vcpu, &vlpi) )
+        goto out_unlock_its;
+
+    if ( vlpi == INVALID_LPI )
+        goto out_unlock_its;
+
+    p = gicv3_its_get_event_pending_irq(d, its->doorbell_address,
+                                        devid, eventid);
+    if ( unlikely(!p) )
+        goto out_unlock_its;
+
+    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
+
+    /* Read the property table and update our cached status. */
+    if ( update_lpi_property(d, p) )
+        goto out_unlock;
+
+    /* Check whether the LPI needs to go on a VCPU. */
+    update_lpi_vgic_status(vcpu, p);
+
+    ret = 0;
+
+out_unlock:
+    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
+
+out_unlock_its:
+    spin_unlock(&its->its_lock);
+
+    return ret;
+}
+
 /* Must be called with the ITS lock held. */
 static int its_discard_event(struct virt_its *its,
                              uint32_t vdevid, uint32_t vevid)
@@ -782,6 +849,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
         case GITS_CMD_INT:
             ret = its_handle_int(its, command);
             break;
+        case GITS_CMD_INV:
+            ret = its_handle_inv(its, command);
+            break;
         case GITS_CMD_MAPC:
             ret = its_handle_mapc(its, command);
             break;
-- 
2.9.0


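[Editorial note] The decision logic of its_handle_inv() plus update_lpi_vgic_status() above — re-read the property byte, then either raise a pending-but-invisible LPI or dequeue a disabled one — can be sketched with a stripped-down stand-in for struct pending_irq. All fields below are illustrative simplifications:

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

#define LPI_PROP_ENABLED 0x1   /* bit 0 of a property-table byte */

/* Hypothetical, stripped-down stand-in for struct pending_irq. */
struct toy_pirq {
    bool enabled;
    bool inflight;     /* a pending instance is queued for the guest */
    bool in_lr_queue;  /* queued for injection into a list register */
    bool raised;       /* stands in for a gic_raise_guest_irq() call */
};

/* Mirrors the INV handling: refresh the enabled state from the
 * property byte, then update the VGIC-side queues accordingly. */
static void toy_inv(struct toy_pirq *p, uint8_t prop)
{
    p->enabled = prop & LPI_PROP_ENABLED;

    if (p->enabled) {
        if (p->inflight && !p->raised)
            p->raised = true;       /* inject the pending instance */
    } else {
        p->in_lr_queue = false;     /* drop it from the LR queue */
    }
}
```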

* [PATCH v9 25/28] ARM: vITS: handle INVALL command
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (23 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 24/28] ARM: vITS: handle INV command Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-06-02 17:24   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 26/28] ARM: vITS: increase mmio_count for each ITS Andre Przywara
                   ` (3 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

The INVALL command instructs an ITS to invalidate the configuration
data for all LPIs associated with a given redistributor (read: VCPU).
This is nasty to emulate exactly with our architecture, so we just
iterate over all mapped LPIs and filter for those from that particular
VCPU.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 6cfb560..36faa16 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -523,6 +523,69 @@ out_unlock_its:
     return ret;
 }
 
+/*
+ * INVALL updates the per-LPI configuration status for every LPI mapped to
+ * a particular redistributor.
+ * We iterate over all mapped LPIs in our radix tree and update those.
+ */
+static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
+{
+    uint32_t collid = its_cmd_get_collection(cmdptr);
+    struct vcpu *vcpu;
+    struct pending_irq *pirqs[16];
+    uint64_t vlpi = 0;          /* 64-bit to catch overflows */
+    unsigned int nr_lpis, i;
+    unsigned long flags;
+    int ret = 0;
+
+    /*
+     * As this implementation walks over all mapped LPIs, it might take
+     * too long for a real guest, so we might want to revisit this
+     * implementation for DomUs.
+     * However this command is very rare, also we don't expect many
+     * LPIs to be actually mapped, so it's fine for Dom0 to use.
+     */
+    ASSERT(is_hardware_domain(its->d));
+
+    spin_lock(&its->its_lock);
+    vcpu = get_vcpu_from_collection(its, collid);
+    spin_unlock(&its->its_lock);
+
+    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
+    read_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
+
+    do
+    {
+        nr_lpis = radix_tree_gang_lookup(&its->d->arch.vgic.pend_lpi_tree,
+                                         (void **)pirqs, vlpi,
+                                         ARRAY_SIZE(pirqs));
+
+        for ( i = 0; i < nr_lpis; i++ )
+        {
+            /* We only care about LPIs on our VCPU. */
+            if ( pirqs[i]->lpi_vcpu_id != vcpu->vcpu_id )
+                continue;
+
+            vlpi = pirqs[i]->irq;
+            /* If that fails for a single LPI, carry on to handle the rest. */
+            ret = update_lpi_property(its->d, pirqs[i]);
+            if ( !ret )
+                update_lpi_vgic_status(vcpu, pirqs[i]);
+        }
+    /*
+     * Loop over the next gang of pending_irqs until we reached the end of
+     * a (fully populated) tree or the lookup function returns less LPIs than
+     * it has been asked for.
+     */
+    } while ( (++vlpi < its->d->arch.vgic.nr_lpis) &&
+              (nr_lpis == ARRAY_SIZE(pirqs)) );
+
+    read_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
+    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
+
+    return ret;
+}
+
 /* Must be called with the ITS lock held. */
 static int its_discard_event(struct virt_its *its,
                              uint32_t vdevid, uint32_t vevid)
@@ -852,6 +915,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
         case GITS_CMD_INV:
             ret = its_handle_inv(its, command);
             break;
+        case GITS_CMD_INVALL:
+            ret = its_handle_invall(its, command);
+            break;
         case GITS_CMD_MAPC:
             ret = its_handle_mapc(its, command);
             break;
-- 
2.9.0


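[Editorial note] The batched-iteration shape used by its_handle_invall() — fetch a gang of up to 16 entries, resume at the last key plus one, and stop once a lookup comes back short — is easy to get subtly wrong, so here is a self-contained model of just that loop. The lookup over a sorted key array is a stand-in for radix_tree_gang_lookup(); batch size and names are illustrative:

```c
#include <assert.h>
#include <stddef.h>

#define BATCH 4

/* Return up to 'max' keys >= 'start' from the sorted array 'keys' of
 * length n; a toy stand-in for radix_tree_gang_lookup(). */
static size_t gang_lookup(const unsigned *keys, size_t n,
                          unsigned start, unsigned *out, size_t max)
{
    size_t found = 0;

    for (size_t i = 0; i < n && found < max; i++)
        if (keys[i] >= start)
            out[found++] = keys[i];
    return found;
}

/* Visit every key using the same loop shape as its_handle_invall():
 * resume at last key + 1, terminate on a short (non-full) batch. */
static size_t visit_all(const unsigned *keys, size_t n, unsigned *visited)
{
    unsigned batch[BATCH];
    unsigned next = 0;
    size_t nr, total = 0;

    do {
        nr = gang_lookup(keys, n, next, batch, BATCH);
        for (size_t i = 0; i < nr; i++)
            visited[total++] = batch[i];
        if (nr)
            next = batch[nr - 1] + 1;   /* resume past the last entry */
    } while (nr == BATCH);              /* full batch: maybe more left */

    return total;
}
```

The "resume past the last entry" step is why the patch tracks `vlpi` as 64-bit: incrementing past the highest possible LPI must not wrap around.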

* [PATCH v9 26/28] ARM: vITS: increase mmio_count for each ITS
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (24 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 25/28] ARM: vITS: handle INVALL command Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-18 14:34   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 27/28] ARM: vITS: create and initialize virtual ITSes for Dom0 Andre Przywara
                   ` (2 subsequent siblings)
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Increase the count of needed MMIO regions by one for each ITS that Dom0
has to emulate. We emulate the ITSes 1:1 from the hardware, so this
number is the number of host ITSes.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c       | 15 +++++++++++++++
 xen/arch/arm/vgic-v3.c           |  3 +++
 xen/include/asm-arm/gic_v3_its.h |  7 +++++++
 3 files changed, 25 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 36faa16..8f6ff11 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -86,6 +86,21 @@ typedef uint64_t dev_table_entry_t;
 #define GITS_BASER_RO_MASK       (GITS_BASER_TYPE_MASK | \
                                   (31UL << GITS_BASER_ENTRY_SIZE_SHIFT))
 
+unsigned int vgic_v3_its_count(const struct domain *d)
+{
+    struct host_its *hw_its;
+    unsigned int ret = 0;
+
+    /* Only Dom0 can use emulated ITSes so far. */
+    if ( !is_hardware_domain(d) )
+        return 0;
+
+    list_for_each_entry(hw_its, &host_its_list, entry)
+        ret++;
+
+    return ret;
+}
+
 int vgic_v3_its_init_domain(struct domain *d)
 {
     spin_lock_init(&d->arch.vgic.its_devices_lock);
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 6dbdb2e..41cda78 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1802,6 +1802,9 @@ int vgic_v3_init(struct domain *d, int *mmio_count)
     /* GICD region + number of Redistributors */
     *mmio_count = vgic_v3_rdist_count(d) + 1;
 
+    /* one region per ITS */
+    *mmio_count += vgic_v3_its_count(d);
+
     register_vgic_ops(d, &v3_ops);
 
     return 0;
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index 82d788c..927568f 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -137,6 +137,8 @@ void gicv3_its_dt_init(const struct dt_device_node *node);
 
 bool gicv3_its_host_has_its(void);
 
+unsigned int vgic_v3_its_count(const struct domain *d);
+
 void gicv3_do_LPI(unsigned int lpi);
 
 int gicv3_lpi_init_rdist(void __iomem * rdist_base);
@@ -196,6 +198,11 @@ static inline bool gicv3_its_host_has_its(void)
     return false;
 }
 
+static inline unsigned int vgic_v3_its_count(const struct domain *d)
+{
+    return 0;
+}
+
 static inline void gicv3_do_LPI(unsigned int lpi)
 {
     /* We don't enable LPIs without an ITS. */
-- 
2.9.0



* [PATCH v9 27/28] ARM: vITS: create and initialize virtual ITSes for Dom0
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (25 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 26/28] ARM: vITS: increase mmio_count for each ITS Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-18 14:41   ` Julien Grall
  2017-05-11 17:53 ` [PATCH v9 28/28] ARM: vITS: create ITS subnodes for Dom0 DT Andre Przywara
  2017-05-11 18:31 ` [PATCH v9 00/28] arm64: Dom0 ITS emulation Julien Grall
  28 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

For each hardware ITS create and initialize a virtual ITS for Dom0.
We use the same memory mapped address to keep the doorbell working.
This introduces a function to initialize a virtual ITS.
We maintain a list of virtual ITSes, at the moment solely so that we
can free them again later.
We configure each virtual ITS to match its hardware counterpart, that
is, we keep the number of device ID bits and event ID bits the same as
on the host ITS.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/vgic-v3-its.c       | 75 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/vgic-v3.c           |  4 +++
 xen/include/asm-arm/domain.h     |  1 +
 xen/include/asm-arm/gic_v3_its.h |  4 +++
 4 files changed, 84 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 8f6ff11..ca35aca 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -52,6 +52,7 @@
  */
 struct virt_its {
     struct domain *d;
+    struct list_head vits_list;
     paddr_t doorbell_address;
     unsigned int devid_bits;
     unsigned int evid_bits;
@@ -103,14 +104,49 @@ unsigned int vgic_v3_its_count(const struct domain *d)
 
 int vgic_v3_its_init_domain(struct domain *d)
 {
+    int ret;
+
+    INIT_LIST_HEAD(&d->arch.vgic.vits_list);
     spin_lock_init(&d->arch.vgic.its_devices_lock);
     d->arch.vgic.its_devices = RB_ROOT;
 
+    if ( is_hardware_domain(d) )
+    {
+        struct host_its *hw_its;
+
+        list_for_each_entry(hw_its, &host_its_list, entry)
+        {
+            /*
+             * For each host ITS create a virtual ITS using the same
+             * base and thus doorbell address.
+             * Use the same number of device ID and event ID bits as the host.
+             */
+            ret = vgic_v3_its_init_virtual(d, hw_its->addr,
+                                           hw_its->devid_bits,
+                                           hw_its->evid_bits);
+            if ( ret )
+            {
+                vgic_v3_its_free_domain(d);
+                return ret;
+            }
+            else
+                d->arch.vgic.has_its = true;
+        }
+    }
+
     return 0;
 }
 
 void vgic_v3_its_free_domain(struct domain *d)
 {
+    struct virt_its *pos, *temp;
+
+    list_for_each_entry_safe( pos, temp, &d->arch.vgic.vits_list, vits_list )
+    {
+        list_del(&pos->vits_list);
+        xfree(pos);
+    }
+
     ASSERT(RB_EMPTY_ROOT(&d->arch.vgic.its_devices));
 }
 
@@ -1407,6 +1443,45 @@ static const struct mmio_handler_ops vgic_its_mmio_handler = {
     .write = vgic_v3_its_mmio_write,
 };
 
+int vgic_v3_its_init_virtual(struct domain *d, paddr_t guest_addr,
+                             unsigned int devid_bits, unsigned int evid_bits)
+{
+    struct virt_its *its;
+    uint64_t base_attr;
+
+    its = xzalloc(struct virt_its);
+    if ( !its )
+        return -ENOMEM;
+
+    base_attr  = GIC_BASER_InnerShareable << GITS_BASER_SHAREABILITY_SHIFT;
+    base_attr |= GIC_BASER_CACHE_SameAsInner << GITS_BASER_OUTER_CACHEABILITY_SHIFT;
+    base_attr |= GIC_BASER_CACHE_RaWaWb << GITS_BASER_INNER_CACHEABILITY_SHIFT;
+
+    its->cbaser  = base_attr;
+    base_attr |= 0ULL << GITS_BASER_PAGE_SIZE_SHIFT;    /* 4K pages */
+    its->baser_dev = GITS_BASER_TYPE_DEVICE << GITS_BASER_TYPE_SHIFT;
+    its->baser_dev |= (sizeof(dev_table_entry_t) - 1) <<
+                      GITS_BASER_ENTRY_SIZE_SHIFT;
+    its->baser_dev |= base_attr;
+    its->baser_coll  = GITS_BASER_TYPE_COLLECTION << GITS_BASER_TYPE_SHIFT;
+    its->baser_coll |= (sizeof(coll_table_entry_t) - 1) <<
+                       GITS_BASER_ENTRY_SIZE_SHIFT;
+    its->baser_coll |= base_attr;
+    its->d = d;
+    its->doorbell_address = guest_addr + ITS_DOORBELL_OFFSET;
+    its->devid_bits = devid_bits;
+    its->evid_bits = evid_bits;
+    spin_lock_init(&its->vcmd_lock);
+    spin_lock_init(&its->its_lock);
+
+    register_mmio_handler(d, &vgic_its_mmio_handler, guest_addr, SZ_64K, its);
+
+    /* Register the virtual ITSes to be able to clean them up later. */
+    list_add_tail(&its->vits_list, &d->arch.vgic.vits_list);
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 41cda78..fd4b5f4 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1700,6 +1700,10 @@ static int vgic_v3_domain_init(struct domain *d)
         d->arch.vgic.intid_bits = GUEST_GICV3_GICD_INTID_BITS;
     }
 
+    /*
+     * For a hardware domain, this will iterate over the host ITSes
+     * and map one virtual ITS per host ITS at the same address.
+     */
     ret = vgic_v3_its_init_domain(d);
     if ( ret )
         return ret;
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index b2d98bb..92f4ce5 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -115,6 +115,7 @@ struct arch_domain
         spinlock_t its_devices_lock;        /* Protects the its_devices tree */
         struct radix_tree_root pend_lpi_tree; /* Stores struct pending_irq's */
         rwlock_t pend_lpi_tree_lock;        /* Protects the pend_lpi_tree */
+        struct list_head vits_list;         /* List of virtual ITSes */
         unsigned int intid_bits;
         bool rdists_enabled;                /* Is any redistributor enabled? */
         bool has_its;
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index 927568f..e41f8fd 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -158,6 +158,10 @@ int gicv3_its_setup_collection(unsigned int cpu);
 int vgic_v3_its_init_domain(struct domain *d);
 void vgic_v3_its_free_domain(struct domain *d);
 
+/* Create and register a virtual ITS at the given guest address. */
+int vgic_v3_its_init_virtual(struct domain *d, paddr_t guest_addr,
+			     unsigned int devid_bits, unsigned int evid_bits);
+
 /*
  * Map a device on the host by allocating an ITT on the host (ITS).
  * "nr_event" specifies how many events (interrupts) this device will need.
-- 
2.9.0


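[Editorial note] vgic_v3_its_init_virtual() above composes the read-only BASER register images by OR-ing shifted fields: a table type, the entry size encoded as (bytes - 1), and the shareability/cacheability attributes. A minimal sketch of that encoding follows; the shift and mask values are modeled on the GITS_BASER&lt;n&gt; layout but should be treated as illustrative here — the authoritative constants live in gic_v3_its.h and the GICv3 specification:

```c
#include <assert.h>
#include <stdint.h>

/* Field positions modeled on GITS_BASER<n> (illustrative values). */
#define BASER_TYPE_SHIFT        56
#define BASER_ENTRY_SIZE_SHIFT  48
#define BASER_ENTRY_SIZE_MASK   0x1fULL
#define BASER_TYPE_DEVICE       0x1ULL
#define BASER_TYPE_COLLECTION   0x4ULL

/* Compose a BASER image the way vgic_v3_its_init_virtual() does:
 * type | (entry size - 1) | attribute bits. */
static uint64_t make_baser(uint64_t type, uint64_t entry_size_bytes,
                           uint64_t attr)
{
    return (type << BASER_TYPE_SHIFT) |
           ((entry_size_bytes - 1) << BASER_ENTRY_SIZE_SHIFT) |
           attr;
}

static uint64_t baser_entry_size(uint64_t baser)
{
    /* The field holds (bytes - 1), so decoding adds the 1 back. */
    return ((baser >> BASER_ENTRY_SIZE_SHIFT) & BASER_ENTRY_SIZE_MASK) + 1;
}
```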

* [PATCH v9 28/28] ARM: vITS: create ITS subnodes for Dom0 DT
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (26 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 27/28] ARM: vITS: create and initialize virtual ITSes for Dom0 Andre Przywara
@ 2017-05-11 17:53 ` Andre Przywara
  2017-05-11 18:31 ` [PATCH v9 00/28] arm64: Dom0 ITS emulation Julien Grall
  28 siblings, 0 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-11 17:53 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Dom0 expects all ITSes in the system to be propagated to it so that it
can use MSIs.
Create Dom0 DT nodes for each hardware ITS, keeping the register frame
address the same, as the doorbell address that the Dom0 drivers program
into the BARs has to match the hardware.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 xen/arch/arm/gic-v3-its.c        | 73 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/gic-v3.c            |  4 ++-
 xen/include/asm-arm/gic_v3_its.h | 12 +++++++
 3 files changed, 88 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index f00597e..339ecce 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -20,6 +20,7 @@
 
 #include <xen/lib.h>
 #include <xen/delay.h>
+#include <xen/libfdt/libfdt.h>
 #include <xen/mm.h>
 #include <xen/rbtree.h>
 #include <xen/sched.h>
@@ -945,6 +946,78 @@ int gicv3_lpi_change_vcpu(struct domain *d, paddr_t vdoorbell,
     return 0;
 }
 
+/*
+ * Create the respective guest DT nodes from a list of host ITSes.
+ * This copies the reg property, so the guest sees the ITS at the same address
+ * as the host.
+ */
+int gicv3_its_make_hwdom_dt_nodes(const struct domain *d,
+                                  const struct dt_device_node *gic,
+                                  void *fdt)
+{
+    uint32_t len;
+    int res;
+    const void *prop = NULL;
+    const struct dt_device_node *its = NULL;
+    const struct host_its *its_data;
+
+    if ( list_empty(&host_its_list) )
+        return 0;
+
+    /* The sub-nodes require the ranges property */
+    prop = dt_get_property(gic, "ranges", &len);
+    if ( !prop )
+    {
+        printk(XENLOG_ERR "Can't find ranges property for the gic node\n");
+        return -FDT_ERR_XEN(ENOENT);
+    }
+
+    res = fdt_property(fdt, "ranges", prop, len);
+    if ( res )
+        return res;
+
+    list_for_each_entry(its_data, &host_its_list, entry)
+    {
+        its = its_data->dt_node;
+
+        res = fdt_begin_node(fdt, its->name);
+        if ( res )
+            return res;
+
+        res = fdt_property_string(fdt, "compatible", "arm,gic-v3-its");
+        if ( res )
+            return res;
+
+        res = fdt_property(fdt, "msi-controller", NULL, 0);
+        if ( res )
+            return res;
+
+        if ( its->phandle )
+        {
+            res = fdt_property_cell(fdt, "phandle", its->phandle);
+            if ( res )
+                return res;
+        }
+
+        /* Use the same reg regions as the ITS node in host DTB. */
+        prop = dt_get_property(its, "reg", &len);
+        if ( !prop )
+        {
+            printk(XENLOG_ERR "GICv3: Can't find ITS reg property.\n");
+            res = -FDT_ERR_XEN(ENOENT);
+            return res;
+        }
+
+        res = fdt_property(fdt, "reg", prop, len);
+        if ( res )
+            return res;
+
+        fdt_end_node(fdt);
+    }
+
+    return res;
+}
+
 /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
 void gicv3_its_dt_init(const struct dt_device_node *node)
 {
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index d539d6c..c927306 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1172,8 +1172,10 @@ static int gicv3_make_hwdom_dt_node(const struct domain *d,
 
     res = fdt_property(fdt, "reg", new_cells, len);
     xfree(new_cells);
+    if ( res )
+        return res;
 
-    return res;
+    return gicv3_its_make_hwdom_dt_nodes(d, gic, fdt);
 }
 
 static const hw_irq_controller gicv3_host_irq_type = {
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index e41f8fd..94c577a 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -162,6 +162,11 @@ void vgic_v3_its_free_domain(struct domain *d);
 int vgic_v3_its_init_virtual(struct domain *d, paddr_t guest_addr,
 			     unsigned int devid_bits, unsigned int evid_bits);
 
+/* Create the appropriate DT nodes for a hardware domain. */
+int gicv3_its_make_hwdom_dt_nodes(const struct domain *d,
+                                  const struct dt_device_node *gic,
+                                  void *fdt);
+
 /*
  * Map a device on the host by allocating an ITT on the host (ITS).
  * "nr_event" specifies how many events (interrupts) this device will need.
@@ -248,6 +253,13 @@ static inline void vgic_v3_its_free_domain(struct domain *d)
 {
 }
 
+static inline int gicv3_its_make_hwdom_dt_nodes(const struct domain *d,
+                                                const struct dt_device_node *gic,
+                                                void *fdt)
+{
+    return 0;
+}
+
 #endif /* CONFIG_HAS_ITS */
 
 #endif
-- 
2.9.0



* Re: [PATCH v9 00/28] arm64: Dom0 ITS emulation
  2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
                   ` (27 preceding siblings ...)
  2017-05-11 17:53 ` [PATCH v9 28/28] ARM: vITS: create ITS subnodes for Dom0 DT Andre Przywara
@ 2017-05-11 18:31 ` Julien Grall
  28 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-11 18:31 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 11/05/17 18:53, Andre Przywara wrote:
> The code can also be found on the its/v9 branch here:
> git://linux-arm.org/xen-ap.git
> http://www.linux-arm.org/git?p=xen-ap.git;a=shortlog;h=refs/heads/its/v9

This branch does not exist :/

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 01/28] ARM: GICv3: setup number of LPI bits for a GICv3 guest
  2017-05-11 17:53 ` [PATCH v9 01/28] ARM: GICv3: setup number of LPI bits for a GICv3 guest Andre Przywara
@ 2017-05-11 18:34   ` Julien Grall
  0 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-11 18:34 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index bd974fb..033dcee 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -400,6 +400,10 @@ typedef uint64_t xen_callback_t;
>  #define GUEST_GICV3_GICD_BASE      xen_mk_ullong(0x03001000)
>  #define GUEST_GICV3_GICD_SIZE      xen_mk_ullong(0x00010000)
>
> +/* TODO: Should this number be a tool stack decision? */
> +/* The number of interrupt ID bits a guest (not Dom0) sees. */
> +#define GUEST_GICV3_GICD_INTID_BITS     16
> +

You don't use this value in this series nor fully support DomU today. 
Please don't include unnecessary things and drop this.

>  #define GUEST_GICV3_RDIST_STRIDE   xen_mk_ullong(0x00020000)
>  #define GUEST_GICV3_RDIST_REGIONS  1

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 03/28] ARM: GIC: Add checks for NULL pointer pending_irq's
  2017-05-11 17:53 ` [PATCH v9 03/28] ARM: GIC: Add checks for NULL pointer pending_irq's Andre Przywara
@ 2017-05-12 14:19   ` Julien Grall
  2017-05-22 16:49     ` Andre Przywara
  2017-05-20  1:25   ` Stefano Stabellini
  1 sibling, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-12 14:19 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> For LPIs the struct pending_irq's are dynamically allocated and the
> pointers will be stored in a radix tree. Since an LPI can be "unmapped"
> at any time, teach the VGIC how to deal with irq_to_pending() returning
> a NULL pointer.
> We just do nothing in this case or clean up the LR if the virtual LPI
> number was still in an LR.
>
> Those are all call sites for irq_to_pending(), as per:
> "git grep irq_to_pending", and their evaluations:
> (PROTECTED means: added NULL check and bailing out)
>
>     xen/arch/arm/gic.c:
> gic_route_irq_to_guest(): only called for SPIs, added ASSERT()
> gic_remove_irq_from_guest(): only called for SPIs, added ASSERT()
> gic_remove_from_queues(): PROTECTED, called within VCPU VGIC lock
> gic_raise_inflight_irq(): PROTECTED, called under VCPU VGIC lock
> gic_raise_guest_irq(): PROTECTED, called under VCPU VGIC lock
> gic_update_one_lr(): PROTECTED, called under VCPU VGIC lock

Even though they are protected, an ASSERT would be useful.

>
>     xen/arch/arm/vgic.c:
> vgic_migrate_irq(): not called for LPIs (virtual IRQs), added ASSERT()
> arch_move_irqs(): not iterating over LPIs, added ASSERT()
> vgic_disable_irqs(): not called for LPIs, added ASSERT()
> vgic_enable_irqs(): not called for LPIs, added ASSERT()
> vgic_vcpu_inject_irq(): PROTECTED, moved under VCPU VGIC lock
>
>     xen/include/asm-arm/event.h:
> local_events_need_delivery_nomask(): only called for a PPI, added ASSERT()
>
>     xen/include/asm-arm/vgic.h:
> (prototype)
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic.c          | 34 ++++++++++++++++++++++++++++++----
>  xen/arch/arm/vgic.c         | 24 ++++++++++++++++++++++++
>  xen/include/asm-arm/event.h |  3 +++
>  3 files changed, 57 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index dcb1783..46bb306 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -148,6 +148,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
>      /* Caller has already checked that the IRQ is an SPI */
>      ASSERT(virq >= 32);
>      ASSERT(virq < vgic_num_irqs(d));
> +    ASSERT(!is_lpi(virq));
>
>      vgic_lock_rank(v_target, rank, flags);
>
> @@ -184,6 +185,7 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
>      ASSERT(spin_is_locked(&desc->lock));
>      ASSERT(test_bit(_IRQ_GUEST, &desc->status));
>      ASSERT(p->desc == desc);
> +    ASSERT(!is_lpi(virq));
>
>      vgic_lock_rank(v_target, rank, flags);
>
> @@ -408,9 +410,13 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>      spin_lock_irqsave(&v->arch.vgic.lock, flags);
>
>      p = irq_to_pending(v, virtual_irq);
> -
> -    if ( !list_empty(&p->lr_queue) )
> +    /*
> +     * If an LPI has been removed meanwhile, it has been cleaned up
> +     * already, so nothing to remove here.
> +     */
> +    if ( likely(p) && !list_empty(&p->lr_queue) )
>          list_del_init(&p->lr_queue);
> +
>      spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>  }
>
> @@ -418,6 +424,10 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
>  {
>      struct pending_irq *n = irq_to_pending(v, virtual_irq);
>
> +    /* If an LPI has been removed meanwhile, there is nothing left to raise. */
> +    if ( unlikely(!n) )
> +        return;
> +
>      ASSERT(spin_is_locked(&v->arch.vgic.lock));
>
>      if ( list_empty(&n->lr_queue) )
> @@ -437,20 +447,25 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>  {
>      int i;
>      unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
> +    struct pending_irq *p = irq_to_pending(v, virtual_irq);
>
>      ASSERT(spin_is_locked(&v->arch.vgic.lock));
>
> +    if ( unlikely(!p) )
> +        /* An unmapped LPI does not need to be raised. */
> +        return;
> +
>      if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
>      {
>          i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
>          if (i < nr_lrs) {
>              set_bit(i, &this_cpu(lr_mask));
> -            gic_set_lr(i, irq_to_pending(v, virtual_irq), GICH_LR_PENDING);
> +            gic_set_lr(i, p, GICH_LR_PENDING);
>              return;
>          }
>      }
>
> -    gic_add_to_lr_pending(v, irq_to_pending(v, virtual_irq));
> +    gic_add_to_lr_pending(v, p);
>  }
>
>  static void gic_update_one_lr(struct vcpu *v, int i)
> @@ -465,6 +480,17 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>      gic_hw_ops->read_lr(i, &lr_val);
>      irq = lr_val.virq;
>      p = irq_to_pending(v, irq);
> +    /* An LPI might have been unmapped, in which case we just clean up here. */
> +    if ( unlikely(!p) )
> +    {
> +        ASSERT(is_lpi(irq));
> +
> +        gic_hw_ops->clear_lr(i);
> +        clear_bit(i, &this_cpu(lr_mask));
> +
> +        return;
> +    }
> +
>      if ( lr_val.state & GICH_LR_ACTIVE )
>      {
>          set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index d30f324..8a5d93b 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -242,6 +242,9 @@ bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
>      unsigned long flags;
>      struct pending_irq *p = irq_to_pending(old, irq);
>
> +    /* This will never be called for an LPI, as we don't migrate them. */
> +    ASSERT(!is_lpi(irq));
> +
>      /* nothing to do for virtual interrupts */
>      if ( p->desc == NULL )
>          return true;
> @@ -291,6 +294,9 @@ void arch_move_irqs(struct vcpu *v)
>      struct vcpu *v_target;
>      int i;
>
> +    /* We don't migrate LPIs at the moment. */
> +    ASSERT(!is_lpi(vgic_num_irqs(d) - 1));
> +
>      for ( i = 32; i < vgic_num_irqs(d); i++ )
>      {
>          v_target = vgic_get_target_vcpu(v, i);
> @@ -310,6 +316,9 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>      int i = 0;
>      struct vcpu *v_target;
>
> +    /* LPIs will never be disabled via this function. */
> +    ASSERT(!is_lpi(32 * n + 31));
> +
>      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          v_target = vgic_get_target_vcpu(v, irq);
> @@ -352,6 +361,9 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>      struct vcpu *v_target;
>      struct domain *d = v->domain;
>
> +    /* LPIs will never be enabled via this function. */
> +    ASSERT(!is_lpi(32 * n + 31));
> +
>      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          v_target = vgic_get_target_vcpu(v, irq);
> @@ -432,6 +444,12 @@ bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
>      return true;
>  }
>
> +/*
> + * Returns the pointer to the struct pending_irq belonging to the given
> + * interrupt.
> + * This can return NULL if called for an LPI which has been unmapped
> + * meanwhile.
> + */
>  struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
>  {
>      struct pending_irq *n;
> @@ -475,6 +493,12 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
>      spin_lock_irqsave(&v->arch.vgic.lock, flags);
>
>      n = irq_to_pending(v, virq);
> +    /* If an LPI has been removed, there is nothing to inject here. */
> +    if ( unlikely(!n) )
> +    {
> +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +        return;
> +    }
>
>      /* vcpu offline */
>      if ( test_bit(_VPF_down, &v->pause_flags) )
> diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
> index 5330dfe..caefa50 100644
> --- a/xen/include/asm-arm/event.h
> +++ b/xen/include/asm-arm/event.h
> @@ -19,6 +19,9 @@ static inline int local_events_need_delivery_nomask(void)
>      struct pending_irq *p = irq_to_pending(current,
>                                             current->domain->arch.evtchn_irq);
>
> +    /* Does not work for LPIs. */
> +    ASSERT(!is_lpi(current->domain->arch.evtchn_irq));
> +
>      /* XXX: if the first interrupt has already been delivered, we should
>       * check whether any other interrupts with priority higher than the
>       * one in GICV_IAR are in the lr_pending queue or in the LR
>

Cheers,

-- 
Julien Grall

* Re: [PATCH v9 04/28] ARM: GICv3: introduce separate pending_irq structs for LPIs
  2017-05-11 17:53 ` [PATCH v9 04/28] ARM: GICv3: introduce separate pending_irq structs for LPIs Andre Przywara
@ 2017-05-12 14:22   ` Julien Grall
  2017-05-22 21:52   ` Stefano Stabellini
  1 sibling, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-12 14:22 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> For the same reason that allocating a struct irq_desc for each
> possible LPI is not an option, having a struct pending_irq for each LPI
> is also not feasible. We only care about mapped LPIs, so we can get away
> with having struct pending_irq's only for them.
> Maintain a radix tree per domain where we drop the pointer to the
> respective pending_irq. The index used is the virtual LPI number.
> The memory for the actual structures has been allocated already per
> device at device mapping time.
> Teach the existing VGIC functions to find the right pointer when being
> given a virtual LPI number.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>

Acked-by: Julien Grall <julien.grall@arm.com>

> ---
>  xen/arch/arm/vgic-v2.c       |  8 ++++++++
>  xen/arch/arm/vgic-v3.c       | 30 ++++++++++++++++++++++++++++++
>  xen/arch/arm/vgic.c          |  2 ++
>  xen/include/asm-arm/domain.h |  2 ++
>  xen/include/asm-arm/vgic.h   |  2 ++
>  5 files changed, 44 insertions(+)
>
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index dc9f95b..0587569 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -702,10 +702,18 @@ static void vgic_v2_domain_free(struct domain *d)
>      /* Nothing to be cleanup for this driver */
>  }
>
> +static struct pending_irq *vgic_v2_lpi_to_pending(struct domain *d,
> +                                                  unsigned int vlpi)
> +{
> +    /* Dummy function, no LPIs on a VGICv2. */
> +    BUG();
> +}
> +
>  static const struct vgic_ops vgic_v2_ops = {
>      .vcpu_init   = vgic_v2_vcpu_init,
>      .domain_init = vgic_v2_domain_init,
>      .domain_free = vgic_v2_domain_free,
> +    .lpi_to_pending = vgic_v2_lpi_to_pending,
>      .max_vcpus = 8,
>  };
>
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index 25e16dc..44d2b50 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -1454,6 +1454,9 @@ static int vgic_v3_domain_init(struct domain *d)
>      d->arch.vgic.nr_regions = rdist_count;
>      d->arch.vgic.rdist_regions = rdist_regions;
>
> +    rwlock_init(&d->arch.vgic.pend_lpi_tree_lock);
> +    radix_tree_init(&d->arch.vgic.pend_lpi_tree);
> +
>      /*
>       * Domain 0 gets the hardware address.
>       * Guests get the virtual platform layout.
> @@ -1535,14 +1538,41 @@ static int vgic_v3_domain_init(struct domain *d)
>  static void vgic_v3_domain_free(struct domain *d)
>  {
>      vgic_v3_its_free_domain(d);
> +    /*
> +     * It is expected that at this point all actual ITS devices have been
> +     * cleaned up already. The struct pending_irq's, for which the pointers
> +     * have been stored in the radix tree, are allocated and freed by device.
> +     * On device unmapping all the entries are removed from the tree and
> +     * the backing memory is freed.
> +     */
> +    radix_tree_destroy(&d->arch.vgic.pend_lpi_tree, NULL);
>      xfree(d->arch.vgic.rdist_regions);
>  }
>
> +/*
> + * Looks up a virtual LPI number in our tree of mapped LPIs. This will return
> + * the corresponding struct pending_irq, which we also use to store the
> + * enabled and pending bit plus the priority.
> + * Returns NULL if an LPI cannot be found (or no LPIs are supported).
> + */
> +static struct pending_irq *vgic_v3_lpi_to_pending(struct domain *d,
> +                                                  unsigned int lpi)
> +{
> +    struct pending_irq *pirq;
> +
> +    read_lock(&d->arch.vgic.pend_lpi_tree_lock);
> +    pirq = radix_tree_lookup(&d->arch.vgic.pend_lpi_tree, lpi);
> +    read_unlock(&d->arch.vgic.pend_lpi_tree_lock);
> +
> +    return pirq;
> +}
> +
>  static const struct vgic_ops v3_ops = {
>      .vcpu_init   = vgic_v3_vcpu_init,
>      .domain_init = vgic_v3_domain_init,
>      .domain_free = vgic_v3_domain_free,
>      .emulate_reg  = vgic_v3_emulate_reg,
> +    .lpi_to_pending = vgic_v3_lpi_to_pending,
>      /*
>       * We use both AFF1 and AFF0 in (v)MPIDR. Thus, the max number of CPU
>       * that can be supported is up to 4096(==256*16) in theory.
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 8a5d93b..bf6fb60 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -457,6 +457,8 @@ struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
>       * are used for SPIs; the rests are used for per cpu irqs */
>      if ( irq < 32 )
>          n = &v->arch.vgic.pending_irqs[irq];
> +    else if ( is_lpi(irq) )
> +        n = v->domain->arch.vgic.handler->lpi_to_pending(v->domain, irq);
>      else
>          n = &v->domain->arch.vgic.pending_irqs[irq - 32];
>      return n;
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 7c3829d..3d8e84c 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -111,6 +111,8 @@ struct arch_domain
>          uint32_t rdist_stride;              /* Re-Distributor stride */
>          struct rb_root its_devices;         /* Devices mapped to an ITS */
>          spinlock_t its_devices_lock;        /* Protects the its_devices tree */
> +        struct radix_tree_root pend_lpi_tree; /* Stores struct pending_irq's */
> +        rwlock_t pend_lpi_tree_lock;        /* Protects the pend_lpi_tree */
>          unsigned int intid_bits;
>  #endif
>      } vgic;
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index df75064..c9075a9 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -134,6 +134,8 @@ struct vgic_ops {
>      void (*domain_free)(struct domain *d);
>      /* vGIC sysreg/cpregs emulate */
>      bool (*emulate_reg)(struct cpu_user_regs *regs, union hsr hsr);
> +    /* lookup the struct pending_irq for a given LPI interrupt */
> +    struct pending_irq *(*lpi_to_pending)(struct domain *d, unsigned int vlpi);
>      /* Maximum number of vCPU supported */
>      const unsigned int max_vcpus;
>  };
>

Cheers,

-- 
Julien Grall

* Re: [PATCH v9 05/28] ARM: GICv3: forward pending LPIs to guests
  2017-05-11 17:53 ` [PATCH v9 05/28] ARM: GICv3: forward pending LPIs to guests Andre Przywara
@ 2017-05-12 14:55   ` Julien Grall
  2017-05-22 22:03   ` Stefano Stabellini
  1 sibling, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-12 14:55 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index 44d2b50..87f58f6 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -1567,12 +1567,30 @@ static struct pending_irq *vgic_v3_lpi_to_pending(struct domain *d,
>      return pirq;
>  }
>
> +/* Retrieve the priority of an LPI from its struct pending_irq. */
> +static int vgic_v3_lpi_get_priority(struct domain *d, uint32_t vlpi)
> +{
> +    struct pending_irq *p = vgic_v3_lpi_to_pending(d, vlpi);
> +
> +    /*
> +     * Cope with the case where this function is called with an invalid LPI.
> +     * It is expected that a caller will bail out handling this LPI at a
> +     * later point in time, but for the sake of this function let us return
> +     * some value here and avoid a NULL pointer dereference.
> +     */
> +    if ( !p )
> +        return 0xff;

I am sorry but I am still against such a change as it is. This is only a 
way to work around a broken design for LPIs.

Looking at the code, I think we don't need to take the rank lock for 
reading the priority as this can be done atomically. In this case, the 
call to get_priority could be moved after the check of p in the caller.
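
The suggested rework could look roughly like the sketch below. This is a hypothetical illustration with mock types and a table-based lookup standing in for the radix tree; the function names and signatures here are not Xen's actual code. The point is the shape of the caller: bail out on a NULL pending_irq first, and only then read the priority, instead of having the helper invent an 0xff priority:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for Xen's struct pending_irq (illustrative subset). */
struct pending_irq {
    unsigned int irq;
    uint8_t lpi_priority;
};

/* Hypothetical lookup: returns NULL for an unmapped LPI. */
static struct pending_irq *lpi_to_pending(struct pending_irq *table,
                                          size_t n, unsigned int vlpi)
{
    for (size_t i = 0; i < n; i++)
        if (table[i].irq == vlpi)
            return &table[i];
    return NULL;
}

/*
 * Suggested caller shape: check the pointer first and bail out, then
 * read the priority only from a known-valid entry, instead of having
 * the helper return a fake 0xff priority for unmapped LPIs.
 */
static int get_lpi_priority(struct pending_irq *table, size_t n,
                            unsigned int vlpi, uint8_t *prio)
{
    struct pending_irq *p = lpi_to_pending(table, n, vlpi);

    if (p == NULL)
        return -1;               /* unmapped LPI: caller bails out here */

    *prio = p->lpi_priority;     /* single-byte read, atomic in practice */
    return 0;
}
```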

> +
> +    return p->lpi_priority;
> +}
> +

Cheers,

-- 
Julien Grall

* Re: [PATCH v9 07/28] ARM: vGICv3: handle virtual LPI pending and property tables
  2017-05-11 17:53 ` [PATCH v9 07/28] ARM: vGICv3: handle virtual LPI pending and property tables Andre Przywara
@ 2017-05-12 15:23   ` Julien Grall
  0 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-12 15:23 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> Allow a guest to provide the address and size for the memory regions
> it has reserved for the GICv3 pending and property tables.
> We sanitise the various fields of the respective redistributor
> registers.
> The MMIO read and write accesses are protected by locks, to avoid any
> changing of the property or pending table address while a redistributor
> is live and also to protect the non-atomic vgic_reg64_extract() function
> on the MMIO read side.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> Reviewed-by: Julien Grall <julien.grall@arm.com>

Given that you kept my reviewed-by...

[...]

> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index ebaea35..b2d98bb 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -109,11 +109,15 @@ struct arch_domain
>          } *rdist_regions;
>          int nr_regions;                     /* Number of rdist regions */
>          uint32_t rdist_stride;              /* Re-Distributor stride */
> +        unsigned long int nr_lpis;
> +        uint64_t rdist_propbase;
>          struct rb_root its_devices;         /* Devices mapped to an ITS */
>          spinlock_t its_devices_lock;        /* Protects the its_devices tree */
>          struct radix_tree_root pend_lpi_tree; /* Stores struct pending_irq's */
>          rwlock_t pend_lpi_tree_lock;        /* Protects the pend_lpi_tree */
>          unsigned int intid_bits;
> +        bool rdists_enabled;                /* Is any redistributor enabled? */
> +        bool has_its;

... I would have appreciated it if you had addressed my request, i.e. 
adding a comment to say this could be turned into flags if we need to 
save space later on.

Cheers,

-- 
Julien Grall

* Re: [PATCH v9 08/28] ARM: introduce vgic_access_guest_memory()
  2017-05-11 17:53 ` [PATCH v9 08/28] ARM: introduce vgic_access_guest_memory() Andre Przywara
@ 2017-05-12 15:30   ` Julien Grall
  0 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-12 15:30 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> From: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
>
> This function allows copying a chunk of data from and to guest physical
> memory. It looks up the associated page from the guest's p2m tree
> and maps this page temporarily for the time of the access.
> This function was originally written by Vijaya as part of an earlier series:
> https://patchwork.kernel.org/patch/8177251
>
> Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic.c        | 50 ++++++++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/vgic.h |  3 +++
>  2 files changed, 53 insertions(+)
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index c29ad5e..66adeb4 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -20,6 +20,7 @@
>  #include <xen/bitops.h>
>  #include <xen/lib.h>
>  #include <xen/init.h>
> +#include <xen/domain_page.h>
>  #include <xen/softirq.h>
>  #include <xen/irq.h>
>  #include <xen/sched.h>
> @@ -620,6 +621,55 @@ void vgic_free_virq(struct domain *d, unsigned int virq)
>  }
>
>  /*
> + * Temporarily map one physical guest page and copy data to or from it.
> + * The data to be copied cannot cross a page boundary.
> + */
> +int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
> +                             uint32_t size, bool_t is_write)

s/bool_t/bool/

With that:

Reviewed-by: Julien Grall <julien.grall@arm.com>
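
As a side note, the "cannot cross a page boundary" precondition quoted in the commit message amounts to a simple arithmetic check. The sketch below is only an illustration of that constraint, assuming 4K pages as on Xen/arm; it is not the function's actual validation code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)   /* assuming 4K pages */

/*
 * The copy must fit between gpa and the end of the page containing
 * gpa, i.e. page_offset(gpa) + size must not exceed PAGE_SIZE.
 */
static bool access_fits_in_page(uint64_t gpa, uint32_t size)
{
    uint64_t offset = gpa & (PAGE_SIZE - 1);

    return offset + size <= PAGE_SIZE;
}
```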

Cheers,

-- 
Julien Grall

* Re: [PATCH v9 10/28] ARM: GIC: export and extend vgic_init_pending_irq()
  2017-05-11 17:53 ` [PATCH v9 10/28] ARM: GIC: export and extend vgic_init_pending_irq() Andre Przywara
@ 2017-05-16 12:26   ` Julien Grall
  0 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-16 12:26 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> For LPIs we later want to dynamically allocate struct pending_irqs.
> So beside needing to initialize the struct from there we also need
> to clean it up and re-initialize it later on.
> Export vgic_init_pending_irq() and extend it to be reusable.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic.c        | 4 +++-
>  xen/include/asm-arm/vgic.h | 1 +
>  2 files changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 66adeb4..27d6b51 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -61,11 +61,13 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq)
>      return vgic_get_rank(v, rank);
>  }
>
> -static void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
> +void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
>  {
>      INIT_LIST_HEAD(&p->inflight);
>      INIT_LIST_HEAD(&p->lr_queue);
>      p->irq = virq;
> +    p->status = 0;
> +    p->lr = GIC_INVALID_LR;

Why have the fields desc and priority not been initialized? It sounds to 
me like we want to memset pending_irq to 0, to avoid forgetting to reset 
some fields.
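
The memset-based approach being suggested could look like the sketch below. This uses a mock subset of the struct purely for illustration; Xen's real pending_irq also carries two list heads that must be re-initialised with INIT_LIST_HEAD after the memset, which is omitted here:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define GIC_INVALID_LR 0xff

/* Mock subset of Xen's struct pending_irq, just for illustration. */
struct pending_irq {
    unsigned int irq;
    uint8_t status;
    uint8_t lr;
    uint8_t priority;
    uint8_t lpi_priority;
    void *desc;
};

/*
 * memset-based re-initialisation: every field is zeroed first, so a
 * newly added field can never be forgotten; only non-zero defaults
 * need explicit assignments afterwards.
 */
static void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
{
    memset(p, 0, sizeof(*p));
    p->irq = virq;
    p->lr = GIC_INVALID_LR;
}
```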

Cheers,

-- 
Julien Grall

* Re: [PATCH v9 11/28] ARM: VGIC: add vcpu_id to struct pending_irq
  2017-05-11 17:53 ` [PATCH v9 11/28] ARM: VGIC: add vcpu_id to struct pending_irq Andre Przywara
@ 2017-05-16 12:31   ` Julien Grall
  2017-05-22 22:15     ` Stefano Stabellini
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-16 12:31 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> The target CPU for an LPI is encoded in the interrupt translation table
> entry, so can't be easily derived from just an LPI number (short of
> walking *all* tables and find the matching LPI).
> To avoid this in case we need to know the VCPU (for the INVALL command,
> for instance), put the VCPU ID in the struct pending_irq, so that it is
> easily accessible.
> We use the remaining 8 bits of padding space for that to avoid enlarging
> the size of struct pending_irq. The number of VCPUs is limited to 127
> at the moment anyway, which we also confirm with a BUILD_BUG_ON.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic.c        | 3 +++
>  xen/include/asm-arm/vgic.h | 1 +
>  2 files changed, 4 insertions(+)
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 27d6b51..97a2cf2 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -63,6 +63,9 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq)
>
>  void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
>  {
> +    /* The lpi_vcpu_id field must be big enough to hold a VCPU ID. */
> +    BUILD_BUG_ON(BIT(sizeof(p->lpi_vcpu_id) * 8) < MAX_VIRT_CPUS);
> +
>      INIT_LIST_HEAD(&p->inflight);
>      INIT_LIST_HEAD(&p->lr_queue);
>      p->irq = virq;
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index e2111a5..02732db 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -73,6 +73,7 @@ struct pending_irq
>      uint8_t lr;
>      uint8_t priority;
>      uint8_t lpi_priority;       /* Caches the priority if this is an LPI. */
> +    uint8_t lpi_vcpu_id;        /* The VCPU for an LPI. */

Based on the previous patch (#10), I was expecting to see this new field 
initialized in vgic_init_pending_irq.

>      /* inflight is used to append instances of pending_irq to
>       * vgic.inflight_irqs */
>      struct list_head inflight;
>

Cheers,

-- 
Julien Grall

* Re: [PATCH v9 12/28] ARM: vGIC: advertise LPI support
  2017-05-11 17:53 ` [PATCH v9 12/28] ARM: vGIC: advertise LPI support Andre Przywara
@ 2017-05-16 13:03   ` Julien Grall
  2017-05-22 22:19     ` Stefano Stabellini
  2017-05-23 17:23     ` Andre Przywara
  0 siblings, 2 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-16 13:03 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> To let a guest know about the availability of virtual LPIs, set the
> respective bits in the virtual GIC registers and let a guest control
> the LPI enable bit.
> Only report the LPI capability if the host has initialized at least
> one ITS.
> This removes a "TBD" comment, as we now populate the processor number
> in the GICR_TYPE register.

s/GICR_TYPE/GICR_TYPER/

Also, I think it would be worth explaining that you populate 
GICR_TYPER.Processor_Number because the ITS will use it later on.

> Advertise 24 bits worth of LPIs to the guest.

Again, this is not valid anymore. You said on the previous version that 
you would drop it, so why has it not been done?

>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v3.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 65 insertions(+), 5 deletions(-)
>
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index 38c123c..6dbdb2e 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -170,8 +170,19 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
>      switch ( gicr_reg )
>      {
>      case VREG32(GICR_CTLR):
> -        /* We have not implemented LPI's, read zero */
> -        goto read_as_zero_32;
> +    {
> +        unsigned long flags;
> +
> +        if ( !v->domain->arch.vgic.has_its )
> +            goto read_as_zero_32;
> +        if ( dabt.size != DABT_WORD ) goto bad_width;
> +
> +        spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +        *r = vgic_reg32_extract(!!(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED),
> +                                info);
> +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +        return 1;
> +    }
>
>      case VREG32(GICR_IIDR):
>          if ( dabt.size != DABT_WORD ) goto bad_width;
> @@ -183,16 +194,20 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
>          uint64_t typer, aff;
>
>          if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
> -        /* TBD: Update processor id in [23:8] when ITS support is added */
>          aff = (MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 3) << 56 |
>                 MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 2) << 48 |
>                 MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 1) << 40 |
>                 MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 0) << 32);
>          typer = aff;
> +        /* We use the VCPU ID as the redistributor ID in bits[23:8] */
> +        typer |= (v->vcpu_id & 0xffff) << 8;

Why the mask here? This sounds like a bug to me if vcpu_id does not fit, 
and the mask would make it worse.

But this is already addressed by max_vcpus in the vgic_ops. So please 
drop the pointless mask.
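
For reference, packing and recovering the Processor_Number field (bits [23:8] of GICR_TYPER, as the patch's comment states) can be sketched as below. This is only an illustration of the bit layout — the helper names are made up, and the unmasked shift is valid precisely because max_vcpus in the vgic_ops already bounds vcpu_id:

```c
#include <assert.h>
#include <stdint.h>

/* GICR_TYPER.Processor_Number occupies bits [23:8]. */
#define GICR_TYPER_PROC_NUM_SHIFT 8
#define GICR_TYPER_PROC_NUM_MASK  (0xffffULL << GICR_TYPER_PROC_NUM_SHIFT)

/* No mask on vcpu_id: max_vcpus already guarantees it fits. */
static uint64_t typer_set_proc_num(uint64_t typer, unsigned int vcpu_id)
{
    return typer | ((uint64_t)vcpu_id << GICR_TYPER_PROC_NUM_SHIFT);
}

static unsigned int typer_get_proc_num(uint64_t typer)
{
    return (typer & GICR_TYPER_PROC_NUM_MASK) >> GICR_TYPER_PROC_NUM_SHIFT;
}
```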

Lastly, I would have expected to try to address my remark everywhere 
regarding hardcoding offset. In this case,

>
>          if ( v->arch.vgic.flags & VGIC_V3_RDIST_LAST )
>              typer |= GICR_TYPER_LAST;
>
> +        if ( v->domain->arch.vgic.has_its )
> +            typer |= GICR_TYPER_PLPIS;
> +
>          *r = vgic_reg64_extract(typer, info);
>
>          return 1;
> @@ -426,6 +441,28 @@ static uint64_t sanitize_pendbaser(uint64_t reg)
>      return reg;
>  }
>
> +static void vgic_vcpu_enable_lpis(struct vcpu *v)
> +{
> +    uint64_t reg = v->domain->arch.vgic.rdist_propbase;
> +    unsigned int nr_lpis = BIT((reg & 0x1f) + 1);
> +
> +    /* rdists_enabled is protected by the domain lock. */
> +    ASSERT(spin_is_locked(&v->domain->arch.vgic.lock));
> +
> +    if ( nr_lpis < LPI_OFFSET )
> +        nr_lpis = 0;
> +    else
> +        nr_lpis -= LPI_OFFSET;
> +
> +    if ( !v->domain->arch.vgic.rdists_enabled )
> +    {
> +        v->domain->arch.vgic.nr_lpis = nr_lpis;
> +        v->domain->arch.vgic.rdists_enabled = true;
> +    }
> +
> +    v->arch.vgic.flags |= VGIC_V3_LPIS_ENABLED;
> +}
> +
>  static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
>                                            uint32_t gicr_reg,
>                                            register_t r)
> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
>      switch ( gicr_reg )
>      {
>      case VREG32(GICR_CTLR):
> -        /* LPI's not implemented */
> -        goto write_ignore_32;
> +    {
> +        unsigned long flags;
> +
> +        if ( !v->domain->arch.vgic.has_its )
> +            goto write_ignore_32;
> +        if ( dabt.size != DABT_WORD ) goto bad_width;
> +
> +        vgic_lock(v);                   /* protects rdists_enabled */

Getting back to the locking: I don't see any place where we take the 
domain VGIC lock before the vCPU VGIC lock. So this raises the question 
of why this ordering is used, rather than moving this lock into 
vgic_vcpu_enable_lpis.

At least this requires documentation in the code and an explanation in 
the commit message.
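
The alternative being suggested — taking the domain lock inside the enable helper so no caller-side ordering convention is needed — could be sketched as below. This is a mock with plain integers standing in for spinlocks and a cut-down domain struct, not Xen code; the nr_lpis arithmetic mirrors the quoted vgic_vcpu_enable_lpis:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define LPI_OFFSET 8192
#define BIT(n)     (1UL << (n))

/* Mock domain state -- stand-ins for Xen's arch_domain vgic fields. */
struct mock_domain {
    int vgic_lock;               /* mock of the domain VGIC spinlock */
    uint64_t rdist_propbase;
    unsigned long nr_lpis;
    bool rdists_enabled;
};

static void vgic_lock(struct mock_domain *d)   { assert(!d->vgic_lock); d->vgic_lock = 1; }
static void vgic_unlock(struct mock_domain *d) { assert(d->vgic_lock);  d->vgic_lock = 0; }

/*
 * Variant that takes the domain VGIC lock itself, so callers need not
 * know about any domain-before-vCPU lock ordering convention.
 */
static void vgic_enable_lpis(struct mock_domain *d)
{
    uint64_t reg = d->rdist_propbase;
    unsigned long nr_lpis = BIT((reg & 0x1f) + 1);

    nr_lpis = (nr_lpis < LPI_OFFSET) ? 0 : nr_lpis - LPI_OFFSET;

    vgic_lock(d);                /* protects rdists_enabled and nr_lpis */
    if ( !d->rdists_enabled )
    {
        d->nr_lpis = nr_lpis;
        d->rdists_enabled = true;
    }
    vgic_unlock(d);
}
```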

> +        spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +
> +        /* LPIs can only be enabled once, but never disabled again. */
> +        if ( (r & GICR_CTLR_ENABLE_LPIS) &&
> +             !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
> +            vgic_vcpu_enable_lpis(v);
> +
> +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +        vgic_unlock(v);
> +
> +        return 1;
> +    }
>
>      case VREG32(GICR_IIDR):
>          /* RO */
> @@ -1058,6 +1113,11 @@ static int vgic_v3_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
>          typer = ((ncpus - 1) << GICD_TYPE_CPUS_SHIFT |
>                   DIV_ROUND_UP(v->domain->arch.vgic.nr_spis, 32));
>
> +        if ( v->domain->arch.vgic.has_its )
> +        {
> +            typer |= GICD_TYPE_LPIS;
> +            irq_bits = v->domain->arch.vgic.intid_bits;
> +        }

As I said on the previous version, I would have expected the field 
intid_bits to be used even when the ITS is not enabled.

The current code makes it very difficult to understand the purpose of 
intid_bits, and it is not obvious that it is only used when the ITS is 
enabled.

intid_bits should be initialized correctly in vgic_v3_domain_init and 
used directly here.

>          typer |= (irq_bits - 1) << GICD_TYPE_ID_BITS_SHIFT;
>
>          *r = vgic_reg32_extract(typer, info);
>

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 13/28] ARM: vITS: add command handling stub and MMIO emulation
  2017-05-11 17:53 ` [PATCH v9 13/28] ARM: vITS: add command handling stub and MMIO emulation Andre Przywara
@ 2017-05-16 15:24   ` Julien Grall
  2017-05-17 16:16   ` Julien Grall
  2017-05-22 22:32   ` Stefano Stabellini
  2 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-16 15:24 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> Emulate the memory mapped ITS registers and provide a stub to introduce
> the ITS command handling framework (but without actually emulating any
> commands at this time).
> This fixes a misnomer in our virtual ITS structure, where the spec is
> confusingly using ID_bits in GITS_TYPER to denote the number of event IDs
> (in contrast to GICD_TYPER, where it means number of LPIs).
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v3-its.c       | 526 ++++++++++++++++++++++++++++++++++++++-
>  xen/include/asm-arm/gic_v3_its.h |   3 +
>  2 files changed, 528 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 065ffe2..e3bd1f6 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -19,6 +19,16 @@
>   * along with this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>
> +/*
> + * Locking order:
> + *
> + * its->vcmd_lock                        (protects the command queue)
> + *     its->its_lock                     (protects the translation tables)
> + *         d->its_devices_lock           (protects the device RB tree)
> + *             v->vgic.lock              (protects the struct pending_irq)
> + *                 d->pend_lpi_tree_lock (protects the radix tree)
> + */
> +
>  #include <xen/bitops.h>
>  #include <xen/config.h>
>  #include <xen/domain_page.h>
> @@ -43,7 +53,7 @@
>  struct virt_its {
>      struct domain *d;
>      unsigned int devid_bits;
> -    unsigned int intid_bits;
> +    unsigned int evid_bits;
>      spinlock_t vcmd_lock;       /* Protects the virtual command buffer, which */
>      uint64_t cwriter;           /* consists of CWRITER and CREADR and those   */
>      uint64_t creadr;            /* shadow variables cwriter and creadr. */
> @@ -53,6 +63,7 @@ struct virt_its {
>      uint64_t baser_dev, baser_coll;     /* BASER0 and BASER1 for the guest */
>      unsigned int max_collections;
>      unsigned int max_devices;
> +    /* changing "enabled" requires to hold *both* the vcmd_lock and its_lock */
>      bool enabled;
>  };
>
> @@ -67,6 +78,12 @@ struct vits_itte
>      uint16_t pad;
>  };
>
> +typedef uint16_t coll_table_entry_t;

Please explain the encoding of coll_table_entry_t.

> +typedef uint64_t dev_table_entry_t;

It would be better to keep this typedef together with the DEV_TABLE_* 
macros you define in patch #14, so the layout of dev_table_entry_t can 
be understood.

> +
> +#define GITS_BASER_RO_MASK       (GITS_BASER_TYPE_MASK | \
> +                                  (31UL << GITS_BASER_ENTRY_SIZE_SHIFT))

The mask used for the Entry_Size field looks wrong to me. Shouldn't it 
be 0x1f?

> +
>  int vgic_v3_its_init_domain(struct domain *d)
>  {
>      spin_lock_init(&d->arch.vgic.its_devices_lock);
> @@ -80,6 +97,513 @@ void vgic_v3_its_free_domain(struct domain *d)
>      ASSERT(RB_EMPTY_ROOT(&d->arch.vgic.its_devices));
>  }
>
> +/**************************************
> + * Functions that handle ITS commands *
> + **************************************/
> +
> +static uint64_t its_cmd_mask_field(uint64_t *its_cmd, unsigned int word,
> +                                   unsigned int shift, unsigned int size)
> +{
> +    return (le64_to_cpu(its_cmd[word]) >> shift) & (BIT(size) - 1);

NIT: none of the code is big-endian ready, so the le64_to_cpu is not 
necessary.

> +}
> +
> +#define its_cmd_get_command(cmd)        its_cmd_mask_field(cmd, 0,  0,  8)
> +#define its_cmd_get_deviceid(cmd)       its_cmd_mask_field(cmd, 0, 32, 32)
> +#define its_cmd_get_size(cmd)           its_cmd_mask_field(cmd, 1,  0,  5)
> +#define its_cmd_get_id(cmd)             its_cmd_mask_field(cmd, 1,  0, 32)
> +#define its_cmd_get_physical_id(cmd)    its_cmd_mask_field(cmd, 1, 32, 32)
> +#define its_cmd_get_collection(cmd)     its_cmd_mask_field(cmd, 2,  0, 16)
> +#define its_cmd_get_target_addr(cmd)    its_cmd_mask_field(cmd, 2, 16, 32)
> +#define its_cmd_get_validbit(cmd)       its_cmd_mask_field(cmd, 2, 63,  1)
> +#define its_cmd_get_ittaddr(cmd)        (its_cmd_mask_field(cmd, 2, 8, 44) << 8)
> +
> +#define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
> +#define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
> +
> +/*
> + * Must be called with the vcmd_lock held.
> + * TODO: Investigate whether we can be smarter here and don't need to hold
> + * the lock all of the time.
> + */
> +static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
> +{
> +    paddr_t addr = its->cbaser & GENMASK(51, 12);
> +    uint64_t command[4];
> +
> +    ASSERT(spin_is_locked(&its->vcmd_lock));
> +
> +    if ( its->cwriter >= ITS_CMD_BUFFER_SIZE(its->cbaser) )
> +        return -1;
> +
> +    while ( its->creadr != its->cwriter )
> +    {
> +        int ret;
> +
> +        ret = vgic_access_guest_memory(d, addr + its->creadr,
> +                                       command, sizeof(command), false);
> +        if ( ret )
> +            return ret;
> +
> +        switch ( its_cmd_get_command(command) )
> +        {
> +        case GITS_CMD_SYNC:
> +            /* We handle ITS commands synchronously, so we ignore SYNC. */
> +            break;
> +        default:
> +            gdprintk(XENLOG_WARNING, "vGITS: unhandled ITS command %lu\n",
> +                     its_cmd_get_command(command));
> +            break;
> +        }
> +
> +        write_u64_atomic(&its->creadr, (its->creadr + ITS_CMD_SIZE) %
> +                         ITS_CMD_BUFFER_SIZE(its->cbaser));
> +
> +        if ( ret )
> +            gdprintk(XENLOG_WARNING,
> +                     "vGITS: ITS command error %d while handling command %lu\n",
> +                     ret, its_cmd_get_command(command));
> +    }
> +
> +    return 0;
> +}
> +
> +/*****************************
> + * ITS registers read access *
> + *****************************/
> +
> +/* Identifying as an ARM IP, using "X" as the product ID. */
> +#define GITS_IIDR_VALUE                 0x5800034c
> +
> +static int vgic_v3_its_mmio_read(struct vcpu *v, mmio_info_t *info,
> +                                 register_t *r, void *priv)
> +{
> +    struct virt_its *its = priv;
> +    uint64_t reg;
> +
> +    switch ( info->gpa & 0xffff )
> +    {
> +    case VREG32(GITS_CTLR):
> +    {
> +        /*
> +         * We try to avoid waiting for the command queue lock and report
> +         * non-quiescent if that lock is already taken.
> +         */
> +        bool have_cmd_lock;
> +
> +        if ( info->dabt.size != DABT_WORD ) goto bad_width;
> +
> +        have_cmd_lock = spin_trylock(&its->vcmd_lock);
> +        spin_lock(&its->its_lock);

I think we could simplify the locking here a bit more by reading 
its->enabled atomically. That would drop the need for 
spin_lock(&its->its_lock). Although I would be happy with a follow-up 
patch.

> +        if ( its->enabled )
> +            reg = GITS_CTLR_ENABLE;
> +        else
> +            reg = 0;

NIT: this could be simplified to:

reg = (its->enabled) ? GITS_CTLR_ENABLE : 0;

> +
> +        if ( have_cmd_lock && its->cwriter == its->creadr )
> +            reg |= GITS_CTLR_QUIESCENT;
> +
> +        spin_unlock(&its->its_lock);
> +        if ( have_cmd_lock )
> +            spin_unlock(&its->vcmd_lock);
> +
> +        *r = vgic_reg32_extract(reg, info);
> +        break;
> +    }

Please add a newline here to lighten the code.

> +    case VREG32(GITS_IIDR):
> +        if ( info->dabt.size != DABT_WORD ) goto bad_width;
> +        *r = vgic_reg32_extract(GITS_IIDR_VALUE, info);
> +        break;

Ditto. Also before every "case" in this patch.

> +    case VREG64(GITS_TYPER):
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +
> +        reg = GITS_TYPER_PHYSICAL;
> +        reg |= (sizeof(struct vits_itte) - 1) << GITS_TYPER_ITT_SIZE_SHIFT;
> +        reg |= (its->evid_bits - 1) << GITS_TYPER_IDBITS_SHIFT;
> +        reg |= (its->devid_bits - 1) << GITS_TYPER_DEVIDS_SHIFT;
> +        *r = vgic_reg64_extract(reg, info);
> +        break;
> +    case VRANGE32(0x0018, 0x001C):
> +        goto read_reserved;
> +    case VRANGE32(0x0020, 0x003C):
> +        goto read_impl_defined;
> +    case VRANGE32(0x0040, 0x007C):
> +        goto read_reserved;
> +    case VREG64(GITS_CBASER):
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +        spin_lock(&its->its_lock);
> +        *r = vgic_reg64_extract(its->cbaser, info);
> +        spin_unlock(&its->its_lock);
> +        break;
> +    case VREG64(GITS_CWRITER):
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +
> +        reg = its->cwriter;
> +        *r = vgic_reg64_extract(reg, info);
> +        break;
> +    case VREG64(GITS_CREADR):
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +
> +        reg = its->creadr;
> +        *r = vgic_reg64_extract(reg, info);
> +        break;
> +    case VRANGE64(0x0098, 0x00F8):
> +        goto read_reserved;
> +    case VREG64(GITS_BASER0):           /* device table */
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +        spin_lock(&its->its_lock);
> +        *r = vgic_reg64_extract(its->baser_dev, info);
> +        spin_unlock(&its->its_lock);
> +        break;
> +    case VREG64(GITS_BASER1):           /* collection table */
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +        spin_lock(&its->its_lock);
> +        *r = vgic_reg64_extract(its->baser_coll, info);
> +        spin_unlock(&its->its_lock);
> +        break;
> +    case VRANGE64(GITS_BASER2, GITS_BASER7):
> +        goto read_as_zero_64;
> +    case VRANGE32(0x0140, 0xBFFC):
> +        goto read_reserved;
> +    case VRANGE32(0xC000, 0xFFCC):
> +        goto read_impl_defined;
> +    case VRANGE32(0xFFD0, 0xFFE4):
> +        goto read_impl_defined;
> +    case VREG32(GITS_PIDR2):
> +        if ( info->dabt.size != DABT_WORD ) goto bad_width;
> +        *r = vgic_reg32_extract(GIC_PIDR2_ARCH_GICv3, info);
> +        break;
> +    case VRANGE32(0xFFEC, 0xFFFC):
> +        goto read_impl_defined;
> +    default:
> +        printk(XENLOG_G_ERR
> +               "%pv: vGITS: unhandled read r%d offset %#04lx\n",
> +               v, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
> +        return 0;
> +    }
> +
> +    return 1;
> +
> +read_as_zero_64:
> +    if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +    *r = 0;
> +
> +    return 1;
> +
> +read_impl_defined:
> +    printk(XENLOG_G_DEBUG
> +           "%pv: vGITS: RAZ on implementation defined register offset %#04lx\n",
> +           v, info->gpa & 0xffff);
> +    *r = 0;
> +    return 1;
> +
> +read_reserved:
> +    printk(XENLOG_G_DEBUG
> +           "%pv: vGITS: RAZ on reserved register offset %#04lx\n",
> +           v, info->gpa & 0xffff);
> +    *r = 0;
> +    return 1;
> +
> +bad_width:
> +    printk(XENLOG_G_ERR "vGITS: bad read width %d r%d offset %#04lx\n",
> +           info->dabt.size, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
> +    domain_crash_synchronous();
> +
> +    return 0;
> +}
> +
> +/******************************
> + * ITS registers write access *
> + ******************************/
> +
> +static unsigned int its_baser_table_size(uint64_t baser)
> +{
> +    unsigned int ret, page_size[4] = {SZ_4K, SZ_16K, SZ_64K, SZ_64K};
> +
> +    ret = page_size[(baser >> GITS_BASER_PAGE_SIZE_SHIFT) & 3];
> +
> +    return ret * ((baser & GITS_BASER_SIZE_MASK) + 1);
> +}
> +
> +static unsigned int its_baser_nr_entries(uint64_t baser)
> +{
> +    unsigned int entry_size = GITS_BASER_ENTRY_SIZE(baser);
> +
> +    return its_baser_table_size(baser) / entry_size;
> +}
> +
> +/* Must be called with the ITS lock held. */
> +static bool vgic_v3_verify_its_status(struct virt_its *its, bool status)
> +{
> +    ASSERT(spin_is_locked(&its->its_lock));
> +
> +    if ( !status )
> +        return false;
> +
> +    if ( !(its->cbaser & GITS_VALID_BIT) ||
> +         !(its->baser_dev & GITS_VALID_BIT) ||
> +         !(its->baser_coll & GITS_VALID_BIT) )
> +    {
> +        printk(XENLOG_G_WARNING "d%d tried to enable ITS without having the tables configured.\n",
> +               its->d->domain_id);
> +        return false;
> +    }
> +
> +    return true;
> +}
> +
> +static void sanitize_its_base_reg(uint64_t *reg)
> +{
> +    uint64_t r = *reg;
> +
> +    /* Avoid outer shareable. */
> +    switch ( (r >> GITS_BASER_SHAREABILITY_SHIFT) & 0x03 )
> +    {
> +    case GIC_BASER_OuterShareable:
> +        r &= ~GITS_BASER_SHAREABILITY_MASK;
> +        r |= GIC_BASER_InnerShareable << GITS_BASER_SHAREABILITY_SHIFT;
> +        break;
> +    default:
> +        break;
> +    }
> +
> +    /* Avoid any inner non-cacheable mapping. */
> +    switch ( (r >> GITS_BASER_INNER_CACHEABILITY_SHIFT) & 0x07 )
> +    {
> +    case GIC_BASER_CACHE_nCnB:
> +    case GIC_BASER_CACHE_nC:
> +        r &= ~GITS_BASER_INNER_CACHEABILITY_MASK;
> +        r |= GIC_BASER_CACHE_RaWb << GITS_BASER_INNER_CACHEABILITY_SHIFT;
> +        break;
> +    default:
> +        break;
> +    }
> +
> +    /* Only allow non-cacheable or same-as-inner. */
> +    switch ( (r >> GITS_BASER_OUTER_CACHEABILITY_SHIFT) & 0x07 )
> +    {
> +    case GIC_BASER_CACHE_SameAsInner:
> +    case GIC_BASER_CACHE_nC:
> +        break;
> +    default:
> +        r &= ~GITS_BASER_OUTER_CACHEABILITY_MASK;
> +        r |= GIC_BASER_CACHE_nC << GITS_BASER_OUTER_CACHEABILITY_SHIFT;
> +        break;
> +    }
> +
> +    *reg = r;
> +}
> +
> +static int vgic_v3_its_mmio_write(struct vcpu *v, mmio_info_t *info,
> +                                  register_t r, void *priv)
> +{
> +    struct domain *d = v->domain;
> +    struct virt_its *its = priv;
> +    uint64_t reg;
> +    uint32_t reg32;
> +
> +    switch ( info->gpa & 0xffff )
> +    {
> +    case VREG32(GITS_CTLR):
> +    {
> +        uint32_t ctlr;
> +
> +        if ( info->dabt.size != DABT_WORD ) goto bad_width;
> +
> +        /*
> +         * We need to take the vcmd_lock to prevent a guest from disabling
> +         * the ITS while commands are still processed.
> +         */
> +        spin_lock(&its->vcmd_lock);
> +        spin_lock(&its->its_lock);
> +        ctlr = its->enabled ? GITS_CTLR_ENABLE : 0;
> +        reg32 = ctlr;
> +        vgic_reg32_update(&reg32, r, info);
> +
> +        if ( ctlr ^ reg32 )
> +            its->enabled = vgic_v3_verify_its_status(its,
> +                                                     reg32 & GITS_CTLR_ENABLE);
> +        spin_unlock(&its->its_lock);
> +        spin_unlock(&its->vcmd_lock);
> +        return 1;
> +    }
> +
> +    case VREG32(GITS_IIDR):
> +        goto write_ignore_32;
> +    case VREG32(GITS_TYPER):
> +        goto write_ignore_32;
> +    case VRANGE32(0x0018, 0x001C):
> +        goto write_reserved;
> +    case VRANGE32(0x0020, 0x003C):
> +        goto write_impl_defined;
> +    case VRANGE32(0x0040, 0x007C):
> +        goto write_reserved;
> +    case VREG64(GITS_CBASER):
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +
> +        spin_lock(&its->its_lock);
> +        /* Changing base registers with the ITS enabled is UNPREDICTABLE. */
> +        if ( its->enabled )
> +        {
> +            spin_unlock(&its->its_lock);
> +            gdprintk(XENLOG_WARNING,
> +                     "vGITS: tried to change CBASER with the ITS enabled.\n");
> +            return 1;
> +        }
> +
> +        reg = its->cbaser;
> +        vgic_reg64_update(&reg, r, info);
> +        sanitize_its_base_reg(&reg);
> +
> +        its->cbaser = reg;
> +        its->creadr = 0;
> +        spin_unlock(&its->its_lock);
> +
> +        return 1;
> +
> +    case VREG64(GITS_CWRITER):
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +
> +        spin_lock(&its->vcmd_lock);
> +        reg = ITS_CMD_OFFSET(its->cwriter);
> +        vgic_reg64_update(&reg, r, info);
> +        its->cwriter = ITS_CMD_OFFSET(reg);
> +
> +        if ( its->enabled )
> +            if ( vgic_its_handle_cmds(d, its) )
> +                gdprintk(XENLOG_WARNING, "error handling ITS commands\n");
> +
> +        spin_unlock(&its->vcmd_lock);
> +
> +        return 1;
> +
> +    case VREG64(GITS_CREADR):
> +        goto write_ignore_64;
> +
> +    case VRANGE32(0x0098, 0x00FC):
> +        goto write_reserved;
> +    case VREG64(GITS_BASER0):           /* device table */
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +
> +        spin_lock(&its->its_lock);
> +
> +        /*
> +         * Changing base registers with the ITS enabled is UNPREDICTABLE,
> +         * we choose to ignore it, but warn.
> +         */
> +        if ( its->enabled )
> +        {
> +            spin_unlock(&its->its_lock);
> +            gdprintk(XENLOG_WARNING, "vGITS: tried to change BASER with the ITS enabled.\n");
> +
> +            return 1;
> +        }
> +
> +        reg = its->baser_dev;
> +        vgic_reg64_update(&reg, r, info);
> +
> +        /* We don't support indirect tables for now. */
> +        reg &= ~(GITS_BASER_RO_MASK | GITS_BASER_INDIRECT);
> +        reg |= (sizeof(dev_table_entry_t) - 1) << GITS_BASER_ENTRY_SIZE_SHIFT;
> +        reg |= GITS_BASER_TYPE_DEVICE << GITS_BASER_TYPE_SHIFT;
> +        sanitize_its_base_reg(&reg);
> +
> +        if ( reg & GITS_VALID_BIT )
> +        {
> +            its->max_devices = its_baser_nr_entries(reg);
> +            if ( its->max_devices > BIT(its->devid_bits) )
> +                its->max_devices = BIT(its->devid_bits);
> +        }
> +        else
> +            its->max_devices = 0;
> +
> +        its->baser_dev = reg;
> +        spin_unlock(&its->its_lock);
> +        return 1;
> +    case VREG64(GITS_BASER1):           /* collection table */
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +
> +        spin_lock(&its->its_lock);
> +        /*
> +         * Changing base registers with the ITS enabled is UNPREDICTABLE,
> +         * we choose to ignore it, but warn.
> +         */
> +        if ( its->enabled )
> +        {
> +            spin_unlock(&its->its_lock);
> +            gdprintk(XENLOG_INFO, "vGITS: tried to change BASER with the ITS enabled.\n");
> +            return 1;
> +        }
> +
> +        reg = its->baser_coll;
> +        vgic_reg64_update(&reg, r, info);
> +        /* No indirect tables for the collection table. */
> +        reg &= ~(GITS_BASER_RO_MASK | GITS_BASER_INDIRECT);
> +        reg |= (sizeof(coll_table_entry_t) - 1) << GITS_BASER_ENTRY_SIZE_SHIFT;
> +        reg |= GITS_BASER_TYPE_COLLECTION << GITS_BASER_TYPE_SHIFT;
> +        sanitize_its_base_reg(&reg);
> +
> +        if ( reg & GITS_VALID_BIT )
> +            its->max_collections = its_baser_nr_entries(reg);
> +        else
> +            its->max_collections = 0;
> +        its->baser_coll = reg;
> +        spin_unlock(&its->its_lock);
> +        return 1;
> +    case VRANGE64(GITS_BASER2, GITS_BASER7):
> +        goto write_ignore_64;
> +    case VRANGE32(0x0140, 0xBFFC):
> +        goto write_reserved;
> +    case VRANGE32(0xC000, 0xFFCC):
> +        goto write_impl_defined;
> +    case VRANGE32(0xFFD0, 0xFFE4):      /* IMPDEF identification registers */
> +        goto write_impl_defined;
> +    case VREG32(GITS_PIDR2):
> +        goto write_ignore_32;
> +    case VRANGE32(0xFFEC, 0xFFFC):      /* IMPDEF identification registers */
> +        goto write_impl_defined;
> +    default:
> +        printk(XENLOG_G_ERR
> +               "%pv: vGITS: unhandled write r%d offset %#04lx\n",
> +               v, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
> +        return 0;
> +    }
> +
> +    return 1;
> +
> +write_ignore_64:
> +    if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +    return 1;
> +
> +write_ignore_32:
> +    if ( info->dabt.size != DABT_WORD ) goto bad_width;
> +    return 1;
> +
> +write_impl_defined:
> +    printk(XENLOG_G_DEBUG
> +           "%pv: vGITS: WI on implementation defined register offset %#04lx\n",
> +           v, info->gpa & 0xffff);
> +    return 1;
> +
> +write_reserved:
> +    printk(XENLOG_G_DEBUG
> +           "%pv: vGITS: WI on implementation defined register offset %#04lx\n",
> +           v, info->gpa & 0xffff);
> +    return 1;
> +
> +bad_width:
> +    printk(XENLOG_G_ERR "vGITS: bad write width %d r%d offset %#08lx\n",
> +           info->dabt.size, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
> +
> +    domain_crash_synchronous();
> +
> +    return 0;
> +}
> +
> +static const struct mmio_handler_ops vgic_its_mmio_handler = {
> +    .read  = vgic_v3_its_mmio_read,
> +    .write = vgic_v3_its_mmio_write,
> +};
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
> index 7470779..40f4ef5 100644
> --- a/xen/include/asm-arm/gic_v3_its.h
> +++ b/xen/include/asm-arm/gic_v3_its.h
> @@ -35,6 +35,7 @@
>  #define GITS_BASER5                     0x128
>  #define GITS_BASER6                     0x130
>  #define GITS_BASER7                     0x138
> +#define GITS_PIDR2                      GICR_PIDR2
>
>  /* Register bits */
>  #define GITS_VALID_BIT                  BIT(63)
> @@ -57,6 +58,7 @@
>  #define GITS_TYPER_ITT_SIZE_MASK        (0xfUL << GITS_TYPER_ITT_SIZE_SHIFT)
>  #define GITS_TYPER_ITT_SIZE(r)          ((((r) & GITS_TYPER_ITT_SIZE_MASK) >> \
>                                                   GITS_TYPER_ITT_SIZE_SHIFT) + 1)
> +#define GITS_TYPER_PHYSICAL             (1U << 0)
>
>  #define GITS_BASER_INDIRECT             BIT(62)
>  #define GITS_BASER_INNER_CACHEABILITY_SHIFT        59
> @@ -76,6 +78,7 @@
>                          (((reg >> GITS_BASER_ENTRY_SIZE_SHIFT) & 0x1f) + 1)
>  #define GITS_BASER_SHAREABILITY_SHIFT   10
>  #define GITS_BASER_PAGE_SIZE_SHIFT      8
> +#define GITS_BASER_SIZE_MASK            0xff
>  #define GITS_BASER_SHAREABILITY_MASK   (0x3ULL << GITS_BASER_SHAREABILITY_SHIFT)
>  #define GITS_BASER_OUTER_CACHEABILITY_MASK   (0x7ULL << GITS_BASER_OUTER_CACHEABILITY_SHIFT)
>  #define GITS_BASER_INNER_CACHEABILITY_MASK   (0x7ULL << GITS_BASER_INNER_CACHEABILITY_SHIFT)
>

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 14/28] ARM: vITS: introduce translation table walks
  2017-05-11 17:53 ` [PATCH v9 14/28] ARM: vITS: introduce translation table walks Andre Przywara
@ 2017-05-16 15:57   ` Julien Grall
  0 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-16 15:57 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> The ITS stores the target (v)CPU and the (virtual) LPI number in tables.
> Introduce functions to walk those tables and translate an device ID -
> event ID pair into a pair of virtual LPI and vCPU.
> We map those tables on demand - which is cheap on arm64 - and copy the
> respective entries before using them, to avoid the guest tampering with
> them meanwhile.
>
> To allow compiling without warnings, we declare two functions as
> non-static for the moment, which two later patches will fix.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v3-its.c | 183 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 183 insertions(+)
>
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index e3bd1f6..12ec5f1 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -81,6 +81,7 @@ struct vits_itte
>  typedef uint16_t coll_table_entry_t;
>  typedef uint64_t dev_table_entry_t;
>
> +#define UNMAPPED_COLLECTION      ((coll_table_entry_t)~0)

This should be with the typedef coll_table_entry_t.

>  #define GITS_BASER_RO_MASK       (GITS_BASER_TYPE_MASK | \
>                                    (31UL << GITS_BASER_ENTRY_SIZE_SHIFT))
>
> @@ -97,6 +98,188 @@ void vgic_v3_its_free_domain(struct domain *d)
>      ASSERT(RB_EMPTY_ROOT(&d->arch.vgic.its_devices));
>  }
>
> +/*
> + * The physical address is encoded slightly differently depending on
> + * the used page size: the highest four bits are stored in the lowest
> + * four bits of the field for 64K pages.
> + */
> +static paddr_t get_baser_phys_addr(uint64_t reg)
> +{
> +    if ( reg & BIT(9) )
> +        return (reg & GENMASK(47, 16)) |
> +                ((reg & GENMASK(15, 12)) << 36);
> +    else
> +        return reg & GENMASK(47, 12);
> +}
> +
> +/*
> + * Our collection table encoding:
> + * Just contains the 16-bit VCPU ID of the respective vCPU.
> + */

This should be above the definition of the typedef coll_table_entry_t.

> +
> +/* Must be called with the ITS lock held. */
> +static struct vcpu *get_vcpu_from_collection(struct virt_its *its,
> +                                             uint16_t collid)
> +{
> +    paddr_t addr = get_baser_phys_addr(its->baser_coll);
> +    coll_table_entry_t vcpu_id;
> +    int ret;
> +
> +    ASSERT(spin_is_locked(&its->its_lock));
> +
> +    if ( collid >= its->max_collections )
> +        return NULL;
> +
> +    ret = vgic_access_guest_memory(its->d,
> +                                   addr + collid * sizeof(coll_table_entry_t),
> +                                   &vcpu_id, sizeof(vcpu_id), false);

I was hoping you would address my comment regarding this confusing mix 
of sizeof(vcpu_id) and sizeof(coll_table_entry_t).

Your argument that the two could get out of sync is moot, because in 
that case you would have had to modify the guest physical address used 
anyway.

So please use one of them, but not both.

> +    if ( ret )
> +        return NULL;
> +
> +    if ( vcpu_id == UNMAPPED_COLLECTION || vcpu_id >= its->d->max_vcpus )
> +        return NULL;
> +
> +    return its->d->vcpu[vcpu_id];
> +}
> +
> +/*
> + * Our device table encodings:
> + * Contains the guest physical address of the Interrupt Translation Table in
> + * bits [51:8], and the size of it is encoded as the number of bits minus one
> + * in the lowest 5 bits of the word.
> + */
> +#define DEV_TABLE_ITT_ADDR(x) ((x) & GENMASK(51, 8))
> +#define DEV_TABLE_ITT_SIZE(x) (BIT(((x) & GENMASK(4, 0)) + 1))
> +#define DEV_TABLE_ENTRY(addr, bits)                     \
> +        (((addr) & GENMASK(51, 8)) | (((bits) - 1) & GENMASK(4, 0)))

This should be next to the typedef, as it helps to understand the layout 
used.

> +
> +/*
> + * Lookup the address of the Interrupt Translation Table associated with
> + * that device ID.
> + * TODO: add support for walking indirect tables.
> + */
> +static int its_get_itt(struct virt_its *its, uint32_t devid,
> +                       dev_table_entry_t *itt)
> +{
> +    paddr_t addr = get_baser_phys_addr(its->baser_dev);
> +
> +    if ( devid >= its->max_devices )
> +        return -EINVAL;
> +
> +    return vgic_access_guest_memory(its->d,
> +                                    addr + devid * sizeof(dev_table_entry_t),
> +                                    itt, sizeof(*itt), false);
> +}
> +
> +/*
> + * Lookup the address of the Interrupt Translation Table associated with
> + * a device ID and return the address of the ITTE belonging to the event ID
> + * (which is an index into that table).
> + */
> +static paddr_t its_get_itte_address(struct virt_its *its,
> +                                    uint32_t devid, uint32_t evid)
> +{
> +    dev_table_entry_t itt;
> +    int ret;
> +
> +    ret = its_get_itt(its, devid, &itt);
> +    if ( ret )
> +        return INVALID_PADDR;
> +
> +    if ( evid >= DEV_TABLE_ITT_SIZE(itt) ||
> +         DEV_TABLE_ITT_ADDR(itt) == INVALID_PADDR )
> +        return INVALID_PADDR;
> +
> +    return DEV_TABLE_ITT_ADDR(itt) + evid * sizeof(struct vits_itte);
> +}
> +
> +/*
> + * Queries the collection and device tables to get the vCPU and virtual
> + * LPI number for a given guest event. This first accesses the guest memory
> + * to resolve the address of the ITTE, then reads the ITTE entry at this
> + * address and puts the result in vcpu_ptr and vlpi_ptr.
> + * Must be called with the ITS lock held.
> + */
> +static bool read_itte_locked(struct virt_its *its, uint32_t devid,
> +                             uint32_t evid, struct vcpu **vcpu_ptr,
> +                             uint32_t *vlpi_ptr)
> +{
> +    paddr_t addr;
> +    struct vits_itte itte;
> +    struct vcpu *vcpu;
> +
> +    ASSERT(spin_is_locked(&its->its_lock));
> +
> +    addr = its_get_itte_address(its, devid, evid);
> +    if ( addr == INVALID_PADDR )
> +        return false;
> +
> +    if ( vgic_access_guest_memory(its->d, addr, &itte, sizeof(itte), false) )
> +        return false;
> +
> +    vcpu = get_vcpu_from_collection(its, itte.collection);
> +    if ( !vcpu )
> +        return false;
> +
> +    *vcpu_ptr = vcpu;
> +    *vlpi_ptr = itte.vlpi;
> +    return true;
> +}
> +
> +/*
> + * This function takes care of the locking by taking the its_lock itself, so
> + * a caller shall not hold this. Before returning, the lock is dropped again.
> + */
> +bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
> +               struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
> +{
> +    bool ret;
> +
> +    spin_lock(&its->its_lock);
> +    ret = read_itte_locked(its, devid, evid, vcpu_ptr, vlpi_ptr);
> +    spin_unlock(&its->its_lock);
> +
> +    return ret;
> +}
> +
> +/*
> + * Queries the collection and device tables to translate the device ID and
> + * event ID and find the appropriate ITTE. The given collection ID and the
> + * virtual LPI number are then stored into that entry.
> + * If vcpu_ptr is provided, returns the VCPU belonging to that collection.
> + * Must be called with the ITS lock held.
> + */
> +bool write_itte_locked(struct virt_its *its, uint32_t devid,
> +                       uint32_t evid, uint32_t collid, uint32_t vlpi,
> +                       struct vcpu **vcpu_ptr)
> +{
> +    paddr_t addr;
> +    struct vits_itte itte;
> +
> +    ASSERT(spin_is_locked(&its->its_lock));
> +
> +    if ( collid >= its->max_collections )
> +        return false;
> +
> +    if ( vlpi >= its->d->arch.vgic.nr_lpis )
> +        return false;
> +
> +    addr = its_get_itte_address(its, devid, evid);
> +    if ( addr == INVALID_PADDR )
> +        return false;
> +
> +    itte.collection = collid;
> +    itte.vlpi = vlpi;
> +
> +    if ( vgic_access_guest_memory(its->d, addr, &itte, sizeof(itte), true) )
> +        return false;
> +
> +    if ( vcpu_ptr )
> +        *vcpu_ptr = get_vcpu_from_collection(its, collid);
> +
> +    return true;
> +}
> +
>  /**************************************
>   * Functions that handle ITS commands *
>   **************************************/
>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 15/28] ARM: vITS: provide access to struct pending_irq
  2017-05-11 17:53 ` [PATCH v9 15/28] ARM: vITS: provide access to struct pending_irq Andre Przywara
@ 2017-05-17 15:35   ` Julien Grall
  2017-05-22 16:50     ` Andre Przywara
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-17 15:35 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> For each device we allocate one struct pending_irq for each virtual
> event (MSI).
> Provide a helper function which returns the pointer to the appropriate
> struct, to be able to find the right struct when given a virtual
> deviceID/eventID pair.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic-v3-its.c        | 69 ++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/gic_v3_its.h |  4 +++
>  2 files changed, 73 insertions(+)
>
> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
> index aebc257..fd6a394 100644
> --- a/xen/arch/arm/gic-v3-its.c
> +++ b/xen/arch/arm/gic-v3-its.c
> @@ -800,6 +800,75 @@ out:
>      return ret;
>  }
>
> +/* Must be called with the its_device_lock held. */
> +static struct its_device *get_its_device(struct domain *d, paddr_t vdoorbell,
> +                                         uint32_t vdevid)
> +{
> +    struct rb_node *node = d->arch.vgic.its_devices.rb_node;
> +    struct its_device *dev;
> +
> +    ASSERT(spin_is_locked(&d->arch.vgic.its_devices_lock));
> +
> +    while (node)
> +    {
> +        int cmp;
> +
> +        dev = rb_entry(node, struct its_device, rbnode);
> +        cmp = compare_its_guest_devices(dev, vdoorbell, vdevid);
> +
> +        if ( !cmp )
> +            return dev;
> +
> +        if ( cmp > 0 )
> +            node = node->rb_left;
> +        else
> +            node = node->rb_right;
> +    }
> +
> +    return NULL;
> +}
> +
> +static uint32_t get_host_lpi(struct its_device *dev, uint32_t eventid)
> +{
> +    uint32_t host_lpi = INVALID_LPI;
> +
> +    if ( dev && (eventid < dev->eventids) )
> +        host_lpi = dev->host_lpi_blocks[eventid / LPI_BLOCK] +
> +                                       (eventid % LPI_BLOCK);
> +
> +    return host_lpi;

IMHO, it would be easier to read if you inverted the condition:

if ( !dev || (eventid >= dev->eventids) )
   return INVALID_LPI;

return dev->host_lpi_blocks[eventid / LPI_BLOCK] + (eventid % LPI_BLOCK);

Also, whilst I agree about sanitizing eventid, someone calling this 
function with dev = NULL is already wrong. Defensive programming is 
good, but there are some places where I don't think it is necessary. You 
have to trust the caller a bit, otherwise you will end up making the 
same check ten times before accessing anything.

> +}
> +
> +static struct pending_irq *get_event_pending_irq(struct domain *d,
> +                                                 paddr_t vdoorbell_address,
> +                                                 uint32_t vdevid,
> +                                                 uint32_t veventid,

s/veventid/eventid/, as it is not a virtual one, and the current name 
makes the call to get_host_lpi fairly confusing.

> +                                                 uint32_t *host_lpi)
> +{
> +    struct its_device *dev;
> +    struct pending_irq *pirq = NULL;
> +
> +    spin_lock(&d->arch.vgic.its_devices_lock);
> +    dev = get_its_device(d, vdoorbell_address, vdevid);
> +    if ( dev && veventid <= dev->eventids )

Why are you using "<=" here and not "<" like in get_host_lpi? Surely one 
of them is wrong: with "<=", veventid == dev->eventids would index one 
element past the end of pend_irqs[].

> +    {
> +        pirq = &dev->pend_irqs[veventid];
> +        if ( host_lpi )
> +            *host_lpi = get_host_lpi(dev, veventid);

Getting the host_lpi is fairly cheap. I would make passing host_lpi 
mandatory.

This would also avoid the multiple checks on the eventid that you 
currently do, i.e.:

dev = ...
if ( !dev )
   goto out;

*host_lpi = get_host_lpi(dev, ...);

if ( *host_lpi == INVALID_LPI )
   goto out;

pirq = &dev->pend_irqs[veventid];


out:
    spin_unlock(...)
    return pirq;

> +    }
> +    spin_unlock(&d->arch.vgic.its_devices_lock);
> +
> +    return pirq;
> +}
> +
> +struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
> +                                                    paddr_t vdoorbell_address,
> +                                                    uint32_t vdevid,
> +                                                    uint32_t veventid)

s/veventid/eventid/

> +{
> +    return get_event_pending_irq(d, vdoorbell_address, vdevid, veventid, NULL);
> +}

This wrapper looks a bit pointless to me. Why don't you directly expose 
get_event_pending_irq(...)?

> +
>  /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
>  void gicv3_its_dt_init(const struct dt_device_node *node)
>  {
> diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
> index 40f4ef5..d162e89 100644
> --- a/xen/include/asm-arm/gic_v3_its.h
> +++ b/xen/include/asm-arm/gic_v3_its.h
> @@ -169,6 +169,10 @@ int gicv3_its_map_guest_device(struct domain *d,
>  int gicv3_allocate_host_lpi_block(struct domain *d, uint32_t *first_lpi);
>  void gicv3_free_host_lpi_block(uint32_t first_lpi);
>
> +struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
> +                                                    paddr_t vdoorbell_address,
> +                                                    uint32_t vdevid,
> +                                                    uint32_t veventid);
>  #else
>
>  static inline void gicv3_its_dt_init(const struct dt_device_node *node)
>

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 13/28] ARM: vITS: add command handling stub and MMIO emulation
  2017-05-11 17:53 ` [PATCH v9 13/28] ARM: vITS: add command handling stub and MMIO emulation Andre Przywara
  2017-05-16 15:24   ` Julien Grall
@ 2017-05-17 16:16   ` Julien Grall
  2017-05-22 22:32   ` Stefano Stabellini
  2 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-17 16:16 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 11/05/17 18:53, Andre Przywara wrote:
> +/* Must be called with the ITS lock held. */
> +static bool vgic_v3_verify_its_status(struct virt_its *its, bool status)
> +{
> +    ASSERT(spin_is_locked(&its->its_lock));
> +
> +    if ( !status )
> +        return false;
> +
> +    if ( !(its->cbaser & GITS_VALID_BIT) ||
> +         !(its->baser_dev & GITS_VALID_BIT) ||
> +         !(its->baser_coll & GITS_VALID_BIT) )
> +    {
> +        printk(XENLOG_G_WARNING "d%d tried to enable ITS without having the tables configured.\n",
> +               its->d->domain_id);
> +        return false;
> +    }

Actually I was expecting more code in this function based on my comment 
in patch #21 on v8 ([1]).

The tables could be crafted before the guest enables the ITS. As a lot 
of the commands (e.g. INT, MOVI, ...) rely on their content to get the 
vCPU, we could take the wrong lock and fail to protect the internal 
structures correctly.

I appreciate that this series only targets Dom0, which is trusted. But 
the list of TODOs is starting to get extremely long. How are we going 
to address them?

> +
> +    return true;
> +}

Cheers,

[1] <586a4ada-603f-db52-c1aa-5164c2832667@arm.com>


-- 
Julien Grall


* Re: [PATCH v9 16/28] ARM: vITS: handle INT command
  2017-05-11 17:53 ` [PATCH v9 16/28] ARM: vITS: handle INT command Andre Przywara
@ 2017-05-17 16:17   ` Julien Grall
  2017-05-23 17:24     ` Andre Przywara
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-17 16:17 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> The INT command sets a given LPI identified by a DeviceID/EventID pair
> as pending and thus triggers it to be injected.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v3-its.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
>
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 12ec5f1..f9379c9 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -300,6 +300,24 @@ static uint64_t its_cmd_mask_field(uint64_t *its_cmd, unsigned int word,
>  #define its_cmd_get_validbit(cmd)       its_cmd_mask_field(cmd, 2, 63,  1)
>  #define its_cmd_get_ittaddr(cmd)        (its_cmd_mask_field(cmd, 2, 8, 44) << 8)
>
> +static int its_handle_int(struct virt_its *its, uint64_t *cmdptr)
> +{
> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
> +    uint32_t eventid = its_cmd_get_id(cmdptr);
> +    struct vcpu *vcpu;
> +    uint32_t vlpi;
> +
> +    if ( !read_itte(its, devid, eventid, &vcpu, &vlpi) )
> +        return -1;

See my comment on patch #13 about crafting the memory.

> +
> +    if ( vlpi == INVALID_LPI )
> +        return -1;
> +
> +    vgic_vcpu_inject_irq(vcpu, vlpi);
> +
> +    return 0;
> +}
> +
>  #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
>  #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
>
> @@ -329,6 +347,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
>
>          switch ( its_cmd_get_command(command) )
>          {
> +        case GITS_CMD_INT:
> +            ret = its_handle_int(its, command);
> +            break;
>          case GITS_CMD_SYNC:
>              /* We handle ITS commands synchronously, so we ignore SYNC. */
>              break;
>

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 17/28] ARM: vITS: handle MAPC command
  2017-05-11 17:53 ` [PATCH v9 17/28] ARM: vITS: handle MAPC command Andre Przywara
@ 2017-05-17 17:22   ` Julien Grall
  0 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-17 17:22 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> The MAPC command associates a given collection ID with a given
> redistributor, thus mapping collections to VCPUs.
> We just store the vcpu_id in the collection table for that.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v3-its.c | 47 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 47 insertions(+)
>
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index f9379c9..8f1c217 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -118,6 +118,27 @@ static paddr_t get_baser_phys_addr(uint64_t reg)
>   */
>
>  /* Must be called with the ITS lock held. */
> +static int its_set_collection(struct virt_its *its, uint16_t collid,
> +                              coll_table_entry_t vcpu_id)
> +{
> +    paddr_t addr;
> +
> +    /* The collection table entry must be able to store a VCPU ID. */
> +    BUILD_BUG_ON(BIT(sizeof(coll_table_entry_t) * 8) < MAX_VIRT_CPUS);
> +
> +    addr = get_baser_phys_addr(its->baser_coll);

I am not sure I understand why you moved from:

paddr_t addr = get_baser...

to:

paddr_t addr;

...

addr = get_baser...

Keeping them merged would be nice.

With that:

Acked-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 18/28] ARM: vITS: handle CLEAR command
  2017-05-11 17:53 ` [PATCH v9 18/28] ARM: vITS: handle CLEAR command Andre Przywara
@ 2017-05-17 17:45   ` Julien Grall
  2017-05-23 17:24     ` Andre Przywara
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-17 17:45 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> This introduces the ITS command handler for the CLEAR command, which
> clears the pending state of an LPI.
> This removes a not-yet injected, but already queued IRQ from a VCPU.
> As read_itte() is now eventually used, we add the static keyword.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v3-its.c | 59 ++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 57 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 8f1c217..8a200e9 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -52,6 +52,7 @@
>   */
>  struct virt_its {
>      struct domain *d;
> +    paddr_t doorbell_address;
>      unsigned int devid_bits;
>      unsigned int evid_bits;
>      spinlock_t vcmd_lock;       /* Protects the virtual command buffer, which */
> @@ -251,8 +252,8 @@ static bool read_itte_locked(struct virt_its *its, uint32_t devid,
>   * This function takes care of the locking by taking the its_lock itself, so
>   * a caller shall not hold this. Before returning, the lock is dropped again.
>   */
> -bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
> -               struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
> +static bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
> +                      struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
>  {
>      bool ret;
>
> @@ -362,6 +363,57 @@ static int its_handle_mapc(struct virt_its *its, uint64_t *cmdptr)
>      return 0;
>  }
>
> +/* CLEAR removes the pending state from an LPI. */
> +static int its_handle_clear(struct virt_its *its, uint64_t *cmdptr)
> +{
> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
> +    uint32_t eventid = its_cmd_get_id(cmdptr);
> +    struct pending_irq *p;
> +    struct vcpu *vcpu;
> +    uint32_t vlpi;
> +    unsigned long flags;
> +    int ret = -1;
> +
> +    spin_lock(&its->its_lock);
> +
> +    /* Translate the DevID/EvID pair into a vCPU/vLPI pair. */
> +    if ( !read_itte_locked(its, devid, eventid, &vcpu, &vlpi) )
> +        goto out_unlock;
> +
> +    p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
> +                                        devid, eventid);
> +    /* Protect against an invalid LPI number. */
> +    if ( unlikely(!p) )
> +        goto out_unlock;
> +
> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);

My comment in patch #9 about crafting the memory handed over to the ITS 
applies here too.

> +
> +    /*
> +     * If the LPI is already visible on the guest, it is too late to
> +     * clear the pending state. However this is a benign race that can
> +     * happen on real hardware, too: If the LPI has already been forwarded
> +     * to a CPU interface, a CLEAR request reaching the redistributor has
> +     * no effect on that LPI anymore. Since LPIs are edge triggered and
> +     * have no active state, we don't need to care about this here.
> +     */
> +    if ( !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
> +    {
> +        /* Remove a pending, but not yet injected guest IRQ. */
> +        clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
> +        list_del_init(&p->inflight);
> +        list_del_init(&p->lr_queue);

On the previous version I was against this open-coding of 
gic_remove_from_queues and suggested reworking the function instead.

It still does not make any sense to me, because if one day someone 
decides to update gic_remove_from_queues (such as you, since you are 
going to rework the vGIC), they will have to remember that you 
open-coded it in MOVE because you didn't want to touch the common code.

Common code is not set in stone. The goal is to abstract away the issues
so that changes are easier to propagate. So please address this comment.

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 19/28] ARM: vITS: handle MAPD command
  2017-05-11 17:53 ` [PATCH v9 19/28] ARM: vITS: handle MAPD command Andre Przywara
@ 2017-05-17 18:07   ` Julien Grall
  2017-05-24  9:10     ` Andre Przywara
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-17 18:07 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> The MAPD command maps a device by associating a memory region for
> storing ITEs with a certain device ID. Since it features a valid bit,
> MAPD also covers the "unmap" functionality, which we also cover here.
> We store the given guest physical address in the device table, and, if
> this command comes from Dom0, tell the host ITS driver about this new
> mapping, so it can issue the corresponding host MAPD command and create
> the required tables. We take care of rolling back actions should one
> step fail.
> Upon unmapping a device we make sure we clean up all associated
> resources and release the memory again.
> We use our existing guest memory access function to find the right ITT
> entry and store the mapping there (in guest memory).
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic-v3-its.c        |  18 +++++
>  xen/arch/arm/gic-v3-lpi.c        |  18 +++++
>  xen/arch/arm/vgic-v3-its.c       | 145 +++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/gic_v3_its.h |   5 ++
>  4 files changed, 186 insertions(+)
>
> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
> index fd6a394..be4c3e0 100644
> --- a/xen/arch/arm/gic-v3-its.c
> +++ b/xen/arch/arm/gic-v3-its.c
> @@ -869,6 +869,24 @@ struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
>      return get_event_pending_irq(d, vdoorbell_address, vdevid, veventid, NULL);
>  }
>
> +int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
> +                             uint32_t vdevid, uint32_t veventid)
> +{
> +    uint32_t host_lpi = INVALID_LPI;
> +
> +    if ( !get_event_pending_irq(d, vdoorbell_address, vdevid, veventid,
> +                                &host_lpi) )
> +        return -EINVAL;
> +
> +    if ( host_lpi == INVALID_LPI )
> +        return -EINVAL;
> +
> +    gicv3_lpi_update_host_entry(host_lpi, d->domain_id,
> +                                INVALID_VCPU_ID, INVALID_LPI);
> +
> +    return 0;
> +}
> +
>  /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
>  void gicv3_its_dt_init(const struct dt_device_node *node)
>  {
> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
> index 44f6315..d427539 100644
> --- a/xen/arch/arm/gic-v3-lpi.c
> +++ b/xen/arch/arm/gic-v3-lpi.c
> @@ -207,6 +207,24 @@ out:
>      irq_exit();
>  }
>
> +void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
> +                                 unsigned int vcpu_id, uint32_t virt_lpi)
> +{
> +    union host_lpi *hlpip, hlpi;
> +
> +    ASSERT(host_lpi >= LPI_OFFSET);
> +
> +    host_lpi -= LPI_OFFSET;
> +
> +    hlpip = &lpi_data.host_lpis[host_lpi / HOST_LPIS_PER_PAGE][host_lpi % HOST_LPIS_PER_PAGE];
> +
> +    hlpi.virt_lpi = virt_lpi;
> +    hlpi.dom_id = domain_id;
> +    hlpi.vcpu_id = vcpu_id;
> +
> +    write_u64_atomic(&hlpip->data, hlpi.data);
> +}
> +
>  static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
>  {
>      uint64_t val;
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 8a200e9..731fe0c 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -175,6 +175,21 @@ static struct vcpu *get_vcpu_from_collection(struct virt_its *its,
>  #define DEV_TABLE_ENTRY(addr, bits)                     \
>          (((addr) & GENMASK(51, 8)) | (((bits) - 1) & GENMASK(4, 0)))
>
> +/* Set the address of an ITT for a given device ID. */
> +static int its_set_itt_address(struct virt_its *its, uint32_t devid,
> +                               paddr_t itt_address, uint32_t nr_bits)
> +{
> +    paddr_t addr = get_baser_phys_addr(its->baser_dev);
> +    dev_table_entry_t itt_entry = DEV_TABLE_ENTRY(itt_address, nr_bits);
> +
> +    if ( devid >= its->max_devices )
> +        return -ENOENT;
> +
> +    return vgic_access_guest_memory(its->d,
> +                                    addr + devid * sizeof(dev_table_entry_t),
> +                                    &itt_entry, sizeof(itt_entry), true);
> +}
> +
>  /*
>   * Lookup the address of the Interrupt Translation Table associated with
>   * that device ID.
> @@ -414,6 +429,133 @@ out_unlock:
>      return ret;
>  }
>
> +/* Must be called with the ITS lock held. */
> +static int its_discard_event(struct virt_its *its,
> +                             uint32_t vdevid, uint32_t vevid)
> +{
> +    struct pending_irq *p;
> +    unsigned long flags;
> +    struct vcpu *vcpu;
> +    uint32_t vlpi;
> +
> +    ASSERT(spin_is_locked(&its->its_lock));
> +
> +    if ( !read_itte_locked(its, vdevid, vevid, &vcpu, &vlpi) )
> +        return -ENOENT;
> +
> +    if ( vlpi == INVALID_LPI )
> +        return -ENOENT;
> +
> +    /* Lock this VCPU's VGIC to make sure nobody is using the pending_irq. */
> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);

There is an interesting issue with this code. You don't check the 
content of the memory provided by the guest, so a malicious guest could 
craft the memory in order to set up a mapping with a known vLPI and a 
different vCPU.

This would lead to taking the wrong lock here and corrupting the list.

> +
> +    /* Remove the pending_irq from the tree. */
> +    write_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
> +    p = radix_tree_delete(&its->d->arch.vgic.pend_lpi_tree, vlpi);
> +    write_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
> +
> +    if ( !p )
> +    {
> +        spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
> +
> +        return -ENOENT;
> +    }
> +
> +    /* Cleanup the pending_irq and disconnect it from the LPI. */
> +    list_del_init(&p->inflight);
> +    list_del_init(&p->lr_queue);
> +    vgic_init_pending_irq(p, INVALID_LPI);
> +
> +    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
> +
> +    /* Remove the corresponding host LPI entry */
> +    return gicv3_remove_guest_event(its->d, its->doorbell_address,
> +                                    vdevid, vevid);
> +}
> +
> +static int its_unmap_device(struct virt_its *its, uint32_t devid)
> +{
> +    dev_table_entry_t itt;
> +    uint64_t evid;
> +    int ret;
> +
> +    spin_lock(&its->its_lock);
> +
> +    ret = its_get_itt(its, devid, &itt);
> +    if ( ret )
> +    {
> +        spin_unlock(&its->its_lock);
> +        return ret;
> +    }
> +
> +    /*
> +     * For DomUs we need to check that the number of events per device
> +     * is really limited, otherwise looping over all events can take too
> +     * long for a guest. This ASSERT can then be removed if that is
> +     * covered.
> +     */
> +    ASSERT(is_hardware_domain(its->d));
> +
> +    for ( evid = 0; evid < DEV_TABLE_ITT_SIZE(itt); evid++ )
> +        /* Don't care about errors here, clean up as much as possible. */
> +        its_discard_event(its, devid, evid);
> +
> +    spin_unlock(&its->its_lock);
> +
> +    return 0;
> +}
> +
> +static int its_handle_mapd(struct virt_its *its, uint64_t *cmdptr)
> +{
> +    /* size and devid get validated by the functions called below. */
> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
> +    unsigned int size = its_cmd_get_size(cmdptr) + 1;
> +    bool valid = its_cmd_get_validbit(cmdptr);
> +    paddr_t itt_addr = its_cmd_get_ittaddr(cmdptr);
> +    int ret;
> +
> +    /* Sanitize the number of events. */
> +    if ( valid && (size > its->evid_bits) )
> +        return -1;
> +
> +    if ( !valid )
> +        /* Discard all events and remove pending LPIs. */
> +        its_unmap_device(its, devid);

its_unmap_device returns an error code, but you don't check it. Please 
explain why.

> +
> +    /*
> +     * There is no easy and clean way for Xen to know the ITS device ID of a
> +     * particular (PCI) device, so we have to rely on the guest telling
> +     * us about it. For *now* we are just using the device ID *Dom0* uses,
> +     * because the driver there has the actual knowledge.
> +     * Eventually this will be replaced with a dedicated hypercall to
> +     * announce pass-through of devices.
> +     */
> +    if ( is_hardware_domain(its->d) )
> +    {
> +
> +        /*
> +         * Dom0's ITSes are mapped 1:1, so both addresses are the same.
> +         * Also the device IDs are equal.
> +         */
> +        ret = gicv3_its_map_guest_device(its->d, its->doorbell_address, devid,
> +                                         its->doorbell_address, devid,
> +                                         BIT(size), valid);
> +        if ( ret && valid )
> +            return ret;
> +    }
> +
> +    spin_lock(&its->its_lock);
> +
> +    if ( valid )
> +        ret = its_set_itt_address(its, devid, itt_addr, size);
> +    else
> +        ret = its_set_itt_address(its, devid, INVALID_PADDR, 1);
> +
> +    spin_unlock(&its->its_lock);
> +
> +    return ret;
> +}
> +
>  #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
>  #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
>
> @@ -452,6 +594,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
>          case GITS_CMD_MAPC:
>              ret = its_handle_mapc(its, command);
>              break;
> +        case GITS_CMD_MAPD:
> +            ret = its_handle_mapd(its, command);
> +            break;
>          case GITS_CMD_SYNC:
>              /* We handle ITS commands synchronously, so we ignore SYNC. */
>              break;
> diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
> index d162e89..6f94e65 100644
> --- a/xen/include/asm-arm/gic_v3_its.h
> +++ b/xen/include/asm-arm/gic_v3_its.h
> @@ -173,6 +173,11 @@ struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
>                                                      paddr_t vdoorbell_address,
>                                                      uint32_t vdevid,
>                                                      uint32_t veventid);
> +int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
> +                                     uint32_t vdevid, uint32_t veventid);
> +void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
> +                                 unsigned int vcpu_id, uint32_t virt_lpi);
> +
>  #else
>
>  static inline void gicv3_its_dt_init(const struct dt_device_node *node)
>

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs
  2017-05-11 17:53 ` [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs Andre Przywara
@ 2017-05-17 18:37   ` Julien Grall
  2017-05-20  1:25   ` Stefano Stabellini
  1 sibling, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-17 18:37 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> When LPIs get unmapped by a guest, they might still be in some LR of
> some VCPU. Nevertheless we remove the corresponding pending_irq
> (possibly freeing it), and detect this case (irq_to_pending() returns
> NULL) when the LR gets cleaned up later.
> However a *new* LPI may get mapped with the same number while the old
> LPI is *still* in some LR. To avoid getting the wrong state, we mark
> every newly mapped LPI as PRISTINE, which means: has never been in an
> LR before. If we detect the LPI in an LR anyway, it must have been an
> older one, which we can simply retire.
> Before inserting such a PRISTINE LPI into an LR, we must make sure that
> it's not already in another LR, as the architecture forbids two
> interrupts with the same virtual IRQ number on one CPU.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic.c         | 55 +++++++++++++++++++++++++++++++++++++++++-----
>  xen/include/asm-arm/vgic.h |  6 +++++
>  2 files changed, 56 insertions(+), 5 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index fd3fa05..8bf0578 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -375,6 +375,8 @@ static inline void gic_set_lr(int lr, struct pending_irq *p,
>  {
>      ASSERT(!local_irq_is_enabled());
>
> +    clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status);
> +
>      gic_hw_ops->update_lr(lr, p, state);
>
>      set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> @@ -442,12 +444,41 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
>  #endif
>  }
>
> +/*
> + * Find an unused LR to insert an IRQ into. If this new interrupt is a
> + * PRISTINE LPI, scan the other LRs to avoid inserting the same IRQ twice.

This replicates a part of the commit message regarding the pending_irq 
structure here, so we have the background for going through the LRs in 
the code itself.

> + */
> +static int gic_find_unused_lr(struct vcpu *v, struct pending_irq *p, int lr)

This should be unsigned, for both the return type and the lr variable. 
Also, please explain the purpose of the lr parameter, because it is not 
clear from the callers.

> +{
> +    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
> +    unsigned long *lr_mask = (unsigned long *) &this_cpu(lr_mask);
> +    struct gic_lr lr_val;
> +
> +    ASSERT(spin_is_locked(&v->arch.vgic.lock));
> +
> +    if ( test_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
> +    {
> +        int used_lr = 0;

find_next_bit returns unsigned long, so you likely want to use either 
unsigned int or unsigned long here.

> +
> +        while ( (used_lr = find_next_bit(lr_mask, nr_lrs, used_lr)) < nr_lrs )
> +        {
> +            gic_hw_ops->read_lr(used_lr, &lr_val);
> +            if ( lr_val.virq == p->irq )
> +                return used_lr;
> +        }
> +    }
> +
> +    lr = find_next_zero_bit(lr_mask, nr_lrs, lr);
> +
> +    return lr;
> +}
> +
>  void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>          unsigned int priority)
>  {
> -    int i;
> -    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
>      struct pending_irq *p = irq_to_pending(v, virtual_irq);
> +    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;

Why did you move the variable around? Please avoid such churn; the ITS 
series is not easy to review as it is...

> +    int i = nr_lrs;

I don't understand why you initialize i to nr_lrs. i will always be 
reset by the return value of gic_find_unused_lr before being used.

>
>      ASSERT(spin_is_locked(&v->arch.vgic.lock));
>
> @@ -457,7 +488,8 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>
>      if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
>      {
> -        i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
> +        i = gic_find_unused_lr(v, p, 0);
> +
>          if (i < nr_lrs) {
>              set_bit(i, &this_cpu(lr_mask));
>              gic_set_lr(i, p, GICH_LR_PENDING);
> @@ -509,7 +541,17 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>      }
>      else if ( lr_val.state & GICH_LR_PENDING )
>      {
> -        int q __attribute__ ((unused)) = test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
> +        int q __attribute__ ((unused));
> +
> +        if ( test_and_clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
> +        {
> +            gic_hw_ops->clear_lr(i);
> +            clear_bit(i, &this_cpu(lr_mask));
> +
> +            return;
> +        }

This code is very similar to what you do at the beginning when 
pending_irq is NULL. I would prefer if you put the check there rather 
than trying to address it in all the different paths.

I know it is not necessary to do it for the active path, as LPIs have no 
active state (though that would have required an explanation in the 
commit message). But it is easier to maintain the test_and_clear_bit in 
a single place rather than spreading it across two different ones for 
little benefit.

> +
> +        q = test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
>  #ifdef GIC_DEBUG
>          if ( q )
>              gdprintk(XENLOG_DEBUG, "trying to inject irq=%d into d%dv%d, when it is already pending in LR%d\n",
> @@ -521,6 +563,9 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>          gic_hw_ops->clear_lr(i);
>          clear_bit(i, &this_cpu(lr_mask));
>
> +        if ( test_and_clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
> +            return;
> +

Consolidating the check there would avoid this one too.

>          if ( p->desc != NULL )
>              clear_bit(_IRQ_INPROGRESS, &p->desc->status);
>          clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> @@ -591,7 +636,7 @@ static void gic_restore_pending_irqs(struct vcpu *v)
>      inflight_r = &v->arch.vgic.inflight_irqs;
>      list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
>      {
> -        lr = find_next_zero_bit(&this_cpu(lr_mask), nr_lrs, lr);
> +        lr = gic_find_unused_lr(v, p, lr);
>          if ( lr >= nr_lrs )
>          {
>              /* No more free LRs: find a lower priority irq to evict */
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index 02732db..3fc4ceb 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -60,12 +60,18 @@ struct pending_irq
>       * vcpu while it is still inflight and on an GICH_LR register on the
>       * old vcpu.
>       *
> +     * GIC_IRQ_GUEST_PRISTINE_LPI: the IRQ is a newly mapped LPI, which
> +     * has never been in an LR before. This means that any trace of an
> +     * LPI with the same number in an LR must be from an older LPI, which
> +     * has been unmapped before.
> +     *
>       */
>  #define GIC_IRQ_GUEST_QUEUED   0
>  #define GIC_IRQ_GUEST_ACTIVE   1
>  #define GIC_IRQ_GUEST_VISIBLE  2
>  #define GIC_IRQ_GUEST_ENABLED  3
>  #define GIC_IRQ_GUEST_MIGRATING   4
> +#define GIC_IRQ_GUEST_PRISTINE_LPI  5
>      unsigned long status;
>      struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
>      unsigned int irq;
>

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 21/28] ARM: vITS: handle MAPTI command
  2017-05-11 17:53 ` [PATCH v9 21/28] ARM: vITS: handle MAPTI command Andre Przywara
@ 2017-05-18 14:04   ` Julien Grall
  2017-05-22 23:39   ` Stefano Stabellini
  1 sibling, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-18 14:04 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> The MAPTI commands associates a DeviceID/EventID pair with a LPI/CPU
> pair and actually instantiates LPI interrupts.
> We connect the already allocated host LPI to this virtual LPI, so that
> any triggering LPI on the host can be quickly forwarded to a guest.
> Beside entering the VCPU and the virtual LPI number in the respective
> host LPI entry, we also initialize and add the already allocated
> struct pending_irq to our radix tree, so that we can now easily find it
> by its virtual LPI number.
> We also read the property table to update the enabled bit and the
> priority for our new LPI, as we might have missed this during an earlier
> INVALL call (which only checks mapped LPIs).

This patch is doing more than implementing MAPTI. It also implements 
MAPI, and this should be mentioned in the commit message/title.

> Since write_itte_locked() now sees its first usage, we change the
> declaration to static.

Your Signed-off-by is missing here.

> ---
>  xen/arch/arm/gic-v3-its.c        |  28 +++++++++
>  xen/arch/arm/vgic-v3-its.c       | 124 ++++++++++++++++++++++++++++++++++++++-
>  xen/include/asm-arm/gic_v3_its.h |   3 +
>  3 files changed, 152 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
> index be4c3e0..8a50f7d 100644
> --- a/xen/arch/arm/gic-v3-its.c
> +++ b/xen/arch/arm/gic-v3-its.c
> @@ -887,6 +887,34 @@ int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
>      return 0;
>  }
>
> +/*
> + * Connects the event ID for an already assigned device to the given VCPU/vLPI
> + * pair. The corresponding physical LPI is already mapped on the host side
> + * (when assigning the physical device to the guest), so we just connect the
> + * target VCPU/vLPI pair to that interrupt to inject it properly if it fires.
> + * Returns a pointer to the already allocated struct pending_irq that is
> + * meant to be used by that event.
> + */
> +struct pending_irq *gicv3_assign_guest_event(struct domain *d,
> +                                             paddr_t vdoorbell_address,
> +                                             uint32_t vdevid, uint32_t veventid,
> +                                             struct vcpu *v, uint32_t virt_lpi)
> +{
> +    struct pending_irq *pirq;
> +    uint32_t host_lpi = 0;
> +
> +    pirq = get_event_pending_irq(d, vdoorbell_address, vdevid, veventid,
> +                                 &host_lpi);
> +
> +    if ( !pirq || !host_lpi )

Again, if one of them is valid then the other is valid too. If not, then 
you have a bigger problem.

Also, please test host_lpi against INVALID_LPI rather than assuming the 
invalid value will always be 0.

> +        return NULL;
> +
> +    gicv3_lpi_update_host_entry(host_lpi, d->domain_id,
> +                                v ? v->vcpu_id : INVALID_VCPU_ID, virt_lpi);
> +
> +    return pirq;
> +}
> +
>  /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
>  void gicv3_its_dt_init(const struct dt_device_node *node)
>  {
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 731fe0c..c5c0e5e 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -286,9 +286,9 @@ static bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
>   * If vcpu_ptr is provided, returns the VCPU belonging to that collection.
>   * Must be called with the ITS lock held.
>   */
> -bool write_itte_locked(struct virt_its *its, uint32_t devid,
> -                       uint32_t evid, uint32_t collid, uint32_t vlpi,
> -                       struct vcpu **vcpu_ptr)
> +static bool write_itte_locked(struct virt_its *its, uint32_t devid,
> +                              uint32_t evid, uint32_t collid, uint32_t vlpi,
> +                              struct vcpu **vcpu_ptr)
>  {
>      paddr_t addr;
>      struct vits_itte itte;
> @@ -429,6 +429,33 @@ out_unlock:
>      return ret;
>  }
>
> +/*
> + * For a given virtual LPI read the enabled bit and priority from the virtual
> + * property table and update the virtual IRQ's state in the given pending_irq.
> + * Must be called with the respective VGIC VCPU lock held.
> + */
> +static int update_lpi_property(struct domain *d, struct pending_irq *p)
> +{
> +    paddr_t addr;
> +    uint8_t property;
> +    int ret;
> +
> +    addr = d->arch.vgic.rdist_propbase & GENMASK(51, 12);
> +
> +    ret = vgic_access_guest_memory(d, addr + p->irq - LPI_OFFSET,
> +                                   &property, sizeof(property), false);
> +    if ( ret )
> +        return ret;
> +
> +    p->lpi_priority = property & LPI_PROP_PRIO_MASK;

Again, I don't think this updates lpi_priority atomically. So what is 
preventing a race here?

> +    if ( property & LPI_PROP_ENABLED )
> +        set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> +    else
> +        clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> +
> +    return 0;
> +}
> +
>  /* Must be called with the ITS lock held. */
>  static int its_discard_event(struct virt_its *its,
>                               uint32_t vdevid, uint32_t vevid)
> @@ -556,6 +583,93 @@ static int its_handle_mapd(struct virt_its *its, uint64_t *cmdptr)
>      return ret;
>  }
>
> +static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
> +{
> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
> +    uint32_t eventid = its_cmd_get_id(cmdptr);
> +    uint32_t intid = its_cmd_get_physical_id(cmdptr), _intid;
> +    uint16_t collid = its_cmd_get_collection(cmdptr);
> +    struct pending_irq *pirq;
> +    struct vcpu *vcpu = NULL;
> +    int ret = -1;
> +
> +    if ( its_cmd_get_command(cmdptr) == GITS_CMD_MAPI )
> +        intid = eventid;
> +
> +    spin_lock(&its->its_lock);
> +    /*
> +     * Check whether there is a valid existing mapping. If yes, behavior is
> +     * unpredictable, we choose to ignore this command here.
> +     * This makes sure we start with a pristine pending_irq below.
> +     */
> +    if ( read_itte_locked(its, devid, eventid, &vcpu, &_intid) &&
> +         _intid != INVALID_LPI )
> +    {
> +        spin_unlock(&its->its_lock);
> +        return -1;
> +    }
> +
> +    /* Enter the mapping in our virtual ITS tables. */
> +    if ( !write_itte_locked(its, devid, eventid, collid, intid, &vcpu) )
> +    {
> +        spin_unlock(&its->its_lock);
> +        return -1;
> +    }
> +
> +    spin_unlock(&its->its_lock);
> +
> +    /*
> +     * Connect this virtual LPI to the corresponding host LPI, which is
> +     * determined by the same device ID and event ID on the host side.
> +     * This returns us the corresponding, still unused pending_irq.
> +     */
> +    pirq = gicv3_assign_guest_event(its->d, its->doorbell_address,
> +                                    devid, eventid, vcpu, intid);
> +    if ( !pirq )
> +        goto out_remove_mapping;
> +
> +    vgic_init_pending_irq(pirq, intid);
> +
> +    /*
> +     * Now read the guest's property table to initialize our cached state.
> +     * It can't fire at this time, because it is not known to the host yet.

gicv3_assign_guest_event will write the mapping between the host LPI and 
virtual LPI above. So could you detail what you mean by "it is not known 
to the host yet"?

> +     * We don't need the VGIC VCPU lock here, because the pending_irq isn't
> +     * in the radix tree yet.
> +     */
> +    ret = update_lpi_property(its->d, pirq);
> +    if ( ret )
> +        goto out_remove_host_entry;
> +
> +    pirq->lpi_vcpu_id = vcpu->vcpu_id;
> +    /*
> +     * Mark this LPI as new, so any older (now unmapped) LPI in any LR
> +     * can be easily recognised as such.
> +     */
> +    set_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &pirq->status);
> +
> +    /*
> +     * Now insert the pending_irq into the domain's LPI tree, so that
> +     * it becomes live.
> +     */
> +    write_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
> +    ret = radix_tree_insert(&its->d->arch.vgic.pend_lpi_tree, intid, pirq);
> +    write_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
> +
> +    if ( !ret )

So radix_tree_insert could return an error either because of a memory 
allocation failure or because the vLPI was already in use.

Leaving aside the memory allocation failure, this is the only place in 
the code which checks whether a vLPI is already in use.

So I think there is a potential race condition between this code and 
do_LPI if the vLPI already had a mapping. In that case we may use the 
wrong vCPU lock for protection, as it is retrieved from the host LPI array.

I can see two solutions:
	1) Check that the vLPI does not yet have a mapping in the radix tree 
before calling gicv3_assign_guest_event. This would require holding the 
radix tree lock for a longer time. However, I think this would break your 
locking order.
	2) Don't get the vCPU from the host LPI array but from the pending_irq.

> +        return 0;
> +
> +out_remove_host_entry:
> +    gicv3_remove_guest_event(its->d, its->doorbell_address, devid, eventid);
> +
> +out_remove_mapping:
> +    spin_lock(&its->its_lock);
> +    write_itte_locked(its, devid, eventid,
> +                      UNMAPPED_COLLECTION, INVALID_LPI, NULL);
> +    spin_unlock(&its->its_lock);
> +
> +    return ret;
> +}
> +
>  #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
>  #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
>
> @@ -597,6 +711,10 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
>          case GITS_CMD_MAPD:
>              ret = its_handle_mapd(its, command);
>              break;
> +        case GITS_CMD_MAPI:
> +        case GITS_CMD_MAPTI:
> +            ret = its_handle_mapti(its, command);
> +            break;
>          case GITS_CMD_SYNC:
>              /* We handle ITS commands synchronously, so we ignore SYNC. */
>              break;
> diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
> index 6f94e65..9c08cee 100644
> --- a/xen/include/asm-arm/gic_v3_its.h
> +++ b/xen/include/asm-arm/gic_v3_its.h
> @@ -175,6 +175,9 @@ struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
>                                                      uint32_t veventid);
>  int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
>                                       uint32_t vdevid, uint32_t veventid);
> +struct pending_irq *gicv3_assign_guest_event(struct domain *d, paddr_t doorbell,
> +                                             uint32_t devid, uint32_t eventid,
> +                                             struct vcpu *v, uint32_t virt_lpi);
>  void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
>                                   unsigned int vcpu_id, uint32_t virt_lpi);
>
>

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 22/28] ARM: vITS: handle MOVI command
  2017-05-11 17:53 ` [PATCH v9 22/28] ARM: vITS: handle MOVI command Andre Przywara
@ 2017-05-18 14:17   ` Julien Grall
  2017-05-23  0:28   ` Stefano Stabellini
  1 sibling, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-18 14:17 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> The MOVI command moves the interrupt affinity from one redistributor
> (read: VCPU) to another.
> For now migration of "live" LPIs is not yet implemented, but we store
> the changed affinity in the host LPI structure and in our virtual ITTE.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic-v3-its.c        | 30 ++++++++++++++++++++
>  xen/arch/arm/gic-v3-lpi.c        | 15 ++++++++++
>  xen/arch/arm/vgic-v3-its.c       | 59 ++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/gic_v3_its.h |  4 +++
>  4 files changed, 108 insertions(+)
>
> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
> index 8a50f7d..f00597e 100644
> --- a/xen/arch/arm/gic-v3-its.c
> +++ b/xen/arch/arm/gic-v3-its.c
> @@ -915,6 +915,36 @@ struct pending_irq *gicv3_assign_guest_event(struct domain *d,
>      return pirq;
>  }
>
> +/* Changes the target VCPU for a given host LPI assigned to a domain. */
> +int gicv3_lpi_change_vcpu(struct domain *d, paddr_t vdoorbell,
> +                          uint32_t vdevid, uint32_t veventid,
> +                          unsigned int vcpu_id)
> +{
> +    uint32_t host_lpi;
> +    struct its_device *dev;
> +
> +    spin_lock(&d->arch.vgic.its_devices_lock);
> +    dev = get_its_device(d, vdoorbell, vdevid);
> +    if ( dev )
> +        host_lpi = get_host_lpi(dev, veventid);
> +    else
> +        host_lpi = 0;
> +    spin_unlock(&d->arch.vgic.its_devices_lock);
> +
> +    if ( !host_lpi )
> +        return -ENOENT;
> +
> +    /*
> +     * TODO: This just changes the virtual affinity, the physical LPI
> +     * still stays on the same physical CPU.
> +     * Consider to move the physical affinity to the pCPU running the new
> +     * vCPU. However this requires scheduling a host ITS command.
> +     */
> +    gicv3_lpi_update_host_vcpuid(host_lpi, vcpu_id);

If you do that, the next interrupt will get the wrong vCPU lock. I guess 
this will be solved by the vGIC rework?

> +
> +    return 0;
> +}
> +
>  /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
>  void gicv3_its_dt_init(const struct dt_device_node *node)
>  {
> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
> index d427539..6af5ad9 100644
> --- a/xen/arch/arm/gic-v3-lpi.c
> +++ b/xen/arch/arm/gic-v3-lpi.c
> @@ -225,6 +225,21 @@ void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
>      write_u64_atomic(&hlpip->data, hlpi.data);
>  }
>
> +int gicv3_lpi_update_host_vcpuid(uint32_t host_lpi, unsigned int vcpu_id)
> +{
> +    union host_lpi *hlpip;
> +
> +    ASSERT(host_lpi >= LPI_OFFSET);
> +
> +    host_lpi -= LPI_OFFSET;
> +
> +    hlpip = &lpi_data.host_lpis[host_lpi / HOST_LPIS_PER_PAGE][host_lpi % HOST_LPIS_PER_PAGE];
> +
> +    write_u16_atomic(&hlpip->vcpu_id, vcpu_id);
> +
> +    return 0;
> +}
> +
>  static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
>  {
>      uint64_t val;
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index c5c0e5e..ef7c78f 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -670,6 +670,59 @@ out_remove_mapping:
>      return ret;
>  }
>
> +static int its_handle_movi(struct virt_its *its, uint64_t *cmdptr)
> +{
> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
> +    uint32_t eventid = its_cmd_get_id(cmdptr);
> +    uint16_t collid = its_cmd_get_collection(cmdptr);
> +    unsigned long flags;
> +    struct pending_irq *p;
> +    struct vcpu *ovcpu, *nvcpu;
> +    uint32_t vlpi;
> +    int ret = -1;
> +
> +    spin_lock(&its->its_lock);
> +    /* Check for a mapped LPI and get the LPI number. */
> +    if ( !read_itte_locked(its, devid, eventid, &ovcpu, &vlpi) )
> +        goto out_unlock;
> +
> +    if ( vlpi == INVALID_LPI )
> +        goto out_unlock;
> +
> +    /* Check the new collection ID and get the new VCPU pointer */
> +    nvcpu = get_vcpu_from_collection(its, collid);
> +    if ( !nvcpu )
> +        goto out_unlock;
> +
> +    p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
> +                                        devid, eventid);
> +    if ( unlikely(!p) )
> +        goto out_unlock;
> +
> +    spin_lock_irqsave(&ovcpu->arch.vgic.lock, flags);

The locking is still a problem here because of at least the crafted memory.

> +
> +    /* Update our cached vcpu_id in the pending_irq. */
> +    p->lpi_vcpu_id = nvcpu->vcpu_id;
> +
> +    spin_unlock_irqrestore(&ovcpu->arch.vgic.lock, flags);
> +
> +    /* Now store the new collection in the translation table. */
> +    if ( !write_itte_locked(its, devid, eventid, collid, vlpi, &nvcpu) )
> +        goto out_unlock;
> +
> +    spin_unlock(&its->its_lock);
> +
> +    /* TODO: lookup currently-in-guest virtual IRQs and migrate them? */

That's going to be an issue if you don't do that, because do_LPI would 
get the wrong lock. Hopefully this will get resolved by the vGIC rework? 
If so, then a TODO would be useful wherever the locking is fragile.

This comment also applies to all the places in the new code, so we can 
find them easily when doing the rework.

> +
> +    return gicv3_lpi_change_vcpu(its->d, its->doorbell_address,
> +                                 devid, eventid, nvcpu->vcpu_id);
> +
> +out_unlock:
> +    spin_unlock(&its->its_lock);
> +
> +    return ret;
> +}
> +
>  #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
>  #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
>
> @@ -715,6 +768,12 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
>          case GITS_CMD_MAPTI:
>              ret = its_handle_mapti(its, command);
>              break;
> +        case GITS_CMD_MOVALL:
> +            gdprintk(XENLOG_G_INFO, "vGITS: ignoring MOVALL command\n");
> +            break;
> +        case GITS_CMD_MOVI:
> +            ret = its_handle_movi(its, command);
> +            break;
>          case GITS_CMD_SYNC:
>              /* We handle ITS commands synchronously, so we ignore SYNC. */
>              break;
> diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
> index 9c08cee..82d788c 100644
> --- a/xen/include/asm-arm/gic_v3_its.h
> +++ b/xen/include/asm-arm/gic_v3_its.h
> @@ -178,8 +178,12 @@ int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
>  struct pending_irq *gicv3_assign_guest_event(struct domain *d, paddr_t doorbell,
>                                               uint32_t devid, uint32_t eventid,
>                                               struct vcpu *v, uint32_t virt_lpi);
> +int gicv3_lpi_change_vcpu(struct domain *d, paddr_t doorbell,
> +                          uint32_t devid, uint32_t eventid,
> +                          unsigned int vcpu_id);
>  void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
>                                   unsigned int vcpu_id, uint32_t virt_lpi);
> +int gicv3_lpi_update_host_vcpuid(uint32_t host_lpi, unsigned int vcpu_id);
>
>  #else
>
>

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 23/28] ARM: vITS: handle DISCARD command
  2017-05-11 17:53 ` [PATCH v9 23/28] ARM: vITS: handle DISCARD command Andre Przywara
@ 2017-05-18 14:23   ` Julien Grall
  2017-05-22 16:50     ` Andre Przywara
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-18 14:23 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> The DISCARD command drops the connection between a DeviceID/EventID
> and an LPI/collection pair.
> We mark the respective structure entries as not allocated and make
> sure that any queued IRQs are removed.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v3-its.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
>
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index ef7c78f..f7a8d77 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -723,6 +723,27 @@ out_unlock:
>      return ret;
>  }
>
> +static int its_handle_discard(struct virt_its *its, uint64_t *cmdptr)
> +{
> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
> +    uint32_t eventid = its_cmd_get_id(cmdptr);
> +    int ret;
> +
> +    spin_lock(&its->its_lock);
> +
> +    /* Remove from the radix tree and remove the host entry. */
> +    ret = its_discard_event(its, devid, eventid);
> +
> +    /* Remove from the guest's ITTE. */
> +    if ( ret || write_itte_locked(its, devid, eventid,
> +                                  UNMAPPED_COLLECTION, INVALID_LPI, NULL) )

I am not sure I fully understand this if. If ret is non-zero, you 
override it and never call write_itte_locked.

Is that what you wanted? If so, then a bit more documentation would be 
useful to explain why write_itte_locked is skipped.

> +        ret = -1;
> +
> +    spin_unlock(&its->its_lock);
> +
> +    return ret;
> +}
> +
>  #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
>  #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
>
> @@ -755,6 +776,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
>          case GITS_CMD_CLEAR:
>              ret = its_handle_clear(its, command);
>              break;
> +        case GITS_CMD_DISCARD:
> +            ret = its_handle_discard(its, command);
> +            break;
>          case GITS_CMD_INT:
>              ret = its_handle_int(its, command);
>              break;
>

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 26/28] ARM: vITS: increase mmio_count for each ITS
  2017-05-11 17:53 ` [PATCH v9 26/28] ARM: vITS: increase mmio_count for each ITS Andre Przywara
@ 2017-05-18 14:34   ` Julien Grall
  0 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-18 14:34 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> Increase the count of MMIO regions needed by one for each ITS Dom0 has
> to emulate. We emulate the ITSes 1:1 from the hardware, so the number
> is the number of host ITSes.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>

Acked-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 27/28] ARM: vITS: create and initialize virtual ITSes for Dom0
  2017-05-11 17:53 ` [PATCH v9 27/28] ARM: vITS: create and initialize virtual ITSes for Dom0 Andre Przywara
@ 2017-05-18 14:41   ` Julien Grall
  0 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-18 14:41 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
> For each hardware ITS create and initialize a virtual ITS for Dom0.
> We use the same memory mapped address to keep the doorbell working.
> This introduces a function to initialize a virtual ITS.
> We maintain a list of virtual ITSes, at the moment for the only
> purpose of later being able to free them again.
> We configure the virtual ITSes to match the hardware ones, that is we
> keep the number of device ID bits and event ID bits the same as the host
> ITS.
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v3-its.c       | 75 ++++++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/vgic-v3.c           |  4 +++
>  xen/include/asm-arm/domain.h     |  1 +
>  xen/include/asm-arm/gic_v3_its.h |  4 +++
>  4 files changed, 84 insertions(+)
>
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 8f6ff11..ca35aca 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -52,6 +52,7 @@
>   */
>  struct virt_its {
>      struct domain *d;
> +    struct list_head vits_list;
>      paddr_t doorbell_address;
>      unsigned int devid_bits;
>      unsigned int evid_bits;
> @@ -103,14 +104,49 @@ unsigned int vgic_v3_its_count(const struct domain *d)
>
>  int vgic_v3_its_init_domain(struct domain *d)
>  {
> +    int ret;
> +
> +    INIT_LIST_HEAD(&d->arch.vgic.vits_list);
>      spin_lock_init(&d->arch.vgic.its_devices_lock);
>      d->arch.vgic.its_devices = RB_ROOT;
>
> +    if ( is_hardware_domain(d) )
> +    {
> +        struct host_its *hw_its;
> +
> +        list_for_each_entry(hw_its, &host_its_list, entry)
> +        {
> +            /*
> +             * For each host ITS create a virtual ITS using the same
> +             * base and thus doorbell address.
> +             * Use the same number of device ID and event ID bits as the host.
> +             */
> +            ret = vgic_v3_its_init_virtual(d, hw_its->addr,
> +                                           hw_its->devid_bits,
> +                                           hw_its->evid_bits);
> +            if ( ret )
> +            {
> +                vgic_v3_its_free_domain(d);

You don't need this call. vgic_v3_free_domain will always be called when 
a domain is destroyed, even if it has not been fully built.

> +                return ret;
> +            }
> +            else
> +                d->arch.vgic.has_its = true;
> +        }
> +    }
> +
>      return 0;
>  }
>
>  void vgic_v3_its_free_domain(struct domain *d)
>  {
> +    struct virt_its *pos, *temp;
> +
> +    list_for_each_entry_safe( pos, temp, &d->arch.vgic.vits_list, vits_list )
> +    {
> +        list_del(&pos->vits_list);
> +        xfree(pos);
> +    }
> +
>      ASSERT(RB_EMPTY_ROOT(&d->arch.vgic.its_devices));
>  }
>
> @@ -1407,6 +1443,45 @@ static const struct mmio_handler_ops vgic_its_mmio_handler = {
>      .write = vgic_v3_its_mmio_write,
>  };
>
> +int vgic_v3_its_init_virtual(struct domain *d, paddr_t guest_addr,
> +                             unsigned int devid_bits, unsigned int evid_bits)

Why is this exported? The only caller is in this same file.

> +{
> +    struct virt_its *its;
> +    uint64_t base_attr;
> +
> +    its = xzalloc(struct virt_its);
> +    if ( !its )
> +        return -ENOMEM;
> +
> +    base_attr  = GIC_BASER_InnerShareable << GITS_BASER_SHAREABILITY_SHIFT;
> +    base_attr |= GIC_BASER_CACHE_SameAsInner << GITS_BASER_OUTER_CACHEABILITY_SHIFT;
> +    base_attr |= GIC_BASER_CACHE_RaWaWb << GITS_BASER_INNER_CACHEABILITY_SHIFT;
> +
> +    its->cbaser  = base_attr;
> +    base_attr |= 0ULL << GITS_BASER_PAGE_SIZE_SHIFT;    /* 4K pages */
> +    its->baser_dev = GITS_BASER_TYPE_DEVICE << GITS_BASER_TYPE_SHIFT;
> +    its->baser_dev |= (sizeof(dev_table_entry_t) - 1) <<
> +                      GITS_BASER_ENTRY_SIZE_SHIFT;
> +    its->baser_dev |= base_attr;
> +    its->baser_coll  = GITS_BASER_TYPE_COLLECTION << GITS_BASER_TYPE_SHIFT;
> +    its->baser_coll |= (sizeof(coll_table_entry_t) - 1) <<
> +                       GITS_BASER_ENTRY_SIZE_SHIFT;
> +    its->baser_coll |= base_attr;
> +    its->d = d;
> +    its->doorbell_address = guest_addr + ITS_DOORBELL_OFFSET;
> +    its->devid_bits = devid_bits;
> +    its->evid_bits = evid_bits;
> +    spin_lock_init(&its->vcmd_lock);
> +    spin_lock_init(&its->its_lock);
> +
> +    register_mmio_handler(d, &vgic_its_mmio_handler, guest_addr, SZ_64K, its);
> +
> +    /* Register the virtual ITSes to be able to clean them up later. */

There is only one virtual ITS registered here. So s/ITSes/ITS/

> +    list_add_tail(&its->vits_list, &d->arch.vgic.vits_list);
> +
> +    return 0;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index 41cda78..fd4b5f4 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -1700,6 +1700,10 @@ static int vgic_v3_domain_init(struct domain *d)
>          d->arch.vgic.intid_bits = GUEST_GICV3_GICD_INTID_BITS;
>      }
>
> +    /*
> +     * For a hardware domain, this will iterate over the host ITSes
> + * and map one virtual ITS per host ITS at the same address.
> +     */

This kind of comment will easily get stale if you put it on the caller.
It would be better to move it on top of the declaration.

>      ret = vgic_v3_its_init_domain(d);
>      if ( ret )
>          return ret;
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index b2d98bb..92f4ce5 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -115,6 +115,7 @@ struct arch_domain
>          spinlock_t its_devices_lock;        /* Protects the its_devices tree */
>          struct radix_tree_root pend_lpi_tree; /* Stores struct pending_irq's */
>          rwlock_t pend_lpi_tree_lock;        /* Protects the pend_lpi_tree */
> +        struct list_head vits_list;         /* List of virtual ITSes */
>          unsigned int intid_bits;
>          bool rdists_enabled;                /* Is any redistributor enabled? */
>          bool has_its;
> diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
> index 927568f..e41f8fd 100644
> --- a/xen/include/asm-arm/gic_v3_its.h
> +++ b/xen/include/asm-arm/gic_v3_its.h
> @@ -158,6 +158,10 @@ int gicv3_its_setup_collection(unsigned int cpu);
>  int vgic_v3_its_init_domain(struct domain *d);
>  void vgic_v3_its_free_domain(struct domain *d);
>
> +/* Create and register a virtual ITS at the given guest address. */
> +int vgic_v3_its_init_virtual(struct domain *d, paddr_t guest_addr,
> +			     unsigned int devid_bits, unsigned int evid_bits);
> +
>  /*
>   * Map a device on the host by allocating an ITT on the host (ITS).
>   * "nr_event" specifies how many events (interrupts) this device will need.
>

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 02/28] ARM: VGIC: move irq_to_pending() calls under the VGIC VCPU lock
  2017-05-11 17:53 ` [PATCH v9 02/28] ARM: VGIC: move irq_to_pending() calls under the VGIC VCPU lock Andre Przywara
@ 2017-05-20  0:34   ` Stefano Stabellini
  0 siblings, 0 replies; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-20  0:34 UTC (permalink / raw)
  To: Andre Przywara
  Cc: Stefano Stabellini, Vijay Kilari, Vijaya Kumar K, Julien Grall,
	xen-devel, Shanker Donthineni

On Thu, 11 May 2017, Andre Przywara wrote:
> So far irq_to_pending() is just a convenience function to lookup
> statically allocated arrays. This will change with LPIs, which are
> more dynamic.
> The proper answer to the issue of preventing stale pointers is
> ref-counting, which requires more rework and will be introduced with
> a later rework.
> For now move the irq_to_pending() calls that are used with LPIs under the
> VGIC VCPU lock, and only use the returned pointer while holding the lock.
> This prevents the memory from being freed while we use it.

I don't like the idea of doing this just for the functions that are used
by LPIs and not the other. Specifically, we are leaving out:

[a]:
- vgic_migrate_irq
- vgic_enable_irqs
- vgic_disable_irqs

[b]:
- arch_move_irqs

Those in group [a] are easy to fix, please do. Just introduce a spinlock
in vgic_disable_irqs (it is safe because gic_remove_from_queues already
takes the vgic vcpu lock).

[b] is not easy to fix, just add a comment.


> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic.c  | 5 ++++-
>  xen/arch/arm/vgic.c | 4 +++-
>  2 files changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index da19130..dcb1783 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -402,10 +402,13 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, struct pending_irq *n)
>  
>  void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>  {
> -    struct pending_irq *p = irq_to_pending(v, virtual_irq);
> +    struct pending_irq *p;
>      unsigned long flags;
>  
>      spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +
> +    p = irq_to_pending(v, virtual_irq);
> +
>      if ( !list_empty(&p->lr_queue) )
>          list_del_init(&p->lr_queue);
>      spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 83569b0..d30f324 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -466,7 +466,7 @@ void vgic_clear_pending_irqs(struct vcpu *v)
>  void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
>  {
>      uint8_t priority;
> -    struct pending_irq *iter, *n = irq_to_pending(v, virq);
> +    struct pending_irq *iter, *n;
>      unsigned long flags;
>      bool running;
>  
> @@ -474,6 +474,8 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
>  
>      spin_lock_irqsave(&v->arch.vgic.lock, flags);
>  
> +    n = irq_to_pending(v, virq);
> +
>      /* vcpu offline */
>      if ( test_bit(_VPF_down, &v->pause_flags) )
>      {
> -- 
> 2.9.0
> 
> 

* Re: [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs
  2017-05-11 17:53 ` [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs Andre Przywara
  2017-05-17 18:37   ` Julien Grall
@ 2017-05-20  1:25   ` Stefano Stabellini
  2017-05-22 23:48     ` Stefano Stabellini
  2017-05-23 14:41     ` Andre Przywara
  1 sibling, 2 replies; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-20  1:25 UTC (permalink / raw)
  To: Andre Przywara
  Cc: Stefano Stabellini, Vijay Kilari, Vijaya Kumar K, Julien Grall,
	xen-devel, Shanker Donthineni

On Thu, 11 May 2017, Andre Przywara wrote:
> When LPIs get unmapped by a guest, they might still be in some LR of
> some VCPU. Nevertheless we remove the corresponding pending_irq
> (possibly freeing it), and detect this case (irq_to_pending() returns
> NULL) when the LR gets cleaned up later.
> However a *new* LPI may get mapped with the same number while the old
> LPI is *still* in some LR. To avoid getting the wrong state, we mark
> every newly mapped LPI as PRISTINE, which means: has never been in an
> LR before. If we detect the LPI in an LR anyway, it must have been an
> older one, which we can simply retire.
> Before inserting such a PRISTINE LPI into an LR, we must make sure that
> it's not already in another LR, as the architecture forbids two
> interrupts with the same virtual IRQ number on one CPU.
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic.c         | 55 +++++++++++++++++++++++++++++++++++++++++-----
>  xen/include/asm-arm/vgic.h |  6 +++++
>  2 files changed, 56 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index fd3fa05..8bf0578 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -375,6 +375,8 @@ static inline void gic_set_lr(int lr, struct pending_irq *p,
>  {
>      ASSERT(!local_irq_is_enabled());
>  
> +    clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status);
> +
>      gic_hw_ops->update_lr(lr, p, state);
>  
>      set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> @@ -442,12 +444,41 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
>  #endif
>  }
>  
> +/*
> + * Find an unused LR to insert an IRQ into. If this new interrupt is a
> + * PRISTINE LPI, scan the other LRs to avoid inserting the same IRQ twice.
> + */
> +static int gic_find_unused_lr(struct vcpu *v, struct pending_irq *p, int lr)
> +{
> +    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
> +    unsigned long *lr_mask = (unsigned long *) &this_cpu(lr_mask);
> +    struct gic_lr lr_val;
> +
> +    ASSERT(spin_is_locked(&v->arch.vgic.lock));
> +
> +    if ( test_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )

Maybe we should add an "unlikely".

I can see how this would be OKish at runtime, but at boot time there
might be a bunch of PRISTINE_LPIs, but no MAPDs have been issued yet,
right?

I have a suggestion, I'll leave it to you and Julien if you want to do
this now, or maybe consider it as a TODO item. I am OK either way (I
don't want to delay the ITS any longer).

I am thinking we should do this scanning only after at least one MAPD
has been issued for a given cpu at least once. I would resurrect the
idea of a DISCARD flag, but not on the pending_irq, which I believe is
difficult to handle; instead, a single global DISCARD flag per struct vcpu.

On MAPD, we set DISCARD for the target vcpu of the LPI we are dropping.
Next time we want to inject a PRISTINE_LPI on that cpu interface, we
scan all LRs for interrupts with a NULL pending_irq. We remove those
from LRs, then we remove the DISCARD flag.

Do you think it would work?
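To make the idea concrete, here is a rough, self-contained model of that scheme (all names, types, and helpers below are invented for illustration; this is not actual Xen code):

```c
#include <assert.h>
#include <stdbool.h>

#define NR_LRS 4
#define NO_IRQ (-1)

/* Hypothetical stand-in for the relevant per-vCPU state. */
struct vcpu_model {
    bool discard_pending;   /* set by MAPD when a mapping is dropped */
    int  lr_virq[NR_LRS];   /* virq held in each LR, NO_IRQ if free */
    bool lr_stale[NR_LRS];  /* models irq_to_pending() returning NULL */
};

/* A MAPD that unmaps a device just records that a scan is needed. */
static void on_mapd_unmap(struct vcpu_model *v)
{
    v->discard_pending = true;
}

/*
 * Called before injecting a PRISTINE LPI on this vCPU: scan the LRs
 * for stale entries once, then clear the flag so the common case
 * (no unmap since the last scan) stays scan-free.
 */
static void purge_stale_lrs(struct vcpu_model *v)
{
    if ( !v->discard_pending )
        return;

    for ( int i = 0; i < NR_LRS; i++ )
    {
        if ( v->lr_virq[i] != NO_IRQ && v->lr_stale[i] )
        {
            v->lr_virq[i] = NO_IRQ;     /* retire the orphaned LR */
            v->lr_stale[i] = false;
        }
    }

    v->discard_pending = false;         /* flag consumed by the scan */
}
```

The point being that the per-LR scan only happens on the first PRISTINE injection after an unmap, rather than on every one.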


> +    {
> +        int used_lr = 0;
> +
> +        while ( (used_lr = find_next_bit(lr_mask, nr_lrs, used_lr)) < nr_lrs )
> +        {
> +            gic_hw_ops->read_lr(used_lr, &lr_val);
> +            if ( lr_val.virq == p->irq )
> +                return used_lr;
> +        }
> +    }
> +
> +    lr = find_next_zero_bit(lr_mask, nr_lrs, lr);
> +
> +    return lr;
> +}
> +
>  void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>          unsigned int priority)
>  {
> -    int i;
> -    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
>      struct pending_irq *p = irq_to_pending(v, virtual_irq);
> +    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
> +    int i = nr_lrs;
>  
>      ASSERT(spin_is_locked(&v->arch.vgic.lock));
>  
> @@ -457,7 +488,8 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>  
>      if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
>      {
> -        i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
> +        i = gic_find_unused_lr(v, p, 0);
> +
>          if (i < nr_lrs) {
>              set_bit(i, &this_cpu(lr_mask));
>              gic_set_lr(i, p, GICH_LR_PENDING);
> @@ -509,7 +541,17 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>      }
>      else if ( lr_val.state & GICH_LR_PENDING )
>      {
> -        int q __attribute__ ((unused)) = test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
> +        int q __attribute__ ((unused));
> +
> +        if ( test_and_clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
> +        {
> +            gic_hw_ops->clear_lr(i);
> +            clear_bit(i, &this_cpu(lr_mask));
> +
> +            return;
> +        }
> +
> +        q = test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
>  #ifdef GIC_DEBUG
>          if ( q )
>              gdprintk(XENLOG_DEBUG, "trying to inject irq=%d into d%dv%d, when it is already pending in LR%d\n",
> @@ -521,6 +563,9 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>          gic_hw_ops->clear_lr(i);
>          clear_bit(i, &this_cpu(lr_mask));
>  
> +        if ( test_and_clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
> +            return;
>          if ( p->desc != NULL )
>              clear_bit(_IRQ_INPROGRESS, &p->desc->status);
>          clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> @@ -591,7 +636,7 @@ static void gic_restore_pending_irqs(struct vcpu *v)
>      inflight_r = &v->arch.vgic.inflight_irqs;
>      list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
>      {
> -        lr = find_next_zero_bit(&this_cpu(lr_mask), nr_lrs, lr);
> +        lr = gic_find_unused_lr(v, p, lr);
>          if ( lr >= nr_lrs )
>          {
>              /* No more free LRs: find a lower priority irq to evict */
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index 02732db..3fc4ceb 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -60,12 +60,18 @@ struct pending_irq
>       * vcpu while it is still inflight and on an GICH_LR register on the
>       * old vcpu.
>       *
> +     * GIC_IRQ_GUEST_PRISTINE_LPI: the IRQ is a newly mapped LPI, which
> +     * has never been in an LR before. This means that any trace of an
> +     * LPI with the same number in an LR must be from an older LPI, which
> +     * has been unmapped before.
> +     *
>       */
>  #define GIC_IRQ_GUEST_QUEUED   0
>  #define GIC_IRQ_GUEST_ACTIVE   1
>  #define GIC_IRQ_GUEST_VISIBLE  2
>  #define GIC_IRQ_GUEST_ENABLED  3
>  #define GIC_IRQ_GUEST_MIGRATING   4
> +#define GIC_IRQ_GUEST_PRISTINE_LPI  5
>      unsigned long status;
>      struct irq_desc *desc; /* only set if the irq corresponds to a physical irq */
>      unsigned int irq;
> -- 
> 2.9.0
> 
> 

* Re: [PATCH v9 03/28] ARM: GIC: Add checks for NULL pointer pending_irq's
  2017-05-11 17:53 ` [PATCH v9 03/28] ARM: GIC: Add checks for NULL pointer pending_irq's Andre Przywara
  2017-05-12 14:19   ` Julien Grall
@ 2017-05-20  1:25   ` Stefano Stabellini
  1 sibling, 0 replies; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-20  1:25 UTC (permalink / raw)
  To: Andre Przywara
  Cc: Stefano Stabellini, Vijay Kilari, Vijaya Kumar K, Julien Grall,
	xen-devel, Shanker Donthineni

On Thu, 11 May 2017, Andre Przywara wrote:
> For LPIs the struct pending_irq's are dynamically allocated and the
> pointers will be stored in a radix tree. Since an LPI can be "unmapped"
> at any time, teach the VGIC how to deal with irq_to_pending() returning
> a NULL pointer.
> We just do nothing in this case or clean up the LR if the virtual LPI
> number was still in an LR.
> 
> Those are all call sites for irq_to_pending(), as per:
> "git grep irq_to_pending", and their evaluations:
> (PROTECTED means: added NULL check and bailing out)
> 
>     xen/arch/arm/gic.c:
> gic_route_irq_to_guest(): only called for SPIs, added ASSERT()
> gic_remove_irq_from_guest(): only called for SPIs, added ASSERT()
> gic_remove_from_queues(): PROTECTED, called within VCPU VGIC lock
> gic_raise_inflight_irq(): PROTECTED, called under VCPU VGIC lock
> gic_raise_guest_irq(): PROTECTED, called under VCPU VGIC lock
> gic_update_one_lr(): PROTECTED, called under VCPU VGIC lock
> 
>     xen/arch/arm/vgic.c:
> vgic_migrate_irq(): not called for LPIs (virtual IRQs), added ASSERT()
> arch_move_irqs(): not iterating over LPIs, added ASSERT()
> vgic_disable_irqs(): not called for LPIs, added ASSERT()
> vgic_enable_irqs(): not called for LPIs, added ASSERT()
> vgic_vcpu_inject_irq(): PROTECTED, moved under VCPU VGIC lock
> 
>     xen/include/asm-arm/event.h:
> local_events_need_delivery_nomask(): only called for a PPI, added ASSERT()
> 
>     xen/include/asm-arm/vgic.h:
> (prototype)
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/arch/arm/gic.c          | 34 ++++++++++++++++++++++++++++++----
>  xen/arch/arm/vgic.c         | 24 ++++++++++++++++++++++++
>  xen/include/asm-arm/event.h |  3 +++
>  3 files changed, 57 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index dcb1783..46bb306 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -148,6 +148,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
>      /* Caller has already checked that the IRQ is an SPI */
>      ASSERT(virq >= 32);
>      ASSERT(virq < vgic_num_irqs(d));
> +    ASSERT(!is_lpi(virq));
>  
>      vgic_lock_rank(v_target, rank, flags);
>  
> @@ -184,6 +185,7 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
>      ASSERT(spin_is_locked(&desc->lock));
>      ASSERT(test_bit(_IRQ_GUEST, &desc->status));
>      ASSERT(p->desc == desc);
> +    ASSERT(!is_lpi(virq));
>  
>      vgic_lock_rank(v_target, rank, flags);
>  
> @@ -408,9 +410,13 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>      spin_lock_irqsave(&v->arch.vgic.lock, flags);
>  
>      p = irq_to_pending(v, virtual_irq);
> -
> -    if ( !list_empty(&p->lr_queue) )
> +    /*
> +     * If an LPI has been removed meanwhile, it has been cleaned up
> +     * already, so nothing to remove here.
> +     */
> +    if ( likely(p) && !list_empty(&p->lr_queue) )
>          list_del_init(&p->lr_queue);
> +
>      spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>  }
>  
> @@ -418,6 +424,10 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
>  {
>      struct pending_irq *n = irq_to_pending(v, virtual_irq);
>  
> +    /* If an LPI has been removed meanwhile, there is nothing left to raise. */
> +    if ( unlikely(!n) )
> +        return;
> +
>      ASSERT(spin_is_locked(&v->arch.vgic.lock));
>  
>      if ( list_empty(&n->lr_queue) )
> @@ -437,20 +447,25 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>  {
>      int i;
>      unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
> +    struct pending_irq *p = irq_to_pending(v, virtual_irq);
>  
>      ASSERT(spin_is_locked(&v->arch.vgic.lock));
>  
> +    if ( unlikely(!p) )
> +        /* An unmapped LPI does not need to be raised. */
> +        return;
> +
>      if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
>      {
>          i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
>          if (i < nr_lrs) {
>              set_bit(i, &this_cpu(lr_mask));
> -            gic_set_lr(i, irq_to_pending(v, virtual_irq), GICH_LR_PENDING);
> +            gic_set_lr(i, p, GICH_LR_PENDING);
>              return;
>          }
>      }
>  
> -    gic_add_to_lr_pending(v, irq_to_pending(v, virtual_irq));
> +    gic_add_to_lr_pending(v, p);
>  }
>  
>  static void gic_update_one_lr(struct vcpu *v, int i)
> @@ -465,6 +480,17 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>      gic_hw_ops->read_lr(i, &lr_val);
>      irq = lr_val.virq;
>      p = irq_to_pending(v, irq);
> +    /* An LPI might have been unmapped, in which case we just clean up here. */
> +    if ( unlikely(!p) )
> +    {
> +        ASSERT(is_lpi(irq));
> +
> +        gic_hw_ops->clear_lr(i);
> +        clear_bit(i, &this_cpu(lr_mask));
> +
> +        return;
> +    }
> +
>      if ( lr_val.state & GICH_LR_ACTIVE )
>      {
>          set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index d30f324..8a5d93b 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -242,6 +242,9 @@ bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
>      unsigned long flags;
>      struct pending_irq *p = irq_to_pending(old, irq);
>  
> +    /* This will never be called for an LPI, as we don't migrate them. */
> +    ASSERT(!is_lpi(irq));
> +
>      /* nothing to do for virtual interrupts */
>      if ( p->desc == NULL )
>          return true;
> @@ -291,6 +294,9 @@ void arch_move_irqs(struct vcpu *v)
>      struct vcpu *v_target;
>      int i;
>  
> +    /* We don't migrate LPIs at the moment. */
> +    ASSERT(!is_lpi(vgic_num_irqs(d) - 1));
> +
>      for ( i = 32; i < vgic_num_irqs(d); i++ )
>      {
>          v_target = vgic_get_target_vcpu(v, i);
> @@ -310,6 +316,9 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
>      int i = 0;
>      struct vcpu *v_target;
>  
> +    /* LPIs will never be disabled via this function. */
> +    ASSERT(!is_lpi(32 * n + 31));
> +
>      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          v_target = vgic_get_target_vcpu(v, irq);
> @@ -352,6 +361,9 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>      struct vcpu *v_target;
>      struct domain *d = v->domain;
>  
> +    /* LPIs will never be enabled via this function. */
> +    ASSERT(!is_lpi(32 * n + 31));
> +
>      while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
>          irq = i + (32 * n);
>          v_target = vgic_get_target_vcpu(v, irq);
> @@ -432,6 +444,12 @@ bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
>      return true;
>  }
>  
> +/*
> + * Returns the pointer to the struct pending_irq belonging to the given
> + * interrupt.
> + * This can return NULL if called for an LPI which has been unmapped
> + * meanwhile.
> + */
>  struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
>  {
>      struct pending_irq *n;
> @@ -475,6 +493,12 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
>      spin_lock_irqsave(&v->arch.vgic.lock, flags);
>  
>      n = irq_to_pending(v, virq);
> +    /* If an LPI has been removed, there is nothing to inject here. */
> +    if ( unlikely(!n) )
> +    {
> +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +        return;
> +    }
>  
>      /* vcpu offline */
>      if ( test_bit(_VPF_down, &v->pause_flags) )
> diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
> index 5330dfe..caefa50 100644
> --- a/xen/include/asm-arm/event.h
> +++ b/xen/include/asm-arm/event.h
> @@ -19,6 +19,9 @@ static inline int local_events_need_delivery_nomask(void)
>      struct pending_irq *p = irq_to_pending(current,
>                                             current->domain->arch.evtchn_irq);
>  
> +    /* Does not work for LPIs. */
> +    ASSERT(!is_lpi(current->domain->arch.evtchn_irq));
> +
>      /* XXX: if the first interrupt has already been delivered, we should
>       * check whether any other interrupts with priority higher than the
>       * one in GICV_IAR are in the lr_pending queue or in the LR
> -- 
> 2.9.0
> 
> 

* Re: [PATCH v9 03/28] ARM: GIC: Add checks for NULL pointer pending_irq's
  2017-05-12 14:19   ` Julien Grall
@ 2017-05-22 16:49     ` Andre Przywara
  2017-05-22 17:15       ` Julien Grall
  0 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-22 16:49 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 12/05/17 15:19, Julien Grall wrote:
> Hi Andre,
> 
> On 11/05/17 18:53, Andre Przywara wrote:
>> For LPIs the struct pending_irq's are dynamically allocated and the
>> pointers will be stored in a radix tree. Since an LPI can be "unmapped"
>> at any time, teach the VGIC how to deal with irq_to_pending() returning
>> a NULL pointer.
>> We just do nothing in this case or clean up the LR if the virtual LPI
>> number was still in an LR.
>>
>> Those are all call sites for irq_to_pending(), as per:
>> "git grep irq_to_pending", and their evaluations:
>> (PROTECTED means: added NULL check and bailing out)
>>
>>     xen/arch/arm/gic.c:
>> gic_route_irq_to_guest(): only called for SPIs, added ASSERT()
>> gic_remove_irq_from_guest(): only called for SPIs, added ASSERT()
>> gic_remove_from_queues(): PROTECTED, called within VCPU VGIC lock
>> gic_raise_inflight_irq(): PROTECTED, called under VCPU VGIC lock
>> gic_raise_guest_irq(): PROTECTED, called under VCPU VGIC lock
>> gic_update_one_lr(): PROTECTED, called under VCPU VGIC lock
> 
> Even if they are protected, an ASSERT would be useful.

I am not sure I get what you mean here.
With PROTECTED I meant that the code checks for irq_to_pending()
returning NULL and reacts accordingly.
ASSERTs are only for making sure that those functions are never called
for LPIs, but the other functions can be called with an LPI, and they
can now cope with a NULL pending_irq.

So what do I miss here?

Cheers,
Andre.

>>
>>     xen/arch/arm/vgic.c:
>> vgic_migrate_irq(): not called for LPIs (virtual IRQs), added ASSERT()
>> arch_move_irqs(): not iterating over LPIs, added ASSERT()
>> vgic_disable_irqs(): not called for LPIs, added ASSERT()
>> vgic_enable_irqs(): not called for LPIs, added ASSERT()
>> vgic_vcpu_inject_irq(): PROTECTED, moved under VCPU VGIC lock
>>
>>     xen/include/asm-arm/event.h:
>> local_events_need_delivery_nomask(): only called for a PPI, added
>> ASSERT()
>>
>>     xen/include/asm-arm/vgic.h:
>> (prototype)
>>


* Re: [PATCH v9 15/28] ARM: vITS: provide access to struct pending_irq
  2017-05-17 15:35   ` Julien Grall
@ 2017-05-22 16:50     ` Andre Przywara
  2017-05-22 17:19       ` Julien Grall
  0 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-22 16:50 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 17/05/17 16:35, Julien Grall wrote:
> Hi Andre,
> 
> On 11/05/17 18:53, Andre Przywara wrote:
>> For each device we allocate one struct pending_irq for each virtual
>> event (MSI).
>> Provide a helper function which returns the pointer to the appropriate
>> struct, to be able to find the right struct when given a virtual
>> deviceID/eventID pair.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>> ---
>>  xen/arch/arm/gic-v3-its.c        | 69
>> ++++++++++++++++++++++++++++++++++++++++
>>  xen/include/asm-arm/gic_v3_its.h |  4 +++
>>  2 files changed, 73 insertions(+)
>>
>> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
>> index aebc257..fd6a394 100644
>> --- a/xen/arch/arm/gic-v3-its.c
>> +++ b/xen/arch/arm/gic-v3-its.c
>> @@ -800,6 +800,75 @@ out:
>>      return ret;
>>  }
>>
>> +/* Must be called with the its_device_lock held. */
>> +static struct its_device *get_its_device(struct domain *d, paddr_t
>> vdoorbell,
>> +                                         uint32_t vdevid)
>> +{
>> +    struct rb_node *node = d->arch.vgic.its_devices.rb_node;
>> +    struct its_device *dev;
>> +
>> +    ASSERT(spin_is_locked(&d->arch.vgic.its_devices_lock));
>> +
>> +    while (node)
>> +    {
>> +        int cmp;
>> +
>> +        dev = rb_entry(node, struct its_device, rbnode);
>> +        cmp = compare_its_guest_devices(dev, vdoorbell, vdevid);
>> +
>> +        if ( !cmp )
>> +            return dev;
>> +
>> +        if ( cmp > 0 )
>> +            node = node->rb_left;
>> +        else
>> +            node = node->rb_right;
>> +    }
>> +
>> +    return NULL;
>> +}
>> +
>> +static uint32_t get_host_lpi(struct its_device *dev, uint32_t eventid)
>> +{
>> +    uint32_t host_lpi = INVALID_LPI;
>> +
>> +    if ( dev && (eventid < dev->eventids) )
>> +        host_lpi = dev->host_lpi_blocks[eventid / LPI_BLOCK] +
>> +                                       (eventid % LPI_BLOCK);
>> +
>> +    return host_lpi;
> 
> IMHO, it would be easier to read if you invert the condition:
> 
> if ( !dev || (eventid >= dev->eventids) )
>   return INVALID_LPI;
> 
> return dev->host_lpi_blocks[eventid / LPI_BLOCK] + (eventid % LPI_BLOCK);
> 
> Also, whilst I agree about sanitizing eventid, someone calling this
> function with dev = NULL is already wrong. Defensive programming is
> good, but there are some places where I don't think it is necessary. You
> have to trust the caller a bit, otherwise you will end up making the
> check 10 times before accessing it.

Yeah, good point, I think this function was meant to be more widely used
originally. Given that it's static, has only one caller and is
effectively a one-liner, I will just scrap it altogether and do it
inline. That should also address your double-check comment below.

> 
>> +}
>> +
>> +static struct pending_irq *get_event_pending_irq(struct domain *d,
>> +                                                 paddr_t
>> vdoorbell_address,
>> +                                                 uint32_t vdevid,
>> +                                                 uint32_t veventid,
> 
> s/veventid/eventid/, as it is not a virtual one, and it makes the call to
> get_host_lpi fairly confusing.

Right, thinking again I believe there is no such thing as a "virtual
event ID", because the event is always a matter of the device and thus
there is no distinction between a physical and a virtual event. So I
will remove all "v"s from the respective identifiers.

>> +                                                 uint32_t *host_lpi)
>> +{
>> +    struct its_device *dev;
>> +    struct pending_irq *pirq = NULL;
>> +
>> +    spin_lock(&d->arch.vgic.its_devices_lock);
>> +    dev = get_its_device(d, vdoorbell_address, vdevid);
>> +    if ( dev && veventid <= dev->eventids )
> 
> Why are you using "<=" here and not "<" like in get_host_lpi? Surely one
> of them is wrong.

Oh, good catch!

>> +    {
>> +        pirq = &dev->pend_irqs[veventid];
>> +        if ( host_lpi )
>> +            *host_lpi = get_host_lpi(dev, veventid);
> 
> Getting the host_lpi is fairly cheap. I would require passing host_lpi.
> 
> This would also avoid multiple check on the eventid as you currently do.
> I.e
> 
> dev = ...
> if ( !dev )
>   goto out;
> 
> *host_lpi = get_host_lpi(dev, ...);
> 
> if ( *host_lpi == INVALID_LPI )
>   goto out;
> 
> pirq = &dev->pend_irqs[veventid];
> 
> 
> out:
>    spin_unlock(...)
>    return pirq;
> 
>> +    }
>> +    spin_unlock(&d->arch.vgic.its_devices_lock);
>> +
>> +    return pirq;
>> +}
>> +
>> +struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
>> +                                                    paddr_t
>> vdoorbell_address,
>> +                                                    uint32_t vdevid,
>> +                                                    uint32_t veventid)
> 
> s/veventid/eventid/
> 
>> +{
>> +    return get_event_pending_irq(d, vdoorbell_address, vdevid,
>> veventid, NULL);
>> +}
> 
> This wrapper looks a bit pointless to me. Why don't you directly expose
> get_event_pending_irq(...)?

I don't want to expose host_lpi in the exported function, because the
callers have no need for it and it is rather cumbersome for them to pass NULL
or the like. But then the algorithm to find host_lpi and pirq is
basically the same, so I came up with this joint static function and an
exported wrapper, which hides the host_lpi.
And there is one user (in gicv3_assign_guest_event()) which needs both,
so ...
If you can think of a better way to address this, I am all ears.
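In the abstract, the pattern I ended up with looks like this toy sketch (names invented, not the real code): one static worker computes both results through an optional out-parameter, and the thin exported wrapper hides that parameter from callers that only want the primary result.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Static worker: computes the primary result and, if requested,
 * the auxiliary one (stands in for the host_lpi lookup).
 */
static int lookup_both(int key, int *aux)
{
    int primary = key + 1;      /* stands in for the pending_irq lookup */

    if ( aux )
        *aux = key * 2;         /* stands in for the host_lpi lookup */

    return primary;
}

/* Exported variant for callers that do not care about aux. */
int lookup_primary(int key)
{
    return lookup_both(key, NULL);
}
```

The one internal user that needs both values calls the worker directly; everyone else goes through the wrapper.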

Cheers,
Andre.

>> +
>>  /* Scan the DT for any ITS nodes and create a list of host ITSes out
>> of it. */
>>  void gicv3_its_dt_init(const struct dt_device_node *node)
>>  {
>> diff --git a/xen/include/asm-arm/gic_v3_its.h
>> b/xen/include/asm-arm/gic_v3_its.h
>> index 40f4ef5..d162e89 100644
>> --- a/xen/include/asm-arm/gic_v3_its.h
>> +++ b/xen/include/asm-arm/gic_v3_its.h
>> @@ -169,6 +169,10 @@ int gicv3_its_map_guest_device(struct domain *d,
>>  int gicv3_allocate_host_lpi_block(struct domain *d, uint32_t
>> *first_lpi);
>>  void gicv3_free_host_lpi_block(uint32_t first_lpi);
>>
>> +struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
>> +                                                    paddr_t vdoorbell_address,
>> +                                                    uint32_t vdevid,
>> +                                                    uint32_t veventid);
>>  #else
>>
>>  static inline void gicv3_its_dt_init(const struct dt_device_node *node)
>>
> 
> Cheers,
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 23/28] ARM: vITS: handle DISCARD command
  2017-05-18 14:23   ` Julien Grall
@ 2017-05-22 16:50     ` Andre Przywara
  2017-05-22 17:20       ` Julien Grall
  0 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-22 16:50 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 18/05/17 15:23, Julien Grall wrote:
> Hi Andre,
> 
> On 11/05/17 18:53, Andre Przywara wrote:
>> The DISCARD command drops the connection between a DeviceID/EventID
>> and an LPI/collection pair.
>> We mark the respective structure entries as not allocated and make
>> sure that any queued IRQs are removed.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>> ---
>>  xen/arch/arm/vgic-v3-its.c | 24 ++++++++++++++++++++++++
>>  1 file changed, 24 insertions(+)
>>
>> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
>> index ef7c78f..f7a8d77 100644
>> --- a/xen/arch/arm/vgic-v3-its.c
>> +++ b/xen/arch/arm/vgic-v3-its.c
>> @@ -723,6 +723,27 @@ out_unlock:
>>      return ret;
>>  }
>>
>> +static int its_handle_discard(struct virt_its *its, uint64_t *cmdptr)
>> +{
>> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
>> +    uint32_t eventid = its_cmd_get_id(cmdptr);
>> +    int ret;
>> +
>> +    spin_lock(&its->its_lock);
>> +
>> +    /* Remove from the radix tree and remove the host entry. */
>> +    ret = its_discard_event(its, devid, eventid);
>> +
>> +    /* Remove from the guest's ITTE. */
>> +    if ( ret || write_itte_locked(its, devid, eventid,
>> +                                  UNMAPPED_COLLECTION, INVALID_LPI, NULL) )
> 
> I am not sure to fully understand this if. If ret is not NULL you
> override it and never call write_itte_locked.

If its_discard_event() succeeded above, then ret will be 0, in which
case we call write_itte_locked(). If that returns non-zero, this is an
error and we set ret to -1, otherwise (no error) it stays at zero.

> Is it what you wanted? If so, then probably a bit more documentation
> would be useful to explain why writte_itte_locked is skipped.

I admit this is a bit convoluted. I will either document this or
simplify the algorithm.

Cheers,
Andre.

> 
>> +        ret = -1;
>> +
>> +    spin_unlock(&its->its_lock);
>> +
>> +    return ret;
>> +}
>> +
>>  #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
>>  #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
>>
>> @@ -755,6 +776,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
>>          case GITS_CMD_CLEAR:
>>              ret = its_handle_clear(its, command);
>>              break;
>> +        case GITS_CMD_DISCARD:
>> +            ret = its_handle_discard(its, command);
>> +            break;
>>          case GITS_CMD_INT:
>>              ret = its_handle_int(its, command);
>>              break;
>>
> 
> Cheers,
> 


* Re: [PATCH v9 03/28] ARM: GIC: Add checks for NULL pointer pending_irq's
  2017-05-22 16:49     ` Andre Przywara
@ 2017-05-22 17:15       ` Julien Grall
  2017-05-25 16:14         ` Andre Przywara
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-22 17:15 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni



On 22/05/17 17:49, Andre Przywara wrote:
> Hi,

Hi Andre,

> On 12/05/17 15:19, Julien Grall wrote:
>> Hi Andre,
>>
>> On 11/05/17 18:53, Andre Przywara wrote:
>>> For LPIs the struct pending_irq's are dynamically allocated and the
>>> pointers will be stored in a radix tree. Since an LPI can be "unmapped"
>>> at any time, teach the VGIC how to deal with irq_to_pending() returning
>>> a NULL pointer.
>>> We just do nothing in this case or clean up the LR if the virtual LPI
>>> number was still in an LR.
>>>
>>> Those are all call sites for irq_to_pending(), as per:
>>> "git grep irq_to_pending", and their evaluations:
>>> (PROTECTED means: added NULL check and bailing out)
>>>
>>>     xen/arch/arm/gic.c:
>>> gic_route_irq_to_guest(): only called for SPIs, added ASSERT()
>>> gic_remove_irq_from_guest(): only called for SPIs, added ASSERT()
>>> gic_remove_from_queues(): PROTECTED, called within VCPU VGIC lock
>>> gic_raise_inflight_irq(): PROTECTED, called under VCPU VGIC lock
>>> gic_raise_guest_irq(): PROTECTED, called under VCPU VGIC lock
>>> gic_update_one_lr(): PROTECTED, called under VCPU VGIC lock
>>
>> Even they are protected, an ASSERT would be useful.
>
> I am not sure I get what you mean here.
> With PROTECTED I meant that the code checks for a irq_to_pending()
> returning NULL and reacts accordingly.
> ASSERTs are only for making sure that those functions are never called
> for LPIs, but the other functions can be called with an LPI, and they
> can now cope with a NULL pending_irq.
>
> So what do I miss here?

I mean adding an ASSERT(spin_is_locked(vgic->vcpu)) in those functions 
if it is not done yet.
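As a standalone sketch (using a trivial flag-based lock and the C
assert() in place of Xen's spinlock_t and ASSERT(), purely for
illustration), the suggested pattern is:

```c
#include <assert.h>
#include <stdbool.h>

/* Trivial stand-in for Xen's spinlock_t; not a real lock. */
typedef struct { bool locked; } fake_lock_t;

static void fake_lock(fake_lock_t *l)   { l->locked = true; }
static void fake_unlock(fake_lock_t *l) { l->locked = false; }
static bool fake_is_locked(const fake_lock_t *l) { return l->locked; }

struct fake_vgic { fake_lock_t lock; int raised; };

/*
 * Callers are expected to hold the VCPU VGIC lock; the assertion
 * documents and enforces that expectation in debug builds, on top of
 * the NULL check done for unmapped LPIs.
 */
void raise_inflight_irq(struct fake_vgic *vgic)
{
    assert(fake_is_locked(&vgic->lock)); /* Xen: ASSERT(spin_is_locked(...)) */
    vgic->raised++;
}
```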

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 15/28] ARM: vITS: provide access to struct pending_irq
  2017-05-22 16:50     ` Andre Przywara
@ 2017-05-22 17:19       ` Julien Grall
  2017-05-26  9:10         ` Andre Przywara
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-22 17:19 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni



On 22/05/17 17:50, Andre Przywara wrote:
> Hi,

Hi Andre,

> On 17/05/17 16:35, Julien Grall wrote:
>>> +    }
>>> +    spin_unlock(&d->arch.vgic.its_devices_lock);
>>> +
>>> +    return pirq;
>>> +}
>>> +
>>> +struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
>>> +                                                    paddr_t vdoorbell_address,
>>> +                                                    uint32_t vdevid,
>>> +                                                    uint32_t veventid)
>>
>> s/veventid/eventid/
>>
>>> +{
>>> +    return get_event_pending_irq(d, vdoorbell_address, vdevid, veventid, NULL);
>>> +}
>>
>> This wrapper looks a bit pointless to me. Why don't you directly expose
>> get_event_pending_irq(...)?
>
> I don't want to expose host_lpi in the exported function, because it's
> of no use to the callers, and it would be cumbersome for them to pass
> NULL or the like. But since the algorithm to find host_lpi and pirq is
> basically the same, I came up with this joint static function and an
> exported wrapper, which hides the host_lpi.
> And there is one user (in gicv3_assign_guest_event()) which needs both,
> so ...
> If you can think of a better way to address this, I am all ears.

It is not that bad to pass NULL everywhere. We already have some other 
functions like that.

How about adding the wrapper as a static inline in the header?
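Something along these lines, perhaps (a sketch only -- the shape
mirrors the patch, but the names, the placement in the header and the
stub lookup are assumptions):

```c
#include <assert.h>
#include <stddef.h>

struct pending_irq { unsigned int virq; };

/* The one exported worker, declared as in the header... */
struct pending_irq *its_get_event_pending_irq(unsigned int eventid,
                                              unsigned int *host_lpi);

/* ...with the NULL-passing wrapper as a static inline next to it. */
static inline struct pending_irq *its_get_event_pirq(unsigned int eventid)
{
    return its_get_event_pending_irq(eventid, NULL);
}

/* Stub definition so this sketch is self-contained. */
static struct pending_irq pirqs[2] = { { 8192 }, { 8193 } };

struct pending_irq *its_get_event_pending_irq(unsigned int eventid,
                                              unsigned int *host_lpi)
{
    if ( eventid >= 2 )
        return NULL;
    if ( host_lpi )
        *host_lpi = 100 + eventid;
    return &pirqs[eventid];
}
```

That keeps the out-parameter visible for the one caller that needs it,
while everyone else gets the short form for free.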

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 23/28] ARM: vITS: handle DISCARD command
  2017-05-22 16:50     ` Andre Przywara
@ 2017-05-22 17:20       ` Julien Grall
  2017-05-23  9:40         ` Andre Przywara
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-22 17:20 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni



On 22/05/17 17:50, Andre Przywara wrote:
> Hi,

Hi Andre,

> On 18/05/17 15:23, Julien Grall wrote:
>> Hi Andre,
>>
>> On 11/05/17 18:53, Andre Przywara wrote:
>>> The DISCARD command drops the connection between a DeviceID/EventID
>>> and an LPI/collection pair.
>>> We mark the respective structure entries as not allocated and make
>>> sure that any queued IRQs are removed.
>>>
>>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>>> ---
>>>  xen/arch/arm/vgic-v3-its.c | 24 ++++++++++++++++++++++++
>>>  1 file changed, 24 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
>>> index ef7c78f..f7a8d77 100644
>>> --- a/xen/arch/arm/vgic-v3-its.c
>>> +++ b/xen/arch/arm/vgic-v3-its.c
>>> @@ -723,6 +723,27 @@ out_unlock:
>>>      return ret;
>>>  }
>>>
>>> +static int its_handle_discard(struct virt_its *its, uint64_t *cmdptr)
>>> +{
>>> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
>>> +    uint32_t eventid = its_cmd_get_id(cmdptr);
>>> +    int ret;
>>> +
>>> +    spin_lock(&its->its_lock);
>>> +
>>> +    /* Remove from the radix tree and remove the host entry. */
>>> +    ret = its_discard_event(its, devid, eventid);
>>> +
>>> +    /* Remove from the guest's ITTE. */
>>> +    if ( ret || write_itte_locked(its, devid, eventid,
>>> +                                  UNMAPPED_COLLECTION, INVALID_LPI, NULL) )
>>
>> I am not sure to fully understand this if. If ret is not NULL you
>> override it and never call write_itte_locked.
>
> If its_discard_event() succeeded above, then ret will be 0, in which
> case we call write_itte_locked(). If that returns non-zero, this is an
> error and we set ret to -1, otherwise (no error) it stays at zero.

But we want to carry the error from its_discard_event. No?
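A version that carries the error through could look like this
(standalone sketch; the two controllable stubs stand in for
its_discard_event() and write_itte_locked() purely for illustration):

```c
#include <assert.h>

/* Controllable stubs standing in for the two real operations. */
static int discard_rc;   /* result of its_discard_event() */
static int itte_fails;   /* non-zero: write_itte_locked() fails */

static int stub_discard_event(void) { return discard_rc; }
static int stub_write_itte(void)    { return itte_fails; }

/*
 * DISCARD handler with explicit error handling: skip the ITTE update
 * if the host-side discard failed, and propagate that original error
 * instead of overriding it with -1.
 */
int handle_discard(void)
{
    int ret = stub_discard_event();

    if ( ret )
        return ret;        /* carry the error from its_discard_event() */

    if ( stub_write_itte() )
        return -1;         /* updating the guest's ITTE failed */

    return 0;
}
```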

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 04/28] ARM: GICv3: introduce separate pending_irq structs for LPIs
  2017-05-11 17:53 ` [PATCH v9 04/28] ARM: GICv3: introduce separate pending_irq structs for LPIs Andre Przywara
  2017-05-12 14:22   ` Julien Grall
@ 2017-05-22 21:52   ` Stefano Stabellini
  1 sibling, 0 replies; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-22 21:52 UTC (permalink / raw)
  To: Andre Przywara
  Cc: Stefano Stabellini, Vijay Kilari, Vijaya Kumar K, Julien Grall,
	xen-devel, Shanker Donthineni

On Thu, 11 May 2017, Andre Przywara wrote:
> For the same reason that allocating a struct irq_desc for each
> possible LPI is not an option, having a struct pending_irq for each LPI
> is also not feasible. We only care about mapped LPIs, so we can get away
> with having struct pending_irq's only for them.
> Maintain a radix tree per domain where we drop the pointer to the
> respective pending_irq. The index used is the virtual LPI number.
> The memory for the actual structures has been allocated already per
> device at device mapping time.
> Teach the existing VGIC functions to find the right pointer when being
> given a virtual LPI number.
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/vgic-v2.c       |  8 ++++++++
>  xen/arch/arm/vgic-v3.c       | 30 ++++++++++++++++++++++++++++++
>  xen/arch/arm/vgic.c          |  2 ++
>  xen/include/asm-arm/domain.h |  2 ++
>  xen/include/asm-arm/vgic.h   |  2 ++
>  5 files changed, 44 insertions(+)
> 
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index dc9f95b..0587569 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -702,10 +702,18 @@ static void vgic_v2_domain_free(struct domain *d)
>      /* Nothing to be cleanup for this driver */
>  }
>  
> +static struct pending_irq *vgic_v2_lpi_to_pending(struct domain *d,
> +                                                  unsigned int vlpi)
> +{
> +    /* Dummy function, no LPIs on a VGICv2. */
> +    BUG();
> +}
> +
>  static const struct vgic_ops vgic_v2_ops = {
>      .vcpu_init   = vgic_v2_vcpu_init,
>      .domain_init = vgic_v2_domain_init,
>      .domain_free = vgic_v2_domain_free,
> +    .lpi_to_pending = vgic_v2_lpi_to_pending,
>      .max_vcpus = 8,
>  };
>  
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index 25e16dc..44d2b50 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -1454,6 +1454,9 @@ static int vgic_v3_domain_init(struct domain *d)
>      d->arch.vgic.nr_regions = rdist_count;
>      d->arch.vgic.rdist_regions = rdist_regions;
>  
> +    rwlock_init(&d->arch.vgic.pend_lpi_tree_lock);
> +    radix_tree_init(&d->arch.vgic.pend_lpi_tree);
> +
>      /*
>       * Domain 0 gets the hardware address.
>       * Guests get the virtual platform layout.
> @@ -1535,14 +1538,41 @@ static int vgic_v3_domain_init(struct domain *d)
>  static void vgic_v3_domain_free(struct domain *d)
>  {
>      vgic_v3_its_free_domain(d);
> +    /*
> +     * It is expected that at this point all actual ITS devices have been
> +     * cleaned up already. The struct pending_irq's, for which the pointers
> +     * have been stored in the radix tree, are allocated and freed by device.
> +     * On device unmapping all the entries are removed from the tree and
> +     * the backing memory is freed.
> +     */
> +    radix_tree_destroy(&d->arch.vgic.pend_lpi_tree, NULL);
>      xfree(d->arch.vgic.rdist_regions);
>  }
>  
> +/*
> + * Looks up a virtual LPI number in our tree of mapped LPIs. This will return
> + * the corresponding struct pending_irq, which we also use to store the
> + * enabled and pending bit plus the priority.
> + * Returns NULL if an LPI cannot be found (or no LPIs are supported).
> + */
> +static struct pending_irq *vgic_v3_lpi_to_pending(struct domain *d,
> +                                                  unsigned int lpi)
> +{
> +    struct pending_irq *pirq;
> +
> +    read_lock(&d->arch.vgic.pend_lpi_tree_lock);
> +    pirq = radix_tree_lookup(&d->arch.vgic.pend_lpi_tree, lpi);
> +    read_unlock(&d->arch.vgic.pend_lpi_tree_lock);
> +
> +    return pirq;
> +}
> +
>  static const struct vgic_ops v3_ops = {
>      .vcpu_init   = vgic_v3_vcpu_init,
>      .domain_init = vgic_v3_domain_init,
>      .domain_free = vgic_v3_domain_free,
>      .emulate_reg  = vgic_v3_emulate_reg,
> +    .lpi_to_pending = vgic_v3_lpi_to_pending,
>      /*
>       * We use both AFF1 and AFF0 in (v)MPIDR. Thus, the max number of CPU
>       * that can be supported is up to 4096(==256*16) in theory.
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 8a5d93b..bf6fb60 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -457,6 +457,8 @@ struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
>       * are used for SPIs; the rests are used for per cpu irqs */
>      if ( irq < 32 )
>          n = &v->arch.vgic.pending_irqs[irq];
> +    else if ( is_lpi(irq) )
> +        n = v->domain->arch.vgic.handler->lpi_to_pending(v->domain, irq);
>      else
>          n = &v->domain->arch.vgic.pending_irqs[irq - 32];
>      return n;
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 7c3829d..3d8e84c 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -111,6 +111,8 @@ struct arch_domain
>          uint32_t rdist_stride;              /* Re-Distributor stride */
>          struct rb_root its_devices;         /* Devices mapped to an ITS */
>          spinlock_t its_devices_lock;        /* Protects the its_devices tree */
> +        struct radix_tree_root pend_lpi_tree; /* Stores struct pending_irq's */
> +        rwlock_t pend_lpi_tree_lock;        /* Protects the pend_lpi_tree */
>          unsigned int intid_bits;
>  #endif
>      } vgic;
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index df75064..c9075a9 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -134,6 +134,8 @@ struct vgic_ops {
>      void (*domain_free)(struct domain *d);
>      /* vGIC sysreg/cpregs emulate */
>      bool (*emulate_reg)(struct cpu_user_regs *regs, union hsr hsr);
> +    /* lookup the struct pending_irq for a given LPI interrupt */
> +    struct pending_irq *(*lpi_to_pending)(struct domain *d, unsigned int vlpi);
>      /* Maximum number of vCPU supported */
>      const unsigned int max_vcpus;
>  };
> -- 
> 2.9.0
> 
> 

* Re: [PATCH v9 05/28] ARM: GICv3: forward pending LPIs to guests
  2017-05-11 17:53 ` [PATCH v9 05/28] ARM: GICv3: forward pending LPIs to guests Andre Przywara
  2017-05-12 14:55   ` Julien Grall
@ 2017-05-22 22:03   ` Stefano Stabellini
  2017-05-25 16:42     ` Andre Przywara
  1 sibling, 1 reply; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-22 22:03 UTC (permalink / raw)
  To: Andre Przywara
  Cc: Stefano Stabellini, Vijay Kilari, Vijaya Kumar K, Julien Grall,
	xen-devel, Shanker Donthineni

On Thu, 11 May 2017, Andre Przywara wrote:
> Upon receiving an LPI on the host, we need to find the right VCPU and
> virtual IRQ number to get this IRQ injected.
> Iterate our two-level LPI table to find this information quickly when
> the host takes an LPI. Call the existing injection function to let the
> GIC emulation deal with this interrupt.
> Also we enhance struct pending_irq to cache the pending bit and the
> priority information for LPIs.

I can see that you added "uint8_t lpi_priority" to pending_irq. Where
are we caching the pending bit?

Also, I don't think the priority changes need to be part of this patch;
without them I would give my reviewed-by.


> Reading the information from there is
> faster than accessing the property table from guest memory. Also it
> uses some padding area, so it does not require more memory.
> This introduces a do_LPI() as a hardware gic_ops and a function to
> retrieve the (cached) priority value of an LPI and a vgic_ops.
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic-v2.c            |  7 ++++
>  xen/arch/arm/gic-v3-lpi.c        | 71 ++++++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/gic-v3.c            |  1 +
>  xen/arch/arm/gic.c               |  8 ++++-
>  xen/arch/arm/vgic-v2.c           |  7 ++++
>  xen/arch/arm/vgic-v3.c           | 18 ++++++++++
>  xen/arch/arm/vgic.c              |  7 +++-
>  xen/include/asm-arm/domain.h     |  3 +-
>  xen/include/asm-arm/gic.h        |  2 ++
>  xen/include/asm-arm/gic_v3_its.h |  8 +++++
>  xen/include/asm-arm/vgic.h       |  2 ++
>  11 files changed, 131 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
> index 270a136..ffbe47c 100644
> --- a/xen/arch/arm/gic-v2.c
> +++ b/xen/arch/arm/gic-v2.c
> @@ -1217,6 +1217,12 @@ static int __init gicv2_init(void)
>      return 0;
>  }
>  
> +static void gicv2_do_LPI(unsigned int lpi)
> +{
> +    /* No LPIs in a GICv2 */
> +    BUG();
> +}
> +
>  const static struct gic_hw_operations gicv2_ops = {
>      .info                = &gicv2_info,
>      .init                = gicv2_init,
> @@ -1244,6 +1250,7 @@ const static struct gic_hw_operations gicv2_ops = {
>      .make_hwdom_madt     = gicv2_make_hwdom_madt,
>      .map_hwdom_extra_mappings = gicv2_map_hwdown_extra_mappings,
>      .iomem_deny_access   = gicv2_iomem_deny_access,
> +    .do_LPI              = gicv2_do_LPI,
>  };
>  
>  /* Set up the GIC */
> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
> index 292f2d0..44f6315 100644
> --- a/xen/arch/arm/gic-v3-lpi.c
> +++ b/xen/arch/arm/gic-v3-lpi.c
> @@ -136,6 +136,77 @@ uint64_t gicv3_get_redist_address(unsigned int cpu, bool use_pta)
>          return per_cpu(lpi_redist, cpu).redist_id << 16;
>  }
>  
> +/*
> + * Handle incoming LPIs, which are a bit special, because they are potentially
> + * numerous and also only get injected into guests. Treat them specially here,
> + * by just looking up their target vCPU and virtual LPI number and hand it
> + * over to the injection function.
> + * Please note that LPIs are edge-triggered only, also have no active state,
> + * so spurious interrupts on the host side are no issue (we can just ignore
> + * them).
> + * Also a guest cannot expect that firing interrupts that haven't been
> + * fully configured yet will reach the CPU, so we don't need to care about
> + * this special case.
> + */
> +void gicv3_do_LPI(unsigned int lpi)
> +{
> +    struct domain *d;
> +    union host_lpi *hlpip, hlpi;
> +    struct vcpu *vcpu;
> +
> +    irq_enter();
> +
> +    /* EOI the LPI already. */
> +    WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
> +
> +    /* Find out if a guest mapped something to this physical LPI. */
> +    hlpip = gic_get_host_lpi(lpi);
> +    if ( !hlpip )
> +        goto out;
> +
> +    hlpi.data = read_u64_atomic(&hlpip->data);
> +
> +    /*
> +     * Unmapped events are marked with an invalid LPI ID. We can safely
> +     * ignore them, as they have no further state and no-one can expect
> +     * to see them if they have not been mapped.
> +     */
> +    if ( hlpi.virt_lpi == INVALID_LPI )
> +        goto out;
> +
> +    d = rcu_lock_domain_by_id(hlpi.dom_id);
> +    if ( !d )
> +        goto out;
> +
> +    /* Make sure we don't step beyond the vcpu array. */
> +    if ( hlpi.vcpu_id >= d->max_vcpus )
> +    {
> +        rcu_unlock_domain(d);
> +        goto out;
> +    }
> +
> +    vcpu = d->vcpu[hlpi.vcpu_id];
> +
> +    /* Check if the VCPU is ready to receive LPIs. */
> +    if ( vcpu->arch.vgic.flags & VGIC_V3_LPIS_ENABLED )
> +        /*
> +         * TODO: Investigate what to do here for potential interrupt storms.
> +         * As we keep all host LPIs enabled, for disabling LPIs we would need
> +         * to queue a ITS host command, which we avoid so far during a guest's
> +         * runtime. Also re-enabling would trigger a host command upon the
> +         * guest sending a command, which could be an attack vector for
> +         * hogging the host command queue.
> +         * See the thread around here for some background:
> +         * https://lists.xen.org/archives/html/xen-devel/2016-12/msg00003.html
> +         */
> +        vgic_vcpu_inject_irq(vcpu, hlpi.virt_lpi);
> +
> +    rcu_unlock_domain(d);
> +
> +out:
> +    irq_exit();
> +}
> +
>  static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
>  {
>      uint64_t val;
> diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
> index 29c8964..8140c5f 100644
> --- a/xen/arch/arm/gic-v3.c
> +++ b/xen/arch/arm/gic-v3.c
> @@ -1674,6 +1674,7 @@ static const struct gic_hw_operations gicv3_ops = {
>      .make_hwdom_dt_node  = gicv3_make_hwdom_dt_node,
>      .make_hwdom_madt     = gicv3_make_hwdom_madt,
>      .iomem_deny_access   = gicv3_iomem_deny_access,
> +    .do_LPI              = gicv3_do_LPI,
>  };
>  
>  static int __init gicv3_dt_preinit(struct dt_device_node *node, const void *data)
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 46bb306..fd3fa05 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -734,7 +734,13 @@ void gic_interrupt(struct cpu_user_regs *regs, int is_fiq)
>              do_IRQ(regs, irq, is_fiq);
>              local_irq_disable();
>          }
> -        else if (unlikely(irq < 16))
> +        else if ( is_lpi(irq) )
> +        {
> +            local_irq_enable();
> +            gic_hw_ops->do_LPI(irq);
> +            local_irq_disable();
> +        }
> +        else if ( unlikely(irq < 16) )
>          {
>              do_sgi(regs, irq);
>          }
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index 0587569..df91940 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -709,11 +709,18 @@ static struct pending_irq *vgic_v2_lpi_to_pending(struct domain *d,
>      BUG();
>  }
>  
> +static int vgic_v2_lpi_get_priority(struct domain *d, unsigned int vlpi)
> +{
> +    /* Dummy function, no LPIs on a VGICv2. */
> +    BUG();
> +}
> +
>  static const struct vgic_ops vgic_v2_ops = {
>      .vcpu_init   = vgic_v2_vcpu_init,
>      .domain_init = vgic_v2_domain_init,
>      .domain_free = vgic_v2_domain_free,
>      .lpi_to_pending = vgic_v2_lpi_to_pending,
> +    .lpi_get_priority = vgic_v2_lpi_get_priority,
>      .max_vcpus = 8,
>  };
>  
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index 44d2b50..87f58f6 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -1567,12 +1567,30 @@ static struct pending_irq *vgic_v3_lpi_to_pending(struct domain *d,
>      return pirq;
>  }
>  
> +/* Retrieve the priority of an LPI from its struct pending_irq. */
> +static int vgic_v3_lpi_get_priority(struct domain *d, uint32_t vlpi)
> +{
> +    struct pending_irq *p = vgic_v3_lpi_to_pending(d, vlpi);
> +
> +    /*
> +     * Cope with the case where this function is called with an invalid LPI.
> +     * It is expected that a caller will bail out handling this LPI at a
> +     * later point in time, but for the sake of this function let us return
> +     * some value here and avoid a NULL pointer dereference.
> +     */
> +    if ( !p )
> +        return 0xff;
> +
> +    return p->lpi_priority;
> +}
> +
>  static const struct vgic_ops v3_ops = {
>      .vcpu_init   = vgic_v3_vcpu_init,
>      .domain_init = vgic_v3_domain_init,
>      .domain_free = vgic_v3_domain_free,
>      .emulate_reg  = vgic_v3_emulate_reg,
>      .lpi_to_pending = vgic_v3_lpi_to_pending,
> +    .lpi_get_priority = vgic_v3_lpi_get_priority,
>      /*
>       * We use both AFF1 and AFF0 in (v)MPIDR. Thus, the max number of CPU
>       * that can be supported is up to 4096(==256*16) in theory.
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index bf6fb60..c29ad5e 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -226,10 +226,15 @@ struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
>  
>  static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
>  {
> -    struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
> +    struct vgic_irq_rank *rank;
>      unsigned long flags;
>      int priority;
>  
> +    /* LPIs don't have a rank, also store their priority separately. */
> +    if ( is_lpi(virq) )
> +        return v->domain->arch.vgic.handler->lpi_get_priority(v->domain, virq);
> +
> +    rank = vgic_rank_irq(v, virq);
>      vgic_lock_rank(v, rank, flags);
>      priority = rank->priority[virq & INTERRUPT_RANK_MASK];
>      vgic_unlock_rank(v, rank, flags);
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 3d8e84c..ebaea35 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -260,7 +260,8 @@ struct arch_vcpu
>  
>          /* GICv3: redistributor base and flags for this vCPU */
>          paddr_t rdist_base;
> -#define VGIC_V3_RDIST_LAST  (1 << 0)        /* last vCPU of the rdist */
> +#define VGIC_V3_RDIST_LAST      (1 << 0)        /* last vCPU of the rdist */
> +#define VGIC_V3_LPIS_ENABLED    (1 << 1)
>          uint8_t flags;
>      } vgic;
>  
> diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> index 836a103..42963c0 100644
> --- a/xen/include/asm-arm/gic.h
> +++ b/xen/include/asm-arm/gic.h
> @@ -366,6 +366,8 @@ struct gic_hw_operations {
>      int (*map_hwdom_extra_mappings)(struct domain *d);
>      /* Deny access to GIC regions */
>      int (*iomem_deny_access)(const struct domain *d);
> +    /* Handle LPIs, which require special handling */
> +    void (*do_LPI)(unsigned int lpi);
>  };
>  
>  void register_gic_ops(const struct gic_hw_operations *ops);
> diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
> index 29559a3..7470779 100644
> --- a/xen/include/asm-arm/gic_v3_its.h
> +++ b/xen/include/asm-arm/gic_v3_its.h
> @@ -134,6 +134,8 @@ void gicv3_its_dt_init(const struct dt_device_node *node);
>  
>  bool gicv3_its_host_has_its(void);
>  
> +void gicv3_do_LPI(unsigned int lpi);
> +
>  int gicv3_lpi_init_rdist(void __iomem * rdist_base);
>  
>  /* Initialize the host structures for LPIs and the host ITSes. */
> @@ -175,6 +177,12 @@ static inline bool gicv3_its_host_has_its(void)
>      return false;
>  }
>  
> +static inline void gicv3_do_LPI(unsigned int lpi)
> +{
> +    /* We don't enable LPIs without an ITS. */
> +    BUG();
> +}
> +
>  static inline int gicv3_lpi_init_rdist(void __iomem * rdist_base)
>  {
>      return -ENODEV;
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index c9075a9..7efa164 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -72,6 +72,7 @@ struct pending_irq
>  #define GIC_INVALID_LR         (uint8_t)~0
>      uint8_t lr;
>      uint8_t priority;
> +    uint8_t lpi_priority;       /* Caches the priority if this is an LPI. */
>      /* inflight is used to append instances of pending_irq to
>       * vgic.inflight_irqs */
>      struct list_head inflight;
> @@ -136,6 +137,7 @@ struct vgic_ops {
>      bool (*emulate_reg)(struct cpu_user_regs *regs, union hsr hsr);
>      /* lookup the struct pending_irq for a given LPI interrupt */
>      struct pending_irq *(*lpi_to_pending)(struct domain *d, unsigned int vlpi);
> +    int (*lpi_get_priority)(struct domain *d, uint32_t vlpi);
>      /* Maximum number of vCPU supported */
>      const unsigned int max_vcpus;
>  };
> -- 
> 2.9.0
> 
> 

* Re: [PATCH v9 11/28] ARM: VGIC: add vcpu_id to struct pending_irq
  2017-05-16 12:31   ` Julien Grall
@ 2017-05-22 22:15     ` Stefano Stabellini
  2017-05-23  9:49       ` Andre Przywara
  0 siblings, 1 reply; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-22 22:15 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Vijay Kilari, Andre Przywara, Vijaya Kumar K,
	xen-devel, Shanker Donthineni

On Tue, 16 May 2017, Julien Grall wrote:
> Hi Andre,
> 
> On 11/05/17 18:53, Andre Przywara wrote:
> > The target CPU for an LPI is encoded in the interrupt translation table
> > entry, so can't be easily derived from just an LPI number (short of
> > walking *all* tables and find the matching LPI).
> > To avoid this in case we need to know the VCPU (for the INVALL command,
> > for instance), put the VCPU ID in the struct pending_irq, so that it is
> > easily accessible.
> > We use the remaining 8 bits of padding space for that to avoid enlarging
> > the size of struct pending_irq. The number of VCPUs is limited to 127
> > at the moment anyway, which we also confirm with a BUILD_BUG_ON.
> > 
> > Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> > ---
> >  xen/arch/arm/vgic.c        | 3 +++
> >  xen/include/asm-arm/vgic.h | 1 +
> >  2 files changed, 4 insertions(+)
> > 
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 27d6b51..97a2cf2 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -63,6 +63,9 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v,
> > unsigned int irq)
> > 
> >  void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
> >  {
> > +    /* The lpi_vcpu_id field must be big enough to hold a VCPU ID. */
> > +    BUILD_BUG_ON(BIT(sizeof(p->lpi_vcpu_id) * 8) < MAX_VIRT_CPUS);
> > +
> >      INIT_LIST_HEAD(&p->inflight);
> >      INIT_LIST_HEAD(&p->lr_queue);
> >      p->irq = virq;
> > diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> > index e2111a5..02732db 100644
> > --- a/xen/include/asm-arm/vgic.h
> > +++ b/xen/include/asm-arm/vgic.h
> > @@ -73,6 +73,7 @@ struct pending_irq
> >      uint8_t lr;
> >      uint8_t priority;
> >      uint8_t lpi_priority;       /* Caches the priority if this is an LPI.
> > */
> > +    uint8_t lpi_vcpu_id;        /* The VCPU for an LPI. */
> 
> Based on the previous patch (#10), I was expecting to see this new field
> initialized in vgic_init_pending_irq.

right, it should be initialized to INVALID_VCPU_ID
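[Editor's note: a minimal, self-contained sketch of the fix agreed above. The struct layout, MAX_VIRT_CPUS value and the INVALID_VCPU_ID sentinel are illustrative stand-ins, not the actual Xen definitions.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins only: the real Xen definitions differ. */
#define MAX_VIRT_CPUS   128
#define INVALID_VCPU_ID 0xffU          /* assumed sentinel fitting in uint8_t */

struct pending_irq {
    unsigned int irq;
    uint8_t lpi_vcpu_id;               /* the VCPU for an LPI */
};

/* The BUILD_BUG_ON from the patch, expressed as a C11 static assert:
 * an 8-bit field can hold any valid VCPU ID. */
_Static_assert((1u << (sizeof(uint8_t) * 8)) >= MAX_VIRT_CPUS,
               "lpi_vcpu_id too small for MAX_VIRT_CPUS");

/* Sketch of the agreed fix: initialize lpi_vcpu_id to the sentinel,
 * so an unassigned LPI is never mistaken for one targeting VCPU 0. */
static void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
{
    memset(p, 0, sizeof(*p));
    p->irq = virq;
    p->lpi_vcpu_id = INVALID_VCPU_ID;
}
```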


> >      /* inflight is used to append instances of pending_irq to
> >       * vgic.inflight_irqs */
> >      struct list_head inflight;
> > 
> 
> Cheers,
> 
> -- 
> Julien Grall
> 


* Re: [PATCH v9 12/28] ARM: vGIC: advertise LPI support
  2017-05-16 13:03   ` Julien Grall
@ 2017-05-22 22:19     ` Stefano Stabellini
  2017-05-23 10:49       ` Julien Grall
  2017-05-23 17:23     ` Andre Przywara
  1 sibling, 1 reply; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-22 22:19 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Vijay Kilari, Andre Przywara, Vijaya Kumar K,
	xen-devel, Shanker Donthineni

On Tue, 16 May 2017, Julien Grall wrote:
> > @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu
> > *v, mmio_info_t *info,
> >      switch ( gicr_reg )
> >      {
> >      case VREG32(GICR_CTLR):
> > -        /* LPI's not implemented */
> > -        goto write_ignore_32;
> > +    {
> > +        unsigned long flags;
> > +
> > +        if ( !v->domain->arch.vgic.has_its )
> > +            goto write_ignore_32;
> > +        if ( dabt.size != DABT_WORD ) goto bad_width;
> > +
> > +        vgic_lock(v);                   /* protects rdists_enabled */
> 
> Getting back to the locking. I don't see any place where we get the domain
> vgic lock before vCPU vgic lock. So this raises the question why this ordering
> and not moving this lock into vgic_vcpu_enable_lpis.
> 
> At least this require documentation in the code and explanation in the commit
> message.

It doesn't look like we need to take the v->arch.vgic.lock here. What is
it protecting?


> > +        spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > +
> > +        /* LPIs can only be enabled once, but never disabled again. */
> > +        if ( (r & GICR_CTLR_ENABLE_LPIS) &&
> > +             !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
> > +            vgic_vcpu_enable_lpis(v);
> > +
> > +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > +        vgic_unlock(v);
> > +
> > +        return 1;
> > +    }
> > 
> >      case VREG32(GICR_IIDR):
> >          /* RO */


* Re: [PATCH v9 13/28] ARM: vITS: add command handling stub and MMIO emulation
  2017-05-11 17:53 ` [PATCH v9 13/28] ARM: vITS: add command handling stub and MMIO emulation Andre Przywara
  2017-05-16 15:24   ` Julien Grall
  2017-05-17 16:16   ` Julien Grall
@ 2017-05-22 22:32   ` Stefano Stabellini
  2017-05-23 10:54     ` Julien Grall
  2 siblings, 1 reply; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-22 22:32 UTC (permalink / raw)
  To: Andre Przywara
  Cc: Stefano Stabellini, Vijay Kilari, Vijaya Kumar K, Julien Grall,
	xen-devel, Shanker Donthineni

On Thu, 11 May 2017, Andre Przywara wrote:
> +    case VREG64(GITS_CWRITER):
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +
> +        reg = its->cwriter;
> +        *r = vgic_reg64_extract(reg, info);

Why is this not protected by a lock? Also from the comment above I
cannot tell if it should be protected by its_lock or by vcmd_lock. 


> +        break;
> +    case VREG64(GITS_CREADR):
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +
> +        reg = its->creadr;
> +        *r = vgic_reg64_extract(reg, info);
> +        break;

Same here


> +    case VRANGE64(0x0098, 0x00F8):
> +        goto read_reserved;
> +    case VREG64(GITS_BASER0):           /* device table */
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +        spin_lock(&its->its_lock);
> +        *r = vgic_reg64_extract(its->baser_dev, info);
> +        spin_unlock(&its->its_lock);
> +        break;
> +    case VREG64(GITS_BASER1):           /* collection table */
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +        spin_lock(&its->its_lock);
> +        *r = vgic_reg64_extract(its->baser_coll, info);
> +        spin_unlock(&its->its_lock);
> +        break;
> +    case VRANGE64(GITS_BASER2, GITS_BASER7):
> +        goto read_as_zero_64;
> +    case VRANGE32(0x0140, 0xBFFC):
> +        goto read_reserved;
> +    case VRANGE32(0xC000, 0xFFCC):
> +        goto read_impl_defined;
> +    case VRANGE32(0xFFD0, 0xFFE4):
> +        goto read_impl_defined;
> +    case VREG32(GITS_PIDR2):
> +        if ( info->dabt.size != DABT_WORD ) goto bad_width;
> +        *r = vgic_reg32_extract(GIC_PIDR2_ARCH_GICv3, info);
> +        break;
> +    case VRANGE32(0xFFEC, 0xFFFC):
> +        goto read_impl_defined;
> +    default:
> +        printk(XENLOG_G_ERR
> +               "%pv: vGITS: unhandled read r%d offset %#04lx\n",
> +               v, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
> +        return 0;
> +    }
> +
> +    return 1;
> +
> +read_as_zero_64:
> +    if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +    *r = 0;
> +
> +    return 1;
> +
> +read_impl_defined:
> +    printk(XENLOG_G_DEBUG
> +           "%pv: vGITS: RAZ on implementation defined register offset %#04lx\n",
> +           v, info->gpa & 0xffff);
> +    *r = 0;
> +    return 1;
> +
> +read_reserved:
> +    printk(XENLOG_G_DEBUG
> +           "%pv: vGITS: RAZ on reserved register offset %#04lx\n",
> +           v, info->gpa & 0xffff);
> +    *r = 0;
> +    return 1;
> +
> +bad_width:
> +    printk(XENLOG_G_ERR "vGITS: bad read width %d r%d offset %#04lx\n",
> +           info->dabt.size, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
> +    domain_crash_synchronous();
> +
> +    return 0;
> +}
> +
> +/******************************
> + * ITS registers write access *
> + ******************************/
> +
> +static unsigned int its_baser_table_size(uint64_t baser)
> +{
> +    unsigned int ret, page_size[4] = {SZ_4K, SZ_16K, SZ_64K, SZ_64K};
> +
> +    ret = page_size[(baser >> GITS_BASER_PAGE_SIZE_SHIFT) & 3];
> +
> +    return ret * ((baser & GITS_BASER_SIZE_MASK) + 1);
> +}
> +
> +static unsigned int its_baser_nr_entries(uint64_t baser)
> +{
> +    unsigned int entry_size = GITS_BASER_ENTRY_SIZE(baser);
> +
> +    return its_baser_table_size(baser) / entry_size;
> +}
> +
> +/* Must be called with the ITS lock held. */
> +static bool vgic_v3_verify_its_status(struct virt_its *its, bool status)
> +{
> +    ASSERT(spin_is_locked(&its->its_lock));
> +
> +    if ( !status )
> +        return false;
> +
> +    if ( !(its->cbaser & GITS_VALID_BIT) ||
> +         !(its->baser_dev & GITS_VALID_BIT) ||
> +         !(its->baser_coll & GITS_VALID_BIT) )
> +    {
> +        printk(XENLOG_G_WARNING "d%d tried to enable ITS without having the tables configured.\n",
> +               its->d->domain_id);
> +        return false;
> +    }
> +
> +    return true;
> +}
> +
> +static void sanitize_its_base_reg(uint64_t *reg)
> +{
> +    uint64_t r = *reg;
> +
> +    /* Avoid outer shareable. */
> +    switch ( (r >> GITS_BASER_SHAREABILITY_SHIFT) & 0x03 )
> +    {
> +    case GIC_BASER_OuterShareable:
> +        r &= ~GITS_BASER_SHAREABILITY_MASK;
> +        r |= GIC_BASER_InnerShareable << GITS_BASER_SHAREABILITY_SHIFT;
> +        break;
> +    default:
> +        break;
> +    }
> +
> +    /* Avoid any inner non-cacheable mapping. */
> +    switch ( (r >> GITS_BASER_INNER_CACHEABILITY_SHIFT) & 0x07 )
> +    {
> +    case GIC_BASER_CACHE_nCnB:
> +    case GIC_BASER_CACHE_nC:
> +        r &= ~GITS_BASER_INNER_CACHEABILITY_MASK;
> +        r |= GIC_BASER_CACHE_RaWb << GITS_BASER_INNER_CACHEABILITY_SHIFT;
> +        break;
> +    default:
> +        break;
> +    }
> +
> +    /* Only allow non-cacheable or same-as-inner. */
> +    switch ( (r >> GITS_BASER_OUTER_CACHEABILITY_SHIFT) & 0x07 )
> +    {
> +    case GIC_BASER_CACHE_SameAsInner:
> +    case GIC_BASER_CACHE_nC:
> +        break;
> +    default:
> +        r &= ~GITS_BASER_OUTER_CACHEABILITY_MASK;
> +        r |= GIC_BASER_CACHE_nC << GITS_BASER_OUTER_CACHEABILITY_SHIFT;
> +        break;
> +    }
> +
> +    *reg = r;
> +}
> +
> +static int vgic_v3_its_mmio_write(struct vcpu *v, mmio_info_t *info,
> +                                  register_t r, void *priv)
> +{
> +    struct domain *d = v->domain;
> +    struct virt_its *its = priv;
> +    uint64_t reg;
> +    uint32_t reg32;
> +
> +    switch ( info->gpa & 0xffff )
> +    {
> +    case VREG32(GITS_CTLR):
> +    {
> +        uint32_t ctlr;
> +
> +        if ( info->dabt.size != DABT_WORD ) goto bad_width;
> +
> +        /*
> +         * We need to take the vcmd_lock to prevent a guest from disabling
> +         * the ITS while commands are still processed.
> +         */
> +        spin_lock(&its->vcmd_lock);
> +        spin_lock(&its->its_lock);
> +        ctlr = its->enabled ? GITS_CTLR_ENABLE : 0;
> +        reg32 = ctlr;
> +        vgic_reg32_update(&reg32, r, info);
> +
> +        if ( ctlr ^ reg32 )
> +            its->enabled = vgic_v3_verify_its_status(its,
> +                                                     reg32 & GITS_CTLR_ENABLE);
> +        spin_unlock(&its->its_lock);
> +        spin_unlock(&its->vcmd_lock);
> +        return 1;
> +    }
> +
> +    case VREG32(GITS_IIDR):
> +        goto write_ignore_32;
> +    case VREG32(GITS_TYPER):
> +        goto write_ignore_32;
> +    case VRANGE32(0x0018, 0x001C):
> +        goto write_reserved;
> +    case VRANGE32(0x0020, 0x003C):
> +        goto write_impl_defined;
> +    case VRANGE32(0x0040, 0x007C):
> +        goto write_reserved;
> +    case VREG64(GITS_CBASER):
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +
> +        spin_lock(&its->its_lock);
> +        /* Changing base registers with the ITS enabled is UNPREDICTABLE. */
> +        if ( its->enabled )
> +        {
> +            spin_unlock(&its->its_lock);
> +            gdprintk(XENLOG_WARNING,
> +                     "vGITS: tried to change CBASER with the ITS enabled.\n");
> +            return 1;
> +        }
> +
> +        reg = its->cbaser;
> +        vgic_reg64_update(&reg, r, info);
> +        sanitize_its_base_reg(&reg);
> +
> +        its->cbaser = reg;
> +        its->creadr = 0;
> +        spin_unlock(&its->its_lock);
> +
> +        return 1;
> +
> +    case VREG64(GITS_CWRITER):
> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> +
> +        spin_lock(&its->vcmd_lock);
> +        reg = ITS_CMD_OFFSET(its->cwriter);
> +        vgic_reg64_update(&reg, r, info);
> +        its->cwriter = ITS_CMD_OFFSET(reg);
> +
> +        if ( its->enabled )
> +            if ( vgic_its_handle_cmds(d, its) )
> +                gdprintk(XENLOG_WARNING, "error handling ITS commands\n");
> +
> +        spin_unlock(&its->vcmd_lock);

OK, so it looks like the reads should be protected by vcmd_lock
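[Editor's note: to make the suggestion concrete, a hedged sketch of reading GITS_CREADR under vcmd_lock, mirroring what the CWRITER write path already does. The toy spinlock and trimmed-down struct are simplified stand-ins for the Xen ones.]

```c
#include <assert.h>
#include <stdint.h>

/* Toy spinlock standing in for Xen's spinlock_t (GCC/Clang builtins). */
typedef struct { volatile int held; } spinlock_t;
static void spin_lock(spinlock_t *l)   { while (__sync_lock_test_and_set(&l->held, 1)) ; }
static void spin_unlock(spinlock_t *l) { __sync_lock_release(&l->held); }

/* Stand-in for struct virt_its: vcmd_lock guards the command queue,
 * including the cwriter/creadr registers. */
struct virt_its {
    spinlock_t vcmd_lock;
    uint64_t cwriter, creadr;
};

/* Sketch: take vcmd_lock for the read, so a concurrent CWRITER write
 * (which advances creadr while processing commands) cannot be observed
 * half-done. */
static uint64_t its_read_creadr(struct virt_its *its)
{
    uint64_t reg;

    spin_lock(&its->vcmd_lock);
    reg = its->creadr;
    spin_unlock(&its->vcmd_lock);

    return reg;
}
```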


> +        return 1;
> +
> +    case VREG64(GITS_CREADR):


* Re: [PATCH v9 21/28] ARM: vITS: handle MAPTI command
  2017-05-11 17:53 ` [PATCH v9 21/28] ARM: vITS: handle MAPTI command Andre Przywara
  2017-05-18 14:04   ` Julien Grall
@ 2017-05-22 23:39   ` Stefano Stabellini
  2017-05-23 10:01     ` Andre Przywara
  1 sibling, 1 reply; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-22 23:39 UTC (permalink / raw)
  To: Andre Przywara
  Cc: Stefano Stabellini, Vijay Kilari, Vijaya Kumar K, Julien Grall,
	xen-devel, Shanker Donthineni

On Thu, 11 May 2017, Andre Przywara wrote:
> @@ -556,6 +583,93 @@ static int its_handle_mapd(struct virt_its *its, uint64_t *cmdptr)
>      return ret;
>  }
>  
> +static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
> +{
> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
> +    uint32_t eventid = its_cmd_get_id(cmdptr);
> +    uint32_t intid = its_cmd_get_physical_id(cmdptr), _intid;
> +    uint16_t collid = its_cmd_get_collection(cmdptr);
> +    struct pending_irq *pirq;
> +    struct vcpu *vcpu = NULL;
> +    int ret = -1;

I think we need to check that the eventid is valid, don't we?
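[Editor's note: a sketch of the kind of validity check being suggested, under the assumption that the device's event count (from its MAPD mapping) is available. Both names below are illustrative, not the actual Xen helpers.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative device entry: in the real code the number of valid events
 * would come from the size field of the MAPD command that mapped the
 * device. */
struct its_dev {
    uint32_t nr_events;
};

/* Reject event IDs beyond the range the device was mapped with, before
 * touching the interrupt translation table. */
static bool its_eventid_is_valid(const struct its_dev *dev, uint32_t eventid)
{
    return eventid < dev->nr_events;
}
```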


> +    if ( its_cmd_get_command(cmdptr) == GITS_CMD_MAPI )
> +        intid = eventid;
> +
> +    spin_lock(&its->its_lock);
> +    /*
> +     * Check whether there is a valid existing mapping. If yes, behavior is
> +     * unpredictable, we choose to ignore this command here.
> +     * This makes sure we start with a pristine pending_irq below.
> +     */
> +    if ( read_itte_locked(its, devid, eventid, &vcpu, &_intid) &&
> +         _intid != INVALID_LPI )
> +    {
> +        spin_unlock(&its->its_lock);
> +        return -1;
> +    }
> +
> +    /* Enter the mapping in our virtual ITS tables. */
> +    if ( !write_itte_locked(its, devid, eventid, collid, intid, &vcpu) )
> +    {
> +        spin_unlock(&its->its_lock);
> +        return -1;
> +    }
> +
> +    spin_unlock(&its->its_lock);
> +
> +    /*
> +     * Connect this virtual LPI to the corresponding host LPI, which is
> +     * determined by the same device ID and event ID on the host side.
> +     * This returns us the corresponding, still unused pending_irq.
> +     */
> +    pirq = gicv3_assign_guest_event(its->d, its->doorbell_address,
> +                                    devid, eventid, vcpu, intid);
> +    if ( !pirq )
> +        goto out_remove_mapping;
> +
> +    vgic_init_pending_irq(pirq, intid);
> +
> +    /*
> +     * Now read the guest's property table to initialize our cached state.
> +     * It can't fire at this time, because it is not known to the host yet.
> +     * We don't need the VGIC VCPU lock here, because the pending_irq isn't
> +     * in the radix tree yet.
> +     */
> +    ret = update_lpi_property(its->d, pirq);
> +    if ( ret )
> +        goto out_remove_host_entry;
> +
> +    pirq->lpi_vcpu_id = vcpu->vcpu_id;
> +    /*
> +     * Mark this LPI as new, so any older (now unmapped) LPI in any LR
> +     * can be easily recognised as such.
> +     */
> +    set_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &pirq->status);
> +
> +    /*
> +     * Now insert the pending_irq into the domain's LPI tree, so that
> +     * it becomes live.
> +     */
> +    write_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
> +    ret = radix_tree_insert(&its->d->arch.vgic.pend_lpi_tree, intid, pirq);
> +    write_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
> +
> +    if ( !ret )
> +        return 0;
> +
> +out_remove_host_entry:
> +    gicv3_remove_guest_event(its->d, its->doorbell_address, devid, eventid);
> +
> +out_remove_mapping:
> +    spin_lock(&its->its_lock);
> +    write_itte_locked(its, devid, eventid,
> +                      UNMAPPED_COLLECTION, INVALID_LPI, NULL);
> +    spin_unlock(&its->its_lock);
> +
> +    return ret;
> +}


* Re: [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs
  2017-05-20  1:25   ` Stefano Stabellini
@ 2017-05-22 23:48     ` Stefano Stabellini
  2017-05-23 11:10       ` Julien Grall
  2017-05-23 14:41     ` Andre Przywara
  1 sibling, 1 reply; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-22 23:48 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Vijay Kilari, Andre Przywara, Vijaya Kumar K, Julien Grall,
	xen-devel, Shanker Donthineni

On Fri, 19 May 2017, Stefano Stabellini wrote:
> On Thu, 11 May 2017, Andre Przywara wrote:
> > When LPIs get unmapped by a guest, they might still be in some LR of
> > some VCPU. Nevertheless we remove the corresponding pending_irq
> > (possibly freeing it), and detect this case (irq_to_pending() returns
> > NULL) when the LR gets cleaned up later.
> > However a *new* LPI may get mapped with the same number while the old
> > LPI is *still* in some LR. To avoid getting the wrong state, we mark
> > every newly mapped LPI as PRISTINE, which means: has never been in an
> > LR before. If we detect the LPI in an LR anyway, it must have been an
> > older one, which we can simply retire.
> > Before inserting such a PRISTINE LPI into an LR, we must make sure that
> > it's not already in another LR, as the architecture forbids two
> > interrupts with the same virtual IRQ number on one CPU.
> > 
> > Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> > ---
> >  xen/arch/arm/gic.c         | 55 +++++++++++++++++++++++++++++++++++++++++-----
> >  xen/include/asm-arm/vgic.h |  6 +++++
> >  2 files changed, 56 insertions(+), 5 deletions(-)
> > 
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index fd3fa05..8bf0578 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -375,6 +375,8 @@ static inline void gic_set_lr(int lr, struct pending_irq *p,
> >  {
> >      ASSERT(!local_irq_is_enabled());
> >  
> > +    clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status);
> > +
> >      gic_hw_ops->update_lr(lr, p, state);
> >  
> >      set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> > @@ -442,12 +444,41 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
> >  #endif
> >  }
> >  
> > +/*
> > + * Find an unused LR to insert an IRQ into. If this new interrupt is a
> > + * PRISTINE LPI, scan the other LRs to avoid inserting the same IRQ twice.
> > + */
> > +static int gic_find_unused_lr(struct vcpu *v, struct pending_irq *p, int lr)
> > +{
> > +    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
> > +    unsigned long *lr_mask = (unsigned long *) &this_cpu(lr_mask);
> > +    struct gic_lr lr_val;
> > +
> > +    ASSERT(spin_is_locked(&v->arch.vgic.lock));
> > +
> > +    if ( test_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
> 
> Maybe we should add an "unlikely".
> 
> I can see how this would be OKish at runtime, but at boot time there
> might be a bunch of PRISTINE_LPIs, but no MAPDs have been issued yet,
> right?
> 
> I have a suggestion, I'll leave it to you and Julien if you want to do
> this now, or maybe consider it as a TODO item. I am OK either way (I
> don't want to delay the ITS any longer).
> 
> I am thinking we should do this scanning only after at least one MAPD
> has been issued for a given cpu at least once. I would resurrect the
> idea of a DISCARD flag, but not on the pending_irq, that I believe it's
> difficult to handle, but a single global DISCARD flag per struct vcpu.
> 
> On MAPD, we set DISCARD for the target vcpu of the LPI we are dropping.
> Next time we want to inject a PRISTINE_IRQ on that cpu interface, we
> scan all LRs for interrupts with a NULL pending_irq. We remove those
> from LRs, then we remove the DISCARD flag.
> 
> Do you think it would work?

This would need to be done not only on MAPD but also on DISCARD (and on
any other commands that would cause a pending_irq - vLPI mapping to be
dropped).
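[Editor's note: a rough, purely illustrative sketch of the per-vCPU DISCARD idea described above. All names, the fixed LR count, and the NULL-pending_irq convention for a stale LR are assumptions made for this sketch, not existing Xen code.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NR_LRS 4

/* Stand-ins: an in-use LR whose pending_irq pointer is NULL holds a
 * stale, already-unmapped LPI. */
struct lr_state {
    void *pirq;      /* NULL => the vLPI behind this LR was unmapped */
    bool in_use;
};

struct vcpu_vgic {
    struct lr_state lrs[NR_LRS];
    bool discard;    /* set by MAPD/DISCARD when a vLPI mapping is dropped */
};

/* Sketch of the proposal: instead of scanning the LRs on every PRISTINE
 * LPI injection, scan only when the per-vCPU DISCARD flag is set, retire
 * the stale entries once, then clear the flag. */
static void purge_stale_lrs(struct vcpu_vgic *v)
{
    if ( !v->discard )
        return;

    for ( int i = 0; i < NR_LRS; i++ )
        if ( v->lrs[i].in_use && v->lrs[i].pirq == NULL )
            v->lrs[i].in_use = false;

    v->discard = false;
}
```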


> > +    {
> > +        int used_lr = 0;
> > +
> > +        while ( (used_lr = find_next_bit(lr_mask, nr_lrs, used_lr)) < nr_lrs )
> > +        {
> > +            gic_hw_ops->read_lr(used_lr, &lr_val);
> > +            if ( lr_val.virq == p->irq )
> > +                return used_lr;
> > +        }
> > +    }
> > +
> > +    lr = find_next_zero_bit(lr_mask, nr_lrs, lr);
> > +
> > +    return lr;
> > +}
> > +
> >  void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
> >          unsigned int priority)
> >  {
> > -    int i;
> > -    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
> >      struct pending_irq *p = irq_to_pending(v, virtual_irq);
> > +    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
> > +    int i = nr_lrs;
> >  
> >      ASSERT(spin_is_locked(&v->arch.vgic.lock));
> >  
> > @@ -457,7 +488,8 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
> >  
> >      if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
> >      {
> > -        i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
> > +        i = gic_find_unused_lr(v, p, 0);
> > +
> >          if (i < nr_lrs) {
> >              set_bit(i, &this_cpu(lr_mask));
> >              gic_set_lr(i, p, GICH_LR_PENDING);
> > @@ -509,7 +541,17 @@ static void gic_update_one_lr(struct vcpu *v, int i)
> >      }
> >      else if ( lr_val.state & GICH_LR_PENDING )
> >      {
> > -        int q __attribute__ ((unused)) = test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
> > +        int q __attribute__ ((unused));
> > +
> > +        if ( test_and_clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
> > +        {
> > +            gic_hw_ops->clear_lr(i);
> > +            clear_bit(i, &this_cpu(lr_mask));
> > +
> > +            return;
> > +        }
> > +
> > +        q = test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
> >  #ifdef GIC_DEBUG
> >          if ( q )
> >              gdprintk(XENLOG_DEBUG, "trying to inject irq=%d into d%dv%d, when it is already pending in LR%d\n",
> > @@ -521,6 +563,9 @@ static void gic_update_one_lr(struct vcpu *v, int i)
> >          gic_hw_ops->clear_lr(i);
> >          clear_bit(i, &this_cpu(lr_mask));
> >  
> > +        if ( test_and_clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
> > +            return;
> >          if ( p->desc != NULL )
> >              clear_bit(_IRQ_INPROGRESS, &p->desc->status);
> >          clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> > @@ -591,7 +636,7 @@ static void gic_restore_pending_irqs(struct vcpu *v)
> >      inflight_r = &v->arch.vgic.inflight_irqs;
> >      list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
> >      {
> > -        lr = find_next_zero_bit(&this_cpu(lr_mask), nr_lrs, lr);
> > +        lr = gic_find_unused_lr(v, p, lr);
> >          if ( lr >= nr_lrs )
> >          {
> >              /* No more free LRs: find a lower priority irq to evict */
> > diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> > index 02732db..3fc4ceb 100644
> > --- a/xen/include/asm-arm/vgic.h
> > +++ b/xen/include/asm-arm/vgic.h
> > @@ -60,12 +60,18 @@ struct pending_irq
> >       * vcpu while it is still inflight and on an GICH_LR register on the
> >       * old vcpu.
> >       *
> > +     * GIC_IRQ_GUEST_PRISTINE_LPI: the IRQ is a newly mapped LPI, which
> > +     * has never been in an LR before. This means that any trace of an
> > +     * LPI with the same number in an LR must be from an older LPI, which
> > +     * has been unmapped before.
> > +     *
> >       */
> >  #define GIC_IRQ_GUEST_QUEUED   0
> >  #define GIC_IRQ_GUEST_ACTIVE   1
> >  #define GIC_IRQ_GUEST_VISIBLE  2
> >  #define GIC_IRQ_GUEST_ENABLED  3
> >  #define GIC_IRQ_GUEST_MIGRATING   4
> > +#define GIC_IRQ_GUEST_PRISTINE_LPI  5
> >      unsigned long status;
> >      struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
> >      unsigned int irq;
> > -- 
> > 2.9.0
> > 
> > 
> 


* Re: [PATCH v9 24/28] ARM: vITS: handle INV command
  2017-05-11 17:53 ` [PATCH v9 24/28] ARM: vITS: handle INV command Andre Przywara
@ 2017-05-23  0:01   ` Stefano Stabellini
  0 siblings, 0 replies; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-23  0:01 UTC (permalink / raw)
  To: Andre Przywara
  Cc: Stefano Stabellini, Vijay Kilari, Vijaya Kumar K, Julien Grall,
	xen-devel, Shanker Donthineni

On Thu, 11 May 2017, Andre Przywara wrote:
> The INV command instructs the ITS to update the configuration data for
> a given LPI by re-reading its entry from the property table.
> We don't need to care so much about the priority value, but enabling
> or disabling an LPI has some effect: We remove or push virtual LPIs
> to their VCPUs, also check the virtual pending bit if an LPI gets enabled.
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/vgic-v3-its.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 70 insertions(+)
> 
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index f7a8d77..6cfb560 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -456,6 +456,73 @@ static int update_lpi_property(struct domain *d, struct pending_irq *p)
>      return 0;
>  }
>  
> +/*
> + * Checks whether an LPI that got enabled or disabled needs to change
> + * something in the VGIC (added or removed from the LR or queues).
> + * Must be called with the VCPU VGIC lock held.
> + */
> +static void update_lpi_vgic_status(struct vcpu *v, struct pending_irq *p)
> +{
> +    ASSERT(spin_is_locked(&v->arch.vgic.lock));
> +
> +    if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
> +    {
> +        if ( !list_empty(&p->inflight) &&
> +             !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
> +            gic_raise_guest_irq(v, p->irq, p->lpi_priority);
> +    }
> +    else
> +    {
> +        clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);

This is useless: we get here if !test_bit(GIC_IRQ_GUEST_ENABLED)


> +        list_del_init(&p->lr_queue);

Please call gic_remove_from_queues (or a related function that doesn't
take a lock).

You might want to add a few words on why we are not disabling the
underlying physical LPI here.


> +    }
> +}
> +
> +static int its_handle_inv(struct virt_its *its, uint64_t *cmdptr)
> +{
> +    struct domain *d = its->d;
> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
> +    uint32_t eventid = its_cmd_get_id(cmdptr);
> +    struct pending_irq *p;
> +    unsigned long flags;
> +    struct vcpu *vcpu;
> +    uint32_t vlpi;
> +    int ret = -1;
> +
> +    spin_lock(&its->its_lock);
> +
> +    /* Translate the event into a vCPU/vLPI pair. */
> +    if ( !read_itte_locked(its, devid, eventid, &vcpu, &vlpi) )
> +        goto out_unlock_its;
> +
> +    if ( vlpi == INVALID_LPI )
> +        goto out_unlock_its;
> +
> +    p = gicv3_its_get_event_pending_irq(d, its->doorbell_address,
> +                                        devid, eventid);
> +    if ( unlikely(!p) )
> +        goto out_unlock_its;
> +
> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
> +
> +    /* Read the property table and update our cached status. */
> +    if ( update_lpi_property(d, p) )
> +        goto out_unlock;
> +
> +    /* Check whether the LPI needs to go on a VCPU. */
> +    update_lpi_vgic_status(vcpu, p);
> +
> +    ret = 0;
> +
> +out_unlock:
> +    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
> +
> +out_unlock_its:
> +    spin_unlock(&its->its_lock);
> +
> +    return ret;
> +}
> +
>  /* Must be called with the ITS lock held. */
>  static int its_discard_event(struct virt_its *its,
>                               uint32_t vdevid, uint32_t vevid)
> @@ -782,6 +849,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
>          case GITS_CMD_INT:
>              ret = its_handle_int(its, command);
>              break;
> +        case GITS_CMD_INV:
> +            ret = its_handle_inv(its, command);
> +            break;
>          case GITS_CMD_MAPC:
>              ret = its_handle_mapc(its, command);
>              break;


* Re: [PATCH v9 22/28] ARM: vITS: handle MOVI command
  2017-05-11 17:53 ` [PATCH v9 22/28] ARM: vITS: handle MOVI command Andre Przywara
  2017-05-18 14:17   ` Julien Grall
@ 2017-05-23  0:28   ` Stefano Stabellini
  1 sibling, 0 replies; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-23  0:28 UTC (permalink / raw)
  To: Andre Przywara
  Cc: Stefano Stabellini, Vijay Kilari, Vijaya Kumar K, Julien Grall,
	xen-devel, Shanker Donthineni

On Thu, 11 May 2017, Andre Przywara wrote:
> The MOVI command moves the interrupt affinity from one redistributor
> (read: VCPU) to another.
> For now migration of "live" LPIs is not yet implemented, but we store
> the changed affinity in the host LPI structure and in our virtual ITTE.
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  xen/arch/arm/gic-v3-its.c        | 30 ++++++++++++++++++++
>  xen/arch/arm/gic-v3-lpi.c        | 15 ++++++++++
>  xen/arch/arm/vgic-v3-its.c       | 59 ++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/gic_v3_its.h |  4 +++
>  4 files changed, 108 insertions(+)
> 
> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
> index 8a50f7d..f00597e 100644
> --- a/xen/arch/arm/gic-v3-its.c
> +++ b/xen/arch/arm/gic-v3-its.c
> @@ -915,6 +915,36 @@ struct pending_irq *gicv3_assign_guest_event(struct domain *d,
>      return pirq;
>  }
>  
> +/* Changes the target VCPU for a given host LPI assigned to a domain. */
> +int gicv3_lpi_change_vcpu(struct domain *d, paddr_t vdoorbell,
> +                          uint32_t vdevid, uint32_t veventid,
> +                          unsigned int vcpu_id)
> +{
> +    uint32_t host_lpi;
> +    struct its_device *dev;
> +
> +    spin_lock(&d->arch.vgic.its_devices_lock);
> +    dev = get_its_device(d, vdoorbell, vdevid);
> +    if ( dev )
> +        host_lpi = get_host_lpi(dev, veventid);
> +    else
> +        host_lpi = 0;
> +    spin_unlock(&d->arch.vgic.its_devices_lock);
> +
> +    if ( !host_lpi )
> +        return -ENOENT;
> +
> +    /*
> +     * TODO: This just changes the virtual affinity, the physical LPI
> +     * still stays on the same physical CPU.
> +     * Consider to move the physical affinity to the pCPU running the new
> +     * vCPU. However this requires scheduling a host ITS command.
> +     */
> +    gicv3_lpi_update_host_vcpuid(host_lpi, vcpu_id);
> +
> +    return 0;
> +}
> +
>  /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
>  void gicv3_its_dt_init(const struct dt_device_node *node)
>  {
> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
> index d427539..6af5ad9 100644
> --- a/xen/arch/arm/gic-v3-lpi.c
> +++ b/xen/arch/arm/gic-v3-lpi.c
> @@ -225,6 +225,21 @@ void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
>      write_u64_atomic(&hlpip->data, hlpi.data);
>  }
>  
> +int gicv3_lpi_update_host_vcpuid(uint32_t host_lpi, unsigned int vcpu_id)
> +{
> +    union host_lpi *hlpip;
> +
> +    ASSERT(host_lpi >= LPI_OFFSET);
> +
> +    host_lpi -= LPI_OFFSET;
> +
> +    hlpip = &lpi_data.host_lpis[host_lpi / HOST_LPIS_PER_PAGE][host_lpi % HOST_LPIS_PER_PAGE];
> +
> +    write_u16_atomic(&hlpip->vcpu_id, vcpu_id);
> +
> +    return 0;
> +}
> +
>  static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
>  {
>      uint64_t val;
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index c5c0e5e..ef7c78f 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -670,6 +670,59 @@ out_remove_mapping:
>      return ret;
>  }
>  
> +static int its_handle_movi(struct virt_its *its, uint64_t *cmdptr)
> +{
> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
> +    uint32_t eventid = its_cmd_get_id(cmdptr);
> +    uint16_t collid = its_cmd_get_collection(cmdptr);
> +    unsigned long flags;
> +    struct pending_irq *p;
> +    struct vcpu *ovcpu, *nvcpu;
> +    uint32_t vlpi;
> +    int ret = -1;
> +
> +    spin_lock(&its->its_lock);
> +    /* Check for a mapped LPI and get the LPI number. */
> +    if ( !read_itte_locked(its, devid, eventid, &ovcpu, &vlpi) )
> +        goto out_unlock;
> +
> +    if ( vlpi == INVALID_LPI )
> +        goto out_unlock;
> +
> +    /* Check the new collection ID and get the new VCPU pointer */
> +    nvcpu = get_vcpu_from_collection(its, collid);
> +    if ( !nvcpu )
> +        goto out_unlock;
> +
> +    p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
> +                                        devid, eventid);
> +    if ( unlikely(!p) )
> +        goto out_unlock;
> +
> +    spin_lock_irqsave(&ovcpu->arch.vgic.lock, flags);
> +
> +    /* Update our cached vcpu_id in the pending_irq. */
> +    p->lpi_vcpu_id = nvcpu->vcpu_id;

I think we need to call gicv3_lpi_update_host_vcpuid here.
gicv3_lpi_update_host_vcpuid and this line change the vcpu target: they
need to be called in a region that is protected by both its_lock and
v->arch.vgic.lock.

In addition, right before calling gicv3_lpi_update_host_vcpuid, we need
to change the target for a possible existing inflight interrupt, see
vgic_migrate_irq. We need to handle both the case where the vLPI is
inflight but not in an LR yet, which corresponds to the

  if ( !list_empty(&p->lr_queue) )

case in vgic_migrate_irq. In that case, we remove the struct pending_irq
from lr_queue and inflight of the old vcpu and add it to the list of the
new vcpu. More difficult is the case of a vLPI which is both inflight
and in an LR. In the code it corresponds to:

  if ( !list_empty(&p->inflight) )

and we need to set GIC_IRQ_GUEST_MIGRATING. In other words, we need to
call a function that is pretty much like vgic_migrate_irq but without
the irq_set_affinity calls that we cannot handle with LPIs yet. Or you
could just call vgic_migrate_irq making sure that irq_set_affinity does
nothing for LPIs for now.

In the past you replied that vgic_migrate_irq starts with:

if ( p->desc == NULL )

so it wouldn't work for LPIs. Of course, we need to change that check,
but overall the function can be made to work for LPIs as long as
irq_set_affinity does something sensible for them.

If you prefer to implement this after the vgic lock rework, add a TODO
comment, and maybe a BUG_ON(!list_empty(&p->inflight)).
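
The two cases described above could be sketched roughly as follows. This is a self-contained toy model, not the actual Xen implementation: the list helpers and struct layouts are simplified stand-ins, the locking is left out, and the function name vlpi_migrate() is made up for illustration.

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-ins for Xen's intrusive list machinery. */
struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h) { h->prev = h->next = h; }
static bool list_empty(const struct list_head *h) { return h->next == h; }
static void list_del(struct list_head *e)
{
    e->prev->next = e->next;
    e->next->prev = e->prev;
    list_init(e);
}
static void list_add_tail(struct list_head *e, struct list_head *h)
{
    e->prev = h->prev; e->next = h;
    h->prev->next = e; h->prev = e;
}

#define GIC_IRQ_GUEST_MIGRATING 4

struct pending_irq {
    unsigned long status;
    struct list_head inflight;  /* linked into a vcpu's inflight_irqs */
    struct list_head lr_queue;  /* queued for an LR, not in one yet */
};

struct vcpu {
    struct list_head inflight_irqs;
    struct list_head lr_pending;
};

/*
 * vgic_migrate_irq-like handling for a vLPI, without the p->desc /
 * irq_set_affinity parts: requeue on the new vcpu if the interrupt is
 * not in an LR yet, otherwise set GIC_IRQ_GUEST_MIGRATING and let the
 * LR maintenance code finish the move.
 */
static void vlpi_migrate(struct pending_irq *p, struct vcpu *nvcpu)
{
    if ( !list_empty(&p->lr_queue) )
    {
        list_del(&p->lr_queue);
        list_del(&p->inflight);
        list_add_tail(&p->lr_queue, &nvcpu->lr_pending);
        list_add_tail(&p->inflight, &nvcpu->inflight_irqs);
    }
    else if ( !list_empty(&p->inflight) )
        p->status |= 1UL << GIC_IRQ_GUEST_MIGRATING;
}
```

(The real code would of course run under the appropriate its_lock and vgic locks, which the toy model omits.)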


> +    spin_unlock_irqrestore(&ovcpu->arch.vgic.lock, flags);
> +
> +    /* Now store the new collection in the translation table. */
> +    if ( !write_itte_locked(its, devid, eventid, collid, vlpi, &nvcpu) )
> +        goto out_unlock;
> +
> +    spin_unlock(&its->its_lock);
> +
> +    /* TODO: lookup currently-in-guest virtual IRQs and migrate them? */
> +
> +    return gicv3_lpi_change_vcpu(its->d, its->doorbell_address,
> +                                 devid, eventid, nvcpu->vcpu_id);
> +
> +out_unlock:
> +    spin_unlock(&its->its_lock);
> +
> +    return ret;
> +}
> +
>  #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
>  #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
>  
> @@ -715,6 +768,12 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
>          case GITS_CMD_MAPTI:
>              ret = its_handle_mapti(its, command);
>              break;
> +        case GITS_CMD_MOVALL:
> +            gdprintk(XENLOG_G_INFO, "vGITS: ignoring MOVALL command\n");
> +            break;
> +        case GITS_CMD_MOVI:
> +            ret = its_handle_movi(its, command);
> +            break;
>          case GITS_CMD_SYNC:
>              /* We handle ITS commands synchronously, so we ignore SYNC. */
>              break;
> diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
> index 9c08cee..82d788c 100644
> --- a/xen/include/asm-arm/gic_v3_its.h
> +++ b/xen/include/asm-arm/gic_v3_its.h
> @@ -178,8 +178,12 @@ int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
>  struct pending_irq *gicv3_assign_guest_event(struct domain *d, paddr_t doorbell,
>                                               uint32_t devid, uint32_t eventid,
>                                               struct vcpu *v, uint32_t virt_lpi);
> +int gicv3_lpi_change_vcpu(struct domain *d, paddr_t doorbell,
> +                          uint32_t devid, uint32_t eventid,
> +                          unsigned int vcpu_id);
>  void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
>                                   unsigned int vcpu_id, uint32_t virt_lpi);
> +int gicv3_lpi_update_host_vcpuid(uint32_t host_lpi, unsigned int vcpu_id);
>  
>  #else
>  
> -- 
> 2.9.0
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> https://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 23/28] ARM: vITS: handle DISCARD command
  2017-05-22 17:20       ` Julien Grall
@ 2017-05-23  9:40         ` Andre Przywara
  0 siblings, 0 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-23  9:40 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 22/05/17 18:20, Julien Grall wrote:
> 
> 
> On 22/05/17 17:50, Andre Przywara wrote:
>> Hi,
> 
> Hi Andre,
> 
>> On 18/05/17 15:23, Julien Grall wrote:
>>> Hi Andre,
>>>
>>> On 11/05/17 18:53, Andre Przywara wrote:
>>>> The DISCARD command drops the connection between a DeviceID/EventID
>>>> and an LPI/collection pair.
>>>> We mark the respective structure entries as not allocated and make
>>>> sure that any queued IRQs are removed.
>>>>
>>>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>>>> ---
>>>>  xen/arch/arm/vgic-v3-its.c | 24 ++++++++++++++++++++++++
>>>>  1 file changed, 24 insertions(+)
>>>>
>>>> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
>>>> index ef7c78f..f7a8d77 100644
>>>> --- a/xen/arch/arm/vgic-v3-its.c
>>>> +++ b/xen/arch/arm/vgic-v3-its.c
>>>> @@ -723,6 +723,27 @@ out_unlock:
>>>>      return ret;
>>>>  }
>>>>
>>>> +static int its_handle_discard(struct virt_its *its, uint64_t *cmdptr)
>>>> +{
>>>> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
>>>> +    uint32_t eventid = its_cmd_get_id(cmdptr);
>>>> +    int ret;
>>>> +
>>>> +    spin_lock(&its->its_lock);
>>>> +
>>>> +    /* Remove from the radix tree and remove the host entry. */
>>>> +    ret = its_discard_event(its, devid, eventid);
>>>> +
>>>> +    /* Remove from the guest's ITTE. */
>>>> +    if ( ret || write_itte_locked(its, devid, eventid,
>>>> +                                  UNMAPPED_COLLECTION, INVALID_LPI,
>>>> NULL) )
>>>
>>> I am not sure to fully understand this if. If ret is not NULL you
>>> override it and never call write_itte_locked.
>>
>> If its_discard_event() succeeded above, then ret will be 0, in which
>> case we call write_itte_locked(). If that returns non-zero, this is an
>> error and we set ret to -1, otherwise (no error) it stays at zero.
> 
> But we want to carry the error from its_discard_event. No?

Ah right, this was so cleverly made that I even tricked myself ;-)

I changed this now to be more easily readable.
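
For reference, one shape the more readable variant could take, shown with stubbed-out helpers so the error flow can be followed in isolation (the stub behaviour is assumed: 0 on success from its_discard_event(), true on success from write_itte_locked(); the stubs themselves are made up):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Stand-ins for the real helpers; behaviour is controlled by the globals. */
static int discard_rc;
static bool write_ok;
static int its_discard_event_stub(void) { return discard_rc; }
static bool write_itte_locked_stub(void) { return write_ok; }

/*
 * Carry the error from its_discard_event(); only attempt (and only
 * report an error from) write_itte_locked() if the discard succeeded.
 */
static int handle_discard_sketch(void)
{
    int ret = its_discard_event_stub();

    if ( ret )
        return ret;

    if ( !write_itte_locked_stub() )
        return -ENOENT;  /* error code chosen for illustration only */

    return 0;
}
```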

Cheers,
Andre.


* Re: [PATCH v9 11/28] ARM: VGIC: add vcpu_id to struct pending_irq
  2017-05-22 22:15     ` Stefano Stabellini
@ 2017-05-23  9:49       ` Andre Przywara
  0 siblings, 0 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-23  9:49 UTC (permalink / raw)
  To: Stefano Stabellini, Julien Grall
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 22/05/17 23:15, Stefano Stabellini wrote:
> On Tue, 16 May 2017, Julien Grall wrote:
>> Hi Andre,
>>
>> On 11/05/17 18:53, Andre Przywara wrote:
>>> The target CPU for an LPI is encoded in the interrupt translation table
>>> entry, so can't be easily derived from just an LPI number (short of
>>> walking *all* tables and find the matching LPI).
>>> To avoid this in case we need to know the VCPU (for the INVALL command,
>>> for instance), put the VCPU ID in the struct pending_irq, so that it is
>>> easily accessible.
>>> We use the remaining 8 bits of padding space for that to avoid enlarging
>>> the size of struct pending_irq. The number of VCPUs is limited to 127
>>> at the moment anyway, which we also confirm with a BUILD_BUG_ON.
>>>
>>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>>> ---
>>>  xen/arch/arm/vgic.c        | 3 +++
>>>  xen/include/asm-arm/vgic.h | 1 +
>>>  2 files changed, 4 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>>> index 27d6b51..97a2cf2 100644
>>> --- a/xen/arch/arm/vgic.c
>>> +++ b/xen/arch/arm/vgic.c
>>> @@ -63,6 +63,9 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v,
>>> unsigned int irq)
>>>
>>>  void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
>>>  {
>>> +    /* The lpi_vcpu_id field must be big enough to hold a VCPU ID. */
>>> +    BUILD_BUG_ON(BIT(sizeof(p->lpi_vcpu_id) * 8) < MAX_VIRT_CPUS);
>>> +
>>>      INIT_LIST_HEAD(&p->inflight);
>>>      INIT_LIST_HEAD(&p->lr_queue);
>>>      p->irq = virq;
>>> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
>>> index e2111a5..02732db 100644
>>> --- a/xen/include/asm-arm/vgic.h
>>> +++ b/xen/include/asm-arm/vgic.h
>>> @@ -73,6 +73,7 @@ struct pending_irq
>>>      uint8_t lr;
>>>      uint8_t priority;
>>>      uint8_t lpi_priority;       /* Caches the priority if this is an LPI.
>>> */
>>> +    uint8_t lpi_vcpu_id;        /* The VCPU for an LPI. */
>>
>> Based on the previous patch (#10), I was expecting to see this new field
>> initialized in vgic_init_pending_irq.
> 
> right, it should be initialized to INVALID_VCPU_ID

That's right, I fixed that now.

Cheers,
Andre.

>>>      /* inflight is used to append instances of pending_irq to
>>>       * vgic.inflight_irqs */
>>>      struct list_head inflight;
>>>
>>
>> Cheers,
>>
>> -- 
>> Julien Grall
>>


* Re: [PATCH v9 21/28] ARM: vITS: handle MAPTI command
  2017-05-22 23:39   ` Stefano Stabellini
@ 2017-05-23 10:01     ` Andre Przywara
  2017-05-23 17:44       ` Stefano Stabellini
  0 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-23 10:01 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: xen-devel, Julien Grall, Shanker Donthineni, Vijaya Kumar K,
	Vijay Kilari

Hi,

On 23/05/17 00:39, Stefano Stabellini wrote:
> On Thu, 11 May 2017, Andre Przywara wrote:
>> @@ -556,6 +583,93 @@ static int its_handle_mapd(struct virt_its *its, uint64_t *cmdptr)
>>      return ret;
>>  }
>>  
>> +static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
>> +{
>> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
>> +    uint32_t eventid = its_cmd_get_id(cmdptr);
>> +    uint32_t intid = its_cmd_get_physical_id(cmdptr), _intid;
>> +    uint16_t collid = its_cmd_get_collection(cmdptr);
>> +    struct pending_irq *pirq;
>> +    struct vcpu *vcpu = NULL;
>> +    int ret = -1;
> 
> I think we need to check the eventid to be valid, don't we?

Yes, but this will be done below as part of write_itte_locked(), and in
fact already by read_itte_locked().
Shall I add a comment about this?

Cheers,
Andre

>> +    if ( its_cmd_get_command(cmdptr) == GITS_CMD_MAPI )
>> +        intid = eventid;
>> +
>> +    spin_lock(&its->its_lock);
>> +    /*
>> +     * Check whether there is a valid existing mapping. If yes, behavior is
>> +     * unpredictable, we choose to ignore this command here.
>> +     * This makes sure we start with a pristine pending_irq below.
>> +     */
>> +    if ( read_itte_locked(its, devid, eventid, &vcpu, &_intid) &&
>> +         _intid != INVALID_LPI )
>> +    {
>> +        spin_unlock(&its->its_lock);
>> +        return -1;
>> +    }
>> +
>> +    /* Enter the mapping in our virtual ITS tables. */
>> +    if ( !write_itte_locked(its, devid, eventid, collid, intid, &vcpu) )
>> +    {
>> +        spin_unlock(&its->its_lock);
>> +        return -1;
>> +    }
>> +
>> +    spin_unlock(&its->its_lock);
>> +
>> +    /*
>> +     * Connect this virtual LPI to the corresponding host LPI, which is
>> +     * determined by the same device ID and event ID on the host side.
>> +     * This returns us the corresponding, still unused pending_irq.
>> +     */
>> +    pirq = gicv3_assign_guest_event(its->d, its->doorbell_address,
>> +                                    devid, eventid, vcpu, intid);
>> +    if ( !pirq )
>> +        goto out_remove_mapping;
>> +
>> +    vgic_init_pending_irq(pirq, intid);
>> +
>> +    /*
>> +     * Now read the guest's property table to initialize our cached state.
>> +     * It can't fire at this time, because it is not known to the host yet.
>> +     * We don't need the VGIC VCPU lock here, because the pending_irq isn't
>> +     * in the radix tree yet.
>> +     */
>> +    ret = update_lpi_property(its->d, pirq);
>> +    if ( ret )
>> +        goto out_remove_host_entry;
>> +
>> +    pirq->lpi_vcpu_id = vcpu->vcpu_id;
>> +    /*
>> +     * Mark this LPI as new, so any older (now unmapped) LPI in any LR
>> +     * can be easily recognised as such.
>> +     */
>> +    set_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &pirq->status);
>> +
>> +    /*
>> +     * Now insert the pending_irq into the domain's LPI tree, so that
>> +     * it becomes live.
>> +     */
>> +    write_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
>> +    ret = radix_tree_insert(&its->d->arch.vgic.pend_lpi_tree, intid, pirq);
>> +    write_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
>> +
>> +    if ( !ret )
>> +        return 0;
>> +
>> +out_remove_host_entry:
>> +    gicv3_remove_guest_event(its->d, its->doorbell_address, devid, eventid);
>> +
>> +out_remove_mapping:
>> +    spin_lock(&its->its_lock);
>> +    write_itte_locked(its, devid, eventid,
>> +                      UNMAPPED_COLLECTION, INVALID_LPI, NULL);
>> +    spin_unlock(&its->its_lock);
>> +
>> +    return ret;
>> +}


* Re: [PATCH v9 12/28] ARM: vGIC: advertise LPI support
  2017-05-22 22:19     ` Stefano Stabellini
@ 2017-05-23 10:49       ` Julien Grall
  2017-05-23 17:47         ` Stefano Stabellini
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-23 10:49 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Andre Przywara, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni,
	xen-devel

Hi Stefano,

On 22/05/17 23:19, Stefano Stabellini wrote:
> On Tue, 16 May 2017, Julien Grall wrote:
>>> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu
>>> *v, mmio_info_t *info,
>>>      switch ( gicr_reg )
>>>      {
>>>      case VREG32(GICR_CTLR):
>>> -        /* LPI's not implemented */
>>> -        goto write_ignore_32;
>>> +    {
>>> +        unsigned long flags;
>>> +
>>> +        if ( !v->domain->arch.vgic.has_its )
>>> +            goto write_ignore_32;
>>> +        if ( dabt.size != DABT_WORD ) goto bad_width;
>>> +
>>> +        vgic_lock(v);                   /* protects rdists_enabled */
>>
>> Getting back to the locking. I don't see any place where we get the domain
>> vgic lock before vCPU vgic lock. So this raises the question why this ordering
>> and not moving this lock into vgic_vcpu_enable_lpis.
>>
>> At least this require documentation in the code and explanation in the commit
>> message.
>
> It doesn't look like we need to take the v->arch.vgic.lock here. What is
> it protecting?

The name of the function is a bit confusing. It does not take the vCPU 
vgic lock but the domain vgic lock.

I believe the vcpu is passed to avoid having v->domain in most of the 
callers. But we should probably rename the function.

In this case it protects vgic_vcpu_enable_lpis, because you can configure 
the number of LPIs per re-distributor but this is a domain-wide value. I 
know the spec is confusing on this.

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 13/28] ARM: vITS: add command handling stub and MMIO emulation
  2017-05-22 22:32   ` Stefano Stabellini
@ 2017-05-23 10:54     ` Julien Grall
  2017-05-23 17:43       ` Stefano Stabellini
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-23 10:54 UTC (permalink / raw)
  To: Stefano Stabellini, Andre Przywara
  Cc: xen-devel, Vijaya Kumar K, Shanker Donthineni, Vijay Kilari

Hi Stefano,

On 22/05/17 23:32, Stefano Stabellini wrote:
> On Thu, 11 May 2017, Andre Przywara wrote:
>> +    case VREG64(GITS_CWRITER):
>> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
>> +
>> +        reg = its->cwriter;
>> +        *r = vgic_reg64_extract(reg, info);
>
> Why is this not protected by a lock? Also from the comment above I
> cannot tell if it should be protected by its_lock or by vcmd_lock.

Because if you take the vcmd_lock, the vCPU will spin until we finish 
handling the command queue. This means a guest can potentially block 
multiple pCPUs for a long time.

In this case, cwriter can be read atomically as it was updated by the 
guest itself ...
>
>
>> +        break;
>> +    case VREG64(GITS_CREADR):
>> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
>> +
>> +        reg = its->creadr;
>> +        *r = vgic_reg64_extract(reg, info);
>> +        break;
>
> Same here

Here, the command queue handler writes to creadr atomically every time 
a command has been executed. Making this lockless also allows a domain 
to keep track of where we are in the command queue handling.

This is something we already discussed quite a few times. So we should 
probably have a comment in the code to avoid this question coming up again.
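
The single-writer pattern described here can be illustrated with a tiny self-contained model (hypothetical and heavily simplified: a flat 64-bit read pointer with no wrap-around at the end of the command queue, and C11 atomics standing in for Xen's atomic accessors):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define ITS_CMD_SIZE 32  /* an ITS command is 32 bytes */

/* Only the command-queue handler ever writes this. */
static _Atomic uint64_t creadr;

/* Handle commands up to cwriter, publishing progress after each one. */
static void handle_cmds(uint64_t cwriter)
{
    uint64_t r = atomic_load_explicit(&creadr, memory_order_relaxed);

    while ( r != cwriter )
    {
        /* ... emulate one command here ... */
        r += ITS_CMD_SIZE;
        /* Atomic store: a concurrent MMIO read sees a consistent value. */
        atomic_store_explicit(&creadr, r, memory_order_release);
    }
}

/* Guest MMIO read of GITS_CREADR: no vcmd_lock needed. */
static uint64_t read_creadr(void)
{
    return atomic_load_explicit(&creadr, memory_order_acquire);
}
```

Because there is exactly one writer (the command handler, serialised by vcmd_lock) and the reads are atomic, a guest polling GITS_CREADR can watch the emulated ITS make progress through the queue without ever contending on the lock.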

[...]

>> +    case VREG64(GITS_CWRITER):
>> +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
>> +
>> +        spin_lock(&its->vcmd_lock);
>> +        reg = ITS_CMD_OFFSET(its->cwriter);
>> +        vgic_reg64_update(&reg, r, info);
>> +        its->cwriter = ITS_CMD_OFFSET(reg);
>> +
>> +        if ( its->enabled )
>> +            if ( vgic_its_handle_cmds(d, its) )
>> +                gdprintk(XENLOG_WARNING, "error handling ITS commands\n");
>> +
>> +        spin_unlock(&its->vcmd_lock);
>
> OK, so it looks like the reads should be protected by vcmd_lock

See my comment above.

>
>
>> +        return 1;
>> +
>> +    case VREG64(GITS_CREADR):

-- 
Julien Grall


* Re: [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs
  2017-05-22 23:48     ` Stefano Stabellini
@ 2017-05-23 11:10       ` Julien Grall
  2017-05-23 18:23         ` Stefano Stabellini
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-23 11:10 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Andre Przywara, Vijaya Kumar K, Shanker Donthineni, Vijay Kilari,
	xen-devel

Hi Stefano,

On 23/05/17 00:48, Stefano Stabellini wrote:
> On Fri, 19 May 2017, Stefano Stabellini wrote:
>> On Thu, 11 May 2017, Andre Przywara wrote:
>>> When LPIs get unmapped by a guest, they might still be in some LR of
>>> some VCPU. Nevertheless we remove the corresponding pending_irq
>>> (possibly freeing it), and detect this case (irq_to_pending() returns
>>> NULL) when the LR gets cleaned up later.
>>> However a *new* LPI may get mapped with the same number while the old
>>> LPI is *still* in some LR. To avoid getting the wrong state, we mark
>>> every newly mapped LPI as PRISTINE, which means: has never been in an
>>> LR before. If we detect the LPI in an LR anyway, it must have been an
>>> older one, which we can simply retire.
>>> Before inserting such a PRISTINE LPI into an LR, we must make sure that
>>> it's not already in another LR, as the architecture forbids two
>>> interrupts with the same virtual IRQ number on one CPU.
>>>
>>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>>> ---
>>>  xen/arch/arm/gic.c         | 55 +++++++++++++++++++++++++++++++++++++++++-----
>>>  xen/include/asm-arm/vgic.h |  6 +++++
>>>  2 files changed, 56 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>>> index fd3fa05..8bf0578 100644
>>> --- a/xen/arch/arm/gic.c
>>> +++ b/xen/arch/arm/gic.c
>>> @@ -375,6 +375,8 @@ static inline void gic_set_lr(int lr, struct pending_irq *p,
>>>  {
>>>      ASSERT(!local_irq_is_enabled());
>>>
>>> +    clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status);
>>> +
>>>      gic_hw_ops->update_lr(lr, p, state);
>>>
>>>      set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
>>> @@ -442,12 +444,41 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
>>>  #endif
>>>  }
>>>
>>> +/*
>>> + * Find an unused LR to insert an IRQ into. If this new interrupt is a
>>> + * PRISTINE LPI, scan the other LRs to avoid inserting the same IRQ twice.
>>> + */
>>> +static int gic_find_unused_lr(struct vcpu *v, struct pending_irq *p, int lr)
>>> +{
>>> +    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
>>> +    unsigned long *lr_mask = (unsigned long *) &this_cpu(lr_mask);
>>> +    struct gic_lr lr_val;
>>> +
>>> +    ASSERT(spin_is_locked(&v->arch.vgic.lock));
>>> +
>>> +    if ( test_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
>>
>> Maybe we should add an "unlikely".
>>
>> I can see how this would be OKish at runtime, but at boot time there
>> might be a bunch of PRISTINE_LPIs, but no MAPDs have been issued yet,
>> right?

You cannot have any PRISTINE_LPIs without any MAPDs done. This bit will 
be set when you do the first MAPTI.

>>
>> I have a suggestion, I'll leave it to you and Julien if you want to do
>> this now, or maybe consider it as a TODO item. I am OK either way (I
>> don't want to delay the ITS any longer).
>>
>> I am thinking we should do this scanning only after at least one MAPD
>> has been issued for a given cpu at least once. I would resurrect the
>> idea of a DISCARD flag, but not on the pending_irq, that I believe it's
>> difficult to handle, but a single global DISCARD flag per struct vcpu.
>>
>> On MAPD, we set DISCARD for the target vcpu of the LPI we are dropping.
>> Next time we want to inject a PRISTINE_IRQ on that cpu interface, we
>> scan all LRs for interrupts with a NULL pending_irq. We remove those
>> from LRs, then we remove the DISCARD flag.
>>
>> Do you think it would work?

I don't understand the point of doing that. OK, you will get the first 
PRISTINE_LPI "fast" (though likely the LRs will be empty), but all the 
others will be "slow" (though likely the LRs will be empty too).

The pain of implementing your suggestion does not seem to be worth it so far.

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs
  2017-05-20  1:25   ` Stefano Stabellini
  2017-05-22 23:48     ` Stefano Stabellini
@ 2017-05-23 14:41     ` Andre Przywara
  1 sibling, 0 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-23 14:41 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: xen-devel, Julien Grall, Shanker Donthineni, Vijaya Kumar K,
	Vijay Kilari

Hi Stefano,

On 20/05/17 02:25, Stefano Stabellini wrote:
> On Thu, 11 May 2017, Andre Przywara wrote:
>> When LPIs get unmapped by a guest, they might still be in some LR of
>> some VCPU. Nevertheless we remove the corresponding pending_irq
>> (possibly freeing it), and detect this case (irq_to_pending() returns
>> NULL) when the LR gets cleaned up later.
>> However a *new* LPI may get mapped with the same number while the old
>> LPI is *still* in some LR. To avoid getting the wrong state, we mark
>> every newly mapped LPI as PRISTINE, which means: has never been in an
>> LR before. If we detect the LPI in an LR anyway, it must have been an
>> older one, which we can simply retire.
>> Before inserting such a PRISTINE LPI into an LR, we must make sure that
>> it's not already in another LR, as the architecture forbids two
>> interrupts with the same virtual IRQ number on one CPU.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>> ---
>>  xen/arch/arm/gic.c         | 55 +++++++++++++++++++++++++++++++++++++++++-----
>>  xen/include/asm-arm/vgic.h |  6 +++++
>>  2 files changed, 56 insertions(+), 5 deletions(-)
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index fd3fa05..8bf0578 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -375,6 +375,8 @@ static inline void gic_set_lr(int lr, struct pending_irq *p,
>>  {
>>      ASSERT(!local_irq_is_enabled());
>>  
>> +    clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status);
>> +
>>      gic_hw_ops->update_lr(lr, p, state);
>>  
>>      set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
>> @@ -442,12 +444,41 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
>>  #endif
>>  }
>>  
>> +/*
>> + * Find an unused LR to insert an IRQ into. If this new interrupt is a
>> + * PRISTINE LPI, scan the other LRs to avoid inserting the same IRQ twice.
>> + */
>> +static int gic_find_unused_lr(struct vcpu *v, struct pending_irq *p, int lr)
>> +{
>> +    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
>> +    unsigned long *lr_mask = (unsigned long *) &this_cpu(lr_mask);
>> +    struct gic_lr lr_val;
>> +
>> +    ASSERT(spin_is_locked(&v->arch.vgic.lock));
>> +
>> +    if ( test_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
> 
> Maybe we should add an "unlikely".
> 
> I can see how this would be OKish at runtime, but at boot time there
> might be a bunch of PRISTINE_LPIs,

What is your concern here, performance?
Let's put this into perspective:
- The PRISTINE bit gets set upon MAPTI, which Linux usually does *once*
when the driver gets loaded. It gets cleared after the first injection.
- If that happens, we scan all LRs. Most implementations have 4(!) of
them (ARM's GIC implementations, for instance), and the algorithm only
scans *used* LRs, so normally just one or two.
- Reading the LR is a *local* system register *read*, not an MMIO
access, and not propagated to other cores. Yes, this may be "costly"
(compared to other instructions), but it's probably still cheaper than a
page table walk (TLB miss) or L2 cache miss.

So to summarize: this is rare, iterates over only a very small number of
registers and is not hugely expensive.
At this point in time I would refrain from any kind of performance
optimization, at least until we have solved all the other issues and
have done some benchmarking/profiling (on different hardware platforms).
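
The cost argument can be seen in a small stand-alone model of the scan (assumed names: a single-word version of Xen's find_next_bit(), and 4 LRs as on typical ARM implementations):

```c
#include <assert.h>

#define NR_LRS 4

static unsigned long lr_mask;          /* bit i set => LR i is in use */
static unsigned int lr_virq[NR_LRS];   /* virq currently held by LR i */

/* Single-word stand-in for Xen's find_next_bit(). */
static int find_next_bit_word(unsigned long mask, int nbits, int start)
{
    for ( int i = start; i < nbits; i++ )
        if ( mask & (1UL << i) )
            return i;
    return nbits;
}

/*
 * Scan only the *used* LRs for a given virq: with 4 LRs and typically
 * one or two bits set, this is a handful of local register reads at most.
 */
static int find_lr_with_virq(unsigned int virq)
{
    int lr = 0;

    while ( (lr = find_next_bit_word(lr_mask, NR_LRS, lr)) < NR_LRS )
    {
        if ( lr_virq[lr] == virq )
            return lr;
        lr++;
    }

    return -1;
}
```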

> but no MAPDs have been issued yet, right?

As Julien already mentioned, this gets set after a MAPTI, which requires
a MAPD before.

Cheers,
Andre.

> I have a suggestion, I'll leave it to you and Julien if you want to do
> this now, or maybe consider it as a TODO item. I am OK either way (I
> don't want to delay the ITS any longer).
> 
> I am thinking we should do this scanning only after at least one MAPD
> has been issued for a given cpu at least once. I would resurrect the
> idea of a DISCARD flag, but not on the pending_irq, that I believe it's
> difficult to handle, but a single global DISCARD flag per struct vcpu.
> 
> On MAPD, we set DISCARD for the target vcpu of the LPI we are dropping.
> Next time we want to inject a PRISTINE_IRQ on that cpu interface, we
> scan all LRs for interrupts with a NULL pending_irq. We remove those
> from LRs, then we remove the DISCARD flag.
> 
> Do you think it would work?
> 
> 
>> +    {
>> +        int used_lr = 0;
>> +
>> +        while ( (used_lr = find_next_bit(lr_mask, nr_lrs, used_lr)) < nr_lrs )
>> +        {
>> +            gic_hw_ops->read_lr(used_lr, &lr_val);
>> +            if ( lr_val.virq == p->irq )
>> +                return used_lr;
>> +        }
>> +    }
>> +
>> +    lr = find_next_zero_bit(lr_mask, nr_lrs, lr);
>> +
>> +    return lr;
>> +}
>> +
>>  void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>>          unsigned int priority)
>>  {
>> -    int i;
>> -    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
>>      struct pending_irq *p = irq_to_pending(v, virtual_irq);
>> +    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
>> +    int i = nr_lrs;
>>  
>>      ASSERT(spin_is_locked(&v->arch.vgic.lock));
>>  
>> @@ -457,7 +488,8 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>>  
>>      if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
>>      {
>> -        i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
>> +        i = gic_find_unused_lr(v, p, 0);
>> +
>>          if (i < nr_lrs) {
>>              set_bit(i, &this_cpu(lr_mask));
>>              gic_set_lr(i, p, GICH_LR_PENDING);
>> @@ -509,7 +541,17 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>>      }
>>      else if ( lr_val.state & GICH_LR_PENDING )
>>      {
>> -        int q __attribute__ ((unused)) = test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
>> +        int q __attribute__ ((unused));
>> +
>> +        if ( test_and_clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
>> +        {
>> +            gic_hw_ops->clear_lr(i);
>> +            clear_bit(i, &this_cpu(lr_mask));
>> +
>> +            return;
>> +        }
>> +
>> +        q = test_and_clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
>>  #ifdef GIC_DEBUG
>>          if ( q )
>>              gdprintk(XENLOG_DEBUG, "trying to inject irq=%d into d%dv%d, when it is already pending in LR%d\n",
>> @@ -521,6 +563,9 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>>          gic_hw_ops->clear_lr(i);
>>          clear_bit(i, &this_cpu(lr_mask));
>>  
>> +        if ( test_and_clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
>> +            return;
>>          if ( p->desc != NULL )
>>              clear_bit(_IRQ_INPROGRESS, &p->desc->status);
>>          clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
>> @@ -591,7 +636,7 @@ static void gic_restore_pending_irqs(struct vcpu *v)
>>      inflight_r = &v->arch.vgic.inflight_irqs;
>>      list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
>>      {
>> -        lr = find_next_zero_bit(&this_cpu(lr_mask), nr_lrs, lr);
>> +        lr = gic_find_unused_lr(v, p, lr);
>>          if ( lr >= nr_lrs )
>>          {
>>              /* No more free LRs: find a lower priority irq to evict */
>> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
>> index 02732db..3fc4ceb 100644
>> --- a/xen/include/asm-arm/vgic.h
>> +++ b/xen/include/asm-arm/vgic.h
>> @@ -60,12 +60,18 @@ struct pending_irq
>>       * vcpu while it is still inflight and on an GICH_LR register on the
>>       * old vcpu.
>>       *
>> +     * GIC_IRQ_GUEST_PRISTINE_LPI: the IRQ is a newly mapped LPI, which
>> +     * has never been in an LR before. This means that any trace of an
>> +     * LPI with the same number in an LR must be from an older LPI, which
>> +     * has been unmapped before.
>> +     *
>>       */
>>  #define GIC_IRQ_GUEST_QUEUED   0
>>  #define GIC_IRQ_GUEST_ACTIVE   1
>>  #define GIC_IRQ_GUEST_VISIBLE  2
>>  #define GIC_IRQ_GUEST_ENABLED  3
>>  #define GIC_IRQ_GUEST_MIGRATING   4
>> +#define GIC_IRQ_GUEST_PRISTINE_LPI  5
>>      unsigned long status;
>>      struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
>>      unsigned int irq;
>> -- 
>> 2.9.0
>>
>>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 12/28] ARM: vGIC: advertise LPI support
  2017-05-16 13:03   ` Julien Grall
  2017-05-22 22:19     ` Stefano Stabellini
@ 2017-05-23 17:23     ` Andre Przywara
  1 sibling, 0 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-23 17:23 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 16/05/17 14:03, Julien Grall wrote:
> Hi Andre,
> 
> On 11/05/17 18:53, Andre Przywara wrote:
>> To let a guest know about the availability of virtual LPIs, set the
>> respective bits in the virtual GIC registers and let a guest control
>> the LPI enable bit.
>> Only report the LPI capability if the host has initialized at least
>> one ITS.
>> This removes a "TBD" comment, as we now populate the processor number
>> in the GICR_TYPE register.
> 
> s/GICR_TYPE/GICR_TYPER/
> 
> Also, I think it would be worth explaining that you populate
> GICR_TYPER.Processor_Number because the ITS will use it later on.
> 
>> Advertise 24 bits worth of LPIs to the guest.
> 
> Again this is not valid anymore. You said you would drop it on the
> previous version. So why has it not been done?
> 
>>
>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>> ---
>>  xen/arch/arm/vgic-v3.c | 70
>> ++++++++++++++++++++++++++++++++++++++++++++++----
>>  1 file changed, 65 insertions(+), 5 deletions(-)
>>
>> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
>> index 38c123c..6dbdb2e 100644
>> --- a/xen/arch/arm/vgic-v3.c
>> +++ b/xen/arch/arm/vgic-v3.c
>> @@ -170,8 +170,19 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct
>> vcpu *v, mmio_info_t *info,
>>      switch ( gicr_reg )
>>      {
>>      case VREG32(GICR_CTLR):
>> -        /* We have not implemented LPI's, read zero */
>> -        goto read_as_zero_32;
>> +    {
>> +        unsigned long flags;
>> +
>> +        if ( !v->domain->arch.vgic.has_its )
>> +            goto read_as_zero_32;
>> +        if ( dabt.size != DABT_WORD ) goto bad_width;
>> +
>> +        spin_lock_irqsave(&v->arch.vgic.lock, flags);
>> +        *r = vgic_reg32_extract(!!(v->arch.vgic.flags &
>> VGIC_V3_LPIS_ENABLED),
>> +                                info);
>> +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>> +        return 1;
>> +    }
>>
>>      case VREG32(GICR_IIDR):
>>          if ( dabt.size != DABT_WORD ) goto bad_width;
>> @@ -183,16 +194,20 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct
>> vcpu *v, mmio_info_t *info,
>>          uint64_t typer, aff;
>>
>>          if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
>> -        /* TBD: Update processor id in [23:8] when ITS support is
>> added */
>>          aff = (MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 3) << 56 |
>>                 MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 2) << 48 |
>>                 MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 1) << 40 |
>>                 MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 0) << 32);
>>          typer = aff;
>> +        /* We use the VCPU ID as the redistributor ID in bits[23:8] */
>> +        typer |= (v->vcpu_id & 0xffff) << 8;
> 
> Why the mask here? This sounds like a bug to me if vcpu_id does not fit
> in it, and you would make it worse with the mask.
> 
> But this is already addressed by max_vcpus in the vgic_ops. So please
> drop the pointless mask.
> 
> Lastly, I would have expected to try to address my remark everywhere
> regarding hardcoding offset. In this case,

Fixed.

>>
>>          if ( v->arch.vgic.flags & VGIC_V3_RDIST_LAST )
>>              typer |= GICR_TYPER_LAST;
>>
>> +        if ( v->domain->arch.vgic.has_its )
>> +            typer |= GICR_TYPER_PLPIS;
>> +
>>          *r = vgic_reg64_extract(typer, info);
>>
>>          return 1;
>> @@ -426,6 +441,28 @@ static uint64_t sanitize_pendbaser(uint64_t reg)
>>      return reg;
>>  }
>>
>> +static void vgic_vcpu_enable_lpis(struct vcpu *v)
>> +{
>> +    uint64_t reg = v->domain->arch.vgic.rdist_propbase;
>> +    unsigned int nr_lpis = BIT((reg & 0x1f) + 1);
>> +
>> +    /* rdists_enabled is protected by the domain lock. */
>> +    ASSERT(spin_is_locked(&v->domain->arch.vgic.lock));
>> +
>> +    if ( nr_lpis < LPI_OFFSET )
>> +        nr_lpis = 0;
>> +    else
>> +        nr_lpis -= LPI_OFFSET;
>> +
>> +    if ( !v->domain->arch.vgic.rdists_enabled )
>> +    {
>> +        v->domain->arch.vgic.nr_lpis = nr_lpis;
>> +        v->domain->arch.vgic.rdists_enabled = true;
>> +    }
>> +
>> +    v->arch.vgic.flags |= VGIC_V3_LPIS_ENABLED;
>> +}
>> +
>>  static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t
>> *info,
>>                                            uint32_t gicr_reg,
>>                                            register_t r)
>> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
>> vcpu *v, mmio_info_t *info,
>>      switch ( gicr_reg )
>>      {
>>      case VREG32(GICR_CTLR):
>> -        /* LPI's not implemented */
>> -        goto write_ignore_32;
>> +    {
>> +        unsigned long flags;
>> +
>> +        if ( !v->domain->arch.vgic.has_its )
>> +            goto write_ignore_32;
>> +        if ( dabt.size != DABT_WORD ) goto bad_width;
>> +
>> +        vgic_lock(v);                   /* protects rdists_enabled */
> 
> Getting back to the locking. I don't see any place where we get the
> domain vgic lock before vCPU vgic lock.

Because that seems to be the natural locking order, given that one
domain can have multiple VCPUs: We take the domain lock first, then the
VCPU lock. This *seems* to be documented in
xen/include/asm-arm/domain.h, where it says in a comment next to the
domain lock:
================
* If both class of lock is required then this lock must be
* taken first. ....
================

> So this raises the question why
> this ordering and not moving this lock into vgic_vcpu_enable_lpis.

Do you see any issues with that?

> At least this require documentation in the code and explanation in the
> commit message.

In this case I would try to comment on this, but would refrain from
proper locking order documentation (where?) until the rework.
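To illustrate the ordering Andre describes, here is a toy model (not Xen code; all names are made up for illustration) in which plain flags stand in for Xen's spinlocks, so the "domain vgic lock before per-vCPU vgic lock" rule can be checked:

```c
/* Toy model of the locking order: the domain-wide vgic lock is
 * always taken before a per-vCPU vgic lock, never the other way
 * around. Flags stand in for Xen's spinlocks; all names are
 * illustrative, not actual Xen symbols. */
#include <assert.h>
#include <stdbool.h>

struct toy_domain { bool vgic_locked; bool rdists_enabled; };
struct toy_vcpu   { struct toy_domain *d; bool vgic_locked; };

static void toy_lock_domain(struct toy_vcpu *v)
{
    /* Ordering rule: never take the domain lock while already
     * holding one of its vCPU locks. */
    assert(!v->vgic_locked);
    v->d->vgic_locked = true;
}

static void toy_lock_vcpu(struct toy_vcpu *v)
{
    v->vgic_locked = true;
}

/* Models the GICR_CTLR write handler: domain lock first, then the
 * vCPU lock, touch the domain-wide state, release in reverse order. */
static void toy_gicr_ctlr_write(struct toy_vcpu *v)
{
    toy_lock_domain(v);          /* protects rdists_enabled */
    toy_lock_vcpu(v);

    v->d->rdists_enabled = true; /* domain-wide, needs domain lock */

    v->vgic_locked = false;
    v->d->vgic_locked = false;
}
```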

>> +        spin_lock_irqsave(&v->arch.vgic.lock, flags);
>> +
>> +        /* LPIs can only be enabled once, but never disabled again. */
>> +        if ( (r & GICR_CTLR_ENABLE_LPIS) &&
>> +             !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
>> +            vgic_vcpu_enable_lpis(v);
>> +
>> +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>> +        vgic_unlock(v);
>> +
>> +        return 1;
>> +    }
>>
>>      case VREG32(GICR_IIDR):
>>          /* RO */
>> @@ -1058,6 +1113,11 @@ static int vgic_v3_distr_mmio_read(struct vcpu
>> *v, mmio_info_t *info,
>>          typer = ((ncpus - 1) << GICD_TYPE_CPUS_SHIFT |
>>                   DIV_ROUND_UP(v->domain->arch.vgic.nr_spis, 32));
>>
>> +        if ( v->domain->arch.vgic.has_its )
>> +        {
>> +            typer |= GICD_TYPE_LPIS;
>> +            irq_bits = v->domain->arch.vgic.intid_bits;
>> +        }
> 
> As I said on the previous version, I would have expected the field
> intid_bits to be used even if ITS is not enabled.
> 
> The current code make very difficult to understand the purpose of
> intid_bits and know it is only used when ITS is enabled.
> 
> intid_bits should correctly be initialized in vgic_v3_domain_init and
> directly used it.

OK, I changed this, removed the wrong irq_bits assignment above (this
number is independent of the number of actually implemented SPIs) and
am now always passing through the hardware value for Dom0 and "10" for
DomUs (to cover all SPIs; we don't need more right now for guests). And
yes, I added a TODO there as well ;-)

Cheers,
Andre.

> 
>>          typer |= (irq_bits - 1) << GICD_TYPE_ID_BITS_SHIFT;
>>
>>          *r = vgic_reg32_extract(typer, info);
>>
> 
> Cheers,
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 16/28] ARM: vITS: handle INT command
  2017-05-17 16:17   ` Julien Grall
@ 2017-05-23 17:24     ` Andre Przywara
  0 siblings, 0 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-23 17:24 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 17/05/17 17:17, Julien Grall wrote:
> Hi Andre,
> 
> On 11/05/17 18:53, Andre Przywara wrote:
>> The INT command sets a given LPI identified by a DeviceID/EventID pair
>> as pending and thus triggers it to be injected.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>> ---
>>  xen/arch/arm/vgic-v3-its.c | 21 +++++++++++++++++++++
>>  1 file changed, 21 insertions(+)
>>
>> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
>> index 12ec5f1..f9379c9 100644
>> --- a/xen/arch/arm/vgic-v3-its.c
>> +++ b/xen/arch/arm/vgic-v3-its.c
>> @@ -300,6 +300,24 @@ static uint64_t its_cmd_mask_field(uint64_t
>> *its_cmd, unsigned int word,
>>  #define its_cmd_get_validbit(cmd)       its_cmd_mask_field(cmd, 2,
>> 63,  1)
>>  #define its_cmd_get_ittaddr(cmd)        (its_cmd_mask_field(cmd, 2,
>> 8, 44) << 8)
>>
>> +static int its_handle_int(struct virt_its *its, uint64_t *cmdptr)
>> +{
>> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
>> +    uint32_t eventid = its_cmd_get_id(cmdptr);
>> +    struct vcpu *vcpu;
>> +    uint32_t vlpi;
>> +
>> +    if ( !read_itte(its, devid, eventid, &vcpu, &vlpi) )
>> +        return -1;
> 
> See my comment on patch #13 about crafting the memory.

So read_itte goes through some checks already (valid VCPU IDs, valid
device table pointer, valid event ID, ...). I believe we can't do much
more than this. I added a fat TODO and an ASSERT(is_dom0) in
vgic_v3_verify_its_status() to not forget about this problem.
Ideally it shouldn't matter what the guest writes into the table,
hopefully the per-IRQ locking ensures this.
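For illustration, the kind of validation read_itte() performs can be sketched as a self-contained model (NOT the Xen implementation; every type, field, and constant below is a simplified stand-in): the device must have a mapped ITT, the event ID must lie inside that ITT, and the stored vCPU number must be a valid one.

```c
/* Illustrative model of read_itte()'s checks -- not Xen code.
 * All names and constants are simplified stand-ins. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define INVALID_LPI   0u    /* placeholder value for "unmapped" */
#define MAX_VCPUS     8u

struct itt_entry {
    uint32_t vlpi;          /* virtual LPI number, INVALID_LPI if unmapped */
    uint16_t vcpu_id;       /* target vCPU stored in the entry */
};

struct toy_device {
    struct itt_entry *itt;  /* NULL until a MAPD has been done */
    uint32_t nr_events;     /* number of entries in the ITT */
};

/* Fills *vcpu_id / *vlpi only if every sanity check passes. */
static bool read_itte_model(const struct toy_device *dev, uint32_t eventid,
                            uint16_t *vcpu_id, uint32_t *vlpi)
{
    if ( !dev || !dev->itt )                       /* no device table */
        return false;
    if ( eventid >= dev->nr_events )               /* event ID out of range */
        return false;
    if ( dev->itt[eventid].vcpu_id >= MAX_VCPUS )  /* bogus vCPU number */
        return false;

    *vcpu_id = dev->itt[eventid].vcpu_id;
    *vlpi = dev->itt[eventid].vlpi;
    return true;
}
```

The caller still has to check the returned vLPI against INVALID_LPI, mirroring what the INT handler in the patch does.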

Cheers,
Andre.

> 
>> +
>> +    if ( vlpi == INVALID_LPI )
>> +        return -1;
>> +
>> +    vgic_vcpu_inject_irq(vcpu, vlpi);
>> +
>> +    return 0;
>> +}
>> +
>>  #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
>>  #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
>>
>> @@ -329,6 +347,9 @@ static int vgic_its_handle_cmds(struct domain *d,
>> struct virt_its *its)
>>
>>          switch ( its_cmd_get_command(command) )
>>          {
>> +        case GITS_CMD_INT:
>> +            ret = its_handle_int(its, command);
>> +            break;
>>          case GITS_CMD_SYNC:
>>              /* We handle ITS commands synchronously, so we ignore
>> SYNC. */
>>              break;
>>
> 
> Cheers,
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 18/28] ARM: vITS: handle CLEAR command
  2017-05-17 17:45   ` Julien Grall
@ 2017-05-23 17:24     ` Andre Przywara
  2017-05-24  9:04       ` Julien Grall
  0 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-23 17:24 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 17/05/17 18:45, Julien Grall wrote:
> Hi Andre,
> 
> On 11/05/17 18:53, Andre Przywara wrote:
>> This introduces the ITS command handler for the CLEAR command, which
>> clears the pending state of an LPI.
>> This removes a not-yet injected, but already queued IRQ from a VCPU.
>> As read_itte() is now eventually used, we add the static keyword.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>> ---
>>  xen/arch/arm/vgic-v3-its.c | 59
>> ++++++++++++++++++++++++++++++++++++++++++++--
>>  1 file changed, 57 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
>> index 8f1c217..8a200e9 100644
>> --- a/xen/arch/arm/vgic-v3-its.c
>> +++ b/xen/arch/arm/vgic-v3-its.c
>> @@ -52,6 +52,7 @@
>>   */
>>  struct virt_its {
>>      struct domain *d;
>> +    paddr_t doorbell_address;
>>      unsigned int devid_bits;
>>      unsigned int evid_bits;
>>      spinlock_t vcmd_lock;       /* Protects the virtual command
>> buffer, which */
>> @@ -251,8 +252,8 @@ static bool read_itte_locked(struct virt_its *its,
>> uint32_t devid,
>>   * This function takes care of the locking by taking the its_lock
>> itself, so
>>   * a caller shall not hold this. Before returning, the lock is
>> dropped again.
>>   */
>> -bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
>> -               struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
>> +static bool read_itte(struct virt_its *its, uint32_t devid, uint32_t
>> evid,
>> +                      struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
>>  {
>>      bool ret;
>>
>> @@ -362,6 +363,57 @@ static int its_handle_mapc(struct virt_its *its,
>> uint64_t *cmdptr)
>>      return 0;
>>  }
>>
>> +/*
>> + * CLEAR removes the pending state from an LPI. */
>> +static int its_handle_clear(struct virt_its *its, uint64_t *cmdptr)
>> +{
>> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
>> +    uint32_t eventid = its_cmd_get_id(cmdptr);
>> +    struct pending_irq *p;
>> +    struct vcpu *vcpu;
>> +    uint32_t vlpi;
>> +    unsigned long flags;
>> +    int ret = -1;
>> +
>> +    spin_lock(&its->its_lock);
>> +
>> +    /* Translate the DevID/EvID pair into a vCPU/vLPI pair. */
>> +    if ( !read_itte_locked(its, devid, eventid, &vcpu, &vlpi) )
>> +        goto out_unlock;
>> +
>> +    p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
>> +                                        devid, eventid);
>> +    /* Protect against an invalid LPI number. */
>> +    if ( unlikely(!p) )
>> +        goto out_unlock;
>> +
>> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
> 
> My comment in patch #9 about crafting the memory handed over to the ITS
> applies here too.
> 
>> +
>> +    /*
>> +     * If the LPI is already visible on the guest, it is too late to
>> +     * clear the pending state. However this is a benign race that can
>> +     * happen on real hardware, too: If the LPI has already been
>> forwarded
>> +     * to a CPU interface, a CLEAR request reaching the redistributor
>> has
>> +     * no effect on that LPI anymore. Since LPIs are edge triggered and
>> +     * have no active state, we don't need to care about this here.
>> +     */
>> +    if ( !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
>> +    {
>> +        /* Remove a pending, but not yet injected guest IRQ. */
>> +        clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
>> +        list_del_init(&p->inflight);
>> +        list_del_init(&p->lr_queue);
> 
> On the previous version I was against this open-coding of
> gic_remove_from_queues and instead rework the function.

Well, I consider gic_remove_from_queues() somewhat broken:
- It should be called vgic_remove... and live in vgic.c, because it
deals with the virtual side only.
- The plural in the name is wrong, since it only removes the IRQ from
lr_pending, not inflight.
- vgic_migrate_irq removes an IRQ from both queues as well, and doesn't
use the function (for the same reason).

So to make it usable in our case, I'd need to either open-code the
inflight removal here (which would make calling this function a bit
pointless) or add that to the function, but remove the existing caller.
Looks like a can of worms to me and a distraction from the actual goal
of getting the ITS in place.
So I will surely address this with the VGIC rework (possibly removing
this function altogether), but would like to avoid doing this rework
*now*. To catch all users of the list I would need to grep for inflight
and lr_pending anyway, so one more "open-coded" place is not a big deal.
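A unified helper, had one been added, could look roughly like the sketch below. This is NOT Xen code: the minimal circular list stands in for Xen's <xen/list.h>, and the helper name is hypothetical; it just shows both list_del_init() calls and the QUEUED-bit clearing living in one place instead of being open-coded per call site.

```c
/* Hypothetical sketch of a unified removal helper -- not Xen code.
 * The circular doubly-linked list below is a stand-in for Xen's
 * <xen/list.h> primitives. */
#include <stdbool.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *n, struct list_head *h)
{
    n->next = h->next;
    n->prev = h;
    h->next->prev = n;
    h->next = n;
}

static void list_del_init(struct list_head *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
    INIT_LIST_HEAD(n);          /* leave the node self-linked */
}

static bool list_empty(const struct list_head *h) { return h->next == h; }

#define GIC_IRQ_GUEST_QUEUED 0

struct pending_irq {
    unsigned long status;
    struct list_head inflight;  /* on the vCPU's inflight_irqs list */
    struct list_head lr_queue;  /* on the vCPU's lr_pending list */
};

/* One place to take an IRQ off both queues and drop the QUEUED bit,
 * instead of open-coding these three lines at every call site. */
static void vgic_remove_irq_from_queues(struct pending_irq *p)
{
    p->status &= ~(1ul << GIC_IRQ_GUEST_QUEUED);
    list_del_init(&p->inflight);
    list_del_init(&p->lr_queue);
}
```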

> It still does not make any sense to me because if one day someone
> decides to update gic_remove_from_queues (such as you because you are
> going to rework the vGIC), he will have to remember that you open-coded
> in MOVE because you didn't want to touch the common code.

As I mentioned above this is the same situation for vgic_migrate_irq()
already.

> Common code is not set in stone. The goal is to abstract all the issues
> to make easier to propagate change. So please address this comment.

I clearly understand this and am all for fixing this, but I don't
believe the ITS series is the place to do this. In fact I don't want to
add more code to this series.
If gic_remove_from_queues lived up to the promise its name makes, I
would love to use it, but it doesn't, so ...

Cheers,
Andre.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 13/28] ARM: vITS: add command handling stub and MMIO emulation
  2017-05-23 10:54     ` Julien Grall
@ 2017-05-23 17:43       ` Stefano Stabellini
  0 siblings, 0 replies; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-23 17:43 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Vijay Kilari, Andre Przywara, Vijaya Kumar K,
	xen-devel, Shanker Donthineni

On Tue, 23 May 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 22/05/17 23:32, Stefano Stabellini wrote:
> > On Thu, 11 May 2017, Andre Przywara wrote:
> > > +    case VREG64(GITS_CWRITER):
> > > +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> > > +
> > > +        reg = its->cwriter;
> > > +        *r = vgic_reg64_extract(reg, info);
> > 
> > Why is this not protected by a lock? Also from the comment above I
> > cannot tell if it should be protected by its_lock or by vcmd_lock.
> 
> Because if you take the vcmd_lock, the vCPU will spin until we finish
> handling the command queue. This means a guest can potentially block multiple
> pCPUs for a long time.
> 
> In this case, cwriter can be read atomically as it was updated by the guest
> itself ...
> > 
> > 
> > > +        break;
> > > +    case VREG64(GITS_CREADR):
> > > +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> > > +
> > > +        reg = its->creadr;
> > > +        *r = vgic_reg64_extract(reg, info);
> > > +        break;
> > 
> > Same here
> 
> For here, the command queue handler will write to creadr atomically every
> time a command has been executed. Making this lockless also allows a domain
> to keep track of where we are in the command queue handling.
> 
> This is something we have already discussed quite a few times. So we should
> probably have a comment in the code to avoid this question coming up again.

All right, thanks


> [...]
> 
> > > +    case VREG64(GITS_CWRITER):
> > > +        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
> > > +
> > > +        spin_lock(&its->vcmd_lock);
> > > +        reg = ITS_CMD_OFFSET(its->cwriter);
> > > +        vgic_reg64_update(&reg, r, info);
> > > +        its->cwriter = ITS_CMD_OFFSET(reg);
> > > +
> > > +        if ( its->enabled )
> > > +            if ( vgic_its_handle_cmds(d, its) )
> > > +                gdprintk(XENLOG_WARNING, "error handling ITS
> > > commands\n");
> > > +
> > > +        spin_unlock(&its->vcmd_lock);
> > 
> > OK, so it looks like the reads should be protected by vcmd_lock
> 
> See my comment above.
> 
> > 
> > 
> > > +        return 1;
> > > +
> > > +    case VREG64(GITS_CREADR):
> 
> -- 
> Julien Grall
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 21/28] ARM: vITS: handle MAPTI command
  2017-05-23 10:01     ` Andre Przywara
@ 2017-05-23 17:44       ` Stefano Stabellini
  0 siblings, 0 replies; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-23 17:44 UTC (permalink / raw)
  To: Andre Przywara
  Cc: Stefano Stabellini, Vijay Kilari, Vijaya Kumar K, Julien Grall,
	xen-devel, Shanker Donthineni

On Tue, 23 May 2017, Andre Przywara wrote:
> Hi,
> 
> On 23/05/17 00:39, Stefano Stabellini wrote:
> > On Thu, 11 May 2017, Andre Przywara wrote:
> >> @@ -556,6 +583,93 @@ static int its_handle_mapd(struct virt_its *its, uint64_t *cmdptr)
> >>      return ret;
> >>  }
> >>  
> >> +static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
> >> +{
> >> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
> >> +    uint32_t eventid = its_cmd_get_id(cmdptr);
> >> +    uint32_t intid = its_cmd_get_physical_id(cmdptr), _intid;
> >> +    uint16_t collid = its_cmd_get_collection(cmdptr);
> >> +    struct pending_irq *pirq;
> >> +    struct vcpu *vcpu = NULL;
> >> +    int ret = -1;
> > 
> > I think we need to check the eventid to be valid, don't we?
> 
> Yes, but this will be done below as part of write_itte_locked(), in fact
> already read_itte_locked().
> Shall I add a comment about this?

No need, thanks

> 
> >> +    if ( its_cmd_get_command(cmdptr) == GITS_CMD_MAPI )
> >> +        intid = eventid;
> >> +
> >> +    spin_lock(&its->its_lock);
> >> +    /*
> >> +     * Check whether there is a valid existing mapping. If yes, behavior is
> >> +     * unpredictable, we choose to ignore this command here.
> >> +     * This makes sure we start with a pristine pending_irq below.
> >> +     */
> >> +    if ( read_itte_locked(its, devid, eventid, &vcpu, &_intid) &&
> >> +         _intid != INVALID_LPI )
> >> +    {
> >> +        spin_unlock(&its->its_lock);
> >> +        return -1;
> >> +    }
> >> +
> >> +    /* Enter the mapping in our virtual ITS tables. */
> >> +    if ( !write_itte_locked(its, devid, eventid, collid, intid, &vcpu) )
> >> +    {
> >> +        spin_unlock(&its->its_lock);
> >> +        return -1;
> >> +    }
> >> +
> >> +    spin_unlock(&its->its_lock);
> >> +
> >> +    /*
> >> +     * Connect this virtual LPI to the corresponding host LPI, which is
> >> +     * determined by the same device ID and event ID on the host side.
> >> +     * This returns us the corresponding, still unused pending_irq.
> >> +     */
> >> +    pirq = gicv3_assign_guest_event(its->d, its->doorbell_address,
> >> +                                    devid, eventid, vcpu, intid);
> >> +    if ( !pirq )
> >> +        goto out_remove_mapping;
> >> +
> >> +    vgic_init_pending_irq(pirq, intid);
> >> +
> >> +    /*
> >> +     * Now read the guest's property table to initialize our cached state.
> >> +     * It can't fire at this time, because it is not known to the host yet.
> >> +     * We don't need the VGIC VCPU lock here, because the pending_irq isn't
> >> +     * in the radix tree yet.
> >> +     */
> >> +    ret = update_lpi_property(its->d, pirq);
> >> +    if ( ret )
> >> +        goto out_remove_host_entry;
> >> +
> >> +    pirq->lpi_vcpu_id = vcpu->vcpu_id;
> >> +    /*
> >> +     * Mark this LPI as new, so any older (now unmapped) LPI in any LR
> >> +     * can be easily recognised as such.
> >> +     */
> >> +    set_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &pirq->status);
> >> +
> >> +    /*
> >> +     * Now insert the pending_irq into the domain's LPI tree, so that
> >> +     * it becomes live.
> >> +     */
> >> +    write_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
> >> +    ret = radix_tree_insert(&its->d->arch.vgic.pend_lpi_tree, intid, pirq);
> >> +    write_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
> >> +
> >> +    if ( !ret )
> >> +        return 0;
> >> +
> >> +out_remove_host_entry:
> >> +    gicv3_remove_guest_event(its->d, its->doorbell_address, devid, eventid);
> >> +
> >> +out_remove_mapping:
> >> +    spin_lock(&its->its_lock);
> >> +    write_itte_locked(its, devid, eventid,
> >> +                      UNMAPPED_COLLECTION, INVALID_LPI, NULL);
> >> +    spin_unlock(&its->its_lock);
> >> +
> >> +    return ret;
> >> +}
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 12/28] ARM: vGIC: advertise LPI support
  2017-05-23 10:49       ` Julien Grall
@ 2017-05-23 17:47         ` Stefano Stabellini
  2017-05-24 10:10           ` Julien Grall
  2017-05-25 18:02           ` Andre Przywara
  0 siblings, 2 replies; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-23 17:47 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Vijay Kilari, Andre Przywara, Vijaya Kumar K,
	xen-devel, Shanker Donthineni

On Tue, 23 May 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 22/05/17 23:19, Stefano Stabellini wrote:
> > On Tue, 16 May 2017, Julien Grall wrote:
> > > > @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
> > > > vcpu
> > > > *v, mmio_info_t *info,
> > > >      switch ( gicr_reg )
> > > >      {
> > > >      case VREG32(GICR_CTLR):
> > > > -        /* LPI's not implemented */
> > > > -        goto write_ignore_32;
> > > > +    {
> > > > +        unsigned long flags;
> > > > +
> > > > +        if ( !v->domain->arch.vgic.has_its )
> > > > +            goto write_ignore_32;
> > > > +        if ( dabt.size != DABT_WORD ) goto bad_width;
> > > > +
> > > > +        vgic_lock(v);                   /* protects rdists_enabled */
> > > 
> > > Getting back to the locking. I don't see any place where we get the domain
> > > vgic lock before vCPU vgic lock. So this raises the question why this
> > > ordering
> > > and not moving this lock into vgic_vcpu_enable_lpis.
> > > 
> > > At least this require documentation in the code and explanation in the
> > > commit
> > > message.
> > 
> > It doesn't look like we need to take the v->arch.vgic.lock here. What is
> > it protecting?
> 
> The name of the function is a bit confusing. It does not take the vCPU vgic
> lock but the domain vgic lock.
> 
> I believe the vcpu is passed to avoid having v->domain in most of the callers.
> But we should probably rename the function.
> 
> In this case it protects vgic_vcpu_enable_lpis because you can configure the
> number of LPIs per re-distributor, but this is a domain-wide value. I know the
> spec is confusing on this.

The quoting here is very unhelpful. In Andre's patch:

@@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
     switch ( gicr_reg )
     {
     case VREG32(GICR_CTLR):
-        /* LPI's not implemented */
-        goto write_ignore_32;
+    {
+        unsigned long flags;
+
+        if ( !v->domain->arch.vgic.has_its )
+            goto write_ignore_32;
+        if ( dabt.size != DABT_WORD ) goto bad_width;
+
+        vgic_lock(v);                   /* protects rdists_enabled */
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+        /* LPIs can only be enabled once, but never disabled again. */
+        if ( (r & GICR_CTLR_ENABLE_LPIS) &&
+             !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
+            vgic_vcpu_enable_lpis(v);
+
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        vgic_unlock(v);
+
+        return 1;
+    }

My question is: do we need to take both vgic_lock and v->arch.vgic.lock?
If so, why?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs
  2017-05-23 11:10       ` Julien Grall
@ 2017-05-23 18:23         ` Stefano Stabellini
  2017-05-24  9:47           ` Julien Grall
  0 siblings, 1 reply; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-23 18:23 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Vijay Kilari, Andre Przywara, Vijaya Kumar K,
	xen-devel, Shanker Donthineni

On Tue, 23 May 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 23/05/17 00:48, Stefano Stabellini wrote:
> > On Fri, 19 May 2017, Stefano Stabellini wrote:
> > > On Thu, 11 May 2017, Andre Przywara wrote:
> > > > When LPIs get unmapped by a guest, they might still be in some LR of
> > > > some VCPU. Nevertheless we remove the corresponding pending_irq
> > > > (possibly freeing it), and detect this case (irq_to_pending() returns
> > > > NULL) when the LR gets cleaned up later.
> > > > However a *new* LPI may get mapped with the same number while the old
> > > > LPI is *still* in some LR. To avoid getting the wrong state, we mark
> > > > every newly mapped LPI as PRISTINE, which means: has never been in an
> > > > LR before. If we detect the LPI in an LR anyway, it must have been an
> > > > older one, which we can simply retire.
> > > > Before inserting such a PRISTINE LPI into an LR, we must make sure that
> > > > it's not already in another LR, as the architecture forbids two
> > > > interrupts with the same virtual IRQ number on one CPU.
> > > > 
> > > > Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> > > > ---
> > > >  xen/arch/arm/gic.c         | 55
> > > > +++++++++++++++++++++++++++++++++++++++++-----
> > > >  xen/include/asm-arm/vgic.h |  6 +++++
> > > >  2 files changed, 56 insertions(+), 5 deletions(-)
> > > > 
> > > > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > > > index fd3fa05..8bf0578 100644
> > > > --- a/xen/arch/arm/gic.c
> > > > +++ b/xen/arch/arm/gic.c
> > > > @@ -375,6 +375,8 @@ static inline void gic_set_lr(int lr, struct
> > > > pending_irq *p,
> > > >  {
> > > >      ASSERT(!local_irq_is_enabled());
> > > > 
> > > > +    clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status);
> > > > +
> > > >      gic_hw_ops->update_lr(lr, p, state);
> > > > 
> > > >      set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> > > > @@ -442,12 +444,41 @@ void gic_raise_inflight_irq(struct vcpu *v,
> > > > unsigned int virtual_irq)
> > > >  #endif
> > > >  }
> > > > 
> > > > +/*
> > > > + * Find an unused LR to insert an IRQ into. If this new interrupt is a
> > > > + * PRISTINE LPI, scan the other LRs to avoid inserting the same IRQ
> > > > twice.
> > > > + */
> > > > +static int gic_find_unused_lr(struct vcpu *v, struct pending_irq *p,
> > > > int lr)
> > > > +{
> > > > +    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
> > > > +    unsigned long *lr_mask = (unsigned long *) &this_cpu(lr_mask);
> > > > +    struct gic_lr lr_val;
> > > > +
> > > > +    ASSERT(spin_is_locked(&v->arch.vgic.lock));
> > > > +
> > > > +    if ( test_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
> > > 
> > > Maybe we should add an "unlikely".
> > > 
> > > I can see how this would be OKish at runtime, but at boot time there
> > > might be a bunch of PRISTINE_LPIs, but no MAPDs have been issued yet,
> > > right?
> 
> You cannot have any PRISTINE_LPIs without any MAPDs done. This bit will be set
> when you do the first MAPTI.
> 
> > > 
> > > I have a suggestion, I'll leave it to you and Julien if you want to do
> > > this now, or maybe consider it as a TODO item. I am OK either way (I
> > > don't want to delay the ITS any longer).
> > > 
> > > I am thinking we should do this scanning only after at least one MAPD
> > > has been issued for a given cpu at least once. I would resurrect the
> > > idea of a DISCARD flag, but not on the pending_irq, that I believe it's
> > > difficult to handle, but a single global DISCARD flag per struct vcpu.
> > > 
> > > On MAPD, we set DISCARD for the target vcpu of the LPI we are dropping.
> > > Next time we want to inject a PRISTINE_IRQ on that cpu interface, we
> > > scan all LRs for interrupts with a NULL pending_irq. We remove those
> > > from LRs, then we remove the DISCARD flag.
> > > 
> > > Do you think it would work?
> 
> I don't understand the point to do that. Ok, you will get the first
> PRISTINE_LPI "fast" (though likely LRs will be empty), but all the others will
> be "slow" (though likely LRs will be empty too).
> 
> The pain to implement your suggestion does not seem to be worth it so far.

Let me explain it a bit better; I think I didn't clarify it well enough.
Let me also premise that this would be fine to do later; it doesn't
have to be part of this patch.

When I wrote MAPD above, I meant actually any commands that delete an
existing pending_irq - vLPI mapping. Specifically, DISCARD, and MAPD
when the 

    if ( !valid )
        /* Discard all events and remove pending LPIs. */
        its_unmap_device(its, devid);

code path is taken, which should not be the case at boot time, right?
Are there any other commands that remove a pending_irq - vLPI mapping
that I missed?

The idea is that we could add a VGIC_V3_LPIS_DISCARD flag to arch_vcpu.
VGIC_V3_LPIS_DISCARD is set on a DISCARD command, and on a MAPD (!valid)
command. If VGIC_V3_LPIS_DISCARD is not set, there is no need to scan
anything. If VGIC_V3_LPIS_DISCARD is set *and* we want to inject a
PRISTINE_IRQ, then we do the scanning.

When we do the scanning, we check all LRs for NULL pending_irq structs.
We clear them all in one go. Then we remove VGIC_V3_LPIS_DISCARD.

This way, we get all PRISTINE_LPI fast, except for the very first one
after a DISCARD or MAPD (!valid) command.
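Something like this, as a rough sketch (simplified stand-in structures and
made-up names, not the actual Xen code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NR_LRS 4

/* Minimal stand-ins, just to illustrate the proposed flow. */
struct pending_irq { unsigned int virq; };

struct gic_lr {
    bool in_use;
    struct pending_irq *p;   /* NULL: the vLPI mapping was since discarded */
};

struct vcpu {
    bool lpis_discard;       /* the proposed VGIC_V3_LPIS_DISCARD flag */
    struct gic_lr lr[NR_LRS];
    unsigned int nr_scans;   /* bookkeeping, to show how rarely we scan */
};

/* Called on DISCARD and on MAPD(!valid) for the target vCPU. */
static void vgic_mark_lpis_discarded(struct vcpu *v)
{
    v->lpis_discard = true;
}

/*
 * Called before injecting a PRISTINE LPI: only if a discard happened do we
 * scan the LRs, retire every entry whose pending_irq is gone, and clear
 * the flag again, so subsequent PRISTINE LPIs stay on the fast path.
 */
static void vgic_maybe_scan_lrs(struct vcpu *v)
{
    unsigned int i;

    if ( !v->lpis_discard )
        return;

    v->nr_scans++;
    for ( i = 0; i < NR_LRS; i++ )
        if ( v->lr[i].in_use && v->lr[i].p == NULL )
            v->lr[i].in_use = false;

    v->lpis_discard = false;
}
```

So the scan cost is paid once per discard, not once per PRISTINE LPI.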

Does it make more sense now? What do you think?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: [PATCH v9 18/28] ARM: vITS: handle CLEAR command
  2017-05-23 17:24     ` Andre Przywara
@ 2017-05-24  9:04       ` Julien Grall
  0 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-24  9:04 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni



On 05/23/2017 06:24 PM, Andre Przywara wrote:
> Hi,

Hi Andre,

>
> On 17/05/17 18:45, Julien Grall wrote:
>> Hi Andre,
>>
>> On 11/05/17 18:53, Andre Przywara wrote:
>>> This introduces the ITS command handler for the CLEAR command, which
>>> clears the pending state of an LPI.
>>> This removes a not-yet injected, but already queued IRQ from a VCPU.
>>> As read_itte() is now eventually used, we add the static keyword.
>>>
>>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>>> ---
>>>  xen/arch/arm/vgic-v3-its.c | 59
>>> ++++++++++++++++++++++++++++++++++++++++++++--
>>>  1 file changed, 57 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
>>> index 8f1c217..8a200e9 100644
>>> --- a/xen/arch/arm/vgic-v3-its.c
>>> +++ b/xen/arch/arm/vgic-v3-its.c
>>> @@ -52,6 +52,7 @@
>>>   */
>>>  struct virt_its {
>>>      struct domain *d;
>>> +    paddr_t doorbell_address;
>>>      unsigned int devid_bits;
>>>      unsigned int evid_bits;
>>>      spinlock_t vcmd_lock;       /* Protects the virtual command
>>> buffer, which */
>>> @@ -251,8 +252,8 @@ static bool read_itte_locked(struct virt_its *its,
>>> uint32_t devid,
>>>   * This function takes care of the locking by taking the its_lock
>>> itself, so
>>>   * a caller shall not hold this. Before returning, the lock is
>>> dropped again.
>>>   */
>>> -bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
>>> -               struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
>>> +static bool read_itte(struct virt_its *its, uint32_t devid, uint32_t
>>> evid,
>>> +                      struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
>>>  {
>>>      bool ret;
>>>
>>> @@ -362,6 +363,57 @@ static int its_handle_mapc(struct virt_its *its,
>>> uint64_t *cmdptr)
>>>      return 0;
>>>  }
>>>
>>> +/*
>>> + * CLEAR removes the pending state from an LPI. */
>>> +static int its_handle_clear(struct virt_its *its, uint64_t *cmdptr)
>>> +{
>>> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
>>> +    uint32_t eventid = its_cmd_get_id(cmdptr);
>>> +    struct pending_irq *p;
>>> +    struct vcpu *vcpu;
>>> +    uint32_t vlpi;
>>> +    unsigned long flags;
>>> +    int ret = -1;
>>> +
>>> +    spin_lock(&its->its_lock);
>>> +
>>> +    /* Translate the DevID/EvID pair into a vCPU/vLPI pair. */
>>> +    if ( !read_itte_locked(its, devid, eventid, &vcpu, &vlpi) )
>>> +        goto out_unlock;
>>> +
>>> +    p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
>>> +                                        devid, eventid);
>>> +    /* Protect against an invalid LPI number. */
>>> +    if ( unlikely(!p) )
>>> +        goto out_unlock;
>>> +
>>> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
>>
>> My comment in patch #9 about crafting the memory handed over to the ITS
>> applies here too.
>>
>>> +
>>> +    /*
>>> +     * If the LPI is already visible on the guest, it is too late to
>>> +     * clear the pending state. However this is a benign race that can
>>> +     * happen on real hardware, too: If the LPI has already been
>>> forwarded
>>> +     * to a CPU interface, a CLEAR request reaching the redistributor
>>> has
>>> +     * no effect on that LPI anymore. Since LPIs are edge triggered and
>>> +     * have no active state, we don't need to care about this here.
>>> +     */
>>> +    if ( !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
>>> +    {
>>> +        /* Remove a pending, but not yet injected guest IRQ. */
>>> +        clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
>>> +        list_del_init(&p->inflight);
>>> +        list_del_init(&p->lr_queue);
>>
>> On the previous version I was against this open-coding of
>> gic_remove_from_queues and instead rework the function.
>
> Well, I consider gic_remove_from_queues() somewhat broken:
> - It should be called vgic_remove... and live in vgic.c, because it
> deals with the virtual side only.

Then append a patch for that and ...

> - The plural in the name is wrong, since it only removes the IRQ from
> lr_pending, not inflight.

... that.

> - vgic_migrate_irq removes an IRQ from both queues as well, and doesn't
> use the function (for the same reason).

The existing code may not use it, but that is not a reason to continue 
to open-code it.

> So to make it usable in our case, I'd need to either open code the
> inflight removal here (which would make calling this function a bit
> pointless) or add that to the function, but remove the existing caller.
> Looks like a can of worms to me and a distraction from the actual goal
> of getting the ITS in place.

Not necessarily... you could extend the prototype to specify how much 
you want to clean.
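For instance, roughly like this (illustrative stand-ins only, not the real
Xen list helpers, and the flag names are made up):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal doubly-linked list stand-ins, just to make the sketch compile. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }
static void list_add(struct list_head *n, struct list_head *h)
{
    n->next = h->next; n->prev = h;
    h->next->prev = n; h->next = n;
}
static void list_del_init(struct list_head *n)
{
    n->prev->next = n->next; n->next->prev = n->prev;
    list_init(n);
}

struct pending_irq {
    struct list_head inflight;   /* queued on the vCPU's inflight list */
    struct list_head lr_queue;   /* queued for an LR */
};

/* Which queues the caller wants the IRQ removed from. */
#define VGIC_RM_LR_QUEUE  (1U << 0)
#define VGIC_RM_INFLIGHT  (1U << 1)

/*
 * Extended prototype: the caller says how much to clean up, so both the
 * CLEAR handler (both lists) and the existing lr_pending-only caller can
 * share it instead of open-coding the list_del_init() calls.
 */
static void vgic_remove_irq_from_queues(struct pending_irq *p,
                                        unsigned int what)
{
    if ( what & VGIC_RM_LR_QUEUE )
        list_del_init(&p->lr_queue);
    if ( what & VGIC_RM_INFLIGHT )
        list_del_init(&p->inflight);
}
```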

> So I will surely address this with the VGIC rework (possibly removing
> this function altogether), but would like to avoid doing this rework
> *now*. To catch all users of the list I would need to grep for inflight
> and lr_pending anyway, so one more "open-coded" place is not a big deal.

Please stop saying this is "not a big deal". It is not helpful for 
getting this series into a shape that makes the maintainers happy enough 
to merge it.

In this case, I didn't ask to remove all the open-coded place but asked 
to not add more.

>
>> It still does not make any sense to me because if one day someone
>> decides to update gic_remove_from_queues (such as you because you are
>> going to rework the vGIC), he will have to remember that you open-coded
>> in MOVE because you didn't want to touch the common code.
>
> As I mentioned above this is the same situation for vgic_migrate_irq()
> already.

To me this is a call to fix it rather than to make the situation 
much worse...

>
>> Common code is not set in stone. The goal is to abstract all the issues
>> to make easier to propagate change. So please address this comment.
>
> I clearly understand this and am all for fixing this, but I don't
> believe the ITS series is the place to do this. In fact I don't want to
> add more code to this series.

Can you stop deferring everything to after the ITS merge? I understand 
why some of the tasks can be deferred because the support is not 
strictly needed here or would be too difficult. In this case, you 
haven't convinced me this cannot be done here.

If gic_remove_from_queues lived up to the promise its name makes, I
would love to use it, but it doesn't, so ...

Then fix it and rename it.

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 19/28] ARM: vITS: handle MAPD command
  2017-05-17 18:07   ` Julien Grall
@ 2017-05-24  9:10     ` Andre Przywara
  2017-05-24  9:56       ` Julien Grall
  0 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-24  9:10 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 17/05/17 19:07, Julien Grall wrote:
> Hi Andre,
> 
> On 11/05/17 18:53, Andre Przywara wrote:
>> The MAPD command maps a device by associating a memory region for
>> storing ITEs with a certain device ID. Since it features a valid bit,
>> MAPD also covers the "unmap" functionality, which we also cover here.
>> We store the given guest physical address in the device table, and, if
>> this command comes from Dom0, tell the host ITS driver about this new
>> mapping, so it can issue the corresponding host MAPD command and create
>> the required tables. We take care of rolling back actions should one
>> step fail.
>> Upon unmapping a device we make sure we clean up all associated
>> resources and release the memory again.
>> We use our existing guest memory access function to find the right ITT
>> entry and store the mapping there (in guest memory).
>>
>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>> ---
>>  xen/arch/arm/gic-v3-its.c        |  18 +++++
>>  xen/arch/arm/gic-v3-lpi.c        |  18 +++++
>>  xen/arch/arm/vgic-v3-its.c       | 145
>> +++++++++++++++++++++++++++++++++++++++
>>  xen/include/asm-arm/gic_v3_its.h |   5 ++
>>  4 files changed, 186 insertions(+)
>>
>> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
>> index fd6a394..be4c3e0 100644
>> --- a/xen/arch/arm/gic-v3-its.c
>> +++ b/xen/arch/arm/gic-v3-its.c
>> @@ -869,6 +869,24 @@ struct pending_irq
>> *gicv3_its_get_event_pending_irq(struct domain *d,
>>      return get_event_pending_irq(d, vdoorbell_address, vdevid,
>> veventid, NULL);
>>  }
>>
>> +int gicv3_remove_guest_event(struct domain *d, paddr_t
>> vdoorbell_address,
>> +                             uint32_t vdevid, uint32_t veventid)
>> +{
>> +    uint32_t host_lpi = INVALID_LPI;
>> +
>> +    if ( !get_event_pending_irq(d, vdoorbell_address, vdevid, veventid,
>> +                                &host_lpi) )
>> +        return -EINVAL;
>> +
>> +    if ( host_lpi == INVALID_LPI )
>> +        return -EINVAL;
>> +
>> +    gicv3_lpi_update_host_entry(host_lpi, d->domain_id,
>> +                                INVALID_VCPU_ID, INVALID_LPI);
>> +
>> +    return 0;
>> +}
>> +
>>  /* Scan the DT for any ITS nodes and create a list of host ITSes out
>> of it. */
>>  void gicv3_its_dt_init(const struct dt_device_node *node)
>>  {
>> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
>> index 44f6315..d427539 100644
>> --- a/xen/arch/arm/gic-v3-lpi.c
>> +++ b/xen/arch/arm/gic-v3-lpi.c
>> @@ -207,6 +207,24 @@ out:
>>      irq_exit();
>>  }
>>
>> +void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
>> +                                 unsigned int vcpu_id, uint32_t
>> virt_lpi)
>> +{
>> +    union host_lpi *hlpip, hlpi;
>> +
>> +    ASSERT(host_lpi >= LPI_OFFSET);
>> +
>> +    host_lpi -= LPI_OFFSET;
>> +
>> +    hlpip = &lpi_data.host_lpis[host_lpi /
>> HOST_LPIS_PER_PAGE][host_lpi % HOST_LPIS_PER_PAGE];
>> +
>> +    hlpi.virt_lpi = virt_lpi;
>> +    hlpi.dom_id = domain_id;
>> +    hlpi.vcpu_id = vcpu_id;
>> +
>> +    write_u64_atomic(&hlpip->data, hlpi.data);
>> +}
>> +
>>  static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
>>  {
>>      uint64_t val;
>> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
>> index 8a200e9..731fe0c 100644
>> --- a/xen/arch/arm/vgic-v3-its.c
>> +++ b/xen/arch/arm/vgic-v3-its.c
>> @@ -175,6 +175,21 @@ static struct vcpu
>> *get_vcpu_from_collection(struct virt_its *its,
>>  #define DEV_TABLE_ENTRY(addr, bits)                     \
>>          (((addr) & GENMASK(51, 8)) | (((bits) - 1) & GENMASK(4, 0)))
>>
>> +/* Set the address of an ITT for a given device ID. */
>> +static int its_set_itt_address(struct virt_its *its, uint32_t devid,
>> +                               paddr_t itt_address, uint32_t nr_bits)
>> +{
>> +    paddr_t addr = get_baser_phys_addr(its->baser_dev);
>> +    dev_table_entry_t itt_entry = DEV_TABLE_ENTRY(itt_address, nr_bits);
>> +
>> +    if ( devid >= its->max_devices )
>> +        return -ENOENT;
>> +
>> +    return vgic_access_guest_memory(its->d,
>> +                                    addr + devid *
>> sizeof(dev_table_entry_t),
>> +                                    &itt_entry, sizeof(itt_entry),
>> true);
>> +}
>> +
>>  /*
>>   * Lookup the address of the Interrupt Translation Table associated with
>>   * that device ID.
>> @@ -414,6 +429,133 @@ out_unlock:
>>      return ret;
>>  }
>>
>> +/* Must be called with the ITS lock held. */
>> +static int its_discard_event(struct virt_its *its,
>> +                             uint32_t vdevid, uint32_t vevid)
>> +{
>> +    struct pending_irq *p;
>> +    unsigned long flags;
>> +    struct vcpu *vcpu;
>> +    uint32_t vlpi;
>> +
>> +    ASSERT(spin_is_locked(&its->its_lock));
>> +
>> +    if ( !read_itte_locked(its, vdevid, vevid, &vcpu, &vlpi) )
>> +        return -ENOENT;
>> +
>> +    if ( vlpi == INVALID_LPI )
>> +        return -ENOENT;
>> +
>> +    /* Lock this VCPU's VGIC to make sure nobody is using the
>> pending_irq. */
>> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
> 
> There is an interesting issue happening with this code. You don't check
> the content of the memory provided by the guest. So a malicious guest
> could craft the memory in order to setup mapping with known vlpi and a
> different vCPU.
> 
> This would lead to use the wrong lock here and corrupt the list.

What about this:
Right now (mostly due to the requirements of the INVALL implementation)
we store the VCPU ID in our struct pending_irq, populated upon MAPTI. So
originally this was just for caching (INVALL being the only user of
this), but I was wondering if we should move the actual instance of this
information to pending_irq instead of relying on the collection ID from
the ITS table. So we would never need to look up and trust the ITS
tables for this information anymore. Later with the VGIC rework we will
need this field anyway (even for SPIs).

I think this should solve this threat, where a guest can manipulate Xen
by crafting the tables. Tinkering with the other information stored in
the tables should not harm Xen, the guest would just shoot itself in
the foot.
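In sketch form (toy structures and names, not the actual Xen definitions),
the point is that the locking decision would be driven only by Xen-owned
state:

```c
#include <assert.h>
#include <stdint.h>

#define NR_VCPUS 2

struct vcpu { int id; /* the per-vCPU VGIC lock etc. would live here */ };

static struct vcpu vcpus[NR_VCPUS] = { { 0 }, { 1 } };

/*
 * The vCPU ID lives in Xen's own pending_irq (populated at MAPTI time),
 * so the collection ID read back from the guest-writable ITS tables is
 * never trusted for picking the lock.
 */
struct pending_irq {
    uint32_t vlpi;
    uint16_t vcpu_id;
};

/* Guest-writable ITT entry: its content may be forged by the guest. */
struct itte { uint16_t collection_vcpu; };

static struct vcpu *vcpu_for_lpi(const struct pending_irq *p,
                                 const struct itte *guest_itte)
{
    (void)guest_itte;            /* deliberately ignored: untrusted */
    return &vcpus[p->vcpu_id];
}
```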

Does that make sense?

Cheers,
Andre.

>> +
>> +    /* Remove the pending_irq from the tree. */
>> +    write_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
>> +    p = radix_tree_delete(&its->d->arch.vgic.pend_lpi_tree, vlpi);
>> +    write_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
>> +
>> +    if ( !p )
>> +    {
>> +        spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
>> +
>> +        return -ENOENT;
>> +    }
>> +
>> +    /* Cleanup the pending_irq and disconnect it from the LPI. */
>> +    list_del_init(&p->inflight);
>> +    list_del_init(&p->lr_queue);
>> +    vgic_init_pending_irq(p, INVALID_LPI);
>> +
>> +    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
>> +
>> +    /* Remove the corresponding host LPI entry */
>> +    return gicv3_remove_guest_event(its->d, its->doorbell_address,
>> +                                    vdevid, vevid);
>> +}
>> +
>> +static int its_unmap_device(struct virt_its *its, uint32_t devid)
>> +{
>> +    dev_table_entry_t itt;
>> +    uint64_t evid;
>> +    int ret;
>> +
>> +    spin_lock(&its->its_lock);
>> +
>> +    ret = its_get_itt(its, devid, &itt);
>> +    if ( ret )
>> +    {
>> +        spin_unlock(&its->its_lock);
>> +        return ret;
>> +    }
>> +
>> +    /*
>> +     * For DomUs we need to check that the number of events per device
>> +     * is really limited, otherwise looping over all events can take too
>> +     * long for a guest. This ASSERT can then be removed if that is
>> +     * covered.
>> +     */
>> +    ASSERT(is_hardware_domain(its->d));
>> +
>> +    for ( evid = 0; evid < DEV_TABLE_ITT_SIZE(itt); evid++ )
>> +        /* Don't care about errors here, clean up as much as
>> possible. */
>> +        its_discard_event(its, devid, evid);
>> +
>> +    spin_unlock(&its->its_lock);
>> +
>> +    return 0;
>> +}
>> +
>> +static int its_handle_mapd(struct virt_its *its, uint64_t *cmdptr)
>> +{
>> +    /* size and devid get validated by the functions called below. */
>> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
>> +    unsigned int size = its_cmd_get_size(cmdptr) + 1;
>> +    bool valid = its_cmd_get_validbit(cmdptr);
>> +    paddr_t itt_addr = its_cmd_get_ittaddr(cmdptr);
>> +    int ret;
>> +
>> +    /* Sanitize the number of events. */
>> +    if ( valid && (size > its->evid_bits) )
>> +        return -1;
>> +
>> +    if ( !valid )
>> +        /* Discard all events and remove pending LPIs. */
>> +        its_unmap_device(its, devid);
> 
> its_unmap_device is returning an error but you don't check it. Please
> explain why.
> 
>> +
>> +    /*
>> +     * There is no easy and clean way for Xen to know the ITS device
>> ID of a
>> +     * particular (PCI) device, so we have to rely on the guest telling
>> +     * us about it. For *now* we are just using the device ID *Dom0*
>> uses,
>> +     * because the driver there has the actual knowledge.
>> +     * Eventually this will be replaced with a dedicated hypercall to
>> +     * announce pass-through of devices.
>> +     */
>> +    if ( is_hardware_domain(its->d) )
>> +    {
>> +
>> +        /*
>> +         * Dom0's ITSes are mapped 1:1, so both addresses are the same.
>> +         * Also the device IDs are equal.
>> +         */
>> +        ret = gicv3_its_map_guest_device(its->d,
>> its->doorbell_address, devid,
>> +                                         its->doorbell_address, devid,
>> +                                         BIT(size), valid);
>> +        if ( ret && valid )
>> +            return ret;
>> +    }
>> +
>> +    spin_lock(&its->its_lock);
>> +
>> +    if ( valid )
>> +        ret = its_set_itt_address(its, devid, itt_addr, size);
>> +    else
>> +        ret = its_set_itt_address(its, devid, INVALID_PADDR, 1);
>> +
>> +    spin_unlock(&its->its_lock);
>> +
>> +    return ret;
>> +}
>> +
>>  #define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
>>  #define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
>>
>> @@ -452,6 +594,9 @@ static int vgic_its_handle_cmds(struct domain *d,
>> struct virt_its *its)
>>          case GITS_CMD_MAPC:
>>              ret = its_handle_mapc(its, command);
>>              break;
>> +        case GITS_CMD_MAPD:
>> +            ret = its_handle_mapd(its, command);
>> +            break;
>>          case GITS_CMD_SYNC:
>>              /* We handle ITS commands synchronously, so we ignore
>> SYNC. */
>>              break;
>> diff --git a/xen/include/asm-arm/gic_v3_its.h
>> b/xen/include/asm-arm/gic_v3_its.h
>> index d162e89..6f94e65 100644
>> --- a/xen/include/asm-arm/gic_v3_its.h
>> +++ b/xen/include/asm-arm/gic_v3_its.h
>> @@ -173,6 +173,11 @@ struct pending_irq
>> *gicv3_its_get_event_pending_irq(struct domain *d,
>>                                                      paddr_t
>> vdoorbell_address,
>>                                                      uint32_t vdevid,
>>                                                      uint32_t veventid);
>> +int gicv3_remove_guest_event(struct domain *d, paddr_t
>> vdoorbell_address,
>> +                                     uint32_t vdevid, uint32_t
>> veventid);
>> +void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
>> +                                 unsigned int vcpu_id, uint32_t
>> virt_lpi);
>> +
>>  #else
>>
>>  static inline void gicv3_its_dt_init(const struct dt_device_node *node)
>>
> 
> Cheers,
> 


* Re: [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs
  2017-05-23 18:23         ` Stefano Stabellini
@ 2017-05-24  9:47           ` Julien Grall
  2017-05-24 17:49             ` Stefano Stabellini
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-24  9:47 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Andre Przywara, Vijaya Kumar K, Shanker Donthineni, Vijay Kilari,
	xen-devel

Hi Stefano,

On 05/23/2017 07:23 PM, Stefano Stabellini wrote:
> On Tue, 23 May 2017, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 23/05/17 00:48, Stefano Stabellini wrote:
>>> On Fri, 19 May 2017, Stefano Stabellini wrote:
>>>> On Thu, 11 May 2017, Andre Przywara wrote:
>>>>> When LPIs get unmapped by a guest, they might still be in some LR of
>>>>> some VCPU. Nevertheless we remove the corresponding pending_irq
>>>>> (possibly freeing it), and detect this case (irq_to_pending() returns
>>>>> NULL) when the LR gets cleaned up later.
>>>>> However a *new* LPI may get mapped with the same number while the old
>>>>> LPI is *still* in some LR. To avoid getting the wrong state, we mark
>>>>> every newly mapped LPI as PRISTINE, which means: has never been in an
>>>>> LR before. If we detect the LPI in an LR anyway, it must have been an
>>>>> older one, which we can simply retire.
>>>>> Before inserting such a PRISTINE LPI into an LR, we must make sure that
>>>>> it's not already in another LR, as the architecture forbids two
>>>>> interrupts with the same virtual IRQ number on one CPU.
>>>>>
>>>>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>>>>> ---
>>>>>  xen/arch/arm/gic.c         | 55
>>>>> +++++++++++++++++++++++++++++++++++++++++-----
>>>>>  xen/include/asm-arm/vgic.h |  6 +++++
>>>>>  2 files changed, 56 insertions(+), 5 deletions(-)
>>>>>
>>>>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>>>>> index fd3fa05..8bf0578 100644
>>>>> --- a/xen/arch/arm/gic.c
>>>>> +++ b/xen/arch/arm/gic.c
>>>>> @@ -375,6 +375,8 @@ static inline void gic_set_lr(int lr, struct
>>>>> pending_irq *p,
>>>>>  {
>>>>>      ASSERT(!local_irq_is_enabled());
>>>>>
>>>>> +    clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status);
>>>>> +
>>>>>      gic_hw_ops->update_lr(lr, p, state);
>>>>>
>>>>>      set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
>>>>> @@ -442,12 +444,41 @@ void gic_raise_inflight_irq(struct vcpu *v,
>>>>> unsigned int virtual_irq)
>>>>>  #endif
>>>>>  }
>>>>>
>>>>> +/*
>>>>> + * Find an unused LR to insert an IRQ into. If this new interrupt is a
>>>>> + * PRISTINE LPI, scan the other LRs to avoid inserting the same IRQ
>>>>> twice.
>>>>> + */
>>>>> +static int gic_find_unused_lr(struct vcpu *v, struct pending_irq *p,
>>>>> int lr)
>>>>> +{
>>>>> +    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
>>>>> +    unsigned long *lr_mask = (unsigned long *) &this_cpu(lr_mask);
>>>>> +    struct gic_lr lr_val;
>>>>> +
>>>>> +    ASSERT(spin_is_locked(&v->arch.vgic.lock));
>>>>> +
>>>>> +    if ( test_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
>>>>
>>>> Maybe we should add an "unlikely".
>>>>
>>>> I can see how this would be OKish at runtime, but at boot time there
>>>> might be a bunch of PRISTINE_LPIs, but no MAPDs have been issued yet,
>>>> right?
>>
>> You cannot have any PRISTINE_LPIs without any MAPDs done. This bit will be set
>> when you do the first MAPTI.
>>
>>>>
>>>> I have a suggestion, I'll leave it to you and Julien if you want to do
>>>> this now, or maybe consider it as a TODO item. I am OK either way (I
>>>> don't want to delay the ITS any longer).
>>>>
>>>> I am thinking we should do this scanning only after at least one MAPD
>>>> has been issued for a given cpu at least once. I would resurrect the
>>>> idea of a DISCARD flag, but not on the pending_irq, that I believe it's
>>>> difficult to handle, but a single global DISCARD flag per struct vcpu.
>>>>
>>>> On MAPD, we set DISCARD for the target vcpu of the LPI we are dropping.
>>>> Next time we want to inject a PRISTINE_IRQ on that cpu interface, we
>>>> scan all LRs for interrupts with a NULL pending_irq. We remove those
>>>> from LRs, then we remove the DISCARD flag.
>>>>
>>>> Do you think it would work?
>>
>> I don't understand the point to do that. Ok, you will get the first
>> PRISTINE_LPI "fast" (though likely LRs will be empty), but all the other will
>> be "slow" (though likely LRs will be empty too).
>>
>> The pain to implement your suggestion does not seem to be worth it so far.
>
> Let me explain it a bit better; I think I didn't clarify it well enough.
> Let me also premise that this would be fine to do later; it doesn't
> have to be part of this patch.
>
> When I wrote MAPD above, I meant actually any commands that delete an
> existing pending_irq - vLPI mapping. Specifically, DISCARD, and MAPD
> when the
>
>     if ( !valid )
>         /* Discard all events and remove pending LPIs. */
>         its_unmap_device(its, devid);
>
> code path is taken, which should not be the case at boot time, right?
> Are there any other commands that remove a pending_irq - vLPI mapping
> that I missed?

I don't think so.

>
> The idea is that we could add a VGIC_V3_LPIS_DISCARD flag to arch_vcpu.
> VGIC_V3_LPIS_DISCARD is set on a DISCARD command, and on a MAPD (!valid)
> command. If VGIC_V3_LPIS_DISCARD is not set, there is no need to scan
> anything. If VGIC_V3_LPIS_DISCARD is set *and* we want to inject a
> PRISTINE_IRQ, then we do the scanning.
>
> When we do the scanning, we check all LRs for NULL pending_irq structs.
> We clear them all in one go. Then we remove VGIC_V3_LPIS_DISCARD.

The problem we are trying to solve here is not about NULL 
pending_irq structs. It is that the previous interrupt may still 
be in the LRs, so we would end up with the same LPI twice in them. This 
will result in unpredictable behavior.

This could happen if you do:

     vCPU A               |   vCPU  B
                          |
     DISCARD vLPI1        |
     MAPTI   vLPI1        |

                 interrupt injected on vCPU B

                          |   entering in the hyp
                          |   clear_lrs
                          |   vgic_vcpu_inject_irq


clear_lrs will not remove the interrupt from the LRs if it was already 
pending, because pending_irq is not NULL anymore.

This causes an issue because we are trying to be clever with the LRs by 
not regenerating them at every entry/exit. This is causing trouble in 
many more places in the vGIC. IMHO we should attempt to regenerate them 
and see how this affects performance.

>
> This way, we get all PRISTINE_LPI fast, except for the very first one
> after a DISCARD or MAPD (!valid) command.
>
> Does it make more sense now? What do you think?

To be honest, I think this is premature optimization without any 
numbers. We should first look at the vGIC rework and then see whether 
this code will stay in place at all (I have the feeling it will 
disappear).

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 19/28] ARM: vITS: handle MAPD command
  2017-05-24  9:10     ` Andre Przywara
@ 2017-05-24  9:56       ` Julien Grall
  2017-05-24 13:09         ` Andre Przywara
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-24  9:56 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 05/24/2017 10:10 AM, Andre Przywara wrote:
> On 17/05/17 19:07, Julien Grall wrote:
>>>  /*
>>>   * Lookup the address of the Interrupt Translation Table associated with
>>>   * that device ID.
>>> @@ -414,6 +429,133 @@ out_unlock:
>>>      return ret;
>>>  }
>>>
>>> +/* Must be called with the ITS lock held. */
>>> +static int its_discard_event(struct virt_its *its,
>>> +                             uint32_t vdevid, uint32_t vevid)
>>> +{
>>> +    struct pending_irq *p;
>>> +    unsigned long flags;
>>> +    struct vcpu *vcpu;
>>> +    uint32_t vlpi;
>>> +
>>> +    ASSERT(spin_is_locked(&its->its_lock));
>>> +
>>> +    if ( !read_itte_locked(its, vdevid, vevid, &vcpu, &vlpi) )
>>> +        return -ENOENT;
>>> +
>>> +    if ( vlpi == INVALID_LPI )
>>> +        return -ENOENT;
>>> +
>>> +    /* Lock this VCPU's VGIC to make sure nobody is using the
>>> pending_irq. */
>>> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
>>
>> There is an interesting issue happening with this code. You don't check
>> the content of the memory provided by the guest. So a malicious guest
>> could craft the memory in order to setup mapping with known vlpi and a
>> different vCPU.
>>
>> This would lead to use the wrong lock here and corrupt the list.
>
> What about this:
> Right now (mostly due to the requirements of the INVALL implementation)
> we store the VCPU ID in our struct pending_irq, populated upon MAPTI. So
> originally this was just for caching (INVALL being the only user of
> this), but I was wondering if we should move the actual instance of this
> information to pending_irq instead of relying on the collection ID from
> the ITS table. So we would never need to look up and trust the ITS
> tables for this information anymore. Later with the VGIC rework we will
> need this field anyway (even for SPIs).
>
> I think this should solve this threat, where a guest can manipulate Xen
> by crafting the tables. Tinkering with the other information stored in
> the tables should not harm Xen, the guest would just shoot itself in
> the foot.
>
> Does that make sense?

I think so. If I understand correctly, with that solution we would not 
need to protect the memory provided by the guest?

Cheers.

-- 
Julien Grall


* Re: [PATCH v9 12/28] ARM: vGIC: advertise LPI support
  2017-05-23 17:47         ` Stefano Stabellini
@ 2017-05-24 10:10           ` Julien Grall
  2017-05-25 18:02           ` Andre Przywara
  1 sibling, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-24 10:10 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Andre Przywara, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni,
	xen-devel

Hi Stefano,

On 05/23/2017 06:47 PM, Stefano Stabellini wrote:
> On Tue, 23 May 2017, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 22/05/17 23:19, Stefano Stabellini wrote:
>>> On Tue, 16 May 2017, Julien Grall wrote:
>>>>> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
>>>>> vcpu
>>>>> *v, mmio_info_t *info,
>>>>>      switch ( gicr_reg )
>>>>>      {
>>>>>      case VREG32(GICR_CTLR):
>>>>> -        /* LPI's not implemented */
>>>>> -        goto write_ignore_32;
>>>>> +    {
>>>>> +        unsigned long flags;
>>>>> +
>>>>> +        if ( !v->domain->arch.vgic.has_its )
>>>>> +            goto write_ignore_32;
>>>>> +        if ( dabt.size != DABT_WORD ) goto bad_width;
>>>>> +
>>>>> +        vgic_lock(v);                   /* protects rdists_enabled */
>>>>
>>>> Getting back to the locking. I don't see any place where we get the domain
>>>> vgic lock before vCPU vgic lock. So this raises the question why this
>>>> ordering
>>>> and not moving this lock into vgic_vcpu_enable_lpis.
>>>>
>>>> At least this require documentation in the code and explanation in the
>>>> commit
>>>> message.
>>>
>>> It doesn't look like we need to take the v->arch.vgic.lock here. What is
>>> it protecting?
>>
>> The name of the function is a bit confusing. It does not take the vCPU vgic
>> lock but the domain vgic lock.
>>
>> I believe the vcpu is passed to avoid having v->domain in most of the callers.
>> But we should probably rename the function.
>>
>> In this case it protects vgic_vcpu_enable_lpis because you can configure the
>> number of LPIs per re-distributor but this is a domain wide value. I know the
>> spec is confusing on this.
>
> The quoting here is very unhelpful. In Andre's patch:

Oh, though my point about vgic_lock naming stands :).

>
> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
>      switch ( gicr_reg )
>      {
>      case VREG32(GICR_CTLR):
> -        /* LPI's not implemented */
> -        goto write_ignore_32;
> +    {
> +        unsigned long flags;
> +
> +        if ( !v->domain->arch.vgic.has_its )
> +            goto write_ignore_32;
> +        if ( dabt.size != DABT_WORD ) goto bad_width;
> +
> +        vgic_lock(v);                   /* protects rdists_enabled */
> +        spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +
> +        /* LPIs can only be enabled once, but never disabled again. */
> +        if ( (r & GICR_CTLR_ENABLE_LPIS) &&
> +             !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
> +            vgic_vcpu_enable_lpis(v);
> +
> +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +        vgic_unlock(v);
> +
> +        return 1;
> +    }
>
> My question is: do we need to take both vgic_lock and v->arch.vgic.lock?
> If so, why?

I will let Andre confirm here.

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 19/28] ARM: vITS: handle MAPD command
  2017-05-24  9:56       ` Julien Grall
@ 2017-05-24 13:09         ` Andre Przywara
  2017-05-25 18:55           ` Stefano Stabellini
  0 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-24 13:09 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 24/05/17 10:56, Julien Grall wrote:
> Hi Andre,
> 
> On 05/24/2017 10:10 AM, Andre Przywara wrote:
>> On 17/05/17 19:07, Julien Grall wrote:
>>>>  /*
>>>>   * Lookup the address of the Interrupt Translation Table associated
>>>> with
>>>>   * that device ID.
>>>> @@ -414,6 +429,133 @@ out_unlock:
>>>>      return ret;
>>>>  }
>>>>
>>>> +/* Must be called with the ITS lock held. */
>>>> +static int its_discard_event(struct virt_its *its,
>>>> +                             uint32_t vdevid, uint32_t vevid)
>>>> +{
>>>> +    struct pending_irq *p;
>>>> +    unsigned long flags;
>>>> +    struct vcpu *vcpu;
>>>> +    uint32_t vlpi;
>>>> +
>>>> +    ASSERT(spin_is_locked(&its->its_lock));
>>>> +
>>>> +    if ( !read_itte_locked(its, vdevid, vevid, &vcpu, &vlpi) )
>>>> +        return -ENOENT;
>>>> +
>>>> +    if ( vlpi == INVALID_LPI )
>>>> +        return -ENOENT;
>>>> +
>>>> +    /* Lock this VCPU's VGIC to make sure nobody is using the
>>>> pending_irq. */
>>>> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
>>>
>>> There is an interesting issue happening with this code. You don't check
>>> the content of the memory provided by the guest. So a malicious guest
>>> could craft the memory in order to setup mapping with known vlpi and a
>>> different vCPU.
>>>
>>> This would lead to use the wrong lock here and corrupt the list.
>>
>> What about this:
>> Right now (mostly due to the requirements of the INVALL implementation)
>> we store the VCPU ID in our struct pending_irq, populated upon MAPTI. So
>> originally this was just for caching (INVALL being the only user of
>> this), but I was wondering if we should move the actual instance of this
>> information to pending_irq instead of relying on the collection ID from
>> the ITS table. So we would never need to look up and trust the ITS
>> tables for this information anymore. Later with the VGIC rework we will
>> need this field anyway (even for SPIs).
>>
>> I think this should solve this threat, where a guest can manipulate Xen
>> by crafting the tables. Tinkering with the other information stored in
>> the tables should not harm Xen, the guest would just shoot itself in
>> the foot.
>>
>> Does that make sense?
> 
> I think so. If I understand correctly, with that solution we would not
> need to protect the memory provided by the guest?

Well, it gets better (though also a bit scary):
Currently we use the guest's ITS tables to translate a DeviceID/EventID
pair to a vLPI/vCPU pair. Now there is this new
gicv3_its_get_event_pending_irq() function, which also takes an ITS and
a DeviceID/EventID pair and gives us a struct pending_irq.
And here we have both the vLPI number and the VCPU number in there
already, so actually we don't need read_itte() anymore. And if we don't
read, we don't need to write. And if we don't write, we don't need to
access guest memory. So this seems to ripple through and allows us to
possibly drop the guest memory tables altogether.
Now we still use the collection table in guest memory, but I was
wondering if we could store the collection ID in the vcpu struct and use
some hashing scheme to do the reverse lookup. But that might be
something for some future cleanup / optimization series.

Am I missing something here? It sounds a bit scary that we can drop this
guest memory access scheme which gave us so many headaches and kept us
busy for some months now ...

Cheers,
Andre.


* Re: [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs
  2017-05-24  9:47           ` Julien Grall
@ 2017-05-24 17:49             ` Stefano Stabellini
  0 siblings, 0 replies; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-24 17:49 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Vijay Kilari, Andre Przywara, Vijaya Kumar K,
	xen-devel, Shanker Donthineni

On Wed, 24 May 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 05/23/2017 07:23 PM, Stefano Stabellini wrote:
> > On Tue, 23 May 2017, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 23/05/17 00:48, Stefano Stabellini wrote:
> > > > On Fri, 19 May 2017, Stefano Stabellini wrote:
> > > > > On Thu, 11 May 2017, Andre Przywara wrote:
> > > > > > When LPIs get unmapped by a guest, they might still be in some LR of
> > > > > > some VCPU. Nevertheless we remove the corresponding pending_irq
> > > > > > (possibly freeing it), and detect this case (irq_to_pending()
> > > > > > returns
> > > > > > NULL) when the LR gets cleaned up later.
> > > > > > However a *new* LPI may get mapped with the same number while the
> > > > > > old
> > > > > > LPI is *still* in some LR. To avoid getting the wrong state, we mark
> > > > > > every newly mapped LPI as PRISTINE, which means: has never been in
> > > > > > an
> > > > > > LR before. If we detect the LPI in an LR anyway, it must have been
> > > > > > an
> > > > > > older one, which we can simply retire.
> > > > > > Before inserting such a PRISTINE LPI into an LR, we must make sure
> > > > > > that
> > > > > > it's not already in another LR, as the architecture forbids two
> > > > > > interrupts with the same virtual IRQ number on one CPU.
> > > > > > 
> > > > > > Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> > > > > > ---
> > > > > >  xen/arch/arm/gic.c         | 55
> > > > > > +++++++++++++++++++++++++++++++++++++++++-----
> > > > > >  xen/include/asm-arm/vgic.h |  6 +++++
> > > > > >  2 files changed, 56 insertions(+), 5 deletions(-)
> > > > > > 
> > > > > > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > > > > > index fd3fa05..8bf0578 100644
> > > > > > --- a/xen/arch/arm/gic.c
> > > > > > +++ b/xen/arch/arm/gic.c
> > > > > > @@ -375,6 +375,8 @@ static inline void gic_set_lr(int lr, struct
> > > > > > pending_irq *p,
> > > > > >  {
> > > > > >      ASSERT(!local_irq_is_enabled());
> > > > > > 
> > > > > > +    clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status);
> > > > > > +
> > > > > >      gic_hw_ops->update_lr(lr, p, state);
> > > > > > 
> > > > > >      set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> > > > > > @@ -442,12 +444,41 @@ void gic_raise_inflight_irq(struct vcpu *v,
> > > > > > unsigned int virtual_irq)
> > > > > >  #endif
> > > > > >  }
> > > > > > 
> > > > > > +/*
> > > > > > + * Find an unused LR to insert an IRQ into. If this new interrupt
> > > > > > is a
> > > > > > + * PRISTINE LPI, scan the other LRs to avoid inserting the same IRQ
> > > > > > twice.
> > > > > > + */
> > > > > > +static int gic_find_unused_lr(struct vcpu *v, struct pending_irq
> > > > > > *p,
> > > > > > int lr)
> > > > > > +{
> > > > > > +    unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
> > > > > > +    unsigned long *lr_mask = (unsigned long *) &this_cpu(lr_mask);
> > > > > > +    struct gic_lr lr_val;
> > > > > > +
> > > > > > +    ASSERT(spin_is_locked(&v->arch.vgic.lock));
> > > > > > +
> > > > > > +    if ( test_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status) )
> > > > > 
> > > > > Maybe we should add an "unlikely".
> > > > > 
> > > > > I can see how this would be OKish at runtime, but at boot time there
> > > > > might be a bunch of PRISTINE_LPIs, but no MAPDs have been issued yet,
> > > > > right?
> > > 
> > > You cannot have any PRISTINE_LPIs without any MAPDs done. This bit will be
> > > set
> > > when you do the first MAPTI.
> > > 
> > > > > 
> > > > > I have a suggestion, I'll leave it to you and Julien if you want to do
> > > > > this now, or maybe consider it as a TODO item. I am OK either way (I
> > > > > don't want to delay the ITS any longer).
> > > > > 
> > > > > I am thinking we should do this scanning only after at least one MAPD
> > > > > has been issued for a given cpu at least once. I would resurrect the
> > > > > idea of a DISCARD flag, but not on the pending_irq, that I believe
> > > > > it's
> > > > > difficult to handle, but a single global DISCARD flag per struct vcpu.
> > > > > 
> > > > > On MAPD, we set DISCARD for the target vcpu of the LPI we are
> > > > > dropping.
> > > > > Next time we want to inject a PRISTINE_IRQ on that cpu interface, we
> > > > > scan all LRs for interrupts with a NULL pending_irq. We remove those
> > > > > from LRs, then we remove the DISCARD flag.
> > > > > 
> > > > > Do you think it would work?
> > > 
> > > I don't understand the point of doing that. OK, you will get the first
> > > PRISTINE_LPI "fast" (though likely the LRs will be empty), but all the
> > > others will be "slow" (though likely the LRs will be empty too).
> > > 
> > > The pain of implementing your suggestion does not seem to be worth it so far.
> > 
> > Let me explain it a bit better, I think I didn't clarify it well enough.
> > Let me also premise, that this would be fine to do later, it doesn't
> > have to be part of this patch.
> > 
> > When I wrote MAPD above, I actually meant any command that deletes an
> > existing pending_irq - vLPI mapping. Specifically, DISCARD, and MAPD
> > when the
> > 
> >     if ( !valid )
> >         /* Discard all events and remove pending LPIs. */
> >         its_unmap_device(its, devid);
> > 
> > code path is taken, which should not be the case at boot time, right?
> > Are there any other commands that remove a pending_irq - vLPI mapping
> > that I missed?
> 
> I don't think so.
> 
> > 
> > The idea is that we could add a VGIC_V3_LPIS_DISCARD flag to arch_vcpu.
> > VGIC_V3_LPIS_DISCARD is set on a DISCARD command, and on a MAPD (!valid)
> > command. If VGIC_V3_LPIS_DISCARD is not set, there is no need to scan
> > anything. If VGIC_V3_LPIS_DISCARD is set *and* we want to inject a
> > PRISTINE_IRQ, then we do the scanning.
> > 
> > When we do the scanning, we check all LRs for NULL pending_irq structs.
> > We clear them all in one go. Then we remove VGIC_V3_LPIS_DISCARD.
> 
> The problem we are trying to solve here is not about NULL pending_irq
> structs. It is that the previous interrupt may still be in the LRs, so we
> would end up with the same LPI twice in them. This will result in
> unpredictable behavior.
> 
> This could happen if you do:
> 
>     vCPU A               |   vCPU  B
>                          |
>     DISCARD vLPI1        |
>     MAPTI   vLPI1        |
> 
>                 interrupt injected on vCPU B
> 
>                          |   entering in the hyp
>                          |   clear_lrs
>                          |   vgic_vcpu_inject_irq
> 
> 
> clear_lrs will not remove the interrupt from LRs if it was already pending
> because pending_irq is not NULL anymore.
> 
> This is causing issues because we are trying to be clever with the LRs by not
> regenerating them at every entry/exit. This is causing trouble in many more
> places in the vGIC. IMHO we should attempt to regenerate them and see how this
> will affect the performance.

Yes, but if pending_irq is not NULL, then it will be marked as PRISTINE,
so it is still recognizable. We can still "clean it up".


> > This way, we get all PRISTINE_LPI fast, except for the very first one
> > after a DISCARD or MAPD (!valid) command.
> > 
> > Does it make more sense now? What do you think?
> 
> To be honest, I think we are trying to think about premature optimization
> without any number. We should first look at the vGIC rework and then see if
> this code will stay in place (I have the feeling it will disappear).
 
OK, no problem. Let's revisit this in the future.


* Re: [PATCH v9 03/28] ARM: GIC: Add checks for NULL pointer pending_irq's
  2017-05-22 17:15       ` Julien Grall
@ 2017-05-25 16:14         ` Andre Przywara
  0 siblings, 0 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-25 16:14 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 22/05/17 18:15, Julien Grall wrote:
> 
> 
> On 22/05/17 17:49, Andre Przywara wrote:
>> Hi,
> 
> Hi Andre,
> 
>> On 12/05/17 15:19, Julien Grall wrote:
>>> Hi Andre,
>>>
>>> On 11/05/17 18:53, Andre Przywara wrote:
>>>> For LPIs the struct pending_irq's are dynamically allocated and the
>>>> pointers will be stored in a radix tree. Since an LPI can be "unmapped"
>>>> at any time, teach the VGIC how to deal with irq_to_pending() returning
>>>> a NULL pointer.
>>>> We just do nothing in this case or clean up the LR if the virtual LPI
>>>> number was still in an LR.
>>>>
>>>> Those are all call sites for irq_to_pending(), as per:
>>>> "git grep irq_to_pending", and their evaluations:
>>>> (PROTECTED means: added NULL check and bailing out)
>>>>
>>>>     xen/arch/arm/gic.c:
>>>> gic_route_irq_to_guest(): only called for SPIs, added ASSERT()
>>>> gic_remove_irq_from_guest(): only called for SPIs, added ASSERT()
>>>> gic_remove_from_queues(): PROTECTED, called within VCPU VGIC lock
>>>> gic_raise_inflight_irq(): PROTECTED, called under VCPU VGIC lock
>>>> gic_raise_guest_irq(): PROTECTED, called under VCPU VGIC lock
>>>> gic_update_one_lr(): PROTECTED, called under VCPU VGIC lock
>>>
>>> Even they are protected, an ASSERT would be useful.
>>
>> I am not sure I get what you mean here.
>> With PROTECTED I meant that the code checks for a irq_to_pending()
>> returning NULL and reacts accordingly.
>> ASSERTs are only for making sure that those functions are never called
>> for LPIs(), but the other functions can be called with an LPI, and they
>> can now cope with a NULL pending_irq.
>>
>> So what do I miss here?
> 
> I mean adding an ASSERT(spin_is_locked(vgic->vcpu)) in those functions
> if it is not done yet.

Well, that's what I meant by PROTECTED: all of them have that ASSERT
already.
So I consider this done then.

Cheers,
Andre.


* Re: [PATCH v9 05/28] ARM: GICv3: forward pending LPIs to guests
  2017-05-22 22:03   ` Stefano Stabellini
@ 2017-05-25 16:42     ` Andre Przywara
  0 siblings, 0 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-25 16:42 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: xen-devel, Julien Grall, Shanker Donthineni, Vijaya Kumar K,
	Vijay Kilari

Hi,

On 22/05/17 23:03, Stefano Stabellini wrote:
> On Thu, 11 May 2017, Andre Przywara wrote:
>> Upon receiving an LPI on the host, we need to find the right VCPU and
>> virtual IRQ number to get this IRQ injected.
>> Iterate our two-level LPI table to find this information quickly when
>> the host takes an LPI. Call the existing injection function to let the
>> GIC emulation deal with this interrupt.
>> Also we enhance struct pending_irq to cache the pending bit and the
>> priority information for LPIs.
> 
> I can see that you added "uint8_t lpi_priority" to pending_irq. Where
> are we caching the pending bit?

Ah, that statement is a leftover from v5, where I introduced
GIC_IRQ_GUEST_LPI_PENDING. However we don't need an explicit pending
state at the moment (the VGIC rework will probably change this), so we
get away without that bit. Will drop those words.

> Also, I don't think the priority changes need to be part of this patch;
> without it I would give my reviewed-by.

OK, will split it.

Cheers,
Andre.

> 
>> Reading the information from there is
>> faster than accessing the property table from guest memory. Also it
>> uses some padding area, so it does not require more memory.
>> This introduces a do_LPI() as a hardware gic_ops and a function to
>> retrieve the (cached) priority value of an LPI and a vgic_ops.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>> ---
>>  xen/arch/arm/gic-v2.c            |  7 ++++
>>  xen/arch/arm/gic-v3-lpi.c        | 71 ++++++++++++++++++++++++++++++++++++++++
>>  xen/arch/arm/gic-v3.c            |  1 +
>>  xen/arch/arm/gic.c               |  8 ++++-
>>  xen/arch/arm/vgic-v2.c           |  7 ++++
>>  xen/arch/arm/vgic-v3.c           | 18 ++++++++++
>>  xen/arch/arm/vgic.c              |  7 +++-
>>  xen/include/asm-arm/domain.h     |  3 +-
>>  xen/include/asm-arm/gic.h        |  2 ++
>>  xen/include/asm-arm/gic_v3_its.h |  8 +++++
>>  xen/include/asm-arm/vgic.h       |  2 ++
>>  11 files changed, 131 insertions(+), 3 deletions(-)
>>


* Re: [PATCH v9 12/28] ARM: vGIC: advertise LPI support
  2017-05-23 17:47         ` Stefano Stabellini
  2017-05-24 10:10           ` Julien Grall
@ 2017-05-25 18:02           ` Andre Przywara
  2017-05-25 18:49             ` Stefano Stabellini
  1 sibling, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-25 18:02 UTC (permalink / raw)
  To: Stefano Stabellini, Julien Grall
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 23/05/17 18:47, Stefano Stabellini wrote:
> On Tue, 23 May 2017, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 22/05/17 23:19, Stefano Stabellini wrote:
>>> On Tue, 16 May 2017, Julien Grall wrote:
>>>>> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
>>>>> vcpu
>>>>> *v, mmio_info_t *info,
>>>>>      switch ( gicr_reg )
>>>>>      {
>>>>>      case VREG32(GICR_CTLR):
>>>>> -        /* LPI's not implemented */
>>>>> -        goto write_ignore_32;
>>>>> +    {
>>>>> +        unsigned long flags;
>>>>> +
>>>>> +        if ( !v->domain->arch.vgic.has_its )
>>>>> +            goto write_ignore_32;
>>>>> +        if ( dabt.size != DABT_WORD ) goto bad_width;
>>>>> +
>>>>> +        vgic_lock(v);                   /* protects rdists_enabled */
>>>>
>>>> Getting back to the locking. I don't see any place where we get the domain
>>>> vgic lock before vCPU vgic lock. So this raises the question why this
>>>> ordering
>>>> and not moving this lock into vgic_vcpu_enable_lpis.
>>>>
>>>> At least this require documentation in the code and explanation in the
>>>> commit
>>>> message.
>>>
>>> It doesn't look like we need to take the v->arch.vgic.lock here. What is
>>> it protecting?
>>
>> The name of the function is a bit confusing. It does not take the vCPU vgic
>> lock but the domain vgic lock.
>>
>> I believe the vcpu is passed to avoid having v->domain in most of the callers.
>> But we should probably rename the function.
>>
>> In this case it protects vgic_vcpu_enable_lpis because you can configure the
>> number of LPIs per re-distributor but this is a domain wide value. I know the
>> spec is confusing on this.
> 
> The quoting here is very unhelpful. In Andre's patch:
> 
> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
>      switch ( gicr_reg )
>      {
>      case VREG32(GICR_CTLR):
> -        /* LPI's not implemented */
> -        goto write_ignore_32;
> +    {
> +        unsigned long flags;
> +
> +        if ( !v->domain->arch.vgic.has_its )
> +            goto write_ignore_32;
> +        if ( dabt.size != DABT_WORD ) goto bad_width;
> +
> +        vgic_lock(v);                   /* protects rdists_enabled */
> +        spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +
> +        /* LPIs can only be enabled once, but never disabled again. */
> +        if ( (r & GICR_CTLR_ENABLE_LPIS) &&
> +             !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
> +            vgic_vcpu_enable_lpis(v);
> +
> +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +        vgic_unlock(v);
> +
> +        return 1;
> +    }
> 
> My question is: do we need to take both vgic_lock and v->arch.vgic.lock?

The domain lock (taken by vgic_lock()) protects rdists_enabled. This
variable stores whether at least one redistributor has LPIs enabled. Once
that is the case the property table is in use, and since the table is shared
across all redistributors, we must not change it anymore, even via
another redistributor which still has its LPIs disabled.
So while this looks like a per-redistributor (=per-VCPU)
property, it is actually per domain, hence this lock.
The VGIC VCPU lock is then used to naturally protect the enable bit
against multiple VCPUs accessing this register simultaneously - the
redists are MMIO mapped, but not banked, so this is possible.

Does that make sense?

Cheers,
Andre


* Re: [PATCH v9 12/28] ARM: vGIC: advertise LPI support
  2017-05-25 18:02           ` Andre Przywara
@ 2017-05-25 18:49             ` Stefano Stabellini
  2017-05-25 20:07               ` Julien Grall
  0 siblings, 1 reply; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-25 18:49 UTC (permalink / raw)
  To: Andre Przywara
  Cc: Stefano Stabellini, Vijay Kilari, Vijaya Kumar K, Julien Grall,
	xen-devel, Shanker Donthineni

On Thu, 25 May 2017, Andre Przywara wrote:
> Hi,
> 
> On 23/05/17 18:47, Stefano Stabellini wrote:
> > On Tue, 23 May 2017, Julien Grall wrote:
> >> Hi Stefano,
> >>
> >> On 22/05/17 23:19, Stefano Stabellini wrote:
> >>> On Tue, 16 May 2017, Julien Grall wrote:
> >>>>> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
> >>>>> vcpu
> >>>>> *v, mmio_info_t *info,
> >>>>>      switch ( gicr_reg )
> >>>>>      {
> >>>>>      case VREG32(GICR_CTLR):
> >>>>> -        /* LPI's not implemented */
> >>>>> -        goto write_ignore_32;
> >>>>> +    {
> >>>>> +        unsigned long flags;
> >>>>> +
> >>>>> +        if ( !v->domain->arch.vgic.has_its )
> >>>>> +            goto write_ignore_32;
> >>>>> +        if ( dabt.size != DABT_WORD ) goto bad_width;
> >>>>> +
> >>>>> +        vgic_lock(v);                   /* protects rdists_enabled */
> >>>>
> >>>> Getting back to the locking. I don't see any place where we get the domain
> >>>> vgic lock before vCPU vgic lock. So this raises the question why this
> >>>> ordering
> >>>> and not moving this lock into vgic_vcpu_enable_lpis.
> >>>>
> >>>> At least this require documentation in the code and explanation in the
> >>>> commit
> >>>> message.
> >>>
> >>> It doesn't look like we need to take the v->arch.vgic.lock here. What is
> >>> it protecting?
> >>
> >> The name of the function is a bit confusing. It does not take the vCPU vgic
> >> lock but the domain vgic lock.
> >>
> >> I believe the vcpu is passed to avoid having v->domain in most of the callers.
> >> But we should probably rename the function.
> >>
> >> In this case it protects vgic_vcpu_enable_lpis because you can configure the
> >> number of LPIs per re-distributor but this is a domain wide value. I know the
> >> spec is confusing on this.
> > 
> > The quoting here is very unhelpful. In Andre's patch:
> > 
> > @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
> >      switch ( gicr_reg )
> >      {
> >      case VREG32(GICR_CTLR):
> > -        /* LPI's not implemented */
> > -        goto write_ignore_32;
> > +    {
> > +        unsigned long flags;
> > +
> > +        if ( !v->domain->arch.vgic.has_its )
> > +            goto write_ignore_32;
> > +        if ( dabt.size != DABT_WORD ) goto bad_width;
> > +
> > +        vgic_lock(v);                   /* protects rdists_enabled */
> > +        spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > +
> > +        /* LPIs can only be enabled once, but never disabled again. */
> > +        if ( (r & GICR_CTLR_ENABLE_LPIS) &&
> > +             !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
> > +            vgic_vcpu_enable_lpis(v);
> > +
> > +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > +        vgic_unlock(v);
> > +
> > +        return 1;
> > +    }
> > 
> > My question is: do we need to take both vgic_lock and v->arch.vgic.lock?
> 
> The domain lock (taken by vgic_lock()) protects rdists_enabled. This
> variable stores whether at least one redistributor has LPIs enabled. Once
> that is the case the property table is in use, and since the table is shared
> across all redistributors, we must not change it anymore, even via
> another redistributor which still has its LPIs disabled.
> So while this looks like a per-redistributor (=per-VCPU)
> property, it is actually per domain, hence this lock.
> The VGIC VCPU lock is then used to naturally protect the enable bit
> against multiple VCPUs accessing this register simultaneously - the
> redists are MMIO mapped, but not banked, so this is possible.
> 
> Does that make sense?

If the VGIC VCPU lock is only used to protect VGIC_V3_LPIS_ENABLED,
couldn't we just read/write the bit atomically? It's just a bit after
all, it doesn't need a lock.


* Re: [PATCH v9 19/28] ARM: vITS: handle MAPD command
  2017-05-24 13:09         ` Andre Przywara
@ 2017-05-25 18:55           ` Stefano Stabellini
  2017-05-25 20:17             ` Julien Grall
  0 siblings, 1 reply; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-25 18:55 UTC (permalink / raw)
  To: Andre Przywara
  Cc: Stefano Stabellini, Vijay Kilari, Vijaya Kumar K, Julien Grall,
	xen-devel, Shanker Donthineni

On Wed, 24 May 2017, Andre Przywara wrote:
> Hi,
> 
> On 24/05/17 10:56, Julien Grall wrote:
> > Hi Andre,
> > 
> > On 05/24/2017 10:10 AM, Andre Przywara wrote:
> >> On 17/05/17 19:07, Julien Grall wrote:
> >>>>  /*
> >>>>   * Lookup the address of the Interrupt Translation Table associated
> >>>> with
> >>>>   * that device ID.
> >>>> @@ -414,6 +429,133 @@ out_unlock:
> >>>>      return ret;
> >>>>  }
> >>>>
> >>>> +/* Must be called with the ITS lock held. */
> >>>> +static int its_discard_event(struct virt_its *its,
> >>>> +                             uint32_t vdevid, uint32_t vevid)
> >>>> +{
> >>>> +    struct pending_irq *p;
> >>>> +    unsigned long flags;
> >>>> +    struct vcpu *vcpu;
> >>>> +    uint32_t vlpi;
> >>>> +
> >>>> +    ASSERT(spin_is_locked(&its->its_lock));
> >>>> +
> >>>> +    if ( !read_itte_locked(its, vdevid, vevid, &vcpu, &vlpi) )
> >>>> +        return -ENOENT;
> >>>> +
> >>>> +    if ( vlpi == INVALID_LPI )
> >>>> +        return -ENOENT;
> >>>> +
> >>>> +    /* Lock this VCPU's VGIC to make sure nobody is using the
> >>>> pending_irq. */
> >>>> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
> >>>
> >>> There is an interesting issue happening with this code. You don't check
> >>> the content of the memory provided by the guest. So a malicious guest
> >>> could craft the memory in order to setup mapping with known vlpi and a
> >>> different vCPU.
> >>>
> >>> This would lead to use the wrong lock here and corrupt the list.
> >>
> >> What about this:
> >> Right now (mostly due to the requirements of the INVALL implementation)
> >> we store the VCPU ID in our struct pending_irq, populated upon MAPTI. So
> >> originally this was just for caching (INVALL being the only user of
> >> this), but I was wondering if we should move the actual instance of this
> >> information to pending_irq instead of relying on the collection ID from
> >> the ITS table. So we would never need to look up and trust the ITS
> >> tables for this information anymore. Later with the VGIC rework we will
> >> need this field anyway (even for SPIs).
> >>
> >> I think this should solve this threat, where a guest can manipulate Xen
> >> by crafting the tables. Tinkering with the other information stored in
> >> the tables should not harm Xen, the guest would just shoot itself in
> >> the foot.
> >>
> >> Does that make sense?
> > 
> > I think so. If I understand correctly, with that solution we would not
> > need to protect the memory provided by the guest?
> 
> Well, it gets better (though also a bit scary):
> Currently we use the guest's ITS tables to translate a DeviceID/EventID
> pair to a vLPI/vCPU pair. Now there is this new
> gicv3_its_get_event_pending_irq() function, which also takes an ITS and
> a DeviceID/EventID pair and gives us a struct pending_irq.
> And here we have both the vLPI number and the VCPU number in there
> already, so actually we don't need read_itte() anymore. And if we don't
> read, we don't need to write. And if we don't write, we don't need to
> access guest memory. So this seems to ripple through and allows us to
> possibly drop the guest memory tables altogether.

Sounds like a good idea to me for DeviceID/EventID to vLPI/vCPU
translations.


> Now we still use the collection table in guest memory, but I was
> wondering if we could store the collection ID in the vcpu struct and use
> some hashing scheme to do the reverse lookup. But that might be
> something for some future cleanup / optimization series.

Leaving the security angle aside for a moment, I would prefer to keep
the guest memory accesses rather than adding another hashing scheme to
Xen for collection IDs.

Going back to security: it looks like it should be possible to check for
the validity of collection IDs without too much trouble?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v9 12/28] ARM: vGIC: advertise LPI support
  2017-05-25 18:49             ` Stefano Stabellini
@ 2017-05-25 20:07               ` Julien Grall
  2017-05-25 21:05                 ` Stefano Stabellini
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-25 20:07 UTC (permalink / raw)
  To: Stefano Stabellini, Andre Przywara
  Cc: xen-devel, Vijaya Kumar K, nd, Vijay Kilari, Shanker Donthineni

Hi Stefano,

On 25/05/2017 19:49, Stefano Stabellini wrote:
> On Thu, 25 May 2017, Andre Przywara wrote:
>> Hi,
>>
>> On 23/05/17 18:47, Stefano Stabellini wrote:
>>> On Tue, 23 May 2017, Julien Grall wrote:
>>>> Hi Stefano,
>>>>
>>>> On 22/05/17 23:19, Stefano Stabellini wrote:
>>>>> On Tue, 16 May 2017, Julien Grall wrote:
>>>>>>> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
>>>>>>> vcpu
>>>>>>> *v, mmio_info_t *info,
>>>>>>>      switch ( gicr_reg )
>>>>>>>      {
>>>>>>>      case VREG32(GICR_CTLR):
>>>>>>> -        /* LPI's not implemented */
>>>>>>> -        goto write_ignore_32;
>>>>>>> +    {
>>>>>>> +        unsigned long flags;
>>>>>>> +
>>>>>>> +        if ( !v->domain->arch.vgic.has_its )
>>>>>>> +            goto write_ignore_32;
>>>>>>> +        if ( dabt.size != DABT_WORD ) goto bad_width;
>>>>>>> +
>>>>>>> +        vgic_lock(v);                   /* protects rdists_enabled */
>>>>>>
>>>>>> Getting back to the locking. I don't see any place where we get the domain
>>>>>> vgic lock before vCPU vgic lock. So this raises the question why this
>>>>>> ordering
>>>>>> and not moving this lock into vgic_vcpu_enable_lpis.
>>>>>>
>>>>>> At least this require documentation in the code and explanation in the
>>>>>> commit
>>>>>> message.
>>>>>
>>>>> It doesn't look like we need to take the v->arch.vgic.lock here. What is
>>>>> it protecting?
>>>>
>>>> The name of the function is a bit confusing. It does not take the vCPU vgic
>>>> lock but the domain vgic lock.
>>>>
>>>> I believe the vcpu is passed to avoid having v->domain in most of the callers.
>>>> But we should probably rename the function.
>>>>
>>>> In this case it protects vgic_vcpu_enable_lpis because you can configure the
>>>> number of LPIs per re-distributor but this is a domain wide value. I know the
>>>> spec is confusing on this.
>>>
>>> The quoting here is very unhelpful. In Andre's patch:
>>>
>>> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
>>>      switch ( gicr_reg )
>>>      {
>>>      case VREG32(GICR_CTLR):
>>> -        /* LPI's not implemented */
>>> -        goto write_ignore_32;
>>> +    {
>>> +        unsigned long flags;
>>> +
>>> +        if ( !v->domain->arch.vgic.has_its )
>>> +            goto write_ignore_32;
>>> +        if ( dabt.size != DABT_WORD ) goto bad_width;
>>> +
>>> +        vgic_lock(v);                   /* protects rdists_enabled */
>>> +        spin_lock_irqsave(&v->arch.vgic.lock, flags);
>>> +
>>> +        /* LPIs can only be enabled once, but never disabled again. */
>>> +        if ( (r & GICR_CTLR_ENABLE_LPIS) &&
>>> +             !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
>>> +            vgic_vcpu_enable_lpis(v);
>>> +
>>> +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>>> +        vgic_unlock(v);
>>> +
>>> +        return 1;
>>> +    }
>>>
>>> My question is: do we need to take both vgic_lock and v->arch.vgic.lock?
>>
>> The domain lock (taken by vgic_lock()) protects rdists_enabled. This
>> variable stores whether at least one redistributor has LPIs enabled. In
>> this case the property table gets into use and since the table is shared
>> across all redistributors, we must not change it anymore, even on
>> another redistributor which has its LPIs still disabled.
>> So while this looks like this is a per-redistributor (=per-VCPU)
>> property, it is actually per domain, hence this lock.
>> The VGIC VCPU lock is then used to naturally protect the enable bit
>> against multiple VCPUs accessing this register simultaneously - the
>> redists are MMIO mapped, but not banked, so this is possible.
>>
>> Does that make sense?
>
> If the VGIC VCPU lock is only used to protect VGIC_V3_LPIS_ENABLED,
> couldn't we just read/write the bit atomically? It's just a bit after
> all, it doesn't need a lock.

The vGIC vCPU lock is also here to serialize access to the 
re-distributor state when necessary.

For instance you don't want to allow write in PENDBASER after LPIs have 
been enabled.

If you don't take the lock here, you would have a small race where 
PENDBASER might be written whilst the LPIs are getting enabled.

The code in PENDBASER today does not strictly require the locking, but I 
think we should keep the lock around. Moving to an atomic will not 
really benefit us here, as writes to those registers will be very rare, 
so we don't need very good performance.
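
To make the race concrete, here is a minimal compilable sketch (a toy
struct rdist with a pthread mutex standing in for the vCPU vgic lock;
none of this is the actual Xen code): once the CTLR and PENDBASER
handlers take the same lock, a PENDBASER write can no longer slip in
while LPIs are being enabled.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy re-distributor state; names mirror the discussion, not Xen's code. */
struct rdist {
    pthread_mutex_t lock;   /* stands in for v->arch.vgic.lock */
    bool lpis_enabled;      /* VGIC_V3_LPIS_ENABLED */
    uint64_t pendbaser;
};

/* GICR_CTLR write: enabling LPIs under the lock means no PENDBASER
 * write can interleave between the check and the enable. */
static void write_ctlr_enable_lpis(struct rdist *r)
{
    pthread_mutex_lock(&r->lock);
    if ( !r->lpis_enabled )
        r->lpis_enabled = true;     /* never cleared again */
    pthread_mutex_unlock(&r->lock);
}

/* GICR_PENDBASER write: ignored once LPIs are enabled.  Taking the
 * same lock closes the small race described above. */
static bool write_pendbaser(struct rdist *r, uint64_t val)
{
    bool written = false;

    pthread_mutex_lock(&r->lock);
    if ( !r->lpis_enabled )
    {
        r->pendbaser = val;
        written = true;
    }
    pthread_mutex_unlock(&r->lock);
    return written;
}
```

After the enable, PENDBASER writes are silently dropped, matching the
"write ignore once enabled" behaviour the thread discusses.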

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 19/28] ARM: vITS: handle MAPD command
  2017-05-25 18:55           ` Stefano Stabellini
@ 2017-05-25 20:17             ` Julien Grall
  2017-05-25 20:44               ` Stefano Stabellini
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-25 20:17 UTC (permalink / raw)
  To: Stefano Stabellini, Andre Przywara
  Cc: xen-devel, Vijaya Kumar K, nd, Vijay Kilari, Shanker Donthineni



On 25/05/2017 19:55, Stefano Stabellini wrote:
> On Wed, 24 May 2017, Andre Przywara wrote:
>> Hi,
>>
>> On 24/05/17 10:56, Julien Grall wrote:
>>> Hi Andre,
>>>
>>> On 05/24/2017 10:10 AM, Andre Przywara wrote:
>>>> On 17/05/17 19:07, Julien Grall wrote:
>>>>>>  /*
>>>>>>   * Lookup the address of the Interrupt Translation Table associated
>>>>>> with
>>>>>>   * that device ID.
>>>>>> @@ -414,6 +429,133 @@ out_unlock:
>>>>>>      return ret;
>>>>>>  }
>>>>>>
>>>>>> +/* Must be called with the ITS lock held. */
>>>>>> +static int its_discard_event(struct virt_its *its,
>>>>>> +                             uint32_t vdevid, uint32_t vevid)
>>>>>> +{
>>>>>> +    struct pending_irq *p;
>>>>>> +    unsigned long flags;
>>>>>> +    struct vcpu *vcpu;
>>>>>> +    uint32_t vlpi;
>>>>>> +
>>>>>> +    ASSERT(spin_is_locked(&its->its_lock));
>>>>>> +
>>>>>> +    if ( !read_itte_locked(its, vdevid, vevid, &vcpu, &vlpi) )
>>>>>> +        return -ENOENT;
>>>>>> +
>>>>>> +    if ( vlpi == INVALID_LPI )
>>>>>> +        return -ENOENT;
>>>>>> +
>>>>>> +    /* Lock this VCPU's VGIC to make sure nobody is using the
>>>>>> pending_irq. */
>>>>>> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
>>>>>
>>>>> There is an interesting issue happening with this code. You don't check
>>>>> the content of the memory provided by the guest. So a malicious guest
>>>>> could craft the memory in order to setup mapping with known vlpi and a
>>>>> different vCPU.
>>>>>
>>>>> This would lead to use the wrong lock here and corrupt the list.
>>>>
>>>> What about this:
>>>> Right now (mostly due to the requirements of the INVALL implementation)
>>>> we store the VCPU ID in our struct pending_irq, populated upon MAPTI. So
>>>> originally this was just for caching (INVALL being the only user of
>>>> this), but I was wondering if we should move the actual instance of this
>>>> information to pending_irq instead of relying on the collection ID from
>>>> the ITS table. So we would never need to look up and trust the ITS
>>>> tables for this information anymore. Later with the VGIC rework we will
>>>> need this field anyway (even for SPIs).
>>>>
>>>> I think this should solve this threat, where a guest can manipulate Xen
>>>> by crafting the tables. Tinkering with the other information stored in
>>>> the tables should not harm Xen, the guest would just shoot itself in
>>>> the foot.
>>>>
>>>> Does that make sense?
>>>
>>> I think so. If I understand correctly, with that solution we would not
>>> need to protect the memory provided by the guest?
>>
>> Well, it gets better (though also a bit scary):
>> Currently we use the guest's ITS tables to translate a DeviceID/EventID
>> pair to a vLPI/vCPU pair. Now there is this new
>> gicv3_its_get_event_pending_irq() function, which also takes an ITS and
>> a DeviceID/EventID pair and gives us a struct pending_irq.
>> And here we have both the vLPI number and the VCPU number in there
>> already, so actually we don't need read_itte() anymore. And if we don't
>> read, we don't need write. And if we don't write, we don't need to
>> access guest memory. So this seems to ripple through and allows us to
>> possibly dump the guest memory tables altogether.
>
> Sounds like a good idea to me for DeviceID/EventID to vLPI/vCPU
> translations.
>
>
>> Now we still use the collection table in guest memory, but I was
>> wondering if we could store the collection ID in the vcpu struct and use
>> some hashing scheme to do the reverse lookup. But that might be
>> something for some future cleanup / optimization series.
>
> Leaving the security angle aside for a moment, I would prefer to keep
> the guest memory accesses rather than adding another hashing scheme to
> Xen for collection IDs.

The spec only requires you to implement max cpus + 1 collections. I don't 
think a hashing scheme would be necessary here. It is a simple array (1 
byte per entry today).

>
> Going back to security: it looks like it should be possible to check for
> the validity of collection IDs without too much trouble?

If we store everything in the pending_irq, the use of the collection table 
would be limited to a few commands (e.g. MOVI, MAPTI...). We don't much care 
if the guest modifies the collection table as long as we check that the vCPU 
is valid.

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 19/28] ARM: vITS: handle MAPD command
  2017-05-25 20:17             ` Julien Grall
@ 2017-05-25 20:44               ` Stefano Stabellini
  2017-05-26  8:16                 ` Andre Przywara
  0 siblings, 1 reply; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-25 20:44 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Vijay Kilari, Andre Przywara, Vijaya Kumar K,
	xen-devel, nd, Shanker Donthineni

On Thu, 25 May 2017, Julien Grall wrote:
> On 25/05/2017 19:55, Stefano Stabellini wrote:
> > On Wed, 24 May 2017, Andre Przywara wrote:
> > > Hi,
> > > 
> > > On 24/05/17 10:56, Julien Grall wrote:
> > > > Hi Andre,
> > > > 
> > > > On 05/24/2017 10:10 AM, Andre Przywara wrote:
> > > > > On 17/05/17 19:07, Julien Grall wrote:
> > > > > > >  /*
> > > > > > >   * Lookup the address of the Interrupt Translation Table
> > > > > > > associated
> > > > > > > with
> > > > > > >   * that device ID.
> > > > > > > @@ -414,6 +429,133 @@ out_unlock:
> > > > > > >      return ret;
> > > > > > >  }
> > > > > > > 
> > > > > > > +/* Must be called with the ITS lock held. */
> > > > > > > +static int its_discard_event(struct virt_its *its,
> > > > > > > +                             uint32_t vdevid, uint32_t vevid)
> > > > > > > +{
> > > > > > > +    struct pending_irq *p;
> > > > > > > +    unsigned long flags;
> > > > > > > +    struct vcpu *vcpu;
> > > > > > > +    uint32_t vlpi;
> > > > > > > +
> > > > > > > +    ASSERT(spin_is_locked(&its->its_lock));
> > > > > > > +
> > > > > > > +    if ( !read_itte_locked(its, vdevid, vevid, &vcpu, &vlpi) )
> > > > > > > +        return -ENOENT;
> > > > > > > +
> > > > > > > +    if ( vlpi == INVALID_LPI )
> > > > > > > +        return -ENOENT;
> > > > > > > +
> > > > > > > +    /* Lock this VCPU's VGIC to make sure nobody is using the
> > > > > > > pending_irq. */
> > > > > > > +    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
> > > > > > 
> > > > > > There is an interesting issue happening with this code. You don't
> > > > > > check
> > > > > > the content of the memory provided by the guest. So a malicious
> > > > > > guest
> > > > > > could craft the memory in order to setup mapping with known vlpi and
> > > > > > a
> > > > > > different vCPU.
> > > > > > 
> > > > > > This would lead to use the wrong lock here and corrupt the list.
> > > > > 
> > > > > What about this:
> > > > > Right now (mostly due to the requirements of the INVALL
> > > > > implementation)
> > > > > we store the VCPU ID in our struct pending_irq, populated upon MAPTI.
> > > > > So
> > > > > originally this was just for caching (INVALL being the only user of
> > > > > this), but I was wondering if we should move the actual instance of
> > > > > this
> > > > > information to pending_irq instead of relying on the collection ID
> > > > > from
> > > > > the ITS table. So we would never need to look up and trust the ITS
> > > > > tables for this information anymore. Later with the VGIC rework we
> > > > > will
> > > > > need this field anyway (even for SPIs).
> > > > > 
> > > > > I think this should solve this threat, where a guest can manipulate
> > > > > Xen
> > > > > by crafting the tables. Tinkering with the other information stored in
> > > > > the tables should not harm Xen, the guest would just shoot itself in
> > > > > the foot.
> > > > > 
> > > > > Does that make sense?
> > > > 
> > > > I think so. If I understand correctly, with that solution we would not
> > > > need to protect the memory provided by the guest?
> > > 
> > > Well, it gets better (though also a bit scary):
> > > Currently we use the guest's ITS tables to translate a DeviceID/EventID
> > > pair to a vLPI/vCPU pair. Now there is this new
> > > gicv3_its_get_event_pending_irq() function, which also takes an ITS and
> > > a DeviceID/EventID pair and gives us a struct pending_irq.
> > > And here we have both the vLPI number and the VCPU number in there
> > > already, so actually we don't need read_itte() anymore. And if we don't
> > > read, we don't need write. And if we don't write, we don't need to
> > > access guest memory. So this seems to ripple through and allows us to
> > > possibly dump the guest memory tables altogether.
> > 
> > Sounds like a good idea to me for DeviceID/EventID to vLPI/vCPU
> > translations.
> > 
> > 
> > > Now we still use the collection table in guest memory, but I was
> > > wondering if we could store the collection ID in the vcpu struct and use
> > > some hashing scheme to do the reverse lookup. But that might be
> > > something for some future cleanup / optimization series.
> > 
> > Leaving the security angle aside for a moment, I would prefer to keep
> > the guest memory accesses rather than adding another hashing scheme to
> > Xen for collection IDs.
> 
> The spec only requires you to implement max cpus + 1 collections. I don't think
> a hashing scheme would be necessary here. It is a simple array (1 byte per
> entry today).
> 
> > 
> > Going back to security: it looks like it should be possible to check for
> > the validity of collection IDs without too much trouble?
> 
> If we store everything in the pending_irq, the use of the collection table would
> be limited to a few commands (e.g. MOVI, MAPTI...). We don't much care if the
> guest modifies the collection table as long as we check that the vCPU is valid.

That's what I thought. In that case we might as well keep the info in
guest memory.


* Re: [PATCH v9 12/28] ARM: vGIC: advertise LPI support
  2017-05-25 20:07               ` Julien Grall
@ 2017-05-25 21:05                 ` Stefano Stabellini
  2017-05-26 10:19                   ` Julien Grall
  0 siblings, 1 reply; 108+ messages in thread
From: Stefano Stabellini @ 2017-05-25 21:05 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Vijay Kilari, Andre Przywara, Vijaya Kumar K,
	xen-devel, nd, Shanker Donthineni

On Thu, 25 May 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 25/05/2017 19:49, Stefano Stabellini wrote:
> > On Thu, 25 May 2017, Andre Przywara wrote:
> > > Hi,
> > > 
> > > On 23/05/17 18:47, Stefano Stabellini wrote:
> > > > On Tue, 23 May 2017, Julien Grall wrote:
> > > > > Hi Stefano,
> > > > > 
> > > > > On 22/05/17 23:19, Stefano Stabellini wrote:
> > > > > > On Tue, 16 May 2017, Julien Grall wrote:
> > > > > > > > @@ -436,8 +473,26 @@ static int
> > > > > > > > __vgic_v3_rdistr_rd_mmio_write(struct
> > > > > > > > vcpu
> > > > > > > > *v, mmio_info_t *info,
> > > > > > > >      switch ( gicr_reg )
> > > > > > > >      {
> > > > > > > >      case VREG32(GICR_CTLR):
> > > > > > > > -        /* LPI's not implemented */
> > > > > > > > -        goto write_ignore_32;
> > > > > > > > +    {
> > > > > > > > +        unsigned long flags;
> > > > > > > > +
> > > > > > > > +        if ( !v->domain->arch.vgic.has_its )
> > > > > > > > +            goto write_ignore_32;
> > > > > > > > +        if ( dabt.size != DABT_WORD ) goto bad_width;
> > > > > > > > +
> > > > > > > > +        vgic_lock(v);                   /* protects
> > > > > > > > rdists_enabled */
> > > > > > > 
> > > > > > > Getting back to the locking. I don't see any place where we get
> > > > > > > the domain
> > > > > > > vgic lock before vCPU vgic lock. So this raises the question why
> > > > > > > this
> > > > > > > ordering
> > > > > > > and not moving this lock into vgic_vcpu_enable_lpis.
> > > > > > > 
> > > > > > > At least this require documentation in the code and explanation in
> > > > > > > the
> > > > > > > commit
> > > > > > > message.
> > > > > > 
> > > > > > It doesn't look like we need to take the v->arch.vgic.lock here.
> > > > > > What is
> > > > > > it protecting?
> > > > > 
> > > > > The name of the function is a bit confusing. It does not take the vCPU
> > > > > vgic
> > > > > lock but the domain vgic lock.
> > > > > 
> > > > > I believe the vcpu is passed to avoid having v->domain in most of the
> > > > > callers.
> > > > > But we should probably rename the function.
> > > > > 
> > > > > In this case it protects vgic_vcpu_enable_lpis because you can
> > > > > configure the
> > > > > number of LPIs per re-distributor but this is a domain wide value. I
> > > > > know the
> > > > > spec is confusing on this.
> > > > 
> > > > The quoting here is very unhelpful. In Andre's patch:
> > > > 
> > > > @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
> > > > vcpu *v, mmio_info_t *info,
> > > >      switch ( gicr_reg )
> > > >      {
> > > >      case VREG32(GICR_CTLR):
> > > > -        /* LPI's not implemented */
> > > > -        goto write_ignore_32;
> > > > +    {
> > > > +        unsigned long flags;
> > > > +
> > > > +        if ( !v->domain->arch.vgic.has_its )
> > > > +            goto write_ignore_32;
> > > > +        if ( dabt.size != DABT_WORD ) goto bad_width;
> > > > +
> > > > +        vgic_lock(v);                   /* protects rdists_enabled */
> > > > +        spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > > > +
> > > > +        /* LPIs can only be enabled once, but never disabled again. */
> > > > +        if ( (r & GICR_CTLR_ENABLE_LPIS) &&
> > > > +             !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
> > > > +            vgic_vcpu_enable_lpis(v);
> > > > +
> > > > +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > > > +        vgic_unlock(v);
> > > > +
> > > > +        return 1;
> > > > +    }
> > > > 
> > > > My question is: do we need to take both vgic_lock and v->arch.vgic.lock?
> > > 
> > > The domain lock (taken by vgic_lock()) protects rdists_enabled. This
> > > variable stores whether at least one redistributor has LPIs enabled. In
> > > this case the property table gets into use and since the table is shared
> > > across all redistributors, we must not change it anymore, even on
> > > another redistributor which has its LPIs still disabled.
> > > So while this looks like this is a per-redistributor (=per-VCPU)
> > > property, it is actually per domain, hence this lock.
> > > The VGIC VCPU lock is then used to naturally protect the enable bit
> > > against multiple VCPUs accessing this register simultaneously - the
> > > redists are MMIO mapped, but not banked, so this is possible.
> > > 
> > > Does that make sense?
> > 
> > If the VGIC VCPU lock is only used to protect VGIC_V3_LPIS_ENABLED,
> > couldn't we just read/write the bit atomically? It's just a bit after
> > all, it doesn't need a lock.
> 
> The vGIC vCPU lock is also here to serialize access to the re-distributor
> state when necessary.
> 
> For instance you don't want to allow write in PENDBASER after LPIs have been
> enabled.
> 
> If you don't take the lock here, you would have a small race where PENDBASER
> might be written whilst the LPIs are getting enabled.
> 
> The code in PENDBASER today does not strictly require the locking, but I think
> we should keep the lock around. Moving to an atomic will not really benefit
> us here, as writes to those registers will be very rare, so we don't need very
> good performance.

I suggested the atomic as a way to replace the lock, to reduce the
number of lock order dependencies, rather than for performance (who
cares about performance for this case). If all accesses to
VGIC_V3_LPIS_ENABLED are atomic, then we wouldn't need the lock. 

Another maybe simpler way to keep the vgic vcpu lock but avoid
introducing the vgic domain lock -> vgic vcpu lock dependency (the less
the better) would be to take the vgic vcpu lock first, release it, then
take the vgic domain lock and call vgic_vcpu_enable_lpis after.  In
pseudo-code:

    vgic vcpu lock
    read old value of VGIC_V3_LPIS_ENABLED
    write new value of VGIC_V3_LPIS_ENABLED
    vgic vcpu unlock

    vgic domain lock
    vgic_vcpu_enable_lpis (minus the setting of arch.vgic.flags)
    vgic domain unlock

It doesn't look like we need to set VGIC_V3_LPIS_ENABLED within
vgic_vcpu_enable_lpis, so this seems to be working. What do you think?
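
A compilable toy version of this pseudo-code (C11 atomics and a pthread
mutex standing in for Xen's primitives; all names are illustrative): the
atomic compare-and-swap replaces the vCPU vgic lock on the enable bit,
and only the first enabler ever reaches the domain lock, so the
domain-lock -> vcpu-lock ordering dependency disappears.

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool lpis_enabled;                 /* VGIC_V3_LPIS_ENABLED */
static pthread_mutex_t domain_lock = PTHREAD_MUTEX_INITIALIZER;
static bool rdists_enabled;

static void vgic_vcpu_enable_lpis(void)
{
    rdists_enabled = true;  /* domain-wide: property table now in use */
}

static void write_gicr_ctlr(bool enable)
{
    bool expected = false;

    /* Phase 1: atomic test-and-set of the enable bit; LPIs can only be
     * enabled once, never disabled again. */
    if ( !enable ||
         !atomic_compare_exchange_strong(&lpis_enabled, &expected, true) )
        return;                          /* already enabled, or a no-op */

    /* Phase 2: exactly one vCPU gets here, under the domain lock. */
    pthread_mutex_lock(&domain_lock);
    vgic_vcpu_enable_lpis();
    pthread_mutex_unlock(&domain_lock);
}
```

A second concurrent enabler loses the compare-and-swap and returns
without touching the domain-wide state.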


* Re: [PATCH v9 19/28] ARM: vITS: handle MAPD command
  2017-05-25 20:44               ` Stefano Stabellini
@ 2017-05-26  8:16                 ` Andre Przywara
  0 siblings, 0 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-26  8:16 UTC (permalink / raw)
  To: Stefano Stabellini, Julien Grall
  Cc: xen-devel, Vijaya Kumar K, nd, Vijay Kilari, Shanker Donthineni

Hi,

On 25/05/17 21:44, Stefano Stabellini wrote:
> On Thu, 25 May 2017, Julien Grall wrote:
>> On 25/05/2017 19:55, Stefano Stabellini wrote:
>>> On Wed, 24 May 2017, Andre Przywara wrote:
>>>> Hi,
>>>>
>>>> On 24/05/17 10:56, Julien Grall wrote:
>>>>> Hi Andre,
>>>>>
>>>>> On 05/24/2017 10:10 AM, Andre Przywara wrote:
>>>>>> On 17/05/17 19:07, Julien Grall wrote:
>>>>>>>>  /*
>>>>>>>>   * Lookup the address of the Interrupt Translation Table
>>>>>>>> associated
>>>>>>>> with
>>>>>>>>   * that device ID.
>>>>>>>> @@ -414,6 +429,133 @@ out_unlock:
>>>>>>>>      return ret;
>>>>>>>>  }
>>>>>>>>
>>>>>>>> +/* Must be called with the ITS lock held. */
>>>>>>>> +static int its_discard_event(struct virt_its *its,
>>>>>>>> +                             uint32_t vdevid, uint32_t vevid)
>>>>>>>> +{
>>>>>>>> +    struct pending_irq *p;
>>>>>>>> +    unsigned long flags;
>>>>>>>> +    struct vcpu *vcpu;
>>>>>>>> +    uint32_t vlpi;
>>>>>>>> +
>>>>>>>> +    ASSERT(spin_is_locked(&its->its_lock));
>>>>>>>> +
>>>>>>>> +    if ( !read_itte_locked(its, vdevid, vevid, &vcpu, &vlpi) )
>>>>>>>> +        return -ENOENT;
>>>>>>>> +
>>>>>>>> +    if ( vlpi == INVALID_LPI )
>>>>>>>> +        return -ENOENT;
>>>>>>>> +
>>>>>>>> +    /* Lock this VCPU's VGIC to make sure nobody is using the
>>>>>>>> pending_irq. */
>>>>>>>> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
>>>>>>>
>>>>>>> There is an interesting issue happening with this code. You don't
>>>>>>> check
>>>>>>> the content of the memory provided by the guest. So a malicious
>>>>>>> guest
>>>>>>> could craft the memory in order to setup mapping with known vlpi and
>>>>>>> a
>>>>>>> different vCPU.
>>>>>>>
>>>>>>> This would lead to use the wrong lock here and corrupt the list.
>>>>>>
>>>>>> What about this:
>>>>>> Right now (mostly due to the requirements of the INVALL
>>>>>> implementation)
>>>>>> we store the VCPU ID in our struct pending_irq, populated upon MAPTI.
>>>>>> So
>>>>>> originally this was just for caching (INVALL being the only user of
>>>>>> this), but I was wondering if we should move the actual instance of
>>>>>> this
>>>>>> information to pending_irq instead of relying on the collection ID
>>>>>> from
>>>>>> the ITS table. So we would never need to look up and trust the ITS
>>>>>> tables for this information anymore. Later with the VGIC rework we
>>>>>> will
>>>>>> need this field anyway (even for SPIs).
>>>>>>
>>>>>> I think this should solve this threat, where a guest can manipulate
>>>>>> Xen
>>>>>> by crafting the tables. Tinkering with the other information stored in
>>>>>> the tables should not harm Xen, the guest would just shoot itself in
>>>>>> the foot.
>>>>>>
>>>>>> Does that make sense?
>>>>>
>>>>> I think so. If I understand correctly, with that solution we would not
>>>>> need to protect the memory provided by the guest?
>>>>
>>>> Well, it gets better (though also a bit scary):
>>>> Currently we use the guest's ITS tables to translate a DeviceID/EventID
>>>> pair to a vLPI/vCPU pair. Now there is this new
>>>> gicv3_its_get_event_pending_irq() function, which also takes an ITS and
>>>> a DeviceID/EventID pair and gives us a struct pending_irq.
>>>> And here we have both the vLPI number and the VCPU number in there
>>>> already, so actually we don't need read_itte() anymore. And if we don't
>>>> read, we don't need write. And if we don't write, we don't need to
>>>> access guest memory. So this seems to ripple through and allows us to
>>>> possibly dump the guest memory tables altogether.
>>>
>>> Sounds like a good idea to me for DeviceID/EventID to vLPI/vCPU
>>> translations.
>>>
>>>
>>>> Now we still use the collection table in guest memory, but I was
>>>> wondering if we could store the collection ID in the vcpu struct and use
>>>> some hashing scheme to do the reverse lookup. But that might be
>>>> something for some future cleanup / optimization series.
>>>
>>> Leaving the security angle aside for a moment, I would prefer to keep
>>> the guest memory accesses rather than adding another hashing scheme to
>>> Xen for collection IDs.
>>
>> The spec only requires you to implement max cpus + 1 collections. I don't think
>> a hashing scheme would be necessary here. It is a simple array (1 byte per
>> entry today).
>>
>>>
>>> Going back to security: it looks like it should be possible to check for
>>> the validity of collection IDs without too much trouble?
>>
>> If we store everything in the pending_irq, the use of the collection table would
>> be limited to a few commands (e.g. MOVI, MAPTI...). We don't much care if the
>> guest modifies the collection table as long as we check that the vCPU is valid.
> 
> That's what I thought. In that case we might as well keep the info in
> guest memory.

Well, if we have a chance to drop guest memory accesses from the ITS
altogether, we should consider this.
But trying to implement this I saw that this requires quite some code
changes, which Julien suggested to postpone for the rework. And I agree.
So I kept it as it is now and added TODOs.

Cheers,
Andre


* Re: [PATCH v9 15/28] ARM: vITS: provide access to struct pending_irq
  2017-05-22 17:19       ` Julien Grall
@ 2017-05-26  9:10         ` Andre Przywara
  2017-05-26 10:00           ` Julien Grall
  0 siblings, 1 reply; 108+ messages in thread
From: Andre Przywara @ 2017-05-26  9:10 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi,

On 22/05/17 18:19, Julien Grall wrote:
> 
> 
> On 22/05/17 17:50, Andre Przywara wrote:
>> Hi,
> 
> Hi Andre,
> 
>> On 17/05/17 16:35, Julien Grall wrote:
>>>> +    }
>>>> +    spin_unlock(&d->arch.vgic.its_devices_lock);
>>>> +
>>>> +    return pirq;
>>>> +}
>>>> +
>>>> +struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
>>>> +                                                    paddr_t
>>>> vdoorbell_address,
>>>> +                                                    uint32_t vdevid,
>>>> +                                                    uint32_t veventid)
>>>
>>> s/veventid/eventid/
>>>
>>>> +{
>>>> +    return get_event_pending_irq(d, vdoorbell_address, vdevid,
>>>> veventid, NULL);
>>>> +}
>>>
>>> This wrapper looks a bit pointless to me. Why don't you directly expose
>>> get_event_pending_irq(...)?
>>
>> I don't want to expose host_lpi in the exported function, because it's
>> of no need for the callers and rather cumbersome for them to pass NULL
>> or the like. But then the algorithm to find host_lpi and pirq is
>> basically the same, so I came up with this joint static function and an
>> exported wrapper, which hides the host_lpi.
>> And there is one user (in gicv3_assign_guest_event()) which needs both,
>> so ...
>> If you can think of a better way to address this, I am all ears.
> 
> It is not that bad to pass NULL everywhere. We already have some other
> functions like that.
> 
> How about adding the wrapper as a static inline in the header?

The host LPI is an internal affair of gic-v3-its.c (the parts caring
about the host control of the ITS). The data structure describing this
is private to this file and not exported.

The virtual ITS emulation does not need to know about the host LPI, that
would break the abstraction. So I prefer to simply not export this.

And I would prefer code design considerations over the cost of one
unconditional branch here.
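
The pattern under discussion, reduced to a compilable toy (names and
values are illustrative, not Xen's actual code): a file-private lookup
that also reports the host LPI, plus a thin exported wrapper that hides
it from the virtual ITS emulation.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct pending_irq { uint32_t vlpi; };

static struct pending_irq the_pirq = { .vlpi = 8192 };

/* Internal lookup: also reports the host LPI, which stays private to
 * this file (the value here is a placeholder). */
static struct pending_irq *get_event_pending_irq(uint32_t evid,
                                                 uint32_t *host_lpi)
{
    if ( evid != 0 )
        return NULL;
    if ( host_lpi )
        *host_lpi = 0x2000;   /* internal detail, never exported */
    return &the_pirq;
}

/* Exported wrapper: hides the host LPI so the virtual ITS code cannot
 * grow a dependency on it.  Costs one unconditional branch. */
struct pending_irq *gicv3_its_get_event_pending_irq(uint32_t evid)
{
    return get_event_pending_irq(evid, NULL);
}
```

Callers inside the file that need both values (as
gicv3_assign_guest_event() does) use the static function directly;
everyone else only ever sees the wrapper.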

Cheers,
Andre.


* Re: [PATCH v9 15/28] ARM: vITS: provide access to struct pending_irq
  2017-05-26  9:10         ` Andre Przywara
@ 2017-05-26 10:00           ` Julien Grall
  0 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-05-26 10:00 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni



On 26/05/17 10:10, Andre Przywara wrote:
> Hi,
>
> On 22/05/17 18:19, Julien Grall wrote:
>>
>>
>> On 22/05/17 17:50, Andre Przywara wrote:
>>> Hi,
>>
>> Hi Andre,
>>
>>> On 17/05/17 16:35, Julien Grall wrote:
>>>>> +    }
>>>>> +    spin_unlock(&d->arch.vgic.its_devices_lock);
>>>>> +
>>>>> +    return pirq;
>>>>> +}
>>>>> +
>>>>> +struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
>>>>> +                                                    paddr_t
>>>>> vdoorbell_address,
>>>>> +                                                    uint32_t vdevid,
>>>>> +                                                    uint32_t veventid)
>>>>
>>>> s/veventid/eventid/
>>>>
>>>>> +{
>>>>> +    return get_event_pending_irq(d, vdoorbell_address, vdevid,
>>>>> veventid, NULL);
>>>>> +}
>>>>
>>>> This wrapper looks a bit pointless to me. Why don't you directly expose
>>>> get_event_pending_irq(...)?
>>>
>>> I don't want to expose host_lpi in the exported function, because it's
>>> of no need for the callers and rather cumbersome for them to pass NULL
>>> or the like. But then the algorithm to find host_lpi and pirq is
>>> basically the same, so I came up with this joint static function and an
>>> exported wrapper, which hides the host_lpi.
>>> And there is one user (in gicv3_assign_guest_event()) which needs both,
>>> so ...
>>> If you can think of a better way to address this, I am all ears.
>>
>> It is not that bad to pass NULL everywhere. We already have some other
>> functions like that.
>>
>> How about adding the wrapper as a static inline in the header?
>
> The host LPI is an internal affair of gic-v3-its.c (the parts caring
> about the host control of the ITS). The data structure describing this
> is private to this file and not exported.
>
> The virtual ITS emulation does not need to know about the host LPI, that
> would break the abstraction. So I prefer to simply not export this.

So you never envision someone requiring the host LPI, even for debug purposes?

AFAICT, there is no other way to get the host LPI if necessary. It 
really does not hurt to expose it and provide a wrapper.

>
> And I would prefer code design considerations over the cost of one
> unconditional branch here.

As you may know, I am all in favor of more helpers over the cost of one 
unconditional branch (see the callback example) when it results in 
better code design.

But here it is not about code design, it is more about what kind of 
information you would need outside (see above).

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 12/28] ARM: vGIC: advertise LPI support
  2017-05-25 21:05                 ` Stefano Stabellini
@ 2017-05-26 10:19                   ` Julien Grall
  2017-05-26 17:12                     ` Andre Przywara
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-05-26 10:19 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Vijay Kilari, Andre Przywara, Vijaya Kumar K, xen-devel, nd,
	Shanker Donthineni

Hi Stefano,

On 25/05/17 22:05, Stefano Stabellini wrote:
> On Thu, 25 May 2017, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 25/05/2017 19:49, Stefano Stabellini wrote:
>>> On Thu, 25 May 2017, Andre Przywara wrote:
>>>> Hi,
>>>>
>>>> On 23/05/17 18:47, Stefano Stabellini wrote:
>>>>> On Tue, 23 May 2017, Julien Grall wrote:
>>>>>> Hi Stefano,
>>>>>>
>>>>>> On 22/05/17 23:19, Stefano Stabellini wrote:
>>>>>>> On Tue, 16 May 2017, Julien Grall wrote:
>>>>>>>>> @@ -436,8 +473,26 @@ static int
>>>>>>>>> __vgic_v3_rdistr_rd_mmio_write(struct
>>>>>>>>> vcpu
>>>>>>>>> *v, mmio_info_t *info,
>>>>>>>>>      switch ( gicr_reg )
>>>>>>>>>      {
>>>>>>>>>      case VREG32(GICR_CTLR):
>>>>>>>>> -        /* LPI's not implemented */
>>>>>>>>> -        goto write_ignore_32;
>>>>>>>>> +    {
>>>>>>>>> +        unsigned long flags;
>>>>>>>>> +
>>>>>>>>> +        if ( !v->domain->arch.vgic.has_its )
>>>>>>>>> +            goto write_ignore_32;
>>>>>>>>> +        if ( dabt.size != DABT_WORD ) goto bad_width;
>>>>>>>>> +
>>>>>>>>> +        vgic_lock(v);                   /* protects
>>>>>>>>> rdists_enabled */
>>>>>>>>
>>>>>>>> Getting back to the locking. I don't see any place where we get
>>>>>>>> the domain
>>>>>>>> vgic lock before vCPU vgic lock. So this raises the question why
>>>>>>>> this
>>>>>>>> ordering
>>>>>>>> and not moving this lock into vgic_vcpu_enable_lpis.
>>>>>>>>
>>>>>>>> At least this require documentation in the code and explanation in
>>>>>>>> the
>>>>>>>> commit
>>>>>>>> message.
>>>>>>>
>>>>>>> It doesn't look like we need to take the v->arch.vgic.lock here.
>>>>>>> What is
>>>>>>> it protecting?
>>>>>>
>>>>>> The name of the function is a bit confusing. It does not take the vCPU
>>>>>> vgic
>>>>>> lock but the domain vgic lock.
>>>>>>
>>>>>> I believe the vcpu is passed to avoid having v->domain in most of the
>>>>>> callers.
>>>>>> But we should probably rename the function.
>>>>>>
>>>>>> In this case it protects vgic_vcpu_enable_lpis because you can
>>>>>> configure the
>>>>>> number of LPIs per re-distributor but this is a domain wide value. I
>>>>>> know the
>>>>>> spec is confusing on this.
>>>>>
>>>>> The quoting here is very unhelpful. In Andre's patch:
>>>>>
>>>>> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
>>>>> vcpu *v, mmio_info_t *info,
>>>>>      switch ( gicr_reg )
>>>>>      {
>>>>>      case VREG32(GICR_CTLR):
>>>>> -        /* LPI's not implemented */
>>>>> -        goto write_ignore_32;
>>>>> +    {
>>>>> +        unsigned long flags;
>>>>> +
>>>>> +        if ( !v->domain->arch.vgic.has_its )
>>>>> +            goto write_ignore_32;
>>>>> +        if ( dabt.size != DABT_WORD ) goto bad_width;
>>>>> +
>>>>> +        vgic_lock(v);                   /* protects rdists_enabled */
>>>>> +        spin_lock_irqsave(&v->arch.vgic.lock, flags);
>>>>> +
>>>>> +        /* LPIs can only be enabled once, but never disabled again. */
>>>>> +        if ( (r & GICR_CTLR_ENABLE_LPIS) &&
>>>>> +             !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
>>>>> +            vgic_vcpu_enable_lpis(v);
>>>>> +
>>>>> +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>>>>> +        vgic_unlock(v);
>>>>> +
>>>>> +        return 1;
>>>>> +    }
>>>>>
>>>>> My question is: do we need to take both vgic_lock and v->arch.vgic.lock?
>>>>
>>>> The domain lock (taken by vgic_lock()) protects rdists_enabled. This
>>>> variable stores whether at least one redistributor has LPIs enabled. In
>>>> this case the property table gets into use and since the table is shared
>>>> across all redistributors, we must not change it anymore, even on
>>>> another redistributor which has its LPIs still disabled.
>>>> So while this looks like this is a per-redistributor (=per-VCPU)
>>>> property, it is actually per domain, hence this lock.
>>>> The VGIC VCPU lock is then used to naturally protect the enable bit
>>>> against multiple VCPUs accessing this register simultaneously - the
>>>> redists are MMIO mapped, but not banked, so this is possible.
>>>>
>>>> Does that make sense?
>>>
>>> If the VGIC VCPU lock is only used to protect VGIC_V3_LPIS_ENABLED,
>>> couldn't we just read/write the bit atomically? It's just a bit after
>>> all, it doesn't need a lock.
>>
>> The vGIC vCPU lock is also here to serialize access to the re-distributor
>> state when necessary.
>>
>> For instance you don't want to allow write in PENDBASER after LPIs have been
>> enabled.
>>
>> If you don't take the lock here, you would have a small race where PENDBASER
>> might be written whilst the LPIs are getting enabled.
>>
>> The code in PENDBASER today does not strictly require the locking, but I think
>> we should keep the lock around. Moving to the atomic will not really benefit
>> here as writes to those registers will be very rare, so we don't need very
>> good performance.
>
> I suggested the atomic as a way to replace the lock, to reduce the
> number of lock order dependencies, rather than for performance (who
> cares about performance for this case). If all accesses to
> VGIC_V3_LPIS_ENABLED are atomic, then we wouldn't need the lock.
>
> Another maybe simpler way to keep the vgic vcpu lock but avoid
> introducing the vgic domain lock -> vgic vcpu lock dependency (the less
> the better) would be to take the vgic vcpu lock first, release it, then
> take the vgic domain lock and call vgic_vcpu_enable_lpis after.  In
> pseudo-code:
>
>     vgic vcpu lock
>     read old value of VGIC_V3_LPIS_ENABLED
>     write new value of VGIC_V3_LPIS_ENABLED
>     vgic vcpu unlock
>
>     vgic domain lock
>     vgic_vcpu_enable_lpis (minus the setting of arch.vgic.flags)
>     vgic domain unlock
>
> It doesn't look like we need to set VGIC_V3_LPIS_ENABLED within
> vgic_vcpu_enable_lpis, so this seems to be working. What do you think?

 From the vGIC's point of view, you want to enable 
VGIC_V3_LPIS_ENABLED only after all the sanity checks have been done.

I would have expected the ITS to check whether the redistributor has been 
enabled before the ITS itself is enabled (see vgic_v3_verify_its_status). 
This is because the ITS uses the property table and also the number of LPIs.

So you effectively want VGIC_V3_LPIS_ENABLED to be set last in 
vgic_vcpu_enable_lpis to avoid a potential race condition. You may also 
want to have a mb() before writing to it so you can use 
VGIC_V3_LPIS_ENABLED safely without any lock.

Andre, can you explain why the ITS does not check whether 
rdists_enabled is set?

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 12/28] ARM: vGIC: advertise LPI support
  2017-05-26 10:19                   ` Julien Grall
@ 2017-05-26 17:12                     ` Andre Przywara
  0 siblings, 0 replies; 108+ messages in thread
From: Andre Przywara @ 2017-05-26 17:12 UTC (permalink / raw)
  To: Julien Grall, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, nd, Shanker Donthineni, Vijay Kilari

Hi,

On 26/05/17 11:19, Julien Grall wrote:
> Hi Stefano,
> 
> On 25/05/17 22:05, Stefano Stabellini wrote:
>> On Thu, 25 May 2017, Julien Grall wrote:
>>> Hi Stefano,
>>>
>>> On 25/05/2017 19:49, Stefano Stabellini wrote:
>>>> On Thu, 25 May 2017, Andre Przywara wrote:
>>>>> Hi,
>>>>>
>>>>> On 23/05/17 18:47, Stefano Stabellini wrote:
>>>>>> On Tue, 23 May 2017, Julien Grall wrote:
>>>>>>> Hi Stefano,
>>>>>>>
>>>>>>> On 22/05/17 23:19, Stefano Stabellini wrote:
>>>>>>>> On Tue, 16 May 2017, Julien Grall wrote:
>>>>>>>>>> @@ -436,8 +473,26 @@ static int
>>>>>>>>>> __vgic_v3_rdistr_rd_mmio_write(struct
>>>>>>>>>> vcpu
>>>>>>>>>> *v, mmio_info_t *info,
>>>>>>>>>>      switch ( gicr_reg )
>>>>>>>>>>      {
>>>>>>>>>>      case VREG32(GICR_CTLR):
>>>>>>>>>> -        /* LPI's not implemented */
>>>>>>>>>> -        goto write_ignore_32;
>>>>>>>>>> +    {
>>>>>>>>>> +        unsigned long flags;
>>>>>>>>>> +
>>>>>>>>>> +        if ( !v->domain->arch.vgic.has_its )
>>>>>>>>>> +            goto write_ignore_32;
>>>>>>>>>> +        if ( dabt.size != DABT_WORD ) goto bad_width;
>>>>>>>>>> +
>>>>>>>>>> +        vgic_lock(v);                   /* protects
>>>>>>>>>> rdists_enabled */
>>>>>>>>>
>>>>>>>>> Getting back to the locking. I don't see any place where we get
>>>>>>>>> the domain
>>>>>>>>> vgic lock before vCPU vgic lock. So this raises the question why
>>>>>>>>> this
>>>>>>>>> ordering
>>>>>>>>> and not moving this lock into vgic_vcpu_enable_lpis.
>>>>>>>>>
>>>>>>>>> At least this require documentation in the code and explanation in
>>>>>>>>> the
>>>>>>>>> commit
>>>>>>>>> message.
>>>>>>>>
>>>>>>>> It doesn't look like we need to take the v->arch.vgic.lock here.
>>>>>>>> What is
>>>>>>>> it protecting?
>>>>>>>
>>>>>>> The name of the function is a bit confusing. It does not take the
>>>>>>> vCPU
>>>>>>> vgic
>>>>>>> lock but the domain vgic lock.
>>>>>>>
>>>>>>> I believe the vcpu is passed to avoid having v->domain in most of the
>>>>>>> callers.
>>>>>>> But we should probably rename the function.
>>>>>>>
>>>>>>> In this case it protects vgic_vcpu_enable_lpis because you can
>>>>>>> configure the
>>>>>>> number of LPIs per re-distributor but this is a domain wide value. I
>>>>>>> know the
>>>>>>> spec is confusing on this.
>>>>>>
>>>>>> The quoting here is very unhelpful. In Andre's patch:
>>>>>>
>>>>>> @@ -436,8 +473,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct
>>>>>> vcpu *v, mmio_info_t *info,
>>>>>>      switch ( gicr_reg )
>>>>>>      {
>>>>>>      case VREG32(GICR_CTLR):
>>>>>> -        /* LPI's not implemented */
>>>>>> -        goto write_ignore_32;
>>>>>> +    {
>>>>>> +        unsigned long flags;
>>>>>> +
>>>>>> +        if ( !v->domain->arch.vgic.has_its )
>>>>>> +            goto write_ignore_32;
>>>>>> +        if ( dabt.size != DABT_WORD ) goto bad_width;
>>>>>> +
>>>>>> +        vgic_lock(v);                   /* protects
>>>>>> rdists_enabled */
>>>>>> +        spin_lock_irqsave(&v->arch.vgic.lock, flags);
>>>>>> +
>>>>>> +        /* LPIs can only be enabled once, but never disabled
>>>>>> again. */
>>>>>> +        if ( (r & GICR_CTLR_ENABLE_LPIS) &&
>>>>>> +             !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
>>>>>> +            vgic_vcpu_enable_lpis(v);
>>>>>> +
>>>>>> +        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>>>>>> +        vgic_unlock(v);
>>>>>> +
>>>>>> +        return 1;
>>>>>> +    }
>>>>>>
>>>>>> My question is: do we need to take both vgic_lock and
>>>>>> v->arch.vgic.lock?
>>>>>
>>>>> The domain lock (taken by vgic_lock()) protects rdists_enabled. This
>>>>> variable stores whether at least one redistributor has LPIs
>>>>> enabled. In
>>>>> this case the property table gets into use and since the table is
>>>>> shared
>>>>> across all redistributors, we must not change it anymore, even on
>>>>> another redistributor which has its LPIs still disabled.
>>>>> So while this looks like this is a per-redistributor (=per-VCPU)
>>>>> property, it is actually per domain, hence this lock.
>>>>> The VGIC VCPU lock is then used to naturally protect the enable bit
>>>>> against multiple VCPUs accessing this register simultaneously - the
>>>>> redists are MMIO mapped, but not banked, so this is possible.
>>>>>
>>>>> Does that make sense?
>>>>
>>>> If the VGIC VCPU lock is only used to protect VGIC_V3_LPIS_ENABLED,
>>>> couldn't we just read/write the bit atomically? It's just a bit after
>>>> all, it doesn't need a lock.
>>>
>>> The vGIC vCPU lock is also here to serialize access to the
>>> re-distributor
>>> state when necessary.
>>>
>>> For instance you don't want to allow write in PENDBASER after LPIs
>>> have been
>>> enabled.
>>>
>>> If you don't take the lock here, you would have a small race where
>>> PENDBASER
>>> might be written whilst the LPIs are getting enabled.
>>>
>>> The code in PENDBASER today does not strictly require the locking,
>>> but I think
>>> we should keep the lock around. Moving to the atomic will not really
>>> benefit
>>> here as writes to those registers will be very rare, so we don't
>>> need very
>>> good performance.
>>
>> I suggested the atomic as a way to replace the lock, to reduce the
>> number of lock order dependencies, rather than for performance (who
>> cares about performance for this case). If all accesses to
>> VGIC_V3_LPIS_ENABLED are atomic, then we wouldn't need the lock.
>>
>> Another maybe simpler way to keep the vgic vcpu lock but avoid
>> introducing the vgic domain lock -> vgic vcpu lock dependency (the less
>> the better) would be to take the vgic vcpu lock first, release it, then
>> take the vgic domain lock and call vgic_vcpu_enable_lpis after.  In
>> pseudo-code:
>>
>>     vgic vcpu lock
>>     read old value of VGIC_V3_LPIS_ENABLED
>>     write new value of VGIC_V3_LPIS_ENABLED
>>     vgic vcpu unlock
>>
>>     vgic domain lock
>>     vgic_vcpu_enable_lpis (minus the setting of arch.vgic.flags)
>>     vgic domain unlock
>>
>> It doesn't look like we need to set VGIC_V3_LPIS_ENABLED within
>> vgic_vcpu_enable_lpis, so this seems to be working. What do you think?
> 
> From the vGIC's point of view, you want to enable VGIC_V3_LPIS_ENABLED
> only after all the sanity checks have been done.
> 
> I would have expected the ITS to check whether the redistributor has been
> enabled before the ITS itself is enabled (see vgic_v3_verify_its_status).
> This is because the ITS uses the property table and also the number of LPIs.
> 
> So you effectively want VGIC_V3_LPIS_ENABLED to be set last in
> vgic_vcpu_enable_lpis to avoid a potential race condition. You may also
> want to have a mb() before writing to it so you can use
> VGIC_V3_LPIS_ENABLED safely without any lock.

Right, I added an smp_mb() after the rdists_enabled write to make sure
this is in sync.

> Andre, can you explain why the ITS does not check whether
> rdists_enabled is set?

So architecturally it's not required to have LPIs enabled, and from a
spec point of view the ITS does not care about an LPI's properties.
We check that LPIs are enabled on that redistributor before injecting an LPI.

But I think you are right that our implementation is a bit sloppy with
the separation between LPIs and the ITS, and reads the property table
while handling commands - because we only keep track of mapped LPIs.
So I added a check now in update_lpi_properties() to bail out (without
an error) if no redistributor has LPIs enabled yet. That should solve
that corner case.

Cheers,
Andre.


* Re: [PATCH v9 25/28] ARM: vITS: handle INVALL command
  2017-05-11 17:53 ` [PATCH v9 25/28] ARM: vITS: handle INVALL command Andre Przywara
@ 2017-06-02 17:24   ` Julien Grall
  2017-06-02 17:25     ` Julien Grall
  0 siblings, 1 reply; 108+ messages in thread
From: Julien Grall @ 2017-06-02 17:24 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

Hi Andre,

On 05/11/2017 06:53 PM, Andre Przywara wrote:
> +    do
> +    {
> +        nr_lpis = radix_tree_gang_lookup(&its->d->arch.vgic.pend_lpi_tree,
> +                                         (void **)pirqs, vlpi,
> +                                         ARRAY_SIZE(pirqs));
> +
> +        for ( i = 0; i < nr_lpis; i++ )
> +        {
> +            /* We only care about LPIs on our VCPU. */
> +            if ( pirqs[i]->lpi_vcpu_id != vcpu->vcpu_id )
> +                continue;
> +
> +            vlpi = pirqs[i]->irq;
> +            /* If that fails for a single LPI, carry on to handle the rest. */
> +            ret = update_lpi_property(its->d, pirqs[i]);
> +            if ( !ret )
> +                update_lpi_vgic_status(vcpu, pirqs[i]);
> +        }
> +    /*
> +     * Loop over the next gang of pending_irqs until we reached the end of
> +     * a (fully populated) tree or the lookup function returns less LPIs than
> +     * it has been asked for.
> +     */
> +    } while ( (++vlpi < its->d->arch.vgic.nr_lpis) &&
> +              (nr_lpis == ARRAY_SIZE(pirqs)) );
> +
> +    read_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
> +    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
> +
> +    return ret;

The implementation looks good. However, one question: ret would hold the 
result of the latest LPI update only. So even if all LPIs but the 
latest have failed, you will still return 0. Is that what you want?

Cheers,

-- 
Julien Grall


* Re: [PATCH v9 25/28] ARM: vITS: handle INVALL command
  2017-06-02 17:24   ` Julien Grall
@ 2017-06-02 17:25     ` Julien Grall
  0 siblings, 0 replies; 108+ messages in thread
From: Julien Grall @ 2017-06-02 17:25 UTC (permalink / raw)
  To: Andre Przywara, Stefano Stabellini
  Cc: xen-devel, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni

And I obviously commented on the wrong version :/. I will replicate the 
comment on v10.

Sorry for the inconvenience.

On 06/02/2017 06:24 PM, Julien Grall wrote:
> Hi Andre,
> 
> On 05/11/2017 06:53 PM, Andre Przywara wrote:
>> +    do
>> +    {
>> +        nr_lpis = 
>> radix_tree_gang_lookup(&its->d->arch.vgic.pend_lpi_tree,
>> +                                         (void **)pirqs, vlpi,
>> +                                         ARRAY_SIZE(pirqs));
>> +
>> +        for ( i = 0; i < nr_lpis; i++ )
>> +        {
>> +            /* We only care about LPIs on our VCPU. */
>> +            if ( pirqs[i]->lpi_vcpu_id != vcpu->vcpu_id )
>> +                continue;
>> +
>> +            vlpi = pirqs[i]->irq;
>> +            /* If that fails for a single LPI, carry on to handle the 
>> rest. */
>> +            ret = update_lpi_property(its->d, pirqs[i]);
>> +            if ( !ret )
>> +                update_lpi_vgic_status(vcpu, pirqs[i]);
>> +        }
>> +    /*
>> +     * Loop over the next gang of pending_irqs until we reached the 
>> end of
>> +     * a (fully populated) tree or the lookup function returns less 
>> LPIs than
>> +     * it has been asked for.
>> +     */
>> +    } while ( (++vlpi < its->d->arch.vgic.nr_lpis) &&
>> +              (nr_lpis == ARRAY_SIZE(pirqs)) );
>> +
>> +    read_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
>> +    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
>> +
>> +    return ret;
> 
> The implementation looks good. However, one question. ret would be equal 
> to the latest LPI updated. So even if all LPIs have failed but the 
> latest, you will still return 0. Is it what you want?
> 
> Cheers,
> 

-- 
Julien Grall


end of thread, other threads:[~2017-06-02 17:25 UTC | newest]

Thread overview: 108+ messages
-- links below jump to the message on this page --
2017-05-11 17:53 [PATCH v9 00/28] arm64: Dom0 ITS emulation Andre Przywara
2017-05-11 17:53 ` [PATCH v9 01/28] ARM: GICv3: setup number of LPI bits for a GICv3 guest Andre Przywara
2017-05-11 18:34   ` Julien Grall
2017-05-11 17:53 ` [PATCH v9 02/28] ARM: VGIC: move irq_to_pending() calls under the VGIC VCPU lock Andre Przywara
2017-05-20  0:34   ` Stefano Stabellini
2017-05-11 17:53 ` [PATCH v9 03/28] ARM: GIC: Add checks for NULL pointer pending_irq's Andre Przywara
2017-05-12 14:19   ` Julien Grall
2017-05-22 16:49     ` Andre Przywara
2017-05-22 17:15       ` Julien Grall
2017-05-25 16:14         ` Andre Przywara
2017-05-20  1:25   ` Stefano Stabellini
2017-05-11 17:53 ` [PATCH v9 04/28] ARM: GICv3: introduce separate pending_irq structs for LPIs Andre Przywara
2017-05-12 14:22   ` Julien Grall
2017-05-22 21:52   ` Stefano Stabellini
2017-05-11 17:53 ` [PATCH v9 05/28] ARM: GICv3: forward pending LPIs to guests Andre Przywara
2017-05-12 14:55   ` Julien Grall
2017-05-22 22:03   ` Stefano Stabellini
2017-05-25 16:42     ` Andre Przywara
2017-05-11 17:53 ` [PATCH v9 06/28] ARM: GICv3: enable ITS and LPIs on the host Andre Przywara
2017-05-11 17:53 ` [PATCH v9 07/28] ARM: vGICv3: handle virtual LPI pending and property tables Andre Przywara
2017-05-12 15:23   ` Julien Grall
2017-05-11 17:53 ` [PATCH v9 08/28] ARM: introduce vgic_access_guest_memory() Andre Przywara
2017-05-12 15:30   ` Julien Grall
2017-05-11 17:53 ` [PATCH v9 09/28] ARM: vGICv3: re-use vgic_reg64_check_access Andre Przywara
2017-05-11 17:53 ` [PATCH v9 10/28] ARM: GIC: export and extend vgic_init_pending_irq() Andre Przywara
2017-05-16 12:26   ` Julien Grall
2017-05-11 17:53 ` [PATCH v9 11/28] ARM: VGIC: add vcpu_id to struct pending_irq Andre Przywara
2017-05-16 12:31   ` Julien Grall
2017-05-22 22:15     ` Stefano Stabellini
2017-05-23  9:49       ` Andre Przywara
2017-05-11 17:53 ` [PATCH v9 12/28] ARM: vGIC: advertise LPI support Andre Przywara
2017-05-16 13:03   ` Julien Grall
2017-05-22 22:19     ` Stefano Stabellini
2017-05-23 10:49       ` Julien Grall
2017-05-23 17:47         ` Stefano Stabellini
2017-05-24 10:10           ` Julien Grall
2017-05-25 18:02           ` Andre Przywara
2017-05-25 18:49             ` Stefano Stabellini
2017-05-25 20:07               ` Julien Grall
2017-05-25 21:05                 ` Stefano Stabellini
2017-05-26 10:19                   ` Julien Grall
2017-05-26 17:12                     ` Andre Przywara
2017-05-23 17:23     ` Andre Przywara
2017-05-11 17:53 ` [PATCH v9 13/28] ARM: vITS: add command handling stub and MMIO emulation Andre Przywara
2017-05-16 15:24   ` Julien Grall
2017-05-17 16:16   ` Julien Grall
2017-05-22 22:32   ` Stefano Stabellini
2017-05-23 10:54     ` Julien Grall
2017-05-23 17:43       ` Stefano Stabellini
2017-05-11 17:53 ` [PATCH v9 14/28] ARM: vITS: introduce translation table walks Andre Przywara
2017-05-16 15:57   ` Julien Grall
2017-05-11 17:53 ` [PATCH v9 15/28] ARM: vITS: provide access to struct pending_irq Andre Przywara
2017-05-17 15:35   ` Julien Grall
2017-05-22 16:50     ` Andre Przywara
2017-05-22 17:19       ` Julien Grall
2017-05-26  9:10         ` Andre Przywara
2017-05-26 10:00           ` Julien Grall
2017-05-11 17:53 ` [PATCH v9 16/28] ARM: vITS: handle INT command Andre Przywara
2017-05-17 16:17   ` Julien Grall
2017-05-23 17:24     ` Andre Przywara
2017-05-11 17:53 ` [PATCH v9 17/28] ARM: vITS: handle MAPC command Andre Przywara
2017-05-17 17:22   ` Julien Grall
2017-05-11 17:53 ` [PATCH v9 18/28] ARM: vITS: handle CLEAR command Andre Przywara
2017-05-17 17:45   ` Julien Grall
2017-05-23 17:24     ` Andre Przywara
2017-05-24  9:04       ` Julien Grall
2017-05-11 17:53 ` [PATCH v9 19/28] ARM: vITS: handle MAPD command Andre Przywara
2017-05-17 18:07   ` Julien Grall
2017-05-24  9:10     ` Andre Przywara
2017-05-24  9:56       ` Julien Grall
2017-05-24 13:09         ` Andre Przywara
2017-05-25 18:55           ` Stefano Stabellini
2017-05-25 20:17             ` Julien Grall
2017-05-25 20:44               ` Stefano Stabellini
2017-05-26  8:16                 ` Andre Przywara
2017-05-11 17:53 ` [PATCH v9 20/28] ARM: GICv3: handle unmapped LPIs Andre Przywara
2017-05-17 18:37   ` Julien Grall
2017-05-20  1:25   ` Stefano Stabellini
2017-05-22 23:48     ` Stefano Stabellini
2017-05-23 11:10       ` Julien Grall
2017-05-23 18:23         ` Stefano Stabellini
2017-05-24  9:47           ` Julien Grall
2017-05-24 17:49             ` Stefano Stabellini
2017-05-23 14:41     ` Andre Przywara
2017-05-11 17:53 ` [PATCH v9 21/28] ARM: vITS: handle MAPTI command Andre Przywara
2017-05-18 14:04   ` Julien Grall
2017-05-22 23:39   ` Stefano Stabellini
2017-05-23 10:01     ` Andre Przywara
2017-05-23 17:44       ` Stefano Stabellini
2017-05-11 17:53 ` [PATCH v9 22/28] ARM: vITS: handle MOVI command Andre Przywara
2017-05-18 14:17   ` Julien Grall
2017-05-23  0:28   ` Stefano Stabellini
2017-05-11 17:53 ` [PATCH v9 23/28] ARM: vITS: handle DISCARD command Andre Przywara
2017-05-18 14:23   ` Julien Grall
2017-05-22 16:50     ` Andre Przywara
2017-05-22 17:20       ` Julien Grall
2017-05-23  9:40         ` Andre Przywara
2017-05-11 17:53 ` [PATCH v9 24/28] ARM: vITS: handle INV command Andre Przywara
2017-05-23  0:01   ` Stefano Stabellini
2017-05-11 17:53 ` [PATCH v9 25/28] ARM: vITS: handle INVALL command Andre Przywara
2017-06-02 17:24   ` Julien Grall
2017-06-02 17:25     ` Julien Grall
2017-05-11 17:53 ` [PATCH v9 26/28] ARM: vITS: increase mmio_count for each ITS Andre Przywara
2017-05-18 14:34   ` Julien Grall
2017-05-11 17:53 ` [PATCH v9 27/28] ARM: vITS: create and initialize virtual ITSes for Dom0 Andre Przywara
2017-05-18 14:41   ` Julien Grall
2017-05-11 17:53 ` [PATCH v9 28/28] ARM: vITS: create ITS subnodes for Dom0 DT Andre Przywara
2017-05-11 18:31 ` [PATCH v9 00/28] arm64: Dom0 ITS emulation Julien Grall
