* [PATCH v4 0/8] arm/arm64: KVM: dynamic VGIC sizing
@ 2014-09-11 11:09 ` Marc Zyngier
  0 siblings, 0 replies; 34+ messages in thread
From: Marc Zyngier @ 2014-09-11 11:09 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm; +Cc: Christoffer Dall, Andre Przywara

So far, the VGIC data structures have been statically sized, meaning
that we always have to support more interrupts than we actually want,
and more CPU interfaces than we should. This is a waste of resources,
and is the kind of thing that should be tuneable.

This series addresses that issue by changing the data structures to be
dynamically allocated, and adds a new configuration attribute to set
the number of interrupts. When the attribute is not used, we fall back
to the old behaviour of allocating a fixed number of interrupts.

This series is also the base for Andre Przywara's GICv3 distributor
emulation code (which can support far more than 8 vcpus and 1020
interrupts).

This has been tested on both ARM (TC2, A20) and arm64 (model and Juno).

The code is available from my kvm-arm64/kvmtool-vgic-dyn branch,
together with the corresponding kvmtool code.

* From v3 [3]
  - Added a number of comments to the data structures, making the
    various mappings slightly more obvious
  - Dropped the nr_irqs field from bitmap and bytemap structures, as
    it was a leftover from the initial revision that only had a single
    pointer
  - Small cleanups all over the place
  - Dropped the "sub-page offset" patch for now, as this need some
    serious reworking
  - Rebased on top of Christoffer "vgic cleanup" series, with 3.17-rc4
    thrown in for a good measure

* From v2 [2]
  - Fixed bug that broke QEMU (register access can trigger allocation)
  - irq_pending_on_cpu is now dynamic (needed for more than 32 or 64 vcpus)
  - Rebased on top of Victor's BE patches

* From v1 [1]
  - Rebased on top of 3.16-rc1
  - Lots of cleanup

[1]: https://lists.cs.columbia.edu/pipermail/kvmarm/2013-October/005879.html
[2]: https://lists.cs.columbia.edu/pipermail/kvmarm/2014-June/010050.html
[3]: https://lists.cs.columbia.edu/pipermail/kvmarm/2014-July/010383.html

Marc Zyngier (8):
  KVM: ARM: vgic: plug irq injection race
  arm/arm64: KVM: vgic: switch to dynamic allocation
  arm/arm64: KVM: vgic: Parametrize VGIC_NR_SHARED_IRQS
  arm/arm64: KVM: vgic: kill VGIC_MAX_CPUS
  arm/arm64: KVM: vgic: handle out-of-range MMIO accesses
  arm/arm64: KVM: vgic: kill VGIC_NR_IRQS
  arm/arm64: KVM: vgic: delay vgic allocation until init time
  arm/arm64: KVM: vgic: make number of irqs a configurable attribute

 Documentation/virtual/kvm/devices/arm-vgic.txt |  10 +
 arch/arm/include/uapi/asm/kvm.h                |   1 +
 arch/arm/kvm/arm.c                             |  10 +-
 arch/arm64/include/uapi/asm/kvm.h              |   1 +
 include/kvm/arm_vgic.h                         |  88 ++++--
 virt/kvm/arm/vgic.c                            | 396 +++++++++++++++++++++----
 6 files changed, 413 insertions(+), 93 deletions(-)

-- 
2.0.4



* [PATCH v4 1/8] KVM: ARM: vgic: plug irq injection race
  2014-09-11 11:09 ` Marc Zyngier
@ 2014-09-11 11:09   ` Marc Zyngier
  -1 siblings, 0 replies; 34+ messages in thread
From: Marc Zyngier @ 2014-09-11 11:09 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm; +Cc: Christoffer Dall, Andre Przywara

As it stands, nothing prevents userspace from injecting an interrupt
before the guest's GIC is actually initialized.

This has gone unnoticed so far (as everything is pretty much statically
allocated), but ends up exploding spectacularly once we switch to a more
dynamic allocation (the GIC data structure isn't there yet).

The fix is to test for the "ready" flag in the VGIC distributor before
trying to inject the interrupt. Note that in order to avoid breaking
userspace, we have to ignore what is essentially an error.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 virt/kvm/arm/vgic.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index f7ab1ca..d3299d4 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1584,7 +1584,8 @@ out:
 int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
 			bool level)
 {
-	if (vgic_update_irq_pending(kvm, cpuid, irq_num, level))
+	if (likely(vgic_initialized(kvm)) &&
+	    vgic_update_irq_pending(kvm, cpuid, irq_num, level))
 		vgic_kick_vcpus(kvm);
 
 	return 0;
-- 
2.0.4



* [PATCH v4 2/8] arm/arm64: KVM: vgic: switch to dynamic allocation
  2014-09-11 11:09 ` Marc Zyngier
@ 2014-09-11 11:09   ` Marc Zyngier
  -1 siblings, 0 replies; 34+ messages in thread
From: Marc Zyngier @ 2014-09-11 11:09 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm; +Cc: Christoffer Dall, Andre Przywara

So far, all the VGIC data structures are statically sized by the
*maximum* number of vcpus and interrupts they support, which means that
we always have to oversize them to cater for the worst case.

Start by changing the data structures to be dynamically sizeable,
and allocate them at runtime.

The sizes are still very static though.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/kvm/arm.c     |   3 +
 include/kvm/arm_vgic.h |  76 ++++++++++++----
 virt/kvm/arm/vgic.c    | 237 ++++++++++++++++++++++++++++++++++++++++++-------
 3 files changed, 267 insertions(+), 49 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index a99e0cd..923a01d 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -172,6 +172,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 			kvm->vcpus[i] = NULL;
 		}
 	}
+
+	kvm_vgic_destroy(kvm);
 }
 
 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
@@ -253,6 +255,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_free_memory_caches(vcpu);
 	kvm_timer_vcpu_terminate(vcpu);
+	kvm_vgic_vcpu_destroy(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, vcpu);
 }
 
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index f074539..bdaac57 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -54,19 +54,33 @@
  * - a bunch of shared interrupts (SPI)
  */
 struct vgic_bitmap {
-	union {
-		u32 reg[VGIC_NR_PRIVATE_IRQS / 32];
-		DECLARE_BITMAP(reg_ul, VGIC_NR_PRIVATE_IRQS);
-	} percpu[VGIC_MAX_CPUS];
-	union {
-		u32 reg[VGIC_NR_SHARED_IRQS / 32];
-		DECLARE_BITMAP(reg_ul, VGIC_NR_SHARED_IRQS);
-	} shared;
+	/*
+	 * - One UL per VCPU for private interrupts (assumes UL is at
+	 *   least 32 bits)
+	 * - As many UL as necessary for shared interrupts.
+	 *
+	 * The private interrupts are accessed via the "private"
+	 * field, one UL per vcpu (the state for vcpu n is in
+	 * private[n]). The shared interrupts are accessed via the
+	 * "shared" pointer (IRQn state is at bit n-32 in the bitmap).
+	 */
+	unsigned long *private;
+	unsigned long *shared;
 };
 
 struct vgic_bytemap {
-	u32 percpu[VGIC_MAX_CPUS][VGIC_NR_PRIVATE_IRQS / 4];
-	u32 shared[VGIC_NR_SHARED_IRQS  / 4];
+	/*
+	 * - 8 u32 per VCPU for private interrupts
+	 * - As many u32 as necessary for shared interrupts.
+	 *
+	 * The private interrupts are accessed via the "private"
+	 * field, (the state for vcpu n is in private[n*8] to
+	 * private[n*8 + 7]). The shared interrupts are accessed via
+	 * the "shared" pointer (IRQn state is at byte (n-32)%4 of the
+	 * shared[(n-32)/4] word).
+	 */
+	u32 *private;
+	u32 *shared;
 };
 
 struct kvm_vcpu;
@@ -127,6 +141,9 @@ struct vgic_dist {
 	bool			in_kernel;
 	bool			ready;
 
+	int			nr_cpus;
+	int			nr_irqs;
+
 	/* Virtual control interface mapping */
 	void __iomem		*vctrl_base;
 
@@ -166,15 +183,36 @@ struct vgic_dist {
 	/* Level/edge triggered */
 	struct vgic_bitmap	irq_cfg;
 
-	/* Source CPU per SGI and target CPU */
-	u8			irq_sgi_sources[VGIC_MAX_CPUS][VGIC_NR_SGIS];
+	/*
+	 * Source CPU per SGI and target CPU:
+	 *
+	 * Each byte represent a SGI observable on a VCPU, each bit of
+	 * this byte indicating if the corresponding VCPU has
+	 * generated this interrupt. This is a GICv2 feature only.
+	 *
+	 * For VCPUn (n < 8), irq_sgi_sources[n*16] to [n*16 + 15] are
+	 * the SGIs observable on VCPUn.
+	 */
+	u8			*irq_sgi_sources;
 
-	/* Target CPU for each IRQ */
-	u8			irq_spi_cpu[VGIC_NR_SHARED_IRQS];
-	struct vgic_bitmap	irq_spi_target[VGIC_MAX_CPUS];
+	/*
+	 * Target CPU for each SPI:
+	 *
+	 * Array of available SPI, each byte indicating the target
+	 * VCPU for SPI. IRQn (n >=32) is at irq_spi_cpu[n-32].
+	 */
+	u8			*irq_spi_cpu;
+
+	/*
+	 * Reverse lookup of irq_spi_cpu for faster compute pending:
+	 *
+	 * Array of bitmaps, one per VCPU, describing is IRQn is
+	 * routed to a particular VCPU.
+	 */
+	struct vgic_bitmap	*irq_spi_target;
 
 	/* Bitmap indicating which CPU has something pending */
-	unsigned long		irq_pending_on_cpu;
+	unsigned long		*irq_pending_on_cpu;
 #endif
 };
 
@@ -204,11 +242,11 @@ struct vgic_v3_cpu_if {
 struct vgic_cpu {
 #ifdef CONFIG_KVM_ARM_VGIC
 	/* per IRQ to LR mapping */
-	u8		vgic_irq_lr_map[VGIC_NR_IRQS];
+	u8		*vgic_irq_lr_map;
 
 	/* Pending interrupts on this VCPU */
 	DECLARE_BITMAP(	pending_percpu, VGIC_NR_PRIVATE_IRQS);
-	DECLARE_BITMAP(	pending_shared, VGIC_NR_SHARED_IRQS);
+	unsigned long	*pending_shared;
 
 	/* Bitmap of used/free list registers */
 	DECLARE_BITMAP(	lr_used, VGIC_V2_MAX_LRS);
@@ -239,7 +277,9 @@ int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write);
 int kvm_vgic_hyp_init(void);
 int kvm_vgic_init(struct kvm *kvm);
 int kvm_vgic_create(struct kvm *kvm);
+void kvm_vgic_destroy(struct kvm *kvm);
 int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu);
+void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu);
 void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu);
 int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index d3299d4..92c086e 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -95,6 +95,7 @@ static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu);
 static void vgic_retire_lr(int lr_nr, int irq, struct kvm_vcpu *vcpu);
 static void vgic_update_state(struct kvm *kvm);
 static void vgic_kick_vcpus(struct kvm *kvm);
+static u8 *vgic_get_sgi_sources(struct vgic_dist *dist, int vcpu_id, int sgi);
 static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg);
 static struct vgic_lr vgic_get_lr(const struct kvm_vcpu *vcpu, int lr);
 static void vgic_set_lr(struct kvm_vcpu *vcpu, int lr, struct vgic_lr lr_desc);
@@ -124,23 +125,45 @@ static const struct vgic_params *vgic;
 #define REG_OFFSET_SWIZZLE	0
 #endif
 
+static int vgic_init_bitmap(struct vgic_bitmap *b, int nr_cpus, int nr_irqs)
+{
+	int nr_longs;
+
+	nr_longs = nr_cpus + BITS_TO_LONGS(nr_irqs - VGIC_NR_PRIVATE_IRQS);
+
+	b->private = kzalloc(sizeof(unsigned long) * nr_longs, GFP_KERNEL);
+	if (!b->private)
+		return -ENOMEM;
+
+	b->shared = b->private + nr_cpus;
+
+	return 0;
+}
+
+static void vgic_free_bitmap(struct vgic_bitmap *b)
+{
+	kfree(b->private);
+	b->private = NULL;
+	b->shared = NULL;
+}
+
 static u32 *vgic_bitmap_get_reg(struct vgic_bitmap *x,
 				int cpuid, u32 offset)
 {
 	offset >>= 2;
 	if (!offset)
-		return x->percpu[cpuid].reg + (offset ^ REG_OFFSET_SWIZZLE);
+		return (u32 *)(x->private + cpuid) + REG_OFFSET_SWIZZLE;
 	else
-		return x->shared.reg + ((offset - 1) ^ REG_OFFSET_SWIZZLE);
+		return (u32 *)(x->shared) + ((offset - 1) ^ REG_OFFSET_SWIZZLE);
 }
 
 static int vgic_bitmap_get_irq_val(struct vgic_bitmap *x,
 				   int cpuid, int irq)
 {
 	if (irq < VGIC_NR_PRIVATE_IRQS)
-		return test_bit(irq, x->percpu[cpuid].reg_ul);
+		return test_bit(irq, x->private + cpuid);
 
-	return test_bit(irq - VGIC_NR_PRIVATE_IRQS, x->shared.reg_ul);
+	return test_bit(irq - VGIC_NR_PRIVATE_IRQS, x->shared);
 }
 
 static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
@@ -149,9 +172,9 @@ static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
 	unsigned long *reg;
 
 	if (irq < VGIC_NR_PRIVATE_IRQS) {
-		reg = x->percpu[cpuid].reg_ul;
+		reg = x->private + cpuid;
 	} else {
-		reg =  x->shared.reg_ul;
+		reg = x->shared;
 		irq -= VGIC_NR_PRIVATE_IRQS;
 	}
 
@@ -163,24 +186,49 @@ static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
 
 static unsigned long *vgic_bitmap_get_cpu_map(struct vgic_bitmap *x, int cpuid)
 {
-	if (unlikely(cpuid >= VGIC_MAX_CPUS))
-		return NULL;
-	return x->percpu[cpuid].reg_ul;
+	return x->private + cpuid;
 }
 
 static unsigned long *vgic_bitmap_get_shared_map(struct vgic_bitmap *x)
 {
-	return x->shared.reg_ul;
+	return x->shared;
+}
+
+static int vgic_init_bytemap(struct vgic_bytemap *x, int nr_cpus, int nr_irqs)
+{
+	int size;
+
+	size  = nr_cpus * VGIC_NR_PRIVATE_IRQS;
+	size += nr_irqs - VGIC_NR_PRIVATE_IRQS;
+
+	x->private = kzalloc(size, GFP_KERNEL);
+	if (!x->private)
+		return -ENOMEM;
+
+	x->shared = x->private + nr_cpus * VGIC_NR_PRIVATE_IRQS / sizeof(u32);
+	return 0;
+}
+
+static void vgic_free_bytemap(struct vgic_bytemap *b)
+{
+	kfree(b->private);
+	b->private = NULL;
+	b->shared = NULL;
 }
 
 static u32 *vgic_bytemap_get_reg(struct vgic_bytemap *x, int cpuid, u32 offset)
 {
-	offset >>= 2;
-	BUG_ON(offset > (VGIC_NR_IRQS / 4));
-	if (offset < 8)
-		return x->percpu[cpuid] + offset;
-	else
-		return x->shared + offset - 8;
+	u32 *reg;
+
+	if (offset < VGIC_NR_PRIVATE_IRQS) {
+		reg = x->private;
+		offset += cpuid * VGIC_NR_PRIVATE_IRQS;
+	} else {
+		reg = x->shared;
+		offset -= VGIC_NR_PRIVATE_IRQS;
+	}
+
+	return reg + (offset / sizeof(u32));
 }
 
 #define VGIC_CFG_LEVEL	0
@@ -739,7 +787,7 @@ static void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
 		 */
 		vgic_dist_irq_set_pending(vcpu, lr.irq);
 		if (lr.irq < VGIC_NR_SGIS)
-			dist->irq_sgi_sources[vcpu_id][lr.irq] |= 1 << lr.source;
+			*vgic_get_sgi_sources(dist, vcpu_id, lr.irq) |= 1 << lr.source;
 		lr.state &= ~LR_STATE_PENDING;
 		vgic_set_lr(vcpu, i, lr);
 
@@ -773,7 +821,7 @@ static bool read_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
 	/* Copy source SGIs from distributor side */
 	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
 		int shift = 8 * (sgi - min_sgi);
-		reg |= (u32)dist->irq_sgi_sources[vcpu_id][sgi] << shift;
+		reg |= ((u32)*vgic_get_sgi_sources(dist, vcpu_id, sgi)) << shift;
 	}
 
 	mmio_data_write(mmio, ~0, reg);
@@ -797,14 +845,15 @@ static bool write_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
 	/* Clear pending SGIs on the distributor */
 	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
 		u8 mask = reg >> (8 * (sgi - min_sgi));
+		u8 *src = vgic_get_sgi_sources(dist, vcpu_id, sgi);
 		if (set) {
-			if ((dist->irq_sgi_sources[vcpu_id][sgi] & mask) != mask)
+			if ((*src & mask) != mask)
 				updated = true;
-			dist->irq_sgi_sources[vcpu_id][sgi] |= mask;
+			*src |= mask;
 		} else {
-			if (dist->irq_sgi_sources[vcpu_id][sgi] & mask)
+			if (*src & mask)
 				updated = true;
-			dist->irq_sgi_sources[vcpu_id][sgi] &= ~mask;
+			*src &= ~mask;
 		}
 	}
 
@@ -988,6 +1037,11 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	return true;
 }
 
+static u8 *vgic_get_sgi_sources(struct vgic_dist *dist, int vcpu_id, int sgi)
+{
+	return dist->irq_sgi_sources + vcpu_id * VGIC_NR_SGIS + sgi;
+}
+
 static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -1021,7 +1075,7 @@ static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
 		if (target_cpus & 1) {
 			/* Flag the SGI as pending */
 			vgic_dist_irq_set_pending(vcpu, sgi);
-			dist->irq_sgi_sources[c][sgi] |= 1 << vcpu_id;
+			*vgic_get_sgi_sources(dist, c, sgi) |= 1 << vcpu_id;
 			kvm_debug("SGI%d from CPU%d to CPU%d\n", sgi, vcpu_id, c);
 		}
 
@@ -1068,14 +1122,14 @@ static void vgic_update_state(struct kvm *kvm)
 	int c;
 
 	if (!dist->enabled) {
-		set_bit(0, &dist->irq_pending_on_cpu);
+		set_bit(0, dist->irq_pending_on_cpu);
 		return;
 	}
 
 	kvm_for_each_vcpu(c, vcpu, kvm) {
 		if (compute_pending_for_cpu(vcpu)) {
 			pr_debug("CPU%d has pending interrupts\n", c);
-			set_bit(c, &dist->irq_pending_on_cpu);
+			set_bit(c, dist->irq_pending_on_cpu);
 		}
 	}
 }
@@ -1232,14 +1286,14 @@ static bool vgic_queue_sgi(struct kvm_vcpu *vcpu, int irq)
 	int vcpu_id = vcpu->vcpu_id;
 	int c;
 
-	sources = dist->irq_sgi_sources[vcpu_id][irq];
+	sources = *vgic_get_sgi_sources(dist, vcpu_id, irq);
 
 	for_each_set_bit(c, &sources, VGIC_MAX_CPUS) {
 		if (vgic_queue_irq(vcpu, c, irq))
 			clear_bit(c, &sources);
 	}
 
-	dist->irq_sgi_sources[vcpu_id][irq] = sources;
+	*vgic_get_sgi_sources(dist, vcpu_id, irq) = sources;
 
 	/*
 	 * If the sources bitmap has been cleared it means that we
@@ -1327,7 +1381,7 @@ epilog:
 		 * us. Claim we don't have anything pending. We'll
 		 * adjust that if needed while exiting.
 		 */
-		clear_bit(vcpu_id, &dist->irq_pending_on_cpu);
+		clear_bit(vcpu_id, dist->irq_pending_on_cpu);
 	}
 }
 
@@ -1429,7 +1483,7 @@ static void __kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 	/* Check if we still have something up our sleeve... */
 	pending = find_first_zero_bit(elrsr_ptr, vgic->nr_lr);
 	if (level_pending || pending < vgic->nr_lr)
-		set_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
+		set_bit(vcpu->vcpu_id, dist->irq_pending_on_cpu);
 }
 
 void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
@@ -1463,7 +1517,7 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
 	if (!irqchip_in_kernel(vcpu->kvm))
 		return 0;
 
-	return test_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
+	return test_bit(vcpu->vcpu_id, dist->irq_pending_on_cpu);
 }
 
 static void vgic_kick_vcpus(struct kvm *kvm)
@@ -1558,7 +1612,7 @@ static bool vgic_update_irq_pending(struct kvm *kvm, int cpuid,
 
 	if (level) {
 		vgic_cpu_irq_set(vcpu, irq_num);
-		set_bit(cpuid, &dist->irq_pending_on_cpu);
+		set_bit(cpuid, dist->irq_pending_on_cpu);
 	}
 
 out:
@@ -1602,6 +1656,32 @@ static irqreturn_t vgic_maintenance_handler(int irq, void *data)
 	return IRQ_HANDLED;
 }
 
+void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
+{
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+
+	kfree(vgic_cpu->pending_shared);
+	kfree(vgic_cpu->vgic_irq_lr_map);
+	vgic_cpu->pending_shared = NULL;
+	vgic_cpu->vgic_irq_lr_map = NULL;
+}
+
+static int vgic_vcpu_init_maps(struct kvm_vcpu *vcpu, int nr_irqs)
+{
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+
+	int sz = (nr_irqs - VGIC_NR_PRIVATE_IRQS) / 8;
+	vgic_cpu->pending_shared = kzalloc(sz, GFP_KERNEL);
+	vgic_cpu->vgic_irq_lr_map = kzalloc(nr_irqs, GFP_KERNEL);
+
+	if (!vgic_cpu->pending_shared || !vgic_cpu->vgic_irq_lr_map) {
+		kvm_vgic_vcpu_destroy(vcpu);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
 /**
  * kvm_vgic_vcpu_init - Initialize per-vcpu VGIC state
  * @vcpu: pointer to the vcpu struct
@@ -1718,6 +1798,97 @@ out_free_irq:
 	return ret;
 }
 
+void kvm_vgic_destroy(struct kvm *kvm)
+{
+	struct vgic_dist *dist = &kvm->arch.vgic;
+	struct kvm_vcpu *vcpu;
+	int i;
+
+	kvm_for_each_vcpu(i, vcpu, kvm)
+		kvm_vgic_vcpu_destroy(vcpu);
+
+	vgic_free_bitmap(&dist->irq_enabled);
+	vgic_free_bitmap(&dist->irq_level);
+	vgic_free_bitmap(&dist->irq_pending);
+	vgic_free_bitmap(&dist->irq_soft_pend);
+	vgic_free_bitmap(&dist->irq_queued);
+	vgic_free_bitmap(&dist->irq_cfg);
+	vgic_free_bytemap(&dist->irq_priority);
+	if (dist->irq_spi_target) {
+		for (i = 0; i < dist->nr_cpus; i++)
+			vgic_free_bitmap(&dist->irq_spi_target[i]);
+	}
+	kfree(dist->irq_sgi_sources);
+	kfree(dist->irq_spi_cpu);
+	kfree(dist->irq_spi_target);
+	kfree(dist->irq_pending_on_cpu);
+	dist->irq_sgi_sources = NULL;
+	dist->irq_spi_cpu = NULL;
+	dist->irq_spi_target = NULL;
+	dist->irq_pending_on_cpu = NULL;
+}
+
+/*
+ * Allocate and initialize the various data structures. Must be called
+ * with kvm->lock held!
+ */
+static int vgic_init_maps(struct kvm *kvm)
+{
+	struct vgic_dist *dist = &kvm->arch.vgic;
+	struct kvm_vcpu *vcpu;
+	int nr_cpus, nr_irqs;
+	int ret, i;
+
+	nr_cpus = dist->nr_cpus = VGIC_MAX_CPUS;
+	nr_irqs = dist->nr_irqs = VGIC_NR_IRQS;
+
+	ret  = vgic_init_bitmap(&dist->irq_enabled, nr_cpus, nr_irqs);
+	ret |= vgic_init_bitmap(&dist->irq_level, nr_cpus, nr_irqs);
+	ret |= vgic_init_bitmap(&dist->irq_pending, nr_cpus, nr_irqs);
+	ret |= vgic_init_bitmap(&dist->irq_soft_pend, nr_cpus, nr_irqs);
+	ret |= vgic_init_bitmap(&dist->irq_queued, nr_cpus, nr_irqs);
+	ret |= vgic_init_bitmap(&dist->irq_cfg, nr_cpus, nr_irqs);
+	ret |= vgic_init_bytemap(&dist->irq_priority, nr_cpus, nr_irqs);
+
+	if (ret)
+		goto out;
+
+	dist->irq_sgi_sources = kzalloc(nr_cpus * VGIC_NR_SGIS, GFP_KERNEL);
+	dist->irq_spi_cpu = kzalloc(nr_irqs - VGIC_NR_PRIVATE_IRQS, GFP_KERNEL);
+	dist->irq_spi_target = kzalloc(sizeof(*dist->irq_spi_target) * nr_cpus,
+				       GFP_KERNEL);
+	dist->irq_pending_on_cpu = kzalloc(BITS_TO_LONGS(nr_cpus) * sizeof(long),
+					   GFP_KERNEL);
+	if (!dist->irq_sgi_sources ||
+	    !dist->irq_spi_cpu ||
+	    !dist->irq_spi_target ||
+	    !dist->irq_pending_on_cpu) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	for (i = 0; i < nr_cpus; i++)
+		ret |= vgic_init_bitmap(&dist->irq_spi_target[i],
+					nr_cpus, nr_irqs);
+
+	if (ret)
+		goto out;
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		ret = vgic_vcpu_init_maps(vcpu, nr_irqs);
+		if (ret) {
+			kvm_err("VGIC: Failed to allocate vcpu memory\n");
+			break;
+		}
+	}
+
+out:
+	if (ret)
+		kvm_vgic_destroy(kvm);
+
+	return ret;
+}
+
 /**
  * kvm_vgic_init - Initialize global VGIC state before running any VCPUs
  * @kvm: pointer to the kvm struct
@@ -1798,6 +1969,10 @@ int kvm_vgic_create(struct kvm *kvm)
 	kvm->arch.vgic.vgic_dist_base = VGIC_ADDR_UNDEF;
 	kvm->arch.vgic.vgic_cpu_base = VGIC_ADDR_UNDEF;
 
+	ret = vgic_init_maps(kvm);
+	if (ret)
+		kvm_err("Unable to allocate maps\n");
+
 out_unlock:
 	for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
 		vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
-- 
2.0.4



* [PATCH v4 3/8] arm/arm64: KVM: vgic: Parametrize VGIC_NR_SHARED_IRQS
  2014-09-11 11:09 ` Marc Zyngier
@ 2014-09-11 11:09   ` Marc Zyngier
  -1 siblings, 0 replies; 34+ messages in thread
From: Marc Zyngier @ 2014-09-11 11:09 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm; +Cc: Christoffer Dall, Andre Przywara

Having a dynamic number of supported interrupts means that we
cannot rely on VGIC_NR_SHARED_IRQS being fixed anymore.

Instead, turn it into a helper that takes the distributor structure as
a parameter, so it can return the right value.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 include/kvm/arm_vgic.h |  1 -
 virt/kvm/arm/vgic.c    | 16 +++++++++++-----
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index bdaac57..bdeb451 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -29,7 +29,6 @@
 #define VGIC_NR_SGIS		16
 #define VGIC_NR_PPIS		16
 #define VGIC_NR_PRIVATE_IRQS	(VGIC_NR_SGIS + VGIC_NR_PPIS)
-#define VGIC_NR_SHARED_IRQS	(VGIC_NR_IRQS - VGIC_NR_PRIVATE_IRQS)
 #define VGIC_MAX_CPUS		KVM_MAX_VCPUS
 
 #define VGIC_V2_MAX_LRS		(1 << 6)
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 92c086e..93fe73b 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1083,11 +1083,17 @@ static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
 	}
 }
 
+static int vgic_nr_shared_irqs(struct vgic_dist *dist)
+{
+	return dist->nr_irqs - VGIC_NR_PRIVATE_IRQS;
+}
+
 static int compute_pending_for_cpu(struct kvm_vcpu *vcpu)
 {
 	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
 	unsigned long *pending, *enabled, *pend_percpu, *pend_shared;
 	unsigned long pending_private, pending_shared;
+	int nr_shared = vgic_nr_shared_irqs(dist);
 	int vcpu_id;
 
 	vcpu_id = vcpu->vcpu_id;
@@ -1100,15 +1106,15 @@ static int compute_pending_for_cpu(struct kvm_vcpu *vcpu)
 
 	pending = vgic_bitmap_get_shared_map(&dist->irq_pending);
 	enabled = vgic_bitmap_get_shared_map(&dist->irq_enabled);
-	bitmap_and(pend_shared, pending, enabled, VGIC_NR_SHARED_IRQS);
+	bitmap_and(pend_shared, pending, enabled, nr_shared);
 	bitmap_and(pend_shared, pend_shared,
 		   vgic_bitmap_get_shared_map(&dist->irq_spi_target[vcpu_id]),
-		   VGIC_NR_SHARED_IRQS);
+		   nr_shared);
 
 	pending_private = find_first_bit(pend_percpu, VGIC_NR_PRIVATE_IRQS);
-	pending_shared = find_first_bit(pend_shared, VGIC_NR_SHARED_IRQS);
+	pending_shared = find_first_bit(pend_shared, nr_shared);
 	return (pending_private < VGIC_NR_PRIVATE_IRQS ||
-		pending_shared < VGIC_NR_SHARED_IRQS);
+		pending_shared < vgic_nr_shared_irqs(dist));
 }
 
 /*
@@ -1365,7 +1371,7 @@ static void __kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
 	}
 
 	/* SPIs */
-	for_each_set_bit(i, vgic_cpu->pending_shared, VGIC_NR_SHARED_IRQS) {
+	for_each_set_bit(i, vgic_cpu->pending_shared, vgic_nr_shared_irqs(dist)) {
 		if (!vgic_queue_hwirq(vcpu, i + VGIC_NR_PRIVATE_IRQS))
 			overflow = 1;
 	}
-- 
2.0.4



* [PATCH v4 4/8] arm/arm64: KVM: vgic: kill VGIC_MAX_CPUS
  2014-09-11 11:09 ` Marc Zyngier
@ 2014-09-11 11:09   ` Marc Zyngier
  -1 siblings, 0 replies; 34+ messages in thread
From: Marc Zyngier @ 2014-09-11 11:09 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm; +Cc: Christoffer Dall, Andre Przywara

We now have the information about the number of CPU interfaces in
the distributor itself. Let's get rid of VGIC_MAX_CPUS, and just
rely on KVM_MAX_VCPUS where we don't have a choice. Yet.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 include/kvm/arm_vgic.h | 3 +--
 virt/kvm/arm/vgic.c    | 6 +++---
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index bdeb451..3900e31 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -29,13 +29,12 @@
 #define VGIC_NR_SGIS		16
 #define VGIC_NR_PPIS		16
 #define VGIC_NR_PRIVATE_IRQS	(VGIC_NR_SGIS + VGIC_NR_PPIS)
-#define VGIC_MAX_CPUS		KVM_MAX_VCPUS
 
 #define VGIC_V2_MAX_LRS		(1 << 6)
 #define VGIC_V3_MAX_LRS		16
 
 /* Sanity checks... */
-#if (VGIC_MAX_CPUS > 8)
+#if (KVM_MAX_VCPUS > 8)
 #error	Invalid number of CPU interfaces
 #endif
 
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 93fe73b..7e6e64d 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1294,7 +1294,7 @@ static bool vgic_queue_sgi(struct kvm_vcpu *vcpu, int irq)
 
 	sources = *vgic_get_sgi_sources(dist, vcpu_id, irq);
 
-	for_each_set_bit(c, &sources, VGIC_MAX_CPUS) {
+	for_each_set_bit(c, &sources, dist->nr_cpus) {
 		if (vgic_queue_irq(vcpu, c, irq))
 			clear_bit(c, &sources);
 	}
@@ -1701,7 +1701,7 @@ int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
 	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
 	int i;
 
-	if (vcpu->vcpu_id >= VGIC_MAX_CPUS)
+	if (vcpu->vcpu_id >= dist->nr_cpus)
 		return -EBUSY;
 
 	for (i = 0; i < VGIC_NR_IRQS; i++) {
@@ -1845,7 +1845,7 @@ static int vgic_init_maps(struct kvm *kvm)
 	int nr_cpus, nr_irqs;
 	int ret, i;
 
-	nr_cpus = dist->nr_cpus = VGIC_MAX_CPUS;
+	nr_cpus = dist->nr_cpus = KVM_MAX_VCPUS;
 	nr_irqs = dist->nr_irqs = VGIC_NR_IRQS;
 
 	ret  = vgic_init_bitmap(&dist->irq_enabled, nr_cpus, nr_irqs);
-- 
2.0.4



* [PATCH v4 5/8] arm/arm64: KVM: vgic: handle out-of-range MMIO accesses
  2014-09-11 11:09 ` Marc Zyngier
@ 2014-09-11 11:09   ` Marc Zyngier
  -1 siblings, 0 replies; 34+ messages in thread
From: Marc Zyngier @ 2014-09-11 11:09 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm; +Cc: Christoffer Dall, Andre Przywara

Now that we can (almost) dynamically size the number of interrupts,
we're facing an interesting issue:

We have to evaluate at runtime whether or not an access hits a valid
register, based on the sizing of this particular instance of the
distributor. Furthermore, the GIC spec says that accessing a reserved
register is RAZ/WI.

For this, add a new field to our range structure, indicating the number
of bits a single interrupt uses. That allows us to find out whether or
not the access is in range.
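
For example, with one bit per interrupt (GIC_DIST_ENABLE_SET and
friends), a 32-bit access at byte offset 0x20 covers interrupts 256 to
287; if this particular distributor was sized for 128 interrupts, that
access is out of range and must be treated as RAZ/WI.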

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 include/kvm/arm_vgic.h |  3 ++-
 virt/kvm/arm/vgic.c    | 56 ++++++++++++++++++++++++++++++++++++++++----------
 2 files changed, 47 insertions(+), 12 deletions(-)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 3900e31..97f5f57 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -32,6 +32,7 @@
 
 #define VGIC_V2_MAX_LRS		(1 << 6)
 #define VGIC_V3_MAX_LRS		16
+#define VGIC_MAX_IRQS		1024
 
 /* Sanity checks... */
 #if (KVM_MAX_VCPUS > 8)
@@ -42,7 +43,7 @@
 #error "VGIC_NR_IRQS must be a multiple of 32"
 #endif
 
-#if (VGIC_NR_IRQS > 1024)
+#if (VGIC_NR_IRQS > VGIC_MAX_IRQS)
 #error "VGIC_NR_IRQS must be <= 1024"
 #endif
 
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 7e6e64d..ab01cab 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -892,6 +892,7 @@ static bool handle_mmio_sgi_clear(struct kvm_vcpu *vcpu,
 struct mmio_range {
 	phys_addr_t base;
 	unsigned long len;
+	int bits_per_irq;
 	bool (*handle_mmio)(struct kvm_vcpu *vcpu, struct kvm_exit_mmio *mmio,
 			    phys_addr_t offset);
 };
@@ -900,56 +901,67 @@ static const struct mmio_range vgic_dist_ranges[] = {
 	{
 		.base		= GIC_DIST_CTRL,
 		.len		= 12,
+		.bits_per_irq	= 0,
 		.handle_mmio	= handle_mmio_misc,
 	},
 	{
 		.base		= GIC_DIST_IGROUP,
-		.len		= VGIC_NR_IRQS / 8,
+		.len		= VGIC_MAX_IRQS / 8,
+		.bits_per_irq	= 1,
 		.handle_mmio	= handle_mmio_raz_wi,
 	},
 	{
 		.base		= GIC_DIST_ENABLE_SET,
-		.len		= VGIC_NR_IRQS / 8,
+		.len		= VGIC_MAX_IRQS / 8,
+		.bits_per_irq	= 1,
 		.handle_mmio	= handle_mmio_set_enable_reg,
 	},
 	{
 		.base		= GIC_DIST_ENABLE_CLEAR,
-		.len		= VGIC_NR_IRQS / 8,
+		.len		= VGIC_MAX_IRQS / 8,
+		.bits_per_irq	= 1,
 		.handle_mmio	= handle_mmio_clear_enable_reg,
 	},
 	{
 		.base		= GIC_DIST_PENDING_SET,
-		.len		= VGIC_NR_IRQS / 8,
+		.len		= VGIC_MAX_IRQS / 8,
+		.bits_per_irq	= 1,
 		.handle_mmio	= handle_mmio_set_pending_reg,
 	},
 	{
 		.base		= GIC_DIST_PENDING_CLEAR,
-		.len		= VGIC_NR_IRQS / 8,
+		.len		= VGIC_MAX_IRQS / 8,
+		.bits_per_irq	= 1,
 		.handle_mmio	= handle_mmio_clear_pending_reg,
 	},
 	{
 		.base		= GIC_DIST_ACTIVE_SET,
-		.len		= VGIC_NR_IRQS / 8,
+		.len		= VGIC_MAX_IRQS / 8,
+		.bits_per_irq	= 1,
 		.handle_mmio	= handle_mmio_raz_wi,
 	},
 	{
 		.base		= GIC_DIST_ACTIVE_CLEAR,
-		.len		= VGIC_NR_IRQS / 8,
+		.len		= VGIC_MAX_IRQS / 8,
+		.bits_per_irq	= 1,
 		.handle_mmio	= handle_mmio_raz_wi,
 	},
 	{
 		.base		= GIC_DIST_PRI,
-		.len		= VGIC_NR_IRQS,
+		.len		= VGIC_MAX_IRQS,
+		.bits_per_irq	= 8,
 		.handle_mmio	= handle_mmio_priority_reg,
 	},
 	{
 		.base		= GIC_DIST_TARGET,
-		.len		= VGIC_NR_IRQS,
+		.len		= VGIC_MAX_IRQS,
+		.bits_per_irq	= 8,
 		.handle_mmio	= handle_mmio_target_reg,
 	},
 	{
 		.base		= GIC_DIST_CONFIG,
-		.len		= VGIC_NR_IRQS / 4,
+		.len		= VGIC_MAX_IRQS / 4,
+		.bits_per_irq	= 2,
 		.handle_mmio	= handle_mmio_cfg_reg,
 	},
 	{
@@ -987,6 +999,22 @@ struct mmio_range *find_matching_range(const struct mmio_range *ranges,
 	return NULL;
 }
 
+static bool vgic_validate_access(const struct vgic_dist *dist,
+				 const struct mmio_range *range,
+				 unsigned long offset)
+{
+	int irq;
+
+	if (!range->bits_per_irq)
+		return true;	/* Not an irq-based access */
+
+	irq = offset * 8 / range->bits_per_irq;
+	if (irq >= dist->nr_irqs)
+		return false;
+
+	return true;
+}
+
 /**
  * vgic_handle_mmio - handle an in-kernel MMIO access
  * @vcpu:	pointer to the vcpu performing the access
@@ -1026,7 +1054,13 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
 
 	spin_lock(&vcpu->kvm->arch.vgic.lock);
 	offset = mmio->phys_addr - range->base - base;
-	updated_state = range->handle_mmio(vcpu, mmio, offset);
+	if (vgic_validate_access(dist, range, offset)) {
+		updated_state = range->handle_mmio(vcpu, mmio, offset);
+	} else {
+		vgic_reg_access(mmio, NULL, offset,
+				ACCESS_READ_RAZ | ACCESS_WRITE_IGNORED);
+		updated_state = false;
+	}
 	spin_unlock(&vcpu->kvm->arch.vgic.lock);
 	kvm_prepare_mmio(run, mmio);
 	kvm_handle_mmio_return(vcpu, run);
-- 
2.0.4


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v4 6/8] arm/arm64: KVM: vgic: kill VGIC_NR_IRQS
  2014-09-11 11:09 ` Marc Zyngier
@ 2014-09-11 11:09   ` Marc Zyngier
  -1 siblings, 0 replies; 34+ messages in thread
From: Marc Zyngier @ 2014-09-11 11:09 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm; +Cc: Christoffer Dall, Andre Przywara

Nuke VGIC_NR_IRQS entirely, now that the distributor instance
contains the number of IRQs allocated to this GIC.

Also add VGIC_NR_IRQS_LEGACY to preserve the current API.
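
As a quick illustration of why the legacy default keeps the guest view
unchanged: with 256 interrupts, the GICD_TYPER value computed in
handle_mmio_misc() still reports the same ITLinesNumber:

	reg |= (256 >> 5) - 1;	/* ITLinesNumber = 7, i.e. 32 * (7 + 1) = 256 IRQs */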

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 include/kvm/arm_vgic.h |  6 +++---
 virt/kvm/arm/vgic.c    | 17 +++++++++++------
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 97f5f57..0a27564 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -25,7 +25,7 @@
 #include <linux/spinlock.h>
 #include <linux/types.h>
 
-#define VGIC_NR_IRQS		256
+#define VGIC_NR_IRQS_LEGACY	256
 #define VGIC_NR_SGIS		16
 #define VGIC_NR_PPIS		16
 #define VGIC_NR_PRIVATE_IRQS	(VGIC_NR_SGIS + VGIC_NR_PPIS)
@@ -39,11 +39,11 @@
 #error	Invalid number of CPU interfaces
 #endif
 
-#if (VGIC_NR_IRQS & 31)
+#if (VGIC_NR_IRQS_LEGACY & 31)
 #error "VGIC_NR_IRQS must be a multiple of 32"
 #endif
 
-#if (VGIC_NR_IRQS > VGIC_MAX_IRQS)
+#if (VGIC_NR_IRQS_LEGACY > VGIC_MAX_IRQS)
 #error "VGIC_NR_IRQS must be <= 1024"
 #endif
 
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index ab01cab..dfa6430 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -436,7 +436,7 @@ static bool handle_mmio_misc(struct kvm_vcpu *vcpu,
 
 	case 4:			/* GICD_TYPER */
 		reg  = (atomic_read(&vcpu->kvm->online_vcpus) - 1) << 5;
-		reg |= (VGIC_NR_IRQS >> 5) - 1;
+		reg |= (vcpu->kvm->arch.vgic.nr_irqs >> 5) - 1;
 		vgic_reg_access(mmio, &reg, word_offset,
 				ACCESS_READ_VALUE | ACCESS_WRITE_IGNORED);
 		break;
@@ -1274,13 +1274,14 @@ static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu)
 static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
 	struct vgic_lr vlr;
 	int lr;
 
 	/* Sanitize the input... */
 	BUG_ON(sgi_source_id & ~7);
 	BUG_ON(sgi_source_id && irq >= VGIC_NR_SGIS);
-	BUG_ON(irq >= VGIC_NR_IRQS);
+	BUG_ON(irq >= dist->nr_irqs);
 
 	kvm_debug("Queue IRQ%d\n", irq);
 
@@ -1516,7 +1517,7 @@ static void __kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 
 		vlr = vgic_get_lr(vcpu, lr);
 
-		BUG_ON(vlr.irq >= VGIC_NR_IRQS);
+		BUG_ON(vlr.irq >= dist->nr_irqs);
 		vgic_cpu->vgic_irq_lr_map[vlr.irq] = LR_EMPTY;
 	}
 
@@ -1738,7 +1739,7 @@ int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
 	if (vcpu->vcpu_id >= dist->nr_cpus)
 		return -EBUSY;
 
-	for (i = 0; i < VGIC_NR_IRQS; i++) {
+	for (i = 0; i < dist->nr_irqs; i++) {
 		if (i < VGIC_NR_PPIS)
 			vgic_bitmap_set_irq_val(&dist->irq_enabled,
 						vcpu->vcpu_id, i, 1);
@@ -1880,7 +1881,11 @@ static int vgic_init_maps(struct kvm *kvm)
 	int ret, i;
 
 	nr_cpus = dist->nr_cpus = KVM_MAX_VCPUS;
-	nr_irqs = dist->nr_irqs = VGIC_NR_IRQS;
+
+	if (!dist->nr_irqs)
+		dist->nr_irqs = VGIC_NR_IRQS_LEGACY;
+
+	nr_irqs = dist->nr_irqs;
 
 	ret  = vgic_init_bitmap(&dist->irq_enabled, nr_cpus, nr_irqs);
 	ret |= vgic_init_bitmap(&dist->irq_level, nr_cpus, nr_irqs);
@@ -1964,7 +1969,7 @@ int kvm_vgic_init(struct kvm *kvm)
 		goto out;
 	}
 
-	for (i = VGIC_NR_PRIVATE_IRQS; i < VGIC_NR_IRQS; i += 4)
+	for (i = VGIC_NR_PRIVATE_IRQS; i < kvm->arch.vgic.nr_irqs; i += 4)
 		vgic_set_target_reg(kvm, 0, i);
 
 	kvm->arch.vgic.ready = true;
-- 
2.0.4


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v4 7/8] arm/arm64: KVM: vgic: delay vgic allocation until init time
  2014-09-11 11:09 ` Marc Zyngier
@ 2014-09-11 11:09   ` Marc Zyngier
  -1 siblings, 0 replies; 34+ messages in thread
From: Marc Zyngier @ 2014-09-11 11:09 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm; +Cc: Christoffer Dall, Andre Przywara

It is now quite easy to delay the allocation of the vgic tables
until we actually require it to be up and running (when the first
vcpu is kicking around, or someone tries to access the GIC registers).

This allows us to allocate memory for the exact number of CPUs we
have. As nobody configures the number of interrupts just yet, fall
back to VGIC_NR_IRQS_LEGACY.
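
Roughly, the resulting flow looks like this (a sketch, not an exhaustive
call graph):

	kvm_vgic_create()		/* no allocation here any more */
		...
	kvm_vgic_init()			/* first vcpu run */
		-> vgic_init_maps()	/* nr_cpus = online vcpus, nr_irqs
					 * defaults to VGIC_NR_IRQS_LEGACY */
		-> kvm_vgic_vcpu_init()	/* per-vcpu setup, for each vcpu */

	vgic_attr_regs_access()		/* userspace GIC register access */
		-> vgic_init_maps()	/* same allocation path */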

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/kvm/arm.c     |  7 -------
 include/kvm/arm_vgic.h |  1 -
 virt/kvm/arm/vgic.c    | 42 +++++++++++++++++++++++++++++-------------
 3 files changed, 29 insertions(+), 21 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 923a01d..7d5065e 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -271,16 +271,9 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
 
 int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 {
-	int ret;
-
 	/* Force users to call KVM_ARM_VCPU_INIT */
 	vcpu->arch.target = -1;
 
-	/* Set up VGIC */
-	ret = kvm_vgic_vcpu_init(vcpu);
-	if (ret)
-		return ret;
-
 	/* Set up the timer */
 	kvm_timer_vcpu_init(vcpu);
 
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 0a27564..73cbb61 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -277,7 +277,6 @@ int kvm_vgic_hyp_init(void);
 int kvm_vgic_init(struct kvm *kvm);
 int kvm_vgic_create(struct kvm *kvm);
 void kvm_vgic_destroy(struct kvm *kvm);
-int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu);
 void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu);
 void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index dfa6430..9180823 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1730,15 +1730,12 @@ static int vgic_vcpu_init_maps(struct kvm_vcpu *vcpu, int nr_irqs)
  * Initialize the vgic_cpu struct and vgic_dist struct fields pertaining to
  * this vcpu and enable the VGIC for this VCPU
  */
-int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
+static void kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
 	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
 	int i;
 
-	if (vcpu->vcpu_id >= dist->nr_cpus)
-		return -EBUSY;
-
 	for (i = 0; i < dist->nr_irqs; i++) {
 		if (i < VGIC_NR_PPIS)
 			vgic_bitmap_set_irq_val(&dist->irq_enabled,
@@ -1758,8 +1755,6 @@ int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
 	vgic_cpu->nr_lr = vgic->nr_lr;
 
 	vgic_enable(vcpu);
-
-	return 0;
 }
 
 static void vgic_init_maintenance_interrupt(void *info)
@@ -1880,8 +1875,17 @@ static int vgic_init_maps(struct kvm *kvm)
 	int nr_cpus, nr_irqs;
 	int ret, i;
 
-	nr_cpus = dist->nr_cpus = KVM_MAX_VCPUS;
+	if (dist->nr_cpus)	/* Already allocated */
+		return 0;
+
+	nr_cpus = dist->nr_cpus = atomic_read(&kvm->online_vcpus);
+	if (!nr_cpus)		/* No vcpus? Can't be good... */
+		return -EINVAL;
 
+	/*
+	 * If nobody configured the number of interrupts, use the
+	 * legacy one.
+	 */
 	if (!dist->nr_irqs)
 		dist->nr_irqs = VGIC_NR_IRQS_LEGACY;
 
@@ -1927,6 +1931,9 @@ static int vgic_init_maps(struct kvm *kvm)
 		}
 	}
 
+	for (i = VGIC_NR_PRIVATE_IRQS; i < dist->nr_irqs; i += 4)
+		vgic_set_target_reg(kvm, 0, i);
+
 out:
 	if (ret)
 		kvm_vgic_destroy(kvm);
@@ -1945,6 +1952,7 @@ out:
  */
 int kvm_vgic_init(struct kvm *kvm)
 {
+	struct kvm_vcpu *vcpu;
 	int ret = 0, i;
 
 	if (!irqchip_in_kernel(kvm))
@@ -1962,6 +1970,12 @@ int kvm_vgic_init(struct kvm *kvm)
 		goto out;
 	}
 
+	ret = vgic_init_maps(kvm);
+	if (ret) {
+		kvm_err("Unable to allocate maps\n");
+		goto out;
+	}
+
 	ret = kvm_phys_addr_ioremap(kvm, kvm->arch.vgic.vgic_cpu_base,
 				    vgic->vcpu_base, KVM_VGIC_V2_CPU_SIZE);
 	if (ret) {
@@ -1969,11 +1983,13 @@ int kvm_vgic_init(struct kvm *kvm)
 		goto out;
 	}
 
-	for (i = VGIC_NR_PRIVATE_IRQS; i < kvm->arch.vgic.nr_irqs; i += 4)
-		vgic_set_target_reg(kvm, 0, i);
+	kvm_for_each_vcpu(i, vcpu, kvm)
+		kvm_vgic_vcpu_init(vcpu);
 
 	kvm->arch.vgic.ready = true;
 out:
+	if (ret)
+		kvm_vgic_destroy(kvm);
 	mutex_unlock(&kvm->lock);
 	return ret;
 }
@@ -2014,10 +2030,6 @@ int kvm_vgic_create(struct kvm *kvm)
 	kvm->arch.vgic.vgic_dist_base = VGIC_ADDR_UNDEF;
 	kvm->arch.vgic.vgic_cpu_base = VGIC_ADDR_UNDEF;
 
-	ret = vgic_init_maps(kvm);
-	if (ret)
-		kvm_err("Unable to allocate maps\n");
-
 out_unlock:
 	for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
 		vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
@@ -2218,6 +2230,10 @@ static int vgic_attr_regs_access(struct kvm_device *dev,
 
 	mutex_lock(&dev->kvm->lock);
 
+	ret = vgic_init_maps(dev->kvm);
+	if (ret)
+		goto out;
+
 	if (cpuid >= atomic_read(&dev->kvm->online_vcpus)) {
 		ret = -EINVAL;
 		goto out;
-- 
2.0.4


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v4 8/8] arm/arm64: KVM: vgic: make number of irqs a configurable attribute
  2014-09-11 11:09 ` Marc Zyngier
@ 2014-09-11 11:09   ` Marc Zyngier
  -1 siblings, 0 replies; 34+ messages in thread
From: Marc Zyngier @ 2014-09-11 11:09 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm; +Cc: Christoffer Dall, Andre Przywara

In order to make the number of interrupts configurable, use the new
fancy device management API to add KVM_DEV_ARM_VGIC_GRP_NR_IRQS as
a VGIC configurable attribute.

Userspace can now specify the exact size of the GIC (in increments
of 32 interrupts).
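
A minimal userspace sketch (assuming vgic_fd is the file descriptor
returned by KVM_CREATE_DEVICE for the VGIC; error handling omitted):

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int vgic_set_nr_irqs(int vgic_fd, uint32_t nr_irqs)
	{
		struct kvm_device_attr attr = {
			.group	= KVM_DEV_ARM_VGIC_GRP_NR_IRQS,
			.addr	= (uint64_t)(unsigned long)&nr_irqs,
		};

		/* must be done before the vgic is initialized */
		return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
	}

	/* e.g. vgic_set_nr_irqs(vgic_fd, 128);  valid: 64..1024, multiple of 32 */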

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 Documentation/virtual/kvm/devices/arm-vgic.txt | 10 +++++++
 arch/arm/include/uapi/asm/kvm.h                |  1 +
 arch/arm64/include/uapi/asm/kvm.h              |  1 +
 virt/kvm/arm/vgic.c                            | 37 ++++++++++++++++++++++++++
 4 files changed, 49 insertions(+)

diff --git a/Documentation/virtual/kvm/devices/arm-vgic.txt b/Documentation/virtual/kvm/devices/arm-vgic.txt
index 7f4e91b..df8b0c7 100644
--- a/Documentation/virtual/kvm/devices/arm-vgic.txt
+++ b/Documentation/virtual/kvm/devices/arm-vgic.txt
@@ -71,3 +71,13 @@ Groups:
   Errors:
     -ENODEV: Getting or setting this register is not yet supported
     -EBUSY: One or more VCPUs are running
+
+  KVM_DEV_ARM_VGIC_GRP_NR_IRQS
+  Attributes:
+    A value describing the number of interrupts (SGI, PPI and SPI) for
+    this GIC instance, ranging from 64 to 1024, in increments of 32.
+
+  Errors:
+    -EINVAL: Value set is out of the expected range
+    -EBUSY: Value has already be set, or GIC has already been initialized
+            with default values.
diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
index e6ebdd3..8b51c1a 100644
--- a/arch/arm/include/uapi/asm/kvm.h
+++ b/arch/arm/include/uapi/asm/kvm.h
@@ -173,6 +173,7 @@ struct kvm_arch_memory_slot {
 #define   KVM_DEV_ARM_VGIC_CPUID_MASK	(0xffULL << KVM_DEV_ARM_VGIC_CPUID_SHIFT)
 #define   KVM_DEV_ARM_VGIC_OFFSET_SHIFT	0
 #define   KVM_DEV_ARM_VGIC_OFFSET_MASK	(0xffffffffULL << KVM_DEV_ARM_VGIC_OFFSET_SHIFT)
+#define KVM_DEV_ARM_VGIC_GRP_NR_IRQS	3
 
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_TYPE_SHIFT		24
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index e633ff8..b5cd6ed 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -159,6 +159,7 @@ struct kvm_arch_memory_slot {
 #define   KVM_DEV_ARM_VGIC_CPUID_MASK	(0xffULL << KVM_DEV_ARM_VGIC_CPUID_SHIFT)
 #define   KVM_DEV_ARM_VGIC_OFFSET_SHIFT	0
 #define   KVM_DEV_ARM_VGIC_OFFSET_MASK	(0xffffffffULL << KVM_DEV_ARM_VGIC_OFFSET_SHIFT)
+#define KVM_DEV_ARM_VGIC_GRP_NR_IRQS	3
 
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_TYPE_SHIFT		24
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 9180823..744388d 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -2331,6 +2331,36 @@ static int vgic_set_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
 
 		return vgic_attr_regs_access(dev, attr, &reg, true);
 	}
+	case KVM_DEV_ARM_VGIC_GRP_NR_IRQS: {
+		u32 __user *uaddr = (u32 __user *)(long)attr->addr;
+		u32 val;
+		int ret = 0;
+
+		if (get_user(val, uaddr))
+			return -EFAULT;
+
+		/*
+		 * We require:
+		 * - at least 32 SPIs on top of the 16 SGIs and 16 PPIs
+		 * - at most 1024 interrupts
+		 * - a multiple of 32 interrupts
+		 */
+		if (val < (VGIC_NR_PRIVATE_IRQS + 32) ||
+		    val > VGIC_MAX_IRQS ||
+		    (val & 31))
+			return -EINVAL;
+
+		mutex_lock(&dev->kvm->lock);
+
+		if (vgic_initialized(dev->kvm) || dev->kvm->arch.vgic.nr_irqs)
+			ret = -EBUSY;
+		else
+			dev->kvm->arch.vgic.nr_irqs = val;
+
+		mutex_unlock(&dev->kvm->lock);
+
+		return ret;
+	}
 
 	}
 
@@ -2367,6 +2397,11 @@ static int vgic_get_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
 		r = put_user(reg, uaddr);
 		break;
 	}
+	case KVM_DEV_ARM_VGIC_GRP_NR_IRQS: {
+		u32 __user *uaddr = (u32 __user *)(long)attr->addr;
+		r = put_user(dev->kvm->arch.vgic.nr_irqs, uaddr);
+		break;
+	}
 
 	}
 
@@ -2403,6 +2438,8 @@ static int vgic_has_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
 	case KVM_DEV_ARM_VGIC_GRP_CPU_REGS:
 		offset = attr->attr & KVM_DEV_ARM_VGIC_OFFSET_MASK;
 		return vgic_has_attr_regs(vgic_cpu_ranges, offset);
+	case KVM_DEV_ARM_VGIC_GRP_NR_IRQS:
+		return 0;
 	}
 	return -ENXIO;
 }
-- 
2.0.4


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 2/8] arm/arm64: KVM: vgic: switch to dynamic allocation
  2014-09-11 11:09   ` Marc Zyngier
@ 2014-09-11 22:36     ` Christoffer Dall
  -1 siblings, 0 replies; 34+ messages in thread
From: Christoffer Dall @ 2014-09-11 22:36 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, linux-arm-kernel, kvm, Andre Przywara

On Thu, Sep 11, 2014 at 12:09:09PM +0100, Marc Zyngier wrote:
> So far, all the VGIC data structures are statically defined by the
> *maximum* number of vcpus and interrupts it supports. It means that
> we always have to oversize it to cater for the worse case.
> 
> Start by changing the data structures to be dynamically sizeable,
> and allocate them at runtime.
> 
> The sizes are still very static though.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm/kvm/arm.c     |   3 +
>  include/kvm/arm_vgic.h |  76 ++++++++++++----
>  virt/kvm/arm/vgic.c    | 237 ++++++++++++++++++++++++++++++++++++++++++-------
>  3 files changed, 267 insertions(+), 49 deletions(-)
> 
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index a99e0cd..923a01d 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -172,6 +172,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
>  			kvm->vcpus[i] = NULL;
>  		}
>  	}
> +
> +	kvm_vgic_destroy(kvm);
>  }
>  
>  int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> @@ -253,6 +255,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
>  {
>  	kvm_mmu_free_memory_caches(vcpu);
>  	kvm_timer_vcpu_terminate(vcpu);
> +	kvm_vgic_vcpu_destroy(vcpu);
>  	kmem_cache_free(kvm_vcpu_cache, vcpu);
>  }
>  
> diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
> index f074539..bdaac57 100644
> --- a/include/kvm/arm_vgic.h
> +++ b/include/kvm/arm_vgic.h
> @@ -54,19 +54,33 @@
>   * - a bunch of shared interrupts (SPI)
>   */
>  struct vgic_bitmap {
> -	union {
> -		u32 reg[VGIC_NR_PRIVATE_IRQS / 32];
> -		DECLARE_BITMAP(reg_ul, VGIC_NR_PRIVATE_IRQS);
> -	} percpu[VGIC_MAX_CPUS];
> -	union {
> -		u32 reg[VGIC_NR_SHARED_IRQS / 32];
> -		DECLARE_BITMAP(reg_ul, VGIC_NR_SHARED_IRQS);
> -	} shared;
> +	/*
> +	 * - One UL per VCPU for private interrupts (assumes UL is at
> +	 *   least 32 bits)
> +	 * - As many UL as necessary for shared interrupts.
> +	 *
> +	 * The private interrupts are accessed via the "private"
> +	 * field, one UL per vcpu (the state for vcpu n is in
> +	 * private[n]). The shared interrupts are accessed via the
> +	 * "shared" pointer (IRQn state is at bit n-32 in the bitmap).
> +	 */
> +	unsigned long *private;
> +	unsigned long *shared;

the comment above the define for REG_OFFSET_SWIZZLE still talks about
the unions in struct vgic_bitmap, which is no longer true.  Mind
updating that comment?

>  };
>  
>  struct vgic_bytemap {
> -	u32 percpu[VGIC_MAX_CPUS][VGIC_NR_PRIVATE_IRQS / 4];
> -	u32 shared[VGIC_NR_SHARED_IRQS  / 4];
> +	/*
> +	 * - 8 u32 per VCPU for private interrupts
> +	 * - As many u32 as necessary for shared interrupts.
> +	 *
> +	 * The private interrupts are accessed via the "private"
> +	 * field, (the state for vcpu n is in private[n*8] to
> +	 * private[n*8 + 7]). The shared interrupts are accessed via
> +	 * the "shared" pointer (IRQn state is at byte (n-32)%4 of the
> +	 * shared[(n-32)/4] word).
> +	 */
> +	u32 *private;
> +	u32 *shared;
>  };
>  
>  struct kvm_vcpu;
> @@ -127,6 +141,9 @@ struct vgic_dist {
>  	bool			in_kernel;
>  	bool			ready;
>  
> +	int			nr_cpus;
> +	int			nr_irqs;
> +
>  	/* Virtual control interface mapping */
>  	void __iomem		*vctrl_base;
>  
> @@ -166,15 +183,36 @@ struct vgic_dist {
>  	/* Level/edge triggered */
>  	struct vgic_bitmap	irq_cfg;
>  
> -	/* Source CPU per SGI and target CPU */
> -	u8			irq_sgi_sources[VGIC_MAX_CPUS][VGIC_NR_SGIS];
> +	/*
> +	 * Source CPU per SGI and target CPU:
> +	 *
> +	 * Each byte represent a SGI observable on a VCPU, each bit of
> +	 * this byte indicating if the corresponding VCPU has
> +	 * generated this interrupt. This is a GICv2 feature only.
> +	 *
> +	 * For VCPUn (n < 8), irq_sgi_sources[n*16] to [n*16 + 15] are
> +	 * the SGIs observable on VCPUn.
> +	 */
> +	u8			*irq_sgi_sources;
>  
> -	/* Target CPU for each IRQ */
> -	u8			irq_spi_cpu[VGIC_NR_SHARED_IRQS];
> -	struct vgic_bitmap	irq_spi_target[VGIC_MAX_CPUS];
> +	/*
> +	 * Target CPU for each SPI:
> +	 *
> +	 * Array of available SPI, each byte indicating the target
> +	 * VCPU for SPI. IRQn (n >=32) is at irq_spi_cpu[n-32].
> +	 */
> +	u8			*irq_spi_cpu;
> +
> +	/*
> +	 * Reverse lookup of irq_spi_cpu for faster compute pending:
> +	 *
> +	 * Array of bitmaps, one per VCPU, describing is IRQn is

ah, describing *if* ?

> +	 * routed to a particular VCPU.
> +	 */
> +	struct vgic_bitmap	*irq_spi_target;
>  
>  	/* Bitmap indicating which CPU has something pending */
> -	unsigned long		irq_pending_on_cpu;
> +	unsigned long		*irq_pending_on_cpu;
>  #endif
>  };
>  
> @@ -204,11 +242,11 @@ struct vgic_v3_cpu_if {
>  struct vgic_cpu {
>  #ifdef CONFIG_KVM_ARM_VGIC
>  	/* per IRQ to LR mapping */
> -	u8		vgic_irq_lr_map[VGIC_NR_IRQS];
> +	u8		*vgic_irq_lr_map;
>  
>  	/* Pending interrupts on this VCPU */
>  	DECLARE_BITMAP(	pending_percpu, VGIC_NR_PRIVATE_IRQS);
> -	DECLARE_BITMAP(	pending_shared, VGIC_NR_SHARED_IRQS);
> +	unsigned long	*pending_shared;
>  
>  	/* Bitmap of used/free list registers */
>  	DECLARE_BITMAP(	lr_used, VGIC_V2_MAX_LRS);
> @@ -239,7 +277,9 @@ int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write);
>  int kvm_vgic_hyp_init(void);
>  int kvm_vgic_init(struct kvm *kvm);
>  int kvm_vgic_create(struct kvm *kvm);
> +void kvm_vgic_destroy(struct kvm *kvm);
>  int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu);
> +void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu);
>  void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu);
>  void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu);
>  int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
> diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
> index d3299d4..92c086e 100644
> --- a/virt/kvm/arm/vgic.c
> +++ b/virt/kvm/arm/vgic.c
> @@ -95,6 +95,7 @@ static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu);
>  static void vgic_retire_lr(int lr_nr, int irq, struct kvm_vcpu *vcpu);
>  static void vgic_update_state(struct kvm *kvm);
>  static void vgic_kick_vcpus(struct kvm *kvm);
> +static u8 *vgic_get_sgi_sources(struct vgic_dist *dist, int vcpu_id, int sgi);
>  static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg);
>  static struct vgic_lr vgic_get_lr(const struct kvm_vcpu *vcpu, int lr);
>  static void vgic_set_lr(struct kvm_vcpu *vcpu, int lr, struct vgic_lr lr_desc);
> @@ -124,23 +125,45 @@ static const struct vgic_params *vgic;
>  #define REG_OFFSET_SWIZZLE	0
>  #endif
>  
> +static int vgic_init_bitmap(struct vgic_bitmap *b, int nr_cpus, int nr_irqs)
> +{
> +	int nr_longs;
> +
> +	nr_longs = nr_cpus + BITS_TO_LONGS(nr_irqs - VGIC_NR_PRIVATE_IRQS);
> +
> +	b->private = kzalloc(sizeof(unsigned long) * nr_longs, GFP_KERNEL);
> +	if (!b->private)
> +		return -ENOMEM;
> +
> +	b->shared = b->private + nr_cpus;
> +
> +	return 0;
> +}
> +
> +static void vgic_free_bitmap(struct vgic_bitmap *b)
> +{
> +	kfree(b->private);
> +	b->private = NULL;
> +	b->shared = NULL;
> +}
> +
>  static u32 *vgic_bitmap_get_reg(struct vgic_bitmap *x,
>  				int cpuid, u32 offset)
>  {
>  	offset >>= 2;
>  	if (!offset)
> -		return x->percpu[cpuid].reg + (offset ^ REG_OFFSET_SWIZZLE);
> +		return (u32 *)(x->private + cpuid) + REG_OFFSET_SWIZZLE;
>  	else
> -		return x->shared.reg + ((offset - 1) ^ REG_OFFSET_SWIZZLE);
> +		return (u32 *)(x->shared) + ((offset - 1) ^ REG_OFFSET_SWIZZLE);
>  }
>  
>  static int vgic_bitmap_get_irq_val(struct vgic_bitmap *x,
>  				   int cpuid, int irq)
>  {
>  	if (irq < VGIC_NR_PRIVATE_IRQS)
> -		return test_bit(irq, x->percpu[cpuid].reg_ul);
> +		return test_bit(irq, x->private + cpuid);
>  
> -	return test_bit(irq - VGIC_NR_PRIVATE_IRQS, x->shared.reg_ul);
> +	return test_bit(irq - VGIC_NR_PRIVATE_IRQS, x->shared);
>  }
>  
>  static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
> @@ -149,9 +172,9 @@ static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
>  	unsigned long *reg;
>  
>  	if (irq < VGIC_NR_PRIVATE_IRQS) {
> -		reg = x->percpu[cpuid].reg_ul;
> +		reg = x->private + cpuid;
>  	} else {
> -		reg =  x->shared.reg_ul;
> +		reg = x->shared;
>  		irq -= VGIC_NR_PRIVATE_IRQS;
>  	}
>  
> @@ -163,24 +186,49 @@ static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
>  
>  static unsigned long *vgic_bitmap_get_cpu_map(struct vgic_bitmap *x, int cpuid)
>  {
> -	if (unlikely(cpuid >= VGIC_MAX_CPUS))
> -		return NULL;
> -	return x->percpu[cpuid].reg_ul;
> +	return x->private + cpuid;
>  }
>  
>  static unsigned long *vgic_bitmap_get_shared_map(struct vgic_bitmap *x)
>  {
> -	return x->shared.reg_ul;
> +	return x->shared;
> +}
> +
> +static int vgic_init_bytemap(struct vgic_bytemap *x, int nr_cpus, int nr_irqs)
> +{
> +	int size;
> +
> +	size  = nr_cpus * VGIC_NR_PRIVATE_IRQS;
> +	size += nr_irqs - VGIC_NR_PRIVATE_IRQS;
> +
> +	x->private = kzalloc(size, GFP_KERNEL);
> +	if (!x->private)
> +		return -ENOMEM;
> +
> +	x->shared = x->private + nr_cpus * VGIC_NR_PRIVATE_IRQS / sizeof(u32);
> +	return 0;
> +}
> +
> +static void vgic_free_bytemap(struct vgic_bytemap *b)
> +{
> +	kfree(b->private);
> +	b->private = NULL;
> +	b->shared = NULL;
>  }
>  
>  static u32 *vgic_bytemap_get_reg(struct vgic_bytemap *x, int cpuid, u32 offset)
>  {
> -	offset >>= 2;
> -	BUG_ON(offset > (VGIC_NR_IRQS / 4));
> -	if (offset < 8)
> -		return x->percpu[cpuid] + offset;
> -	else
> -		return x->shared + offset - 8;
> +	u32 *reg;
> +
> +	if (offset < VGIC_NR_PRIVATE_IRQS) {
> +		reg = x->private;
> +		offset += cpuid * VGIC_NR_PRIVATE_IRQS;
> +	} else {
> +		reg = x->shared;
> +		offset -= VGIC_NR_PRIVATE_IRQS;
> +	}
> +
> +	return reg + (offset / sizeof(u32));
>  }
>  
>  #define VGIC_CFG_LEVEL	0
> @@ -739,7 +787,7 @@ static void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
>  		 */
>  		vgic_dist_irq_set_pending(vcpu, lr.irq);
>  		if (lr.irq < VGIC_NR_SGIS)
> -			dist->irq_sgi_sources[vcpu_id][lr.irq] |= 1 << lr.source;
> +			*vgic_get_sgi_sources(dist, vcpu_id, lr.irq) |= 1 << lr.source;
>  		lr.state &= ~LR_STATE_PENDING;
>  		vgic_set_lr(vcpu, i, lr);
>  
> @@ -773,7 +821,7 @@ static bool read_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
>  	/* Copy source SGIs from distributor side */
>  	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
>  		int shift = 8 * (sgi - min_sgi);
> -		reg |= (u32)dist->irq_sgi_sources[vcpu_id][sgi] << shift;
> +		reg |= ((u32)*vgic_get_sgi_sources(dist, vcpu_id, sgi)) << shift;
>  	}
>  
>  	mmio_data_write(mmio, ~0, reg);
> @@ -797,14 +845,15 @@ static bool write_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
>  	/* Clear pending SGIs on the distributor */
>  	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
>  		u8 mask = reg >> (8 * (sgi - min_sgi));
> +		u8 *src = vgic_get_sgi_sources(dist, vcpu_id, sgi);
>  		if (set) {
> -			if ((dist->irq_sgi_sources[vcpu_id][sgi] & mask) != mask)
> +			if ((*src & mask) != mask)
>  				updated = true;
> -			dist->irq_sgi_sources[vcpu_id][sgi] |= mask;
> +			*src |= mask;
>  		} else {
> -			if (dist->irq_sgi_sources[vcpu_id][sgi] & mask)
> +			if (*src & mask)
>  				updated = true;
> -			dist->irq_sgi_sources[vcpu_id][sgi] &= ~mask;
> +			*src &= ~mask;
>  		}
>  	}
>  
> @@ -988,6 +1037,11 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  	return true;
>  }
>  
> +static u8 *vgic_get_sgi_sources(struct vgic_dist *dist, int vcpu_id, int sgi)
> +{
> +	return dist->irq_sgi_sources + vcpu_id * VGIC_NR_SGIS + sgi;
> +}
> +
>  static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
>  {
>  	struct kvm *kvm = vcpu->kvm;
> @@ -1021,7 +1075,7 @@ static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
>  		if (target_cpus & 1) {
>  			/* Flag the SGI as pending */
>  			vgic_dist_irq_set_pending(vcpu, sgi);
> -			dist->irq_sgi_sources[c][sgi] |= 1 << vcpu_id;
> +			*vgic_get_sgi_sources(dist, c, sgi) |= 1 << vcpu_id;
>  			kvm_debug("SGI%d from CPU%d to CPU%d\n", sgi, vcpu_id, c);
>  		}
>  
> @@ -1068,14 +1122,14 @@ static void vgic_update_state(struct kvm *kvm)
>  	int c;
>  
>  	if (!dist->enabled) {
> -		set_bit(0, &dist->irq_pending_on_cpu);
> +		set_bit(0, dist->irq_pending_on_cpu);
>  		return;
>  	}
>  
>  	kvm_for_each_vcpu(c, vcpu, kvm) {
>  		if (compute_pending_for_cpu(vcpu)) {
>  			pr_debug("CPU%d has pending interrupts\n", c);
> -			set_bit(c, &dist->irq_pending_on_cpu);
> +			set_bit(c, dist->irq_pending_on_cpu);
>  		}
>  	}
>  }
> @@ -1232,14 +1286,14 @@ static bool vgic_queue_sgi(struct kvm_vcpu *vcpu, int irq)
>  	int vcpu_id = vcpu->vcpu_id;
>  	int c;
>  
> -	sources = dist->irq_sgi_sources[vcpu_id][irq];
> +	sources = *vgic_get_sgi_sources(dist, vcpu_id, irq);
>  
>  	for_each_set_bit(c, &sources, VGIC_MAX_CPUS) {
>  		if (vgic_queue_irq(vcpu, c, irq))
>  			clear_bit(c, &sources);
>  	}
>  
> -	dist->irq_sgi_sources[vcpu_id][irq] = sources;
> +	*vgic_get_sgi_sources(dist, vcpu_id, irq) = sources;
>  
>  	/*
>  	 * If the sources bitmap has been cleared it means that we
> @@ -1327,7 +1381,7 @@ epilog:
>  		 * us. Claim we don't have anything pending. We'll
>  		 * adjust that if needed while exiting.
>  		 */
> -		clear_bit(vcpu_id, &dist->irq_pending_on_cpu);
> +		clear_bit(vcpu_id, dist->irq_pending_on_cpu);
>  	}
>  }
>  
> @@ -1429,7 +1483,7 @@ static void __kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
>  	/* Check if we still have something up our sleeve... */
>  	pending = find_first_zero_bit(elrsr_ptr, vgic->nr_lr);
>  	if (level_pending || pending < vgic->nr_lr)
> -		set_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
> +		set_bit(vcpu->vcpu_id, dist->irq_pending_on_cpu);
>  }
>  
>  void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
> @@ -1463,7 +1517,7 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
>  	if (!irqchip_in_kernel(vcpu->kvm))
>  		return 0;
>  
> -	return test_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
> +	return test_bit(vcpu->vcpu_id, dist->irq_pending_on_cpu);
>  }
>  
>  static void vgic_kick_vcpus(struct kvm *kvm)
> @@ -1558,7 +1612,7 @@ static bool vgic_update_irq_pending(struct kvm *kvm, int cpuid,
>  
>  	if (level) {
>  		vgic_cpu_irq_set(vcpu, irq_num);
> -		set_bit(cpuid, &dist->irq_pending_on_cpu);
> +		set_bit(cpuid, dist->irq_pending_on_cpu);
>  	}
>  
>  out:
> @@ -1602,6 +1656,32 @@ static irqreturn_t vgic_maintenance_handler(int irq, void *data)
>  	return IRQ_HANDLED;
>  }
>  
> +void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
> +{
> +	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
> +
> +	kfree(vgic_cpu->pending_shared);
> +	kfree(vgic_cpu->vgic_irq_lr_map);
> +	vgic_cpu->pending_shared = NULL;
> +	vgic_cpu->vgic_irq_lr_map = NULL;
> +}
> +
> +static int vgic_vcpu_init_maps(struct kvm_vcpu *vcpu, int nr_irqs)
> +{
> +	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
> +
> +	int sz = (nr_irqs - VGIC_NR_PRIVATE_IRQS) / 8;

[copying question from last review round, apologies if I'm being stupid]

are we guaranteed that the numerator is always a multiple of 8? if not,
won't you end up allocating one less byte than needed?

> +	vgic_cpu->pending_shared = kzalloc(sz, GFP_KERNEL);
> +	vgic_cpu->vgic_irq_lr_map = kzalloc(nr_irqs, GFP_KERNEL);
> +
> +	if (!vgic_cpu->pending_shared || !vgic_cpu->vgic_irq_lr_map) {
> +		kvm_vgic_vcpu_destroy(vcpu);
> +		return -ENOMEM;
> +	}
> +
> +	return 0;
> +}
> +
[...]

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v4 2/8] arm/arm64: KVM: vgic: switch to dynamic allocation
@ 2014-09-11 22:36     ` Christoffer Dall
  0 siblings, 0 replies; 34+ messages in thread
From: Christoffer Dall @ 2014-09-11 22:36 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Sep 11, 2014 at 12:09:09PM +0100, Marc Zyngier wrote:
> So far, all the VGIC data structures are statically defined by the
> *maximum* number of vcpus and interrupts it supports. It means that
> we always have to oversize it to cater for the worse case.
> 
> Start by changing the data structures to be dynamically sizeable,
> and allocate them at runtime.
> 
> The sizes are still very static though.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm/kvm/arm.c     |   3 +
>  include/kvm/arm_vgic.h |  76 ++++++++++++----
>  virt/kvm/arm/vgic.c    | 237 ++++++++++++++++++++++++++++++++++++++++++-------
>  3 files changed, 267 insertions(+), 49 deletions(-)
> 
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index a99e0cd..923a01d 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -172,6 +172,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
>  			kvm->vcpus[i] = NULL;
>  		}
>  	}
> +
> +	kvm_vgic_destroy(kvm);
>  }
>  
>  int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> @@ -253,6 +255,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
>  {
>  	kvm_mmu_free_memory_caches(vcpu);
>  	kvm_timer_vcpu_terminate(vcpu);
> +	kvm_vgic_vcpu_destroy(vcpu);
>  	kmem_cache_free(kvm_vcpu_cache, vcpu);
>  }
>  
> diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
> index f074539..bdaac57 100644
> --- a/include/kvm/arm_vgic.h
> +++ b/include/kvm/arm_vgic.h
> @@ -54,19 +54,33 @@
>   * - a bunch of shared interrupts (SPI)
>   */
>  struct vgic_bitmap {
> -	union {
> -		u32 reg[VGIC_NR_PRIVATE_IRQS / 32];
> -		DECLARE_BITMAP(reg_ul, VGIC_NR_PRIVATE_IRQS);
> -	} percpu[VGIC_MAX_CPUS];
> -	union {
> -		u32 reg[VGIC_NR_SHARED_IRQS / 32];
> -		DECLARE_BITMAP(reg_ul, VGIC_NR_SHARED_IRQS);
> -	} shared;
> +	/*
> +	 * - One UL per VCPU for private interrupts (assumes UL is at
> +	 *   least 32 bits)
> +	 * - As many UL as necessary for shared interrupts.
> +	 *
> +	 * The private interrupts are accessed via the "private"
> +	 * field, one UL per vcpu (the state for vcpu n is in
> +	 * private[n]). The shared interrupts are accessed via the
> +	 * "shared" pointer (IRQn state is at bit n-32 in the bitmap).
> +	 */
> +	unsigned long *private;
> +	unsigned long *shared;

the comment above the define for REG_OFFSET_SWIZZLE still talks about
the unions in struct vgic_bitmap, which is no longer true.  Mind
updating that comment?

>  };
>  
>  struct vgic_bytemap {
> -	u32 percpu[VGIC_MAX_CPUS][VGIC_NR_PRIVATE_IRQS / 4];
> -	u32 shared[VGIC_NR_SHARED_IRQS  / 4];
> +	/*
> +	 * - 8 u32 per VCPU for private interrupts
> +	 * - As many u32 as necessary for shared interrupts.
> +	 *
> +	 * The private interrupts are accessed via the "private"
> +	 * field, (the state for vcpu n is in private[n*8] to
> +	 * private[n*8 + 7]). The shared interrupts are accessed via
> +	 * the "shared" pointer (IRQn state is at byte (n-32)%4 of the
> +	 * shared[(n-32)/4] word).
> +	 */
> +	u32 *private;
> +	u32 *shared;
>  };
>  
>  struct kvm_vcpu;
> @@ -127,6 +141,9 @@ struct vgic_dist {
>  	bool			in_kernel;
>  	bool			ready;
>  
> +	int			nr_cpus;
> +	int			nr_irqs;
> +
>  	/* Virtual control interface mapping */
>  	void __iomem		*vctrl_base;
>  
> @@ -166,15 +183,36 @@ struct vgic_dist {
>  	/* Level/edge triggered */
>  	struct vgic_bitmap	irq_cfg;
>  
> -	/* Source CPU per SGI and target CPU */
> -	u8			irq_sgi_sources[VGIC_MAX_CPUS][VGIC_NR_SGIS];
> +	/*
> +	 * Source CPU per SGI and target CPU:
> +	 *
> +	 * Each byte represent a SGI observable on a VCPU, each bit of
> +	 * this byte indicating if the corresponding VCPU has
> +	 * generated this interrupt. This is a GICv2 feature only.
> +	 *
> +	 * For VCPUn (n < 8), irq_sgi_sources[n*16] to [n*16 + 15] are
> +	 * the SGIs observable on VCPUn.
> +	 */
> +	u8			*irq_sgi_sources;
>  
> -	/* Target CPU for each IRQ */
> -	u8			irq_spi_cpu[VGIC_NR_SHARED_IRQS];
> -	struct vgic_bitmap	irq_spi_target[VGIC_MAX_CPUS];
> +	/*
> +	 * Target CPU for each SPI:
> +	 *
> +	 * Array of available SPI, each byte indicating the target
> +	 * VCPU for SPI. IRQn (n >=32) is at irq_spi_cpu[n-32].
> +	 */
> +	u8			*irq_spi_cpu;
> +
> +	/*
> +	 * Reverse lookup of irq_spi_cpu for faster compute pending:
> +	 *
> +	 * Array of bitmaps, one per VCPU, describing is IRQn is

ah, describing *if* ?

> +	 * routed to a particular VCPU.
> +	 */
> +	struct vgic_bitmap	*irq_spi_target;
>  
>  	/* Bitmap indicating which CPU has something pending */
> -	unsigned long		irq_pending_on_cpu;
> +	unsigned long		*irq_pending_on_cpu;
>  #endif
>  };
>  
> @@ -204,11 +242,11 @@ struct vgic_v3_cpu_if {
>  struct vgic_cpu {
>  #ifdef CONFIG_KVM_ARM_VGIC
>  	/* per IRQ to LR mapping */
> -	u8		vgic_irq_lr_map[VGIC_NR_IRQS];
> +	u8		*vgic_irq_lr_map;
>  
>  	/* Pending interrupts on this VCPU */
>  	DECLARE_BITMAP(	pending_percpu, VGIC_NR_PRIVATE_IRQS);
> -	DECLARE_BITMAP(	pending_shared, VGIC_NR_SHARED_IRQS);
> +	unsigned long	*pending_shared;
>  
>  	/* Bitmap of used/free list registers */
>  	DECLARE_BITMAP(	lr_used, VGIC_V2_MAX_LRS);
> @@ -239,7 +277,9 @@ int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write);
>  int kvm_vgic_hyp_init(void);
>  int kvm_vgic_init(struct kvm *kvm);
>  int kvm_vgic_create(struct kvm *kvm);
> +void kvm_vgic_destroy(struct kvm *kvm);
>  int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu);
> +void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu);
>  void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu);
>  void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu);
>  int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
> diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
> index d3299d4..92c086e 100644
> --- a/virt/kvm/arm/vgic.c
> +++ b/virt/kvm/arm/vgic.c
> @@ -95,6 +95,7 @@ static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu);
>  static void vgic_retire_lr(int lr_nr, int irq, struct kvm_vcpu *vcpu);
>  static void vgic_update_state(struct kvm *kvm);
>  static void vgic_kick_vcpus(struct kvm *kvm);
> +static u8 *vgic_get_sgi_sources(struct vgic_dist *dist, int vcpu_id, int sgi);
>  static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg);
>  static struct vgic_lr vgic_get_lr(const struct kvm_vcpu *vcpu, int lr);
>  static void vgic_set_lr(struct kvm_vcpu *vcpu, int lr, struct vgic_lr lr_desc);
> @@ -124,23 +125,45 @@ static const struct vgic_params *vgic;
>  #define REG_OFFSET_SWIZZLE	0
>  #endif
>  
> +static int vgic_init_bitmap(struct vgic_bitmap *b, int nr_cpus, int nr_irqs)
> +{
> +	int nr_longs;
> +
> +	nr_longs = nr_cpus + BITS_TO_LONGS(nr_irqs - VGIC_NR_PRIVATE_IRQS);
> +
> +	b->private = kzalloc(sizeof(unsigned long) * nr_longs, GFP_KERNEL);
> +	if (!b->private)
> +		return -ENOMEM;
> +
> +	b->shared = b->private + nr_cpus;
> +
> +	return 0;
> +}
> +
> +static void vgic_free_bitmap(struct vgic_bitmap *b)
> +{
> +	kfree(b->private);
> +	b->private = NULL;
> +	b->shared = NULL;
> +}
> +
>  static u32 *vgic_bitmap_get_reg(struct vgic_bitmap *x,
>  				int cpuid, u32 offset)
>  {
>  	offset >>= 2;
>  	if (!offset)
> -		return x->percpu[cpuid].reg + (offset ^ REG_OFFSET_SWIZZLE);
> +		return (u32 *)(x->private + cpuid) + REG_OFFSET_SWIZZLE;
>  	else
> -		return x->shared.reg + ((offset - 1) ^ REG_OFFSET_SWIZZLE);
> +		return (u32 *)(x->shared) + ((offset - 1) ^ REG_OFFSET_SWIZZLE);
>  }
>  
>  static int vgic_bitmap_get_irq_val(struct vgic_bitmap *x,
>  				   int cpuid, int irq)
>  {
>  	if (irq < VGIC_NR_PRIVATE_IRQS)
> -		return test_bit(irq, x->percpu[cpuid].reg_ul);
> +		return test_bit(irq, x->private + cpuid);
>  
> -	return test_bit(irq - VGIC_NR_PRIVATE_IRQS, x->shared.reg_ul);
> +	return test_bit(irq - VGIC_NR_PRIVATE_IRQS, x->shared);
>  }
>  
>  static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
> @@ -149,9 +172,9 @@ static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
>  	unsigned long *reg;
>  
>  	if (irq < VGIC_NR_PRIVATE_IRQS) {
> -		reg = x->percpu[cpuid].reg_ul;
> +		reg = x->private + cpuid;
>  	} else {
> -		reg =  x->shared.reg_ul;
> +		reg = x->shared;
>  		irq -= VGIC_NR_PRIVATE_IRQS;
>  	}
>  
> @@ -163,24 +186,49 @@ static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
>  
>  static unsigned long *vgic_bitmap_get_cpu_map(struct vgic_bitmap *x, int cpuid)
>  {
> -	if (unlikely(cpuid >= VGIC_MAX_CPUS))
> -		return NULL;
> -	return x->percpu[cpuid].reg_ul;
> +	return x->private + cpuid;
>  }
>  
>  static unsigned long *vgic_bitmap_get_shared_map(struct vgic_bitmap *x)
>  {
> -	return x->shared.reg_ul;
> +	return x->shared;
> +}
> +
> +static int vgic_init_bytemap(struct vgic_bytemap *x, int nr_cpus, int nr_irqs)
> +{
> +	int size;
> +
> +	size  = nr_cpus * VGIC_NR_PRIVATE_IRQS;
> +	size += nr_irqs - VGIC_NR_PRIVATE_IRQS;
> +
> +	x->private = kzalloc(size, GFP_KERNEL);
> +	if (!x->private)
> +		return -ENOMEM;
> +
> +	x->shared = x->private + nr_cpus * VGIC_NR_PRIVATE_IRQS / sizeof(u32);
> +	return 0;
> +}
> +
> +static void vgic_free_bytemap(struct vgic_bytemap *b)
> +{
> +	kfree(b->private);
> +	b->private = NULL;
> +	b->shared = NULL;
>  }
>  
>  static u32 *vgic_bytemap_get_reg(struct vgic_bytemap *x, int cpuid, u32 offset)
>  {
> -	offset >>= 2;
> -	BUG_ON(offset > (VGIC_NR_IRQS / 4));
> -	if (offset < 8)
> -		return x->percpu[cpuid] + offset;
> -	else
> -		return x->shared + offset - 8;
> +	u32 *reg;
> +
> +	if (offset < VGIC_NR_PRIVATE_IRQS) {
> +		reg = x->private;
> +		offset += cpuid * VGIC_NR_PRIVATE_IRQS;
> +	} else {
> +		reg = x->shared;
> +		offset -= VGIC_NR_PRIVATE_IRQS;
> +	}
> +
> +	return reg + (offset / sizeof(u32));
>  }
>  
>  #define VGIC_CFG_LEVEL	0
> @@ -739,7 +787,7 @@ static void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
>  		 */
>  		vgic_dist_irq_set_pending(vcpu, lr.irq);
>  		if (lr.irq < VGIC_NR_SGIS)
> -			dist->irq_sgi_sources[vcpu_id][lr.irq] |= 1 << lr.source;
> +			*vgic_get_sgi_sources(dist, vcpu_id, lr.irq) |= 1 << lr.source;
>  		lr.state &= ~LR_STATE_PENDING;
>  		vgic_set_lr(vcpu, i, lr);
>  
> @@ -773,7 +821,7 @@ static bool read_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
>  	/* Copy source SGIs from distributor side */
>  	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
>  		int shift = 8 * (sgi - min_sgi);
> -		reg |= (u32)dist->irq_sgi_sources[vcpu_id][sgi] << shift;
> +		reg |= ((u32)*vgic_get_sgi_sources(dist, vcpu_id, sgi)) << shift;
>  	}
>  
>  	mmio_data_write(mmio, ~0, reg);
> @@ -797,14 +845,15 @@ static bool write_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
>  	/* Clear pending SGIs on the distributor */
>  	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
>  		u8 mask = reg >> (8 * (sgi - min_sgi));
> +		u8 *src = vgic_get_sgi_sources(dist, vcpu_id, sgi);
>  		if (set) {
> -			if ((dist->irq_sgi_sources[vcpu_id][sgi] & mask) != mask)
> +			if ((*src & mask) != mask)
>  				updated = true;
> -			dist->irq_sgi_sources[vcpu_id][sgi] |= mask;
> +			*src |= mask;
>  		} else {
> -			if (dist->irq_sgi_sources[vcpu_id][sgi] & mask)
> +			if (*src & mask)
>  				updated = true;
> -			dist->irq_sgi_sources[vcpu_id][sgi] &= ~mask;
> +			*src &= ~mask;
>  		}
>  	}
>  
> @@ -988,6 +1037,11 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  	return true;
>  }
>  
> +static u8 *vgic_get_sgi_sources(struct vgic_dist *dist, int vcpu_id, int sgi)
> +{
> +	return dist->irq_sgi_sources + vcpu_id * VGIC_NR_SGIS + sgi;
> +}
> +
>  static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
>  {
>  	struct kvm *kvm = vcpu->kvm;
> @@ -1021,7 +1075,7 @@ static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
>  		if (target_cpus & 1) {
>  			/* Flag the SGI as pending */
>  			vgic_dist_irq_set_pending(vcpu, sgi);
> -			dist->irq_sgi_sources[c][sgi] |= 1 << vcpu_id;
> +			*vgic_get_sgi_sources(dist, c, sgi) |= 1 << vcpu_id;
>  			kvm_debug("SGI%d from CPU%d to CPU%d\n", sgi, vcpu_id, c);
>  		}
>  
> @@ -1068,14 +1122,14 @@ static void vgic_update_state(struct kvm *kvm)
>  	int c;
>  
>  	if (!dist->enabled) {
> -		set_bit(0, &dist->irq_pending_on_cpu);
> +		set_bit(0, dist->irq_pending_on_cpu);
>  		return;
>  	}
>  
>  	kvm_for_each_vcpu(c, vcpu, kvm) {
>  		if (compute_pending_for_cpu(vcpu)) {
>  			pr_debug("CPU%d has pending interrupts\n", c);
> -			set_bit(c, &dist->irq_pending_on_cpu);
> +			set_bit(c, dist->irq_pending_on_cpu);
>  		}
>  	}
>  }
> @@ -1232,14 +1286,14 @@ static bool vgic_queue_sgi(struct kvm_vcpu *vcpu, int irq)
>  	int vcpu_id = vcpu->vcpu_id;
>  	int c;
>  
> -	sources = dist->irq_sgi_sources[vcpu_id][irq];
> +	sources = *vgic_get_sgi_sources(dist, vcpu_id, irq);
>  
>  	for_each_set_bit(c, &sources, VGIC_MAX_CPUS) {
>  		if (vgic_queue_irq(vcpu, c, irq))
>  			clear_bit(c, &sources);
>  	}
>  
> -	dist->irq_sgi_sources[vcpu_id][irq] = sources;
> +	*vgic_get_sgi_sources(dist, vcpu_id, irq) = sources;
>  
>  	/*
>  	 * If the sources bitmap has been cleared it means that we
> @@ -1327,7 +1381,7 @@ epilog:
>  		 * us. Claim we don't have anything pending. We'll
>  		 * adjust that if needed while exiting.
>  		 */
> -		clear_bit(vcpu_id, &dist->irq_pending_on_cpu);
> +		clear_bit(vcpu_id, dist->irq_pending_on_cpu);
>  	}
>  }
>  
> @@ -1429,7 +1483,7 @@ static void __kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
>  	/* Check if we still have something up our sleeve... */
>  	pending = find_first_zero_bit(elrsr_ptr, vgic->nr_lr);
>  	if (level_pending || pending < vgic->nr_lr)
> -		set_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
> +		set_bit(vcpu->vcpu_id, dist->irq_pending_on_cpu);
>  }
>  
>  void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
> @@ -1463,7 +1517,7 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
>  	if (!irqchip_in_kernel(vcpu->kvm))
>  		return 0;
>  
> -	return test_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
> +	return test_bit(vcpu->vcpu_id, dist->irq_pending_on_cpu);
>  }
>  
>  static void vgic_kick_vcpus(struct kvm *kvm)
> @@ -1558,7 +1612,7 @@ static bool vgic_update_irq_pending(struct kvm *kvm, int cpuid,
>  
>  	if (level) {
>  		vgic_cpu_irq_set(vcpu, irq_num);
> -		set_bit(cpuid, &dist->irq_pending_on_cpu);
> +		set_bit(cpuid, dist->irq_pending_on_cpu);
>  	}
>  
>  out:
> @@ -1602,6 +1656,32 @@ static irqreturn_t vgic_maintenance_handler(int irq, void *data)
>  	return IRQ_HANDLED;
>  }
>  
> +void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
> +{
> +	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
> +
> +	kfree(vgic_cpu->pending_shared);
> +	kfree(vgic_cpu->vgic_irq_lr_map);
> +	vgic_cpu->pending_shared = NULL;
> +	vgic_cpu->vgic_irq_lr_map = NULL;
> +}
> +
> +static int vgic_vcpu_init_maps(struct kvm_vcpu *vcpu, int nr_irqs)
> +{
> +	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
> +
> +	int sz = (nr_irqs - VGIC_NR_PRIVATE_IRQS) / 8;

[copying question from last review round, apologies if I'm being stupid]

are we guaranteed that the numerator is always a multiple of 8? if not,
won't you end up allocating one less byte than needed?
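
(To make the concern concrete with a hypothetical value: nr_irqs = 100
would give (100 - 32) / 8 = 8, i.e. 64 bits of storage for 68 shared
IRQs, one byte short of the 9 actually needed.)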

> +	vgic_cpu->pending_shared = kzalloc(sz, GFP_KERNEL);
> +	vgic_cpu->vgic_irq_lr_map = kzalloc(nr_irqs, GFP_KERNEL);
> +
> +	if (!vgic_cpu->pending_shared || !vgic_cpu->vgic_irq_lr_map) {
> +		kvm_vgic_vcpu_destroy(vcpu);
> +		return -ENOMEM;
> +	}
> +
> +	return 0;
> +}
> +
[...]

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 6/8] arm/arm64: KVM: vgic: kill VGIC_NR_IRQS
  2014-09-11 11:09   ` Marc Zyngier
@ 2014-09-11 22:37     ` Christoffer Dall
  -1 siblings, 0 replies; 34+ messages in thread
From: Christoffer Dall @ 2014-09-11 22:37 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, linux-arm-kernel, kvm, Andre Przywara

On Thu, Sep 11, 2014 at 12:09:13PM +0100, Marc Zyngier wrote:
> Nuke VGIC_NR_IRQS entirely, now that the distributor instance
> contains the number of IRQ allocated to this GIC.
> 
> Also add VGIC_NR_IRQS_LEGACY to preserve the current API.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Did anything dramatically change in this patch since last time around?

If not, I'll re-affirm my tag:

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 8/8] arm/arm64: KVM: vgic: make number of irqs a configurable attribute
  2014-09-11 11:09   ` Marc Zyngier
@ 2014-09-11 22:38     ` Christoffer Dall
  -1 siblings, 0 replies; 34+ messages in thread
From: Christoffer Dall @ 2014-09-11 22:38 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, linux-arm-kernel, kvm, Andre Przywara

On Thu, Sep 11, 2014 at 12:09:15PM +0100, Marc Zyngier wrote:
> In order to make the number of interrupts configurable, use the new
> fancy device management API to add KVM_DEV_ARM_VGIC_GRP_NR_IRQS as
> a VGIC configurable attribute.
> 
> Userspace can now specify the exact size of the GIC (by increments
> of 32 interrupts).
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  Documentation/virtual/kvm/devices/arm-vgic.txt | 10 +++++++
>  arch/arm/include/uapi/asm/kvm.h                |  1 +
>  arch/arm64/include/uapi/asm/kvm.h              |  1 +
>  virt/kvm/arm/vgic.c                            | 37 ++++++++++++++++++++++++++
>  4 files changed, 49 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/devices/arm-vgic.txt b/Documentation/virtual/kvm/devices/arm-vgic.txt
> index 7f4e91b..df8b0c7 100644
> --- a/Documentation/virtual/kvm/devices/arm-vgic.txt
> +++ b/Documentation/virtual/kvm/devices/arm-vgic.txt
> @@ -71,3 +71,13 @@ Groups:
>    Errors:
>      -ENODEV: Getting or setting this register is not yet supported
>      -EBUSY: One or more VCPUs are running
> +
> +  KVM_DEV_ARM_VGIC_GRP_NR_IRQS
> +  Attributes:
> +    A value describing the number of interrupts (SGI, PPI and SPI) for
> +    this GIC instance, ranging from 64 to 1024, in increments of 32.
> +
> +  Errors:
> +    -EINVAL: Value set is out of the expected range
> +    -EBUSY: Value has already been set, or GIC has already been initialized
> +            with default values.
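
For completeness, a minimal (untested) userspace sketch of driving this
attribute could look like the following, assuming the vgic device fd came
from KVM_CREATE_DEVICE, that <sys/ioctl.h> and <linux/kvm.h> are included,
and with vgic_set_nr_irqs being a made-up helper name:

	static int vgic_set_nr_irqs(int vgic_fd, __u32 nr_irqs)
	{
		struct kvm_device_attr attr = {
			.group	= KVM_DEV_ARM_VGIC_GRP_NR_IRQS,
			.attr	= 0,
			/* kernel reads a u32 through this pointer */
			.addr	= (__u64)(unsigned long)&nr_irqs,
		};

		/* must be done before the vgic is initialized */
		return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
	}

which lines up with the get_user() on attr->addr in the vgic_set_attr()
hunk further down.
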
> diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
> index e6ebdd3..8b51c1a 100644
> --- a/arch/arm/include/uapi/asm/kvm.h
> +++ b/arch/arm/include/uapi/asm/kvm.h
> @@ -173,6 +173,7 @@ struct kvm_arch_memory_slot {
>  #define   KVM_DEV_ARM_VGIC_CPUID_MASK	(0xffULL << KVM_DEV_ARM_VGIC_CPUID_SHIFT)
>  #define   KVM_DEV_ARM_VGIC_OFFSET_SHIFT	0
>  #define   KVM_DEV_ARM_VGIC_OFFSET_MASK	(0xffffffffULL << KVM_DEV_ARM_VGIC_OFFSET_SHIFT)
> +#define KVM_DEV_ARM_VGIC_GRP_NR_IRQS	3
>  
>  /* KVM_IRQ_LINE irq field index values */
>  #define KVM_ARM_IRQ_TYPE_SHIFT		24
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index e633ff8..b5cd6ed 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -159,6 +159,7 @@ struct kvm_arch_memory_slot {
>  #define   KVM_DEV_ARM_VGIC_CPUID_MASK	(0xffULL << KVM_DEV_ARM_VGIC_CPUID_SHIFT)
>  #define   KVM_DEV_ARM_VGIC_OFFSET_SHIFT	0
>  #define   KVM_DEV_ARM_VGIC_OFFSET_MASK	(0xffffffffULL << KVM_DEV_ARM_VGIC_OFFSET_SHIFT)
> +#define KVM_DEV_ARM_VGIC_GRP_NR_IRQS	3
>  
>  /* KVM_IRQ_LINE irq field index values */
>  #define KVM_ARM_IRQ_TYPE_SHIFT		24
> diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
> index 9180823..744388d 100644
> --- a/virt/kvm/arm/vgic.c
> +++ b/virt/kvm/arm/vgic.c
> @@ -2331,6 +2331,36 @@ static int vgic_set_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
>  
>  		return vgic_attr_regs_access(dev, attr, &reg, true);
>  	}
> +	case KVM_DEV_ARM_VGIC_GRP_NR_IRQS: {
> +		u32 __user *uaddr = (u32 __user *)(long)attr->addr;
> +		u32 val;
> +		int ret = 0;
> +
> +		if (get_user(val, uaddr))
> +			return -EFAULT;
> +
> +		/*
> +		 * We require:
> +		 * - at least 32 SPIs on top of the 16 SGIs and 16 PPIs
> +		 * - at most 1024 interrupts
> +		 * - a multiple of 32 interrupts
> +		 */
> +		if (val < (VGIC_NR_PRIVATE_IRQS + 32) ||
> +		    val > VGIC_MAX_IRQS ||
> +		    (val & 31))
> +			return -EINVAL;
> +
> +		mutex_lock(&dev->kvm->lock);
> +
> +		if (vgic_initialized(dev->kvm) || dev->kvm->arch.vgic.nr_irqs)
> +			ret = -EBUSY;
> +		else
> +			dev->kvm->arch.vgic.nr_irqs = val;
> +
> +		mutex_unlock(&dev->kvm->lock);
> +
> +		return ret;
> +	}
>  
>  	}
>  
> @@ -2367,6 +2397,11 @@ static int vgic_get_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
>  		r = put_user(reg, uaddr);
>  		break;
>  	}
> +	case KVM_DEV_ARM_VGIC_GRP_NR_IRQS: {
> +		u32 __user *uaddr = (u32 __user *)(long)attr->addr;
> +		r = put_user(dev->kvm->arch.vgic.nr_irqs, uaddr);
> +		break;
> +	}
>  
>  	}
>  
> @@ -2403,6 +2438,8 @@ static int vgic_has_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
>  	case KVM_DEV_ARM_VGIC_GRP_CPU_REGS:
>  		offset = attr->attr & KVM_DEV_ARM_VGIC_OFFSET_MASK;
>  		return vgic_has_attr_regs(vgic_cpu_ranges, offset);
> +	case KVM_DEV_ARM_VGIC_GRP_NR_IRQS:
> +		return 0;
>  	}
>  	return -ENXIO;
>  }
> -- 
> 2.0.4
> 

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 2/8] arm/arm64: KVM: vgic: switch to dynamic allocation
  2014-09-11 22:36     ` Christoffer Dall
@ 2014-09-12  9:13       ` Marc Zyngier
  -1 siblings, 0 replies; 34+ messages in thread
From: Marc Zyngier @ 2014-09-12  9:13 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: kvmarm, linux-arm-kernel, kvm, Andre Przywara

On 11/09/14 23:36, Christoffer Dall wrote:
> On Thu, Sep 11, 2014 at 12:09:09PM +0100, Marc Zyngier wrote:
>> So far, all the VGIC data structures are statically defined by the
>> *maximum* number of vcpus and interrupts it supports. It means that
>> we always have to oversize it to cater for the worst case.
>>
>> Start by changing the data structures to be dynamically sizeable,
>> and allocate them at runtime.
>>
>> The sizes are still very static though.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>>  arch/arm/kvm/arm.c     |   3 +
>>  include/kvm/arm_vgic.h |  76 ++++++++++++----
>>  virt/kvm/arm/vgic.c    | 237 ++++++++++++++++++++++++++++++++++++++++++-------
>>  3 files changed, 267 insertions(+), 49 deletions(-)
>>
>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>> index a99e0cd..923a01d 100644
>> --- a/arch/arm/kvm/arm.c
>> +++ b/arch/arm/kvm/arm.c
>> @@ -172,6 +172,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
>>                       kvm->vcpus[i] = NULL;
>>               }
>>       }
>> +
>> +     kvm_vgic_destroy(kvm);
>>  }
>>
>>  int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>> @@ -253,6 +255,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
>>  {
>>       kvm_mmu_free_memory_caches(vcpu);
>>       kvm_timer_vcpu_terminate(vcpu);
>> +     kvm_vgic_vcpu_destroy(vcpu);
>>       kmem_cache_free(kvm_vcpu_cache, vcpu);
>>  }
>>
>> diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
>> index f074539..bdaac57 100644
>> --- a/include/kvm/arm_vgic.h
>> +++ b/include/kvm/arm_vgic.h
>> @@ -54,19 +54,33 @@
>>   * - a bunch of shared interrupts (SPI)
>>   */
>>  struct vgic_bitmap {
>> -     union {
>> -             u32 reg[VGIC_NR_PRIVATE_IRQS / 32];
>> -             DECLARE_BITMAP(reg_ul, VGIC_NR_PRIVATE_IRQS);
>> -     } percpu[VGIC_MAX_CPUS];
>> -     union {
>> -             u32 reg[VGIC_NR_SHARED_IRQS / 32];
>> -             DECLARE_BITMAP(reg_ul, VGIC_NR_SHARED_IRQS);
>> -     } shared;
>> +     /*
>> +      * - One UL per VCPU for private interrupts (assumes UL is at
>> +      *   least 32 bits)
>> +      * - As many UL as necessary for shared interrupts.
>> +      *
>> +      * The private interrupts are accessed via the "private"
>> +      * field, one UL per vcpu (the state for vcpu n is in
>> +      * private[n]). The shared interrupts are accessed via the
>> +      * "shared" pointer (IRQn state is at bit n-32 in the bitmap).
>> +      */
>> +     unsigned long *private;
>> +     unsigned long *shared;
> 
> the comment above the define for REG_OFFSET_SWIZZLE still talks about
> the unions in struct vgic_bitmap, which is no longer true.  Mind
> updating that comment?

Damned, thought I fixed that. Will update it.
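
(For anyone following along, a concrete reading of the new layout, using
hypothetical values of nr_cpus = 4 and nr_irqs = 128 on a 64-bit host:
vgic_init_bitmap() allocates 4 + BITS_TO_LONGS(96) = 6 longs, private[n]
holds vcpu n's 32 private IRQs in its low 32 bits, and shared, which
points at private + 4, covers IRQs 32-127.)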

>>  };
>>
>>  struct vgic_bytemap {
>> -     u32 percpu[VGIC_MAX_CPUS][VGIC_NR_PRIVATE_IRQS / 4];
>> -     u32 shared[VGIC_NR_SHARED_IRQS  / 4];
>> +     /*
>> +      * - 8 u32 per VCPU for private interrupts
>> +      * - As many u32 as necessary for shared interrupts.
>> +      *
>> +      * The private interrupts are accessed via the "private"
>> +      * field, (the state for vcpu n is in private[n*8] to
>> +      * private[n*8 + 7]). The shared interrupts are accessed via
>> +      * the "shared" pointer (IRQn state is at byte (n-32)%4 of the
>> +      * shared[(n-32)/4] word).
>> +      */
>> +     u32 *private;
>> +     u32 *shared;
>>  };
>>
>>  struct kvm_vcpu;
>> @@ -127,6 +141,9 @@ struct vgic_dist {
>>       bool                    in_kernel;
>>       bool                    ready;
>>
>> +     int                     nr_cpus;
>> +     int                     nr_irqs;
>> +
>>       /* Virtual control interface mapping */
>>       void __iomem            *vctrl_base;
>>
>> @@ -166,15 +183,36 @@ struct vgic_dist {
>>       /* Level/edge triggered */
>>       struct vgic_bitmap      irq_cfg;
>>
>> -     /* Source CPU per SGI and target CPU */
>> -     u8                      irq_sgi_sources[VGIC_MAX_CPUS][VGIC_NR_SGIS];
>> +     /*
>> +      * Source CPU per SGI and target CPU:
>> +      *
>> +      * Each byte represent a SGI observable on a VCPU, each bit of
>> +      * this byte indicating if the corresponding VCPU has
>> +      * generated this interrupt. This is a GICv2 feature only.
>> +      *
>> +      * For VCPUn (n < 8), irq_sgi_sources[n*16] to [n*16 + 15] are
>> +      * the SGIs observable on VCPUn.
>> +      */
>> +     u8                      *irq_sgi_sources;
>>
>> -     /* Target CPU for each IRQ */
>> -     u8                      irq_spi_cpu[VGIC_NR_SHARED_IRQS];
>> -     struct vgic_bitmap      irq_spi_target[VGIC_MAX_CPUS];
>> +     /*
>> +      * Target CPU for each SPI:
>> +      *
>> +      * Array of available SPI, each byte indicating the target
>> +      * VCPU for SPI. IRQn (n >=32) is at irq_spi_cpu[n-32].
>> +      */
>> +     u8                      *irq_spi_cpu;
>> +
>> +     /*
>> +      * Reverse lookup of irq_spi_cpu for faster compute pending:
>> +      *
>> +      * Array of bitmaps, one per VCPU, describing is IRQn is
> 
> ah, describing *if* ?

Indeed. Will fix.

>> +      * routed to a particular VCPU.
>> +      */
>> +     struct vgic_bitmap      *irq_spi_target;
>>
>>       /* Bitmap indicating which CPU has something pending */
>> -     unsigned long           irq_pending_on_cpu;
>> +     unsigned long           *irq_pending_on_cpu;
>>  #endif
>>  };
>>
>> @@ -204,11 +242,11 @@ struct vgic_v3_cpu_if {
>>  struct vgic_cpu {
>>  #ifdef CONFIG_KVM_ARM_VGIC
>>       /* per IRQ to LR mapping */
>> -     u8              vgic_irq_lr_map[VGIC_NR_IRQS];
>> +     u8              *vgic_irq_lr_map;
>>
>>       /* Pending interrupts on this VCPU */
>>       DECLARE_BITMAP( pending_percpu, VGIC_NR_PRIVATE_IRQS);
>> -     DECLARE_BITMAP( pending_shared, VGIC_NR_SHARED_IRQS);
>> +     unsigned long   *pending_shared;
>>
>>       /* Bitmap of used/free list registers */
>>       DECLARE_BITMAP( lr_used, VGIC_V2_MAX_LRS);
>> @@ -239,7 +277,9 @@ int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write);
>>  int kvm_vgic_hyp_init(void);
>>  int kvm_vgic_init(struct kvm *kvm);
>>  int kvm_vgic_create(struct kvm *kvm);
>> +void kvm_vgic_destroy(struct kvm *kvm);
>>  int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu);
>> +void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu);
>>  void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu);
>>  void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu);
>>  int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
>> diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
>> index d3299d4..92c086e 100644
>> --- a/virt/kvm/arm/vgic.c
>> +++ b/virt/kvm/arm/vgic.c
>> @@ -95,6 +95,7 @@ static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu);
>>  static void vgic_retire_lr(int lr_nr, int irq, struct kvm_vcpu *vcpu);
>>  static void vgic_update_state(struct kvm *kvm);
>>  static void vgic_kick_vcpus(struct kvm *kvm);
>> +static u8 *vgic_get_sgi_sources(struct vgic_dist *dist, int vcpu_id, int sgi);
>>  static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg);
>>  static struct vgic_lr vgic_get_lr(const struct kvm_vcpu *vcpu, int lr);
>>  static void vgic_set_lr(struct kvm_vcpu *vcpu, int lr, struct vgic_lr lr_desc);
>> @@ -124,23 +125,45 @@ static const struct vgic_params *vgic;
>>  #define REG_OFFSET_SWIZZLE   0
>>  #endif
>>
>> +static int vgic_init_bitmap(struct vgic_bitmap *b, int nr_cpus, int nr_irqs)
>> +{
>> +     int nr_longs;
>> +
>> +     nr_longs = nr_cpus + BITS_TO_LONGS(nr_irqs - VGIC_NR_PRIVATE_IRQS);
>> +
>> +     b->private = kzalloc(sizeof(unsigned long) * nr_longs, GFP_KERNEL);
>> +     if (!b->private)
>> +             return -ENOMEM;
>> +
>> +     b->shared = b->private + nr_cpus;
>> +
>> +     return 0;
>> +}
>> +
>> +static void vgic_free_bitmap(struct vgic_bitmap *b)
>> +{
>> +     kfree(b->private);
>> +     b->private = NULL;
>> +     b->shared = NULL;
>> +}
>> +
>>  static u32 *vgic_bitmap_get_reg(struct vgic_bitmap *x,
>>                               int cpuid, u32 offset)
>>  {
>>       offset >>= 2;
>>       if (!offset)
>> -             return x->percpu[cpuid].reg + (offset ^ REG_OFFSET_SWIZZLE);
>> +             return (u32 *)(x->private + cpuid) + REG_OFFSET_SWIZZLE;
>>       else
>> -             return x->shared.reg + ((offset - 1) ^ REG_OFFSET_SWIZZLE);
>> +             return (u32 *)(x->shared) + ((offset - 1) ^ REG_OFFSET_SWIZZLE);
>>  }
>>
>>  static int vgic_bitmap_get_irq_val(struct vgic_bitmap *x,
>>                                  int cpuid, int irq)
>>  {
>>       if (irq < VGIC_NR_PRIVATE_IRQS)
>> -             return test_bit(irq, x->percpu[cpuid].reg_ul);
>> +             return test_bit(irq, x->private + cpuid);
>>
>> -     return test_bit(irq - VGIC_NR_PRIVATE_IRQS, x->shared.reg_ul);
>> +     return test_bit(irq - VGIC_NR_PRIVATE_IRQS, x->shared);
>>  }
>>
>>  static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
>> @@ -149,9 +172,9 @@ static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
>>       unsigned long *reg;
>>
>>       if (irq < VGIC_NR_PRIVATE_IRQS) {
>> -             reg = x->percpu[cpuid].reg_ul;
>> +             reg = x->private + cpuid;
>>       } else {
>> -             reg =  x->shared.reg_ul;
>> +             reg = x->shared;
>>               irq -= VGIC_NR_PRIVATE_IRQS;
>>       }
>>
>> @@ -163,24 +186,49 @@ static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
>>
>>  static unsigned long *vgic_bitmap_get_cpu_map(struct vgic_bitmap *x, int cpuid)
>>  {
>> -     if (unlikely(cpuid >= VGIC_MAX_CPUS))
>> -             return NULL;
>> -     return x->percpu[cpuid].reg_ul;
>> +     return x->private + cpuid;
>>  }
>>
>>  static unsigned long *vgic_bitmap_get_shared_map(struct vgic_bitmap *x)
>>  {
>> -     return x->shared.reg_ul;
>> +     return x->shared;
>> +}
>> +
>> +static int vgic_init_bytemap(struct vgic_bytemap *x, int nr_cpus, int nr_irqs)
>> +{
>> +     int size;
>> +
>> +     size  = nr_cpus * VGIC_NR_PRIVATE_IRQS;
>> +     size += nr_irqs - VGIC_NR_PRIVATE_IRQS;
>> +
>> +     x->private = kzalloc(size, GFP_KERNEL);
>> +     if (!x->private)
>> +             return -ENOMEM;
>> +
>> +     x->shared = x->private + nr_cpus * VGIC_NR_PRIVATE_IRQS / sizeof(u32);
>> +     return 0;
>> +}
>> +
>> +static void vgic_free_bytemap(struct vgic_bytemap *b)
>> +{
>> +     kfree(b->private);
>> +     b->private = NULL;
>> +     b->shared = NULL;
>>  }
>>
>>  static u32 *vgic_bytemap_get_reg(struct vgic_bytemap *x, int cpuid, u32 offset)
>>  {
>> -     offset >>= 2;
>> -     BUG_ON(offset > (VGIC_NR_IRQS / 4));
>> -     if (offset < 8)
>> -             return x->percpu[cpuid] + offset;
>> -     else
>> -             return x->shared + offset - 8;
>> +     u32 *reg;
>> +
>> +     if (offset < VGIC_NR_PRIVATE_IRQS) {
>> +             reg = x->private;
>> +             offset += cpuid * VGIC_NR_PRIVATE_IRQS;
>> +     } else {
>> +             reg = x->shared;
>> +             offset -= VGIC_NR_PRIVATE_IRQS;
>> +     }
>> +
>> +     return reg + (offset / sizeof(u32));
>>  }
>>
>>  #define VGIC_CFG_LEVEL       0
>> @@ -739,7 +787,7 @@ static void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
>>                */
>>               vgic_dist_irq_set_pending(vcpu, lr.irq);
>>               if (lr.irq < VGIC_NR_SGIS)
>> -                     dist->irq_sgi_sources[vcpu_id][lr.irq] |= 1 << lr.source;
>> +                     *vgic_get_sgi_sources(dist, vcpu_id, lr.irq) |= 1 << lr.source;
>>               lr.state &= ~LR_STATE_PENDING;
>>               vgic_set_lr(vcpu, i, lr);
>>
>> @@ -773,7 +821,7 @@ static bool read_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
>>       /* Copy source SGIs from distributor side */
>>       for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
>>               int shift = 8 * (sgi - min_sgi);
>> -             reg |= (u32)dist->irq_sgi_sources[vcpu_id][sgi] << shift;
>> +             reg |= ((u32)*vgic_get_sgi_sources(dist, vcpu_id, sgi)) << shift;
>>       }
>>
>>       mmio_data_write(mmio, ~0, reg);
>> @@ -797,14 +845,15 @@ static bool write_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
>>       /* Clear pending SGIs on the distributor */
>>       for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
>>               u8 mask = reg >> (8 * (sgi - min_sgi));
>> +             u8 *src = vgic_get_sgi_sources(dist, vcpu_id, sgi);
>>               if (set) {
>> -                     if ((dist->irq_sgi_sources[vcpu_id][sgi] & mask) != mask)
>> +                     if ((*src & mask) != mask)
>>                               updated = true;
>> -                     dist->irq_sgi_sources[vcpu_id][sgi] |= mask;
>> +                     *src |= mask;
>>               } else {
>> -                     if (dist->irq_sgi_sources[vcpu_id][sgi] & mask)
>> +                     if (*src & mask)
>>                               updated = true;
>> -                     dist->irq_sgi_sources[vcpu_id][sgi] &= ~mask;
>> +                     *src &= ~mask;
>>               }
>>       }
>>
>> @@ -988,6 +1037,11 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>       return true;
>>  }
>>
>> +static u8 *vgic_get_sgi_sources(struct vgic_dist *dist, int vcpu_id, int sgi)
>> +{
>> +     return dist->irq_sgi_sources + vcpu_id * VGIC_NR_SGIS + sgi;
>> +}
>> +
>>  static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
>>  {
>>       struct kvm *kvm = vcpu->kvm;
>> @@ -1021,7 +1075,7 @@ static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
>>               if (target_cpus & 1) {
>>                       /* Flag the SGI as pending */
>>                       vgic_dist_irq_set_pending(vcpu, sgi);
>> -                     dist->irq_sgi_sources[c][sgi] |= 1 << vcpu_id;
>> +                     *vgic_get_sgi_sources(dist, c, sgi) |= 1 << vcpu_id;
>>                       kvm_debug("SGI%d from CPU%d to CPU%d\n", sgi, vcpu_id, c);
>>               }
>>
>> @@ -1068,14 +1122,14 @@ static void vgic_update_state(struct kvm *kvm)
>>       int c;
>>
>>       if (!dist->enabled) {
>> -             set_bit(0, &dist->irq_pending_on_cpu);
>> +             set_bit(0, dist->irq_pending_on_cpu);
>>               return;
>>       }
>>
>>       kvm_for_each_vcpu(c, vcpu, kvm) {
>>               if (compute_pending_for_cpu(vcpu)) {
>>                       pr_debug("CPU%d has pending interrupts\n", c);
>> -                     set_bit(c, &dist->irq_pending_on_cpu);
>> +                     set_bit(c, dist->irq_pending_on_cpu);
>>               }
>>       }
>>  }
>> @@ -1232,14 +1286,14 @@ static bool vgic_queue_sgi(struct kvm_vcpu *vcpu, int irq)
>>       int vcpu_id = vcpu->vcpu_id;
>>       int c;
>>
>> -     sources = dist->irq_sgi_sources[vcpu_id][irq];
>> +     sources = *vgic_get_sgi_sources(dist, vcpu_id, irq);
>>
>>       for_each_set_bit(c, &sources, VGIC_MAX_CPUS) {
>>               if (vgic_queue_irq(vcpu, c, irq))
>>                       clear_bit(c, &sources);
>>       }
>>
>> -     dist->irq_sgi_sources[vcpu_id][irq] = sources;
>> +     *vgic_get_sgi_sources(dist, vcpu_id, irq) = sources;
>>
>>       /*
>>        * If the sources bitmap has been cleared it means that we
>> @@ -1327,7 +1381,7 @@ epilog:
>>                * us. Claim we don't have anything pending. We'll
>>                * adjust that if needed while exiting.
>>                */
>> -             clear_bit(vcpu_id, &dist->irq_pending_on_cpu);
>> +             clear_bit(vcpu_id, dist->irq_pending_on_cpu);
>>       }
>>  }
>>
>> @@ -1429,7 +1483,7 @@ static void __kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
>>       /* Check if we still have something up our sleeve... */
>>       pending = find_first_zero_bit(elrsr_ptr, vgic->nr_lr);
>>       if (level_pending || pending < vgic->nr_lr)
>> -             set_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
>> +             set_bit(vcpu->vcpu_id, dist->irq_pending_on_cpu);
>>  }
>>
>>  void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
>> @@ -1463,7 +1517,7 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
>>       if (!irqchip_in_kernel(vcpu->kvm))
>>               return 0;
>>
>> -     return test_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
>> +     return test_bit(vcpu->vcpu_id, dist->irq_pending_on_cpu);
>>  }
>>
>>  static void vgic_kick_vcpus(struct kvm *kvm)
>> @@ -1558,7 +1612,7 @@ static bool vgic_update_irq_pending(struct kvm *kvm, int cpuid,
>>
>>       if (level) {
>>               vgic_cpu_irq_set(vcpu, irq_num);
>> -             set_bit(cpuid, &dist->irq_pending_on_cpu);
>> +             set_bit(cpuid, dist->irq_pending_on_cpu);
>>       }
>>
>>  out:
>> @@ -1602,6 +1656,32 @@ static irqreturn_t vgic_maintenance_handler(int irq, void *data)
>>       return IRQ_HANDLED;
>>  }
>>
>> +void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
>> +{
>> +     struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
>> +
>> +     kfree(vgic_cpu->pending_shared);
>> +     kfree(vgic_cpu->vgic_irq_lr_map);
>> +     vgic_cpu->pending_shared = NULL;
>> +     vgic_cpu->vgic_irq_lr_map = NULL;
>> +}
>> +
>> +static int vgic_vcpu_init_maps(struct kvm_vcpu *vcpu, int nr_irqs)
>> +{
>> +     struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
>> +
>> +     int sz = (nr_irqs - VGIC_NR_PRIVATE_IRQS) / 8;
> 
> [copying question from last review round, apologies if I'm being stupid]
> 
> are we guaranteed that the numerator is always a multiple of 8? if not,
> won't you end up allocating one less byte than needed?

We force the allocation to be a multiple of 32, as per the GIC spec (we
enforce this in arm_vgic.h for the legacy behaviour, and in the last
patch for the dynamic case).
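
(So with nr_irqs always a multiple of 32, and VGIC_NR_PRIVATE_IRQS being
32, the numerator is a multiple of 32 as well: e.g. a hypothetical
nr_irqs of 128 gives (128 - 32) / 8 = 12 bytes, exactly the 96 bits
needed for the shared IRQs.)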

> 
>> +     vgic_cpu->pending_shared = kzalloc(sz, GFP_KERNEL);
>> +     vgic_cpu->vgic_irq_lr_map = kzalloc(nr_irqs, GFP_KERNEL);
>> +
>> +     if (!vgic_cpu->pending_shared || !vgic_cpu->vgic_irq_lr_map) {
>> +             kvm_vgic_vcpu_destroy(vcpu);
>> +             return -ENOMEM;
>> +     }
>> +
>> +     return 0;
>> +}
>> +
> [...]

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 2/8] arm/arm64: KVM: vgic: switch to dynamic allocation
  2014-09-12  9:13       ` Marc Zyngier
@ 2014-09-12 17:43         ` Christoffer Dall
  -1 siblings, 0 replies; 34+ messages in thread
From: Christoffer Dall @ 2014-09-12 17:43 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, linux-arm-kernel, kvm, Andre Przywara

On Fri, Sep 12, 2014 at 10:13:11AM +0100, Marc Zyngier wrote:
> On 11/09/14 23:36, Christoffer Dall wrote:
> > On Thu, Sep 11, 2014 at 12:09:09PM +0100, Marc Zyngier wrote:
> >> So far, all the VGIC data structures are statically defined by the
> >> *maximum* number of vcpus and interrupts it supports. It means that
> >> we always have to oversize it to cater for the worse case.
> >>
> >> Start by changing the data structures to be dynamically sizeable,
> >> and allocate them at runtime.
> >>
> >> The sizes are still very static though.
> >>
> >> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> >> ---
> >>  arch/arm/kvm/arm.c     |   3 +
> >>  include/kvm/arm_vgic.h |  76 ++++++++++++----
> >>  virt/kvm/arm/vgic.c    | 237 ++++++++++++++++++++++++++++++++++++++++++-------
> >>  3 files changed, 267 insertions(+), 49 deletions(-)
> >>
> >> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> >> index a99e0cd..923a01d 100644
> >> --- a/arch/arm/kvm/arm.c
> >> +++ b/arch/arm/kvm/arm.c
> >> @@ -172,6 +172,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
> >>                       kvm->vcpus[i] = NULL;
> >>               }
> >>       }
> >> +
> >> +     kvm_vgic_destroy(kvm);
> >>  }
> >>
> >>  int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> >> @@ -253,6 +255,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
> >>  {
> >>       kvm_mmu_free_memory_caches(vcpu);
> >>       kvm_timer_vcpu_terminate(vcpu);
> >> +     kvm_vgic_vcpu_destroy(vcpu);
> >>       kmem_cache_free(kvm_vcpu_cache, vcpu);
> >>  }
> >>
> >> diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
> >> index f074539..bdaac57 100644
> >> --- a/include/kvm/arm_vgic.h
> >> +++ b/include/kvm/arm_vgic.h
> >> @@ -54,19 +54,33 @@
> >>   * - a bunch of shared interrupts (SPI)
> >>   */
> >>  struct vgic_bitmap {
> >> -     union {
> >> -             u32 reg[VGIC_NR_PRIVATE_IRQS / 32];
> >> -             DECLARE_BITMAP(reg_ul, VGIC_NR_PRIVATE_IRQS);
> >> -     } percpu[VGIC_MAX_CPUS];
> >> -     union {
> >> -             u32 reg[VGIC_NR_SHARED_IRQS / 32];
> >> -             DECLARE_BITMAP(reg_ul, VGIC_NR_SHARED_IRQS);
> >> -     } shared;
> >> +     /*
> >> +      * - One UL per VCPU for private interrupts (assumes UL is at
> >> +      *   least 32 bits)
> >> +      * - As many UL as necessary for shared interrupts.
> >> +      *
> >> +      * The private interrupts are accessed via the "private"
> >> +      * field, one UL per vcpu (the state for vcpu n is in
> >> +      * private[n]). The shared interrupts are accessed via the
> >> +      * "shared" pointer (IRQn state is at bit n-32 in the bitmap).
> >> +      */
> >> +     unsigned long *private;
> >> +     unsigned long *shared;
> > 
> > the comment above the define for REG_OFFSET_SWIZZLE still talks about
> > the unions in struct vgic_bitmap, which is no longer true.  Mind
> > updating that comment?
> 
> Damned, thought I fixed that. Will update it.
> 
> >>  };
> >>
> >>  struct vgic_bytemap {
> >> -     u32 percpu[VGIC_MAX_CPUS][VGIC_NR_PRIVATE_IRQS / 4];
> >> -     u32 shared[VGIC_NR_SHARED_IRQS  / 4];
> >> +     /*
> >> +      * - 8 u32 per VCPU for private interrupts
> >> +      * - As many u32 as necessary for shared interrupts.
> >> +      *
> >> +      * The private interrupts are accessed via the "private"
> >> +      * field, (the state for vcpu n is in private[n*8] to
> >> +      * private[n*8 + 7]). The shared interrupts are accessed via
> >> +      * the "shared" pointer (IRQn state is at byte (n-32)%4 of the
> >> +      * shared[(n-32)/4] word).
> >> +      */
> >> +     u32 *private;
> >> +     u32 *shared;
> >>  };
> >>
> >>  struct kvm_vcpu;
> >> @@ -127,6 +141,9 @@ struct vgic_dist {
> >>       bool                    in_kernel;
> >>       bool                    ready;
> >>
> >> +     int                     nr_cpus;
> >> +     int                     nr_irqs;
> >> +
> >>       /* Virtual control interface mapping */
> >>       void __iomem            *vctrl_base;
> >>
> >> @@ -166,15 +183,36 @@ struct vgic_dist {
> >>       /* Level/edge triggered */
> >>       struct vgic_bitmap      irq_cfg;
> >>
> >> -     /* Source CPU per SGI and target CPU */
> >> -     u8                      irq_sgi_sources[VGIC_MAX_CPUS][VGIC_NR_SGIS];
> >> +     /*
> >> +      * Source CPU per SGI and target CPU:
> >> +      *
> >> +      * Each byte represent a SGI observable on a VCPU, each bit of
> >> +      * this byte indicating if the corresponding VCPU has
> >> +      * generated this interrupt. This is a GICv2 feature only.
> >> +      *
> >> +      * For VCPUn (n < 8), irq_sgi_sources[n*16] to [n*16 + 15] are
> >> +      * the SGIs observable on VCPUn.
> >> +      */
> >> +     u8                      *irq_sgi_sources;
> >>
> >> -     /* Target CPU for each IRQ */
> >> -     u8                      irq_spi_cpu[VGIC_NR_SHARED_IRQS];
> >> -     struct vgic_bitmap      irq_spi_target[VGIC_MAX_CPUS];
> >> +     /*
> >> +      * Target CPU for each SPI:
> >> +      *
> >> +      * Array of available SPI, each byte indicating the target
> >> +      * VCPU for SPI. IRQn (n >=32) is at irq_spi_cpu[n-32].
> >> +      */
> >> +     u8                      *irq_spi_cpu;
> >> +
> >> +     /*
> >> +      * Reverse lookup of irq_spi_cpu for faster compute pending:
> >> +      *
> >> +      * Array of bitmaps, one per VCPU, describing is IRQn is
> > 
> > ah, describing *if* ?
> 
> Indeed. Will fix.
> 
> >> +      * routed to a particular VCPU.
> >> +      */
> >> +     struct vgic_bitmap      *irq_spi_target;
> >>
> >>       /* Bitmap indicating which CPU has something pending */
> >> -     unsigned long           irq_pending_on_cpu;
> >> +     unsigned long           *irq_pending_on_cpu;
> >>  #endif
> >>  };
> >>
> >> @@ -204,11 +242,11 @@ struct vgic_v3_cpu_if {
> >>  struct vgic_cpu {
> >>  #ifdef CONFIG_KVM_ARM_VGIC
> >>       /* per IRQ to LR mapping */
> >> -     u8              vgic_irq_lr_map[VGIC_NR_IRQS];
> >> +     u8              *vgic_irq_lr_map;
> >>
> >>       /* Pending interrupts on this VCPU */
> >>       DECLARE_BITMAP( pending_percpu, VGIC_NR_PRIVATE_IRQS);
> >> -     DECLARE_BITMAP( pending_shared, VGIC_NR_SHARED_IRQS);
> >> +     unsigned long   *pending_shared;
> >>
> >>       /* Bitmap of used/free list registers */
> >>       DECLARE_BITMAP( lr_used, VGIC_V2_MAX_LRS);
> >> @@ -239,7 +277,9 @@ int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write);
> >>  int kvm_vgic_hyp_init(void);
> >>  int kvm_vgic_init(struct kvm *kvm);
> >>  int kvm_vgic_create(struct kvm *kvm);
> >> +void kvm_vgic_destroy(struct kvm *kvm);
> >>  int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu);
> >> +void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu);
> >>  void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu);
> >>  void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu);
> >>  int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
> >> diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
> >> index d3299d4..92c086e 100644
> >> --- a/virt/kvm/arm/vgic.c
> >> +++ b/virt/kvm/arm/vgic.c
> >> @@ -95,6 +95,7 @@ static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu);
> >>  static void vgic_retire_lr(int lr_nr, int irq, struct kvm_vcpu *vcpu);
> >>  static void vgic_update_state(struct kvm *kvm);
> >>  static void vgic_kick_vcpus(struct kvm *kvm);
> >> +static u8 *vgic_get_sgi_sources(struct vgic_dist *dist, int vcpu_id, int sgi);
> >>  static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg);
> >>  static struct vgic_lr vgic_get_lr(const struct kvm_vcpu *vcpu, int lr);
> >>  static void vgic_set_lr(struct kvm_vcpu *vcpu, int lr, struct vgic_lr lr_desc);
> >> @@ -124,23 +125,45 @@ static const struct vgic_params *vgic;
> >>  #define REG_OFFSET_SWIZZLE   0
> >>  #endif
> >>
> >> +static int vgic_init_bitmap(struct vgic_bitmap *b, int nr_cpus, int nr_irqs)
> >> +{
> >> +     int nr_longs;
> >> +
> >> +     nr_longs = nr_cpus + BITS_TO_LONGS(nr_irqs - VGIC_NR_PRIVATE_IRQS);
> >> +
> >> +     b->private = kzalloc(sizeof(unsigned long) * nr_longs, GFP_KERNEL);
> >> +     if (!b->private)
> >> +             return -ENOMEM;
> >> +
> >> +     b->shared = b->private + nr_cpus;
> >> +
> >> +     return 0;
> >> +}
> >> +
> >> +static void vgic_free_bitmap(struct vgic_bitmap *b)
> >> +{
> >> +     kfree(b->private);
> >> +     b->private = NULL;
> >> +     b->shared = NULL;
> >> +}
> >> +
> >>  static u32 *vgic_bitmap_get_reg(struct vgic_bitmap *x,
> >>                               int cpuid, u32 offset)
> >>  {
> >>       offset >>= 2;
> >>       if (!offset)
> >> -             return x->percpu[cpuid].reg + (offset ^ REG_OFFSET_SWIZZLE);
> >> +             return (u32 *)(x->private + cpuid) + REG_OFFSET_SWIZZLE;
> >>       else
> >> -             return x->shared.reg + ((offset - 1) ^ REG_OFFSET_SWIZZLE);
> >> +             return (u32 *)(x->shared) + ((offset - 1) ^ REG_OFFSET_SWIZZLE);
> >>  }
> >>
> >>  static int vgic_bitmap_get_irq_val(struct vgic_bitmap *x,
> >>                                  int cpuid, int irq)
> >>  {
> >>       if (irq < VGIC_NR_PRIVATE_IRQS)
> >> -             return test_bit(irq, x->percpu[cpuid].reg_ul);
> >> +             return test_bit(irq, x->private + cpuid);
> >>
> >> -     return test_bit(irq - VGIC_NR_PRIVATE_IRQS, x->shared.reg_ul);
> >> +     return test_bit(irq - VGIC_NR_PRIVATE_IRQS, x->shared);
> >>  }
> >>
> >>  static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
> >> @@ -149,9 +172,9 @@ static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
> >>       unsigned long *reg;
> >>
> >>       if (irq < VGIC_NR_PRIVATE_IRQS) {
> >> -             reg = x->percpu[cpuid].reg_ul;
> >> +             reg = x->private + cpuid;
> >>       } else {
> >> -             reg =  x->shared.reg_ul;
> >> +             reg = x->shared;
> >>               irq -= VGIC_NR_PRIVATE_IRQS;
> >>       }
> >>
> >> @@ -163,24 +186,49 @@ static void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
> >>
> >>  static unsigned long *vgic_bitmap_get_cpu_map(struct vgic_bitmap *x, int cpuid)
> >>  {
> >> -     if (unlikely(cpuid >= VGIC_MAX_CPUS))
> >> -             return NULL;
> >> -     return x->percpu[cpuid].reg_ul;
> >> +     return x->private + cpuid;
> >>  }
> >>
> >>  static unsigned long *vgic_bitmap_get_shared_map(struct vgic_bitmap *x)
> >>  {
> >> -     return x->shared.reg_ul;
> >> +     return x->shared;
> >> +}
> >> +
> >> +static int vgic_init_bytemap(struct vgic_bytemap *x, int nr_cpus, int nr_irqs)
> >> +{
> >> +     int size;
> >> +
> >> +     size  = nr_cpus * VGIC_NR_PRIVATE_IRQS;
> >> +     size += nr_irqs - VGIC_NR_PRIVATE_IRQS;
> >> +
> >> +     x->private = kzalloc(size, GFP_KERNEL);
> >> +     if (!x->private)
> >> +             return -ENOMEM;
> >> +
> >> +     x->shared = x->private + nr_cpus * VGIC_NR_PRIVATE_IRQS / sizeof(u32);
> >> +     return 0;
> >> +}
> >> +
> >> +static void vgic_free_bytemap(struct vgic_bytemap *b)
> >> +{
> >> +     kfree(b->private);
> >> +     b->private = NULL;
> >> +     b->shared = NULL;
> >>  }
> >>
> >>  static u32 *vgic_bytemap_get_reg(struct vgic_bytemap *x, int cpuid, u32 offset)
> >>  {
> >> -     offset >>= 2;
> >> -     BUG_ON(offset > (VGIC_NR_IRQS / 4));
> >> -     if (offset < 8)
> >> -             return x->percpu[cpuid] + offset;
> >> -     else
> >> -             return x->shared + offset - 8;
> >> +     u32 *reg;
> >> +
> >> +     if (offset < VGIC_NR_PRIVATE_IRQS) {
> >> +             reg = x->private;
> >> +             offset += cpuid * VGIC_NR_PRIVATE_IRQS;
> >> +     } else {
> >> +             reg = x->shared;
> >> +             offset -= VGIC_NR_PRIVATE_IRQS;
> >> +     }
> >> +
> >> +     return reg + (offset / sizeof(u32));
> >>  }
> >>
> >>  #define VGIC_CFG_LEVEL       0
> >> @@ -739,7 +787,7 @@ static void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
> >>                */
> >>               vgic_dist_irq_set_pending(vcpu, lr.irq);
> >>               if (lr.irq < VGIC_NR_SGIS)
> >> -                     dist->irq_sgi_sources[vcpu_id][lr.irq] |= 1 << lr.source;
> >> +                     *vgic_get_sgi_sources(dist, vcpu_id, lr.irq) |= 1 << lr.source;
> >>               lr.state &= ~LR_STATE_PENDING;
> >>               vgic_set_lr(vcpu, i, lr);
> >>
> >> @@ -773,7 +821,7 @@ static bool read_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
> >>       /* Copy source SGIs from distributor side */
> >>       for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
> >>               int shift = 8 * (sgi - min_sgi);
> >> -             reg |= (u32)dist->irq_sgi_sources[vcpu_id][sgi] << shift;
> >> +             reg |= ((u32)*vgic_get_sgi_sources(dist, vcpu_id, sgi)) << shift;
> >>       }
> >>
> >>       mmio_data_write(mmio, ~0, reg);
> >> @@ -797,14 +845,15 @@ static bool write_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
> >>       /* Clear pending SGIs on the distributor */
> >>       for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
> >>               u8 mask = reg >> (8 * (sgi - min_sgi));
> >> +             u8 *src = vgic_get_sgi_sources(dist, vcpu_id, sgi);
> >>               if (set) {
> >> -                     if ((dist->irq_sgi_sources[vcpu_id][sgi] & mask) != mask)
> >> +                     if ((*src & mask) != mask)
> >>                               updated = true;
> >> -                     dist->irq_sgi_sources[vcpu_id][sgi] |= mask;
> >> +                     *src |= mask;
> >>               } else {
> >> -                     if (dist->irq_sgi_sources[vcpu_id][sgi] & mask)
> >> +                     if (*src & mask)
> >>                               updated = true;
> >> -                     dist->irq_sgi_sources[vcpu_id][sgi] &= ~mask;
> >> +                     *src &= ~mask;
> >>               }
> >>       }
> >>
> >> @@ -988,6 +1037,11 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
> >>       return true;
> >>  }
> >>
> >> +static u8 *vgic_get_sgi_sources(struct vgic_dist *dist, int vcpu_id, int sgi)
> >> +{
> >> +     return dist->irq_sgi_sources + vcpu_id * VGIC_NR_SGIS + sgi;
> >> +}
> >> +
> >>  static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
> >>  {
> >>       struct kvm *kvm = vcpu->kvm;
> >> @@ -1021,7 +1075,7 @@ static void vgic_dispatch_sgi(struct kvm_vcpu *vcpu, u32 reg)
> >>               if (target_cpus & 1) {
> >>                       /* Flag the SGI as pending */
> >>                       vgic_dist_irq_set_pending(vcpu, sgi);
> >> -                     dist->irq_sgi_sources[c][sgi] |= 1 << vcpu_id;
> >> +                     *vgic_get_sgi_sources(dist, c, sgi) |= 1 << vcpu_id;
> >>                       kvm_debug("SGI%d from CPU%d to CPU%d\n", sgi, vcpu_id, c);
> >>               }
> >>
> >> @@ -1068,14 +1122,14 @@ static void vgic_update_state(struct kvm *kvm)
> >>       int c;
> >>
> >>       if (!dist->enabled) {
> >> -             set_bit(0, &dist->irq_pending_on_cpu);
> >> +             set_bit(0, dist->irq_pending_on_cpu);
> >>               return;
> >>       }
> >>
> >>       kvm_for_each_vcpu(c, vcpu, kvm) {
> >>               if (compute_pending_for_cpu(vcpu)) {
> >>                       pr_debug("CPU%d has pending interrupts\n", c);
> >> -                     set_bit(c, &dist->irq_pending_on_cpu);
> >> +                     set_bit(c, dist->irq_pending_on_cpu);
> >>               }
> >>       }
> >>  }
> >> @@ -1232,14 +1286,14 @@ static bool vgic_queue_sgi(struct kvm_vcpu *vcpu, int irq)
> >>       int vcpu_id = vcpu->vcpu_id;
> >>       int c;
> >>
> >> -     sources = dist->irq_sgi_sources[vcpu_id][irq];
> >> +     sources = *vgic_get_sgi_sources(dist, vcpu_id, irq);
> >>
> >>       for_each_set_bit(c, &sources, VGIC_MAX_CPUS) {
> >>               if (vgic_queue_irq(vcpu, c, irq))
> >>                       clear_bit(c, &sources);
> >>       }
> >>
> >> -     dist->irq_sgi_sources[vcpu_id][irq] = sources;
> >> +     *vgic_get_sgi_sources(dist, vcpu_id, irq) = sources;
> >>
> >>       /*
> >>        * If the sources bitmap has been cleared it means that we
> >> @@ -1327,7 +1381,7 @@ epilog:
> >>                * us. Claim we don't have anything pending. We'll
> >>                * adjust that if needed while exiting.
> >>                */
> >> -             clear_bit(vcpu_id, &dist->irq_pending_on_cpu);
> >> +             clear_bit(vcpu_id, dist->irq_pending_on_cpu);
> >>       }
> >>  }
> >>
> >> @@ -1429,7 +1483,7 @@ static void __kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
> >>       /* Check if we still have something up our sleeve... */
> >>       pending = find_first_zero_bit(elrsr_ptr, vgic->nr_lr);
> >>       if (level_pending || pending < vgic->nr_lr)
> >> -             set_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
> >> +             set_bit(vcpu->vcpu_id, dist->irq_pending_on_cpu);
> >>  }
> >>
> >>  void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
> >> @@ -1463,7 +1517,7 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
> >>       if (!irqchip_in_kernel(vcpu->kvm))
> >>               return 0;
> >>
> >> -     return test_bit(vcpu->vcpu_id, &dist->irq_pending_on_cpu);
> >> +     return test_bit(vcpu->vcpu_id, dist->irq_pending_on_cpu);
> >>  }
> >>
> >>  static void vgic_kick_vcpus(struct kvm *kvm)
> >> @@ -1558,7 +1612,7 @@ static bool vgic_update_irq_pending(struct kvm *kvm, int cpuid,
> >>
> >>       if (level) {
> >>               vgic_cpu_irq_set(vcpu, irq_num);
> >> -             set_bit(cpuid, &dist->irq_pending_on_cpu);
> >> +             set_bit(cpuid, dist->irq_pending_on_cpu);
> >>       }
> >>
> >>  out:
> >> @@ -1602,6 +1656,32 @@ static irqreturn_t vgic_maintenance_handler(int irq, void *data)
> >>       return IRQ_HANDLED;
> >>  }
> >>
> >> +void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
> >> +{
> >> +     struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
> >> +
> >> +     kfree(vgic_cpu->pending_shared);
> >> +     kfree(vgic_cpu->vgic_irq_lr_map);
> >> +     vgic_cpu->pending_shared = NULL;
> >> +     vgic_cpu->vgic_irq_lr_map = NULL;
> >> +}
> >> +
> >> +static int vgic_vcpu_init_maps(struct kvm_vcpu *vcpu, int nr_irqs)
> >> +{
> >> +     struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
> >> +
> >> +     int sz = (nr_irqs - VGIC_NR_PRIVATE_IRQS) / 8;
> > 
> > [copying question from last review round, apologies if I'm being stupid]
> > 
> > are we guaranteed that the numerator is always a multiple of 8? if not,
> > won't you end up allocating one less byte than needed?
> 
> We force the allocation to be a multiple of 32, as per the GIC spec (we
> enforce this in arm_vgic.h for the legacy behaviour, and in the last
> patch for the dynamic case).
> 
right, in that case, with the tiny doc fixes:

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 0/8] arm/arm64: KVM: dynamic VGIC sizing
  2014-09-11 11:09 ` Marc Zyngier
@ 2014-09-25 12:44   ` Shannon Zhao
  -1 siblings, 0 replies; 34+ messages in thread
From: Shannon Zhao @ 2014-09-25 12:44 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm, linux-arm-kernel, kvm, christoffer.dall
  Cc: Andre Przywara, Huangpeng (Peter), Wanghaibin (D), Hangaohuai, Yijun Zhu

Hi all,

I have a problem I would like your advice on.
I sent this mail to Marc and Christoffer earlier, but it seems they have been busy or offline recently.

I cloned Marc's "kvmtool-vgic-dyn" branch and ran it on a Fastmodel with 4 Cortex-A57 cores, using qemu 2.1.0.
https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git

I want to run a repeated lifecycle test: start 2 VMs, sleep 10 seconds, then pkill qemu.
The test script is as follows:
#!/bin/sh
while true
do
	qemu-system-aarch64 \
    		-enable-kvm -smp 4 \
    		-kernel Image \
    		-m 512 -machine virt,kernel_irqchip=on \
    		-initrd guestfs.cpio.gz \
    		-cpu host \
    		-chardev pty,id=pty0,mux=on -monitor chardev:pty0 \
    		-serial chardev:pty0 -daemonize \
    		-vnc 0.0.0.0:0 \
    		-append "rdinit=/sbin/init console=ttyAMA0 mem=512M root=/dev/ram earlyprintk=pl011,0x9000000 rw" &

	qemu-system-aarch64 \
    		-enable-kvm -smp 4 \
    		-kernel Image \
    		-m 512 -machine virt,kernel_irqchip=on \
    		-initrd guestfs.cpio.gz \
    		-cpu host \
    		-chardev pty,id=pty0,mux=on -monitor chardev:pty0 \
   		-serial chardev:pty0 -daemonize \
    		-vnc 0.0.0.0:1 \
    		-append "rdinit=/sbin/init console=ttyAMA0 mem=512M root=/dev/ram earlyprintk=pl011,0x9000000 rw" &
    	sleep 10
	pkill qemu
done

After a number of iterations, the following failure occurs.
Looking forward to your reply.
Thanks,
Shannon

BUG: failure at mm/slub.c:3346/kfree()!
Kernel panic - not syncing: BUG!
CPU: 0 PID: 874 Comm: qemu-system-aar Not tainted 3.17.0-rc4+ #1
Call trace:
[<ffffffc000087f50>] dump_backtrace+0x0/0x130
[<ffffffc000088090>] show_stack+0x10/0x1c
[<ffffffc000499074>] dump_stack+0x74/0x94
[<ffffffc0004956c4>] panic+0xe0/0x218
[<ffffffc000152fb4>] kfree+0x168/0x16c
[<ffffffc0000a260c>] kvm_vgic_destroy+0x104/0x164
[<ffffffc000099f54>] kvm_arch_destroy_vm+0x68/0x7c
[<ffffffc000097a04>] kvm_put_kvm+0x158/0x244
[<ffffffc000097b28>] kvm_device_release+0x38/0x4c
[<ffffffc000159710>] __fput+0x98/0x1d8
[<ffffffc0001598a4>] ____fput+0x8/0x14
[<ffffffc0000beb04>] task_work_run+0xac/0xec
[<ffffffc0000a8314>] do_exit+0x280/0x92c
[<ffffffc0000a9788>] do_group_exit+0x40/0xd4
[<ffffffc0000b3214>] get_signal+0x2b4/0x4fc
[<ffffffc000087574>] do_signal+0x170/0x4fc
[<ffffffc000087b18>] do_notify_resume+0x58/0x64
CPU3: stopping
CPU: 3 PID: 0 Comm: swapper/3 Not tainted 3.17.0-rc4+ #1
Call trace:
[<ffffffc000087f50>] dump_backtrace+0x0/0x130
[<ffffffc000088090>] show_stack+0x10/0x1c
[<ffffffc000499074>] dump_stack+0x74/0x94
[<ffffffc00008fd74>] handle_IPI+0x180/0x198
[<ffffffc0000812d0>] gic_handle_irq+0x78/0x80
Exception stack(0xffffffc87b8dbe30 to 0xffffffc87b8dbf50)
be20:                                     7b8d8000 ffffffc8 006a00d0 ffffffc0
be40: 7b8dbf70 ffffffc8 000851a0 ffffffc0 00000000 00000000 00000000 00000000
be60: 7ffc49ec ffffffc8 00000000 01000000 00672700 ffffffc0 00000003 00000000
be80: 00000000 00098968 000002a8 00000000 7b8975b0 ffffffc8 7b8dbd80 ffffffc8
bea0: 00000400 00000000 00000400 00000000 00000018 00000000 e8000000 00000003
bec0: 00000000 00000000 80000000 0008bab2 0015879c ffffffc0 b2f741d0 0000007f
bee0: c4ae28c0 0000007f 7b8d8000 ffffffc8 006a00d0 ffffffc0 006a1bc0 ffffffc0
bf00: 004aa258 ffffffc0 0069cdc7 ffffffc0 005b3300 ffffffc0 00000001 00000000
bf20: 8070c000 00000000 00080330 ffffffc0 80000000 00000040 7b8dbf70 ffffffc8
bf40: 0008519c ffffffc0 7b8dbf70 ffffffc8
[<ffffffc000083da0>] el1_irq+0x60/0xc0
[<ffffffc0000d68cc>] cpu_startup_entry+0xe8/0x134
[<ffffffc00008f848>] secondary_start_kernel+0x108/0x118
CPU1: stopping
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 3.17.0-rc4+ #1
Call trace:
[<ffffffc000087f50>] dump_backtrace+0x0/0x130
[<ffffffc000088090>] show_stack+0x10/0x1c
[<ffffffc000499074>] dump_stack+0x74/0x94
[<ffffffc00008fd74>] handle_IPI+0x180/0x198
[<ffffffc0000812d0>] gic_handle_irq+0x78/0x80
Exception stack(0xffffffc87b8d3e30 to 0xffffffc87b8d3f50)
3e20:                                     7b8d0000 ffffffc8 006a00d0 ffffffc0
3e40: 7b8d3f70 ffffffc8 000851a0 ffffffc0 00000000 00000000 00000000 00000000
3e60: 7ffae9ec ffffffc8 00000000 01000000 00672700 ffffffc0 00000001 00000000
3e80: 00000000 0008583b 000003fe 00000000 7b896130 ffffffc8 7b8d3d80 ffffffc8
3ea0: 00000400 00000000 00000400 00000000 004e9000 00000000 00000001 00000000
3ec0: 00000001 00000000 ffffffff ffffffff 000a9a54 ffffffc0 b16ac310 0000007f
3ee0: e8a17570 0000007f 7b8d0000 ffffffc8 006a00d0 ffffffc0 006a1bc0 ffffffc0
3f00: 004aa258 ffffffc0 0069cdc7 ffffffc0 005b3300 ffffffc0 00000001 00000000
3f20: 8070c000 00000000 00080330 ffffffc0 80000000 00000040 7b8d3f70 ffffffc8
3f40: 0008519c ffffffc0 7b8d3f70 ffffffc8
[<ffffffc000083da0>] el1_irq+0x60/0xc0
[<ffffffc0000d68cc>] cpu_startup_entry+0xe8/0x134
[<ffffffc00008f848>] secondary_start_kernel+0x108/0x118
CPU2: stopping
CPU: 2 PID: 0 Comm: swapper/2 Not tainted 3.17.0-rc4+ #1
Call trace:
[<ffffffc000087f50>] dump_backtrace+0x0/0x130
[<ffffffc000088090>] show_stack+0x10/0x1c
[<ffffffc000499074>] dump_stack+0x74/0x94
[<ffffffc00008fd74>] handle_IPI+0x180/0x198
[<ffffffc0000812d0>] gic_handle_irq+0x78/0x80
Exception stack(0xffffffc87b8d7e30 to 0xffffffc87b8d7f50)
7e20:                                     7b8d4000 ffffffc8 006a00d0 ffffffc0
7e40: 7b8d7f70 ffffffc8 000851a0 ffffffc0 00000000 00000000 00000000 00000000
7e60: 7ffb99ec ffffffc8 00000000 01000000 00672700 ffffffc0 00000002 00000000
7e80: 80000000 0007bfa4 000003fe 00000000 7b896b70 ffffffc8 7b8d7d80 ffffffc8
7ea0: 00000400 00000000 00000400 00000000 0064dd20 00000000 0064dd18 00000000
7ec0: 0064dd10 00000000 ffffffff ffffffff 00168e84 ffffffc0 93c15150 0000007f
7ee0: eed0dfc0 0000007f 7b8d4000 ffffffc8 006a00d0 ffffffc0 006a1bc0 ffffffc0
7f00: 004aa258 ffffffc0 0069cdc7 ffffffc0 005b3300 ffffffc0 00000001 00000000
7f20: 8070c000 00000000 00080330 ffffffc0 80000000 00000040 7b8d7f70 ffffffc8
7f40: 0008519c ffffffc0 7b8d7f70 ffffffc8
[<ffffffc000083da0>] el1_irq+0x60/0xc0
[<ffffffc0000d68cc>] cpu_startup_entry+0xe8/0x134
[<ffffffc00008f848>] secondary_start_kernel+0x108/0x118
---[ end Kernel panic - not syncing: BUG!

On 2014/9/11 19:09, Marc Zyngier wrote:
> So far, the VGIC data structures have been statically sized, meaning
> that we always have to support more interrupts than we actually want,
> and more CPU interfaces than we should. This is a waste of resource,
> and is the kind of things that should be tuneable.
> 
-- 
Shannon


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 0/8] arm/arm64: KVM: dynamic VGIC sizing
  2014-09-25 12:44   ` Shannon Zhao
@ 2014-09-25 13:35     ` Christoffer Dall
  -1 siblings, 0 replies; 34+ messages in thread
From: Christoffer Dall @ 2014-09-25 13:35 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: Marc Zyngier, kvmarm, linux-arm-kernel, kvm, Andre Przywara,
	Huangpeng (Peter), Wanghaibin (D),
	Hangaohuai, Yijun Zhu

On Thu, Sep 25, 2014 at 08:44:16PM +0800, Shannon Zhao wrote:
> Hi all,
> 
> I have a problem I would like your advice on.
> I sent this mail to Marc and Christoffer earlier, but it seems they have been busy or offline recently.
> 
> I cloned Marc's "kvmtool-vgic-dyn" branch and ran it on a Fastmodel with 4 Cortex-A57 cores, using qemu 2.1.0.
> https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git
> 
> I want to run a repeated lifecycle test: start 2 VMs, sleep 10 seconds, then pkill qemu.
> The test script is as follows:
> #!/bin/sh
> while true
> do
> 	qemu-system-aarch64 \
>     		-enable-kvm -smp 4 \
>     		-kernel Image \
>     		-m 512 -machine virt,kernel_irqchip=on \
>     		-initrd guestfs.cpio.gz \
>     		-cpu host \
>     		-chardev pty,id=pty0,mux=on -monitor chardev:pty0 \
>     		-serial chardev:pty0 -daemonize \
>     		-vnc 0.0.0.0:0 \
>     		-append "rdinit=/sbin/init console=ttyAMA0 mem=512M root=/dev/ram earlyprintk=pl011,0x9000000 rw" &
> 
> 	qemu-system-aarch64 \
>     		-enable-kvm -smp 4 \
>     		-kernel Image \
>     		-m 512 -machine virt,kernel_irqchip=on \
>     		-initrd guestfs.cpio.gz \
>     		-cpu host \
>     		-chardev pty,id=pty0,mux=on -monitor chardev:pty0 \
>    		-serial chardev:pty0 -daemonize \
>     		-vnc 0.0.0.0:1 \
>     		-append "rdinit=/sbin/init console=ttyAMA0 mem=512M root=/dev/ram earlyprintk=pl011,0x9000000 rw" &
>     	sleep 10
> 	pkill qemu
> done
> 
> After a number of iterations, the following failure occurs.
> Looking forward to your reply.
> Thanks,
> Shannon
> 
Hi Shannon,

This appears to be related to a bug in vgic_set_attr() which messes up
the bitmap initialization; I'm working on a patch.
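
Roughly the kind of guard such an attribute handler needs is sketched
below (illustration only, not the actual fix; the helper name is made
up, and the 64..1024 / multiple-of-32 bounds are the ones the series'
documentation patch proposes):

/* sketch: validate a requested number of IRQs before the maps are sized */
static int vgic_validate_nr_irqs(struct kvm *kvm, u32 val)
{
	if (val < 64 || val > 1024 || (val % 32))
		return -EINVAL;

	/* resizing after the maps have been allocated would corrupt them */
	if (kvm->arch.vgic.nr_irqs)
		return -EBUSY;

	return 0;
}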

-Christoffer

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 0/8] arm/arm64: KVM: dynamic VGIC sizing
  2014-09-25 13:35     ` Christoffer Dall
@ 2014-09-26  1:15       ` Shannon Zhao
  -1 siblings, 0 replies; 34+ messages in thread
From: Shannon Zhao @ 2014-09-26  1:15 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Marc Zyngier, kvmarm, linux-arm-kernel, kvm, Andre Przywara,
	Huangpeng (Peter), Wanghaibin (D),
	Hangaohuai, Yijun Zhu



On 2014/9/25 21:35, Christoffer Dall wrote:
> On Thu, Sep 25, 2014 at 08:44:16PM +0800, Shannon Zhao wrote:
>> Hi all,
>>
>> I have a problem I would like your advice on.
>> I sent this mail to Marc and Christoffer earlier, but it seems they have been busy or offline recently.
>>
>> I cloned Marc's "kvmtool-vgic-dyn" branch and ran it on a Fastmodel with 4 Cortex-A57 cores, using qemu 2.1.0.
>> https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git
>>
>> I want to run a repeated lifecycle test: start 2 VMs, sleep 10 seconds, then pkill qemu.
>> The test script is as follows:
>> #!/bin/sh
>> while true
>> do
>> 	qemu-system-aarch64 \
>>     		-enable-kvm -smp 4 \
>>     		-kernel Image \
>>     		-m 512 -machine virt,kernel_irqchip=on \
>>     		-initrd guestfs.cpio.gz \
>>     		-cpu host \
>>     		-chardev pty,id=pty0,mux=on -monitor chardev:pty0 \
>>     		-serial chardev:pty0 -daemonize \
>>     		-vnc 0.0.0.0:0 \
>>     		-append "rdinit=/sbin/init console=ttyAMA0 mem=512M root=/dev/ram earlyprintk=pl011,0x9000000 rw" &
>>
>> 	qemu-system-aarch64 \
>>     		-enable-kvm -smp 4 \
>>     		-kernel Image \
>>     		-m 512 -machine virt,kernel_irqchip=on \
>>     		-initrd guestfs.cpio.gz \
>>     		-cpu host \
>>     		-chardev pty,id=pty0,mux=on -monitor chardev:pty0 \
>>    		-serial chardev:pty0 -daemonize \
>>     		-vnc 0.0.0.0:1 \
>>     		-append "rdinit=/sbin/init console=ttyAMA0 mem=512M root=/dev/ram earlyprintk=pl011,0x9000000 rw" &
>>     	sleep 10
>> 	pkill qemu
>> done
>>
>> After a number of repetitions, something goes wrong, as follows.
>> Looking forward to your reply.
>> Thanks,
>> Shannon
>>
> Hi Shannon,
> 
> This appears to be related to a bug in vgic_set_attr() which messes up
> the bitmap initialization; I'm working on a patch.
> 
> -Christoffer
> 
> .
> 
Hi Christoffer,

Thanks for your reply.
I will test your patch to check whether the bug disappears.
Thanks,
-- 
Shannon


^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2014-09-26  1:19 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-11 11:09 [PATCH v4 0/8] arm/arm64: KVM: dynamic VGIC sizing Marc Zyngier
2014-09-11 11:09 ` Marc Zyngier
2014-09-11 11:09 ` [PATCH v4 1/8] KVM: ARM: vgic: plug irq injection race Marc Zyngier
2014-09-11 11:09   ` Marc Zyngier
2014-09-11 11:09 ` [PATCH v4 2/8] arm/arm64: KVM: vgic: switch to dynamic allocation Marc Zyngier
2014-09-11 11:09   ` Marc Zyngier
2014-09-11 22:36   ` Christoffer Dall
2014-09-11 22:36     ` Christoffer Dall
2014-09-12  9:13     ` Marc Zyngier
2014-09-12  9:13       ` Marc Zyngier
2014-09-12 17:43       ` Christoffer Dall
2014-09-12 17:43         ` Christoffer Dall
2014-09-11 11:09 ` [PATCH v4 3/8] arm/arm64: KVM: vgic: Parametrize VGIC_NR_SHARED_IRQS Marc Zyngier
2014-09-11 11:09   ` Marc Zyngier
2014-09-11 11:09 ` [PATCH v4 4/8] arm/arm64: KVM: vgic: kill VGIC_MAX_CPUS Marc Zyngier
2014-09-11 11:09   ` Marc Zyngier
2014-09-11 11:09 ` [PATCH v4 5/8] arm/arm64: KVM: vgic: handle out-of-range MMIO accesses Marc Zyngier
2014-09-11 11:09   ` Marc Zyngier
2014-09-11 11:09 ` [PATCH v4 6/8] arm/arm64: KVM: vgic: kill VGIC_NR_IRQS Marc Zyngier
2014-09-11 11:09   ` Marc Zyngier
2014-09-11 22:37   ` Christoffer Dall
2014-09-11 22:37     ` Christoffer Dall
2014-09-11 11:09 ` [PATCH v4 7/8] arm/arm64: KVM: vgic: delay vgic allocation until init time Marc Zyngier
2014-09-11 11:09   ` Marc Zyngier
2014-09-11 11:09 ` [PATCH v4 8/8] arm/arm64: KVM: vgic: make number of irqs a configurable attribute Marc Zyngier
2014-09-11 11:09   ` Marc Zyngier
2014-09-11 22:38   ` Christoffer Dall
2014-09-11 22:38     ` Christoffer Dall
2014-09-25 12:44 ` [PATCH v4 0/8] arm/arm64: KVM: dynamic VGIC sizing Shannon Zhao
2014-09-25 12:44   ` Shannon Zhao
2014-09-25 13:35   ` Christoffer Dall
2014-09-25 13:35     ` Christoffer Dall
2014-09-26  1:15     ` Shannon Zhao
2014-09-26  1:15       ` Shannon Zhao

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.