* [PATCH 0/3] kvm: arm: VGIC: Fix interrupt group enablement
@ 2019-11-08 17:49 Andre Przywara
  2019-11-08 17:49 ` [PATCH 1/3] kvm: arm: VGIC: Prepare for handling two interrupt groups Andre Przywara
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Andre Przywara @ 2019-11-08 17:49 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Julien Thierry, Suzuki K Poulose, kvmarm, linux-arm-kernel, kvm

In KVM we emulate an ARM Generic Interrupt Controller with a "single
security state", which (unlike most GICs found in silicon) provides
a non-secure operating system with *two* interrupt groups.
Since on bare metal we typically have only one group available, Linux
does not actually care about the groups and will just use the one
provided.

However, we claim to support the GIC architecture, and actually have code
to support two groups, so we should aim to support this properly.

As Marc pointed out recently, we don't honour the separate group enable
bits in the GICD_CTLR register (EnableGrp0 in bit 0 and EnableGrp1 in
bit 1, with a single security state), so a guest can't separately enable
or disable the two groups.

Fixing this unfortunately requires more than just providing storage for
a second bit: so far we were treating the "groupX enable bit" as a
global distributor enable bit, preventing interrupts from being entered
into the list registers at all if the whole thing was disabled.
With two separate bits we might need to block one IRQ while forwarding
another, so this neat trick does not work anymore.

Instead we slightly remodel our "interrupt forwarding" mechanism, to
actually get closer to the architecture: Before adding a pending IRQ to
the ap_list, we check whether its configured interrupt group is enabled.
If it's not, we don't add it to the ap_list (yet). When that group gets
enabled later, we need to rescan all (pending) IRQs, to add them to
the ap_list and forward them to the guest. This is not exactly cheap, but
fortunately shouldn't happen too often, so we refrain from employing any
super clever algorithm, at least for now.
Another solution would be to introduce a "disabled_group_list" that
pending, but group-disabled, IRQs go to; let me know if I should explore
this further.
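
To illustrate the intended check, here is a standalone model (dist_model
and should_queue() are made up for illustration; in the series itself the
decision happens in vgic_target_oracle()):

	#include <stdbool.h>

	#define GIC_GROUP0	0
	#define GIC_GROUP1	1

	struct dist_model {
		unsigned int groups_enable;	/* bit N set = group N enabled */
	};

	static bool group_enabled(const struct dist_model *d, int group)
	{
		/* GICD_CTLR with DS=1: bit 0 = EnableGrp0, bit 1 = EnableGrp1 */
		return d->groups_enable & (1U << group);
	}

	/* A pending IRQ only gets queued to the ap_list if its group is enabled. */
	static bool should_queue(const struct dist_model *d,
				 bool pending, int group)
	{
		return pending && group_enabled(d, group);
	}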

Patch 1 prepares the VGIC code to provide storage for the two enable
bits and extends the MMIO handling to deal with the two bits.
For this patch we still block the "other" group, as we need the
rescanning algorithm from patch 2 to allow enabling any group later on.
Patch 3 then enables the functionality once everything is ready.
The split-up is mostly for review purposes, since I expect some
discussion about patch 2. I am happy to merge the three into one once we
have agreed on the approach.

There is a corresponding kvm-unit-tests series to test the Group0 (FIQ)
functionality, since Linux itself won't use it.
This has been tested with Linux (for regressions) and with
kvm-unit-tests, on GICv2/arm, GICv2/arm64 and GICv3/arm64 machines.

The kvm-unit-tests patches can be found here:
https://lists.cs.columbia.edu/pipermail/kvmarm/2019-November/037853.html
or in the following repo:
https://github.com/Andre-ARM/kvm-unit-tests/commits/gic-group0

This series here can also be found at:
git://linux-arm.org/linux-ap.git

Based on kvmarm/next, commit cd7056ae34af.

Please have a look!

Cheers,
Andre

Andre Przywara (3):
  kvm: arm: VGIC: Prepare for handling two interrupt groups
  kvm: arm: VGIC: Scan all IRQs when interrupt group gets enabled
  kvm: arm: VGIC: Enable proper Group0 handling

 include/kvm/arm_vgic.h           |   7 +-
 virt/kvm/arm/vgic/vgic-debug.c   |   2 +-
 virt/kvm/arm/vgic/vgic-mmio-v2.c |  23 +++---
 virt/kvm/arm/vgic/vgic-mmio-v3.c |  22 +++---
 virt/kvm/arm/vgic/vgic.c         | 130 ++++++++++++++++++++++++++++++-
 virt/kvm/arm/vgic/vgic.h         |   3 +
 6 files changed, 162 insertions(+), 25 deletions(-)

-- 
2.17.1



* [PATCH 1/3] kvm: arm: VGIC: Prepare for handling two interrupt groups
  2019-11-08 17:49 [PATCH 0/3] kvm: arm: VGIC: Fix interrupt group enablement Andre Przywara
@ 2019-11-08 17:49 ` Andre Przywara
  2019-11-10 14:15   ` Marc Zyngier
  2019-11-08 17:49 ` [PATCH 2/3] kvm: arm: VGIC: Scan all IRQs when interrupt group gets enabled Andre Przywara
  2019-11-08 17:49 ` [PATCH 3/3] kvm: arm: VGIC: Enable proper Group0 handling Andre Przywara
  2 siblings, 1 reply; 12+ messages in thread
From: Andre Przywara @ 2019-11-08 17:49 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Julien Thierry, Suzuki K Poulose, kvmarm, linux-arm-kernel, kvm

The GIC specification describes support for two distinct interrupt
groups, which can be enabled independently from each other. At the
moment our VGIC emulation does not really honour this, so using
Group0 interrupts with the GICv3 emulation does not work as expected.

Prepare the code for dealing with the *two* interrupt groups:
Allow the two enable bits in the distributor to be handled, by providing
accessors. At the moment this still only honours group1, because we
need more code to properly differentiate the group enables.
Also provide a stub function to later implement the re-scanning of all
IRQs, should the group enable bit for a group change.

This patch does not change the current behaviour yet, but just provides
the infrastructure bits separately, mostly for review purposes.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 include/kvm/arm_vgic.h           |  7 ++-
 virt/kvm/arm/vgic/vgic-debug.c   |  2 +-
 virt/kvm/arm/vgic/vgic-mmio-v2.c | 23 ++++++----
 virt/kvm/arm/vgic/vgic-mmio-v3.c | 22 +++++----
 virt/kvm/arm/vgic/vgic.c         | 76 +++++++++++++++++++++++++++++++-
 virt/kvm/arm/vgic/vgic.h         |  3 ++
 6 files changed, 110 insertions(+), 23 deletions(-)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 9d53f545a3d5..0f845430c732 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -29,6 +29,9 @@
 #define VGIC_MIN_LPI		8192
 #define KVM_IRQCHIP_NUM_PINS	(1020 - 32)
 
+#define GIC_GROUP0		0
+#define GIC_GROUP1		1
+
 #define irq_is_ppi(irq) ((irq) >= VGIC_NR_SGIS && (irq) < VGIC_NR_PRIVATE_IRQS)
 #define irq_is_spi(irq) ((irq) >= VGIC_NR_PRIVATE_IRQS && \
 			 (irq) <= VGIC_MAX_SPI)
@@ -227,8 +230,8 @@ struct vgic_dist {
 		struct list_head rd_regions;
 	};
 
-	/* distributor enabled */
-	bool			enabled;
+	/* group0 and/or group1 IRQs enabled on the distributor level */
+	u8			groups_enable;
 
 	struct vgic_irq		*spis;
 
diff --git a/virt/kvm/arm/vgic/vgic-debug.c b/virt/kvm/arm/vgic/vgic-debug.c
index cc12fe9b2df3..ab64e908024e 100644
--- a/virt/kvm/arm/vgic/vgic-debug.c
+++ b/virt/kvm/arm/vgic/vgic-debug.c
@@ -150,7 +150,7 @@ static void print_dist_state(struct seq_file *s, struct vgic_dist *dist)
 	seq_printf(s, "nr_spis:\t%d\n", dist->nr_spis);
 	if (v3)
 		seq_printf(s, "nr_lpis:\t%d\n", dist->lpi_list_count);
-	seq_printf(s, "enabled:\t%d\n", dist->enabled);
+	seq_printf(s, "groups enabled:\t%d\n", dist->groups_enable);
 	seq_printf(s, "\n");
 
 	seq_printf(s, "P=pending_latch, L=line_level, A=active\n");
diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
index 5945f062d749..fe8528bd5b55 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
@@ -26,11 +26,14 @@ static unsigned long vgic_mmio_read_v2_misc(struct kvm_vcpu *vcpu,
 					    gpa_t addr, unsigned int len)
 {
 	struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
-	u32 value;
+	u32 value = 0;
 
 	switch (addr & 0x0c) {
 	case GIC_DIST_CTRL:
-		value = vgic->enabled ? GICD_ENABLE : 0;
+		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP0))
+			value |= GICD_ENABLE;
+		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP1))
+			value |= BIT(1);
 		break;
 	case GIC_DIST_CTR:
 		value = vgic->nr_spis + VGIC_NR_PRIVATE_IRQS;
@@ -42,8 +45,6 @@ static unsigned long vgic_mmio_read_v2_misc(struct kvm_vcpu *vcpu,
 			(vgic->implementation_rev << GICD_IIDR_REVISION_SHIFT) |
 			(IMPLEMENTER_ARM << GICD_IIDR_IMPLEMENTER_SHIFT);
 		break;
-	default:
-		return 0;
 	}
 
 	return value;
@@ -53,14 +54,18 @@ static void vgic_mmio_write_v2_misc(struct kvm_vcpu *vcpu,
 				    gpa_t addr, unsigned int len,
 				    unsigned long val)
 {
-	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
-	bool was_enabled = dist->enabled;
+	struct kvm *kvm = vcpu->kvm;
+	int grp0_changed, grp1_changed;
 
 	switch (addr & 0x0c) {
 	case GIC_DIST_CTRL:
-		dist->enabled = val & GICD_ENABLE;
-		if (!was_enabled && dist->enabled)
-			vgic_kick_vcpus(vcpu->kvm);
+		grp0_changed = vgic_dist_enable_group(kvm, GIC_GROUP0,
+						      val & GICD_ENABLE);
+		grp1_changed = vgic_dist_enable_group(kvm, GIC_GROUP1,
+						      val & BIT(1));
+		if (grp0_changed || grp1_changed)
+			vgic_rescan_pending_irqs(kvm, grp0_changed > 0 ||
+						      grp1_changed > 0);
 		break;
 	case GIC_DIST_CTR:
 	case GIC_DIST_IIDR:
diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
index 7dfd15dbb308..73e3410af332 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
@@ -66,7 +66,9 @@ static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu,
 
 	switch (addr & 0x0c) {
 	case GICD_CTLR:
-		if (vgic->enabled)
+		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP0))
+			value |= GICD_CTLR_ENABLE_SS_G0;
+		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP1))
 			value |= GICD_CTLR_ENABLE_SS_G1;
 		value |= GICD_CTLR_ARE_NS | GICD_CTLR_DS;
 		break;
@@ -85,8 +87,6 @@ static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu,
 			(vgic->implementation_rev << GICD_IIDR_REVISION_SHIFT) |
 			(IMPLEMENTER_ARM << GICD_IIDR_IMPLEMENTER_SHIFT);
 		break;
-	default:
-		return 0;
 	}
 
 	return value;
@@ -96,15 +96,19 @@ static void vgic_mmio_write_v3_misc(struct kvm_vcpu *vcpu,
 				    gpa_t addr, unsigned int len,
 				    unsigned long val)
 {
-	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
-	bool was_enabled = dist->enabled;
+	struct kvm *kvm = vcpu->kvm;
+	int grp0_changed, grp1_changed;
 
 	switch (addr & 0x0c) {
 	case GICD_CTLR:
-		dist->enabled = val & GICD_CTLR_ENABLE_SS_G1;
-
-		if (!was_enabled && dist->enabled)
-			vgic_kick_vcpus(vcpu->kvm);
+		grp0_changed = vgic_dist_enable_group(kvm, GIC_GROUP0,
+						val & GICD_CTLR_ENABLE_SS_G0);
+		grp1_changed = vgic_dist_enable_group(kvm, GIC_GROUP1,
+						val & GICD_CTLR_ENABLE_SS_G1);
+
+		if (grp0_changed || grp1_changed)
+			vgic_rescan_pending_irqs(kvm, grp0_changed > 0 ||
+						      grp1_changed > 0);
 		break;
 	case GICD_TYPER:
 	case GICD_IIDR:
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 99b02ca730a8..3b88e14d239f 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -201,6 +201,12 @@ void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active)
 				      active));
 }
 
+static bool vgic_irq_is_grp_enabled(struct kvm *kvm, struct vgic_irq *irq)
+{
+	/* Placeholder implementation until we properly support Group0. */
+	return kvm->arch.vgic.groups_enable;
+}
+
 /**
  * kvm_vgic_target_oracle - compute the target vcpu for an irq
  *
@@ -228,7 +234,8 @@ static struct kvm_vcpu *vgic_target_oracle(struct vgic_irq *irq)
 	 */
 	if (irq->enabled && irq_is_pending(irq)) {
 		if (unlikely(irq->target_vcpu &&
-			     !irq->target_vcpu->kvm->arch.vgic.enabled))
+			     !vgic_irq_is_grp_enabled(irq->target_vcpu->kvm,
+						      irq)))
 			return NULL;
 
 		return irq->target_vcpu;
@@ -303,6 +310,71 @@ static void vgic_sort_ap_list(struct kvm_vcpu *vcpu)
 	list_sort(NULL, &vgic_cpu->ap_list_head, vgic_irq_cmp);
 }
 
+int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
+{
+	struct vgic_dist *dist = &kvm->arch.vgic;
+	u32 group_mask = 1U << group;
+	u32 new_bit = (unsigned)status << group;
+	u8 was_enabled = dist->groups_enable & group_mask;
+
+	if (new_bit == was_enabled)
+		return 0;
+
+	/* Group 0 on GICv3 and Group 1 on GICv2 are ignored for now. */
+	if (kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
+		if (group == GIC_GROUP0)
+			return 0;
+	} else {
+		if (group == GIC_GROUP1)
+			return 0;
+	}
+
+	dist->groups_enable &= ~group_mask;
+	dist->groups_enable |= new_bit;
+	if (new_bit > was_enabled)
+		return 1;
+	else
+		return -1;
+
+	return 0;
+}
+
+/*
+ * The group enable status of at least one of the groups has changed.
+ * If enabled is true, at least one of the groups got enabled.
+ * Adjust the forwarding state of every IRQ (on ap_list or not) accordingly.
+ */
+void vgic_rescan_pending_irqs(struct kvm *kvm, bool enabled)
+{
+	/*
+	 * TODO: actually scan *all* IRQs of the VM for pending IRQs.
+	 * If a pending IRQ's group is now enabled, add it to its ap_list.
+	 * If a pending IRQ's group is now disabled, kick the VCPU to
+	 * let it remove this IRQ from its ap_list. We have to let the
+	 * VCPU do it itself, because we can't know the exact state of an
+	 * IRQ pending on a running VCPU.
+	 */
+
+	 /* For now just kick all VCPUs, as the old code did. */
+	vgic_kick_vcpus(kvm);
+}
+
+bool vgic_dist_group_enabled(struct kvm *kvm, int group)
+{
+	struct vgic_dist *dist = &kvm->arch.vgic;
+
+	/* Group 0 on GICv3 and Group 1 on GICv2 are ignored for now. */
+	if (kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
+		if (group == GIC_GROUP0)
+			return false;
+	} else {
+		if (group == GIC_GROUP1)
+			return false;
+	}
+
+	return dist->groups_enable & (1U << group);
+}
+
 /*
  * Only valid injection if changing level for level-triggered IRQs or for a
  * rising edge, and in-kernel connected IRQ lines can only be controlled by
@@ -949,7 +1021,7 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
 	unsigned long flags;
 	struct vgic_vmcr vmcr;
 
-	if (!vcpu->kvm->arch.vgic.enabled)
+	if (!vcpu->kvm->arch.vgic.groups_enable)
 		return false;
 
 	if (vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last)
diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h
index c7fefd6b1c80..219eb23d580d 100644
--- a/virt/kvm/arm/vgic/vgic.h
+++ b/virt/kvm/arm/vgic/vgic.h
@@ -168,7 +168,10 @@ void vgic_irq_set_phys_pending(struct vgic_irq *irq, bool pending);
 void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active);
 bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq,
 			   unsigned long flags);
+bool vgic_dist_group_enabled(struct kvm *kvm, int group);
+int vgic_dist_enable_group(struct kvm *kvm, int group, bool status);
 void vgic_kick_vcpus(struct kvm *kvm);
+void vgic_rescan_pending_irqs(struct kvm *kvm, bool enabled);
 
 int vgic_check_ioaddr(struct kvm *kvm, phys_addr_t *ioaddr,
 		      phys_addr_t addr, phys_addr_t alignment);
-- 
2.17.1



* [PATCH 2/3] kvm: arm: VGIC: Scan all IRQs when interrupt group gets enabled
  2019-11-08 17:49 [PATCH 0/3] kvm: arm: VGIC: Fix interrupt group enablement Andre Przywara
  2019-11-08 17:49 ` [PATCH 1/3] kvm: arm: VGIC: Prepare for handling two interrupt groups Andre Przywara
@ 2019-11-08 17:49 ` Andre Przywara
  2019-11-10 14:29   ` Marc Zyngier
  2019-11-08 17:49 ` [PATCH 3/3] kvm: arm: VGIC: Enable proper Group0 handling Andre Przywara
  2 siblings, 1 reply; 12+ messages in thread
From: Andre Przywara @ 2019-11-08 17:49 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Julien Thierry, Suzuki K Poulose, kvmarm, linux-arm-kernel, kvm

Our current VGIC emulation code treats the "EnableGrpX" bits in GICD_CTLR
as a single global interrupt delivery switch, where in fact the GIC
architecture asks for separate enable controls for the two interrupt groups.

To implement this properly, we have to slightly adjust our design, to
*not* let IRQs from a disabled interrupt group be added to the ap_list.

As a consequence, enabling one group requires us to re-evaluate every
pending IRQ and potentially add it to its respective ap_list. Similarly
disabling an interrupt group requires pending IRQs to be removed from
the ap_list (as long as they have not been activated yet).

Implement a rather simple, yet not terribly efficient algorithm to
achieve this: For each VCPU we iterate over all IRQs, checking for
pending ones and adding them to the list. We hold the ap_list_lock
for this, to make this atomic from a VCPU's point of view.

When an interrupt group gets disabled, we can't directly remove affected
IRQs from the ap_list, as a running VCPU might have already activated
them, which wouldn't be immediately visible to the host.
Instead we simply kick all VCPUs, so that they clean their ap_lists
automatically when running vgic_prune_ap_list().

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 virt/kvm/arm/vgic/vgic.c | 88 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 80 insertions(+), 8 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 3b88e14d239f..28d9ff282017 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -339,6 +339,38 @@ int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
 	return 0;
 }
 
+/*
+ * Check whether a given IRQ needs to be queued to this ap_list, and do
+ * so if that's the case.
+ * Requires the ap_list_lock to be held (but not the irq lock).
+ *
+ * Returns 1 if that IRQ has been added to the ap_list, and 0 if not.
+ */
+static int queue_enabled_irq(struct kvm *kvm, struct kvm_vcpu *vcpu,
+			     int intid)
+{
+	struct vgic_irq *irq = vgic_get_irq(kvm, vcpu, intid);
+	int ret = 0;
+
+	raw_spin_lock(&irq->irq_lock);
+	if (!irq->vcpu && vcpu == vgic_target_oracle(irq)) {
+		/*
+		 * Grab a reference to the irq to reflect the
+		 * fact that it is now in the ap_list.
+		 */
+		vgic_get_irq_kref(irq);
+		list_add_tail(&irq->ap_list,
+			      &vcpu->arch.vgic_cpu.ap_list_head);
+		irq->vcpu = vcpu;
+
+		ret = 1;
+	}
+	raw_spin_unlock(&irq->irq_lock);
+	vgic_put_irq(kvm, irq);
+
+	return ret;
+}
+
 /*
  * The group enable status of at least one of the groups has changed.
  * If enabled is true, at least one of the groups got enabled.
@@ -346,17 +378,57 @@ int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
  */
 void vgic_rescan_pending_irqs(struct kvm *kvm, bool enabled)
 {
+	int cpuid;
+	struct kvm_vcpu *vcpu;
+
 	/*
-	 * TODO: actually scan *all* IRQs of the VM for pending IRQs.
-	 * If a pending IRQ's group is now enabled, add it to its ap_list.
-	 * If a pending IRQ's group is now disabled, kick the VCPU to
-	 * let it remove this IRQ from its ap_list. We have to let the
-	 * VCPU do it itself, because we can't know the exact state of an
-	 * IRQ pending on a running VCPU.
+	 * If no group got enabled, we only have to potentially remove
+	 * interrupts from ap_lists. We can't do this here, because a running
+	 * VCPU might have ACKed an IRQ already, which wouldn't immediately
+	 * be reflected in the ap_list.
+	 * So kick all VCPUs, which will let them re-evaluate their ap_lists
+	 * by running vgic_prune_ap_list(), removing no longer enabled
+	 * IRQs.
+	 */
+	if (!enabled) {
+		vgic_kick_vcpus(kvm);
+
+		return;
+	}
+
+	/*
+	 * At least one group went from disabled to enabled. Now we need
+	 * to scan *all* IRQs of the VM for newly group-enabled IRQs.
+	 * If a pending IRQ's group is now enabled, add it to the ap_list.
+	 *
+	 * For each VCPU this needs to be atomic, as we need *all* newly
+	 * enabled IRQs in be in the ap_list to determine the highest
+	 * priority one.
+	 * So grab the ap_list_lock, then iterate over all private IRQs and
+	 * all SPIs. Once the ap_list is updated, kick that VCPU to
+	 * forward any new IRQs to the guest.
 	 */
+	kvm_for_each_vcpu(cpuid, vcpu, kvm) {
+		unsigned long flags;
+		int i;
 
-	 /* For now just kick all VCPUs, as the old code did. */
-	vgic_kick_vcpus(kvm);
+		raw_spin_lock_irqsave(&vcpu->arch.vgic_cpu.ap_list_lock, flags);
+
+		for (i = 0; i < VGIC_NR_PRIVATE_IRQS; i++)
+			queue_enabled_irq(kvm, vcpu, i);
+
+		for (i = VGIC_NR_PRIVATE_IRQS;
+		     i < kvm->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS; i++)
+			queue_enabled_irq(kvm, vcpu, i);
+
+                raw_spin_unlock_irqrestore(&vcpu->arch.vgic_cpu.ap_list_lock,
+                                           flags);
+
+		if (kvm_vgic_vcpu_pending_irq(vcpu)) {
+			kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
+			kvm_vcpu_kick(vcpu);
+		}
+	}
 }
 
 bool vgic_dist_group_enabled(struct kvm *kvm, int group)
-- 
2.17.1



* [PATCH 3/3] kvm: arm: VGIC: Enable proper Group0 handling
  2019-11-08 17:49 [PATCH 0/3] kvm: arm: VGIC: Fix interrupt group enablement Andre Przywara
  2019-11-08 17:49 ` [PATCH 1/3] kvm: arm: VGIC: Prepare for handling two interrupt groups Andre Przywara
  2019-11-08 17:49 ` [PATCH 2/3] kvm: arm: VGIC: Scan all IRQs when interrupt group gets enabled Andre Przywara
@ 2019-11-08 17:49 ` Andre Przywara
  2 siblings, 0 replies; 12+ messages in thread
From: Andre Przywara @ 2019-11-08 17:49 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Julien Thierry, Suzuki K Poulose, kvmarm, linux-arm-kernel, kvm

Now that our VGIC emulation can properly deal with two separate
interrupt groups and their enable bits, we can allow a guest to control
the Group0 enable bit in the distributor.

When evaluating the state of an interrupt, we now take its individual
group enable bit into account, and drop the global "distributor
disabled" notion when checking for VCPUs with a pending interrupt.

This allows the guest to control both groups independently.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
---
 virt/kvm/arm/vgic/vgic.c | 24 +-----------------------
 1 file changed, 1 insertion(+), 23 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 28d9ff282017..ed3a10b9ea93 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -203,8 +203,7 @@ void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active)
 
 static bool vgic_irq_is_grp_enabled(struct kvm *kvm, struct vgic_irq *irq)
 {
-	/* Placeholder implementation until we properly support Group0. */
-	return kvm->arch.vgic.groups_enable;
+	return vgic_dist_group_enabled(kvm, irq->group);
 }
 
 /**
@@ -320,15 +319,6 @@ int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
 	if (new_bit == was_enabled)
 		return 0;
 
-	/* Group 0 on GICv3 and Group 1 on GICv2 are ignored for now. */
-	if (kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
-		if (group == GIC_GROUP0)
-			return 0;
-	} else {
-		if (group == GIC_GROUP1)
-			return 0;
-	}
-
 	dist->groups_enable &= ~group_mask;
 	dist->groups_enable |= new_bit;
 	if (new_bit > was_enabled)
@@ -435,15 +425,6 @@ bool vgic_dist_group_enabled(struct kvm *kvm, int group)
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
 
-	/* Group 0 on GICv3 and Group 1 on GICv2 are ignored for now. */
-	if (kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
-		if (group == GIC_GROUP0)
-			return false;
-	} else {
-		if (group == GIC_GROUP1)
-			return false;
-	}
-
 	return dist->groups_enable & (1U << group);
 }
 
@@ -1093,9 +1074,6 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
 	unsigned long flags;
 	struct vgic_vmcr vmcr;
 
-	if (!vcpu->kvm->arch.vgic.groups_enable)
-		return false;
-
 	if (vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last)
 		return true;
 
-- 
2.17.1



* Re: [PATCH 1/3] kvm: arm: VGIC: Prepare for handling two interrupt groups
  2019-11-08 17:49 ` [PATCH 1/3] kvm: arm: VGIC: Prepare for handling two interrupt groups Andre Przywara
@ 2019-11-10 14:15   ` Marc Zyngier
  2019-11-12  9:35     ` Andre Przywara
  0 siblings, 1 reply; 12+ messages in thread
From: Marc Zyngier @ 2019-11-10 14:15 UTC (permalink / raw)
  To: Andre Przywara; +Cc: kvmarm, linux-arm-kernel, kvm

On Fri,  8 Nov 2019 17:49:50 +0000
Andre Przywara <andre.przywara@arm.com> wrote:

> The GIC specification describes support for two distinct interrupt
> groups, which can be enabled independently from each other. At the
> moment our VGIC emulation does not really honour this, so using
> Group0 interrupts with the GICv3 emulation does not work as expected.
> 
> Prepare the code for dealing with the *two* interrupt groups:
> Allow the two enable bits in the distributor to be handled, by providing
> accessors. At the moment this still only honours group1, because we
> need more code to properly differentiate the group enables.
> Also provide a stub function to later implement the re-scanning of all
> IRQs, should the group enable bit for a group change.
> 
> This patch does not change the current behaviour yet, but just provides
> the infrastructure bits separately, mostly for review purposes.
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  include/kvm/arm_vgic.h           |  7 ++-
>  virt/kvm/arm/vgic/vgic-debug.c   |  2 +-
>  virt/kvm/arm/vgic/vgic-mmio-v2.c | 23 ++++++----
>  virt/kvm/arm/vgic/vgic-mmio-v3.c | 22 +++++----
>  virt/kvm/arm/vgic/vgic.c         | 76 +++++++++++++++++++++++++++++++-
>  virt/kvm/arm/vgic/vgic.h         |  3 ++
>  6 files changed, 110 insertions(+), 23 deletions(-)
> 
> diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
> index 9d53f545a3d5..0f845430c732 100644
> --- a/include/kvm/arm_vgic.h
> +++ b/include/kvm/arm_vgic.h
> @@ -29,6 +29,9 @@
>  #define VGIC_MIN_LPI		8192
>  #define KVM_IRQCHIP_NUM_PINS	(1020 - 32)
>  
> +#define GIC_GROUP0		0
> +#define GIC_GROUP1		1

Is there any reason why:
- these are not bit masks (1 << 0, 1 << 1)
- they are not using architectural constants (GICD_EnableGroup0,
  GICD_EnableGroup1)

> +
>  #define irq_is_ppi(irq) ((irq) >= VGIC_NR_SGIS && (irq) < VGIC_NR_PRIVATE_IRQS)
>  #define irq_is_spi(irq) ((irq) >= VGIC_NR_PRIVATE_IRQS && \
>  			 (irq) <= VGIC_MAX_SPI)
> @@ -227,8 +230,8 @@ struct vgic_dist {
>  		struct list_head rd_regions;
>  	};
>  
> -	/* distributor enabled */
> -	bool			enabled;
> +	/* group0 and/or group1 IRQs enabled on the distributor level */
> +	u8			groups_enable;

The comment is a bit misleading. This should be a bitmap of the enabled
groups, limited to groups 0 and 1 (short of having more of the stuff).

>  
>  	struct vgic_irq		*spis;
>  
> diff --git a/virt/kvm/arm/vgic/vgic-debug.c b/virt/kvm/arm/vgic/vgic-debug.c
> index cc12fe9b2df3..ab64e908024e 100644
> --- a/virt/kvm/arm/vgic/vgic-debug.c
> +++ b/virt/kvm/arm/vgic/vgic-debug.c
> @@ -150,7 +150,7 @@ static void print_dist_state(struct seq_file *s, struct vgic_dist *dist)
>  	seq_printf(s, "nr_spis:\t%d\n", dist->nr_spis);
>  	if (v3)
>  		seq_printf(s, "nr_lpis:\t%d\n", dist->lpi_list_count);
> -	seq_printf(s, "enabled:\t%d\n", dist->enabled);
> +	seq_printf(s, "groups enabled:\t%d\n", dist->groups_enable);

You could actually print 0 and/or 1, instead of 0, 1, 2 or 3...

>  	seq_printf(s, "\n");
>  
>  	seq_printf(s, "P=pending_latch, L=line_level, A=active\n");
> diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
> index 5945f062d749..fe8528bd5b55 100644
> --- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
> +++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
> @@ -26,11 +26,14 @@ static unsigned long vgic_mmio_read_v2_misc(struct kvm_vcpu *vcpu,
>  					    gpa_t addr, unsigned int len)
>  {
>  	struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
> -	u32 value;
> +	u32 value = 0;
>  
>  	switch (addr & 0x0c) {
>  	case GIC_DIST_CTRL:
> -		value = vgic->enabled ? GICD_ENABLE : 0;
> +		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP0))
> +			value |= GICD_ENABLE;
> +		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP1))
> +			value |= BIT(1);

Time to follow the naming in the spec, like on GICv3.

>  		break;
>  	case GIC_DIST_CTR:
>  		value = vgic->nr_spis + VGIC_NR_PRIVATE_IRQS;
> @@ -42,8 +45,6 @@ static unsigned long vgic_mmio_read_v2_misc(struct kvm_vcpu *vcpu,
>  			(vgic->implementation_rev << GICD_IIDR_REVISION_SHIFT) |
>  			(IMPLEMENTER_ARM << GICD_IIDR_IMPLEMENTER_SHIFT);
>  		break;
> -	default:
> -		return 0;
>  	}
>  
>  	return value;
> @@ -53,14 +54,18 @@ static void vgic_mmio_write_v2_misc(struct kvm_vcpu *vcpu,
>  				    gpa_t addr, unsigned int len,
>  				    unsigned long val)
>  {
> -	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
> -	bool was_enabled = dist->enabled;
> +	struct kvm *kvm = vcpu->kvm;
> +	int grp0_changed, grp1_changed;
>  
>  	switch (addr & 0x0c) {
>  	case GIC_DIST_CTRL:
> -		dist->enabled = val & GICD_ENABLE;
> -		if (!was_enabled && dist->enabled)
> -			vgic_kick_vcpus(vcpu->kvm);
> +		grp0_changed = vgic_dist_enable_group(kvm, GIC_GROUP0,
> +						      val & GICD_ENABLE);
> +		grp1_changed = vgic_dist_enable_group(kvm, GIC_GROUP1,
> +						      val & BIT(1));
> +		if (grp0_changed || grp1_changed)
> +			vgic_rescan_pending_irqs(kvm, grp0_changed > 0 ||
> +						      grp1_changed > 0);
>  		break;
>  	case GIC_DIST_CTR:
>  	case GIC_DIST_IIDR:
> diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
> index 7dfd15dbb308..73e3410af332 100644
> --- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
> +++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
> @@ -66,7 +66,9 @@ static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu,
>  
>  	switch (addr & 0x0c) {
>  	case GICD_CTLR:
> -		if (vgic->enabled)
> +		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP0))
> +			value |= GICD_CTLR_ENABLE_SS_G0;
> +		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP1))
>  			value |= GICD_CTLR_ENABLE_SS_G1;
>  		value |= GICD_CTLR_ARE_NS | GICD_CTLR_DS;
>  		break;
> @@ -85,8 +87,6 @@ static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu,
>  			(vgic->implementation_rev << GICD_IIDR_REVISION_SHIFT) |
>  			(IMPLEMENTER_ARM << GICD_IIDR_IMPLEMENTER_SHIFT);
>  		break;
> -	default:
> -		return 0;
>  	}
>  
>  	return value;
> @@ -96,15 +96,19 @@ static void vgic_mmio_write_v3_misc(struct kvm_vcpu *vcpu,
>  				    gpa_t addr, unsigned int len,
>  				    unsigned long val)
>  {
> -	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
> -	bool was_enabled = dist->enabled;
> +	struct kvm *kvm = vcpu->kvm;
> +	int grp0_changed, grp1_changed;
>  
>  	switch (addr & 0x0c) {
>  	case GICD_CTLR:
> -		dist->enabled = val & GICD_CTLR_ENABLE_SS_G1;
> -
> -		if (!was_enabled && dist->enabled)
> -			vgic_kick_vcpus(vcpu->kvm);
> +		grp0_changed = vgic_dist_enable_group(kvm, GIC_GROUP0,
> +						val & GICD_CTLR_ENABLE_SS_G0);
> +		grp1_changed = vgic_dist_enable_group(kvm, GIC_GROUP1,
> +						val & GICD_CTLR_ENABLE_SS_G1);
> +
> +		if (grp0_changed || grp1_changed)
> +			vgic_rescan_pending_irqs(kvm, grp0_changed > 0 ||
> +						      grp1_changed > 0);

Aren't you losing some state here? If I have disabled G0 and enabled G1
at the same time, the result is "enabled", which makes little sense
when you have two groups. My hunch is that you have to rescan all
the pending interrupts and match them against the group.

>  		break;
>  	case GICD_TYPER:
>  	case GICD_IIDR:
> diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
> index 99b02ca730a8..3b88e14d239f 100644
> --- a/virt/kvm/arm/vgic/vgic.c
> +++ b/virt/kvm/arm/vgic/vgic.c
> @@ -201,6 +201,12 @@ void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active)
>  				      active));
>  }
>  
> +static bool vgic_irq_is_grp_enabled(struct kvm *kvm, struct vgic_irq *irq)
> +{
> +	/* Placeholder implementation until we properly support Group0. */
> +	return kvm->arch.vgic.groups_enable;
> +}
> +
>  /**
>   * kvm_vgic_target_oracle - compute the target vcpu for an irq
>   *
> @@ -228,7 +234,8 @@ static struct kvm_vcpu *vgic_target_oracle(struct vgic_irq *irq)
>  	 */
>  	if (irq->enabled && irq_is_pending(irq)) {
>  		if (unlikely(irq->target_vcpu &&
> -			     !irq->target_vcpu->kvm->arch.vgic.enabled))
> +			     !vgic_irq_is_grp_enabled(irq->target_vcpu->kvm,
> +						      irq)))
>  			return NULL;
>  
>  		return irq->target_vcpu;
> @@ -303,6 +310,71 @@ static void vgic_sort_ap_list(struct kvm_vcpu *vcpu)
>  	list_sort(NULL, &vgic_cpu->ap_list_head, vgic_irq_cmp);
>  }
>  
> +int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
> +{
> +	struct vgic_dist *dist = &kvm->arch.vgic;
> +	u32 group_mask = 1U << group;
> +	u32 new_bit = (unsigned)status << group;
> +	u8 was_enabled = dist->groups_enable & group_mask;
> +
> +	if (new_bit == was_enabled)
> +		return 0;
> +
> +	/* Group 0 on GICv3 and Group 1 on GICv2 are ignored for now. */
> +	if (kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
> +		if (group == GIC_GROUP0)
> +			return 0;
> +	} else {
> +		if (group == GIC_GROUP1)
> +			return 0;
> +	}
> +
> +	dist->groups_enable &= ~group_mask;
> +	dist->groups_enable |= new_bit;
> +	if (new_bit > was_enabled)
> +		return 1;
> +	else
> +		return -1;
> +
> +	return 0;
> +}
> +
> +/*
> + * The group enable status of at least one of the groups has changed.
> + * If enabled is true, at least one of the groups got enabled.
> + * Adjust the forwarding state of every IRQ (on ap_list or not) accordingly.
> + */
> +void vgic_rescan_pending_irqs(struct kvm *kvm, bool enabled)
> +{
> +	/*
> +	 * TODO: actually scan *all* IRQs of the VM for pending IRQs.
> +	 * If a pending IRQ's group is now enabled, add it to its ap_list.
> +	 * If a pending IRQ's group is now disabled, kick the VCPU to
> +	 * let it remove this IRQ from its ap_list. We have to let the
> +	 * VCPU do it itself, because we can't know the exact state of an
> +	 * IRQ pending on a running VCPU.
> +	 */
> +
> +	 /* For now just kick all VCPUs, as the old code did. */
> +	vgic_kick_vcpus(kvm);
> +}
> +
> +bool vgic_dist_group_enabled(struct kvm *kvm, int group)
> +{
> +	struct vgic_dist *dist = &kvm->arch.vgic;
> +
> +	/* Group 0 on GICv3 and Group 1 on GICv2 are ignored for now. */
> +	if (kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
> +		if (group == GIC_GROUP0)
> +			return false;
> +	} else {
> +		if (group == GIC_GROUP1)
> +			return false;
> +	}
> +
> +	return dist->groups_enable & (1U << group);
> +}
> +
>  /*
>   * Only valid injection if changing level for level-triggered IRQs or for a
>   * rising edge, and in-kernel connected IRQ lines can only be controlled by
> @@ -949,7 +1021,7 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
>  	unsigned long flags;
>  	struct vgic_vmcr vmcr;
>  
> -	if (!vcpu->kvm->arch.vgic.enabled)
> +	if (!vcpu->kvm->arch.vgic.groups_enable)
>  		return false;
>  
>  	if (vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last)
> diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h
> index c7fefd6b1c80..219eb23d580d 100644
> --- a/virt/kvm/arm/vgic/vgic.h
> +++ b/virt/kvm/arm/vgic/vgic.h
> @@ -168,7 +168,10 @@ void vgic_irq_set_phys_pending(struct vgic_irq *irq, bool pending);
>  void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active);
>  bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq,
>  			   unsigned long flags);
> +bool vgic_dist_group_enabled(struct kvm *kvm, int group);
> +int vgic_dist_enable_group(struct kvm *kvm, int group, bool status);
>  void vgic_kick_vcpus(struct kvm *kvm);
> +void vgic_rescan_pending_irqs(struct kvm *kvm, bool enabled);
>  
>  int vgic_check_ioaddr(struct kvm *kvm, phys_addr_t *ioaddr,
>  		      phys_addr_t addr, phys_addr_t alignment);


Overall, the logic seems a bit unclear. Please start by cleaning this
up so that we know what we're talking about, and get rid of the
(extremely premature) optimizations. If group enables get changed in any
way, just rescan the whole thing. Nobody changes their enables anyway.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH 2/3] kvm: arm: VGIC: Scan all IRQs when interrupt group gets enabled
  2019-11-08 17:49 ` [PATCH 2/3] kvm: arm: VGIC: Scan all IRQs when interrupt group gets enabled Andre Przywara
@ 2019-11-10 14:29   ` Marc Zyngier
  2019-11-12  9:36     ` Andre Przywara
  0 siblings, 1 reply; 12+ messages in thread
From: Marc Zyngier @ 2019-11-10 14:29 UTC (permalink / raw)
  To: Andre Przywara; +Cc: kvmarm, linux-arm-kernel, kvm

On Fri,  8 Nov 2019 17:49:51 +0000
Andre Przywara <andre.przywara@arm.com> wrote:

> Our current VGIC emulation code treats the "EnableGrpX" bits in GICD_CTLR
> as a single global interrupt delivery switch, where in fact the GIC
> architecture asks for separate enable controls for the two interrupt groups.
> 
> To implement this properly, we have to slightly adjust our design, to
> *not* let IRQs from a disabled interrupt group be added to the ap_list.
> 
> As a consequence, enabling one group requires us to re-evaluate every
> pending IRQ and potentially add it to its respective ap_list. Similarly
> disabling an interrupt group requires pending IRQs to be removed from
> the ap_list (as long as they have not been activated yet).
> 
> Implement a rather simple, yet not terribly efficient algorithm to
> achieve this: For each VCPU we iterate over all IRQs, checking for
> pending ones and adding them to the list. We hold the ap_list_lock
> for this, to make this atomic from a VCPU's point of view.
> 
> When an interrupt group gets disabled, we can't directly remove affected
> IRQs from the ap_list, as a running VCPU might have already activated
> them, which wouldn't be immediately visible to the host.
> Instead we simply kick all VCPUs, so that they clean their ap_lists
> automatically when running vgic_prune_ap_list().
> 
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
>  virt/kvm/arm/vgic/vgic.c | 88 ++++++++++++++++++++++++++++++++++++----
>  1 file changed, 80 insertions(+), 8 deletions(-)
> 
> diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
> index 3b88e14d239f..28d9ff282017 100644
> --- a/virt/kvm/arm/vgic/vgic.c
> +++ b/virt/kvm/arm/vgic/vgic.c
> @@ -339,6 +339,38 @@ int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
>  	return 0;
>  }
>  
> +/*
> + * Check whether a given IRQ needs to be queued to this ap_list, and do
> + * so if that's the case.
> + * Requires the ap_list_lock to be held (but not the irq lock).
> + *
> + * Returns 1 if that IRQ has been added to the ap_list, and 0 if not.
> + */
> +static int queue_enabled_irq(struct kvm *kvm, struct kvm_vcpu *vcpu,
> +			     int intid)

true/false seems better than 1/0.

> +{
> +	struct vgic_irq *irq = vgic_get_irq(kvm, vcpu, intid);
> +	int ret = 0;
> +
> +	raw_spin_lock(&irq->irq_lock);
> +	if (!irq->vcpu && vcpu == vgic_target_oracle(irq)) {
> +		/*
> +		 * Grab a reference to the irq to reflect the
> +		 * fact that it is now in the ap_list.
> +		 */
> +		vgic_get_irq_kref(irq);
> +		list_add_tail(&irq->ap_list,
> +			      &vcpu->arch.vgic_cpu.ap_list_head);

Two things:
- This should be the job of vgic_queue_irq_unlock. Why are you
  open-coding it?
- What if the interrupt isn't pending? Non-pending, non-active
  interrupts should not be on the AP list!

> +		irq->vcpu = vcpu;
> +
> +		ret = 1;
> +	}
> +	raw_spin_unlock(&irq->irq_lock);
> +	vgic_put_irq(kvm, irq);
> +
> +	return ret;
> +}
> +
>  /*
>   * The group enable status of at least one of the groups has changed.
>   * If enabled is true, at least one of the groups got enabled.
> @@ -346,17 +378,57 @@ int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
>   */
>  void vgic_rescan_pending_irqs(struct kvm *kvm, bool enabled)
>  {
> +	int cpuid;
> +	struct kvm_vcpu *vcpu;
> +
>  	/*
> -	 * TODO: actually scan *all* IRQs of the VM for pending IRQs.
> -	 * If a pending IRQ's group is now enabled, add it to its ap_list.
> -	 * If a pending IRQ's group is now disabled, kick the VCPU to
> -	 * let it remove this IRQ from its ap_list. We have to let the
> -	 * VCPU do it itself, because we can't know the exact state of an
> -	 * IRQ pending on a running VCPU.
> +	 * If no group got enabled, we only have to potentially remove
> +	 * interrupts from ap_lists. We can't do this here, because a running
> +	 * VCPU might have ACKed an IRQ already, which wouldn't immediately
> +	 * be reflected in the ap_list.
> +	 * So kick all VCPUs, which will let them re-evaluate their ap_lists
> +	 * by running vgic_prune_ap_list(), removing no longer enabled
> +	 * IRQs.
> +	 */
> +	if (!enabled) {
> +		vgic_kick_vcpus(kvm);
> +
> +		return;
> +	}
> +
> +	/*
> +	 * At least one group went from disabled to enabled. Now we need
> +	 * to scan *all* IRQs of the VM for newly group-enabled IRQs.
> +	 * If a pending IRQ's group is now enabled, add it to the ap_list.
> +	 *
> +	 * For each VCPU this needs to be atomic, as we need *all* newly
> +	 * enabled IRQs in be in the ap_list to determine the highest
> +	 * priority one.
> +	 * So grab the ap_list_lock, then iterate over all private IRQs and
> +	 * all SPIs. Once the ap_list is updated, kick that VCPU to
> +	 * forward any new IRQs to the guest.
>  	 */
> +	kvm_for_each_vcpu(cpuid, vcpu, kvm) {
> +		unsigned long flags;
> +		int i;
>  
> -	 /* For now just kick all VCPUs, as the old code did. */
> -	vgic_kick_vcpus(kvm);
> +		raw_spin_lock_irqsave(&vcpu->arch.vgic_cpu.ap_list_lock, flags);
> +
> +		for (i = 0; i < VGIC_NR_PRIVATE_IRQS; i++)
> +			queue_enabled_irq(kvm, vcpu, i);
> +
> +		for (i = VGIC_NR_PRIVATE_IRQS;
> +		     i < kvm->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS; i++)
> +			queue_enabled_irq(kvm, vcpu, i);

On top of my questions above, what happens to LPIs? And if a group has
been disabled, how do you retire these interrupts from the AP list?

> +
> +                raw_spin_unlock_irqrestore(&vcpu->arch.vgic_cpu.ap_list_lock,
> +                                           flags);
> +
> +		if (kvm_vgic_vcpu_pending_irq(vcpu)) {
> +			kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
> +			kvm_vcpu_kick(vcpu);
> +		}
> +	}
>  }
>  
>  bool vgic_dist_group_enabled(struct kvm *kvm, int group)

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH 1/3] kvm: arm: VGIC: Prepare for handling two interrupt groups
  2019-11-10 14:15   ` Marc Zyngier
@ 2019-11-12  9:35     ` Andre Przywara
  0 siblings, 0 replies; 12+ messages in thread
From: Andre Przywara @ 2019-11-12  9:35 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, linux-arm-kernel, kvm

On Sun, 10 Nov 2019 14:15:38 +0000
Marc Zyngier <maz@kernel.org> wrote:

Hi Marc,

thanks for having a look!

> On Fri,  8 Nov 2019 17:49:50 +0000
> Andre Przywara <andre.przywara@arm.com> wrote:
> 
> > The GIC specification describes support for two distinct interrupt
> > groups, which can be enabled independently from each other. At the
> > moment our VGIC emulation does not really honour this, so using
> > Group0 interrupts with the GICv3 emulation does not work as expected.
> > 
> > Prepare the code for dealing with the *two* interrupt groups:
> > Allow the two enable bits in the distributor to be handled, by providing
> > accessors. At the moment this still only honours group1, because we
> > need more code to properly differentiate the group enables.
> > Also provide a stub function to later implement the re-scanning of all
> > IRQs, should the group enable bit for a group change.
> > 
> > This patch does not change the current behaviour yet, but just provides
> > the infrastructure bits separately, mostly for review purposes.
> > 
> > Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> > ---
> >  include/kvm/arm_vgic.h           |  7 ++-
> >  virt/kvm/arm/vgic/vgic-debug.c   |  2 +-
> >  virt/kvm/arm/vgic/vgic-mmio-v2.c | 23 ++++++----
> >  virt/kvm/arm/vgic/vgic-mmio-v3.c | 22 +++++----
> >  virt/kvm/arm/vgic/vgic.c         | 76 +++++++++++++++++++++++++++++++-
> >  virt/kvm/arm/vgic/vgic.h         |  3 ++
> >  6 files changed, 110 insertions(+), 23 deletions(-)
> > 
> > diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
> > index 9d53f545a3d5..0f845430c732 100644
> > --- a/include/kvm/arm_vgic.h
> > +++ b/include/kvm/arm_vgic.h
> > @@ -29,6 +29,9 @@
> >  #define VGIC_MIN_LPI		8192
> >  #define KVM_IRQCHIP_NUM_PINS	(1020 - 32)
> >  
> > +#define GIC_GROUP0		0
> > +#define GIC_GROUP1		1  
> 
> Is there any reason why:
> - these are not bit masks (1 << 0, 1 << 1)

I needed them this way to do the shifting of the new enable bit later, in vgic_dist_enable_group(). But I see that this is the only place, so I will try to use masks everywhere.
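
Something like this, maybe (completely untested sketch, assuming the GIC_GROUP* defines become masks and the function just reports whether anything changed):

	int vgic_dist_enable_group(struct kvm *kvm, u32 group_mask, bool status)
	{
		struct vgic_dist *dist = &kvm->arch.vgic;
		u8 old = dist->groups_enable;

		if (status)
			dist->groups_enable |= group_mask;
		else
			dist->groups_enable &= ~group_mask;

		/* report whether anything changed */
		return dist->groups_enable != old;
	}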

> - they are not using architectural constants (GICD_EnableGroup0,
>   GICD_EnableGroup1)
> 
> > +
> >  #define irq_is_ppi(irq) ((irq) >= VGIC_NR_SGIS && (irq) < VGIC_NR_PRIVATE_IRQS)
> >  #define irq_is_spi(irq) ((irq) >= VGIC_NR_PRIVATE_IRQS && \
> >  			 (irq) <= VGIC_MAX_SPI)
> > @@ -227,8 +230,8 @@ struct vgic_dist {
> >  		struct list_head rd_regions;
> >  	};
> >  
> > -	/* distributor enabled */
> > -	bool			enabled;
> > +	/* group0 and/or group1 IRQs enabled on the distributor level */
> > +	u8			groups_enable;  
> 
> The comment is a bit misleading. This should be a bitmap of the enabled
> groups, limited to groups 0 and 1 (short of having more of the stuff).

Which it actually is, it's just not spelled out here.
Do you want to use an explicit C bitfield here or just have the comment amended?
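
For the latter, something like this (sketch):

	/* bitmap of enabled groups: bit 0 = Group0, bit 1 = Group1 */
	u8			groups_enable;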

> >  
> >  	struct vgic_irq		*spis;
> >  
> > diff --git a/virt/kvm/arm/vgic/vgic-debug.c b/virt/kvm/arm/vgic/vgic-debug.c
> > index cc12fe9b2df3..ab64e908024e 100644
> > --- a/virt/kvm/arm/vgic/vgic-debug.c
> > +++ b/virt/kvm/arm/vgic/vgic-debug.c
> > @@ -150,7 +150,7 @@ static void print_dist_state(struct seq_file *s, struct vgic_dist *dist)
> >  	seq_printf(s, "nr_spis:\t%d\n", dist->nr_spis);
> >  	if (v3)
> >  		seq_printf(s, "nr_lpis:\t%d\n", dist->lpi_list_count);
> > -	seq_printf(s, "enabled:\t%d\n", dist->enabled);
> > +	seq_printf(s, "groups enabled:\t%d\n", dist->groups_enable);  
> 
> You could actually print 0 and/or 1, instead of 0, 1, 2 or 3...
> 
> >  	seq_printf(s, "\n");
> >  
> >  	seq_printf(s, "P=pending_latch, L=line_level, A=active\n");
> > diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
> > index 5945f062d749..fe8528bd5b55 100644
> > --- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
> > +++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
> > @@ -26,11 +26,14 @@ static unsigned long vgic_mmio_read_v2_misc(struct kvm_vcpu *vcpu,
> >  					    gpa_t addr, unsigned int len)
> >  {
> >  	struct vgic_dist *vgic = &vcpu->kvm->arch.vgic;
> > -	u32 value;
> > +	u32 value = 0;
> >  
> >  	switch (addr & 0x0c) {
> >  	case GIC_DIST_CTRL:
> > -		value = vgic->enabled ? GICD_ENABLE : 0;
> > +		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP0))
> > +			value |= GICD_ENABLE;
> > +		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP1))
> > +			value |= BIT(1);  
> 
> Time to follow the naming in the spec, like on GICv3.

Which is a bit confusing for GICv2, since it uses bit 0 for group 1 interrupts in dual-security/non-secure mode, and the very same bit for group 0 interrupts in our single-security case. Shall I just keep the existing name for bit 0, and add an explicit GICD_ENABLE_GRP1 name for bit 1? GICD_ENABLE is also used in the irqchip driver.
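
So maybe something like this, with GICD_ENABLE_GRP1 being the hypothetical new name:

	#define GICD_ENABLE		0x1	/* Group0 in our single security state */
	#define GICD_ENABLE_GRP1	0x2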

> >  		break;
> >  	case GIC_DIST_CTR:
> >  		value = vgic->nr_spis + VGIC_NR_PRIVATE_IRQS;
> > @@ -42,8 +45,6 @@ static unsigned long vgic_mmio_read_v2_misc(struct kvm_vcpu *vcpu,
> >  			(vgic->implementation_rev << GICD_IIDR_REVISION_SHIFT) |
> >  			(IMPLEMENTER_ARM << GICD_IIDR_IMPLEMENTER_SHIFT);
> >  		break;
> > -	default:
> > -		return 0;
> >  	}
> >  
> >  	return value;
> > @@ -53,14 +54,18 @@ static void vgic_mmio_write_v2_misc(struct kvm_vcpu *vcpu,
> >  				    gpa_t addr, unsigned int len,
> >  				    unsigned long val)
> >  {
> > -	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
> > -	bool was_enabled = dist->enabled;
> > +	struct kvm *kvm = vcpu->kvm;
> > +	int grp0_changed, grp1_changed;
> >  
> >  	switch (addr & 0x0c) {
> >  	case GIC_DIST_CTRL:
> > -		dist->enabled = val & GICD_ENABLE;
> > -		if (!was_enabled && dist->enabled)
> > -			vgic_kick_vcpus(vcpu->kvm);
> > +		grp0_changed = vgic_dist_enable_group(kvm, GIC_GROUP0,
> > +						      val & GICD_ENABLE);
> > +		grp1_changed = vgic_dist_enable_group(kvm, GIC_GROUP1,
> > +						      val & BIT(1));
> > +		if (grp0_changed || grp1_changed)
> > +			vgic_rescan_pending_irqs(kvm, grp0_changed > 0 ||
> > +						      grp1_changed > 0);
> >  		break;
> >  	case GIC_DIST_CTR:
> >  	case GIC_DIST_IIDR:
> > diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
> > index 7dfd15dbb308..73e3410af332 100644
> > --- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
> > +++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
> > @@ -66,7 +66,9 @@ static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu,
> >  
> >  	switch (addr & 0x0c) {
> >  	case GICD_CTLR:
> > -		if (vgic->enabled)
> > +		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP0))
> > +			value |= GICD_CTLR_ENABLE_SS_G0;
> > +		if (vgic_dist_group_enabled(vcpu->kvm, GIC_GROUP1))
> >  			value |= GICD_CTLR_ENABLE_SS_G1;
> >  		value |= GICD_CTLR_ARE_NS | GICD_CTLR_DS;
> >  		break;
> > @@ -85,8 +87,6 @@ static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu,
> >  			(vgic->implementation_rev << GICD_IIDR_REVISION_SHIFT) |
> >  			(IMPLEMENTER_ARM << GICD_IIDR_IMPLEMENTER_SHIFT);
> >  		break;
> > -	default:
> > -		return 0;
> >  	}
> >  
> >  	return value;
> > @@ -96,15 +96,19 @@ static void vgic_mmio_write_v3_misc(struct kvm_vcpu *vcpu,
> >  				    gpa_t addr, unsigned int len,
> >  				    unsigned long val)
> >  {
> > -	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
> > -	bool was_enabled = dist->enabled;
> > +	struct kvm *kvm = vcpu->kvm;
> > +	int grp0_changed, grp1_changed;
> >  
> >  	switch (addr & 0x0c) {
> >  	case GICD_CTLR:
> > -		dist->enabled = val & GICD_CTLR_ENABLE_SS_G1;
> > -
> > -		if (!was_enabled && dist->enabled)
> > -			vgic_kick_vcpus(vcpu->kvm);
> > +		grp0_changed = vgic_dist_enable_group(kvm, GIC_GROUP0,
> > +						val & GICD_CTLR_ENABLE_SS_G0);
> > +		grp1_changed = vgic_dist_enable_group(kvm, GIC_GROUP1,
> > +						val & GICD_CTLR_ENABLE_SS_G1);
> > +
> > +		if (grp0_changed || grp1_changed)
> > +			vgic_rescan_pending_irqs(kvm, grp0_changed > 0 ||
> > +						      grp1_changed > 0);  
> 
> Aren't you losing some state here? If I have disabled G0 and enabled G1
> at the same time, the result is "enabled", which makes little sense
> when you have two groups. My hunch is that you have to rescan all
> the pending interrupts and match them against the group.

Mmh, I think you are right; in this case the kick routine at the beginning of vgic_rescan_pending_irqs() wouldn't trigger.
I will probably just pass both the old and new bitmasks and let the callee figure that out (or ignore it).
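
Roughly this (untested sketch; vgic_queue_group_enabled_irqs() is a made-up name for the scanning loop from patch 2):

	void vgic_rescan_pending_irqs(struct kvm *kvm, u8 old_groups, u8 new_groups)
	{
		/* some group got disabled: kick the VCPUs to prune their ap_lists */
		if (old_groups & ~new_groups)
			vgic_kick_vcpus(kvm);

		/* some group got enabled: scan for and queue newly enabled IRQs */
		if (new_groups & ~old_groups)
			vgic_queue_group_enabled_irqs(kvm);
	}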

> 
> >  		break;
> >  	case GICD_TYPER:
> >  	case GICD_IIDR:
> > diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
> > index 99b02ca730a8..3b88e14d239f 100644
> > --- a/virt/kvm/arm/vgic/vgic.c
> > +++ b/virt/kvm/arm/vgic/vgic.c
> > @@ -201,6 +201,12 @@ void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active)
> >  				      active));
> >  }
> >  
> > +static bool vgic_irq_is_grp_enabled(struct kvm *kvm, struct vgic_irq *irq)
> > +{
> > +	/* Placeholder implementation until we properly support Group0. */
> > +	return kvm->arch.vgic.groups_enable;
> > +}
> > +
> >  /**
> >   * kvm_vgic_target_oracle - compute the target vcpu for an irq
> >   *
> > @@ -228,7 +234,8 @@ static struct kvm_vcpu *vgic_target_oracle(struct vgic_irq *irq)
> >  	 */
> >  	if (irq->enabled && irq_is_pending(irq)) {
> >  		if (unlikely(irq->target_vcpu &&
> > -			     !irq->target_vcpu->kvm->arch.vgic.enabled))
> > +			     !vgic_irq_is_grp_enabled(irq->target_vcpu->kvm,
> > +						      irq)))
> >  			return NULL;
> >  
> >  		return irq->target_vcpu;
> > @@ -303,6 +310,71 @@ static void vgic_sort_ap_list(struct kvm_vcpu *vcpu)
> >  	list_sort(NULL, &vgic_cpu->ap_list_head, vgic_irq_cmp);
> >  }
> >  
> > +int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
> > +{
> > +	struct vgic_dist *dist = &kvm->arch.vgic;
> > +	u32 group_mask = 1U << group;
> > +	u32 new_bit = (unsigned)status << group;
> > +	u8 was_enabled = dist->groups_enable & group_mask;
> > +
> > +	if (new_bit == was_enabled)
> > +		return 0;
> > +
> > +	/* Group 0 on GICv3 and Group 1 on GICv2 are ignored for now. */
> > +	if (kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
> > +		if (group == GIC_GROUP0)
> > +			return 0;
> > +	} else {
> > +		if (group == GIC_GROUP1)
> > +			return 0;
> > +	}
> > +
> > +	dist->groups_enable &= ~group_mask;
> > +	dist->groups_enable |= new_bit;
> > +	if (new_bit > was_enabled)
> > +		return 1;
> > +	else
> > +		return -1;
> > +
> > +	return 0;
> > +}
> > +
> > +/*
> > + * The group enable status of at least one of the groups has changed.
> > + * If enabled is true, at least one of the groups got enabled.
> > + * Adjust the forwarding state of every IRQ (on ap_list or not) accordingly.
> > + */
> > +void vgic_rescan_pending_irqs(struct kvm *kvm, bool enabled)
> > +{
> > +	/*
> > +	 * TODO: actually scan *all* IRQs of the VM for pending IRQs.
> > +	 * If a pending IRQ's group is now enabled, add it to its ap_list.
> > +	 * If a pending IRQ's group is now disabled, kick the VCPU to
> > +	 * let it remove this IRQ from its ap_list. We have to let the
> > +	 * VCPU do it itself, because we can't know the exact state of an
> > +	 * IRQ pending on a running VCPU.
> > +	 */
> > +
> > +	 /* For now just kick all VCPUs, as the old code did. */
> > +	vgic_kick_vcpus(kvm);
> > +}
> > +
> > +bool vgic_dist_group_enabled(struct kvm *kvm, int group)
> > +{
> > +	struct vgic_dist *dist = &kvm->arch.vgic;
> > +
> > +	/* Group 0 on GICv3 and Group 1 on GICv2 are ignored for now. */
> > +	if (kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
> > +		if (group == GIC_GROUP0)
> > +			return false;
> > +	} else {
> > +		if (group == GIC_GROUP1)
> > +			return false;
> > +	}
> > +
> > +	return dist->groups_enable & (1U << group);
> > +}
> > +
> >  /*
> >   * Only valid injection if changing level for level-triggered IRQs or for a
> >   * rising edge, and in-kernel connected IRQ lines can only be controlled by
> > @@ -949,7 +1021,7 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
> >  	unsigned long flags;
> >  	struct vgic_vmcr vmcr;
> >  
> > -	if (!vcpu->kvm->arch.vgic.enabled)
> > +	if (!vcpu->kvm->arch.vgic.groups_enable)
> >  		return false;
> >  
> >  	if (vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last)
> > diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h
> > index c7fefd6b1c80..219eb23d580d 100644
> > --- a/virt/kvm/arm/vgic/vgic.h
> > +++ b/virt/kvm/arm/vgic/vgic.h
> > @@ -168,7 +168,10 @@ void vgic_irq_set_phys_pending(struct vgic_irq *irq, bool pending);
> >  void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active);
> >  bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq,
> >  			   unsigned long flags);
> > +bool vgic_dist_group_enabled(struct kvm *kvm, int group);
> > +int vgic_dist_enable_group(struct kvm *kvm, int group, bool status);
> >  void vgic_kick_vcpus(struct kvm *kvm);
> > +void vgic_rescan_pending_irqs(struct kvm *kvm, bool enabled);
> >  
> >  int vgic_check_ioaddr(struct kvm *kvm, phys_addr_t *ioaddr,
> >  		      phys_addr_t addr, phys_addr_t alignment);  
> 
> 
> Overall, the logic seems a bit unclear. Please start by cleaning this
> up so that we know what we're talking about, and get rid of the
> (extremely premature) optimizations. If group enables get changed in any
> way, just rescan the whole thing. Nobody changes their enables anyway.

Looking back at it, it seems indeed pretty rubbish. I will see if using masks everywhere can make this cleaner, plus try to avoid too many optimisations.

Cheers,
Andre.


* Re: [PATCH 2/3] kvm: arm: VGIC: Scan all IRQs when interrupt group gets enabled
  2019-11-10 14:29   ` Marc Zyngier
@ 2019-11-12  9:36     ` Andre Przywara
  2019-11-14 11:16       ` Marc Zyngier
  0 siblings, 1 reply; 12+ messages in thread
From: Andre Przywara @ 2019-11-12  9:36 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, linux-arm-kernel, kvm

On Sun, 10 Nov 2019 14:29:14 +0000
Marc Zyngier <maz@kernel.org> wrote:

Hi Marc,

> On Fri,  8 Nov 2019 17:49:51 +0000
> Andre Przywara <andre.przywara@arm.com> wrote:
> 
> > Our current VGIC emulation code treats the "EnableGrpX" bits in GICD_CTLR
> > as a single global interrupt delivery switch, whereas the GIC
> > architecture requires them to be separate for the two interrupt groups.
> > 
> > To implement this properly, we have to slightly adjust our design, to
> > *not* let IRQs from a disabled interrupt group be added to the ap_list.
> > 
> > As a consequence, enabling one group requires us to re-evaluate every
> > pending IRQ and potentially add it to its respective ap_list. Similarly
> > disabling an interrupt group requires pending IRQs to be removed from
> > the ap_list (as long as they have not been activated yet).
> > 
> > Implement a rather simple, yet not terribly efficient algorithm to
> > achieve this: For each VCPU we iterate over all IRQs, checking for
> > pending ones and adding them to the list. We hold the ap_list_lock
> > for this, to make this atomic from a VCPU's point of view.
> > 
> > When an interrupt group gets disabled, we can't directly remove affected
> > IRQs from the ap_list, as a running VCPU might have already activated
> > them, which wouldn't be immediately visible to the host.
> > Instead simply kick all VCPUs, so that they clean their ap_lists
> > automatically when running vgic_prune_ap_list().
> > 
> > Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> > ---
> >  virt/kvm/arm/vgic/vgic.c | 88 ++++++++++++++++++++++++++++++++++++----
> >  1 file changed, 80 insertions(+), 8 deletions(-)
> > 
> > diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
> > index 3b88e14d239f..28d9ff282017 100644
> > --- a/virt/kvm/arm/vgic/vgic.c
> > +++ b/virt/kvm/arm/vgic/vgic.c
> > @@ -339,6 +339,38 @@ int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
> >  	return 0;
> >  }
> >  
> > +/*
> > + * Check whether a given IRQ needs to be queued to this ap_list, and do
> > + * so if that's the case.
> > + * Requires the ap_list_lock to be held (but not the irq lock).
> > + *
> > + * Returns 1 if that IRQ has been added to the ap_list, and 0 if not.
> > + */
> > +static int queue_enabled_irq(struct kvm *kvm, struct kvm_vcpu *vcpu,
> > +			     int intid)  
> 
> true/false seems better than 1/0.

Mmh, indeed. I think I had more in there in an earlier version.

> > +{
> > +	struct vgic_irq *irq = vgic_get_irq(kvm, vcpu, intid);
> > +	int ret = 0;
> > +
> > +	raw_spin_lock(&irq->irq_lock);
> > +	if (!irq->vcpu && vcpu == vgic_target_oracle(irq)) {
> > +		/*
> > +		 * Grab a reference to the irq to reflect the
> > +		 * fact that it is now in the ap_list.
> > +		 */
> > +		vgic_get_irq_kref(irq);
> > +		list_add_tail(&irq->ap_list,
> > +			      &vcpu->arch.vgic_cpu.ap_list_head);  
> 
> Two things:
> - This should be the job of vgic_queue_irq_unlock. Why are you
>   open-coding it?

I was *really* keen on reusing that, but couldn't  for two reasons:
a) the locking code inside vgic_queue_irq_unlock spoils that: It requires the irq_lock to be held, but not the ap_list_lock. Then it takes both locks, but returns with both of them dropped. We need to hold the ap_list_lock all of the time, to prevent any VCPU returning to the HV to interfere with this routine.
b) vgic_queue_irq_unlock() kicks the VCPU already, where I want to just add all of them first, then kick the VCPU at the end.

So I decided to go with the stripped-down version of it, because I didn't dare to touch the original function. I could refactor this "actually add to the list" part of vgic_queue_irq_unlock() into this new function, then call it from both vgic_queue_irq_unlock() and from the new users.
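
For illustration, roughly the refactoring I have in mind (just a sketch, untested, and the helper name is made up):

/*
 * Hypothetical helper, split out of vgic_queue_irq_unlock(): only the
 * "add to the ap_list" part, to be called with both the ap_list_lock
 * and irq->irq_lock already held, and without kicking the VCPU.
 */
static bool vgic_queue_irq_locked(struct kvm *kvm, struct kvm_vcpu *vcpu,
				  struct vgic_irq *irq)
{
	if (irq->vcpu || vcpu != vgic_target_oracle(irq))
		return false;

	/* Take a reference to reflect that the IRQ is now on the ap_list. */
	vgic_get_irq_kref(irq);
	list_add_tail(&irq->ap_list, &vcpu->arch.vgic_cpu.ap_list_head);
	irq->vcpu = vcpu;

	return true;
}

vgic_queue_irq_unlock() would then keep its retry/locking dance and call this helper, while the rescan path could call it directly under the ap_list_lock.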

> - What if the interrupt isn't pending? Non-pending, non-active
>   interrupts should not be on the AP list!

That should be covered by vgic_target_oracle() already, shouldn't it?

> > +		irq->vcpu = vcpu;
> > +
> > +		ret = 1;
> > +	}
> > +	raw_spin_unlock(&irq->irq_lock);
> > +	vgic_put_irq(kvm, irq);
> > +
> > +	return ret;
> > +}
> > +
> >  /*
> >   * The group enable status of at least one of the groups has changed.
> >   * If enabled is true, at least one of the groups got enabled.
> > @@ -346,17 +378,57 @@ int vgic_dist_enable_group(struct kvm *kvm, int group, bool status)
> >   */
> >  void vgic_rescan_pending_irqs(struct kvm *kvm, bool enabled)
> >  {
> > +	int cpuid;
> > +	struct kvm_vcpu *vcpu;
> > +
> >  	/*
> > -	 * TODO: actually scan *all* IRQs of the VM for pending IRQs.
> > -	 * If a pending IRQ's group is now enabled, add it to its ap_list.
> > -	 * If a pending IRQ's group is now disabled, kick the VCPU to
> > -	 * let it remove this IRQ from its ap_list. We have to let the
> > -	 * VCPU do it itself, because we can't know the exact state of an
> > -	 * IRQ pending on a running VCPU.
> > +	 * If no group got enabled, we only have to potentially remove
> > +	 * interrupts from ap_lists. We can't do this here, because a running
> > +	 * VCPU might have ACKed an IRQ already, which wouldn't immediately
> > +	 * be reflected in the ap_list.
> > +	 * So kick all VCPUs, which will let them re-evaluate their ap_lists
> > +	 * by running vgic_prune_ap_list(), removing no longer enabled
> > +	 * IRQs.
> > +	 */
> > +	if (!enabled) {
> > +		vgic_kick_vcpus(kvm);
> > +
> > +		return;
> > +	}
> > +
> > +	/*
> > +	 * At least one group went from disabled to enabled. Now we need
> > +	 * to scan *all* IRQs of the VM for newly group-enabled IRQs.
> > +	 * If a pending IRQ's group is now enabled, add it to the ap_list.
> > +	 *
> > +	 * For each VCPU this needs to be atomic, as we need *all* newly
> > +	 * enabled IRQs to be in the ap_list to determine the highest
> > +	 * priority one.
> > +	 * So grab the ap_list_lock, then iterate over all private IRQs and
> > +	 * all SPIs. Once the ap_list is updated, kick that VCPU to
> > +	 * forward any new IRQs to the guest.
> >  	 */
> > +	kvm_for_each_vcpu(cpuid, vcpu, kvm) {
> > +		unsigned long flags;
> > +		int i;
> >  
> > -	 /* For now just kick all VCPUs, as the old code did. */
> > -	vgic_kick_vcpus(kvm);
> > +		raw_spin_lock_irqsave(&vcpu->arch.vgic_cpu.ap_list_lock, flags);
> > +
> > +		for (i = 0; i < VGIC_NR_PRIVATE_IRQS; i++)
> > +			queue_enabled_irq(kvm, vcpu, i);
> > +
> > +		for (i = VGIC_NR_PRIVATE_IRQS;
> > +		     i < kvm->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS; i++)
> > +			queue_enabled_irq(kvm, vcpu, i);  
> 
> On top of my questions above, what happens to LPIs?

Oh dear. Looks like wishful thinking on my side ;-) Iterating over all interrupts is probably not a good idea anymore.
Do you think this idea of having a list with group-disabled IRQs is a better approach: In vgic_queue_irq_unlock, if a pending IRQ's group is enabled, it goes into the ap_list, if not, it goes into another list instead. Then we would only need to consult this other list when a group gets enabled. Both lists protected by the same ap_list_lock. Does that make sense?
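
As a rough sketch (untested, and "grp_dis_list" is a made-up name), the queuing decision would then become:

/*
 * Sketch only: "grp_dis_list" would be a new per-VCPU list for
 * pending, enabled, but group-disabled IRQs. To be called with the
 * ap_list_lock and irq->irq_lock held.
 */
static void vgic_add_to_queue(struct kvm_vcpu *vcpu, struct vgic_irq *irq)
{
	struct list_head *head = &vcpu->arch.vgic_cpu.ap_list_head;

	if (!vgic_irq_is_grp_enabled(irq))
		head = &vcpu->arch.vgic_cpu.grp_dis_list;

	vgic_get_irq_kref(irq);
	list_add_tail(&irq->ap_list, head);
	irq->vcpu = vcpu;
}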

> And if a group has
> been disabled, how do you retire these interrupts from the AP list?

This is done above: we kick the respective VCPU and rely on vgic_prune_ap_list() to remove them (that uses vgic_target_oracle(), which in turn checks vgic_irq_is_grp_enabled()).
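
Conceptually the oracle then does something like this (a simplified sketch, not the literal code):

/*
 * Simplified sketch of the relevant check in vgic_target_oracle():
 * an IRQ whose group is disabled has no valid target VCPU, so
 * vgic_prune_ap_list() drops it from the ap_list.
 */
static struct kvm_vcpu *vgic_target_oracle(struct vgic_irq *irq)
{
	if (!vgic_irq_is_grp_enabled(irq))
		return NULL;

	/* ... the usual enabled/pending/active checks follow here ... */
	return irq->target_vcpu;
}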

Cheers,
Andre.
 


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/3] kvm: arm: VGIC: Scan all IRQs when interrupt group gets enabled
  2019-11-12  9:36     ` Andre Przywara
@ 2019-11-14 11:16       ` Marc Zyngier
  2019-11-18 14:12         ` Andre Przywara
  0 siblings, 1 reply; 12+ messages in thread
From: Marc Zyngier @ 2019-11-14 11:16 UTC (permalink / raw)
  To: Andre Przywara; +Cc: kvmarm, linux-arm-kernel, kvm

On 2019-11-12 09:36, Andre Przywara wrote:
> On Sun, 10 Nov 2019 14:29:14 +0000
> Marc Zyngier <maz@kernel.org> wrote:
>
> Hi Marc,
>
>> On Fri,  8 Nov 2019 17:49:51 +0000
>> Andre Przywara <andre.przywara@arm.com> wrote:
>>
>> [ ... ]
>>
>> > + * Returns 1 if that IRQ has been added to the ap_list, and 0 if not.
>> > + */
>> > +static int queue_enabled_irq(struct kvm *kvm, struct kvm_vcpu *vcpu,
>> > +			     int intid)
>>
>> true/false seems better than 1/0.
>
> Mmh, indeed. I think I had more in there in an earlier version.
>
>> > +{
>> > +	struct vgic_irq *irq = vgic_get_irq(kvm, vcpu, intid);
>> > +	int ret = 0;
>> > +
>> > +	raw_spin_lock(&irq->irq_lock);
>> > +	if (!irq->vcpu && vcpu == vgic_target_oracle(irq)) {
>> > +		/*
>> > +		 * Grab a reference to the irq to reflect the
>> > +		 * fact that it is now in the ap_list.
>> > +		 */
>> > +		vgic_get_irq_kref(irq);
>> > +		list_add_tail(&irq->ap_list,
>> > +			      &vcpu->arch.vgic_cpu.ap_list_head);
>>
>> Two things:
>> - This should be the job of vgic_queue_irq_unlock. Why are you
>>   open-coding it?
>
> I was *really* keen on reusing that, but couldn't, for two reasons:
> a) The locking code inside vgic_queue_irq_unlock() spoils that: it
> requires the irq_lock to be held, but not the ap_list_lock. It then
> takes both locks, but returns with both of them dropped. We need to
> hold the ap_list_lock the whole time, to prevent any VCPU returning
> to the HV from interfering with this routine.
> b) vgic_queue_irq_unlock() already kicks the VCPU, whereas I want to
> add all IRQs first, then kick the VCPU once at the end.

Indeed, and that is why you need to change the way you queue these
pending, enabled, group-disabled interrupts (see the LPI issue below).

>
> So I decided to go with the stripped-down version of it, because I
> didn't dare to touch the original function. I could refactor this
> "actually add to the list" part of vgic_queue_irq_unlock() into this
> new function, then call it from both vgic_queue_irq_unlock() and from
> the new users.
>
>> - What if the interrupt isn't pending? Non-pending, non-active
>>   interrupts should not be on the AP list!
>
> That should be covered by vgic_target_oracle() already, shouldn't it?

Ah, yes, you're right.

>
>> [ ... ]
>>
>> > +		for (i = 0; i < VGIC_NR_PRIVATE_IRQS; i++)
>> > +			queue_enabled_irq(kvm, vcpu, i);
>> > +
>> > +		for (i = VGIC_NR_PRIVATE_IRQS;
>> > +		     i < kvm->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS; i++)
>> > +			queue_enabled_irq(kvm, vcpu, i);
>>
>> On top of my questions above, what happens to LPIs?
>
> Oh dear. Looks like wishful thinking on my side ;-) Iterating over
> all interrupts is probably not a good idea anymore.
> Do you think this idea of having a list with group-disabled IRQs is a
> better approach: In vgic_queue_irq_unlock, if a pending IRQ's group is
> enabled, it goes into the ap_list, if not, it goes into another list
> instead. Then we would only need to consult this other list when a
> group gets enabled. Both lists protected by the same ap_list_lock.
> Does that make sense?

I think that could work. One queue for each group, holding pending,
enabled, group-disabled interrupts. Pending, disabled interrupts are
not queued anywhere, just like today.

The only snag is per-cpu interrupts. On which queue do they live?
Do you have per-CPU queues? or a global one?

>> And if a group has
>> been disabled, how do you retire these interrupts from the AP list?
>
> This is done above: we kick the respective VCPU and rely on
> vgic_prune_ap_list() to remove them (that uses vgic_target_oracle(),
> which in turn checks vgic_irq_is_grp_enabled()).

But what if the CPU isn't running? Kicking it isn't going to do much,
is it?

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH 2/3] kvm: arm: VGIC: Scan all IRQs when interrupt group gets enabled
  2019-11-14 11:16       ` Marc Zyngier
@ 2019-11-18 14:12         ` Andre Przywara
  2019-11-19  9:40           ` Marc Zyngier
  0 siblings, 1 reply; 12+ messages in thread
From: Andre Przywara @ 2019-11-18 14:12 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, linux-arm-kernel, kvm

On Thu, 14 Nov 2019 11:16:55 +0000
Marc Zyngier <maz@kernel.org> wrote:

Hi Marc,

> On 2019-11-12 09:36, Andre Przywara wrote:
>
> [ ... ]
>
> >> > +		for (i = 0; i < VGIC_NR_PRIVATE_IRQS; i++)
> >> > +			queue_enabled_irq(kvm, vcpu, i);
> >> > +
> >> > +		for (i = VGIC_NR_PRIVATE_IRQS;
> >> > +		     i < kvm->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS; i++)
> >> > +			queue_enabled_irq(kvm, vcpu, i);  
> >>
> >> On top of my questions above, what happens to LPIs?  
> >
> > Oh dear. Looks like wishful thinking on my side ;-) Iterating over
> > all interrupts is probably not a good idea anymore.
> > Do you think this idea of having a list with group-disabled IRQs is a
> > better approach: In vgic_queue_irq_unlock, if a pending IRQ's group is
> > enabled, it goes into the ap_list, if not, it goes into another list
> > instead. Then we would only need to consult this other list when a
> > group gets enabled. Both lists protected by the same ap_list_lock.
> > Does that make sense?  
> 
> I think that could work. One queue for each group, holding pending,
> enabled, group-disabled interrupts. Pending, disabled interrupts are
> not queued anywhere, just like today.
> 
> The only snag is per-cpu interrupts. On which queue do they live?
> Do you have per-CPU queues? or a global one?

Yes, the idea was to have a per-VCPU "grp_dis_list" in addition to the ap_list, reusing the ap_list list_head in struct vgic_irq. vgic_queue_irq_unlock() would put them into *one* of those two lists, depending on their group-enabled status. When a group gets enabled, we just have to transfer the IRQs from grp_dis_list to ap_list.
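
The transfer on a group enable would then be something like this (sketch only, reusing the invented names from above):

/*
 * Sketch: move newly group-enabled IRQs over to the ap_list. Assumes
 * the ap_list_lock is held; "grp_dis_list" is the made-up per-VCPU
 * list from above.
 */
static void vgic_transfer_grp_enabled(struct kvm_vcpu *vcpu)
{
	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
	struct vgic_irq *irq, *tmp;

	list_for_each_entry_safe(irq, tmp, &vgic_cpu->grp_dis_list, ap_list) {
		raw_spin_lock(&irq->irq_lock);
		if (vgic_irq_is_grp_enabled(irq))
			list_move_tail(&irq->ap_list, &vgic_cpu->ap_list_head);
		raw_spin_unlock(&irq->irq_lock);
	}
}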

But fleshing this out I was wondering if it couldn't be much simpler:
We ignore the group-enabled status most of the time, except in vgic_flush_lr_state(). So group-disabled IRQs *would go* to the ap_list (when they are otherwise pending|active and enabled), but would be skipped when eventually populating the LRs. vgic_prune_ap_list would also not touch them, so they would stay in the ap_list (unless removed for other reasons).

That might raise some eyebrows (because we keep IRQs in the ap_list which are not ready), but would require only minimal changes and avoid all kinds of nasty/racy code being added. The only downside I see is that the ap_list could potentially be much longer, but we could change the sorting algorithm if needed to keep group-disabled IRQs at the end, at which point it wouldn't really matter.

Do you see any problem with that approach? Alex seemed to remember that you had an objection against a very similar (if not identical) idea before.

> >> And if a group has
> >> been disabled, how do you retire these interrupts from the AP list?  
> >
> > This is done above: we kick the respective VCPU and rely on
> > vgic_prune_ap_list() to remove them (that uses vgic_target_oracle(),
> > which in turn checks vgic_irq_is_grp_enabled()).  
> 
> But what if the CPU isn't running? Kicking it isn't going to do much,
> is it?

Not directly, but in either approach that would be handled similarly to disabled interrupts: once the VCPU runs, they would *not* end up in LRs (because we check the oracle before), and would be cleaned up in prune() once the guest exits (at least for the original approach).

Cheers,
Andre.


* Re: [PATCH 2/3] kvm: arm: VGIC: Scan all IRQs when interrupt group gets enabled
  2019-11-18 14:12         ` Andre Przywara
@ 2019-11-19  9:40           ` Marc Zyngier
  2019-11-19 14:32             ` Andre Przywara
  0 siblings, 1 reply; 12+ messages in thread
From: Marc Zyngier @ 2019-11-19  9:40 UTC (permalink / raw)
  To: Andre Przywara; +Cc: kvmarm, linux-arm-kernel, kvm

Hi Andre,

On 2019-11-18 14:12, Andre Przywara wrote:
> On Thu, 14 Nov 2019 11:16:55 +0000
> Marc Zyngier <maz@kernel.org> wrote:
>
> Hi Marc,
>
>> [ ... ]
>>
>> >>
>> >> On top of my questions above, what happens to LPIs?
>> >
>> > Oh dear. Looks like wishful thinking on my side ;-) Iterating over
>> > all interrupts is probably not a good idea anymore.
>> > Do you think this idea of having a list with group-disabled IRQs is a
>> > better approach: In vgic_queue_irq_unlock, if a pending IRQ's group is
>> > enabled, it goes into the ap_list, if not, it goes into another list
>> > instead. Then we would only need to consult this other list when a
>> > group gets enabled. Both lists protected by the same ap_list_lock.
>> > Does that make sense?
>>
>> I think that could work. One queue for each group, holding pending,
>> enabled, group-disabled interrupts. Pending, disabled interrupts are
>> not queued anywhere, just like today.
>>
>> The only snag is per-cpu interrupts. On which queue do they live?
>> Do you have per-CPU queues? or a global one?
>
> Yes, the idea was to have a per-VCPU "grp_dis_list" in addition to
> the ap_list, reusing the ap_list list_head in struct vgic_irq.
> vgic_queue_irq_unlock() would put them into *one* of those two lists,
> depending on their group-enabled status. When a group gets enabled, we
> just have to transfer the IRQs from grp_dis_list to ap_list.
>
> But fleshing this out I was wondering if it couldn't be much simpler:
> We ignore the group-enabled status most of the time, except in
> vgic_flush_lr_state(). So group-disabled IRQs *would go* to the
> ap_list (when they are otherwise pending|active and enabled), but
> would be skipped when eventually populating the LRs.
> vgic_prune_ap_list would also not touch them, so they would stay in
> the ap_list (unless removed for other reasons).
>
> That might raise some eyebrows (because we keep IRQs in the ap_list
> which are not ready), but would require only minimal changes and avoid
> all kinds of nasty/racy code being added. The only downside I see is
> that the ap_list could potentially be much longer, but we could change
> the sorting algorithm if needed to keep group-disabled IRQs at the
> end, at which point it wouldn't really matter.
>
> Do you see any problem with that approach? Alex seemed to remember
> that you had an objection against a very similar (if not identical)
> idea before.

My main worry with this is that it causes overhead on the fast path.
Disabled interrupts (for whichever reason they are disabled) shouldn't
have to be evaluated on the fast path.

Take for example kvm_vgic_vcpu_pending_irq(), which we evaluate pretty
often (each time a vcpu wakes up). Do we want to scan a bunch of
group-disabled interrupts there? No.
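
For reference, that path boils down to a walk like this (abridged from memory, not the exact code):

/*
 * Abridged sketch of the core of kvm_vgic_vcpu_pending_irq(): the
 * wakeup path walks the whole ap_list under the lock, so every entry
 * that can never fire right now is pure overhead here.
 */
static bool ap_list_has_pending(struct kvm_vcpu *vcpu)
{
	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
	struct vgic_irq *irq;
	bool pending = false;
	unsigned long flags;

	raw_spin_lock_irqsave(&vgic_cpu->ap_list_lock, flags);
	list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
		raw_spin_lock(&irq->irq_lock);
		pending = irq_is_pending(irq) && irq->enabled;
		raw_spin_unlock(&irq->irq_lock);

		if (pending)
			break;
	}
	raw_spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);

	return pending;
}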

At the end of the day, what we're looking at is a list of disabled,
pending interrupts. They can be disabled for multiple reasons
(group is disabled, or interrupt itself is disabled). But they should
*not* end up on the AP list, because that list has precise semantics.

Your suggestion to add the group-disabled interrupts to the AP list
may be a cool hack, but it is mostly a hack that opens the whole thing
to a bunch of corner cases. Let's not do that.

>> >> And if a group has
>> >> been disabled, how do you retire these interrupts from the AP list?
>> >
>> > This is done above: we kick the respective VCPU and rely on
>> > vgic_prune_ap_list() to remove them (that uses vgic_target_oracle(),
>> > which in turn checks vgic_irq_is_grp_enabled()).
>>
>> But what if the CPU isn't running? Kicking it isn't going to do much,
>> is it?
>
> Not directly, but in either approach that would be handled similarly to
> disabled interrupts: once the VCPU runs, they would *not* end up in
> LRs (because we check the oracle before), and would be cleaned up in
> prune() once the guest exits (at least for the original approach).

I lost track of the original approach already :-/

Try and build the above suggestion. It should follow the same flow as
the enabled, group-enabled interrupts, just with a different list.

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH 2/3] kvm: arm: VGIC: Scan all IRQs when interrupt group gets enabled
  2019-11-19  9:40           ` Marc Zyngier
@ 2019-11-19 14:32             ` Andre Przywara
  0 siblings, 0 replies; 12+ messages in thread
From: Andre Przywara @ 2019-11-19 14:32 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, linux-arm-kernel, kvm

On Tue, 19 Nov 2019 09:40:40 +0000
Marc Zyngier <maz@kernel.org> wrote:

Hi Marc,

[ ... ]

> >>
> >> I think that could work. One queue for each group, holding pending,
> >> enabled, group-disabled interrupts. Pending, disabled interrupts are
> >> not queued anywhere, just like today.
> >>
> >> The only snag is per-cpu interrupts. On which queue do they live?
> >> Do you have per-CPU queues? or a global one?  
> >
> > Yes, the idea was to have a per-VCPU "grp_dis_list" in addition to
> > the ap_list, reusing the ap_list list_head in struct vgic_irq.
> > vgic_queue_irq_unlock() would put them into *one* of those two lists,
> > depending on their group-enabled status. When a group gets enabled, we
> > just have to transfer the IRQs from grp_dis_list to ap_list.
> >
> > But fleshing this out I was wondering if it couldn't be much simpler:
> > We ignore the group-enabled status most of the time, except in
> > vgic_flush_lr_state(). So group-disabled IRQs *would go* to the
> > ap_list (when they are otherwise pending|active and enabled), but
> > would be skipped when eventually populating the LRs.
> > vgic_prune_ap_list would also not touch them, so they would stay in
> > the ap_list (unless removed for other reasons).
> >
> > That might raise some eyebrows (because we keep IRQs in the ap_list
> > which are not ready), but would require only minimal changes and 
> > avoid
> > all kind of nasty/racy code to be added. The only downside I see is
> > that the ap_list could potentially be much longer, but we could 
> > change
> > the sorting algorithm if needed to keep group-disabled IRQs at the
> > end, at which point it wouldn't really matter.
> >
> > Do you see any problem with that approach? Alex seemed to remember
> > that you had an objection against a very similar (if not identical)
> > idea before.  
> 
> My main worry with this is that it causes overhead on the fast path.
> Disabled interrupts (for whichever reason they are disabled) shouldn't
> have to be evaluated on the fast path.
> 
> Take for example kvm_vgic_vcpu_pending_irq(), which we evaluate pretty
> often (each time a vcpu wakes up). Do we want to scan a bunch of
> group-disabled interrupts there? No.
> 
> At the end of the day, what we're looking at is a list of disabled,
> pending interrupts. They can be disabled for multiple reasons
> (group is disabled, or interrupt itself is disabled). But they should
> *not* end up on the AP list, because that list has precise semantics.
> 
> Your suggestion to add the group-disabled interrupts to the AP list
> may be a cool hack, but it is mostly a hack that opens the whole thing
> to a bunch of corner cases. Let's not do that.

I understand what you are saying, and I had similar gripes. It was just too tempting not to give it a try ;-)
 
> >> >> And if a group has
> >> >> been disabled, how do you retire these interrupts from the AP list?
> >> >
> >> > This is done above: we kick the respective VCPU and rely on
> >> > vgic_prune_ap_list() to remove them (that uses vgic_target_oracle(),
> >> > which in turn checks vgic_irq_is_grp_enabled()).  
> >>
> >> But what if the CPU isn't running? Kicking it isn't going to do much,
> >> is it?  
> >
> > Not directly, but in either approach that would be handled similarly to
> > disabled interrupts: once the VCPU runs, they would *not* end up in
> > LRs (because we check the oracle before), and would be cleaned up in
> > prune() once the guest exits (at least for the original approach).  
> 
> I lost track of the original approach already :-/
> 
> Try and build the above suggestion. It should follow the same flow as
> the enabled, group-enabled interrupts, just with a different list.

OK, will do.

Thanks!

Andre.


end of thread, other threads:[~2019-11-19 14:32 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-08 17:49 [PATCH 0/3] kvm: arm: VGIC: Fix interrupt group enablement Andre Przywara
2019-11-08 17:49 ` [PATCH 1/3] kvm: arm: VGIC: Prepare for handling two interrupt groups Andre Przywara
2019-11-10 14:15   ` Marc Zyngier
2019-11-12  9:35     ` Andre Przywara
2019-11-08 17:49 ` [PATCH 2/3] kvm: arm: VGIC: Scan all IRQs when interrupt group gets enabled Andre Przywara
2019-11-10 14:29   ` Marc Zyngier
2019-11-12  9:36     ` Andre Przywara
2019-11-14 11:16       ` Marc Zyngier
2019-11-18 14:12         ` Andre Przywara
2019-11-19  9:40           ` Marc Zyngier
2019-11-19 14:32             ` Andre Przywara
2019-11-08 17:49 ` [PATCH 3/3] kvm: arm: VGIC: Enable proper Group0 handling Andre Przywara
