* [PATCH v2 0/3] Fix race condition and simplify vgic active handler
@ 2017-05-16 10:04 ` Christoffer Dall
  0 siblings, 0 replies; 30+ messages in thread
From: Christoffer Dall @ 2017-05-16 10:04 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: kvm, Marc Zyngier, Eric Auger, Andrew Jones, Christoffer Dall

This is a small series that reworks the fix for a problem addressed by
a previously submitted patch [1].  The previous version did not take
into account that userspace accesses already hold the KVM mutex and
would therefore deadlock.

These patches therefore introduce uaccess read/write functions for the
GICv2 MMIO register descriptor framework, split out the distributor
active register write functionality into separate guest and userspace
access functions, and finally take the KVM mutex on the guest access
path and get rid of unneeded complexity in
active_change_prepare/finish.
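
As a rough sketch of where this ends up (illustrative only; the
dispatch site is paraphrased and "region" is just a local name here),
the new uaccess fields let the userspace access path pick a dedicated
handler and fall back to the guest one:

	if (region->uaccess_write)
		region->uaccess_write(vcpu, addr, len, val);
	else
		region->write(vcpu, addr, len, val);

The guest-facing active register writes then take the mutex around the
prepare/modify/finish sequence:

	mutex_lock(&vcpu->kvm->lock);
	vgic_change_active_prepare(vcpu, intid);
	__vgic_mmio_write_sactive(vcpu, addr, len, val);
	vgic_change_active_finish(vcpu, intid);
	mutex_unlock(&vcpu->kvm->lock);

while the uaccess variants call the __ helpers directly, since
userspace accesses already hold the KVM mutex.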

Applied on top of v4.12-rc1.

[1]: https://lists.cs.columbia.edu/pipermail/kvmarm/2017-May/025542.html

Changes from v1:
 - Separate guest and uaccess distributor active register writes.
 - Only take the mutex in the guest access path.


Christoffer Dall (3):
  KVM: arm/arm64: Allow GICv2 to supply a uaccess register function
  KVM: arm/arm64: Separate guest and uaccess writes to dist {sc}active
  KVM: arm/arm64: Simplify active_change_prepare and plug race

 arch/arm/include/asm/kvm_host.h   |  2 --
 arch/arm64/include/asm/kvm_host.h |  2 --
 virt/kvm/arm/arm.c                | 20 +++---------
 virt/kvm/arm/vgic/vgic-mmio-v2.c  | 24 +++++++-------
 virt/kvm/arm/vgic/vgic-mmio-v3.c  |  8 +++--
 virt/kvm/arm/vgic/vgic-mmio.c     | 68 ++++++++++++++++++++++++++++++---------
 virt/kvm/arm/vgic/vgic-mmio.h     | 12 ++++++-
 virt/kvm/arm/vgic/vgic.c          | 11 ++++---
 8 files changed, 92 insertions(+), 55 deletions(-)

-- 
2.9.0

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 1/3] KVM: arm/arm64: Allow GICv2 to supply a uaccess register function
  2017-05-16 10:04 ` Christoffer Dall
@ 2017-05-16 10:04   ` Christoffer Dall
  -1 siblings, 0 replies; 30+ messages in thread
From: Christoffer Dall @ 2017-05-16 10:04 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel; +Cc: Marc Zyngier, kvm, Christoffer Dall

We are about to differentiate between writes from a VCPU and from
userspace to the GIC's GICD_ISACTIVER and GICD_ICACTIVER registers due
to different synchronization requirements.

Expand the macro used to define a GIC register description so that it
can take uaccess functions as well.

Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
 virt/kvm/arm/vgic/vgic-mmio-v2.c | 22 +++++++++++-----------
 virt/kvm/arm/vgic/vgic-mmio.h    |  4 +++-
 2 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
index 0a4283e..95543a2 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
@@ -296,34 +296,34 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = {
 		vgic_mmio_read_v2_misc, vgic_mmio_write_v2_misc, 12,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_IGROUP,
-		vgic_mmio_read_rao, vgic_mmio_write_wi, 1,
+		vgic_mmio_read_rao, vgic_mmio_write_wi, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ENABLE_SET,
-		vgic_mmio_read_enable, vgic_mmio_write_senable, 1,
+		vgic_mmio_read_enable, vgic_mmio_write_senable, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ENABLE_CLEAR,
-		vgic_mmio_read_enable, vgic_mmio_write_cenable, 1,
+		vgic_mmio_read_enable, vgic_mmio_write_cenable, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PENDING_SET,
-		vgic_mmio_read_pending, vgic_mmio_write_spending, 1,
+		vgic_mmio_read_pending, vgic_mmio_write_spending, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PENDING_CLEAR,
-		vgic_mmio_read_pending, vgic_mmio_write_cpending, 1,
+		vgic_mmio_read_pending, vgic_mmio_write_cpending, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ACTIVE_SET,
-		vgic_mmio_read_active, vgic_mmio_write_sactive, 1,
+		vgic_mmio_read_active, vgic_mmio_write_sactive, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ACTIVE_CLEAR,
-		vgic_mmio_read_active, vgic_mmio_write_cactive, 1,
+		vgic_mmio_read_active, vgic_mmio_write_cactive, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PRI,
-		vgic_mmio_read_priority, vgic_mmio_write_priority, 8,
-		VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
+		vgic_mmio_read_priority, vgic_mmio_write_priority, NULL, NULL,
+		8, VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_TARGET,
-		vgic_mmio_read_target, vgic_mmio_write_target, 8,
+		vgic_mmio_read_target, vgic_mmio_write_target, NULL, NULL, 8,
 		VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_CONFIG,
-		vgic_mmio_read_config, vgic_mmio_write_config, 2,
+		vgic_mmio_read_config, vgic_mmio_write_config, NULL, NULL, 2,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_LENGTH(GIC_DIST_SOFTINT,
 		vgic_mmio_read_raz, vgic_mmio_write_sgir, 4,
diff --git a/virt/kvm/arm/vgic/vgic-mmio.h b/virt/kvm/arm/vgic/vgic-mmio.h
index ea4171a..27924a3 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.h
+++ b/virt/kvm/arm/vgic/vgic-mmio.h
@@ -75,7 +75,7 @@ extern struct kvm_io_device_ops kvm_io_gic_ops;
  * The _WITH_LENGTH version instantiates registers with a fixed length
  * and is mutually exclusive with the _PER_IRQ version.
  */
-#define REGISTER_DESC_WITH_BITS_PER_IRQ(off, rd, wr, bpi, acc)		\
+#define REGISTER_DESC_WITH_BITS_PER_IRQ(off, rd, wr, ur, uw, bpi, acc)	\
 	{								\
 		.reg_offset = off,					\
 		.bits_per_irq = bpi,					\
@@ -83,6 +83,8 @@ extern struct kvm_io_device_ops kvm_io_gic_ops;
 		.access_flags = acc,					\
 		.read = rd,						\
 		.write = wr,						\
+		.uaccess_read = ur,					\
+		.uaccess_write = uw,					\
 	}
 
 #define REGISTER_DESC_WITH_LENGTH(off, rd, wr, length, acc)		\
-- 
2.9.0

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 2/3] KVM: arm/arm64: Separate guest and uaccess writes to dist {sc}active
  2017-05-16 10:04 ` Christoffer Dall
@ 2017-05-16 10:04   ` Christoffer Dall
  -1 siblings, 0 replies; 30+ messages in thread
From: Christoffer Dall @ 2017-05-16 10:04 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: kvm, Marc Zyngier, Eric Auger, Andrew Jones, Christoffer Dall

Factor out the core register modifier functionality from the entry
points referenced by the register description table, and only call the
prepare/finish functions on the guest path, not on the uaccess path.

Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
 virt/kvm/arm/vgic/vgic-mmio-v2.c |  6 +++--
 virt/kvm/arm/vgic/vgic-mmio-v3.c |  8 ++++---
 virt/kvm/arm/vgic/vgic-mmio.c    | 50 ++++++++++++++++++++++++++++++++++------
 virt/kvm/arm/vgic/vgic-mmio.h    |  8 +++++++
 4 files changed, 60 insertions(+), 12 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
index 95543a2..8bafe9a 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
@@ -311,10 +311,12 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = {
 		vgic_mmio_read_pending, vgic_mmio_write_cpending, NULL, NULL, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ACTIVE_SET,
-		vgic_mmio_read_active, vgic_mmio_write_sactive, NULL, NULL, 1,
+		vgic_mmio_read_active, vgic_mmio_write_sactive,
+		NULL, vgic_mmio_uaccess_write_sactive, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ACTIVE_CLEAR,
-		vgic_mmio_read_active, vgic_mmio_write_cactive, NULL, NULL, 1,
+		vgic_mmio_read_active, vgic_mmio_write_cactive,
+		NULL, vgic_mmio_uaccess_write_cactive, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PRI,
 		vgic_mmio_read_priority, vgic_mmio_write_priority, NULL, NULL,
diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
index 99da1a2..23c0d564 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
@@ -456,11 +456,13 @@ static const struct vgic_register_region vgic_v3_dist_registers[] = {
 		vgic_mmio_read_raz, vgic_mmio_write_wi, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ISACTIVER,
-		vgic_mmio_read_active, vgic_mmio_write_sactive, NULL, NULL, 1,
+		vgic_mmio_read_active, vgic_mmio_write_sactive,
+		NULL, vgic_mmio_uaccess_write_sactive, 1,
 		VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ICACTIVER,
-		vgic_mmio_read_active, vgic_mmio_write_cactive, NULL, NULL, 1,
-		VGIC_ACCESS_32bit),
+		vgic_mmio_read_active, vgic_mmio_write_cactive,
+		NULL, vgic_mmio_uaccess_write_cactive,
+		1, VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_IPRIORITYR,
 		vgic_mmio_read_priority, vgic_mmio_write_priority, NULL, NULL,
 		8, VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index 1c17b2a..64cbcb4 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -251,38 +251,74 @@ static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
 		kvm_arm_resume_guest(vcpu->kvm);
 }
 
-void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
-			     gpa_t addr, unsigned int len,
-			     unsigned long val)
+static void __vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
+				      gpa_t addr, unsigned int len,
+				      unsigned long val)
 {
 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
 	int i;
 
-	vgic_change_active_prepare(vcpu, intid);
 	for_each_set_bit(i, &val, len * 8) {
 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
 		vgic_mmio_change_active(vcpu, irq, false);
 		vgic_put_irq(vcpu->kvm, irq);
 	}
-	vgic_change_active_finish(vcpu, intid);
 }
 
-void vgic_mmio_write_sactive(struct kvm_vcpu *vcpu,
+void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
 			     gpa_t addr, unsigned int len,
 			     unsigned long val)
 {
 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
-	int i;
 
 	vgic_change_active_prepare(vcpu, intid);
+
+	__vgic_mmio_write_cactive(vcpu, addr, len, val);
+
+	vgic_change_active_finish(vcpu, intid);
+}
+
+void vgic_mmio_uaccess_write_cactive(struct kvm_vcpu *vcpu,
+				     gpa_t addr, unsigned int len,
+				     unsigned long val)
+{
+	__vgic_mmio_write_cactive(vcpu, addr, len, val);
+}
+
+static void __vgic_mmio_write_sactive(struct kvm_vcpu *vcpu,
+				      gpa_t addr, unsigned int len,
+				      unsigned long val)
+{
+	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+	int i;
+
 	for_each_set_bit(i, &val, len * 8) {
 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
 		vgic_mmio_change_active(vcpu, irq, true);
 		vgic_put_irq(vcpu->kvm, irq);
 	}
+}
+
+void vgic_mmio_write_sactive(struct kvm_vcpu *vcpu,
+			     gpa_t addr, unsigned int len,
+			     unsigned long val)
+{
+	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+
+	vgic_change_active_prepare(vcpu, intid);
+
+	__vgic_mmio_write_sactive(vcpu, addr, len, val);
+
 	vgic_change_active_finish(vcpu, intid);
 }
 
+void vgic_mmio_uaccess_write_sactive(struct kvm_vcpu *vcpu,
+				     gpa_t addr, unsigned int len,
+				     unsigned long val)
+{
+	__vgic_mmio_write_sactive(vcpu, addr, len, val);
+}
+
 unsigned long vgic_mmio_read_priority(struct kvm_vcpu *vcpu,
 				      gpa_t addr, unsigned int len)
 {
diff --git a/virt/kvm/arm/vgic/vgic-mmio.h b/virt/kvm/arm/vgic/vgic-mmio.h
index 27924a3..5693f6df 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.h
+++ b/virt/kvm/arm/vgic/vgic-mmio.h
@@ -167,6 +167,14 @@ void vgic_mmio_write_sactive(struct kvm_vcpu *vcpu,
 			     gpa_t addr, unsigned int len,
 			     unsigned long val);
 
+void vgic_mmio_uaccess_write_cactive(struct kvm_vcpu *vcpu,
+				     gpa_t addr, unsigned int len,
+				     unsigned long val);
+
+void vgic_mmio_uaccess_write_sactive(struct kvm_vcpu *vcpu,
+				     gpa_t addr, unsigned int len,
+				     unsigned long val);
+
 unsigned long vgic_mmio_read_priority(struct kvm_vcpu *vcpu,
 				      gpa_t addr, unsigned int len);
 
-- 
2.9.0

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 3/3] KVM: arm/arm64: Simplify active_change_prepare and plug race
  2017-05-16 10:04 ` Christoffer Dall
@ 2017-05-16 10:04   ` Christoffer Dall
  -1 siblings, 0 replies; 30+ messages in thread
From: Christoffer Dall @ 2017-05-16 10:04 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: kvm, Marc Zyngier, Eric Auger, Andrew Jones, Christoffer Dall

We don't need to stop a specific VCPU when changing the active state,
because private IRQs can only be modified by a running VCPU for the
VCPU itself, and that VCPU is therefore already stopped by the time we
handle the access.

However, it is also possible for two VCPUs to modify the active state
of SPIs at the same time, which can cause a thread to get stuck in the
loop that checks other VCPU threads for a potentially very long time,
or cause it to modify the active state of a running VCPU.  Fix this by
serializing all accesses that set and clear the active state of
interrupts using the KVM mutex.

Reported-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
 arch/arm/include/asm/kvm_host.h   |  2 --
 arch/arm64/include/asm/kvm_host.h |  2 --
 virt/kvm/arm/arm.c                | 20 ++++----------------
 virt/kvm/arm/vgic/vgic-mmio.c     | 18 ++++++++++--------
 virt/kvm/arm/vgic/vgic.c          | 11 ++++++-----
 5 files changed, 20 insertions(+), 33 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index f0e6657..12274d4 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -233,8 +233,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
 struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
 void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);
-void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
-void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
 
 int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
 unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 5e19165..32cbe8a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -333,8 +333,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
 struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
 void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);
-void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
-void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
 
 u64 __kvm_call_hyp(void *hypfn, ...);
 #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 3417e18..3c387fd 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -539,27 +539,15 @@ void kvm_arm_halt_guest(struct kvm *kvm)
 	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
 }
 
-void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
-{
-	vcpu->arch.pause = true;
-	kvm_vcpu_kick(vcpu);
-}
-
-void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
-{
-	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
-
-	vcpu->arch.pause = false;
-	swake_up(wq);
-}
-
 void kvm_arm_resume_guest(struct kvm *kvm)
 {
 	int i;
 	struct kvm_vcpu *vcpu;
 
-	kvm_for_each_vcpu(i, vcpu, kvm)
-		kvm_arm_resume_vcpu(vcpu);
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		vcpu->arch.pause = false;
+		swake_up(kvm_arch_vcpu_wq(vcpu));
+	}
 }
 
 static void vcpu_sleep(struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index 64cbcb4..c1e4bdd 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -231,23 +231,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
  * be migrated while we don't hold the IRQ locks and we don't want to be
  * chasing moving targets.
  *
- * For private interrupts, we only have to make sure the single and only VCPU
- * that can potentially queue the IRQ is stopped.
+ * For private interrupts we don't have to do anything because userspace
+ * accesses to the VGIC state already require all VCPUs to be stopped, and
+ * only the VCPU itself can modify its private interrupts active state, which
+ * guarantees that the VCPU is not running.
  */
 static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
 {
-	if (intid < VGIC_NR_PRIVATE_IRQS)
-		kvm_arm_halt_vcpu(vcpu);
-	else
+	if (intid > VGIC_NR_PRIVATE_IRQS)
 		kvm_arm_halt_guest(vcpu->kvm);
 }
 
 /* See vgic_change_active_prepare */
 static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
 {
-	if (intid < VGIC_NR_PRIVATE_IRQS)
-		kvm_arm_resume_vcpu(vcpu);
-	else
+	if (intid > VGIC_NR_PRIVATE_IRQS)
 		kvm_arm_resume_guest(vcpu->kvm);
 }
 
@@ -271,11 +269,13 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
 {
 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
 
+	mutex_lock(&vcpu->kvm->lock);
 	vgic_change_active_prepare(vcpu, intid);
 
 	__vgic_mmio_write_cactive(vcpu, addr, len, val);
 
 	vgic_change_active_finish(vcpu, intid);
+	mutex_unlock(&vcpu->kvm->lock);
 }
 
 void vgic_mmio_uaccess_write_cactive(struct kvm_vcpu *vcpu,
@@ -305,11 +305,13 @@ void vgic_mmio_write_sactive(struct kvm_vcpu *vcpu,
 {
 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
 
+	mutex_lock(&vcpu->kvm->lock);
 	vgic_change_active_prepare(vcpu, intid);
 
 	__vgic_mmio_write_sactive(vcpu, addr, len, val);
 
 	vgic_change_active_finish(vcpu, intid);
+	mutex_unlock(&vcpu->kvm->lock);
 }
 
 void vgic_mmio_uaccess_write_sactive(struct kvm_vcpu *vcpu,
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 83b24d2..aea080a 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -35,11 +35,12 @@ struct vgic_global kvm_vgic_global_state __ro_after_init = {
 
 /*
  * Locking order is always:
- * its->cmd_lock (mutex)
- *   its->its_lock (mutex)
- *     vgic_cpu->ap_list_lock
- *       kvm->lpi_list_lock
- *         vgic_irq->irq_lock
+ * kvm->lock (mutex)
+ *   its->cmd_lock (mutex)
+ *     its->its_lock (mutex)
+ *       vgic_cpu->ap_list_lock
+ *         kvm->lpi_list_lock
+ *           vgic_irq->irq_lock
  *
  * If you need to take multiple locks, always take the upper lock first,
  * then the lower ones, e.g. first take the its_lock, then the irq_lock.
-- 
2.9.0

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 1/3] KVM: arm/arm64: Allow GICv2 to supply a uaccess register function
  2017-05-16 10:04   ` Christoffer Dall
@ 2017-05-22 15:21     ` Marc Zyngier
  -1 siblings, 0 replies; 30+ messages in thread
From: Marc Zyngier @ 2017-05-22 15:21 UTC (permalink / raw)
  To: Christoffer Dall, kvmarm, linux-arm-kernel; +Cc: kvm, Eric Auger, Andrew Jones

On 16/05/17 11:04, Christoffer Dall wrote:
> We are about to differentiate between writes from a VCPU and from
> userspace to the GIC's GICD_ISACTIVER and GICD_ICACTIVER registers due
> to different synchronization requirements.
> 
> Expand the macro used to define a GIC register description so that it
> can take uaccess functions as well.
> 
> Signed-off-by: Christoffer Dall <cdall@linaro.org>

Acked-by: Marc Zyngier <marc.zyngier@arm.com>

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 2/3] KVM: arm/arm64: Separate guest and uaccess writes to dist {sc}active
  2017-05-16 10:04   ` Christoffer Dall
@ 2017-05-22 15:24     ` Marc Zyngier
  -1 siblings, 0 replies; 30+ messages in thread
From: Marc Zyngier @ 2017-05-22 15:24 UTC (permalink / raw)
  To: Christoffer Dall, kvmarm, linux-arm-kernel; +Cc: kvm, Eric Auger, Andrew Jones

On 16/05/17 11:04, Christoffer Dall wrote:
> Factor out the core register modifier functionality from the entry
> points referenced by the register description table, and only call the
> prepare/finish functions on the guest path, not on the uaccess path.
> 
> Signed-off-by: Christoffer Dall <cdall@linaro.org>

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 3/3] KVM: arm/arm64: Simplify active_change_prepare and plug race
  2017-05-16 10:04   ` Christoffer Dall
@ 2017-05-22 15:30     ` Marc Zyngier
  -1 siblings, 0 replies; 30+ messages in thread
From: Marc Zyngier @ 2017-05-22 15:30 UTC (permalink / raw)
  To: Christoffer Dall, kvmarm, linux-arm-kernel; +Cc: kvm, Eric Auger, Andrew Jones

On 16/05/17 11:04, Christoffer Dall wrote:
> We don't need to stop a specific VCPU when changing the active state,
> because private IRQs can only be modified by a running VCPU for the
> VCPU itself, and that VCPU is therefore already stopped by the time we
> handle the access.
> 
> However, it is also possible for two VCPUs to modify the active state
> of SPIs at the same time, which can cause a thread to get stuck in the
> loop that checks other VCPU threads for a potentially very long time,
> or cause it to modify the active state of a running VCPU.  Fix this by
> serializing all accesses that set and clear the active state of
> interrupts using the KVM mutex.
> 
> Reported-by: Andrew Jones <drjones@redhat.com>
> Signed-off-by: Christoffer Dall <cdall@linaro.org>
> ---
>  arch/arm/include/asm/kvm_host.h   |  2 --
>  arch/arm64/include/asm/kvm_host.h |  2 --
>  virt/kvm/arm/arm.c                | 20 ++++----------------
>  virt/kvm/arm/vgic/vgic-mmio.c     | 18 ++++++++++--------
>  virt/kvm/arm/vgic/vgic.c          | 11 ++++++-----
>  5 files changed, 20 insertions(+), 33 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index f0e6657..12274d4 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -233,8 +233,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
>  struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
>  void kvm_arm_halt_guest(struct kvm *kvm);
>  void kvm_arm_resume_guest(struct kvm *kvm);
> -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
>  
>  int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
>  unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 5e19165..32cbe8a 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -333,8 +333,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
>  struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
>  void kvm_arm_halt_guest(struct kvm *kvm);
>  void kvm_arm_resume_guest(struct kvm *kvm);
> -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
>  
>  u64 __kvm_call_hyp(void *hypfn, ...);
>  #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 3417e18..3c387fd 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -539,27 +539,15 @@ void kvm_arm_halt_guest(struct kvm *kvm)
>  	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
>  }
>  
> -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
> -{
> -	vcpu->arch.pause = true;
> -	kvm_vcpu_kick(vcpu);
> -}
> -
> -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
> -{
> -	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
> -
> -	vcpu->arch.pause = false;
> -	swake_up(wq);
> -}
> -
>  void kvm_arm_resume_guest(struct kvm *kvm)
>  {
>  	int i;
>  	struct kvm_vcpu *vcpu;
>  
> -	kvm_for_each_vcpu(i, vcpu, kvm)
> -		kvm_arm_resume_vcpu(vcpu);
> +	kvm_for_each_vcpu(i, vcpu, kvm) {
> +		vcpu->arch.pause = false;
> +		swake_up(kvm_arch_vcpu_wq(vcpu));
> +	}
>  }
>  
>  static void vcpu_sleep(struct kvm_vcpu *vcpu)
> diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> index 64cbcb4..c1e4bdd 100644
> --- a/virt/kvm/arm/vgic/vgic-mmio.c
> +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> @@ -231,23 +231,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
>   * be migrated while we don't hold the IRQ locks and we don't want to be
>   * chasing moving targets.
>   *
> - * For private interrupts, we only have to make sure the single and only VCPU
> - * that can potentially queue the IRQ is stopped.
> + * For private interrupts we don't have to do anything because userspace
> + * accesses to the VGIC state already require all VCPUs to be stopped, and
> + * only the VCPU itself can modify its private interrupts active state, which
> + * guarantees that the VCPU is not running.
>   */
>  static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
>  {
> -	if (intid < VGIC_NR_PRIVATE_IRQS)
> -		kvm_arm_halt_vcpu(vcpu);
> -	else
> +	if (intid > VGIC_NR_PRIVATE_IRQS)
>  		kvm_arm_halt_guest(vcpu->kvm);
>  }
>  
>  /* See vgic_change_active_prepare */
>  static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
>  {
> -	if (intid < VGIC_NR_PRIVATE_IRQS)
> -		kvm_arm_resume_vcpu(vcpu);
> -	else
> +	if (intid > VGIC_NR_PRIVATE_IRQS)
>  		kvm_arm_resume_guest(vcpu->kvm);
>  }
>  
> @@ -271,11 +269,13 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
>  {
>  	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
>  
> +	mutex_lock(&vcpu->kvm->lock);
>  	vgic_change_active_prepare(vcpu, intid);
>  
>  	__vgic_mmio_write_cactive(vcpu, addr, len, val);
>  
>  	vgic_change_active_finish(vcpu, intid);
> +	mutex_unlock(&vcpu->kvm->lock);

Any reason not to move the lock/unlock calls to prepare/finish? Also, do
we need to take that mutex if intid is a PPI?

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 3/3] KVM: arm/arm64: Simplify active_change_prepare and plug race
  2017-05-22 15:30     ` Marc Zyngier
@ 2017-05-23  8:43       ` Christoffer Dall
  -1 siblings, 0 replies; 30+ messages in thread
From: Christoffer Dall @ 2017-05-23  8:43 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, linux-arm-kernel, kvm

On Mon, May 22, 2017 at 04:30:22PM +0100, Marc Zyngier wrote:
> On 16/05/17 11:04, Christoffer Dall wrote:
> > We don't need to stop a specific VCPU when changing the active state,
> > because private IRQs can only be modified by a running VCPU for the
> > VCPU itself, and that VCPU is therefore already stopped by the time we
> > handle the access.
> > 
> > However, it is also possible for two VCPUs to modify the active state
> > of SPIs at the same time, which can cause a thread to get stuck in the
> > loop that checks other VCPU threads for a potentially very long time,
> > or cause it to modify the active state of a running VCPU.  Fix this by
> > serializing all accesses that set and clear the active state of
> > interrupts using the KVM mutex.
> > 
> > Reported-by: Andrew Jones <drjones@redhat.com>
> > Signed-off-by: Christoffer Dall <cdall@linaro.org>
> > ---
> >  arch/arm/include/asm/kvm_host.h   |  2 --
> >  arch/arm64/include/asm/kvm_host.h |  2 --
> >  virt/kvm/arm/arm.c                | 20 ++++----------------
> >  virt/kvm/arm/vgic/vgic-mmio.c     | 18 ++++++++++--------
> >  virt/kvm/arm/vgic/vgic.c          | 11 ++++++-----
> >  5 files changed, 20 insertions(+), 33 deletions(-)
> > 
> > diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> > index f0e6657..12274d4 100644
> > --- a/arch/arm/include/asm/kvm_host.h
> > +++ b/arch/arm/include/asm/kvm_host.h
> > @@ -233,8 +233,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> >  struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
> >  void kvm_arm_halt_guest(struct kvm *kvm);
> >  void kvm_arm_resume_guest(struct kvm *kvm);
> > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
> >  
> >  int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
> >  unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 5e19165..32cbe8a 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -333,8 +333,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> >  struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
> >  void kvm_arm_halt_guest(struct kvm *kvm);
> >  void kvm_arm_resume_guest(struct kvm *kvm);
> > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
> >  
> >  u64 __kvm_call_hyp(void *hypfn, ...);
> >  #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
> > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > index 3417e18..3c387fd 100644
> > --- a/virt/kvm/arm/arm.c
> > +++ b/virt/kvm/arm/arm.c
> > @@ -539,27 +539,15 @@ void kvm_arm_halt_guest(struct kvm *kvm)
> >  	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
> >  }
> >  
> > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
> > -{
> > -	vcpu->arch.pause = true;
> > -	kvm_vcpu_kick(vcpu);
> > -}
> > -
> > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
> > -{
> > -	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
> > -
> > -	vcpu->arch.pause = false;
> > -	swake_up(wq);
> > -}
> > -
> >  void kvm_arm_resume_guest(struct kvm *kvm)
> >  {
> >  	int i;
> >  	struct kvm_vcpu *vcpu;
> >  
> > -	kvm_for_each_vcpu(i, vcpu, kvm)
> > -		kvm_arm_resume_vcpu(vcpu);
> > +	kvm_for_each_vcpu(i, vcpu, kvm) {
> > +		vcpu->arch.pause = false;
> > +		swake_up(kvm_arch_vcpu_wq(vcpu));
> > +	}
> >  }
> >  
> >  static void vcpu_sleep(struct kvm_vcpu *vcpu)
> > diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> > index 64cbcb4..c1e4bdd 100644
> > --- a/virt/kvm/arm/vgic/vgic-mmio.c
> > +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> > @@ -231,23 +231,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
> >   * be migrated while we don't hold the IRQ locks and we don't want to be
> >   * chasing moving targets.
> >   *
> > - * For private interrupts, we only have to make sure the single and only VCPU
> > - * that can potentially queue the IRQ is stopped.
> > + * For private interrupts we don't have to do anything because userspace
> > + * accesses to the VGIC state already require all VCPUs to be stopped, and
> > + * only the VCPU itself can modify its private interrupts active state, which
> > + * guarantees that the VCPU is not running.
> >   */
> >  static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
> >  {
> > -	if (intid < VGIC_NR_PRIVATE_IRQS)
> > -		kvm_arm_halt_vcpu(vcpu);
> > -	else
> > +	if (intid > VGIC_NR_PRIVATE_IRQS)
> >  		kvm_arm_halt_guest(vcpu->kvm);
> >  }
> >  
> >  /* See vgic_change_active_prepare */
> >  static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
> >  {
> > -	if (intid < VGIC_NR_PRIVATE_IRQS)
> > -		kvm_arm_resume_vcpu(vcpu);
> > -	else
> > +	if (intid > VGIC_NR_PRIVATE_IRQS)
> >  		kvm_arm_resume_guest(vcpu->kvm);
> >  }
> >  
> > @@ -271,11 +269,13 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
> >  {
> >  	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
> >  
> > +	mutex_lock(&vcpu->kvm->lock);
> >  	vgic_change_active_prepare(vcpu, intid);
> >  
> >  	__vgic_mmio_write_cactive(vcpu, addr, len, val);
> >  
> >  	vgic_change_active_finish(vcpu, intid);
> > +	mutex_unlock(&vcpu->kvm->lock);
> 
> Any reason not to move the lock/unlock calls to prepare/finish? Also, do
> we need to take that mutex if intid is a PPI?

I guess we strictly don't need to take the mutex if it's a PPI, no.

But I actually preferred this symmetry because you can easily tell we
don't have a bug (famous last words) by locking and unlocking the mutex
in the same function.

I don't feel strongly about it though, so I can move it if you prefer
it.
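
Just to illustrate, moving it would look roughly like this (an untested
sketch, not part of the posted series, keeping the helper names from the
patch):

static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
{
	mutex_lock(&vcpu->kvm->lock);
	if (intid > VGIC_NR_PRIVATE_IRQS)
		kvm_arm_halt_guest(vcpu->kvm);
}

static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
{
	if (intid > VGIC_NR_PRIVATE_IRQS)
		kvm_arm_resume_guest(vcpu->kvm);
	mutex_unlock(&vcpu->kvm->lock);
}

vgic_mmio_write_cactive() and friends would then drop their own
mutex_lock()/mutex_unlock() pair, at the cost of the acquire and release
no longer being visible in the same function.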

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 3/3] KVM: arm/arm64: Simplify active_change_prepare and plug race
  2017-05-23  8:43       ` Christoffer Dall
@ 2017-05-23  9:05         ` Marc Zyngier
  -1 siblings, 0 replies; 30+ messages in thread
From: Marc Zyngier @ 2017-05-23  9:05 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: kvmarm, linux-arm-kernel, kvm, Eric Auger, Andrew Jones

On 23/05/17 09:43, Christoffer Dall wrote:
> On Mon, May 22, 2017 at 04:30:22PM +0100, Marc Zyngier wrote:
>> On 16/05/17 11:04, Christoffer Dall wrote:
>>> We don't need to stop a specific VCPU when changing the active state,
>>> because private IRQs can only be modified by a running VCPU for the
>>> VCPU itself and it is therefore already stopped.
>>>
>>> However, it is also possible for two VCPUs to be modifying the active
>>> state of SPIs at the same time, which can cause the thread being stuck
>>> in the loop that checks other VCPU threads for a potentially very long
>>> time, or to modify the active state of a running VCPU.  Fix this by
>>> serializing all accesses to setting and clearing the active state of
>>> interrupts using the KVM mutex.
>>>
>>> Reported-by: Andrew Jones <drjones@redhat.com>
>>> Signed-off-by: Christoffer Dall <cdall@linaro.org>
>>> ---
>>>  arch/arm/include/asm/kvm_host.h   |  2 --
>>>  arch/arm64/include/asm/kvm_host.h |  2 --
>>>  virt/kvm/arm/arm.c                | 20 ++++----------------
>>>  virt/kvm/arm/vgic/vgic-mmio.c     | 18 ++++++++++--------
>>>  virt/kvm/arm/vgic/vgic.c          | 11 ++++++-----
>>>  5 files changed, 20 insertions(+), 33 deletions(-)
>>>
>>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>>> index f0e6657..12274d4 100644
>>> --- a/arch/arm/include/asm/kvm_host.h
>>> +++ b/arch/arm/include/asm/kvm_host.h
>>> @@ -233,8 +233,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
>>>  struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
>>>  void kvm_arm_halt_guest(struct kvm *kvm);
>>>  void kvm_arm_resume_guest(struct kvm *kvm);
>>> -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
>>> -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
>>>  
>>>  int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
>>>  unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>> index 5e19165..32cbe8a 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -333,8 +333,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
>>>  struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
>>>  void kvm_arm_halt_guest(struct kvm *kvm);
>>>  void kvm_arm_resume_guest(struct kvm *kvm);
>>> -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
>>> -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
>>>  
>>>  u64 __kvm_call_hyp(void *hypfn, ...);
>>>  #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
>>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>>> index 3417e18..3c387fd 100644
>>> --- a/virt/kvm/arm/arm.c
>>> +++ b/virt/kvm/arm/arm.c
>>> @@ -539,27 +539,15 @@ void kvm_arm_halt_guest(struct kvm *kvm)
>>>  	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
>>>  }
>>>  
>>> -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
>>> -{
>>> -	vcpu->arch.pause = true;
>>> -	kvm_vcpu_kick(vcpu);
>>> -}
>>> -
>>> -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
>>> -{
>>> -	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
>>> -
>>> -	vcpu->arch.pause = false;
>>> -	swake_up(wq);
>>> -}
>>> -
>>>  void kvm_arm_resume_guest(struct kvm *kvm)
>>>  {
>>>  	int i;
>>>  	struct kvm_vcpu *vcpu;
>>>  
>>> -	kvm_for_each_vcpu(i, vcpu, kvm)
>>> -		kvm_arm_resume_vcpu(vcpu);
>>> +	kvm_for_each_vcpu(i, vcpu, kvm) {
>>> +		vcpu->arch.pause = false;
>>> +		swake_up(kvm_arch_vcpu_wq(vcpu));
>>> +	}
>>>  }
>>>  
>>>  static void vcpu_sleep(struct kvm_vcpu *vcpu)
>>> diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
>>> index 64cbcb4..c1e4bdd 100644
>>> --- a/virt/kvm/arm/vgic/vgic-mmio.c
>>> +++ b/virt/kvm/arm/vgic/vgic-mmio.c
>>> @@ -231,23 +231,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
>>>   * be migrated while we don't hold the IRQ locks and we don't want to be
>>>   * chasing moving targets.
>>>   *
>>> - * For private interrupts, we only have to make sure the single and only VCPU
>>> - * that can potentially queue the IRQ is stopped.
>>> + * For private interrupts we don't have to do anything because userspace
>>> + * accesses to the VGIC state already require all VCPUs to be stopped, and
>>> + * only the VCPU itself can modify its private interrupts active state, which
>>> + * guarantees that the VCPU is not running.
>>>   */
>>>  static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
>>>  {
>>> -	if (intid < VGIC_NR_PRIVATE_IRQS)
>>> -		kvm_arm_halt_vcpu(vcpu);
>>> -	else
>>> +	if (intid > VGIC_NR_PRIVATE_IRQS)
>>>  		kvm_arm_halt_guest(vcpu->kvm);
>>>  }
>>>  
>>>  /* See vgic_change_active_prepare */
>>>  static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
>>>  {
>>> -	if (intid < VGIC_NR_PRIVATE_IRQS)
>>> -		kvm_arm_resume_vcpu(vcpu);
>>> -	else
>>> +	if (intid > VGIC_NR_PRIVATE_IRQS)
>>>  		kvm_arm_resume_guest(vcpu->kvm);
>>>  }
>>>  
>>> @@ -271,11 +269,13 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
>>>  {
>>>  	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
>>>  
>>> +	mutex_lock(&vcpu->kvm->lock);
>>>  	vgic_change_active_prepare(vcpu, intid);
>>>  
>>>  	__vgic_mmio_write_cactive(vcpu, addr, len, val);
>>>  
>>>  	vgic_change_active_finish(vcpu, intid);
>>> +	mutex_unlock(&vcpu->kvm->lock);
>>
>> Any reason not to move the lock/unlock calls to prepare/finish? Also, do
>> we need to take that mutex if intid is a PPI?
> 
> I guess we strictly don't need to take the mutex if it's a PPI, no.
> 
> But I actually preferred this symmetry because you can easily tell we
> don't have a bug (famous last words) by locking and unlocking the mutex
> in the same function.
> 
> I don't feel strongly about it though, so I can move it if you prefer
> it.

No, that's fine, I just wanted to check whether my understanding was
correct.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 3/3] KVM: arm/arm64: Simplify active_change_prepare and plug race
  2017-05-23  9:05         ` Marc Zyngier
@ 2017-05-23  9:56           ` Christoffer Dall
  -1 siblings, 0 replies; 30+ messages in thread
From: Christoffer Dall @ 2017-05-23  9:56 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvmarm, linux-arm-kernel, kvm, Eric Auger, Andrew Jones

On Tue, May 23, 2017 at 10:05:13AM +0100, Marc Zyngier wrote:
> On 23/05/17 09:43, Christoffer Dall wrote:
> > On Mon, May 22, 2017 at 04:30:22PM +0100, Marc Zyngier wrote:
> >> On 16/05/17 11:04, Christoffer Dall wrote:
> >>> We don't need to stop a specific VCPU when changing the active state,
> >>> because private IRQs can only be modified by a running VCPU for the
> >>> VCPU itself and it is therefore already stopped.
> >>>
> >>> However, it is also possible for two VCPUs to be modifying the active
> >>> state of SPIs at the same time, which can cause the thread being stuck
> >>> in the loop that checks other VCPU threads for a potentially very long
> >>> time, or to modify the active state of a running VCPU.  Fix this by
> >>> serializing all accesses to setting and clearing the active state of
> >>> interrupts using the KVM mutex.
> >>>
> >>> Reported-by: Andrew Jones <drjones@redhat.com>
> >>> Signed-off-by: Christoffer Dall <cdall@linaro.org>
> >>> ---
> >>>  arch/arm/include/asm/kvm_host.h   |  2 --
> >>>  arch/arm64/include/asm/kvm_host.h |  2 --
> >>>  virt/kvm/arm/arm.c                | 20 ++++----------------
> >>>  virt/kvm/arm/vgic/vgic-mmio.c     | 18 ++++++++++--------
> >>>  virt/kvm/arm/vgic/vgic.c          | 11 ++++++-----
> >>>  5 files changed, 20 insertions(+), 33 deletions(-)
> >>>
> >>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> >>> index f0e6657..12274d4 100644
> >>> --- a/arch/arm/include/asm/kvm_host.h
> >>> +++ b/arch/arm/include/asm/kvm_host.h
> >>> @@ -233,8 +233,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> >>>  struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
> >>>  void kvm_arm_halt_guest(struct kvm *kvm);
> >>>  void kvm_arm_resume_guest(struct kvm *kvm);
> >>> -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> >>> -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
> >>>  
> >>>  int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
> >>>  unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
> >>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> >>> index 5e19165..32cbe8a 100644
> >>> --- a/arch/arm64/include/asm/kvm_host.h
> >>> +++ b/arch/arm64/include/asm/kvm_host.h
> >>> @@ -333,8 +333,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> >>>  struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
> >>>  void kvm_arm_halt_guest(struct kvm *kvm);
> >>>  void kvm_arm_resume_guest(struct kvm *kvm);
> >>> -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> >>> -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
> >>>  
> >>>  u64 __kvm_call_hyp(void *hypfn, ...);
> >>>  #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
> >>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> >>> index 3417e18..3c387fd 100644
> >>> --- a/virt/kvm/arm/arm.c
> >>> +++ b/virt/kvm/arm/arm.c
> >>> @@ -539,27 +539,15 @@ void kvm_arm_halt_guest(struct kvm *kvm)
> >>>  	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
> >>>  }
> >>>  
> >>> -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
> >>> -{
> >>> -	vcpu->arch.pause = true;
> >>> -	kvm_vcpu_kick(vcpu);
> >>> -}
> >>> -
> >>> -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
> >>> -{
> >>> -	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
> >>> -
> >>> -	vcpu->arch.pause = false;
> >>> -	swake_up(wq);
> >>> -}
> >>> -
> >>>  void kvm_arm_resume_guest(struct kvm *kvm)
> >>>  {
> >>>  	int i;
> >>>  	struct kvm_vcpu *vcpu;
> >>>  
> >>> -	kvm_for_each_vcpu(i, vcpu, kvm)
> >>> -		kvm_arm_resume_vcpu(vcpu);
> >>> +	kvm_for_each_vcpu(i, vcpu, kvm) {
> >>> +		vcpu->arch.pause = false;
> >>> +		swake_up(kvm_arch_vcpu_wq(vcpu));
> >>> +	}
> >>>  }
> >>>  
> >>>  static void vcpu_sleep(struct kvm_vcpu *vcpu)
> >>> diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> >>> index 64cbcb4..c1e4bdd 100644
> >>> --- a/virt/kvm/arm/vgic/vgic-mmio.c
> >>> +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> >>> @@ -231,23 +231,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
> >>>   * be migrated while we don't hold the IRQ locks and we don't want to be
> >>>   * chasing moving targets.
> >>>   *
> >>> - * For private interrupts, we only have to make sure the single and only VCPU
> >>> - * that can potentially queue the IRQ is stopped.
> >>> + * For private interrupts we don't have to do anything because userspace
> >>> + * accesses to the VGIC state already require all VCPUs to be stopped, and
> >>> + * only the VCPU itself can modify its private interrupts active state, which
> >>> + * guarantees that the VCPU is not running.
> >>>   */
> >>>  static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
> >>>  {
> >>> -	if (intid < VGIC_NR_PRIVATE_IRQS)
> >>> -		kvm_arm_halt_vcpu(vcpu);
> >>> -	else
> >>> +	if (intid > VGIC_NR_PRIVATE_IRQS)
> >>>  		kvm_arm_halt_guest(vcpu->kvm);
> >>>  }
> >>>  
> >>>  /* See vgic_change_active_prepare */
> >>>  static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
> >>>  {
> >>> -	if (intid < VGIC_NR_PRIVATE_IRQS)
> >>> -		kvm_arm_resume_vcpu(vcpu);
> >>> -	else
> >>> +	if (intid > VGIC_NR_PRIVATE_IRQS)
> >>>  		kvm_arm_resume_guest(vcpu->kvm);
> >>>  }
> >>>  
> >>> @@ -271,11 +269,13 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
> >>>  {
> >>>  	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
> >>>  
> >>> +	mutex_lock(&vcpu->kvm->lock);
> >>>  	vgic_change_active_prepare(vcpu, intid);
> >>>  
> >>>  	__vgic_mmio_write_cactive(vcpu, addr, len, val);
> >>>  
> >>>  	vgic_change_active_finish(vcpu, intid);
> >>> +	mutex_unlock(&vcpu->kvm->lock);
> >>
> >> Any reason not to move the lock/unlock calls to prepare/finish? Also, do
> >> we need to take that mutex if intid is a PPI?
> > 
> > I guess we strictly don't need to take the mutex if it's a PPI, no.
> > 
> > But I actually preferred this symmetry because you can easily tell we
> > don't have a bug (famous last words) by locking and unlocking the mutex
> > in the same function.
> > 
> > I don't feel strongly about it though, so I can move it if you prefer
> > it.
> 
> No, that's fine, I just wanted to check whether my understanding was
> correct.
> 
> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> 

Thanks.

So assuming Drew's patches will go on top of these, should we merge this
as fixes to -rcX, or queue them for v4.13?

I'm leaning towards the latter because I don't think we've seen these
races do something bad in the wild, and they're probably not going to be
backportable to stable anyway.  Thoughts?

-Christoffer

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 3/3] KVM: arm/arm64: Simplify active_change_prepare and plug race
  2017-05-23  9:56           ` Christoffer Dall
@ 2017-05-23 10:36             ` Marc Zyngier
  -1 siblings, 0 replies; 30+ messages in thread
From: Marc Zyngier @ 2017-05-23 10:36 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: kvmarm, linux-arm-kernel, kvm, Eric Auger, Andrew Jones

On 23/05/17 10:56, Christoffer Dall wrote:
> On Tue, May 23, 2017 at 10:05:13AM +0100, Marc Zyngier wrote:
>> On 23/05/17 09:43, Christoffer Dall wrote:
>>> On Mon, May 22, 2017 at 04:30:22PM +0100, Marc Zyngier wrote:
>>>> On 16/05/17 11:04, Christoffer Dall wrote:
>>>>> We don't need to stop a specific VCPU when changing the active state,
>>>>> because private IRQs can only be modified by a running VCPU for the
>>>>> VCPU itself and it is therefore already stopped.
>>>>>
>>>>> However, it is also possible for two VCPUs to be modifying the active
>>>>> state of SPIs at the same time, which can cause the thread being stuck
>>>>> in the loop that checks other VCPU threads for a potentially very long
>>>>> time, or to modify the active state of a running VCPU.  Fix this by
>>>>> serializing all accesses to setting and clearing the active state of
>>>>> interrupts using the KVM mutex.
>>>>>
>>>>> Reported-by: Andrew Jones <drjones@redhat.com>
>>>>> Signed-off-by: Christoffer Dall <cdall@linaro.org>
>>>>> ---
>>>>>  arch/arm/include/asm/kvm_host.h   |  2 --
>>>>>  arch/arm64/include/asm/kvm_host.h |  2 --
>>>>>  virt/kvm/arm/arm.c                | 20 ++++----------------
>>>>>  virt/kvm/arm/vgic/vgic-mmio.c     | 18 ++++++++++--------
>>>>>  virt/kvm/arm/vgic/vgic.c          | 11 ++++++-----
>>>>>  5 files changed, 20 insertions(+), 33 deletions(-)
>>>>>
>>>>> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
>>>>> index f0e6657..12274d4 100644
>>>>> --- a/arch/arm/include/asm/kvm_host.h
>>>>> +++ b/arch/arm/include/asm/kvm_host.h
>>>>> @@ -233,8 +233,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
>>>>>  struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
>>>>>  void kvm_arm_halt_guest(struct kvm *kvm);
>>>>>  void kvm_arm_resume_guest(struct kvm *kvm);
>>>>> -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
>>>>> -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
>>>>>  
>>>>>  int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
>>>>>  unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
>>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>>>> index 5e19165..32cbe8a 100644
>>>>> --- a/arch/arm64/include/asm/kvm_host.h
>>>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>>>> @@ -333,8 +333,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
>>>>>  struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
>>>>>  void kvm_arm_halt_guest(struct kvm *kvm);
>>>>>  void kvm_arm_resume_guest(struct kvm *kvm);
>>>>> -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
>>>>> -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
>>>>>  
>>>>>  u64 __kvm_call_hyp(void *hypfn, ...);
>>>>>  #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
>>>>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>>>>> index 3417e18..3c387fd 100644
>>>>> --- a/virt/kvm/arm/arm.c
>>>>> +++ b/virt/kvm/arm/arm.c
>>>>> @@ -539,27 +539,15 @@ void kvm_arm_halt_guest(struct kvm *kvm)
>>>>>  	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
>>>>>  }
>>>>>  
>>>>> -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
>>>>> -{
>>>>> -	vcpu->arch.pause = true;
>>>>> -	kvm_vcpu_kick(vcpu);
>>>>> -}
>>>>> -
>>>>> -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
>>>>> -{
>>>>> -	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
>>>>> -
>>>>> -	vcpu->arch.pause = false;
>>>>> -	swake_up(wq);
>>>>> -}
>>>>> -
>>>>>  void kvm_arm_resume_guest(struct kvm *kvm)
>>>>>  {
>>>>>  	int i;
>>>>>  	struct kvm_vcpu *vcpu;
>>>>>  
>>>>> -	kvm_for_each_vcpu(i, vcpu, kvm)
>>>>> -		kvm_arm_resume_vcpu(vcpu);
>>>>> +	kvm_for_each_vcpu(i, vcpu, kvm) {
>>>>> +		vcpu->arch.pause = false;
>>>>> +		swake_up(kvm_arch_vcpu_wq(vcpu));
>>>>> +	}
>>>>>  }
>>>>>  
>>>>>  static void vcpu_sleep(struct kvm_vcpu *vcpu)
>>>>> diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
>>>>> index 64cbcb4..c1e4bdd 100644
>>>>> --- a/virt/kvm/arm/vgic/vgic-mmio.c
>>>>> +++ b/virt/kvm/arm/vgic/vgic-mmio.c
>>>>> @@ -231,23 +231,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
>>>>>   * be migrated while we don't hold the IRQ locks and we don't want to be
>>>>>   * chasing moving targets.
>>>>>   *
>>>>> - * For private interrupts, we only have to make sure the single and only VCPU
>>>>> - * that can potentially queue the IRQ is stopped.
>>>>> + * For private interrupts we don't have to do anything because userspace
>>>>> + * accesses to the VGIC state already require all VCPUs to be stopped, and
>>>>> + * only the VCPU itself can modify its private interrupts active state, which
>>>>> + * guarantees that the VCPU is not running.
>>>>>   */
>>>>>  static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
>>>>>  {
>>>>> -	if (intid < VGIC_NR_PRIVATE_IRQS)
>>>>> -		kvm_arm_halt_vcpu(vcpu);
>>>>> -	else
>>>>> +	if (intid > VGIC_NR_PRIVATE_IRQS)
>>>>>  		kvm_arm_halt_guest(vcpu->kvm);
>>>>>  }
>>>>>  
>>>>>  /* See vgic_change_active_prepare */
>>>>>  static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
>>>>>  {
>>>>> -	if (intid < VGIC_NR_PRIVATE_IRQS)
>>>>> -		kvm_arm_resume_vcpu(vcpu);
>>>>> -	else
>>>>> +	if (intid > VGIC_NR_PRIVATE_IRQS)
>>>>>  		kvm_arm_resume_guest(vcpu->kvm);
>>>>>  }
>>>>>  
>>>>> @@ -271,11 +269,13 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
>>>>>  {
>>>>>  	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
>>>>>  
>>>>> +	mutex_lock(&vcpu->kvm->lock);
>>>>>  	vgic_change_active_prepare(vcpu, intid);
>>>>>  
>>>>>  	__vgic_mmio_write_cactive(vcpu, addr, len, val);
>>>>>  
>>>>>  	vgic_change_active_finish(vcpu, intid);
>>>>> +	mutex_unlock(&vcpu->kvm->lock);
>>>>
>>>> Any reason not to move the lock/unlock calls to prepare/finish? Also, do
>>>> we need to take that mutex if intid is a PPI?
>>>
>>> I guess we strictly don't need to take the mutex if it's a PPI, no.
>>>
>>> But I actually preferred this symmetry because you can easily tell we
>>> don't have a bug (famous last words) by locking and unlocking the mutex
>>> in the same function.
>>>
>>> I don't feel strongly about it though, so I can move it if you prefer
>>> it.
>>
>> No, that's fine, I just wanted to check whether my understanding was
>> correct.
>>
>> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
>>
> 
> Thanks.
> 
> So assuming Drew's patches will go on top of these, should we merge this
> as fixes to -rcX, or queue them for v4.13?
> 
> I'm leaning towards the latter because I don't think we've seen these
> races do something bad in the wild, and they're probably not going to be
> backportable to stable anyway.  Thoughts?

The race would only impact a guest hammering its own registers, so I'm
quite happy for this to go into 4.13.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 3/3] KVM: arm/arm64: Simplify active_change_prepare and plug race
  2017-05-23  8:43       ` Christoffer Dall
@ 2017-06-04  8:15         ` Andrew Jones
  -1 siblings, 0 replies; 30+ messages in thread
From: Andrew Jones @ 2017-06-04  8:15 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: Marc Zyngier, kvmarm, linux-arm-kernel, kvm, Eric Auger

On Tue, May 23, 2017 at 10:43:29AM +0200, Christoffer Dall wrote:
> On Mon, May 22, 2017 at 04:30:22PM +0100, Marc Zyngier wrote:
> > On 16/05/17 11:04, Christoffer Dall wrote:
> > > We don't need to stop a specific VCPU when changing the active state,
> > > because private IRQs can only be modified by a running VCPU for the
> > > VCPU itself and it is therefore already stopped.
> > > 
> > > However, it is also possible for two VCPUs to be modifying the active
> > > state of SPIs at the same time, which can cause the thread being stuck
> > > in the loop that checks other VCPU threads for a potentially very long
> > > time, or to modify the active state of a running VCPU.  Fix this by
> > > serializing all accesses to setting and clearing the active state of
> > > interrupts using the KVM mutex.
> > > 
> > > Reported-by: Andrew Jones <drjones@redhat.com>
> > > Signed-off-by: Christoffer Dall <cdall@linaro.org>
> > > ---
> > >  arch/arm/include/asm/kvm_host.h   |  2 --
> > >  arch/arm64/include/asm/kvm_host.h |  2 --
> > >  virt/kvm/arm/arm.c                | 20 ++++----------------
> > >  virt/kvm/arm/vgic/vgic-mmio.c     | 18 ++++++++++--------
> > >  virt/kvm/arm/vgic/vgic.c          | 11 ++++++-----
> > >  5 files changed, 20 insertions(+), 33 deletions(-)
> > > 
> > > diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> > > index f0e6657..12274d4 100644
> > > --- a/arch/arm/include/asm/kvm_host.h
> > > +++ b/arch/arm/include/asm/kvm_host.h
> > > @@ -233,8 +233,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> > >  struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
> > >  void kvm_arm_halt_guest(struct kvm *kvm);
> > >  void kvm_arm_resume_guest(struct kvm *kvm);
> > > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> > > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
> > >  
> > >  int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
> > >  unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
> > > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > > index 5e19165..32cbe8a 100644
> > > --- a/arch/arm64/include/asm/kvm_host.h
> > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > @@ -333,8 +333,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> > >  struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
> > >  void kvm_arm_halt_guest(struct kvm *kvm);
> > >  void kvm_arm_resume_guest(struct kvm *kvm);
> > > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> > > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
> > >  
> > >  u64 __kvm_call_hyp(void *hypfn, ...);
> > >  #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
> > > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > > index 3417e18..3c387fd 100644
> > > --- a/virt/kvm/arm/arm.c
> > > +++ b/virt/kvm/arm/arm.c
> > > @@ -539,27 +539,15 @@ void kvm_arm_halt_guest(struct kvm *kvm)
> > >  	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
> > >  }
> > >  
> > > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
> > > -{
> > > -	vcpu->arch.pause = true;
> > > -	kvm_vcpu_kick(vcpu);
> > > -}
> > > -
> > > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
> > > -{
> > > -	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
> > > -
> > > -	vcpu->arch.pause = false;
> > > -	swake_up(wq);
> > > -}
> > > -
> > >  void kvm_arm_resume_guest(struct kvm *kvm)
> > >  {
> > >  	int i;
> > >  	struct kvm_vcpu *vcpu;
> > >  
> > > -	kvm_for_each_vcpu(i, vcpu, kvm)
> > > -		kvm_arm_resume_vcpu(vcpu);
> > > +	kvm_for_each_vcpu(i, vcpu, kvm) {
> > > +		vcpu->arch.pause = false;
> > > +		swake_up(kvm_arch_vcpu_wq(vcpu));
> > > +	}
> > >  }
> > >  
> > >  static void vcpu_sleep(struct kvm_vcpu *vcpu)
> > > diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> > > index 64cbcb4..c1e4bdd 100644
> > > --- a/virt/kvm/arm/vgic/vgic-mmio.c
> > > +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> > > @@ -231,23 +231,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
> > >   * be migrated while we don't hold the IRQ locks and we don't want to be
> > >   * chasing moving targets.
> > >   *
> > > - * For private interrupts, we only have to make sure the single and only VCPU
> > > - * that can potentially queue the IRQ is stopped.
> > > + * For private interrupts we don't have to do anything because userspace
> > > + * accesses to the VGIC state already require all VCPUs to be stopped, and
> > > + * only the VCPU itself can modify its private interrupts active state, which
> > > + * guarantees that the VCPU is not running.
> > >   */
> > >  static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
> > >  {
> > > -	if (intid < VGIC_NR_PRIVATE_IRQS)
> > > -		kvm_arm_halt_vcpu(vcpu);
> > > -	else
> > > +	if (intid > VGIC_NR_PRIVATE_IRQS)
> > >  		kvm_arm_halt_guest(vcpu->kvm);
> > >  }
> > >  
> > >  /* See vgic_change_active_prepare */
> > >  static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
> > >  {
> > > -	if (intid < VGIC_NR_PRIVATE_IRQS)
> > > -		kvm_arm_resume_vcpu(vcpu);
> > > -	else
> > > +	if (intid > VGIC_NR_PRIVATE_IRQS)
> > >  		kvm_arm_resume_guest(vcpu->kvm);
> > >  }
> > >  
> > > @@ -271,11 +269,13 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
> > >  {
> > >  	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
> > >  
> > > +	mutex_lock(&vcpu->kvm->lock);
> > >  	vgic_change_active_prepare(vcpu, intid);
> > >  
> > >  	__vgic_mmio_write_cactive(vcpu, addr, len, val);
> > >  
> > >  	vgic_change_active_finish(vcpu, intid);
> > > +	mutex_unlock(&vcpu->kvm->lock);
> > 
> > Any reason not to move the lock/unlock calls to prepare/finish? Also, do
> > we need to take that mutex if intid is a PPI?
> 
> I guess we strictly don't need to take the mutex if it's a PPI, no.
> 
> But I actually preferred this symmetry because you can easily tell we
> don't have a bug (famous last words) by locking and unlocking the mutex
> in the same function.
> 
> I don't feel strongly about it though, so I can move it if you prefer
> it.

Actually we must move the locking into the prepare/finish functions, in
order to tuck them into the VGIC_NR_PRIVATE_IRQS conditions. Otherwise,
with GICv3, when userspace accesses ISACTIVER0/ICACTIVER0, which are
SGI_base offsets, the vgic_v3_sgibase_registers table is used. That
table doesn't provide the uaccess functions, so we end up trying to
take the lock twice again.
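
Roughly something like this, I think (an untested sketch, reusing the
helper names from the patch), so the mutex is only taken for SPIs and
the GICv3 SGI_base path never touches kvm->lock:

static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
{
	/* Only SPIs need the whole guest stopped and serialized. */
	if (intid > VGIC_NR_PRIVATE_IRQS) {
		mutex_lock(&vcpu->kvm->lock);
		kvm_arm_halt_guest(vcpu->kvm);
	}
}

static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
{
	if (intid > VGIC_NR_PRIVATE_IRQS) {
		kvm_arm_resume_guest(vcpu->kvm);
		mutex_unlock(&vcpu->kvm->lock);
	}
}

vgic_mmio_write_cactive()/sactive() would then lose their own
mutex_lock()/mutex_unlock() calls, and a userspace ISACTIVER0/ICACTIVER0
access that falls through to the guest handlers no longer tries to
retake the mutex it already holds.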

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 3/3] KVM: arm/arm64: Simplify active_change_prepare and plug race
  2017-06-04  8:15         ` Andrew Jones
@ 2017-06-04 11:57           ` Christoffer Dall
  -1 siblings, 0 replies; 30+ messages in thread
From: Christoffer Dall @ 2017-06-04 11:57 UTC (permalink / raw)
  To: Andrew Jones; +Cc: Marc Zyngier, kvmarm, linux-arm-kernel, kvm

On Sun, Jun 04, 2017 at 10:15:57AM +0200, Andrew Jones wrote:
> On Tue, May 23, 2017 at 10:43:29AM +0200, Christoffer Dall wrote:
> > On Mon, May 22, 2017 at 04:30:22PM +0100, Marc Zyngier wrote:
> > > On 16/05/17 11:04, Christoffer Dall wrote:
> > > > We don't need to stop a specific VCPU when changing the active state,
> > > > because private IRQs can only be modified by a running VCPU for the
> > > > VCPU itself and it is therefore already stopped.
> > > > 
> > > > However, it is also possible for two VCPUs to be modifying the active
> > > > state of SPIs at the same time, which can cause the thread being stuck
> > > > in the loop that checks other VCPU threads for a potentially very long
> > > > time, or to modify the active state of a running VCPU.  Fix this by
> > > > serializing all accesses to setting and clearing the active state of
> > > > interrupts using the KVM mutex.
> > > > 
> > > > Reported-by: Andrew Jones <drjones@redhat.com>
> > > > Signed-off-by: Christoffer Dall <cdall@linaro.org>
> > > > ---
> > > >  arch/arm/include/asm/kvm_host.h   |  2 --
> > > >  arch/arm64/include/asm/kvm_host.h |  2 --
> > > >  virt/kvm/arm/arm.c                | 20 ++++----------------
> > > >  virt/kvm/arm/vgic/vgic-mmio.c     | 18 ++++++++++--------
> > > >  virt/kvm/arm/vgic/vgic.c          | 11 ++++++-----
> > > >  5 files changed, 20 insertions(+), 33 deletions(-)
> > > > 
> > > > diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> > > > index f0e6657..12274d4 100644
> > > > --- a/arch/arm/include/asm/kvm_host.h
> > > > +++ b/arch/arm/include/asm/kvm_host.h
> > > > @@ -233,8 +233,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> > > >  struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
> > > >  void kvm_arm_halt_guest(struct kvm *kvm);
> > > >  void kvm_arm_resume_guest(struct kvm *kvm);
> > > > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> > > > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
> > > >  
> > > >  int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
> > > >  unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
> > > > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > > > index 5e19165..32cbe8a 100644
> > > > --- a/arch/arm64/include/asm/kvm_host.h
> > > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > > @@ -333,8 +333,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> > > >  struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
> > > >  void kvm_arm_halt_guest(struct kvm *kvm);
> > > >  void kvm_arm_resume_guest(struct kvm *kvm);
> > > > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> > > > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
> > > >  
> > > >  u64 __kvm_call_hyp(void *hypfn, ...);
> > > >  #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
> > > > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > > > index 3417e18..3c387fd 100644
> > > > --- a/virt/kvm/arm/arm.c
> > > > +++ b/virt/kvm/arm/arm.c
> > > > @@ -539,27 +539,15 @@ void kvm_arm_halt_guest(struct kvm *kvm)
> > > >  	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
> > > >  }
> > > >  
> > > > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
> > > > -{
> > > > -	vcpu->arch.pause = true;
> > > > -	kvm_vcpu_kick(vcpu);
> > > > -}
> > > > -
> > > > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
> > > > -{
> > > > -	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
> > > > -
> > > > -	vcpu->arch.pause = false;
> > > > -	swake_up(wq);
> > > > -}
> > > > -
> > > >  void kvm_arm_resume_guest(struct kvm *kvm)
> > > >  {
> > > >  	int i;
> > > >  	struct kvm_vcpu *vcpu;
> > > >  
> > > > -	kvm_for_each_vcpu(i, vcpu, kvm)
> > > > -		kvm_arm_resume_vcpu(vcpu);
> > > > +	kvm_for_each_vcpu(i, vcpu, kvm) {
> > > > +		vcpu->arch.pause = false;
> > > > +		swake_up(kvm_arch_vcpu_wq(vcpu));
> > > > +	}
> > > >  }
> > > >  
> > > >  static void vcpu_sleep(struct kvm_vcpu *vcpu)
> > > > diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> > > > index 64cbcb4..c1e4bdd 100644
> > > > --- a/virt/kvm/arm/vgic/vgic-mmio.c
> > > > +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> > > > @@ -231,23 +231,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
> > > >   * be migrated while we don't hold the IRQ locks and we don't want to be
> > > >   * chasing moving targets.
> > > >   *
> > > > - * For private interrupts, we only have to make sure the single and only VCPU
> > > > - * that can potentially queue the IRQ is stopped.
> > > > + * For private interrupts we don't have to do anything because userspace
> > > > + * accesses to the VGIC state already require all VCPUs to be stopped, and
> > > > + * only the VCPU itself can modify its private interrupts active state, which
> > > > + * guarantees that the VCPU is not running.
> > > >   */
> > > >  static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
> > > >  {
> > > > -	if (intid < VGIC_NR_PRIVATE_IRQS)
> > > > -		kvm_arm_halt_vcpu(vcpu);
> > > > -	else
> > > > +	if (intid > VGIC_NR_PRIVATE_IRQS)
> > > >  		kvm_arm_halt_guest(vcpu->kvm);
> > > >  }
> > > >  
> > > >  /* See vgic_change_active_prepare */
> > > >  static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
> > > >  {
> > > > -	if (intid < VGIC_NR_PRIVATE_IRQS)
> > > > -		kvm_arm_resume_vcpu(vcpu);
> > > > -	else
> > > > +	if (intid > VGIC_NR_PRIVATE_IRQS)
> > > >  		kvm_arm_resume_guest(vcpu->kvm);
> > > >  }
> > > >  
> > > > @@ -271,11 +269,13 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
> > > >  {
> > > >  	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
> > > >  
> > > > +	mutex_lock(&vcpu->kvm->lock);
> > > >  	vgic_change_active_prepare(vcpu, intid);
> > > >  
> > > >  	__vgic_mmio_write_cactive(vcpu, addr, len, val);
> > > >  
> > > >  	vgic_change_active_finish(vcpu, intid);
> > > > +	mutex_unlock(&vcpu->kvm->lock);
> > > 
> > > Any reason not to move the lock/unlock calls to prepare/finish? Also, do
> > > we need to take that mutex if intid is a PPI?
> > 
> > I guess we strictly don't need to take the mutex if it's a PPI, no.
> > 
> > But I actually preferred this symmetry because you can easily tell we
> > don't have a bug (famous last words) by locking and unlocking the mutex
> > in the same function.
> > 
> > I don't feel strongly about it though, so I can move it if you prefer
> > it.
> 
> Actually we must move the locking into the prepare/finish functions, in
> order to tuck them into the VGIC_NR_PRIVATE_IRQS conditions. Otherwise,
> with gicv3, when userspace accesses ISACTIVER0/ICACTIVER0, which are
> SGI_base offsets, then the vgic_v3_sgibase_registers table is used. That
> table doesn't provide the uaccess functions, so we try to lock twice
> again.
> 

Nice find.

How about we always use the uaccess functions for uaccesses instead? :

diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
index 23c0d564..e25567a 100644
--- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
+++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
@@ -528,12 +528,14 @@ static const struct vgic_register_region vgic_v3_sgibase_registers[] = {
 		vgic_mmio_read_pending, vgic_mmio_write_cpending,
 		vgic_mmio_read_raz, vgic_mmio_write_wi, 4,
 		VGIC_ACCESS_32bit),
-	REGISTER_DESC_WITH_LENGTH(GICR_ISACTIVER0,
-		vgic_mmio_read_active, vgic_mmio_write_sactive, 4,
-		VGIC_ACCESS_32bit),
-	REGISTER_DESC_WITH_LENGTH(GICR_ICACTIVER0,
-		vgic_mmio_read_active, vgic_mmio_write_cactive, 4,
-		VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH_UACCESS(GICR_ISACTIVER0,
+		vgic_mmio_read_active, vgic_mmio_write_sactive,
+		NULL, vgic_mmio_uaccess_write_sactive,
+		4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH_UACCESS(GICR_ICACTIVER0,
+		vgic_mmio_read_active, vgic_mmio_write_cactive,
+		NULL, vgic_mmio_uaccess_write_cactive,
+		4, VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_LENGTH(GICR_IPRIORITYR0,
 		vgic_mmio_read_priority, vgic_mmio_write_priority, 32,
 		VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
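
For reference, the uaccess write helpers named above come from patch 2/3 of
this series. They should boil down to roughly the following sketch (not the
exact code): a plain write, skipping the halt/resume dance used on the guest
path, since userspace accesses already require all VCPUs to be stopped.

void vgic_mmio_uaccess_write_cactive(struct kvm_vcpu *vcpu,
                                     gpa_t addr, unsigned int len,
                                     unsigned long val)
{
        /* VCPUs are already stopped for userspace accesses. */
        __vgic_mmio_write_cactive(vcpu, addr, len, val);
}

and likewise vgic_mmio_uaccess_write_sactive() wrapping
__vgic_mmio_write_sactive().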

Thanks,
-Christoffer

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 3/3] KVM: arm/arm64: Simplify active_change_prepare and plug race
  2017-06-04 11:57           ` Christoffer Dall
@ 2017-06-04 12:41             ` Andrew Jones
  -1 siblings, 0 replies; 30+ messages in thread
From: Andrew Jones @ 2017-06-04 12:41 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: Marc Zyngier, kvmarm, linux-arm-kernel, kvm, Eric Auger

On Sun, Jun 04, 2017 at 01:57:37PM +0200, Christoffer Dall wrote:
> On Sun, Jun 04, 2017 at 10:15:57AM +0200, Andrew Jones wrote:
> > On Tue, May 23, 2017 at 10:43:29AM +0200, Christoffer Dall wrote:
> > > On Mon, May 22, 2017 at 04:30:22PM +0100, Marc Zyngier wrote:
> > > > On 16/05/17 11:04, Christoffer Dall wrote:
> > > > > We don't need to stop a specific VCPU when changing the active state,
> > > > > because private IRQs can only be modified by a running VCPU for the
> > > > > VCPU itself and it is therefore already stopped.
> > > > > 
> > > > > However, it is also possible for two VCPUs to be modifying the active
> > > > > state of SPIs at the same time, which can cause the thread being stuck
> > > > > in the loop that checks other VCPU threads for a potentially very long
> > > > > time, or to modify the active state of a running VCPU.  Fix this by
> > > > > serializing all accesses to setting and clearing the active state of
> > > > > interrupts using the KVM mutex.
> > > > > 
> > > > > Reported-by: Andrew Jones <drjones@redhat.com>
> > > > > Signed-off-by: Christoffer Dall <cdall@linaro.org>
> > > > > ---
> > > > >  arch/arm/include/asm/kvm_host.h   |  2 --
> > > > >  arch/arm64/include/asm/kvm_host.h |  2 --
> > > > >  virt/kvm/arm/arm.c                | 20 ++++----------------
> > > > >  virt/kvm/arm/vgic/vgic-mmio.c     | 18 ++++++++++--------
> > > > >  virt/kvm/arm/vgic/vgic.c          | 11 ++++++-----
> > > > >  5 files changed, 20 insertions(+), 33 deletions(-)
> > > > > 
> > > > > diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> > > > > index f0e6657..12274d4 100644
> > > > > --- a/arch/arm/include/asm/kvm_host.h
> > > > > +++ b/arch/arm/include/asm/kvm_host.h
> > > > > @@ -233,8 +233,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> > > > >  struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
> > > > >  void kvm_arm_halt_guest(struct kvm *kvm);
> > > > >  void kvm_arm_resume_guest(struct kvm *kvm);
> > > > > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> > > > > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
> > > > >  
> > > > >  int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
> > > > >  unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
> > > > > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > > > > index 5e19165..32cbe8a 100644
> > > > > --- a/arch/arm64/include/asm/kvm_host.h
> > > > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > > > @@ -333,8 +333,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> > > > >  struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
> > > > >  void kvm_arm_halt_guest(struct kvm *kvm);
> > > > >  void kvm_arm_resume_guest(struct kvm *kvm);
> > > > > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> > > > > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
> > > > >  
> > > > >  u64 __kvm_call_hyp(void *hypfn, ...);
> > > > >  #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
> > > > > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > > > > index 3417e18..3c387fd 100644
> > > > > --- a/virt/kvm/arm/arm.c
> > > > > +++ b/virt/kvm/arm/arm.c
> > > > > @@ -539,27 +539,15 @@ void kvm_arm_halt_guest(struct kvm *kvm)
> > > > >  	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
> > > > >  }
> > > > >  
> > > > > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
> > > > > -{
> > > > > -	vcpu->arch.pause = true;
> > > > > -	kvm_vcpu_kick(vcpu);
> > > > > -}
> > > > > -
> > > > > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
> > > > > -{
> > > > > -	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
> > > > > -
> > > > > -	vcpu->arch.pause = false;
> > > > > -	swake_up(wq);
> > > > > -}
> > > > > -
> > > > >  void kvm_arm_resume_guest(struct kvm *kvm)
> > > > >  {
> > > > >  	int i;
> > > > >  	struct kvm_vcpu *vcpu;
> > > > >  
> > > > > -	kvm_for_each_vcpu(i, vcpu, kvm)
> > > > > -		kvm_arm_resume_vcpu(vcpu);
> > > > > +	kvm_for_each_vcpu(i, vcpu, kvm) {
> > > > > +		vcpu->arch.pause = false;
> > > > > +		swake_up(kvm_arch_vcpu_wq(vcpu));
> > > > > +	}
> > > > >  }
> > > > >  
> > > > >  static void vcpu_sleep(struct kvm_vcpu *vcpu)
> > > > > diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> > > > > index 64cbcb4..c1e4bdd 100644
> > > > > --- a/virt/kvm/arm/vgic/vgic-mmio.c
> > > > > +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> > > > > @@ -231,23 +231,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
> > > > >   * be migrated while we don't hold the IRQ locks and we don't want to be
> > > > >   * chasing moving targets.
> > > > >   *
> > > > > - * For private interrupts, we only have to make sure the single and only VCPU
> > > > > - * that can potentially queue the IRQ is stopped.
> > > > > + * For private interrupts we don't have to do anything because userspace
> > > > > + * accesses to the VGIC state already require all VCPUs to be stopped, and
> > > > > + * only the VCPU itself can modify its private interrupts active state, which
> > > > > + * guarantees that the VCPU is not running.
> > > > >   */
> > > > >  static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
> > > > >  {
> > > > > -	if (intid < VGIC_NR_PRIVATE_IRQS)
> > > > > -		kvm_arm_halt_vcpu(vcpu);
> > > > > -	else
> > > > > +	if (intid > VGIC_NR_PRIVATE_IRQS)
> > > > >  		kvm_arm_halt_guest(vcpu->kvm);
> > > > >  }
> > > > >  
> > > > >  /* See vgic_change_active_prepare */
> > > > >  static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
> > > > >  {
> > > > > -	if (intid < VGIC_NR_PRIVATE_IRQS)
> > > > > -		kvm_arm_resume_vcpu(vcpu);
> > > > > -	else
> > > > > +	if (intid > VGIC_NR_PRIVATE_IRQS)
> > > > >  		kvm_arm_resume_guest(vcpu->kvm);
> > > > >  }
> > > > >  
> > > > > @@ -271,11 +269,13 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
> > > > >  {
> > > > >  	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
> > > > >  
> > > > > +	mutex_lock(&vcpu->kvm->lock);
> > > > >  	vgic_change_active_prepare(vcpu, intid);
> > > > >  
> > > > >  	__vgic_mmio_write_cactive(vcpu, addr, len, val);
> > > > >  
> > > > >  	vgic_change_active_finish(vcpu, intid);
> > > > > +	mutex_unlock(&vcpu->kvm->lock);
> > > > 
> > > > Any reason not to move the lock/unlock calls to prepare/finish? Also, do
> > > > we need to take that mutex if intid is a PPI?
> > > 
> > > I guess we strictly don't need to take the mutex if it's a PPI, no.
> > > 
> > > But I actually preferred this symmetry because you can easily tell we
> > > don't have a bug (famous last words) by locking and unlocking the mutex
> > > in the same function.
> > > 
> > > I don't feel strongly about it though, so I can move it if you prefer
> > > it.
> > 
> > Actually we must move the locking into the prepare/finish functions, in
> > order to tuck them into the VGIC_NR_PRIVATE_IRQS conditions. Otherwise,
> > with gicv3, when userspace accesses ISACTIVER0/ICACTIVER0, which are
> > SGI_base offsets, then the vgic_v3_sgibase_registers table is used. That
> > table doesn't provide the uaccess functions, so we try to lock twice
> > again.
> > 
> 
> Nice find.
> 
> How about we always use the uaccess functions for uaccesses instead? :

Yeah, that's a better suggestion. u-access, i-access, we-all-access :-)

drew

> 
> diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
> index 23c0d564..e25567a 100644
> --- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
> +++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
> @@ -528,12 +528,14 @@ static const struct vgic_register_region vgic_v3_sgibase_registers[] = {
>  		vgic_mmio_read_pending, vgic_mmio_write_cpending,
>  		vgic_mmio_read_raz, vgic_mmio_write_wi, 4,
>  		VGIC_ACCESS_32bit),
> -	REGISTER_DESC_WITH_LENGTH(GICR_ISACTIVER0,
> -		vgic_mmio_read_active, vgic_mmio_write_sactive, 4,
> -		VGIC_ACCESS_32bit),
> -	REGISTER_DESC_WITH_LENGTH(GICR_ICACTIVER0,
> -		vgic_mmio_read_active, vgic_mmio_write_cactive, 4,
> -		VGIC_ACCESS_32bit),
> +	REGISTER_DESC_WITH_LENGTH_UACCESS(GICR_ISACTIVER0,
> +		vgic_mmio_read_active, vgic_mmio_write_sactive,
> +		NULL, vgic_mmio_uaccess_write_sactive,
> +		4, VGIC_ACCESS_32bit),
> +	REGISTER_DESC_WITH_LENGTH_UACCESS(GICR_ICACTIVER0,
> +		vgic_mmio_read_active, vgic_mmio_write_cactive,
> +		NULL, vgic_mmio_uaccess_write_cactive,
> +		4, VGIC_ACCESS_32bit),
>  	REGISTER_DESC_WITH_LENGTH(GICR_IPRIORITYR0,
>  		vgic_mmio_read_priority, vgic_mmio_write_priority, 32,
>  		VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
> 
> Thanks,
> -Christoffer

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 3/3] KVM: arm/arm64: Simplify active_change_prepare and plug race
  2017-06-04 12:41             ` Andrew Jones
@ 2017-06-04 14:45               ` Christoffer Dall
  -1 siblings, 0 replies; 30+ messages in thread
From: Christoffer Dall @ 2017-06-04 14:45 UTC (permalink / raw)
  To: Andrew Jones; +Cc: Marc Zyngier, kvmarm, linux-arm-kernel, kvm, Eric Auger

On Sun, Jun 04, 2017 at 02:41:13PM +0200, Andrew Jones wrote:
> On Sun, Jun 04, 2017 at 01:57:37PM +0200, Christoffer Dall wrote:
> > On Sun, Jun 04, 2017 at 10:15:57AM +0200, Andrew Jones wrote:
> > > On Tue, May 23, 2017 at 10:43:29AM +0200, Christoffer Dall wrote:
> > > > On Mon, May 22, 2017 at 04:30:22PM +0100, Marc Zyngier wrote:
> > > > > On 16/05/17 11:04, Christoffer Dall wrote:
> > > > > > We don't need to stop a specific VCPU when changing the active state,
> > > > > > because private IRQs can only be modified by a running VCPU for the
> > > > > > VCPU itself and it is therefore already stopped.
> > > > > > 
> > > > > > However, it is also possible for two VCPUs to be modifying the active
> > > > > > state of SPIs at the same time, which can cause the thread being stuck
> > > > > > in the loop that checks other VCPU threads for a potentially very long
> > > > > > time, or to modify the active state of a running VCPU.  Fix this by
> > > > > > serializing all accesses to setting and clearing the active state of
> > > > > > interrupts using the KVM mutex.
> > > > > > 
> > > > > > Reported-by: Andrew Jones <drjones@redhat.com>
> > > > > > Signed-off-by: Christoffer Dall <cdall@linaro.org>
> > > > > > ---
> > > > > >  arch/arm/include/asm/kvm_host.h   |  2 --
> > > > > >  arch/arm64/include/asm/kvm_host.h |  2 --
> > > > > >  virt/kvm/arm/arm.c                | 20 ++++----------------
> > > > > >  virt/kvm/arm/vgic/vgic-mmio.c     | 18 ++++++++++--------
> > > > > >  virt/kvm/arm/vgic/vgic.c          | 11 ++++++-----
> > > > > >  5 files changed, 20 insertions(+), 33 deletions(-)
> > > > > > 
> > > > > > diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> > > > > > index f0e6657..12274d4 100644
> > > > > > --- a/arch/arm/include/asm/kvm_host.h
> > > > > > +++ b/arch/arm/include/asm/kvm_host.h
> > > > > > @@ -233,8 +233,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> > > > > >  struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
> > > > > >  void kvm_arm_halt_guest(struct kvm *kvm);
> > > > > >  void kvm_arm_resume_guest(struct kvm *kvm);
> > > > > > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> > > > > > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
> > > > > >  
> > > > > >  int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
> > > > > >  unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
> > > > > > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > > > > > index 5e19165..32cbe8a 100644
> > > > > > --- a/arch/arm64/include/asm/kvm_host.h
> > > > > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > > > > @@ -333,8 +333,6 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
> > > > > >  struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
> > > > > >  void kvm_arm_halt_guest(struct kvm *kvm);
> > > > > >  void kvm_arm_resume_guest(struct kvm *kvm);
> > > > > > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
> > > > > > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
> > > > > >  
> > > > > >  u64 __kvm_call_hyp(void *hypfn, ...);
> > > > > >  #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
> > > > > > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > > > > > index 3417e18..3c387fd 100644
> > > > > > --- a/virt/kvm/arm/arm.c
> > > > > > +++ b/virt/kvm/arm/arm.c
> > > > > > @@ -539,27 +539,15 @@ void kvm_arm_halt_guest(struct kvm *kvm)
> > > > > >  	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
> > > > > >  }
> > > > > >  
> > > > > > -void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
> > > > > > -{
> > > > > > -	vcpu->arch.pause = true;
> > > > > > -	kvm_vcpu_kick(vcpu);
> > > > > > -}
> > > > > > -
> > > > > > -void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
> > > > > > -{
> > > > > > -	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
> > > > > > -
> > > > > > -	vcpu->arch.pause = false;
> > > > > > -	swake_up(wq);
> > > > > > -}
> > > > > > -
> > > > > >  void kvm_arm_resume_guest(struct kvm *kvm)
> > > > > >  {
> > > > > >  	int i;
> > > > > >  	struct kvm_vcpu *vcpu;
> > > > > >  
> > > > > > -	kvm_for_each_vcpu(i, vcpu, kvm)
> > > > > > -		kvm_arm_resume_vcpu(vcpu);
> > > > > > +	kvm_for_each_vcpu(i, vcpu, kvm) {
> > > > > > +		vcpu->arch.pause = false;
> > > > > > +		swake_up(kvm_arch_vcpu_wq(vcpu));
> > > > > > +	}
> > > > > >  }
> > > > > >  
> > > > > >  static void vcpu_sleep(struct kvm_vcpu *vcpu)
> > > > > > diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> > > > > > index 64cbcb4..c1e4bdd 100644
> > > > > > --- a/virt/kvm/arm/vgic/vgic-mmio.c
> > > > > > +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> > > > > > @@ -231,23 +231,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
> > > > > >   * be migrated while we don't hold the IRQ locks and we don't want to be
> > > > > >   * chasing moving targets.
> > > > > >   *
> > > > > > - * For private interrupts, we only have to make sure the single and only VCPU
> > > > > > - * that can potentially queue the IRQ is stopped.
> > > > > > + * For private interrupts we don't have to do anything because userspace
> > > > > > + * accesses to the VGIC state already require all VCPUs to be stopped, and
> > > > > > + * only the VCPU itself can modify its private interrupts active state, which
> > > > > > + * guarantees that the VCPU is not running.
> > > > > >   */
> > > > > >  static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
> > > > > >  {
> > > > > > -	if (intid < VGIC_NR_PRIVATE_IRQS)
> > > > > > -		kvm_arm_halt_vcpu(vcpu);
> > > > > > -	else
> > > > > > +	if (intid > VGIC_NR_PRIVATE_IRQS)
> > > > > >  		kvm_arm_halt_guest(vcpu->kvm);
> > > > > >  }
> > > > > >  
> > > > > >  /* See vgic_change_active_prepare */
> > > > > >  static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
> > > > > >  {
> > > > > > -	if (intid < VGIC_NR_PRIVATE_IRQS)
> > > > > > -		kvm_arm_resume_vcpu(vcpu);
> > > > > > -	else
> > > > > > +	if (intid > VGIC_NR_PRIVATE_IRQS)
> > > > > >  		kvm_arm_resume_guest(vcpu->kvm);
> > > > > >  }
> > > > > >  
> > > > > > @@ -271,11 +269,13 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
> > > > > >  {
> > > > > >  	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
> > > > > >  
> > > > > > +	mutex_lock(&vcpu->kvm->lock);
> > > > > >  	vgic_change_active_prepare(vcpu, intid);
> > > > > >  
> > > > > >  	__vgic_mmio_write_cactive(vcpu, addr, len, val);
> > > > > >  
> > > > > >  	vgic_change_active_finish(vcpu, intid);
> > > > > > +	mutex_unlock(&vcpu->kvm->lock);
> > > > > 
> > > > > Any reason not to move the lock/unlock calls to prepare/finish? Also, do
> > > > > we need to take that mutex if intid is a PPI?
> > > > 
> > > > I guess we strictly don't need to take the mutex if it's a PPI, no.
> > > > 
> > > > But I actually preferred this symmetry because you can easily tell we
> > > > don't have a bug (famous last words) by locking and unlocking the mutex
> > > > in the same function.
> > > > 
> > > > I don't feel strongly about it though, so I can move it if you prefer
> > > > it.
> > > 
> > > Actually we must move the locking into the prepare/finish functions, in
> > > order to tuck them into the VGIC_NR_PRIVATE_IRQS conditions. Otherwise,
> > > with gicv3, when userspace accesses ISACTIVER0/ICACTIVER0, which are
> > > SGI_base offsets, then the vgic_v3_sgibase_registers table is used. That
> > > table doesn't provide the uaccess functions, so we try to lock twice
> > > again.
> > > 
> > 
> > Nice find.
> > 
> > How about we always use the uaccess functions for uaccesses instead? :
> 
> Yeah, that's a better suggestion. u-access, i-access, we-all-access :-)
> 
cool.  I've put it in kvmarm/queue.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2017-06-04 14:46 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-16 10:04 [PATCH v2 0/3] Fix race condition and simplify vgic active handler Christoffer Dall
2017-05-16 10:04 ` [PATCH v2 1/3] KVM: arm/arm64: Allow GICv2 to supply a uaccess register function Christoffer Dall
2017-05-22 15:21   ` Marc Zyngier
2017-05-16 10:04 ` [PATCH v2 2/3] KVM: arm/arm64: Separate guest and uaccess writes to dist {sc}active Christoffer Dall
2017-05-22 15:24   ` Marc Zyngier
2017-05-16 10:04 ` [PATCH v2 3/3] KVM: arm/arm64: Simplify active_change_prepare and plug race Christoffer Dall
2017-05-22 15:30   ` Marc Zyngier
2017-05-23  8:43     ` Christoffer Dall
2017-05-23  9:05       ` Marc Zyngier
2017-05-23  9:56         ` Christoffer Dall
2017-05-23 10:36           ` Marc Zyngier
2017-06-04  8:15       ` Andrew Jones
2017-06-04 11:57         ` Christoffer Dall
2017-06-04 12:41           ` Andrew Jones
2017-06-04 14:45             ` Christoffer Dall
