* [PATCH 0/5] KVM: arm/arm64: fix some races and allow userspace to set MPIDR
@ 2017-02-27 17:54 Andrew Jones
  2017-02-27 17:55 ` [PATCH 1/5] KVM: arm/arm64: prepare to use vcpu requests Andrew Jones
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: Andrew Jones @ 2017-02-27 17:54 UTC (permalink / raw)
  To: kvmarm; +Cc: marc.zyngier, ashoks, imammedo

This series fixes four races. Two are easy to reproduce with a
kvm-unit-test test[1]; the other two would be quite hard to
reproduce, so I didn't even try. The two hard-to-produce races are
addressed by changing vcpu->arch.pause and vcpu->arch.power_off to
vcpu requests. The two easy-to-produce races are addressed in two
different ways: the first takes advantage of power_off having been
changed to a vcpu request; the second caches vcpu MPIDRs in order
to avoid extracting them from sys_regs. When introducing the MPIDR
cache we also introduce a new feature: userspace-settable MPIDRs.

Support for userspace-settable MPIDRs was already posted once[2],
but rejected due to not having a use case. We have one now, which
is to satisfy QEMU's need for the MPIDR information very early,
before vcpu-init has even run. While the original posting's author
wasn't me, I've taken authorship now, as I've changed the patch
substantially. If anybody disagrees with that, feel free to
suggest alternatives. The QEMU counterpart has been posted[3].

This series is based on Radim's recent posting[4], which improves
the vcpu-request framework. I've tested the series on a couple of
AArch64 platforms and compile-tested the arm bits.

[1] https://lists.cs.columbia.edu/pipermail/kvmarm/2017-February/023820.html
[2] https://lists.cs.columbia.edu/pipermail/kvmarm/2016-April/019691.html
[3] http://lists.nongnu.org/archive/html/qemu-devel/2017-02/msg06895.html
[4] https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1340496.html

Andrew Jones (4):
  KVM: arm/arm64: prepare to use vcpu requests
  KVM: arm/arm64: replace vcpu->arch.pause with a vcpu-request
  KVM: arm/arm64: replace vcpu->arch.power_off with a vcpu-request
  KVM: arm/arm64: allow userspace to set MPIDR

Levente Kurusa (1):
  KVM: arm/arm64: fix race in kvm_psci_vcpu_on

 arch/arm/include/asm/kvm_emulate.h   |  2 +-
 arch/arm/include/asm/kvm_host.h      | 12 +++----
 arch/arm/kvm/arm.c                   | 49 ++++++++++++++-------------
 arch/arm/kvm/coproc.c                | 61 +++++++++++++++++++++++++++++-----
 arch/arm/kvm/coproc.h                |  6 ++++
 arch/arm/kvm/handle_exit.c           |  1 +
 arch/arm/kvm/psci.c                  | 18 ++++------
 arch/arm64/include/asm/kvm_emulate.h |  2 +-
 arch/arm64/include/asm/kvm_host.h    | 12 +++----
 arch/arm64/kvm/handle_exit.c         |  1 +
 arch/arm64/kvm/sys_regs.c            | 64 ++++++++++++++++++++++++++++--------
 include/linux/kvm_host.h             |  5 +++
 include/uapi/linux/kvm.h             |  1 +
 13 files changed, 162 insertions(+), 72 deletions(-)

-- 
2.9.3

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH 1/5] KVM: arm/arm64: prepare to use vcpu requests
  2017-02-27 17:54 [PATCH 0/5] KVM: arm/arm64: fix some races and allow userspace to set MPIDR Andrew Jones
@ 2017-02-27 17:55 ` Andrew Jones
  2017-03-08 13:21   ` Christoffer Dall
  2017-02-27 17:55 ` [PATCH 2/5] KVM: arm/arm64: replace vcpu->arch.pause with a vcpu-request Andrew Jones
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Andrew Jones @ 2017-02-27 17:55 UTC (permalink / raw)
  To: kvmarm; +Cc: marc.zyngier, ashoks, imammedo

Make sure we don't leave vcpu requests that we don't intend to
handle set in the request bitmap. If we don't clear them, then
kvm_request_pending() may return true when we don't want it to.
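
The request bitmap the commit message refers to can be modeled in
plain userspace C. This is a simplified, illustrative sketch (the
real kernel helpers operate atomically on vcpu->requests, and the
bit number used for the unhalt request here is assumed, not the
kernel's):

```c
#include <assert.h>

/*
 * Simplified model of the vcpu request bitmap. The real kernel
 * helpers (__kvm_request_set/clear, kvm_request_pending) use atomic
 * bit operations; plain ops suffice to illustrate why a stale
 * unhalt request must be cleared.
 */
#define REQ_UNHALT 3	/* illustrative bit number, not the kernel's */

struct toy_vcpu {
	unsigned long requests;
};

static void req_set(unsigned int req, struct toy_vcpu *v)
{
	v->requests |= 1UL << req;
}

static void req_clear(unsigned int req, struct toy_vcpu *v)
{
	v->requests &= ~(1UL << req);
}

static int req_pending(const struct toy_vcpu *v)
{
	return v->requests != 0;
}
```

If kvm_vcpu_block() leaves the unhalt bit set and nothing consumes
it, req_pending() stays true and the run loop would keep bailing
out of guest entry for no reason; clearing it after the block, as
this patch does, avoids that.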

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm/kvm/handle_exit.c   | 1 +
 arch/arm/kvm/psci.c          | 1 +
 arch/arm64/kvm/handle_exit.c | 1 +
 3 files changed, 3 insertions(+)

diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
index 4e40d1955e35..2ec31748fa0b 100644
--- a/arch/arm/kvm/handle_exit.c
+++ b/arch/arm/kvm/handle_exit.c
@@ -72,6 +72,7 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		trace_kvm_wfx(*vcpu_pc(vcpu), false);
 		vcpu->stat.wfi_exit_stat++;
 		kvm_vcpu_block(vcpu);
+		__kvm_request_clear(KVM_REQ_UNHALT, vcpu);
 	}
 
 	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
index c2b131527a64..fd7e381b13b2 100644
--- a/arch/arm/kvm/psci.c
+++ b/arch/arm/kvm/psci.c
@@ -57,6 +57,7 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
 	 * for KVM will preserve the register state.
 	 */
 	kvm_vcpu_block(vcpu);
+	__kvm_request_clear(KVM_REQ_UNHALT, vcpu);
 
 	return PSCI_RET_SUCCESS;
 }
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 1bfe30dfbfe7..b34971d7c30a 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -89,6 +89,7 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		trace_kvm_wfx_arm64(*vcpu_pc(vcpu), false);
 		vcpu->stat.wfi_exit_stat++;
 		kvm_vcpu_block(vcpu);
+		__kvm_request_clear(KVM_REQ_UNHALT, vcpu);
 	}
 
 	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
-- 
2.9.3


* [PATCH 2/5] KVM: arm/arm64: replace vcpu->arch.pause with a vcpu-request
  2017-02-27 17:54 [PATCH 0/5] KVM: arm/arm64: fix some races and allow userspace to set MPIDR Andrew Jones
  2017-02-27 17:55 ` [PATCH 1/5] KVM: arm/arm64: prepare to use vcpu requests Andrew Jones
@ 2017-02-27 17:55 ` Andrew Jones
  2017-03-08 14:33   ` Christoffer Dall
  2017-02-27 17:55 ` [PATCH 3/5] KVM: arm/arm64: replace vcpu->arch.power_off " Andrew Jones
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Andrew Jones @ 2017-02-27 17:55 UTC (permalink / raw)
  To: kvmarm; +Cc: marc.zyngier, ashoks, imammedo

This not only ensures visibility of changes to pause by using
atomic ops, but also plugs a small race where a vcpu could get
its pause state enabled just after its last check before entering
the guest. With this patch, while the vcpu will still initially
enter the guest, it will exit immediately due to the IPI sent by
the vcpu kick issued after making the vcpu request.

We use __kvm_request_set because we don't need the barrier that
kvm_request_set() provides. Additionally, kvm_request_test_and_clear
is not used because, for pause, only the requester should do the
clearing, i.e. testing and clearing must be independent.
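
The request/kick handshake described above can be sketched in
userspace C11, with seq_cst atomics standing in for the kernel's
smp_mb() pairing. This is an illustrative model of the protocol,
not the kernel code, and all names are invented for the sketch:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

enum mode { OUTSIDE_GUEST_MODE, IN_GUEST_MODE };

struct toy_vcpu {
	atomic_ulong requests;
	atomic_int mode;
};

/* Requester: set the request first, then decide whether a kick (IPI)
 * is needed by reading the target's mode. */
static bool request_pause(struct toy_vcpu *v, unsigned int req)
{
	atomic_fetch_or(&v->requests, 1UL << req);
	return atomic_load(&v->mode) == IN_GUEST_MODE; /* kick needed? */
}

/* vcpu: advertise IN_GUEST_MODE *before* the final pending check, so a
 * request set after that check is guaranteed to observe IN_GUEST_MODE
 * and send the kick that forces an immediate exit. */
static bool try_enter_guest(struct toy_vcpu *v)
{
	atomic_store(&v->mode, IN_GUEST_MODE);
	if (atomic_load(&v->requests) != 0) {
		atomic_store(&v->mode, OUTSIDE_GUEST_MODE);
		return false;	/* bail out before really entering */
	}
	return true;
}
```

Either the vcpu's final check sees the request and aborts entry, or
the requester sees IN_GUEST_MODE and kicks; the window where the
pause flag could be set unnoticed is closed.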

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm/include/asm/kvm_host.h   |  5 +----
 arch/arm/kvm/arm.c                | 26 +++++++++++++-------------
 arch/arm64/include/asm/kvm_host.h |  5 +----
 3 files changed, 15 insertions(+), 21 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index cc495d799c67..3b5d60611cac 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -46,7 +46,7 @@
 #define KVM_MAX_VCPUS VGIC_V2_MAX_CPUS
 #endif
 
-#define KVM_REQ_VCPU_EXIT	8
+#define KVM_REQ_VCPU_PAUSE	8
 
 u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
 int __attribute_const__ kvm_target_cpu(void);
@@ -174,9 +174,6 @@ struct kvm_vcpu_arch {
 	/* vcpu power-off state */
 	bool power_off;
 
-	 /* Don't run the guest (internal implementation need) */
-	bool pause;
-
 	/* IO related fields */
 	struct kvm_decode mmio_decode;
 
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index c9a2103faeb9..17d5f3fb33d9 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -401,7 +401,8 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
 {
 	return ((!!v->arch.irq_lines || kvm_vgic_vcpu_pending_irq(v))
-		&& !v->arch.power_off && !v->arch.pause);
+		&& !v->arch.power_off
+		&& !__kvm_request_test(KVM_REQ_VCPU_PAUSE, v));
 }
 
 /* Just ensure a guest exit from a particular CPU */
@@ -532,17 +533,12 @@ bool kvm_arch_intc_initialized(struct kvm *kvm)
 
 void kvm_arm_halt_guest(struct kvm *kvm)
 {
-	int i;
-	struct kvm_vcpu *vcpu;
-
-	kvm_for_each_vcpu(i, vcpu, kvm)
-		vcpu->arch.pause = true;
-	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
+	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_PAUSE);
 }
 
 void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.pause = true;
+	__kvm_request_set(KVM_REQ_VCPU_PAUSE, vcpu);
 	kvm_vcpu_kick(vcpu);
 }
 
@@ -550,7 +546,7 @@ void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
 
-	vcpu->arch.pause = false;
+	__kvm_request_clear(KVM_REQ_VCPU_PAUSE, vcpu);
 	swake_up(wq);
 }
 
@@ -568,7 +564,7 @@ static void vcpu_sleep(struct kvm_vcpu *vcpu)
 	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
 
 	swait_event_interruptible(*wq, ((!vcpu->arch.power_off) &&
-				       (!vcpu->arch.pause)));
+		(!__kvm_request_test(KVM_REQ_VCPU_PAUSE, vcpu))));
 }
 
 static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
@@ -621,7 +617,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 		update_vttbr(vcpu->kvm);
 
-		if (vcpu->arch.power_off || vcpu->arch.pause)
+		if (vcpu->arch.power_off || __kvm_request_test(KVM_REQ_VCPU_PAUSE, vcpu))
 			vcpu_sleep(vcpu);
 
 		/*
@@ -644,8 +640,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 			run->exit_reason = KVM_EXIT_INTR;
 		}
 
+		vcpu->mode = IN_GUEST_MODE;
+		smp_mb(); /* ensure vcpu->mode is visible to kvm_vcpu_kick */
+
 		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm) ||
-			vcpu->arch.power_off || vcpu->arch.pause) {
+			vcpu->arch.power_off || kvm_request_pending(vcpu)) {
+			vcpu->mode = OUTSIDE_GUEST_MODE;
+			smp_mb();
 			local_irq_enable();
 			kvm_pmu_sync_hwstate(vcpu);
 			kvm_timer_sync_hwstate(vcpu);
@@ -661,7 +662,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 */
 		trace_kvm_entry(*vcpu_pc(vcpu));
 		guest_enter_irqoff();
-		vcpu->mode = IN_GUEST_MODE;
 
 		ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f21fd3894370..c03e1fc3bc34 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -43,7 +43,7 @@
 
 #define KVM_VCPU_MAX_FEATURES 4
 
-#define KVM_REQ_VCPU_EXIT	8
+#define KVM_REQ_VCPU_PAUSE	8
 
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
@@ -257,9 +257,6 @@ struct kvm_vcpu_arch {
 	/* vcpu power-off state */
 	bool power_off;
 
-	/* Don't run the guest (internal implementation need) */
-	bool pause;
-
 	/* IO related fields */
 	struct kvm_decode mmio_decode;
 
-- 
2.9.3


* [PATCH 3/5] KVM: arm/arm64: replace vcpu->arch.power_off with a vcpu-request
  2017-02-27 17:54 [PATCH 0/5] KVM: arm/arm64: fix some races and allow userspace to set MPIDR Andrew Jones
  2017-02-27 17:55 ` [PATCH 1/5] KVM: arm/arm64: prepare to use vcpu requests Andrew Jones
  2017-02-27 17:55 ` [PATCH 2/5] KVM: arm/arm64: replace vcpu->arch.pause with a vcpu-request Andrew Jones
@ 2017-02-27 17:55 ` Andrew Jones
  2017-02-27 17:55 ` [PATCH 4/5] KVM: arm/arm64: fix race in kvm_psci_vcpu_on Andrew Jones
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 18+ messages in thread
From: Andrew Jones @ 2017-02-27 17:55 UTC (permalink / raw)
  To: kvmarm; +Cc: marc.zyngier, ashoks, imammedo

Like pause, replacing power_off with a vcpu-request ensures
visibility of changes and avoids the final race before entering
the guest.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm/include/asm/kvm_host.h   |  4 +---
 arch/arm/kvm/arm.c                | 32 ++++++++++++++++++--------------
 arch/arm/kvm/psci.c               | 17 +++++------------
 arch/arm64/include/asm/kvm_host.h |  4 +---
 4 files changed, 25 insertions(+), 32 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 3b5d60611cac..73f61773e9c2 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -47,6 +47,7 @@
 #endif
 
 #define KVM_REQ_VCPU_PAUSE	8
+#define KVM_REQ_VCPU_POWER_OFF	9
 
 u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
 int __attribute_const__ kvm_target_cpu(void);
@@ -171,9 +172,6 @@ struct kvm_vcpu_arch {
 	 * here.
 	 */
 
-	/* vcpu power-off state */
-	bool power_off;
-
 	/* IO related fields */
 	struct kvm_decode mmio_decode;
 
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 17d5f3fb33d9..aa68354a0bf8 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -366,7 +366,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
 				    struct kvm_mp_state *mp_state)
 {
-	if (vcpu->arch.power_off)
+	if (__kvm_request_test(KVM_REQ_VCPU_POWER_OFF, vcpu))
 		mp_state->mp_state = KVM_MP_STATE_STOPPED;
 	else
 		mp_state->mp_state = KVM_MP_STATE_RUNNABLE;
@@ -379,10 +379,10 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 {
 	switch (mp_state->mp_state) {
 	case KVM_MP_STATE_RUNNABLE:
-		vcpu->arch.power_off = false;
+		__kvm_request_clear(KVM_REQ_VCPU_POWER_OFF, vcpu);
 		break;
 	case KVM_MP_STATE_STOPPED:
-		vcpu->arch.power_off = true;
+		__kvm_request_set(KVM_REQ_VCPU_POWER_OFF, vcpu);
 		break;
 	default:
 		return -EINVAL;
@@ -400,9 +400,9 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
  */
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
 {
-	return ((!!v->arch.irq_lines || kvm_vgic_vcpu_pending_irq(v))
-		&& !v->arch.power_off
-		&& !__kvm_request_test(KVM_REQ_VCPU_PAUSE, v));
+	return (!!v->arch.irq_lines || kvm_vgic_vcpu_pending_irq(v))
+	       && !__kvm_request_test(KVM_REQ_VCPU_POWER_OFF, v)
+	       && !__kvm_request_test(KVM_REQ_VCPU_PAUSE, v);
 }
 
 /* Just ensure a guest exit from a particular CPU */
@@ -563,8 +563,9 @@ static void vcpu_sleep(struct kvm_vcpu *vcpu)
 {
 	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
 
-	swait_event_interruptible(*wq, ((!vcpu->arch.power_off) &&
-		(!__kvm_request_test(KVM_REQ_VCPU_PAUSE, vcpu))));
+	swait_event_interruptible(*wq,
+		!__kvm_request_test(KVM_REQ_VCPU_POWER_OFF, vcpu) &&
+		!__kvm_request_test(KVM_REQ_VCPU_PAUSE, vcpu));
 }
 
 static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
@@ -617,8 +618,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 		update_vttbr(vcpu->kvm);
 
-		if (vcpu->arch.power_off || __kvm_request_test(KVM_REQ_VCPU_PAUSE, vcpu))
-			vcpu_sleep(vcpu);
+		if (kvm_request_pending(vcpu)) {
+			if (__kvm_request_test(KVM_REQ_VCPU_POWER_OFF, vcpu) ||
+			    __kvm_request_test(KVM_REQ_VCPU_PAUSE, vcpu))
+				vcpu_sleep(vcpu);
+		}
 
 		/*
 		 * Preparing the interrupts to be injected also
@@ -643,8 +647,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		vcpu->mode = IN_GUEST_MODE;
 		smp_mb(); /* ensure vcpu->mode is visible to kvm_vcpu_kick */
 
-		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm) ||
-			vcpu->arch.power_off || kvm_request_pending(vcpu)) {
+		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm)
+		    || kvm_request_pending(vcpu)) {
 			vcpu->mode = OUTSIDE_GUEST_MODE;
 			smp_mb();
 			local_irq_enable();
@@ -872,9 +876,9 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	 * Handle the "start in power-off" case.
 	 */
 	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
-		vcpu->arch.power_off = true;
+		__kvm_request_set(KVM_REQ_VCPU_POWER_OFF, vcpu);
 	else
-		vcpu->arch.power_off = false;
+		__kvm_request_clear(KVM_REQ_VCPU_POWER_OFF, vcpu);
 
 	return 0;
 }
diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
index fd7e381b13b2..6c6255f9d8ff 100644
--- a/arch/arm/kvm/psci.c
+++ b/arch/arm/kvm/psci.c
@@ -64,7 +64,7 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
 
 static void kvm_psci_vcpu_off(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.power_off = true;
+	__kvm_request_set(KVM_REQ_VCPU_POWER_OFF, vcpu);
 }
 
 static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
@@ -88,7 +88,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
 	 */
 	if (!vcpu)
 		return PSCI_RET_INVALID_PARAMS;
-	if (!vcpu->arch.power_off) {
+	if (!__kvm_request_test(KVM_REQ_VCPU_POWER_OFF, vcpu)) {
 		if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
 			return PSCI_RET_ALREADY_ON;
 		else
@@ -116,8 +116,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
 	 * the general purpose registers are undefined upon CPU_ON.
 	 */
 	vcpu_set_reg(vcpu, 0, context_id);
-	vcpu->arch.power_off = false;
-	smp_mb();		/* Make sure the above is visible */
+	__kvm_request_clear(KVM_REQ_VCPU_POWER_OFF, vcpu);
 
 	wq = kvm_arch_vcpu_wq(vcpu);
 	swake_up(wq);
@@ -154,7 +153,7 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
 		mpidr = kvm_vcpu_get_mpidr_aff(tmp);
 		if ((mpidr & target_affinity_mask) == target_affinity) {
 			matching_cpus++;
-			if (!tmp->arch.power_off)
+			if (!__kvm_request_test(KVM_REQ_VCPU_POWER_OFF, tmp))
 				return PSCI_0_2_AFFINITY_LEVEL_ON;
 		}
 	}
@@ -167,9 +166,6 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
 
 static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type)
 {
-	int i;
-	struct kvm_vcpu *tmp;
-
 	/*
 	 * The KVM ABI specifies that a system event exit may call KVM_RUN
 	 * again and may perform shutdown/reboot at a later time that when the
@@ -179,10 +175,7 @@ static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type)
 	 * after this call is handled and before the VCPUs have been
 	 * re-initialized.
 	 */
-	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
-		tmp->arch.power_off = true;
-		kvm_vcpu_kick(tmp);
-	}
+	kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_VCPU_POWER_OFF);
 
 	memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event));
 	vcpu->run->system_event.type = type;
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c03e1fc3bc34..d1e4b5d962eb 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -44,6 +44,7 @@
 #define KVM_VCPU_MAX_FEATURES 4
 
 #define KVM_REQ_VCPU_PAUSE	8
+#define KVM_REQ_VCPU_POWER_OFF	9
 
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
@@ -254,9 +255,6 @@ struct kvm_vcpu_arch {
 		u32	mdscr_el1;
 	} guest_debug_preserved;
 
-	/* vcpu power-off state */
-	bool power_off;
-
 	/* IO related fields */
 	struct kvm_decode mmio_decode;
 
-- 
2.9.3


* [PATCH 4/5] KVM: arm/arm64: fix race in kvm_psci_vcpu_on
  2017-02-27 17:54 [PATCH 0/5] KVM: arm/arm64: fix some races and allow userspace to set MPIDR Andrew Jones
                   ` (2 preceding siblings ...)
  2017-02-27 17:55 ` [PATCH 3/5] KVM: arm/arm64: replace vcpu->arch.power_off " Andrew Jones
@ 2017-02-27 17:55 ` Andrew Jones
  2017-02-27 17:55 ` [PATCH 5/5] KVM: arm/arm64: allow userspace to set MPIDR Andrew Jones
  2017-03-08 17:27 ` [PATCH 0/5] KVM: arm/arm64: fix some races and " Christoffer Dall
  5 siblings, 0 replies; 18+ messages in thread
From: Andrew Jones @ 2017-02-27 17:55 UTC (permalink / raw)
  To: kvmarm; +Cc: Levente Kurusa, marc.zyngier, ashoks, imammedo

From: Levente Kurusa <lkurusa@redhat.com>

When two vcpus issue PSCI_CPU_ON targeting the same vcpu at the
same time, it's possible for both of them to enter the target
vcpu's setup concurrently. This results in unexpected behavior at
best, and has the potential for some nasty bugs at worst.
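
The fix relies on test_and_clear_bit() being atomic: of two racing
PSCI_CPU_ON callers, exactly one observes the power-off bit set and
clears it, so only that one proceeds with the target's setup. A
hedged userspace sketch of that claim semantic (a stand-in for the
kernel primitive, not its implementation):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define REQ_POWER_OFF 9	/* matches KVM_REQ_VCPU_POWER_OFF in this series */

/* Userspace stand-in for the kernel's atomic test_and_clear_bit(). */
static bool test_and_clear_req(atomic_ulong *requests, unsigned int req)
{
	unsigned long mask = 1UL << req;

	/* fetch_and returns the old value: the bit was ours iff it was set */
	return atomic_fetch_and(requests, ~mask) & mask;
}
```

Even if two callers race, the atomic fetch-and serializes them: the
loser sees the bit already cleared and can return
PSCI_RET_ALREADY_ON instead of re-running the target's setup.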

Signed-off-by: Levente Kurusa <lkurusa@redhat.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm/kvm/psci.c      | 4 ++--
 include/linux/kvm_host.h | 5 +++++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
index 6c6255f9d8ff..1fccbeb1c0be 100644
--- a/arch/arm/kvm/psci.c
+++ b/arch/arm/kvm/psci.c
@@ -88,7 +88,8 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
 	 */
 	if (!vcpu)
 		return PSCI_RET_INVALID_PARAMS;
-	if (!__kvm_request_test(KVM_REQ_VCPU_POWER_OFF, vcpu)) {
+
+	if (!__kvm_request_test_and_clear(KVM_REQ_VCPU_POWER_OFF, vcpu)) {
 		if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
 			return PSCI_RET_ALREADY_ON;
 		else
@@ -116,7 +117,6 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
 	 * the general purpose registers are undefined upon CPU_ON.
 	 */
 	vcpu_set_reg(vcpu, 0, context_id);
-	__kvm_request_clear(KVM_REQ_VCPU_POWER_OFF, vcpu);
 
 	wq = kvm_arch_vcpu_wq(vcpu);
 	swake_up(wq);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 97b6bd6318e5..df9df7677510 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1135,6 +1135,11 @@ static inline void __kvm_request_clear(unsigned req, struct kvm_vcpu *vcpu)
 	clear_bit(req, &vcpu->requests);
 }
 
+static inline bool __kvm_request_test_and_clear(unsigned req, struct kvm_vcpu *vcpu)
+{
+	return test_and_clear_bit(req, &vcpu->requests);
+}
+
 static inline bool kvm_request_test_and_clear(unsigned req, struct kvm_vcpu *vcpu)
 {
 	if (__kvm_request_test(req, vcpu)) {
-- 
2.9.3


* [PATCH 5/5] KVM: arm/arm64: allow userspace to set MPIDR
  2017-02-27 17:54 [PATCH 0/5] KVM: arm/arm64: fix some races and allow userspace to set MPIDR Andrew Jones
                   ` (3 preceding siblings ...)
  2017-02-27 17:55 ` [PATCH 4/5] KVM: arm/arm64: fix race in kvm_psci_vcpu_on Andrew Jones
@ 2017-02-27 17:55 ` Andrew Jones
  2017-03-08 13:19   ` Christoffer Dall
  2017-03-08 17:27 ` [PATCH 0/5] KVM: arm/arm64: fix some races and " Christoffer Dall
  5 siblings, 1 reply; 18+ messages in thread
From: Andrew Jones @ 2017-02-27 17:55 UTC (permalink / raw)
  To: kvmarm; +Cc: marc.zyngier, ashoks, imammedo

QEMU would already prefer having control over the vcpu MPIDRs
in order to correctly map vcpus to virtual NUMA nodes without
having to first instantiate the vcpus. Also, when virtual cpu
topologies are eventually describable, QEMU may need to adjust
the MPIDR to reflect that topology. Finally, we need to cache
the MPIDR in the vcpu structure to fix potential races that can
arise between vcpu reset and the extraction of the MPIDR from
the sys-reg array. So, while we're already tinkering with the
MPIDR to provide it a cache, we might as well also extend it to
allow userspace to set it.
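
With this in place, userspace would set the MPIDR through the
existing KVM_SET_ONE_REG ioctl before KVM_ARM_VCPU_INIT. The sketch
below only shows the AArch64 register-ID encoding and the shape of
the call; the encoding constants are copied from the uapi headers
as I understand them, so treat this as an assumption-laden
illustration rather than a tested QEMU patch:

```c
#include <assert.h>
#include <stdint.h>

/* Constants from <linux/kvm.h> / <asm/kvm.h> (assumed, double-check). */
#define KVM_REG_ARM64		0x6000000000000000ULL
#define KVM_REG_SIZE_U64	0x0030000000000000ULL
#define KVM_REG_ARM64_SYSREG	(0x0013ULL << 16)

/* Encode an AArch64 system register (op0,op1,CRn,CRm,op2) as a reg id. */
static uint64_t arm64_sys_reg(uint64_t op0, uint64_t op1, uint64_t crn,
			      uint64_t crm, uint64_t op2)
{
	return KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM64_SYSREG |
	       (op0 << 14) | (op1 << 11) | (crn << 7) | (crm << 3) | op2;
}

/*
 * MPIDR_EL1 is op0=3, op1=0, CRn=0, CRm=0, op2=5 (see the sys_regs.c
 * hunk in this patch). The actual call, given a vcpu fd, would look
 * roughly like:
 *
 *	struct kvm_one_reg one_reg = {
 *		.id   = arm64_sys_reg(3, 0, 0, 0, 5),
 *		.addr = (uint64_t)&mpidr_value,
 *	};
 *	ioctl(vcpu_fd, KVM_SET_ONE_REG, &one_reg);
 */
```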

Cc: Ashok Kumar <ashoks@broadcom.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 arch/arm/include/asm/kvm_emulate.h   |  2 +-
 arch/arm/include/asm/kvm_host.h      |  3 ++
 arch/arm/kvm/arm.c                   |  1 +
 arch/arm/kvm/coproc.c                | 61 +++++++++++++++++++++++++++++-----
 arch/arm/kvm/coproc.h                |  6 ++++
 arch/arm64/include/asm/kvm_emulate.h |  2 +-
 arch/arm64/include/asm/kvm_host.h    |  3 ++
 arch/arm64/kvm/sys_regs.c            | 64 ++++++++++++++++++++++++++++--------
 include/uapi/linux/kvm.h             |  1 +
 9 files changed, 119 insertions(+), 24 deletions(-)

diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
index 9a8a45aaf19a..1b922de46785 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -213,7 +213,7 @@ static inline u32 kvm_vcpu_hvc_get_imm(struct kvm_vcpu *vcpu)
 
 static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
 {
-	return vcpu_cp15(vcpu, c0_MPIDR) & MPIDR_HWID_BITMASK;
+	return vcpu->arch.vmpidr & MPIDR_HWID_BITMASK;
 }
 
 static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 73f61773e9c2..6427cac77204 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -151,6 +151,9 @@ struct kvm_vcpu_arch {
 	/* The CPU type we expose to the VM */
 	u32 midr;
 
+	/* vcpu MPIDR */
+	u32 vmpidr;
+
 	/* HYP trapping configuration */
 	u32 hcr;
 
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index aa68354a0bf8..899528b884d9 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -207,6 +207,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_READONLY_MEM:
 	case KVM_CAP_MP_STATE:
 	case KVM_CAP_IMMEDIATE_EXIT:
+	case KVM_CAP_ARM_USER_MPIDR:
 		r = 1;
 		break;
 	case KVM_CAP_COALESCED_MMIO:
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 3e5e4194ef86..30d35dd407a6 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -99,16 +99,55 @@ int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return 1;
 }
 
-static void reset_mpidr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
+static int set_mpidr(struct kvm_vcpu *vcpu, const struct coproc_reg *rd,
+		     const struct kvm_one_reg *reg, void __user *uaddr)
 {
+	struct kvm_vcpu *v;
+	__u32 mpidr;
+	int i;
+
+	if (copy_from_user(&mpidr, uaddr, KVM_REG_SIZE(reg->id)))
+		return -EFAULT;
+
 	/*
-	 * Compute guest MPIDR. We build a virtual cluster out of the
-	 * vcpu_id, but we read the 'U' bit from the underlying
-	 * hardware directly.
+	 * All ARMv7 processors that support KVM include the
+	 * Multiprocessing Extensions. Furthermore, without at
+	 * least one bit set in the MPIDR we won't be able to
+	 * distinguish a user set MPIDR from an unset MPIDR in
+	 * reset_mpidr. Thus we force bit 31 on.
 	 */
-	vcpu_cp15(vcpu, c0_MPIDR) = ((read_cpuid_mpidr() & MPIDR_SMP_BITMASK) |
-				     ((vcpu->vcpu_id >> 2) << MPIDR_LEVEL_BITS) |
-				     (vcpu->vcpu_id & 3));
+	mpidr |= (1UL << 31);
+
+	/* Ensure flag consistency and ID uniqueness */
+	kvm_for_each_vcpu(i, v, vcpu->kvm) {
+		if (v == vcpu)
+			continue;
+		if (((v->arch.vmpidr | mpidr) & MPIDR_SMP_BITMASK) != MPIDR_SMP_VALUE
+		    || ((mpidr & MPIDR_MT_BITMASK) && !(v->arch.vmpidr & MPIDR_MT_BITMASK))
+		    || v->arch.vmpidr == mpidr)
+			return -EINVAL;
+	}
+
+	vcpu->arch.vmpidr = mpidr;
+	vcpu_cp15(vcpu, c0_MPIDR) = mpidr;
+
+	return 0;
+}
+
+static void reset_mpidr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
+{
+	if (!vcpu->arch.vmpidr) {
+		/*
+		 * Compute guest MPIDR. We build a virtual cluster out of the
+		 * vcpu_id, but we read the 'U' bit from the underlying
+		 * hardware directly.
+		 */
+		u32 mpidr = ((read_cpuid_mpidr() & MPIDR_SMP_BITMASK) |
+			     ((vcpu->vcpu_id >> 2) << MPIDR_LEVEL_BITS) |
+			     (vcpu->vcpu_id & 3));
+		vcpu->arch.vmpidr = mpidr;
+	}
+	vcpu_cp15(vcpu, c0_MPIDR) = vcpu->arch.vmpidr;
 }
 
 /* TRM entries A7:4.3.31 A15:4.3.28 - RO WI */
@@ -300,7 +339,7 @@ static bool pm_fake(struct kvm_vcpu *vcpu,
 static const struct coproc_reg cp15_regs[] = {
 	/* MPIDR: we use VMPIDR for guest access. */
 	{ CRn( 0), CRm( 0), Op1( 0), Op2( 5), is32,
-			NULL, reset_mpidr, c0_MPIDR },
+			NULL, reset_mpidr, c0_MPIDR, 0, NULL, set_mpidr },
 
 	/* CSSELR: swapped by interrupt.S. */
 	{ CRn( 0), CRm( 0), Op1( 2), Op2( 0), is32,
@@ -1072,6 +1111,9 @@ int kvm_arm_coproc_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (!r)
 		return get_invariant_cp15(reg->id, uaddr);
 
+	if (r->get_user)
+		return (r->get_user)(vcpu, r, reg, uaddr);
+
 	ret = -ENOENT;
 	if (KVM_REG_SIZE(reg->id) == 8) {
 		u64 val;
@@ -1101,6 +1143,9 @@ int kvm_arm_coproc_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	if (!r)
 		return set_invariant_cp15(reg->id, uaddr);
 
+	if (r->set_user)
+		return (r->set_user)(vcpu, r, reg, uaddr);
+
 	ret = -ENOENT;
 	if (KVM_REG_SIZE(reg->id) == 8) {
 		u64 val;
diff --git a/arch/arm/kvm/coproc.h b/arch/arm/kvm/coproc.h
index eef1759c2b65..7107f19cac5e 100644
--- a/arch/arm/kvm/coproc.h
+++ b/arch/arm/kvm/coproc.h
@@ -52,6 +52,12 @@ struct coproc_reg {
 
 	/* Value (usually reset value) */
 	u64 val;
+
+	/* Custom get/set_user functions, fallback to generic if NULL */
+	int (*get_user)(struct kvm_vcpu *vcpu, const struct coproc_reg *rd,
+			const struct kvm_one_reg *reg, void __user *uaddr);
+	int (*set_user)(struct kvm_vcpu *vcpu, const struct coproc_reg *rd,
+			const struct kvm_one_reg *reg, void __user *uaddr);
 };
 
 static inline void print_cp_instr(const struct coproc_params *p)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index f5ea0ba70f07..c138bb15b507 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -242,7 +242,7 @@ static inline u8 kvm_vcpu_trap_get_fault_type(const struct kvm_vcpu *vcpu)
 
 static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
 {
-	return vcpu_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
+	return vcpu->arch.vmpidr_el2 & MPIDR_HWID_BITMASK;
 }
 
 static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d1e4b5d962eb..31aed40a3724 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -198,6 +198,9 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
 
+	/* vcpu MPIDR */
+	u64 vmpidr_el2;
+
 	/* HYP configuration */
 	u64 hcr_el2;
 	u32 mdcr_el2;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 0e26f8c2b56f..2b558ac034ce 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -429,21 +429,57 @@ static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	vcpu_sys_reg(vcpu, AMAIR_EL1) = read_sysreg(amair_el1);
 }
 
-static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static int set_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+		     const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	u64 mpidr;
+	struct kvm_vcpu *v;
+	__u64 mpidr;
+	int i;
 
-	/*
-	 * Map the vcpu_id into the first three affinity level fields of
-	 * the MPIDR. We limit the number of VCPUs in level 0 due to a
-	 * limitation to 16 CPUs in that level in the ICC_SGIxR registers
-	 * of the GICv3 to be able to address each CPU directly when
-	 * sending IPIs.
-	 */
-	mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0);
-	mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1);
-	mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) << MPIDR_LEVEL_SHIFT(2);
-	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
+	if (copy_from_user(&mpidr, uaddr, KVM_REG_SIZE(reg->id)))
+		return -EFAULT;
+
+	/* Ensure RES1 bits are set */
+	mpidr |= (1ULL << 31);
+
+	/* Ensure no RES0 bits are set */
+	if ((mpidr & (GENMASK_ULL(63, 40) | GENMASK(29, 25))) ||
+	    (vcpu_mode_is_32bit(vcpu) && (mpidr & GENMASK_ULL(39, 32))))
+		return -EINVAL;
+
+	/* Ensure flag consistency and ID uniqueness */
+	kvm_for_each_vcpu(i, v, vcpu->kvm) {
+		if (v == vcpu)
+			continue;
+		if (((v->arch.vmpidr_el2 | mpidr) & MPIDR_UP_BITMASK)
+		    || ((mpidr & MPIDR_MT_BITMASK) && !(v->arch.vmpidr_el2 & MPIDR_MT_BITMASK))
+		    || v->arch.vmpidr_el2 == mpidr)
+			return -EINVAL;
+	}
+
+	vcpu->arch.vmpidr_el2 = mpidr;
+	vcpu_sys_reg(vcpu, MPIDR_EL1) = mpidr;
+
+	return 0;
+}
+
+static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	if (!vcpu->arch.vmpidr_el2) {
+		/*
+		 * Userspace didn't provide us an MPIDR, so we construct one
+		 * from the vcpu_id by mapping it into the first three affinity
+		 * levels. We limit the number of VCPUs in level 0 due to a
+		 * limitation of 16 CPUs in that level in the ICC_SGIxR
+		 * registers of the GICv3, which are used to address each CPU
+		 * directly when sending IPIs.
+		 */
+		u64 mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0);
+		mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1);
+		mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) << MPIDR_LEVEL_SHIFT(2);
+		vcpu->arch.vmpidr_el2 = (1ULL << 31) | mpidr;
+	}
+	vcpu_sys_reg(vcpu, MPIDR_EL1) = vcpu->arch.vmpidr_el2;
 }
 
 static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
@@ -961,7 +997,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
 	/* MPIDR_EL1 */
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101),
-	  NULL, reset_mpidr, MPIDR_EL1 },
+	  NULL, reset_mpidr, MPIDR_EL1, 0, NULL, set_mpidr },
 	/* SCTLR_EL1 */
 	{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
 	  access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index f51d5082a377..c20b58fe84cd 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -883,6 +883,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_PPC_MMU_RADIX 134
 #define KVM_CAP_PPC_MMU_HASH_V3 135
 #define KVM_CAP_IMMEDIATE_EXIT 136
+#define KVM_CAP_ARM_USER_MPIDR 137
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH 5/5] KVM: arm/arm64: allow userspace to set MPIDR
  2017-02-27 17:55 ` [PATCH 5/5] KVM: arm/arm64: allow userspace to set MPIDR Andrew Jones
@ 2017-03-08 13:19   ` Christoffer Dall
  2017-03-08 14:27     ` Peter Maydell
  0 siblings, 1 reply; 18+ messages in thread
From: Christoffer Dall @ 2017-03-08 13:19 UTC (permalink / raw)
  To: Andrew Jones; +Cc: marc.zyngier, ashoks, imammedo, kvmarm

On Mon, Feb 27, 2017 at 06:55:04PM +0100, Andrew Jones wrote:
> QEMU would already prefer having control over the vcpu MPIDRs
> in order to correctly map vcpus to virtual numa nodes without
> having to first instantiate the vcpus. Also, when virtual cpu
> topologies are eventually describable, QEMU may need to adjust
> the MPIDR to reflect that topology. Finally, we need to cache
> the MPIDR in the vcpu structure to fix potential races that can
> arise between vcpu reset and the extraction of the MPIDR from
> the sys-reg array. So, while we're already giving the MPIDR a
> cache, we might as well extend its capabilities to allow
> userspace to set it too.
> 
> Cc: Ashok Kumar <ashoks@broadcom.com>
> Cc: Igor Mammedov <imammedo@redhat.com>
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
>  arch/arm/include/asm/kvm_emulate.h   |  2 +-
>  arch/arm/include/asm/kvm_host.h      |  3 ++
>  arch/arm/kvm/arm.c                   |  1 +
>  arch/arm/kvm/coproc.c                | 61 +++++++++++++++++++++++++++++-----
>  arch/arm/kvm/coproc.h                |  6 ++++
>  arch/arm64/include/asm/kvm_emulate.h |  2 +-
>  arch/arm64/include/asm/kvm_host.h    |  3 ++
>  arch/arm64/kvm/sys_regs.c            | 64 ++++++++++++++++++++++++++++--------
>  include/uapi/linux/kvm.h             |  1 +
>  9 files changed, 119 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
> index 9a8a45aaf19a..1b922de46785 100644
> --- a/arch/arm/include/asm/kvm_emulate.h
> +++ b/arch/arm/include/asm/kvm_emulate.h
> @@ -213,7 +213,7 @@ static inline u32 kvm_vcpu_hvc_get_imm(struct kvm_vcpu *vcpu)
>  
>  static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
>  {
> -	return vcpu_cp15(vcpu, c0_MPIDR) & MPIDR_HWID_BITMASK;
> +	return vcpu->arch.vmpidr & MPIDR_HWID_BITMASK;
>  }
>  
>  static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 73f61773e9c2..6427cac77204 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -151,6 +151,9 @@ struct kvm_vcpu_arch {
>  	/* The CPU type we expose to the VM */
>  	u32 midr;
>  
> +	/* vcpu MPIDR */
> +	u32 vmpidr;
> +
>  	/* HYP trapping configuration */
>  	u32 hcr;
>  
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index aa68354a0bf8..899528b884d9 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -207,6 +207,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_READONLY_MEM:
>  	case KVM_CAP_MP_STATE:
>  	case KVM_CAP_IMMEDIATE_EXIT:
> +	case KVM_CAP_ARM_USER_MPIDR:
>  		r = 1;
>  		break;
>  	case KVM_CAP_COALESCED_MMIO:
> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
> index 3e5e4194ef86..30d35dd407a6 100644
> --- a/arch/arm/kvm/coproc.c
> +++ b/arch/arm/kvm/coproc.c
> @@ -99,16 +99,55 @@ int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return 1;
>  }
>  
> -static void reset_mpidr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
> +static int set_mpidr(struct kvm_vcpu *vcpu, const struct coproc_reg *rd,
> +		     const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> +	struct kvm_vcpu *v;
> +	__u32 mpidr;
> +	int i;
> +
> +	if (copy_from_user(&mpidr, uaddr, KVM_REG_SIZE(reg->id)))
> +		return -EFAULT;
> +
>  	/*
> -	 * Compute guest MPIDR. We build a virtual cluster out of the
> -	 * vcpu_id, but we read the 'U' bit from the underlying
> -	 * hardware directly.
> +	 * All ARMv7 processors that support KVM include the
> +	 * Multiprocessing Extensions. Furthermore, without at
> +	 * least one bit set in the MPIDR we won't be able to
> +	 * distinguish a user set MPIDR from an unset MPIDR in
> +	 * reset_mpidr. Thus we force bit 31 on.
>  	 */
> -	vcpu_cp15(vcpu, c0_MPIDR) = ((read_cpuid_mpidr() & MPIDR_SMP_BITMASK) |
> -				     ((vcpu->vcpu_id >> 2) << MPIDR_LEVEL_BITS) |
> -				     (vcpu->vcpu_id & 3));
> +	mpidr |= (1UL << 31);
> +
> +	/* Ensure flag consistency and ID uniqueness */
> +	kvm_for_each_vcpu(i, v, vcpu->kvm) {
> +		if (v == vcpu)
> +			continue;
> +		if (((v->arch.vmpidr | mpidr) & MPIDR_SMP_BITMASK) != MPIDR_SMP_VALUE
> +		    || ((mpidr & MPIDR_MT_BITMASK) && !(v->arch.vmpidr & MPIDR_MT_BITMASK))
> +		    || v->arch.vmpidr == mpidr)
> +			return -EINVAL;
> +	}
> +
> +	vcpu->arch.vmpidr = mpidr;
> +	vcpu_cp15(vcpu, c0_MPIDR) = mpidr;

Isn't this all super racy?  What prevents two VCPUs from getting the
same MPIDR at the same time?

How do you support initializing the MPIDRs with a default value
(existing ABI that we must maintain), but then making vcpu 1 have the
mpidr of vcpu 0 and vice versa?

I think you also need to make sure that you can only modify the MPIDR
before actually running the VCPU.

You can take a look at kvm_vgic_create at how we check has_run_once for
inspiration.

Thanks,
-Christoffer

> +
> +	return 0;
> +}
> +
> +static void reset_mpidr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
> +{
> +	if (!vcpu->arch.vmpidr) {
> +		/*
> +		 * Compute guest MPIDR. We build a virtual cluster out of the
> +		 * vcpu_id, but we read the 'U' bit from the underlying
> +		 * hardware directly.
> +		 */
> +		u32 mpidr = ((read_cpuid_mpidr() & MPIDR_SMP_BITMASK) |
> +			     ((vcpu->vcpu_id >> 2) << MPIDR_LEVEL_BITS) |
> +			     (vcpu->vcpu_id & 3));
> +		vcpu->arch.vmpidr = mpidr;
> +	}
> +	vcpu_cp15(vcpu, c0_MPIDR) = vcpu->arch.vmpidr;
>  }
>  
>  /* TRM entries A7:4.3.31 A15:4.3.28 - RO WI */
> @@ -300,7 +339,7 @@ static bool pm_fake(struct kvm_vcpu *vcpu,
>  static const struct coproc_reg cp15_regs[] = {
>  	/* MPIDR: we use VMPIDR for guest access. */
>  	{ CRn( 0), CRm( 0), Op1( 0), Op2( 5), is32,
> -			NULL, reset_mpidr, c0_MPIDR },
> +			NULL, reset_mpidr, c0_MPIDR, 0, NULL, set_mpidr },
>  
>  	/* CSSELR: swapped by interrupt.S. */
>  	{ CRn( 0), CRm( 0), Op1( 2), Op2( 0), is32,
> @@ -1072,6 +1111,9 @@ int kvm_arm_coproc_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  	if (!r)
>  		return get_invariant_cp15(reg->id, uaddr);
>  
> +	if (r->get_user)
> +		return (r->get_user)(vcpu, r, reg, uaddr);
> +
>  	ret = -ENOENT;
>  	if (KVM_REG_SIZE(reg->id) == 8) {
>  		u64 val;
> @@ -1101,6 +1143,9 @@ int kvm_arm_coproc_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  	if (!r)
>  		return set_invariant_cp15(reg->id, uaddr);
>  
> +	if (r->set_user)
> +		return (r->set_user)(vcpu, r, reg, uaddr);
> +
>  	ret = -ENOENT;
>  	if (KVM_REG_SIZE(reg->id) == 8) {
>  		u64 val;
> diff --git a/arch/arm/kvm/coproc.h b/arch/arm/kvm/coproc.h
> index eef1759c2b65..7107f19cac5e 100644
> --- a/arch/arm/kvm/coproc.h
> +++ b/arch/arm/kvm/coproc.h
> @@ -52,6 +52,12 @@ struct coproc_reg {
>  
>  	/* Value (usually reset value) */
>  	u64 val;
> +
> +	/* Custom get/set_user functions, fallback to generic if NULL */
> +	int (*get_user)(struct kvm_vcpu *vcpu, const struct coproc_reg *rd,
> +			const struct kvm_one_reg *reg, void __user *uaddr);
> +	int (*set_user)(struct kvm_vcpu *vcpu, const struct coproc_reg *rd,
> +			const struct kvm_one_reg *reg, void __user *uaddr);
>  };
>  
>  static inline void print_cp_instr(const struct coproc_params *p)
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index f5ea0ba70f07..c138bb15b507 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -242,7 +242,7 @@ static inline u8 kvm_vcpu_trap_get_fault_type(const struct kvm_vcpu *vcpu)
>  
>  static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
>  {
> -	return vcpu_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
> +	return vcpu->arch.vmpidr_el2 & MPIDR_HWID_BITMASK;
>  }
>  
>  static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index d1e4b5d962eb..31aed40a3724 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -198,6 +198,9 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
>  struct kvm_vcpu_arch {
>  	struct kvm_cpu_context ctxt;
>  
> +	/* vcpu MPIDR */
> +	u64 vmpidr_el2;
> +
>  	/* HYP configuration */
>  	u64 hcr_el2;
>  	u32 mdcr_el2;
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 0e26f8c2b56f..2b558ac034ce 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -429,21 +429,57 @@ static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  	vcpu_sys_reg(vcpu, AMAIR_EL1) = read_sysreg(amair_el1);
>  }
>  
> -static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +static int set_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
> +		     const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> -	u64 mpidr;
> +	struct kvm_vcpu *v;
> +	__u64 mpidr;
> +	int i;
>  
> -	/*
> -	 * Map the vcpu_id into the first three affinity level fields of
> -	 * the MPIDR. We limit the number of VCPUs in level 0 due to a
> -	 * limitation to 16 CPUs in that level in the ICC_SGIxR registers
> -	 * of the GICv3 to be able to address each CPU directly when
> -	 * sending IPIs.
> -	 */
> -	mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0);
> -	mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1);
> -	mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) << MPIDR_LEVEL_SHIFT(2);
> -	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
> +	if (copy_from_user(&mpidr, uaddr, KVM_REG_SIZE(reg->id)))
> +		return -EFAULT;
> +
> +	/* Ensure RES1 bits are set */
> +	mpidr |= (1ULL << 31);
> +
> +	/* Ensure no RES0 bits are set */
> +	if ((mpidr & (GENMASK_ULL(63, 40) | GENMASK(29, 25))) ||
> +	    (vcpu_mode_is_32bit(vcpu) && (mpidr & GENMASK_ULL(39, 32))))
> +		return -EINVAL;
> +
> +	/* Ensure flag consistency and ID uniqueness */
> +	kvm_for_each_vcpu(i, v, vcpu->kvm) {
> +		if (v == vcpu)
> +			continue;
> +		if (((v->arch.vmpidr_el2 | mpidr) & MPIDR_UP_BITMASK)
> +		    || ((mpidr & MPIDR_MT_BITMASK) && !(v->arch.vmpidr_el2 & MPIDR_MT_BITMASK))
> +		    || v->arch.vmpidr_el2 == mpidr)
> +			return -EINVAL;
> +	}
> +
> +	vcpu->arch.vmpidr_el2 = mpidr;
> +	vcpu_sys_reg(vcpu, MPIDR_EL1) = mpidr;
> +
> +	return 0;
> +}
> +
> +static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	if (!vcpu->arch.vmpidr_el2) {
> +		/*
> +		 * Userspace didn't provide us an MPIDR, so we construct one
> +		 * from the vcpu_id by mapping it into the first three affinity
> +		 * levels. We limit the number of VCPUs in level 0 due to a
> +		 * limitation of 16 CPUs in that level in the ICC_SGIxR
> +		 * registers of the GICv3, which are used to address each CPU
> +		 * directly when sending IPIs.
> +		 */
> +		u64 mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0);
> +		mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1);
> +		mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) << MPIDR_LEVEL_SHIFT(2);
> +		vcpu->arch.vmpidr_el2 = (1ULL << 31) | mpidr;
> +	}
> +	vcpu_sys_reg(vcpu, MPIDR_EL1) = vcpu->arch.vmpidr_el2;
>  }
>  
>  static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> @@ -961,7 +997,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  
>  	/* MPIDR_EL1 */
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101),
> -	  NULL, reset_mpidr, MPIDR_EL1 },
> +	  NULL, reset_mpidr, MPIDR_EL1, 0, NULL, set_mpidr },
>  	/* SCTLR_EL1 */
>  	{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
>  	  access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index f51d5082a377..c20b58fe84cd 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -883,6 +883,7 @@ struct kvm_ppc_resize_hpt {
>  #define KVM_CAP_PPC_MMU_RADIX 134
>  #define KVM_CAP_PPC_MMU_HASH_V3 135
>  #define KVM_CAP_IMMEDIATE_EXIT 136
> +#define KVM_CAP_ARM_USER_MPIDR 137
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  
> -- 
> 2.9.3
> 


* Re: [PATCH 1/5] KVM: arm/arm64: prepare to use vcpu requests
  2017-02-27 17:55 ` [PATCH 1/5] KVM: arm/arm64: prepare to use vcpu requests Andrew Jones
@ 2017-03-08 13:21   ` Christoffer Dall
  0 siblings, 0 replies; 18+ messages in thread
From: Christoffer Dall @ 2017-03-08 13:21 UTC (permalink / raw)
  To: Andrew Jones; +Cc: marc.zyngier, ashoks, imammedo, kvmarm

On Mon, Feb 27, 2017 at 06:55:00PM +0100, Andrew Jones wrote:
> Make sure we don't leave vcpu requests we don't intend to
> handle later set in the request bitmap. If we don't clear
> them, then kvm_request_pending() may return true when we
> don't want it to.
> 
> Signed-off-by: Andrew Jones <drjones@redhat.com>

Acked-by: Christoffer Dall <cdall@linaro.org>

> ---
>  arch/arm/kvm/handle_exit.c   | 1 +
>  arch/arm/kvm/psci.c          | 1 +
>  arch/arm64/kvm/handle_exit.c | 1 +
>  3 files changed, 3 insertions(+)
> 
> diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
> index 4e40d1955e35..2ec31748fa0b 100644
> --- a/arch/arm/kvm/handle_exit.c
> +++ b/arch/arm/kvm/handle_exit.c
> @@ -72,6 +72,7 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  		trace_kvm_wfx(*vcpu_pc(vcpu), false);
>  		vcpu->stat.wfi_exit_stat++;
>  		kvm_vcpu_block(vcpu);
> +		__kvm_request_clear(KVM_REQ_UNHALT, vcpu);
>  	}
>  
>  	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> diff --git a/arch/arm/kvm/psci.c b/arch/arm/kvm/psci.c
> index c2b131527a64..fd7e381b13b2 100644
> --- a/arch/arm/kvm/psci.c
> +++ b/arch/arm/kvm/psci.c
> @@ -57,6 +57,7 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
>  	 * for KVM will preserve the register state.
>  	 */
>  	kvm_vcpu_block(vcpu);
> +	__kvm_request_clear(KVM_REQ_UNHALT, vcpu);
>  
>  	return PSCI_RET_SUCCESS;
>  }
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 1bfe30dfbfe7..b34971d7c30a 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -89,6 +89,7 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  		trace_kvm_wfx_arm64(*vcpu_pc(vcpu), false);
>  		vcpu->stat.wfi_exit_stat++;
>  		kvm_vcpu_block(vcpu);
> +		__kvm_request_clear(KVM_REQ_UNHALT, vcpu);
>  	}
>  
>  	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> -- 
> 2.9.3
> 


* Re: [PATCH 5/5] KVM: arm/arm64: allow userspace to set MPIDR
  2017-03-08 13:19   ` Christoffer Dall
@ 2017-03-08 14:27     ` Peter Maydell
  2017-03-08 17:21       ` Christoffer Dall
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Maydell @ 2017-03-08 14:27 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: Marc Zyngier, Ashok Kumar, Igor Mammedov, kvmarm

On 8 March 2017 at 14:19, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> Isn't this all super racy?  What prevents two VCPUs from getting the
> same MPIDR at the same time?

From a userspace API point of view, I was expecting this to be
"just don't do that then" territory, ie it's userspace's job
to set the MPIDRs to something that makes sense. Does KVM
actually care internally what the MPIDR presented to the guest is?
(In theory I don't see why you shouldn't be able to present
the guest with a bit of bogus hardware that claims the same
MPIDR for all cores.)

thanks
-- PMM


* Re: [PATCH 2/5] KVM: arm/arm64: replace vcpu->arch.pause with a vcpu-request
  2017-02-27 17:55 ` [PATCH 2/5] KVM: arm/arm64: replace vcpu->arch.pause with a vcpu-request Andrew Jones
@ 2017-03-08 14:33   ` Christoffer Dall
  2017-03-08 14:44     ` Andrew Jones
  0 siblings, 1 reply; 18+ messages in thread
From: Christoffer Dall @ 2017-03-08 14:33 UTC (permalink / raw)
  To: Andrew Jones; +Cc: marc.zyngier, ashoks, imammedo, kvmarm

On Mon, Feb 27, 2017 at 06:55:01PM +0100, Andrew Jones wrote:
> This not only ensures visibility of changes to pause by using
> atomic ops, but also plugs a small race where a vcpu could get
> its pause state enabled just after its last check before entering
> the guest. With this patch, while the vcpu will still initially
> enter the guest, it will exit immediately due to the IPI sent by
> the vcpu kick issued after making the vcpu request.
> 
> We use __kvm_request_set, because we don't need the barrier
> kvm_request_set() provides. Additionally, kvm_request_test_and_clear
> is not used because, for pause, only the requester should do the
> clearing, i.e. testing and clearing need to be independent.
> 
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
>  arch/arm/include/asm/kvm_host.h   |  5 +----
>  arch/arm/kvm/arm.c                | 26 +++++++++++++-------------
>  arch/arm64/include/asm/kvm_host.h |  5 +----
>  3 files changed, 15 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index cc495d799c67..3b5d60611cac 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -46,7 +46,7 @@
>  #define KVM_MAX_VCPUS VGIC_V2_MAX_CPUS
>  #endif
>  
> -#define KVM_REQ_VCPU_EXIT	8
> +#define KVM_REQ_VCPU_PAUSE	8
>  
>  u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
>  int __attribute_const__ kvm_target_cpu(void);
> @@ -174,9 +174,6 @@ struct kvm_vcpu_arch {
>  	/* vcpu power-off state */
>  	bool power_off;
>  
> -	 /* Don't run the guest (internal implementation need) */
> -	bool pause;
> -
>  	/* IO related fields */
>  	struct kvm_decode mmio_decode;
>  
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index c9a2103faeb9..17d5f3fb33d9 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -401,7 +401,8 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
>  int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
>  {
>  	return ((!!v->arch.irq_lines || kvm_vgic_vcpu_pending_irq(v))
> -		&& !v->arch.power_off && !v->arch.pause);
> +		&& !v->arch.power_off
> +		&& !__kvm_request_test(KVM_REQ_VCPU_PAUSE, v));
>  }
>  
>  /* Just ensure a guest exit from a particular CPU */
> @@ -532,17 +533,12 @@ bool kvm_arch_intc_initialized(struct kvm *kvm)
>  
>  void kvm_arm_halt_guest(struct kvm *kvm)
>  {
> -	int i;
> -	struct kvm_vcpu *vcpu;
> -
> -	kvm_for_each_vcpu(i, vcpu, kvm)
> -		vcpu->arch.pause = true;
> -	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
> +	kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_PAUSE);
>  }
>  
>  void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
>  {
> -	vcpu->arch.pause = true;
> +	__kvm_request_set(KVM_REQ_VCPU_PAUSE, vcpu);
>  	kvm_vcpu_kick(vcpu);
>  }
>  
> @@ -550,7 +546,7 @@ void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
>  {
>  	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
>  
> -	vcpu->arch.pause = false;
> +	__kvm_request_clear(KVM_REQ_VCPU_PAUSE, vcpu);
>  	swake_up(wq);
>  }
>  
> @@ -568,7 +564,7 @@ static void vcpu_sleep(struct kvm_vcpu *vcpu)
>  	struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
>  
>  	swait_event_interruptible(*wq, ((!vcpu->arch.power_off) &&
> -				       (!vcpu->arch.pause)));
> +		(!__kvm_request_test(KVM_REQ_VCPU_PAUSE, vcpu))));
>  }
>  
>  static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
> @@ -621,7 +617,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  
>  		update_vttbr(vcpu->kvm);
>  
> -		if (vcpu->arch.power_off || vcpu->arch.pause)
> +		if (vcpu->arch.power_off || __kvm_request_test(KVM_REQ_VCPU_PAUSE, vcpu))
>  			vcpu_sleep(vcpu);
>  
>  		/*
> @@ -644,8 +640,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  			run->exit_reason = KVM_EXIT_INTR;
>  		}
>  
> +		vcpu->mode = IN_GUEST_MODE;
> +		smp_mb(); /* ensure vcpu->mode is visible to kvm_vcpu_kick */

I think this comment is misleading, because this smp_mb() is really
about ensuring that with respect to other CPUs, the write to vcpu->mode
is observable before the read of kvm_request_pending below, and the
corresponding other barrier is the barrier implied in cmpxchg used by
kvm_vcpu_exiting_guest_mode, which gets called by kvm_vcpu_kick(), which
is called after __kvm_set_request.

So this also means that our direct use of kvm_vcpu_kick() without making
a request is currently racy, so we should address that where appropriate
as well.

Do you feel brave enough to take a look at that?

Thanks,
-Christoffer

> +
>  		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm) ||
> -			vcpu->arch.power_off || vcpu->arch.pause) {
> +			vcpu->arch.power_off || kvm_request_pending(vcpu)) {
> +			vcpu->mode = OUTSIDE_GUEST_MODE;
> +			smp_mb();
>  			local_irq_enable();
>  			kvm_pmu_sync_hwstate(vcpu);
>  			kvm_timer_sync_hwstate(vcpu);
> @@ -661,7 +662,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  		 */
>  		trace_kvm_entry(*vcpu_pc(vcpu));
>  		guest_enter_irqoff();
> -		vcpu->mode = IN_GUEST_MODE;
>  
>  		ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
>  
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index f21fd3894370..c03e1fc3bc34 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -43,7 +43,7 @@
>  
>  #define KVM_VCPU_MAX_FEATURES 4
>  
> -#define KVM_REQ_VCPU_EXIT	8
> +#define KVM_REQ_VCPU_PAUSE	8
>  
>  int __attribute_const__ kvm_target_cpu(void);
>  int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
> @@ -257,9 +257,6 @@ struct kvm_vcpu_arch {
>  	/* vcpu power-off state */
>  	bool power_off;
>  
> -	/* Don't run the guest (internal implementation need) */
> -	bool pause;
> -
>  	/* IO related fields */
>  	struct kvm_decode mmio_decode;
>  
> -- 
> 2.9.3
> 


* Re: [PATCH 2/5] KVM: arm/arm64: replace vcpu->arch.pause with a vcpu-request
  2017-03-08 14:33   ` Christoffer Dall
@ 2017-03-08 14:44     ` Andrew Jones
  2017-03-13 10:30       ` Christoffer Dall
  0 siblings, 1 reply; 18+ messages in thread
From: Andrew Jones @ 2017-03-08 14:44 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: marc.zyngier, ashoks, imammedo, kvmarm

On Wed, Mar 08, 2017 at 06:33:12AM -0800, Christoffer Dall wrote:
> >  static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
> > @@ -621,7 +617,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
> >  
> >  		update_vttbr(vcpu->kvm);
> >  
> > -		if (vcpu->arch.power_off || vcpu->arch.pause)
> > +		if (vcpu->arch.power_off || __kvm_request_test(KVM_REQ_VCPU_PAUSE, vcpu))
> >  			vcpu_sleep(vcpu);
> >  
> >  		/*
> > @@ -644,8 +640,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
> >  			run->exit_reason = KVM_EXIT_INTR;
> >  		}
> >  
> > +		vcpu->mode = IN_GUEST_MODE;
> > +		smp_mb(); /* ensure vcpu->mode is visible to kvm_vcpu_kick */
> 
> I think this comment is misleading, because this smp_mb() is really
> about ensuring that with respect to other CPUs, the write to vcpu->mode
> is observable before the read of kvm_request_pending below, and the
> corresponding other barrier is the barrier implied in cmpxchg used by
> kvm_vcpu_exiting_guest_mode, which gets called by kvm_vcpu_kick(), which
> is called after __kvm_set_request.

Agreed

> 
> So this also means that our direct use of kvm_vcpu_kick() without making
> a request is currently racy, so we should address that where appropriate
> as well.
> 
> Do you feel brave enough to take a look at that?

Yup, mission accepted.

Thanks,
drew


* Re: [PATCH 5/5] KVM: arm/arm64: allow userspace to set MPIDR
  2017-03-08 14:27     ` Peter Maydell
@ 2017-03-08 17:21       ` Christoffer Dall
  2017-03-08 20:48         ` Peter Maydell
  0 siblings, 1 reply; 18+ messages in thread
From: Christoffer Dall @ 2017-03-08 17:21 UTC (permalink / raw)
  To: Peter Maydell; +Cc: Marc Zyngier, Ashok Kumar, Igor Mammedov, kvmarm

On Wed, Mar 08, 2017 at 03:27:28PM +0100, Peter Maydell wrote:
> On 8 March 2017 at 14:19, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> > Isn't this all super racy?  What prevents two VCPUs from getting the
> > same MPIDR at the same time?
> 
> From a userspace API point of view, I was expecting this to be
> "just don't do that then" territory, ie it's userspace's job
> to set the MPIDRs to something that makes sense. Does KVM
> actually care internally what the MPIDR presented to the guest is?
> (In theory I don't see why you shouldn't be able to present
> the guest with a bit of bogus hardware that claims the same
> MPIDR for all cores.)
> 

I think it's important that we maintain a strict mapping between an
MPIDR and a VCPU at any given time, because otherwise, we'll have
problems knowing which redistributor belongs to which VCPU and making
other qualified decisions in the KVM/ARM implementation.

Thanks,
-Christoffer


* Re: [PATCH 0/5] KVM: arm/arm64: fix some races and allow userspace to set MPIDR
  2017-02-27 17:54 [PATCH 0/5] KVM: arm/arm64: fix some races and allow userspace to set MPIDR Andrew Jones
                   ` (4 preceding siblings ...)
  2017-02-27 17:55 ` [PATCH 5/5] KVM: arm/arm64: allow userspace to set MPIDR Andrew Jones
@ 2017-03-08 17:27 ` Christoffer Dall
  2017-03-08 17:53   ` Andrew Jones
  5 siblings, 1 reply; 18+ messages in thread
From: Christoffer Dall @ 2017-03-08 17:27 UTC (permalink / raw)
  To: Andrew Jones; +Cc: marc.zyngier, ashoks, imammedo, kvmarm

Hi Drew,

On Mon, Feb 27, 2017 at 06:54:59PM +0100, Andrew Jones wrote:
> This series fixes four races. Two are easy to produce with a
> kvm-unit-test test[1], but the other two would be quite hard. I
> didn't even try to produce those. The two hard to produce races are
> addressed by changing vcpu->arch.pause and vcpu->arch.power_off to
> vcpu requests. The two easy to produce races are addressed in two
> different ways: the first takes advantage of power_off having been
> changed to a vcpu request, the second caches vcpu MPIDRs in order
> to avoid extracting them from sys_regs. When introducing the MPIDR
> cache we also introduce a new feature (userspace settable MPIDRs).
> 
> Support for userspace settable MPIDRs was already posted once[2],
> but rejected due to not having a use case. We have one now, which
> is to satisfy QEMU's need for the MPIDR information very early,
> before vcpu-init has even run. While the original posting author
> wasn't me, I've taken authorship now, as I've changed the patch
> substantially. If anybody disagrees with that, then feel free to
> suggest alternatives. The QEMU counterpart has been posted[3].
> 
> This series is based on Radim's recent posting[4] that improves
> the vcpu-request framework. I've tested the series on a couple
> AArch64 platforms and compile-tested the arm bits.

So as we discussed during lunch today, it would be awesome if you could
split this up into a series that fixes the identified races and a
separate series for the introduction of supporting setting the MPIDR
from user space.

Thanks a lot!
-Christoffer


* Re: [PATCH 0/5] KVM: arm/arm64: fix some races and allow userspace to set MPIDR
  2017-03-08 17:27 ` [PATCH 0/5] KVM: arm/arm64: fix some races and " Christoffer Dall
@ 2017-03-08 17:53   ` Andrew Jones
  0 siblings, 0 replies; 18+ messages in thread
From: Andrew Jones @ 2017-03-08 17:53 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: marc.zyngier, ashoks, imammedo, kvmarm

On Wed, Mar 08, 2017 at 09:27:12AM -0800, Christoffer Dall wrote:
> Hi Drew,
> 
> On Mon, Feb 27, 2017 at 06:54:59PM +0100, Andrew Jones wrote:
> > This series fixes four races. Two are easy to produce with a
> > kvm-unit-test test[1], but the other two would be quite hard. I
> > didn't even try to produce those. The two hard to produce races are
> > addressed by changing vcpu->arch.pause and vcpu->arch.power_off to
> > vcpu requests. The two easy to produce races are addressed in two
> > different ways: the first takes advantage of power_off having been
> > changed to a vcpu request, the second caches vcpu MPIDRs in order
> > to avoid extracting them from sys_regs. When introducing the MPIDR
> > cache we also introduce a new feature (userspace settable MPIDRs).
> > 
> > Support for userspace settable MPIDRs was already posted once[2],
> > but rejected due to not having a use case. We have one now, which
> > is to satisfy QEMU's need for the MPIDR information very early,
> > before vcpu-init has even run. While the original posting author
> > wasn't me, I've taken authorship now, as I've changed the patch
> > substantially. If anybody disagrees with that, then feel free to
> > suggest alternatives. The QEMU counterpart has been posted[3].
> > 
> > This series is based on Radim's recent posting[4] that improves
> > the vcpu-request framework. I've tested the series on a couple
> > AArch64 platforms and compile-tested the arm bits.
> 
> So as we discussed during lunch today, it would be awesome if you could
> split this up into a series that fixes the identified races and a
> separate series for the introduction of supporting setting the MPIDR
> from user space.

Will do. Thanks for reviewing!

drew

> 
> Thanks a lot!
> -Christoffer


* Re: [PATCH 5/5] KVM: arm/arm64: allow userspace to set MPIDR
  2017-03-08 17:21       ` Christoffer Dall
@ 2017-03-08 20:48         ` Peter Maydell
  0 siblings, 0 replies; 18+ messages in thread
From: Peter Maydell @ 2017-03-08 20:48 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: Marc Zyngier, Ashok Kumar, Igor Mammedov, kvmarm

On 8 March 2017 at 18:21, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> On Wed, Mar 08, 2017 at 03:27:28PM +0100, Peter Maydell wrote:
>> From a userspace API point of view, I was expecting this to be
>> "just don't do that then" territory, ie it's userspace's job
>> to set the MPIDRs to something that makes sense. Does KVM
>> actually care internally what the MPIDR presented to the guest is?
>> (In theory I don't see why you shouldn't be able to present
>> the guest with a bit of bogus hardware that claims the same
>> MPIDR for all cores.)
>>
>
> I think it's important that we maintain a strict mapping between an
> MPIDR and a VCPU at any given time, because otherwise, we'll have
> problems knowing which redistributor belongs to which VCPU and making
> other qualified decisions in the KVM/ARM implementation.

Mmm, maybe. In any case I certainly don't have an objection if
you want to define the interface as "must set before VCPU start
and not after" and fail VCPU start if it's using a duplicate ID.
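(Editor's sketch: the pre-start uniqueness check suggested above could look roughly like the following. The structure and helper names are hypothetical stand-ins, not KVM's actual data structures.)

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-in for the per-VCPU bookkeeping KVM keeps. */
struct vcpu {
	uint64_t mpidr;
};

/*
 * Enforce the "must set before VCPU start and not after" contract:
 * refuse to start a VCPU whose user-set MPIDR duplicates another
 * VCPU's MPIDR.
 */
static bool mpidr_is_unique(const struct vcpu *vcpus, size_t nr_vcpus,
			    size_t this_vcpu)
{
	for (size_t i = 0; i < nr_vcpus; i++) {
		if (i != this_vcpu &&
		    vcpus[i].mpidr == vcpus[this_vcpu].mpidr)
			return false;
	}
	return true;
}
```

A caller would run this at VCPU-start time and fail the start ioctl on a duplicate, matching the interface proposed above.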

thanks
-- PMM

* Re: [PATCH 2/5] KVM: arm/arm64: replace vcpu->arch.pause with a vcpu-request
  2017-03-08 14:44     ` Andrew Jones
@ 2017-03-13 10:30       ` Christoffer Dall
  2017-03-13 17:27         ` Andrew Jones
  0 siblings, 1 reply; 18+ messages in thread
From: Christoffer Dall @ 2017-03-13 10:30 UTC (permalink / raw)
  To: Andrew Jones; +Cc: marc.zyngier, ashoks, imammedo, kvmarm

On Wed, Mar 08, 2017 at 03:44:11PM +0100, Andrew Jones wrote:
> On Wed, Mar 08, 2017 at 06:33:12AM -0800, Christoffer Dall wrote:
> > >  static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
> > > @@ -621,7 +617,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
> > >  
> > >  		update_vttbr(vcpu->kvm);
> > >  
> > > -		if (vcpu->arch.power_off || vcpu->arch.pause)
> > > +		if (vcpu->arch.power_off || __kvm_request_test(KVM_REQ_VCPU_PAUSE, vcpu))
> > >  			vcpu_sleep(vcpu);
> > >  
> > >  		/*
> > > @@ -644,8 +640,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
> > >  			run->exit_reason = KVM_EXIT_INTR;
> > >  		}
> > >  
> > > +		vcpu->mode = IN_GUEST_MODE;
> > > +		smp_mb(); /* ensure vcpu->mode is visible to kvm_vcpu_kick */
> > 
> > I think this comment is misleading, because this smp_mb() is really
> > about ensuring that with respect to other CPUs, the write to vcpu->mode
> > is observable before the read of kvm_request_pending below, and the
> > corresponding other barrier is the barrier implied in cmpxchg used by
> > kvm_vcpu_exiting_guest_mode, which gets called by kvm_vcpu_kick(), which
> > is called after __kvm_set_request.
> 
> Agreed
> 

Just an adjustment to our conclusion from last week:

Will Deacon clarified that the cmpxchg doesn't have barrier semantics if
the cmpxchg operation fails.  My brain hurts trying to work out if we're
still safe in that case.  Thoughts?
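(Editor's sketch: the entry-side half of the pattern under discussion, reduced to C11 atomics. Names are illustrative, not the kernel's; the seq_cst fence stands in for smp_mb().)

```c
#include <stdatomic.h>
#include <stdbool.h>

enum vcpu_mode { OUTSIDE_GUEST_MODE, IN_GUEST_MODE, EXITING_GUEST_MODE };

/*
 * Models the quoted hunk in kvm_arch_vcpu_ioctl_run(): publish
 * IN_GUEST_MODE, then check whether any request is pending.
 */
static bool vcpu_entry_allows_guest(_Atomic enum vcpu_mode *mode,
				    atomic_uint *requests)
{
	atomic_store_explicit(mode, IN_GUEST_MODE, memory_order_relaxed);
	/*
	 * smp_mb() analogue: the mode store must be globally visible
	 * before the requests load.  This is the barrier that must
	 * pair with one on the requester side, before its read of
	 * mode -- which is exactly what is in doubt above if that
	 * read is a cmpxchg that fails.
	 */
	atomic_thread_fence(memory_order_seq_cst);
	return atomic_load_explicit(requests, memory_order_relaxed) == 0;
}
```

If the requester's only barrier is the one implied by a *successful* cmpxchg, a failed cmpxchg leaves this pairing one-sided, which is the safety question raised here.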

Thanks,
-Christoffer

* Re: [PATCH 2/5] KVM: arm/arm64: replace vcpu->arch.pause with a vcpu-request
  2017-03-13 10:30       ` Christoffer Dall
@ 2017-03-13 17:27         ` Andrew Jones
  2017-03-13 18:22           ` Christoffer Dall
  0 siblings, 1 reply; 18+ messages in thread
From: Andrew Jones @ 2017-03-13 17:27 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: marc.zyngier, imammedo, kvmarm

On Mon, Mar 13, 2017 at 11:30:05AM +0100, Christoffer Dall wrote:
> On Wed, Mar 08, 2017 at 03:44:11PM +0100, Andrew Jones wrote:
> > On Wed, Mar 08, 2017 at 06:33:12AM -0800, Christoffer Dall wrote:
> > > >  static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
> > > > @@ -621,7 +617,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
> > > >  
> > > >  		update_vttbr(vcpu->kvm);
> > > >  
> > > > -		if (vcpu->arch.power_off || vcpu->arch.pause)
> > > > +		if (vcpu->arch.power_off || __kvm_request_test(KVM_REQ_VCPU_PAUSE, vcpu))
> > > >  			vcpu_sleep(vcpu);
> > > >  
> > > >  		/*
> > > > @@ -644,8 +640,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
> > > >  			run->exit_reason = KVM_EXIT_INTR;
> > > >  		}
> > > >  
> > > > +		vcpu->mode = IN_GUEST_MODE;
> > > > +		smp_mb(); /* ensure vcpu->mode is visible to kvm_vcpu_kick */
> > > 
> > > I think this comment is misleading, because this smp_mb() is really
> > > about ensuring that with respect to other CPUs, the write to vcpu->mode
> > > is observable before the read of kvm_request_pending below, and the
> > > corresponding other barrier is the barrier implied in cmpxchg used by
> > > kvm_vcpu_exiting_guest_mode, which gets called by kvm_vcpu_kick(), which
> > > is called after __kvm_set_request.
> > 
> > Agreed
> > 
> 
> Just an adjustment to our conclusion from last week:
> 
> Will Deacon clarified that the cmpxchg doesn't have barrier semantics if
> the cmpxchg operation fails.  My brain hurts trying to work out if we're
> still safe in that case.  Thoughts?
>

OK, I just ran my brain through a blender, and then I grabbed Radim to
bounce my thoughts off him first. I think he and I are in agreement that
kvm_arch_vcpu_should_kick() should get a read barrier before the cmpxchg.
That smp_rmb() would pair with the general barrier that I have poorly
commented in the code above.

I'll refresh this series in the next couple days, adding that barrier,
and some better comments.
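(Editor's sketch: the kick-side change proposed here, modelled with C11 atomics. Illustrative names only; the thread proposes smp_rmb(), and a seq_cst fence is used below as a conservative stand-in.)

```c
#include <stdatomic.h>
#include <stdbool.h>

enum vcpu_mode { OUTSIDE_GUEST_MODE, IN_GUEST_MODE, EXITING_GUEST_MODE };

/*
 * Models kvm_arch_vcpu_should_kick() with an explicit barrier before
 * the cmpxchg, so ordering against the earlier request write no
 * longer relies on the cmpxchg's implied barrier -- which, per Will
 * Deacon's clarification, does not exist when the cmpxchg fails.
 */
static bool vcpu_should_kick(_Atomic enum vcpu_mode *mode)
{
	/* Pairs with the smp_mb() after the mode store on entry. */
	atomic_thread_fence(memory_order_seq_cst);
	enum vcpu_mode expected = IN_GUEST_MODE;
	return atomic_compare_exchange_strong(mode, &expected,
					      EXITING_GUEST_MODE);
}
```

With the fence explicit, the failed-cmpxchg case is covered: the request write is ordered before the mode read whether or not the exchange succeeds.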

Thanks,
drew

* Re: [PATCH 2/5] KVM: arm/arm64: replace vcpu->arch.pause with a vcpu-request
  2017-03-13 17:27         ` Andrew Jones
@ 2017-03-13 18:22           ` Christoffer Dall
  0 siblings, 0 replies; 18+ messages in thread
From: Christoffer Dall @ 2017-03-13 18:22 UTC (permalink / raw)
  To: Andrew Jones; +Cc: marc.zyngier, imammedo, kvmarm

On Mon, Mar 13, 2017 at 06:27:55PM +0100, Andrew Jones wrote:
> On Mon, Mar 13, 2017 at 11:30:05AM +0100, Christoffer Dall wrote:
> > On Wed, Mar 08, 2017 at 03:44:11PM +0100, Andrew Jones wrote:
> > > On Wed, Mar 08, 2017 at 06:33:12AM -0800, Christoffer Dall wrote:
> > > > >  static int kvm_vcpu_initialized(struct kvm_vcpu *vcpu)
> > > > > @@ -621,7 +617,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
> > > > >  
> > > > >  		update_vttbr(vcpu->kvm);
> > > > >  
> > > > > -		if (vcpu->arch.power_off || vcpu->arch.pause)
> > > > > +		if (vcpu->arch.power_off || __kvm_request_test(KVM_REQ_VCPU_PAUSE, vcpu))
> > > > >  			vcpu_sleep(vcpu);
> > > > >  
> > > > >  		/*
> > > > > @@ -644,8 +640,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
> > > > >  			run->exit_reason = KVM_EXIT_INTR;
> > > > >  		}
> > > > >  
> > > > > +		vcpu->mode = IN_GUEST_MODE;
> > > > > +		smp_mb(); /* ensure vcpu->mode is visible to kvm_vcpu_kick */
> > > > 
> > > > I think this comment is misleading, because this smp_mb() is really
> > > > about ensuring that with respect to other CPUs, the write to vcpu->mode
> > > > is observable before the read of kvm_request_pending below, and the
> > > > corresponding other barrier is the barrier implied in cmpxchg used by
> > > > kvm_vcpu_exiting_guest_mode, which gets called by kvm_vcpu_kick(), which
> > > > is called after __kvm_set_request.
> > > 
> > > Agreed
> > > 
> > 
> > Just an adjustment to our conclusion from last week:
> > 
> > Will Deacon clarified that the cmpxchg doesn't have barrier semantics if
> > the cmpxchg operation fails.  My brain hurts trying to work out if we're
> > still safe in that case.  Thoughts?
> >
> 
> OK, I just ran my brain through a blender, and then I grabbed Radim to
> bounce my thoughts off him first. I think he and I are in agreement that
> kvm_arch_vcpu_should_kick() should get a read barrier before the cmpxchg.
> That smp_rmb() would pair with the general barrier that I have poorly
> commented in the code above.
> 
> I'll refresh this series in the next couple days, adding that barrier,
> and some better comments.
> 

Comments and an explicit barrier sounds like a welcome change.

Thanks,
-Christoffer

end of thread, other threads:[~2017-03-13 18:21 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-02-27 17:54 [PATCH 0/5] KVM: arm/arm64: fix some races and allow userspace to set MPIDR Andrew Jones
2017-02-27 17:55 ` [PATCH 1/5] KVM: arm/arm64: prepare to use vcpu requests Andrew Jones
2017-03-08 13:21   ` Christoffer Dall
2017-02-27 17:55 ` [PATCH 2/5] KVM: arm/arm64: replace vcpu->arch.pause with a vcpu-request Andrew Jones
2017-03-08 14:33   ` Christoffer Dall
2017-03-08 14:44     ` Andrew Jones
2017-03-13 10:30       ` Christoffer Dall
2017-03-13 17:27         ` Andrew Jones
2017-03-13 18:22           ` Christoffer Dall
2017-02-27 17:55 ` [PATCH 3/5] KVM: arm/arm64: replace vcpu->arch.power_off " Andrew Jones
2017-02-27 17:55 ` [PATCH 4/5] KVM: arm/arm64: fix race in kvm_psci_vcpu_on Andrew Jones
2017-02-27 17:55 ` [PATCH 5/5] KVM: arm/arm64: allow userspace to set MPIDR Andrew Jones
2017-03-08 13:19   ` Christoffer Dall
2017-03-08 14:27     ` Peter Maydell
2017-03-08 17:21       ` Christoffer Dall
2017-03-08 20:48         ` Peter Maydell
2017-03-08 17:27 ` [PATCH 0/5] KVM: arm/arm64: fix some races and " Christoffer Dall
2017-03-08 17:53   ` Andrew Jones
