* [PATCH 1/2] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs
@ 2022-01-10  5:40 Reiji Watanabe
  2022-01-10  5:40 ` [PATCH 2/2] KVM: arm64: selftests: Introduce vcpu_width_config Reiji Watanabe
  2022-01-10 10:48 ` [PATCH 1/2] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs Alexandru Elisei
From: Reiji Watanabe @ 2022-01-10  5:40 UTC
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Peter Shier,
	Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata,
	Reiji Watanabe

vcpu_allowed_register_width() checks that the vCPUs are either
all 32bit or all 64bit.  Since the check is done even for vCPUs
that are not yet initialized (KVM_ARM_VCPU_INIT has not been done
for them), those uninitialized vCPUs are erroneously treated as
64bit vCPUs, which causes the function to incorrectly detect a
mixed-width VM.

Fix vcpu_allowed_register_width() to skip the check for vCPUs that
are not initialized yet.

Fixes: 66e94d5cafd4 ("KVM: arm64: Prevent mixed-width VM creation")
Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/reset.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 426bd7fbc3fd..ef78bbc7566a 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -180,8 +180,19 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
 	if (kvm_has_mte(vcpu->kvm) && is32bit)
 		return false;
 
+	/*
+	 * Make sure vcpu->arch.target setting is visible from others so
+	 * that the width consistency checking between two vCPUs is done
+	 * by at least one of them at KVM_ARM_VCPU_INIT.
+	 */
+	smp_mb();
+
 	/* Check that the vcpus are either all 32bit or all 64bit */
 	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
+		/* Skip if KVM_ARM_VCPU_INIT is not done for the vcpu yet */
+		if (tmp->arch.target == -1)
+			continue;
+
 		if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
 			return false;
 	}

base-commit: df0cc57e057f18e44dac8e6c18aba47ab53202f9
-- 
2.34.1.575.g55b058a8bb-goog



* [PATCH 2/2] KVM: arm64: selftests: Introduce vcpu_width_config
  2022-01-10  5:40 [PATCH 1/2] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs Reiji Watanabe
@ 2022-01-10  5:40 ` Reiji Watanabe
  2022-01-11  9:47   ` Andrew Jones
  2022-01-10 10:48 ` [PATCH 1/2] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs Alexandru Elisei
From: Reiji Watanabe @ 2022-01-10  5:40 UTC
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Peter Shier,
	Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata,
	Reiji Watanabe

Introduce a test for aarch64 that ensures that non-mixed-width vCPUs
(all 64bit vCPUs or all 32bit vCPUs) can be configured, and that
mixed-width vCPUs cannot be configured.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/aarch64/vcpu_width_config.c | 125 ++++++++++++++++++
 3 files changed, 127 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/vcpu_width_config.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 3cb5ac5da087..8795a83cc382 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -3,6 +3,7 @@
 /aarch64/debug-exceptions
 /aarch64/get-reg-list
 /aarch64/psci_cpu_on_test
+/aarch64/vcpu_width_config
 /aarch64/vgic_init
 /s390x/memop
 /s390x/resets
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 17342b575e85..259e01d0735a 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -95,6 +95,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
 TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
 TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
 TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test
+TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vcpu_width_config.c b/tools/testing/selftests/kvm/aarch64/vcpu_width_config.c
new file mode 100644
index 000000000000..cd238e068236
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/vcpu_width_config.c
@@ -0,0 +1,125 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vcpu_width_config - Test KVM_ARM_VCPU_INIT() with KVM_ARM_VCPU_EL1_32BIT.
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * This is a test that ensures that non-mixed-width vCPUs (all 64bit vCPUs
+ * or all 32bit vCPUs) can be configured and mixed-width vCPUs cannot be
+ * configured.
+ */
+
+#define _GNU_SOURCE
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "test_util.h"
+
+
+/*
+ * Add a vCPU, run KVM_ARM_VCPU_INIT with @init1, and then
+ * add another vCPU, and run KVM_ARM_VCPU_INIT with @init2.
+ */
+int add_init_2vcpus(struct kvm_vcpu_init *init1,
+			 struct kvm_vcpu_init *init2)
+{
+	struct kvm_vm *vm;
+	int ret;
+
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+
+	vm_vcpu_add(vm, 0);
+	ret = _vcpu_ioctl(vm, 0, KVM_ARM_VCPU_INIT, init1);
+	if (ret)
+		goto free_exit;
+
+	vm_vcpu_add(vm, 1);
+	ret = _vcpu_ioctl(vm, 1, KVM_ARM_VCPU_INIT, init2);
+
+free_exit:
+	kvm_vm_free(vm);
+	return ret;
+}
+
+/*
+ * Add two vCPUs, then run KVM_ARM_VCPU_INIT for one vCPU with @init1,
+ * and run KVM_ARM_VCPU_INIT for another vCPU with @init2.
+ */
+int add_2vcpus_init_2vcpus(struct kvm_vcpu_init *init1,
+				struct kvm_vcpu_init *init2)
+{
+	struct kvm_vm *vm;
+	int ret;
+
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+
+	vm_vcpu_add(vm, 0);
+	vm_vcpu_add(vm, 1);
+
+	ret = _vcpu_ioctl(vm, 0, KVM_ARM_VCPU_INIT, init1);
+	if (ret)
+		goto free_exit;
+
+	ret = _vcpu_ioctl(vm, 1, KVM_ARM_VCPU_INIT, init2);
+
+free_exit:
+	kvm_vm_free(vm);
+	return ret;
+}
+
+/*
+ * Tests that two 64bit vCPUs can be configured, two 32bit vCPUs can be
+ * configured, and two mixed-width vCPUs cannot be configured.
+ * For each of those three cases, the vCPUs are configured in two
+ * different orders.  One runs KVM_CREATE_VCPU for both vCPUs and then
+ * runs KVM_ARM_VCPU_INIT for them.  The other runs KVM_CREATE_VCPU and
+ * KVM_ARM_VCPU_INIT for one vCPU, and then runs those commands for the
+ * other vCPU.
+ */
+int main(void)
+{
+	struct kvm_vcpu_init init1, init2;
+	struct kvm_vm *vm;
+	int ret;
+
+	if (kvm_check_cap(KVM_CAP_ARM_EL1_32BIT) <= 0) {
+		print_skip("KVM_CAP_ARM_EL1_32BIT is not supported");
+		exit(KSFT_SKIP);
+	}
+
+	/* Get the preferred target type and copy that to init2 */
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init1);
+	kvm_vm_free(vm);
+	memcpy(&init2, &init1, sizeof(init2));
+
+	/* Test with 64bit vCPUs */
+	ret = add_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 64bit EL1 vCPUs failed unexpectedly");
+	ret = add_2vcpus_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 64bit EL1 vCPUs failed unexpectedly");
+
+	/* Test with 32bit vCPUs */
+	init1.features[0] = (1 << KVM_ARM_VCPU_EL1_32BIT);
+	init2.features[0] = (1 << KVM_ARM_VCPU_EL1_32BIT);
+	ret = add_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 32bit EL1 vCPUs failed unexpectedly");
+	ret = add_2vcpus_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret == 0,
+		    "Configuring 32bit EL1 vCPUs failed unexpectedly");
+
+	/* Test with mixed-width vCPUs */
+	init1.features[0] = 0;
+	init2.features[0] = (1 << KVM_ARM_VCPU_EL1_32BIT);
+	ret = add_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret != 0,
+		    "Configuring mixed-width vCPUs worked unexpectedly");
+	ret = add_2vcpus_init_2vcpus(&init1, &init2);
+	TEST_ASSERT(ret != 0,
+		    "Configuring mixed-width vCPUs worked unexpectedly");
+
+	return 0;
+}
-- 
2.34.1.575.g55b058a8bb-goog



* Re: [PATCH 1/2] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs
  2022-01-10  5:40 [PATCH 1/2] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs Reiji Watanabe
  2022-01-10  5:40 ` [PATCH 2/2] KVM: arm64: selftests: Introduce vcpu_width_config Reiji Watanabe
@ 2022-01-10 10:48 ` Alexandru Elisei
  2022-01-11  7:37   ` Reiji Watanabe
From: Alexandru Elisei @ 2022-01-10 10:48 UTC
  To: Reiji Watanabe
  Cc: Marc Zyngier, kvmarm, kvm, linux-arm-kernel, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Peter Shier,
	Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata

Hi Reiji,

On Sun, Jan 09, 2022 at 09:40:41PM -0800, Reiji Watanabe wrote:
> vcpu_allowed_register_width() checks if all the VCPUs are either
> all 32bit or all 64bit.  Since the checking is done even for vCPUs
> that are not initialized (KVM_ARM_VCPU_INIT has not been done) yet,
> the non-initialized vCPUs are erroneously treated as 64bit vCPU,
> which causes the function to incorrectly detect a mixed-width VM.
> 
> Fix vcpu_allowed_register_width() to skip the check for vCPUs that
> are not initialized yet.
> 
> Fixes: 66e94d5cafd4 ("KVM: arm64: Prevent mixed-width VM creation")
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> ---
>  arch/arm64/kvm/reset.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index 426bd7fbc3fd..ef78bbc7566a 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -180,8 +180,19 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
>  	if (kvm_has_mte(vcpu->kvm) && is32bit)
>  		return false;
>  
> +	/*
> +	 * Make sure vcpu->arch.target setting is visible from others so
> +	 * that the width consistency checking between two vCPUs is done
> +	 * by at least one of them at KVM_ARM_VCPU_INIT.
> +	 */
> +	smp_mb();

From ARM DDI 0487G.a, page B2-146 ("Data Memory Barrier (DMB)"):

"The DMB instruction is a memory barrier instruction that ensures the relative
order of memory accesses before the barrier with memory accesses after the
barrier."

I'm going to assume from the comment that you are referring to completion of
memory accesses ("Make sure [..] is visible from others"). Please correct me if
I am wrong. In this case, DMB ensures ordering of memory accesses with regard
to writes and reads, not *completion*.  Have a look at
tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus for
the classic message-passing example of memory ordering.
Message passing and other patterns are also explained in ARM DDI 0487G.a, page
K11-8363.
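
For reference, that litmus test encodes roughly the following pattern
(a minimal sketch; x stands for the payload, y for the flag):

	/* CPU0 (writer) */
	WRITE_ONCE(x, 1);	/* payload */
	smp_wmb();		/* order payload write before flag write */
	WRITE_ONCE(y, 1);	/* flag */

	/* CPU1 (reader) */
	r0 = READ_ONCE(y);	/* flag */
	smp_rmb();		/* order flag read before payload read */
	r1 = READ_ONCE(x);	/* payload */

	/* The outcome r0 == 1 && r1 == 0 is forbidden. */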

I'm not saying that your approach is incorrect, but the commit message should
explain what memory accesses are being ordered relative to each other and why.

Thanks,
Alex

> +
>  	/* Check that the vcpus are either all 32bit or all 64bit */
>  	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
> +		/* Skip if KVM_ARM_VCPU_INIT is not done for the vcpu yet */
> +		if (tmp->arch.target == -1)
> +			continue;
> +
>  		if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
>  			return false;
>  	}
> 
> base-commit: df0cc57e057f18e44dac8e6c18aba47ab53202f9
> -- 
> 2.34.1.575.g55b058a8bb-goog
> 


* Re: [PATCH 1/2] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs
  2022-01-10 10:48 ` [PATCH 1/2] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs Alexandru Elisei
@ 2022-01-11  7:37   ` Reiji Watanabe
  2022-01-11 13:30     ` Marc Zyngier
From: Reiji Watanabe @ 2022-01-11  7:37 UTC
  To: Alexandru Elisei
  Cc: Marc Zyngier, kvmarm, kvm, Linux ARM, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Peter Shier,
	Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata

Hi Alex,

On Mon, Jan 10, 2022 at 2:48 AM Alexandru Elisei
<alexandru.elisei@arm.com> wrote:
>
> Hi Reiji,
>
> On Sun, Jan 09, 2022 at 09:40:41PM -0800, Reiji Watanabe wrote:
> > vcpu_allowed_register_width() checks if all the VCPUs are either
> > all 32bit or all 64bit.  Since the checking is done even for vCPUs
> > that are not initialized (KVM_ARM_VCPU_INIT has not been done) yet,
> > the non-initialized vCPUs are erroneously treated as 64bit vCPU,
> > which causes the function to incorrectly detect a mixed-width VM.
> >
> > Fix vcpu_allowed_register_width() to skip the check for vCPUs that
> > are not initialized yet.
> >
> > Fixes: 66e94d5cafd4 ("KVM: arm64: Prevent mixed-width VM creation")
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > ---
> >  arch/arm64/kvm/reset.c | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> > index 426bd7fbc3fd..ef78bbc7566a 100644
> > --- a/arch/arm64/kvm/reset.c
> > +++ b/arch/arm64/kvm/reset.c
> > @@ -180,8 +180,19 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
> >       if (kvm_has_mte(vcpu->kvm) && is32bit)
> >               return false;
> >
> > +     /*
> > +      * Make sure vcpu->arch.target setting is visible from others so
> > +      * that the width consistency checking between two vCPUs is done
> > +      * by at least one of them at KVM_ARM_VCPU_INIT.
> > +      */
> > +     smp_mb();
>
> From ARM DDI 0487G.a, page B2-146 ("Data Memory Barrier (DMB)"):
>
> "The DMB instruction is a memory barrier instruction that ensures the relative
> order of memory accesses before the barrier with memory accesses after the
> barrier."
>
> I'm going to assume from the comment that you are referring to completion of
> memory accesses ("Make sure [..] is visible from others"). Please correct me if
> I am wrong. In this case, DMB ensures ordering of memory accesses with regards
> to writes and reads, not *completion*.  Have a look at
> tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus for
> the classic message passing example as an example of memory ordering.
> Message passing and other patterns are also explained in ARM DDI 0487G.a, page
> K11-8363.
>
> I'm not saying that your approach is incorrect, but the commit message should
> explain what memory accesses are being ordered relative to each other and why.

Thank you so much for the review.
What I meant by the comment was:
---
  DMB is used to make sure that writing @vcpu->arch.target, which is done
  by kvm_vcpu_set_target() before getting here, is visible to other PEs
  before the following kvm_for_each_vcpu iteration reads the other vCPUs'
  target field.
---
Has the comment become clearer? (Or am I using DMB incorrectly?)

> > +
> >       /* Check that the vcpus are either all 32bit or all 64bit */
> >       kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
> > +             /* Skip if KVM_ARM_VCPU_INIT is not done for the vcpu yet */
> > +             if (tmp->arch.target == -1)
> > +                     continue;

I just noticed DMB(ishld) is needed here to ensure ordering between
reading tmp->arch.target and reading vcpu->arch.features for this fix.
Similarly, kvm_vcpu_set_target() needs DMB(ishst) to ensure ordering
between writing vcpu->arch.features and writing vcpu->arch.target...
I am going to fix them in the v2 series.
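
Roughly, the pairing would look like this (just a sketch; the exact
placement still needs to be worked out in v2):

	/* kvm_vcpu_set_target(), writer side */
	set_bit(i, vcpu->arch.features);	/* feature bits first */
	smp_wmb();	/* order features writes before the target write */
	vcpu->arch.target = phys_target;	/* marks the vCPU initialized */

	/* vcpu_allowed_register_width(), reader side */
	if (tmp->arch.target == -1)	/* not initialized yet */
		continue;
	smp_rmb();	/* order the target read before the features read */
	if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
		return false;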

Thanks,
Reiji

> > +
> >               if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
> >                       return false;
> >       }
> >
> > base-commit: df0cc57e057f18e44dac8e6c18aba47ab53202f9
> > --
> > 2.34.1.575.g55b058a8bb-goog
> >


* Re: [PATCH 2/2] KVM: arm64: selftests: Introduce vcpu_width_config
  2022-01-10  5:40 ` [PATCH 2/2] KVM: arm64: selftests: Introduce vcpu_width_config Reiji Watanabe
@ 2022-01-11  9:47   ` Andrew Jones
From: Andrew Jones @ 2022-01-11  9:47 UTC
  To: Reiji Watanabe
  Cc: Marc Zyngier, kvmarm, kvm, Will Deacon, Peter Shier,
	Paolo Bonzini, linux-arm-kernel

On Sun, Jan 09, 2022 at 09:40:42PM -0800, Reiji Watanabe wrote:
> Introduce a test for aarch64 that ensures non-mixed-width vCPUs
> (all 64bit vCPUs or all 32bit vcPUs) can be configured, and
> mixed-width vCPUs cannot be configured.
> 
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> ---
>  tools/testing/selftests/kvm/.gitignore        |   1 +
>  tools/testing/selftests/kvm/Makefile          |   1 +
>  .../selftests/kvm/aarch64/vcpu_width_config.c | 125 ++++++++++++++++++
>  3 files changed, 127 insertions(+)
>  create mode 100644 tools/testing/selftests/kvm/aarch64/vcpu_width_config.c
>

Reviewed-by: Andrew Jones <drjones@redhat.com>



* Re: [PATCH 1/2] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs
  2022-01-11  7:37   ` Reiji Watanabe
@ 2022-01-11 13:30     ` Marc Zyngier
  2022-01-11 16:11       ` Alexandru Elisei
From: Marc Zyngier @ 2022-01-11 13:30 UTC
  To: Reiji Watanabe
  Cc: Alexandru Elisei, kvmarm, kvm, Linux ARM, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Peter Shier,
	Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata

On Tue, 11 Jan 2022 07:37:57 +0000,
Reiji Watanabe <reijiw@google.com> wrote:
> 
> Hi Alex,
> 
> On Mon, Jan 10, 2022 at 2:48 AM Alexandru Elisei
> <alexandru.elisei@arm.com> wrote:
> >
> > Hi Reiji,
> >
> > On Sun, Jan 09, 2022 at 09:40:41PM -0800, Reiji Watanabe wrote:
> > > vcpu_allowed_register_width() checks if all the VCPUs are either
> > > all 32bit or all 64bit.  Since the checking is done even for vCPUs
> > > that are not initialized (KVM_ARM_VCPU_INIT has not been done) yet,
> > > the non-initialized vCPUs are erroneously treated as 64bit vCPU,
> > > which causes the function to incorrectly detect a mixed-width VM.
> > >
> > > Fix vcpu_allowed_register_width() to skip the check for vCPUs that
> > > are not initialized yet.
> > >
> > > Fixes: 66e94d5cafd4 ("KVM: arm64: Prevent mixed-width VM creation")
> > > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > > ---
> > >  arch/arm64/kvm/reset.c | 11 +++++++++++
> > >  1 file changed, 11 insertions(+)
> > >
> > > diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> > > index 426bd7fbc3fd..ef78bbc7566a 100644
> > > --- a/arch/arm64/kvm/reset.c
> > > +++ b/arch/arm64/kvm/reset.c
> > > @@ -180,8 +180,19 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
> > >       if (kvm_has_mte(vcpu->kvm) && is32bit)
> > >               return false;
> > >
> > > +     /*
> > > +      * Make sure vcpu->arch.target setting is visible from others so
> > > +      * that the width consistency checking between two vCPUs is done
> > > +      * by at least one of them at KVM_ARM_VCPU_INIT.
> > > +      */
> > > +     smp_mb();
> >
> > From ARM DDI 0487G.a, page B2-146 ("Data Memory Barrier (DMB)"):
> >
> > "The DMB instruction is a memory barrier instruction that ensures the relative
> > order of memory accesses before the barrier with memory accesses after the
> > barrier."
> >
> > I'm going to assume from the comment that you are referring to completion of
> > memory accesses ("Make sure [..] is visible from others"). Please correct me if
> > I am wrong. In this case, DMB ensures ordering of memory accesses with regards
> > to writes and reads, not *completion*.  Have a look at
> > tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus for
> > the classic message passing example as an example of memory ordering.
> > Message passing and other patterns are also explained in ARM DDI 0487G.a, page
> > K11-8363.
> >
> > I'm not saying that your approach is incorrect, but the commit message should
> > explain what memory accesses are being ordered relative to each other and why.
> 
> Thank you so much for the review.
> What I meant with the comment was:
> ---
>   DMB is used to make sure that writing @vcpu->arch.target, which is done
>   by kvm_vcpu_set_target() before getting here, is visible to other PEs
>   before the following kvm_for_each_vcpu iteration reads the other vCPUs'
>   target field.
> ---
> Did the comment become more clear ?? (Or do I use DMB incorrectly ?)
> 
> > > +
> > >       /* Check that the vcpus are either all 32bit or all 64bit */
> > >       kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
> > > +             /* Skip if KVM_ARM_VCPU_INIT is not done for the vcpu yet */
> > > +             if (tmp->arch.target == -1)
> > > +                     continue;
> 
> I just noticed DMB(ishld) is needed here to assure ordering between
> reading tmp->arch.target and reading vcpu->arch.features for this fix.
> Similarly, kvm_vcpu_set_target() needs DMB(ishst) to assure ordering
> between writing vcpu->arch.features and writing vcpu->arch.target...
> I am going to fix them in the v2 series.

Yes, you'd need at least this, and preferably in their smp_rmb/wmb
variants.

However, this looks like a pretty fragile construct, as there are
multiple paths where we can change target (including some error
paths from the run loop).

I'd rather all changes to target and the feature bits happen under the
kvm->lock, and take that lock when checking for consistency in
vcpu_allowed_register_width(), as this isn't a fast path. I wrote the
following, which is obviously incomplete and as usual untested.

Thanks,

	M.

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e4727dc771bf..42f2ab80646c 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1061,7 +1061,8 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 			       const struct kvm_vcpu_init *init)
 {
-	unsigned int i, ret;
+	unsigned int i;
+	int ret = 0;
 	u32 phys_target = kvm_target_cpu();
 
 	if (init->target != phys_target)
@@ -1074,32 +1075,46 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 	if (vcpu->arch.target != -1 && vcpu->arch.target != init->target)
 		return -EINVAL;
 
+	/* Hazard against a concurrent check of the target in kvm_reset_vcpu() */
+	mutex_lock(&vcpu->kvm->lock);
+
 	/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
 	for (i = 0; i < sizeof(init->features) * 8; i++) {
 		bool set = (init->features[i / 32] & (1 << (i % 32)));
 
-		if (set && i >= KVM_VCPU_MAX_FEATURES)
-			return -ENOENT;
+		if (set && i >= KVM_VCPU_MAX_FEATURES) {
+			ret = -ENOENT;
+			break;
+		}
 
 		/*
 		 * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
 		 * use the same feature set.
 		 */
 		if (vcpu->arch.target != -1 && i < KVM_VCPU_MAX_FEATURES &&
-		    test_bit(i, vcpu->arch.features) != set)
-			return -EINVAL;
+		    test_bit(i, vcpu->arch.features) != set) {
+			ret = -EINVAL;
+			break;
+		}
 
 		if (set)
 			set_bit(i, vcpu->arch.features);
 	}
 
-	vcpu->arch.target = phys_target;
+	if (!ret)
+		vcpu->arch.target = phys_target;
+
+	mutex_unlock(&vcpu->kvm->lock);
+	if (ret)
+		return ret;
 
 	/* Now we know what it is, we can reset it. */
 	ret = kvm_reset_vcpu(vcpu);
 	if (ret) {
+		mutex_lock(&vcpu->kvm->lock);
 		vcpu->arch.target = -1;
 		bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
+		mutex_unlock(&vcpu->kvm->lock);
 	}
 
 	return ret;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index ef78bbc7566a..fae88a703140 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -180,13 +180,6 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
 	if (kvm_has_mte(vcpu->kvm) && is32bit)
 		return false;
 
-	/*
-	 * Make sure vcpu->arch.target setting is visible from others so
-	 * that the width consistency checking between two vCPUs is done
-	 * by at least one of them at KVM_ARM_VCPU_INIT.
-	 */
-	smp_mb();
-
 	/* Check that the vcpus are either all 32bit or all 64bit */
 	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
 		/* Skip if KVM_ARM_VCPU_INIT is not done for the vcpu yet */
@@ -222,14 +215,19 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_reset_state reset_state;
-	int ret;
+	int ret = -EINVAL;
 	bool loaded;
 	u32 pstate;
 
 	mutex_lock(&vcpu->kvm->lock);
-	reset_state = vcpu->arch.reset_state;
-	WRITE_ONCE(vcpu->arch.reset_state.reset, false);
+	if (vcpu_allowed_register_width(vcpu)) {
+		reset_state = vcpu->arch.reset_state;
+		WRITE_ONCE(vcpu->arch.reset_state.reset, false);
+		ret = 0;
+	}
 	mutex_unlock(&vcpu->kvm->lock);
+	if (ret)
+		goto out;
 
 	/* Reset PMU outside of the non-preemptible section */
 	kvm_pmu_vcpu_reset(vcpu);
@@ -257,11 +255,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		}
 	}
 
-	if (!vcpu_allowed_register_width(vcpu)) {
-		ret = -EINVAL;
-		goto out;
-	}
-
 	switch (vcpu->arch.target) {
 	default:
 		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH 1/2] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs
  2022-01-11 13:30     ` Marc Zyngier
@ 2022-01-11 16:11       ` Alexandru Elisei
  2022-01-13  5:33         ` Reiji Watanabe
From: Alexandru Elisei @ 2022-01-11 16:11 UTC
  To: Marc Zyngier
  Cc: Reiji Watanabe, kvmarm, kvm, Linux ARM, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Peter Shier,
	Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata

Hi Marc,

On Tue, Jan 11, 2022 at 01:30:41PM +0000, Marc Zyngier wrote:
> On Tue, 11 Jan 2022 07:37:57 +0000,
> Reiji Watanabe <reijiw@google.com> wrote:
> > 
> > Hi Alex,
> > 
> > On Mon, Jan 10, 2022 at 2:48 AM Alexandru Elisei
> > <alexandru.elisei@arm.com> wrote:
> > >
> > > Hi Reiji,
> > >
> > > On Sun, Jan 09, 2022 at 09:40:41PM -0800, Reiji Watanabe wrote:
> > > > vcpu_allowed_register_width() checks if all the VCPUs are either
> > > > all 32bit or all 64bit.  Since the checking is done even for vCPUs
> > > > that are not initialized (KVM_ARM_VCPU_INIT has not been done) yet,
> > > > the non-initialized vCPUs are erroneously treated as 64bit vCPU,
> > > > which causes the function to incorrectly detect a mixed-width VM.
> > > >
> > > > Fix vcpu_allowed_register_width() to skip the check for vCPUs that
> > > > are not initialized yet.
> > > >
> > > > Fixes: 66e94d5cafd4 ("KVM: arm64: Prevent mixed-width VM creation")
> > > > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > > > ---
> > > >  arch/arm64/kvm/reset.c | 11 +++++++++++
> > > >  1 file changed, 11 insertions(+)
> > > >
> > > > diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> > > > index 426bd7fbc3fd..ef78bbc7566a 100644
> > > > --- a/arch/arm64/kvm/reset.c
> > > > +++ b/arch/arm64/kvm/reset.c
> > > > @@ -180,8 +180,19 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
> > > >       if (kvm_has_mte(vcpu->kvm) && is32bit)
> > > >               return false;
> > > >
> > > > +     /*
> > > > +      * Make sure vcpu->arch.target setting is visible from others so
> > > > +      * that the width consistency checking between two vCPUs is done
> > > > +      * by at least one of them at KVM_ARM_VCPU_INIT.
> > > > +      */
> > > > +     smp_mb();
> > >
> > > From ARM DDI 0487G.a, page B2-146 ("Data Memory Barrier (DMB)"):
> > >
> > > "The DMB instruction is a memory barrier instruction that ensures the relative
> > > order of memory accesses before the barrier with memory accesses after the
> > > barrier."
> > >
> > > I'm going to assume from the comment that you are referring to completion of
> > > memory accesses ("Make sure [..] is visible from others"). Please correct me if
> > > I am wrong. In this case, DMB ensures ordering of memory accesses with regards
> > > to writes and reads, not *completion*.  Have a look at
> > > tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus for
> > > the classic message passing example as an example of memory ordering.
> > > Message passing and other patterns are also explained in ARM DDI 0487G.a, page
> > > K11-8363.
> > >
> > > I'm not saying that your approach is incorrect, but the commit message should
> > > explain what memory accesses are being ordered relative to each other and why.
> > 
> > Thank you so much for the review.
> > What I meant with the comment was:
> > ---
> >   DMB is used to make sure that writing @vcpu->arch.target, which is done
> >   by kvm_vcpu_set_target() before getting here, is visible to other PEs
> >   before the following kvm_for_each_vcpu iteration reads the other vCPUs'
> >   target field.
> > ---
> > Did the comment become more clear ?? (Or do I use DMB incorrectly ?)
> > 
> > > > +
> > > >       /* Check that the vcpus are either all 32bit or all 64bit */
> > > >       kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
> > > > +             /* Skip if KVM_ARM_VCPU_INIT is not done for the vcpu yet */
> > > > +             if (tmp->arch.target == -1)
> > > > +                     continue;
> > 
> > I just noticed DMB(ishld) is needed here to assure ordering between
> > reading tmp->arch.target and reading vcpu->arch.features for this fix.
> > Similarly, kvm_vcpu_set_target() needs DMB(ishst) to assure ordering
> > between writing vcpu->arch.features and writing vcpu->arch.target...
> > I am going to fix them in the v2 series.
> 
> Yes, you'd need at least this, and preferably in their smp_rmb/wmb
> variants.
> 
> However, this looks like a pretty fragile construct, as there are
> multiple paths where we can change target (including some error
> paths from the run loop).
> 
> I'd rather all changes to target and the feature bits happen under the
> kvm->lock, and take that lock when checking for consistency in
> vcpu_allowed_register_width(), as this isn't a fast path. I wrote the
> following, which is obviously incomplete and as usual untested.

I think this is the better approach, because we also want to make sure that
a PE observes changes to target and features as soon as they have been
made.  Otherwise one PE could set the target and the 32bit feature while
another PE still reads the old values and skips the check entirely; memory
ordering alone is not enough to prevent that.
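
For example (a hypothetical interleaving, assuming only the
smp_wmb()/smp_rmb() pairing discussed above; v0/v1 name the two vCPUs):

	/* vCPU0 thread (32bit) */
	set_bit(KVM_ARM_VCPU_EL1_32BIT, v0->arch.features);
	smp_wmb();
	v0->arch.target = phys_target;
	r0 = READ_ONCE(v1->arch.target);	/* may still read -1 */

	/* vCPU1 thread (64bit) */
	smp_wmb();
	v1->arch.target = phys_target;
	r1 = READ_ONCE(v0->arch.target);	/* may still read -1 */

	/*
	 * r0 == -1 && r1 == -1 is an allowed outcome (store buffering):
	 * each thread skips the not-yet-visible vCPU, both checks pass,
	 * and a mixed-width VM slips through.  Serializing the update
	 * and the check on kvm->lock forbids this interleaving.
	 */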

Thanks,
Alex

> 
> Thanks,
> 
> 	M.
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index e4727dc771bf..42f2ab80646c 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1061,7 +1061,8 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
>  static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>  			       const struct kvm_vcpu_init *init)
>  {
> -	unsigned int i, ret;
> +	unsigned int i;
> +	int ret = 0;
>  	u32 phys_target = kvm_target_cpu();
>  
>  	if (init->target != phys_target)
> @@ -1074,32 +1075,46 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>  	if (vcpu->arch.target != -1 && vcpu->arch.target != init->target)
>  		return -EINVAL;
>  
> +	/* Hazard against a concurent check of the target in kvm_reset_vcpu() */
> +	mutex_lock(&vcpu->kvm->lock);
> +
>  	/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
>  	for (i = 0; i < sizeof(init->features) * 8; i++) {
>  		bool set = (init->features[i / 32] & (1 << (i % 32)));
>  
> -		if (set && i >= KVM_VCPU_MAX_FEATURES)
> -			return -ENOENT;
> +		if (set && i >= KVM_VCPU_MAX_FEATURES) {
> +			ret = -ENOENT;
> +			break;
> +		}
>  
>  		/*
>  		 * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
>  		 * use the same feature set.
>  		 */
>  		if (vcpu->arch.target != -1 && i < KVM_VCPU_MAX_FEATURES &&
> -		    test_bit(i, vcpu->arch.features) != set)
> -			return -EINVAL;
> +		    test_bit(i, vcpu->arch.features) != set) {
> +			ret = -EINVAL;
> +			break;
> +		}
>  
>  		if (set)
>  			set_bit(i, vcpu->arch.features);
>  	}
>  
> -	vcpu->arch.target = phys_target;
> +	if (!ret)
> +		vcpu->arch.target = phys_target;
> +
> +	mutex_unlock(&vcpu->kvm->lock);
> +	if (ret)
> +		return ret;
>  
>  	/* Now we know what it is, we can reset it. */
>  	ret = kvm_reset_vcpu(vcpu);
>  	if (ret) {
> +		mutex_lock(&vcpu->kvm->lock);
>  		vcpu->arch.target = -1;
>  		bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
> +		mutex_unlock(&vcpu->kvm->lock);
>  	}
>  
>  	return ret;
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index ef78bbc7566a..fae88a703140 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -180,13 +180,6 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
>  	if (kvm_has_mte(vcpu->kvm) && is32bit)
>  		return false;
>  
> -	/*
> -	 * Make sure vcpu->arch.target setting is visible from others so
> -	 * that the width consistency checking between two vCPUs is done
> -	 * by at least one of them at KVM_ARM_VCPU_INIT.
> -	 */
> -	smp_mb();
> -
>  	/* Check that the vcpus are either all 32bit or all 64bit */
>  	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
>  		/* Skip if KVM_ARM_VCPU_INIT is not done for the vcpu yet */
> @@ -222,14 +215,19 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
>  int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_reset_state reset_state;
> -	int ret;
> +	int ret = -EINVAL;
>  	bool loaded;
>  	u32 pstate;
>  
>  	mutex_lock(&vcpu->kvm->lock);
> -	reset_state = vcpu->arch.reset_state;
> -	WRITE_ONCE(vcpu->arch.reset_state.reset, false);
> +	if (vcpu_allowed_register_width(vcpu)) {
> +		reset_state = vcpu->arch.reset_state;
> +		WRITE_ONCE(vcpu->arch.reset_state.reset, false);
> +		ret = 0;
> +	}
>  	mutex_unlock(&vcpu->kvm->lock);
> +	if (ret)
> +		goto out;
>  
>  	/* Reset PMU outside of the non-preemptible section */
>  	kvm_pmu_vcpu_reset(vcpu);
> @@ -257,11 +255,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  		}
>  	}
>  
> -	if (!vcpu_allowed_register_width(vcpu)) {
> -		ret = -EINVAL;
> -		goto out;
> -	}
> -
>  	switch (vcpu->arch.target) {
>  	default:
>  		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
> 
> -- 
> Without deviation from the norm, progress is not possible.


* Re: [PATCH 1/2] KVM: arm64: mixed-width check should be skipped for uninitialized vCPUs
  2022-01-11 16:11       ` Alexandru Elisei
@ 2022-01-13  5:33         ` Reiji Watanabe
From: Reiji Watanabe @ 2022-01-13  5:33 UTC
  To: Alexandru Elisei
  Cc: Marc Zyngier, kvmarm, kvm, Linux ARM, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Peter Shier,
	Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata

Hi Marc and Alex,

On Tue, Jan 11, 2022 at 8:11 AM Alexandru Elisei
<alexandru.elisei@arm.com> wrote:
>
> Hi Marc,
>
> On Tue, Jan 11, 2022 at 01:30:41PM +0000, Marc Zyngier wrote:
> > On Tue, 11 Jan 2022 07:37:57 +0000,
> > Reiji Watanabe <reijiw@google.com> wrote:
> > >
> > > Hi Alex,
> > >
> > > On Mon, Jan 10, 2022 at 2:48 AM Alexandru Elisei
> > > <alexandru.elisei@arm.com> wrote:
> > > >
> > > > Hi Reiji,
> > > >
> > > > On Sun, Jan 09, 2022 at 09:40:41PM -0800, Reiji Watanabe wrote:
> > > > > vcpu_allowed_register_width() checks if all the VCPUs are either
> > > > > all 32bit or all 64bit.  Since the checking is done even for vCPUs
> > > > > that are not initialized (KVM_ARM_VCPU_INIT has not been done) yet,
> > > > > the non-initialized vCPUs are erroneously treated as 64bit vCPU,
> > > > > which causes the function to incorrectly detect a mixed-width VM.
> > > > >
> > > > > Fix vcpu_allowed_register_width() to skip the check for vCPUs that
> > > > > are not initialized yet.
> > > > >
> > > > > Fixes: 66e94d5cafd4 ("KVM: arm64: Prevent mixed-width VM creation")
> > > > > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > > > > ---
> > > > >  arch/arm64/kvm/reset.c | 11 +++++++++++
> > > > >  1 file changed, 11 insertions(+)
> > > > >
> > > > > diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> > > > > index 426bd7fbc3fd..ef78bbc7566a 100644
> > > > > --- a/arch/arm64/kvm/reset.c
> > > > > +++ b/arch/arm64/kvm/reset.c
> > > > > @@ -180,8 +180,19 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
> > > > >       if (kvm_has_mte(vcpu->kvm) && is32bit)
> > > > >               return false;
> > > > >
> > > > > +     /*
> > > > > +      * Make sure vcpu->arch.target setting is visible from others so
> > > > > +      * that the width consistency checking between two vCPUs is done
> > > > > +      * by at least one of them at KVM_ARM_VCPU_INIT.
> > > > > +      */
> > > > > +     smp_mb();
> > > >
> > > > From ARM DDI 0487G.a, page B2-146 ("Data Memory Barrier (DMB)"):
> > > >
> > > > "The DMB instruction is a memory barrier instruction that ensures the relative
> > > > order of memory accesses before the barrier with memory accesses after the
> > > > barrier."
> > > >
> > > > I'm going to assume from the comment that you are referring to completion of
> > > > memory accesses ("Make sure [..] is visible from others"). Please correct me if
> > > > I am wrong. In this case, DMB ensures ordering of memory accesses with regards
> > > > to writes and reads, not *completion*.  Have a look at
> > > > tools/memory-model/litmus-tests/MP+fencewmbonceonce+fencermbonceonce.litmus for
> > > > the classic message passing example as an example of memory ordering.
> > > > Message passing and other patterns are also explained in ARM DDI 0487G.a, page
> > > > K11-8363.
> > > >
> > > > I'm not saying that your approach is incorrect, but the commit message should
> > > > explain what memory accesses are being ordered relative to each other and why.
> > >
> > > Thank you so much for the review.
> > > What I meant with the comment was:
> > > ---
> > >   DMB is used to make sure that writing @vcpu->arch.target, which is done
> > >   by kvm_vcpu_set_target() before getting here, is visible to other PEs
> > >   before the following kvm_for_each_vcpu iteration reads the other vCPUs'
> > >   target field.
> > > ---
> > > Did the comment become more clear ?? (Or do I use DMB incorrectly ?)
> > >
> > > > > +
> > > > >       /* Check that the vcpus are either all 32bit or all 64bit */
> > > > >       kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
> > > > > +             /* Skip if KVM_ARM_VCPU_INIT is not done for the vcpu yet */
> > > > > +             if (tmp->arch.target == -1)
> > > > > +                     continue;
> > >
> > > I just noticed DMB(ishld) is needed here to assure ordering between
> > > reading tmp->arch.target and reading vcpu->arch.features for this fix.
> > > Similarly, kvm_vcpu_set_target() needs DMB(ishst) to assure ordering
> > > between writing vcpu->arch.features and writing vcpu->arch.target...
> > > I am going to fix them in the v2 series.
> >
> > Yes, you'd need at least this, and preferably in their smp_rmb/wmb
> > variants.
> >
> > However, this looks like a pretty fragile construct, as there are
> > multiple paths where we can change target (including some error
> > paths from the run loop).
> >
> > I'd rather all changes to target and the feature bits happen under the
> > kvm->lock, and take that lock when checking for consistency in
> > vcpu_allowed_register_width(), as this isn't a fast path. I wrote the
> > following, which is obviously incomplete and as usual untested.
>
> I think this is the better approach, because we also want to make sure that
> a PE observes changes to target and features as soon as they have been
> made, to avoid situations where one PE sets the target and the 32bit
> feature, and another PE reads the old values and skips the check, in which
> case memory ordering is not enough.

Thank you for the comments and the suggestion.
I will look into fixing this based on the suggestion.

Thanks,
Reiji




>
> Thanks,
> Alex
>
> >
> > Thanks,
> >
> >       M.
> >
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index e4727dc771bf..42f2ab80646c 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -1061,7 +1061,8 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
> >  static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> >                              const struct kvm_vcpu_init *init)
> >  {
> > -     unsigned int i, ret;
> > +     unsigned int i;
> > +     int ret = 0;
> >       u32 phys_target = kvm_target_cpu();
> >
> >       if (init->target != phys_target)
> > @@ -1074,32 +1075,46 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> >       if (vcpu->arch.target != -1 && vcpu->arch.target != init->target)
> >               return -EINVAL;
> >
> > +     /* Hazard against a concurent check of the target in kvm_reset_vcpu() */
> > +     mutex_lock(&vcpu->kvm->lock);
> > +
> >       /* -ENOENT for unknown features, -EINVAL for invalid combinations. */
> >       for (i = 0; i < sizeof(init->features) * 8; i++) {
> >               bool set = (init->features[i / 32] & (1 << (i % 32)));
> >
> > -             if (set && i >= KVM_VCPU_MAX_FEATURES)
> > -                     return -ENOENT;
> > +             if (set && i >= KVM_VCPU_MAX_FEATURES) {
> > +                     ret = -ENOENT;
> > +                     break;
> > +             }
> >
> >               /*
> >                * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
> >                * use the same feature set.
> >                */
> >               if (vcpu->arch.target != -1 && i < KVM_VCPU_MAX_FEATURES &&
> > -                 test_bit(i, vcpu->arch.features) != set)
> > -                     return -EINVAL;
> > +                 test_bit(i, vcpu->arch.features) != set) {
> > +                     ret = -EINVAL;
> > +                     break;
> > +             }
> >
> >               if (set)
> >                       set_bit(i, vcpu->arch.features);
> >       }
> >
> > -     vcpu->arch.target = phys_target;
> > +     if (!ret)
> > +             vcpu->arch.target = phys_target;
> > +
> > +     mutex_unlock(&vcpu->kvm->lock);
> > +     if (ret)
> > +             return ret;
> >
> >       /* Now we know what it is, we can reset it. */
> >       ret = kvm_reset_vcpu(vcpu);
> >       if (ret) {
> > +             mutex_lock(&vcpu->kvm->lock);
> >               vcpu->arch.target = -1;
> >               bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
> > +             mutex_unlock(&vcpu->kvm->lock);
> >       }
> >
> >       return ret;
> > diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> > index ef78bbc7566a..fae88a703140 100644
> > --- a/arch/arm64/kvm/reset.c
> > +++ b/arch/arm64/kvm/reset.c
> > @@ -180,13 +180,6 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
> >       if (kvm_has_mte(vcpu->kvm) && is32bit)
> >               return false;
> >
> > -     /*
> > -      * Make sure vcpu->arch.target setting is visible from others so
> > -      * that the width consistency checking between two vCPUs is done
> > -      * by at least one of them at KVM_ARM_VCPU_INIT.
> > -      */
> > -     smp_mb();
> > -
> >       /* Check that the vcpus are either all 32bit or all 64bit */
> >       kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
> >               /* Skip if KVM_ARM_VCPU_INIT is not done for the vcpu yet */
> > @@ -222,14 +215,19 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
> >  int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >  {
> >       struct vcpu_reset_state reset_state;
> > -     int ret;
> > +     int ret = -EINVAL;
> >       bool loaded;
> >       u32 pstate;
> >
> >       mutex_lock(&vcpu->kvm->lock);
> > -     reset_state = vcpu->arch.reset_state;
> > -     WRITE_ONCE(vcpu->arch.reset_state.reset, false);
> > +     if (vcpu_allowed_register_width(vcpu)) {
> > +             reset_state = vcpu->arch.reset_state;
> > +             WRITE_ONCE(vcpu->arch.reset_state.reset, false);
> > +             ret = 0;
> > +     }
> >       mutex_unlock(&vcpu->kvm->lock);
> > +     if (ret)
> > +             goto out;
> >
> >       /* Reset PMU outside of the non-preemptible section */
> >       kvm_pmu_vcpu_reset(vcpu);
> > @@ -257,11 +255,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
> >               }
> >       }
> >
> > -     if (!vcpu_allowed_register_width(vcpu)) {
> > -             ret = -EINVAL;
> > -             goto out;
> > -     }
> > -
> >       switch (vcpu->arch.target) {
> >       default:
> >               if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
> >
> > --
> > Without deviation from the norm, progress is not possible.

