All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/4] KVM: selftests: Fixes for ucall pool + page_fault_test
@ 2022-12-07 21:48 ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei
  Cc: linux-arm-kernel, kvmarm, kvm, kvmarm, Ricardo Koller,
	Paolo Bonzini, Sean Christopherson, Oliver Upton

The combination of the pool-based ucall implementation + page_fault_test
resulted in some 'fun' bugs, not the least of which is that our handling
of the VA space on arm64 is completely wrong.

Small series to fix up the issues on kvm/queue. Patches 1-2 can probably
be squashed into Paolo's merge resolution, if desired.

Mark Brown (1):
  KVM: selftests: Fix build due to ucall_uninit() removal

Oliver Upton (3):
  KVM: selftests: Setup ucall after loading program into guest memory
  KVM: arm64: selftests: Align VA space allocator with TTBR0
  KVM: selftests: Allocate ucall pool from MEM_REGION_DATA

 .../selftests/kvm/aarch64/page_fault_test.c       |  9 +++++++--
 .../testing/selftests/kvm/include/kvm_util_base.h |  1 +
 .../testing/selftests/kvm/lib/aarch64/processor.c | 10 ++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c        | 15 ++++++++++-----
 tools/testing/selftests/kvm/lib/ucall_common.c    |  2 +-
 5 files changed, 29 insertions(+), 8 deletions(-)

-- 
2.39.0.rc0.267.gcb52ba06e7-goog


^ permalink raw reply	[flat|nested] 57+ messages in thread

* [PATCH 0/4] KVM: selftests: Fixes for ucall pool + page_fault_test
@ 2022-12-07 21:48 ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei
  Cc: kvm, kvmarm, Paolo Bonzini, kvmarm, linux-arm-kernel

The combination of the pool-based ucall implementation + page_fault_test
resulted in some 'fun' bugs, not the least of which is that our handling
of the VA space on arm64 is completely wrong.

Small series to fix up the issues on kvm/queue. Patches 1-2 can probably
be squashed into Paolo's merge resolution, if desired.

Mark Brown (1):
  KVM: selftests: Fix build due to ucall_uninit() removal

Oliver Upton (3):
  KVM: selftests: Setup ucall after loading program into guest memory
  KVM: arm64: selftests: Align VA space allocator with TTBR0
  KVM: selftests: Allocate ucall pool from MEM_REGION_DATA

 .../selftests/kvm/aarch64/page_fault_test.c       |  9 +++++++--
 .../testing/selftests/kvm/include/kvm_util_base.h |  1 +
 .../testing/selftests/kvm/lib/aarch64/processor.c | 10 ++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c        | 15 ++++++++++-----
 tools/testing/selftests/kvm/lib/ucall_common.c    |  2 +-
 5 files changed, 29 insertions(+), 8 deletions(-)

-- 
2.39.0.rc0.267.gcb52ba06e7-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [PATCH 0/4] KVM: selftests: Fixes for ucall pool + page_fault_test
@ 2022-12-07 21:48 ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei
  Cc: linux-arm-kernel, kvmarm, kvm, kvmarm, Ricardo Koller,
	Paolo Bonzini, Sean Christopherson, Oliver Upton

The combination of the pool-based ucall implementation + page_fault_test
resulted in some 'fun' bugs, not the least of which is that our handling
of the VA space on arm64 is completely wrong.

Small series to fix up the issues on kvm/queue. Patches 1-2 can probably
be squashed into Paolo's merge resolution, if desired.

Mark Brown (1):
  KVM: selftests: Fix build due to ucall_uninit() removal

Oliver Upton (3):
  KVM: selftests: Setup ucall after loading program into guest memory
  KVM: arm64: selftests: Align VA space allocator with TTBR0
  KVM: selftests: Allocate ucall pool from MEM_REGION_DATA

 .../selftests/kvm/aarch64/page_fault_test.c       |  9 +++++++--
 .../testing/selftests/kvm/include/kvm_util_base.h |  1 +
 .../testing/selftests/kvm/lib/aarch64/processor.c | 10 ++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c        | 15 ++++++++++-----
 tools/testing/selftests/kvm/lib/ucall_common.c    |  2 +-
 5 files changed, 29 insertions(+), 8 deletions(-)

-- 
2.39.0.rc0.267.gcb52ba06e7-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* [PATCH 1/4] KVM: selftests: Fix build due to ucall_uninit() removal
  2022-12-07 21:48 ` Oliver Upton
  (?)
@ 2022-12-07 21:48   ` Oliver Upton
  -1 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Oliver Upton, Paolo Bonzini, Shuah Khan, Andrew Jones,
	Sean Christopherson, Ricardo Koller
  Cc: linux-arm-kernel, kvmarm, kvm, kvmarm, Mark Brown,
	linux-kselftest, linux-kernel

From: Mark Brown <broonie@kernel.org>

Today's -next fails to build on arm64 due to:

In file included from include/kvm_util.h:11,
                 from aarch64/page_fault_test.c:15:
include/ucall_common.h:36:47: note: expected ‘vm_paddr_t’ {aka ‘long unsigned int’} but argument is of type ‘void *’
   36 | void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa);
      |                                    ~~~~~~~~~~~^~~~~~~~
aarch64/page_fault_test.c:725:2: warning: implicit declaration of function ‘ucall_uninit’; did you mean ‘ucall_init’? [-Wimplicit-function-declaration]
  725 |  ucall_uninit(vm);
      |  ^~~~~~~~~~~~
      |  ucall_init

which is caused by commit

interacting poorly with commit

   28a65567acb5 ("KVM: selftests: Drop now-unnecessary ucall_uninit()")

As is done for other ucall_uninit() users remove the call in the newly added
page_fault_test.c.

Fixes: 28a65567acb5 ("KVM: selftests: Drop now-unnecessary ucall_uninit()")
Fixes: 35c581015712 ("KVM: selftests: aarch64: Add aarch64/page_fault_test")
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Ricardo Koller <ricarkol@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 tools/testing/selftests/kvm/aarch64/page_fault_test.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 0cda70bef5d5..92d3a91153b6 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -722,7 +722,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	vcpu_run_loop(vm, vcpu, test);
 
-	ucall_uninit(vm);
 	kvm_vm_free(vm);
 	free_uffd(test, pt_uffd, data_uffd);
 
-- 
2.39.0.rc0.267.gcb52ba06e7-goog


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 1/4] KVM: selftests: Fix build due to ucall_uninit() removal
@ 2022-12-07 21:48   ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Oliver Upton, Paolo Bonzini, Shuah Khan, Andrew Jones,
	Sean Christopherson, Ricardo Koller
  Cc: kvm, linux-kernel, Mark Brown, linux-kselftest, kvmarm, kvmarm,
	linux-arm-kernel

From: Mark Brown <broonie@kernel.org>

Today's -next fails to build on arm64 due to:

In file included from include/kvm_util.h:11,
                 from aarch64/page_fault_test.c:15:
include/ucall_common.h:36:47: note: expected ‘vm_paddr_t’ {aka ‘long unsigned int’} but argument is of type ‘void *’
   36 | void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa);
      |                                    ~~~~~~~~~~~^~~~~~~~
aarch64/page_fault_test.c:725:2: warning: implicit declaration of function ‘ucall_uninit’; did you mean ‘ucall_init’? [-Wimplicit-function-declaration]
  725 |  ucall_uninit(vm);
      |  ^~~~~~~~~~~~
      |  ucall_init

which is caused by commit

interacting poorly with commit

   28a65567acb5 ("KVM: selftests: Drop now-unnecessary ucall_uninit()")

As is done for other ucall_uninit() users remove the call in the newly added
page_fault_test.c.

Fixes: 28a65567acb5 ("KVM: selftests: Drop now-unnecessary ucall_uninit()")
Fixes: 35c581015712 ("KVM: selftests: aarch64: Add aarch64/page_fault_test")
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Ricardo Koller <ricarkol@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 tools/testing/selftests/kvm/aarch64/page_fault_test.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 0cda70bef5d5..92d3a91153b6 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -722,7 +722,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	vcpu_run_loop(vm, vcpu, test);
 
-	ucall_uninit(vm);
 	kvm_vm_free(vm);
 	free_uffd(test, pt_uffd, data_uffd);
 
-- 
2.39.0.rc0.267.gcb52ba06e7-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 1/4] KVM: selftests: Fix build due to ucall_uninit() removal
@ 2022-12-07 21:48   ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Oliver Upton, Paolo Bonzini, Shuah Khan, Andrew Jones,
	Sean Christopherson, Ricardo Koller
  Cc: linux-arm-kernel, kvmarm, kvm, kvmarm, Mark Brown,
	linux-kselftest, linux-kernel

From: Mark Brown <broonie@kernel.org>

Today's -next fails to build on arm64 due to:

In file included from include/kvm_util.h:11,
                 from aarch64/page_fault_test.c:15:
include/ucall_common.h:36:47: note: expected ‘vm_paddr_t’ {aka ‘long unsigned int’} but argument is of type ‘void *’
   36 | void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa);
      |                                    ~~~~~~~~~~~^~~~~~~~
aarch64/page_fault_test.c:725:2: warning: implicit declaration of function ‘ucall_uninit’; did you mean ‘ucall_init’? [-Wimplicit-function-declaration]
  725 |  ucall_uninit(vm);
      |  ^~~~~~~~~~~~
      |  ucall_init

which is caused by commit

interacting poorly with commit

   28a65567acb5 ("KVM: selftests: Drop now-unnecessary ucall_uninit()")

As is done for other ucall_uninit() users remove the call in the newly added
page_fault_test.c.

Fixes: 28a65567acb5 ("KVM: selftests: Drop now-unnecessary ucall_uninit()")
Fixes: 35c581015712 ("KVM: selftests: aarch64: Add aarch64/page_fault_test")
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Ricardo Koller <ricarkol@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 tools/testing/selftests/kvm/aarch64/page_fault_test.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 0cda70bef5d5..92d3a91153b6 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -722,7 +722,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	vcpu_run_loop(vm, vcpu, test);
 
-	ucall_uninit(vm);
 	kvm_vm_free(vm);
 	free_uffd(test, pt_uffd, data_uffd);
 
-- 
2.39.0.rc0.267.gcb52ba06e7-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
  2022-12-07 21:48 ` Oliver Upton
  (?)
@ 2022-12-07 21:48   ` Oliver Upton
  -1 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Oliver Upton, Paolo Bonzini, Shuah Khan
  Cc: linux-arm-kernel, kvmarm, kvm, kvmarm, Ricardo Koller,
	Sean Christopherson, linux-kselftest, linux-kernel

The new ucall infrastructure needs to update a couple of guest globals
to pass through the ucall MMIO addr and pool of ucall structs. A
precondition of this actually working is to have the program image
already loaded into guest memory.

Call ucall_init() after kvm_vm_elf_load(). Continue to park the ucall
MMIO addr after MEM_REGION_TEST_DATA.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 tools/testing/selftests/kvm/aarch64/page_fault_test.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 92d3a91153b6..95d22cfb7b41 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
 				    data_size / guest_page_size,
 				    p->test_desc->data_memslot_flags);
 	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
+}
+
+static void setup_ucall(struct kvm_vm *vm)
+{
+	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
 
-	ucall_init(vm, data_gpa + data_size);
+	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
 }
 
 static void setup_default_handlers(struct test_desc *test)
@@ -702,6 +707,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	vm = ____vm_create(mode);
 	setup_memslots(vm, p);
 	kvm_vm_elf_load(vm, program_invocation_name);
+	setup_ucall(vm);
 	vcpu = vm_vcpu_add(vm, 0, guest_code);
 
 	setup_gva_maps(vm);
-- 
2.39.0.rc0.267.gcb52ba06e7-goog


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-07 21:48   ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Oliver Upton, Paolo Bonzini, Shuah Khan
  Cc: kvm, linux-kernel, linux-kselftest, kvmarm, kvmarm, linux-arm-kernel

The new ucall infrastructure needs to update a couple of guest globals
to pass through the ucall MMIO addr and pool of ucall structs. A
precondition of this actually working is to have the program image
already loaded into guest memory.

Call ucall_init() after kvm_vm_elf_load(). Continue to park the ucall
MMIO addr after MEM_REGION_TEST_DATA.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 tools/testing/selftests/kvm/aarch64/page_fault_test.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 92d3a91153b6..95d22cfb7b41 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
 				    data_size / guest_page_size,
 				    p->test_desc->data_memslot_flags);
 	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
+}
+
+static void setup_ucall(struct kvm_vm *vm)
+{
+	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
 
-	ucall_init(vm, data_gpa + data_size);
+	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
 }
 
 static void setup_default_handlers(struct test_desc *test)
@@ -702,6 +707,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	vm = ____vm_create(mode);
 	setup_memslots(vm, p);
 	kvm_vm_elf_load(vm, program_invocation_name);
+	setup_ucall(vm);
 	vcpu = vm_vcpu_add(vm, 0, guest_code);
 
 	setup_gva_maps(vm);
-- 
2.39.0.rc0.267.gcb52ba06e7-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-07 21:48   ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Oliver Upton, Paolo Bonzini, Shuah Khan
  Cc: linux-arm-kernel, kvmarm, kvm, kvmarm, Ricardo Koller,
	Sean Christopherson, linux-kselftest, linux-kernel

The new ucall infrastructure needs to update a couple of guest globals
to pass through the ucall MMIO addr and pool of ucall structs. A
precondition of this actually working is to have the program image
already loaded into guest memory.

Call ucall_init() after kvm_vm_elf_load(). Continue to park the ucall
MMIO addr after MEM_REGION_TEST_DATA.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 tools/testing/selftests/kvm/aarch64/page_fault_test.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 92d3a91153b6..95d22cfb7b41 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
 				    data_size / guest_page_size,
 				    p->test_desc->data_memslot_flags);
 	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
+}
+
+static void setup_ucall(struct kvm_vm *vm)
+{
+	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
 
-	ucall_init(vm, data_gpa + data_size);
+	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
 }
 
 static void setup_default_handlers(struct test_desc *test)
@@ -702,6 +707,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	vm = ____vm_create(mode);
 	setup_memslots(vm, p);
 	kvm_vm_elf_load(vm, program_invocation_name);
+	setup_ucall(vm);
 	vcpu = vm_vcpu_add(vm, 0, guest_code);
 
 	setup_gva_maps(vm);
-- 
2.39.0.rc0.267.gcb52ba06e7-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
  2022-12-07 21:48 ` Oliver Upton
  (?)
@ 2022-12-07 21:48   ` Oliver Upton
  -1 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Suzuki K Poulose, Oliver Upton
  Cc: linux-arm-kernel, kvmarm, kvm, kvmarm, Ricardo Koller,
	Sean Christopherson, linux-kselftest, linux-kernel

An interesting feature of the Arm architecture is that the stage-1 MMU
supports two distinct VA regions, controlled by TTBR{0,1}_EL1. As KVM
selftests on arm64 only uses TTBR0_EL1, the VA space is constrained to
[0, 2^(va_bits-1)). This is different from other architectures that
allow for addressing low and high regions of the VA space from a single
page table.

KVM selftests' VA space allocator presumes the valid address range is
split between low and high memory based the MSB, which of course is a
poor match for arm64's TTBR0 region.

Allow architectures to override the default VA space layout. Make use of
the override to align vpages_valid with the behavior of TTBR0 on arm64.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 .../testing/selftests/kvm/include/kvm_util_base.h |  1 +
 .../testing/selftests/kvm/lib/aarch64/processor.c | 10 ++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c        | 15 ++++++++++-----
 3 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 6cd86da698b3..fbc2a79369b8 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -420,6 +420,7 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
 void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
+void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
 vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 316de70db91d..5972a23b2765 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -541,3 +541,13 @@ void kvm_selftest_arch_init(void)
 	 */
 	guest_modes_append_default();
 }
+
+void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+{
+	/*
+	 * arm64 selftests use only TTBR0_EL1, meaning that the valid VA space
+	 * is [0, 2^(64 - TCR_EL1.T0SZ)).
+	 */
+	sparsebit_set_num(vm->vpages_valid, 0,
+			  (1ULL << vm->va_bits) >> vm->page_shift);
+}
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index e9607eb089be..c88c3ace16d2 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -186,6 +186,15 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
 	       "Missing new mode params?");
 
+__weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+{
+	sparsebit_set_num(vm->vpages_valid,
+		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+	sparsebit_set_num(vm->vpages_valid,
+		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
+		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+}
+
 struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 {
 	struct kvm_vm *vm;
@@ -274,11 +283,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 
 	/* Limit to VA-bit canonical virtual addresses. */
 	vm->vpages_valid = sparsebit_alloc();
-	sparsebit_set_num(vm->vpages_valid,
-		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
-	sparsebit_set_num(vm->vpages_valid,
-		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
-		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+	vm_vaddr_populate_bitmap(vm);
 
 	/* Limit physical addresses to PA-bits. */
 	vm->max_gfn = vm_compute_max_gfn(vm);
-- 
2.39.0.rc0.267.gcb52ba06e7-goog


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
@ 2022-12-07 21:48   ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Suzuki K Poulose, Oliver Upton
  Cc: kvm, linux-kernel, linux-kselftest, kvmarm, kvmarm, linux-arm-kernel

An interesting feature of the Arm architecture is that the stage-1 MMU
supports two distinct VA regions, controlled by TTBR{0,1}_EL1. As KVM
selftests on arm64 only uses TTBR0_EL1, the VA space is constrained to
[0, 2^(va_bits-1)). This is different from other architectures that
allow for addressing low and high regions of the VA space from a single
page table.

KVM selftests' VA space allocator presumes the valid address range is
split between low and high memory based the MSB, which of course is a
poor match for arm64's TTBR0 region.

Allow architectures to override the default VA space layout. Make use of
the override to align vpages_valid with the behavior of TTBR0 on arm64.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 .../testing/selftests/kvm/include/kvm_util_base.h |  1 +
 .../testing/selftests/kvm/lib/aarch64/processor.c | 10 ++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c        | 15 ++++++++++-----
 3 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 6cd86da698b3..fbc2a79369b8 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -420,6 +420,7 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
 void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
+void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
 vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 316de70db91d..5972a23b2765 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -541,3 +541,13 @@ void kvm_selftest_arch_init(void)
 	 */
 	guest_modes_append_default();
 }
+
+void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+{
+	/*
+	 * arm64 selftests use only TTBR0_EL1, meaning that the valid VA space
+	 * is [0, 2^(64 - TCR_EL1.T0SZ)).
+	 */
+	sparsebit_set_num(vm->vpages_valid, 0,
+			  (1ULL << vm->va_bits) >> vm->page_shift);
+}
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index e9607eb089be..c88c3ace16d2 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -186,6 +186,15 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
 	       "Missing new mode params?");
 
+__weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+{
+	sparsebit_set_num(vm->vpages_valid,
+		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+	sparsebit_set_num(vm->vpages_valid,
+		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
+		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+}
+
 struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 {
 	struct kvm_vm *vm;
@@ -274,11 +283,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 
 	/* Limit to VA-bit canonical virtual addresses. */
 	vm->vpages_valid = sparsebit_alloc();
-	sparsebit_set_num(vm->vpages_valid,
-		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
-	sparsebit_set_num(vm->vpages_valid,
-		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
-		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+	vm_vaddr_populate_bitmap(vm);
 
 	/* Limit physical addresses to PA-bits. */
 	vm->max_gfn = vm_compute_max_gfn(vm);
-- 
2.39.0.rc0.267.gcb52ba06e7-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
@ 2022-12-07 21:48   ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Suzuki K Poulose, Oliver Upton
  Cc: linux-arm-kernel, kvmarm, kvm, kvmarm, Ricardo Koller,
	Sean Christopherson, linux-kselftest, linux-kernel

An interesting feature of the Arm architecture is that the stage-1 MMU
supports two distinct VA regions, controlled by TTBR{0,1}_EL1. As KVM
selftests on arm64 only uses TTBR0_EL1, the VA space is constrained to
[0, 2^(va_bits-1)). This is different from other architectures that
allow for addressing low and high regions of the VA space from a single
page table.

KVM selftests' VA space allocator presumes the valid address range is
split between low and high memory based the MSB, which of course is a
poor match for arm64's TTBR0 region.

Allow architectures to override the default VA space layout. Make use of
the override to align vpages_valid with the behavior of TTBR0 on arm64.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 .../testing/selftests/kvm/include/kvm_util_base.h |  1 +
 .../testing/selftests/kvm/lib/aarch64/processor.c | 10 ++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c        | 15 ++++++++++-----
 3 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 6cd86da698b3..fbc2a79369b8 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -420,6 +420,7 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
 void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
+void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
 vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 316de70db91d..5972a23b2765 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -541,3 +541,13 @@ void kvm_selftest_arch_init(void)
 	 */
 	guest_modes_append_default();
 }
+
+void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+{
+	/*
+	 * arm64 selftests use only TTBR0_EL1, meaning that the valid VA space
+	 * is [0, 2^(64 - TCR_EL1.T0SZ)).
+	 */
+	sparsebit_set_num(vm->vpages_valid, 0,
+			  (1ULL << vm->va_bits) >> vm->page_shift);
+}
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index e9607eb089be..c88c3ace16d2 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -186,6 +186,15 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
 	       "Missing new mode params?");
 
+__weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+{
+	sparsebit_set_num(vm->vpages_valid,
+		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+	sparsebit_set_num(vm->vpages_valid,
+		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
+		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+}
+
 struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 {
 	struct kvm_vm *vm;
@@ -274,11 +283,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 
 	/* Limit to VA-bit canonical virtual addresses. */
 	vm->vpages_valid = sparsebit_alloc();
-	sparsebit_set_num(vm->vpages_valid,
-		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
-	sparsebit_set_num(vm->vpages_valid,
-		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
-		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+	vm_vaddr_populate_bitmap(vm);
 
 	/* Limit physical addresses to PA-bits. */
 	vm->max_gfn = vm_compute_max_gfn(vm);
-- 
2.39.0.rc0.267.gcb52ba06e7-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 4/4] KVM: selftests: Allocate ucall pool from MEM_REGION_DATA
  2022-12-07 21:48 ` Oliver Upton
  (?)
@ 2022-12-07 21:48   ` Oliver Upton
  -1 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Andrew Jones, Sean Christopherson, Peter Gonda
  Cc: linux-arm-kernel, kvmarm, kvm, kvmarm, Ricardo Koller,
	Oliver Upton, linux-kselftest, linux-kernel

MEM_REGION_TEST_DATA is meant to hold data explicitly used by a
selftest, not implicit allocations due to the selftests infrastructure.
Allocate the ucall pool from MEM_REGION_DATA much like the rest of the
selftests library allocations.

Fixes: 426729b2cf2e ("KVM: selftests: Add ucall pool based implementation")
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 tools/testing/selftests/kvm/lib/ucall_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index 820ce6c82829..0cc0971ce60e 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -22,7 +22,7 @@ void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
 	vm_vaddr_t vaddr;
 	int i;
 
-	vaddr = vm_vaddr_alloc(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR);
+	vaddr = __vm_vaddr_alloc(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR, MEM_REGION_DATA);
 	hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);
 	memset(hdr, 0, sizeof(*hdr));
 
-- 
2.39.0.rc0.267.gcb52ba06e7-goog


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 4/4] KVM: selftests: Allocate ucall pool from MEM_REGION_DATA
@ 2022-12-07 21:48   ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Andrew Jones, Sean Christopherson, Peter Gonda
  Cc: kvm, linux-kernel, linux-kselftest, kvmarm, kvmarm, linux-arm-kernel

MEM_REGION_TEST_DATA is meant to hold data explicitly used by a
selftest, not implicit allocations due to the selftests infrastructure.
Allocate the ucall pool from MEM_REGION_DATA much like the rest of the
selftests library allocations.

Fixes: 426729b2cf2e ("KVM: selftests: Add ucall pool based implementation")
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 tools/testing/selftests/kvm/lib/ucall_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index 820ce6c82829..0cc0971ce60e 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -22,7 +22,7 @@ void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
 	vm_vaddr_t vaddr;
 	int i;
 
-	vaddr = vm_vaddr_alloc(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR);
+	vaddr = __vm_vaddr_alloc(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR, MEM_REGION_DATA);
 	hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);
 	memset(hdr, 0, sizeof(*hdr));
 
-- 
2.39.0.rc0.267.gcb52ba06e7-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 4/4] KVM: selftests: Allocate ucall pool from MEM_REGION_DATA
@ 2022-12-07 21:48   ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 21:48 UTC (permalink / raw)
  To: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Andrew Jones, Sean Christopherson, Peter Gonda
  Cc: linux-arm-kernel, kvmarm, kvm, kvmarm, Ricardo Koller,
	Oliver Upton, linux-kselftest, linux-kernel

MEM_REGION_TEST_DATA is meant to hold data explicitly used by a
selftest, not implicit allocations due to the selftests infrastructure.
Allocate the ucall pool from MEM_REGION_DATA much like the rest of the
selftests library allocations.

Fixes: 426729b2cf2e ("KVM: selftests: Add ucall pool based implementation")
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 tools/testing/selftests/kvm/lib/ucall_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index 820ce6c82829..0cc0971ce60e 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -22,7 +22,7 @@ void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
 	vm_vaddr_t vaddr;
 	int i;
 
-	vaddr = vm_vaddr_alloc(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR);
+	vaddr = __vm_vaddr_alloc(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR, MEM_REGION_DATA);
 	hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);
 	memset(hdr, 0, sizeof(*hdr));
 
-- 
2.39.0.rc0.267.gcb52ba06e7-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 4/4] KVM: selftests: Allocate ucall pool from MEM_REGION_DATA
  2022-12-07 21:48   ` Oliver Upton
  (?)
@ 2022-12-07 23:44     ` Sean Christopherson
  -1 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-07 23:44 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Andrew Jones, Peter Gonda, linux-arm-kernel, kvmarm,
	kvm, kvmarm, Ricardo Koller, linux-kselftest, linux-kernel

On Wed, Dec 07, 2022, Oliver Upton wrote:
> MEM_REGION_TEST_DATA is meant to hold data explicitly used by a
> selftest, not implicit allocations due to the selftests infrastructure.
> Allocate the ucall pool from MEM_REGION_DATA much like the rest of the
> selftests library allocations.
> 
> Fixes: 426729b2cf2e ("KVM: selftests: Add ucall pool based implementation")

Not that it really matters because no one will backport this verbatim, but this
is the wrong commit to blame.  As of commit 426729b2cf2e, MEM_REGION_DATA does not
exist.  And similarly, the common ucall code didn't exist when Ricardo's series
introduced MEM_REGION_DATA.

  $ git show 426729b2cf2e:tools/testing/selftests/kvm/include/kvm_util_base.h | grep MEM_REGION_DATA
  $ git show 290c5b54012b7:tools/testing/selftests/kvm/lib/ucall_common.c
  fatal: path 'tools/testing/selftests/kvm/lib/ucall_common.c' exists on disk, but not in '290c5b54012b7'

The commit where the two collided is:

Fixes: cc7544101eec ("Merge tag 'kvmarm-6.2' of https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD")

> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---

Fixes nit aside,

Reviewed-by: Sean Christopherson <seanjc@google.com>

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 4/4] KVM: selftests: Allocate ucall pool from MEM_REGION_DATA
@ 2022-12-07 23:44     ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-07 23:44 UTC (permalink / raw)
  To: Oliver Upton
  Cc: linux-kselftest, kvm, Marc Zyngier, linux-kernel, Andrew Jones,
	Peter Gonda, kvmarm, Paolo Bonzini, Shuah Khan, kvmarm,
	linux-arm-kernel

On Wed, Dec 07, 2022, Oliver Upton wrote:
> MEM_REGION_TEST_DATA is meant to hold data explicitly used by a
> selftest, not implicit allocations due to the selftests infrastructure.
> Allocate the ucall pool from MEM_REGION_DATA much like the rest of the
> selftests library allocations.
> 
> Fixes: 426729b2cf2e ("KVM: selftests: Add ucall pool based implementation")

Not that it really matters because no one will backport this verbatim, but this
is the wrong commit to blame.  As of commit 426729b2cf2e, MEM_REGION_DATA does not
exist.  And similarly, the common ucall code didn't exist when Ricardo's series
introduced MEM_REGION_DATA.

  $ git show 426729b2cf2e:tools/testing/selftests/kvm/include/kvm_util_base.h | grep MEM_REGION_DATA
  $ git show 290c5b54012b7:tools/testing/selftests/kvm/lib/ucall_common.c
  fatal: path 'tools/testing/selftests/kvm/lib/ucall_common.c' exists on disk, but not in '290c5b54012b7'

The commit where the two collided is:

Fixes: cc7544101eec ("Merge tag 'kvmarm-6.2' of https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD")

> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---

Fixes nit aside,

Reviewed-by: Sean Christopherson <seanjc@google.com>
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 4/4] KVM: selftests: Allocate ucall pool from MEM_REGION_DATA
@ 2022-12-07 23:44     ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-07 23:44 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Andrew Jones, Peter Gonda, linux-arm-kernel, kvmarm,
	kvm, kvmarm, Ricardo Koller, linux-kselftest, linux-kernel

On Wed, Dec 07, 2022, Oliver Upton wrote:
> MEM_REGION_TEST_DATA is meant to hold data explicitly used by a
> selftest, not implicit allocations due to the selftests infrastructure.
> Allocate the ucall pool from MEM_REGION_DATA much like the rest of the
> selftests library allocations.
> 
> Fixes: 426729b2cf2e ("KVM: selftests: Add ucall pool based implementation")

Not that it really matters because no one will backport this verbatim, but this
is the wrong commit to blame.  As of commit 426729b2cf2e, MEM_REGION_DATA does not
exist.  And similarly, the common ucall code didn't exist when Ricardo's series
introduced MEM_REGION_DATA.

  $ git show 426729b2cf2e:tools/testing/selftests/kvm/include/kvm_util_base.h | grep MEM_REGION_DATA
  $ git show 290c5b54012b7:tools/testing/selftests/kvm/lib/ucall_common.c
  fatal: path 'tools/testing/selftests/kvm/lib/ucall_common.c' exists on disk, but not in '290c5b54012b7'

The commit where the two collided is:

Fixes: cc7544101eec ("Merge tag 'kvmarm-6.2' of https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD")

> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---

Fixes nit aside,

Reviewed-by: Sean Christopherson <seanjc@google.com>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 4/4] KVM: selftests: Allocate ucall pool from MEM_REGION_DATA
  2022-12-07 23:44     ` Sean Christopherson
  (?)
@ 2022-12-07 23:56       ` Oliver Upton
  -1 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 23:56 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: linux-kselftest, kvm, Marc Zyngier, linux-kernel, Andrew Jones,
	Peter Gonda, kvmarm, Paolo Bonzini, Shuah Khan, kvmarm,
	linux-arm-kernel

On Wed, Dec 07, 2022 at 11:44:50PM +0000, Sean Christopherson wrote:
> On Wed, Dec 07, 2022, Oliver Upton wrote:
> > MEM_REGION_TEST_DATA is meant to hold data explicitly used by a
> > selftest, not implicit allocations due to the selftests infrastructure.
> > Allocate the ucall pool from MEM_REGION_DATA much like the rest of the
> > selftests library allocations.
> > 
> > Fixes: 426729b2cf2e ("KVM: selftests: Add ucall pool based implementation")
> 
> Not that it really matters because no one will backport this verbatim, but this
> is the wrong commit to blame.  As of commit 426729b2cf2e, MEM_REGION_DATA does not
> exist.  And similarly, the common ucall code didn't exist when Ricardo's series
> introduced MEM_REGION_DATA.
> 
>   $ git show 426729b2cf2e:tools/testing/selftests/kvm/include/kvm_util_base.h | grep MEM_REGION_DATA
>   $ git show 290c5b54012b7:tools/testing/selftests/kvm/lib/ucall_common.c
>   fatal: path 'tools/testing/selftests/kvm/lib/ucall_common.c' exists on disk, but not in '290c5b54012b7'
> 
> The commit where the two collided is:
> 
> Fixes: cc7544101eec ("Merge tag 'kvmarm-6.2' of https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD")

Yeah, this Fixes is garbage, apologies.

I imagine Paolo is going to squash some things into the kvmarm merge, so
the Fixes tag ought to be dropped altogether.

> > Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> > ---
> 
> Fixes nit aside,
> 
> Reviewed-by: Sean Christopherson <seanjc@google.com>

Thanks!

--
Best,
Oliver
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 4/4] KVM: selftests: Allocate ucall pool from MEM_REGION_DATA
@ 2022-12-07 23:56       ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 23:56 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Andrew Jones, Peter Gonda, linux-arm-kernel, kvmarm,
	kvm, kvmarm, Ricardo Koller, linux-kselftest, linux-kernel

On Wed, Dec 07, 2022 at 11:44:50PM +0000, Sean Christopherson wrote:
> On Wed, Dec 07, 2022, Oliver Upton wrote:
> > MEM_REGION_TEST_DATA is meant to hold data explicitly used by a
> > selftest, not implicit allocations due to the selftests infrastructure.
> > Allocate the ucall pool from MEM_REGION_DATA much like the rest of the
> > selftests library allocations.
> > 
> > Fixes: 426729b2cf2e ("KVM: selftests: Add ucall pool based implementation")
> 
> Not that it really matters because no one will backport this verbatim, but this
> is the wrong commit to blame.  As of commit 426729b2cf2e, MEM_REGION_DATA does not
> exist.  And similarly, the common ucall code didn't exist when Ricardo's series
> introduced MEM_REGION_DATA.
> 
>   $ git show 426729b2cf2e:tools/testing/selftests/kvm/include/kvm_util_base.h | grep MEM_REGION_DATA
>   $ git show 290c5b54012b7:tools/testing/selftests/kvm/lib/ucall_common.c
>   fatal: path 'tools/testing/selftests/kvm/lib/ucall_common.c' exists on disk, but not in '290c5b54012b7'
> 
> The commit where the two collided is:
> 
> Fixes: cc7544101eec ("Merge tag 'kvmarm-6.2' of https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD")

Yeah, this Fixes is garbage, apologies.

I imagine Paolo is going to squash some things into the kvmarm merge, so
the Fixes tag ought to be dropped altogether.

> > Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> > ---
> 
> Fixes nit aside,
> 
> Reviewed-by: Sean Christopherson <seanjc@google.com>

Thanks!

--
Best,
Oliver

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 4/4] KVM: selftests: Allocate ucall pool from MEM_REGION_DATA
@ 2022-12-07 23:56       ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-07 23:56 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Andrew Jones, Peter Gonda, linux-arm-kernel, kvmarm,
	kvm, kvmarm, Ricardo Koller, linux-kselftest, linux-kernel

On Wed, Dec 07, 2022 at 11:44:50PM +0000, Sean Christopherson wrote:
> On Wed, Dec 07, 2022, Oliver Upton wrote:
> > MEM_REGION_TEST_DATA is meant to hold data explicitly used by a
> > selftest, not implicit allocations due to the selftests infrastructure.
> > Allocate the ucall pool from MEM_REGION_DATA much like the rest of the
> > selftests library allocations.
> > 
> > Fixes: 426729b2cf2e ("KVM: selftests: Add ucall pool based implementation")
> 
> Not that it really matters because no one will backport this verbatim, but this
> is the wrong commit to blame.  As of commit 426729b2cf2e, MEM_REGION_DATA does not
> exist.  And similarly, the common ucall code didn't exist when Ricardo's series
> introduced MEM_REGION_DATA.
> 
>   $ git show 426729b2cf2e:tools/testing/selftests/kvm/include/kvm_util_base.h | grep MEM_REGION_DATA
>   $ git show 290c5b54012b7:tools/testing/selftests/kvm/lib/ucall_common.c
>   fatal: path 'tools/testing/selftests/kvm/lib/ucall_common.c' exists on disk, but not in '290c5b54012b7'
> 
> The commit where the two collided is:
> 
> Fixes: cc7544101eec ("Merge tag 'kvmarm-6.2' of https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD")

Yeah, this Fixes is garbage, apologies.

I imagine Paolo is going to squash some things into the kvmarm merge, so
the Fixes tag ought to be dropped altogether.

> > Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> > ---
> 
> Fixes nit aside,
> 
> Reviewed-by: Sean Christopherson <seanjc@google.com>

Thanks!

--
Best,
Oliver

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
  2022-12-07 21:48   ` Oliver Upton
  (?)
@ 2022-12-07 23:57     ` Sean Christopherson
  -1 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-07 23:57 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Paolo Bonzini, Shuah Khan, linux-arm-kernel, kvmarm, kvm, kvmarm,
	Ricardo Koller, linux-kselftest, linux-kernel

On Wed, Dec 07, 2022, Oliver Upton wrote:
> The new ucall infrastructure needs to update a couple of guest globals
> to pass through the ucall MMIO addr and pool of ucall structs. A
> precondition of this actually working is to have the program image
> already loaded into guest memory.

Ouch.  Might be worth explicitly stating what goes wrong.  Even though it's super
obvious in hindsight, it still took me a few seconds to understand what
precondition you were referring to, e.g. I was trying to figure out how selecting
the MMIO address depended on the guest code being loaded...

> 
> Call ucall_init() after kvm_vm_elf_load(). Continue to park the ucall
> MMIO addr after MEM_REGION_TEST_DATA.
> 
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---
>  tools/testing/selftests/kvm/aarch64/page_fault_test.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> index 92d3a91153b6..95d22cfb7b41 100644
> --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
>  				    data_size / guest_page_size,
>  				    p->test_desc->data_memslot_flags);
>  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> +}
> +
> +static void setup_ucall(struct kvm_vm *vm)
> +{
> +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
>  
> -	ucall_init(vm, data_gpa + data_size);
> +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);

Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?  The reason
I ask is because if so, then we can do the temporarily heinous, but hopefully forward
looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().

E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
add a dedicated region+memslot for the ucall MMIO page.

---
 .../selftests/kvm/aarch64/page_fault_test.c   | 10 +------
 .../selftests/kvm/include/kvm_util_base.h     |  1 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 28 +++++++++++--------
 3 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 95d22cfb7b41..68c47db2eb2e 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -611,13 +611,6 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
 	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
 }
 
-static void setup_ucall(struct kvm_vm *vm)
-{
-	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
-
-	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
-}
-
 static void setup_default_handlers(struct test_desc *test)
 {
 	if (!test->mmio_handler)
@@ -706,8 +699,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	vm = ____vm_create(mode);
 	setup_memslots(vm, p);
-	kvm_vm_elf_load(vm, program_invocation_name);
-	setup_ucall(vm);
+	vm_init_guest_code_and_data(vm);
 	vcpu = vm_vcpu_add(vm, 0, guest_code);
 
 	setup_gva_maps(vm);
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index fbc2a79369b8..175b5ca0c061 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -682,6 +682,7 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 			      vm_paddr_t paddr_min, uint32_t memslot);
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
+void vm_init_guest_code_and_data(struct kvm_vm *vm);
 
 /*
  * ____vm_create() does KVM_CREATE_VM and little else.  __vm_create() also
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index c88c3ace16d2..0eab6b11a6e9 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -329,12 +329,27 @@ static uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
 	return vm_adjust_num_guest_pages(mode, nr_pages);
 }
 
+void vm_init_guest_code_and_data(struct kvm_vm *vm)
+{
+	struct userspace_mem_region *slot;
+
+	kvm_vm_elf_load(vm, program_invocation_name);
+
+	/*
+	 * TODO: Add a dedicated memory region to carve out MMIO.  KVM treats
+	 * writes to read-only memslots as MMIO, and creating a read-only
+	 * memslot for the MMIO region would prevent silently clobbering the
+	 * MMIO region.
+	 */
+	slot = vm_get_mem_region(vm, MEM_REGION_DATA);
+	ucall_init(vm, slot->region.guest_phys_addr + slot->region.memory_size);
+}
+
 struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 			   uint64_t nr_extra_pages)
 {
 	uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus,
 						 nr_extra_pages);
-	struct userspace_mem_region *slot0;
 	struct kvm_vm *vm;
 	int i;
 
@@ -347,16 +362,7 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 	for (i = 0; i < NR_MEM_REGIONS; i++)
 		vm->memslots[i] = 0;
 
-	kvm_vm_elf_load(vm, program_invocation_name);
-
-	/*
-	 * TODO: Add proper defines to protect the library's memslots, and then
-	 * carve out memslot1 for the ucall MMIO address.  KVM treats writes to
-	 * read-only memslots as MMIO, and creating a read-only memslot for the
-	 * MMIO region would prevent silently clobbering the MMIO region.
-	 */
-	slot0 = memslot2region(vm, 0);
-	ucall_init(vm, slot0->region.guest_phys_addr + slot0->region.memory_size);
+	vm_init_guest_code_and_data(vm);
 
 	kvm_arch_vm_post_create(vm);
 

base-commit: 1cf369f929d607760bf721f3eb9e965ed9c703e3
-- 

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-07 23:57     ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-07 23:57 UTC (permalink / raw)
  To: Oliver Upton
  Cc: kvm, Marc Zyngier, linux-kernel, linux-kselftest, kvmarm,
	Paolo Bonzini, Shuah Khan, kvmarm, linux-arm-kernel

On Wed, Dec 07, 2022, Oliver Upton wrote:
> The new ucall infrastructure needs to update a couple of guest globals
> to pass through the ucall MMIO addr and pool of ucall structs. A
> precondition of this actually working is to have the program image
> already loaded into guest memory.

Ouch.  Might be worth explicitly stating what goes wrong.  Even though it's super
obvious in hindsight, it still took me a few seconds to understand what
precondition you were referring to, e.g. I was trying to figure out how selecting
the MMIO address depended on the guest code being loaded...

> 
> Call ucall_init() after kvm_vm_elf_load(). Continue to park the ucall
> MMIO addr after MEM_REGION_TEST_DATA.
> 
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---
>  tools/testing/selftests/kvm/aarch64/page_fault_test.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> index 92d3a91153b6..95d22cfb7b41 100644
> --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
>  				    data_size / guest_page_size,
>  				    p->test_desc->data_memslot_flags);
>  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> +}
> +
> +static void setup_ucall(struct kvm_vm *vm)
> +{
> +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
>  
> -	ucall_init(vm, data_gpa + data_size);
> +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);

Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?  The reason
I ask is because if so, then we can do the temporarily heinous, but hopefully forward
looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().

E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
add a dedicated region+memslot for the ucall MMIO page.

---
 .../selftests/kvm/aarch64/page_fault_test.c   | 10 +------
 .../selftests/kvm/include/kvm_util_base.h     |  1 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 28 +++++++++++--------
 3 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 95d22cfb7b41..68c47db2eb2e 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -611,13 +611,6 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
 	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
 }
 
-static void setup_ucall(struct kvm_vm *vm)
-{
-	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
-
-	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
-}
-
 static void setup_default_handlers(struct test_desc *test)
 {
 	if (!test->mmio_handler)
@@ -706,8 +699,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	vm = ____vm_create(mode);
 	setup_memslots(vm, p);
-	kvm_vm_elf_load(vm, program_invocation_name);
-	setup_ucall(vm);
+	vm_init_guest_code_and_data(vm);
 	vcpu = vm_vcpu_add(vm, 0, guest_code);
 
 	setup_gva_maps(vm);
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index fbc2a79369b8..175b5ca0c061 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -682,6 +682,7 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 			      vm_paddr_t paddr_min, uint32_t memslot);
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
+void vm_init_guest_code_and_data(struct kvm_vm *vm);
 
 /*
  * ____vm_create() does KVM_CREATE_VM and little else.  __vm_create() also
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index c88c3ace16d2..0eab6b11a6e9 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -329,12 +329,27 @@ static uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
 	return vm_adjust_num_guest_pages(mode, nr_pages);
 }
 
+void vm_init_guest_code_and_data(struct kvm_vm *vm)
+{
+	struct userspace_mem_region *slot;
+
+	kvm_vm_elf_load(vm, program_invocation_name);
+
+	/*
+	 * TODO: Add a dedicated memory region to carve out MMIO.  KVM treats
+	 * writes to read-only memslots as MMIO, and creating a read-only
+	 * memslot for the MMIO region would prevent silently clobbering the
+	 * MMIO region.
+	 */
+	slot = vm_get_mem_region(vm, MEM_REGION_DATA);
+	ucall_init(vm, slot->region.guest_phys_addr + slot->region.memory_size);
+}
+
 struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 			   uint64_t nr_extra_pages)
 {
 	uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus,
 						 nr_extra_pages);
-	struct userspace_mem_region *slot0;
 	struct kvm_vm *vm;
 	int i;
 
@@ -347,16 +362,7 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 	for (i = 0; i < NR_MEM_REGIONS; i++)
 		vm->memslots[i] = 0;
 
-	kvm_vm_elf_load(vm, program_invocation_name);
-
-	/*
-	 * TODO: Add proper defines to protect the library's memslots, and then
-	 * carve out memslot1 for the ucall MMIO address.  KVM treats writes to
-	 * read-only memslots as MMIO, and creating a read-only memslot for the
-	 * MMIO region would prevent silently clobbering the MMIO region.
-	 */
-	slot0 = memslot2region(vm, 0);
-	ucall_init(vm, slot0->region.guest_phys_addr + slot0->region.memory_size);
+	vm_init_guest_code_and_data(vm);
 
 	kvm_arch_vm_post_create(vm);
 

base-commit: 1cf369f929d607760bf721f3eb9e965ed9c703e3
-- 
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-07 23:57     ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-07 23:57 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Paolo Bonzini, Shuah Khan, linux-arm-kernel, kvmarm, kvm, kvmarm,
	Ricardo Koller, linux-kselftest, linux-kernel

On Wed, Dec 07, 2022, Oliver Upton wrote:
> The new ucall infrastructure needs to update a couple of guest globals
> to pass through the ucall MMIO addr and pool of ucall structs. A
> precondition of this actually working is to have the program image
> already loaded into guest memory.

Ouch.  Might be worth explicitly stating what goes wrong.  Even though it's super
obvious in hindsight, it still took me a few seconds to understand what
precondition you were referring to, e.g. I was trying to figure out how selecting
the MMIO address depended on the guest code being loaded...

> 
> Call ucall_init() after kvm_vm_elf_load(). Continue to park the ucall
> MMIO addr after MEM_REGION_TEST_DATA.
> 
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---
>  tools/testing/selftests/kvm/aarch64/page_fault_test.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> index 92d3a91153b6..95d22cfb7b41 100644
> --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
>  				    data_size / guest_page_size,
>  				    p->test_desc->data_memslot_flags);
>  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> +}
> +
> +static void setup_ucall(struct kvm_vm *vm)
> +{
> +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
>  
> -	ucall_init(vm, data_gpa + data_size);
> +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);

Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?  The reason
I ask is because if so, then we can do the temporarily heinous, but hopefully forward
looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().

E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
add a dedicated region+memslot for the ucall MMIO page.

---
 .../selftests/kvm/aarch64/page_fault_test.c   | 10 +------
 .../selftests/kvm/include/kvm_util_base.h     |  1 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 28 +++++++++++--------
 3 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 95d22cfb7b41..68c47db2eb2e 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -611,13 +611,6 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
 	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
 }
 
-static void setup_ucall(struct kvm_vm *vm)
-{
-	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
-
-	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
-}
-
 static void setup_default_handlers(struct test_desc *test)
 {
 	if (!test->mmio_handler)
@@ -706,8 +699,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	vm = ____vm_create(mode);
 	setup_memslots(vm, p);
-	kvm_vm_elf_load(vm, program_invocation_name);
-	setup_ucall(vm);
+	vm_init_guest_code_and_data(vm);
 	vcpu = vm_vcpu_add(vm, 0, guest_code);
 
 	setup_gva_maps(vm);
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index fbc2a79369b8..175b5ca0c061 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -682,6 +682,7 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 			      vm_paddr_t paddr_min, uint32_t memslot);
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
+void vm_init_guest_code_and_data(struct kvm_vm *vm);
 
 /*
  * ____vm_create() does KVM_CREATE_VM and little else.  __vm_create() also
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index c88c3ace16d2..0eab6b11a6e9 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -329,12 +329,27 @@ static uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
 	return vm_adjust_num_guest_pages(mode, nr_pages);
 }
 
+void vm_init_guest_code_and_data(struct kvm_vm *vm)
+{
+	struct userspace_mem_region *slot;
+
+	kvm_vm_elf_load(vm, program_invocation_name);
+
+	/*
+	 * TODO: Add a dedicated memory region to carve out MMIO.  KVM treats
+	 * writes to read-only memslots as MMIO, and creating a read-only
+	 * memslot for the MMIO region would prevent silently clobbering the
+	 * MMIO region.
+	 */
+	slot = vm_get_mem_region(vm, MEM_REGION_DATA);
+	ucall_init(vm, slot->region.guest_phys_addr + slot->region.memory_size);
+}
+
 struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 			   uint64_t nr_extra_pages)
 {
 	uint64_t nr_pages = vm_nr_pages_required(mode, nr_runnable_vcpus,
 						 nr_extra_pages);
-	struct userspace_mem_region *slot0;
 	struct kvm_vm *vm;
 	int i;
 
@@ -347,16 +362,7 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 	for (i = 0; i < NR_MEM_REGIONS; i++)
 		vm->memslots[i] = 0;
 
-	kvm_vm_elf_load(vm, program_invocation_name);
-
-	/*
-	 * TODO: Add proper defines to protect the library's memslots, and then
-	 * carve out memslot1 for the ucall MMIO address.  KVM treats writes to
-	 * read-only memslots as MMIO, and creating a read-only memslot for the
-	 * MMIO region would prevent silently clobbering the MMIO region.
-	 */
-	slot0 = memslot2region(vm, 0);
-	ucall_init(vm, slot0->region.guest_phys_addr + slot0->region.memory_size);
+	vm_init_guest_code_and_data(vm);
 
 	kvm_arch_vm_post_create(vm);
 

base-commit: 1cf369f929d607760bf721f3eb9e965ed9c703e3
-- 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
  2022-12-07 23:57     ` Sean Christopherson
  (?)
@ 2022-12-08  0:17       ` Oliver Upton
  -1 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-08  0:17 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Paolo Bonzini, Shuah Khan, linux-arm-kernel, kvmarm, kvm, kvmarm,
	Ricardo Koller, linux-kselftest, linux-kernel

On Wed, Dec 07, 2022 at 11:57:27PM +0000, Sean Christopherson wrote:
> On Wed, Dec 07, 2022, Oliver Upton wrote:
> > The new ucall infrastructure needs to update a couple of guest globals
> > to pass through the ucall MMIO addr and pool of ucall structs. A
> > precondition of this actually working is to have the program image
> > already loaded into guest memory.
> 
> Ouch.  Might be worth explicitly stating what goes wrong.  Even though it's super
> obvious in hindsight, it still took me a few seconds to understand what
> precondition you were referring to, e.g. I was trying to figure out how selecting
> the MMIO address depended on the guest code being loaded...
> 
> > 
> > Call ucall_init() after kvm_vm_elf_load(). Continue to park the ucall
> > MMIO addr after MEM_REGION_TEST_DATA.
> > 
> > Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> > ---
> >  tools/testing/selftests/kvm/aarch64/page_fault_test.c | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > index 92d3a91153b6..95d22cfb7b41 100644
> > --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> >  				    data_size / guest_page_size,
> >  				    p->test_desc->data_memslot_flags);
> >  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> > +}
> > +
> > +static void setup_ucall(struct kvm_vm *vm)
> > +{
> > +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
> >  
> > -	ucall_init(vm, data_gpa + data_size);
> > +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
> 
> Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?

Sure, but that's only guaranteed in the PA space.

> The reason
> I ask is because if so, then we can do the temporarily heinous, but hopefully forward
> looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().
> 
> E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
> add a dedicated region+memslot for the ucall MMIO page.

Even still, that's just a kludge to make ucalls work. We have other
MMIO devices (GIC distributor, for example) that work by chance since
nothing conflicts with the constant GPAs we've selected in the tests.

I'd rather we go down the route of having an address allocator for the
for both the VA and PA spaces to provide carveouts at runtime. There's
another issue with the new ucall implementation where identity mapping
could stomp on a program segment that I'm fighting with right now which
only further highlights the problems with our (mis)management of address
spaces in selftests.

--
Thanks,
Oliver

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-08  0:17       ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-08  0:17 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Marc Zyngier, linux-kernel, linux-kselftest, kvmarm,
	Paolo Bonzini, Shuah Khan, kvmarm, linux-arm-kernel

On Wed, Dec 07, 2022 at 11:57:27PM +0000, Sean Christopherson wrote:
> On Wed, Dec 07, 2022, Oliver Upton wrote:
> > The new ucall infrastructure needs to update a couple of guest globals
> > to pass through the ucall MMIO addr and pool of ucall structs. A
> > precondition of this actually working is to have the program image
> > already loaded into guest memory.
> 
> Ouch.  Might be worth explicitly stating what goes wrong.  Even though it's super
> obvious in hindsight, it still took me a few seconds to understand what
> precondition you were referring to, e.g. I was trying to figure out how selecting
> the MMIO address depended on the guest code being loaded...
> 
> > 
> > Call ucall_init() after kvm_vm_elf_load(). Continue to park the ucall
> > MMIO addr after MEM_REGION_TEST_DATA.
> > 
> > Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> > ---
> >  tools/testing/selftests/kvm/aarch64/page_fault_test.c | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > index 92d3a91153b6..95d22cfb7b41 100644
> > --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> >  				    data_size / guest_page_size,
> >  				    p->test_desc->data_memslot_flags);
> >  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> > +}
> > +
> > +static void setup_ucall(struct kvm_vm *vm)
> > +{
> > +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
> >  
> > -	ucall_init(vm, data_gpa + data_size);
> > +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
> 
> Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?

Sure, but that's only guaranteed in the PA space.

> The reason
> I ask is because if so, then we can do the temporarily heinous, but hopefully forward
> looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().
> 
> E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
> add a dedicated region+memslot for the ucall MMIO page.

Even still, that's just a kludge to make ucalls work. We have other
MMIO devices (GIC distributor, for example) that work by chance since
nothing conflicts with the constant GPAs we've selected in the tests.

I'd rather we go down the route of having an address allocator for the
for both the VA and PA spaces to provide carveouts at runtime. There's
another issue with the new ucall implementation where identity mapping
could stomp on a program segment that I'm fighting with right now which
only further highlights the problems with our (mis)management of address
spaces in selftests.

--
Thanks,
Oliver
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-08  0:17       ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-08  0:17 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Paolo Bonzini, Shuah Khan, linux-arm-kernel, kvmarm, kvm, kvmarm,
	Ricardo Koller, linux-kselftest, linux-kernel

On Wed, Dec 07, 2022 at 11:57:27PM +0000, Sean Christopherson wrote:
> On Wed, Dec 07, 2022, Oliver Upton wrote:
> > The new ucall infrastructure needs to update a couple of guest globals
> > to pass through the ucall MMIO addr and pool of ucall structs. A
> > precondition of this actually working is to have the program image
> > already loaded into guest memory.
> 
> Ouch.  Might be worth explicitly stating what goes wrong.  Even though it's super
> obvious in hindsight, it still took me a few seconds to understand what
> precondition you were referring to, e.g. I was trying to figure out how selecting
> the MMIO address depended on the guest code being loaded...
> 
> > 
> > Call ucall_init() after kvm_vm_elf_load(). Continue to park the ucall
> > MMIO addr after MEM_REGION_TEST_DATA.
> > 
> > Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> > ---
> >  tools/testing/selftests/kvm/aarch64/page_fault_test.c | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > index 92d3a91153b6..95d22cfb7b41 100644
> > --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> >  				    data_size / guest_page_size,
> >  				    p->test_desc->data_memslot_flags);
> >  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> > +}
> > +
> > +static void setup_ucall(struct kvm_vm *vm)
> > +{
> > +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
> >  
> > -	ucall_init(vm, data_gpa + data_size);
> > +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
> 
> Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?

Sure, but that's only guaranteed in the PA space.

> The reason
> I ask is because if so, then we can do the temporarily heinous, but hopefully forward
> looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().
> 
> E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
> add a dedicated region+memslot for the ucall MMIO page.

Even still, that's just a kludge to make ucalls work. We have other
MMIO devices (GIC distributor, for example) that work by chance since
nothing conflicts with the constant GPAs we've selected in the tests.

I'd rather we go down the route of having an address allocator for the
for both the VA and PA spaces to provide carveouts at runtime. There's
another issue with the new ucall implementation where identity mapping
could stomp on a program segment that I'm fighting with right now which
only further highlights the problems with our (mis)management of address
spaces in selftests.

--
Thanks,
Oliver

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
  2022-12-07 21:48   ` Oliver Upton
  (?)
@ 2022-12-08  0:18     ` Sean Christopherson
  -1 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-08  0:18 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Suzuki K Poulose, linux-arm-kernel, kvmarm, kvm,
	kvmarm, Ricardo Koller, linux-kselftest, linux-kernel

On Wed, Dec 07, 2022, Oliver Upton wrote:
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> index 316de70db91d..5972a23b2765 100644
> --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> @@ -541,3 +541,13 @@ void kvm_selftest_arch_init(void)
>  	 */
>  	guest_modes_append_default();
>  }
> +
> +void vm_vaddr_populate_bitmap(struct kvm_vm *vm)

Add "arch" so that it's obvious this can be overidden?  The "__weak" conveys that
for the implementation, but not for the call site.  E.g. vm_arch_vaddr_populate_bitmap().

Actually, IIUC, the issue is that the high half isn't mapped (probably the wrong
terminology).  I.e. the calculation for the low half stays the same, and the high
half just goes away.

> +{
> +	/*
> +	 * arm64 selftests use only TTBR0_EL1, meaning that the valid VA space
> +	 * is [0, 2^(64 - TCR_EL1.T0SZ)).
> +	 */
> +	sparsebit_set_num(vm->vpages_valid, 0,
> +			  (1ULL << vm->va_bits) >> vm->page_shift);
> +}
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index e9607eb089be..c88c3ace16d2 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -186,6 +186,15 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
>  _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
>  	       "Missing new mode params?");
>  
> +__weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
> +{
> +	sparsebit_set_num(vm->vpages_valid,
> +		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
> +	sparsebit_set_num(vm->vpages_valid,
> +		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
> +		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);

Any objection to fixing up the formatting?  Actually, we can do more than just
fix the indentation, e.g. the number of bits is identical, and documenting that
this does a high/low split would be helpful.

Together, what about?  The #ifdef is a bit gross, especially around "hi_start",
but it's less duplicate code.  And IMO, having things bundled in the same place
makes it a lot easier for newbies (to arm64 or kernel coding in general) to
understand what's going on and why arm64 is different.

---
 tools/testing/selftests/kvm/lib/kvm_util.c | 23 +++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index e9607eb089be..d6f2c17e3d40 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -186,6 +186,23 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
 	       "Missing new mode params?");
 
+static void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+{
+	/*
+	 * All architectures supports splitting the virtual address space into
+	 * a high and a low half.  Populate both halves, except for arm64 which
+	 * currently uses only TTBR0_EL1 (arbitrary selftests "logic"), i.e.
+	 * only has a valid low half.
+	 */
+	sparsebit_num_t nr_va_bits = (1ULL << (vm->va_bits - 1)) >> vm->page_shift;
+#ifndef __aarch64__
+	sparsebit_num_t hi_start = (~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift
+
+	sparsebit_set_num(vm->vpages_valid, hi_start, nr_bits);
+#endif
+	sparsebit_set_num(vm->vpages_valid, 0, nr_va_bits);
+}
+
 struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 {
 	struct kvm_vm *vm;
@@ -274,11 +291,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 
 	/* Limit to VA-bit canonical virtual addresses. */
 	vm->vpages_valid = sparsebit_alloc();
-	sparsebit_set_num(vm->vpages_valid,
-		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
-	sparsebit_set_num(vm->vpages_valid,
-		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
-		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+	vm_vaddr_populate_bitmap(vm);
 
 	/* Limit physical addresses to PA-bits. */
 	vm->max_gfn = vm_compute_max_gfn(vm);

base-commit: 35aecc3289eebf193fd70a067ea448ae2f0bb9b9
-- 


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
@ 2022-12-08  0:18     ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-08  0:18 UTC (permalink / raw)
  To: Oliver Upton
  Cc: kvm, Marc Zyngier, linux-kernel, linux-kselftest, kvmarm,
	Paolo Bonzini, Shuah Khan, kvmarm, linux-arm-kernel

On Wed, Dec 07, 2022, Oliver Upton wrote:
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> index 316de70db91d..5972a23b2765 100644
> --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> @@ -541,3 +541,13 @@ void kvm_selftest_arch_init(void)
>  	 */
>  	guest_modes_append_default();
>  }
> +
> +void vm_vaddr_populate_bitmap(struct kvm_vm *vm)

Add "arch" so that it's obvious this can be overidden?  The "__weak" conveys that
for the implementation, but not for the call site.  E.g. vm_arch_vaddr_populate_bitmap().

Actually, IIUC, the issue is that the high half isn't mapped (probably the wrong
terminology).  I.e. the calculation for the low half stays the same, and the high
half just goes away.

> +{
> +	/*
> +	 * arm64 selftests use only TTBR0_EL1, meaning that the valid VA space
> +	 * is [0, 2^(64 - TCR_EL1.T0SZ)).
> +	 */
> +	sparsebit_set_num(vm->vpages_valid, 0,
> +			  (1ULL << vm->va_bits) >> vm->page_shift);
> +}
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index e9607eb089be..c88c3ace16d2 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -186,6 +186,15 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
>  _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
>  	       "Missing new mode params?");
>  
> +__weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
> +{
> +	sparsebit_set_num(vm->vpages_valid,
> +		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
> +	sparsebit_set_num(vm->vpages_valid,
> +		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
> +		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);

Any objection to fixing up the formatting?  Actually, we can do more than just
fix the indentation, e.g. the number of bits is identical, and documenting that
this does a high/low split would be helpful.

Together, what about?  The #ifdef is a bit gross, especially around "hi_start",
but it's less duplicate code.  And IMO, having things bundled in the same place
makes it a lot easier for newbies (to arm64 or kernel coding in general) to
understand what's going on and why arm64 is different.

---
 tools/testing/selftests/kvm/lib/kvm_util.c | 23 +++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index e9607eb089be..d6f2c17e3d40 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -186,6 +186,23 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
 	       "Missing new mode params?");
 
+static void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+{
+	/*
+	 * All architectures supports splitting the virtual address space into
+	 * a high and a low half.  Populate both halves, except for arm64 which
+	 * currently uses only TTBR0_EL1 (arbitrary selftests "logic"), i.e.
+	 * only has a valid low half.
+	 */
+	sparsebit_num_t nr_va_bits = (1ULL << (vm->va_bits - 1)) >> vm->page_shift;
+#ifndef __aarch64__
+	sparsebit_num_t hi_start = (~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift
+
+	sparsebit_set_num(vm->vpages_valid, hi_start, nr_bits);
+#endif
+	sparsebit_set_num(vm->vpages_valid, 0, nr_va_bits);
+}
+
 struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 {
 	struct kvm_vm *vm;
@@ -274,11 +291,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 
 	/* Limit to VA-bit canonical virtual addresses. */
 	vm->vpages_valid = sparsebit_alloc();
-	sparsebit_set_num(vm->vpages_valid,
-		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
-	sparsebit_set_num(vm->vpages_valid,
-		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
-		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+	vm_vaddr_populate_bitmap(vm);
 
 	/* Limit physical addresses to PA-bits. */
 	vm->max_gfn = vm_compute_max_gfn(vm);

base-commit: 35aecc3289eebf193fd70a067ea448ae2f0bb9b9
-- 

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
@ 2022-12-08  0:18     ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-08  0:18 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Suzuki K Poulose, linux-arm-kernel, kvmarm, kvm,
	kvmarm, Ricardo Koller, linux-kselftest, linux-kernel

On Wed, Dec 07, 2022, Oliver Upton wrote:
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> index 316de70db91d..5972a23b2765 100644
> --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> @@ -541,3 +541,13 @@ void kvm_selftest_arch_init(void)
>  	 */
>  	guest_modes_append_default();
>  }
> +
> +void vm_vaddr_populate_bitmap(struct kvm_vm *vm)

Add "arch" so that it's obvious this can be overidden?  The "__weak" conveys that
for the implementation, but not for the call site.  E.g. vm_arch_vaddr_populate_bitmap().

Actually, IIUC, the issue is that the high half isn't mapped (probably the wrong
terminology).  I.e. the calculation for the low half stays the same, and the high
half just goes away.

> +{
> +	/*
> +	 * arm64 selftests use only TTBR0_EL1, meaning that the valid VA space
> +	 * is [0, 2^(64 - TCR_EL1.T0SZ)).
> +	 */
> +	sparsebit_set_num(vm->vpages_valid, 0,
> +			  (1ULL << vm->va_bits) >> vm->page_shift);
> +}
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index e9607eb089be..c88c3ace16d2 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -186,6 +186,15 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
>  _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
>  	       "Missing new mode params?");
>  
> +__weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
> +{
> +	sparsebit_set_num(vm->vpages_valid,
> +		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
> +	sparsebit_set_num(vm->vpages_valid,
> +		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
> +		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);

Any objection to fixing up the formatting?  Actually, we can do more than just
fix the indentation, e.g. the number of bits is identical, and documenting that
this does a high/low split would be helpful.

Together, what about?  The #ifdef is a bit gross, especially around "hi_start",
but it's less duplicate code.  And IMO, having things bundled in the same place
makes it a lot easier for newbies (to arm64 or kernel coding in general) to
understand what's going on and why arm64 is different.

---
 tools/testing/selftests/kvm/lib/kvm_util.c | 23 +++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index e9607eb089be..d6f2c17e3d40 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -186,6 +186,23 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
 	       "Missing new mode params?");
 
+static void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+{
+	/*
+	 * All architectures supports splitting the virtual address space into
+	 * a high and a low half.  Populate both halves, except for arm64 which
+	 * currently uses only TTBR0_EL1 (arbitrary selftests "logic"), i.e.
+	 * only has a valid low half.
+	 */
+	sparsebit_num_t nr_va_bits = (1ULL << (vm->va_bits - 1)) >> vm->page_shift;
+#ifndef __aarch64__
+	sparsebit_num_t hi_start = (~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift
+
+	sparsebit_set_num(vm->vpages_valid, hi_start, nr_bits);
+#endif
+	sparsebit_set_num(vm->vpages_valid, 0, nr_va_bits);
+}
+
 struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 {
 	struct kvm_vm *vm;
@@ -274,11 +291,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode)
 
 	/* Limit to VA-bit canonical virtual addresses. */
 	vm->vpages_valid = sparsebit_alloc();
-	sparsebit_set_num(vm->vpages_valid,
-		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
-	sparsebit_set_num(vm->vpages_valid,
-		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
-		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+	vm_vaddr_populate_bitmap(vm);
 
 	/* Limit physical addresses to PA-bits. */
 	vm->max_gfn = vm_compute_max_gfn(vm);

base-commit: 35aecc3289eebf193fd70a067ea448ae2f0bb9b9
-- 


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
  2022-12-08  0:17       ` Oliver Upton
  (?)
@ 2022-12-08  0:24         ` Sean Christopherson
  -1 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-08  0:24 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Paolo Bonzini, Shuah Khan, linux-arm-kernel, kvmarm, kvm, kvmarm,
	Ricardo Koller, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022, Oliver Upton wrote:
> On Wed, Dec 07, 2022 at 11:57:27PM +0000, Sean Christopherson wrote:
> > > diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > index 92d3a91153b6..95d22cfb7b41 100644
> > > --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> > >  				    data_size / guest_page_size,
> > >  				    p->test_desc->data_memslot_flags);
> > >  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> > > +}
> > > +
> > > +static void setup_ucall(struct kvm_vm *vm)
> > > +{
> > > +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
> > >  
> > > -	ucall_init(vm, data_gpa + data_size);
> > > +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
> > 
> > Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?
> 
> Sure, but that's only guaranteed in the PA space.
> 
> > The reason
> > I ask is because if so, then we can do the temporarily heinous, but hopefully forward
> > looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().
> > 
> > E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
> > add a dedicated region+memslot for the ucall MMIO page.
> 
> Even still, that's just a kludge to make ucalls work. We have other
> MMIO devices (GIC distributor, for example) that work by chance since
> nothing conflicts with the constant GPAs we've selected in the tests.
> 
> I'd rather we go down the route of having an address allocator for the
> for both the VA and PA spaces to provide carveouts at runtime.

Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
will yield very explicit asserts, which IMO is better than whatever might go wrong
with a carve out.

> There's another issue with the new ucall implementation where identity
> mapping could stomp on a program segment that I'm fighting with right now
> which only further highlights the problems with our (mis)management of
> address spaces in selftests.

Oooh, this crud:

 	virt_pg_map(vm, mmio_gpa, mmio_gpa);

Yeah, that needs to be fixed.  But again, that's a separate issue, e.g. selftests
can allocate a virtual address and map the read-only memslot.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-08  0:24         ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-08  0:24 UTC (permalink / raw)
  To: Oliver Upton
  Cc: kvm, Marc Zyngier, linux-kernel, linux-kselftest, kvmarm,
	Paolo Bonzini, Shuah Khan, kvmarm, linux-arm-kernel

On Thu, Dec 08, 2022, Oliver Upton wrote:
> On Wed, Dec 07, 2022 at 11:57:27PM +0000, Sean Christopherson wrote:
> > > diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > index 92d3a91153b6..95d22cfb7b41 100644
> > > --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> > >  				    data_size / guest_page_size,
> > >  				    p->test_desc->data_memslot_flags);
> > >  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> > > +}
> > > +
> > > +static void setup_ucall(struct kvm_vm *vm)
> > > +{
> > > +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
> > >  
> > > -	ucall_init(vm, data_gpa + data_size);
> > > +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
> > 
> > Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?
> 
> Sure, but that's only guaranteed in the PA space.
> 
> > The reason
> > I ask is because if so, then we can do the temporarily heinous, but hopefully forward
> > looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().
> > 
> > E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
> > add a dedicated region+memslot for the ucall MMIO page.
> 
> Even still, that's just a kludge to make ucalls work. We have other
> MMIO devices (GIC distributor, for example) that work by chance since
> nothing conflicts with the constant GPAs we've selected in the tests.
> 
> I'd rather we go down the route of having an address allocator for the
> for both the VA and PA spaces to provide carveouts at runtime.

Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
will yield very explicit asserts, which IMO is better than whatever might go wrong
with a carve out.

> There's another issue with the new ucall implementation where identity
> mapping could stomp on a program segment that I'm fighting with right now
> which only further highlights the problems with our (mis)management of
> address spaces in selftests.

Oooh, this crud:

 	virt_pg_map(vm, mmio_gpa, mmio_gpa);

Yeah, that needs to be fixed.  But again, that's a separate issue, e.g. selftests
can allocate a virtual address and map the read-only memslot.
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-08  0:24         ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-08  0:24 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Paolo Bonzini, Shuah Khan, linux-arm-kernel, kvmarm, kvm, kvmarm,
	Ricardo Koller, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022, Oliver Upton wrote:
> On Wed, Dec 07, 2022 at 11:57:27PM +0000, Sean Christopherson wrote:
> > > diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > index 92d3a91153b6..95d22cfb7b41 100644
> > > --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> > >  				    data_size / guest_page_size,
> > >  				    p->test_desc->data_memslot_flags);
> > >  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> > > +}
> > > +
> > > +static void setup_ucall(struct kvm_vm *vm)
> > > +{
> > > +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
> > >  
> > > -	ucall_init(vm, data_gpa + data_size);
> > > +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
> > 
> > Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?
> 
> Sure, but that's only guaranteed in the PA space.
> 
> > The reason
> > I ask is because if so, then we can do the temporarily heinous, but hopefully forward
> > looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().
> > 
> > E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
> > add a dedicated region+memslot for the ucall MMIO page.
> 
> Even still, that's just a kludge to make ucalls work. We have other
> MMIO devices (GIC distributor, for example) that work by chance since
> nothing conflicts with the constant GPAs we've selected in the tests.
> 
> I'd rather we go down the route of having an address allocator for the
> for both the VA and PA spaces to provide carveouts at runtime.

Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
will yield very explicit asserts, which IMO is better than whatever might go wrong
with a carve out.

> There's another issue with the new ucall implementation where identity
> mapping could stomp on a program segment that I'm fighting with right now
> which only further highlights the problems with our (mis)management of
> address spaces in selftests.

Oooh, this crud:

 	virt_pg_map(vm, mmio_gpa, mmio_gpa);

Yeah, that needs to be fixed.  But again, that's a separate issue, e.g. selftests
can allocate a virtual address and map the read-only memslot.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
  2022-12-08  0:18     ` Sean Christopherson
  (?)
@ 2022-12-08  0:27       ` Oliver Upton
  -1 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-08  0:27 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Suzuki K Poulose, linux-arm-kernel, kvmarm, kvm,
	kvmarm, Ricardo Koller, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022 at 12:18:07AM +0000, Sean Christopherson wrote:

[...]

> Together, what about?  The #ifdef is a bit gross, especially around "hi_start",
> but it's less duplicate code.  And IMO, having things bundled in the same place
> makes it a lot easier for newbies (to arm64 or kernel coding in general) to
> understand what's going on and why arm64 is different.

I'd rather we not go this route. We really shouldn't make any attempt to
de-dupe something that is inherently architecture specific.

For example:

> +	/*
> +	 * All architectures supports splitting the virtual address space into
> +	 * a high and a low half.  Populate both halves, except for arm64 which
> +	 * currently uses only TTBR0_EL1 (arbitrary selftests "logic"), i.e.
> +	 * only has a valid low half.
> +	 */
> +	sparsebit_num_t nr_va_bits = (1ULL << (vm->va_bits - 1)) >> vm->page_shift;

This is still wrong for arm64. When we say the VA space is 48 bits, we
really do mean that TTBR0 is able to address a full 48 bits. So this
truncates the MSB for the addressing mode.

With the code living in the arm64 side of the shop, I can also tailor
the comment to directly match the architecture to provide breadcrumbs
tying it back to the Arm ARM.

--
Thanks,
Oliver

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
@ 2022-12-08  0:27       ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-08  0:27 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Marc Zyngier, linux-kernel, linux-kselftest, kvmarm,
	Paolo Bonzini, Shuah Khan, kvmarm, linux-arm-kernel

On Thu, Dec 08, 2022 at 12:18:07AM +0000, Sean Christopherson wrote:

[...]

> Together, what about?  The #ifdef is a bit gross, especially around "hi_start",
> but it's less duplicate code.  And IMO, having things bundled in the same place
> makes it a lot easier for newbies (to arm64 or kernel coding in general) to
> understand what's going on and why arm64 is different.

I'd rather we not go this route. We really shouldn't make any attempt to
de-dupe something that is inherently architecture specific.

For example:

> +	/*
> +	 * All architectures supports splitting the virtual address space into
> +	 * a high and a low half.  Populate both halves, except for arm64 which
> +	 * currently uses only TTBR0_EL1 (arbitrary selftests "logic"), i.e.
> +	 * only has a valid low half.
> +	 */
> +	sparsebit_num_t nr_va_bits = (1ULL << (vm->va_bits - 1)) >> vm->page_shift;

This is still wrong for arm64. When we say the VA space is 48 bits, we
really do mean that TTBR0 is able to address a full 48 bits. So this
truncates the MSB for the addressing mode.

With the code living in the arm64 side of the shop, I can also tailor
the comment to directly match the architecture to provide breadcrumbs
tying it back to the Arm ARM.

--
Thanks,
Oliver
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
@ 2022-12-08  0:27       ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-08  0:27 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Suzuki K Poulose, linux-arm-kernel, kvmarm, kvm,
	kvmarm, Ricardo Koller, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022 at 12:18:07AM +0000, Sean Christopherson wrote:

[...]

> Together, what about?  The #ifdef is a bit gross, especially around "hi_start",
> but it's less duplicate code.  And IMO, having things bundled in the same place
> makes it a lot easier for newbies (to arm64 or kernel coding in general) to
> understand what's going on and why arm64 is different.

I'd rather we not go this route. We really shouldn't make any attempt to
de-dupe something that is inherently architecture specific.

For example:

> +	/*
> +	 * All architectures supports splitting the virtual address space into
> +	 * a high and a low half.  Populate both halves, except for arm64 which
> +	 * currently uses only TTBR0_EL1 (arbitrary selftests "logic"), i.e.
> +	 * only has a valid low half.
> +	 */
> +	sparsebit_num_t nr_va_bits = (1ULL << (vm->va_bits - 1)) >> vm->page_shift;

This is still wrong for arm64. When we say the VA space is 48 bits, we
really do mean that TTBR0 is able to address a full 48 bits. So this
truncates the MSB for the addressing mode.

With the code living in the arm64 side of the shop, I can also tailor
the comment to directly match the architecture to provide breadcrumbs
tying it back to the Arm ARM.

--
Thanks,
Oliver

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
  2022-12-08  0:24         ` Sean Christopherson
  (?)
@ 2022-12-08  0:37           ` Oliver Upton
  -1 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-08  0:37 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Paolo Bonzini, Shuah Khan, linux-arm-kernel, kvmarm, kvm, kvmarm,
	Ricardo Koller, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> On Thu, Dec 08, 2022, Oliver Upton wrote:
> > On Wed, Dec 07, 2022 at 11:57:27PM +0000, Sean Christopherson wrote:
> > > > diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > index 92d3a91153b6..95d22cfb7b41 100644
> > > > --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> > > >  				    data_size / guest_page_size,
> > > >  				    p->test_desc->data_memslot_flags);
> > > >  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> > > > +}
> > > > +
> > > > +static void setup_ucall(struct kvm_vm *vm)
> > > > +{
> > > > +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
> > > >  
> > > > -	ucall_init(vm, data_gpa + data_size);
> > > > +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
> > > 
> > > Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?
> > 
> > Sure, but that's only guaranteed in the PA space.
> > 
> > > The reason
> > > I ask is because if so, then we can do the temporarily heinous, but hopefully forward
> > > looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().
> > > 
> > > E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
> > > add a dedicated region+memslot for the ucall MMIO page.
> > 
> > Even still, that's just a kludge to make ucalls work. We have other
> > MMIO devices (GIC distributor, for example) that work by chance since
> > nothing conflicts with the constant GPAs we've selected in the tests.
> > 
> > I'd rather we go down the route of having an address allocator for the
> > for both the VA and PA spaces to provide carveouts at runtime.
> 
> Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> will yield very explicit asserts, which IMO is better than whatever might go wrong
> with a carve out.

Perhaps the use of the term 'carveout' wasn't right here.

What I'm suggesting is we cannot rely on KVM memslots alone to act as an
allocator for the PA space. KVM can provide devices to the guest that
aren't represented as memslots. If we're trying to fix PA allocations
anyway, why not make it generic enough to suit the needs of things
beyond ucalls?

--
Thanks,
Oliver

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-08  0:37           ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-08  0:37 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Marc Zyngier, linux-kernel, linux-kselftest, kvmarm,
	Paolo Bonzini, Shuah Khan, kvmarm, linux-arm-kernel

On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> On Thu, Dec 08, 2022, Oliver Upton wrote:
> > On Wed, Dec 07, 2022 at 11:57:27PM +0000, Sean Christopherson wrote:
> > > > diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > index 92d3a91153b6..95d22cfb7b41 100644
> > > > --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> > > >  				    data_size / guest_page_size,
> > > >  				    p->test_desc->data_memslot_flags);
> > > >  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> > > > +}
> > > > +
> > > > +static void setup_ucall(struct kvm_vm *vm)
> > > > +{
> > > > +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
> > > >  
> > > > -	ucall_init(vm, data_gpa + data_size);
> > > > +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
> > > 
> > > Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?
> > 
> > Sure, but that's only guaranteed in the PA space.
> > 
> > > The reason
> > > I ask is because if so, then we can do the temporarily heinous, but hopefully forward
> > > looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().
> > > 
> > > E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
> > > add a dedicated region+memslot for the ucall MMIO page.
> > 
> > Even still, that's just a kludge to make ucalls work. We have other
> > MMIO devices (GIC distributor, for example) that work by chance since
> > nothing conflicts with the constant GPAs we've selected in the tests.
> > 
> > I'd rather we go down the route of having an address allocator for the
> > for both the VA and PA spaces to provide carveouts at runtime.
> 
> Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> will yield very explicit asserts, which IMO is better than whatever might go wrong
> with a carve out.

Perhaps the use of the term 'carveout' wasn't right here.

What I'm suggesting is we cannot rely on KVM memslots alone to act as an
allocator for the PA space. KVM can provide devices to the guest that
aren't represented as memslots. If we're trying to fix PA allocations
anyway, why not make it generic enough to suit the needs of things
beyond ucalls?

--
Thanks,
Oliver
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-08  0:37           ` Oliver Upton
  0 siblings, 0 replies; 57+ messages in thread
From: Oliver Upton @ 2022-12-08  0:37 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Paolo Bonzini, Shuah Khan, linux-arm-kernel, kvmarm, kvm, kvmarm,
	Ricardo Koller, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> On Thu, Dec 08, 2022, Oliver Upton wrote:
> > On Wed, Dec 07, 2022 at 11:57:27PM +0000, Sean Christopherson wrote:
> > > > diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > index 92d3a91153b6..95d22cfb7b41 100644
> > > > --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> > > >  				    data_size / guest_page_size,
> > > >  				    p->test_desc->data_memslot_flags);
> > > >  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> > > > +}
> > > > +
> > > > +static void setup_ucall(struct kvm_vm *vm)
> > > > +{
> > > > +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
> > > >  
> > > > -	ucall_init(vm, data_gpa + data_size);
> > > > +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
> > > 
> > > Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?
> > 
> > Sure, but that's only guaranteed in the PA space.
> > 
> > > The reason
> > > I ask is because if so, then we can do the temporarily heinous, but hopefully forward
> > > looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().
> > > 
> > > E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
> > > add a dedicated region+memslot for the ucall MMIO page.
> > 
> > Even still, that's just a kludge to make ucalls work. We have other
> > MMIO devices (GIC distributor, for example) that work by chance since
> > nothing conflicts with the constant GPAs we've selected in the tests.
> > 
> > I'd rather we go down the route of having an address allocator for the
> > for both the VA and PA spaces to provide carveouts at runtime.
> 
> Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> will yield very explicit asserts, which IMO is better than whatever might go wrong
> with a carve out.

Perhaps the use of the term 'carveout' wasn't right here.

What I'm suggesting is we cannot rely on KVM memslots alone to act as an
allocator for the PA space. KVM can provide devices to the guest that
aren't represented as memslots. If we're trying to fix PA allocations
anyway, why not make it generic enough to suit the needs of things
beyond ucalls?

--
Thanks,
Oliver

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
  2022-12-08  0:27       ` Oliver Upton
  (?)
@ 2022-12-08  1:09         ` Sean Christopherson
  -1 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-08  1:09 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Suzuki K Poulose, linux-arm-kernel, kvmarm, kvm,
	kvmarm, Ricardo Koller, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022, Oliver Upton wrote:
> On Thu, Dec 08, 2022 at 12:18:07AM +0000, Sean Christopherson wrote:
> 
> [...]
> 
> > Together, what about?  The #ifdef is a bit gross, especially around "hi_start",
> > but it's less duplicate code.  And IMO, having things bundled in the same place
> > makes it a lot easier for newbies (to arm64 or kernel coding in general) to
> > understand what's going on and why arm64 is different.
> 
> I'd rather we not go this route. We really shouldn't make any attempt to
> de-dupe something that is inherently architecture specific.
> 
> For example:
> 
> > +	/*
> > +	 * All architectures supports splitting the virtual address space into
> > +	 * a high and a low half.  Populate both halves, except for arm64 which
> > +	 * currently uses only TTBR0_EL1 (arbitrary selftests "logic"), i.e.
> > +	 * only has a valid low half.
> > +	 */
> > +	sparsebit_num_t nr_va_bits = (1ULL << (vm->va_bits - 1)) >> vm->page_shift;
> 
> This is still wrong for arm64. When we say the VA space is 48 bits, we
> really do mean that TTBR0 is able to address a full 48 bits. So this
> truncates the MSB for the addressing mode.

Ah, I missed the lack of a "-1" in the arm64 code.

> With the code living in the arm64 side of the shop, I can also tailor
> the comment to directly match the architecture to provide breadcrumbs
> tying it back to the Arm ARM.

The main reason why I don't like splitting the code this way is that it makes it
harder for non-arm64 folks to understand what makes arm64 different.  Case in
point, my overlooking of the "-1".  I read the changelog and the comment and
still missed that small-but-important detail, largely because I am completely
unfamiliar with how TTBR{0,1}_EL1 works.

Actually, before we do anything, we should get confirmation from the s390 and
RISC-V folks on whether they have a canonical hole like x86, i.e. maybe x86 is
the oddball.

Anyways, assuming one architecture is the oddball (I'm betting it's x86), I have
no objection to bleeding some of the details into the common code, including a
large comment to document the gory details.  If every architecture manges to be
different, then yeah, a hook is probably warranted.

That said, I also don't mind shoving a bit of abstraction into arch code if that
avoids some #ifdef ugliness or allows for better documentation, flexibility, etc.
What I don't like is duplicating the logic of turning "VA bits" into the bitmap.

E.g. something like this would also be an option.  Readers would obviously need
to track down has_split_va_space, but that should be fairly easy and can come
with a big arch-specific comment, and meanwhile the core logic of how selftests
populate the va bitmaps is common.

Or if arm64 is the only arch without a split, invert the flag and have arm64 set
the vm->has_combined_va_space or whatever.

static void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
{
	unsigned int eff_va_bits = vm->va_bits;
	sparsebit_num_t nr_bits;

	/* blah blah blah */
	if (vm->has_split_va_space)
		eff_va_bits--;

	nr_bits = (1ULL << eff_va_bits) >> vm->page_shift;

	sparsebit_set_num(vm->vpages_valid, 0, nr_va_bits);

	if (vm->has_split_va_space)
		sparsebit_set_num(vm->vpages_valid,
			  	  (~((1ULL << eff_va_bits) - 1)) >> vm->page_shift,
				  nr_bits);
}

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
@ 2022-12-08  1:09         ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-08  1:09 UTC (permalink / raw)
  To: Oliver Upton
  Cc: kvm, Marc Zyngier, linux-kernel, linux-kselftest, kvmarm,
	Paolo Bonzini, Shuah Khan, kvmarm, linux-arm-kernel

On Thu, Dec 08, 2022, Oliver Upton wrote:
> On Thu, Dec 08, 2022 at 12:18:07AM +0000, Sean Christopherson wrote:
> 
> [...]
> 
> > Together, what about?  The #ifdef is a bit gross, especially around "hi_start",
> > but it's less duplicate code.  And IMO, having things bundled in the same place
> > makes it a lot easier for newbies (to arm64 or kernel coding in general) to
> > understand what's going on and why arm64 is different.
> 
> I'd rather we not go this route. We really shouldn't make any attempt to
> de-dupe something that is inherently architecture specific.
> 
> For example:
> 
> > +	/*
> > +	 * All architectures supports splitting the virtual address space into
> > +	 * a high and a low half.  Populate both halves, except for arm64 which
> > +	 * currently uses only TTBR0_EL1 (arbitrary selftests "logic"), i.e.
> > +	 * only has a valid low half.
> > +	 */
> > +	sparsebit_num_t nr_va_bits = (1ULL << (vm->va_bits - 1)) >> vm->page_shift;
> 
> This is still wrong for arm64. When we say the VA space is 48 bits, we
> really do mean that TTBR0 is able to address a full 48 bits. So this
> truncates the MSB for the addressing mode.

Ah, I missed the lack of a "-1" in the arm64 code.

> With the code living in the arm64 side of the shop, I can also tailor
> the comment to directly match the architecture to provide breadcrumbs
> tying it back to the Arm ARM.

The main reason why I don't like splitting the code this way is that it makes it
harder for non-arm64 folks to understand what makes arm64 different.  Case in
point, my overlooking of the "-1".  I read the changelog and the comment and
still missed that small-but-important detail, largely because I am completely
unfamiliar with how TTBR{0,1}_EL1 works.

Actually, before we do anything, we should get confirmation from the s390 and
RISC-V folks on whether they have a canonical hole like x86, i.e. maybe x86 is
the oddball.

Anyways, assuming one architecture is the oddball (I'm betting it's x86), I have
no objection to bleeding some of the details into the common code, including a
large comment to document the gory details.  If every architecture manges to be
different, then yeah, a hook is probably warranted.

That said, I also don't mind shoving a bit of abstraction into arch code if that
avoids some #ifdef ugliness or allows for better documentation, flexibility, etc.
What I don't like is duplicating the logic of turning "VA bits" into the bitmap.

E.g. something like this would also be an option.  Readers would obviously need
to track down has_split_va_space, but that should be fairly easy and can come
with a big arch-specific comment, and meanwhile the core logic of how selftests
populate the va bitmaps is common.

Or if arm64 is the only arch without a split, invert the flag and have arm64 set
the vm->has_combined_va_space or whatever.

static void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
{
	unsigned int eff_va_bits = vm->va_bits;
	sparsebit_num_t nr_bits;

	/* blah blah blah */
	if (vm->has_split_va_space)
		eff_va_bits--;

	nr_bits = (1ULL << eff_va_bits) >> vm->page_shift;

	sparsebit_set_num(vm->vpages_valid, 0, nr_va_bits);

	if (vm->has_split_va_space)
		sparsebit_set_num(vm->vpages_valid,
			  	  (~((1ULL << eff_va_bits) - 1)) >> vm->page_shift,
				  nr_bits);
}
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
@ 2022-12-08  1:09         ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-08  1:09 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, James Morse, Alexandru Elisei, Paolo Bonzini,
	Shuah Khan, Suzuki K Poulose, linux-arm-kernel, kvmarm, kvm,
	kvmarm, Ricardo Koller, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022, Oliver Upton wrote:
> On Thu, Dec 08, 2022 at 12:18:07AM +0000, Sean Christopherson wrote:
> 
> [...]
> 
> > Together, what about?  The #ifdef is a bit gross, especially around "hi_start",
> > but it's less duplicate code.  And IMO, having things bundled in the same place
> > makes it a lot easier for newbies (to arm64 or kernel coding in general) to
> > understand what's going on and why arm64 is different.
> 
> I'd rather we not go this route. We really shouldn't make any attempt to
> de-dupe something that is inherently architecture specific.
> 
> For example:
> 
> > +	/*
> > +	 * All architectures supports splitting the virtual address space into
> > +	 * a high and a low half.  Populate both halves, except for arm64 which
> > +	 * currently uses only TTBR0_EL1 (arbitrary selftests "logic"), i.e.
> > +	 * only has a valid low half.
> > +	 */
> > +	sparsebit_num_t nr_va_bits = (1ULL << (vm->va_bits - 1)) >> vm->page_shift;
> 
> This is still wrong for arm64. When we say the VA space is 48 bits, we
> really do mean that TTBR0 is able to address a full 48 bits. So this
> truncates the MSB for the addressing mode.

Ah, I missed the lack of a "-1" in the arm64 code.

> With the code living in the arm64 side of the shop, I can also tailor
> the comment to directly match the architecture to provide breadcrumbs
> tying it back to the Arm ARM.

The main reason why I don't like splitting the code this way is that it makes it
harder for non-arm64 folks to understand what makes arm64 different.  Case in
point, my overlooking of the "-1".  I read the changelog and the comment and
still missed that small-but-important detail, largely because I am completely
unfamiliar with how TTBR{0,1}_EL1 works.

Actually, before we do anything, we should get confirmation from the s390 and
RISC-V folks on whether they have a canonical hole like x86, i.e. maybe x86 is
the oddball.

Anyways, assuming one architecture is the oddball (I'm betting it's x86), I have
no objection to bleeding some of the details into the common code, including a
large comment to document the gory details.  If every architecture manges to be
different, then yeah, a hook is probably warranted.

That said, I also don't mind shoving a bit of abstraction into arch code if that
avoids some #ifdef ugliness or allows for better documentation, flexibility, etc.
What I don't like is duplicating the logic of turning "VA bits" into the bitmap.

E.g. something like this would also be an option.  Readers would obviously need
to track down has_split_va_space, but that should be fairly easy and can come
with a big arch-specific comment, and meanwhile the core logic of how selftests
populate the va bitmaps is common.

Or if arm64 is the only arch without a split, invert the flag and have arm64 set
the vm->has_combined_va_space or whatever.

static void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
{
	unsigned int eff_va_bits = vm->va_bits;
	sparsebit_num_t nr_bits;

	/* blah blah blah */
	if (vm->has_split_va_space)
		eff_va_bits--;

	nr_bits = (1ULL << eff_va_bits) >> vm->page_shift;

	sparsebit_set_num(vm->vpages_valid, 0, nr_va_bits);

	if (vm->has_split_va_space)
		sparsebit_set_num(vm->vpages_valid,
			  	  (~((1ULL << eff_va_bits) - 1)) >> vm->page_shift,
				  nr_bits);
}

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
  2022-12-08  1:09         ` Sean Christopherson
  (?)
@ 2022-12-08 16:23           ` Andrew Jones
  -1 siblings, 0 replies; 57+ messages in thread
From: Andrew Jones @ 2022-12-08 16:23 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Oliver Upton, kvm, Marc Zyngier, linux-kernel, linux-kselftest,
	kvmarm, Paolo Bonzini, Shuah Khan, kvmarm, linux-arm-kernel

On Thu, Dec 08, 2022 at 01:09:38AM +0000, Sean Christopherson wrote:
...
> Actually, before we do anything, we should get confirmation from the s390 and
> RISC-V folks on whether they have a canonical hole like x86, i.e. maybe x86 is
> the oddball.

riscv splits like x86.

Thanks,
drew

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
@ 2022-12-08 16:23           ` Andrew Jones
  0 siblings, 0 replies; 57+ messages in thread
From: Andrew Jones @ 2022-12-08 16:23 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Oliver Upton, kvm, Marc Zyngier, linux-kernel, linux-kselftest,
	kvmarm, Paolo Bonzini, Shuah Khan, kvmarm, linux-arm-kernel

On Thu, Dec 08, 2022 at 01:09:38AM +0000, Sean Christopherson wrote:
...
> Actually, before we do anything, we should get confirmation from the s390 and
> RISC-V folks on whether they have a canonical hole like x86, i.e. maybe x86 is
> the oddball.

riscv splits like x86.

Thanks,
drew

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0
@ 2022-12-08 16:23           ` Andrew Jones
  0 siblings, 0 replies; 57+ messages in thread
From: Andrew Jones @ 2022-12-08 16:23 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Marc Zyngier, linux-kernel, linux-kselftest, kvmarm,
	Paolo Bonzini, Shuah Khan, kvmarm, linux-arm-kernel

On Thu, Dec 08, 2022 at 01:09:38AM +0000, Sean Christopherson wrote:
...
> Actually, before we do anything, we should get confirmation from the s390 and
> RISC-V folks on whether they have a canonical hole like x86, i.e. maybe x86 is
> the oddball.

riscv splits like x86.

Thanks,
drew
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
  2022-12-08  0:37           ` Oliver Upton
  (?)
@ 2022-12-08 18:47             ` Ricardo Koller
  -1 siblings, 0 replies; 57+ messages in thread
From: Ricardo Koller @ 2022-12-08 18:47 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Sean Christopherson, Marc Zyngier, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Shuah Khan, linux-arm-kernel,
	kvmarm, kvm, kvmarm, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022 at 12:37:23AM +0000, Oliver Upton wrote:
> On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> > On Thu, Dec 08, 2022, Oliver Upton wrote:
> > > On Wed, Dec 07, 2022 at 11:57:27PM +0000, Sean Christopherson wrote:
> > > > > diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > > index 92d3a91153b6..95d22cfb7b41 100644
> > > > > --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > > +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > > @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> > > > >  				    data_size / guest_page_size,
> > > > >  				    p->test_desc->data_memslot_flags);
> > > > >  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> > > > > +}
> > > > > +
> > > > > +static void setup_ucall(struct kvm_vm *vm)
> > > > > +{
> > > > > +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
> > > > >  
> > > > > -	ucall_init(vm, data_gpa + data_size);
> > > > > +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
> > > > 
> > > > Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?
> > > 
> > > Sure, but that's only guaranteed in the PA space.
> > > 
> > > > The reason
> > > > I ask is because if so, then we can do the temporarily heinous, but hopefully forward
> > > > looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().
> > > > 
> > > > E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
> > > > add a dedicated region+memslot for the ucall MMIO page.
> > > 
> > > Even still, that's just a kludge to make ucalls work. We have other
> > > MMIO devices (GIC distributor, for example) that work by chance since
> > > nothing conflicts with the constant GPAs we've selected in the tests.
> > > 
> > > I'd rather we go down the route of having an address allocator for the
> > > for both the VA and PA spaces to provide carveouts at runtime.
> > 
> > Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> > by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> > will yield very explicit asserts, which IMO is better than whatever might go wrong
> > with a carve out.
> 
> Perhaps the use of the term 'carveout' wasn't right here.
> 
> What I'm suggesting is we cannot rely on KVM memslots alone to act as an
> allocator for the PA space. KVM can provide devices to the guest that
> aren't represented as memslots. If we're trying to fix PA allocations
> anyway, why not make it generic enough to suit the needs of things
> beyond ucalls?

One extra bit of information: in arm, IO is any access to an address (within
bounds) not backed by a memslot. Not the same as x86 where MMIO are writes to
read-only memslots.  No idea what other arches do.

> 
> --
> Thanks,
> Oliver

I think that we should use these proposed changes, and then move to an ideal
solution.  These are the changes I propose:

1. add an arch specific API for allocating MMIO physical ranges:
vm_arch_mmio_region_add(vm, npages).  The x86 version creates a read-only
memslot, and the arm one allocates physical space without a memslot in it.

2. Then change all IO related users (including ucall) to use
vm_arch_mmio_region_add(). Ex:

	pa = vm_arch_mmio_region_add(vm, npages);
	ucall_init(vm, pa);

page_fault_test needs to be adapted to use vm_arch_mmio_region_add() as well.

Thanks,
Ricardo

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-08 18:47             ` Ricardo Koller
  0 siblings, 0 replies; 57+ messages in thread
From: Ricardo Koller @ 2022-12-08 18:47 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Shuah Khan, kvm, linux-kernel, linux-kselftest, Marc Zyngier,
	kvmarm, Paolo Bonzini, kvmarm, linux-arm-kernel

On Thu, Dec 08, 2022 at 12:37:23AM +0000, Oliver Upton wrote:
> On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> > On Thu, Dec 08, 2022, Oliver Upton wrote:
> > > On Wed, Dec 07, 2022 at 11:57:27PM +0000, Sean Christopherson wrote:
> > > > > diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > > index 92d3a91153b6..95d22cfb7b41 100644
> > > > > --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > > +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > > @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> > > > >  				    data_size / guest_page_size,
> > > > >  				    p->test_desc->data_memslot_flags);
> > > > >  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> > > > > +}
> > > > > +
> > > > > +static void setup_ucall(struct kvm_vm *vm)
> > > > > +{
> > > > > +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
> > > > >  
> > > > > -	ucall_init(vm, data_gpa + data_size);
> > > > > +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
> > > > 
> > > > Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?
> > > 
> > > Sure, but that's only guaranteed in the PA space.
> > > 
> > > > The reason
> > > > I ask is because if so, then we can do the temporarily heinous, but hopefully forward
> > > > looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().
> > > > 
> > > > E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
> > > > add a dedicated region+memslot for the ucall MMIO page.
> > > 
> > > Even still, that's just a kludge to make ucalls work. We have other
> > > MMIO devices (GIC distributor, for example) that work by chance since
> > > nothing conflicts with the constant GPAs we've selected in the tests.
> > > 
> > > I'd rather we go down the route of having an address allocator for the
> > > for both the VA and PA spaces to provide carveouts at runtime.
> > 
> > Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> > by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> > will yield very explicit asserts, which IMO is better than whatever might go wrong
> > with a carve out.
> 
> Perhaps the use of the term 'carveout' wasn't right here.
> 
> What I'm suggesting is we cannot rely on KVM memslots alone to act as an
> allocator for the PA space. KVM can provide devices to the guest that
> aren't represented as memslots. If we're trying to fix PA allocations
> anyway, why not make it generic enough to suit the needs of things
> beyond ucalls?

One extra bit of information: in arm, IO is any access to an address (within
bounds) not backed by a memslot. Not the same as x86 where MMIO are writes to
read-only memslots.  No idea what other arches do.

> 
> --
> Thanks,
> Oliver

I think that we should use these proposed changes, and then move to an ideal
solution.  These are the changes I propose:

1. add an arch specific API for allocating MMIO physical ranges:
vm_arch_mmio_region_add(vm, npages).  The x86 version creates a read-only
memslot, and the arm one allocates physical space without a memslot in it.

2. Then change all IO related users (including ucall) to use
vm_arch_mmio_region_add(). Ex:

	pa = vm_arch_mmio_region_add(vm, npages);
	ucall_init(vm, pa);

page_fault_test needs to be adapted to use vm_arch_mmio_region_add() as well.

Thanks,
Ricardo
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-08 18:47             ` Ricardo Koller
  0 siblings, 0 replies; 57+ messages in thread
From: Ricardo Koller @ 2022-12-08 18:47 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Sean Christopherson, Marc Zyngier, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Shuah Khan, linux-arm-kernel,
	kvmarm, kvm, kvmarm, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022 at 12:37:23AM +0000, Oliver Upton wrote:
> On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> > On Thu, Dec 08, 2022, Oliver Upton wrote:
> > > On Wed, Dec 07, 2022 at 11:57:27PM +0000, Sean Christopherson wrote:
> > > > > diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > > index 92d3a91153b6..95d22cfb7b41 100644
> > > > > --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > > +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
> > > > > @@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
> > > > >  				    data_size / guest_page_size,
> > > > >  				    p->test_desc->data_memslot_flags);
> > > > >  	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
> > > > > +}
> > > > > +
> > > > > +static void setup_ucall(struct kvm_vm *vm)
> > > > > +{
> > > > > +	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
> > > > >  
> > > > > -	ucall_init(vm, data_gpa + data_size);
> > > > > +	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
> > > > 
> > > > Isn't there a hole after CODE_AND_DATA_MEMSLOT?  I.e. after memslot 0?
> > > 
> > > Sure, but that's only guaranteed in the PA space.
> > > 
> > > > The reason
> > > > I ask is because if so, then we can do the temporarily heinous, but hopefully forward
> > > > looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().
> > > > 
> > > > E.g. I think we can do this immediately, and then at some point in the 6.2 cycle
> > > > add a dedicated region+memslot for the ucall MMIO page.
> > > 
> > > Even still, that's just a kludge to make ucalls work. We have other
> > > MMIO devices (GIC distributor, for example) that work by chance since
> > > nothing conflicts with the constant GPAs we've selected in the tests.
> > > 
> > > I'd rather we go down the route of having an address allocator for the
> > > for both the VA and PA spaces to provide carveouts at runtime.
> > 
> > Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> > by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> > will yield very explicit asserts, which IMO is better than whatever might go wrong
> > with a carve out.
> 
> Perhaps the use of the term 'carveout' wasn't right here.
> 
> What I'm suggesting is we cannot rely on KVM memslots alone to act as an
> allocator for the PA space. KVM can provide devices to the guest that
> aren't represented as memslots. If we're trying to fix PA allocations
> anyway, why not make it generic enough to suit the needs of things
> beyond ucalls?

One extra bit of information: in arm, IO is any access to an address (within
bounds) not backed by a memslot. Not the same as x86 where MMIO are writes to
read-only memslots.  No idea what other arches do.

> 
> --
> Thanks,
> Oliver

I think that we should use these proposed changes, and then move to an ideal
solution.  These are the changes I propose:

1. add an arch specific API for allocating MMIO physical ranges:
vm_arch_mmio_region_add(vm, npages).  The x86 version creates a read-only
memslot, and the arm one allocates physical space without a memslot in it.

2. Then change all IO related users (including ucall) to use
vm_arch_mmio_region_add(). Ex:

	pa = vm_arch_mmio_region_add(vm, npages);
	ucall_init(vm, pa);

page_fault_test needs to be adapted to use vm_arch_mmio_region_add() as well.

Thanks,
Ricardo

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
  2022-12-08 18:47             ` Ricardo Koller
  (?)
@ 2022-12-08 19:01               ` Sean Christopherson
  -1 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-08 19:01 UTC (permalink / raw)
  To: Ricardo Koller
  Cc: Oliver Upton, Marc Zyngier, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Shuah Khan, linux-arm-kernel,
	kvmarm, kvm, kvmarm, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022, Ricardo Koller wrote:
> On Thu, Dec 08, 2022 at 12:37:23AM +0000, Oliver Upton wrote:
> > On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> > > > Even still, that's just a kludge to make ucalls work. We have other
> > > > MMIO devices (GIC distributor, for example) that work by chance since
> > > > nothing conflicts with the constant GPAs we've selected in the tests.
> > > > 
> > > > I'd rather we go down the route of having an address allocator for the
> > > > for both the VA and PA spaces to provide carveouts at runtime.
> > > 
> > > Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> > > by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> > > will yield very explicit asserts, which IMO is better than whatever might go wrong
> > > with a carve out.
> > 
> > Perhaps the use of the term 'carveout' wasn't right here.
> > 
> > What I'm suggesting is we cannot rely on KVM memslots alone to act as an
> > allocator for the PA space. KVM can provide devices to the guest that
> > aren't represented as memslots. If we're trying to fix PA allocations
> > anyway, why not make it generic enough to suit the needs of things
> > beyond ucalls?
> 
> One extra bit of information: in arm, IO is any access to an address (within
> bounds) not backed by a memslot. Not the same as x86 where MMIO are writes to
> read-only memslots.  No idea what other arches do.

I don't think that's correct, doesn't this code turn write abort on a RO memslot
into an io_mem_abort()?  Specifically, the "(write_fault && !writable)" check will
match, and assuming none the the edge cases in the if-statement fire, KVM will
send the write down io_mem_abort().

	gfn = fault_ipa >> PAGE_SHIFT;
	memslot = gfn_to_memslot(vcpu->kvm, gfn);
	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
	write_fault = kvm_is_write_fault(vcpu);
	if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
		/*
		 * The guest has put either its instructions or its page-tables
		 * somewhere it shouldn't have. Userspace won't be able to do
		 * anything about this (there's no syndrome for a start), so
		 * re-inject the abort back into the guest.
		 */
		if (is_iabt) {
			ret = -ENOEXEC;
			goto out;
		}

		if (kvm_vcpu_abt_iss1tw(vcpu)) {
			kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
			ret = 1;
			goto out_unlock;
		}

		/*
		 * Check for a cache maintenance operation. Since we
		 * ended-up here, we know it is outside of any memory
		 * slot. But we can't find out if that is for a device,
		 * or if the guest is just being stupid. The only thing
		 * we know for sure is that this range cannot be cached.
		 *
		 * So let's assume that the guest is just being
		 * cautious, and skip the instruction.
		 */
		if (kvm_is_error_hva(hva) && kvm_vcpu_dabt_is_cm(vcpu)) {
			kvm_incr_pc(vcpu);
			ret = 1;
			goto out_unlock;
		}

		/*
		 * The IPA is reported as [MAX:12], so we need to
		 * complement it with the bottom 12 bits from the
		 * faulting VA. This is always 12 bits, irrespective
		 * of the page size.
		 */
		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
		ret = io_mem_abort(vcpu, fault_ipa);
		goto out_unlock;
	}

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-08 19:01               ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-08 19:01 UTC (permalink / raw)
  To: Ricardo Koller
  Cc: Shuah Khan, kvm, Marc Zyngier, linux-kernel, linux-kselftest,
	kvmarm, Paolo Bonzini, kvmarm, linux-arm-kernel

On Thu, Dec 08, 2022, Ricardo Koller wrote:
> On Thu, Dec 08, 2022 at 12:37:23AM +0000, Oliver Upton wrote:
> > On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> > > > Even still, that's just a kludge to make ucalls work. We have other
> > > > MMIO devices (GIC distributor, for example) that work by chance since
> > > > nothing conflicts with the constant GPAs we've selected in the tests.
> > > > 
> > > > I'd rather we go down the route of having an address allocator for the
> > > > for both the VA and PA spaces to provide carveouts at runtime.
> > > 
> > > Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> > > by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> > > will yield very explicit asserts, which IMO is better than whatever might go wrong
> > > with a carve out.
> > 
> > Perhaps the use of the term 'carveout' wasn't right here.
> > 
> > What I'm suggesting is we cannot rely on KVM memslots alone to act as an
> > allocator for the PA space. KVM can provide devices to the guest that
> > aren't represented as memslots. If we're trying to fix PA allocations
> > anyway, why not make it generic enough to suit the needs of things
> > beyond ucalls?
> 
> One extra bit of information: in arm, IO is any access to an address (within
> bounds) not backed by a memslot. Not the same as x86 where MMIO are writes to
> read-only memslots.  No idea what other arches do.

I don't think that's correct, doesn't this code turn write abort on a RO memslot
into an io_mem_abort()?  Specifically, the "(write_fault && !writable)" check will
match, and assuming none the the edge cases in the if-statement fire, KVM will
send the write down io_mem_abort().

	gfn = fault_ipa >> PAGE_SHIFT;
	memslot = gfn_to_memslot(vcpu->kvm, gfn);
	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
	write_fault = kvm_is_write_fault(vcpu);
	if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
		/*
		 * The guest has put either its instructions or its page-tables
		 * somewhere it shouldn't have. Userspace won't be able to do
		 * anything about this (there's no syndrome for a start), so
		 * re-inject the abort back into the guest.
		 */
		if (is_iabt) {
			ret = -ENOEXEC;
			goto out;
		}

		if (kvm_vcpu_abt_iss1tw(vcpu)) {
			kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
			ret = 1;
			goto out_unlock;
		}

		/*
		 * Check for a cache maintenance operation. Since we
		 * ended-up here, we know it is outside of any memory
		 * slot. But we can't find out if that is for a device,
		 * or if the guest is just being stupid. The only thing
		 * we know for sure is that this range cannot be cached.
		 *
		 * So let's assume that the guest is just being
		 * cautious, and skip the instruction.
		 */
		if (kvm_is_error_hva(hva) && kvm_vcpu_dabt_is_cm(vcpu)) {
			kvm_incr_pc(vcpu);
			ret = 1;
			goto out_unlock;
		}

		/*
		 * The IPA is reported as [MAX:12], so we need to
		 * complement it with the bottom 12 bits from the
		 * faulting VA. This is always 12 bits, irrespective
		 * of the page size.
		 */
		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
		ret = io_mem_abort(vcpu, fault_ipa);
		goto out_unlock;
	}
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-08 19:01               ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-08 19:01 UTC (permalink / raw)
  To: Ricardo Koller
  Cc: Oliver Upton, Marc Zyngier, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Shuah Khan, linux-arm-kernel,
	kvmarm, kvm, kvmarm, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022, Ricardo Koller wrote:
> On Thu, Dec 08, 2022 at 12:37:23AM +0000, Oliver Upton wrote:
> > On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> > > > Even still, that's just a kludge to make ucalls work. We have other
> > > > MMIO devices (GIC distributor, for example) that work by chance since
> > > > nothing conflicts with the constant GPAs we've selected in the tests.
> > > > 
> > > > I'd rather we go down the route of having an address allocator for the
> > > > for both the VA and PA spaces to provide carveouts at runtime.
> > > 
> > > Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> > > by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> > > will yield very explicit asserts, which IMO is better than whatever might go wrong
> > > with a carve out.
> > 
> > Perhaps the use of the term 'carveout' wasn't right here.
> > 
> > What I'm suggesting is we cannot rely on KVM memslots alone to act as an
> > allocator for the PA space. KVM can provide devices to the guest that
> > aren't represented as memslots. If we're trying to fix PA allocations
> > anyway, why not make it generic enough to suit the needs of things
> > beyond ucalls?
> 
> One extra bit of information: in arm, IO is any access to an address (within
> bounds) not backed by a memslot. Not the same as x86 where MMIO are writes to
> read-only memslots.  No idea what other arches do.

I don't think that's correct, doesn't this code turn write abort on a RO memslot
into an io_mem_abort()?  Specifically, the "(write_fault && !writable)" check will
match, and assuming none the the edge cases in the if-statement fire, KVM will
send the write down io_mem_abort().

	gfn = fault_ipa >> PAGE_SHIFT;
	memslot = gfn_to_memslot(vcpu->kvm, gfn);
	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
	write_fault = kvm_is_write_fault(vcpu);
	if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
		/*
		 * The guest has put either its instructions or its page-tables
		 * somewhere it shouldn't have. Userspace won't be able to do
		 * anything about this (there's no syndrome for a start), so
		 * re-inject the abort back into the guest.
		 */
		if (is_iabt) {
			ret = -ENOEXEC;
			goto out;
		}

		if (kvm_vcpu_abt_iss1tw(vcpu)) {
			kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
			ret = 1;
			goto out_unlock;
		}

		/*
		 * Check for a cache maintenance operation. Since we
		 * ended-up here, we know it is outside of any memory
		 * slot. But we can't find out if that is for a device,
		 * or if the guest is just being stupid. The only thing
		 * we know for sure is that this range cannot be cached.
		 *
		 * So let's assume that the guest is just being
		 * cautious, and skip the instruction.
		 */
		if (kvm_is_error_hva(hva) && kvm_vcpu_dabt_is_cm(vcpu)) {
			kvm_incr_pc(vcpu);
			ret = 1;
			goto out_unlock;
		}

		/*
		 * The IPA is reported as [MAX:12], so we need to
		 * complement it with the bottom 12 bits from the
		 * faulting VA. This is always 12 bits, irrespective
		 * of the page size.
		 */
		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
		ret = io_mem_abort(vcpu, fault_ipa);
		goto out_unlock;
	}

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
  2022-12-08 19:01               ` Sean Christopherson
  (?)
@ 2022-12-08 19:49                 ` Ricardo Koller
  -1 siblings, 0 replies; 57+ messages in thread
From: Ricardo Koller @ 2022-12-08 19:49 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Oliver Upton, Marc Zyngier, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Shuah Khan, linux-arm-kernel,
	kvmarm, kvm, kvmarm, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022 at 07:01:57PM +0000, Sean Christopherson wrote:
> On Thu, Dec 08, 2022, Ricardo Koller wrote:
> > On Thu, Dec 08, 2022 at 12:37:23AM +0000, Oliver Upton wrote:
> > > On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> > > > > Even still, that's just a kludge to make ucalls work. We have other
> > > > > MMIO devices (GIC distributor, for example) that work by chance since
> > > > > nothing conflicts with the constant GPAs we've selected in the tests.
> > > > > 
> > > > > I'd rather we go down the route of having an address allocator for the
> > > > > for both the VA and PA spaces to provide carveouts at runtime.
> > > > 
> > > > Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> > > > by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> > > > will yield very explicit asserts, which IMO is better than whatever might go wrong
> > > > with a carve out.
> > > 
> > > Perhaps the use of the term 'carveout' wasn't right here.
> > > 
> > > What I'm suggesting is we cannot rely on KVM memslots alone to act as an
> > > allocator for the PA space. KVM can provide devices to the guest that
> > > aren't represented as memslots. If we're trying to fix PA allocations
> > > anyway, why not make it generic enough to suit the needs of things
> > > beyond ucalls?
> > 
> > One extra bit of information: in arm, IO is any access to an address (within
> > bounds) not backed by a memslot. Not the same as x86 where MMIO are writes to
> > read-only memslots.  No idea what other arches do.
> 
> I don't think that's correct, doesn't this code turn write abort on a RO memslot
> into an io_mem_abort()?  Specifically, the "(write_fault && !writable)" check will
> match, and assuming none the the edge cases in the if-statement fire, KVM will
> send the write down io_mem_abort().

You are right. In fact, page_fault_test checks precisely that: writes on
RO memslots are sent to userspace as an mmio exit. I was just referring
to the MMIO done for ucall.

Having said that, we could use ucall as writes on read-only memslots
like what x86 does.

> 
> 	gfn = fault_ipa >> PAGE_SHIFT;
> 	memslot = gfn_to_memslot(vcpu->kvm, gfn);
> 	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
> 	write_fault = kvm_is_write_fault(vcpu);
> 	if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
> 		/*
> 		 * The guest has put either its instructions or its page-tables
> 		 * somewhere it shouldn't have. Userspace won't be able to do
> 		 * anything about this (there's no syndrome for a start), so
> 		 * re-inject the abort back into the guest.
> 		 */
> 		if (is_iabt) {
> 			ret = -ENOEXEC;
> 			goto out;
> 		}
> 
> 		if (kvm_vcpu_abt_iss1tw(vcpu)) {
> 			kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
> 			ret = 1;
> 			goto out_unlock;
> 		}
> 
> 		/*
> 		 * Check for a cache maintenance operation. Since we
> 		 * ended-up here, we know it is outside of any memory
> 		 * slot. But we can't find out if that is for a device,
> 		 * or if the guest is just being stupid. The only thing
> 		 * we know for sure is that this range cannot be cached.
> 		 *
> 		 * So let's assume that the guest is just being
> 		 * cautious, and skip the instruction.
> 		 */
> 		if (kvm_is_error_hva(hva) && kvm_vcpu_dabt_is_cm(vcpu)) {
> 			kvm_incr_pc(vcpu);
> 			ret = 1;
> 			goto out_unlock;
> 		}
> 
> 		/*
> 		 * The IPA is reported as [MAX:12], so we need to
> 		 * complement it with the bottom 12 bits from the
> 		 * faulting VA. This is always 12 bits, irrespective
> 		 * of the page size.
> 		 */
> 		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
> 		ret = io_mem_abort(vcpu, fault_ipa);
> 		goto out_unlock;
> 	}

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-08 19:49                 ` Ricardo Koller
  0 siblings, 0 replies; 57+ messages in thread
From: Ricardo Koller @ 2022-12-08 19:49 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Shuah Khan, kvm, Marc Zyngier, linux-kernel, linux-kselftest,
	kvmarm, Paolo Bonzini, kvmarm, linux-arm-kernel

On Thu, Dec 08, 2022 at 07:01:57PM +0000, Sean Christopherson wrote:
> On Thu, Dec 08, 2022, Ricardo Koller wrote:
> > On Thu, Dec 08, 2022 at 12:37:23AM +0000, Oliver Upton wrote:
> > > On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> > > > > Even still, that's just a kludge to make ucalls work. We have other
> > > > > MMIO devices (GIC distributor, for example) that work by chance since
> > > > > nothing conflicts with the constant GPAs we've selected in the tests.
> > > > > 
> > > > > I'd rather we go down the route of having an address allocator for the
> > > > > for both the VA and PA spaces to provide carveouts at runtime.
> > > > 
> > > > Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> > > > by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> > > > will yield very explicit asserts, which IMO is better than whatever might go wrong
> > > > with a carve out.
> > > 
> > > Perhaps the use of the term 'carveout' wasn't right here.
> > > 
> > > What I'm suggesting is we cannot rely on KVM memslots alone to act as an
> > > allocator for the PA space. KVM can provide devices to the guest that
> > > aren't represented as memslots. If we're trying to fix PA allocations
> > > anyway, why not make it generic enough to suit the needs of things
> > > beyond ucalls?
> > 
> > One extra bit of information: in arm, IO is any access to an address (within
> > bounds) not backed by a memslot. Not the same as x86 where MMIO are writes to
> > read-only memslots.  No idea what other arches do.
> 
> I don't think that's correct, doesn't this code turn write abort on a RO memslot
> into an io_mem_abort()?  Specifically, the "(write_fault && !writable)" check will
> match, and assuming none the the edge cases in the if-statement fire, KVM will
> send the write down io_mem_abort().

You are right. In fact, page_fault_test checks precisely that: writes on
RO memslots are sent to userspace as an mmio exit. I was just referring
to the MMIO done for ucall.

Having said that, we could use ucall as writes on read-only memslots
like what x86 does.

> 
> 	gfn = fault_ipa >> PAGE_SHIFT;
> 	memslot = gfn_to_memslot(vcpu->kvm, gfn);
> 	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
> 	write_fault = kvm_is_write_fault(vcpu);
> 	if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
> 		/*
> 		 * The guest has put either its instructions or its page-tables
> 		 * somewhere it shouldn't have. Userspace won't be able to do
> 		 * anything about this (there's no syndrome for a start), so
> 		 * re-inject the abort back into the guest.
> 		 */
> 		if (is_iabt) {
> 			ret = -ENOEXEC;
> 			goto out;
> 		}
> 
> 		if (kvm_vcpu_abt_iss1tw(vcpu)) {
> 			kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
> 			ret = 1;
> 			goto out_unlock;
> 		}
> 
> 		/*
> 		 * Check for a cache maintenance operation. Since we
> 		 * ended-up here, we know it is outside of any memory
> 		 * slot. But we can't find out if that is for a device,
> 		 * or if the guest is just being stupid. The only thing
> 		 * we know for sure is that this range cannot be cached.
> 		 *
> 		 * So let's assume that the guest is just being
> 		 * cautious, and skip the instruction.
> 		 */
> 		if (kvm_is_error_hva(hva) && kvm_vcpu_dabt_is_cm(vcpu)) {
> 			kvm_incr_pc(vcpu);
> 			ret = 1;
> 			goto out_unlock;
> 		}
> 
> 		/*
> 		 * The IPA is reported as [MAX:12], so we need to
> 		 * complement it with the bottom 12 bits from the
> 		 * faulting VA. This is always 12 bits, irrespective
> 		 * of the page size.
> 		 */
> 		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
> 		ret = io_mem_abort(vcpu, fault_ipa);
> 		goto out_unlock;
> 	}
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-08 19:49                 ` Ricardo Koller
  0 siblings, 0 replies; 57+ messages in thread
From: Ricardo Koller @ 2022-12-08 19:49 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Oliver Upton, Marc Zyngier, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Shuah Khan, linux-arm-kernel,
	kvmarm, kvm, kvmarm, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022 at 07:01:57PM +0000, Sean Christopherson wrote:
> On Thu, Dec 08, 2022, Ricardo Koller wrote:
> > On Thu, Dec 08, 2022 at 12:37:23AM +0000, Oliver Upton wrote:
> > > On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> > > > > Even still, that's just a kludge to make ucalls work. We have other
> > > > > MMIO devices (GIC distributor, for example) that work by chance since
> > > > > nothing conflicts with the constant GPAs we've selected in the tests.
> > > > > 
> > > > > I'd rather we go down the route of having an address allocator for the
> > > > > for both the VA and PA spaces to provide carveouts at runtime.
> > > > 
> > > > Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> > > > by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> > > > will yield very explicit asserts, which IMO is better than whatever might go wrong
> > > > with a carve out.
> > > 
> > > Perhaps the use of the term 'carveout' wasn't right here.
> > > 
> > > What I'm suggesting is we cannot rely on KVM memslots alone to act as an
> > > allocator for the PA space. KVM can provide devices to the guest that
> > > aren't represented as memslots. If we're trying to fix PA allocations
> > > anyway, why not make it generic enough to suit the needs of things
> > > beyond ucalls?
> > 
> > One extra bit of information: in arm, IO is any access to an address (within
> > bounds) not backed by a memslot. Not the same as x86 where MMIO are writes to
> > read-only memslots.  No idea what other arches do.
> 
> I don't think that's correct, doesn't this code turn write abort on a RO memslot
> into an io_mem_abort()?  Specifically, the "(write_fault && !writable)" check will
> match, and assuming none the the edge cases in the if-statement fire, KVM will
> send the write down io_mem_abort().

You are right. In fact, page_fault_test checks precisely that: writes on
RO memslots are sent to userspace as an mmio exit. I was just referring
to the MMIO done for ucall.

Having said that, we could use ucall as writes on read-only memslots
like what x86 does.

> 
> 	gfn = fault_ipa >> PAGE_SHIFT;
> 	memslot = gfn_to_memslot(vcpu->kvm, gfn);
> 	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
> 	write_fault = kvm_is_write_fault(vcpu);
> 	if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
> 		/*
> 		 * The guest has put either its instructions or its page-tables
> 		 * somewhere it shouldn't have. Userspace won't be able to do
> 		 * anything about this (there's no syndrome for a start), so
> 		 * re-inject the abort back into the guest.
> 		 */
> 		if (is_iabt) {
> 			ret = -ENOEXEC;
> 			goto out;
> 		}
> 
> 		if (kvm_vcpu_abt_iss1tw(vcpu)) {
> 			kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
> 			ret = 1;
> 			goto out_unlock;
> 		}
> 
> 		/*
> 		 * Check for a cache maintenance operation. Since we
> 		 * ended-up here, we know it is outside of any memory
> 		 * slot. But we can't find out if that is for a device,
> 		 * or if the guest is just being stupid. The only thing
> 		 * we know for sure is that this range cannot be cached.
> 		 *
> 		 * So let's assume that the guest is just being
> 		 * cautious, and skip the instruction.
> 		 */
> 		if (kvm_is_error_hva(hva) && kvm_vcpu_dabt_is_cm(vcpu)) {
> 			kvm_incr_pc(vcpu);
> 			ret = 1;
> 			goto out_unlock;
> 		}
> 
> 		/*
> 		 * The IPA is reported as [MAX:12], so we need to
> 		 * complement it with the bottom 12 bits from the
> 		 * faulting VA. This is always 12 bits, irrespective
> 		 * of the page size.
> 		 */
> 		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
> 		ret = io_mem_abort(vcpu, fault_ipa);
> 		goto out_unlock;
> 	}

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
  2022-12-08 19:49                 ` Ricardo Koller
  (?)
@ 2022-12-09  1:08                   ` Sean Christopherson
  -1 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-09  1:08 UTC (permalink / raw)
  To: Ricardo Koller
  Cc: Oliver Upton, Marc Zyngier, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Shuah Khan, linux-arm-kernel,
	kvmarm, kvm, kvmarm, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022, Ricardo Koller wrote:
> On Thu, Dec 08, 2022 at 07:01:57PM +0000, Sean Christopherson wrote:
> > On Thu, Dec 08, 2022, Ricardo Koller wrote:
> > > On Thu, Dec 08, 2022 at 12:37:23AM +0000, Oliver Upton wrote:
> > > > On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> > > > > > Even still, that's just a kludge to make ucalls work. We have other
> > > > > > MMIO devices (GIC distributor, for example) that work by chance since
> > > > > > nothing conflicts with the constant GPAs we've selected in the tests.
> > > > > > 
> > > > > > I'd rather we go down the route of having an address allocator for the
> > > > > > for both the VA and PA spaces to provide carveouts at runtime.
> > > > > 
> > > > > Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> > > > > by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> > > > > will yield very explicit asserts, which IMO is better than whatever might go wrong
> > > > > with a carve out.
> > > > 
> > > > Perhaps the use of the term 'carveout' wasn't right here.
> > > > 
> > > > What I'm suggesting is we cannot rely on KVM memslots alone to act as an
> > > > allocator for the PA space. KVM can provide devices to the guest that
> > > > aren't represented as memslots. If we're trying to fix PA allocations
> > > > anyway, why not make it generic enough to suit the needs of things
> > > > beyond ucalls?
> > > 
> > > One extra bit of information: in arm, IO is any access to an address (within
> > > bounds) not backed by a memslot. Not the same as x86 where MMIO are writes to
> > > read-only memslots.  No idea what other arches do.
> > 
> > I don't think that's correct, doesn't this code turn write abort on a RO memslot
> > into an io_mem_abort()?  Specifically, the "(write_fault && !writable)" check will
> > match, and assuming none the the edge cases in the if-statement fire, KVM will
> > send the write down io_mem_abort().
> 
> You are right. In fact, page_fault_test checks precisely that: writes on
> RO memslots are sent to userspace as an mmio exit. I was just referring
> to the MMIO done for ucall.

To clarify for others, Ricardo thought that x86 selftests were already using a
read-only memslot for ucalls, hence the confusion.

> Having said that, we could use ucall as writes on read-only memslots
> like what x86 does.

+1.  x86 currently uses I/O with a hardcoded port, but theoretically that's just
as error prone as hardcoding a GPA, it just works because x86 doesn't have any
port I/O tests.

Ugh, and that made me look at sync_regs_test.c, which does its own open coded
ucall.  That thing is probably working by dumb luck at this point.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-09  1:08                   ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-09  1:08 UTC (permalink / raw)
  To: Ricardo Koller
  Cc: Shuah Khan, kvm, Marc Zyngier, linux-kernel, linux-kselftest,
	kvmarm, Paolo Bonzini, kvmarm, linux-arm-kernel

On Thu, Dec 08, 2022, Ricardo Koller wrote:
> On Thu, Dec 08, 2022 at 07:01:57PM +0000, Sean Christopherson wrote:
> > On Thu, Dec 08, 2022, Ricardo Koller wrote:
> > > On Thu, Dec 08, 2022 at 12:37:23AM +0000, Oliver Upton wrote:
> > > > On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> > > > > > Even still, that's just a kludge to make ucalls work. We have other
> > > > > > MMIO devices (GIC distributor, for example) that work by chance since
> > > > > > nothing conflicts with the constant GPAs we've selected in the tests.
> > > > > > 
> > > > > > I'd rather we go down the route of having an address allocator for the
> > > > > > for both the VA and PA spaces to provide carveouts at runtime.
> > > > > 
> > > > > Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> > > > > by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> > > > > will yield very explicit asserts, which IMO is better than whatever might go wrong
> > > > > with a carve out.
> > > > 
> > > > Perhaps the use of the term 'carveout' wasn't right here.
> > > > 
> > > > What I'm suggesting is we cannot rely on KVM memslots alone to act as an
> > > > allocator for the PA space. KVM can provide devices to the guest that
> > > > aren't represented as memslots. If we're trying to fix PA allocations
> > > > anyway, why not make it generic enough to suit the needs of things
> > > > beyond ucalls?
> > > 
> > > One extra bit of information: in arm, IO is any access to an address (within
> > > bounds) not backed by a memslot. Not the same as x86 where MMIO are writes to
> > > read-only memslots.  No idea what other arches do.
> > 
> > I don't think that's correct, doesn't this code turn write abort on a RO memslot
> > into an io_mem_abort()?  Specifically, the "(write_fault && !writable)" check will
> > match, and assuming none the the edge cases in the if-statement fire, KVM will
> > send the write down io_mem_abort().
> 
> You are right. In fact, page_fault_test checks precisely that: writes on
> RO memslots are sent to userspace as an mmio exit. I was just referring
> to the MMIO done for ucall.

To clarify for others, Ricardo thought that x86 selftests were already using a
read-only memslot for ucalls, hence the confusion.

> Having said that, we could use ucall as writes on read-only memslots
> like what x86 does.

+1.  x86 currently uses I/O with a hardcoded port, but theoretically that's just
as error prone as hardcoding a GPA, it just works because x86 doesn't have any
port I/O tests.

Ugh, and that made me look at sync_regs_test.c, which does its own open coded
ucall.  That thing is probably working by dumb luck at this point.
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory
@ 2022-12-09  1:08                   ` Sean Christopherson
  0 siblings, 0 replies; 57+ messages in thread
From: Sean Christopherson @ 2022-12-09  1:08 UTC (permalink / raw)
  To: Ricardo Koller
  Cc: Oliver Upton, Marc Zyngier, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Shuah Khan, linux-arm-kernel,
	kvmarm, kvm, kvmarm, linux-kselftest, linux-kernel

On Thu, Dec 08, 2022, Ricardo Koller wrote:
> On Thu, Dec 08, 2022 at 07:01:57PM +0000, Sean Christopherson wrote:
> > On Thu, Dec 08, 2022, Ricardo Koller wrote:
> > > On Thu, Dec 08, 2022 at 12:37:23AM +0000, Oliver Upton wrote:
> > > > On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
> > > > > > Even still, that's just a kludge to make ucalls work. We have other
> > > > > > MMIO devices (GIC distributor, for example) that work by chance since
> > > > > > nothing conflicts with the constant GPAs we've selected in the tests.
> > > > > > 
> > > > > > I'd rather we go down the route of having an address allocator for the
> > > > > > for both the VA and PA spaces to provide carveouts at runtime.
> > > > > 
> > > > > Aren't those two separate issues?  The PA, a.k.a. memslots space, can be solved
> > > > > by allocating a dedicated memslot, i.e. doesn't need a carve.  At worst, collisions
> > > > > will yield very explicit asserts, which IMO is better than whatever might go wrong
> > > > > with a carve out.
> > > > 
> > > > Perhaps the use of the term 'carveout' wasn't right here.
> > > > 
> > > > What I'm suggesting is we cannot rely on KVM memslots alone to act as an
> > > > allocator for the PA space. KVM can provide devices to the guest that
> > > > aren't represented as memslots. If we're trying to fix PA allocations
> > > > anyway, why not make it generic enough to suit the needs of things
> > > > beyond ucalls?
> > > 
> > > One extra bit of information: in arm, IO is any access to an address (within
> > > bounds) not backed by a memslot. Not the same as x86 where MMIO are writes to
> > > read-only memslots.  No idea what other arches do.
> > 
> > I don't think that's correct, doesn't this code turn write abort on a RO memslot
> > into an io_mem_abort()?  Specifically, the "(write_fault && !writable)" check will
> > match, and assuming none the the edge cases in the if-statement fire, KVM will
> > send the write down io_mem_abort().
> 
> You are right. In fact, page_fault_test checks precisely that: writes on
> RO memslots are sent to userspace as an mmio exit. I was just referring
> to the MMIO done for ucall.

To clarify for others, Ricardo thought that x86 selftests were already using a
read-only memslot for ucalls, hence the confusion.

> Having said that, we could use ucall as writes on read-only memslots
> like what x86 does.

+1.  x86 currently uses I/O with a hardcoded port, but theoretically that's just
as error prone as hardcoding a GPA, it just works because x86 doesn't have any
port I/O tests.

Ugh, and that made me look at sync_regs_test.c, which does its own open coded
ucall.  That thing is probably working by dumb luck at this point.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 57+ messages in thread

end of thread, other threads:[~2022-12-09  1:10 UTC | newest]

Thread overview: 57+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-07 21:48 [PATCH 0/4] KVM: selftests: Fixes for ucall pool + page_fault_test Oliver Upton
2022-12-07 21:48 ` Oliver Upton
2022-12-07 21:48 ` Oliver Upton
2022-12-07 21:48 ` [PATCH 1/4] KVM: selftests: Fix build due to ucall_uninit() removal Oliver Upton
2022-12-07 21:48   ` Oliver Upton
2022-12-07 21:48   ` Oliver Upton
2022-12-07 21:48 ` [PATCH 2/4] KVM: selftests: Setup ucall after loading program into guest memory Oliver Upton
2022-12-07 21:48   ` Oliver Upton
2022-12-07 21:48   ` Oliver Upton
2022-12-07 23:57   ` Sean Christopherson
2022-12-07 23:57     ` Sean Christopherson
2022-12-07 23:57     ` Sean Christopherson
2022-12-08  0:17     ` Oliver Upton
2022-12-08  0:17       ` Oliver Upton
2022-12-08  0:17       ` Oliver Upton
2022-12-08  0:24       ` Sean Christopherson
2022-12-08  0:24         ` Sean Christopherson
2022-12-08  0:24         ` Sean Christopherson
2022-12-08  0:37         ` Oliver Upton
2022-12-08  0:37           ` Oliver Upton
2022-12-08  0:37           ` Oliver Upton
2022-12-08 18:47           ` Ricardo Koller
2022-12-08 18:47             ` Ricardo Koller
2022-12-08 18:47             ` Ricardo Koller
2022-12-08 19:01             ` Sean Christopherson
2022-12-08 19:01               ` Sean Christopherson
2022-12-08 19:01               ` Sean Christopherson
2022-12-08 19:49               ` Ricardo Koller
2022-12-08 19:49                 ` Ricardo Koller
2022-12-08 19:49                 ` Ricardo Koller
2022-12-09  1:08                 ` Sean Christopherson
2022-12-09  1:08                   ` Sean Christopherson
2022-12-09  1:08                   ` Sean Christopherson
2022-12-07 21:48 ` [PATCH 3/4] KVM: arm64: selftests: Align VA space allocator with TTBR0 Oliver Upton
2022-12-07 21:48   ` Oliver Upton
2022-12-07 21:48   ` Oliver Upton
2022-12-08  0:18   ` Sean Christopherson
2022-12-08  0:18     ` Sean Christopherson
2022-12-08  0:18     ` Sean Christopherson
2022-12-08  0:27     ` Oliver Upton
2022-12-08  0:27       ` Oliver Upton
2022-12-08  0:27       ` Oliver Upton
2022-12-08  1:09       ` Sean Christopherson
2022-12-08  1:09         ` Sean Christopherson
2022-12-08  1:09         ` Sean Christopherson
2022-12-08 16:23         ` Andrew Jones
2022-12-08 16:23           ` Andrew Jones
2022-12-08 16:23           ` Andrew Jones
2022-12-07 21:48 ` [PATCH 4/4] KVM: selftests: Allocate ucall pool from MEM_REGION_DATA Oliver Upton
2022-12-07 21:48   ` Oliver Upton
2022-12-07 21:48   ` Oliver Upton
2022-12-07 23:44   ` Sean Christopherson
2022-12-07 23:44     ` Sean Christopherson
2022-12-07 23:44     ` Sean Christopherson
2022-12-07 23:56     ` Oliver Upton
2022-12-07 23:56       ` Oliver Upton
2022-12-07 23:56       ` Oliver Upton

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.