* [RFC PATCH 0/4] KVM selftests for s390x
From: Thomas Huth @ 2019-05-16 11:12 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

This patch series enables the KVM selftests for s390x. As a first
test, the sync_regs test from x86 has been adapted to s390x.

Please note that the ucall() interface is not used yet: since s390x
has neither PIO nor MMIO, some more work is needed before it becomes
usable (we should likely use a DIAG hypercall here, which is what the
sync_regs test currently uses as well).
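
As a reference for what such a DIAG-based interface could look like,
here is a minimal guest-side exit sketch; the function name and the
use of general register 2 as a payload are assumptions, not part of
this series:

	static inline void guest_exit_to_host(unsigned long val)
	{
		register unsigned long r2 asm("2") = val;

		/* Trigger an instruction interception the host can decode */
		asm volatile ("diag 0,0,0x501" : : "d" (r2) : "memory");
	}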

Thomas Huth (4):
  KVM: selftests: Guard struct kvm_vcpu_events with
    __KVM_HAVE_VCPU_EVENTS
  KVM: selftests: Align memory region addresses to 1M on s390x
  KVM: selftests: Add processor code for s390x
  KVM: selftests: Add the sync_regs test for s390x

 MAINTAINERS                                   |   2 +
 tools/testing/selftests/kvm/Makefile          |   3 +
 .../testing/selftests/kvm/include/kvm_util.h  |   2 +
 .../selftests/kvm/include/s390x/processor.h   |  22 ++
 tools/testing/selftests/kvm/lib/kvm_util.c    |  24 +-
 .../selftests/kvm/lib/s390x/processor.c       | 277 ++++++++++++++++++
 .../selftests/kvm/s390x/sync_regs_test.c      | 151 ++++++++++
 7 files changed, 476 insertions(+), 5 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/s390x/processor.h
 create mode 100644 tools/testing/selftests/kvm/lib/s390x/processor.c
 create mode 100644 tools/testing/selftests/kvm/s390x/sync_regs_test.c

-- 
2.21.0


* [RFC PATCH 1/4] KVM: selftests: Guard struct kvm_vcpu_events with __KVM_HAVE_VCPU_EVENTS
From: Thomas Huth @ 2019-05-16 11:12 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

The struct kvm_vcpu_events code is only available on certain architectures
(arm, arm64 and x86). To be able to compile kvm_util.c for the other
architectures as well, guard the code with __KVM_HAVE_VCPU_EVENTS.
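
Callers need the same guard; a minimal sketch of how a test could use
these helpers portably (the surrounding vm/VCPU_ID setup is assumed
and not shown):

	#ifdef __KVM_HAVE_VCPU_EVENTS
		struct kvm_vcpu_events events;

		vcpu_events_get(vm, VCPU_ID, &events);
		/* ... inspect or modify events here ... */
		vcpu_events_set(vm, VCPU_ID, &events);
	#endif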

Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 tools/testing/selftests/kvm/include/kvm_util.h | 2 ++
 tools/testing/selftests/kvm/lib/kvm_util.c     | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 07b71ad9734a..1e46ab205038 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -114,10 +114,12 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
 		    struct kvm_sregs *sregs);
 int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
 		    struct kvm_sregs *sregs);
+#ifdef __KVM_HAVE_VCPU_EVENTS
 void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_vcpu_events *events);
 void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_vcpu_events *events);
+#endif
 
 const char *exit_reason_str(unsigned int exit_reason);
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 4ca96b228e46..8d63ccb93e10 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1224,6 +1224,7 @@ void vcpu_regs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_regs *regs)
 		ret, errno);
 }
 
+#ifdef __KVM_HAVE_VCPU_EVENTS
 void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_vcpu_events *events)
 {
@@ -1249,6 +1250,7 @@ void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
 	TEST_ASSERT(ret == 0, "KVM_SET_VCPU_EVENTS, failed, rc: %i errno: %i",
 		ret, errno);
 }
+#endif
 
 /*
  * VM VCPU System Regs Get
-- 
2.21.0


* [RFC PATCH 2/4] KVM: selftests: Align memory region addresses to 1M on s390x
From: Thomas Huth @ 2019-05-16 11:12 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

On s390x, memory regions have to be aligned to 1M, otherwise running
the VM will fail. Introduce a new "alignment" variable in the
vm_userspace_mem_region_add() function which can now be used for both
the huge page alignment and the s390x alignment requirement.
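
The underlying pattern is to over-allocate by the alignment and round
the mapping's start address up; boiled down to a sketch (names follow
the diff, the THP/hugetlb flags and error handling are left out):

	size_t alignment = 0x100000;	/* 1M on s390x */
	size_t mmap_size = npages * page_size + alignment;
	char *mmap_start = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	/* Round up to the next multiple of alignment, like align() does */
	char *host_mem = (char *)(((uintptr_t)mmap_start + alignment - 1)
				  & ~(uintptr_t)(alignment - 1));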

Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 21 +++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 8d63ccb93e10..64a0da6efe3d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -559,6 +559,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	unsigned long pmem_size = 0;
 	struct userspace_mem_region *region;
 	size_t huge_page_size = KVM_UTIL_PGS_PER_HUGEPG * vm->page_size;
+	size_t alignment;
 
 	TEST_ASSERT((guest_paddr % vm->page_size) == 0, "Guest physical "
 		"address not on a page boundary.\n"
@@ -608,9 +609,20 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	TEST_ASSERT(region != NULL, "Insufficient Memory");
 	region->mmap_size = npages * vm->page_size;
 
-	/* Enough memory to align up to a huge page. */
+#ifdef __s390x__
+	/* On s390x, the host address must be aligned to 1M (due to PGSTEs) */
+	alignment = 0x100000;
+#else
+	alignment = 1;
+#endif
+
 	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
-		region->mmap_size += huge_page_size;
+		alignment = huge_page_size;
+
+	/* Add enough memory to align up if necessary */
+	if (alignment > 1)
+		region->mmap_size += alignment;
+
 	region->mmap_start = mmap(NULL, region->mmap_size,
 				  PROT_READ | PROT_WRITE,
 				  MAP_PRIVATE | MAP_ANONYMOUS
@@ -620,9 +632,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 		    "test_malloc failed, mmap_start: %p errno: %i",
 		    region->mmap_start, errno);
 
-	/* Align THP allocation up to start of a huge page. */
-	region->host_mem = align(region->mmap_start,
-				 src_type == VM_MEM_SRC_ANONYMOUS_THP ?  huge_page_size : 1);
+	/* Align host address */
+	region->host_mem = align(region->mmap_start, alignment);
 
 	/* As needed perform madvise */
 	if (src_type == VM_MEM_SRC_ANONYMOUS || src_type == VM_MEM_SRC_ANONYMOUS_THP) {
-- 
2.21.0


* [RFC PATCH 3/4] KVM: selftests: Add processor code for s390x
From: Thomas Huth @ 2019-05-16 11:12 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

Add the code that takes care of basic CPU setup, page table walking,
etc., for s390x.
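
For orientation: the page-table walk in the new processor.c derives
its table indexes from a 64-bit guest virtual address with four 11-bit
region/segment indexes plus an 8-bit page index (4K pages), i.e. the
"(gva >> (64 - 11 * ri)) & 0x7ffu" loop below spelled out is:

	unsigned int idx1 = (gva >> 53) & 0x7ff;	/* region-first table  */
	unsigned int idx2 = (gva >> 42) & 0x7ff;	/* region-second table */
	unsigned int idx3 = (gva >> 31) & 0x7ff;	/* region-third table  */
	unsigned int idx4 = (gva >> 20) & 0x7ff;	/* segment table       */
	unsigned int pgidx = (gva >> 12) & 0xff;	/* page-table index    */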

Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 MAINTAINERS                                   |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/s390x/processor.h   |  22 ++
 .../selftests/kvm/lib/s390x/processor.c       | 277 ++++++++++++++++++
 4 files changed, 301 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/include/s390x/processor.h
 create mode 100644 tools/testing/selftests/kvm/lib/s390x/processor.c

diff --git a/MAINTAINERS b/MAINTAINERS
index ee6cf4d1010c..514d1f88ee26 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8645,6 +8645,7 @@ F:	arch/s390/include/asm/gmap.h
 F:	arch/s390/include/asm/kvm*
 F:	arch/s390/kvm/
 F:	arch/s390/mm/gmap.c
+F:	tools/testing/selftests/kvm/*/s390x/
 
 KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86)
 M:	Paolo Bonzini <pbonzini@redhat.com>
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index f8588cca2bef..690422c78fb2 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -9,6 +9,7 @@ UNAME_M := $(shell uname -m)
 LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/ucall.c lib/sparsebit.c
 LIBKVM_x86_64 = lib/x86_64/processor.c lib/x86_64/vmx.c
 LIBKVM_aarch64 = lib/aarch64/processor.c
+LIBKVM_s390x = lib/s390x/processor.c
 
 TEST_GEN_PROGS_x86_64 = x86_64/platform_info_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
diff --git a/tools/testing/selftests/kvm/include/s390x/processor.h b/tools/testing/selftests/kvm/include/s390x/processor.h
new file mode 100644
index 000000000000..e0e96a5f608c
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/s390x/processor.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * s390x processor specific defines
+ */
+#ifndef SELFTEST_KVM_PROCESSOR_H
+#define SELFTEST_KVM_PROCESSOR_H
+
+/* Bits in the region/segment table entry */
+#define REGION_ENTRY_ORIGIN	~0xfffUL /* region/segment table origin	   */
+#define REGION_ENTRY_PROTECT	0x200	 /* region protection bit	   */
+#define REGION_ENTRY_NOEXEC	0x100	 /* region no-execute bit	   */
+#define REGION_ENTRY_OFFSET	0xc0	 /* region table offset		   */
+#define REGION_ENTRY_INVALID	0x20	 /* invalid region table entry	   */
+#define REGION_ENTRY_TYPE	0x0c	 /* region/segment table type mask */
+#define REGION_ENTRY_LENGTH	0x03	 /* region third length		   */
+
+/* Bits in the page table entry */
+#define PAGE_INVALID	0x400		/* HW invalid bit    */
+#define PAGE_PROTECT	0x200		/* HW read-only bit  */
+#define PAGE_NOEXEC	0x100		/* HW no-execute bit */
+
+#endif
diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c
new file mode 100644
index 000000000000..d882b66f3e24
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/s390x/processor.c
@@ -0,0 +1,277 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KVM selftest s390x library code - CPU-related functions (page tables...)
+ *
+ * Copyright (C) 2019, Red Hat, Inc.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_name */
+
+#include "processor.h"
+#include "kvm_util.h"
+#include "../kvm_util_internal.h"
+
+#define KVM_GUEST_PAGE_TABLE_MIN_PADDR		0x180000
+
+#define PAGES_PER_REGION 4
+
+void virt_pgd_alloc(struct kvm_vm *vm, uint32_t memslot)
+{
+	vm_paddr_t paddr;
+
+	TEST_ASSERT(vm->page_size == 4096, "Unsupported page size: 0x%x",
+		    vm->page_size);
+
+	if (vm->pgd_created)
+		return;
+
+	paddr = vm_phy_pages_alloc(vm, PAGES_PER_REGION,
+				   KVM_GUEST_PAGE_TABLE_MIN_PADDR, memslot);
+	memset(addr_gpa2hva(vm, paddr), 0xff, PAGES_PER_REGION * vm->page_size);
+
+	vm->pgd = paddr;
+	vm->pgd_created = true;
+}
+
+static uint64_t virt_alloc_region(struct kvm_vm *vm, int ri, uint32_t memslot)
+{
+	uint64_t taddr, entry;
+
+	taddr = vm_phy_pages_alloc(vm, PAGES_PER_REGION,
+				   KVM_GUEST_PAGE_TABLE_MIN_PADDR, memslot);
+	memset(addr_gpa2hva(vm, taddr), 0xff, PAGES_PER_REGION * vm->page_size);
+
+	entry = (taddr & REGION_ENTRY_ORIGIN)
+		| (((4 - ri) << 2) & REGION_ENTRY_TYPE)
+		| ((ri < 4 ? (PAGES_PER_REGION - 1) : 0) & REGION_ENTRY_LENGTH);
+
+	return entry;
+}
+
+/*
+ * VM Virtual Page Map
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   gva - VM Virtual Address
+ *   gpa - VM Physical Address
+ *   memslot - Memory region slot for new virtual translation tables
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Within the VM given by vm, creates a virtual translation for the page
+ * starting at vaddr to the page starting at paddr.
+ */
+void virt_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa,
+		 uint32_t memslot)
+{
+	int ri, idx;
+	uint64_t *entry;
+
+	TEST_ASSERT((gva % vm->page_size) == 0,
+		"Virtual address not on page boundary,\n"
+		"  vaddr: 0x%lx vm->page_size: 0x%x",
+		gva, vm->page_size);
+	TEST_ASSERT(sparsebit_is_set(vm->vpages_valid,
+		(gva >> vm->page_shift)),
+		"Invalid virtual address, vaddr: 0x%lx",
+		gva);
+	TEST_ASSERT((gpa % vm->page_size) == 0,
+		"Physical address not on page boundary,\n"
+		"  paddr: 0x%lx vm->page_size: 0x%x",
+		gpa, vm->page_size);
+	TEST_ASSERT((gpa >> vm->page_shift) <= vm->max_gfn,
+		"Physical address beyond maximum supported,\n"
+		"  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
+		gpa, vm->max_gfn, vm->page_size);
+
+	/* Walk through region and segment tables */
+	entry = addr_gpa2hva(vm, vm->pgd);
+	for (ri = 1; ri <= 4; ri++) {
+		idx = (gva >> (64 - 11 * ri)) & 0x7ffu;
+		if (entry[idx] & REGION_ENTRY_INVALID)
+			entry[idx] = virt_alloc_region(vm, ri, memslot);
+		entry = addr_gpa2hva(vm, entry[idx] & REGION_ENTRY_ORIGIN);
+	}
+
+	/* Fill in page table entry */
+	idx = (gva >> 12) & 0x0ffu;		/* page index */
+	if (!(entry[idx] & PAGE_INVALID))
+		fprintf(stderr,
+			"WARNING: PTE for gpa=0x%"PRIx64" already set!\n", gpa);
+	entry[idx] = gpa;
+}
+
+/*
+ * Address Guest Virtual to Guest Physical
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   gva - VM virtual address
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Equivalent VM physical address
+ *
+ * Translates the VM virtual address given by gva to the equivalent VM
+ * physical address by walking the guest translation tables of the VM
+ * given by vm.
+ * A TEST_ASSERT failure occurs if no mapping exists for the given
+ * VM virtual address.
+ */
+vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+{
+	int ri, idx;
+	uint64_t *entry;
+
+	TEST_ASSERT(vm->page_size == 4096, "Unsupported page size: 0x%x",
+		    vm->page_size);
+
+	entry = addr_gpa2hva(vm, vm->pgd);
+	for (ri = 1; ri <= 4; ri++) {
+		idx = (gva >> (64 - 11 * ri)) & 0x7ffu;
+		TEST_ASSERT(!(entry[idx] & REGION_ENTRY_INVALID),
+			    "No region mapping for vm virtual address 0x%lx",
+			    gva);
+		entry = addr_gpa2hva(vm, entry[idx] & REGION_ENTRY_ORIGIN);
+	}
+
+	idx = (gva >> 12) & 0x0ffu;		/* page index */
+
+	TEST_ASSERT(!(entry[idx] & PAGE_INVALID),
+		    "No page mapping for vm virtual address 0x%lx", gva);
+
+	return (entry[idx] & ~0xffful) + (gva & 0xffful);
+}
+
+static void virt_dump_ptes(FILE *stream, struct kvm_vm *vm, uint8_t indent,
+			   uint64_t ptea_start)
+{
+	uint64_t *pte, ptea;
+
+	for (ptea = ptea_start; ptea < ptea_start + 0x100 * 8; ptea += 8) {
+		pte = addr_gpa2hva(vm, ptea);
+		if (*pte & PAGE_INVALID)
+			continue;
+		fprintf(stream, "%*spte @ 0x%lx: 0x%016lx\n",
+			indent, "", ptea, *pte);
+	}
+}
+
+static void virt_dump_region(FILE *stream, struct kvm_vm *vm, uint8_t indent,
+			     uint64_t reg_tab_addr)
+{
+	uint64_t addr, *entry;
+
+	for (addr = reg_tab_addr; addr < reg_tab_addr + 0x400 * 8; addr += 8) {
+		entry = addr_gpa2hva(vm, addr);
+		if (*entry & REGION_ENTRY_INVALID)
+			continue;
+		fprintf(stream, "%*srt%lde @ 0x%lx: 0x%016lx\n",
+			indent, "", 4 - ((*entry & REGION_ENTRY_TYPE) >> 2),
+			addr, *entry);
+		if (*entry & REGION_ENTRY_TYPE) {
+			virt_dump_region(stream, vm, indent + 2,
+					 *entry & REGION_ENTRY_ORIGIN);
+		} else {
+			virt_dump_ptes(stream, vm, indent + 2,
+				       *entry & REGION_ENTRY_ORIGIN);
+		}
+	}
+}
+
+void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+{
+	if (!vm->pgd_created)
+		return;
+
+	virt_dump_region(stream, vm, indent, vm->pgd);
+}
+
+/*
+ * Create a VM with reasonable defaults
+ *
+ * Input Args:
+ *   vcpuid - The id of the single VCPU to add to the VM.
+ *   extra_mem_pages - The size of extra memories to add (this will
+ *                     decide how much extra space we will need to
+ *                     setup the page tables using mem slot 0)
+ *   guest_code - The vCPU's entry point
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Pointer to opaque structure that describes the created VM.
+ */
+struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
+				 void *guest_code)
+{
+	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
+	struct kvm_vm *vm;
+
+	/* VM_MODE_P52V48_4K is just a dummy, we don't really use it */
+	vm = vm_create(VM_MODE_P52V48_4K,
+		       DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+
+	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
+	vm_vcpu_add_default(vm, vcpuid, guest_code);
+
+	return vm;
+}
+
+/*
+ * Adds a vCPU with reasonable defaults (i.e. a stack and initial PSW)
+ *
+ * Input Args:
+ *   vcpuid - The id of the VCPU to add to the VM.
+ *   guest_code - The vCPU's entry point
+ */
+void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code)
+{
+	size_t stack_size = DEFAULT_STACK_PGS * getpagesize();
+	uint64_t stack_vaddr;
+	struct kvm_regs regs;
+	struct kvm_run *run;
+
+	stack_vaddr = vm_vaddr_alloc(vm, stack_size,
+				     DEFAULT_GUEST_STACK_VADDR_MIN, 0, 0);
+
+	vm_vcpu_add(vm, vcpuid, 0, 0);
+
+	/* Setup guest general purpose registers */
+	vcpu_regs_get(vm, vcpuid, &regs);
+	regs.gprs[15] = stack_vaddr + (DEFAULT_STACK_PGS * getpagesize()) - 160;
+	vcpu_regs_set(vm, vcpuid, &regs);
+
+	run = vcpu_state(vm, vcpuid);
+	run->psw_mask = 0x0400000180000000ULL;  /* DAT enabled + 64 bit mode */
+	run->psw_addr = (uintptr_t)guest_code;
+}
+
+void vcpu_setup(struct kvm_vm *vm, int vcpuid, int pgd_memslot, int gdt_memslot)
+{
+	struct kvm_sregs sregs;
+
+	TEST_ASSERT(vm->pgd_created, "Page tables have not been created yet");
+	TEST_ASSERT(vm->page_size == 4096, "Unsupported page size: 0x%x",
+		    vm->page_size);
+
+	/* Set mode specific system register values. */
+	vcpu_sregs_get(vm, vcpuid, &sregs);
+	sregs.crs[0] |= 0x00040000;		/* Enable floating point regs */
+	sregs.crs[1] = vm->pgd | 0xf;		/* Primary region table */
+	vcpu_sregs_set(vm, vcpuid, &sregs);
+}
+
+void vcpu_dump(FILE *stream, struct kvm_vm *vm, uint32_t vcpuid, uint8_t indent)
+{
+	struct vcpu *vcpu = vm->vcpu_head;
+
+	fprintf(stream, "%*spstate: psw: 0x%.16llx:0x%.16llx\n",
+		indent, "", vcpu->state->psw_mask, vcpu->state->psw_addr);
+}
-- 
2.21.0


* [RFC PATCH 4/4] KVM: selftests: Add the sync_regs test for s390x
From: Thomas Huth @ 2019-05-16 11:12 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

The test is an adaptation of the same test for x86. Note that there
are some differences in the way s390x deals with the kvm_valid_regs
in struct kvm_run, so some of the tests had to be removed. Also, this
test does not use the ucall() interface on s390x yet (which would need
some work to become usable on s390x), so it simply drops out of the VM
with a diag 0x501 breakpoint instead.
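
Condensed from the test below, the guest/host handshake around that
breakpoint looks as follows (the field layout is as the test decodes
it):

	/* Guest side: trigger an exit the host can recognize */
	asm volatile ("diag 0,0,0x501");

	/* Host side: icptcode 4 is an instruction interception; ipa
	 * carries the first two instruction bytes (0x83.. = DIAG) and
	 * the upper half of ipb carries the diag code */
	TEST_ASSERT(run->exit_reason == KVM_EXIT_S390_SIEIC &&
		    run->s390_sieic.icptcode == 4 &&
		    (run->s390_sieic.ipa >> 8) == 0x83 &&
		    (run->s390_sieic.ipb >> 16) == 0x501,
		    "expected a diag 0x501 intercept");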

Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 MAINTAINERS                                   |   1 +
 tools/testing/selftests/kvm/Makefile          |   2 +
 .../selftests/kvm/s390x/sync_regs_test.c      | 151 ++++++++++++++++++
 3 files changed, 154 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/s390x/sync_regs_test.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 514d1f88ee26..68f76ee9e821 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8645,6 +8645,7 @@ F:	arch/s390/include/asm/gmap.h
 F:	arch/s390/include/asm/kvm*
 F:	arch/s390/kvm/
 F:	arch/s390/mm/gmap.c
+F:	tools/testing/selftests/kvm/s390x/
 F:	tools/testing/selftests/kvm/*/s390x/
 
 KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86)
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 690422c78fb2..128b3551dfd0 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -27,6 +27,8 @@ TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
 TEST_GEN_PROGS_aarch64 += clear_dirty_log_test
 
+TEST_GEN_PROGS_s390x += s390x/sync_regs_test
+
 TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(UNAME_M))
 LIBKVM += $(LIBKVM_$(UNAME_M))
 
diff --git a/tools/testing/selftests/kvm/s390x/sync_regs_test.c b/tools/testing/selftests/kvm/s390x/sync_regs_test.c
new file mode 100644
index 000000000000..e85ff0d69548
--- /dev/null
+++ b/tools/testing/selftests/kvm/s390x/sync_regs_test.c
@@ -0,0 +1,151 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Test for s390x KVM_CAP_SYNC_REGS
+ *
+ * Based on the same test for x86:
+ * Copyright (C) 2018, Google LLC.
+ *
+ * Adaptations for s390x:
+ * Copyright (C) 2019, Red Hat, Inc.
+ *
+ * Test expected behavior of the KVM_CAP_SYNC_REGS functionality.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+
+#define VCPU_ID 5
+
+static void guest_code(void)
+{
+	for (;;) {
+		asm volatile ("diag 0,0,0x501");
+		asm volatile ("ahi 11,1");
+	}
+}
+
+#define REG_COMPARE(reg) \
+	TEST_ASSERT(left->reg == right->reg, \
+		    "Register " #reg \
+		    " values did not match: 0x%llx, 0x%llx\n", \
+		    left->reg, right->reg)
+
+static void compare_regs(struct kvm_regs *left, struct kvm_sync_regs *right)
+{
+	int i;
+
+	for (i = 0; i < 16; i++)
+		REG_COMPARE(gprs[i]);
+}
+
+static void compare_sregs(struct kvm_sregs *left, struct kvm_sync_regs *right)
+{
+	int i;
+
+	for (i = 0; i < 16; i++)
+		REG_COMPARE(acrs[i]);
+
+	for (i = 0; i < 16; i++)
+		REG_COMPARE(crs[i]);
+}
+
+#undef REG_COMPARE
+
+#define TEST_SYNC_FIELDS   (KVM_SYNC_GPRS|KVM_SYNC_ACRS|KVM_SYNC_CRS)
+#define INVALID_SYNC_FIELD 0x80000000
+
+int main(int argc, char *argv[])
+{
+	struct kvm_vm *vm;
+	struct kvm_run *run;
+	struct kvm_regs regs;
+	struct kvm_sregs sregs;
+	int rv, cap;
+
+	/* Tell stdout not to buffer its content */
+	setbuf(stdout, NULL);
+
+	cap = kvm_check_cap(KVM_CAP_SYNC_REGS);
+	if (!cap) {
+		fprintf(stderr, "CAP_SYNC_REGS not supported, skipping test\n");
+		exit(KSFT_SKIP);
+	}
+
+	/* Create VM */
+	vm = vm_create_default(VCPU_ID, 0, guest_code);
+
+	run = vcpu_state(vm, VCPU_ID);
+
+	/* Request and verify all valid register sets. */
+	run->kvm_valid_regs = TEST_SYNC_FIELDS;
+	rv = _vcpu_run(vm, VCPU_ID);
+	TEST_ASSERT(rv == 0, "vcpu_run failed: %d\n", rv);
+	TEST_ASSERT(run->exit_reason == KVM_EXIT_S390_SIEIC,
+		    "Unexpected exit reason: %u (%s)\n",
+		    run->exit_reason,
+		    exit_reason_str(run->exit_reason));
+	TEST_ASSERT(run->s390_sieic.icptcode == 4 &&
+		    (run->s390_sieic.ipa >> 8) == 0x83 &&
+		    (run->s390_sieic.ipb >> 16) == 0x501,
+		    "Unexpected interception code: ic=%u, ipa=0x%x, ipb=0x%x\n",
+		    run->s390_sieic.icptcode, run->s390_sieic.ipa,
+		    run->s390_sieic.ipb);
+
+	vcpu_regs_get(vm, VCPU_ID, &regs);
+	compare_regs(&regs, &run->s.regs);
+
+	vcpu_sregs_get(vm, VCPU_ID, &sregs);
+	compare_sregs(&sregs, &run->s.regs);
+
+	/* Set and verify various register values */
+	run->s.regs.gprs[11] = 0xBAD1DEA;
+	run->s.regs.acrs[0] = 1 << 11;
+
+	run->kvm_valid_regs = TEST_SYNC_FIELDS;
+	run->kvm_dirty_regs = KVM_SYNC_GPRS | KVM_SYNC_ACRS;
+	rv = _vcpu_run(vm, VCPU_ID);
+	TEST_ASSERT(rv == 0, "vcpu_run failed: %d\n", rv);
+	TEST_ASSERT(run->exit_reason == KVM_EXIT_S390_SIEIC,
+		    "Unexpected exit reason: %u (%s)\n",
+		    run->exit_reason,
+		    exit_reason_str(run->exit_reason));
+	TEST_ASSERT(run->s.regs.gprs[11] == 0xBAD1DEA + 1,
+		    "r11 sync regs value incorrect 0x%llx.",
+		    run->s.regs.gprs[11]);
+	TEST_ASSERT(run->s.regs.acrs[0]  == 1 << 11,
+		    "acr0 sync regs value incorrect 0x%llx.",
+		    run->s.regs.acrs[0]);
+
+	vcpu_regs_get(vm, VCPU_ID, &regs);
+	compare_regs(&regs, &run->s.regs);
+
+	vcpu_sregs_get(vm, VCPU_ID, &sregs);
+	compare_sregs(&sregs, &run->s.regs);
+
+	/* Clear kvm_dirty_regs bits, verify new s.regs values are
+	 * overwritten with existing guest values.
+	 */
+	run->kvm_valid_regs = TEST_SYNC_FIELDS;
+	run->kvm_dirty_regs = 0;
+	run->s.regs.gprs[11] = 0xDEADBEEF;
+	rv = _vcpu_run(vm, VCPU_ID);
+	TEST_ASSERT(rv == 0, "vcpu_run failed: %d\n", rv);
+	TEST_ASSERT(run->exit_reason == KVM_EXIT_S390_SIEIC,
+		    "Unexpected exit reason: %u (%s)\n",
+		    run->exit_reason,
+		    exit_reason_str(run->exit_reason));
+	TEST_ASSERT(run->s.regs.gprs[11] != 0xDEADBEEF,
+		    "r11 sync regs value incorrect 0x%llx.",
+		    run->s.regs.gprs[11]);
+
+	kvm_vm_free(vm);
+
+	return 0;
+}
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 58+ messages in thread

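For readers who have not decoded SIE intercepts before, here is a
minimal sketch of why the assertions in the test above match the
guest's "diag 0,0,0x501" (my decoding of the fields, not quoted from
the patch):

	/* Hedged sketch: matching a diag 0x501 software breakpoint
	 * intercept. DIAGNOSE has opcode 0x83, so the first instruction
	 * halfword (reported in ipa) is 0x83xx; the upper 16 bits of ipb
	 * hold the base/displacement halfword with the 0x501 function
	 * code; icptcode 4 is an instruction interception. Needs
	 * <linux/kvm.h> for struct kvm_run and KVM_EXIT_S390_SIEIC.
	 */
	static int is_diag_501(struct kvm_run *run)
	{
		return run->exit_reason == KVM_EXIT_S390_SIEIC &&
		       run->s390_sieic.icptcode == 4 &&
		       (run->s390_sieic.ipa >> 8) == 0x83 &&
		       (run->s390_sieic.ipb >> 16) == 0x501;
	}

As for the sync-regs flags themselves: kvm_valid_regs asks the kernel
to copy the requested register sets into run->s.regs on every exit,
while kvm_dirty_regs makes it load them back from run->s.regs before
the next KVM_RUN, which is what the last test case relies on when it
clears kvm_dirty_regs and expects its 0xDEADBEEF write to be
overwritten with the guest value.
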
* Re: [RFC PATCH 1/4] KVM: selftests: Guard struct kvm_vcpu_events with __KVM_HAVE_VCPU_EVENTS
  2019-05-16 11:12   ` thuth
  (?)
@ 2019-05-16 11:22     ` david
  -1 siblings, 0 replies; 58+ messages in thread
From: David Hildenbrand @ 2019-05-16 11:22 UTC (permalink / raw)
  To: Thomas Huth, Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, Cornelia Huck, Andrew Jones, linux-kernel,
	linux-kselftest, linux-s390

On 16.05.19 13:12, Thomas Huth wrote:
> The struct kvm_vcpu_events code is only available on certain architectures
> (arm, arm64 and x86). To be able to compile kvm_util.c also for other
> architectures, we've got to fence the code with __KVM_HAVE_VCPU_EVENTS.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  tools/testing/selftests/kvm/include/kvm_util.h | 2 ++
>  tools/testing/selftests/kvm/lib/kvm_util.c     | 2 ++
>  2 files changed, 4 insertions(+)
> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 07b71ad9734a..1e46ab205038 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -114,10 +114,12 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>  		    struct kvm_sregs *sregs);
>  int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>  		    struct kvm_sregs *sregs);
> +#ifdef __KVM_HAVE_VCPU_EVENTS
>  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
>  		     struct kvm_vcpu_events *events);
>  void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
>  		     struct kvm_vcpu_events *events);
> +#endif
>  
>  const char *exit_reason_str(unsigned int exit_reason);
>  
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 4ca96b228e46..8d63ccb93e10 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1224,6 +1224,7 @@ void vcpu_regs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_regs *regs)
>  		ret, errno);
>  }
>  
> +#ifdef __KVM_HAVE_VCPU_EVENTS
>  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
>  		     struct kvm_vcpu_events *events)
>  {
> @@ -1249,6 +1250,7 @@ void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
>  	TEST_ASSERT(ret == 0, "KVM_SET_VCPU_EVENTS, failed, rc: %i errno: %i",
>  		ret, errno);
>  }
> +#endif
>  
>  /*
>   * VM VCPU System Regs Get
> 

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 58+ messages in thread

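In case it is not obvious where the guard comes from:
__KVM_HAVE_VCPU_EVENTS is defined in the per-architecture uapi headers
(asm/kvm.h), which <linux/kvm.h> pulls in, so userspace picks it up
automatically. A minimal usage sketch (hypothetical, not from the
patch):

	#include <linux/kvm.h>	/* includes <asm/kvm.h> */

	#ifdef __KVM_HAVE_VCPU_EVENTS
	/* struct kvm_vcpu_events and KVM_GET/SET_VCPU_EVENTS exist */
	#endif
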
* Re: [RFC PATCH 2/4] KVM: selftests: Align memory region addresses to 1M on s390x
  2019-05-16 11:12   ` thuth
  (?)
@ 2019-05-16 11:30     ` david
  -1 siblings, 0 replies; 58+ messages in thread
From: David Hildenbrand @ 2019-05-16 11:30 UTC (permalink / raw)
  To: Thomas Huth, Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, Cornelia Huck, Andrew Jones, linux-kernel,
	linux-kselftest, linux-s390

On 16.05.19 13:12, Thomas Huth wrote:
> On s390x, there is a constraint that memory regions have to be aligned
> to 1M (or running the VM will fail). Introduce a new "alignment" variable
> in the vm_userspace_mem_region_add() function which now can be used for
> both, huge page and s390x alignment requirements.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c | 21 +++++++++++++++++-----
>  1 file changed, 16 insertions(+), 5 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 8d63ccb93e10..64a0da6efe3d 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -559,6 +559,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>  	unsigned long pmem_size = 0;
>  	struct userspace_mem_region *region;
>  	size_t huge_page_size = KVM_UTIL_PGS_PER_HUGEPG * vm->page_size;
> +	size_t alignment;
>  
>  	TEST_ASSERT((guest_paddr % vm->page_size) == 0, "Guest physical "
>  		"address not on a page boundary.\n"
> @@ -608,9 +609,20 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>  	TEST_ASSERT(region != NULL, "Insufficient Memory");
>  	region->mmap_size = npages * vm->page_size;
>  
> -	/* Enough memory to align up to a huge page. */
> +#ifdef __s390x__
> +	/* On s390x, the host address must be aligned to 1M (due to PGSTEs) */
> +	alignment = 0x100000;

This corresponds to huge_page_size, maybe you can exploit this fact here.

Something like

alignment = 1;

/* On s390x, the host address must always be aligned to the THP size */
#ifndef __s390x__
if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
#endif
	alignment = huge_page_size;

Maybe in a nicer fashion. Not sure.

> +#else
> +	alignment = 1;
> +#endif
> +
>  	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
> -		region->mmap_size += huge_page_size;
> +		alignment = huge_page_size;
> +
> +	/* Add enough memory to align up if necessary */
> +	if (alignment > 1)
> +		region->mmap_size += alignment;
> +
>  	region->mmap_start = mmap(NULL, region->mmap_size,
>  				  PROT_READ | PROT_WRITE,
>  				  MAP_PRIVATE | MAP_ANONYMOUS
> @@ -620,9 +632,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>  		    "test_malloc failed, mmap_start: %p errno: %i",
>  		    region->mmap_start, errno);
>  
> -	/* Align THP allocation up to start of a huge page. */
> -	region->host_mem = align(region->mmap_start,
> -				 src_type == VM_MEM_SRC_ANONYMOUS_THP ?  huge_page_size : 1);
> +	/* Align host address */
> +	region->host_mem = align(region->mmap_start, alignment);
>  
>  	/* As needed perform madvise */
>  	if (src_type == VM_MEM_SRC_ANONYMOUS || src_type == VM_MEM_SRC_ANONYMOUS_THP) {
> 


-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 58+ messages in thread

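The pattern under discussion (map a little extra, then round the start
address up) is needed because mmap() only guarantees page alignment. A
rough sketch of an align() helper as I understand the one in
kvm_util.c (assumed shape, not quoted from the tree):

	/* Round x up to the next multiple of size (a power of two). */
	static void *align(void *x, size_t size)
	{
		size_t mask = size - 1;

		return (void *)(((size_t)x + mask) & ~mask);
	}

With alignment = 0x100000 this wastes at most 1M minus one page per
region, which is harmless for the selftests.
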
* Re: [RFC PATCH 2/4] KVM: selftests: Align memory region addresses to 1M on s390x
  2019-05-16 11:30     ` david
  (?)
@ 2019-05-16 11:59       ` thuth
  -1 siblings, 0 replies; 58+ messages in thread
From: Thomas Huth @ 2019-05-16 11:59 UTC (permalink / raw)
  To: David Hildenbrand, Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, Cornelia Huck, Andrew Jones, linux-kernel,
	linux-kselftest, linux-s390

On 16/05/2019 13.30, David Hildenbrand wrote:
> On 16.05.19 13:12, Thomas Huth wrote:
>> On s390x, there is a constraint that memory regions have to be aligned
>> to 1M (or running the VM will fail). Introduce a new "alignment" variable
>> in the vm_userspace_mem_region_add() function which now can be used for
>> both, huge page and s390x alignment requirements.
>>
>> Signed-off-by: Thomas Huth <thuth@redhat.com>
>> ---
>>  tools/testing/selftests/kvm/lib/kvm_util.c | 21 +++++++++++++++++-----
>>  1 file changed, 16 insertions(+), 5 deletions(-)
>>
>> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
>> index 8d63ccb93e10..64a0da6efe3d 100644
>> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
>> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
>> @@ -559,6 +559,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>>  	unsigned long pmem_size = 0;
>>  	struct userspace_mem_region *region;
>>  	size_t huge_page_size = KVM_UTIL_PGS_PER_HUGEPG * vm->page_size;
>> +	size_t alignment;
>>  
>>  	TEST_ASSERT((guest_paddr % vm->page_size) == 0, "Guest physical "
>>  		"address not on a page boundary.\n"
>> @@ -608,9 +609,20 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>>  	TEST_ASSERT(region != NULL, "Insufficient Memory");
>>  	region->mmap_size = npages * vm->page_size;
>>  
>> -	/* Enough memory to align up to a huge page. */
>> +#ifdef __s390x__
>> +	/* On s390x, the host address must be aligned to 1M (due to PGSTEs) */
>> +	alignment = 0x100000;
> 
> This corresponds to huge_page_size, maybe you can exploit this fact here.
> 
> Something like
> 
> alignment = 1;
> 
> /* On s390x, the host address must always be aligned to the THP size */
> #ifndef __s390x__
> if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
> #endif
> 	alignment = huge_page_size;
> 
> Maybe in a nicer fashion. Not sure.

Hmm, but if I've got your explanation on IRC right, it's rather a
coincidence that the huge page size matches the alignment requirements
for KVM memslots, isn't it? So I think the code would look rather
confusing if I tried to shorten it this way...?

 Thomas

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC PATCH 2/4] KVM: selftests: Align memory region addresses to 1M on s390x
  2019-05-16 11:59       ` thuth
  (?)
@ 2019-05-16 12:08         ` david
  -1 siblings, 0 replies; 58+ messages in thread
From: David Hildenbrand @ 2019-05-16 12:08 UTC (permalink / raw)
  To: Thomas Huth, Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, Cornelia Huck, Andrew Jones, linux-kernel,
	linux-kselftest, linux-s390

On 16.05.19 13:59, Thomas Huth wrote:
> On 16/05/2019 13.30, David Hildenbrand wrote:
>> On 16.05.19 13:12, Thomas Huth wrote:
>>> On s390x, there is a constraint that memory regions have to be aligned
>>> to 1M (or running the VM will fail). Introduce a new "alignment" variable
>>> in the vm_userspace_mem_region_add() function which now can be used for
>>> both, huge page and s390x alignment requirements.
>>>
>>> Signed-off-by: Thomas Huth <thuth@redhat.com>
>>> ---
>>>  tools/testing/selftests/kvm/lib/kvm_util.c | 21 +++++++++++++++++-----
>>>  1 file changed, 16 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
>>> index 8d63ccb93e10..64a0da6efe3d 100644
>>> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
>>> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
>>> @@ -559,6 +559,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>>>  	unsigned long pmem_size = 0;
>>>  	struct userspace_mem_region *region;
>>>  	size_t huge_page_size = KVM_UTIL_PGS_PER_HUGEPG * vm->page_size;
>>> +	size_t alignment;
>>>  
>>>  	TEST_ASSERT((guest_paddr % vm->page_size) == 0, "Guest physical "
>>>  		"address not on a page boundary.\n"
>>> @@ -608,9 +609,20 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>>>  	TEST_ASSERT(region != NULL, "Insufficient Memory");
>>>  	region->mmap_size = npages * vm->page_size;
>>>  
>>> -	/* Enough memory to align up to a huge page. */
>>> +#ifdef __s390x__
>>> +	/* On s390x, the host address must be aligned to 1M (due to PGSTEs) */
>>> +	alignment = 0x100000;
>>
>> This corresponds to huge_page_size, maybe you can exploit this fact here.
>>
>> Something like
>>
>> alignment = 1;
>>
>> /* On s390x, the host address must always be aligned to the THP size */
>> #ifndef __s390x__
>> if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
>> #endif
>> 	alignment = huge_page_size;
>>
>> Maybe in a nicer fashion. Not sure.
> 
> Hmm, but if I've got your explanation on IRC right, it's rather a
> coincidence that the huge page size matches the alignment requirements
> for KVM memslots, isn't it? So I think the code would look rather
> confusing if I'd try to shorten it this way...?

Well, it's not really a coincidence. We have to share page tables
between the gmap and the user space process. One huge page corresponds
to the pages covered by a page table. So the page table "size" dictates
the alignment of both things.

But this is just nitpicking; do it the way you prefer, I just wanted
to point it out :)

> 
>  Thomas
> 


-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 58+ messages in thread

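To spell out the arithmetic behind this (standard s390 numbers, not
quoted from the thread): one page table has 256 entries mapping 4 KiB
pages each, so it covers 256 * 4 KiB = 1 MiB, i.e. one segment, which
is also the huge/THP page size. Sharing the page tables between the
gmap and the process therefore forces the same 1 MiB alignment on
both.
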
* Re: [RFC PATCH 1/4] KVM: selftests: Guard struct kvm_vcpu_events with __KVM_HAVE_VCPU_EVENTS
  2019-05-16 11:12   ` thuth
  (?)
@ 2019-05-20  7:12     ` borntraeger
  -1 siblings, 0 replies; 58+ messages in thread
From: Christian Borntraeger @ 2019-05-20  7:12 UTC (permalink / raw)
  To: Thomas Huth, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390


On 16.05.19 13:12, Thomas Huth wrote:
> The struct kvm_vcpu_events code is only available on certain architectures
> (arm, arm64 and x86). To be able to compile kvm_util.c also for other
> architectures, we've got to fence the code with __KVM_HAVE_VCPU_EVENTS.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>

According to the MAINTAINERS patches, you want me to pick up these patches. Correct?


> ---
>  tools/testing/selftests/kvm/include/kvm_util.h | 2 ++
>  tools/testing/selftests/kvm/lib/kvm_util.c     | 2 ++
>  2 files changed, 4 insertions(+)
> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 07b71ad9734a..1e46ab205038 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -114,10 +114,12 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>  		    struct kvm_sregs *sregs);
>  int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>  		    struct kvm_sregs *sregs);
> +#ifdef __KVM_HAVE_VCPU_EVENTS
>  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
>  		     struct kvm_vcpu_events *events);
>  void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
>  		     struct kvm_vcpu_events *events);
> +#endif
>  
>  const char *exit_reason_str(unsigned int exit_reason);
>  
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 4ca96b228e46..8d63ccb93e10 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1224,6 +1224,7 @@ void vcpu_regs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_regs *regs)
>  		ret, errno);
>  }
>  
> +#ifdef __KVM_HAVE_VCPU_EVENTS
>  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
>  		     struct kvm_vcpu_events *events)
>  {
> @@ -1249,6 +1250,7 @@ void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
>  	TEST_ASSERT(ret == 0, "KVM_SET_VCPU_EVENTS, failed, rc: %i errno: %i",
>  		ret, errno);
>  }
> +#endif
>  
>  /*
>   * VM VCPU System Regs Get
> 


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC PATCH 1/4] KVM: selftests: Guard struct kvm_vcpu_events with __KVM_HAVE_VCPU_EVENTS
  2019-05-20  7:12     ` borntraeger
  (?)
@ 2019-05-20  8:08       ` thuth
  -1 siblings, 0 replies; 58+ messages in thread
From: Thomas Huth @ 2019-05-20  8:08 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

On 20/05/2019 09.12, Christian Borntraeger wrote:
> 
> On 16.05.19 13:12, Thomas Huth wrote:
>> The struct kvm_vcpu_events code is only available on certain architectures
>> (arm, arm64 and x86). To be able to compile kvm_util.c also for other
>> architectures, we've got to fence the code with __KVM_HAVE_VCPU_EVENTS.
>>
>> Signed-off-by: Thomas Huth <thuth@redhat.com>
> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
> 
> According to the MAINTAINERS patches, you want me to pick these patches. Correct?

That would be nice, yes. But if you don't want to be responsible for
s390x-related KVM selftest patches, please let me know, and I'll drop
these hunks from the patches again.

 Thomas

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC PATCH 1/4] KVM: selftests: Guard struct kvm_vcpu_events with __KVM_HAVE_VCPU_EVENTS
  2019-05-20  8:08       ` thuth
@ 2019-05-20  8:13         ` borntraeger
  -1 siblings, 0 replies; 58+ messages in thread
From: Christian Borntraeger @ 2019-05-20  8:13 UTC (permalink / raw)
  To: Thomas Huth, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390



On 20.05.19 10:08, Thomas Huth wrote:
> On 20/05/2019 09.12, Christian Borntraeger wrote:
>>
>> On 16.05.19 13:12, Thomas Huth wrote:
>>> The struct kvm_vcpu_events code is only available on certain architectures
>>> (arm, arm64 and x86). To be able to compile kvm_util.c also for other
>>> architectures, we've got to fence the code with __KVM_HAVE_VCPU_EVENTS.
>>>
>>> Signed-off-by: Thomas Huth <thuth@redhat.com>
>> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
>>
>> According to the MAINTAINERS patches, you want me to pick these patches. Correct?
> 
> That would be nice, yes. But if you don't want to be responsible for
> s390x-related KVM selftest patches, please let me know, then I'll drop
> these hunks from the patches again.

I can take care of these (as part of the normal KVM maintainership).


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC PATCH 4/4] KVM: selftests: Add the sync_regs test for s390x
  2019-05-16 11:12   ` thuth
@ 2019-05-20 11:19     ` pbonzini
  -1 siblings, 0 replies; 58+ messages in thread
From: Paolo Bonzini @ 2019-05-20 11:19 UTC (permalink / raw)
  To: Thomas Huth, Christian Borntraeger, Janosch Frank, kvm
  Cc: Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

On 16/05/19 13:12, Thomas Huth wrote:
> +#define VCPU_ID 5
> +
> +static void guest_code(void)
> +{
> +	for (;;) {
> +		asm volatile ("diag 0,0,0x501");
> +		asm volatile ("ahi 11,1");
> +	}

I'd like this to use something like

	register u32 stage asm("11") = 0;
	...
	stage++

instead (yes, it should be fixed in x86 too).
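
Spelled out, that might look as follows (a sketch only; the "+d"
constraint that keeps GPR 11 live across the asm statement is an
assumption, not code from this series):

	static void guest_code(void)
	{
		/* Pin the stage counter to s390x GPR 11. */
		register u32 stage asm("11") = 0;

		for (;;) {
			/* Trap to the host, which reads r11 via sync_regs. */
			asm volatile ("diag 0,0,0x501" : "+d" (stage));
			stage++;
		}
	}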

Paolo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC PATCH 0/4] KVM selftests for s390x
  2019-05-16 11:12 ` thuth
@ 2019-05-20 11:20   ` pbonzini
  -1 siblings, 0 replies; 58+ messages in thread
From: Paolo Bonzini @ 2019-05-20 11:20 UTC (permalink / raw)
  To: Thomas Huth, Christian Borntraeger, Janosch Frank, kvm
  Cc: Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

On 16/05/19 13:12, Thomas Huth wrote:
> This patch series enables the KVM selftests for s390x. As a first
> test, the sync_regs from x86 has been adapted to s390x.
> 
> Please note that the ucall() interface is not used yet - since
> s390x neither has PIO nor MMIO, this needs some more work first
> before it becomes usable (we likely should use a DIAG hypercall
> here, which is what the sync_reg test is currently using, too...).

No objections at all, though it would be nice to have ucall plumbed in
from the beginning.

Paolo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC PATCH 0/4] KVM selftests for s390x
  2019-05-20 11:20   ` pbonzini
@ 2019-05-20 11:30     ` thuth
  -1 siblings, 0 replies; 58+ messages in thread
From: Thomas Huth @ 2019-05-20 11:30 UTC (permalink / raw)
  To: Paolo Bonzini, Christian Borntraeger, Janosch Frank, kvm, Andrew Jones
  Cc: Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, linux-kernel,
	linux-kselftest, linux-s390

On 20/05/2019 13.20, Paolo Bonzini wrote:
> On 16/05/19 13:12, Thomas Huth wrote:
>> This patch series enables the KVM selftests for s390x. As a first
>> test, the sync_regs from x86 has been adapted to s390x.
>>
>> Please note that the ucall() interface is not used yet - since
>> s390x neither has PIO nor MMIO, this needs some more work first
>> before it becomes usable (we likely should use a DIAG hypercall
>> here, which is what the sync_reg test is currently using, too...).
> 
> No objections at all, though it would be nice to have ucall plumbed in
> from the beginning.

I'm still looking at the ucall interface ... what I don't quite get yet
is why the ucall_type there is selectable at runtime.

Are there plans to have tests that could either use UCALL_PIO or
UCALL_MMIO? If not, what about moving ucall_init() and ucall() to
architecture-specific code in tools/testing/selftests/kvm/lib/aarch64/
and tools/testing/selftests/kvm/lib/x86_64 instead, and removing the
ucall_type stuff again (so that x86 is hard-wired to PIO and aarch64
is hard-wired to MMIO)? ... then I could add a DIAG-based ucall
on s390x more easily, I think.
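
For illustration, a DIAG-based ucall() could then be as small as this
(a sketch, assuming diag 0x501 is reused and the host fishes the
address of the ucall structure out of the base register named in the
instruction; struct ucall and UCALL_MAX_ARGS are the existing selftest
definitions):

	#include <stdarg.h>
	#include "kvm_util.h"

	void ucall(uint64_t cmd, int nargs, ...)
	{
		struct ucall uc = { .cmd = cmd };
		va_list va;
		int i;

		nargs = nargs <= UCALL_MAX_ARGS ? nargs : UCALL_MAX_ARGS;

		va_start(va, nargs);
		for (i = 0; i < nargs; ++i)
			uc.args[i] = va_arg(va, uint64_t);
		va_end(va);

		/* Pass &uc in a register and trap to the host. */
		asm volatile ("diag 0,%0,0x501" : : "a" (&uc) : "memory");
	}

The memory clobber makes sure the stores into uc are visible before the
guest traps out.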

 Thomas


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC PATCH 0/4] KVM selftests for s390x
  2019-05-20 11:30     ` thuth
@ 2019-05-20 11:43       ` Paolo Bonzini
  -1 siblings, 0 replies; 58+ messages in thread
From: Paolo Bonzini @ 2019-05-20 11:43 UTC (permalink / raw)
  To: Thomas Huth, Christian Borntraeger, Janosch Frank, kvm, Andrew Jones
  Cc: Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, linux-kernel,
	linux-kselftest, linux-s390, Andrew Jones

On 20/05/19 13:30, Thomas Huth wrote:
>> No objections at all, though it would be nice to have ucall plumbed in
>> from the beginning.
> I'm still looking at the ucall interface ... what I don't quite get yet
> is why the ucall_type there is selectable at runtime.
> 
> Are there plans to have tests that could either use UCALL_PIO or
> UCALL_MMIO? If not, what about moving ucall_init() and ucall() to
> architecture-specific code in tools/testing/selftests/kvm/lib/aarch64/
> and tools/testing/selftests/kvm/lib/x86_64 instead, and removing the
> ucall_type stuff again (so that x86 is hard-wired to PIO and aarch64
> is hard-wired to MMIO)? ... then I could add a DIAG-based ucall
> on s390x more easily, I think.

Yes, that would work.  I think Andrew wanted the flexibility to use MMIO
on x86, but it's not really necessary to have it.

Paolo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC PATCH 0/4] KVM selftests for s390x
  2019-05-20 11:43       ` Paolo Bonzini
@ 2019-05-22  8:44         ` drjones
  -1 siblings, 0 replies; 58+ messages in thread
From: Andrew Jones @ 2019-05-22  8:44 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Thomas Huth, Christian Borntraeger, Janosch Frank, kvm,
	Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, linux-kernel,
	linux-kselftest, linux-s390

On Mon, May 20, 2019 at 01:43:06PM +0200, Paolo Bonzini wrote:
> On 20/05/19 13:30, Thomas Huth wrote:
> >> No objections at all, though it would be nice to have ucall plumbed in
> >> from the beginning.
> > I'm still looking at the ucall interface ... what I don't quite get yet
> > is why the ucall_type there is selectable at runtime.
> > 
> > Are there plans to have tests that could either use UCALL_PIO or
> > UCALL_MMIO? If not, what about moving ucall_init() and ucall() to
> > architecture-specific code in tools/testing/selftests/kvm/lib/aarch64/
> > and tools/testing/selftests/kvm/lib/x86_64 instead, and removing the
> > ucall_type stuff again (so that x86 is hard-wired to PIO and aarch64
> > is hard-wired to MMIO)? ... then I could add a DIAG-based ucall
> > on s390x more easily, I think.
> 
> Yes, that would work.  I think Andrew wanted the flexibility to use MMIO
> on x86, but it's not really necessary to have it.

If the flexibility isn't necessary, then I agree that it'll be nicer to
put the ucall_init() in arch setup code, avoiding the need to remember
it in each unit test.
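
A rough sketch of that, still using the current runtime-selectable
interface (the vm_arch_setup() hook is a made-up name for
illustration):

	/* Hypothetical per-arch hook, called once from VM creation. */
	static void vm_arch_setup(struct kvm_vm *vm)
	{
	#ifdef __x86_64__
		ucall_init(vm, UCALL_PIO, NULL);
	#elif defined(__aarch64__)
		ucall_init(vm, UCALL_MMIO, NULL);
	#endif
	}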

Thanks,
drew

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC PATCH 4/4] KVM: selftests: Add the sync_regs test for s390x
  2019-05-16 11:12   ` thuth
@ 2019-05-23 10:56     ` drjones
  -1 siblings, 0 replies; 58+ messages in thread
From: Andrew Jones @ 2019-05-23 10:56 UTC (permalink / raw)
  To: Thomas Huth
  Cc: Christian Borntraeger, Janosch Frank, kvm, Paolo Bonzini,
	Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, linux-kernel,
	linux-kselftest, linux-s390

On Thu, May 16, 2019 at 01:12:53PM +0200, Thomas Huth wrote:
> The test is an adaptation of the same test for x86. Note that there
> are some differences in the way s390x deals with the kvm_valid_regs
> in struct kvm_run, so some of the tests had to be removed. Also this
> test is not using the ucall() interface on s390x yet (which would need
> some work to be usable on s390x), so it simply drops out of the VM with
> a diag 0x501 breakpoint instead.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  MAINTAINERS                                   |   1 +
>  tools/testing/selftests/kvm/Makefile          |   2 +
>  .../selftests/kvm/s390x/sync_regs_test.c      | 151 ++++++++++++++++++
>  3 files changed, 154 insertions(+)
>  create mode 100644 tools/testing/selftests/kvm/s390x/sync_regs_test.c
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 514d1f88ee26..68f76ee9e821 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -8645,6 +8645,7 @@ F:	arch/s390/include/asm/gmap.h
>  F:	arch/s390/include/asm/kvm*
>  F:	arch/s390/kvm/
>  F:	arch/s390/mm/gmap.c
> +F:	tools/testing/selftests/kvm/s390x/
>  F:	tools/testing/selftests/kvm/*/s390x/

Do we need these lines added? We have tools/testing/selftests/kvm/ in the
common KVM section already. If we do want to specify them specifically,
then I guess we need x86 and arm MAINTAINERS updates as well.

Thanks,
drew

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC PATCH 4/4] KVM: selftests: Add the sync_regs test for s390x
  2019-05-23 10:56     ` drjones
@ 2019-05-23 11:19       ` thuth
  -1 siblings, 0 replies; 58+ messages in thread
From: Thomas Huth @ 2019-05-23 11:19 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Christian Borntraeger, Janosch Frank, kvm, Paolo Bonzini,
	Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, linux-kernel,
	linux-kselftest, linux-s390

On 23/05/2019 12.56, Andrew Jones wrote:
> On Thu, May 16, 2019 at 01:12:53PM +0200, Thomas Huth wrote:
>> The test is an adaptation of the same test for x86. Note that there
>> are some differences in the way s390x deals with the kvm_valid_regs
>> in struct kvm_run, so some of the tests had to be removed. Also this
>> test is not using the ucall() interface on s390x yet (which would need
>> some work to be usable on s390x), so it simply drops out of the VM with
>> a diag 0x501 breakpoint instead.
>>
>> Signed-off-by: Thomas Huth <thuth@redhat.com>
>> ---
>>  MAINTAINERS                                   |   1 +
>>  tools/testing/selftests/kvm/Makefile          |   2 +
>>  .../selftests/kvm/s390x/sync_regs_test.c      | 151 ++++++++++++++++++
>>  3 files changed, 154 insertions(+)
>>  create mode 100644 tools/testing/selftests/kvm/s390x/sync_regs_test.c
>>
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index 514d1f88ee26..68f76ee9e821 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -8645,6 +8645,7 @@ F:	arch/s390/include/asm/gmap.h
>>  F:	arch/s390/include/asm/kvm*
>>  F:	arch/s390/kvm/
>>  F:	arch/s390/mm/gmap.c
>> +F:	tools/testing/selftests/kvm/s390x/
>>  F:	tools/testing/selftests/kvm/*/s390x/
> 
> Do we need these lines added? We have tools/testing/selftests/kvm/ in the
> common KVM section already. If we do want to specify them specifically,
> then I guess we need x86 and arm MAINTAINERS updates as well.

I think they are helpful in the sense that the s390x maintainers get
CC:-ed on related patches as well, and if I've understood Christian
correctly, he's interested in being kept informed here. For Arm-related
patches, I guess you should ask the Arm maintainers first. For x86, it
does not really matter, since the maintainers are the same.

 Thomas

^ permalink raw reply	[flat|nested] 58+ messages in thread

end of thread

Thread overview: 58+ messages
2019-05-16 11:12 [RFC PATCH 0/4] KVM selftests for s390x Thomas Huth
2019-05-16 11:12 ` [RFC PATCH 1/4] KVM: selftests: Guard struct kvm_vcpu_events with __KVM_HAVE_VCPU_EVENTS Thomas Huth
2019-05-16 11:22   ` David Hildenbrand
2019-05-20  7:12   ` Christian Borntraeger
2019-05-20  8:08     ` Thomas Huth
2019-05-20  8:13       ` Christian Borntraeger
2019-05-16 11:12 ` [RFC PATCH 2/4] KVM: selftests: Align memory region addresses to 1M on s390x Thomas Huth
2019-05-16 11:30   ` David Hildenbrand
2019-05-16 11:59     ` Thomas Huth
2019-05-16 12:08       ` David Hildenbrand
2019-05-16 11:12 ` [RFC PATCH 3/4] KVM: selftests: Add processor code for s390x Thomas Huth
2019-05-16 11:12 ` [RFC PATCH 4/4] KVM: selftests: Add the sync_regs test " Thomas Huth
2019-05-20 11:19   ` Paolo Bonzini
2019-05-23 10:56   ` Andrew Jones
2019-05-23 11:19     ` Thomas Huth
2019-05-20 11:20 ` [RFC PATCH 0/4] KVM selftests " Paolo Bonzini
2019-05-20 11:30   ` Thomas Huth
2019-05-20 11:43     ` Paolo Bonzini
2019-05-22  8:44       ` Andrew Jones
