[PATCH v1 0/9] KVM selftests for s390x
From: Thomas Huth @ 2019-05-23 16:43 UTC
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

This patch series enables the KVM selftests for s390x. As a first
test, the sync_regs test from x86 has been adapted to s390x, and after
a fix for KVM_CAP_MAX_VCPU_ID on s390x, the kvm_create_max_vcpus test
is now enabled here, too.

Please note that the ucall() interface is not used yet: since s390x
has neither PIO nor MMIO, some more work is needed before it becomes
usable (we should likely use a DIAG hypercall here, which is what the
sync_regs test currently uses, too). I have started working on that,
but the work is not finished yet, so it is not part of this series.
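
For illustration only (not part of this series), the guest side of a
DIAG-based ucall could look roughly like the sketch below. Reusing
DIAG 0x501 is an assumption here, borrowed from what the sync_regs
test triggers; the final hypercall number and how the ucall structure
address gets passed to the host are still open questions:

    /*
     * Hypothetical guest-side ucall sketch for s390x: trap to the
     * host via a DIAG instruction that KVM intercepts; the host side
     * would then pick up the ucall data from the guest registers.
     */
    static inline void ucall_diag501(void)
    {
        asm volatile ("diag 0,0,0x501" : : : "memory");
    }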

RFC -> v1:
 - Rebased; this required adding the first patch, which wraps
   vcpu_nested_state_get/set
 - Added patch to introduce the VM_MODE_DEFAULT macro
 - Improved/cleaned up the code in processor.c
 - Added patch to fix KVM_CAP_MAX_VCPU_ID on s390x
 - Added patch to enable the kvm_create_max_vcpus test on s390x and aarch64

Andrew Jones (1):
  kvm: selftests: aarch64: fix default vm mode

Thomas Huth (8):
  KVM: selftests: Wrap vcpu_nested_state_get/set functions with x86
    guard
  KVM: selftests: Guard struct kvm_vcpu_events with
    __KVM_HAVE_VCPU_EVENTS
  KVM: selftests: Introduce a VM_MODE_DEFAULT macro for the default bits
  KVM: selftests: Align memory region addresses to 1M on s390x
  KVM: selftests: Add processor code for s390x
  KVM: selftests: Add the sync_regs test for s390x
  KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID
  KVM: selftests: Move kvm_create_max_vcpus test to generic code

 MAINTAINERS                                   |   2 +
 arch/mips/kvm/mips.c                          |   3 +
 arch/powerpc/kvm/powerpc.c                    |   3 +
 arch/s390/kvm/kvm-s390.c                      |   1 +
 arch/x86/kvm/x86.c                            |   3 +
 tools/testing/selftests/kvm/Makefile          |   7 +-
 .../testing/selftests/kvm/include/kvm_util.h  |  10 +
 .../selftests/kvm/include/s390x/processor.h   |  22 ++
 .../kvm/{x86_64 => }/kvm_create_max_vcpus.c   |   3 +-
 .../selftests/kvm/lib/aarch64/processor.c     |   2 +-
 tools/testing/selftests/kvm/lib/kvm_util.c    |  25 +-
 .../selftests/kvm/lib/s390x/processor.c       | 286 ++++++++++++++++++
 .../selftests/kvm/lib/x86_64/processor.c      |   2 +-
 .../selftests/kvm/s390x/sync_regs_test.c      | 151 +++++++++
 virt/kvm/arm/arm.c                            |   3 +
 virt/kvm/kvm_main.c                           |   2 -
 16 files changed, 514 insertions(+), 11 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/s390x/processor.h
 rename tools/testing/selftests/kvm/{x86_64 => }/kvm_create_max_vcpus.c (93%)
 create mode 100644 tools/testing/selftests/kvm/lib/s390x/processor.c
 create mode 100644 tools/testing/selftests/kvm/s390x/sync_regs_test.c

-- 
2.21.0


[PATCH 1/9] KVM: selftests: Wrap vcpu_nested_state_get/set functions with x86 guard
From: Thomas Huth @ 2019-05-23 16:43 UTC
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

struct kvm_nested_state is only available on x86 so far. To be able
to compile the code on other architectures as well, we need to wrap
the related code with #ifdefs.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 tools/testing/selftests/kvm/include/kvm_util.h | 2 ++
 tools/testing/selftests/kvm/lib/kvm_util.c     | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 8c6b9619797d..a5a4b28f14d8 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -118,10 +118,12 @@ void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_vcpu_events *events);
 void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_vcpu_events *events);
+#ifdef __x86_64__
 void vcpu_nested_state_get(struct kvm_vm *vm, uint32_t vcpuid,
 			   struct kvm_nested_state *state);
 int vcpu_nested_state_set(struct kvm_vm *vm, uint32_t vcpuid,
 			  struct kvm_nested_state *state, bool ignore_error);
+#endif
 
 const char *exit_reason_str(unsigned int exit_reason);
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index e9113857f44e..ba1359ac504f 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1250,6 +1250,7 @@ void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
 		ret, errno);
 }
 
+#ifdef __x86_64__
 void vcpu_nested_state_get(struct kvm_vm *vm, uint32_t vcpuid,
 			   struct kvm_nested_state *state)
 {
@@ -1281,6 +1282,7 @@ int vcpu_nested_state_set(struct kvm_vm *vm, uint32_t vcpuid,
 
 	return ret;
 }
+#endif
 
 /*
  * VM VCPU System Regs Get
-- 
2.21.0


[PATCH 2/9] KVM: selftests: Guard struct kvm_vcpu_events with __KVM_HAVE_VCPU_EVENTS
From: Thomas Huth @ 2019-05-23 16:43 UTC
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

The struct kvm_vcpu_events code is only available on certain
architectures (arm, arm64 and x86). To be able to compile kvm_util.c
for the other architectures, too, we have to guard the related code
with __KVM_HAVE_VCPU_EVENTS.
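
(Illustration only, not part of the patch: the guard symbol comes from
the architectures' uapi headers, so the selftests code automatically
follows whatever the kernel headers advertise, e.g.:)

    /* arch/x86/include/uapi/asm/kvm.h (and arm/arm64) define: */
    #define __KVM_HAVE_VCPU_EVENTS

    /* ...which lets arch-neutral test code do: */
    #ifdef __KVM_HAVE_VCPU_EVENTS
        struct kvm_vcpu_events events;

        vcpu_events_get(vm, vcpuid, &events);
    #endif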

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 tools/testing/selftests/kvm/include/kvm_util.h | 2 ++
 tools/testing/selftests/kvm/lib/kvm_util.c     | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index a5a4b28f14d8..b8bf961074fe 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -114,10 +114,12 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
 		    struct kvm_sregs *sregs);
 int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
 		    struct kvm_sregs *sregs);
+#ifdef __KVM_HAVE_VCPU_EVENTS
 void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_vcpu_events *events);
 void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_vcpu_events *events);
+#endif
 #ifdef __x86_64__
 void vcpu_nested_state_get(struct kvm_vm *vm, uint32_t vcpuid,
 			   struct kvm_nested_state *state);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index ba1359ac504f..08edb8436c47 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1224,6 +1224,7 @@ void vcpu_regs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_regs *regs)
 		ret, errno);
 }
 
+#ifdef __KVM_HAVE_VCPU_EVENTS
 void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_vcpu_events *events)
 {
@@ -1249,6 +1250,7 @@ void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
 	TEST_ASSERT(ret == 0, "KVM_SET_VCPU_EVENTS, failed, rc: %i errno: %i",
 		ret, errno);
 }
+#endif
 
 #ifdef __x86_64__
 void vcpu_nested_state_get(struct kvm_vm *vm, uint32_t vcpuid,
-- 
2.21.0


[PATCH 3/9] kvm: selftests: aarch64: fix default vm mode
From: Thomas Huth @ 2019-05-23 16:43 UTC
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

From: Andrew Jones <drjones@redhat.com>

VM_MODE_P52V48_4K is not a valid mode for AArch64. Replace its
use in vm_create_default() with a mode that works and represents
a good AArch64 default. (We never noticed a problem with this
because no unit tests use vm_create_default() on AArch64 yet,
but it is good to get it fixed in advance.)
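
(Aside, for readers new to these mode names; the decoding below is an
assumption based on how the enum is used, not an official definition:)

    /*
     * VM_MODE_P52V48_4K: 52-bit physical, 48-bit virtual, 4K pages
     *                    (fine on x86_64, but invalid on AArch64)
     * VM_MODE_P40V48_4K: 40-bit physical, 48-bit virtual, 4K pages
     *                    (a configuration most AArch64 hosts support)
     */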

Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/lib/aarch64/processor.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index e8c42506a09d..fa6cd340137c 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -226,7 +226,7 @@ struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
 	uint64_t extra_pg_pages = (extra_mem_pages / ptrs_per_4k_pte) * 2;
 	struct kvm_vm *vm;
 
-	vm = vm_create(VM_MODE_P52V48_4K, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+	vm = vm_create(VM_MODE_P40V48_4K, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
 
 	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
 	vm_vcpu_add_default(vm, vcpuid, guest_code);
-- 
2.21.0


[PATCH 4/9] KVM: selftests: Introduce a VM_MODE_DEFAULT macro for the default bits
From: Thomas Huth @ 2019-05-23 16:43 UTC
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

This will be required later for tests like the kvm_create_max_vcpus
test that do not use the vm_create_default() function.
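
(Illustration only: with this macro, a hypothetical generic test can
create its VM manually and still stay architecture-neutral:)

    struct kvm_vm *vm;

    vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);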

Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 tools/testing/selftests/kvm/include/kvm_util.h      | 6 ++++++
 tools/testing/selftests/kvm/lib/aarch64/processor.c | 2 +-
 tools/testing/selftests/kvm/lib/x86_64/processor.c  | 2 +-
 3 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index b8bf961074fe..b6eb6471e6b2 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -43,6 +43,12 @@ enum vm_guest_mode {
 	NUM_VM_MODES,
 };
 
+#ifdef __aarch64__
+#define VM_MODE_DEFAULT VM_MODE_P40V48_4K
+#else
+#define VM_MODE_DEFAULT VM_MODE_P52V48_4K
+#endif
+
 #define vm_guest_mode_string(m) vm_guest_mode_string[m]
 extern const char * const vm_guest_mode_string[];
 
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index fa6cd340137c..596ccaf09cb6 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -226,7 +226,7 @@ struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
 	uint64_t extra_pg_pages = (extra_mem_pages / ptrs_per_4k_pte) * 2;
 	struct kvm_vm *vm;
 
-	vm = vm_create(VM_MODE_P40V48_4K, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
 
 	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
 	vm_vcpu_add_default(vm, vcpuid, guest_code);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index dc7fae9fa424..bb38bbcefac5 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -823,7 +823,7 @@ struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
 	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
 
 	/* Create VM */
-	vm = vm_create(VM_MODE_P52V48_4K,
+	vm = vm_create(VM_MODE_DEFAULT,
 		       DEFAULT_GUEST_PHY_PAGES + extra_pg_pages,
 		       O_RDWR);
 
-- 
2.21.0


[PATCH 5/9] KVM: selftests: Align memory region addresses to 1M on s390x
From: Thomas Huth @ 2019-05-23 16:43 UTC
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

On s390x, there is a constraint that memory regions have to be aligned
to 1M (otherwise running the VM will fail). Introduce a new "alignment"
variable in the vm_userspace_mem_region_add() function, which can now
be used for both the huge-page and the s390x alignment requirements.
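
(Worked example, not part of the patch: the mapping is over-allocated
by one alignment unit so that the start can always be rounded up:)

    /*
     * If mmap() returns 0x7f1234567000 and alignment is 0x100000 (1M),
     * align() rounds the host address up to 0x7f1234600000; the extra
     * 1M added to mmap_size guarantees that this rounded-up start
     * still has npages * page_size bytes behind it.
     */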

Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 08edb8436c47..656df9d5cd4d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -559,6 +559,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	unsigned long pmem_size = 0;
 	struct userspace_mem_region *region;
 	size_t huge_page_size = KVM_UTIL_PGS_PER_HUGEPG * vm->page_size;
+	size_t alignment;
 
 	TEST_ASSERT((guest_paddr % vm->page_size) == 0, "Guest physical "
 		"address not on a page boundary.\n"
@@ -608,9 +609,20 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	TEST_ASSERT(region != NULL, "Insufficient Memory");
 	region->mmap_size = npages * vm->page_size;
 
-	/* Enough memory to align up to a huge page. */
+#ifdef __s390x__
+	/* On s390x, the host address must be aligned to 1M (due to PGSTEs) */
+	alignment = 0x100000;
+#else
+	alignment = 1;
+#endif
+
 	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
-		region->mmap_size += huge_page_size;
+		alignment = huge_page_size;
+
+	/* Add enough memory to align up if necessary */
+	if (alignment > 1)
+		region->mmap_size += alignment;
+
 	region->mmap_start = mmap(NULL, region->mmap_size,
 				  PROT_READ | PROT_WRITE,
 				  MAP_PRIVATE | MAP_ANONYMOUS
@@ -620,9 +632,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 		    "test_malloc failed, mmap_start: %p errno: %i",
 		    region->mmap_start, errno);
 
-	/* Align THP allocation up to start of a huge page. */
-	region->host_mem = align(region->mmap_start,
-				 src_type == VM_MEM_SRC_ANONYMOUS_THP ?  huge_page_size : 1);
+	/* Align host address */
+	region->host_mem = align(region->mmap_start, alignment);
 
 	/* As needed perform madvise */
 	if (src_type == VM_MEM_SRC_ANONYMOUS || src_type == VM_MEM_SRC_ANONYMOUS_THP) {
-- 
2.21.0


[PATCH 6/9] KVM: selftests: Add processor code for s390x
From: Thomas Huth @ 2019-05-23 16:43 UTC
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

Add code that takes care of basic CPU setup, page table walking, etc.
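
(Illustration only, not part of the patch: the page table walk below
uses an 11-bit index per region/segment level and an 8-bit index for
the page table, matching the s390x DAT format:)

    /* region/segment level ri = 1..4: index from bits of the gva */
    idx = (gva >> (64 - 11 * ri)) & 0x7ffu;

    /* final level: an s390x page table has 256 entries of 8 bytes */
    idx = (gva >> 12) & 0x0ffu;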

Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 MAINTAINERS                                   |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/s390x/processor.h   |  22 ++
 .../selftests/kvm/lib/s390x/processor.c       | 286 ++++++++++++++++++
 4 files changed, 310 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/include/s390x/processor.h
 create mode 100644 tools/testing/selftests/kvm/lib/s390x/processor.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 5cfbea4ce575..c05aa32dfbbe 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8663,6 +8663,7 @@ F:	arch/s390/include/asm/gmap.h
 F:	arch/s390/include/asm/kvm*
 F:	arch/s390/kvm/
 F:	arch/s390/mm/gmap.c
+F:	tools/testing/selftests/kvm/*/s390x/
 
 KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86)
 M:	Paolo Bonzini <pbonzini@redhat.com>
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 79c524395ebe..8495670ad107 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -9,6 +9,7 @@ UNAME_M := $(shell uname -m)
 LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/ucall.c lib/sparsebit.c
 LIBKVM_x86_64 = lib/x86_64/processor.c lib/x86_64/vmx.c
 LIBKVM_aarch64 = lib/aarch64/processor.c
+LIBKVM_s390x = lib/s390x/processor.c
 
 TEST_GEN_PROGS_x86_64 = x86_64/platform_info_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
diff --git a/tools/testing/selftests/kvm/include/s390x/processor.h b/tools/testing/selftests/kvm/include/s390x/processor.h
new file mode 100644
index 000000000000..e0e96a5f608c
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/s390x/processor.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * s390x processor specific defines
+ */
+#ifndef SELFTEST_KVM_PROCESSOR_H
+#define SELFTEST_KVM_PROCESSOR_H
+
+/* Bits in the region/segment table entry */
+#define REGION_ENTRY_ORIGIN	~0xfffUL /* region/segment table origin	   */
+#define REGION_ENTRY_PROTECT	0x200	 /* region protection bit	   */
+#define REGION_ENTRY_NOEXEC	0x100	 /* region no-execute bit	   */
+#define REGION_ENTRY_OFFSET	0xc0	 /* region table offset		   */
+#define REGION_ENTRY_INVALID	0x20	 /* invalid region table entry	   */
+#define REGION_ENTRY_TYPE	0x0c	 /* region/segment table type mask */
+#define REGION_ENTRY_LENGTH	0x03	 /* region third length		   */
+
+/* Bits in the page table entry */
+#define PAGE_INVALID	0x400		/* HW invalid bit    */
+#define PAGE_PROTECT	0x200		/* HW read-only bit  */
+#define PAGE_NOEXEC	0x100		/* HW no-execute bit */
+
+#endif
diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c
new file mode 100644
index 000000000000..c8759445e7d3
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/s390x/processor.c
@@ -0,0 +1,286 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KVM selftest s390x library code - CPU-related functions (page tables...)
+ *
+ * Copyright (C) 2019, Red Hat, Inc.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_name */
+
+#include "processor.h"
+#include "kvm_util.h"
+#include "../kvm_util_internal.h"
+
+#define KVM_GUEST_PAGE_TABLE_MIN_PADDR		0x180000
+
+#define PAGES_PER_REGION 4
+
+void virt_pgd_alloc(struct kvm_vm *vm, uint32_t memslot)
+{
+	vm_paddr_t paddr;
+
+	TEST_ASSERT(vm->page_size == 4096, "Unsupported page size: 0x%x",
+		    vm->page_size);
+
+	if (vm->pgd_created)
+		return;
+
+	paddr = vm_phy_pages_alloc(vm, PAGES_PER_REGION,
+				   KVM_GUEST_PAGE_TABLE_MIN_PADDR, memslot);
+	memset(addr_gpa2hva(vm, paddr), 0xff, PAGES_PER_REGION * vm->page_size);
+
+	vm->pgd = paddr;
+	vm->pgd_created = true;
+}
+
+/*
+ * Allocate 4 pages for a region/segment table (ri < 4), or one page for
+ * a page table (ri == 4). Returns a suitable region/segment table entry
+ * which points to the freshly allocated pages.
+ */
+static uint64_t virt_alloc_region(struct kvm_vm *vm, int ri, uint32_t memslot)
+{
+	uint64_t taddr;
+
+	taddr = vm_phy_pages_alloc(vm,  ri < 4 ? PAGES_PER_REGION : 1,
+				   KVM_GUEST_PAGE_TABLE_MIN_PADDR, memslot);
+	memset(addr_gpa2hva(vm, taddr), 0xff, (ri < 4 ? PAGES_PER_REGION : 1) * vm->page_size);
+
+	return (taddr & REGION_ENTRY_ORIGIN)
+		| (((4 - ri) << 2) & REGION_ENTRY_TYPE)
+		| ((ri < 4 ? (PAGES_PER_REGION - 1) : 0) & REGION_ENTRY_LENGTH);
+}
+
+/*
+ * VM Virtual Page Map
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   gva - VM Virtual Address
+ *   gpa - VM Physical Address
+ *   memslot - Memory region slot for new virtual translation tables
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Within the VM given by vm, creates a virtual translation for the page
+ * starting at gva to the page starting at gpa.
+ */
+void virt_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa,
+		 uint32_t memslot)
+{
+	int ri, idx;
+	uint64_t *entry;
+
+	TEST_ASSERT((gva % vm->page_size) == 0,
+		"Virtual address not on page boundary,\n"
+		"  vaddr: 0x%lx vm->page_size: 0x%x",
+		gva, vm->page_size);
+	TEST_ASSERT(sparsebit_is_set(vm->vpages_valid,
+		(gva >> vm->page_shift)),
+		"Invalid virtual address, vaddr: 0x%lx",
+		gva);
+	TEST_ASSERT((gpa % vm->page_size) == 0,
+		"Physical address not on page boundary,\n"
+		"  paddr: 0x%lx vm->page_size: 0x%x",
+		gpa, vm->page_size);
+	TEST_ASSERT((gpa >> vm->page_shift) <= vm->max_gfn,
+		"Physical address beyond maximum supported,\n"
+		"  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
+		gpa, vm->max_gfn, vm->page_size);
+
+	/* Walk through region and segment tables */
+	entry = addr_gpa2hva(vm, vm->pgd);
+	for (ri = 1; ri <= 4; ri++) {
+		idx = (gva >> (64 - 11 * ri)) & 0x7ffu;
+		if (entry[idx] & REGION_ENTRY_INVALID)
+			entry[idx] = virt_alloc_region(vm, ri, memslot);
+		entry = addr_gpa2hva(vm, entry[idx] & REGION_ENTRY_ORIGIN);
+	}
+
+	/* Fill in page table entry */
+	idx = (gva >> 12) & 0x0ffu;		/* page index */
+	if (!(entry[idx] & PAGE_INVALID))
+		fprintf(stderr,
+			"WARNING: PTE for gpa=0x%"PRIx64" already set!\n", gpa);
+	entry[idx] = gpa;
+}
+
+/*
+ * Address Guest Virtual to Guest Physical
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   gva - VM virtual address
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Equivalent VM physical address
+ *
+ * Translates the VM virtual address given by gva to a VM physical
+ * address by walking the region, segment and page tables that have
+ * been set up within the VM given by vm.
+ * Note that, unlike addr_gva2hva(), this returns the guest physical
+ * address itself, not the host address backing it.
+ * A TEST_ASSERT failure occurs if no mapping exists for the given
+ * VM virtual address.
+ */
+vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+{
+	int ri, idx;
+	uint64_t *entry;
+
+	TEST_ASSERT(vm->page_size == 4096, "Unsupported page size: 0x%x",
+		    vm->page_size);
+
+	entry = addr_gpa2hva(vm, vm->pgd);
+	for (ri = 1; ri <= 4; ri++) {
+		idx = (gva >> (64 - 11 * ri)) & 0x7ffu;
+		TEST_ASSERT(!(entry[idx] & REGION_ENTRY_INVALID),
+			    "No region mapping for vm virtual address 0x%lx",
+			    gva);
+		entry = addr_gpa2hva(vm, entry[idx] & REGION_ENTRY_ORIGIN);
+	}
+
+	idx = (gva >> 12) & 0x0ffu;		/* page index */
+
+	TEST_ASSERT(!(entry[idx] & PAGE_INVALID),
+		    "No page mapping for vm virtual address 0x%lx", gva);
+
+	return (entry[idx] & ~0xffful) + (gva & 0xffful);
+}
+
+static void virt_dump_ptes(FILE *stream, struct kvm_vm *vm, uint8_t indent,
+			   uint64_t ptea_start)
+{
+	uint64_t *pte, ptea;
+
+	for (ptea = ptea_start; ptea < ptea_start + 0x100 * 8; ptea += 8) {
+		pte = addr_gpa2hva(vm, ptea);
+		if (*pte & PAGE_INVALID)
+			continue;
+		fprintf(stream, "%*spte @ 0x%lx: 0x%016lx\n",
+			indent, "", ptea, *pte);
+	}
+}
+
+static void virt_dump_region(FILE *stream, struct kvm_vm *vm, uint8_t indent,
+			     uint64_t reg_tab_addr)
+{
+	uint64_t addr, *entry;
+
+	for (addr = reg_tab_addr; addr < reg_tab_addr + 0x400 * 8; addr += 8) {
+		entry = addr_gpa2hva(vm, addr);
+		if (*entry & REGION_ENTRY_INVALID)
+			continue;
+		fprintf(stream, "%*srt%lde @ 0x%lx: 0x%016lx\n",
+			indent, "", 4 - ((*entry & REGION_ENTRY_TYPE) >> 2),
+			addr, *entry);
+		if (*entry & REGION_ENTRY_TYPE) {
+			virt_dump_region(stream, vm, indent + 2,
+					 *entry & REGION_ENTRY_ORIGIN);
+		} else {
+			virt_dump_ptes(stream, vm, indent + 2,
+				       *entry & REGION_ENTRY_ORIGIN);
+		}
+	}
+}
+
+void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+{
+	if (!vm->pgd_created)
+		return;
+
+	virt_dump_region(stream, vm, indent, vm->pgd);
+}
+
+/*
+ * Create a VM with reasonable defaults
+ *
+ * Input Args:
+ *   vcpuid - The id of the single VCPU to add to the VM.
+ *   extra_mem_pages - The size of extra memories to add (this will
+ *                     decide how much extra space we will need to
+ *                     setup the page tables using mem slot 0)
+ *   guest_code - The vCPU's entry point
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Pointer to opaque structure that describes the created VM.
+ */
+struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
+				 void *guest_code)
+{
+	/*
+	 * The additional amount of pages required for the page tables is:
+	 * 1 * n / 256 + 4 * (n / 256) / 2048 + 4 * (n / 256) / 2048^2 + ...
+	 * which is definitely smaller than (n / 256) * 2.
+	 */
+	uint64_t extra_pg_pages = extra_mem_pages / 256 * 2;
+	struct kvm_vm *vm;
+
+	vm = vm_create(VM_MODE_DEFAULT,
+		       DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+
+	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
+	vm_vcpu_add_default(vm, vcpuid, guest_code);
+
+	return vm;
+}
+
+/*
+ * Adds a vCPU with reasonable defaults (i.e. a stack and initial PSW)
+ *
+ * Input Args:
+ *   vcpuid - The id of the VCPU to add to the VM.
+ *   guest_code - The vCPU's entry point
+ */
+void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code)
+{
+	size_t stack_size =  DEFAULT_STACK_PGS * getpagesize();
+	uint64_t stack_vaddr;
+	struct kvm_regs regs;
+	struct kvm_sregs sregs;
+	struct kvm_run *run;
+
+	TEST_ASSERT(vm->page_size == 4096, "Unsupported page size: 0x%x",
+		    vm->page_size);
+
+	stack_vaddr = vm_vaddr_alloc(vm, stack_size,
+				     DEFAULT_GUEST_STACK_VADDR_MIN, 0, 0);
+
+	vm_vcpu_add(vm, vcpuid, 0, 0);
+
+	/* Setup guest registers */
+	vcpu_regs_get(vm, vcpuid, &regs);
+	regs.gprs[15] = stack_vaddr + (DEFAULT_STACK_PGS * getpagesize()) - 160;
+	vcpu_regs_set(vm, vcpuid, &regs);
+
+	vcpu_sregs_get(vm, vcpuid, &sregs);
+	sregs.crs[1] = vm->pgd | 0xf;		/* Primary region table */
+	vcpu_sregs_set(vm, vcpuid, &sregs);
+
+	run = vcpu_state(vm, vcpuid);
+	run->psw_mask = 0x0400000180000000ULL;  /* DAT enabled + 64 bit mode */
+	run->psw_addr = (uintptr_t)guest_code;
+}
+
+void vcpu_setup(struct kvm_vm *vm, int vcpuid, int pgd_memslot, int gdt_memslot)
+{
+	struct kvm_sregs sregs;
+
+	vcpu_sregs_get(vm, vcpuid, &sregs);
+	sregs.crs[0] |= 0x00040000;		/* Enable floating point regs */
+	vcpu_sregs_set(vm, vcpuid, &sregs);
+}
+
+void vcpu_dump(FILE *stream, struct kvm_vm *vm, uint32_t vcpuid, uint8_t indent)
+{
+	struct vcpu *vcpu = vm->vcpu_head;
+
+	fprintf(stream, "%*spstate: psw: 0x%.16llx:0x%.16llx\n",
+		indent, "", vcpu->state->psw_mask, vcpu->state->psw_addr);
+}
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 108+ messages in thread

* [PATCH 7/9] KVM: selftests: Add the sync_regs test for s390x
  2019-05-23 16:43 ` thuth
@ 2019-05-23 16:43   ` thuth
  0 siblings, 0 replies; 108+ messages in thread
From: Thomas Huth @ 2019-05-23 16:43 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

The test is an adaptation of the same test for x86. Note that there
are some differences in the way s390x deals with the kvm_valid_regs
in struct kvm_run, so some of the tests had to be removed. Also, this
test does not use the ucall() interface on s390x yet (which would need
some work to be usable on s390x), so it simply drops out of the VM with
a diag 0x501 breakpoint instead.

Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 MAINTAINERS                                   |   1 +
 tools/testing/selftests/kvm/Makefile          |   2 +
 .../selftests/kvm/s390x/sync_regs_test.c      | 151 ++++++++++++++++++
 3 files changed, 154 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/s390x/sync_regs_test.c

diff --git a/MAINTAINERS b/MAINTAINERS
index c05aa32dfbbe..fe41e2e1767a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8663,6 +8663,7 @@ F:	arch/s390/include/asm/gmap.h
 F:	arch/s390/include/asm/kvm*
 F:	arch/s390/kvm/
 F:	arch/s390/mm/gmap.c
+F:	tools/testing/selftests/kvm/s390x/
 F:	tools/testing/selftests/kvm/*/s390x/
 
 KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86)
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 8495670ad107..d8beb990c8f4 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -29,6 +29,8 @@ TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
 TEST_GEN_PROGS_aarch64 += clear_dirty_log_test
 
+TEST_GEN_PROGS_s390x += s390x/sync_regs_test
+
 TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(UNAME_M))
 LIBKVM += $(LIBKVM_$(UNAME_M))
 
diff --git a/tools/testing/selftests/kvm/s390x/sync_regs_test.c b/tools/testing/selftests/kvm/s390x/sync_regs_test.c
new file mode 100644
index 000000000000..e85ff0d69548
--- /dev/null
+++ b/tools/testing/selftests/kvm/s390x/sync_regs_test.c
@@ -0,0 +1,151 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Test for s390x KVM_CAP_SYNC_REGS
+ *
+ * Based on the same test for x86:
+ * Copyright (C) 2018, Google LLC.
+ *
+ * Adaptations for s390x:
+ * Copyright (C) 2019, Red Hat, Inc.
+ *
+ * Test expected behavior of the KVM_CAP_SYNC_REGS functionality.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+
+#define VCPU_ID 5
+
+static void guest_code(void)
+{
+	for (;;) {
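+		/*
+		 * diag 0x501 is a software-breakpoint DIAGNOSE that KVM
+		 * intercepts, so it exits to the host on every iteration;
+		 * "ahi 11,1" increments r11 so the host can observe that
+		 * the guest made progress in between.
+		 */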
+		asm volatile ("diag 0,0,0x501");
+		asm volatile ("ahi 11,1");
+	}
+}
+
+#define REG_COMPARE(reg) \
+	TEST_ASSERT(left->reg == right->reg, \
+		    "Register " #reg \
+		    " values did not match: 0x%llx, 0x%llx\n", \
+		    left->reg, right->reg)
+
+static void compare_regs(struct kvm_regs *left, struct kvm_sync_regs *right)
+{
+	int i;
+
+	for (i = 0; i < 16; i++)
+		REG_COMPARE(gprs[i]);
+}
+
+static void compare_sregs(struct kvm_sregs *left, struct kvm_sync_regs *right)
+{
+	int i;
+
+	for (i = 0; i < 16; i++)
+		REG_COMPARE(acrs[i]);
+
+	for (i = 0; i < 16; i++)
+		REG_COMPARE(crs[i]);
+}
+
+#undef REG_COMPARE
+
+#define TEST_SYNC_FIELDS   (KVM_SYNC_GPRS|KVM_SYNC_ACRS|KVM_SYNC_CRS)
+#define INVALID_SYNC_FIELD 0x80000000
+
+int main(int argc, char *argv[])
+{
+	struct kvm_vm *vm;
+	struct kvm_run *run;
+	struct kvm_regs regs;
+	struct kvm_sregs sregs;
+	int rv, cap;
+
+	/* Tell stdout not to buffer its content */
+	setbuf(stdout, NULL);
+
+	cap = kvm_check_cap(KVM_CAP_SYNC_REGS);
+	if (!cap) {
+		fprintf(stderr, "CAP_SYNC_REGS not supported, skipping test\n");
+		exit(KSFT_SKIP);
+	}
+
+	/* Create VM */
+	vm = vm_create_default(VCPU_ID, 0, guest_code);
+
+	run = vcpu_state(vm, VCPU_ID);
+
+	/* Request and verify all valid register sets. */
+	run->kvm_valid_regs = TEST_SYNC_FIELDS;
+	rv = _vcpu_run(vm, VCPU_ID);
+	TEST_ASSERT(rv == 0, "vcpu_run failed: %d\n", rv);
+	TEST_ASSERT(run->exit_reason == KVM_EXIT_S390_SIEIC,
+		    "Unexpected exit reason: %u (%s)\n",
+		    run->exit_reason,
+		    exit_reason_str(run->exit_reason));
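+	/*
+	 * Interception code 4 is an instruction interception; ipa/ipb
+	 * hold the intercepted instruction, i.e. the 0x83 diag opcode
+	 * and the 0x501 displacement used in guest_code above.
+	 */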
+	TEST_ASSERT(run->s390_sieic.icptcode == 4 &&
+		    (run->s390_sieic.ipa >> 8) == 0x83 &&
+		    (run->s390_sieic.ipb >> 16) == 0x501,
+		    "Unexpected interception code: ic=%u, ipa=0x%x, ipb=0x%x\n",
+		    run->s390_sieic.icptcode, run->s390_sieic.ipa,
+		    run->s390_sieic.ipb);
+
+	vcpu_regs_get(vm, VCPU_ID, &regs);
+	compare_regs(&regs, &run->s.regs);
+
+	vcpu_sregs_get(vm, VCPU_ID, &sregs);
+	compare_sregs(&sregs, &run->s.regs);
+
+	/* Set and verify various register values */
+	run->s.regs.gprs[11] = 0xBAD1DEA;
+	run->s.regs.acrs[0] = 1 << 11;
+
+	run->kvm_valid_regs = TEST_SYNC_FIELDS;
+	run->kvm_dirty_regs = KVM_SYNC_GPRS | KVM_SYNC_ACRS;
+	rv = _vcpu_run(vm, VCPU_ID);
+	TEST_ASSERT(rv == 0, "vcpu_run failed: %d\n", rv);
+	TEST_ASSERT(run->exit_reason == KVM_EXIT_S390_SIEIC,
+		    "Unexpected exit reason: %u (%s)\n",
+		    run->exit_reason,
+		    exit_reason_str(run->exit_reason));
+	TEST_ASSERT(run->s.regs.gprs[11] == 0xBAD1DEA + 1,
+		    "r11 sync regs value incorrect 0x%llx.",
+		    run->s.regs.gprs[11]);
+	TEST_ASSERT(run->s.regs.acrs[0] == 1 << 11,
+		    "acr0 sync regs value incorrect 0x%x.",
+		    run->s.regs.acrs[0]);
+
+	vcpu_regs_get(vm, VCPU_ID, &regs);
+	compare_regs(&regs, &run->s.regs);
+
+	vcpu_sregs_get(vm, VCPU_ID, &sregs);
+	compare_sregs(&sregs, &run->s.regs);
+
+	/*
+	 * Clear kvm_dirty_regs bits, verify new s.regs values are
+	 * overwritten with existing guest values.
+	 */
+	run->kvm_valid_regs = TEST_SYNC_FIELDS;
+	run->kvm_dirty_regs = 0;
+	run->s.regs.gprs[11] = 0xDEADBEEF;
+	rv = _vcpu_run(vm, VCPU_ID);
+	TEST_ASSERT(rv == 0, "vcpu_run failed: %d\n", rv);
+	TEST_ASSERT(run->exit_reason == KVM_EXIT_S390_SIEIC,
+		    "Unexpected exit reason: %u (%s)\n",
+		    run->exit_reason,
+		    exit_reason_str(run->exit_reason));
+	TEST_ASSERT(run->s.regs.gprs[11] != 0xDEADBEEF,
+		    "r11 sync regs value incorrect 0x%llx.",
+		    run->s.regs.gprs[11]);
+
+	kvm_vm_free(vm);
+
+	return 0;
+}
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 108+ messages in thread

* [PATCH 8/9] KVM: s390: Do not report unusable IDs via KVM_CAP_MAX_VCPU_ID
  2019-05-23 16:43 ` thuth
@ 2019-05-23 16:43   ` thuth
  0 siblings, 0 replies; 108+ messages in thread
From: Thomas Huth @ 2019-05-23 16:43 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

KVM_CAP_MAX_VCPU_ID is currently always reporting KVM_MAX_VCPU_ID on all
architectures. However, on s390x, the number of usable CPUs is determined
at runtime - it depends on the features of the machine the code is
running on. Since we are using the vcpu_id as an index into the SCA
structures that are defined by the hardware (see e.g. the sca_add_vcpu()
function), it is not only the number of CPUs that is limited by the
hardware, but also the range of IDs that we can use.
Thus KVM_CAP_MAX_VCPU_ID must be determined at runtime on s390x, too.
So the handling of KVM_CAP_MAX_VCPU_ID has to be moved from the common
code into the architecture-specific code, and on s390x we have to return
the same value here as for KVM_CAP_MAX_VCPUS.
This problem was discovered with the kvm_create_max_vcpus selftest.
With this change applied, the selftest now passes on s390x, too.
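
Userspace can query both limits with KVM_CHECK_EXTENSION; a minimal
sketch (kvm_fd is just an illustrative variable name here):

	int kvm_fd = open("/dev/kvm", O_RDWR);
	int max_vcpus = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS);
	int max_vcpu_id = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPU_ID);

	/* valid vcpu ids are 0 ... max_vcpu_id - 1, and at most
	 * max_vcpus of them can be created; on s390x both match now */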

Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 arch/mips/kvm/mips.c       | 3 +++
 arch/powerpc/kvm/powerpc.c | 3 +++
 arch/s390/kvm/kvm-s390.c   | 1 +
 arch/x86/kvm/x86.c         | 3 +++
 virt/kvm/arm/arm.c         | 3 +++
 virt/kvm/kvm_main.c        | 2 --
 6 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 6d0517ac18e5..0369f26ab96d 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -1122,6 +1122,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_MAX_VCPUS:
 		r = KVM_MAX_VCPUS;
 		break;
+	case KVM_CAP_MAX_VCPU_ID:
+		r = KVM_MAX_VCPU_ID;
+		break;
 	case KVM_CAP_MIPS_FPU:
 		/* We don't handle systems with inconsistent cpu_has_fpu */
 		r = !!raw_cpu_has_fpu;
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 3393b166817a..aa3a678711be 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -657,6 +657,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_MAX_VCPUS:
 		r = KVM_MAX_VCPUS;
 		break;
+	case KVM_CAP_MAX_VCPU_ID:
+		r = KVM_MAX_VCPU_ID;
+		break;
 #ifdef CONFIG_PPC_BOOK3S_64
 	case KVM_CAP_PPC_GET_SMMU_INFO:
 		r = 1;
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 8d6d75db8de6..871d2e99b156 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -539,6 +539,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		break;
 	case KVM_CAP_NR_VCPUS:
 	case KVM_CAP_MAX_VCPUS:
+	case KVM_CAP_MAX_VCPU_ID:
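+		/* the SCA limits both the number and the range of vcpu ids */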
 		r = KVM_S390_BSCA_CPU_SLOTS;
 		if (!kvm_s390_use_sca_entries())
 			r = KVM_MAX_VCPUS;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 536b78c4af6e..09a07d6a154e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3122,6 +3122,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_MAX_VCPUS:
 		r = KVM_MAX_VCPUS;
 		break;
+	case KVM_CAP_MAX_VCPU_ID:
+		r = KVM_MAX_VCPU_ID;
+		break;
 	case KVM_CAP_PV_MMU:	/* obsolete */
 		r = 0;
 		break;
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 90cedebaeb94..7eeebe5e9da2 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -224,6 +224,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_MAX_VCPUS:
 		r = KVM_MAX_VCPUS;
 		break;
+	case KVM_CAP_MAX_VCPU_ID:
+		r = KVM_MAX_VCPU_ID;
+		break;
 	case KVM_CAP_MSI_DEVID:
 		if (!kvm)
 			r = -EINVAL;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f0d13d9d125d..c09259dd6286 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3146,8 +3146,6 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 	case KVM_CAP_MULTI_ADDRESS_SPACE:
 		return KVM_ADDRESS_SPACE_NUM;
 #endif
-	case KVM_CAP_MAX_VCPU_ID:
-		return KVM_MAX_VCPU_ID;
 	case KVM_CAP_NR_MEMSLOTS:
 		return KVM_USER_MEM_SLOTS;
 	default:
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 108+ messages in thread

* [PATCH 9/9] KVM: selftests: Move kvm_create_max_vcpus test to generic code
  2019-05-23 16:43 ` thuth
@ 2019-05-23 16:43   ` thuth
  0 siblings, 0 replies; 108+ messages in thread
From: Thomas Huth @ 2019-05-23 16:43 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

There is nothing x86-specific in the test apart from VM_MODE_P52V48_4K,
which we can now replace with VM_MODE_DEFAULT. Thus let's move the file
to the main folder and enable it for aarch64 and s390x, too.
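
For illustration, VM_MODE_DEFAULT simply resolves to a per-architecture
mode, roughly like this sketch (the exact values come from the earlier
patches in this series):

	#ifdef __x86_64__
	#define VM_MODE_DEFAULT	VM_MODE_P52V48_4K
	#elif defined(__aarch64__)
	#define VM_MODE_DEFAULT	VM_MODE_P40V48_4K
	#endif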

Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 tools/testing/selftests/kvm/Makefile                          | 4 +++-
 .../testing/selftests/kvm/{x86_64 => }/kvm_create_max_vcpus.c | 3 ++-
 2 files changed, 5 insertions(+), 2 deletions(-)
 rename tools/testing/selftests/kvm/{x86_64 => }/kvm_create_max_vcpus.c (93%)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index d8beb990c8f4..aef5bd1166cf 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -21,15 +21,17 @@ TEST_GEN_PROGS_x86_64 += x86_64/evmcs_test
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_close_while_nested_test
 TEST_GEN_PROGS_x86_64 += x86_64/smm_test
-TEST_GEN_PROGS_x86_64 += x86_64/kvm_create_max_vcpus
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_set_nested_state_test
+TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
 TEST_GEN_PROGS_x86_64 += dirty_log_test
 TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
 
 TEST_GEN_PROGS_aarch64 += dirty_log_test
 TEST_GEN_PROGS_aarch64 += clear_dirty_log_test
+TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
 
 TEST_GEN_PROGS_s390x += s390x/sync_regs_test
+TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
 
 TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(UNAME_M))
 LIBKVM += $(LIBKVM_$(UNAME_M))
diff --git a/tools/testing/selftests/kvm/x86_64/kvm_create_max_vcpus.c b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
similarity index 93%
rename from tools/testing/selftests/kvm/x86_64/kvm_create_max_vcpus.c
rename to tools/testing/selftests/kvm/kvm_create_max_vcpus.c
index 50e92996f918..db78ce07c416 100644
--- a/tools/testing/selftests/kvm/x86_64/kvm_create_max_vcpus.c
+++ b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
@@ -1,3 +1,4 @@
+// SPDX-License-Identifier: GPL-2.0-only
 /*
  * kvm_create_max_vcpus
  *
@@ -28,7 +29,7 @@ void test_vcpu_creation(int first_vcpu_id, int num_vcpus)
 	printf("Testing creating %d vCPUs, with IDs %d...%d.\n",
 	       num_vcpus, first_vcpu_id, first_vcpu_id + num_vcpus - 1);
 
-	vm = vm_create(VM_MODE_P52V48_4K, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
 
 	for (i = 0; i < num_vcpus; i++) {
 		int vcpu_id = first_vcpu_id + i;
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 108+ messages in thread

* Re: [PATCH 4/9] KVM: selftests: Introduce a VM_MODE_DEFAULT macro for the default bits
  2019-05-23 16:43   ` thuth
@ 2019-05-23 17:20     ` drjones
  0 siblings, 0 replies; 108+ messages in thread
From: Andrew Jones @ 2019-05-23 17:20 UTC (permalink / raw)
  To: Thomas Huth
  Cc: Christian Borntraeger, Janosch Frank, kvm, Paolo Bonzini,
	Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, linux-kernel,
	linux-kselftest, linux-s390

On Thu, May 23, 2019 at 06:43:04PM +0200, Thomas Huth wrote:
> This will be required later for tests like the kvm_create_max_vcpus
> test that do not use the vm_create_default() function.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  tools/testing/selftests/kvm/include/kvm_util.h      | 6 ++++++
>  tools/testing/selftests/kvm/lib/aarch64/processor.c | 2 +-
>  tools/testing/selftests/kvm/lib/x86_64/processor.c  | 2 +-
>  3 files changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index b8bf961074fe..b6eb6471e6b2 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -43,6 +43,12 @@ enum vm_guest_mode {
>  	NUM_VM_MODES,
>  };
>  
> +#ifdef __aarch64__
> +#define VM_MODE_DEFAULT VM_MODE_P40V48_4K
> +#else
> +#define VM_MODE_DEFAULT VM_MODE_P52V48_4K
> +#endif
> +
>  #define vm_guest_mode_string(m) vm_guest_mode_string[m]
>  extern const char * const vm_guest_mode_string[];
>  
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> index fa6cd340137c..596ccaf09cb6 100644
> --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> @@ -226,7 +226,7 @@ struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
>  	uint64_t extra_pg_pages = (extra_mem_pages / ptrs_per_4k_pte) * 2;
>  	struct kvm_vm *vm;
>  
> -	vm = vm_create(VM_MODE_P40V48_4K, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
> +	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
>  
>  	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
>  	vm_vcpu_add_default(vm, vcpuid, guest_code);
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index dc7fae9fa424..bb38bbcefac5 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -823,7 +823,7 @@ struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
>  	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
>  
>  	/* Create VM */
> -	vm = vm_create(VM_MODE_P52V48_4K,
> +	vm = vm_create(VM_MODE_DEFAULT,
>  		       DEFAULT_GUEST_PHY_PAGES + extra_pg_pages,
>  		       O_RDWR);
>  
> -- 
> 2.21.0
>

Reviewed-by: Andrew Jones <drjones@redhat.com>

^ permalink raw reply	[flat|nested] 108+ messages in thread
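
[Editor's note: the value of the macro is that each architecture picks
its default guest mode in exactly one place, and generic tests never
mention a concrete mode again. A sketch of how a further architecture
would hook in - the s390x branch below is hypothetical and not part of
this series:]

	#ifdef __aarch64__
	#define VM_MODE_DEFAULT	VM_MODE_P40V48_4K
	#elif defined(__s390x__)
	#define VM_MODE_DEFAULT	VM_MODE_P52V48_4K	/* hypothetical s390x choice */
	#else
	#define VM_MODE_DEFAULT	VM_MODE_P52V48_4K
	#endif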

* Re: [PATCH 5/9] KVM: selftests: Align memory region addresses to 1M on s390x
  2019-05-23 16:43   ` thuth
@ 2019-05-23 17:40     ` drjones
  0 siblings, 0 replies; 108+ messages in thread
From: Andrew Jones @ 2019-05-23 17:40 UTC (permalink / raw)
  To: Thomas Huth
  Cc: Christian Borntraeger, Janosch Frank, kvm, Paolo Bonzini,
	Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, linux-kernel,
	linux-kselftest, linux-s390

On Thu, May 23, 2019 at 06:43:05PM +0200, Thomas Huth wrote:
> On s390x, there is a constraint that memory regions have to be aligned
> to 1M (or running the VM will fail). Introduce a new "alignment" variable
> in the vm_userspace_mem_region_add() function which can now be used for
> both huge page and s390x alignment requirements.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c | 21 ++++++++++++++++-----
>  1 file changed, 16 insertions(+), 5 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 08edb8436c47..656df9d5cd4d 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -559,6 +559,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>  	unsigned long pmem_size = 0;
>  	struct userspace_mem_region *region;
>  	size_t huge_page_size = KVM_UTIL_PGS_PER_HUGEPG * vm->page_size;
> +	size_t alignment;
>  
>  	TEST_ASSERT((guest_paddr % vm->page_size) == 0, "Guest physical "
>  		"address not on a page boundary.\n"
> @@ -608,9 +609,20 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>  	TEST_ASSERT(region != NULL, "Insufficient Memory");
>  	region->mmap_size = npages * vm->page_size;
>  
> -	/* Enough memory to align up to a huge page. */
> +#ifdef __s390x__
> +	/* On s390x, the host address must be aligned to 1M (due to PGSTEs) */
> +	alignment = 0x100000;
> +#else
> +	alignment = 1;
> +#endif
> +
>  	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
> -		region->mmap_size += huge_page_size;
> +		alignment = huge_page_size;

I guess s390x won't ever support VM_MEM_SRC_ANONYMOUS_THP? If it does,
then we need 'alignment = max(huge_page_size, alignment)'. Actually
that might be a nice way to write this anyway for future-proofing.

> +
> +	/* Add enough memory to align up if necessary */
> +	if (alignment > 1)
> +		region->mmap_size += alignment;
> +
>  	region->mmap_start = mmap(NULL, region->mmap_size,
>  				  PROT_READ | PROT_WRITE,
>  				  MAP_PRIVATE | MAP_ANONYMOUS
> @@ -620,9 +632,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>  		    "test_malloc failed, mmap_start: %p errno: %i",
>  		    region->mmap_start, errno);
>  
> -	/* Align THP allocation up to start of a huge page. */
> -	region->host_mem = align(region->mmap_start,
> -				 src_type == VM_MEM_SRC_ANONYMOUS_THP ?  huge_page_size : 1);
> +	/* Align host address */
> +	region->host_mem = align(region->mmap_start, alignment);
>  
>  	/* As needed perform madvise */
>  	if (src_type == VM_MEM_SRC_ANONYMOUS || src_type == VM_MEM_SRC_ANONYMOUS_THP) {
> -- 
> 2.21.0
> 

^ permalink raw reply	[flat|nested] 108+ messages in thread
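
[Editor's note: the over-allocate-then-align idiom that the patch
generalizes, as a self-contained sketch. align_up() mirrors the align()
helper in kvm_util.c; the power-of-two requirement is an assumption of
this sketch, but both 1M and the THP huge page size satisfy it:]

	#include <assert.h>
	#include <stddef.h>
	#include <stdint.h>
	#include <sys/mman.h>

	static void *align_up(void *x, size_t size)
	{
		size_t mask = size - 1;

		assert(size != 0 && (size & mask) == 0);	/* power of two */
		return (void *)(((uintptr_t)x + mask) & ~mask);
	}

	/*
	 * Map "size + alignment" bytes so that a pointer aligned to
	 * "alignment" with "size" usable bytes always fits inside.
	 */
	static void *mmap_aligned(size_t size, size_t alignment)
	{
		void *p = mmap(NULL, size + alignment, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		return p == MAP_FAILED ? NULL : align_up(p, alignment);
	}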

* Re: [PATCH 8/9] KVM: s390: Do not report unusable IDs via KVM_CAP_MAX_VCPU_ID
  2019-05-23 16:43   ` thuth
@ 2019-05-23 17:56     ` drjones
  0 siblings, 0 replies; 108+ messages in thread
From: Andrew Jones @ 2019-05-23 17:56 UTC (permalink / raw)
  To: Thomas Huth
  Cc: Christian Borntraeger, Janosch Frank, kvm, Paolo Bonzini,
	Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, linux-kernel,
	linux-kselftest, linux-s390

On Thu, May 23, 2019 at 06:43:08PM +0200, Thomas Huth wrote:
> KVM_CAP_MAX_VCPU_ID is currently always reporting KVM_MAX_VCPU_ID on all
> architectures. However, on s390x, the number of usable CPUs is determined
> at runtime - it depends on the features of the machine the code is
> running on. Since we are using the vcpu_id as an index into the SCA
> structures that are defined by the hardware (see e.g. the sca_add_vcpu()
> function), it is not only the number of CPUs that is limited by the
> hardware, but also the range of IDs that we can use.
> Thus KVM_CAP_MAX_VCPU_ID must be determined at runtime on s390x, too.
> So the handling of KVM_CAP_MAX_VCPU_ID has to be moved from the common
> code into the architecture-specific code, and on s390x we have to return
> the same value here as for KVM_CAP_MAX_VCPUS.
> This problem was discovered with the kvm_create_max_vcpus selftest.
> With this change applied, the selftest now passes on s390x, too.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  arch/mips/kvm/mips.c       | 3 +++
>  arch/powerpc/kvm/powerpc.c | 3 +++
>  arch/s390/kvm/kvm-s390.c   | 1 +
>  arch/x86/kvm/x86.c         | 3 +++
>  virt/kvm/arm/arm.c         | 3 +++
>  virt/kvm/kvm_main.c        | 2 --
>  6 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index 6d0517ac18e5..0369f26ab96d 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -1122,6 +1122,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  	case KVM_CAP_MIPS_FPU:
>  		/* We don't handle systems with inconsistent cpu_has_fpu */
>  		r = !!raw_cpu_has_fpu;
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 3393b166817a..aa3a678711be 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -657,6 +657,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  #ifdef CONFIG_PPC_BOOK3S_64
>  	case KVM_CAP_PPC_GET_SMMU_INFO:
>  		r = 1;
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 8d6d75db8de6..871d2e99b156 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -539,6 +539,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  		break;
>  	case KVM_CAP_NR_VCPUS:
>  	case KVM_CAP_MAX_VCPUS:
> +	case KVM_CAP_MAX_VCPU_ID:
>  		r = KVM_S390_BSCA_CPU_SLOTS;
>  		if (!kvm_s390_use_sca_entries())
>  			r = KVM_MAX_VCPUS;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 536b78c4af6e..09a07d6a154e 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3122,6 +3122,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  	case KVM_CAP_PV_MMU:	/* obsolete */
>  		r = 0;
>  		break;
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 90cedebaeb94..7eeebe5e9da2 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -224,6 +224,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  	case KVM_CAP_MSI_DEVID:
>  		if (!kvm)
>  			r = -EINVAL;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index f0d13d9d125d..c09259dd6286 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -3146,8 +3146,6 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
>  	case KVM_CAP_MULTI_ADDRESS_SPACE:
>  		return KVM_ADDRESS_SPACE_NUM;
>  #endif
> -	case KVM_CAP_MAX_VCPU_ID:
> -		return KVM_MAX_VCPU_ID;
>  	case KVM_CAP_NR_MEMSLOTS:
>  		return KVM_USER_MEM_SLOTS;
>  	default:
> -- 
> 2.21.0
>

Reviewed-by: Andrew Jones <drjones@redhat.com>

^ permalink raw reply	[flat|nested] 108+ messages in thread
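
[Editor's note: the user-visible effect is that KVM_CAP_MAX_VCPU_ID is
now answered by the architecture code, and on s390x it matches
KVM_CAP_MAX_VCPUS instead of the compile-time KVM_MAX_VCPU_ID. A minimal
sketch of how userspace queries both capabilities (the printed values
are machine-dependent):]

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	int main(void)
	{
		int kvm = open("/dev/kvm", O_RDWR);

		if (kvm < 0) {
			perror("open /dev/kvm");
			return 1;
		}
		printf("KVM_CAP_MAX_VCPUS:   %d\n",
		       ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS));
		printf("KVM_CAP_MAX_VCPU_ID: %d\n",
		       ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPU_ID));
		return 0;
	}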

* Re: [PATCH 9/9] KVM: selftests: Move kvm_create_max_vcpus test to generic code
  2019-05-23 16:43   ` thuth
@ 2019-05-23 17:56     ` drjones
  0 siblings, 0 replies; 108+ messages in thread
From: Andrew Jones @ 2019-05-23 17:56 UTC (permalink / raw)
  To: Thomas Huth
  Cc: Christian Borntraeger, Janosch Frank, kvm, Paolo Bonzini,
	Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, linux-kernel,
	linux-kselftest, linux-s390

On Thu, May 23, 2019 at 06:43:09PM +0200, Thomas Huth wrote:
> There is nothing x86-specific in the test apart from its use of
> VM_MODE_P52V48_4K, which we can now replace with VM_MODE_DEFAULT. Thus
> let's move the file to the main folder and enable it for aarch64 and
> s390x, too.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  tools/testing/selftests/kvm/Makefile                          | 4 +++-
>  .../testing/selftests/kvm/{x86_64 => }/kvm_create_max_vcpus.c | 3 ++-
>  2 files changed, 5 insertions(+), 2 deletions(-)
>  rename tools/testing/selftests/kvm/{x86_64 => }/kvm_create_max_vcpus.c (93%)
> 
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index d8beb990c8f4..aef5bd1166cf 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -21,15 +21,17 @@ TEST_GEN_PROGS_x86_64 += x86_64/evmcs_test
>  TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
>  TEST_GEN_PROGS_x86_64 += x86_64/vmx_close_while_nested_test
>  TEST_GEN_PROGS_x86_64 += x86_64/smm_test
> -TEST_GEN_PROGS_x86_64 += x86_64/kvm_create_max_vcpus
>  TEST_GEN_PROGS_x86_64 += x86_64/vmx_set_nested_state_test
> +TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
>  TEST_GEN_PROGS_x86_64 += dirty_log_test
>  TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
>  
>  TEST_GEN_PROGS_aarch64 += dirty_log_test
>  TEST_GEN_PROGS_aarch64 += clear_dirty_log_test
> +TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
>  
>  TEST_GEN_PROGS_s390x += s390x/sync_regs_test
> +TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
>  
>  TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(UNAME_M))
>  LIBKVM += $(LIBKVM_$(UNAME_M))
> diff --git a/tools/testing/selftests/kvm/x86_64/kvm_create_max_vcpus.c b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
> similarity index 93%
> rename from tools/testing/selftests/kvm/x86_64/kvm_create_max_vcpus.c
> rename to tools/testing/selftests/kvm/kvm_create_max_vcpus.c
> index 50e92996f918..db78ce07c416 100644
> --- a/tools/testing/selftests/kvm/x86_64/kvm_create_max_vcpus.c
> +++ b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
> @@ -1,3 +1,4 @@
> +// SPDX-License-Identifier: GPL-2.0-only
>  /*
>   * kvm_create_max_vcpus
>   *
> @@ -28,7 +29,7 @@ void test_vcpu_creation(int first_vcpu_id, int num_vcpus)
>  	printf("Testing creating %d vCPUs, with IDs %d...%d.\n",
>  	       num_vcpus, first_vcpu_id, first_vcpu_id + num_vcpus - 1);
>  
> -	vm = vm_create(VM_MODE_P52V48_4K, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
> +	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
>  
>  	for (i = 0; i < num_vcpus; i++) {
>  		int vcpu_id = first_vcpu_id + i;
> -- 
> 2.21.0
>

Reviewed-by: Andrew Jones <drjones@redhat.com>

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH 2/9] KVM: selftests: Guard struct kvm_vcpu_events with __KVM_HAVE_VCPU_EVENTS
  2019-05-23 16:43   ` thuth
@ 2019-05-23 17:57     ` drjones
  0 siblings, 0 replies; 108+ messages in thread
From: Andrew Jones @ 2019-05-23 17:57 UTC (permalink / raw)
  To: Thomas Huth
  Cc: Christian Borntraeger, Janosch Frank, kvm, Paolo Bonzini,
	Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, linux-kernel,
	linux-kselftest, linux-s390

On Thu, May 23, 2019 at 06:43:02PM +0200, Thomas Huth wrote:
> The struct kvm_vcpu_events code is only available on certain architectures
> (arm, arm64 and x86). To be able to compile kvm_util.c for other
> architectures as well, we have to guard the code with __KVM_HAVE_VCPU_EVENTS.
> 
> Reviewed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  tools/testing/selftests/kvm/include/kvm_util.h | 2 ++
>  tools/testing/selftests/kvm/lib/kvm_util.c     | 2 ++
>  2 files changed, 4 insertions(+)
> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index a5a4b28f14d8..b8bf961074fe 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -114,10 +114,12 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>  		    struct kvm_sregs *sregs);
>  int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>  		    struct kvm_sregs *sregs);
> +#ifdef __KVM_HAVE_VCPU_EVENTS
>  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
>  		     struct kvm_vcpu_events *events);
>  void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
>  		     struct kvm_vcpu_events *events);
> +#endif
>  #ifdef __x86_64__
>  void vcpu_nested_state_get(struct kvm_vm *vm, uint32_t vcpuid,
>  			   struct kvm_nested_state *state);
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index ba1359ac504f..08edb8436c47 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1224,6 +1224,7 @@ void vcpu_regs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_regs *regs)
>  		ret, errno);
>  }
>  
> +#ifdef __KVM_HAVE_VCPU_EVENTS
>  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
>  		     struct kvm_vcpu_events *events)
>  {
> @@ -1249,6 +1250,7 @@ void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
>  	TEST_ASSERT(ret == 0, "KVM_SET_VCPU_EVENTS, failed, rc: %i errno: %i",
>  		ret, errno);
>  }
> +#endif
>  
>  #ifdef __x86_64__
>  void vcpu_nested_state_get(struct kvm_vm *vm, uint32_t vcpuid,
> -- 
> 2.21.0
>

Reviewed-by: Andrew Jones <drjones@redhat.com>

^ permalink raw reply	[flat|nested] 108+ messages in thread
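
[Editor's note: __KVM_HAVE_VCPU_EVENTS comes from the per-architecture
uapi headers (arch/*/include/uapi/asm/kvm.h), pulled in via
<linux/kvm.h>, so the same guard works in any file that includes the KVM
uapi. A sketch of the pattern for further arch-dependent helpers:]

	#include <linux/kvm.h>	/* pulls in <asm/kvm.h> for the arch */

	#ifdef __KVM_HAVE_VCPU_EVENTS
	/* only declared where the ioctls and struct kvm_vcpu_events exist */
	void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
			     struct kvm_vcpu_events *events);
	#endif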

* Re: [PATCH 5/9] KVM: selftests: Align memory region addresses to 1M on s390x
  2019-05-23 17:40     ` drjones
@ 2019-05-24  8:29       ` borntraeger
  0 siblings, 0 replies; 108+ messages in thread
From: Christian Borntraeger @ 2019-05-24  8:29 UTC (permalink / raw)
  To: Andrew Jones, Thomas Huth
  Cc: Janosch Frank, kvm, Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, linux-kernel,
	linux-kselftest, linux-s390



On 23.05.19 19:40, Andrew Jones wrote:
> On Thu, May 23, 2019 at 06:43:05PM +0200, Thomas Huth wrote:
>> On s390x, there is a constraint that memory regions have to be aligned
>> to 1M (or running the VM will fail). Introduce a new "alignment" variable
>> in the vm_userspace_mem_region_add() function which can now be used for
>> both huge page and s390x alignment requirements.
>>
>> Signed-off-by: Thomas Huth <thuth@redhat.com>
>> ---
>>  tools/testing/selftests/kvm/lib/kvm_util.c | 21 ++++++++++++++++-----
>>  1 file changed, 16 insertions(+), 5 deletions(-)
>>
>> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
>> index 08edb8436c47..656df9d5cd4d 100644
>> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
>> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
>> @@ -559,6 +559,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>>  	unsigned long pmem_size = 0;
>>  	struct userspace_mem_region *region;
>>  	size_t huge_page_size = KVM_UTIL_PGS_PER_HUGEPG * vm->page_size;
>> +	size_t alignment;
>>  
>>  	TEST_ASSERT((guest_paddr % vm->page_size) == 0, "Guest physical "
>>  		"address not on a page boundary.\n"
>> @@ -608,9 +609,20 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>>  	TEST_ASSERT(region != NULL, "Insufficient Memory");
>>  	region->mmap_size = npages * vm->page_size;
>>  
>> -	/* Enough memory to align up to a huge page. */
>> +#ifdef __s390x__
>> +	/* On s390x, the host address must be aligned to 1M (due to PGSTEs) */
>> +	alignment = 0x100000;
>> +#else
>> +	alignment = 1;
>> +#endif
>> +
>>  	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
>> -		region->mmap_size += huge_page_size;
>> +		alignment = huge_page_size;
> 
> I guess s390x won't ever support VM_MEM_SRC_ANONYMOUS_THP? If it does,
> then we need 'alignment = max(huge_page_size, alignment)'. Actually
> that might be a nice way to write this anyway for future-proofing.

I can do 
-		alignment = huge_page_size;
+		alignment = max(huge_page_size, alignment);

when applying.


> 
>> +
>> +	/* Add enough memory to align up if necessary */
>> +	if (alignment > 1)
>> +		region->mmap_size += alignment;
>> +
>>  	region->mmap_start = mmap(NULL, region->mmap_size,
>>  				  PROT_READ | PROT_WRITE,
>>  				  MAP_PRIVATE | MAP_ANONYMOUS
>> @@ -620,9 +632,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>>  		    "test_malloc failed, mmap_start: %p errno: %i",
>>  		    region->mmap_start, errno);
>>  
>> -	/* Align THP allocation up to start of a huge page. */
>> -	region->host_mem = align(region->mmap_start,
>> -				 src_type == VM_MEM_SRC_ANONYMOUS_THP ?  huge_page_size : 1);
>> +	/* Align host address */
>> +	region->host_mem = align(region->mmap_start, alignment);
>>  
>>  	/* As needed perform madvise */
>>  	if (src_type == VM_MEM_SRC_ANONYMOUS || src_type == VM_MEM_SRC_ANONYMOUS_THP) {
>> -- 
>> 2.21.0
>>
> 


^ permalink raw reply	[flat|nested] 108+ messages in thread
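
[Editor's note: the max() used above is presumably the macro from
tools/include/linux/kernel.h, which the selftests build already provides.
A tree without it could use a local fallback like the following - note
that it evaluates its arguments twice, which is harmless for the two
plain variables here:]

	#ifndef max
	#define max(a, b)	((a) > (b) ? (a) : (b))
	#endif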

* Re:  [PATCH 3/9] kvm: selftests: aarch64: fix default vm mode
  2019-05-23 16:43   ` thuth
  (?)
@ 2019-05-24  8:37     ` borntraeger
  -1 siblings, 0 replies; 108+ messages in thread
From: Christian Borntraeger @ 2019-05-24  8:37 UTC (permalink / raw)
  To: Thomas Huth, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390



On 23.05.19 18:43, Thomas Huth wrote:
> From: Andrew Jones <drjones@redhat.com>
> 
> VM_MODE_P52V48_4K is not a valid mode for AArch64. Replace its
> use in vm_create_default() with a mode that works and represents
> a good AArch64 default. (We never saw a problem with this
> because we don't have any unit tests using vm_create_default(),
> but it's good to get it fixed in advance.)
> 
> Reported-by: Thomas Huth <thuth@redhat.com>
> Signed-off-by: Andrew Jones <drjones@redhat.com>

I will add Thomas' Signed-off-by here as well.


> ---
>  tools/testing/selftests/kvm/lib/aarch64/processor.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> index e8c42506a09d..fa6cd340137c 100644
> --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> @@ -226,7 +226,7 @@ struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
>  	uint64_t extra_pg_pages = (extra_mem_pages / ptrs_per_4k_pte) * 2;
>  	struct kvm_vm *vm;
>  
> -	vm = vm_create(VM_MODE_P52V48_4K, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
> +	vm = vm_create(VM_MODE_P40V48_4K, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
>  
>  	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
>  	vm_vcpu_add_default(vm, vcpuid, guest_code);
> 


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH 8/9] KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID
  2019-05-23 16:43   ` thuth
  (?)
@ 2019-05-24  9:13     ` cohuck
  -1 siblings, 0 replies; 108+ messages in thread
From: Cornelia Huck @ 2019-05-24  9:13 UTC (permalink / raw)
  To: Thomas Huth
  Cc: Christian Borntraeger, Janosch Frank, kvm, Paolo Bonzini,
	Radim Krčmář,
	Shuah Khan, David Hildenbrand, Andrew Jones, linux-kernel,
	linux-kselftest, linux-s390

On Thu, 23 May 2019 18:43:08 +0200
Thomas Huth <thuth@redhat.com> wrote:

In the subject: s/unusabled/unusable/

> KVM_CAP_MAX_VCPU_ID is currently always reporting KVM_MAX_VCPU_ID on all
> architectures. However, on s390x, the number of usable CPUs is determined
> at runtime - it depends on the features of the machine the code is
> running on. Since we are using the vcpu_id as an index into the SCA
> structures that are defined by the hardware (see e.g. the sca_add_vcpu()
> function), it is not only the number of CPUs that is limited by the
> hardware, but also the range of IDs that we can use.
> Thus KVM_CAP_MAX_VCPU_ID must be determined during runtime on s390x, too.
> So the handling of KVM_CAP_MAX_VCPU_ID has to be moved from the common
> code into the architecture specific code, and on s390x we have to return
> the same value here as for KVM_CAP_MAX_VCPUS.
> This problem has been discovered with the kvm_create_max_vcpus selftest.
> With this change applied, the selftest now passes on s390x, too.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  arch/mips/kvm/mips.c       | 3 +++
>  arch/powerpc/kvm/powerpc.c | 3 +++
>  arch/s390/kvm/kvm-s390.c   | 1 +
>  arch/x86/kvm/x86.c         | 3 +++
>  virt/kvm/arm/arm.c         | 3 +++
>  virt/kvm/kvm_main.c        | 2 --
>  6 files changed, 13 insertions(+), 2 deletions(-)

Reviewed-by: Cornelia Huck <cohuck@redhat.com>

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH 8/9] KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID
  2019-05-23 16:43   ` thuth
  (?)
@ 2019-05-24  9:16     ` david
  -1 siblings, 0 replies; 108+ messages in thread
From: David Hildenbrand @ 2019-05-24  9:16 UTC (permalink / raw)
  To: Thomas Huth, Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, Cornelia Huck, Andrew Jones, linux-kernel,
	linux-kselftest, linux-s390

On 23.05.19 18:43, Thomas Huth wrote:
> KVM_CAP_MAX_VCPU_ID is currently always reporting KVM_MAX_VCPU_ID on all
> architectures. However, on s390x, the number of usable CPUs is determined
> at runtime - it depends on the features of the machine the code is
> running on. Since we are using the vcpu_id as an index into the SCA
> structures that are defined by the hardware (see e.g. the sca_add_vcpu()
> function), it is not only the number of CPUs that is limited by the
> hardware, but also the range of IDs that we can use.
> Thus KVM_CAP_MAX_VCPU_ID must be determined during runtime on s390x, too.
> So the handling of KVM_CAP_MAX_VCPU_ID has to be moved from the common
> code into the architecture specific code, and on s390x we have to return
> the same value here as for KVM_CAP_MAX_VCPUS.
> This problem has been discovered with the kvm_create_max_vcpus selftest.
> With this change applied, the selftest now passes on s390x, too.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  arch/mips/kvm/mips.c       | 3 +++
>  arch/powerpc/kvm/powerpc.c | 3 +++
>  arch/s390/kvm/kvm-s390.c   | 1 +
>  arch/x86/kvm/x86.c         | 3 +++
>  virt/kvm/arm/arm.c         | 3 +++
>  virt/kvm/kvm_main.c        | 2 --
>  6 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index 6d0517ac18e5..0369f26ab96d 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -1122,6 +1122,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  	case KVM_CAP_MIPS_FPU:
>  		/* We don't handle systems with inconsistent cpu_has_fpu */
>  		r = !!raw_cpu_has_fpu;
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 3393b166817a..aa3a678711be 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -657,6 +657,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  #ifdef CONFIG_PPC_BOOK3S_64
>  	case KVM_CAP_PPC_GET_SMMU_INFO:
>  		r = 1;
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 8d6d75db8de6..871d2e99b156 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -539,6 +539,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  		break;
>  	case KVM_CAP_NR_VCPUS:
>  	case KVM_CAP_MAX_VCPUS:
> +	case KVM_CAP_MAX_VCPU_ID:
>  		r = KVM_S390_BSCA_CPU_SLOTS;
>  		if (!kvm_s390_use_sca_entries())
>  			r = KVM_MAX_VCPUS;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 536b78c4af6e..09a07d6a154e 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3122,6 +3122,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  	case KVM_CAP_PV_MMU:	/* obsolete */
>  		r = 0;
>  		break;
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 90cedebaeb94..7eeebe5e9da2 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -224,6 +224,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  	case KVM_CAP_MSI_DEVID:
>  		if (!kvm)
>  			r = -EINVAL;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index f0d13d9d125d..c09259dd6286 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -3146,8 +3146,6 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
>  	case KVM_CAP_MULTI_ADDRESS_SPACE:
>  		return KVM_ADDRESS_SPACE_NUM;
>  #endif
> -	case KVM_CAP_MAX_VCPU_ID:
> -		return KVM_MAX_VCPU_ID;
>  	case KVM_CAP_NR_MEMSLOTS:
>  		return KVM_USER_MEM_SLOTS;
>  	default:
> 

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 108+ messages in thread
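
For illustration, userspace queries these limits with KVM_CHECK_EXTENSION
on the VM file descriptor; a minimal sketch (error handling omitted, the
prints are just for the example):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);

	/* With this patch, both values come from the arch code */
	int max_vcpus = ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS);
	int max_vcpu_id = ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPU_ID);

	printf("KVM_CAP_MAX_VCPUS=%d KVM_CAP_MAX_VCPU_ID=%d\n",
	       max_vcpus, max_vcpu_id);
	return 0;
}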

* Re: [PATCH 9/9] KVM: selftests: Move kvm_create_max_vcpus test to generic code
  2019-05-23 16:43   ` thuth
  (?)
@ 2019-05-24  9:16     ` david
  -1 siblings, 0 replies; 108+ messages in thread
From: David Hildenbrand @ 2019-05-24  9:16 UTC (permalink / raw)
  To: Thomas Huth, Christian Borntraeger, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, Cornelia Huck, Andrew Jones, linux-kernel,
	linux-kselftest, linux-s390

On 23.05.19 18:43, Thomas Huth wrote:
> There is nothing x86-specific in the test apart from VM_MODE_P52V48_4K,
> which we can now replace with VM_MODE_DEFAULT. Thus let's move the file to
> the main folder and enable it for aarch64 and s390x, too.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  tools/testing/selftests/kvm/Makefile                          | 4 +++-
>  .../testing/selftests/kvm/{x86_64 => }/kvm_create_max_vcpus.c | 3 ++-
>  2 files changed, 5 insertions(+), 2 deletions(-)
>  rename tools/testing/selftests/kvm/{x86_64 => }/kvm_create_max_vcpus.c (93%)
> 
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index d8beb990c8f4..aef5bd1166cf 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -21,15 +21,17 @@ TEST_GEN_PROGS_x86_64 += x86_64/evmcs_test
>  TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
>  TEST_GEN_PROGS_x86_64 += x86_64/vmx_close_while_nested_test
>  TEST_GEN_PROGS_x86_64 += x86_64/smm_test
> -TEST_GEN_PROGS_x86_64 += x86_64/kvm_create_max_vcpus
>  TEST_GEN_PROGS_x86_64 += x86_64/vmx_set_nested_state_test
> +TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
>  TEST_GEN_PROGS_x86_64 += dirty_log_test
>  TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
>  
>  TEST_GEN_PROGS_aarch64 += dirty_log_test
>  TEST_GEN_PROGS_aarch64 += clear_dirty_log_test
> +TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
>  
>  TEST_GEN_PROGS_s390x += s390x/sync_regs_test
> +TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
>  
>  TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(UNAME_M))
>  LIBKVM += $(LIBKVM_$(UNAME_M))
> diff --git a/tools/testing/selftests/kvm/x86_64/kvm_create_max_vcpus.c b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
> similarity index 93%
> rename from tools/testing/selftests/kvm/x86_64/kvm_create_max_vcpus.c
> rename to tools/testing/selftests/kvm/kvm_create_max_vcpus.c
> index 50e92996f918..db78ce07c416 100644
> --- a/tools/testing/selftests/kvm/x86_64/kvm_create_max_vcpus.c
> +++ b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c
> @@ -1,3 +1,4 @@
> +// SPDX-License-Identifier: GPL-2.0-only
>  /*
>   * kvm_create_max_vcpus
>   *
> @@ -28,7 +29,7 @@ void test_vcpu_creation(int first_vcpu_id, int num_vcpus)
>  	printf("Testing creating %d vCPUs, with IDs %d...%d.\n",
>  	       num_vcpus, first_vcpu_id, first_vcpu_id + num_vcpus - 1);
>  
> -	vm = vm_create(VM_MODE_P52V48_4K, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
> +	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
>  
>  	for (i = 0; i < num_vcpus; i++) {
>  		int vcpu_id = first_vcpu_id + i;
> 

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 108+ messages in thread
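
The move works because of the VM_MODE_DEFAULT macro introduced earlier in
the series; conceptually it is just a per-arch define along these lines
(a sketch - the exact values live in the series' kvm_util.h, and the s390x
entry here is an assumption):

#ifdef __x86_64__
#define VM_MODE_DEFAULT	VM_MODE_P52V48_4K
#elif defined(__aarch64__)
#define VM_MODE_DEFAULT	VM_MODE_P40V48_4K
#elif defined(__s390x__)
#define VM_MODE_DEFAULT	VM_MODE_P52V48_4K	/* assumed for this sketch */
#endif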

* [PATCH] KVM: selftests: enable pgste option for the linker on s390
  2019-05-23 16:43 ` thuth
  (?)
@ 2019-05-24 10:33   ` borntraeger
  -1 siblings, 0 replies; 108+ messages in thread
From: Christian Borntraeger @ 2019-05-24 10:33 UTC (permalink / raw)
  To: Janosch Frank
  Cc: KVM, Cornelia Huck, Christian Borntraeger, David Hildenbrand,
	Paolo Bonzini, Radim Krčmář,
	Shuah Khan, Andrew Jones, linux-kselftest, linux-s390

To avoid testcase failures, we need to enable the pgstes. This can be
done with /proc/sys/vm/allocate_pgste or with a linker option that
creates an S390_PGSTE program header.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 tools/testing/selftests/kvm/Makefile | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index aef5bd1166cf..4aac14c1919f 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -44,7 +44,10 @@ CFLAGS += -O2 -g -std=gnu99 -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE
 no-pie-option := $(call try-run, echo 'int main() { return 0; }' | \
         $(CC) -Werror $(KBUILD_CPPFLAGS) $(CC_OPTION_CFLAGS) -no-pie -x c - -o "$$TMP", -no-pie)
 
-LDFLAGS += -pthread $(no-pie-option)
+# On s390, build the testcases KVM-enabled
+pgste-option := $(call cc-ldoption, -Wl$(comma)--s390-pgste)
+
+LDFLAGS += -pthread $(no-pie-option) $(pgste-option)
 
 # After inclusion, $(OUTPUT) is defined and
 # $(TEST_GEN_PROGS) starts with $(OUTPUT)/
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 108+ messages in thread
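
The sysctl route mentioned in the commit message is the run-time
alternative to the linker flag; it has to be flipped before the test
process is started, e.g. from a wrapper like this (a sketch, assuming the
standard procfs layout and sufficient privileges):

#include <stdio.h>

int main(void)
{
	/* Takes effect for processes started after the write */
	FILE *f = fopen("/proc/sys/vm/allocate_pgste", "w");

	if (!f)
		return 1;
	fputs("1\n", f);
	return fclose(f) ? 1 : 0;
}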

* Re:  [PATCH v1 0/9] KVM selftests for s390x
  2019-05-23 16:43 ` thuth
  (?)
@ 2019-05-24 11:11   ` borntraeger
  -1 siblings, 0 replies; 108+ messages in thread
From: Christian Borntraeger @ 2019-05-24 11:11 UTC (permalink / raw)
  To: Thomas Huth, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

I do get

[10400.440298] kvm-s390: failed to commit memory region
[10400.508723] kvm-s390: failed to commit memory region

when running the tests. Will have a look.

On 23.05.19 18:43, Thomas Huth wrote:
> This patch series enables the KVM selftests for s390x. As a first
> test, the sync_regs from x86 has been adapted to s390x, and after
> a fix for KVM_CAP_MAX_VCPU_ID on s390x, the kvm_create_max_vcpus
> is now enabled here, too.
> 
> Please note that the ucall() interface is not used yet - since
> s390x neither has PIO nor MMIO, this needs some more work first
> before it becomes usable (we likely should use a DIAG hypercall
> here, which is what the sync_reg test is currently using, too...
> I started working on that topic, but did not finish that work
> yet, so I decided to not include it yet).
> 
> RFC -> v1:
>  - Rebase, needed to add the first patch for vcpu_nested_state_get/set
>  - Added patch to introduce VM_MODE_DEFAULT macro
>  - Improved/cleaned up the code in processor.c
>  - Added patch to fix KVM_CAP_MAX_VCPU_ID on s390x
>  - Added patch to enable the kvm_create_max_vcpus on s390x and aarch64
> 
> Andrew Jones (1):
>   kvm: selftests: aarch64: fix default vm mode
> 
> Thomas Huth (8):
>   KVM: selftests: Wrap vcpu_nested_state_get/set functions with x86
>     guard
>   KVM: selftests: Guard struct kvm_vcpu_events with
>     __KVM_HAVE_VCPU_EVENTS
>   KVM: selftests: Introduce a VM_MODE_DEFAULT macro for the default bits
>   KVM: selftests: Align memory region addresses to 1M on s390x
>   KVM: selftests: Add processor code for s390x
>   KVM: selftests: Add the sync_regs test for s390x
>   KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID
>   KVM: selftests: Move kvm_create_max_vcpus test to generic code
> 
>  MAINTAINERS                                   |   2 +
>  arch/mips/kvm/mips.c                          |   3 +
>  arch/powerpc/kvm/powerpc.c                    |   3 +
>  arch/s390/kvm/kvm-s390.c                      |   1 +
>  arch/x86/kvm/x86.c                            |   3 +
>  tools/testing/selftests/kvm/Makefile          |   7 +-
>  .../testing/selftests/kvm/include/kvm_util.h  |  10 +
>  .../selftests/kvm/include/s390x/processor.h   |  22 ++
>  .../kvm/{x86_64 => }/kvm_create_max_vcpus.c   |   3 +-
>  .../selftests/kvm/lib/aarch64/processor.c     |   2 +-
>  tools/testing/selftests/kvm/lib/kvm_util.c    |  25 +-
>  .../selftests/kvm/lib/s390x/processor.c       | 286 ++++++++++++++++++
>  .../selftests/kvm/lib/x86_64/processor.c      |   2 +-
>  .../selftests/kvm/s390x/sync_regs_test.c      | 151 +++++++++
>  virt/kvm/arm/arm.c                            |   3 +
>  virt/kvm/kvm_main.c                           |   2 -
>  16 files changed, 514 insertions(+), 11 deletions(-)
>  create mode 100644 tools/testing/selftests/kvm/include/s390x/processor.h
>  rename tools/testing/selftests/kvm/{x86_64 => }/kvm_create_max_vcpus.c (93%)
>  create mode 100644 tools/testing/selftests/kvm/lib/s390x/processor.c
>  create mode 100644 tools/testing/selftests/kvm/s390x/sync_regs_test.c
> 


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re:  [PATCH v1 0/9] KVM selftests for s390x
  2019-05-24 11:11   ` borntraeger
  (?)
@ 2019-05-24 12:17     ` borntraeger
  -1 siblings, 0 replies; 108+ messages in thread
From: Christian Borntraeger @ 2019-05-24 12:17 UTC (permalink / raw)
  To: Thomas Huth, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390



On 24.05.19 13:11, Christian Borntraeger wrote:
> I do get
> 
> [10400.440298] kvm-s390: failed to commit memory region
> [10400.508723] kvm-s390: failed to commit memory region
> 
> when running the tests. Will have a look.

It comes from kvm_vm_free. This calls KVM_SET_USER_MEMORY_REGION with size 0,
which the s390 code does not like.


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re:  [PATCH v1 0/9] KVM selftests for s390x
  2019-05-24 12:17     ` borntraeger
  (?)
@ 2019-05-24 12:29       ` borntraeger
  -1 siblings, 0 replies; 108+ messages in thread
From: Christian Borntraeger @ 2019-05-24 12:29 UTC (permalink / raw)
  To: Thomas Huth, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390



On 24.05.19 14:17, Christian Borntraeger wrote:
> 
> 
> On 24.05.19 13:11, Christian Borntraeger wrote:
>> I do get
>>
>> [10400.440298] kvm-s390: failed to commit memory region
>> [10400.508723] kvm-s390: failed to commit memory region
>>
>> when running the tests. Will have a look.
> 
> It comes from kvm_vm_free. This calls KVM_SET_USER_MEMORY_REGION with size 0,
> which the s390 code does not like.
> 

The doc says this about KVM_SET_USER_MEMORY_REGION:

This ioctl allows the user to create or modify a guest physical memory
slot.  When changing an existing slot, it may be moved in the guest
physical memory space, or its flags may be modified.  --> It may not be
resized. <----

$ strace -f -e trace=ioctl tools/testing/selftests/kvm/s390x/sync_regs_test 
ioctl(3, KVM_CHECK_EXTENSION, KVM_CAP_SYNC_REGS) = 1
ioctl(4, KVM_CHECK_EXTENSION, KVM_CAP_IMMEDIATE_EXIT) = 1
ioctl(3, KVM_CREATE_VM, 0)              = 4
ioctl(4, KVM_SET_USER_MEMORY_REGION, {slot=0, flags=0, guest_phys_addr=0, memory_size=2097152, userspace_addr=0x3ffac500000}) = 0
ioctl(4, KVM_CREATE_VCPU, 5)            = 7
ioctl(8, KVM_GET_VCPU_MMAP_SIZE, 0)     = 4096
ioctl(8, KVM_GET_VCPU_MMAP_SIZE, 0)     = 4096
ioctl(7, KVM_GET_SREGS, 0x3ffef0fdb90)  = 0
ioctl(7, KVM_SET_SREGS, 0x3ffef0fdb90)  = 0
ioctl(7, KVM_GET_REGS, 0x3ffef0fdcf8)   = 0
ioctl(7, KVM_SET_REGS, 0x3ffef0fdcf8)   = 0
ioctl(7, KVM_GET_SREGS, 0x3ffef0fdd78)  = 0
ioctl(7, KVM_SET_SREGS, 0x3ffef0fdd78)  = 0
ioctl(7, KVM_RUN, 0)                    = 0
ioctl(7, KVM_GET_REGS, 0x3ffef0fdf90)   = 0
ioctl(7, KVM_GET_SREGS, 0x3ffef0fe010)  = 0
ioctl(7, KVM_RUN, 0)                    = 0
ioctl(7, KVM_GET_REGS, 0x3ffef0fdf90)   = 0
ioctl(7, KVM_GET_SREGS, 0x3ffef0fe010)  = 0
ioctl(7, KVM_RUN, 0)                    = 0
ioctl(4, KVM_SET_USER_MEMORY_REGION, {slot=0, flags=0, guest_phys_addr=0, memory_size=0, userspace_addr=0x3ffac500000}) = 0
+++ exited with 0 +++

So the testcase is wrong? (I think the s390 code is also not fully correct; will double-check.)


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v1 0/9] KVM selftests for s390x
  2019-05-24 12:29       ` borntraeger
@ 2019-05-24 12:36         ` david
  0 siblings, 0 replies; 108+ messages in thread
From: David Hildenbrand @ 2019-05-24 12:36 UTC (permalink / raw)
  To: Christian Borntraeger, Thomas Huth, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, Cornelia Huck, Andrew Jones, linux-kernel,
	linux-kselftest, linux-s390

On 24.05.19 14:29, Christian Borntraeger wrote:
> 
> 
> On 24.05.19 14:17, Christian Borntraeger wrote:
>>
>>
>> On 24.05.19 13:11, Christian Borntraeger wrote:
>>> I do get
>>>
>>> [10400.440298] kvm-s390: failed to commit memory region
>>> [10400.508723] kvm-s390: failed to commit memory region
>>>
>>> when running the tests. Will have a look.
>>
>> It comes from kvm_vm_free. This calls KVM_SET_USER_MEMORY_REGION with size 0,
>> which the s390 code does not like.
>>
> 
> The doc says about  KVM_SET_USER_MEMORY_REGION:
> 
> This ioctl allows the user to create or modify a guest physical memory
> slot.  When changing an existing slot, it may be moved in the guest
> physical memory space, or its flags may be modified.  --> It may not be
> resized. <----

Size 0 is deleting, not resizing AFAIK.
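
Roughly, the generic code classifies the request like this; a simplified
sketch paraphrasing virt/kvm/kvm_main.c from memory, not a verbatim copy:

/* Simplified sketch (paraphrased, not verbatim kernel code) of how
 * __kvm_set_memory_region picks the change type for a slot update. */
enum kvm_mr_change { KVM_MR_CREATE, KVM_MR_DELETE, KVM_MR_MOVE, KVM_MR_FLAGS_ONLY };

static int classify(unsigned long new_npages, unsigned long new_base_gfn,
		    unsigned int new_flags, unsigned long old_npages,
		    unsigned long old_base_gfn, unsigned int old_flags,
		    enum kvm_mr_change *change)
{
	if (new_npages) {
		if (!old_npages)
			*change = KVM_MR_CREATE;
		else if (new_base_gfn != old_base_gfn)
			*change = KVM_MR_MOVE;
		else if (new_flags != old_flags)
			*change = KVM_MR_FLAGS_ONLY;
		else
			return 0;		/* nothing changed, nothing to do */
	} else {
		if (!old_npages)
			return -1;		/* deleting a non-existent slot */
		*change = KVM_MR_DELETE;	/* size 0 == delete */
	}
	return 1;
}

So an actual resize (both sizes non-zero but different) never shows up as a
change type at all; IIRC it is rejected up front, which is what the "may not
be resized" sentence in the doc is about.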


-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v1 0/9] KVM selftests for s390x
  2019-05-24 12:36         ` david
@ 2019-05-24 12:56           ` borntraeger
  0 siblings, 0 replies; 108+ messages in thread
From: Christian Borntraeger @ 2019-05-24 12:56 UTC (permalink / raw)
  To: David Hildenbrand, Thomas Huth, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, Cornelia Huck, Andrew Jones, linux-kernel,
	linux-kselftest, linux-s390



On 24.05.19 14:36, David Hildenbrand wrote:
> On 24.05.19 14:29, Christian Borntraeger wrote:
>>
>>
>> On 24.05.19 14:17, Christian Borntraeger wrote:
>>>
>>>
>>> On 24.05.19 13:11, Christian Borntraeger wrote:
>>>> I do get
>>>>
>>>> [10400.440298] kvm-s390: failed to commit memory region
>>>> [10400.508723] kvm-s390: failed to commit memory region
>>>>
>>>> when running the tests. Will have a look.
>>>
>>> It comes from kvm_vm_free. This calls KVM_SET_USER_MEMORY_REGION with size 0,
>>> which the s390 code does not like.
>>>
>>
>> The doc says about  KVM_SET_USER_MEMORY_REGION:
>>
>> This ioctl allows the user to create or modify a guest physical memory
>> slot.  When changing an existing slot, it may be moved in the guest
>> physical memory space, or its flags may be modified.  --> It may not be
>> resized. <----
> 
> Size 0 is deleting, not resizing AFAIK.

Right, this seems to translate to KVM_MR_DELETE, which the s390 code does not handle (we
will simply deliver a page fault as we share the last page table level). 
I will have a look at implementing KVM_MR_DELETE and KVM_MR_MOVE. In fact, we should
have a testcase for that as well.
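
Something like the following, perhaps; a hypothetical sketch using raw
ioctls as in the strace above, with error handling omitted:

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

#define SLOT_SIZE (2 * 1024 * 1024)

/* Hypothetical sketch: exercise KVM_MR_MOVE and KVM_MR_DELETE on one slot.
 * vm_fd is a KVM_CREATE_VM fd; in a real s390x test the backing would also
 * have to be rounded up to 1M alignment, as discussed elsewhere in this
 * thread. */
static void exercise_move_and_delete(int vm_fd)
{
	void *mem = mmap(NULL, SLOT_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct kvm_userspace_memory_region r = {
		.slot = 0,
		.flags = 0,
		.guest_phys_addr = 0,
		.memory_size = SLOT_SIZE,
		.userspace_addr = (uint64_t)(uintptr_t)mem,
	};

	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &r);	/* KVM_MR_CREATE */

	r.guest_phys_addr = SLOT_SIZE;			/* move in guest space */
	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &r);	/* KVM_MR_MOVE */

	r.memory_size = 0;				/* delete it again */
	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &r);	/* KVM_MR_DELETE */
}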


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH] KVM: selftests: enable pgste option for the linker on s390
  2019-05-24 10:33   ` borntraeger
@ 2019-05-24 18:16     ` thuth
  0 siblings, 0 replies; 108+ messages in thread
From: Thomas Huth @ 2019-05-24 18:16 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, David Hildenbrand, Paolo Bonzini,
	Radim Krčmář,
	Shuah Khan, Andrew Jones, linux-kselftest, linux-s390

On 24/05/2019 12.33, Christian Borntraeger wrote:
> To avoid testcase failures we need to enable the pgstes. This can be
> done with /proc/sys/vm/allocate_pgste or with a linker option that
> creates an  S390_PGSTE program header.

Oh, right, I initially enabled it on my LPAR via "sysctl
vm.allocate_pgste=1" when I started working on the selftests and then
completely forgot about this... The linker option is certainly the
better way to do this, thanks for the patch!
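
For completeness, a program can also check the global sysctl at runtime;
a minimal sketch with a hypothetical helper name. Note that this only
covers the sysctl knob, not the per-binary S390_PGSTE program header:

#include <stdio.h>

/* Hypothetical helper: returns 1 if vm.allocate_pgste is enabled
 * system-wide, 0 if it is off, -1 if the sysctl cannot be read
 * (e.g. on non-s390 systems, where the file does not exist). */
static int pgste_sysctl_enabled(void)
{
	FILE *f = fopen("/proc/sys/vm/allocate_pgste", "r");
	int val = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%d", &val) != 1)
		val = -1;
	fclose(f);
	return val > 0 ? 1 : val;
}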

Reviewed-by: Thomas Huth <thuth@redhat.com>

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH 5/9] KVM: selftests: Align memory region addresses to 1M on s390x
  2019-05-24  8:29       ` borntraeger
@ 2019-05-24 18:17         ` thuth
  0 siblings, 0 replies; 108+ messages in thread
From: Thomas Huth @ 2019-05-24 18:17 UTC (permalink / raw)
  To: Christian Borntraeger, Andrew Jones
  Cc: Janosch Frank, kvm, Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, linux-kernel,
	linux-kselftest, linux-s390

On 24/05/2019 10.29, Christian Borntraeger wrote:
> 
> 
> On 23.05.19 19:40, Andrew Jones wrote:
>> On Thu, May 23, 2019 at 06:43:05PM +0200, Thomas Huth wrote:
>>> On s390x, there is a constraint that memory regions have to be aligned
>>> to 1M (or running the VM will fail). Introduce a new "alignment" variable
>>> in the vm_userspace_mem_region_add() function which now can be used for
>>> both, huge page and s390x alignment requirements.
>>>
>>> Signed-off-by: Thomas Huth <thuth@redhat.com>
>>> ---
>>>  tools/testing/selftests/kvm/lib/kvm_util.c | 21 ++++++++++++++++-----
>>>  1 file changed, 16 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
>>> index 08edb8436c47..656df9d5cd4d 100644
>>> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
>>> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
>>> @@ -559,6 +559,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>>>  	unsigned long pmem_size = 0;
>>>  	struct userspace_mem_region *region;
>>>  	size_t huge_page_size = KVM_UTIL_PGS_PER_HUGEPG * vm->page_size;
>>> +	size_t alignment;
>>>  
>>>  	TEST_ASSERT((guest_paddr % vm->page_size) == 0, "Guest physical "
>>>  		"address not on a page boundary.\n"
>>> @@ -608,9 +609,20 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>>>  	TEST_ASSERT(region != NULL, "Insufficient Memory");
>>>  	region->mmap_size = npages * vm->page_size;
>>>  
>>> -	/* Enough memory to align up to a huge page. */
>>> +#ifdef __s390x__
>>> +	/* On s390x, the host address must be aligned to 1M (due to PGSTEs) */
>>> +	alignment = 0x100000;
>>> +#else
>>> +	alignment = 1;
>>> +#endif
>>> +
>>>  	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
>>> -		region->mmap_size += huge_page_size;
>>> +		alignment = huge_page_size;
>>
>> I guess s390x won't ever support VM_MEM_SRC_ANONYMOUS_THP? If it does,
>> then we need 'alignment = max(huge_page_size, alignment)'. Actually
>> that might be a nice way to write this anyway for future-proofing.
> 
> I can do 
> -		alignment = huge_page_size;
> +		alignment = max(huge_page_size, alignment);
> 
> when applying.

Yes, please, that's certainly cleaner this way.
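
For reference, the whole alignment dance then amounts to over-allocating
by the alignment and rounding the mapped address up; a simplified sketch,
not the exact kvm_util.c code:

#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>

/* Simplified sketch of the technique in vm_userspace_mem_region_add():
 * mmap a bit more than needed, then round the start up so the host
 * mapping meets the 1M (s390x) or huge-page alignment requirement.
 * Assumes alignment is a power of two; unlike the real code, this
 * sketch does not keep mmap_start/mmap_size around for the munmap. */
static void *mmap_aligned(size_t size, size_t alignment)
{
	uint8_t *start = mmap(NULL, size + alignment,
			      PROT_READ | PROT_WRITE,
			      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (start == MAP_FAILED)
		return NULL;
	return (void *)(((uintptr_t)start + alignment - 1) &
			~((uintptr_t)alignment - 1));
}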

 Thanks,
  Thomas

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re:  [PATCH v1 0/9] KVM selftests for s390x
  2019-05-23 16:43 ` thuth
@ 2019-05-24 18:33   ` borntraeger
  0 siblings, 0 replies; 108+ messages in thread
From: Christian Borntraeger @ 2019-05-24 18:33 UTC (permalink / raw)
  To: Thomas Huth, Janosch Frank, kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

I have now queued every patch in the kselftest branch of the kvms390 tree on kernel.org.
I will push it out to next as soon as I have some acks/nacks on the
"KVM: s390: fix memory slot handling for KVM_SET_USER_MEMORY_REGION"
patch.

On 23.05.19 18:43, Thomas Huth wrote:
> This patch series enables the KVM selftests for s390x. As a first
> test, the sync_regs from x86 has been adapted to s390x, and after
> a fix for KVM_CAP_MAX_VCPU_ID on s390x, the kvm_create_max_vcpus
> is now enabled here, too.
> 
> Please note that the ucall() interface is not used yet - since
> s390x neither has PIO nor MMIO, this needs some more work first
> before it becomes usable (we likely should use a DIAG hypercall
> here, which is what the sync_reg test is currently using, too...
> I started working on that topic, but did not finish that work
> yet, so I decided to not include it yet).
> 
> RFC -> v1:
>  - Rebase, needed to add the first patch for vcpu_nested_state_get/set
>  - Added patch to introduce VM_MODE_DEFAULT macro
>  - Improved/cleaned up the code in processor.c
>  - Added patch to fix KVM_CAP_MAX_VCPU_ID on s390x
>  - Added patch to enable the kvm_create_max_vcpus on s390x and aarch64
> 
> Andrew Jones (1):
>   kvm: selftests: aarch64: fix default vm mode
> 
> Thomas Huth (8):
>   KVM: selftests: Wrap vcpu_nested_state_get/set functions with x86
>     guard
>   KVM: selftests: Guard struct kvm_vcpu_events with
>     __KVM_HAVE_VCPU_EVENTS
>   KVM: selftests: Introduce a VM_MODE_DEFAULT macro for the default bits
>   KVM: selftests: Align memory region addresses to 1M on s390x
>   KVM: selftests: Add processor code for s390x
>   KVM: selftests: Add the sync_regs test for s390x
>   KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID
>   KVM: selftests: Move kvm_create_max_vcpus test to generic code
> 
>  MAINTAINERS                                   |   2 +
>  arch/mips/kvm/mips.c                          |   3 +
>  arch/powerpc/kvm/powerpc.c                    |   3 +
>  arch/s390/kvm/kvm-s390.c                      |   1 +
>  arch/x86/kvm/x86.c                            |   3 +
>  tools/testing/selftests/kvm/Makefile          |   7 +-
>  .../testing/selftests/kvm/include/kvm_util.h  |  10 +
>  .../selftests/kvm/include/s390x/processor.h   |  22 ++
>  .../kvm/{x86_64 => }/kvm_create_max_vcpus.c   |   3 +-
>  .../selftests/kvm/lib/aarch64/processor.c     |   2 +-
>  tools/testing/selftests/kvm/lib/kvm_util.c    |  25 +-
>  .../selftests/kvm/lib/s390x/processor.c       | 286 ++++++++++++++++++
>  .../selftests/kvm/lib/x86_64/processor.c      |   2 +-
>  .../selftests/kvm/s390x/sync_regs_test.c      | 151 +++++++++
>  virt/kvm/arm/arm.c                            |   3 +
>  virt/kvm/kvm_main.c                           |   2 -
>  16 files changed, 514 insertions(+), 11 deletions(-)
>  create mode 100644 tools/testing/selftests/kvm/include/s390x/processor.h
>  rename tools/testing/selftests/kvm/{x86_64 => }/kvm_create_max_vcpus.c (93%)
>  create mode 100644 tools/testing/selftests/kvm/lib/s390x/processor.c
>  create mode 100644 tools/testing/selftests/kvm/s390x/sync_regs_test.c
> 


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH] KVM: selftests: enable pgste option for the linker on s390
  2019-05-24 10:33   ` borntraeger
@ 2019-05-24 19:07     ` david
  0 siblings, 0 replies; 108+ messages in thread
From: David Hildenbrand @ 2019-05-24 19:07 UTC (permalink / raw)
  To: Christian Borntraeger, Janosch Frank
  Cc: KVM, Cornelia Huck, Paolo Bonzini, Radim Krčmář,
	Shuah Khan, Andrew Jones, linux-kselftest, linux-s390

On 24.05.19 12:33, Christian Borntraeger wrote:
> To avoid testcase failures we need to enable the pgstes. This can be
> done with /proc/sys/vm/allocate_pgste or with a linker option that
> creates an  S390_PGSTE program header.
> 
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> ---
>  tools/testing/selftests/kvm/Makefile | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index aef5bd1166cf..4aac14c1919f 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -44,7 +44,10 @@ CFLAGS += -O2 -g -std=gnu99 -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE
>  no-pie-option := $(call try-run, echo 'int main() { return 0; }' | \
>          $(CC) -Werror $(KBUILD_CPPFLAGS) $(CC_OPTION_CFLAGS) -no-pie -x c - -o "$$TMP", -no-pie)
>  
> -LDFLAGS += -pthread $(no-pie-option)
> +# On s390, build the testcases KVM-enabled
> +pgste-option := $(call cc-ldoption, -Wl$(comma)--s390-pgste)
> +
> +LDFLAGS += -pthread $(no-pie-option) $(pgste-option)
>  
>  # After inclusion, $(OUTPUT) is defined and
>  # $(TEST_GEN_PROGS) starts with $(OUTPUT)/
> 

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH] KVM: selftests: enable pgste option for the linker on s390
  2019-05-24 10:33   ` borntraeger
@ 2019-05-27 11:44     ` borntraeger
  0 siblings, 0 replies; 108+ messages in thread
From: Christian Borntraeger @ 2019-05-27 11:44 UTC (permalink / raw)
  To: Janosch Frank
  Cc: KVM, Cornelia Huck, David Hildenbrand, Paolo Bonzini,
	Radim Krčmář,
	Shuah Khan, Andrew Jones, linux-kselftest, linux-s390

On 24.05.19 12:33, Christian Borntraeger wrote:
> To avoid testcase failures we need to enable the pgstes. This can be
> done with /proc/sys/vm/allocate_pgste or with a linker option that
> creates an  S390_PGSTE program header.
> 
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> ---
>  tools/testing/selftests/kvm/Makefile | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index aef5bd1166cf..4aac14c1919f 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -44,7 +44,10 @@ CFLAGS += -O2 -g -std=gnu99 -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE
>  no-pie-option := $(call try-run, echo 'int main() { return 0; }' | \
>          $(CC) -Werror $(KBUILD_CPPFLAGS) $(CC_OPTION_CFLAGS) -no-pie -x c - -o "$$TMP", -no-pie)
>  
> -LDFLAGS += -pthread $(no-pie-option)
> +# On s390, build the testcases KVM-enabled
> +pgste-option := $(call cc-ldoption, -Wl$(comma)--s390-pgste)
> +
> +LDFLAGS += -pthread $(no-pie-option) $(pgste-option)
>  
>  # After inclusion, $(OUTPUT) is defined and
>  # $(TEST_GEN_PROGS) starts with $(OUTPUT)/
> 

After commit 055efab3120bae7ab1ed841317774f3c953f6e1b ("kbuild: drop support for cc-ldoption"),
I had to change that in the following way.

    
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 2a73b58fd9e0..a798ea54a434 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -48,7 +48,7 @@ no-pie-option := $(call try-run, echo 'int main() { return 0; }' | \
         $(CC) -Werror $(KBUILD_CPPFLAGS) $(CC_OPTION_CFLAGS) -no-pie -x c - -o "$$TMP", -no-pie)
 
 # On s390, build the testcases KVM-enabled
-pgste-option := $(call cc-ldoption, -Wl$(comma)--s390-pgste)
+pgste-option := $(call cc-option,-Wl$(comma)--s390-pgste)
 
 LDFLAGS += -pthread $(no-pie-option) $(pgste-option)
 

With that pushed out to kvms390/next.

^ permalink raw reply related	[flat|nested] 108+ messages in thread

* Re:  [PATCH 8/9] KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID
  2019-05-23 16:43   ` thuth
@ 2019-05-28 11:00     ` borntraeger
  0 siblings, 0 replies; 108+ messages in thread
From: Christian Borntraeger @ 2019-05-28 11:00 UTC (permalink / raw)
  To: Janosch Frank, Paolo Bonzini, Radim Krčmář
  Cc: Thomas Huth, kvm, Shuah Khan, David Hildenbrand, Cornelia Huck,
	Andrew Jones, linux-kernel, linux-kselftest, linux-s390

Paolo, Radim,

would you consider this patch (or the full series) as 5.2 material or 5.3 material?


On 23.05.19 18:43, Thomas Huth wrote:
> KVM_CAP_MAX_VCPU_ID is currently always reporting KVM_MAX_VCPU_ID on all
> architectures. However, on s390x, the amount of usable CPUs is determined
> during runtime - it is depending on the features of the machine the code
> is running on. Since we are using the vcpu_id as an index into the SCA
> structures that are defined by the hardware (see e.g. the sca_add_vcpu()
> function), it is not only the amount of CPUs that is limited by the hard-
> ware, but also the range of IDs that we can use.
> Thus KVM_CAP_MAX_VCPU_ID must be determined during runtime on s390x, too.
> So the handling of KVM_CAP_MAX_VCPU_ID has to be moved from the common
> code into the architecture specific code, and on s390x we have to return
> the same value here as for KVM_CAP_MAX_VCPUS.
> This problem has been discovered with the kvm_create_max_vcpus selftest.
> With this change applied, the selftest now passes on s390x, too.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  arch/mips/kvm/mips.c       | 3 +++
>  arch/powerpc/kvm/powerpc.c | 3 +++
>  arch/s390/kvm/kvm-s390.c   | 1 +
>  arch/x86/kvm/x86.c         | 3 +++
>  virt/kvm/arm/arm.c         | 3 +++
>  virt/kvm/kvm_main.c        | 2 --
>  6 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index 6d0517ac18e5..0369f26ab96d 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -1122,6 +1122,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  	case KVM_CAP_MIPS_FPU:
>  		/* We don't handle systems with inconsistent cpu_has_fpu */
>  		r = !!raw_cpu_has_fpu;
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 3393b166817a..aa3a678711be 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -657,6 +657,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  #ifdef CONFIG_PPC_BOOK3S_64
>  	case KVM_CAP_PPC_GET_SMMU_INFO:
>  		r = 1;
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 8d6d75db8de6..871d2e99b156 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -539,6 +539,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  		break;
>  	case KVM_CAP_NR_VCPUS:
>  	case KVM_CAP_MAX_VCPUS:
> +	case KVM_CAP_MAX_VCPU_ID:
>  		r = KVM_S390_BSCA_CPU_SLOTS;
>  		if (!kvm_s390_use_sca_entries())
>  			r = KVM_MAX_VCPUS;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 536b78c4af6e..09a07d6a154e 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3122,6 +3122,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  	case KVM_CAP_PV_MMU:	/* obsolete */
>  		r = 0;
>  		break;
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 90cedebaeb94..7eeebe5e9da2 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -224,6 +224,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  	case KVM_CAP_MSI_DEVID:
>  		if (!kvm)
>  			r = -EINVAL;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index f0d13d9d125d..c09259dd6286 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -3146,8 +3146,6 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
>  	case KVM_CAP_MULTI_ADDRESS_SPACE:
>  		return KVM_ADDRESS_SPACE_NUM;
>  #endif
> -	case KVM_CAP_MAX_VCPU_ID:
> -		return KVM_MAX_VCPU_ID;
>  	case KVM_CAP_NR_MEMSLOTS:
>  		return KVM_USER_MEM_SLOTS;
>  	default:
> 
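
For context, with this change userspace has to query the two capabilities
separately, along these lines (hypothetical snippet, not taken from the
selftest):

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical sketch: on s390x these two values can now differ from
 * KVM_MAX_VCPU_ID, so size vcpu-id loops by KVM_CAP_MAX_VCPU_ID and
 * vcpu counts by KVM_CAP_MAX_VCPUS instead of assuming compile-time
 * constants. kvm_fd is an open /dev/kvm fd. */
static void print_vcpu_limits(int kvm_fd)
{
	int max_vcpus   = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS);
	int max_vcpu_id = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPU_ID);

	printf("up to %d vcpus, usable ids 0..%d\n",
	       max_vcpus, max_vcpu_id - 1);
}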


^ permalink raw reply	[flat|nested] 108+ messages in thread

* [PATCH 8/9] KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID
@ 2019-05-28 11:00     ` borntraeger
  0 siblings, 0 replies; 108+ messages in thread
From: borntraeger @ 2019-05-28 11:00 UTC (permalink / raw)


Paolo, Radim,

would you consider this patch (or the full series) as 5.2 material or 5.3 material?


On 23.05.19 18:43, Thomas Huth wrote:
> KVM_CAP_MAX_VCPU_ID is currently always reporting KVM_MAX_VCPU_ID on all
> architectures. However, on s390x, the amount of usable CPUs is determined
> during runtime - it is depending on the features of the machine the code
> is running on. Since we are using the vcpu_id as an index into the SCA
> structures that are defined by the hardware (see e.g. the sca_add_vcpu()
> function), it is not only the amount of CPUs that is limited by the hard-
> ware, but also the range of IDs that we can use.
> Thus KVM_CAP_MAX_VCPU_ID must be determined during runtime on s390x, too.
> So the handling of KVM_CAP_MAX_VCPU_ID has to be moved from the common
> code into the architecture specific code, and on s390x we have to return
> the same value here as for KVM_CAP_MAX_VCPUS.
> This problem has been discovered with the kvm_create_max_vcpus selftest.
> With this change applied, the selftest now passes on s390x, too.
> 
> Signed-off-by: Thomas Huth <thuth at redhat.com>
> ---
>  arch/mips/kvm/mips.c       | 3 +++
>  arch/powerpc/kvm/powerpc.c | 3 +++
>  arch/s390/kvm/kvm-s390.c   | 1 +
>  arch/x86/kvm/x86.c         | 3 +++
>  virt/kvm/arm/arm.c         | 3 +++
>  virt/kvm/kvm_main.c        | 2 --
>  6 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index 6d0517ac18e5..0369f26ab96d 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -1122,6 +1122,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  	case KVM_CAP_MIPS_FPU:
>  		/* We don't handle systems with inconsistent cpu_has_fpu */
>  		r = !!raw_cpu_has_fpu;
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 3393b166817a..aa3a678711be 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -657,6 +657,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  #ifdef CONFIG_PPC_BOOK3S_64
>  	case KVM_CAP_PPC_GET_SMMU_INFO:
>  		r = 1;
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 8d6d75db8de6..871d2e99b156 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -539,6 +539,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  		break;
>  	case KVM_CAP_NR_VCPUS:
>  	case KVM_CAP_MAX_VCPUS:
> +	case KVM_CAP_MAX_VCPU_ID:
>  		r = KVM_S390_BSCA_CPU_SLOTS;
>  		if (!kvm_s390_use_sca_entries())
>  			r = KVM_MAX_VCPUS;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 536b78c4af6e..09a07d6a154e 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3122,6 +3122,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  	case KVM_CAP_PV_MMU:	/* obsolete */
>  		r = 0;
>  		break;
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 90cedebaeb94..7eeebe5e9da2 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -224,6 +224,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_MAX_VCPUS:
>  		r = KVM_MAX_VCPUS;
>  		break;
> +	case KVM_CAP_MAX_VCPU_ID:
> +		r = KVM_MAX_VCPU_ID;
> +		break;
>  	case KVM_CAP_MSI_DEVID:
>  		if (!kvm)
>  			r = -EINVAL;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index f0d13d9d125d..c09259dd6286 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -3146,8 +3146,6 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
>  	case KVM_CAP_MULTI_ADDRESS_SPACE:
>  		return KVM_ADDRESS_SPACE_NUM;
>  #endif
> -	case KVM_CAP_MAX_VCPU_ID:
> -		return KVM_MAX_VCPU_ID;
>  	case KVM_CAP_NR_MEMSLOTS:
>  		return KVM_USER_MEM_SLOTS;
>  	default:
> 

^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH 8/9] KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID
  2019-05-28 11:00   ` Christian Borntraeger
@ 2019-05-28 12:53     ` Cornelia Huck
  0 siblings, 0 replies; 108+ messages in thread
From: Cornelia Huck @ 2019-05-28 12:53 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Thomas Huth, kvm, Shuah Khan, David Hildenbrand, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

On Tue, 28 May 2019 13:00:30 +0200
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> Paolo, Radim,
> 
> would you consider this patch (or the full series) as 5.2 material or 5.3 material?

FWIW, I'd consider this patch 5.2 material, as we're currently relaying
wrong values to userspace.

> 
> 
> On 23.05.19 18:43, Thomas Huth wrote:
> > KVM_CAP_MAX_VCPU_ID currently always reports KVM_MAX_VCPU_ID on all
> > architectures. However, on s390x, the number of usable CPUs is determined
> > at runtime - it depends on the features of the machine the code is
> > running on. Since we use the vcpu_id as an index into the SCA structures
> > that are defined by the hardware (see e.g. the sca_add_vcpu() function),
> > the hardware limits not only the number of CPUs but also the range of
> > IDs that we can use.
> > Thus KVM_CAP_MAX_VCPU_ID must be determined at runtime on s390x, too.
> > The handling of KVM_CAP_MAX_VCPU_ID therefore has to be moved from the
> > common code into the architecture-specific code, and on s390x we have to
> > return the same value here as for KVM_CAP_MAX_VCPUS.
> > This problem was discovered with the kvm_create_max_vcpus selftest;
> > with this change applied, the selftest now passes on s390x, too.
> > 
> > Signed-off-by: Thomas Huth <thuth@redhat.com>
> > ---
> >  arch/mips/kvm/mips.c       | 3 +++
> >  arch/powerpc/kvm/powerpc.c | 3 +++
> >  arch/s390/kvm/kvm-s390.c   | 1 +
> >  arch/x86/kvm/x86.c         | 3 +++
> >  virt/kvm/arm/arm.c         | 3 +++
> >  virt/kvm/kvm_main.c        | 2 --
> >  6 files changed, 13 insertions(+), 2 deletions(-)

^ permalink raw reply	[flat|nested] 108+ messages in thread
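
The wrong values are directly visible to userspace: vcpu ids are bounded by
this capability, and before the fix an s390 VMM could pick an id that the
kernel then rejects. A short sketch in the spirit of the kvm_create_max_vcpus
selftest (simplified and illustrative, not the actual test source):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR);
	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);
	int max_vcpu_id = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPU_ID);

	/*
	 * The highest reported vcpu id must be usable.  Before the fix,
	 * s390 reported the generic KVM_MAX_VCPU_ID although ids also act
	 * as indices into the hardware-defined SCA, so this create could
	 * fail with -EINVAL on machines with fewer SCA slots.
	 */
	if (ioctl(vm_fd, KVM_CREATE_VCPU, max_vcpu_id - 1) < 0) {
		perror("KVM_CREATE_VCPU");
		return 1;
	}
	printf("vcpu id %d is usable\n", max_vcpu_id - 1);
	return 0;
}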

* Re: [PATCH 8/9] KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID
  2019-05-28 12:53     ` Cornelia Huck
@ 2019-05-28 13:48       ` Christian Borntraeger
  0 siblings, 0 replies; 108+ messages in thread
From: Christian Borntraeger @ 2019-05-28 13:48 UTC (permalink / raw)
  To: Cornelia Huck
  Cc: Janosch Frank, Paolo Bonzini, Radim Krčmář,
	Thomas Huth, kvm, Shuah Khan, David Hildenbrand, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390



On 28.05.19 14:53, Cornelia Huck wrote:
> On Tue, 28 May 2019 13:00:30 +0200
> Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> 
>> Paolo, Radim,
>>
>> would you consider this patch (or the full series) as 5.2 material or 5.3 material?
> 
> FWIW, I'd consider this patch 5.2 material, as we're currently relaying
> wrong values to userspace.

Agreed. I will add a cc: stable tag and queue it for master. What is our opinion about kselftest?
Do we also merge selftest changes only during the merge window?
> 
>>
>>
>> On 23.05.19 18:43, Thomas Huth wrote:
>>> KVM_CAP_MAX_VCPU_ID currently always reports KVM_MAX_VCPU_ID on all
>>> architectures. However, on s390x, the number of usable CPUs is determined
>>> at runtime - it depends on the features of the machine the code is
>>> running on. Since we use the vcpu_id as an index into the SCA structures
>>> that are defined by the hardware (see e.g. the sca_add_vcpu() function),
>>> the hardware limits not only the number of CPUs but also the range of
>>> IDs that we can use.
>>> Thus KVM_CAP_MAX_VCPU_ID must be determined at runtime on s390x, too.
>>> The handling of KVM_CAP_MAX_VCPU_ID therefore has to be moved from the
>>> common code into the architecture-specific code, and on s390x we have to
>>> return the same value here as for KVM_CAP_MAX_VCPUS.
>>> This problem was discovered with the kvm_create_max_vcpus selftest;
>>> with this change applied, the selftest now passes on s390x, too.
>>>
>>> Signed-off-by: Thomas Huth <thuth@redhat.com>
>>> ---
>>>  arch/mips/kvm/mips.c       | 3 +++
>>>  arch/powerpc/kvm/powerpc.c | 3 +++
>>>  arch/s390/kvm/kvm-s390.c   | 1 +
>>>  arch/x86/kvm/x86.c         | 3 +++
>>>  virt/kvm/arm/arm.c         | 3 +++
>>>  virt/kvm/kvm_main.c        | 2 --
>>>  6 files changed, 13 insertions(+), 2 deletions(-)
> 


^ permalink raw reply	[flat|nested] 108+ messages in thread

* Re: [PATCH v1 0/9] KVM selftests for s390x
  2019-05-23 16:43 ` Thomas Huth
@ 2019-06-04 17:19   ` Paolo Bonzini
  0 siblings, 0 replies; 108+ messages in thread
From: Paolo Bonzini @ 2019-06-04 17:19 UTC (permalink / raw)
  To: Thomas Huth, Christian Borntraeger, Janosch Frank, kvm
  Cc: Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390

On 23/05/19 18:43, Thomas Huth wrote:
> This patch series enables the KVM selftests for s390x. As a first
> test, the sync_regs test from x86 has been adapted to s390x, and after
> a fix for KVM_CAP_MAX_VCPU_ID on s390x, the kvm_create_max_vcpus
> is now enabled here, too.
> 
> Please note that the ucall() interface is not used yet - since
> s390x neither has PIO nor MMIO, this needs some more work first
> before it becomes usable (we likely should use a DIAG hypercall
> here, which is what the sync_regs test is currently using, too...
> I started working on that topic, but did not finish that work
> yet, so I decided to not include it yet).

Christian, please include this in your tree (rebasing on top of kvm/next
as soon as I push it).  Note that Thomas is away for about a month.

Paolo

> RFC -> v1:
>  - Rebase, needed to add the first patch for vcpu_nested_state_get/set
>  - Added patch to introduce VM_MODE_DEFAULT macro
>  - Improved/cleaned up the code in processor.c
>  - Added patch to fix KVM_CAP_MAX_VCPU_ID on s390x
>  - Added patch to enable the kvm_create_max_vcpus on s390x and aarch64
> 
> Andrew Jones (1):
>   kvm: selftests: aarch64: fix default vm mode
> 
> Thomas Huth (8):
>   KVM: selftests: Wrap vcpu_nested_state_get/set functions with x86
>     guard
>   KVM: selftests: Guard struct kvm_vcpu_events with
>     __KVM_HAVE_VCPU_EVENTS
>   KVM: selftests: Introduce a VM_MODE_DEFAULT macro for the default bits
>   KVM: selftests: Align memory region addresses to 1M on s390x
>   KVM: selftests: Add processor code for s390x
>   KVM: selftests: Add the sync_regs test for s390x
>   KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID
>   KVM: selftests: Move kvm_create_max_vcpus test to generic code
> 
>  MAINTAINERS                                   |   2 +
>  arch/mips/kvm/mips.c                          |   3 +
>  arch/powerpc/kvm/powerpc.c                    |   3 +
>  arch/s390/kvm/kvm-s390.c                      |   1 +
>  arch/x86/kvm/x86.c                            |   3 +
>  tools/testing/selftests/kvm/Makefile          |   7 +-
>  .../testing/selftests/kvm/include/kvm_util.h  |  10 +
>  .../selftests/kvm/include/s390x/processor.h   |  22 ++
>  .../kvm/{x86_64 => }/kvm_create_max_vcpus.c   |   3 +-
>  .../selftests/kvm/lib/aarch64/processor.c     |   2 +-
>  tools/testing/selftests/kvm/lib/kvm_util.c    |  25 +-
>  .../selftests/kvm/lib/s390x/processor.c       | 286 ++++++++++++++++++
>  .../selftests/kvm/lib/x86_64/processor.c      |   2 +-
>  .../selftests/kvm/s390x/sync_regs_test.c      | 151 +++++++++
>  virt/kvm/arm/arm.c                            |   3 +
>  virt/kvm/kvm_main.c                           |   2 -
>  16 files changed, 514 insertions(+), 11 deletions(-)
>  create mode 100644 tools/testing/selftests/kvm/include/s390x/processor.h
>  rename tools/testing/selftests/kvm/{x86_64 => }/kvm_create_max_vcpus.c (93%)
>  create mode 100644 tools/testing/selftests/kvm/lib/s390x/processor.c
>  create mode 100644 tools/testing/selftests/kvm/s390x/sync_regs_test.c
> 


^ permalink raw reply	[flat|nested] 108+ messages in thread
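
On the ucall() question: a DIAG-based guest-side implementation might look
roughly like the sketch below. This is an assumption-laden illustration, not
the eventual implementation: it presumes DIAGNOSE 0x501 (the KVM breakpoint
diagnose that the sync_regs test already triggers) as the exit mechanism, and
a struct ucall layout and UCALL_MAX_ARGS value borrowed from the selftest
library.

#include <stdarg.h>
#include <stdint.h>

#define UCALL_MAX_ARGS 6	/* assumed, matching the selftest library */

struct ucall {			/* assumed layout */
	uint64_t cmd;
	uint64_t args[UCALL_MAX_ARGS];
};

/*
 * Guest-side ucall: stash the command and arguments in a structure and
 * issue DIAGNOSE 0x501 with its address in a register.  KVM intercepts
 * the diagnose and exits to userspace, which can then read the struct
 * out of guest memory.
 */
static void ucall(uint64_t cmd, int nargs, ...)
{
	struct ucall uc = { .cmd = cmd };
	va_list va;
	int i;

	if (nargs > UCALL_MAX_ARGS)
		nargs = UCALL_MAX_ARGS;

	va_start(va, nargs);
	for (i = 0; i < nargs; ++i)
		uc.args[i] = va_arg(va, uint64_t);
	va_end(va);

	asm volatile ("diag 0,%0,0x501" : : "a"(&uc) : "memory");
}

The host side would then decode the DIAG intercept (rather than a PIO or MMIO
exit) to fetch the structure from guest memory - exactly the part the series
deliberately leaves for later.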

* Re: [PATCH v1 0/9] KVM selftests for s390x
  2019-06-04 17:19 ` Paolo Bonzini
@ 2019-06-04 17:37   ` Christian Borntraeger
  0 siblings, 0 replies; 108+ messages in thread
From: Christian Borntraeger @ 2019-06-04 17:37 UTC (permalink / raw)
  To: Paolo Bonzini, Thomas Huth, Janosch Frank, kvm
  Cc: Radim Krčmář,
	Shuah Khan, David Hildenbrand, Cornelia Huck, Andrew Jones,
	linux-kernel, linux-kselftest, linux-s390



On 04.06.19 19:19, Paolo Bonzini wrote:
> On 23/05/19 18:43, Thomas Huth wrote:
>> This patch series enables the KVM selftests for s390x. As a first
>> test, the sync_regs test from x86 has been adapted to s390x, and after
>> a fix for KVM_CAP_MAX_VCPU_ID on s390x, the kvm_create_max_vcpus
>> is now enabled here, too.
>>
>> Please note that the ucall() interface is not used yet - since
>> s390x neither has PIO nor MMIO, this needs some more work first
>> before it becomes usable (we likely should use a DIAG hypercall
>> here, which is what the sync_regs test is currently using, too...
>> I started working on that topic, but did not finish that work
>> yet, so I decided to not include it yet).
> 
> Christian, please include this in your tree (rebasing on top of kvm/next
> as soon as I push it).  Note that Thomas is away for about a month.

Will do. Right now it is part of my next tree on top of rc3.
> 
> Paolo
> 
>> RFC -> v1:
>>  - Rebase, needed to add the first patch for vcpu_nested_state_get/set
>>  - Added patch to introduce VM_MODE_DEFAULT macro
>>  - Improved/cleaned up the code in processor.c
>>  - Added patch to fix KVM_CAP_MAX_VCPU_ID on s390x
>>  - Added patch to enable the kvm_create_max_vcpus on s390x and aarch64
>>
>> Andrew Jones (1):
>>   kvm: selftests: aarch64: fix default vm mode
>>
>> Thomas Huth (8):
>>   KVM: selftests: Wrap vcpu_nested_state_get/set functions with x86
>>     guard
>>   KVM: selftests: Guard struct kvm_vcpu_events with
>>     __KVM_HAVE_VCPU_EVENTS
>>   KVM: selftests: Introduce a VM_MODE_DEFAULT macro for the default bits
>>   KVM: selftests: Align memory region addresses to 1M on s390x
>>   KVM: selftests: Add processor code for s390x
>>   KVM: selftests: Add the sync_regs test for s390x
>>   KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID
>>   KVM: selftests: Move kvm_create_max_vcpus test to generic code
>>
>>  MAINTAINERS                                   |   2 +
>>  arch/mips/kvm/mips.c                          |   3 +
>>  arch/powerpc/kvm/powerpc.c                    |   3 +
>>  arch/s390/kvm/kvm-s390.c                      |   1 +
>>  arch/x86/kvm/x86.c                            |   3 +
>>  tools/testing/selftests/kvm/Makefile          |   7 +-
>>  .../testing/selftests/kvm/include/kvm_util.h  |  10 +
>>  .../selftests/kvm/include/s390x/processor.h   |  22 ++
>>  .../kvm/{x86_64 => }/kvm_create_max_vcpus.c   |   3 +-
>>  .../selftests/kvm/lib/aarch64/processor.c     |   2 +-
>>  tools/testing/selftests/kvm/lib/kvm_util.c    |  25 +-
>>  .../selftests/kvm/lib/s390x/processor.c       | 286 ++++++++++++++++++
>>  .../selftests/kvm/lib/x86_64/processor.c      |   2 +-
>>  .../selftests/kvm/s390x/sync_regs_test.c      | 151 +++++++++
>>  virt/kvm/arm/arm.c                            |   3 +
>>  virt/kvm/kvm_main.c                           |   2 -
>>  16 files changed, 514 insertions(+), 11 deletions(-)
>>  create mode 100644 tools/testing/selftests/kvm/include/s390x/processor.h
>>  rename tools/testing/selftests/kvm/{x86_64 => }/kvm_create_max_vcpus.c (93%)
>>  create mode 100644 tools/testing/selftests/kvm/lib/s390x/processor.c
>>  create mode 100644 tools/testing/selftests/kvm/s390x/sync_regs_test.c
>>
> 


^ permalink raw reply	[flat|nested] 108+ messages in thread

end of thread (newest: 2019-06-04 17:37 UTC)

Thread overview: 108+ messages
2019-05-23 16:43 [PATCH v1 0/9] KVM selftests for s390x Thomas Huth
2019-05-23 16:43 ` [PATCH 1/9] KVM: selftests: Wrap vcpu_nested_state_get/set functions with x86 guard Thomas Huth
2019-05-23 16:43 ` [PATCH 2/9] KVM: selftests: Guard struct kvm_vcpu_events with __KVM_HAVE_VCPU_EVENTS Thomas Huth
2019-05-23 17:57   ` Andrew Jones
2019-05-23 16:43 ` [PATCH 3/9] kvm: selftests: aarch64: fix default vm mode Thomas Huth
2019-05-24  8:37   ` Christian Borntraeger
2019-05-23 16:43 ` [PATCH 4/9] KVM: selftests: Introduce a VM_MODE_DEFAULT macro for the default bits Thomas Huth
2019-05-23 17:20   ` Andrew Jones
2019-05-23 16:43 ` [PATCH 5/9] KVM: selftests: Align memory region addresses to 1M on s390x Thomas Huth
2019-05-23 17:40   ` Andrew Jones
2019-05-24  8:29     ` Christian Borntraeger
2019-05-24 18:17       ` Thomas Huth
2019-05-23 16:43 ` [PATCH 6/9] KVM: selftests: Add processor code for s390x Thomas Huth
2019-05-23 16:43 ` [PATCH 7/9] KVM: selftests: Add the sync_regs test " Thomas Huth
2019-05-23 16:43 ` [PATCH 8/9] KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID Thomas Huth
2019-05-23 17:56   ` Andrew Jones
2019-05-24  9:13   ` Cornelia Huck
2019-05-24  9:16   ` David Hildenbrand
2019-05-28 11:00   ` Christian Borntraeger
2019-05-28 12:53     ` Cornelia Huck
2019-05-28 13:48       ` Christian Borntraeger
2019-05-23 16:43 ` [PATCH 9/9] KVM: selftests: Move kvm_create_max_vcpus test to generic code Thomas Huth
2019-05-23 17:56   ` Andrew Jones
2019-05-24  9:16   ` David Hildenbrand
2019-05-24 10:33 ` [PATCH] KVM: selftests: enable pgste option for the linker on s390 Christian Borntraeger
2019-05-24 18:16   ` Thomas Huth
2019-05-24 19:07   ` David Hildenbrand
2019-05-27 11:44   ` Christian Borntraeger
2019-05-24 11:11 ` [PATCH v1 0/9] KVM selftests for s390x Christian Borntraeger
2019-05-24 12:17   ` Christian Borntraeger
2019-05-24 12:29     ` Christian Borntraeger
2019-05-24 12:36       ` David Hildenbrand
2019-05-24 12:56         ` Christian Borntraeger
2019-05-24 18:33 ` Christian Borntraeger
2019-06-04 17:19 ` Paolo Bonzini
2019-06-04 17:37   ` Christian Borntraeger
