kvm.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v8 0/4] KVM: s390: Add new reset vcpu API
@ 2020-01-29 20:03 Janosch Frank
  2020-01-29 20:03 ` [PATCH v8 1/4] " Janosch Frank
                   ` (4 more replies)
  0 siblings, 5 replies; 36+ messages in thread
From: Janosch Frank @ 2020-01-29 20:03 UTC (permalink / raw)
  To: kvm; +Cc: thuth, borntraeger, david, cohuck, linux-s390

Let's implement the remaining resets, namely the normal and clear
reset to improve architectural compliance. 

While we're at it, let's also start testing the new API.
Those tests are not yet complete, but will be extended in the future.

Janosch Frank (3):
  KVM: s390: Add new reset vcpu API
  selftests: KVM: Add fpu and one reg set/get library functions
  selftests: KVM: s390x: Add reset tests

Pierre Morel (1):
  selftests: KVM: testing the local IRQs resets

 Documentation/virt/kvm/api.txt                |  43 ++++
 arch/s390/kvm/kvm-s390.c                      | 103 +++++---
 include/uapi/linux/kvm.h                      |   5 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../testing/selftests/kvm/include/kvm_util.h  |   6 +
 tools/testing/selftests/kvm/lib/kvm_util.c    |  48 ++++
 tools/testing/selftests/kvm/s390x/resets.c    | 222 ++++++++++++++++++
 7 files changed, 399 insertions(+), 29 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/s390x/resets.c

-- 
2.20.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v8 1/4] KVM: s390: Add new reset vcpu API
  2020-01-29 20:03 [PATCH v8 0/4] KVM: s390: Add new reset vcpu API Janosch Frank
@ 2020-01-29 20:03 ` Janosch Frank
  2020-01-30  8:55   ` [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset Christian Borntraeger
                     ` (2 more replies)
  2020-01-29 20:03 ` [PATCH v8 2/4] selftests: KVM: Add fpu and one reg set/get library functions Janosch Frank
                   ` (3 subsequent siblings)
  4 siblings, 3 replies; 36+ messages in thread
From: Janosch Frank @ 2020-01-29 20:03 UTC (permalink / raw)
  To: kvm; +Cc: thuth, borntraeger, david, cohuck, linux-s390

The architecture states that we need to reset local IRQs for all CPU
resets. Because the old reset interface did not support the normal CPU
reset we never did that on a normal reset.

Let's implement an interface for the missing normal and clear resets
and reset all local IRQs, registers and control structures as stated
in the architecture.

Userspace might already reset the registers via the vcpu run struct,
but as we need the interface for the interrupt clearing part anyway,
we implement the resets fully and don't rely on userspace to reset the
rest.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
---
 Documentation/virt/kvm/api.txt |  43 ++++++++++++++
 arch/s390/kvm/kvm-s390.c       | 103 +++++++++++++++++++++++----------
 include/uapi/linux/kvm.h       |   5 ++
 3 files changed, 122 insertions(+), 29 deletions(-)

diff --git a/Documentation/virt/kvm/api.txt b/Documentation/virt/kvm/api.txt
index ebb37b34dcfc..73448764f544 100644
--- a/Documentation/virt/kvm/api.txt
+++ b/Documentation/virt/kvm/api.txt
@@ -4168,6 +4168,42 @@ This ioctl issues an ultravisor call to terminate the secure guest,
 unpins the VPA pages and releases all the device pages that are used to
 track the secure pages by hypervisor.
 
+4.122 KVM_S390_NORMAL_RESET
+
+Capability: KVM_CAP_S390_VCPU_RESETS
+Architectures: s390
+Type: vcpu ioctl
+Parameters: none
+Returns: 0
+
+This ioctl resets VCPU registers and control structures according to
+the cpu reset definition in the POP (Principles Of Operation).
+
+4.123 KVM_S390_INITIAL_RESET
+
+Capability: none
+Architectures: s390
+Type: vcpu ioctl
+Parameters: none
+Returns: 0
+
+This ioctl resets VCPU registers and control structures according to
+the initial cpu reset definition in the POP. However, the cpu is not
+put into ESA mode. This reset is a superset of the normal reset.
+
+4.124 KVM_S390_CLEAR_RESET
+
+Capability: KVM_CAP_S390_VCPU_RESETS
+Architectures: s390
+Type: vcpu ioctl
+Parameters: none
+Returns: 0
+
+This ioctl resets VCPU registers and control structures according to
+the clear cpu reset definition in the POP. However, the cpu is not put
+into ESA mode. This reset is a superset of the initial reset.
+
+
 5. The kvm_run structure
 ------------------------
 
@@ -5396,3 +5432,10 @@ handling by KVM (as some KVM hypercall may be mistakenly treated as TLB
 flush hypercalls by Hyper-V) so userspace should disable KVM identification
 in CPUID and only exposes Hyper-V identification. In this case, guest
 thinks it's running on Hyper-V and only use Hyper-V hypercalls.
+
+8.22 KVM_CAP_S390_VCPU_RESETS
+
+Architectures: s390
+
+This capability indicates that the KVM_S390_NORMAL_RESET and
+KVM_S390_CLEAR_RESET ioctls are available.
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index c5f520de39a6..6aebaf08db64 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -529,6 +529,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_S390_CMMA_MIGRATION:
 	case KVM_CAP_S390_AIS:
 	case KVM_CAP_S390_AIS_MIGRATION:
+	case KVM_CAP_S390_VCPU_RESETS:
 		r = 1;
 		break;
 	case KVM_CAP_S390_HPAGE_1M:
@@ -2844,31 +2845,6 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 
 }
 
-static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
-{
-	/* this equals initial cpu reset in pop, but we don't switch to ESA */
-	vcpu->arch.sie_block->gpsw.mask = 0;
-	vcpu->arch.sie_block->gpsw.addr = 0;
-	kvm_s390_set_prefix(vcpu, 0);
-	kvm_s390_set_cpu_timer(vcpu, 0);
-	vcpu->arch.sie_block->ckc = 0;
-	vcpu->arch.sie_block->todpr = 0;
-	memset(vcpu->arch.sie_block->gcr, 0, sizeof(vcpu->arch.sie_block->gcr));
-	vcpu->arch.sie_block->gcr[0] = CR0_INITIAL_MASK;
-	vcpu->arch.sie_block->gcr[14] = CR14_INITIAL_MASK;
-	/* make sure the new fpc will be lazily loaded */
-	save_fpu_regs();
-	current->thread.fpu.fpc = 0;
-	vcpu->arch.sie_block->gbea = 1;
-	vcpu->arch.sie_block->pp = 0;
-	vcpu->arch.sie_block->fpf &= ~FPF_BPBC;
-	vcpu->arch.pfault_token = KVM_S390_PFAULT_TOKEN_INVALID;
-	kvm_clear_async_pf_completion_queue(vcpu);
-	if (!kvm_s390_user_cpu_state_ctrl(vcpu->kvm))
-		kvm_s390_vcpu_stop(vcpu);
-	kvm_s390_clear_local_irqs(vcpu);
-}
-
 void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 {
 	mutex_lock(&vcpu->kvm->lock);
@@ -3283,10 +3259,70 @@ static int kvm_arch_vcpu_ioctl_set_one_reg(struct kvm_vcpu *vcpu,
 	return r;
 }
 
-static int kvm_arch_vcpu_ioctl_initial_reset(struct kvm_vcpu *vcpu)
+static void kvm_arch_vcpu_ioctl_normal_reset(struct kvm_vcpu *vcpu)
 {
-	kvm_s390_vcpu_initial_reset(vcpu);
-	return 0;
+	vcpu->arch.sie_block->gpsw.mask &= ~PSW_MASK_RI;
+	vcpu->arch.pfault_token = KVM_S390_PFAULT_TOKEN_INVALID;
+	memset(vcpu->run->s.regs.riccb, 0, sizeof(vcpu->run->s.regs.riccb));
+
+	kvm_clear_async_pf_completion_queue(vcpu);
+	if (!kvm_s390_user_cpu_state_ctrl(vcpu->kvm))
+		kvm_s390_vcpu_stop(vcpu);
+	kvm_s390_clear_local_irqs(vcpu);
+}
+
+static void kvm_arch_vcpu_ioctl_initial_reset(struct kvm_vcpu *vcpu)
+{
+	/* Initial reset is a superset of the normal reset */
+	kvm_arch_vcpu_ioctl_normal_reset(vcpu);
+
+	/* this equals initial cpu reset in pop, but we don't switch to ESA */
+	vcpu->arch.sie_block->gpsw.mask = 0;
+	vcpu->arch.sie_block->gpsw.addr = 0;
+	kvm_s390_set_prefix(vcpu, 0);
+	kvm_s390_set_cpu_timer(vcpu, 0);
+	vcpu->arch.sie_block->ckc = 0;
+	vcpu->arch.sie_block->todpr = 0;
+	memset(vcpu->arch.sie_block->gcr, 0, sizeof(vcpu->arch.sie_block->gcr));
+	vcpu->arch.sie_block->gcr[0] = CR0_INITIAL_MASK;
+	vcpu->arch.sie_block->gcr[14] = CR14_INITIAL_MASK;
+	/* make sure the new fpc will be lazily loaded */
+	save_fpu_regs();
+	current->thread.fpu.fpc = 0;
+	vcpu->arch.sie_block->gbea = 1;
+	vcpu->arch.sie_block->pp = 0;
+	vcpu->arch.sie_block->fpf &= ~FPF_BPBC;
+}
+
+static void kvm_arch_vcpu_ioctl_clear_reset(struct kvm_vcpu *vcpu)
+{
+	struct kvm_sync_regs *regs = &vcpu->run->s.regs;
+
+	/* Clear reset is a superset of the initial reset */
+	kvm_arch_vcpu_ioctl_initial_reset(vcpu);
+
+	memset(&regs->gprs, 0, sizeof(regs->gprs));
+	memset(&regs->vrs, 0, sizeof(regs->vrs));
+	memset(&regs->acrs, 0, sizeof(regs->acrs));
+
+	regs->etoken = 0;
+	regs->etoken_extension = 0;
+
+	memset(&regs->gscb, 0, sizeof(regs->gscb));
+	if (MACHINE_HAS_GS) {
+		preempt_disable();
+		__ctl_set_bit(2, 4);
+		if (current->thread.gs_cb) {
+			vcpu->arch.host_gscb = current->thread.gs_cb;
+			save_gs_cb(vcpu->arch.host_gscb);
+		}
+		if (vcpu->arch.gs_enabled) {
+			current->thread.gs_cb = (struct gs_cb *)
+				&vcpu->run->s.regs.gscb;
+			restore_gs_cb(current->thread.gs_cb);
+		}
+		preempt_enable();
+	}
 }
 
 int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
@@ -4359,8 +4395,17 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		r = kvm_arch_vcpu_ioctl_set_initial_psw(vcpu, psw);
 		break;
 	}
+	case KVM_S390_CLEAR_RESET:
+		r = 0;
+		kvm_arch_vcpu_ioctl_clear_reset(vcpu);
+		break;
 	case KVM_S390_INITIAL_RESET:
-		r = kvm_arch_vcpu_ioctl_initial_reset(vcpu);
+		r = 0;
+		kvm_arch_vcpu_ioctl_initial_reset(vcpu);
+		break;
+	case KVM_S390_NORMAL_RESET:
+		r = 0;
+		kvm_arch_vcpu_ioctl_normal_reset(vcpu);
 		break;
 	case KVM_SET_ONE_REG:
 	case KVM_GET_ONE_REG: {
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index f0a16b4adbbd..4b95f9a31a2f 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1009,6 +1009,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_PPC_GUEST_DEBUG_SSTEP 176
 #define KVM_CAP_ARM_NISV_TO_USER 177
 #define KVM_CAP_ARM_INJECT_EXT_DABT 178
+#define KVM_CAP_S390_VCPU_RESETS 179
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
@@ -1473,6 +1474,10 @@ struct kvm_enc_region {
 /* Available with KVM_CAP_ARM_SVE */
 #define KVM_ARM_VCPU_FINALIZE	  _IOW(KVMIO,  0xc2, int)
 
+/* Available with  KVM_CAP_S390_VCPU_RESETS */
+#define KVM_S390_NORMAL_RESET	_IO(KVMIO,   0xc3)
+#define KVM_S390_CLEAR_RESET	_IO(KVMIO,   0xc4)
+
 /* Secure Encrypted Virtualization command */
 enum sev_cmd_id {
 	/* Guest initialization commands */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 2/4] selftests: KVM: Add fpu and one reg set/get library functions
  2020-01-29 20:03 [PATCH v8 0/4] KVM: s390: Add new reset vcpu API Janosch Frank
  2020-01-29 20:03 ` [PATCH v8 1/4] " Janosch Frank
@ 2020-01-29 20:03 ` Janosch Frank
  2020-01-30 10:36   ` Thomas Huth
  2020-01-29 20:03 ` [PATCH v8 3/4] selftests: KVM: s390x: Add reset tests Janosch Frank
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 36+ messages in thread
From: Janosch Frank @ 2020-01-29 20:03 UTC (permalink / raw)
  To: kvm; +Cc: thuth, borntraeger, david, cohuck, linux-s390

Add library access to more registers.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  |  6 +++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 48 +++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 29cccaf96baf..ae0d14c2540a 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -125,6 +125,12 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
 		    struct kvm_sregs *sregs);
 int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
 		    struct kvm_sregs *sregs);
+void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid,
+		  struct kvm_fpu *fpu);
+void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid,
+		  struct kvm_fpu *fpu);
+void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
+void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
 #ifdef __KVM_HAVE_VCPU_EVENTS
 void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_vcpu_events *events);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 41cf45416060..dae117728ec6 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1373,6 +1373,54 @@ int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_sregs *sregs)
 	return ioctl(vcpu->fd, KVM_SET_SREGS, sregs);
 }
 
+void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
+{
+	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
+	int ret;
+
+	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
+
+	ret = ioctl(vcpu->fd, KVM_GET_FPU, fpu);
+	TEST_ASSERT(ret == 0, "KVM_GET_FPU failed, rc: %i errno: %i",
+		    ret, errno);
+}
+
+void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
+{
+	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
+	int ret;
+
+	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
+
+	ret = ioctl(vcpu->fd, KVM_SET_FPU, fpu);
+	TEST_ASSERT(ret == 0, "KVM_SET_FPU failed, rc: %i errno: %i",
+		    ret, errno);
+}
+
+void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
+{
+	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
+	int ret;
+
+	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
+
+	ret = ioctl(vcpu->fd, KVM_GET_ONE_REG, reg);
+	TEST_ASSERT(ret == 0, "KVM_GET_ONE_REG failed, rc: %i errno: %i",
+		    ret, errno);
+}
+
+void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
+{
+	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
+	int ret;
+
+	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
+
+	ret = ioctl(vcpu->fd, KVM_SET_ONE_REG, reg);
+	TEST_ASSERT(ret == 0, "KVM_SET_ONE_REG failed, rc: %i errno: %i",
+		    ret, errno);
+}
+
 /*
  * VCPU Ioctl
  *
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 3/4] selftests: KVM: s390x: Add reset tests
  2020-01-29 20:03 [PATCH v8 0/4] KVM: s390: Add new reset vcpu API Janosch Frank
  2020-01-29 20:03 ` [PATCH v8 1/4] " Janosch Frank
  2020-01-29 20:03 ` [PATCH v8 2/4] selftests: KVM: Add fpu and one reg set/get library functions Janosch Frank
@ 2020-01-29 20:03 ` Janosch Frank
  2020-01-30 10:51   ` Thomas Huth
  2020-01-29 20:03 ` [PATCH v8 4/4] selftests: KVM: testing the local IRQs resets Janosch Frank
  2020-01-30  9:10 ` [PATCH] KVM: s390: Cleanup initial cpu reset Janosch Frank
  4 siblings, 1 reply; 36+ messages in thread
From: Janosch Frank @ 2020-01-29 20:03 UTC (permalink / raw)
  To: kvm; +Cc: thuth, borntraeger, david, cohuck, linux-s390

Test if the registers end up having the correct values after a normal,
initial and clear reset.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 tools/testing/selftests/kvm/Makefile       |   1 +
 tools/testing/selftests/kvm/s390x/resets.c | 165 +++++++++++++++++++++
 2 files changed, 166 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/s390x/resets.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 3138a916574a..fe1ea294730c 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -36,6 +36,7 @@ TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
 
 TEST_GEN_PROGS_s390x = s390x/memop
 TEST_GEN_PROGS_s390x += s390x/sync_regs_test
+TEST_GEN_PROGS_s390x += s390x/resets
 TEST_GEN_PROGS_s390x += dirty_log_test
 TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
 
diff --git a/tools/testing/selftests/kvm/s390x/resets.c b/tools/testing/selftests/kvm/s390x/resets.c
new file mode 100644
index 000000000000..2b2378cc9e80
--- /dev/null
+++ b/tools/testing/selftests/kvm/s390x/resets.c
@@ -0,0 +1,165 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Test for s390x CPU resets
+ *
+ * Copyright (C) 2020, IBM
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+
+#define VCPU_ID 3
+
+struct kvm_vm *vm;
+struct kvm_run *run;
+struct kvm_sync_regs *regs;
+static uint64_t regs_null[16];
+
+static uint64_t crs[16] = { 0x40000ULL,
+			    0x42000ULL,
+			    0, 0, 0, 0, 0,
+			    0x43000ULL,
+			    0, 0, 0, 0, 0,
+			    0x44000ULL,
+			    0, 0
+};
+
+static void guest_code_initial(void)
+{
+	/* Round toward 0 */
+	uint32_t fpc = 0x11;
+
+	/* Dirty registers */
+	asm volatile (
+		"	lctlg	0,15,%0\n"
+		"	sfpc	%1\n"
+		: : "Q" (crs), "d" (fpc));
+}
+
+static void test_one_reg(uint64_t id, uint64_t value)
+{
+	struct kvm_one_reg reg;
+	uint64_t eval_reg;
+
+	reg.addr = (uintptr_t)&eval_reg;
+	reg.id = id;
+	vcpu_get_reg(vm, VCPU_ID, &reg);
+	TEST_ASSERT(eval_reg == value, "value == %s", value);
+}
+
+static void assert_clear(void)
+{
+	struct kvm_sregs sregs;
+	struct kvm_regs regs;
+	struct kvm_fpu fpu;
+
+	vcpu_regs_get(vm, VCPU_ID, &regs);
+	TEST_ASSERT(!memcmp(&regs.gprs, regs_null, sizeof(regs.gprs)), "grs == 0");
+
+	vcpu_sregs_get(vm, VCPU_ID, &sregs);
+	TEST_ASSERT(!memcmp(&sregs.acrs, regs_null, sizeof(sregs.acrs)), "acrs == 0");
+
+	vcpu_fpu_get(vm, VCPU_ID, &fpu);
+	TEST_ASSERT(!memcmp(&fpu.fprs, regs_null, sizeof(fpu.fprs)), "fprs == 0");
+}
+
+static void assert_initial(void)
+{
+	struct kvm_sregs sregs;
+	struct kvm_fpu fpu;
+
+	vcpu_sregs_get(vm, VCPU_ID, &sregs);
+	TEST_ASSERT(sregs.crs[0] == 0xE0UL, "cr0 == 0xE0");
+	TEST_ASSERT(sregs.crs[14] == 0xC2000000UL, "cr14 == 0xC2000000");
+	TEST_ASSERT(!memcmp(&sregs.crs[1], regs_null, sizeof(sregs.crs[1]) * 12),
+		    "cr1-13 == 0");
+	TEST_ASSERT(sregs.crs[15] == 0, "cr15 == 0");
+
+	vcpu_fpu_get(vm, VCPU_ID, &fpu);
+	TEST_ASSERT(!fpu.fpc, "fpc == 0");
+
+	test_one_reg(KVM_REG_S390_GBEA, 1);
+	test_one_reg(KVM_REG_S390_PP, 0);
+	test_one_reg(KVM_REG_S390_TODPR, 0);
+	test_one_reg(KVM_REG_S390_CPU_TIMER, 0);
+	test_one_reg(KVM_REG_S390_CLOCK_COMP, 0);
+}
+
+static void assert_normal(void)
+{
+	test_one_reg(KVM_REG_S390_PFTOKEN, KVM_S390_PFAULT_TOKEN_INVALID);
+}
+
+static void test_normal(void)
+{
+	printf("Testing notmal reset\n");
+	/* Create VM */
+	vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
+	run = vcpu_state(vm, VCPU_ID);
+	regs = &run->s.regs;
+
+	_vcpu_run(vm, VCPU_ID);
+
+	vcpu_ioctl(vm, VCPU_ID, KVM_S390_NORMAL_RESET, 0);
+	assert_normal();
+	kvm_vm_free(vm);
+}
+
+static int test_initial(void)
+{
+	int rv;
+
+	printf("Testing initial reset\n");
+	/* Create VM */
+	vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
+	run = vcpu_state(vm, VCPU_ID);
+	regs = &run->s.regs;
+
+	rv = _vcpu_run(vm, VCPU_ID);
+
+	vcpu_ioctl(vm, VCPU_ID, KVM_S390_INITIAL_RESET, 0);
+	assert_normal();
+	assert_initial();
+	kvm_vm_free(vm);
+	return rv;
+}
+
+static int test_clear(void)
+{
+	int rv;
+
+	printf("Testing clear reset\n");
+	/* Create VM */
+	vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
+	run = vcpu_state(vm, VCPU_ID);
+	regs = &run->s.regs;
+
+	rv = _vcpu_run(vm, VCPU_ID);
+
+	vcpu_ioctl(vm, VCPU_ID, KVM_S390_CLEAR_RESET, 0);
+	assert_normal();
+	assert_initial();
+	assert_clear();
+	kvm_vm_free(vm);
+	return rv;
+}
+
+int main(int argc, char *argv[])
+{
+	int addl_resets;
+
+	setbuf(stdout, NULL);	/* Tell stdout not to buffer its content */
+	addl_resets = kvm_check_cap(KVM_CAP_S390_VCPU_RESETS);
+
+	test_initial();
+	if (addl_resets) {
+		test_normal();
+		test_clear();
+	}
+	return 0;
+}
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 4/4] selftests: KVM: testing the local IRQs resets
  2020-01-29 20:03 [PATCH v8 0/4] KVM: s390: Add new reset vcpu API Janosch Frank
                   ` (2 preceding siblings ...)
  2020-01-29 20:03 ` [PATCH v8 3/4] selftests: KVM: s390x: Add reset tests Janosch Frank
@ 2020-01-29 20:03 ` Janosch Frank
  2020-01-30 10:55   ` Cornelia Huck
  2020-01-30 11:10   ` Thomas Huth
  2020-01-30  9:10 ` [PATCH] KVM: s390: Cleanup initial cpu reset Janosch Frank
  4 siblings, 2 replies; 36+ messages in thread
From: Janosch Frank @ 2020-01-29 20:03 UTC (permalink / raw)
  To: kvm; +Cc: thuth, borntraeger, david, cohuck, linux-s390

From: Pierre Morel <pmorel@linux.ibm.com>

Local IRQs are reset by a normal cpu reset.  The initial cpu reset and
the clear cpu reset, as superset of the normal reset, both clear the
IRQs too.

Let's inject an interrupt to a vCPU before calling a reset and see if
it is gone after the reset.

We choose to inject only an emergency interrupt at this point and can
extend the test to other types of IRQs later.

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
---
 tools/testing/selftests/kvm/s390x/resets.c | 57 ++++++++++++++++++++++
 1 file changed, 57 insertions(+)

diff --git a/tools/testing/selftests/kvm/s390x/resets.c b/tools/testing/selftests/kvm/s390x/resets.c
index 2b2378cc9e80..299c1686f98c 100644
--- a/tools/testing/selftests/kvm/s390x/resets.c
+++ b/tools/testing/selftests/kvm/s390x/resets.c
@@ -14,6 +14,9 @@
 #include "kvm_util.h"
 
 #define VCPU_ID 3
+#define LOCAL_IRQS 32
+
+struct kvm_s390_irq buf[VCPU_ID + LOCAL_IRQS];
 
 struct kvm_vm *vm;
 struct kvm_run *run;
@@ -52,6 +55,29 @@ static void test_one_reg(uint64_t id, uint64_t value)
 	TEST_ASSERT(eval_reg == value, "value == %s", value);
 }
 
+static void assert_noirq(void)
+{
+	struct kvm_s390_irq_state irq_state;
+	int irqs;
+
+	if (!(kvm_check_cap(KVM_CAP_S390_INJECT_IRQ) &&
+	    kvm_check_cap(KVM_CAP_S390_IRQ_STATE)))
+		return;
+
+	irq_state.len = sizeof(buf);
+	irq_state.buf = (unsigned long)buf;
+	irqs = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_GET_IRQ_STATE, &irq_state);
+	/*
+	 * irqs contains the number of retrieved interrupts, apart from the
+	 * emergency call that should be cleared by the resets, there should be
+	 * none.
+	 */
+	if (irqs < 0)
+		printf("Error by getting IRQ: errno %d\n", errno);
+
+	TEST_ASSERT(!irqs, "IRQ pending");
+}
+
 static void assert_clear(void)
 {
 	struct kvm_sregs sregs;
@@ -93,6 +119,31 @@ static void assert_initial(void)
 static void assert_normal(void)
 {
 	test_one_reg(KVM_REG_S390_PFTOKEN, KVM_S390_PFAULT_TOKEN_INVALID);
+	assert_noirq();
+}
+
+static int inject_irq(int cpu_id)
+{
+	struct kvm_s390_irq_state irq_state;
+	struct kvm_s390_irq *irq = &buf[0];
+	int irqs;
+
+	if (!(kvm_check_cap(KVM_CAP_S390_INJECT_IRQ) &&
+	    kvm_check_cap(KVM_CAP_S390_IRQ_STATE)))
+		return 0;
+
+	/* Inject IRQ */
+	irq_state.len = sizeof(struct kvm_s390_irq);
+	irq_state.buf = (unsigned long)buf;
+	irq->type = KVM_S390_INT_EMERGENCY;
+	irq->u.emerg.code = cpu_id;
+	irqs = _vcpu_ioctl(vm, cpu_id, KVM_S390_SET_IRQ_STATE, &irq_state);
+	if (irqs < 0) {
+		printf("Error by injecting INT_EMERGENCY: errno %d\n", errno);
+		return errno;
+	}
+
+	return 0;
 }
 
 static void test_normal(void)
@@ -105,6 +156,8 @@ static void test_normal(void)
 
 	_vcpu_run(vm, VCPU_ID);
 
+	inject_irq(VCPU_ID);
+
 	vcpu_ioctl(vm, VCPU_ID, KVM_S390_NORMAL_RESET, 0);
 	assert_normal();
 	kvm_vm_free(vm);
@@ -122,6 +175,8 @@ static int test_initial(void)
 
 	rv = _vcpu_run(vm, VCPU_ID);
 
+	inject_irq(VCPU_ID);
+
 	vcpu_ioctl(vm, VCPU_ID, KVM_S390_INITIAL_RESET, 0);
 	assert_normal();
 	assert_initial();
@@ -141,6 +196,8 @@ static int test_clear(void)
 
 	rv = _vcpu_run(vm, VCPU_ID);
 
+	inject_irq(VCPU_ID);
+
 	vcpu_ioctl(vm, VCPU_ID, KVM_S390_CLEAR_RESET, 0);
 	assert_normal();
 	assert_initial();
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset
  2020-01-29 20:03 ` [PATCH v8 1/4] " Janosch Frank
@ 2020-01-30  8:55   ` Christian Borntraeger
  2020-01-30  9:49     ` David Hildenbrand
  2020-01-30  9:00   ` [PATCH v8 1/4] KVM: s390: Add new reset vcpu API Thomas Huth
  2020-01-30  9:58   ` Christian Borntraeger
  2 siblings, 1 reply; 36+ messages in thread
From: Christian Borntraeger @ 2020-01-30  8:55 UTC (permalink / raw)
  To: frankja; +Cc: borntraeger, cohuck, david, kvm, linux-s390, thuth, stable

The initial CPU reset currently clobbers the userspace fpc. This was an
oversight during a fixup for the lazy fpu reloading rework.  The reset
calls are only done from userspace ioctls. No CPU context is loaded, so
we can (and must) act directly on the sync regs, not on the thread
context. Otherwise the fpu restore call will restore the zeroes fpc to
userspace.

Cc: stable@kernel.org
Fixes: 9abc2a08a7d6 ("KVM: s390: fix memory overwrites when vx is disabled")
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/kvm/kvm-s390.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index c059b86..eb789cd 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2824,8 +2824,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
 	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
 					CR14_UNUSED_33 |
 					CR14_EXTERNAL_DAMAGE_SUBMASK;
-	/* make sure the new fpc will be lazily loaded */
-	save_fpu_regs();
+	vcpu->run->s.regs.fpc = 0;
 	current->thread.fpu.fpc = 0;
 	vcpu->arch.sie_block->gbea = 1;
 	vcpu->arch.sie_block->pp = 0;
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 1/4] KVM: s390: Add new reset vcpu API
  2020-01-29 20:03 ` [PATCH v8 1/4] " Janosch Frank
  2020-01-30  8:55   ` [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset Christian Borntraeger
@ 2020-01-30  9:00   ` Thomas Huth
  2020-01-30  9:58   ` Christian Borntraeger
  2 siblings, 0 replies; 36+ messages in thread
From: Thomas Huth @ 2020-01-30  9:00 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: borntraeger, david, cohuck, linux-s390

On 29/01/2020 21.03, Janosch Frank wrote:
> The architecture states that we need to reset local IRQs for all CPU
> resets. Because the old reset interface did not support the normal CPU
> reset we never did that on a normal reset.
> 
> Let's implement an interface for the missing normal and clear resets
> and reset all local IRQs, registers and control structures as stated
> in the architecture.
> 
> Userspace might already reset the registers via the vcpu run struct,
> but as we need the interface for the interrupt clearing part anyway,
> we implement the resets fully and don't rely on userspace to reset the
> rest.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> Reviewed-by: Cornelia Huck <cohuck@redhat.com>
> ---
>  Documentation/virt/kvm/api.txt |  43 ++++++++++++++
>  arch/s390/kvm/kvm-s390.c       | 103 +++++++++++++++++++++++----------
>  include/uapi/linux/kvm.h       |   5 ++
>  3 files changed, 122 insertions(+), 29 deletions(-)
> 
> diff --git a/Documentation/virt/kvm/api.txt b/Documentation/virt/kvm/api.txt
> index ebb37b34dcfc..73448764f544 100644
> --- a/Documentation/virt/kvm/api.txt
> +++ b/Documentation/virt/kvm/api.txt
> @@ -4168,6 +4168,42 @@ This ioctl issues an ultravisor call to terminate the secure guest,
>  unpins the VPA pages and releases all the device pages that are used to
>  track the secure pages by hypervisor.
>  
> +4.122 KVM_S390_NORMAL_RESET
> +
> +Capability: KVM_CAP_S390_VCPU_RESETS
> +Architectures: s390
> +Type: vcpu ioctl
> +Parameters: none
> +Returns: 0
> +
> +This ioctl resets VCPU registers and control structures according to
> +the cpu reset definition in the POP (Principles Of Operation).
> +
> +4.123 KVM_S390_INITIAL_RESET
> +
> +Capability: none
> +Architectures: s390
> +Type: vcpu ioctl
> +Parameters: none
> +Returns: 0
> +
> +This ioctl resets VCPU registers and control structures according to
> +the initial cpu reset definition in the POP. However, the cpu is not
> +put into ESA mode. This reset is a superset of the normal reset.
> +
> +4.124 KVM_S390_CLEAR_RESET
> +
> +Capability: KVM_CAP_S390_VCPU_RESETS
> +Architectures: s390
> +Type: vcpu ioctl
> +Parameters: none
> +Returns: 0
> +
> +This ioctl resets VCPU registers and control structures according to
> +the clear cpu reset definition in the POP. However, the cpu is not put
> +into ESA mode. This reset is a superset of the initial reset.
> +
> +
>  5. The kvm_run structure
>  ------------------------
>  
> @@ -5396,3 +5432,10 @@ handling by KVM (as some KVM hypercall may be mistakenly treated as TLB
>  flush hypercalls by Hyper-V) so userspace should disable KVM identification
>  in CPUID and only exposes Hyper-V identification. In this case, guest
>  thinks it's running on Hyper-V and only use Hyper-V hypercalls.
> +
> +8.22 KVM_CAP_S390_VCPU_RESETS
> +
> +Architectures: s390
> +
> +This capability indicates that the KVM_S390_NORMAL_RESET and
> +KVM_S390_CLEAR_RESET ioctls are available.
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index c5f520de39a6..6aebaf08db64 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -529,6 +529,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_S390_CMMA_MIGRATION:
>  	case KVM_CAP_S390_AIS:
>  	case KVM_CAP_S390_AIS_MIGRATION:
> +	case KVM_CAP_S390_VCPU_RESETS:
>  		r = 1;
>  		break;
>  	case KVM_CAP_S390_HPAGE_1M:
> @@ -2844,31 +2845,6 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>  
>  }
>  
> -static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
> -{
> -	/* this equals initial cpu reset in pop, but we don't switch to ESA */
> -	vcpu->arch.sie_block->gpsw.mask = 0;
> -	vcpu->arch.sie_block->gpsw.addr = 0;
> -	kvm_s390_set_prefix(vcpu, 0);
> -	kvm_s390_set_cpu_timer(vcpu, 0);
> -	vcpu->arch.sie_block->ckc = 0;
> -	vcpu->arch.sie_block->todpr = 0;
> -	memset(vcpu->arch.sie_block->gcr, 0, sizeof(vcpu->arch.sie_block->gcr));
> -	vcpu->arch.sie_block->gcr[0] = CR0_INITIAL_MASK;
> -	vcpu->arch.sie_block->gcr[14] = CR14_INITIAL_MASK;
> -	/* make sure the new fpc will be lazily loaded */
> -	save_fpu_regs();
> -	current->thread.fpu.fpc = 0;
> -	vcpu->arch.sie_block->gbea = 1;
> -	vcpu->arch.sie_block->pp = 0;
> -	vcpu->arch.sie_block->fpf &= ~FPF_BPBC;
> -	vcpu->arch.pfault_token = KVM_S390_PFAULT_TOKEN_INVALID;
> -	kvm_clear_async_pf_completion_queue(vcpu);
> -	if (!kvm_s390_user_cpu_state_ctrl(vcpu->kvm))
> -		kvm_s390_vcpu_stop(vcpu);
> -	kvm_s390_clear_local_irqs(vcpu);
> -}
> -
>  void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
>  {
>  	mutex_lock(&vcpu->kvm->lock);
> @@ -3283,10 +3259,70 @@ static int kvm_arch_vcpu_ioctl_set_one_reg(struct kvm_vcpu *vcpu,
>  	return r;
>  }
>  
> -static int kvm_arch_vcpu_ioctl_initial_reset(struct kvm_vcpu *vcpu)
> +static void kvm_arch_vcpu_ioctl_normal_reset(struct kvm_vcpu *vcpu)
>  {
> -	kvm_s390_vcpu_initial_reset(vcpu);
> -	return 0;
> +	vcpu->arch.sie_block->gpsw.mask &= ~PSW_MASK_RI;
> +	vcpu->arch.pfault_token = KVM_S390_PFAULT_TOKEN_INVALID;
> +	memset(vcpu->run->s.regs.riccb, 0, sizeof(vcpu->run->s.regs.riccb));
> +
> +	kvm_clear_async_pf_completion_queue(vcpu);
> +	if (!kvm_s390_user_cpu_state_ctrl(vcpu->kvm))
> +		kvm_s390_vcpu_stop(vcpu);
> +	kvm_s390_clear_local_irqs(vcpu);
> +}
> +
> +static void kvm_arch_vcpu_ioctl_initial_reset(struct kvm_vcpu *vcpu)
> +{
> +	/* Initial reset is a superset of the normal reset */
> +	kvm_arch_vcpu_ioctl_normal_reset(vcpu);
> +
> +	/* this equals initial cpu reset in pop, but we don't switch to ESA */
> +	vcpu->arch.sie_block->gpsw.mask = 0;
> +	vcpu->arch.sie_block->gpsw.addr = 0;
> +	kvm_s390_set_prefix(vcpu, 0);
> +	kvm_s390_set_cpu_timer(vcpu, 0);
> +	vcpu->arch.sie_block->ckc = 0;
> +	vcpu->arch.sie_block->todpr = 0;
> +	memset(vcpu->arch.sie_block->gcr, 0, sizeof(vcpu->arch.sie_block->gcr));
> +	vcpu->arch.sie_block->gcr[0] = CR0_INITIAL_MASK;
> +	vcpu->arch.sie_block->gcr[14] = CR14_INITIAL_MASK;

Is your "KVM: s390: Cleanup initial cpu reset" patch already queued
somewhere? If not, please add it to this series so that it is clear
where the CR*_INITIAL_MASK macros come from.

Apart from that (and the save_fpu_regs() problem that should be fixed
first), the patch looks fine to me now.

 Thomas


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH] KVM: s390: Cleanup initial cpu reset
  2020-01-29 20:03 [PATCH v8 0/4] KVM: s390: Add new reset vcpu API Janosch Frank
                   ` (3 preceding siblings ...)
  2020-01-29 20:03 ` [PATCH v8 4/4] selftests: KVM: testing the local IRQs resets Janosch Frank
@ 2020-01-30  9:10 ` Janosch Frank
  4 siblings, 0 replies; 36+ messages in thread
From: Janosch Frank @ 2020-01-30  9:10 UTC (permalink / raw)
  To: kvm; +Cc: thuth, borntraeger, david, cohuck, linux-s390

The code seems to be quite old and uses lots of unneeded spaces for
alignment, which doesn't really help with readability.

Let's:
* Get rid of the extra spaces
* Remove the ULs as they are not needed on 0s
* Define constants for the CR 0 and 14 initial values
* Use the sizeof of the gcr array to memset it to 0

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
---

I only send out 4 of the 5 patches...

---
 arch/s390/include/asm/kvm_host.h |  5 +++++
 arch/s390/kvm/kvm-s390.c         | 18 +++++++-----------
 2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 02f4c21c57f6..73044545ecac 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -122,6 +122,11 @@ struct mcck_volatile_info {
 	__u32 reserved;
 };
 
+#define CR0_INITIAL_MASK (CR0_UNUSED_56 | CR0_INTERRUPT_KEY_SUBMASK | \
+			  CR0_MEASUREMENT_ALERT_SUBMASK)
+#define CR14_INITIAL_MASK (CR14_UNUSED_32 | CR14_UNUSED_33 | \
+			   CR14_EXTERNAL_DAMAGE_SUBMASK)
+
 #define CPUSTAT_STOPPED    0x80000000
 #define CPUSTAT_WAIT       0x10000000
 #define CPUSTAT_ECALL_PEND 0x08000000
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index d9e6bf3d54f0..c5f520de39a6 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2847,19 +2847,15 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
 {
 	/* this equals initial cpu reset in pop, but we don't switch to ESA */
-	vcpu->arch.sie_block->gpsw.mask = 0UL;
-	vcpu->arch.sie_block->gpsw.addr = 0UL;
+	vcpu->arch.sie_block->gpsw.mask = 0;
+	vcpu->arch.sie_block->gpsw.addr = 0;
 	kvm_s390_set_prefix(vcpu, 0);
 	kvm_s390_set_cpu_timer(vcpu, 0);
-	vcpu->arch.sie_block->ckc       = 0UL;
-	vcpu->arch.sie_block->todpr     = 0;
-	memset(vcpu->arch.sie_block->gcr, 0, 16 * sizeof(__u64));
-	vcpu->arch.sie_block->gcr[0]  = CR0_UNUSED_56 |
-					CR0_INTERRUPT_KEY_SUBMASK |
-					CR0_MEASUREMENT_ALERT_SUBMASK;
-	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
-					CR14_UNUSED_33 |
-					CR14_EXTERNAL_DAMAGE_SUBMASK;
+	vcpu->arch.sie_block->ckc = 0;
+	vcpu->arch.sie_block->todpr = 0;
+	memset(vcpu->arch.sie_block->gcr, 0, sizeof(vcpu->arch.sie_block->gcr));
+	vcpu->arch.sie_block->gcr[0] = CR0_INITIAL_MASK;
+	vcpu->arch.sie_block->gcr[14] = CR14_INITIAL_MASK;
 	/* make sure the new fpc will be lazily loaded */
 	save_fpu_regs();
 	current->thread.fpu.fpc = 0;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset
  2020-01-30  8:55   ` [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset Christian Borntraeger
@ 2020-01-30  9:49     ` David Hildenbrand
  2020-01-30 10:39       ` Cornelia Huck
  2020-01-30 11:01       ` Christian Borntraeger
  0 siblings, 2 replies; 36+ messages in thread
From: David Hildenbrand @ 2020-01-30  9:49 UTC (permalink / raw)
  To: Christian Borntraeger, frankja; +Cc: cohuck, kvm, linux-s390, thuth, stable

On 30.01.20 09:55, Christian Borntraeger wrote:
> The initial CPU reset currently clobbers the userspace fpc. This was an
> oversight during a fixup for the lazy fpu reloading rework.  The reset
> calls are only done from userspace ioctls. No CPU context is loaded, so
> we can (and must) act directly on the sync regs, not on the thread
> context. Otherwise the fpu restore call will restore the zeroes fpc to
> userspace.
> 
> Cc: stable@kernel.org
> Fixes: 9abc2a08a7d6 ("KVM: s390: fix memory overwrites when vx is disabled")
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> ---
>  arch/s390/kvm/kvm-s390.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index c059b86..eb789cd 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -2824,8 +2824,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
>  	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
>  					CR14_UNUSED_33 |
>  					CR14_EXTERNAL_DAMAGE_SUBMASK;
> -	/* make sure the new fpc will be lazily loaded */
> -	save_fpu_regs();
> +	vcpu->run->s.regs.fpc = 0;
>  	current->thread.fpu.fpc = 0;
>  	vcpu->arch.sie_block->gbea = 1;
>  	vcpu->arch.sie_block->pp = 0;
> 

kvm_arch_vcpu_ioctl() does a vcpu_load(vcpu), followed by the call to
kvm_arch_vcpu_ioctl_initial_reset(), followed by a vcpu_put().

What am I missing?

(we could get rid of the kvm_arch_vcpu_ioctl_initial_reset() wrapper)

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 1/4] KVM: s390: Add new reset vcpu API
  2020-01-29 20:03 ` [PATCH v8 1/4] " Janosch Frank
  2020-01-30  8:55   ` [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset Christian Borntraeger
  2020-01-30  9:00   ` [PATCH v8 1/4] KVM: s390: Add new reset vcpu API Thomas Huth
@ 2020-01-30  9:58   ` Christian Borntraeger
  2 siblings, 0 replies; 36+ messages in thread
From: Christian Borntraeger @ 2020-01-30  9:58 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: thuth, david, cohuck, linux-s390



On 29.01.20 21:03, Janosch Frank wrote:
[...]> +static void kvm_arch_vcpu_ioctl_initial_reset(struct kvm_vcpu *vcpu)
> +{
> +	/* Initial reset is a superset of the normal reset */
> +	kvm_arch_vcpu_ioctl_normal_reset(vcpu);
> +
> +	/* this equals initial cpu reset in pop, but we don't switch to ESA */
> +	vcpu->arch.sie_block->gpsw.mask = 0;
> +	vcpu->arch.sie_block->gpsw.addr = 0;
> +	kvm_s390_set_prefix(vcpu, 0);
> +	kvm_s390_set_cpu_timer(vcpu, 0);
> +	vcpu->arch.sie_block->ckc = 0;
> +	vcpu->arch.sie_block->todpr = 0;
> +	memset(vcpu->arch.sie_block->gcr, 0, sizeof(vcpu->arch.sie_block->gcr));
> +	vcpu->arch.sie_block->gcr[0] = CR0_INITIAL_MASK;
> +	vcpu->arch.sie_block->gcr[14] = CR14_INITIAL_MASK;
> +	/* make sure the new fpc will be lazily loaded */
> +	save_fpu_regs();

see my other patch. We should rebase this series and fix it here as well

> +	current->thread.fpu.fpc = 0;
> +	vcpu->arch.sie_block->gbea = 1;
> +	vcpu->arch.sie_block->pp = 0;
> +	vcpu->arch.sie_block->fpf &= ~FPF_BPBC;
> +}
> +
> +static void kvm_arch_vcpu_ioctl_clear_reset(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_sync_regs *regs = &vcpu->run->s.regs;
> +
> +	/* Clear reset is a superset of the initial reset */
> +	kvm_arch_vcpu_ioctl_initial_reset(vcpu);
> +
> +	memset(&regs->gprs, 0, sizeof(regs->gprs));
> +	memset(&regs->vrs, 0, sizeof(regs->vrs));
> +	memset(&regs->acrs, 0, sizeof(regs->acrs));
> +
> +	regs->etoken = 0;
> +	regs->etoken_extension = 0;
> +
> +	memset(&regs->gscb, 0, sizeof(regs->gscb));


> +	if (MACHINE_HAS_GS) {
> +		preempt_disable();
> +		__ctl_set_bit(2, 4);
> +		if (current->thread.gs_cb) {
> +			vcpu->arch.host_gscb = current->thread.gs_cb;
> +			save_gs_cb(vcpu->arch.host_gscb);
> +		}
> +		if (vcpu->arch.gs_enabled) {
> +			current->thread.gs_cb = (struct gs_cb *)
> +				&vcpu->run->s.regs.gscb;
> +			restore_gs_cb(current->thread.gs_cb);
> +		}
> +		preempt_enable();
> +	}

I think this hunk can go? (same reason as for floating point)


Other than that this looks good.


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 2/4] selftests: KVM: Add fpu and one reg set/get library functions
  2020-01-29 20:03 ` [PATCH v8 2/4] selftests: KVM: Add fpu and one reg set/get library functions Janosch Frank
@ 2020-01-30 10:36   ` Thomas Huth
  2020-01-30 13:55     ` Andrew Jones
  0 siblings, 1 reply; 36+ messages in thread
From: Thomas Huth @ 2020-01-30 10:36 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: borntraeger, david, cohuck, linux-s390, Andrew Jones

On 29/01/2020 21.03, Janosch Frank wrote:
> Add library access to more registers.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> ---
>  .../testing/selftests/kvm/include/kvm_util.h  |  6 +++
>  tools/testing/selftests/kvm/lib/kvm_util.c    | 48 +++++++++++++++++++
>  2 files changed, 54 insertions(+)
> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 29cccaf96baf..ae0d14c2540a 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -125,6 +125,12 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>  		    struct kvm_sregs *sregs);
>  int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>  		    struct kvm_sregs *sregs);
> +void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid,
> +		  struct kvm_fpu *fpu);
> +void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid,
> +		  struct kvm_fpu *fpu);
> +void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
> +void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
>  #ifdef __KVM_HAVE_VCPU_EVENTS
>  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
>  		     struct kvm_vcpu_events *events);
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 41cf45416060..dae117728ec6 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1373,6 +1373,54 @@ int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_sregs *sregs)
>  	return ioctl(vcpu->fd, KVM_SET_SREGS, sregs);
>  }
>  
> +void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
> +{
> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> +	int ret;
> +
> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> +
> +	ret = ioctl(vcpu->fd, KVM_GET_FPU, fpu);
> +	TEST_ASSERT(ret == 0, "KVM_GET_FPU failed, rc: %i errno: %i",
> +		    ret, errno);
> +}
> +
> +void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
> +{
> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> +	int ret;
> +
> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> +
> +	ret = ioctl(vcpu->fd, KVM_SET_FPU, fpu);
> +	TEST_ASSERT(ret == 0, "KVM_SET_FPU failed, rc: %i errno: %i",
> +		    ret, errno);
> +}
> +
> +void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
> +{
> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> +	int ret;
> +
> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> +
> +	ret = ioctl(vcpu->fd, KVM_GET_ONE_REG, reg);
> +	TEST_ASSERT(ret == 0, "KVM_GET_ONE_REG failed, rc: %i errno: %i",
> +		    ret, errno);
> +}
> +
> +void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
> +{
> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> +	int ret;
> +
> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> +
> +	ret = ioctl(vcpu->fd, KVM_SET_ONE_REG, reg);
> +	TEST_ASSERT(ret == 0, "KVM_SET_ONE_REG failed, rc: %i errno: %i",
> +		    ret, errno);
> +}
> +
>  /*
>   * VCPU Ioctl
>   *
> 

Reviewed-by: Thomas Huth <thuth@redhat.com>


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset
  2020-01-30  9:49     ` David Hildenbrand
@ 2020-01-30 10:39       ` Cornelia Huck
  2020-01-30 10:56         ` Thomas Huth
  2020-01-30 11:01       ` Christian Borntraeger
  1 sibling, 1 reply; 36+ messages in thread
From: Cornelia Huck @ 2020-01-30 10:39 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Christian Borntraeger, frankja, kvm, linux-s390, thuth, stable

On Thu, 30 Jan 2020 10:49:35 +0100
David Hildenbrand <david@redhat.com> wrote:

> On 30.01.20 09:55, Christian Borntraeger wrote:
> > The initial CPU reset currently clobbers the userspace fpc. This was an
> > oversight during a fixup for the lazy fpu reloading rework.  The reset
> > calls are only done from userspace ioctls. No CPU context is loaded, so
> > we can (and must) act directly on the sync regs, not on the thread
> > context. Otherwise the fpu restore call will restore the zeroes fpc to
> > userspace.
> > 
> > Cc: stable@kernel.org
> > Fixes: 9abc2a08a7d6 ("KVM: s390: fix memory overwrites when vx is disabled")
> > Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> > ---
> >  arch/s390/kvm/kvm-s390.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> > 
> > diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> > index c059b86..eb789cd 100644
> > --- a/arch/s390/kvm/kvm-s390.c
> > +++ b/arch/s390/kvm/kvm-s390.c
> > @@ -2824,8 +2824,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
> >  	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
> >  					CR14_UNUSED_33 |
> >  					CR14_EXTERNAL_DAMAGE_SUBMASK;
> > -	/* make sure the new fpc will be lazily loaded */
> > -	save_fpu_regs();
> > +	vcpu->run->s.regs.fpc = 0;
> >  	current->thread.fpu.fpc = 0;
> >  	vcpu->arch.sie_block->gbea = 1;
> >  	vcpu->arch.sie_block->pp = 0;
> >   
> 
> kvm_arch_vcpu_ioctl() does a vcpu_load(vcpu), followed by the call to
> kvm_arch_vcpu_ioctl_initial_reset(), followed by a vcpu_put().
> 
> What am I missing?

I have been staring at this patch for some time now, and I fear I'm
missing something as well. Can we please get more explanation?

> 
> (we could get rid of the kvm_arch_vcpu_ioctl_initial_reset() wrapper)
> 


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 3/4] selftests: KVM: s390x: Add reset tests
  2020-01-29 20:03 ` [PATCH v8 3/4] selftests: KVM: s390x: Add reset tests Janosch Frank
@ 2020-01-30 10:51   ` Thomas Huth
  2020-01-30 11:32     ` Janosch Frank
  0 siblings, 1 reply; 36+ messages in thread
From: Thomas Huth @ 2020-01-30 10:51 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: borntraeger, david, cohuck, linux-s390

On 29/01/2020 21.03, Janosch Frank wrote:
> Test if the registers end up having the correct values after a normal,
> initial and clear reset.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> ---
>  tools/testing/selftests/kvm/Makefile       |   1 +
>  tools/testing/selftests/kvm/s390x/resets.c | 165 +++++++++++++++++++++
>  2 files changed, 166 insertions(+)
>  create mode 100644 tools/testing/selftests/kvm/s390x/resets.c
> 
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 3138a916574a..fe1ea294730c 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -36,6 +36,7 @@ TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
>  
>  TEST_GEN_PROGS_s390x = s390x/memop
>  TEST_GEN_PROGS_s390x += s390x/sync_regs_test
> +TEST_GEN_PROGS_s390x += s390x/resets
>  TEST_GEN_PROGS_s390x += dirty_log_test
>  TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
>  
> diff --git a/tools/testing/selftests/kvm/s390x/resets.c b/tools/testing/selftests/kvm/s390x/resets.c
> new file mode 100644
> index 000000000000..2b2378cc9e80
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/s390x/resets.c
> @@ -0,0 +1,165 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * Test for s390x CPU resets
> + *
> + * Copyright (C) 2020, IBM
> + */
> +
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <sys/ioctl.h>
> +
> +#include "test_util.h"
> +#include "kvm_util.h"
> +
> +#define VCPU_ID 3
> +
> +struct kvm_vm *vm;
> +struct kvm_run *run;
> +struct kvm_sync_regs *regs;
> +static uint64_t regs_null[16];
> +
> +static uint64_t crs[16] = { 0x40000ULL,
> +			    0x42000ULL,
> +			    0, 0, 0, 0, 0,
> +			    0x43000ULL,
> +			    0, 0, 0, 0, 0,
> +			    0x44000ULL,
> +			    0, 0
> +};
> +
> +static void guest_code_initial(void)
> +{
> +	/* Round toward 0 */
> +	uint32_t fpc = 0x11;
> +
> +	/* Dirty registers */
> +	asm volatile (
> +		"	lctlg	0,15,%0\n"
> +		"	sfpc	%1\n"
> +		: : "Q" (crs), "d" (fpc));

I'd recommend to add a GUEST_SYNC(0) here ... otherwise the guest code
tries to return from this function and will cause a crash - which will
also finish execution of the guest, but might have unexpected side effects.

> +}
> +
> +static void test_one_reg(uint64_t id, uint64_t value)
> +{
> +	struct kvm_one_reg reg;
> +	uint64_t eval_reg;
> +
> +	reg.addr = (uintptr_t)&eval_reg;
> +	reg.id = id;
> +	vcpu_get_reg(vm, VCPU_ID, &reg);
> +	TEST_ASSERT(eval_reg == value, "value == %s", value);
> +}
> +
> +static void assert_clear(void)
> +{
> +	struct kvm_sregs sregs;
> +	struct kvm_regs regs;
> +	struct kvm_fpu fpu;
> +
> +	vcpu_regs_get(vm, VCPU_ID, &regs);
> +	TEST_ASSERT(!memcmp(&regs.gprs, regs_null, sizeof(regs.gprs)), "grs == 0");
> +
> +	vcpu_sregs_get(vm, VCPU_ID, &sregs);
> +	TEST_ASSERT(!memcmp(&sregs.acrs, regs_null, sizeof(sregs.acrs)), "acrs == 0");
> +
> +	vcpu_fpu_get(vm, VCPU_ID, &fpu);
> +	TEST_ASSERT(!memcmp(&fpu.fprs, regs_null, sizeof(fpu.fprs)), "fprs == 0");
> +}
> +
> +static void assert_initial(void)
> +{
> +	struct kvm_sregs sregs;
> +	struct kvm_fpu fpu;
> +
> +	vcpu_sregs_get(vm, VCPU_ID, &sregs);
> +	TEST_ASSERT(sregs.crs[0] == 0xE0UL, "cr0 == 0xE0");
> +	TEST_ASSERT(sregs.crs[14] == 0xC2000000UL, "cr14 == 0xC2000000");
> +	TEST_ASSERT(!memcmp(&sregs.crs[1], regs_null, sizeof(sregs.crs[1]) * 12),
> +		    "cr1-13 == 0");
> +	TEST_ASSERT(sregs.crs[15] == 0, "cr15 == 0");
> +
> +	vcpu_fpu_get(vm, VCPU_ID, &fpu);
> +	TEST_ASSERT(!fpu.fpc, "fpc == 0");
> +
> +	test_one_reg(KVM_REG_S390_GBEA, 1);
> +	test_one_reg(KVM_REG_S390_PP, 0);
> +	test_one_reg(KVM_REG_S390_TODPR, 0);
> +	test_one_reg(KVM_REG_S390_CPU_TIMER, 0);
> +	test_one_reg(KVM_REG_S390_CLOCK_COMP, 0);
> +}
> +
> +static void assert_normal(void)
> +{
> +	test_one_reg(KVM_REG_S390_PFTOKEN, KVM_S390_PFAULT_TOKEN_INVALID);
> +}
> +
> +static void test_normal(void)
> +{
> +	printf("Testing notmal reset\n");
> +	/* Create VM */
> +	vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
> +	run = vcpu_state(vm, VCPU_ID);
> +	regs = &run->s.regs;
> +
> +	_vcpu_run(vm, VCPU_ID);

Could you use vcpu_run() instead of _vcpu_run() ?

> +	vcpu_ioctl(vm, VCPU_ID, KVM_S390_NORMAL_RESET, 0);
> +	assert_normal();
> +	kvm_vm_free(vm);
> +}
> +
> +static int test_initial(void)
> +{
> +	int rv;
> +
> +	printf("Testing initial reset\n");
> +	/* Create VM */
> +	vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
> +	run = vcpu_state(vm, VCPU_ID);
> +	regs = &run->s.regs;
> +
> +	rv = _vcpu_run(vm, VCPU_ID);

Extra bonus points if you check here that the registers contain the
values that have been set by the guest ;-)

> +	vcpu_ioctl(vm, VCPU_ID, KVM_S390_INITIAL_RESET, 0);
> +	assert_normal();
> +	assert_initial();
> +	kvm_vm_free(vm);
> +	return rv;
> +}
> +
> +static int test_clear(void)
> +{
> +	int rv;
> +
> +	printf("Testing clear reset\n");
> +	/* Create VM */
> +	vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
> +	run = vcpu_state(vm, VCPU_ID);
> +	regs = &run->s.regs;
> +
> +	rv = _vcpu_run(vm, VCPU_ID);
> +
> +	vcpu_ioctl(vm, VCPU_ID, KVM_S390_CLEAR_RESET, 0);
> +	assert_normal();
> +	assert_initial();
> +	assert_clear();
> +	kvm_vm_free(vm);
> +	return rv;
> +}
> +
> +int main(int argc, char *argv[])
> +{
> +	int addl_resets;
> +
> +	setbuf(stdout, NULL);	/* Tell stdout not to buffer its content */
> +	addl_resets = kvm_check_cap(KVM_CAP_S390_VCPU_RESETS);
> +
> +	test_initial();
> +	if (addl_resets) {

I think you could still fit this into one line, without the need to
declare the addl_resets variable:

	if (kvm_check_cap(KVM_CAP_S390_VCPU_RESETS)) {

> +		test_normal();
> +		test_clear();
> +	}
> +	return 0;
> +}

Apart from the nits, this looks pretty good already, thanks for putting
it together!

 Thomas


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 4/4] selftests: KVM: testing the local IRQs resets
  2020-01-29 20:03 ` [PATCH v8 4/4] selftests: KVM: testing the local IRQs resets Janosch Frank
@ 2020-01-30 10:55   ` Cornelia Huck
  2020-01-30 11:18     ` Janosch Frank
  2020-01-30 11:10   ` Thomas Huth
  1 sibling, 1 reply; 36+ messages in thread
From: Cornelia Huck @ 2020-01-30 10:55 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, thuth, borntraeger, david, linux-s390

On Wed, 29 Jan 2020 15:03:12 -0500
Janosch Frank <frankja@linux.ibm.com> wrote:

> From: Pierre Morel <pmorel@linux.ibm.com>
> 
> Local IRQs are reset by a normal cpu reset.  The initial cpu reset and
> the clear cpu reset, as superset of the normal reset, both clear the
> IRQs too.
> 
> Let's inject an interrupt to a vCPU before calling a reset and see if
> it is gone after the reset.
> 
> We choose to inject only an emergency interrupt at this point and can
> extend the test to other types of IRQs later.
> 
> Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>

You probably should add your s-o-b here as well.

> ---
>  tools/testing/selftests/kvm/s390x/resets.c | 57 ++++++++++++++++++++++
>  1 file changed, 57 insertions(+)
> 
> diff --git a/tools/testing/selftests/kvm/s390x/resets.c b/tools/testing/selftests/kvm/s390x/resets.c
> index 2b2378cc9e80..299c1686f98c 100644
> --- a/tools/testing/selftests/kvm/s390x/resets.c
> +++ b/tools/testing/selftests/kvm/s390x/resets.c
> @@ -14,6 +14,9 @@
>  #include "kvm_util.h"
>  
>  #define VCPU_ID 3
> +#define LOCAL_IRQS 32

Why 32?

> +
> +struct kvm_s390_irq buf[VCPU_ID + LOCAL_IRQS];
>  
>  struct kvm_vm *vm;
>  struct kvm_run *run;
> @@ -52,6 +55,29 @@ static void test_one_reg(uint64_t id, uint64_t value)
>  	TEST_ASSERT(eval_reg == value, "value == %s", value);
>  }
>  
> +static void assert_noirq(void)
> +{
> +	struct kvm_s390_irq_state irq_state;
> +	int irqs;
> +
> +	if (!(kvm_check_cap(KVM_CAP_S390_INJECT_IRQ) &&
> +	    kvm_check_cap(KVM_CAP_S390_IRQ_STATE)))
> +		return;

Might want to do a

irq_introspection_supported = (check stuff);

once for this test? Works fine as is, of course.

> +
> +	irq_state.len = sizeof(buf);
> +	irq_state.buf = (unsigned long)buf;
> +	irqs = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_GET_IRQ_STATE, &irq_state);
> +	/*
> +	 * irqs contains the number of retrieved interrupts, apart from the
> +	 * emergency call that should be cleared by the resets, there should be
> +	 * none.

Even if there were any, they should have been cleared by the reset,
right?

> +	 */
> +	if (irqs < 0)
> +		printf("Error by getting IRQ: errno %d\n", errno);

"Error getting pending IRQs" ?

> +
> +	TEST_ASSERT(!irqs, "IRQ pending");
> +}
> +
>  static void assert_clear(void)
>  {
>  	struct kvm_sregs sregs;
> @@ -93,6 +119,31 @@ static void assert_initial(void)
>  static void assert_normal(void)
>  {
>  	test_one_reg(KVM_REG_S390_PFTOKEN, KVM_S390_PFAULT_TOKEN_INVALID);
> +	assert_noirq();
> +}
> +
> +static int inject_irq(int cpu_id)

You never seem to check the return code.

> +{
> +	struct kvm_s390_irq_state irq_state;
> +	struct kvm_s390_irq *irq = &buf[0];
> +	int irqs;
> +
> +	if (!(kvm_check_cap(KVM_CAP_S390_INJECT_IRQ) &&
> +	    kvm_check_cap(KVM_CAP_S390_IRQ_STATE)))
> +		return 0;
> +
> +	/* Inject IRQ */
> +	irq_state.len = sizeof(struct kvm_s390_irq);
> +	irq_state.buf = (unsigned long)buf;
> +	irq->type = KVM_S390_INT_EMERGENCY;
> +	irq->u.emerg.code = cpu_id;
> +	irqs = _vcpu_ioctl(vm, cpu_id, KVM_S390_SET_IRQ_STATE, &irq_state);
> +	if (irqs < 0) {
> +		printf("Error by injecting INT_EMERGENCY: errno %d\n", errno);

"Error injecting EMERGENCY IRQ" ?

> +		return errno;
> +	}
> +
> +	return 0;
>  }
>  
>  static void test_normal(void)
> @@ -105,6 +156,8 @@ static void test_normal(void)
>  
>  	_vcpu_run(vm, VCPU_ID);
>  
> +	inject_irq(VCPU_ID);
> +
>  	vcpu_ioctl(vm, VCPU_ID, KVM_S390_NORMAL_RESET, 0);
>  	assert_normal();
>  	kvm_vm_free(vm);
> @@ -122,6 +175,8 @@ static int test_initial(void)
>  
>  	rv = _vcpu_run(vm, VCPU_ID);
>  
> +	inject_irq(VCPU_ID);
> +
>  	vcpu_ioctl(vm, VCPU_ID, KVM_S390_INITIAL_RESET, 0);
>  	assert_normal();
>  	assert_initial();
> @@ -141,6 +196,8 @@ static int test_clear(void)
>  
>  	rv = _vcpu_run(vm, VCPU_ID);
>  
> +	inject_irq(VCPU_ID);
> +
>  	vcpu_ioctl(vm, VCPU_ID, KVM_S390_CLEAR_RESET, 0);
>  	assert_normal();
>  	assert_initial();

On the whole, looks good to me.


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset
  2020-01-30 10:39       ` Cornelia Huck
@ 2020-01-30 10:56         ` Thomas Huth
  2020-01-30 11:07           ` Christian Borntraeger
  0 siblings, 1 reply; 36+ messages in thread
From: Thomas Huth @ 2020-01-30 10:56 UTC (permalink / raw)
  To: Cornelia Huck, David Hildenbrand
  Cc: Christian Borntraeger, frankja, kvm, linux-s390, stable

On 30/01/2020 11.39, Cornelia Huck wrote:
> On Thu, 30 Jan 2020 10:49:35 +0100
> David Hildenbrand <david@redhat.com> wrote:
> 
>> On 30.01.20 09:55, Christian Borntraeger wrote:
>>> The initial CPU reset currently clobbers the userspace fpc. This was an
>>> oversight during a fixup for the lazy fpu reloading rework.  The reset
>>> calls are only done from userspace ioctls. No CPU context is loaded, so
>>> we can (and must) act directly on the sync regs, not on the thread
>>> context. Otherwise the fpu restore call will restore the zeroes fpc to
>>> userspace.
>>>
>>> Cc: stable@kernel.org
>>> Fixes: 9abc2a08a7d6 ("KVM: s390: fix memory overwrites when vx is disabled")
>>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>>> ---
>>>  arch/s390/kvm/kvm-s390.c | 3 +--
>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>
>>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>>> index c059b86..eb789cd 100644
>>> --- a/arch/s390/kvm/kvm-s390.c
>>> +++ b/arch/s390/kvm/kvm-s390.c
>>> @@ -2824,8 +2824,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
>>>  	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
>>>  					CR14_UNUSED_33 |
>>>  					CR14_EXTERNAL_DAMAGE_SUBMASK;
>>> -	/* make sure the new fpc will be lazily loaded */
>>> -	save_fpu_regs();
>>> +	vcpu->run->s.regs.fpc = 0;
>>>  	current->thread.fpu.fpc = 0;
>>>  	vcpu->arch.sie_block->gbea = 1;
>>>  	vcpu->arch.sie_block->pp = 0;
>>>   
>>
>> kvm_arch_vcpu_ioctl() does a vcpu_load(vcpu), followed by the call to
>> kvm_arch_vcpu_ioctl_initial_reset(), followed by a vcpu_put().
>>
>> What am I missing?
> 
> I have been staring at this patch for some time now, and I fear I'm
> missing something as well. Can we please get more explanation?

Could we please get a test for this issue in the kvm selftests, too?
I.e. host sets a value in its FPC, then calls the INITIAL_RESET ioctl
and then checks that the value in its FPC is still there?

 Thomas


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset
  2020-01-30  9:49     ` David Hildenbrand
  2020-01-30 10:39       ` Cornelia Huck
@ 2020-01-30 11:01       ` Christian Borntraeger
  2020-01-30 11:14         ` Christian Borntraeger
  1 sibling, 1 reply; 36+ messages in thread
From: Christian Borntraeger @ 2020-01-30 11:01 UTC (permalink / raw)
  To: David Hildenbrand, frankja; +Cc: cohuck, kvm, linux-s390, thuth, stable



On 30.01.20 10:49, David Hildenbrand wrote:
> On 30.01.20 09:55, Christian Borntraeger wrote:
>> The initial CPU reset currently clobbers the userspace fpc. This was an
>> oversight during a fixup for the lazy fpu reloading rework.  The reset
>> calls are only done from userspace ioctls. No CPU context is loaded, so
>> we can (and must) act directly on the sync regs, not on the thread
>> context. Otherwise the fpu restore call will restore the zeroes fpc to
>> userspace.
>>
>> Cc: stable@kernel.org
>> Fixes: 9abc2a08a7d6 ("KVM: s390: fix memory overwrites when vx is disabled")
>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>> ---
>>  arch/s390/kvm/kvm-s390.c | 3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>> index c059b86..eb789cd 100644
>> --- a/arch/s390/kvm/kvm-s390.c
>> +++ b/arch/s390/kvm/kvm-s390.c
>> @@ -2824,8 +2824,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
>>  	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
>>  					CR14_UNUSED_33 |
>>  					CR14_EXTERNAL_DAMAGE_SUBMASK;
>> -	/* make sure the new fpc will be lazily loaded */
>> -	save_fpu_regs();
>> +	vcpu->run->s.regs.fpc = 0;
>>  	current->thread.fpu.fpc = 0;
>>  	vcpu->arch.sie_block->gbea = 1;
>>  	vcpu->arch.sie_block->pp = 0;
>>
> 
> kvm_arch_vcpu_ioctl() does a vcpu_load(vcpu), followed by the call to
> kvm_arch_vcpu_ioctl_initial_reset(), followed by a vcpu_put().
> 
> What am I missing?

vcpu_load/put does no longer reload the registers lazily. We moved that out into the
vcpu_run ioctl itself. (this avoids register reloading during schedule).


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset
  2020-01-30 10:56         ` Thomas Huth
@ 2020-01-30 11:07           ` Christian Borntraeger
  0 siblings, 0 replies; 36+ messages in thread
From: Christian Borntraeger @ 2020-01-30 11:07 UTC (permalink / raw)
  To: Thomas Huth, Cornelia Huck, David Hildenbrand
  Cc: frankja, kvm, linux-s390, stable



On 30.01.20 11:56, Thomas Huth wrote:
> On 30/01/2020 11.39, Cornelia Huck wrote:
>> On Thu, 30 Jan 2020 10:49:35 +0100
>> David Hildenbrand <david@redhat.com> wrote:
>>
>>> On 30.01.20 09:55, Christian Borntraeger wrote:
>>>> The initial CPU reset currently clobbers the userspace fpc. This was an
>>>> oversight during a fixup for the lazy fpu reloading rework.  The reset
>>>> calls are only done from userspace ioctls. No CPU context is loaded, so
>>>> we can (and must) act directly on the sync regs, not on the thread
>>>> context. Otherwise the fpu restore call will restore the zeroes fpc to
>>>> userspace.
>>>>
>>>> Cc: stable@kernel.org
>>>> Fixes: 9abc2a08a7d6 ("KVM: s390: fix memory overwrites when vx is disabled")
>>>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>>>> ---
>>>>  arch/s390/kvm/kvm-s390.c | 3 +--
>>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>>
>>>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>>>> index c059b86..eb789cd 100644
>>>> --- a/arch/s390/kvm/kvm-s390.c
>>>> +++ b/arch/s390/kvm/kvm-s390.c
>>>> @@ -2824,8 +2824,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
>>>>  	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
>>>>  					CR14_UNUSED_33 |
>>>>  					CR14_EXTERNAL_DAMAGE_SUBMASK;
>>>> -	/* make sure the new fpc will be lazily loaded */
>>>> -	save_fpu_regs();
>>>> +	vcpu->run->s.regs.fpc = 0;
>>>>  	current->thread.fpu.fpc = 0;
>>>>  	vcpu->arch.sie_block->gbea = 1;
>>>>  	vcpu->arch.sie_block->pp = 0;
>>>>   
>>>
>>> kvm_arch_vcpu_ioctl() does a vcpu_load(vcpu), followed by the call to
>>> kvm_arch_vcpu_ioctl_initial_reset(), followed by a vcpu_put().
>>>
>>> What am I missing?
>>
>> I have been staring at this patch for some time now, and I fear I'm
>> missing something as well. Can we please get more explanation?
> 
> Could we please get a test for this issue in the kvm selftests, too?
> I.e. host sets a value in its FPC, then calls the INITIAL_RESET ioctl
> and then checks that the value in its FPC is still there?

Yes, that will come as a later addon patch. (But I am still going to apply
this series soon and do not wait for that. I have a private hack in qemu that
does this checking, but this test code is too ugly to see the world).


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 4/4] selftests: KVM: testing the local IRQs resets
  2020-01-29 20:03 ` [PATCH v8 4/4] selftests: KVM: testing the local IRQs resets Janosch Frank
  2020-01-30 10:55   ` Cornelia Huck
@ 2020-01-30 11:10   ` Thomas Huth
  2020-01-30 11:33     ` Janosch Frank
  1 sibling, 1 reply; 36+ messages in thread
From: Thomas Huth @ 2020-01-30 11:10 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: borntraeger, david, cohuck, linux-s390

On 29/01/2020 21.03, Janosch Frank wrote:
> From: Pierre Morel <pmorel@linux.ibm.com>
> 
> Local IRQs are reset by a normal cpu reset.  The initial cpu reset and
> the clear cpu reset, as superset of the normal reset, both clear the
> IRQs too.
> 
> Let's inject an interrupt to a vCPU before calling a reset and see if
> it is gone after the reset.
> 
> We choose to inject only an emergency interrupt at this point and can
> extend the test to other types of IRQs later.
> 
> Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
> ---
>  tools/testing/selftests/kvm/s390x/resets.c | 57 ++++++++++++++++++++++
>  1 file changed, 57 insertions(+)
> 
> diff --git a/tools/testing/selftests/kvm/s390x/resets.c b/tools/testing/selftests/kvm/s390x/resets.c
> index 2b2378cc9e80..299c1686f98c 100644
> --- a/tools/testing/selftests/kvm/s390x/resets.c
> +++ b/tools/testing/selftests/kvm/s390x/resets.c
> @@ -14,6 +14,9 @@
>  #include "kvm_util.h"
>  
>  #define VCPU_ID 3
> +#define LOCAL_IRQS 32
> +
> +struct kvm_s390_irq buf[VCPU_ID + LOCAL_IRQS];
>  
>  struct kvm_vm *vm;
>  struct kvm_run *run;
> @@ -52,6 +55,29 @@ static void test_one_reg(uint64_t id, uint64_t value)
>  	TEST_ASSERT(eval_reg == value, "value == %s", value);
>  }
>  
> +static void assert_noirq(void)
> +{
> +	struct kvm_s390_irq_state irq_state;
> +	int irqs;
> +
> +	if (!(kvm_check_cap(KVM_CAP_S390_INJECT_IRQ) &&
> +	    kvm_check_cap(KVM_CAP_S390_IRQ_STATE)))
> +		return;
> +
> +	irq_state.len = sizeof(buf);
> +	irq_state.buf = (unsigned long)buf;
> +	irqs = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_GET_IRQ_STATE, &irq_state);
> +	/*
> +	 * irqs contains the number of retrieved interrupts, apart from the
> +	 * emergency call that should be cleared by the resets, there should be
> +	 * none.
> +	 */
> +	if (irqs < 0)
> +		printf("Error by getting IRQ: errno %d\n", errno);
> +
> +	TEST_ASSERT(!irqs, "IRQ pending");
> +}
> +
>  static void assert_clear(void)
>  {
>  	struct kvm_sregs sregs;
> @@ -93,6 +119,31 @@ static void assert_initial(void)
>  static void assert_normal(void)
>  {
>  	test_one_reg(KVM_REG_S390_PFTOKEN, KVM_S390_PFAULT_TOKEN_INVALID);
> +	assert_noirq();
> +}
> +
> +static int inject_irq(int cpu_id)
> +{
> +	struct kvm_s390_irq_state irq_state;
> +	struct kvm_s390_irq *irq = &buf[0];
> +	int irqs;
> +
> +	if (!(kvm_check_cap(KVM_CAP_S390_INJECT_IRQ) &&
> +	    kvm_check_cap(KVM_CAP_S390_IRQ_STATE)))
> +		return 0;
> +
> +	/* Inject IRQ */
> +	irq_state.len = sizeof(struct kvm_s390_irq);
> +	irq_state.buf = (unsigned long)buf;
> +	irq->type = KVM_S390_INT_EMERGENCY;
> +	irq->u.emerg.code = cpu_id;
> +	irqs = _vcpu_ioctl(vm, cpu_id, KVM_S390_SET_IRQ_STATE, &irq_state);
> +	if (irqs < 0) {
> +		printf("Error by injecting INT_EMERGENCY: errno %d\n", errno);
> +		return errno;
> +	}

Can you turn this into a TEST_ASSERT() instead? Otherwise the printf()
error might go unnoticed.

Apart from that (and the nits that Cornelia already mentioned), the
patch looks fine to me.

 Thomas


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset
  2020-01-30 11:01       ` Christian Borntraeger
@ 2020-01-30 11:14         ` Christian Borntraeger
  2020-01-30 11:20           ` David Hildenbrand
  0 siblings, 1 reply; 36+ messages in thread
From: Christian Borntraeger @ 2020-01-30 11:14 UTC (permalink / raw)
  To: David Hildenbrand, frankja; +Cc: cohuck, kvm, linux-s390, thuth, stable



On 30.01.20 12:01, Christian Borntraeger wrote:
> 
> 
> On 30.01.20 10:49, David Hildenbrand wrote:
>> On 30.01.20 09:55, Christian Borntraeger wrote:
>>> The initial CPU reset currently clobbers the userspace fpc. This was an
>>> oversight during a fixup for the lazy fpu reloading rework.  The reset
>>> calls are only done from userspace ioctls. No CPU context is loaded, so
>>> we can (and must) act directly on the sync regs, not on the thread
>>> context. Otherwise the fpu restore call will restore the zeroes fpc to
>>> userspace.
>>>
>>> Cc: stable@kernel.org
>>> Fixes: 9abc2a08a7d6 ("KVM: s390: fix memory overwrites when vx is disabled")
>>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>>> ---
>>>  arch/s390/kvm/kvm-s390.c | 3 +--
>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>
>>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>>> index c059b86..eb789cd 100644
>>> --- a/arch/s390/kvm/kvm-s390.c
>>> +++ b/arch/s390/kvm/kvm-s390.c
>>> @@ -2824,8 +2824,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
>>>  	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
>>>  					CR14_UNUSED_33 |
>>>  					CR14_EXTERNAL_DAMAGE_SUBMASK;
>>> -	/* make sure the new fpc will be lazily loaded */
>>> -	save_fpu_regs();
>>> +	vcpu->run->s.regs.fpc = 0;
>>>  	current->thread.fpu.fpc = 0;
>>>  	vcpu->arch.sie_block->gbea = 1;
>>>  	vcpu->arch.sie_block->pp = 0;
>>>
>>
>> kvm_arch_vcpu_ioctl() does a vcpu_load(vcpu), followed by the call to
>> kvm_arch_vcpu_ioctl_initial_reset(), followed by a vcpu_put().
>>
>> What am I missing?
> 
> vcpu_load/put does no longer reload the registers lazily. We moved that out into the
> vcpu_run ioctl itself. (this avoids register reloading during schedule).

see
e1788bb KVM: s390: handle floating point registers in the run ioctl not in vcpu_put/load
31d8b8d KVM: s390: handle access registers in the run ioctl not in vcpu_put/load

so maybe we want to change the Fixes tag to this patch.


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 4/4] selftests: KVM: testing the local IRQs resets
  2020-01-30 10:55   ` Cornelia Huck
@ 2020-01-30 11:18     ` Janosch Frank
  2020-01-30 11:28       ` Cornelia Huck
  0 siblings, 1 reply; 36+ messages in thread
From: Janosch Frank @ 2020-01-30 11:18 UTC (permalink / raw)
  To: Cornelia Huck; +Cc: kvm, thuth, borntraeger, david, linux-s390


[-- Attachment #1.1: Type: text/plain, Size: 4488 bytes --]

On 1/30/20 11:55 AM, Cornelia Huck wrote:
> On Wed, 29 Jan 2020 15:03:12 -0500
> Janosch Frank <frankja@linux.ibm.com> wrote:
> 
>> From: Pierre Morel <pmorel@linux.ibm.com>
>>
>> Local IRQs are reset by a normal cpu reset.  The initial cpu reset and
>> the clear cpu reset, as superset of the normal reset, both clear the
>> IRQs too.
>>
>> Let's inject an interrupt to a vCPU before calling a reset and see if
>> it is gone after the reset.
>>
>> We choose to inject only an emergency interrupt at this point and can
>> extend the test to other types of IRQs later.
>>
>> Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
> 
> You probably should add your s-o-b here as well.
> 
>> ---
>>  tools/testing/selftests/kvm/s390x/resets.c | 57 ++++++++++++++++++++++
>>  1 file changed, 57 insertions(+)
>>
>> diff --git a/tools/testing/selftests/kvm/s390x/resets.c b/tools/testing/selftests/kvm/s390x/resets.c
>> index 2b2378cc9e80..299c1686f98c 100644
>> --- a/tools/testing/selftests/kvm/s390x/resets.c
>> +++ b/tools/testing/selftests/kvm/s390x/resets.c
>> @@ -14,6 +14,9 @@
>>  #include "kvm_util.h"
>>  
>>  #define VCPU_ID 3
>> +#define LOCAL_IRQS 32
> 
> Why 32?
> 
>> +
>> +struct kvm_s390_irq buf[VCPU_ID + LOCAL_IRQS];
>>  
>>  struct kvm_vm *vm;
>>  struct kvm_run *run;
>> @@ -52,6 +55,29 @@ static void test_one_reg(uint64_t id, uint64_t value)
>>  	TEST_ASSERT(eval_reg == value, "value == %s", value);
>>  }
>>  
>> +static void assert_noirq(void)
>> +{
>> +	struct kvm_s390_irq_state irq_state;
>> +	int irqs;
>> +
>> +	if (!(kvm_check_cap(KVM_CAP_S390_INJECT_IRQ) &&
>> +	    kvm_check_cap(KVM_CAP_S390_IRQ_STATE)))
>> +		return;
> 
> Might want to do a
> 
> irq_introspection_supported = (check stuff);
> 
> once for this test? Works fine as is, of course.
> 
>> +
>> +	irq_state.len = sizeof(buf);
>> +	irq_state.buf = (unsigned long)buf;
>> +	irqs = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_GET_IRQ_STATE, &irq_state);
>> +	/*
>> +	 * irqs contains the number of retrieved interrupts, apart from the
>> +	 * emergency call that should be cleared by the resets, there should be
>> +	 * none.
> 
> Even if there were any, they should have been cleared by the reset,
> right?

Yes, that's what "there should be none" should actually express.
I added the comment before sending out.

> 
>> +	 */
>> +	if (irqs < 0)
>> +		printf("Error by getting IRQ: errno %d\n", errno);
> 
> "Error getting pending IRQs" ?

"Could not fetch IRQs: errno %d\n" ?

> 
>> +
>> +	TEST_ASSERT(!irqs, "IRQ pending");
>> +}
>> +
>>  static void assert_clear(void)
>>  {
>>  	struct kvm_sregs sregs;
>> @@ -93,6 +119,31 @@ static void assert_initial(void)
>>  static void assert_normal(void)
>>  {
>>  	test_one_reg(KVM_REG_S390_PFTOKEN, KVM_S390_PFAULT_TOKEN_INVALID);
>> +	assert_noirq();
>> +}
>> +
>> +static int inject_irq(int cpu_id)
> 
> You never seem to check the return code.
> 
>> +{
>> +	struct kvm_s390_irq_state irq_state;
>> +	struct kvm_s390_irq *irq = &buf[0];
>> +	int irqs;
>> +
>> +	if (!(kvm_check_cap(KVM_CAP_S390_INJECT_IRQ) &&
>> +	    kvm_check_cap(KVM_CAP_S390_IRQ_STATE)))
>> +		return 0;
>> +
>> +	/* Inject IRQ */
>> +	irq_state.len = sizeof(struct kvm_s390_irq);
>> +	irq_state.buf = (unsigned long)buf;
>> +	irq->type = KVM_S390_INT_EMERGENCY;
>> +	irq->u.emerg.code = cpu_id;
>> +	irqs = _vcpu_ioctl(vm, cpu_id, KVM_S390_SET_IRQ_STATE, &irq_state);
>> +	if (irqs < 0) {
>> +		printf("Error by injecting INT_EMERGENCY: errno %d\n", errno);
> 
> "Error injecting EMERGENCY IRQ" ?

Sounds good

> 
>> +		return errno;
>> +	}
>> +
>> +	return 0;
>>  }
>>  
>>  static void test_normal(void)
>> @@ -105,6 +156,8 @@ static void test_normal(void)
>>  
>>  	_vcpu_run(vm, VCPU_ID);
>>  
>> +	inject_irq(VCPU_ID);
>> +
>>  	vcpu_ioctl(vm, VCPU_ID, KVM_S390_NORMAL_RESET, 0);
>>  	assert_normal();
>>  	kvm_vm_free(vm);
>> @@ -122,6 +175,8 @@ static int test_initial(void)
>>  
>>  	rv = _vcpu_run(vm, VCPU_ID);
>>  
>> +	inject_irq(VCPU_ID);
>> +
>>  	vcpu_ioctl(vm, VCPU_ID, KVM_S390_INITIAL_RESET, 0);
>>  	assert_normal();
>>  	assert_initial();
>> @@ -141,6 +196,8 @@ static int test_clear(void)
>>  
>>  	rv = _vcpu_run(vm, VCPU_ID);
>>  
>> +	inject_irq(VCPU_ID);
>> +
>>  	vcpu_ioctl(vm, VCPU_ID, KVM_S390_CLEAR_RESET, 0);
>>  	assert_normal();
>>  	assert_initial();
> 
> On the whole, looks good to me.
> 



[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset
  2020-01-30 11:14         ` Christian Borntraeger
@ 2020-01-30 11:20           ` David Hildenbrand
  2020-01-30 11:27             ` Christian Borntraeger
  0 siblings, 1 reply; 36+ messages in thread
From: David Hildenbrand @ 2020-01-30 11:20 UTC (permalink / raw)
  To: Christian Borntraeger, frankja; +Cc: cohuck, kvm, linux-s390, thuth, stable

On 30.01.20 12:14, Christian Borntraeger wrote:
> 
> 
> On 30.01.20 12:01, Christian Borntraeger wrote:
>>
>>
>> On 30.01.20 10:49, David Hildenbrand wrote:
>>> On 30.01.20 09:55, Christian Borntraeger wrote:
>>>> The initial CPU reset currently clobbers the userspace fpc. This was an
>>>> oversight during a fixup for the lazy fpu reloading rework.  The reset
>>>> calls are only done from userspace ioctls. No CPU context is loaded, so
>>>> we can (and must) act directly on the sync regs, not on the thread
>>>> context. Otherwise the fpu restore call will restore the zeroes fpc to
>>>> userspace.
>>>>
>>>> Cc: stable@kernel.org
>>>> Fixes: 9abc2a08a7d6 ("KVM: s390: fix memory overwrites when vx is disabled")
>>>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>>>> ---
>>>>  arch/s390/kvm/kvm-s390.c | 3 +--
>>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>>
>>>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>>>> index c059b86..eb789cd 100644
>>>> --- a/arch/s390/kvm/kvm-s390.c
>>>> +++ b/arch/s390/kvm/kvm-s390.c
>>>> @@ -2824,8 +2824,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
>>>>  	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
>>>>  					CR14_UNUSED_33 |
>>>>  					CR14_EXTERNAL_DAMAGE_SUBMASK;
>>>> -	/* make sure the new fpc will be lazily loaded */
>>>> -	save_fpu_regs();
>>>> +	vcpu->run->s.regs.fpc = 0;
>>>>  	current->thread.fpu.fpc = 0;
>>>>  	vcpu->arch.sie_block->gbea = 1;
>>>>  	vcpu->arch.sie_block->pp = 0;
>>>>
>>>
>>> kvm_arch_vcpu_ioctl() does a vcpu_load(vcpu), followed by the call to
>>> kvm_arch_vcpu_ioctl_initial_reset(), followed by a vcpu_put().
>>>
>>> What am I missing?
>>
>> vcpu_load/put does no longer reload the registers lazily. We moved that out into the
>> vcpu_run ioctl itself. (this avoids register reloading during schedule).
> 
> see
> e1788bb KVM: s390: handle floating point registers in the run ioctl not in vcpu_put/load
> 31d8b8d KVM: s390: handle access registers in the run ioctl not in vcpu_put/load
> 
> so maybe we want to change the Fixes tag to this patch.
> 

Yes, because

e1788bb KVM: s390: handle floating point registers in the run ioctl not
in vcpu_put/load

broke it.

We should audit all users of save_fpu_regs().


-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset
  2020-01-30 11:20           ` David Hildenbrand
@ 2020-01-30 11:27             ` Christian Borntraeger
  2020-01-30 11:42               ` [PATCH v2] KVM: s390: do not clobber user space registers during guest reset/store status Christian Borntraeger
  0 siblings, 1 reply; 36+ messages in thread
From: Christian Borntraeger @ 2020-01-30 11:27 UTC (permalink / raw)
  To: David Hildenbrand, frankja; +Cc: cohuck, kvm, linux-s390, thuth, stable



On 30.01.20 12:20, David Hildenbrand wrote:
> On 30.01.20 12:14, Christian Borntraeger wrote:
>>
>>
>> On 30.01.20 12:01, Christian Borntraeger wrote:
>>>
>>>
>>> On 30.01.20 10:49, David Hildenbrand wrote:
>>>> On 30.01.20 09:55, Christian Borntraeger wrote:
>>>>> The initial CPU reset currently clobbers the userspace fpc. This was an
>>>>> oversight during a fixup for the lazy fpu reloading rework.  The reset
>>>>> calls are only done from userspace ioctls. No CPU context is loaded, so
>>>>> we can (and must) act directly on the sync regs, not on the thread
>>>>> context. Otherwise the fpu restore call will restore the zeroes fpc to
>>>>> userspace.
>>>>>
>>>>> Cc: stable@kernel.org
>>>>> Fixes: 9abc2a08a7d6 ("KVM: s390: fix memory overwrites when vx is disabled")
>>>>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>>>>> ---
>>>>>  arch/s390/kvm/kvm-s390.c | 3 +--
>>>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>>>>> index c059b86..eb789cd 100644
>>>>> --- a/arch/s390/kvm/kvm-s390.c
>>>>> +++ b/arch/s390/kvm/kvm-s390.c
>>>>> @@ -2824,8 +2824,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
>>>>>  	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
>>>>>  					CR14_UNUSED_33 |
>>>>>  					CR14_EXTERNAL_DAMAGE_SUBMASK;
>>>>> -	/* make sure the new fpc will be lazily loaded */
>>>>> -	save_fpu_regs();
>>>>> +	vcpu->run->s.regs.fpc = 0;
>>>>>  	current->thread.fpu.fpc = 0;
>>>>>  	vcpu->arch.sie_block->gbea = 1;
>>>>>  	vcpu->arch.sie_block->pp = 0;
>>>>>
>>>>
>>>> kvm_arch_vcpu_ioctl() does a vcpu_load(vcpu), followed by the call to
>>>> kvm_arch_vcpu_ioctl_initial_reset(), followed by a vcpu_put().
>>>>
>>>> What am I missing?
>>>
>>> vcpu_load/put does no longer reload the registers lazily. We moved that out into the
>>> vcpu_run ioctl itself. (this avoids register reloading during schedule).
>>
>> see
>> e1788bb KVM: s390: handle floating point registers in the run ioctl not in vcpu_put/load
>> 31d8b8d KVM: s390: handle access registers in the run ioctl not in vcpu_put/load
>>
>> so maybe we want to change the Fixes tag to this patch.
>>
> 
> Yes, because
> 
> e1788bb KVM: s390: handle floating point registers in the run ioctl not
> in vcpu_put/load
> 
> broke it.
> 
> We should audit all users of save_fpu_regs().

I think the store status ioctl is also broken. Everything else looks sane.


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 4/4] selftests: KVM: testing the local IRQs resets
  2020-01-30 11:18     ` Janosch Frank
@ 2020-01-30 11:28       ` Cornelia Huck
  2020-01-30 11:34         ` Janosch Frank
  0 siblings, 1 reply; 36+ messages in thread
From: Cornelia Huck @ 2020-01-30 11:28 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, thuth, borntraeger, david, linux-s390

[-- Attachment #1: Type: text/plain, Size: 1281 bytes --]

On Thu, 30 Jan 2020 12:18:31 +0100
Janosch Frank <frankja@linux.ibm.com> wrote:

> On 1/30/20 11:55 AM, Cornelia Huck wrote:
> > On Wed, 29 Jan 2020 15:03:12 -0500
> > Janosch Frank <frankja@linux.ibm.com> wrote:

> >> +	irq_state.len = sizeof(buf);
> >> +	irq_state.buf = (unsigned long)buf;
> >> +	irqs = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_GET_IRQ_STATE, &irq_state);
> >> +	/*
> >> +	 * irqs contains the number of retrieved interrupts, apart from the
> >> +	 * emergency call that should be cleared by the resets, there should be
> >> +	 * none.  
> > 
> > Even if there were any, they should have been cleared by the reset,
> > right?  
> 
> Yes, that's what "there should be none" should actually express.
> I added the comment before sending out.

So what about

/*
 * irqs contains the number of retrieved interrupts. Any interrupt
 * (notably, the emergency call interrupt we have injected) should
 * be cleared by the resets, so this should be 0.
 */

?

> 
> >   
> >> +	 */
> >> +	if (irqs < 0)
> >> +		printf("Error by getting IRQ: errno %d\n", errno);  
> > 
> > "Error getting pending IRQs" ?  
> 
> "Could not fetch IRQs: errno %d\n" ?

Sounds good.

> 
> >   
> >> +
> >> +	TEST_ASSERT(!irqs, "IRQ pending");
> >> +}

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 3/4] selftests: KVM: s390x: Add reset tests
  2020-01-30 10:51   ` Thomas Huth
@ 2020-01-30 11:32     ` Janosch Frank
  2020-01-30 11:36       ` Thomas Huth
  0 siblings, 1 reply; 36+ messages in thread
From: Janosch Frank @ 2020-01-30 11:32 UTC (permalink / raw)
  To: Thomas Huth, kvm; +Cc: borntraeger, david, cohuck, linux-s390


[-- Attachment #1.1: Type: text/plain, Size: 6457 bytes --]

On 1/30/20 11:51 AM, Thomas Huth wrote:
> On 29/01/2020 21.03, Janosch Frank wrote:
>> Test if the registers end up having the correct values after a normal,
>> initial and clear reset.
>>
>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>> ---
>>  tools/testing/selftests/kvm/Makefile       |   1 +
>>  tools/testing/selftests/kvm/s390x/resets.c | 165 +++++++++++++++++++++
>>  2 files changed, 166 insertions(+)
>>  create mode 100644 tools/testing/selftests/kvm/s390x/resets.c
>>
>> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
>> index 3138a916574a..fe1ea294730c 100644
>> --- a/tools/testing/selftests/kvm/Makefile
>> +++ b/tools/testing/selftests/kvm/Makefile
>> @@ -36,6 +36,7 @@ TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
>>  
>>  TEST_GEN_PROGS_s390x = s390x/memop
>>  TEST_GEN_PROGS_s390x += s390x/sync_regs_test
>> +TEST_GEN_PROGS_s390x += s390x/resets
>>  TEST_GEN_PROGS_s390x += dirty_log_test
>>  TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
>>  
>> diff --git a/tools/testing/selftests/kvm/s390x/resets.c b/tools/testing/selftests/kvm/s390x/resets.c
>> new file mode 100644
>> index 000000000000..2b2378cc9e80
>> --- /dev/null
>> +++ b/tools/testing/selftests/kvm/s390x/resets.c
>> @@ -0,0 +1,165 @@
>> +// SPDX-License-Identifier: GPL-2.0-or-later
>> +/*
>> + * Test for s390x CPU resets
>> + *
>> + * Copyright (C) 2020, IBM
>> + */
>> +
>> +#include <stdio.h>
>> +#include <stdlib.h>
>> +#include <string.h>
>> +#include <sys/ioctl.h>
>> +
>> +#include "test_util.h"
>> +#include "kvm_util.h"
>> +
>> +#define VCPU_ID 3
>> +
>> +struct kvm_vm *vm;
>> +struct kvm_run *run;
>> +struct kvm_sync_regs *regs;
>> +static uint64_t regs_null[16];
>> +
>> +static uint64_t crs[16] = { 0x40000ULL,
>> +			    0x42000ULL,
>> +			    0, 0, 0, 0, 0,
>> +			    0x43000ULL,
>> +			    0, 0, 0, 0, 0,
>> +			    0x44000ULL,
>> +			    0, 0
>> +};
>> +
>> +static void guest_code_initial(void)
>> +{
>> +	/* Round toward 0 */
>> +	uint32_t fpc = 0x11;
>> +
>> +	/* Dirty registers */
>> +	asm volatile (
>> +		"	lctlg	0,15,%0\n"
>> +		"	sfpc	%1\n"
>> +		: : "Q" (crs), "d" (fpc));
> 
> I'd recommend to add a GUEST_SYNC(0) here ... otherwise the guest code
> tries to return from this function and will cause a crash - which will
> also finish execution of the guest, but might have unexpected side effects.

Ok

> 
>> +}
>> +
>> +static void test_one_reg(uint64_t id, uint64_t value)
>> +{
>> +	struct kvm_one_reg reg;
>> +	uint64_t eval_reg;
>> +
>> +	reg.addr = (uintptr_t)&eval_reg;
>> +	reg.id = id;
>> +	vcpu_get_reg(vm, VCPU_ID, &reg);
>> +	TEST_ASSERT(eval_reg == value, "value == %s", value);
>> +}
>> +
>> +static void assert_clear(void)
>> +{
>> +	struct kvm_sregs sregs;
>> +	struct kvm_regs regs;
>> +	struct kvm_fpu fpu;
>> +
>> +	vcpu_regs_get(vm, VCPU_ID, &regs);
>> +	TEST_ASSERT(!memcmp(&regs.gprs, regs_null, sizeof(regs.gprs)), "grs == 0");
>> +
>> +	vcpu_sregs_get(vm, VCPU_ID, &sregs);
>> +	TEST_ASSERT(!memcmp(&sregs.acrs, regs_null, sizeof(sregs.acrs)), "acrs == 0");
>> +
>> +	vcpu_fpu_get(vm, VCPU_ID, &fpu);
>> +	TEST_ASSERT(!memcmp(&fpu.fprs, regs_null, sizeof(fpu.fprs)), "fprs == 0");
>> +}
>> +
>> +static void assert_initial(void)
>> +{
>> +	struct kvm_sregs sregs;
>> +	struct kvm_fpu fpu;
>> +
>> +	vcpu_sregs_get(vm, VCPU_ID, &sregs);
>> +	TEST_ASSERT(sregs.crs[0] == 0xE0UL, "cr0 == 0xE0");
>> +	TEST_ASSERT(sregs.crs[14] == 0xC2000000UL, "cr14 == 0xC2000000");
>> +	TEST_ASSERT(!memcmp(&sregs.crs[1], regs_null, sizeof(sregs.crs[1]) * 12),
>> +		    "cr1-13 == 0");
>> +	TEST_ASSERT(sregs.crs[15] == 0, "cr15 == 0");
>> +
>> +	vcpu_fpu_get(vm, VCPU_ID, &fpu);
>> +	TEST_ASSERT(!fpu.fpc, "fpc == 0");
>> +
>> +	test_one_reg(KVM_REG_S390_GBEA, 1);
>> +	test_one_reg(KVM_REG_S390_PP, 0);
>> +	test_one_reg(KVM_REG_S390_TODPR, 0);
>> +	test_one_reg(KVM_REG_S390_CPU_TIMER, 0);
>> +	test_one_reg(KVM_REG_S390_CLOCK_COMP, 0);
>> +}
>> +
>> +static void assert_normal(void)
>> +{
>> +	test_one_reg(KVM_REG_S390_PFTOKEN, KVM_S390_PFAULT_TOKEN_INVALID);
>> +}
>> +
>> +static void test_normal(void)
>> +{
>> +	printf("Testing notmal reset\n");
>> +	/* Create VM */
>> +	vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
>> +	run = vcpu_state(vm, VCPU_ID);
>> +	regs = &run->s.regs;
>> +
>> +	_vcpu_run(vm, VCPU_ID);
> 
> Could you use vcpu_run() instead of _vcpu_run() ?

Done.

> 
>> +	vcpu_ioctl(vm, VCPU_ID, KVM_S390_NORMAL_RESET, 0);
>> +	assert_normal();
>> +	kvm_vm_free(vm);
>> +}
>> +
>> +static int test_initial(void)
>> +{
>> +	int rv;
>> +
>> +	printf("Testing initial reset\n");
>> +	/* Create VM */
>> +	vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
>> +	run = vcpu_state(vm, VCPU_ID);
>> +	regs = &run->s.regs;
>> +
>> +	rv = _vcpu_run(vm, VCPU_ID);
> 
> Extra bonus points if you check here that the registers contain the
> values that have been set by the guest ;-)

I started working on that yesterday

> 
>> +	vcpu_ioctl(vm, VCPU_ID, KVM_S390_INITIAL_RESET, 0);
>> +	assert_normal();
>> +	assert_initial();
>> +	kvm_vm_free(vm);
>> +	return rv;
>> +}
>> +
>> +static int test_clear(void)
>> +{
>> +	int rv;
>> +
>> +	printf("Testing clear reset\n");
>> +	/* Create VM */
>> +	vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
>> +	run = vcpu_state(vm, VCPU_ID);
>> +	regs = &run->s.regs;
>> +
>> +	rv = _vcpu_run(vm, VCPU_ID);
>> +
>> +	vcpu_ioctl(vm, VCPU_ID, KVM_S390_CLEAR_RESET, 0);
>> +	assert_normal();
>> +	assert_initial();
>> +	assert_clear();
>> +	kvm_vm_free(vm);
>> +	return rv;
>> +}
>> +
>> +int main(int argc, char *argv[])
>> +{
>> +	int addl_resets;
>> +
>> +	setbuf(stdout, NULL);	/* Tell stdout not to buffer its content */
>> +	addl_resets = kvm_check_cap(KVM_CAP_S390_VCPU_RESETS);
>> +
>> +	test_initial();
>> +	if (addl_resets) {
> 
> I think you could still fit this into one line, without the need to
> declare the addl_resets variable:

The other question is if we still need to check that if the test is
bundled with the kernel anyway?

> 
> 	if (kvm_check_cap(KVM_CAP_S390_VCPU_RESETS)) {
> 
>> +		test_normal();
>> +		test_clear();
>> +	}
>> +	return 0;
>> +}
> 
> Apart from the nits, this looks pretty good already, thanks for putting
> it together!
> 
>  Thomas
> 



[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 4/4] selftests: KVM: testing the local IRQs resets
  2020-01-30 11:10   ` Thomas Huth
@ 2020-01-30 11:33     ` Janosch Frank
  0 siblings, 0 replies; 36+ messages in thread
From: Janosch Frank @ 2020-01-30 11:33 UTC (permalink / raw)
  To: Thomas Huth, kvm; +Cc: borntraeger, david, cohuck, linux-s390


[-- Attachment #1.1: Type: text/plain, Size: 3331 bytes --]

On 1/30/20 12:10 PM, Thomas Huth wrote:
> On 29/01/2020 21.03, Janosch Frank wrote:
>> From: Pierre Morel <pmorel@linux.ibm.com>
>>
>> Local IRQs are reset by a normal cpu reset.  The initial cpu reset and
>> the clear cpu reset, as superset of the normal reset, both clear the
>> IRQs too.
>>
>> Let's inject an interrupt to a vCPU before calling a reset and see if
>> it is gone after the reset.
>>
>> We choose to inject only an emergency interrupt at this point and can
>> extend the test to other types of IRQs later.
>>
>> Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
>> ---
>>  tools/testing/selftests/kvm/s390x/resets.c | 57 ++++++++++++++++++++++
>>  1 file changed, 57 insertions(+)
>>
>> diff --git a/tools/testing/selftests/kvm/s390x/resets.c b/tools/testing/selftests/kvm/s390x/resets.c
>> index 2b2378cc9e80..299c1686f98c 100644
>> --- a/tools/testing/selftests/kvm/s390x/resets.c
>> +++ b/tools/testing/selftests/kvm/s390x/resets.c
>> @@ -14,6 +14,9 @@
>>  #include "kvm_util.h"
>>  
>>  #define VCPU_ID 3
>> +#define LOCAL_IRQS 32
>> +
>> +struct kvm_s390_irq buf[VCPU_ID + LOCAL_IRQS];
>>  
>>  struct kvm_vm *vm;
>>  struct kvm_run *run;
>> @@ -52,6 +55,29 @@ static void test_one_reg(uint64_t id, uint64_t value)
>>  	TEST_ASSERT(eval_reg == value, "value == %s", value);
>>  }
>>  
>> +static void assert_noirq(void)
>> +{
>> +	struct kvm_s390_irq_state irq_state;
>> +	int irqs;
>> +
>> +	if (!(kvm_check_cap(KVM_CAP_S390_INJECT_IRQ) &&
>> +	    kvm_check_cap(KVM_CAP_S390_IRQ_STATE)))
>> +		return;
>> +
>> +	irq_state.len = sizeof(buf);
>> +	irq_state.buf = (unsigned long)buf;
>> +	irqs = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_GET_IRQ_STATE, &irq_state);
>> +	/*
>> +	 * irqs contains the number of retrieved interrupts, apart from the
>> +	 * emergency call that should be cleared by the resets, there should be
>> +	 * none.
>> +	 */
>> +	if (irqs < 0)
>> +		printf("Error by getting IRQ: errno %d\n", errno);
>> +
>> +	TEST_ASSERT(!irqs, "IRQ pending");
>> +}
>> +
>>  static void assert_clear(void)
>>  {
>>  	struct kvm_sregs sregs;
>> @@ -93,6 +119,31 @@ static void assert_initial(void)
>>  static void assert_normal(void)
>>  {
>>  	test_one_reg(KVM_REG_S390_PFTOKEN, KVM_S390_PFAULT_TOKEN_INVALID);
>> +	assert_noirq();
>> +}
>> +
>> +static int inject_irq(int cpu_id)
>> +{
>> +	struct kvm_s390_irq_state irq_state;
>> +	struct kvm_s390_irq *irq = &buf[0];
>> +	int irqs;
>> +
>> +	if (!(kvm_check_cap(KVM_CAP_S390_INJECT_IRQ) &&
>> +	    kvm_check_cap(KVM_CAP_S390_IRQ_STATE)))
>> +		return 0;
>> +
>> +	/* Inject IRQ */
>> +	irq_state.len = sizeof(struct kvm_s390_irq);
>> +	irq_state.buf = (unsigned long)buf;
>> +	irq->type = KVM_S390_INT_EMERGENCY;
>> +	irq->u.emerg.code = cpu_id;
>> +	irqs = _vcpu_ioctl(vm, cpu_id, KVM_S390_SET_IRQ_STATE, &irq_state);
>> +	if (irqs < 0) {
>> +		printf("Error by injecting INT_EMERGENCY: errno %d\n", errno);
>> +		return errno;
>> +	}
> 
> Can you turn this into a TEST_ASSERT() instead? Otherwise the printf()
> error might go unnoticed.

I've converted both error checks into asserts (set/get irq) and made the
function void.


> 
> Apart from that (and the nits that Cornelia already mentioned), the
> patch looks fine to me.
> 
>  Thomas
> 



[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 4/4] selftests: KVM: testing the local IRQs resets
  2020-01-30 11:28       ` Cornelia Huck
@ 2020-01-30 11:34         ` Janosch Frank
  0 siblings, 0 replies; 36+ messages in thread
From: Janosch Frank @ 2020-01-30 11:34 UTC (permalink / raw)
  To: Cornelia Huck; +Cc: kvm, thuth, borntraeger, david, linux-s390


[-- Attachment #1.1: Type: text/plain, Size: 282 bytes --]

On 1/30/20 12:28 PM, Cornelia Huck wrote:
> /*
>  * irqs contains the number of retrieved interrupts. Any interrupt
>  * (notably, the emergency call interrupt we have injected) should
>  * be cleared by the resets, so this should be 0.
>  */

Sounds even better, thanks!


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 3/4] selftests: KVM: s390x: Add reset tests
  2020-01-30 11:32     ` Janosch Frank
@ 2020-01-30 11:36       ` Thomas Huth
  0 siblings, 0 replies; 36+ messages in thread
From: Thomas Huth @ 2020-01-30 11:36 UTC (permalink / raw)
  To: Janosch Frank, kvm; +Cc: borntraeger, david, cohuck, linux-s390

On 30/01/2020 12.32, Janosch Frank wrote:
> On 1/30/20 11:51 AM, Thomas Huth wrote:
>> On 29/01/2020 21.03, Janosch Frank wrote:
>>> Test if the registers end up having the correct values after a normal,
>>> initial and clear reset.
>>>
>>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>>> ---
>>>  tools/testing/selftests/kvm/Makefile       |   1 +
>>>  tools/testing/selftests/kvm/s390x/resets.c | 165 +++++++++++++++++++++
>>>  2 files changed, 166 insertions(+)
>>>  create mode 100644 tools/testing/selftests/kvm/s390x/resets.c
>>>
>>> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
>>> index 3138a916574a..fe1ea294730c 100644
>>> --- a/tools/testing/selftests/kvm/Makefile
>>> +++ b/tools/testing/selftests/kvm/Makefile
>>> @@ -36,6 +36,7 @@ TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
>>>  
>>>  TEST_GEN_PROGS_s390x = s390x/memop
>>>  TEST_GEN_PROGS_s390x += s390x/sync_regs_test
>>> +TEST_GEN_PROGS_s390x += s390x/resets
>>>  TEST_GEN_PROGS_s390x += dirty_log_test
>>>  TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
>>>  
>>> diff --git a/tools/testing/selftests/kvm/s390x/resets.c b/tools/testing/selftests/kvm/s390x/resets.c
>>> new file mode 100644
>>> index 000000000000..2b2378cc9e80
>>> --- /dev/null
>>> +++ b/tools/testing/selftests/kvm/s390x/resets.c
>>> @@ -0,0 +1,165 @@
>>> +// SPDX-License-Identifier: GPL-2.0-or-later
>>> +/*
>>> + * Test for s390x CPU resets
>>> + *
>>> + * Copyright (C) 2020, IBM
>>> + */
>>> +
>>> +#include <stdio.h>
>>> +#include <stdlib.h>
>>> +#include <string.h>
>>> +#include <sys/ioctl.h>
>>> +
>>> +#include "test_util.h"
>>> +#include "kvm_util.h"
>>> +
>>> +#define VCPU_ID 3
>>> +
>>> +struct kvm_vm *vm;
>>> +struct kvm_run *run;
>>> +struct kvm_sync_regs *regs;
>>> +static uint64_t regs_null[16];
>>> +
>>> +static uint64_t crs[16] = { 0x40000ULL,
>>> +			    0x42000ULL,
>>> +			    0, 0, 0, 0, 0,
>>> +			    0x43000ULL,
>>> +			    0, 0, 0, 0, 0,
>>> +			    0x44000ULL,
>>> +			    0, 0
>>> +};
>>> +
>>> +static void guest_code_initial(void)
>>> +{
>>> +	/* Round toward 0 */
>>> +	uint32_t fpc = 0x11;
>>> +
>>> +	/* Dirty registers */
>>> +	asm volatile (
>>> +		"	lctlg	0,15,%0\n"
>>> +		"	sfpc	%1\n"
>>> +		: : "Q" (crs), "d" (fpc));
>>
>> I'd recommend to add a GUEST_SYNC(0) here ... otherwise the guest code
>> tries to return from this function and will cause a crash - which will
>> also finish execution of the guest, but might have unexpected side effects.
> 
> Ok
> 
>>
>>> +}
>>> +
>>> +static void test_one_reg(uint64_t id, uint64_t value)
>>> +{
>>> +	struct kvm_one_reg reg;
>>> +	uint64_t eval_reg;
>>> +
>>> +	reg.addr = (uintptr_t)&eval_reg;
>>> +	reg.id = id;
>>> +	vcpu_get_reg(vm, VCPU_ID, &reg);
>>> +	TEST_ASSERT(eval_reg == value, "value == %s", value);
>>> +}
>>> +
>>> +static void assert_clear(void)
>>> +{
>>> +	struct kvm_sregs sregs;
>>> +	struct kvm_regs regs;
>>> +	struct kvm_fpu fpu;
>>> +
>>> +	vcpu_regs_get(vm, VCPU_ID, &regs);
>>> +	TEST_ASSERT(!memcmp(&regs.gprs, regs_null, sizeof(regs.gprs)), "grs == 0");
>>> +
>>> +	vcpu_sregs_get(vm, VCPU_ID, &sregs);
>>> +	TEST_ASSERT(!memcmp(&sregs.acrs, regs_null, sizeof(sregs.acrs)), "acrs == 0");
>>> +
>>> +	vcpu_fpu_get(vm, VCPU_ID, &fpu);
>>> +	TEST_ASSERT(!memcmp(&fpu.fprs, regs_null, sizeof(fpu.fprs)), "fprs == 0");
>>> +}
>>> +
>>> +static void assert_initial(void)
>>> +{
>>> +	struct kvm_sregs sregs;
>>> +	struct kvm_fpu fpu;
>>> +
>>> +	vcpu_sregs_get(vm, VCPU_ID, &sregs);
>>> +	TEST_ASSERT(sregs.crs[0] == 0xE0UL, "cr0 == 0xE0");
>>> +	TEST_ASSERT(sregs.crs[14] == 0xC2000000UL, "cr14 == 0xC2000000");
>>> +	TEST_ASSERT(!memcmp(&sregs.crs[1], regs_null, sizeof(sregs.crs[1]) * 12),
>>> +		    "cr1-13 == 0");
>>> +	TEST_ASSERT(sregs.crs[15] == 0, "cr15 == 0");
>>> +
>>> +	vcpu_fpu_get(vm, VCPU_ID, &fpu);
>>> +	TEST_ASSERT(!fpu.fpc, "fpc == 0");
>>> +
>>> +	test_one_reg(KVM_REG_S390_GBEA, 1);
>>> +	test_one_reg(KVM_REG_S390_PP, 0);
>>> +	test_one_reg(KVM_REG_S390_TODPR, 0);
>>> +	test_one_reg(KVM_REG_S390_CPU_TIMER, 0);
>>> +	test_one_reg(KVM_REG_S390_CLOCK_COMP, 0);
>>> +}
>>> +
>>> +static void assert_normal(void)
>>> +{
>>> +	test_one_reg(KVM_REG_S390_PFTOKEN, KVM_S390_PFAULT_TOKEN_INVALID);
>>> +}
>>> +
>>> +static void test_normal(void)
>>> +{
>>> +	printf("Testing notmal reset\n");
>>> +	/* Create VM */
>>> +	vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
>>> +	run = vcpu_state(vm, VCPU_ID);
>>> +	regs = &run->s.regs;
>>> +
>>> +	_vcpu_run(vm, VCPU_ID);
>>
>> Could you use vcpu_run() instead of _vcpu_run() ?
> 
> Done.
> 
>>
>>> +	vcpu_ioctl(vm, VCPU_ID, KVM_S390_NORMAL_RESET, 0);
>>> +	assert_normal();
>>> +	kvm_vm_free(vm);
>>> +}
>>> +
>>> +static int test_initial(void)
>>> +{
>>> +	int rv;
>>> +
>>> +	printf("Testing initial reset\n");
>>> +	/* Create VM */
>>> +	vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
>>> +	run = vcpu_state(vm, VCPU_ID);
>>> +	regs = &run->s.regs;
>>> +
>>> +	rv = _vcpu_run(vm, VCPU_ID);
>>
>> Extra bonus points if you check here that the registers contain the
>> values that have been set by the guest ;-)
> 
> I started working on that yesterday
> 
>>
>>> +	vcpu_ioctl(vm, VCPU_ID, KVM_S390_INITIAL_RESET, 0);
>>> +	assert_normal();
>>> +	assert_initial();
>>> +	kvm_vm_free(vm);
>>> +	return rv;
>>> +}
>>> +
>>> +static int test_clear(void)
>>> +{
>>> +	int rv;
>>> +
>>> +	printf("Testing clear reset\n");
>>> +	/* Create VM */
>>> +	vm = vm_create_default(VCPU_ID, 0, guest_code_initial);
>>> +	run = vcpu_state(vm, VCPU_ID);
>>> +	regs = &run->s.regs;
>>> +
>>> +	rv = _vcpu_run(vm, VCPU_ID);
>>> +
>>> +	vcpu_ioctl(vm, VCPU_ID, KVM_S390_CLEAR_RESET, 0);
>>> +	assert_normal();
>>> +	assert_initial();
>>> +	assert_clear();
>>> +	kvm_vm_free(vm);
>>> +	return rv;
>>> +}
>>> +
>>> +int main(int argc, char *argv[])
>>> +{
>>> +	int addl_resets;
>>> +
>>> +	setbuf(stdout, NULL);	/* Tell stdout not to buffer its content */
>>> +	addl_resets = kvm_check_cap(KVM_CAP_S390_VCPU_RESETS);
>>> +
>>> +	test_initial();
>>> +	if (addl_resets) {
>>
>> I think you could still fit this into one line, without the need to
>> declare the addl_resets variable:
> 
> The other question is if we still need to check that if the test is
> bundled with the kernel anyway?

For brand new capabilities, I think it would be nice to have the check,
in case somebody (like me) wants to backport the test to slightly older
kernels. For capabilities that have been in the kernel since a long time
(like the IRQ caps in the next patch), I think you can also skip the check.

 Thomas


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2] KVM: s390: do not clobber user space registers during guest reset/store status
  2020-01-30 11:27             ` Christian Borntraeger
@ 2020-01-30 11:42               ` Christian Borntraeger
  2020-01-30 11:44                 ` Christian Borntraeger
  2020-01-30 12:01                 ` Christian Borntraeger
  0 siblings, 2 replies; 36+ messages in thread
From: Christian Borntraeger @ 2020-01-30 11:42 UTC (permalink / raw)
  To: borntraeger; +Cc: cohuck, david, frankja, kvm, linux-s390, stable, thuth

The two ioctls for initial CPU reset and store status currently clobber
the userspace fpc and potentially access registers. This was an
oversight during a fixup for the lazy fpu reloading rework.  The reset
calls are only done from userspace ioctls.  No CPU context is loaded, so
we can (and must) act directly on the sync regs, not on the thread
context. Otherwise the fpu restore call will restore the zeroes fpc to
userspace.

Cc: stable@kernel.org
Fixes: e1788bb995be ("KVM: s390: handle floating point registers in the run ioctl not in vcpu_put/load")
Fixes: 31d8b8d41a7e ("KVM: s390: handle access registers in the run ioctl not in vcpu_put/load")
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
 arch/s390/kvm/kvm-s390.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index c059b86..936415b 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2824,8 +2824,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
 	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
 					CR14_UNUSED_33 |
 					CR14_EXTERNAL_DAMAGE_SUBMASK;
-	/* make sure the new fpc will be lazily loaded */
-	save_fpu_regs();
+	vcpu->run->s.regs.fpc = 0;
 	current->thread.fpu.fpc = 0;
 	vcpu->arch.sie_block->gbea = 1;
 	vcpu->arch.sie_block->pp = 0;
@@ -4343,7 +4342,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	switch (ioctl) {
 	case KVM_S390_STORE_STATUS:
 		idx = srcu_read_lock(&vcpu->kvm->srcu);
-		r = kvm_s390_vcpu_store_status(vcpu, arg);
+		r = kvm_s390_vcpu_store_status_unloaded(vcpu, arg);
 		srcu_read_unlock(&vcpu->kvm->srcu, idx);
 		break;
 	case KVM_S390_SET_INITIAL_PSW: {
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH v2] KVM: s390: do not clobber user space registers during guest reset/store status
  2020-01-30 11:42               ` [PATCH v2] KVM: s390: do not clobber user space registers during guest reset/store status Christian Borntraeger
@ 2020-01-30 11:44                 ` Christian Borntraeger
  2020-01-30 12:01                 ` Christian Borntraeger
  1 sibling, 0 replies; 36+ messages in thread
From: Christian Borntraeger @ 2020-01-30 11:44 UTC (permalink / raw)
  Cc: cohuck, david, frankja, kvm, linux-s390, stable, thuth



On 30.01.20 12:42, Christian Borntraeger wrote:
> The two ioctls for initial CPU reset and store status currently clobber
> the userspace fpc and potentially access registers. This was an
> oversight during a fixup for the lazy fpu reloading rework.  The reset
> calls are only done from userspace ioctls.  No CPU context is loaded, so
> we can (and must) act directly on the sync regs, not on the thread
> context. Otherwise the fpu restore call will restore the zeroes fpc to
> userspace.
> 
> Cc: stable@kernel.org
> Fixes: e1788bb995be ("KVM: s390: handle floating point registers in the run ioctl not in vcpu_put/load")
> Fixes: 31d8b8d41a7e ("KVM: s390: handle access registers in the run ioctl not in vcpu_put/load")
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> ---
>  arch/s390/kvm/kvm-s390.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index c059b86..936415b 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -2824,8 +2824,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
>  	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
>  					CR14_UNUSED_33 |
>  					CR14_EXTERNAL_DAMAGE_SUBMASK;
> -	/* make sure the new fpc will be lazily loaded */
> -	save_fpu_regs();
> +	vcpu->run->s.regs.fpc = 0;
>  	current->thread.fpu.fpc = 0;
>  	vcpu->arch.sie_block->gbea = 1;
>  	vcpu->arch.sie_block->pp = 0;
> @@ -4343,7 +4342,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>  	switch (ioctl) {
>  	case KVM_S390_STORE_STATUS:
>  		idx = srcu_read_lock(&vcpu->kvm->srcu);
> -		r = kvm_s390_vcpu_store_status(vcpu, arg);
> +		r = kvm_s390_vcpu_store_status_unloaded(vcpu, arg);
		kvm_s390_store_status_unloaded of course.....

>  		srcu_read_unlock(&vcpu->kvm->srcu, idx);
>  		break;
>  	case KVM_S390_SET_INITIAL_PSW: {
> 


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2] KVM: s390: do not clobber user space registers during guest reset/store status
  2020-01-30 11:42               ` [PATCH v2] KVM: s390: do not clobber user space registers during guest reset/store status Christian Borntraeger
  2020-01-30 11:44                 ` Christian Borntraeger
@ 2020-01-30 12:01                 ` Christian Borntraeger
  2020-01-30 12:38                   ` David Hildenbrand
  1 sibling, 1 reply; 36+ messages in thread
From: Christian Borntraeger @ 2020-01-30 12:01 UTC (permalink / raw)
  Cc: cohuck, david, frankja, kvm, linux-s390, stable, thuth



On 30.01.20 12:42, Christian Borntraeger wrote:
> The two ioctls for initial CPU reset and store status currently clobber
> the userspace fpc and potentially access registers. This was an
> oversight during a fixup for the lazy fpu reloading rework.  The reset
> calls are only done from userspace ioctls.  No CPU context is loaded, so
> we can (and must) act directly on the sync regs, not on the thread
> context. Otherwise the fpu restore call will restore the zeroes fpc to
> userspace.

New patch description:

    KVM: s390: do not clobber registers during guest reset/store status
    
    The initial CPU reset clobbers the userspace fpc and the store status
    ioctl clobbers the guest acrs + fpr.  As these calls are only done via
    ioctl (and not via vcpu_run), no CPU context is loaded, so we can (and
    must) act directly on the sync regs, not on the thread context.
    
    Cc: stable@kernel.org
    Fixes: e1788bb995be ("KVM: s390: handle floating point registers in the run ioctl not in vcpu_put/load")
    Fixes: 31d8b8d41a7e ("KVM: s390: handle access registers in the run ioctl not in vcpu_put/load")
    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>

> 
> Cc: stable@kernel.org
> Fixes: e1788bb995be ("KVM: s390: handle floating point registers in the run ioctl not in vcpu_put/load")
> Fixes: 31d8b8d41a7e ("KVM: s390: handle access registers in the run ioctl not in vcpu_put/load")
> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> ---
>  arch/s390/kvm/kvm-s390.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index c059b86..936415b 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -2824,8 +2824,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
>  	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
>  					CR14_UNUSED_33 |
>  					CR14_EXTERNAL_DAMAGE_SUBMASK;
> -	/* make sure the new fpc will be lazily loaded */
> -	save_fpu_regs();
> +	vcpu->run->s.regs.fpc = 0;
>  	current->thread.fpu.fpc = 0;
>  	vcpu->arch.sie_block->gbea = 1;
>  	vcpu->arch.sie_block->pp = 0;
> @@ -4343,7 +4342,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>  	switch (ioctl) {
>  	case KVM_S390_STORE_STATUS:
>  		idx = srcu_read_lock(&vcpu->kvm->srcu);
> -		r = kvm_s390_vcpu_store_status(vcpu, arg);
> +		r = kvm_s390_vcpu_store_status_unloaded(vcpu, arg);
>  		srcu_read_unlock(&vcpu->kvm->srcu, idx);
>  		break;
>  	case KVM_S390_SET_INITIAL_PSW: {
> 


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2] KVM: s390: do not clobber user space registers during guest reset/store status
  2020-01-30 12:01                 ` Christian Borntraeger
@ 2020-01-30 12:38                   ` David Hildenbrand
  0 siblings, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2020-01-30 12:38 UTC (permalink / raw)
  To: Christian Borntraeger; +Cc: cohuck, frankja, kvm, linux-s390, stable, thuth

On 30.01.20 13:01, Christian Borntraeger wrote:
> 
> 
> On 30.01.20 12:42, Christian Borntraeger wrote:
>> The two ioctls for initial CPU reset and store status currently clobber
>> the userspace fpc and potentially access registers. This was an
>> oversight during a fixup for the lazy fpu reloading rework.  The reset
>> calls are only done from userspace ioctls.  No CPU context is loaded, so
>> we can (and must) act directly on the sync regs, not on the thread
>> context. Otherwise the fpu restore call will restore the zeroes fpc to
>> userspace.
> 
> New patch description:
> 
>     KVM: s390: do not clobber registers during guest reset/store status
>     
>     The initial CPU reset clobbers the userspace fpc and the store status
>     ioctl clobbers the guest acrs + fpr.  As these calls are only done via
>     ioctl (and not via vcpu_run), no CPU context is loaded, so we can (and
>     must) act directly on the sync regs, not on the thread context.
>     
>     Cc: stable@kernel.org
>     Fixes: e1788bb995be ("KVM: s390: handle floating point registers in the run ioctl not in vcpu_put/load")
>     Fixes: 31d8b8d41a7e ("KVM: s390: handle access registers in the run ioctl not in vcpu_put/load")
>     Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
> 
>>
>> Cc: stable@kernel.org
>> Fixes: e1788bb995be ("KVM: s390: handle floating point registers in the run ioctl not in vcpu_put/load")
>> Fixes: 31d8b8d41a7e ("KVM: s390: handle access registers in the run ioctl not in vcpu_put/load")
>> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
>> ---
>>  arch/s390/kvm/kvm-s390.c | 5 ++---
>>  1 file changed, 2 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>> index c059b86..936415b 100644
>> --- a/arch/s390/kvm/kvm-s390.c
>> +++ b/arch/s390/kvm/kvm-s390.c
>> @@ -2824,8 +2824,7 @@ static void kvm_s390_vcpu_initial_reset(struct kvm_vcpu *vcpu)
>>  	vcpu->arch.sie_block->gcr[14] = CR14_UNUSED_32 |
>>  					CR14_UNUSED_33 |
>>  					CR14_EXTERNAL_DAMAGE_SUBMASK;
>> -	/* make sure the new fpc will be lazily loaded */
>> -	save_fpu_regs();
>> +	vcpu->run->s.regs.fpc = 0;
>>  	current->thread.fpu.fpc = 0;
>>  	vcpu->arch.sie_block->gbea = 1;
>>  	vcpu->arch.sie_block->pp = 0;
>> @@ -4343,7 +4342,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>>  	switch (ioctl) {
>>  	case KVM_S390_STORE_STATUS:
>>  		idx = srcu_read_lock(&vcpu->kvm->srcu);
>> -		r = kvm_s390_vcpu_store_status(vcpu, arg);
>> +		r = kvm_s390_vcpu_store_status_unloaded(vcpu, arg);
>>  		srcu_read_unlock(&vcpu->kvm->srcu, idx);
>>  		break;
>>  	case KVM_S390_SET_INITIAL_PSW: {
>>
> 

With new description + fixed up call

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 2/4] selftests: KVM: Add fpu and one reg set/get library functions
  2020-01-30 10:36   ` Thomas Huth
@ 2020-01-30 13:55     ` Andrew Jones
  2020-01-30 14:10       ` Janosch Frank
  0 siblings, 1 reply; 36+ messages in thread
From: Andrew Jones @ 2020-01-30 13:55 UTC (permalink / raw)
  To: Thomas Huth; +Cc: Janosch Frank, kvm, borntraeger, david, cohuck, linux-s390

On Thu, Jan 30, 2020 at 11:36:21AM +0100, Thomas Huth wrote:
> On 29/01/2020 21.03, Janosch Frank wrote:
> > Add library access to more registers.
> > 
> > Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> > ---
> >  .../testing/selftests/kvm/include/kvm_util.h  |  6 +++
> >  tools/testing/selftests/kvm/lib/kvm_util.c    | 48 +++++++++++++++++++
> >  2 files changed, 54 insertions(+)
> > 
> > diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> > index 29cccaf96baf..ae0d14c2540a 100644
> > --- a/tools/testing/selftests/kvm/include/kvm_util.h
> > +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> > @@ -125,6 +125,12 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
> >  		    struct kvm_sregs *sregs);
> >  int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
> >  		    struct kvm_sregs *sregs);
> > +void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid,
> > +		  struct kvm_fpu *fpu);
> > +void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid,
> > +		  struct kvm_fpu *fpu);
> > +void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
> > +void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
> >  #ifdef __KVM_HAVE_VCPU_EVENTS
> >  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
> >  		     struct kvm_vcpu_events *events);
> > diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> > index 41cf45416060..dae117728ec6 100644
> > --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> > +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> > @@ -1373,6 +1373,54 @@ int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_sregs *sregs)
> >  	return ioctl(vcpu->fd, KVM_SET_SREGS, sregs);
> >  }
> >  
> > +void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
> > +{
> > +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> > +	int ret;
> > +
> > +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> > +
> > +	ret = ioctl(vcpu->fd, KVM_GET_FPU, fpu);
> > +	TEST_ASSERT(ret == 0, "KVM_GET_FPU failed, rc: %i errno: %i",
> > +		    ret, errno);
> > +}
> > +
> > +void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
> > +{
> > +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> > +	int ret;
> > +
> > +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> > +
> > +	ret = ioctl(vcpu->fd, KVM_SET_FPU, fpu);
> > +	TEST_ASSERT(ret == 0, "KVM_SET_FPU failed, rc: %i errno: %i",
> > +		    ret, errno);
> > +}
> > +
> > +void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
> > +{
> > +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> > +	int ret;
> > +
> > +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> > +
> > +	ret = ioctl(vcpu->fd, KVM_GET_ONE_REG, reg);
> > +	TEST_ASSERT(ret == 0, "KVM_GET_ONE_REG failed, rc: %i errno: %i",
> > +		    ret, errno);
> > +}
> > +
> > +void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
> > +{
> > +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> > +	int ret;
> > +
> > +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> > +
> > +	ret = ioctl(vcpu->fd, KVM_SET_ONE_REG, reg);
> > +	TEST_ASSERT(ret == 0, "KVM_SET_ONE_REG failed, rc: %i errno: %i",
> > +		    ret, errno);
> > +}
> > +
> >  /*
> >   * VCPU Ioctl
> >   *
> > 
> 
> Reviewed-by: Thomas Huth <thuth@redhat.com>
>

How about what's below instead. It should be equivalent.

Thanks,
drew

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 29cccaf96baf..d96a072e69bf 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -125,6 +125,31 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
 		    struct kvm_sregs *sregs);
 int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
 		    struct kvm_sregs *sregs);
+
+static inline void
+vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
+{
+	vcpu_ioctl(vm, vcpuid, KVM_GET_FPU, fpu);
+}
+
+static inline void
+vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
+{
+	vcpu_ioctl(vm, vcpuid, KVM_SET_FPU, fpu);
+}
+
+static inline void
+vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
+{
+	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, reg);
+}
+
+static inline void
+vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
+{
+	vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, reg);
+}
+
 #ifdef __KVM_HAVE_VCPU_EVENTS
 void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_vcpu_events *events);


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 2/4] selftests: KVM: Add fpu and one reg set/get library functions
  2020-01-30 13:55     ` Andrew Jones
@ 2020-01-30 14:10       ` Janosch Frank
  2020-01-30 14:30         ` Andrew Jones
  0 siblings, 1 reply; 36+ messages in thread
From: Janosch Frank @ 2020-01-30 14:10 UTC (permalink / raw)
  To: Andrew Jones, Thomas Huth; +Cc: kvm, borntraeger, david, cohuck, linux-s390


[-- Attachment #1.1: Type: text/plain, Size: 5100 bytes --]

On 1/30/20 2:55 PM, Andrew Jones wrote:
> On Thu, Jan 30, 2020 at 11:36:21AM +0100, Thomas Huth wrote:
>> On 29/01/2020 21.03, Janosch Frank wrote:
>>> Add library access to more registers.
>>>
>>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>>> ---
>>>  .../testing/selftests/kvm/include/kvm_util.h  |  6 +++
>>>  tools/testing/selftests/kvm/lib/kvm_util.c    | 48 +++++++++++++++++++
>>>  2 files changed, 54 insertions(+)
>>>
>>> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
>>> index 29cccaf96baf..ae0d14c2540a 100644
>>> --- a/tools/testing/selftests/kvm/include/kvm_util.h
>>> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
>>> @@ -125,6 +125,12 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>>>  		    struct kvm_sregs *sregs);
>>>  int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>>>  		    struct kvm_sregs *sregs);
>>> +void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid,
>>> +		  struct kvm_fpu *fpu);
>>> +void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid,
>>> +		  struct kvm_fpu *fpu);
>>> +void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
>>> +void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
>>>  #ifdef __KVM_HAVE_VCPU_EVENTS
>>>  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
>>>  		     struct kvm_vcpu_events *events);
>>> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
>>> index 41cf45416060..dae117728ec6 100644
>>> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
>>> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
>>> @@ -1373,6 +1373,54 @@ int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_sregs *sregs)
>>>  	return ioctl(vcpu->fd, KVM_SET_SREGS, sregs);
>>>  }
>>>  
>>> +void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
>>> +{
>>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
>>> +	int ret;
>>> +
>>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
>>> +
>>> +	ret = ioctl(vcpu->fd, KVM_GET_FPU, fpu);
>>> +	TEST_ASSERT(ret == 0, "KVM_GET_FPU failed, rc: %i errno: %i",
>>> +		    ret, errno);
>>> +}
>>> +
>>> +void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
>>> +{
>>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
>>> +	int ret;
>>> +
>>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
>>> +
>>> +	ret = ioctl(vcpu->fd, KVM_SET_FPU, fpu);
>>> +	TEST_ASSERT(ret == 0, "KVM_SET_FPU failed, rc: %i errno: %i",
>>> +		    ret, errno);
>>> +}
>>> +
>>> +void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
>>> +{
>>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
>>> +	int ret;
>>> +
>>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
>>> +
>>> +	ret = ioctl(vcpu->fd, KVM_GET_ONE_REG, reg);
>>> +	TEST_ASSERT(ret == 0, "KVM_GET_ONE_REG failed, rc: %i errno: %i",
>>> +		    ret, errno);
>>> +}
>>> +
>>> +void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
>>> +{
>>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
>>> +	int ret;
>>> +
>>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
>>> +
>>> +	ret = ioctl(vcpu->fd, KVM_SET_ONE_REG, reg);
>>> +	TEST_ASSERT(ret == 0, "KVM_SET_ONE_REG failed, rc: %i errno: %i",
>>> +		    ret, errno);
>>> +}
>>> +
>>>  /*
>>>   * VCPU Ioctl
>>>   *
>>>
>>
>> Reviewed-by: Thomas Huth <thuth@redhat.com>
>>
> 
> How about what's below instead. It should be equivalent.

With your proposed changes we loose a bit verbosity in the error
messages. I need to think about which I like more.

> 
> Thanks,
> drew
> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 29cccaf96baf..d96a072e69bf 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -125,6 +125,31 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>  		    struct kvm_sregs *sregs);
>  int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>  		    struct kvm_sregs *sregs);
> +
> +static inline void
> +vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
> +{
> +	vcpu_ioctl(vm, vcpuid, KVM_GET_FPU, fpu);
> +}
> +
> +static inline void
> +vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
> +{
> +	vcpu_ioctl(vm, vcpuid, KVM_SET_FPU, fpu);
> +}
> +
> +static inline void
> +vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
> +{
> +	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, reg);
> +}
> +
> +static inline void
> +vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
> +{
> +	vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, reg);
> +}
> +
>  #ifdef __KVM_HAVE_VCPU_EVENTS
>  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
>  		     struct kvm_vcpu_events *events);
> 



[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 2/4] selftests: KVM: Add fpu and one reg set/get library functions
  2020-01-30 14:10       ` Janosch Frank
@ 2020-01-30 14:30         ` Andrew Jones
  2020-01-30 14:58           ` Janosch Frank
  0 siblings, 1 reply; 36+ messages in thread
From: Andrew Jones @ 2020-01-30 14:30 UTC (permalink / raw)
  To: Janosch Frank; +Cc: Thomas Huth, kvm, borntraeger, david, cohuck, linux-s390

On Thu, Jan 30, 2020 at 03:10:55PM +0100, Janosch Frank wrote:
> On 1/30/20 2:55 PM, Andrew Jones wrote:
> > On Thu, Jan 30, 2020 at 11:36:21AM +0100, Thomas Huth wrote:
> >> On 29/01/2020 21.03, Janosch Frank wrote:
> >>> Add library access to more registers.
> >>>
> >>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> >>> ---
> >>>  .../testing/selftests/kvm/include/kvm_util.h  |  6 +++
> >>>  tools/testing/selftests/kvm/lib/kvm_util.c    | 48 +++++++++++++++++++
> >>>  2 files changed, 54 insertions(+)
> >>>
> >>> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> >>> index 29cccaf96baf..ae0d14c2540a 100644
> >>> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> >>> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> >>> @@ -125,6 +125,12 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
> >>>  		    struct kvm_sregs *sregs);
> >>>  int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
> >>>  		    struct kvm_sregs *sregs);
> >>> +void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid,
> >>> +		  struct kvm_fpu *fpu);
> >>> +void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid,
> >>> +		  struct kvm_fpu *fpu);

nit: no need for the above line breaks. We don't even get to 80 char.

> >>> +void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
> >>> +void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
> >>>  #ifdef __KVM_HAVE_VCPU_EVENTS
> >>>  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
> >>>  		     struct kvm_vcpu_events *events);
> >>> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> >>> index 41cf45416060..dae117728ec6 100644
> >>> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> >>> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> >>> @@ -1373,6 +1373,54 @@ int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_sregs *sregs)
> >>>  	return ioctl(vcpu->fd, KVM_SET_SREGS, sregs);
> >>>  }
> >>>  
> >>> +void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
> >>> +{
> >>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> >>> +	int ret;
> >>> +
> >>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> >>> +
> >>> +	ret = ioctl(vcpu->fd, KVM_GET_FPU, fpu);
> >>> +	TEST_ASSERT(ret == 0, "KVM_GET_FPU failed, rc: %i errno: %i",
> >>> +		    ret, errno);
> >>> +}
> >>> +
> >>> +void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
> >>> +{
> >>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> >>> +	int ret;
> >>> +
> >>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> >>> +
> >>> +	ret = ioctl(vcpu->fd, KVM_SET_FPU, fpu);
> >>> +	TEST_ASSERT(ret == 0, "KVM_SET_FPU failed, rc: %i errno: %i",
> >>> +		    ret, errno);
> >>> +}
> >>> +
> >>> +void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
> >>> +{
> >>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> >>> +	int ret;
> >>> +
> >>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> >>> +
> >>> +	ret = ioctl(vcpu->fd, KVM_GET_ONE_REG, reg);
> >>> +	TEST_ASSERT(ret == 0, "KVM_GET_ONE_REG failed, rc: %i errno: %i",
> >>> +		    ret, errno);
> >>> +}
> >>> +
> >>> +void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
> >>> +{
> >>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> >>> +	int ret;
> >>> +
> >>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> >>> +
> >>> +	ret = ioctl(vcpu->fd, KVM_SET_ONE_REG, reg);
> >>> +	TEST_ASSERT(ret == 0, "KVM_SET_ONE_REG failed, rc: %i errno: %i",
> >>> +		    ret, errno);
> >>> +}
> >>> +
> >>>  /*
> >>>   * VCPU Ioctl
> >>>   *
> >>>
> >>
> >> Reviewed-by: Thomas Huth <thuth@redhat.com>
> >>
> > 
> > How about what's below instead. It should be equivalent.
> 
> With your proposed changes we loose a bit verbosity in the error
> messages. I need to think about which I like more.

Looks like both error messages are missing something. The ones above are
missing the string version of errno. The ones below are missing the string
version of cmd. It's easy to add the string version of errno, which is
an argument for keeping the functions above (but we could at least use
_vcpu_ioctl to avoid duplicating the vcpu_find and vcpu!=NULL assert).
Or, we could consider adding a kvm_ioctl_cmd_to_string() function,
which might be nice for other ioctl wrappers now and in the future.
It shouldn't be too bad to generate a string table from kvm.h, but of
course we'd have to keep it maintained.

Thanks,
drew

> 
> > 
> > Thanks,
> > drew
> > 
> > diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> > index 29cccaf96baf..d96a072e69bf 100644
> > --- a/tools/testing/selftests/kvm/include/kvm_util.h
> > +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> > @@ -125,6 +125,31 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
> >  		    struct kvm_sregs *sregs);
> >  int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
> >  		    struct kvm_sregs *sregs);
> > +
> > +static inline void
> > +vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
> > +{
> > +	vcpu_ioctl(vm, vcpuid, KVM_GET_FPU, fpu);
> > +}
> > +
> > +static inline void
> > +vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
> > +{
> > +	vcpu_ioctl(vm, vcpuid, KVM_SET_FPU, fpu);
> > +}
> > +
> > +static inline void
> > +vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
> > +{
> > +	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, reg);
> > +}
> > +
> > +static inline void
> > +vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
> > +{
> > +	vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, reg);
> > +}
> > +
> >  #ifdef __KVM_HAVE_VCPU_EVENTS
> >  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
> >  		     struct kvm_vcpu_events *events);
> > 
> 
> 




^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 2/4] selftests: KVM: Add fpu and one reg set/get library functions
  2020-01-30 14:30         ` Andrew Jones
@ 2020-01-30 14:58           ` Janosch Frank
  2020-01-30 15:04             ` Andrew Jones
  0 siblings, 1 reply; 36+ messages in thread
From: Janosch Frank @ 2020-01-30 14:58 UTC (permalink / raw)
  To: Andrew Jones; +Cc: Thomas Huth, kvm, borntraeger, david, cohuck, linux-s390


[-- Attachment #1.1: Type: text/plain, Size: 6378 bytes --]

On 1/30/20 3:30 PM, Andrew Jones wrote:
> On Thu, Jan 30, 2020 at 03:10:55PM +0100, Janosch Frank wrote:
>> On 1/30/20 2:55 PM, Andrew Jones wrote:
>>> On Thu, Jan 30, 2020 at 11:36:21AM +0100, Thomas Huth wrote:
>>>> On 29/01/2020 21.03, Janosch Frank wrote:
>>>>> Add library access to more registers.
>>>>>
>>>>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>>>>> ---
>>>>>  .../testing/selftests/kvm/include/kvm_util.h  |  6 +++
>>>>>  tools/testing/selftests/kvm/lib/kvm_util.c    | 48 +++++++++++++++++++
>>>>>  2 files changed, 54 insertions(+)
>>>>>
>>>>> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
>>>>> index 29cccaf96baf..ae0d14c2540a 100644
>>>>> --- a/tools/testing/selftests/kvm/include/kvm_util.h
>>>>> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
>>>>> @@ -125,6 +125,12 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>>>>>  		    struct kvm_sregs *sregs);
>>>>>  int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>>>>>  		    struct kvm_sregs *sregs);
>>>>> +void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid,
>>>>> +		  struct kvm_fpu *fpu);
>>>>> +void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid,
>>>>> +		  struct kvm_fpu *fpu);
> 
> nit: no need for the above line breaks. We don't even get to 80 char.
> 
>>>>> +void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
>>>>> +void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
>>>>>  #ifdef __KVM_HAVE_VCPU_EVENTS
>>>>>  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
>>>>>  		     struct kvm_vcpu_events *events);
>>>>> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
>>>>> index 41cf45416060..dae117728ec6 100644
>>>>> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
>>>>> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
>>>>> @@ -1373,6 +1373,54 @@ int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_sregs *sregs)
>>>>>  	return ioctl(vcpu->fd, KVM_SET_SREGS, sregs);
>>>>>  }
>>>>>  
>>>>> +void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
>>>>> +{
>>>>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
>>>>> +	int ret;
>>>>> +
>>>>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
>>>>> +
>>>>> +	ret = ioctl(vcpu->fd, KVM_GET_FPU, fpu);
>>>>> +	TEST_ASSERT(ret == 0, "KVM_GET_FPU failed, rc: %i errno: %i",
>>>>> +		    ret, errno);
>>>>> +}
>>>>> +
>>>>> +void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
>>>>> +{
>>>>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
>>>>> +	int ret;
>>>>> +
>>>>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
>>>>> +
>>>>> +	ret = ioctl(vcpu->fd, KVM_SET_FPU, fpu);
>>>>> +	TEST_ASSERT(ret == 0, "KVM_SET_FPU failed, rc: %i errno: %i",
>>>>> +		    ret, errno);
>>>>> +}
>>>>> +
>>>>> +void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
>>>>> +{
>>>>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
>>>>> +	int ret;
>>>>> +
>>>>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
>>>>> +
>>>>> +	ret = ioctl(vcpu->fd, KVM_GET_ONE_REG, reg);
>>>>> +	TEST_ASSERT(ret == 0, "KVM_GET_ONE_REG failed, rc: %i errno: %i",
>>>>> +		    ret, errno);
>>>>> +}
>>>>> +
>>>>> +void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
>>>>> +{
>>>>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
>>>>> +	int ret;
>>>>> +
>>>>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
>>>>> +
>>>>> +	ret = ioctl(vcpu->fd, KVM_SET_ONE_REG, reg);
>>>>> +	TEST_ASSERT(ret == 0, "KVM_SET_ONE_REG failed, rc: %i errno: %i",
>>>>> +		    ret, errno);
>>>>> +}
>>>>> +
>>>>>  /*
>>>>>   * VCPU Ioctl
>>>>>   *
>>>>>
>>>>
>>>> Reviewed-by: Thomas Huth <thuth@redhat.com>
>>>>
>>>
>>> How about what's below instead. It should be equivalent.
>>
>> With your proposed changes we loose a bit verbosity in the error
>> messages. I need to think about which I like more.
> 
> Looks like both error messages are missing something. The ones above are
> missing the string version of errno. The ones below are missing the string
> version of cmd. It's easy to add the string version of errno, which is
> an argument for keeping the functions above (but we could at least use
> _vcpu_ioctl to avoid duplicating the vcpu_find and vcpu!=NULL assert).

Will do

> Or, we could consider adding a kvm_ioctl_cmd_to_string() function,
> which might be nice for other ioctl wrappers now and in the future.
> It shouldn't be too bad to generate a string table from kvm.h, but of
> course we'd have to keep it maintained.

I'm currently occupied with managing a lot of patches, so something like
that is not very high on my todo list.

> 
> Thanks,
> drew
> 
>>
>>>
>>> Thanks,
>>> drew
>>>
>>> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
>>> index 29cccaf96baf..d96a072e69bf 100644
>>> --- a/tools/testing/selftests/kvm/include/kvm_util.h
>>> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
>>> @@ -125,6 +125,31 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>>>  		    struct kvm_sregs *sregs);
>>>  int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
>>>  		    struct kvm_sregs *sregs);
>>> +
>>> +static inline void
>>> +vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
>>> +{
>>> +	vcpu_ioctl(vm, vcpuid, KVM_GET_FPU, fpu);
>>> +}
>>> +
>>> +static inline void
>>> +vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
>>> +{
>>> +	vcpu_ioctl(vm, vcpuid, KVM_SET_FPU, fpu);
>>> +}
>>> +
>>> +static inline void
>>> +vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
>>> +{
>>> +	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, reg);
>>> +}
>>> +
>>> +static inline void
>>> +vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
>>> +{
>>> +	vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, reg);
>>> +}
>>> +
>>>  #ifdef __KVM_HAVE_VCPU_EVENTS
>>>  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
>>>  		     struct kvm_vcpu_events *events);
>>>
>>
>>
> 
> 
> 



[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 2/4] selftests: KVM: Add fpu and one reg set/get library functions
  2020-01-30 14:58           ` Janosch Frank
@ 2020-01-30 15:04             ` Andrew Jones
  0 siblings, 0 replies; 36+ messages in thread
From: Andrew Jones @ 2020-01-30 15:04 UTC (permalink / raw)
  To: Janosch Frank; +Cc: Thomas Huth, kvm, borntraeger, david, cohuck, linux-s390

On Thu, Jan 30, 2020 at 03:58:46PM +0100, Janosch Frank wrote:
> On 1/30/20 3:30 PM, Andrew Jones wrote:
> > On Thu, Jan 30, 2020 at 03:10:55PM +0100, Janosch Frank wrote:
> >> On 1/30/20 2:55 PM, Andrew Jones wrote:
> >>> On Thu, Jan 30, 2020 at 11:36:21AM +0100, Thomas Huth wrote:
> >>>> On 29/01/2020 21.03, Janosch Frank wrote:
> >>>>> Add library access to more registers.
> >>>>>
> >>>>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> >>>>> ---
> >>>>>  .../testing/selftests/kvm/include/kvm_util.h  |  6 +++
> >>>>>  tools/testing/selftests/kvm/lib/kvm_util.c    | 48 +++++++++++++++++++
> >>>>>  2 files changed, 54 insertions(+)
> >>>>>
> >>>>> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> >>>>> index 29cccaf96baf..ae0d14c2540a 100644
> >>>>> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> >>>>> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> >>>>> @@ -125,6 +125,12 @@ void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
> >>>>>  		    struct kvm_sregs *sregs);
> >>>>>  int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
> >>>>>  		    struct kvm_sregs *sregs);
> >>>>> +void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid,
> >>>>> +		  struct kvm_fpu *fpu);
> >>>>> +void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid,
> >>>>> +		  struct kvm_fpu *fpu);
> > 
> > nit: no need for the above line breaks. We don't even get to 80 char.
> > 
> >>>>> +void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
> >>>>> +void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg);
> >>>>>  #ifdef __KVM_HAVE_VCPU_EVENTS
> >>>>>  void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
> >>>>>  		     struct kvm_vcpu_events *events);
> >>>>> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> >>>>> index 41cf45416060..dae117728ec6 100644
> >>>>> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> >>>>> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> >>>>> @@ -1373,6 +1373,54 @@ int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_sregs *sregs)
> >>>>>  	return ioctl(vcpu->fd, KVM_SET_SREGS, sregs);
> >>>>>  }
> >>>>>  
> >>>>> +void vcpu_fpu_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
> >>>>> +{
> >>>>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> >>>>> +	int ret;
> >>>>> +
> >>>>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> >>>>> +
> >>>>> +	ret = ioctl(vcpu->fd, KVM_GET_FPU, fpu);
> >>>>> +	TEST_ASSERT(ret == 0, "KVM_GET_FPU failed, rc: %i errno: %i",
> >>>>> +		    ret, errno);
> >>>>> +}
> >>>>> +
> >>>>> +void vcpu_fpu_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_fpu *fpu)
> >>>>> +{
> >>>>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> >>>>> +	int ret;
> >>>>> +
> >>>>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> >>>>> +
> >>>>> +	ret = ioctl(vcpu->fd, KVM_SET_FPU, fpu);
> >>>>> +	TEST_ASSERT(ret == 0, "KVM_SET_FPU failed, rc: %i errno: %i",
> >>>>> +		    ret, errno);
> >>>>> +}
> >>>>> +
> >>>>> +void vcpu_get_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
> >>>>> +{
> >>>>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> >>>>> +	int ret;
> >>>>> +
> >>>>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> >>>>> +
> >>>>> +	ret = ioctl(vcpu->fd, KVM_GET_ONE_REG, reg);
> >>>>> +	TEST_ASSERT(ret == 0, "KVM_GET_ONE_REG failed, rc: %i errno: %i",
> >>>>> +		    ret, errno);
> >>>>> +}
> >>>>> +
> >>>>> +void vcpu_set_reg(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_one_reg *reg)
> >>>>> +{
> >>>>> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> >>>>> +	int ret;
> >>>>> +
> >>>>> +	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
> >>>>> +
> >>>>> +	ret = ioctl(vcpu->fd, KVM_SET_ONE_REG, reg);
> >>>>> +	TEST_ASSERT(ret == 0, "KVM_SET_ONE_REG failed, rc: %i errno: %i",
> >>>>> +		    ret, errno);
> >>>>> +}
> >>>>> +
> >>>>>  /*
> >>>>>   * VCPU Ioctl
> >>>>>   *
> >>>>>
> >>>>
> >>>> Reviewed-by: Thomas Huth <thuth@redhat.com>
> >>>>
> >>>
> >>> How about what's below instead. It should be equivalent.
> >>
> >> With your proposed changes we loose a bit verbosity in the error
> >> messages. I need to think about which I like more.
> > 
> > Looks like both error messages are missing something. The ones above are
> > missing the string version of errno. The ones below are missing the string
> > version of cmd. It's easy to add the string version of errno, which is
> > an argument for keeping the functions above (but we could at least use
> > _vcpu_ioctl to avoid duplicating the vcpu_find and vcpu!=NULL assert).
> 
> Will do
> 
> > Or, we could consider adding a kvm_ioctl_cmd_to_string() function,
> > which might be nice for other ioctl wrappers now and in the future.
> > It shouldn't be too bad to generate a string table from kvm.h, but of
> > course we'd have to keep it maintained.
> 
> I'm currently occupied with managing a lot of patches, so something like
> that is not very high on my todo list.

Yeah, no worries. We can go with a patch like this for now. I'll
experiment with a table generator when I get a chance in order to
see how ugly it gets. If it's too ugly I'll drop it too.

Thanks,
drew


^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2020-01-30 15:04 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-29 20:03 [PATCH v8 0/4] KVM: s390: Add new reset vcpu API Janosch Frank
2020-01-29 20:03 ` [PATCH v8 1/4] " Janosch Frank
2020-01-30  8:55   ` [PATCH/FIXUP FOR STABLE BEFORE THIS SERIES] KVM: s390: do not clobber user space fpc during guest reset Christian Borntraeger
2020-01-30  9:49     ` David Hildenbrand
2020-01-30 10:39       ` Cornelia Huck
2020-01-30 10:56         ` Thomas Huth
2020-01-30 11:07           ` Christian Borntraeger
2020-01-30 11:01       ` Christian Borntraeger
2020-01-30 11:14         ` Christian Borntraeger
2020-01-30 11:20           ` David Hildenbrand
2020-01-30 11:27             ` Christian Borntraeger
2020-01-30 11:42               ` [PATCH v2] KVM: s390: do not clobber user space registers during guest reset/store status Christian Borntraeger
2020-01-30 11:44                 ` Christian Borntraeger
2020-01-30 12:01                 ` Christian Borntraeger
2020-01-30 12:38                   ` David Hildenbrand
2020-01-30  9:00   ` [PATCH v8 1/4] KVM: s390: Add new reset vcpu API Thomas Huth
2020-01-30  9:58   ` Christian Borntraeger
2020-01-29 20:03 ` [PATCH v8 2/4] selftests: KVM: Add fpu and one reg set/get library functions Janosch Frank
2020-01-30 10:36   ` Thomas Huth
2020-01-30 13:55     ` Andrew Jones
2020-01-30 14:10       ` Janosch Frank
2020-01-30 14:30         ` Andrew Jones
2020-01-30 14:58           ` Janosch Frank
2020-01-30 15:04             ` Andrew Jones
2020-01-29 20:03 ` [PATCH v8 3/4] selftests: KVM: s390x: Add reset tests Janosch Frank
2020-01-30 10:51   ` Thomas Huth
2020-01-30 11:32     ` Janosch Frank
2020-01-30 11:36       ` Thomas Huth
2020-01-29 20:03 ` [PATCH v8 4/4] selftests: KVM: testing the local IRQs resets Janosch Frank
2020-01-30 10:55   ` Cornelia Huck
2020-01-30 11:18     ` Janosch Frank
2020-01-30 11:28       ` Cornelia Huck
2020-01-30 11:34         ` Janosch Frank
2020-01-30 11:10   ` Thomas Huth
2020-01-30 11:33     ` Janosch Frank
2020-01-30  9:10 ` [PATCH] KVM: s390: Cleanup initial cpu reset Janosch Frank

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).