* [PATCH 0/9] kvm: s390: Add PV dump support
@ 2022-04-28 13:00 Janosch Frank
  2022-04-28 13:00 ` [PATCH 1/9] s390x: Add SE hdr query information Janosch Frank
                   ` (8 more replies)
  0 siblings, 9 replies; 23+ messages in thread
From: Janosch Frank @ 2022-04-28 13:00 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, borntraeger, imbrenda

Sometimes dumping from inside a VM fails, is unavailable or doesn't
yield the required data. For these occasions we dump the VM from the
outside, writing memory and CPU data to a file.

Up to now, PV guests have only supported dumping from inside the
guest through dumpers like kdump. A PV guest can be dumped from the
hypervisor, but the data will be stale and / or encrypted.

To get the actual state of the PV VM we need the help of the
Ultravisor, which safeguards the VM state. New UV calls have been added
to initialize the dump, dump storage state data, dump CPU data and
complete the dump process.

I chose not to document the dump data provided by the Ultravisor since
KVM doesn't interpret it in any way. We're currently searching for a
location and enough cycles to make it available to all.
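
As a rough orientation, the intended userspace flow looks like the sketch
below. It is purely illustrative: pv_cmd() and pv_cpu_cmd() are hypothetical
wrappers that issue the KVM_S390_PV_COMMAND and KVM_S390_PV_CPU_COMMAND
ioctls with the given (sub)command, and all error handling is omitted.

    if (!ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_S390_PROTECTED_DUMP))
            return;                                          /* no dump support */

    pv_cmd(vm_fd, KVM_PV_DUMP, KVM_PV_DUMP_INIT);            /* 1. start dump, blocks SIE entry */
    /* the guest memory itself is read via the normal export path */
    pv_cmd(vm_fd, KVM_PV_DUMP, KVM_PV_DUMP_CONFIG_STOR_STATE); /* 2. tweaks, per 1M block, repeated */
    pv_cpu_cmd(vcpu_fd, KVM_PV_DUMP, KVM_PV_DUMP_CPU);       /* 3. per-vcpu state */
    pv_cmd(vm_fd, KVM_PV_DUMP, KVM_PV_DUMP_COMPLETE);        /* 4. completion data (keys, IV) */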

v4:
	* Rebased and fixed up conflicts due to the Documentation
          changes and new KVM capabilities
	* Fixed the dump facility check, now we check for all 4 calls

v3:
	* Added Rev-by
	* Renamed the query function's len variables to len_min

v2:
	* Added vcpu SIE blocking to avoid validities
	* Moved the KVM CAP to patch #7
	* Renamed len to len_max and introduced len_written for extendability
	* Added Rev-bys


Janosch Frank (9):
  s390x: Add SE hdr query information
  s390: uv: Add dump fields to query
  KVM: s390: pv: Add query interface
  KVM: s390: pv: Add dump support definitions
  KVM: s390: pv: Add query dump information
  kvm: s390: Add configuration dump functionality
  kvm: s390: Add CPU dump functionality
  Documentation: virt: Protected virtual machine dumps
  Documentation/virt/kvm/api.rst: Add protvirt dump/info api
    descriptions

 Documentation/virt/kvm/api.rst               | 152 ++++++++-
 Documentation/virt/kvm/s390/index.rst        |   1 +
 Documentation/virt/kvm/s390/s390-pv-dump.rst |  60 ++++
 arch/s390/boot/uv.c                          |   4 +
 arch/s390/include/asm/kvm_host.h             |   1 +
 arch/s390/include/asm/uv.h                   |  45 ++-
 arch/s390/kernel/uv.c                        |  53 ++++
 arch/s390/kvm/kvm-s390.c                     | 306 +++++++++++++++++++
 arch/s390/kvm/kvm-s390.h                     |   3 +
 arch/s390/kvm/pv.c                           | 131 ++++++++
 include/uapi/linux/kvm.h                     |  55 ++++
 11 files changed, 808 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/virt/kvm/s390/s390-pv-dump.rst

-- 
2.32.0



* [PATCH 1/9] s390x: Add SE hdr query information
  2022-04-28 13:00 [PATCH 0/9] kvm: s390: Add PV dump support Janosch Frank
@ 2022-04-28 13:00 ` Janosch Frank
  2022-04-28 13:00 ` [PATCH 2/9] s390: uv: Add dump fields to query Janosch Frank
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Janosch Frank @ 2022-04-28 13:00 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, borntraeger, imbrenda

We have information about the supported SE header versions and PCF bits,
so let's expose them via sysfs files.
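
As an illustration, the new values can then be read like the existing UV
query attributes. The /sys/firmware/uv/query/ location and the snippet
below are assumptions based on where the other query attributes live;
libc headers are omitted:

    /* sketch: read the supported SE header version bitmask (hex string) */
    char buf[32] = { 0 };
    int fd = open("/sys/firmware/uv/query/supp_se_hdr_ver", O_RDONLY);

    if (fd >= 0) {
            if (read(fd, buf, sizeof(buf) - 1) > 0)
                    printf("supp_se_hdr_ver: 0x%s", buf);
            close(fd);
    }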

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/boot/uv.c        |  2 ++
 arch/s390/include/asm/uv.h |  7 ++++++-
 arch/s390/kernel/uv.c      | 20 ++++++++++++++++++++
 3 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/s390/boot/uv.c b/arch/s390/boot/uv.c
index e6be155ab2e5..b100b57cf15d 100644
--- a/arch/s390/boot/uv.c
+++ b/arch/s390/boot/uv.c
@@ -41,6 +41,8 @@ void uv_query_info(void)
 		uv_info.max_num_sec_conf = uvcb.max_num_sec_conf;
 		uv_info.max_guest_cpu_id = uvcb.max_guest_cpu_id;
 		uv_info.uv_feature_indications = uvcb.uv_feature_indications;
+		uv_info.supp_se_hdr_ver = uvcb.supp_se_hdr_versions;
+		uv_info.supp_se_hdr_pcf = uvcb.supp_se_hdr_pcf;
 	}
 
 #ifdef CONFIG_PROTECTED_VIRTUALIZATION_GUEST
diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
index cfea7b77a5b8..46498b8c587b 100644
--- a/arch/s390/include/asm/uv.h
+++ b/arch/s390/include/asm/uv.h
@@ -110,7 +110,10 @@ struct uv_cb_qui {
 	u8  reserved88[158 - 136];		/* 0x0088 */
 	u16 max_guest_cpu_id;			/* 0x009e */
 	u64 uv_feature_indications;		/* 0x00a0 */
-	u8  reserveda8[200 - 168];		/* 0x00a8 */
+	u64 reserveda8;				/* 0x00a8 */
+	u64 supp_se_hdr_versions;		/* 0x00b0 */
+	u64 supp_se_hdr_pcf;			/* 0x00b8 */
+	u64 reservedc0;				/* 0x00c0 */
 } __packed __aligned(8);
 
 /* Initialize Ultravisor */
@@ -307,6 +310,8 @@ struct uv_info {
 	unsigned int max_num_sec_conf;
 	unsigned short max_guest_cpu_id;
 	unsigned long uv_feature_indications;
+	unsigned long supp_se_hdr_ver;
+	unsigned long supp_se_hdr_pcf;
 };
 
 extern struct uv_info uv_info;
diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
index a5425075dd25..852840384e75 100644
--- a/arch/s390/kernel/uv.c
+++ b/arch/s390/kernel/uv.c
@@ -392,6 +392,24 @@ static ssize_t uv_query_facilities(struct kobject *kobj,
 static struct kobj_attribute uv_query_facilities_attr =
 	__ATTR(facilities, 0444, uv_query_facilities, NULL);
 
+static ssize_t uv_query_supp_se_hdr_ver(struct kobject *kobj,
+					struct kobj_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "%lx\n", uv_info.supp_se_hdr_ver);
+}
+
+static struct kobj_attribute uv_query_supp_se_hdr_ver_attr =
+	__ATTR(supp_se_hdr_ver, 0444, uv_query_supp_se_hdr_ver, NULL);
+
+static ssize_t uv_query_supp_se_hdr_pcf(struct kobject *kobj,
+					struct kobj_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "%lx\n", uv_info.supp_se_hdr_pcf);
+}
+
+static struct kobj_attribute uv_query_supp_se_hdr_pcf_attr =
+	__ATTR(supp_se_hdr_pcf, 0444, uv_query_supp_se_hdr_pcf, NULL);
+
 static ssize_t uv_query_feature_indications(struct kobject *kobj,
 					    struct kobj_attribute *attr, char *buf)
 {
@@ -437,6 +455,8 @@ static struct attribute *uv_query_attrs[] = {
 	&uv_query_max_guest_cpus_attr.attr,
 	&uv_query_max_guest_vms_attr.attr,
 	&uv_query_max_guest_addr_attr.attr,
+	&uv_query_supp_se_hdr_ver_attr.attr,
+	&uv_query_supp_se_hdr_pcf_attr.attr,
 	NULL,
 };
 
-- 
2.32.0



* [PATCH 2/9] s390: uv: Add dump fields to query
  2022-04-28 13:00 [PATCH 0/9] kvm: s390: Add PV dump support Janosch Frank
  2022-04-28 13:00 ` [PATCH 1/9] s390x: Add SE hdr query information Janosch Frank
@ 2022-04-28 13:00 ` Janosch Frank
  2022-04-28 13:00 ` [PATCH 3/9] KVM: s390: pv: Add query interface Janosch Frank
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Janosch Frank @ 2022-04-28 13:00 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, borntraeger, imbrenda

The new dump feature requires us to know how much memory is needed for
the "dump storage state" and "dump finalize" Ultravisor calls. These
values are reported via the UV query call.
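
To make the meaning of the two lengths concrete, a consumer would size its
buffers roughly as sketched below (illustrative only; guest_size_bytes is a
placeholder for the guest memory size, the other names mirror the uv_info
fields added here):

    /*
     * conf_dump_storage_state_len: dump data produced per 1M block of
     * guest storage. conf_dump_finalize_len: size of the completion
     * data written by the "dump finalize" call.
     */
    __u64 blocks    = guest_size_bytes >> 20;                 /* 1M blocks */
    __u64 tweak_len = blocks * conf_dump_storage_state_len;   /* total storage state data */
    __u64 compl_len = conf_dump_finalize_len;                 /* one-off at completion */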

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/boot/uv.c        |  2 ++
 arch/s390/include/asm/uv.h |  5 +++++
 arch/s390/kernel/uv.c      | 33 +++++++++++++++++++++++++++++++++
 3 files changed, 40 insertions(+)

diff --git a/arch/s390/boot/uv.c b/arch/s390/boot/uv.c
index b100b57cf15d..67c737c1e580 100644
--- a/arch/s390/boot/uv.c
+++ b/arch/s390/boot/uv.c
@@ -43,6 +43,8 @@ void uv_query_info(void)
 		uv_info.uv_feature_indications = uvcb.uv_feature_indications;
 		uv_info.supp_se_hdr_ver = uvcb.supp_se_hdr_versions;
 		uv_info.supp_se_hdr_pcf = uvcb.supp_se_hdr_pcf;
+		uv_info.conf_dump_storage_state_len = uvcb.conf_dump_storage_state_len;
+		uv_info.conf_dump_finalize_len = uvcb.conf_dump_finalize_len;
 	}
 
 #ifdef CONFIG_PROTECTED_VIRTUALIZATION_GUEST
diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
index 46498b8c587b..e8257a293dd1 100644
--- a/arch/s390/include/asm/uv.h
+++ b/arch/s390/include/asm/uv.h
@@ -114,6 +114,9 @@ struct uv_cb_qui {
 	u64 supp_se_hdr_versions;		/* 0x00b0 */
 	u64 supp_se_hdr_pcf;			/* 0x00b8 */
 	u64 reservedc0;				/* 0x00c0 */
+	u64 conf_dump_storage_state_len;	/* 0x00c8 */
+	u64 conf_dump_finalize_len;		/* 0x00d0 */
+	u8  reservedd8[256 - 216];		/* 0x00d8 */
 } __packed __aligned(8);
 
 /* Initialize Ultravisor */
@@ -312,6 +315,8 @@ struct uv_info {
 	unsigned long uv_feature_indications;
 	unsigned long supp_se_hdr_ver;
 	unsigned long supp_se_hdr_pcf;
+	unsigned long conf_dump_storage_state_len;
+	unsigned long conf_dump_finalize_len;
 };
 
 extern struct uv_info uv_info;
diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
index 852840384e75..84fe33b6af4d 100644
--- a/arch/s390/kernel/uv.c
+++ b/arch/s390/kernel/uv.c
@@ -410,6 +410,36 @@ static ssize_t uv_query_supp_se_hdr_pcf(struct kobject *kobj,
 static struct kobj_attribute uv_query_supp_se_hdr_pcf_attr =
 	__ATTR(supp_se_hdr_pcf, 0444, uv_query_supp_se_hdr_pcf, NULL);
 
+static ssize_t uv_query_dump_cpu_len(struct kobject *kobj,
+				     struct kobj_attribute *attr, char *page)
+{
+	return scnprintf(page, PAGE_SIZE, "%lx\n",
+			uv_info.guest_cpu_stor_len);
+}
+
+static struct kobj_attribute uv_query_dump_cpu_len_attr =
+	__ATTR(uv_query_dump_cpu_len, 0444, uv_query_dump_cpu_len, NULL);
+
+static ssize_t uv_query_dump_storage_state_len(struct kobject *kobj,
+					       struct kobj_attribute *attr, char *page)
+{
+	return scnprintf(page, PAGE_SIZE, "%lx\n",
+			uv_info.conf_dump_storage_state_len);
+}
+
+static struct kobj_attribute uv_query_dump_storage_state_len_attr =
+	__ATTR(dump_storage_state_len, 0444, uv_query_dump_storage_state_len, NULL);
+
+static ssize_t uv_query_dump_finalize_len(struct kobject *kobj,
+					  struct kobj_attribute *attr, char *page)
+{
+	return scnprintf(page, PAGE_SIZE, "%lx\n",
+			uv_info.conf_dump_finalize_len);
+}
+
+static struct kobj_attribute uv_query_dump_finalize_len_attr =
+	__ATTR(dump_finalize_len, 0444, uv_query_dump_finalize_len, NULL);
+
 static ssize_t uv_query_feature_indications(struct kobject *kobj,
 					    struct kobj_attribute *attr, char *buf)
 {
@@ -457,6 +487,9 @@ static struct attribute *uv_query_attrs[] = {
 	&uv_query_max_guest_addr_attr.attr,
 	&uv_query_supp_se_hdr_ver_attr.attr,
 	&uv_query_supp_se_hdr_pcf_attr.attr,
+	&uv_query_dump_storage_state_len_attr.attr,
+	&uv_query_dump_finalize_len_attr.attr,
+	&uv_query_dump_cpu_len_attr.attr,
 	NULL,
 };
 
-- 
2.32.0



* [PATCH 3/9] KVM: s390: pv: Add query interface
  2022-04-28 13:00 [PATCH 0/9] kvm: s390: Add PV dump support Janosch Frank
  2022-04-28 13:00 ` [PATCH 1/9] s390x: Add SE hdr query information Janosch Frank
  2022-04-28 13:00 ` [PATCH 2/9] s390: uv: Add dump fields to query Janosch Frank
@ 2022-04-28 13:00 ` Janosch Frank
  2022-05-09 15:25   ` Claudio Imbrenda
  2022-04-28 13:00 ` [PATCH 4/9] KVM: s390: pv: Add dump support definitions Janosch Frank
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 23+ messages in thread
From: Janosch Frank @ 2022-04-28 13:00 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, borntraeger, imbrenda

Some of the query information is already available via sysfs but
having a IOCTL makes the information easier to retrieve.
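
A minimal usage sketch (illustrative only, error handling trimmed; vm_fd is
the usual KVM VM file descriptor):

    struct kvm_s390_pv_info info = {
            .header.id      = KVM_PV_INFO_VM,
            .header.len_max = sizeof(info),
    };
    struct kvm_pv_cmd cmd = {
            .cmd  = KVM_PV_INFO,
            .data = (__u64)&info,
    };

    if (!ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd))
            printf("max PV cpus: %llu, max PV guests: %llu\n",
                   info.vm.max_cpus, info.vm.max_guests);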

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/kvm/kvm-s390.c | 76 ++++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/kvm.h | 25 +++++++++++++
 2 files changed, 101 insertions(+)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 76ad6408cb2c..23352d45a386 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2224,6 +2224,42 @@ static int kvm_s390_cpus_to_pv(struct kvm *kvm, u16 *rc, u16 *rrc)
 	return r;
 }
 
+/*
+ * Here we provide user space with a direct interface to query UV
+ * related data like UV maxima and available features as well as
+ * feature specific data.
+ *
+ * To facilitate future extension of the data structures we'll try to
+ * write data up to the maximum requested length.
+ */
+static ssize_t kvm_s390_handle_pv_info(struct kvm_s390_pv_info *info)
+{
+	ssize_t len_min;
+
+	switch (info->header.id) {
+	case KVM_PV_INFO_VM: {
+		len_min =  sizeof(info->header) + sizeof(info->vm);
+
+		if (info->header.len_max < len_min)
+			return -EINVAL;
+
+		memcpy(info->vm.inst_calls_list,
+		       uv_info.inst_calls_list,
+		       sizeof(uv_info.inst_calls_list));
+
+		/* It's max cpuidm not max cpus so it's off by one */
+		info->vm.max_cpus = uv_info.max_guest_cpu_id + 1;
+		info->vm.max_guests = uv_info.max_num_sec_conf;
+		info->vm.max_guest_addr = uv_info.max_sec_stor_addr;
+		info->vm.feature_indication = uv_info.uv_feature_indications;
+
+		return len_min;
+	}
+	default:
+		return -EINVAL;
+	}
+}
+
 static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
 {
 	int r = 0;
@@ -2360,6 +2396,46 @@ static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
 			     cmd->rc, cmd->rrc);
 		break;
 	}
+	case KVM_PV_INFO: {
+		struct kvm_s390_pv_info info = {};
+		ssize_t data_len;
+
+		/*
+		 * No need to check the VM protection here.
+		 *
+		 * Maybe user space wants to query some of the data
+		 * when the VM is still unprotected. If we see the
+		 * need to fence a new data command we can still
+		 * return an error in the info handler.
+		 */
+
+		r = -EFAULT;
+		if (copy_from_user(&info, argp, sizeof(info.header)))
+			break;
+
+		r = -EINVAL;
+		if (info.header.len_max < sizeof(info.header))
+			break;
+
+		data_len = kvm_s390_handle_pv_info(&info);
+		if (data_len < 0) {
+			r = data_len;
+			break;
+		}
+		/*
+		 * If a data command struct is extended (multiple
+		 * times) this can be used to determine how much of it
+		 * is valid.
+		 */
+		info.header.len_written = data_len;
+
+		r = -EFAULT;
+		if (copy_to_user(argp, &info, data_len))
+			break;
+
+		r = 0;
+		break;
+	}
 	default:
 		r = -ENOTTY;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 91a6fe4e02c0..59e4fb6c7a34 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1645,6 +1645,30 @@ struct kvm_s390_pv_unp {
 	__u64 tweak;
 };
 
+enum pv_cmd_info_id {
+	KVM_PV_INFO_VM,
+};
+
+struct kvm_s390_pv_info_vm {
+	__u64 inst_calls_list[4];
+	__u64 max_cpus;
+	__u64 max_guests;
+	__u64 max_guest_addr;
+	__u64 feature_indication;
+};
+
+struct kvm_s390_pv_info_header {
+	__u32 id;
+	__u32 len_max;
+	__u32 len_written;
+	__u32 reserved;
+};
+
+struct kvm_s390_pv_info {
+	struct kvm_s390_pv_info_header header;
+	struct kvm_s390_pv_info_vm vm;
+};
+
 enum pv_cmd_id {
 	KVM_PV_ENABLE,
 	KVM_PV_DISABLE,
@@ -1653,6 +1677,7 @@ enum pv_cmd_id {
 	KVM_PV_VERIFY,
 	KVM_PV_PREP_RESET,
 	KVM_PV_UNSHARE_ALL,
+	KVM_PV_INFO,
 };
 
 struct kvm_pv_cmd {
-- 
2.32.0



* [PATCH 4/9] KVM: s390: pv: Add dump support definitions
  2022-04-28 13:00 [PATCH 0/9] kvm: s390: Add PV dump support Janosch Frank
                   ` (2 preceding siblings ...)
  2022-04-28 13:00 ` [PATCH 3/9] KVM: s390: pv: Add query interface Janosch Frank
@ 2022-04-28 13:00 ` Janosch Frank
  2022-04-28 13:00 ` [PATCH 5/9] KVM: s390: pv: Add query dump information Janosch Frank
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Janosch Frank @ 2022-04-28 13:00 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, borntraeger, imbrenda

Let's add the constants and structure definitions needed for dump
support.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
---
 arch/s390/include/asm/uv.h | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
index e8257a293dd1..3e597bb634bd 100644
--- a/arch/s390/include/asm/uv.h
+++ b/arch/s390/include/asm/uv.h
@@ -50,6 +50,10 @@
 #define UVC_CMD_SET_UNSHARE_ALL		0x0340
 #define UVC_CMD_PIN_PAGE_SHARED		0x0341
 #define UVC_CMD_UNPIN_PAGE_SHARED	0x0342
+#define UVC_CMD_DUMP_INIT		0x0400
+#define UVC_CMD_DUMP_CONF_STOR_STATE	0x0401
+#define UVC_CMD_DUMP_CPU		0x0402
+#define UVC_CMD_DUMP_COMPLETE		0x0403
 #define UVC_CMD_SET_SHARED_ACCESS	0x1000
 #define UVC_CMD_REMOVE_SHARED_ACCESS	0x1001
 #define UVC_CMD_RETR_ATTEST		0x1020
@@ -77,6 +81,10 @@ enum uv_cmds_inst {
 	BIT_UVC_CMD_UNSHARE_ALL = 20,
 	BIT_UVC_CMD_PIN_PAGE_SHARED = 21,
 	BIT_UVC_CMD_UNPIN_PAGE_SHARED = 22,
+	BIT_UVC_CMD_DUMP_INIT = 24,
+	BIT_UVC_CMD_DUMP_CONFIG_STOR_STATE = 25,
+	BIT_UVC_CMD_DUMP_CPU = 26,
+	BIT_UVC_CMD_DUMP_COMPLETE = 27,
 	BIT_UVC_CMD_RETR_ATTEST = 28,
 };
 
@@ -246,6 +254,31 @@ struct uv_cb_attest {
 	u64 reserved168[4];		/* 0x0168 */
 } __packed __aligned(8);
 
+struct uv_cb_dump_cpu {
+	struct uv_cb_header header;
+	u64 reserved08[2];
+	u64 cpu_handle;
+	u64 dump_area_origin;
+	u64 reserved28[5];
+} __packed __aligned(8);
+
+struct uv_cb_dump_stor_state {
+	struct uv_cb_header header;
+	u64 reserved08[2];
+	u64 config_handle;
+	u64 dump_area_origin;
+	u64 gaddr;
+	u64 reserved28[4];
+} __packed __aligned(8);
+
+struct uv_cb_dump_complete {
+	struct uv_cb_header header;
+	u64 reserved08[2];
+	u64 config_handle;
+	u64 dump_area_origin;
+	u64 reserved30[5];
+} __packed __aligned(8);
+
 static inline int __uv_call(unsigned long r1, unsigned long r2)
 {
 	int cc;
-- 
2.32.0



* [PATCH 5/9] KVM: s390: pv: Add query dump information
  2022-04-28 13:00 [PATCH 0/9] kvm: s390: Add PV dump support Janosch Frank
                   ` (3 preceding siblings ...)
  2022-04-28 13:00 ` [PATCH 4/9] KVM: s390: pv: Add dump support definitions Janosch Frank
@ 2022-04-28 13:00 ` Janosch Frank
  2022-05-09 15:28   ` Claudio Imbrenda
  2022-04-28 13:00 ` [PATCH 6/9] kvm: s390: Add configuration dump functionality Janosch Frank
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 23+ messages in thread
From: Janosch Frank @ 2022-04-28 13:00 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, borntraeger, imbrenda

The dump API requires userspace to provide buffers into which we will
store data. The dump information added in this patch tells userspace
how big those buffers need to be.
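
Usage mirrors the KVM_PV_INFO_VM query from the earlier patch, just with the
dump id (illustrative sketch only, error handling omitted):

    __u64 cpu_buf_len, mem_per_1m, finalize_len;
    struct kvm_s390_pv_info info = {
            .header.id      = KVM_PV_INFO_DUMP,
            .header.len_max = sizeof(info),
    };
    struct kvm_pv_cmd cmd = { .cmd = KVM_PV_INFO, .data = (__u64)&info };

    if (!ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd)) {
            /* buffer sizes for the dump ioctls added later in the series */
            cpu_buf_len  = info.dump.dump_cpu_buffer_len;
            mem_per_1m   = info.dump.dump_config_mem_buffer_per_1m;
            finalize_len = info.dump.dump_config_finalize_len;
    }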

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/kvm/kvm-s390.c | 11 +++++++++++
 include/uapi/linux/kvm.h | 12 +++++++++++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 23352d45a386..e327a5b8ef78 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -2255,6 +2255,17 @@ static ssize_t kvm_s390_handle_pv_info(struct kvm_s390_pv_info *info)
 
 		return len_min;
 	}
+	case KVM_PV_INFO_DUMP: {
+		len_min =  sizeof(info->header) + sizeof(info->dump);
+
+		if (info->header.len_max < len_min)
+			return -EINVAL;
+
+		info->dump.dump_cpu_buffer_len = uv_info.guest_cpu_stor_len;
+		info->dump.dump_config_mem_buffer_per_1m = uv_info.conf_dump_storage_state_len;
+		info->dump.dump_config_finalize_len = uv_info.conf_dump_finalize_len;
+		return len_min;
+	}
 	default:
 		return -EINVAL;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 59e4fb6c7a34..2eba89d7ec29 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1647,6 +1647,13 @@ struct kvm_s390_pv_unp {
 
 enum pv_cmd_info_id {
 	KVM_PV_INFO_VM,
+	KVM_PV_INFO_DUMP,
+};
+
+struct kvm_s390_pv_info_dump {
+	__u64 dump_cpu_buffer_len;
+	__u64 dump_config_mem_buffer_per_1m;
+	__u64 dump_config_finalize_len;
 };
 
 struct kvm_s390_pv_info_vm {
@@ -1666,7 +1673,10 @@ struct kvm_s390_pv_info_header {
 
 struct kvm_s390_pv_info {
 	struct kvm_s390_pv_info_header header;
-	struct kvm_s390_pv_info_vm vm;
+	union {
+		struct kvm_s390_pv_info_dump dump;
+		struct kvm_s390_pv_info_vm vm;
+	};
 };
 
 enum pv_cmd_id {
-- 
2.32.0



* [PATCH 6/9] kvm: s390: Add configuration dump functionality
  2022-04-28 13:00 [PATCH 0/9] kvm: s390: Add PV dump support Janosch Frank
                   ` (4 preceding siblings ...)
  2022-04-28 13:00 ` [PATCH 5/9] KVM: s390: pv: Add query dump information Janosch Frank
@ 2022-04-28 13:00 ` Janosch Frank
  2022-05-09 17:51   ` Claudio Imbrenda
  2022-04-28 13:01 ` [PATCH 7/9] kvm: s390: Add CPU " Janosch Frank
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 23+ messages in thread
From: Janosch Frank @ 2022-04-28 13:00 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, borntraeger, imbrenda

Sometimes dumping inside of a VM fails, is unavailable or doesn't
yield the required data. For these occasions we dump the VM from the
outside, writing memory and cpu data to a file.

Up to now PV guests only supported dumping from the inside of the
guest through dumpers like KDUMP. A PV guest can be dumped from the
hypervisor but the data will be stale and / or encrypted.

To get the actual state of the PV VM we need the help of the
Ultravisor who safeguards the VM state. New UV calls have been added
to initialize the dump, dump storage state data, dump cpu data and
complete the dump process. We expose these calls in this patch via a
new UV ioctl command.

The sensitive parts of the dump data are encrypted, the dump key is
derived from the Customer Communication Key (CCK). This ensures that
only the owner of the VM who has the CCK can decrypt the dump data.

The memory is dumped / read via a normal export call and a re-import
after the dump initialization is not needed (no re-encryption with a
dump key).
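
A rough sketch of how the new subcommands are meant to be driven from
userspace (illustrative only, most error handling omitted; guest_size,
stor_buf, stor_buf_len, compl_buf and finalize_len are placeholders, with
the lengths coming from the KVM_PV_INFO_DUMP data):

    struct kvm_s390_pv_dmp dmp = { .subcmd = KVM_PV_DUMP_INIT };
    struct kvm_pv_cmd cmd = { .cmd = KVM_PV_DUMP, .data = (__u64)&dmp };

    ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd);         /* blocks SIE entry */

    for (dmp.gaddr = 0; dmp.gaddr < guest_size;) {
            dmp.subcmd    = KVM_PV_DUMP_CONFIG_STOR_STATE;
            dmp.buff_addr = (__u64)stor_buf;
            dmp.buff_len  = stor_buf_len;            /* multiple of the per-1M length */
            if (ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd))
                    break;
            /* dmp.gaddr now holds the address where dumping stopped */
            /* stor_buf holds the tweak / status data for the processed blocks */
    }

    dmp.subcmd    = KVM_PV_DUMP_COMPLETE;
    dmp.buff_addr = (__u64)compl_buf;
    dmp.buff_len  = finalize_len;
    ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd);         /* unblocks the vcpus on success */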

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/include/asm/kvm_host.h |   1 +
 arch/s390/kvm/kvm-s390.c         | 146 +++++++++++++++++++++++++++++++
 arch/s390/kvm/kvm-s390.h         |   2 +
 arch/s390/kvm/pv.c               | 115 ++++++++++++++++++++++++
 include/uapi/linux/kvm.h         |  15 ++++
 5 files changed, 279 insertions(+)

diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 766028d54a3e..a0fbe4820e0a 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -923,6 +923,7 @@ struct kvm_s390_pv {
 	u64 guest_len;
 	unsigned long stor_base;
 	void *stor_var;
+	bool dumping;
 };
 
 struct kvm_arch{
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index e327a5b8ef78..8984e8db33b4 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -606,6 +606,26 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_S390_PROTECTED:
 		r = is_prot_virt_host();
 		break;
+	case KVM_CAP_S390_PROTECTED_DUMP: {
+		u64 pv_cmds_dump[] = {
+			BIT_UVC_CMD_DUMP_INIT,
+			BIT_UVC_CMD_DUMP_CONFIG_STOR_STATE,
+			BIT_UVC_CMD_DUMP_CPU,
+			BIT_UVC_CMD_DUMP_COMPLETE,
+		};
+		int i;
+
+		if (!is_prot_virt_host())
+			return 0;
+
+		r = 1;
+		for (i = 0; i < ARRAY_SIZE(pv_cmds_dump); i++) {
+			if (!test_bit_inv(pv_cmds_dump[i],
+					  (unsigned long *)&uv_info.inst_calls_list))
+				return 0;
+		}
+		break;
+	}
 	default:
 		r = 0;
 	}
@@ -2271,6 +2291,101 @@ static ssize_t kvm_s390_handle_pv_info(struct kvm_s390_pv_info *info)
 	}
 }
 
+static int kvm_s390_pv_dmp(struct kvm *kvm, struct kvm_pv_cmd *cmd,
+			   struct kvm_s390_pv_dmp dmp)
+{
+	int r = -EINVAL;
+	void __user *result_buff = (void __user *)dmp.buff_addr;
+
+	switch (dmp.subcmd) {
+	case KVM_PV_DUMP_INIT: {
+		if (kvm->arch.pv.dumping)
+			break;
+
+		/*
+		 * Block SIE entry as concurrent dump UVCs could lead
+		 * to validities.
+		 */
+		kvm_s390_vcpu_block_all(kvm);
+
+		r = uv_cmd_nodata(kvm_s390_pv_get_handle(kvm),
+				  UVC_CMD_DUMP_INIT, &cmd->rc, &cmd->rrc);
+		KVM_UV_EVENT(kvm, 3, "PROTVIRT DUMP INIT: rc %x rrc %x",
+			     cmd->rc, cmd->rrc);
+		if (!r)
+			kvm->arch.pv.dumping = true;
+		else {
+			kvm_s390_vcpu_unblock_all(kvm);
+			r = -EINVAL;
+		}
+		break;
+	}
+	case KVM_PV_DUMP_CONFIG_STOR_STATE: {
+		if (!kvm->arch.pv.dumping)
+			break;
+
+		/*
+		 * gaddr is an output parameter since we might stop
+		 * early. As dmp will be copied back in our caller, we
+		 * don't need to do it ourselves.
+		 */
+		r = kvm_s390_pv_dump_stor_state(kvm, result_buff, &dmp.gaddr, dmp.buff_len,
+						&cmd->rc, &cmd->rrc);
+		break;
+	}
+	case KVM_PV_DUMP_COMPLETE: {
+		struct uv_cb_dump_complete complete = {
+			.header.len = sizeof(complete),
+			.header.cmd = UVC_CMD_DUMP_COMPLETE,
+			.config_handle = kvm_s390_pv_get_handle(kvm),
+		};
+		u64 *compl_data;
+
+		r = -EINVAL;
+		if (!kvm->arch.pv.dumping)
+			break;
+
+		if (dmp.buff_len < uv_info.conf_dump_finalize_len)
+			break;
+
+		/* Allocate dump area */
+		r = -ENOMEM;
+		compl_data = vzalloc(uv_info.conf_dump_finalize_len);
+		if (!compl_data)
+			break;
+		complete.dump_area_origin = (u64)compl_data;
+
+		r = uv_call(0, (u64)&complete);
+		cmd->rc = complete.header.rc;
+		cmd->rrc = complete.header.rrc;
+		KVM_UV_EVENT(kvm, 3, "PROTVIRT DUMP COMPLETE: rc %x rrc %x",
+			     complete.header.rc, complete.header.rrc);
+
+		if (!r) {
+			/*
+			 * kvm_s390_pv_dealloc_vm() will also (mem)set
+			 * this to false on a reboot or other destroy
+			 * operation for this vm.
+			 */
+			kvm->arch.pv.dumping = false;
+			kvm_s390_vcpu_unblock_all(kvm);
+			r = copy_to_user(result_buff, compl_data, uv_info.conf_dump_finalize_len);
+			if (r)
+				r = -EFAULT;
+		}
+		vfree(compl_data);
+		if (r > 0)
+			r = -EINVAL;
+		break;
+	}
+	default:
+		r = -ENOTTY;
+		break;
+	}
+
+	return r;
+}
+
 static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
 {
 	int r = 0;
@@ -2447,6 +2562,28 @@ static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
 		r = 0;
 		break;
 	}
+	case KVM_PV_DUMP: {
+		struct kvm_s390_pv_dmp dmp;
+
+		r = -EINVAL;
+		if (!kvm_s390_pv_is_protected(kvm))
+			break;
+
+		r = -EFAULT;
+		if (copy_from_user(&dmp, argp, sizeof(dmp)))
+			break;
+
+		r = kvm_s390_pv_dmp(kvm, cmd, dmp);
+		if (r)
+			break;
+
+		if (copy_to_user(argp, &dmp, sizeof(dmp))) {
+			r = -EFAULT;
+			break;
+		}
+
+		break;
+	}
 	default:
 		r = -ENOTTY;
 	}
@@ -4564,6 +4701,15 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	struct kvm_run *kvm_run = vcpu->run;
 	int rc;
 
+	/*
+	 * Running a VM while dumping always has the potential to
+	 * produce inconsistent dump data. But for PV vcpus a SIE
+	 * entry while dumping could also lead to a validity which we
+	 * absolutely want to avoid.
+	 */
+	if (vcpu->kvm->arch.pv.dumping)
+		return -EINVAL;
+
 	if (kvm_run->immediate_exit)
 		return -EINTR;
 
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index 497d52a83c78..2868dd0bba25 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -250,6 +250,8 @@ int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
 int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
 		       unsigned long tweak, u16 *rc, u16 *rrc);
 int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state);
+int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
+				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
 
 static inline u64 kvm_s390_pv_get_handle(struct kvm *kvm)
 {
diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
index cc7c9599f43e..d1635ed50078 100644
--- a/arch/s390/kvm/pv.c
+++ b/arch/s390/kvm/pv.c
@@ -298,3 +298,118 @@ int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state)
 		return -EINVAL;
 	return 0;
 }
+
+/* Size of the cache for the storage state dump data. 1MB for now */
+#define DUMP_BUFF_LEN HPAGE_SIZE
+
+/*
+ * kvm_s390_pv_dump_stor_state
+ *
+ * @kvm: pointer to the guest's KVM struct
+ * @buff_user: Userspace pointer where we will write the results to
+ * @gaddr: Starting absolute guest address for which the storage state
+ *         is requested. This value will be updated with the last
+ *         address for which data was written when returning to
+ *         userspace.
+ * @buff_user_len: Length of the buff_user buffer
+ * @rc: Pointer to where the uvcb return code is stored
+ * @rrc: Pointer to where the uvcb return reason code is stored
+ *
+ * Return:
+ *  0 on success
+ *  -ENOMEM if allocating the cache fails
+ *  -EINVAL if gaddr is not aligned to 1MB
+ *  -EINVAL if buff_user_len is not aligned to uv_info.conf_dump_storage_state_len
+ *  -EINVAL if the UV call fails, rc and rrc will be set in this case
+ *  -EFAULT if copying the result to buff_user failed
+ */
+int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
+				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc)
+{
+	struct uv_cb_dump_stor_state uvcb = {
+		.header.cmd = UVC_CMD_DUMP_CONF_STOR_STATE,
+		.header.len = sizeof(uvcb),
+		.config_handle = kvm->arch.pv.handle,
+		.gaddr = *gaddr,
+		.dump_area_origin = 0,
+	};
+	size_t buff_kvm_size;
+	size_t size_done = 0;
+	u8 *buff_kvm = NULL;
+	int cc, ret;
+
+	ret = -EINVAL;
+	/* UV call processes 1MB guest storage chunks at a time */
+	if (*gaddr & ~HPAGE_MASK)
+		goto out;
+
+	/*
+	 * We provide the storage state for 1MB chunks of guest
+	 * storage. The buffer will need to be aligned to
+	 * conf_dump_storage_state_len so we don't end on a partial
+	 * chunk.
+	 */
+	if (!buff_user_len ||
+	    buff_user_len & (uv_info.conf_dump_storage_state_len - 1))
+		goto out;
+
+	/*
+	 * Allocate a buffer from which we will later copy to the user process.
+	 *
+	 * We don't want userspace to dictate our buffer size so we limit it to DUMP_BUFF_LEN.
+	 */
+	ret = -ENOMEM;
+	buff_kvm_size = buff_user_len <= DUMP_BUFF_LEN ? buff_user_len : DUMP_BUFF_LEN;
+	buff_kvm = vzalloc(buff_kvm_size);
+	if (!buff_kvm)
+		goto out;
+
+	ret = 0;
+	uvcb.dump_area_origin = (u64)buff_kvm;
+	/* We will loop until the user buffer is filled or an error occurs */
+	do {
+		/* Get a page of data */
+		cc = uv_call_sched(0, (u64)&uvcb);
+
+		/* All or nothing */
+		if (cc) {
+			ret = -EINVAL;
+			break;
+		}
+
+		size_done += uv_info.conf_dump_storage_state_len;
+		uvcb.dump_area_origin += uv_info.conf_dump_storage_state_len;
+		uvcb.gaddr += HPAGE_SIZE;
+		buff_user_len -= PAGE_SIZE;
+
+		/* KVM Buffer full, time to copy to the process */
+		if (!buff_user_len ||
+		    uvcb.dump_area_origin == (uintptr_t)buff_kvm + buff_kvm_size) {
+
+			if (copy_to_user(buff_user, buff_kvm,
+					 uvcb.dump_area_origin - (uintptr_t)buff_kvm)) {
+				ret = -EFAULT;
+				break;
+			}
+
+			buff_user += size_done;
+			size_done = 0;
+			uvcb.dump_area_origin = (u64)buff_kvm;
+		}
+	} while (buff_user_len);
+
+	/* Report back where we ended dumping */
+	*gaddr = uvcb.gaddr;
+
+	/* Lets only log errors, we don't want to spam */
+out:
+	if (ret)
+		KVM_UV_EVENT(kvm, 3,
+			     "PROTVIRT DUMP STORAGE STATE: addr %llx ret %d, uvcb rc %x rrc %x",
+			     uvcb.gaddr, ret, uvcb.header.rc, uvcb.header.rrc);
+	*rc = uvcb.header.rc;
+	*rrc = uvcb.header.rrc;
+	vfree(buff_kvm);
+
+	return ret;
+}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 2eba89d7ec29..b34850907291 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1645,6 +1645,20 @@ struct kvm_s390_pv_unp {
 	__u64 tweak;
 };
 
+enum pv_cmd_dmp_id {
+	KVM_PV_DUMP_INIT,
+	KVM_PV_DUMP_CONFIG_STOR_STATE,
+	KVM_PV_DUMP_COMPLETE,
+};
+
+struct kvm_s390_pv_dmp {
+	__u64 subcmd;
+	__u64 buff_addr;
+	__u64 buff_len;
+	__u64 gaddr;		/* For dump storage state */
+	__u64 reserved[4];
+};
+
 enum pv_cmd_info_id {
 	KVM_PV_INFO_VM,
 	KVM_PV_INFO_DUMP,
@@ -1688,6 +1702,7 @@ enum pv_cmd_id {
 	KVM_PV_PREP_RESET,
 	KVM_PV_UNSHARE_ALL,
 	KVM_PV_INFO,
+	KVM_PV_DUMP,
 };
 
 struct kvm_pv_cmd {
-- 
2.32.0



* [PATCH 7/9] kvm: s390: Add CPU dump functionality
  2022-04-28 13:00 [PATCH 0/9] kvm: s390: Add PV dump support Janosch Frank
                   ` (5 preceding siblings ...)
  2022-04-28 13:00 ` [PATCH 6/9] kvm: s390: Add configuration dump functionality Janosch Frank
@ 2022-04-28 13:01 ` Janosch Frank
  2022-05-09 19:11   ` Claudio Imbrenda
  2022-04-28 13:01 ` [PATCH 8/9] Documentation: virt: Protected virtual machine dumps Janosch Frank
  2022-04-28 13:01 ` [PATCH 9/9] Documentation/virt/kvm/api.rst: Add protvirt dump/info api descriptions Janosch Frank
  8 siblings, 1 reply; 23+ messages in thread
From: Janosch Frank @ 2022-04-28 13:01 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, borntraeger, imbrenda

The previous patch introduced the per-VM dump functions. Now let's
focus on dumping the VCPU state via the newly introduced
KVM_S390_PV_CPU_COMMAND ioctl, which mirrors the VM UV ioctl and can be
extended with new commands later.
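
For illustration, dumping the state of one vcpu would look roughly like this
(sketch only, error handling omitted; cpu_buf is a buffer of the
dump_cpu_buffer_len size reported by KVM_PV_INFO_DUMP):

    struct kvm_s390_pv_dmp dmp = {
            .subcmd    = KVM_PV_DUMP_CPU,
            .buff_addr = (__u64)cpu_buf,
            .buff_len  = cpu_buf_len,        /* must match guest_cpu_stor_len */
    };
    struct kvm_pv_cmd cmd = { .cmd = KVM_PV_DUMP, .data = (__u64)&dmp };

    /* one call per vcpu, only valid between DUMP INIT and DUMP COMPLETE */
    ioctl(vcpu_fd, KVM_S390_PV_CPU_COMMAND, &cmd);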

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/kvm/kvm-s390.c | 73 ++++++++++++++++++++++++++++++++++++++++
 arch/s390/kvm/kvm-s390.h |  1 +
 arch/s390/kvm/pv.c       | 16 +++++++++
 include/uapi/linux/kvm.h |  5 +++
 4 files changed, 95 insertions(+)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 8984e8db33b4..d15ce38bef14 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -5149,6 +5149,52 @@ long kvm_arch_vcpu_async_ioctl(struct file *filp,
 	return -ENOIOCTLCMD;
 }
 
+static int kvm_s390_handle_pv_vcpu_dump(struct kvm_vcpu *vcpu,
+					struct kvm_pv_cmd *cmd)
+{
+	struct kvm_s390_pv_dmp dmp;
+	void *data;
+	int ret;
+
+	/* Dump initialization is a prerequisite */
+	if (!vcpu->kvm->arch.pv.dumping)
+		return -EINVAL;
+
+	if (copy_from_user(&dmp, (__u8 __user *)cmd->data, sizeof(dmp)))
+		return -EFAULT;
+
+	/* We only handle this subcmd right now */
+	if (dmp.subcmd != KVM_PV_DUMP_CPU)
+		return -EINVAL;
+
+	/* CPU dump length is the same as create cpu storage donation. */
+	if (dmp.buff_len != uv_info.guest_cpu_stor_len)
+		return -EINVAL;
+
+	data = vzalloc(uv_info.guest_cpu_stor_len);
+	if (!data)
+		return -ENOMEM;
+
+	ret = kvm_s390_pv_dump_cpu(vcpu, data, &cmd->rc, &cmd->rrc);
+
+	VCPU_EVENT(vcpu, 3, "PROTVIRT DUMP CPU %d rc %x rrc %x",
+		   vcpu->vcpu_id, cmd->rc, cmd->rrc);
+
+	if (ret) {
+		vfree(data);
+		return -EINVAL;
+	}
+
+	/* On success copy over the dump data */
+	if (copy_to_user((__u8 __user *)dmp.buff_addr, data, uv_info.guest_cpu_stor_len)) {
+		vfree(data);
+		return -EFAULT;
+	}
+
+	vfree(data);
+	return 0;
+}
+
 long kvm_arch_vcpu_ioctl(struct file *filp,
 			 unsigned int ioctl, unsigned long arg)
 {
@@ -5313,6 +5359,33 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 					   irq_state.len);
 		break;
 	}
+	case KVM_S390_PV_CPU_COMMAND: {
+		struct kvm_pv_cmd cmd;
+
+		r = -EINVAL;
+		if (!is_prot_virt_host())
+			break;
+
+		r = -EFAULT;
+		if (copy_from_user(&cmd, argp, sizeof(cmd)))
+			break;
+
+		r = -EINVAL;
+		if (cmd.flags)
+			break;
+
+		/* We only handle this cmd right now */
+		if (cmd.cmd != KVM_PV_DUMP)
+			break;
+
+		r = kvm_s390_handle_pv_vcpu_dump(vcpu, &cmd);
+
+		/* Always copy over UV rc / rrc data */
+		if (copy_to_user((__u8 __user *)argp, &cmd.rc,
+				 sizeof(cmd.rc) + sizeof(cmd.rrc)))
+			r = -EFAULT;
+		break;
+	}
 	default:
 		r = -ENOTTY;
 	}
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index 2868dd0bba25..a39815184350 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -250,6 +250,7 @@ int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
 int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
 		       unsigned long tweak, u16 *rc, u16 *rrc);
 int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state);
+int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc);
 int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
 				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
 
diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
index d1635ed50078..9ab8192b9b23 100644
--- a/arch/s390/kvm/pv.c
+++ b/arch/s390/kvm/pv.c
@@ -299,6 +299,22 @@ int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state)
 	return 0;
 }
 
+int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc)
+{
+	struct uv_cb_dump_cpu uvcb = {
+		.header.cmd = UVC_CMD_DUMP_CPU,
+		.header.len = sizeof(uvcb),
+		.cpu_handle = vcpu->arch.pv.handle,
+		.dump_area_origin = (u64)buff,
+	};
+	int cc;
+
+	cc = uv_call_sched(0, (u64)&uvcb);
+	*rc = uvcb.header.rc;
+	*rrc = uvcb.header.rrc;
+	return cc;
+}
+
 /* Size of the cache for the storage state dump data. 1MB for now */
 #define DUMP_BUFF_LEN HPAGE_SIZE
 
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index b34850907291..108bc7b7a71b 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1144,6 +1144,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_S390_MEM_OP_EXTENSION 211
 #define KVM_CAP_PMU_CAPABILITY 212
 #define KVM_CAP_DISABLE_QUIRKS2 213
+#define KVM_CAP_S390_PROTECTED_DUMP 214
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
@@ -1649,6 +1650,7 @@ enum pv_cmd_dmp_id {
 	KVM_PV_DUMP_INIT,
 	KVM_PV_DUMP_CONFIG_STOR_STATE,
 	KVM_PV_DUMP_COMPLETE,
+	KVM_PV_DUMP_CPU,
 };
 
 struct kvm_s390_pv_dmp {
@@ -2110,4 +2112,7 @@ struct kvm_stats_desc {
 /* Available with KVM_CAP_XSAVE2 */
 #define KVM_GET_XSAVE2		  _IOR(KVMIO,  0xcf, struct kvm_xsave)
 
+/* Available with KVM_CAP_S390_PROTECTED_DUMP */
+#define KVM_S390_PV_CPU_COMMAND	_IOWR(KVMIO, 0xd0, struct kvm_pv_cmd)
+
 #endif /* __LINUX_KVM_H */
-- 
2.32.0



* [PATCH 8/9] Documentation: virt: Protected virtual machine dumps
  2022-04-28 13:00 [PATCH 0/9] kvm: s390: Add PV dump support Janosch Frank
                   ` (6 preceding siblings ...)
  2022-04-28 13:01 ` [PATCH 7/9] kvm: s390: Add CPU " Janosch Frank
@ 2022-04-28 13:01 ` Janosch Frank
  2022-04-28 13:01 ` [PATCH 9/9] Documentation/virt/kvm/api.rst: Add protvirt dump/info api descriptions Janosch Frank
  8 siblings, 0 replies; 23+ messages in thread
From: Janosch Frank @ 2022-04-28 13:01 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, borntraeger, imbrenda

Let's add a documentation file which describes the dump process. Since
we only copy the UV dump data from the UV to userspace, we'll not go
into detail here and let the party which processes the data describe
its structure.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 Documentation/virt/kvm/s390/index.rst        |  1 +
 Documentation/virt/kvm/s390/s390-pv-dump.rst | 60 ++++++++++++++++++++
 2 files changed, 61 insertions(+)
 create mode 100644 Documentation/virt/kvm/s390/s390-pv-dump.rst

diff --git a/Documentation/virt/kvm/s390/index.rst b/Documentation/virt/kvm/s390/index.rst
index 605f488f0cc5..44ec9ab14b59 100644
--- a/Documentation/virt/kvm/s390/index.rst
+++ b/Documentation/virt/kvm/s390/index.rst
@@ -10,3 +10,4 @@ KVM for s390 systems
    s390-diag
    s390-pv
    s390-pv-boot
+   s390-pv-dump
diff --git a/Documentation/virt/kvm/s390/s390-pv-dump.rst b/Documentation/virt/kvm/s390/s390-pv-dump.rst
new file mode 100644
index 000000000000..6fe7560e10b1
--- /dev/null
+++ b/Documentation/virt/kvm/s390/s390-pv-dump.rst
@@ -0,0 +1,60 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===========================================
+s390 (IBM Z) Protected Virtualization dumps
+===========================================
+
+Summary
+-------
+
+Dumping a VM is an essential tool for debugging problems inside
+it. This is especially true when a protected VM runs into trouble as
+there's no way to access its memory and registers from the outside
+while it's running.
+
+However when dumping a protected VM we need to maintain its
+confidentiality until the dump is in the hands of the VM owner who
+should be the only one capable of analysing it.
+
+The confidentiality of the VM dump is ensured by the Ultravisor who
+provides an interface to KVM over which encrypted CPU and memory data
+can be requested. The encryption is based on the Customer
+Communication Key which is the key that's used to encrypt VM data in a
+way that the customer is able to decrypt.
+
+
+Dump process
+------------
+
+A dump is done in 3 steps:
+
+Initiation
+This step initializes the dump process, generates cryptographic seeds
+and extracts dump keys with which the VM dump data will be encrypted.
+
+Data gathering
+Currently there are two types of data that can be gathered from a VM:
+the memory and the vcpu state.
+
+The vcpu state contains all the important registers, general, floating
+point, vector, control and tod/timers of a vcpu. The vcpu dump can
+contain incomplete data if a vcpu is dumped while an instruction is
+emulated with help of the hypervisor. This is indicated by a flag bit
+in the dump data. For the same reason it is very important to not only
+write out the encrypted vcpu state, but also the unencrypted state
+from the hypervisor.
+
+The memory state is further divided into the encrypted memory and its
+encryption tweaks / status flags. The encrypted memory can simply be
+read once it has been exported. The time of the export does not matter
+as no re-encryption is needed. Memory that has been swapped out and
+hence was exported can be read from the swap and written to the dump
+target without need for any special actions.
+
+The tweaks / status flags for the exported pages need to be requested
+from the Ultravisor.
+
+Finalization
+The finalization step will provide the data needed to be able to
+decrypt the vcpu and memory data and end the dump process. When this
+step completes successfully a new dump initiation can be started.
-- 
2.32.0



* [PATCH 9/9] Documentation/virt/kvm/api.rst: Add protvirt dump/info api descriptions
  2022-04-28 13:00 [PATCH 0/9] kvm: s390: Add PV dump support Janosch Frank
                   ` (7 preceding siblings ...)
  2022-04-28 13:01 ` [PATCH 8/9] Documentation: virt: Protected virtual machine dumps Janosch Frank
@ 2022-04-28 13:01 ` Janosch Frank
  8 siblings, 0 replies; 23+ messages in thread
From: Janosch Frank @ 2022-04-28 13:01 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, borntraeger, imbrenda

Time to add the dump API changes to the API documentation file.
Also some minor cleanup.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 Documentation/virt/kvm/api.rst | 152 ++++++++++++++++++++++++++++++++-
 1 file changed, 150 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 85c7abc51af5..70222f46ad22 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -5061,7 +5061,7 @@ into ESA mode. This reset is a superset of the initial reset.
 	__u32 reserved[3];
   };
 
-cmd values:
+**cmd values:**
 
 KVM_PV_ENABLE
   Allocate memory and register the VM with the Ultravisor, thereby
@@ -5077,7 +5077,6 @@ KVM_PV_ENABLE
   =====      =============================
 
 KVM_PV_DISABLE
-
   Deregister the VM from the Ultravisor and reclaim the memory that
   had been donated to the Ultravisor, making it usable by the kernel
   again.  All registered VCPUs are converted back to non-protected
@@ -5094,6 +5093,114 @@ KVM_PV_VM_VERIFY
   Verify the integrity of the unpacked image. Only if this succeeds,
   KVM is allowed to start protected VCPUs.
 
+KVM_PV_INFO
+  :Capability: KVM_CAP_S390_PROTECTED_DUMP
+
+  Presents an API that provides Ultravisor related data to userspace
+  via subcommands. len_max is the size of the user space buffer,
+  len_written is KVM's indication of how much bytes of that buffer
+  were actually written to. len_written can be used to determine the
+  valid fields if more response fields are added in the future.
+
+  ::
+     enum pv_cmd_info_id {
+        KVM_PV_INFO_VM,
+        KVM_PV_INFO_DUMP,
+     };
+
+     struct kvm_s390_pv_info_header {
+        __u32 id;
+        __u32 len_max;
+        __u32 len_written;
+        __u32 reserved;
+     };
+
+     struct kvm_s390_pv_info {
+        struct kvm_s390_pv_info_header header;
+        struct kvm_s390_pv_info_dump dump;
+	struct kvm_s390_pv_info_vm vm;
+     };
+
+**subcommands:**
+
+  KVM_PV_INFO_VM
+    This subcommand provides basic Ultravisor information for PV
+    hosts. These values are likely also exported as files in the sysfs
+    firmware UV query interface but they are more easily available to
+    programs in this API.
+
+    The installed calls and feature_indication members provide the
+    installed UV calls and the UV's other feature indications.
+
+    The max_* members provide information about the maximum number of PV
+    vcpus, PV guests and PV guest memory size.
+
+    ::
+
+      struct kvm_s390_pv_info_vm {
+        __u64 inst_calls_list[4];
+        __u64 max_cpus;
+        __u64 max_guests;
+        __u64 max_guest_addr;
+        __u64 feature_indication;
+      };
+
+
+  KVM_PV_INFO_DUMP
+    This subcommand provides information related to dumping PV guests.
+
+    ::
+
+      struct kvm_s390_pv_info_dump {
+        __u64 dump_cpu_buffer_len;
+        __u64 dump_config_mem_buffer_per_1m;
+        __u64 dump_config_finalize_len;
+      };
+
+KVM_PV_DUMP
+  :Capability: KVM_CAP_S390_PROTECTED_DUMP
+
+  Presents an API that provides calls which facilitate dumping a
+  protected VM.
+
+  ::
+
+    struct kvm_s390_pv_dmp {
+      __u64 subcmd;
+      __u64 buff_addr;
+      __u64 buff_len;
+      __u64 gaddr;		/* For dump storage state */
+    };
+
+  **subcommands:**
+
+  KVM_PV_DUMP_INIT
+    Initializes the dump process of a protected VM. If this call does
+    not succeed all other subcommands will fail with -EINVAL. This
+    subcommand will return -EINVAL if a dump process has not yet been
+    completed.
+
+    Not all PV vms can be dumped, the owner needs to set `dump
+    allowed` PCF bit 34 in the SE header to allow dumping.
+
+  KVM_PV_DUMP_CONFIG_STOR_STATE
+    Stores `buff_len` bytes of tweak component values starting with
+    the 1MB block specified by the absolute guest address
+    (`gaddr`). `buff_len` needs to be `conf_dump_storage_state_len`
+    aligned and at least >= the `conf_dump_storage_state_len` value
+    provided by the dump uv_info data.
+
+  KVM_PV_DUMP_COMPLETE
+    If the subcommand succeeds it completes the dump process and lets
+    KVM_PV_DUMP_INIT be called again.
+
+    On success `conf_dump_finalize_len` bytes of completion data will be
+    stored to the `buff_addr`. The completion data contains a key
+    derivation seed, IV, tweak nonce and encryption keys as well as an
+    authentication tag all of which are needed to decrypt the dump at a
+    later time.
+
+
 4.126 KVM_X86_SET_MSR_FILTER
 ----------------------------
 
@@ -5646,6 +5753,32 @@ The offsets of the state save areas in struct kvm_xsave follow the contents
 of CPUID leaf 0xD on the host.
 
 
+4.135 KVM_S390_PV_CPU_COMMAND
+-----------------------------
+
+:Capability: KVM_CAP_S390_PROTECTED_DUMP
+:Architectures: s390
+:Type: vcpu ioctl
+:Parameters: none
+:Returns: 0 on success, < 0 on error
+
+This ioctl closely mirrors `KVM_S390_PV_COMMAND` but handles requests
+for vcpus. It re-uses the kvm_s390_pv_dmp struct and hence also shares
+the command ids.
+
+**command:**
+
+KVM_PV_DUMP
+  Presents an API that provides calls which facilitate dumping a vcpu
+  of a protected VM.
+
+**subcommand:**
+
+KVM_PV_DUMP_CPU
+  Provides encrypted dump data like register values.
+  The length of the returned data is provided by uv_info.guest_cpu_stor_len.
+
+
 5. The kvm_run structure
 ========================
 
@@ -7724,6 +7857,21 @@ At this time, KVM_PMU_CAP_DISABLE is the only capability.  Setting
 this capability will disable PMU virtualization for that VM.  Usermode
 should adjust CPUID leaf 0xA to reflect that the PMU is disabled.
 
+
+8.36 KVM_CAP_S390_PROTECTED_DUMP
+--------------------------------
+
+:Capability: KVM_CAP_S390_PROTECTED_DUMP
+:Architectures: s390
+:Type: vm
+
+This capability indicates that KVM and the Ultravisor support dumping
+PV guests. The `KVM_PV_DUMP` command is available for the
+`KVM_S390_PV_COMMAND` ioctl and the `KVM_PV_INFO` command provides
+dump related UV data. Also the vcpu ioctl `KVM_S390_PV_CPU_COMMAND` is
+available and supports the `KVM_PV_DUMP_CPU` subcommand.
+
+
 9. Known KVM API problems
 =========================
 
-- 
2.32.0



* Re: [PATCH 3/9] KVM: s390: pv: Add query interface
  2022-04-28 13:00 ` [PATCH 3/9] KVM: s390: pv: Add query interface Janosch Frank
@ 2022-05-09 15:25   ` Claudio Imbrenda
  2022-05-10  7:27     ` Janosch Frank
  0 siblings, 1 reply; 23+ messages in thread
From: Claudio Imbrenda @ 2022-05-09 15:25 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger

On Thu, 28 Apr 2022 13:00:56 +0000
Janosch Frank <frankja@linux.ibm.com> wrote:

> Some of the query information is already available via sysfs but
> having a IOCTL makes the information easier to retrieve.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> ---
>  arch/s390/kvm/kvm-s390.c | 76 ++++++++++++++++++++++++++++++++++++++++
>  include/uapi/linux/kvm.h | 25 +++++++++++++
>  2 files changed, 101 insertions(+)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 76ad6408cb2c..23352d45a386 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -2224,6 +2224,42 @@ static int kvm_s390_cpus_to_pv(struct kvm *kvm, u16 *rc, u16 *rrc)
>  	return r;
>  }
>  
> +/*
> + * Here we provide user space with a direct interface to query UV
> + * related data like UV maxima and available features as well as
> + * feature specific data.
> + *
> + * To facilitate future extension of the data structures we'll try to
> + * write data up to the maximum requested length.
> + */
> +static ssize_t kvm_s390_handle_pv_info(struct kvm_s390_pv_info *info)
> +{
> +	ssize_t len_min;
> +
> +	switch (info->header.id) {
> +	case KVM_PV_INFO_VM: {
> +		len_min =  sizeof(info->header) + sizeof(info->vm);
> +
> +		if (info->header.len_max < len_min)
> +			return -EINVAL;
> +
> +		memcpy(info->vm.inst_calls_list,
> +		       uv_info.inst_calls_list,
> +		       sizeof(uv_info.inst_calls_list));
> +
> +		/* It's max cpuidm not max cpus so it's off by one */

s/cpuidm/cpuid,/ ? (and then also s/cpus/cpus,/)

> +		info->vm.max_cpus = uv_info.max_guest_cpu_id + 1;
> +		info->vm.max_guests = uv_info.max_num_sec_conf;
> +		info->vm.max_guest_addr = uv_info.max_sec_stor_addr;
> +		info->vm.feature_indication = uv_info.uv_feature_indications;
> +
> +		return len_min;
> +	}
> +	default:
> +		return -EINVAL;
> +	}
> +}
> +
>  static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
>  {
>  	int r = 0;
> @@ -2360,6 +2396,46 @@ static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
>  			     cmd->rc, cmd->rrc);
>  		break;
>  	}
> +	case KVM_PV_INFO: {
> +		struct kvm_s390_pv_info info = {};
> +		ssize_t data_len;
> +
> +		/*
> +		 * No need to check the VM protection here.
> +		 *
> +		 * Maybe user space wants to query some of the data
> +		 * when the VM is still unprotected. If we see the
> +		 * need to fence a new data command we can still
> +		 * return an error in the info handler.
> +		 */
> +
> +		r = -EFAULT;
> +		if (copy_from_user(&info, argp, sizeof(info.header)))
> +			break;
> +
> +		r = -EINVAL;
> +		if (info.header.len_max < sizeof(info.header))
> +			break;
> +
> +		data_len = kvm_s390_handle_pv_info(&info);
> +		if (data_len < 0) {
> +			r = data_len;
> +			break;
> +		}
> +		/*
> +		 * If a data command struct is extended (multiple
> +		 * times) this can be used to determine how much of it
> +		 * is valid.
> +		 */
> +		info.header.len_written = data_len;
> +
> +		r = -EFAULT;
> +		if (copy_to_user(argp, &info, data_len))
> +			break;
> +
> +		r = 0;
> +		break;
> +	}
>  	default:
>  		r = -ENOTTY;
>  	}
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 91a6fe4e02c0..59e4fb6c7a34 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1645,6 +1645,30 @@ struct kvm_s390_pv_unp {
>  	__u64 tweak;
>  };
>  
> +enum pv_cmd_info_id {
> +	KVM_PV_INFO_VM,
> +};
> +
> +struct kvm_s390_pv_info_vm {
> +	__u64 inst_calls_list[4];
> +	__u64 max_cpus;
> +	__u64 max_guests;
> +	__u64 max_guest_addr;
> +	__u64 feature_indication;
> +};
> +
> +struct kvm_s390_pv_info_header {
> +	__u32 id;
> +	__u32 len_max;
> +	__u32 len_written;
> +	__u32 reserved;
> +};
> +
> +struct kvm_s390_pv_info {
> +	struct kvm_s390_pv_info_header header;
> +	struct kvm_s390_pv_info_vm vm;
> +};
> +
>  enum pv_cmd_id {
>  	KVM_PV_ENABLE,
>  	KVM_PV_DISABLE,
> @@ -1653,6 +1677,7 @@ enum pv_cmd_id {
>  	KVM_PV_VERIFY,
>  	KVM_PV_PREP_RESET,
>  	KVM_PV_UNSHARE_ALL,
> +	KVM_PV_INFO,
>  };
>  
>  struct kvm_pv_cmd {



* Re: [PATCH 5/9] KVM: s390: pv: Add query dump information
  2022-04-28 13:00 ` [PATCH 5/9] KVM: s390: pv: Add query dump information Janosch Frank
@ 2022-05-09 15:28   ` Claudio Imbrenda
  2022-05-10 12:36     ` Janosch Frank
  0 siblings, 1 reply; 23+ messages in thread
From: Claudio Imbrenda @ 2022-05-09 15:28 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger

On Thu, 28 Apr 2022 13:00:58 +0000
Janosch Frank <frankja@linux.ibm.com> wrote:

> The dump API requires userspace to provide buffers into which we will
> store data. The dump information added in this patch tells userspace
> how big those buffers need to be.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> ---
>  arch/s390/kvm/kvm-s390.c | 11 +++++++++++
>  include/uapi/linux/kvm.h | 12 +++++++++++-
>  2 files changed, 22 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 23352d45a386..e327a5b8ef78 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -2255,6 +2255,17 @@ static ssize_t kvm_s390_handle_pv_info(struct kvm_s390_pv_info *info)
>  
>  		return len_min;
>  	}
> +	case KVM_PV_INFO_DUMP: {
> +		len_min =  sizeof(info->header) + sizeof(info->dump);

so the output will have some zero-padded stuff at the end?

> +
> +		if (info->header.len_max < len_min)
> +			return -EINVAL;
> +
> +		info->dump.dump_cpu_buffer_len = uv_info.guest_cpu_stor_len;
> +		info->dump.dump_config_mem_buffer_per_1m = uv_info.conf_dump_storage_state_len;
> +		info->dump.dump_config_finalize_len = uv_info.conf_dump_finalize_len;
> +		return len_min;
> +	}
>  	default:
>  		return -EINVAL;
>  	}
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 59e4fb6c7a34..2eba89d7ec29 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1647,6 +1647,13 @@ struct kvm_s390_pv_unp {
>  
>  enum pv_cmd_info_id {
>  	KVM_PV_INFO_VM,
> +	KVM_PV_INFO_DUMP,
> +};
> +
> +struct kvm_s390_pv_info_dump {
> +	__u64 dump_cpu_buffer_len;
> +	__u64 dump_config_mem_buffer_per_1m;
> +	__u64 dump_config_finalize_len;
>  };
>  
>  struct kvm_s390_pv_info_vm {
> @@ -1666,7 +1673,10 @@ struct kvm_s390_pv_info_header {
>  
>  struct kvm_s390_pv_info {
>  	struct kvm_s390_pv_info_header header;
> -	struct kvm_s390_pv_info_vm vm;
> +	union {
> +		struct kvm_s390_pv_info_dump dump;
> +		struct kvm_s390_pv_info_vm vm;
> +	};
>  };
>  
>  enum pv_cmd_id {



* Re: [PATCH 6/9] kvm: s390: Add configuration dump functionality
  2022-04-28 13:00 ` [PATCH 6/9] kvm: s390: Add configuration dump functionality Janosch Frank
@ 2022-05-09 17:51   ` Claudio Imbrenda
  2022-05-10 14:07     ` Janosch Frank
  0 siblings, 1 reply; 23+ messages in thread
From: Claudio Imbrenda @ 2022-05-09 17:51 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger

On Thu, 28 Apr 2022 13:00:59 +0000
Janosch Frank <frankja@linux.ibm.com> wrote:

> Sometimes dumping inside of a VM fails, is unavailable or doesn't
> yield the required data. For these occasions we dump the VM from the
> outside, writing memory and cpu data to a file.
> 
> Up to now PV guests only supported dumping from the inside of the
> guest through dumpers like KDUMP. A PV guest can be dumped from the
> hypervisor but the data will be stale and / or encrypted.
> 
> To get the actual state of the PV VM we need the help of the
> Ultravisor who safeguards the VM state. New UV calls have been added
> to initialize the dump, dump storage state data, dump cpu data and
> complete the dump process. We expose these calls in this patch via a
> new UV ioctl command.
> 
> The sensitive parts of the dump data are encrypted, the dump key is
> derived from the Customer Communication Key (CCK). This ensures that
> only the owner of the VM who has the CCK can decrypt the dump data.
> 
> The memory is dumped / read via a normal export call and a re-import
> after the dump initialization is not needed (no re-encryption with a
> dump key).
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> ---
>  arch/s390/include/asm/kvm_host.h |   1 +
>  arch/s390/kvm/kvm-s390.c         | 146 +++++++++++++++++++++++++++++++
>  arch/s390/kvm/kvm-s390.h         |   2 +
>  arch/s390/kvm/pv.c               | 115 ++++++++++++++++++++++++
>  include/uapi/linux/kvm.h         |  15 ++++
>  5 files changed, 279 insertions(+)
> 
> diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
> index 766028d54a3e..a0fbe4820e0a 100644
> --- a/arch/s390/include/asm/kvm_host.h
> +++ b/arch/s390/include/asm/kvm_host.h
> @@ -923,6 +923,7 @@ struct kvm_s390_pv {
>  	u64 guest_len;
>  	unsigned long stor_base;
>  	void *stor_var;
> +	bool dumping;
>  };
>  
>  struct kvm_arch{
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index e327a5b8ef78..8984e8db33b4 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -606,6 +606,26 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_S390_PROTECTED:
>  		r = is_prot_virt_host();
>  		break;
> +	case KVM_CAP_S390_PROTECTED_DUMP: {
> +		u64 pv_cmds_dump[] = {
> +			BIT_UVC_CMD_DUMP_INIT,
> +			BIT_UVC_CMD_DUMP_CONFIG_STOR_STATE,
> +			BIT_UVC_CMD_DUMP_CPU,
> +			BIT_UVC_CMD_DUMP_COMPLETE,
> +		};
> +		int i;
> +
> +		if (!is_prot_virt_host())
> +			return 0;
> +
> +		r = 1;
> +		for (i = 0; i < ARRAY_SIZE(pv_cmds_dump); i++) {
> +			if (!test_bit_inv(pv_cmds_dump[i],
> +					  (unsigned long *)&uv_info.inst_calls_list))
> +				return 0;
> +		}
> +		break;
> +	}
>  	default:
>  		r = 0;
>  	}
> @@ -2271,6 +2291,101 @@ static ssize_t kvm_s390_handle_pv_info(struct kvm_s390_pv_info *info)
>  	}
>  }
>  
> +static int kvm_s390_pv_dmp(struct kvm *kvm, struct kvm_pv_cmd *cmd,
> +			   struct kvm_s390_pv_dmp dmp)
> +{
> +	int r = -EINVAL;
> +	void __user *result_buff = (void __user *)dmp.buff_addr;
> +
> +	switch (dmp.subcmd) {
> +	case KVM_PV_DUMP_INIT: {
> +		if (kvm->arch.pv.dumping)
> +			break;
> +
> +		/*
> +		 * Block SIE entry as concurrent dump UVCs could lead
> +		 * to validities.
> +		 */
> +		kvm_s390_vcpu_block_all(kvm);
> +
> +		r = uv_cmd_nodata(kvm_s390_pv_get_handle(kvm),
> +				  UVC_CMD_DUMP_INIT, &cmd->rc, &cmd->rrc);
> +		KVM_UV_EVENT(kvm, 3, "PROTVIRT DUMP INIT: rc %x rrc %x",
> +			     cmd->rc, cmd->rrc);
> +		if (!r)
> +			kvm->arch.pv.dumping = true;
> +		else {
> +			kvm_s390_vcpu_unblock_all(kvm);
> +			r = -EINVAL;
> +		}
> +		break;
> +	}
> +	case KVM_PV_DUMP_CONFIG_STOR_STATE: {
> +		if (!kvm->arch.pv.dumping)
> +			break;
> +
> +		/*
> +		 * gaddr is an output parameter since we might stop
> +		 * early. As dmp will be copied back in our caller, we
> +		 * don't need to do it ourselves.
> +		 */
> +		r = kvm_s390_pv_dump_stor_state(kvm, result_buff, &dmp.gaddr, dmp.buff_len,
> +						&cmd->rc, &cmd->rrc);
> +		break;
> +	}
> +	case KVM_PV_DUMP_COMPLETE: {
> +		struct uv_cb_dump_complete complete = {
> +			.header.len = sizeof(complete),
> +			.header.cmd = UVC_CMD_DUMP_COMPLETE,
> +			.config_handle = kvm_s390_pv_get_handle(kvm),
> +		};
> +		u64 *compl_data;
> +
> +		r = -EINVAL;
> +		if (!kvm->arch.pv.dumping)
> +			break;
> +
> +		if (dmp.buff_len < uv_info.conf_dump_finalize_len)
> +			break;
> +
> +		/* Allocate dump area */
> +		r = -ENOMEM;
> +		compl_data = vzalloc(uv_info.conf_dump_finalize_len);
> +		if (!compl_data)
> +			break;
> +		complete.dump_area_origin = (u64)compl_data;
> +
> +		r = uv_call(0, (u64)&complete);
> +		cmd->rc = complete.header.rc;
> +		cmd->rrc = complete.header.rrc;
> +		KVM_UV_EVENT(kvm, 3, "PROTVIRT DUMP COMPLETE: rc %x rrc %x",
> +			     complete.header.rc, complete.header.rrc);
> +
> +		if (!r) {
> +			/*
> +			 * kvm_s390_pv_dealloc_vm() will also (mem)set
> +			 * this to false on a reboot or other destroy
> +			 * operation for this vm.
> +			 */
> +			kvm->arch.pv.dumping = false;
> +			kvm_s390_vcpu_unblock_all(kvm);
> +			r = copy_to_user(result_buff, compl_data, uv_info.conf_dump_finalize_len);
> +			if (r)
> +				r = -EFAULT;
> +		}
> +		vfree(compl_data);
> +		if (r > 0)
> +			r = -EINVAL;
> +		break;
> +	}
> +	default:
> +		r = -ENOTTY;
> +		break;
> +	}
> +
> +	return r;
> +}
> +
>  static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
>  {
>  	int r = 0;
> @@ -2447,6 +2562,28 @@ static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
>  		r = 0;
>  		break;
>  	}
> +	case KVM_PV_DUMP: {
> +		struct kvm_s390_pv_dmp dmp;
> +
> +		r = -EINVAL;
> +		if (!kvm_s390_pv_is_protected(kvm))
> +			break;
> +
> +		r = -EFAULT;
> +		if (copy_from_user(&dmp, argp, sizeof(dmp)))
> +			break;
> +
> +		r = kvm_s390_pv_dmp(kvm, cmd, dmp);
> +		if (r)
> +			break;
> +
> +		if (copy_to_user(argp, &dmp, sizeof(dmp))) {
> +			r = -EFAULT;
> +			break;
> +		}
> +
> +		break;
> +	}
>  	default:
>  		r = -ENOTTY;
>  	}
> @@ -4564,6 +4701,15 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  	struct kvm_run *kvm_run = vcpu->run;
>  	int rc;
>  
> +	/*
> +	 * Running a VM while dumping always has the potential to
> +	 * produce inconsistent dump data. But for PV vcpus a SIE
> +	 * entry while dumping could also lead to a validity which we
> +	 * absolutely want to avoid.
> +	 */
> +	if (vcpu->kvm->arch.pv.dumping)
> +		return -EINVAL;
> +
>  	if (kvm_run->immediate_exit)
>  		return -EINTR;
>  
> diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
> index 497d52a83c78..2868dd0bba25 100644
> --- a/arch/s390/kvm/kvm-s390.h
> +++ b/arch/s390/kvm/kvm-s390.h
> @@ -250,6 +250,8 @@ int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
>  int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
>  		       unsigned long tweak, u16 *rc, u16 *rrc);
>  int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state);
> +int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
> +				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
>  
>  static inline u64 kvm_s390_pv_get_handle(struct kvm *kvm)
>  {
> diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
> index cc7c9599f43e..d1635ed50078 100644
> --- a/arch/s390/kvm/pv.c
> +++ b/arch/s390/kvm/pv.c
> @@ -298,3 +298,118 @@ int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state)
>  		return -EINVAL;
>  	return 0;
>  }
> +
> +/* Size of the cache for the storage state dump data. 1MB for now */
> +#define DUMP_BUFF_LEN HPAGE_SIZE
> +
> +/*
> + * kvm_s390_pv_dump_stor_state
> + *
> + * @kvm: pointer to the guest's KVM struct
> + * @buff_user: Userspace pointer where we will write the results to
> + * @gaddr: Starting absolute guest address for which the storage state
> + *         is requested. This value will be updated with the last
> + *         address for which data was written when returning to
> + *         userspace.
> + * @buff_user_len: Length of the buff_user buffer
> + * @rc: Pointer to where the uvcb return code is stored
> + * @rrc: Pointer to where the uvcb return reason code is stored
> + *
> + * Return:
> + *  0 on success
> + *  -ENOMEM if allocating the cache fails
> + *  -EINVAL if gaddr is not aligned to 1MB
> + *  -EINVAL if buff_user_len is not aligned to uv_info.conf_dump_storage_state_len
> + *  -EINVAL if the UV call fails, rc and rrc will be set in this case
> + *  -EFAULT if copying the result to buff_user failed
> + */
> +int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
> +				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc)
> +{
> +	struct uv_cb_dump_stor_state uvcb = {
> +		.header.cmd = UVC_CMD_DUMP_CONF_STOR_STATE,
> +		.header.len = sizeof(uvcb),
> +		.config_handle = kvm->arch.pv.handle,
> +		.gaddr = *gaddr,
> +		.dump_area_origin = 0,
> +	};
> +	size_t buff_kvm_size;
> +	size_t size_done = 0;
> +	u8 *buff_kvm = NULL;
> +	int cc, ret;
> +
> +	ret = -EINVAL;
> +	/* UV call processes 1MB guest storage chunks at a time */
> +	if (*gaddr & ~HPAGE_MASK)
> +		goto out;
> +
> +	/*
> +	 * We provide the storage state for 1MB chunks of guest
> +	 * storage. The buffer will need to be aligned to
> +	 * conf_dump_storage_state_len so we don't end on a partial
> +	 * chunk.
> +	 */
> +	if (!buff_user_len ||
> +	    buff_user_len & (uv_info.conf_dump_storage_state_len - 1))

why not use the IS_ALIGNED macro?
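
Something like this (untested, just to illustrate) should be
equivalent and easier to read:

	if (!buff_user_len ||
	    !IS_ALIGNED(buff_user_len, uv_info.conf_dump_storage_state_len))
		goto out;

The same could be done for the gaddr check above with
!IS_ALIGNED(*gaddr, HPAGE_SIZE).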

> +		goto out;
> +
> +	/*
> +	 * Allocate a buffer from which we will later copy to the user process.
> +	 *
> +	 * We don't want userspace to dictate our buffer size so we limit it to DUMP_BUFF_LEN.
> +	 */
> +	ret = -ENOMEM;
> +	buff_kvm_size = buff_user_len <= DUMP_BUFF_LEN ? buff_user_len : DUMP_BUFF_LEN;
> +	buff_kvm = vzalloc(buff_kvm_size);
> +	if (!buff_kvm)
> +		goto out;
> +
> +	ret = 0;
> +	uvcb.dump_area_origin = (u64)buff_kvm;
> +	/* We will loop until the user buffer is filled or an error occurs */
> +	do {
> +		/* Get a page of data */

are you getting a page or a block of size conf_dump_storage_state_len ?

> +		cc = uv_call_sched(0, (u64)&uvcb);
> +
> +		/* All or nothing */
> +		if (cc) {
> +			ret = -EINVAL;
> +			break;
> +		}
> +
> +		size_done += uv_info.conf_dump_storage_state_len;
> +		uvcb.dump_area_origin += uv_info.conf_dump_storage_state_len;
> +		uvcb.gaddr += HPAGE_SIZE;
> +		buff_user_len -= PAGE_SIZE;

same here ^ (should it be -= uv_info.conf_dump_storage_state_len ?)

> +
> +		/* KVM Buffer full, time to copy to the process */
> +		if (!buff_user_len ||
> +		    uvcb.dump_area_origin == (uintptr_t)buff_kvm + buff_kvm_size) {
> +

why not  ... || size_done == DUMP_BUFF_LEN ?

> +			if (copy_to_user(buff_user, buff_kvm,
> +					 uvcb.dump_area_origin - (uintptr_t)buff_kvm)) {

aren't you trying to copy size_done bytes?
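
Putting my three comments on this loop together, the bookkeeping could
be expressed purely in terms of size_done, e.g. (untested sketch,
keeping your variable names):

	size_done += uv_info.conf_dump_storage_state_len;
	uvcb.dump_area_origin += uv_info.conf_dump_storage_state_len;
	uvcb.gaddr += HPAGE_SIZE;
	buff_user_len -= uv_info.conf_dump_storage_state_len;

	/* KVM buffer full, time to copy to the process */
	if (!buff_user_len || size_done == buff_kvm_size) {
		if (copy_to_user(buff_user, buff_kvm, size_done)) {
			ret = -EFAULT;
			break;
		}
		buff_user += size_done;
		size_done = 0;
		uvcb.dump_area_origin = (u64)buff_kvm;
	}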

> +				ret = -EFAULT;
> +				break;
> +			}
> +
> +			buff_user += size_done;
> +			size_done = 0;
> +			uvcb.dump_area_origin = (u64)buff_kvm;
> +		}
> +	} while (buff_user_len);
> +
> +	/* Report back where we ended dumping */
> +	*gaddr = uvcb.gaddr;
> +
> +	/* Lets only log errors, we don't want to spam */
> +out:
> +	if (ret)
> +		KVM_UV_EVENT(kvm, 3,
> +			     "PROTVIRT DUMP STORAGE STATE: addr %llx ret %d, uvcb rc %x rrc %x",
> +			     uvcb.gaddr, ret, uvcb.header.rc, uvcb.header.rrc);
> +	*rc = uvcb.header.rc;
> +	*rrc = uvcb.header.rrc;
> +	vfree(buff_kvm);
> +
> +	return ret;
> +}
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 2eba89d7ec29..b34850907291 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1645,6 +1645,20 @@ struct kvm_s390_pv_unp {
>  	__u64 tweak;
>  };
>  
> +enum pv_cmd_dmp_id {
> +	KVM_PV_DUMP_INIT,
> +	KVM_PV_DUMP_CONFIG_STOR_STATE,
> +	KVM_PV_DUMP_COMPLETE,
> +};
> +
> +struct kvm_s390_pv_dmp {
> +	__u64 subcmd;
> +	__u64 buff_addr;
> +	__u64 buff_len;
> +	__u64 gaddr;		/* For dump storage state */
> +	__u64 reserved[4];
> +};
> +
>  enum pv_cmd_info_id {
>  	KVM_PV_INFO_VM,
>  	KVM_PV_INFO_DUMP,
> @@ -1688,6 +1702,7 @@ enum pv_cmd_id {
>  	KVM_PV_PREP_RESET,
>  	KVM_PV_UNSHARE_ALL,
>  	KVM_PV_INFO,
> +	KVM_PV_DUMP,
>  };
>  
>  struct kvm_pv_cmd {


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 7/9] kvm: s390: Add CPU dump functionality
  2022-04-28 13:01 ` [PATCH 7/9] kvm: s390: Add CPU " Janosch Frank
@ 2022-05-09 19:11   ` Claudio Imbrenda
  2022-05-10  7:26     ` Janosch Frank
  0 siblings, 1 reply; 23+ messages in thread
From: Claudio Imbrenda @ 2022-05-09 19:11 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger

On Thu, 28 Apr 2022 13:01:00 +0000
Janosch Frank <frankja@linux.ibm.com> wrote:

> The previous patch introduced the per-VM dump functions now let's
> focus on dumping the VCPU state via the newly introduced
> KVM_S390_PV_CPU_COMMAND ioctl which mirrors the VM UV ioctl and can be
> extended with new commands later.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> ---
>  arch/s390/kvm/kvm-s390.c | 73 ++++++++++++++++++++++++++++++++++++++++
>  arch/s390/kvm/kvm-s390.h |  1 +
>  arch/s390/kvm/pv.c       | 16 +++++++++
>  include/uapi/linux/kvm.h |  5 +++
>  4 files changed, 95 insertions(+)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 8984e8db33b4..d15ce38bef14 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -5149,6 +5149,52 @@ long kvm_arch_vcpu_async_ioctl(struct file *filp,
>  	return -ENOIOCTLCMD;
>  }
>  
> +static int kvm_s390_handle_pv_vcpu_dump(struct kvm_vcpu *vcpu,
> +					struct kvm_pv_cmd *cmd)
> +{
> +	struct kvm_s390_pv_dmp dmp;
> +	void *data;
> +	int ret;
> +
> +	/* Dump initialization is a prerequisite */
> +	if (!vcpu->kvm->arch.pv.dumping)
> +		return -EINVAL;
> +
> +	if (copy_from_user(&dmp, (__u8 __user *)cmd->data, sizeof(dmp)))
> +		return -EFAULT;
> +
> +	/* We only handle this subcmd right now */
> +	if (dmp.subcmd != KVM_PV_DUMP_CPU)
> +		return -EINVAL;
> +
> +	/* CPU dump length is the same as create cpu storage donation. */
> +	if (dmp.buff_len != uv_info.guest_cpu_stor_len)
> +		return -EINVAL;
> +
> +	data = vzalloc(uv_info.guest_cpu_stor_len);
> +	if (!data)
> +		return -ENOMEM;
> +
> +	ret = kvm_s390_pv_dump_cpu(vcpu, data, &cmd->rc, &cmd->rrc);
> +
> +	VCPU_EVENT(vcpu, 3, "PROTVIRT DUMP CPU %d rc %x rrc %x",
> +		   vcpu->vcpu_id, cmd->rc, cmd->rrc);
> +
> +	if (ret) {
> +		vfree(data);
> +		return -EINVAL;
> +	}
> +
> +	/* On success copy over the dump data */
> +	if (copy_to_user((__u8 __user *)dmp.buff_addr, data, uv_info.guest_cpu_stor_len)) {
> +		vfree(data);
> +		return -EFAULT;
> +	}
> +
> +	vfree(data);
> +	return 0;
> +}
> +
>  long kvm_arch_vcpu_ioctl(struct file *filp,
>  			 unsigned int ioctl, unsigned long arg)
>  {
> @@ -5313,6 +5359,33 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>  					   irq_state.len);
>  		break;
>  	}
> +	case KVM_S390_PV_CPU_COMMAND: {
> +		struct kvm_pv_cmd cmd;
> +
> +		r = -EINVAL;
> +		if (!is_prot_virt_host())
> +			break;
> +
> +		r = -EFAULT;
> +		if (copy_from_user(&cmd, argp, sizeof(cmd)))
> +			break;
> +
> +		r = -EINVAL;
> +		if (cmd.flags)
> +			break;
> +
> +		/* We only handle this cmd right now */
> +		if (cmd.cmd != KVM_PV_DUMP)
> +			break;
> +
> +		r = kvm_s390_handle_pv_vcpu_dump(vcpu, &cmd);
> +
> +		/* Always copy over UV rc / rrc data */
> +		if (copy_to_user((__u8 __user *)argp, &cmd.rc,
> +				 sizeof(cmd.rc) + sizeof(cmd.rrc)))
> +			r = -EFAULT;
> +		break;
> +	}
>  	default:
>  		r = -ENOTTY;
>  	}
> diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
> index 2868dd0bba25..a39815184350 100644
> --- a/arch/s390/kvm/kvm-s390.h
> +++ b/arch/s390/kvm/kvm-s390.h
> @@ -250,6 +250,7 @@ int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
>  int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
>  		       unsigned long tweak, u16 *rc, u16 *rrc);
>  int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state);
> +int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc);
>  int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
>  				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
>  
> diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
> index d1635ed50078..9ab8192b9b23 100644
> --- a/arch/s390/kvm/pv.c
> +++ b/arch/s390/kvm/pv.c
> @@ -299,6 +299,22 @@ int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state)
>  	return 0;
>  }
>  
> +int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc)
> +{
> +	struct uv_cb_dump_cpu uvcb = {
> +		.header.cmd = UVC_CMD_DUMP_CPU,
> +		.header.len = sizeof(uvcb),
> +		.cpu_handle = vcpu->arch.pv.handle,
> +		.dump_area_origin = (u64)buff,
> +	};
> +	int cc;
> +
> +	cc = uv_call_sched(0, (u64)&uvcb);

it's a small amount of data, but you use the _sched variant?

and, why aren't you using the _sched variant in the previous patch (for
DUMP_COMPLETE)?

to be clear: I think the right thing is to always use the _sched
variant unless there is a good reason not to (so please fix the previous
patch)
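
I.e. for the DUMP_COMPLETE case in the previous patch that would be:

	r = uv_call_sched(0, (u64)&complete);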

> +	*rc = uvcb.header.rc;
> +	*rrc = uvcb.header.rrc;
> +	return cc;
> +}
> +
>  /* Size of the cache for the storage state dump data. 1MB for now */
>  #define DUMP_BUFF_LEN HPAGE_SIZE
>  
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index b34850907291..108bc7b7a71b 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1144,6 +1144,7 @@ struct kvm_ppc_resize_hpt {
>  #define KVM_CAP_S390_MEM_OP_EXTENSION 211
>  #define KVM_CAP_PMU_CAPABILITY 212
>  #define KVM_CAP_DISABLE_QUIRKS2 213
> +#define KVM_CAP_S390_PROTECTED_DUMP 214
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  
> @@ -1649,6 +1650,7 @@ enum pv_cmd_dmp_id {
>  	KVM_PV_DUMP_INIT,
>  	KVM_PV_DUMP_CONFIG_STOR_STATE,
>  	KVM_PV_DUMP_COMPLETE,
> +	KVM_PV_DUMP_CPU,
>  };
>  
>  struct kvm_s390_pv_dmp {
> @@ -2110,4 +2112,7 @@ struct kvm_stats_desc {
>  /* Available with KVM_CAP_XSAVE2 */
>  #define KVM_GET_XSAVE2		  _IOR(KVMIO,  0xcf, struct kvm_xsave)
>  
> +/* Available with KVM_CAP_S390_PROTECTED_DUMP */
> +#define KVM_S390_PV_CPU_COMMAND	_IOWR(KVMIO, 0xd0, struct kvm_pv_cmd)
> +
>  #endif /* __LINUX_KVM_H */


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 7/9] kvm: s390: Add CPU dump functionality
  2022-05-09 19:11   ` Claudio Imbrenda
@ 2022-05-10  7:26     ` Janosch Frank
  2022-05-10  9:14       ` Claudio Imbrenda
  0 siblings, 1 reply; 23+ messages in thread
From: Janosch Frank @ 2022-05-10  7:26 UTC (permalink / raw)
  To: Claudio Imbrenda; +Cc: kvm, linux-s390, borntraeger

On 5/9/22 21:11, Claudio Imbrenda wrote:
> On Thu, 28 Apr 2022 13:01:00 +0000
> Janosch Frank <frankja@linux.ibm.com> wrote:
> 
>> The previous patch introduced the per-VM dump functions now let's
>> focus on dumping the VCPU state via the newly introduced
>> KVM_S390_PV_CPU_COMMAND ioctl which mirrors the VM UV ioctl and can be
>> extended with new commands later.
>>
>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>> ---
>>   arch/s390/kvm/kvm-s390.c | 73 ++++++++++++++++++++++++++++++++++++++++
>>   arch/s390/kvm/kvm-s390.h |  1 +
>>   arch/s390/kvm/pv.c       | 16 +++++++++
>>   include/uapi/linux/kvm.h |  5 +++
>>   4 files changed, 95 insertions(+)
>>
>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>> index 8984e8db33b4..d15ce38bef14 100644
>> --- a/arch/s390/kvm/kvm-s390.c
>> +++ b/arch/s390/kvm/kvm-s390.c
>> @@ -5149,6 +5149,52 @@ long kvm_arch_vcpu_async_ioctl(struct file *filp,
>>   	return -ENOIOCTLCMD;
>>   }
>>   
>> +static int kvm_s390_handle_pv_vcpu_dump(struct kvm_vcpu *vcpu,
>> +					struct kvm_pv_cmd *cmd)
>> +{
>> +	struct kvm_s390_pv_dmp dmp;
>> +	void *data;
>> +	int ret;
>> +
>> +	/* Dump initialization is a prerequisite */
>> +	if (!vcpu->kvm->arch.pv.dumping)
>> +		return -EINVAL;
>> +
>> +	if (copy_from_user(&dmp, (__u8 __user *)cmd->data, sizeof(dmp)))
>> +		return -EFAULT;
>> +
>> +	/* We only handle this subcmd right now */
>> +	if (dmp.subcmd != KVM_PV_DUMP_CPU)
>> +		return -EINVAL;
>> +
>> +	/* CPU dump length is the same as create cpu storage donation. */
>> +	if (dmp.buff_len != uv_info.guest_cpu_stor_len)
>> +		return -EINVAL;
>> +
>> +	data = vzalloc(uv_info.guest_cpu_stor_len);
>> +	if (!data)
>> +		return -ENOMEM;
>> +
>> +	ret = kvm_s390_pv_dump_cpu(vcpu, data, &cmd->rc, &cmd->rrc);
>> +
>> +	VCPU_EVENT(vcpu, 3, "PROTVIRT DUMP CPU %d rc %x rrc %x",
>> +		   vcpu->vcpu_id, cmd->rc, cmd->rrc);
>> +
>> +	if (ret) {
>> +		vfree(data);
>> +		return -EINVAL;
>> +	}
>> +
>> +	/* On success copy over the dump data */
>> +	if (copy_to_user((__u8 __user *)dmp.buff_addr, data, uv_info.guest_cpu_stor_len)) {
>> +		vfree(data);
>> +		return -EFAULT;
>> +	}
>> +
>> +	vfree(data);
>> +	return 0;
>> +}
>> +
>>   long kvm_arch_vcpu_ioctl(struct file *filp,
>>   			 unsigned int ioctl, unsigned long arg)
>>   {
>> @@ -5313,6 +5359,33 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>>   					   irq_state.len);
>>   		break;
>>   	}
>> +	case KVM_S390_PV_CPU_COMMAND: {
>> +		struct kvm_pv_cmd cmd;
>> +
>> +		r = -EINVAL;
>> +		if (!is_prot_virt_host())
>> +			break;
>> +
>> +		r = -EFAULT;
>> +		if (copy_from_user(&cmd, argp, sizeof(cmd)))
>> +			break;
>> +
>> +		r = -EINVAL;
>> +		if (cmd.flags)
>> +			break;
>> +
>> +		/* We only handle this cmd right now */
>> +		if (cmd.cmd != KVM_PV_DUMP)
>> +			break;
>> +
>> +		r = kvm_s390_handle_pv_vcpu_dump(vcpu, &cmd);
>> +
>> +		/* Always copy over UV rc / rrc data */
>> +		if (copy_to_user((__u8 __user *)argp, &cmd.rc,
>> +				 sizeof(cmd.rc) + sizeof(cmd.rrc)))
>> +			r = -EFAULT;
>> +		break;
>> +	}
>>   	default:
>>   		r = -ENOTTY;
>>   	}
>> diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
>> index 2868dd0bba25..a39815184350 100644
>> --- a/arch/s390/kvm/kvm-s390.h
>> +++ b/arch/s390/kvm/kvm-s390.h
>> @@ -250,6 +250,7 @@ int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
>>   int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
>>   		       unsigned long tweak, u16 *rc, u16 *rrc);
>>   int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state);
>> +int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc);
>>   int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
>>   				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
>>   
>> diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
>> index d1635ed50078..9ab8192b9b23 100644
>> --- a/arch/s390/kvm/pv.c
>> +++ b/arch/s390/kvm/pv.c
>> @@ -299,6 +299,22 @@ int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state)
>>   	return 0;
>>   }
>>   
>> +int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc)
>> +{
>> +	struct uv_cb_dump_cpu uvcb = {
>> +		.header.cmd = UVC_CMD_DUMP_CPU,
>> +		.header.len = sizeof(uvcb),
>> +		.cpu_handle = vcpu->arch.pv.handle,
>> +		.dump_area_origin = (u64)buff,
>> +	};
>> +	int cc;
>> +
>> +	cc = uv_call_sched(0, (u64)&uvcb);
> 
> it's a small amount of data, but you use the _sched variant?

Who knows what FW will do :)

> 
> and, why aren't you using the _sched variant in the previous patch (for
> DUMP_COMPLETE)?
> 
> to be clear: I think the right thing is to always use the _sched
> variant unless there is a good reason not to (so please fix the previous
> patch)

Yep, will fix.

It might make sense to switch the functions around and make uv_call 
always do a sched and introduce uv_call_nosched() since that's the 
special case. But that's something for the future.
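
Roughly what I have in mind (untested sketch based on my reading of the
current helpers in arch/s390/include/asm/uv.h, so the retry condition
may not be exact):

	/* reschedule between busy retries by default */
	static inline int uv_call(unsigned long r1, unsigned long r2)
	{
		int cc;

		do {
			cc = __uv_call(r1, r2);
			cond_resched();
		} while (cc > 1);
		return cc;
	}

	/* for the few callers that must not sleep */
	static inline int uv_call_nosched(unsigned long r1, unsigned long r2)
	{
		int cc;

		do {
			cc = __uv_call(r1, r2);
		} while (cc > 1);
		return cc;
	}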

> 
>> +	*rc = uvcb.header.rc;
>> +	*rrc = uvcb.header.rrc;
>> +	return cc;
>> +}
>> +
>>   /* Size of the cache for the storage state dump data. 1MB for now */
>>   #define DUMP_BUFF_LEN HPAGE_SIZE
>>   
>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>> index b34850907291..108bc7b7a71b 100644
>> --- a/include/uapi/linux/kvm.h
>> +++ b/include/uapi/linux/kvm.h
>> @@ -1144,6 +1144,7 @@ struct kvm_ppc_resize_hpt {
>>   #define KVM_CAP_S390_MEM_OP_EXTENSION 211
>>   #define KVM_CAP_PMU_CAPABILITY 212
>>   #define KVM_CAP_DISABLE_QUIRKS2 213
>> +#define KVM_CAP_S390_PROTECTED_DUMP 214
>>   
>>   #ifdef KVM_CAP_IRQ_ROUTING
>>   
>> @@ -1649,6 +1650,7 @@ enum pv_cmd_dmp_id {
>>   	KVM_PV_DUMP_INIT,
>>   	KVM_PV_DUMP_CONFIG_STOR_STATE,
>>   	KVM_PV_DUMP_COMPLETE,
>> +	KVM_PV_DUMP_CPU,
>>   };
>>   
>>   struct kvm_s390_pv_dmp {
>> @@ -2110,4 +2112,7 @@ struct kvm_stats_desc {
>>   /* Available with KVM_CAP_XSAVE2 */
>>   #define KVM_GET_XSAVE2		  _IOR(KVMIO,  0xcf, struct kvm_xsave)
>>   
>> +/* Available with KVM_CAP_S390_PROTECTED_DUMP */
>> +#define KVM_S390_PV_CPU_COMMAND	_IOWR(KVMIO, 0xd0, struct kvm_pv_cmd)
>> +
>>   #endif /* __LINUX_KVM_H */
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 3/9] KVM: s390: pv: Add query interface
  2022-05-09 15:25   ` Claudio Imbrenda
@ 2022-05-10  7:27     ` Janosch Frank
  0 siblings, 0 replies; 23+ messages in thread
From: Janosch Frank @ 2022-05-10  7:27 UTC (permalink / raw)
  To: Claudio Imbrenda; +Cc: kvm, linux-s390, borntraeger

On 5/9/22 17:25, Claudio Imbrenda wrote:
> On Thu, 28 Apr 2022 13:00:56 +0000
> Janosch Frank <frankja@linux.ibm.com> wrote:
> 
>> Some of the query information is already available via sysfs but
>> having a IOCTL makes the information easier to retrieve.
>>
>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>> ---
>>   arch/s390/kvm/kvm-s390.c | 76 ++++++++++++++++++++++++++++++++++++++++
>>   include/uapi/linux/kvm.h | 25 +++++++++++++
>>   2 files changed, 101 insertions(+)
>>
>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>> index 76ad6408cb2c..23352d45a386 100644
>> --- a/arch/s390/kvm/kvm-s390.c
>> +++ b/arch/s390/kvm/kvm-s390.c
>> @@ -2224,6 +2224,42 @@ static int kvm_s390_cpus_to_pv(struct kvm *kvm, u16 *rc, u16 *rrc)
>>   	return r;
>>   }
>>   
>> +/*
>> + * Here we provide user space with a direct interface to query UV
>> + * related data like UV maxima and available features as well as
>> + * feature specific data.
>> + *
>> + * To facilitate future extension of the data structures we'll try to
>> + * write data up to the maximum requested length.
>> + */
>> +static ssize_t kvm_s390_handle_pv_info(struct kvm_s390_pv_info *info)
>> +{
>> +	ssize_t len_min;
>> +
>> +	switch (info->header.id) {
>> +	case KVM_PV_INFO_VM: {
>> +		len_min =  sizeof(info->header) + sizeof(info->vm);
>> +
>> +		if (info->header.len_max < len_min)
>> +			return -EINVAL;
>> +
>> +		memcpy(info->vm.inst_calls_list,
>> +		       uv_info.inst_calls_list,
>> +		       sizeof(uv_info.inst_calls_list));
>> +
>> +		/* It's max cpuidm not max cpus so it's off by one */
> 
> s/cpuidm/cpuid,/ ? (and then also s/cpus/cpus,/)

Sure, will fix

> 
>> +		info->vm.max_cpus = uv_info.max_guest_cpu_id + 1;
>> +		info->vm.max_guests = uv_info.max_num_sec_conf;
>> +		info->vm.max_guest_addr = uv_info.max_sec_stor_addr;
>> +		info->vm.feature_indication = uv_info.uv_feature_indications;
>> +
>> +		return len_min;
>> +	}
>> +	default:
>> +		return -EINVAL;
>> +	}
>> +}
>> +
>>   static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
>>   {
>>   	int r = 0;
>> @@ -2360,6 +2396,46 @@ static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
>>   			     cmd->rc, cmd->rrc);
>>   		break;
>>   	}
>> +	case KVM_PV_INFO: {
>> +		struct kvm_s390_pv_info info = {};
>> +		ssize_t data_len;
>> +
>> +		/*
>> +		 * No need to check the VM protection here.
>> +		 *
>> +		 * Maybe user space wants to query some of the data
>> +		 * when the VM is still unprotected. If we see the
>> +		 * need to fence a new data command we can still
>> +		 * return an error in the info handler.
>> +		 */
>> +
>> +		r = -EFAULT;
>> +		if (copy_from_user(&info, argp, sizeof(info.header)))
>> +			break;
>> +
>> +		r = -EINVAL;
>> +		if (info.header.len_max < sizeof(info.header))
>> +			break;
>> +
>> +		data_len = kvm_s390_handle_pv_info(&info);
>> +		if (data_len < 0) {
>> +			r = data_len;
>> +			break;
>> +		}
>> +		/*
>> +		 * If a data command struct is extended (multiple
>> +		 * times) this can be used to determine how much of it
>> +		 * is valid.
>> +		 */
>> +		info.header.len_written = data_len;
>> +
>> +		r = -EFAULT;
>> +		if (copy_to_user(argp, &info, data_len))
>> +			break;
>> +
>> +		r = 0;
>> +		break;
>> +	}
>>   	default:
>>   		r = -ENOTTY;
>>   	}
>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>> index 91a6fe4e02c0..59e4fb6c7a34 100644
>> --- a/include/uapi/linux/kvm.h
>> +++ b/include/uapi/linux/kvm.h
>> @@ -1645,6 +1645,30 @@ struct kvm_s390_pv_unp {
>>   	__u64 tweak;
>>   };
>>   
>> +enum pv_cmd_info_id {
>> +	KVM_PV_INFO_VM,
>> +};
>> +
>> +struct kvm_s390_pv_info_vm {
>> +	__u64 inst_calls_list[4];
>> +	__u64 max_cpus;
>> +	__u64 max_guests;
>> +	__u64 max_guest_addr;
>> +	__u64 feature_indication;
>> +};
>> +
>> +struct kvm_s390_pv_info_header {
>> +	__u32 id;
>> +	__u32 len_max;
>> +	__u32 len_written;
>> +	__u32 reserved;
>> +};
>> +
>> +struct kvm_s390_pv_info {
>> +	struct kvm_s390_pv_info_header header;
>> +	struct kvm_s390_pv_info_vm vm;
>> +};
>> +
>>   enum pv_cmd_id {
>>   	KVM_PV_ENABLE,
>>   	KVM_PV_DISABLE,
>> @@ -1653,6 +1677,7 @@ enum pv_cmd_id {
>>   	KVM_PV_VERIFY,
>>   	KVM_PV_PREP_RESET,
>>   	KVM_PV_UNSHARE_ALL,
>> +	KVM_PV_INFO,
>>   };
>>   
>>   struct kvm_pv_cmd {
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 7/9] kvm: s390: Add CPU dump functionality
  2022-05-10  7:26     ` Janosch Frank
@ 2022-05-10  9:14       ` Claudio Imbrenda
  0 siblings, 0 replies; 23+ messages in thread
From: Claudio Imbrenda @ 2022-05-10  9:14 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger

On Tue, 10 May 2022 09:26:50 +0200
Janosch Frank <frankja@linux.ibm.com> wrote:

> On 5/9/22 21:11, Claudio Imbrenda wrote:
> > On Thu, 28 Apr 2022 13:01:00 +0000
> > Janosch Frank <frankja@linux.ibm.com> wrote:
> >   
> >> The previous patch introduced the per-VM dump functions now let's
> >> focus on dumping the VCPU state via the newly introduced
> >> KVM_S390_PV_CPU_COMMAND ioctl which mirrors the VM UV ioctl and can be
> >> extended with new commands later.
> >>
> >> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> >> ---
> >>   arch/s390/kvm/kvm-s390.c | 73 ++++++++++++++++++++++++++++++++++++++++
> >>   arch/s390/kvm/kvm-s390.h |  1 +
> >>   arch/s390/kvm/pv.c       | 16 +++++++++
> >>   include/uapi/linux/kvm.h |  5 +++
> >>   4 files changed, 95 insertions(+)
> >>
> >> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> >> index 8984e8db33b4..d15ce38bef14 100644
> >> --- a/arch/s390/kvm/kvm-s390.c
> >> +++ b/arch/s390/kvm/kvm-s390.c
> >> @@ -5149,6 +5149,52 @@ long kvm_arch_vcpu_async_ioctl(struct file *filp,
> >>   	return -ENOIOCTLCMD;
> >>   }
> >>   
> >> +static int kvm_s390_handle_pv_vcpu_dump(struct kvm_vcpu *vcpu,
> >> +					struct kvm_pv_cmd *cmd)
> >> +{
> >> +	struct kvm_s390_pv_dmp dmp;
> >> +	void *data;
> >> +	int ret;
> >> +
> >> +	/* Dump initialization is a prerequisite */
> >> +	if (!vcpu->kvm->arch.pv.dumping)
> >> +		return -EINVAL;
> >> +
> >> +	if (copy_from_user(&dmp, (__u8 __user *)cmd->data, sizeof(dmp)))
> >> +		return -EFAULT;
> >> +
> >> +	/* We only handle this subcmd right now */
> >> +	if (dmp.subcmd != KVM_PV_DUMP_CPU)
> >> +		return -EINVAL;
> >> +
> >> +	/* CPU dump length is the same as create cpu storage donation. */
> >> +	if (dmp.buff_len != uv_info.guest_cpu_stor_len)
> >> +		return -EINVAL;
> >> +
> >> +	data = vzalloc(uv_info.guest_cpu_stor_len);
> >> +	if (!data)
> >> +		return -ENOMEM;
> >> +
> >> +	ret = kvm_s390_pv_dump_cpu(vcpu, data, &cmd->rc, &cmd->rrc);
> >> +
> >> +	VCPU_EVENT(vcpu, 3, "PROTVIRT DUMP CPU %d rc %x rrc %x",
> >> +		   vcpu->vcpu_id, cmd->rc, cmd->rrc);
> >> +
> >> +	if (ret) {
> >> +		vfree(data);
> >> +		return -EINVAL;
> >> +	}
> >> +
> >> +	/* On success copy over the dump data */
> >> +	if (copy_to_user((__u8 __user *)dmp.buff_addr, data, uv_info.guest_cpu_stor_len)) {
> >> +		vfree(data);
> >> +		return -EFAULT;
> >> +	}
> >> +
> >> +	vfree(data);
> >> +	return 0;
> >> +}
> >> +
> >>   long kvm_arch_vcpu_ioctl(struct file *filp,
> >>   			 unsigned int ioctl, unsigned long arg)
> >>   {
> >> @@ -5313,6 +5359,33 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
> >>   					   irq_state.len);
> >>   		break;
> >>   	}
> >> +	case KVM_S390_PV_CPU_COMMAND: {
> >> +		struct kvm_pv_cmd cmd;
> >> +
> >> +		r = -EINVAL;
> >> +		if (!is_prot_virt_host())
> >> +			break;
> >> +
> >> +		r = -EFAULT;
> >> +		if (copy_from_user(&cmd, argp, sizeof(cmd)))
> >> +			break;
> >> +
> >> +		r = -EINVAL;
> >> +		if (cmd.flags)
> >> +			break;
> >> +
> >> +		/* We only handle this cmd right now */
> >> +		if (cmd.cmd != KVM_PV_DUMP)
> >> +			break;
> >> +
> >> +		r = kvm_s390_handle_pv_vcpu_dump(vcpu, &cmd);
> >> +
> >> +		/* Always copy over UV rc / rrc data */
> >> +		if (copy_to_user((__u8 __user *)argp, &cmd.rc,
> >> +				 sizeof(cmd.rc) + sizeof(cmd.rrc)))
> >> +			r = -EFAULT;
> >> +		break;
> >> +	}
> >>   	default:
> >>   		r = -ENOTTY;
> >>   	}
> >> diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
> >> index 2868dd0bba25..a39815184350 100644
> >> --- a/arch/s390/kvm/kvm-s390.h
> >> +++ b/arch/s390/kvm/kvm-s390.h
> >> @@ -250,6 +250,7 @@ int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
> >>   int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
> >>   		       unsigned long tweak, u16 *rc, u16 *rrc);
> >>   int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state);
> >> +int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc);
> >>   int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
> >>   				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
> >>   
> >> diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
> >> index d1635ed50078..9ab8192b9b23 100644
> >> --- a/arch/s390/kvm/pv.c
> >> +++ b/arch/s390/kvm/pv.c
> >> @@ -299,6 +299,22 @@ int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state)
> >>   	return 0;
> >>   }
> >>   
> >> +int kvm_s390_pv_dump_cpu(struct kvm_vcpu *vcpu, void *buff, u16 *rc, u16 *rrc)
> >> +{
> >> +	struct uv_cb_dump_cpu uvcb = {
> >> +		.header.cmd = UVC_CMD_DUMP_CPU,
> >> +		.header.len = sizeof(uvcb),
> >> +		.cpu_handle = vcpu->arch.pv.handle,
> >> +		.dump_area_origin = (u64)buff,
> >> +	};
> >> +	int cc;
> >> +
> >> +	cc = uv_call_sched(0, (u64)&uvcb);  
> > 
> > it's a small amount of data, but you use the _sched variant?  
> 
> Who knows what FW will do :)
> 
> > 
> > and, why aren't you using the _sched variant in the previous patch (for
> > DUMP_COMPLETE)?
> > 
> > to be clear: I think the right thing is to always use the _sched
> > variant unless there is a good reason not to (so please fix the previous
> > patch)  
> 
> Yep, will fix.
> 
> It might make sense to switch the functions around and make uv_call 
> always do a sched and introduce uv_call_nosched() since that's the 
> special case. But that's something for the future.

I was actually thinking the same thing :)

> 
> >   
> >> +	*rc = uvcb.header.rc;
> >> +	*rrc = uvcb.header.rrc;
> >> +	return cc;
> >> +}
> >> +
> >>   /* Size of the cache for the storage state dump data. 1MB for now */
> >>   #define DUMP_BUFF_LEN HPAGE_SIZE
> >>   
> >> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> >> index b34850907291..108bc7b7a71b 100644
> >> --- a/include/uapi/linux/kvm.h
> >> +++ b/include/uapi/linux/kvm.h
> >> @@ -1144,6 +1144,7 @@ struct kvm_ppc_resize_hpt {
> >>   #define KVM_CAP_S390_MEM_OP_EXTENSION 211
> >>   #define KVM_CAP_PMU_CAPABILITY 212
> >>   #define KVM_CAP_DISABLE_QUIRKS2 213
> >> +#define KVM_CAP_S390_PROTECTED_DUMP 214
> >>   
> >>   #ifdef KVM_CAP_IRQ_ROUTING
> >>   
> >> @@ -1649,6 +1650,7 @@ enum pv_cmd_dmp_id {
> >>   	KVM_PV_DUMP_INIT,
> >>   	KVM_PV_DUMP_CONFIG_STOR_STATE,
> >>   	KVM_PV_DUMP_COMPLETE,
> >> +	KVM_PV_DUMP_CPU,
> >>   };
> >>   
> >>   struct kvm_s390_pv_dmp {
> >> @@ -2110,4 +2112,7 @@ struct kvm_stats_desc {
> >>   /* Available with KVM_CAP_XSAVE2 */
> >>   #define KVM_GET_XSAVE2		  _IOR(KVMIO,  0xcf, struct kvm_xsave)
> >>   
> >> +/* Available with KVM_CAP_S390_PROTECTED_DUMP */
> >> +#define KVM_S390_PV_CPU_COMMAND	_IOWR(KVMIO, 0xd0, struct kvm_pv_cmd)
> >> +
> >>   #endif /* __LINUX_KVM_H */  
> >   
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 5/9] KVM: s390: pv: Add query dump information
  2022-05-09 15:28   ` Claudio Imbrenda
@ 2022-05-10 12:36     ` Janosch Frank
  2022-05-10 13:28       ` Claudio Imbrenda
  0 siblings, 1 reply; 23+ messages in thread
From: Janosch Frank @ 2022-05-10 12:36 UTC (permalink / raw)
  To: Claudio Imbrenda; +Cc: kvm, linux-s390, borntraeger

On 5/9/22 17:28, Claudio Imbrenda wrote:
> On Thu, 28 Apr 2022 13:00:58 +0000
> Janosch Frank <frankja@linux.ibm.com> wrote:
> 
>> The dump API requires userspace to provide buffers into which we will
>> store data. The dump information added in this patch tells userspace
>> how big those buffers need to be.
>>
>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>> ---
>>   arch/s390/kvm/kvm-s390.c | 11 +++++++++++
>>   include/uapi/linux/kvm.h | 12 +++++++++++-
>>   2 files changed, 22 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>> index 23352d45a386..e327a5b8ef78 100644
>> --- a/arch/s390/kvm/kvm-s390.c
>> +++ b/arch/s390/kvm/kvm-s390.c
>> @@ -2255,6 +2255,17 @@ static ssize_t kvm_s390_handle_pv_info(struct kvm_s390_pv_info *info)
>>   
>>   		return len_min;
>>   	}
>> +	case KVM_PV_INFO_DUMP: {
>> +		len_min =  sizeof(info->header) + sizeof(info->dump);
> 
> so the output will have some zero-padded stuff at the end?

In which situation?

> 
>> +
>> +		if (info->header.len_max < len_min)
>> +			return -EINVAL;
>> +
>> +		info->dump.dump_cpu_buffer_len = uv_info.guest_cpu_stor_len;
>> +		info->dump.dump_config_mem_buffer_per_1m = uv_info.conf_dump_storage_state_len;
>> +		info->dump.dump_config_finalize_len = uv_info.conf_dump_finalize_len;
>> +		return len_min;
>> +	}
>>   	default:
>>   		return -EINVAL;
>>   	}
>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>> index 59e4fb6c7a34..2eba89d7ec29 100644
>> --- a/include/uapi/linux/kvm.h
>> +++ b/include/uapi/linux/kvm.h
>> @@ -1647,6 +1647,13 @@ struct kvm_s390_pv_unp {
>>   
>>   enum pv_cmd_info_id {
>>   	KVM_PV_INFO_VM,
>> +	KVM_PV_INFO_DUMP,
>> +};
>> +
>> +struct kvm_s390_pv_info_dump {
>> +	__u64 dump_cpu_buffer_len;
>> +	__u64 dump_config_mem_buffer_per_1m;
>> +	__u64 dump_config_finalize_len;
>>   };
>>   
>>   struct kvm_s390_pv_info_vm {
>> @@ -1666,7 +1673,10 @@ struct kvm_s390_pv_info_header {
>>   
>>   struct kvm_s390_pv_info {
>>   	struct kvm_s390_pv_info_header header;
>> -	struct kvm_s390_pv_info_vm vm;
>> +	union {
>> +		struct kvm_s390_pv_info_dump dump;
>> +		struct kvm_s390_pv_info_vm vm;
>> +	};
>>   };
>>   
>>   enum pv_cmd_id {
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 5/9] KVM: s390: pv: Add query dump information
  2022-05-10 12:36     ` Janosch Frank
@ 2022-05-10 13:28       ` Claudio Imbrenda
  0 siblings, 0 replies; 23+ messages in thread
From: Claudio Imbrenda @ 2022-05-10 13:28 UTC (permalink / raw)
  To: Janosch Frank; +Cc: kvm, linux-s390, borntraeger

On Tue, 10 May 2022 14:36:14 +0200
Janosch Frank <frankja@linux.ibm.com> wrote:

> On 5/9/22 17:28, Claudio Imbrenda wrote:
> > On Thu, 28 Apr 2022 13:00:58 +0000
> > Janosch Frank <frankja@linux.ibm.com> wrote:
> >   
> >> The dump API requires userspace to provide buffers into which we will
> >> store data. The dump information added in this patch tells userspace
> >> how big those buffers need to be.
> >>
> >> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> >> ---
> >>   arch/s390/kvm/kvm-s390.c | 11 +++++++++++
> >>   include/uapi/linux/kvm.h | 12 +++++++++++-
> >>   2 files changed, 22 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> >> index 23352d45a386..e327a5b8ef78 100644
> >> --- a/arch/s390/kvm/kvm-s390.c
> >> +++ b/arch/s390/kvm/kvm-s390.c
> >> @@ -2255,6 +2255,17 @@ static ssize_t kvm_s390_handle_pv_info(struct kvm_s390_pv_info *info)
> >>   
> >>   		return len_min;
> >>   	}
> >> +	case KVM_PV_INFO_DUMP: {
> >> +		len_min =  sizeof(info->header) + sizeof(info->dump);  
> > 
> > so the output will have some zero-padded stuff at the end?  
> 
> In which situation?

for example when I don't read the patch correctly (oops)

> 
> >   
> >> +
> >> +		if (info->header.len_max < len_min)
> >> +			return -EINVAL;
> >> +
> >> +		info->dump.dump_cpu_buffer_len = uv_info.guest_cpu_stor_len;
> >> +		info->dump.dump_config_mem_buffer_per_1m = uv_info.conf_dump_storage_state_len;
> >> +		info->dump.dump_config_finalize_len = uv_info.conf_dump_finalize_len;
> >> +		return len_min;
> >> +	}
> >>   	default:
> >>   		return -EINVAL;
> >>   	}
> >> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> >> index 59e4fb6c7a34..2eba89d7ec29 100644
> >> --- a/include/uapi/linux/kvm.h
> >> +++ b/include/uapi/linux/kvm.h
> >> @@ -1647,6 +1647,13 @@ struct kvm_s390_pv_unp {
> >>   
> >>   enum pv_cmd_info_id {
> >>   	KVM_PV_INFO_VM,
> >> +	KVM_PV_INFO_DUMP,
> >> +};
> >> +
> >> +struct kvm_s390_pv_info_dump {
> >> +	__u64 dump_cpu_buffer_len;
> >> +	__u64 dump_config_mem_buffer_per_1m;
> >> +	__u64 dump_config_finalize_len;
> >>   };
> >>   
> >>   struct kvm_s390_pv_info_vm {
> >> @@ -1666,7 +1673,10 @@ struct kvm_s390_pv_info_header {
> >>   
> >>   struct kvm_s390_pv_info {
> >>   	struct kvm_s390_pv_info_header header;
> >> -	struct kvm_s390_pv_info_vm vm;
> >> +	union {
> >> +		struct kvm_s390_pv_info_dump dump;
> >> +		struct kvm_s390_pv_info_vm vm;
> >> +	};
> >>   };
> >>   
> >>   enum pv_cmd_id {  
> >   
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 6/9] kvm: s390: Add configuration dump functionality
  2022-05-09 17:51   ` Claudio Imbrenda
@ 2022-05-10 14:07     ` Janosch Frank
  0 siblings, 0 replies; 23+ messages in thread
From: Janosch Frank @ 2022-05-10 14:07 UTC (permalink / raw)
  To: Claudio Imbrenda; +Cc: kvm, linux-s390, borntraeger

On 5/9/22 19:51, Claudio Imbrenda wrote:
> On Thu, 28 Apr 2022 13:00:59 +0000
> Janosch Frank <frankja@linux.ibm.com> wrote:
> 
>> Sometimes dumping inside of a VM fails, is unavailable or doesn't
>> yield the required data. For these occasions we dump the VM from the
>> outside, writing memory and cpu data to a file.
>>
>> Up to now PV guests only supported dumping from the inside of the
>> guest through dumpers like KDUMP. A PV guest can be dumped from the
>> hypervisor but the data will be stale and / or encrypted.
>>
>> To get the actual state of the PV VM we need the help of the
>> Ultravisor who safeguards the VM state. New UV calls have been added
>> to initialize the dump, dump storage state data, dump cpu data and
>> complete the dump process. We expose these calls in this patch via a
>> new UV ioctl command.
>>
>> The sensitive parts of the dump data are encrypted, the dump key is
>> derived from the Customer Communication Key (CCK). This ensures that
>> only the owner of the VM who has the CCK can decrypt the dump data.
>>
>> The memory is dumped / read via a normal export call and a re-import
>> after the dump initialization is not needed (no re-encryption with a
>> dump key).
>>
>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>> ---
[...]
>> +/*
>> + * kvm_s390_pv_dump_stor_state
>> + *
>> + * @kvm: pointer to the guest's KVM struct
>> + * @buff_user: Userspace pointer where we will write the results to
>> + * @gaddr: Starting absolute guest address for which the storage state
>> + *         is requested. This value will be updated with the last
>> + *         address for which data was written when returning to
>> + *         userspace.
>> + * @buff_user_len: Length of the buff_user buffer
>> + * @rc: Pointer to where the uvcb return code is stored
>> + * @rrc: Pointer to where the uvcb return reason code is stored
>> + *
>> + * Return:
>> + *  0 on success
>> + *  -ENOMEM if allocating the cache fails
>> + *  -EINVAL if gaddr is not aligned to 1MB
>> + *  -EINVAL if buff_user_len is not aligned to uv_info.conf_dump_storage_state_len
>> + *  -EINVAL if the UV call fails, rc and rrc will be set in this case
>> + *  -EFAULT if copying the result to buff_user failed
>> + */
>> +int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
>> +				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc)
>> +{
>> +	struct uv_cb_dump_stor_state uvcb = {
>> +		.header.cmd = UVC_CMD_DUMP_CONF_STOR_STATE,
>> +		.header.len = sizeof(uvcb),
>> +		.config_handle = kvm->arch.pv.handle,
>> +		.gaddr = *gaddr,
>> +		.dump_area_origin = 0,
>> +	};
>> +	size_t buff_kvm_size;
>> +	size_t size_done = 0;
>> +	u8 *buff_kvm = NULL;
>> +	int cc, ret;
>> +
>> +	ret = -EINVAL;
>> +	/* UV call processes 1MB guest storage chunks at a time */
>> +	if (*gaddr & ~HPAGE_MASK)
>> +		goto out;
>> +
>> +	/*
>> +	 * We provide the storage state for 1MB chunks of guest
>> +	 * storage. The buffer will need to be aligned to
>> +	 * conf_dump_storage_state_len so we don't end on a partial
>> +	 * chunk.
>> +	 */
>> +	if (!buff_user_len ||
>> +	    buff_user_len & (uv_info.conf_dump_storage_state_len - 1))
> 
> why not use the IS_ALIGNED macro?

Habits.
I'll fix this one and the one above.

> 
>> +		goto out;
>> +
>> +	/*
>> +	 * Allocate a buffer from which we will later copy to the user process.
>> +	 *
>> +	 * We don't want userspace to dictate our buffer size so we limit it to DUMP_BUFF_LEN.
>> +	 */
>> +	ret = -ENOMEM;
>> +	buff_kvm_size = buff_user_len <= DUMP_BUFF_LEN ? buff_user_len : DUMP_BUFF_LEN;

This will be converted to min() to make it more readable.
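
I.e. something like (untested):

	buff_kvm_size = min_t(u64, buff_user_len, DUMP_BUFF_LEN);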

>> +	buff_kvm = vzalloc(buff_kvm_size);
>> +	if (!buff_kvm)
>> +		goto out;
>> +
>> +	ret = 0;
>> +	uvcb.dump_area_origin = (u64)buff_kvm;
>> +	/* We will loop until the user buffer is filled or an error occurs */
>> +	do {
>> +		/* Get a page of data */
> 
> are you getting a page or a block of size conf_dump_storage_state_len ?

Will be changed to:
/* Get 1MB worth of guest storage state data */


I think that comment has historical reasons: we once discussed always 
writing one page at a time instead of 1MB worth of guest storage state.

> 
>> +		cc = uv_call_sched(0, (u64)&uvcb);
>> +
>> +		/* All or nothing */
>> +		if (cc) {
>> +			ret = -EINVAL;
>> +			break;
>> +		}
>> +
>> +		size_done += uv_info.conf_dump_storage_state_len;
>> +		uvcb.dump_area_origin += uv_info.conf_dump_storage_state_len;
>> +		uvcb.gaddr += HPAGE_SIZE;
>> +		buff_user_len -= PAGE_SIZE;
> 
> same here ^ (should it be -= uv_info.conf_dump_storage_state_len ?)

Yes

> 
>> +
>> +		/* KVM Buffer full, time to copy to the process */
>> +		if (!buff_user_len ||
>> +		    uvcb.dump_area_origin == (uintptr_t)buff_kvm + buff_kvm_size) {
>> +
> 
> why not  ... || size_done == DUMP_BUFF_LEN ?
> 
>> +			if (copy_to_user(buff_user, buff_kvm,
>> +					 uvcb.dump_area_origin - (uintptr_t)buff_kvm)) {
> 
> aren't you trying to copy size_done bytes?
> 
>> +				ret = -EFAULT;
>> +				break;
>> +			}
>> +
>> +			buff_user += size_done;
>> +			size_done = 0;
>> +			uvcb.dump_area_origin = (u64)buff_kvm;
>> +		}
>> +	} while (buff_user_len);
>> +
>> +	/* Report back where we ended dumping */
>> +	*gaddr = uvcb.gaddr;
>> +
>> +	/* Lets only log errors, we don't want to spam */
>> +out:
>> +	if (ret)
>> +		KVM_UV_EVENT(kvm, 3,
>> +			     "PROTVIRT DUMP STORAGE STATE: addr %llx ret %d, uvcb rc %x rrc %x",
>> +			     uvcb.gaddr, ret, uvcb.header.rc, uvcb.header.rrc);
>> +	*rc = uvcb.header.rc;
>> +	*rrc = uvcb.header.rrc;
>> +	vfree(buff_kvm);
>> +
>> +	return ret;
>> +}
>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>> index 2eba89d7ec29..b34850907291 100644
>> --- a/include/uapi/linux/kvm.h
>> +++ b/include/uapi/linux/kvm.h
>> @@ -1645,6 +1645,20 @@ struct kvm_s390_pv_unp {
>>   	__u64 tweak;
>>   };
>>   
>> +enum pv_cmd_dmp_id {
>> +	KVM_PV_DUMP_INIT,
>> +	KVM_PV_DUMP_CONFIG_STOR_STATE,
>> +	KVM_PV_DUMP_COMPLETE,
>> +};
>> +
>> +struct kvm_s390_pv_dmp {
>> +	__u64 subcmd;
>> +	__u64 buff_addr;
>> +	__u64 buff_len;
>> +	__u64 gaddr;		/* For dump storage state */
>> +	__u64 reserved[4];
>> +};
>> +
>>   enum pv_cmd_info_id {
>>   	KVM_PV_INFO_VM,
>>   	KVM_PV_INFO_DUMP,
>> @@ -1688,6 +1702,7 @@ enum pv_cmd_id {
>>   	KVM_PV_PREP_RESET,
>>   	KVM_PV_UNSHARE_ALL,
>>   	KVM_PV_INFO,
>> +	KVM_PV_DUMP,
>>   };
>>   
>>   struct kvm_pv_cmd {
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 6/9] kvm: s390: Add configuration dump functionality
  2022-02-23  9:20 ` [PATCH 6/9] kvm: s390: Add configuration dump functionality Janosch Frank
  2022-02-23 18:13   ` kernel test robot
@ 2022-02-23 19:25   ` kernel test robot
  1 sibling, 0 replies; 23+ messages in thread
From: kernel test robot @ 2022-02-23 19:25 UTC (permalink / raw)
  To: Janosch Frank, kvm
  Cc: llvm, kbuild-all, linux-s390, imbrenda, david, borntraeger

Hi Janosch,

I love your patch! Perhaps something to improve:

[auto build test WARNING on kvms390/next]
[also build test WARNING on next-20220222]
[cannot apply to kvm/master s390/features v5.17-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Janosch-Frank/kvm-s390-Add-PV-dump-support/20220223-172213
base:   https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git next
config: s390-randconfig-c005-20220223 (https://download.01.org/0day-ci/archive/20220224/202202240358.kEydOdbz-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project d271fc04d5b97b12e6b797c6067d3c96a8d7470e)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install s390 cross compiling tool for clang build
        # apt-get install binutils-s390x-linux-gnu
        # https://github.com/0day-ci/linux/commit/63a2029ece7e8fee92aa5ad277e2cbd8b13b7e6b
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Janosch-Frank/kvm-s390-Add-PV-dump-support/20220223-172213
        git checkout 63a2029ece7e8fee92aa5ad277e2cbd8b13b7e6b
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=s390 SHELL=/bin/bash arch/s390/kvm/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   In file included from arch/s390/kvm/pv.c:9:
   In file included from include/linux/kvm_host.h:41:
   In file included from include/linux/kvm_para.h:5:
   In file included from include/uapi/linux/kvm_para.h:37:
   In file included from arch/s390/include/asm/kvm_para.h:25:
   In file included from arch/s390/include/asm/diag.h:12:
   In file included from include/linux/if_ether.h:19:
   In file included from include/linux/skbuff.h:31:
   In file included from include/linux/dma-mapping.h:10:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/s390/include/asm/io.h:75:
   include/asm-generic/io.h:464:31: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __raw_readb(PCI_IOBASE + addr);
                             ~~~~~~~~~~ ^
   include/asm-generic/io.h:477:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __le16_to_cpu((__le16 __force)__raw_readw(PCI_IOBASE + addr));
                                                           ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/big_endian.h:37:59: note: expanded from macro '__le16_to_cpu'
   #define __le16_to_cpu(x) __swab16((__force __u16)(__le16)(x))
                                                             ^
   include/uapi/linux/swab.h:102:54: note: expanded from macro '__swab16'
   #define __swab16(x) (__u16)__builtin_bswap16((__u16)(x))
                                                        ^
   In file included from arch/s390/kvm/pv.c:9:
   In file included from include/linux/kvm_host.h:41:
   In file included from include/linux/kvm_para.h:5:
   In file included from include/uapi/linux/kvm_para.h:37:
   In file included from arch/s390/include/asm/kvm_para.h:25:
   In file included from arch/s390/include/asm/diag.h:12:
   In file included from include/linux/if_ether.h:19:
   In file included from include/linux/skbuff.h:31:
   In file included from include/linux/dma-mapping.h:10:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/s390/include/asm/io.h:75:
   include/asm-generic/io.h:490:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr));
                                                           ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/big_endian.h:35:59: note: expanded from macro '__le32_to_cpu'
   #define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
                                                             ^
   include/uapi/linux/swab.h:115:54: note: expanded from macro '__swab32'
   #define __swab32(x) (__u32)__builtin_bswap32((__u32)(x))
                                                        ^
   In file included from arch/s390/kvm/pv.c:9:
   In file included from include/linux/kvm_host.h:41:
   In file included from include/linux/kvm_para.h:5:
   In file included from include/uapi/linux/kvm_para.h:37:
   In file included from arch/s390/include/asm/kvm_para.h:25:
   In file included from arch/s390/include/asm/diag.h:12:
   In file included from include/linux/if_ether.h:19:
   In file included from include/linux/skbuff.h:31:
   In file included from include/linux/dma-mapping.h:10:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/s390/include/asm/io.h:75:
   include/asm-generic/io.h:501:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writeb(value, PCI_IOBASE + addr);
                               ~~~~~~~~~~ ^
   include/asm-generic/io.h:511:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr);
                                                         ~~~~~~~~~~ ^
   include/asm-generic/io.h:521:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr);
                                                         ~~~~~~~~~~ ^
   include/asm-generic/io.h:609:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           readsb(PCI_IOBASE + addr, buffer, count);
                  ~~~~~~~~~~ ^
   include/asm-generic/io.h:617:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           readsw(PCI_IOBASE + addr, buffer, count);
                  ~~~~~~~~~~ ^
   include/asm-generic/io.h:625:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           readsl(PCI_IOBASE + addr, buffer, count);
                  ~~~~~~~~~~ ^
   include/asm-generic/io.h:634:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           writesb(PCI_IOBASE + addr, buffer, count);
                   ~~~~~~~~~~ ^
   include/asm-generic/io.h:643:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           writesw(PCI_IOBASE + addr, buffer, count);
                   ~~~~~~~~~~ ^
   include/asm-generic/io.h:652:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           writesl(PCI_IOBASE + addr, buffer, count);
                   ~~~~~~~~~~ ^
   arch/s390/kvm/pv.c:369:6: warning: variable 'cc' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
           if (!buff_kvm)
               ^~~~~~~~~
   arch/s390/kvm/pv.c:411:6: note: uninitialized use occurs here
           if (cc || ret)
               ^~
   arch/s390/kvm/pv.c:369:2: note: remove the 'if' if its condition is always false
           if (!buff_kvm)
           ^~~~~~~~~~~~~~
   arch/s390/kvm/pv.c:357:6: warning: variable 'cc' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
           if (!buff_user_len ||
               ^~~~~~~~~~~~~~~~~
   arch/s390/kvm/pv.c:411:6: note: uninitialized use occurs here
           if (cc || ret)
               ^~
   arch/s390/kvm/pv.c:357:2: note: remove the 'if' if its condition is always false
           if (!buff_user_len ||
           ^~~~~~~~~~~~~~~~~~~~~
>> arch/s390/kvm/pv.c:357:6: warning: variable 'cc' is used uninitialized whenever '||' condition is true [-Wsometimes-uninitialized]
           if (!buff_user_len ||
               ^~~~~~~~~~~~~~
   arch/s390/kvm/pv.c:411:6: note: uninitialized use occurs here
           if (cc || ret)
               ^~
   arch/s390/kvm/pv.c:357:6: note: remove the '||' if its condition is always false
           if (!buff_user_len ||
               ^~~~~~~~~~~~~~~~~
   arch/s390/kvm/pv.c:348:6: warning: variable 'cc' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
           if (*gaddr & ~HPAGE_MASK)
               ^~~~~~~~~~~~~~~~~~~~
   arch/s390/kvm/pv.c:411:6: note: uninitialized use occurs here
           if (cc || ret)
               ^~
   arch/s390/kvm/pv.c:348:2: note: remove the 'if' if its condition is always false
           if (*gaddr & ~HPAGE_MASK)
           ^~~~~~~~~~~~~~~~~~~~~~~~~
   arch/s390/kvm/pv.c:344:8: note: initialize the variable 'cc' to silence this warning
           int cc, ret;
                 ^
                  = 0
   16 warnings generated.


vim +357 arch/s390/kvm/pv.c

   309	
   310	/*
   311	 * kvm_s390_pv_dump_stor_state
   312	 *
   313	 * @kvm: pointer to the guest's KVM struct
   314	 * @buff_user: Userspace pointer where we will write the results to
   315	 * @gaddr: Starting absolute guest address for which the storage state
   316	 *         is requested. This value will be updated with the last
   317	 *         address for which data was written when returning to
   318	 *         userspace.
   319	 * @buff_user_len: Length of the buff_user buffer
   320	 * @rc: Pointer to where the uvcb return code is stored
   321	 * @rrc: Pointer to where the uvcb return reason code is stored
   322	 *
   323	 * Return:
   324	 *  0 on success
   325	 *  -ENOMEM if allocating the cache fails
   326	 *  -EINVAL if gaddr is not aligned to 1MB
   327	 *  -EINVAL if buff_user_len is not aligned to uv_info.conf_dump_storage_state_len
   328	 *  -EINVAL if the UV call fails, rc and rrc will be set in this case
   329	 *  -EFAULT if copying the result to buff_user failed
   330	 */
   331	int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
   332					u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc)
   333	{
   334		struct uv_cb_dump_stor_state uvcb = {
   335			.header.cmd = UVC_CMD_DUMP_CONF_STOR_STATE,
   336			.header.len = sizeof(uvcb),
   337			.config_handle = kvm->arch.pv.handle,
   338			.gaddr = *gaddr,
   339			.dump_area_origin = 0,
   340		};
   341		size_t buff_kvm_size;
   342		size_t size_done = 0;
   343		u8 *buff_kvm = NULL;
   344		int cc, ret;
   345	
   346		ret = -EINVAL;
   347		/* UV call processes 1MB guest storage chunks at a time */
   348		if (*gaddr & ~HPAGE_MASK)
   349			goto out;
   350	
   351		/*
   352		 * We provide the storage state for 1MB chunks of guest
   353		 * storage. The buffer will need to be aligned to
   354		 * conf_dump_storage_state_len so we don't end on a partial
   355		 * chunk.
   356		 */
 > 357		if (!buff_user_len ||

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 6/9] kvm: s390: Add configuration dump functionality
  2022-02-23  9:20 ` [PATCH 6/9] kvm: s390: Add configuration dump functionality Janosch Frank
@ 2022-02-23 18:13   ` kernel test robot
  2022-02-23 19:25   ` kernel test robot
  1 sibling, 0 replies; 23+ messages in thread
From: kernel test robot @ 2022-02-23 18:13 UTC (permalink / raw)
  To: Janosch Frank, kvm
  Cc: llvm, kbuild-all, linux-s390, imbrenda, david, borntraeger

Hi Janosch,

I love your patch! Perhaps something to improve:

[auto build test WARNING on kvms390/next]
[also build test WARNING on next-20220222]
[cannot apply to kvm/master s390/features v5.17-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Janosch-Frank/kvm-s390-Add-PV-dump-support/20220223-172213
base:   https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git next
config: s390-randconfig-r044-20220223 (https://download.01.org/0day-ci/archive/20220224/202202240208.nprfvRMA-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project d271fc04d5b97b12e6b797c6067d3c96a8d7470e)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install s390 cross compiling tool for clang build
        # apt-get install binutils-s390x-linux-gnu
        # https://github.com/0day-ci/linux/commit/63a2029ece7e8fee92aa5ad277e2cbd8b13b7e6b
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Janosch-Frank/kvm-s390-Add-PV-dump-support/20220223-172213
        git checkout 63a2029ece7e8fee92aa5ad277e2cbd8b13b7e6b
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=s390 SHELL=/bin/bash arch/s390/kvm/

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   In file included from arch/s390/kvm/pv.c:9:
   In file included from include/linux/kvm_host.h:41:
   In file included from include/linux/kvm_para.h:5:
   In file included from include/uapi/linux/kvm_para.h:37:
   In file included from arch/s390/include/asm/kvm_para.h:25:
   In file included from arch/s390/include/asm/diag.h:12:
   In file included from include/linux/if_ether.h:19:
   In file included from include/linux/skbuff.h:31:
   In file included from include/linux/dma-mapping.h:10:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/s390/include/asm/io.h:75:
   include/asm-generic/io.h:464:31: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __raw_readb(PCI_IOBASE + addr);
                             ~~~~~~~~~~ ^
   include/asm-generic/io.h:477:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __le16_to_cpu((__le16 __force)__raw_readw(PCI_IOBASE + addr));
                                                           ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/big_endian.h:37:59: note: expanded from macro '__le16_to_cpu'
   #define __le16_to_cpu(x) __swab16((__force __u16)(__le16)(x))
                                                             ^
   include/uapi/linux/swab.h:102:54: note: expanded from macro '__swab16'
   #define __swab16(x) (__u16)__builtin_bswap16((__u16)(x))
                                                        ^
   In file included from arch/s390/kvm/pv.c:9:
   In file included from include/linux/kvm_host.h:41:
   In file included from include/linux/kvm_para.h:5:
   In file included from include/uapi/linux/kvm_para.h:37:
   In file included from arch/s390/include/asm/kvm_para.h:25:
   In file included from arch/s390/include/asm/diag.h:12:
   In file included from include/linux/if_ether.h:19:
   In file included from include/linux/skbuff.h:31:
   In file included from include/linux/dma-mapping.h:10:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/s390/include/asm/io.h:75:
   include/asm-generic/io.h:490:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr));
                                                           ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/big_endian.h:35:59: note: expanded from macro '__le32_to_cpu'
   #define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
                                                             ^
   include/uapi/linux/swab.h:115:54: note: expanded from macro '__swab32'
   #define __swab32(x) (__u32)__builtin_bswap32((__u32)(x))
                                                        ^
   In file included from arch/s390/kvm/pv.c:9:
   In file included from include/linux/kvm_host.h:41:
   In file included from include/linux/kvm_para.h:5:
   In file included from include/uapi/linux/kvm_para.h:37:
   In file included from arch/s390/include/asm/kvm_para.h:25:
   In file included from arch/s390/include/asm/diag.h:12:
   In file included from include/linux/if_ether.h:19:
   In file included from include/linux/skbuff.h:31:
   In file included from include/linux/dma-mapping.h:10:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/s390/include/asm/io.h:75:
   include/asm-generic/io.h:501:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writeb(value, PCI_IOBASE + addr);
                               ~~~~~~~~~~ ^
   include/asm-generic/io.h:511:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr);
                                                         ~~~~~~~~~~ ^
   include/asm-generic/io.h:521:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr);
                                                         ~~~~~~~~~~ ^
   include/asm-generic/io.h:609:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           readsb(PCI_IOBASE + addr, buffer, count);
                  ~~~~~~~~~~ ^
   include/asm-generic/io.h:617:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           readsw(PCI_IOBASE + addr, buffer, count);
                  ~~~~~~~~~~ ^
   include/asm-generic/io.h:625:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           readsl(PCI_IOBASE + addr, buffer, count);
                  ~~~~~~~~~~ ^
   include/asm-generic/io.h:634:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           writesb(PCI_IOBASE + addr, buffer, count);
                   ~~~~~~~~~~ ^
   include/asm-generic/io.h:643:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           writesw(PCI_IOBASE + addr, buffer, count);
                   ~~~~~~~~~~ ^
   include/asm-generic/io.h:652:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           writesl(PCI_IOBASE + addr, buffer, count);
                   ~~~~~~~~~~ ^
>> arch/s390/kvm/pv.c:369:2: warning: variable 'cc' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
           if (!buff_kvm)
           ^~~~~~~~~~~~~~
   include/linux/compiler.h:56:28: note: expanded from macro 'if'
   #define if(cond, ...) if ( __trace_if_var( !!(cond , ## __VA_ARGS__) ) )
                              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/compiler.h:58:30: note: expanded from macro '__trace_if_var'
   #define __trace_if_var(cond) (__builtin_constant_p(cond) ? (cond) : __trace_if_value(cond))
                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/s390/kvm/pv.c:419:9: note: uninitialized use occurs here
           return cc ? -EINVAL : 0;
                  ^~
   arch/s390/kvm/pv.c:369:2: note: remove the 'if' if its condition is always false
           if (!buff_kvm)
           ^~~~~~~~~~~~~~
   include/linux/compiler.h:56:23: note: expanded from macro 'if'
   #define if(cond, ...) if ( __trace_if_var( !!(cond , ## __VA_ARGS__) ) )
                         ^
   arch/s390/kvm/pv.c:357:2: warning: variable 'cc' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
           if (!buff_user_len ||
           ^~~~~~~~~~~~~~~~~~~~~
   include/linux/compiler.h:56:28: note: expanded from macro 'if'
   #define if(cond, ...) if ( __trace_if_var( !!(cond , ## __VA_ARGS__) ) )
                              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/compiler.h:58:30: note: expanded from macro '__trace_if_var'
   #define __trace_if_var(cond) (__builtin_constant_p(cond) ? (cond) : __trace_if_value(cond))
                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/s390/kvm/pv.c:419:9: note: uninitialized use occurs here
           return cc ? -EINVAL : 0;
                  ^~
   arch/s390/kvm/pv.c:357:2: note: remove the 'if' if its condition is always false
           if (!buff_user_len ||
           ^~~~~~~~~~~~~~~~~~~~~
   include/linux/compiler.h:56:23: note: expanded from macro 'if'
   #define if(cond, ...) if ( __trace_if_var( !!(cond , ## __VA_ARGS__) ) )
                         ^
   arch/s390/kvm/pv.c:348:2: warning: variable 'cc' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
           if (*gaddr & ~HPAGE_MASK)
           ^~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/compiler.h:56:28: note: expanded from macro 'if'
   #define if(cond, ...) if ( __trace_if_var( !!(cond , ## __VA_ARGS__) ) )
                              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/compiler.h:58:30: note: expanded from macro '__trace_if_var'
   #define __trace_if_var(cond) (__builtin_constant_p(cond) ? (cond) : __trace_if_value(cond))
                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   arch/s390/kvm/pv.c:419:9: note: uninitialized use occurs here
           return cc ? -EINVAL : 0;
                  ^~
   arch/s390/kvm/pv.c:348:2: note: remove the 'if' if its condition is always false
           if (*gaddr & ~HPAGE_MASK)
           ^~~~~~~~~~~~~~~~~~~~~~~~~
   include/linux/compiler.h:56:23: note: expanded from macro 'if'
   #define if(cond, ...) if ( __trace_if_var( !!(cond , ## __VA_ARGS__) ) )
                         ^
   arch/s390/kvm/pv.c:344:8: note: initialize the variable 'cc' to silence this warning
           int cc, ret;
                 ^
                  = 0
   15 warnings generated.


vim +369 arch/s390/kvm/pv.c

   309	
   310	/*
   311	 * kvm_s390_pv_dump_stor_state
   312	 *
   313	 * @kvm: pointer to the guest's KVM struct
   314	 * @buff_user: Userspace pointer where we will write the results to
   315	 * @gaddr: Starting absolute guest address for which the storage state
   316	 *         is requested. This value will be updated with the last
   317	 *         address for which data was written when returning to
   318	 *         userspace.
   319	 * @buff_user_len: Length of the buff_user buffer
   320	 * @rc: Pointer to where the uvcb return code is stored
   321	 * @rrc: Pointer to where the uvcb return reason code is stored
   322	 *
   323	 * Return:
   324	 *  0 on success
   325	 *  -ENOMEM if allocating the cache fails
   326	 *  -EINVAL if gaddr is not aligned to 1MB
   327	 *  -EINVAL if buff_user_len is not aligned to uv_info.conf_dump_storage_state_len
   328	 *  -EINVAL if the UV call fails, rc and rrc will be set in this case
   329	 *  -EFAULT if copying the result to buff_user failed
   330	 */
   331	int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
   332					u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc)
   333	{
   334		struct uv_cb_dump_stor_state uvcb = {
   335			.header.cmd = UVC_CMD_DUMP_CONF_STOR_STATE,
   336			.header.len = sizeof(uvcb),
   337			.config_handle = kvm->arch.pv.handle,
   338			.gaddr = *gaddr,
   339			.dump_area_origin = 0,
   340		};
   341		size_t buff_kvm_size;
   342		size_t size_done = 0;
   343		u8 *buff_kvm = NULL;
   344		int cc, ret;
   345	
   346		ret = -EINVAL;
   347		/* UV call processes 1MB guest storage chunks at a time */
   348		if (*gaddr & ~HPAGE_MASK)
   349			goto out;
   350	
   351		/*
   352		 * We provide the storage state for 1MB chunks of guest
   353		 * storage. The buffer will need to be aligned to
   354		 * conf_dump_storage_state_len so we don't end on a partial
   355		 * chunk.
   356		 */
   357		if (!buff_user_len ||
   358		    buff_user_len & (uv_info.conf_dump_storage_state_len - 1))
   359			goto out;
   360	
   361		/*
   362		 * Allocate a buffer from which we will later copy to the user process.
   363		 *
   364		 * We don't want userspace to dictate our buffer size so we limit it to DUMP_BUFF_LEN.
   365		 */
   366		ret = -ENOMEM;
   367		buff_kvm_size = buff_user_len <= DUMP_BUFF_LEN ? buff_user_len : DUMP_BUFF_LEN;
   368		buff_kvm = vzalloc(buff_kvm_size);
 > 369		if (!buff_kvm)
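
As the note above suggests, one minimal way to get rid of the
-Wsometimes-uninitialized warnings would be to give 'cc' a defined value
before the early 'goto out' exits, e.g.

	int cc = 0, ret;

so that the error paths reading 'cc' after the jump no longer see an
uninitialized value; restructuring the early exits so that they skip the
cc-based return would be an alternative.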

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 6/9] kvm: s390: Add configuration dump functionality
  2022-02-23  9:19 [PATCH 0/9] kvm: s390: Add PV dump support Janosch Frank
@ 2022-02-23  9:20 ` Janosch Frank
  2022-02-23 18:13   ` kernel test robot
  2022-02-23 19:25   ` kernel test robot
  0 siblings, 2 replies; 23+ messages in thread
From: Janosch Frank @ 2022-02-23  9:20 UTC (permalink / raw)
  To: kvm; +Cc: linux-s390, imbrenda, david, borntraeger

Sometimes dumping inside of a VM fails, is unavailable or doesn't
yield the required data. For these occasions we dump the VM from the
outside, writing memory and cpu data to a file.

Up to now PV guests only supported dumping from the inside of the
guest through dumpers like KDUMP. A PV guest can be dumped from the
hypervisor but the data will be stale and / or encrypted.

To get the actual state of the PV VM we need the help of the
Ultravisor, which safeguards the VM state. New UV calls have been added
to initialize the dump, dump storage state data, dump cpu data and
complete the dump process. We expose these calls in this patch via a
new UV ioctl command.

The sensitive parts of the dump data are encrypted; the dump key is
derived from the Customer Communication Key (CCK). This ensures that
only the owner of the VM, who holds the CCK, can decrypt the dump data.

The memory is dumped / read via a normal export call, and a re-import
after the dump initialization is not needed since there is no
re-encryption with a dump key.
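
For illustration, a minimal userspace flow for the new command could look
roughly like the sketch below (not part of the patch; the helper and its
parameters are made up for the example). It uses the existing
KVM_S390_PV_COMMAND vm ioctl, leaves out error reporting, the per-vcpu CPU
dump step and the buffer sizing normally obtained via KVM_PV_INFO, and
assumes buf_len is a suitable multiple of the storage state entry length
reported by the Ultravisor and evenly divides guest_size:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Sketch only: dump the storage state and completion data of a protected
 * guest.  Assumes KVM_CAP_S390_PROTECTED_DUMP was checked beforehand and
 * that buf/buf_len satisfy the alignment rules of the dump UV calls.
 */
static int pv_dump_sketch(int vm_fd, void *buf, uint64_t buf_len,
			  uint64_t guest_size)
{
	struct kvm_s390_pv_dmp dmp = { .subcmd = KVM_PV_DUMP_INIT };
	struct kvm_pv_cmd cmd = {
		.cmd = KVM_PV_DUMP,
		.data = (uint64_t)(uintptr_t)&dmp,
	};

	/* Put the guest into dump mode; its vcpus must not run anymore. */
	if (ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd))
		return -1;

	/* Fetch the encrypted storage state in buf_len sized pieces. */
	dmp.subcmd = KVM_PV_DUMP_CONFIG_STOR_STATE;
	dmp.buff_addr = (uint64_t)(uintptr_t)buf;
	dmp.buff_len = buf_len;
	dmp.gaddr = 0;
	while (dmp.gaddr < guest_size) {
		if (ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd))
			return -1;
		/* dmp.gaddr was advanced by KVM; write buf to the dump file here. */
	}

	/* Finalize the dump; the completion data is returned in buf. */
	dmp.subcmd = KVM_PV_DUMP_COMPLETE;
	dmp.gaddr = 0;
	return ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd) ? -1 : 0;
}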

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/include/asm/kvm_host.h |   1 +
 arch/s390/kvm/kvm-s390.c         | 137 +++++++++++++++++++++++++++++++
 arch/s390/kvm/kvm-s390.h         |   2 +
 arch/s390/kvm/pv.c               | 115 ++++++++++++++++++++++++++
 include/uapi/linux/kvm.h         |  16 ++++
 5 files changed, 271 insertions(+)

diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index a22c9266ea05..659bf4be6f04 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -921,6 +921,7 @@ struct kvm_s390_pv {
 	u64 guest_len;
 	unsigned long stor_base;
 	void *stor_var;
+	bool dumping;
 };
 
 struct kvm_arch{
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 8de53803c1ca..074d50d58430 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -606,6 +606,26 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_S390_PROTECTED:
 		r = is_prot_virt_host();
 		break;
+	case KVM_CAP_S390_PROTECTED_DUMP: {
+		u64 pv_cmds_dump[] = {
+			BIT_UVC_CMD_DUMP_INIT,
+			BIT_UVC_CMD_DUMP_CONFIG_STOR_STATE,
+			BIT_UVC_CMD_DUMP_COMPLETE,
+			BIT_UVC_CMD_DUMP_CPU,
+		};
+		int i;
+
+		if (!is_prot_virt_host())
+			return 0;
+
+		r = 1;
+		for (i = 0; i < ARRAY_SIZE(pv_cmds_dump); i++) {
+			if (!test_bit_inv(pv_cmds_dump[i],
+					  (unsigned long *)&uv_info.inst_calls_list))
+				return 0;
+		}
+		break;
+	}
 	default:
 		r = 0;
 	}
@@ -2256,6 +2276,92 @@ static int kvm_s390_handle_pv_info(struct kvm_s390_pv_info *info)
 	}
 }
 
+static int kvm_s390_pv_dmp(struct kvm *kvm, struct kvm_pv_cmd *cmd,
+			   struct kvm_s390_pv_dmp dmp)
+{
+	int r = -EINVAL;
+	void __user *result_buff = (void __user *)dmp.buff_addr;
+
+	switch (dmp.subcmd) {
+	case KVM_PV_DUMP_INIT: {
+		if (kvm->arch.pv.dumping)
+			break;
+
+		r = uv_cmd_nodata(kvm_s390_pv_get_handle(kvm),
+				  UVC_CMD_DUMP_INIT, &cmd->rc, &cmd->rrc);
+		KVM_UV_EVENT(kvm, 3, "PROTVIRT DUMP INIT: rc %x rrc %x",
+			     cmd->rc, cmd->rrc);
+		if (!r)
+			kvm->arch.pv.dumping = true;
+		else
+			r = -EINVAL;
+		break;
+	}
+	case KVM_PV_DUMP_CONFIG_STOR_STATE: {
+		if (!kvm->arch.pv.dumping)
+			break;
+
+		/*
+		 * GADDR is an output parameter since we might stop
+		 * early. As dmp will be copied back in our caller, we
+		 * don't need to do it ourselves.
+		 */
+		r = kvm_s390_pv_dump_stor_state(kvm, result_buff, &dmp.gaddr, dmp.buff_len,
+						&cmd->rc, &cmd->rrc);
+		break;
+	}
+	case KVM_PV_DUMP_COMPLETE: {
+		struct uv_cb_dump_complete complete = {
+			.header.len = sizeof(complete),
+			.header.cmd = UVC_CMD_DUMP_COMPLETE,
+			.config_handle = kvm_s390_pv_get_handle(kvm),
+		};
+		u64 *compl_data;
+
+		r = -EINVAL;
+		if (!kvm->arch.pv.dumping)
+			break;
+
+		if (dmp.buff_len < uv_info.conf_dump_finalize_len)
+			break;
+
+		/* Allocate dump area */
+		r = -ENOMEM;
+		compl_data = vzalloc(uv_info.conf_dump_finalize_len);
+		if (!compl_data)
+			break;
+		complete.dump_area_origin = (u64)compl_data;
+
+		r = uv_call(0, (u64)&complete);
+		cmd->rc = complete.header.rc;
+		cmd->rrc = complete.header.rrc;
+		KVM_UV_EVENT(kvm, 3, "PROTVIRT DUMP COMPLETE: rc %x rrc %x",
+			     complete.header.rc, complete.header.rrc);
+
+		if (!r) {
+			/*
+			 * kvm_s390_pv_dealloc_vm() will also (mem)set
+			 * this to false on a reboot or other destroy
+			 * operation for this vm.
+			 */
+			kvm->arch.pv.dumping = false;
+			r = copy_to_user(result_buff, compl_data, uv_info.conf_dump_finalize_len);
+			if (r)
+				r = -EFAULT;
+		}
+		vfree(compl_data);
+		if (r > 0)
+			r = -EINVAL;
+		break;
+	}
+	default:
+		r = -ENOTTY;
+		break;
+	}
+
+	return r;
+}
+
 static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
 {
 	int r = 0;
@@ -2411,6 +2517,28 @@ static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
 			return -EFAULT;
 		return 0;
 	}
+	case KVM_PV_DUMP: {
+		struct kvm_s390_pv_dmp dmp;
+
+		r = -EINVAL;
+		if (!kvm_s390_pv_is_protected(kvm))
+			break;
+
+		r = -EFAULT;
+		if (copy_from_user(&dmp, argp, sizeof(dmp)))
+			break;
+
+		r = kvm_s390_pv_dmp(kvm, cmd, dmp);
+		if (r)
+			break;
+
+		if (copy_to_user(argp, &dmp, sizeof(dmp))) {
+			r = -EFAULT;
+			break;
+		}
+
+		break;
+	}
 	default:
 		r = -ENOTTY;
 	}
@@ -4508,6 +4636,15 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	struct kvm_run *kvm_run = vcpu->run;
 	int rc;
 
+	/*
+	 * Running a VM while dumping always has the potential to
+	 * produce inconsistent dump data. But for PV vcpus a SIE
+	 * entry while dumping could also lead to a validity which we
+	 * absolutely want to avoid.
+	 */
+	if (vcpu->kvm->arch.pv.dumping)
+		return -EINVAL;
+
 	if (kvm_run->immediate_exit)
 		return -EINTR;
 
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index 098831e815e6..dbae30aa338c 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -241,6 +241,8 @@ int kvm_s390_pv_set_sec_parms(struct kvm *kvm, void *hdr, u64 length, u16 *rc,
 int kvm_s390_pv_unpack(struct kvm *kvm, unsigned long addr, unsigned long size,
 		       unsigned long tweak, u16 *rc, u16 *rrc);
 int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state);
+int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
+				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc);
 
 static inline u64 kvm_s390_pv_get_handle(struct kvm *kvm)
 {
diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
index 7f7c0d6af2ce..3c1c257ed38e 100644
--- a/arch/s390/kvm/pv.c
+++ b/arch/s390/kvm/pv.c
@@ -303,3 +303,118 @@ int kvm_s390_pv_set_cpu_state(struct kvm_vcpu *vcpu, u8 state)
 		return -EINVAL;
 	return 0;
 }
+
+/* Size of the cache for the storage state dump data. 1MB for now */
+#define DUMP_BUFF_LEN HPAGE_SIZE
+
+/*
+ * kvm_s390_pv_dump_stor_state
+ *
+ * @kvm: pointer to the guest's KVM struct
+ * @buff_user: Userspace pointer where we will write the results to
+ * @gaddr: Starting absolute guest address for which the storage state
+ *         is requested. This value will be updated with the last
+ *         address for which data was written when returning to
+ *         userspace.
+ * @buff_user_len: Length of the buff_user buffer
+ * @rc: Pointer to where the uvcb return code is stored
+ * @rrc: Pointer to where the uvcb return reason code is stored
+ *
+ * Return:
+ *  0 on success
+ *  -ENOMEM if allocating the cache fails
+ *  -EINVAL if gaddr is not aligned to 1MB
+ *  -EINVAL if buff_user_len is not aligned to uv_info.conf_dump_storage_state_len
+ *  -EINVAL if the UV call fails, rc and rrc will be set in this case
+ *  -EFAULT if copying the result to buff_user failed
+ */
+int kvm_s390_pv_dump_stor_state(struct kvm *kvm, void __user *buff_user,
+				u64 *gaddr, u64 buff_user_len, u16 *rc, u16 *rrc)
+{
+	struct uv_cb_dump_stor_state uvcb = {
+		.header.cmd = UVC_CMD_DUMP_CONF_STOR_STATE,
+		.header.len = sizeof(uvcb),
+		.config_handle = kvm->arch.pv.handle,
+		.gaddr = *gaddr,
+		.dump_area_origin = 0,
+	};
+	size_t buff_kvm_size;
+	size_t size_done = 0;
+	u8 *buff_kvm = NULL;
+	int cc, ret;
+
+	ret = -EINVAL;
+	/* UV call processes 1MB guest storage chunks at a time */
+	if (*gaddr & ~HPAGE_MASK)
+		goto out;
+
+	/*
+	 * We provide the storage state for 1MB chunks of guest
+	 * storage. The buffer will need to be aligned to
+	 * conf_dump_storage_state_len so we don't end on a partial
+	 * chunk.
+	 */
+	if (!buff_user_len ||
+	    buff_user_len & (uv_info.conf_dump_storage_state_len - 1))
+		goto out;
+
+	/*
+	 * Allocate a buffer from which we will later copy to the user process.
+	 *
+	 * We don't want userspace to dictate our buffer size so we limit it to DUMP_BUFF_LEN.
+	 */
+	ret = -ENOMEM;
+	buff_kvm_size = buff_user_len <= DUMP_BUFF_LEN ? buff_user_len : DUMP_BUFF_LEN;
+	buff_kvm = vzalloc(buff_kvm_size);
+	if (!buff_kvm)
+		goto out;
+
+	ret = 0;
+	uvcb.dump_area_origin = (u64)buff_kvm;
+	/* We will loop until the user buffer is filled or an error occurs */
+	do {
+		/* Get a page of data */
+		cc = uv_call_sched(0, (u64)&uvcb);
+
+		/* All or nothing */
+		if (cc) {
+			ret = -EINVAL;
+			break;
+		}
+
+		size_done += uv_info.conf_dump_storage_state_len;
+		uvcb.dump_area_origin += uv_info.conf_dump_storage_state_len;
+		uvcb.gaddr += HPAGE_SIZE;
+		buff_user_len -= PAGE_SIZE;
+
+		/* KVM Buffer full, time to copy to the process */
+		if (!buff_user_len ||
+		    uvcb.dump_area_origin == (uintptr_t)buff_kvm + buff_kvm_size) {
+
+			if (copy_to_user(buff_user, buff_kvm,
+					 uvcb.dump_area_origin - (uintptr_t)buff_kvm)) {
+				ret = -EFAULT;
+				break;
+			}
+
+			buff_user += size_done;
+			size_done = 0;
+			uvcb.dump_area_origin = (u64)buff_kvm;
+		}
+	} while (buff_user_len);
+
+	/* Report back where we ended dumping */
+	*gaddr = uvcb.gaddr;
+
+	/* Let's only log errors, we don't want to spam */
+out:
+	if (cc || ret)
+		KVM_UV_EVENT(kvm, 3,
+			     "PROTVIRT DUMP STORAGE STATE: addr %llx ret %d, uvcb rc %x rrc %x",
+			     uvcb.gaddr, ret, uvcb.header.rc, uvcb.header.rrc);
+	*rc = uvcb.header.rc;
+	*rrc = uvcb.header.rrc;
+	vfree(buff_kvm);
+
+	return cc ? -EINVAL : 0;
+}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index d58cd5a40e62..913c5907b9f1 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1141,6 +1141,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_XSAVE2 208
 #define KVM_CAP_SYS_ATTRIBUTES 209
 #define KVM_CAP_S390_MEM_OP_EXTENSION 210
+#define KVM_CAP_S390_PROTECTED_DUMP 211
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
@@ -1642,6 +1643,20 @@ struct kvm_s390_pv_unp {
 	__u64 tweak;
 };
 
+enum pv_cmd_dmp_id {
+	KVM_PV_DUMP_INIT,
+	KVM_PV_DUMP_CONFIG_STOR_STATE,
+	KVM_PV_DUMP_COMPLETE,
+};
+
+struct kvm_s390_pv_dmp {
+	__u64 subcmd;
+	__u64 buff_addr;
+	__u64 buff_len;
+	__u64 gaddr;		/* For dump storage state */
+	__u64 reserved[4];
+};
+
 enum pv_cmd_info_id {
 	KVM_PV_INFO_VM,
 	KVM_PV_INFO_DUMP,
@@ -1683,6 +1698,7 @@ enum pv_cmd_id {
 	KVM_PV_PREP_RESET,
 	KVM_PV_UNSHARE_ALL,
 	KVM_PV_INFO,
+	KVM_PV_DUMP,
 };
 
 struct kvm_pv_cmd {
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2022-05-10 14:45 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-28 13:00 [PATCH 0/9] kvm: s390: Add PV dump support Janosch Frank
2022-04-28 13:00 ` [PATCH 1/9] s390x: Add SE hdr query information Janosch Frank
2022-04-28 13:00 ` [PATCH 2/9] s390: uv: Add dump fields to query Janosch Frank
2022-04-28 13:00 ` [PATCH 3/9] KVM: s390: pv: Add query interface Janosch Frank
2022-05-09 15:25   ` Claudio Imbrenda
2022-05-10  7:27     ` Janosch Frank
2022-04-28 13:00 ` [PATCH 4/9] KVM: s390: pv: Add dump support definitions Janosch Frank
2022-04-28 13:00 ` [PATCH 5/9] KVM: s390: pv: Add query dump information Janosch Frank
2022-05-09 15:28   ` Claudio Imbrenda
2022-05-10 12:36     ` Janosch Frank
2022-05-10 13:28       ` Claudio Imbrenda
2022-04-28 13:00 ` [PATCH 6/9] kvm: s390: Add configuration dump functionality Janosch Frank
2022-05-09 17:51   ` Claudio Imbrenda
2022-05-10 14:07     ` Janosch Frank
2022-04-28 13:01 ` [PATCH 7/9] kvm: s390: Add CPU " Janosch Frank
2022-05-09 19:11   ` Claudio Imbrenda
2022-05-10  7:26     ` Janosch Frank
2022-05-10  9:14       ` Claudio Imbrenda
2022-04-28 13:01 ` [PATCH 8/9] Documentation: virt: Protected virtual machine dumps Janosch Frank
2022-04-28 13:01 ` [PATCH 9/9] Documentation/virt/kvm/api.rst: Add protvirt dump/info api descriptions Janosch Frank
  -- strict thread matches above, loose matches on Subject: below --
2022-02-23  9:19 [PATCH 0/9] kvm: s390: Add PV dump support Janosch Frank
2022-02-23  9:20 ` [PATCH 6/9] kvm: s390: Add configuration dump functionality Janosch Frank
2022-02-23 18:13   ` kernel test robot
2022-02-23 19:25   ` kernel test robot
