All of lore.kernel.org
* Re: [PATCH v2 0/3] AMX KVM selftest
  2021-12-22 21:47 [PATCH v2 0/3] AMX KVM selftest Yang Zhong
@ 2021-12-22 14:05 ` Paolo Bonzini
  2021-12-22 21:47 ` [PATCH v2 1/3] selftest: kvm: Reorder vcpu_load_state steps for AMX Yang Zhong
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 9+ messages in thread
From: Paolo Bonzini @ 2021-12-22 14:05 UTC (permalink / raw)
  To: Yang Zhong, kvm; +Cc: seanjc, jun.nakajima, kevin.tian, jing2.liu

On 12/22/21 22:47, Yang Zhong wrote:
> Hi Paolo,
> 
> Please help review this patchset, which is still based on Jing's AMX v2.
> https://lore.kernel.org/all/20211217153003.1719189-1-jing2.liu@intel.com/
> 
> Since Jing's v3 was just sent out, I will rebase this patchset on it and send my
> v3 separately.
> 
> For this selftest requirement, please check the link below:
> https://lore.kernel.org/all/85401305-2c71-e57f-a01e-4850060d300a@redhat.com/
> 
> By the way, this amx_test.c file references some of Chang's older test code:
> https://lore.kernel.org/lkml/20210221185637.19281-21-chang.seok.bae@intel.com/
> 
> Thanks!

Hi, this looks good to me.

Paolo


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v2 3/3] selftest: kvm: Support amx selftest
  2021-12-22 21:47 ` [PATCH v2 3/3] selftest: kvm: Support amx selftest Yang Zhong
@ 2021-12-22 15:15   ` Paolo Bonzini
  2021-12-23  2:38     ` Yang Zhong
  0 siblings, 1 reply; 9+ messages in thread
From: Paolo Bonzini @ 2021-12-22 15:15 UTC (permalink / raw)
  To: Yang Zhong, kvm; +Cc: seanjc, jun.nakajima, kevin.tian, jing2.liu

On 12/22/21 22:47, Yang Zhong wrote:
> +	/* Trigger #NM exception */
> +	__tileloadd(tiledata);
> +	GUEST_SYNC(10);
> +
> +	GUEST_DONE();
> +}
> +
> +void guest_nm_handler(struct ex_regs *regs)
> +{
> +	/* Check if #NM is triggered by XFEATURE_MASK_XTILEDATA */
> +	GUEST_SYNC(7);
> +	GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILEDATA);
> +	GUEST_SYNC(8);
> +	GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILEDATA);
> +	/* Clear xfd_err */
> +	wrmsr(MSR_IA32_XFD_ERR, 0);
> +	regs->rip += 3;
> +	GUEST_SYNC(9);
> +}

I don't understand why "regs->rip += 3" is needed though.

My idea was that, after GUEST_SYNC(9) and IRET, the guest would execute 
__tileloadd again; this time without generating #NM, so that after 
GUEST_SYNC(10) the host sees the loaded data in TMM0.

Thanks,

Paolo


* [PATCH v2 0/3] AMX KVM selftest
@ 2021-12-22 21:47 Yang Zhong
  2021-12-22 14:05 ` Paolo Bonzini
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Yang Zhong @ 2021-12-22 21:47 UTC (permalink / raw)
  To: kvm, pbonzini; +Cc: seanjc, jun.nakajima, kevin.tian, jing2.liu, yang.zhong

Hi Paolo,

Please help review this patchset, which is still based on Jing's AMX v2.
https://lore.kernel.org/all/20211217153003.1719189-1-jing2.liu@intel.com/

Since Jing's v3 was just sent out, I will rebase this patchset on it and send my
v3 separately.

For this selftest requirement, please check the link below:
https://lore.kernel.org/all/85401305-2c71-e57f-a01e-4850060d300a@redhat.com/

By the way, this amx_test.c file references some of Chang's older test code:
https://lore.kernel.org/lkml/20210221185637.19281-21-chang.seok.bae@intel.com/

Thanks!

Yang


Change history
==============
v1-->v2
   - Added more GUEST_SYNC() calls in the guest code (Paolo).
   - Added back the save/restore code after the GUEST_SYNC()
     handling in main() (Paolo).

Paolo Bonzini (1):
  selftest: kvm: Reorder vcpu_load_state steps for AMX

Yang Zhong (2):
  selftest: kvm: Move struct kvm_x86_state to header
  selftest: kvm: Support amx selftest

 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/x86_64/processor.h  |  16 +-
 .../selftests/kvm/lib/x86_64/processor.c      |  44 +-
 tools/testing/selftests/kvm/x86_64/amx_test.c | 451 ++++++++++++++++++
 4 files changed, 482 insertions(+), 30 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/amx_test.c



* [PATCH v2 1/3] selftest: kvm: Reorder vcpu_load_state steps for AMX
  2021-12-22 21:47 [PATCH v2 0/3] AMX KVM selftest Yang Zhong
  2021-12-22 14:05 ` Paolo Bonzini
@ 2021-12-22 21:47 ` Yang Zhong
  2022-01-12 21:05   ` Sean Christopherson
  2021-12-22 21:47 ` [PATCH v2 2/3] selftest: kvm: Move struct kvm_x86_state to header Yang Zhong
  2021-12-22 21:47 ` [PATCH v2 3/3] selftest: kvm: Support amx selftest Yang Zhong
  3 siblings, 1 reply; 9+ messages in thread
From: Yang Zhong @ 2021-12-22 21:47 UTC (permalink / raw)
  To: kvm, pbonzini; +Cc: seanjc, jun.nakajima, kevin.tian, jing2.liu, yang.zhong

From: Paolo Bonzini <pbonzini@redhat.com>

For AMX support it is recommended to load XCR0 after XFD, so
that KVM does not see XFD=0, XCR=1 for a save state that will
eventually be disabled (which would lead to premature allocation
of the space required for that save state).

It is also required to load XSAVE data after XCR0 and XFD, so
that KVM can trigger allocation of the extra space required to
store AMX state.

Adjust vcpu_load_state to obey these new requirements.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
---
 .../selftests/kvm/lib/x86_64/processor.c      | 29 ++++++++++---------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 00324d73c687..9b5abf488211 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1192,9 +1192,14 @@ void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_x86_state *s
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
 	int r;
 
-	r = ioctl(vcpu->fd, KVM_SET_XSAVE, &state->xsave);
-        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XSAVE, r: %i",
-                r);
+	r = ioctl(vcpu->fd, KVM_SET_SREGS, &state->sregs);
+	TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_SREGS, r: %i",
+		r);
+
+	r = ioctl(vcpu->fd, KVM_SET_MSRS, &state->msrs);
+	TEST_ASSERT(r == state->msrs.nmsrs,
+		"Unexpected result from KVM_SET_MSRS,r: %i (failed at %x)",
+		r, r == state->msrs.nmsrs ? -1 : state->msrs.entries[r].index);
 
 	if (kvm_check_cap(KVM_CAP_XCRS)) {
 		r = ioctl(vcpu->fd, KVM_SET_XCRS, &state->xcrs);
@@ -1202,17 +1207,13 @@ void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_x86_state *s
 			    r);
 	}
 
-	r = ioctl(vcpu->fd, KVM_SET_SREGS, &state->sregs);
-        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_SREGS, r: %i",
-                r);
-
-	r = ioctl(vcpu->fd, KVM_SET_MSRS, &state->msrs);
-        TEST_ASSERT(r == state->msrs.nmsrs, "Unexpected result from KVM_SET_MSRS, r: %i (failed at %x)",
-                r, r == state->msrs.nmsrs ? -1 : state->msrs.entries[r].index);
+	r = ioctl(vcpu->fd, KVM_SET_XSAVE, &state->xsave);
+	TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XSAVE, r: %i",
+		r);
 
 	r = ioctl(vcpu->fd, KVM_SET_VCPU_EVENTS, &state->events);
-        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_VCPU_EVENTS, r: %i",
-                r);
+	TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_VCPU_EVENTS, r: %i",
+		r);
 
 	r = ioctl(vcpu->fd, KVM_SET_MP_STATE, &state->mp_state);
         TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_MP_STATE, r: %i",
@@ -1223,8 +1224,8 @@ void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_x86_state *s
                 r);
 
 	r = ioctl(vcpu->fd, KVM_SET_REGS, &state->regs);
-        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_REGS, r: %i",
-                r);
+	TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_REGS, r: %i",
+		r);
 
 	if (state->nested.size) {
 		r = ioctl(vcpu->fd, KVM_SET_NESTED_STATE, &state->nested);


* [PATCH v2 2/3] selftest: kvm: Move struct kvm_x86_state to header
  2021-12-22 21:47 [PATCH v2 0/3] AMX KVM selftest Yang Zhong
  2021-12-22 14:05 ` Paolo Bonzini
  2021-12-22 21:47 ` [PATCH v2 1/3] selftest: kvm: Reorder vcpu_load_state steps for AMX Yang Zhong
@ 2021-12-22 21:47 ` Yang Zhong
  2021-12-22 21:47 ` [PATCH v2 3/3] selftest: kvm: Support amx selftest Yang Zhong
  3 siblings, 0 replies; 9+ messages in thread
From: Yang Zhong @ 2021-12-22 21:47 UTC (permalink / raw)
  To: kvm, pbonzini; +Cc: seanjc, jun.nakajima, kevin.tian, jing2.liu, yang.zhong

This change avoids a compile error from dereferencing a pointer to an
incomplete struct type when amx_test.c references state->xsave.

Move struct kvm_x86_state definition to processor.h.

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
---
 .../selftests/kvm/include/x86_64/processor.h     | 16 +++++++++++++++-
 .../testing/selftests/kvm/lib/x86_64/processor.c | 15 ---------------
 2 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 0546173ab628..28f8fa78a47b 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -94,6 +94,21 @@ struct desc_ptr {
 	uint64_t address;
 } __attribute__((packed));
 
+struct kvm_x86_state {
+	struct kvm_xsave *xsave;
+	struct kvm_vcpu_events events;
+	struct kvm_mp_state mp_state;
+	struct kvm_regs regs;
+	struct kvm_xcrs xcrs;
+	struct kvm_sregs sregs;
+	struct kvm_debugregs debugregs;
+	union {
+		struct kvm_nested_state nested;
+		char nested_[16384];
+	};
+	struct kvm_msrs msrs;
+};
+
 static inline uint64_t get_desc64_base(const struct desc64 *desc)
 {
 	return ((uint64_t)desc->base3 << 32) |
@@ -350,7 +365,6 @@ static inline unsigned long get_xmm(int n)
 
 bool is_intel_cpu(void);
 
-struct kvm_x86_state;
 struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid);
 void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid,
 		     struct kvm_x86_state *state);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 9b5abf488211..126f8e743bc2 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1036,21 +1036,6 @@ void vcpu_dump(FILE *stream, struct kvm_vm *vm, uint32_t vcpuid, uint8_t indent)
 	sregs_dump(stream, &sregs, indent + 4);
 }
 
-struct kvm_x86_state {
-	struct kvm_xsave *xsave;
-	struct kvm_vcpu_events events;
-	struct kvm_mp_state mp_state;
-	struct kvm_regs regs;
-	struct kvm_xcrs xcrs;
-	struct kvm_sregs sregs;
-	struct kvm_debugregs debugregs;
-	union {
-		struct kvm_nested_state nested;
-		char nested_[16384];
-	};
-	struct kvm_msrs msrs;
-};
-
 static int kvm_get_num_msrs_fd(int kvm_fd)
 {
 	struct kvm_msr_list nmsrs;


* [PATCH v2 3/3] selftest: kvm: Support amx selftest
  2021-12-22 21:47 [PATCH v2 0/3] AMX KVM selftest Yang Zhong
                   ` (2 preceding siblings ...)
  2021-12-22 21:47 ` [PATCH v2 2/3] selftest: kvm: Move struct kvm_x86_state to header Yang Zhong
@ 2021-12-22 21:47 ` Yang Zhong
  2021-12-22 15:15   ` Paolo Bonzini
  3 siblings, 1 reply; 9+ messages in thread
From: Yang Zhong @ 2021-12-22 21:47 UTC (permalink / raw)
  To: kvm, pbonzini; +Cc: seanjc, jun.nakajima, kevin.tian, jing2.liu, yang.zhong

This selftest contains two test cases. The first triggers a #NM
exception and checks the MSR_IA32_XFD_ERR value. The second loads
tile data into the TMM0 register in the guest and traps to the host
side to check the memory data after save/restore.

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 tools/testing/selftests/kvm/x86_64/amx_test.c | 451 ++++++++++++++++++
 2 files changed, 452 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/amx_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c4e34717826a..ecb1fe022111 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -75,6 +75,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_pi_mmio_test
 TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
+TEST_GEN_PROGS_x86_64 += x86_64/amx_test
 TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
 TEST_GEN_PROGS_x86_64 += dirty_log_perf_test
diff --git a/tools/testing/selftests/kvm/x86_64/amx_test.c b/tools/testing/selftests/kvm/x86_64/amx_test.c
new file mode 100644
index 000000000000..a1e6c022f37c
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/amx_test.c
@@ -0,0 +1,451 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * amx tests
+ *
+ * Copyright (C) 2021, Intel, Inc.
+ *
+ * Tests for amx #NM exception and save/restore.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+#include <sys/syscall.h>
+
+#include "test_util.h"
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "vmx.h"
+
+#ifndef __x86_64__
+# error This test is 64-bit only
+#endif
+
+#define VCPU_ID				0
+#define X86_FEATURE_XSAVE		(1 << 26)
+#define X86_FEATURE_OSXSAVE		(1 << 27)
+
+#define PAGE_SIZE			(1 << 12)
+#define NUM_TILES			8
+#define TILE_SIZE			1024
+#define XSAVE_SIZE			((NUM_TILES * TILE_SIZE) + PAGE_SIZE)
+
+/* Tile configuration associated: */
+#define MAX_TILES			16
+#define RESERVED_BYTES			14
+
+#define XFEATURE_XTILECFG		17
+#define XFEATURE_XTILEDATA		18
+#define XFEATURE_MASK_XTILECFG		(1 << XFEATURE_XTILECFG)
+#define XFEATURE_MASK_XTILEDATA		(1 << XFEATURE_XTILEDATA)
+#define XFEATURE_MASK_XTILE		(XFEATURE_MASK_XTILECFG | XFEATURE_MASK_XTILEDATA)
+
+#define TILE_CPUID			0x1d
+#define XSTATE_CPUID			0xd
+#define TILE_PALETTE_CPUID_SUBLEAVE	0x1
+#define XSTATE_USER_STATE_SUBLEAVE	0x0
+
+#define XSAVE_HDR_OFFSET		512
+
+struct xsave_data {
+	u8 area[XSAVE_SIZE];
+} __aligned(64);
+
+struct tile_config {
+	u8  palette_id;
+	u8  start_row;
+	u8  reserved[RESERVED_BYTES];
+	u16 colsb[MAX_TILES];
+	u8  rows[MAX_TILES];
+};
+
+struct tile_data {
+	u8 data[NUM_TILES * TILE_SIZE];
+};
+
+struct xtile_info {
+	u16 bytes_per_tile;
+	u16 bytes_per_row;
+	u16 max_names;
+	u16 max_rows;
+	u32 xsave_offset;
+	u32 xsave_size;
+};
+
+static struct xtile_info xtile;
+
+static inline u64 __xgetbv(u32 index)
+{
+	u32 eax, edx;
+
+	asm volatile("xgetbv;"
+		     : "=a" (eax), "=d" (edx)
+		     : "c" (index));
+	return eax + ((u64)edx << 32);
+}
+
+static inline void __xsetbv(u32 index, u64 value)
+{
+	u32 eax = value;
+	u32 edx = value >> 32;
+
+	asm volatile("xsetbv" :: "a" (eax), "d" (edx), "c" (index));
+}
+
+static inline void __ldtilecfg(void *cfg)
+{
+	asm volatile(".byte 0xc4,0xe2,0x78,0x49,0x00"
+		     : : "a"(cfg));
+}
+
+static inline void __tileloadd(void *tile)
+{
+	asm volatile(".byte 0xc4,0xe2,0x7b,0x4b,0x04,0x10"
+		     : : "a"(tile), "d"(0));
+}
+
+static inline void __tilerelease(void)
+{
+	asm volatile(".byte 0xc4, 0xe2, 0x78, 0x49, 0xc0" ::);
+}
+
+static inline void __xsavec(struct xsave_data *data, uint64_t rfbm)
+{
+	uint32_t rfbm_lo = rfbm;
+	uint32_t rfbm_hi = rfbm >> 32;
+
+	asm volatile("xsavec (%%rdi)"
+		     : : "D" (data), "a" (rfbm_lo), "d" (rfbm_hi)
+		     : "memory");
+}
+
+static inline void check_cpuid_xsave(void)
+{
+	uint32_t eax, ebx, ecx, edx;
+
+	eax = 1;
+	ecx = 0;
+	cpuid(&eax, &ebx, &ecx, &edx);
+	if (!(ecx & X86_FEATURE_XSAVE))
+		GUEST_ASSERT(!"cpuid: no CPU xsave support!");
+	if (!(ecx & X86_FEATURE_OSXSAVE))
+		GUEST_ASSERT(!"cpuid: no OS xsave support!");
+}
+
+static bool check_xsave_supports_xtile(void)
+{
+	return __xgetbv(0) & XFEATURE_MASK_XTILE;
+}
+
+static bool enum_xtile_config(void)
+{
+	u32 eax, ebx, ecx, edx;
+
+	eax = TILE_CPUID;
+	ecx = TILE_PALETTE_CPUID_SUBLEAVE;
+
+	cpuid(&eax, &ebx, &ecx, &edx);
+	if (!eax || !ebx || !ecx)
+		return false;
+
+	xtile.max_names = ebx >> 16;
+	if (xtile.max_names < NUM_TILES)
+		return false;
+
+	xtile.bytes_per_tile = eax >> 16;
+	if (xtile.bytes_per_tile < TILE_SIZE)
+		return false;
+
+	xtile.bytes_per_row = ebx;
+	xtile.max_rows = ecx;
+
+	return true;
+}
+
+static bool enum_xsave_tile(void)
+{
+	u32 eax, ebx, ecx, edx;
+
+	eax = XSTATE_CPUID;
+	ecx = XFEATURE_XTILEDATA;
+
+	cpuid(&eax, &ebx, &ecx, &edx);
+	if (!eax || !ebx)
+		return false;
+
+	xtile.xsave_offset = ebx;
+	xtile.xsave_size = eax;
+
+	return true;
+}
+
+static bool check_xsave_size(void)
+{
+	u32 eax, ebx, ecx, edx;
+	bool valid = false;
+
+	eax = XSTATE_CPUID;
+	ecx = XSTATE_USER_STATE_SUBLEAVE;
+
+	cpuid(&eax, &ebx, &ecx, &edx);
+	if (ebx && ebx <= XSAVE_SIZE)
+		valid = true;
+
+	return valid;
+}
+
+static bool check_xtile_info(void)
+{
+	bool ret = false;
+
+	if (!check_xsave_size())
+		return ret;
+
+	if (!enum_xsave_tile())
+		return ret;
+
+	if (!enum_xtile_config())
+		return ret;
+
+	if (sizeof(struct tile_data) >= xtile.xsave_size)
+		ret = true;
+
+	return ret;
+}
+
+static void set_tilecfg(struct tile_config *cfg)
+{
+	int i;
+
+	/* Only palette id 1 */
+	cfg->palette_id = 1;
+	for (i = 0; i < xtile.max_names; i++) {
+		cfg->colsb[i] = xtile.bytes_per_row;
+		cfg->rows[i] = xtile.max_rows;
+	}
+}
+
+static void set_xstatebv(void *data, uint64_t bv)
+{
+	*(uint64_t *)(data + XSAVE_HDR_OFFSET) = bv;
+}
+
+static u64 get_xstatebv(void *data)
+{
+	return *(u64 *)(data + XSAVE_HDR_OFFSET);
+}
+
+static void init_regs(void)
+{
+	uint64_t cr4, xcr0;
+
+	/* turn on CR4.OSXSAVE */
+	cr4 = get_cr4();
+	cr4 |= X86_CR4_OSXSAVE;
+	set_cr4(cr4);
+
+	xcr0 = __xgetbv(0);
+	xcr0 |= XFEATURE_MASK_XTILE;
+	__xsetbv(0x0, xcr0);
+}
+
+static void __attribute__((__flatten__)) guest_code(struct tile_config *amx_cfg,
+						    struct tile_data *tiledata,
+						    struct xsave_data *xsave_data)
+{
+	init_regs();
+	check_cpuid_xsave();
+	GUEST_ASSERT(check_xsave_supports_xtile());
+	GUEST_ASSERT(check_xtile_info());
+
+	/* check xtile configs */
+	GUEST_ASSERT(xtile.xsave_offset == 2816);
+	GUEST_ASSERT(xtile.xsave_size == 8192);
+	GUEST_ASSERT(xtile.max_names == 8);
+	GUEST_ASSERT(xtile.bytes_per_tile == 1024);
+	GUEST_ASSERT(xtile.bytes_per_row == 64);
+	GUEST_ASSERT(xtile.max_rows == 16);
+	GUEST_SYNC(1);
+
+	/* xfd=0, enable amx */
+	wrmsr(MSR_IA32_XFD, 0);
+	GUEST_SYNC(2);
+	GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == 0);
+	set_tilecfg(amx_cfg);
+	__ldtilecfg(amx_cfg);
+	GUEST_SYNC(3);
+	/* Check save/restore when trap to userspace */
+	__tileloadd(tiledata);
+	GUEST_SYNC(4);
+	__tilerelease();
+	GUEST_SYNC(5);
+	/* bit 18 not in the XCOMP_BV after xsavec() */
+	set_xstatebv(xsave_data, XFEATURE_MASK_XTILEDATA);
+	__xsavec(xsave_data, XFEATURE_MASK_XTILEDATA);
+	GUEST_ASSERT((get_xstatebv(xsave_data) & XFEATURE_MASK_XTILEDATA) == 0);
+
+	/* xfd=0x40000, disable amx tiledata */
+	wrmsr(MSR_IA32_XFD, XFEATURE_MASK_XTILEDATA);
+	GUEST_SYNC(6);
+	GUEST_ASSERT(rdmsr(MSR_IA32_XFD) == XFEATURE_MASK_XTILEDATA);
+	set_tilecfg(amx_cfg);
+	__ldtilecfg(amx_cfg);
+	/* Trigger #NM exception */
+	__tileloadd(tiledata);
+	GUEST_SYNC(10);
+
+	GUEST_DONE();
+}
+
+void guest_nm_handler(struct ex_regs *regs)
+{
+	/* Check if #NM is triggered by XFEATURE_MASK_XTILEDATA */
+	GUEST_SYNC(7);
+	GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILEDATA);
+	GUEST_SYNC(8);
+	GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILEDATA);
+	/* Clear xfd_err */
+	wrmsr(MSR_IA32_XFD_ERR, 0);
+	regs->rip += 3;
+	GUEST_SYNC(9);
+}
+
+int main(int argc, char *argv[])
+{
+	struct kvm_cpuid_entry2 *entry;
+	struct kvm_regs regs1, regs2;
+	bool amx_supported = false;
+	struct kvm_vm *vm;
+	struct kvm_run *run;
+	struct kvm_x86_state *state;
+	int xsave_restore_size = 0;
+	vm_vaddr_t amx_cfg, tiledata, xsavedata;
+	struct ucall uc;
+	u32 amx_offset;
+	int stage, ret;
+
+	/* Create VM */
+	vm = vm_create_default(VCPU_ID, 0, guest_code);
+
+	entry = kvm_get_supported_cpuid_entry(1);
+	if (!(entry->ecx & X86_FEATURE_XSAVE)) {
+		print_skip("XSAVE feature not supported");
+		return 0;
+	}
+
+	if (kvm_get_cpuid_max_basic() >= 0xd) {
+		entry = kvm_get_supported_cpuid_index(0xd, 0);
+		amx_supported = entry && !!(entry->eax & XFEATURE_MASK_XTILE);
+		if (!amx_supported) {
+			print_skip("AMX is not supported by the vCPU (eax=0x%x)", entry->eax);
+			return 0;
+		}
+		/* Get xsave/restore max size */
+		xsave_restore_size = entry->ecx;
+	}
+
+	run = vcpu_state(vm, VCPU_ID);
+	vcpu_regs_get(vm, VCPU_ID, &regs1);
+
+	/* Register #NM handler */
+	vm_init_descriptor_tables(vm);
+	vcpu_init_descriptor_tables(vm, VCPU_ID);
+	vm_install_exception_handler(vm, NM_VECTOR, guest_nm_handler);
+
+	/* amx cfg for guest_code */
+	amx_cfg = vm_vaddr_alloc_page(vm);
+	memset(addr_gva2hva(vm, amx_cfg), 0x0, getpagesize());
+
+	/* amx tiledata for guest_code */
+	tiledata = vm_vaddr_alloc_pages(vm, 2);
+	memset(addr_gva2hva(vm, tiledata), rand() | 1, 2 * getpagesize());
+
+	/* xsave data for guest_code */
+	xsavedata = vm_vaddr_alloc_pages(vm, 3);
+	memset(addr_gva2hva(vm, xsavedata), 0, 3 * getpagesize());
+	vcpu_args_set(vm, VCPU_ID, 3, amx_cfg, tiledata, xsavedata);
+
+	for (stage = 1; ; stage++) {
+		_vcpu_run(vm, VCPU_ID);
+		TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+			    "Stage %d: unexpected exit reason: %u (%s),\n",
+			    stage, run->exit_reason,
+			    exit_reason_str(run->exit_reason));
+
+		switch (get_ucall(vm, VCPU_ID, &uc)) {
+		case UCALL_ABORT:
+			TEST_FAIL("%s at %s:%ld", (const char *)uc.args[0],
+				  __FILE__, uc.args[1]);
+			/* NOT REACHED */
+		case UCALL_SYNC:
+			switch (uc.args[1]) {
+			case 1:
+			case 2:
+			case 3:
+			case 5:
+			case 6:
+			case 7:
+			case 8:
+				fprintf(stderr, "GUEST_SYNC(%ld)\n", uc.args[1]);
+				break;
+			case 4:
+			case 10:
+				fprintf(stderr,
+				"GUEST_SYNC(%ld), check save/restore status\n", uc.args[1]);
+
+				/* In compacted mode, the AMX offset is
+				 * the xsave area size minus 8K AMX size.
+				 */
+				amx_offset = xsave_restore_size - NUM_TILES*TILE_SIZE;
+				state = vcpu_save_state(vm, VCPU_ID);
+				void *amx_start = (void *)state->xsave + amx_offset;
+				void *tiles_data = (void *)addr_gva2hva(vm, tiledata);
+				/* Only check TMM0 register, 1 tile */
+				ret = memcmp(amx_start, tiles_data, TILE_SIZE);
+				if (uc.args[1] == 4)
+					TEST_ASSERT(ret == 0, "memcmp failed, ret=%d\n", ret);
+				else
+					TEST_ASSERT(ret != 0,
+					"memcmp was successful, but it should fail, ret=%d\n", ret);
+
+				kvm_x86_state_cleanup(state);
+				break;
+			case 9:
+				fprintf(stderr, "GUEST_SYNC(%ld), #NM exception\n", uc.args[1]);
+				break;
+			}
+			break;
+		case UCALL_DONE:
+			fprintf(stderr, "UCALL_DONE\n");
+			goto done;
+		default:
+			TEST_FAIL("Unknown ucall %lu", uc.cmd);
+		}
+
+		state = vcpu_save_state(vm, VCPU_ID);
+		memset(&regs1, 0, sizeof(regs1));
+		vcpu_regs_get(vm, VCPU_ID, &regs1);
+
+		kvm_vm_release(vm);
+
+		/* Restore state in a new VM.  */
+		kvm_vm_restart(vm, O_RDWR);
+		vm_vcpu_add(vm, VCPU_ID);
+		vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
+		vcpu_load_state(vm, VCPU_ID, state);
+		run = vcpu_state(vm, VCPU_ID);
+		kvm_x86_state_cleanup(state);
+
+		memset(&regs2, 0, sizeof(regs2));
+		vcpu_regs_get(vm, VCPU_ID, &regs2);
+		TEST_ASSERT(!memcmp(&regs1, &regs2, sizeof(regs2)),
+			    "Unexpected register values after vcpu_load_state; rdi: %lx rsi: %lx",
+			    (ulong) regs2.rdi, (ulong) regs2.rsi);
+	}
+done:
+	kvm_vm_free(vm);
+}


* Re: [PATCH v2 3/3] selftest: kvm: Support amx selftest
  2021-12-22 15:15   ` Paolo Bonzini
@ 2021-12-23  2:38     ` Yang Zhong
  0 siblings, 0 replies; 9+ messages in thread
From: Yang Zhong @ 2021-12-23  2:38 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, seanjc, jun.nakajima, kevin.tian, jing2.liu, yang.zhong

On Wed, Dec 22, 2021 at 04:15:02PM +0100, Paolo Bonzini wrote:
> On 12/22/21 22:47, Yang Zhong wrote:
> >+	/* Trigger #NM exception */
> >+	__tileloadd(tiledata);
> >+	GUEST_SYNC(10);
> >+
> >+	GUEST_DONE();
> >+}
> >+
> >+void guest_nm_handler(struct ex_regs *regs)
> >+{
> >+	/* Check if #NM is triggered by XFEATURE_MASK_XTILEDATA */
> >+	GUEST_SYNC(7);
> >+	GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILEDATA);
> >+	GUEST_SYNC(8);
> >+	GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILEDATA);
> >+	/* Clear xfd_err */
> >+	wrmsr(MSR_IA32_XFD_ERR, 0);
> >+	regs->rip += 3;
> >+	GUEST_SYNC(9);
> >+}
> 
> I don't understand why "regs->rip += 3" is needed though.
> 
> My idea was that, after GUEST_SYNC(9) and IRET, the guest would
> execute __tileloadd again; this time without generating #NM, so that
> after GUEST_SYNC(10) the host sees the loaded data in TMM0.
>
  
  Sorry, I didn't catch that before; at the time, I wanted to skip this
  load instruction.

  Let me enable AMX before GUEST_SYNC(9) and then load the tiles again.

  The new v3 will address this, thanks!

  Yang


 
> Thanks,
> 
> Paolo
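
[Editor's note: putting Yang's plan next to the v2 handler, the v3 change would plausibly look like the hypothetical diff below (a sketch against the v2 code, not the posted v3 patch): re-enable AMX inside the handler and drop the RIP adjustment, so the faulting __tileloadd() is simply retried on return and GUEST_SYNC(10) sees the data in TMM0.]

```diff
 void guest_nm_handler(struct ex_regs *regs)
 {
 	/* Check if #NM is triggered by XFEATURE_MASK_XTILEDATA */
 	GUEST_SYNC(7);
 	GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILEDATA);
 	GUEST_SYNC(8);
 	GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILEDATA);
+	/* Re-enable AMX so the retried __tileloadd() does not fault again */
+	wrmsr(MSR_IA32_XFD, 0);
 	/* Clear xfd_err */
 	wrmsr(MSR_IA32_XFD_ERR, 0);
-	regs->rip += 3;
 	GUEST_SYNC(9);
 }
```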


* Re: [PATCH v2 1/3] selftest: kvm: Reorder vcpu_load_state steps for AMX
  2021-12-22 21:47 ` [PATCH v2 1/3] selftest: kvm: Reorder vcpu_load_state steps for AMX Yang Zhong
@ 2022-01-12 21:05   ` Sean Christopherson
  2022-01-13  6:31     ` Yang Zhong
  0 siblings, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2022-01-12 21:05 UTC (permalink / raw)
  To: Yang Zhong; +Cc: kvm, pbonzini, jun.nakajima, kevin.tian, jing2.liu

On Wed, Dec 22, 2021, Yang Zhong wrote:
> From: Paolo Bonzini <pbonzini@redhat.com>
> 
> For AMX support it is recommended to load XCR0 after XFD, so
> that KVM does not see XFD=0, XCR=1 for a save state that will
> eventually be disabled (which would lead to premature allocation
> of the space required for that save state).

It would be very helpful to clarify that XFD is loaded via KVM_SET_MSRS.  It took
me longer than it should have to understand what was going on.  The large amount of
whitespace noise in this patch certainly didn't help.  E.g. just a simple tweak:

  For AMX support it is recommended to load XCR0 after XFD, i.e. after MSRs, so

> It is also required to load XSAVE data after XCR0 and XFD, so
> that KVM can trigger allocation of the extra space required to
> store AMX state.
> 
> Adjust vcpu_load_state to obey these new requirements.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Yang Zhong <yang.zhong@intel.com>
> ---
>  .../selftests/kvm/lib/x86_64/processor.c      | 29 ++++++++++---------
>  1 file changed, 15 insertions(+), 14 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 00324d73c687..9b5abf488211 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -1192,9 +1192,14 @@ void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_x86_state *s
>  	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
>  	int r;
>  
> -	r = ioctl(vcpu->fd, KVM_SET_XSAVE, &state->xsave);
> -        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XSAVE, r: %i",
> -                r);
> +	r = ioctl(vcpu->fd, KVM_SET_SREGS, &state->sregs);
> +	TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_SREGS, r: %i",
> +		r);

If we're going to bother replacing spaces with tabs, might as well get rid of all
the gratuitous newlines as well.

> +
> +	r = ioctl(vcpu->fd, KVM_SET_MSRS, &state->msrs);
> +	TEST_ASSERT(r == state->msrs.nmsrs,
> +		"Unexpected result from KVM_SET_MSRS,r: %i (failed at %x)",
> +		r, r == state->msrs.nmsrs ? -1 : state->msrs.entries[r].index);

Most people not named "Paolo" prefer to align this with the opening "(" :-)

E.g.

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 97f8c2f2df36..971f41afa689 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1158,44 +1158,36 @@ void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_x86_state *s
        int r;

        r = ioctl(vcpu->fd, KVM_SET_SREGS, &state->sregs);
-       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_SREGS, r: %i",
-               r);
+       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_SREGS, r: %i", r);

        r = ioctl(vcpu->fd, KVM_SET_MSRS, &state->msrs);
        TEST_ASSERT(r == state->msrs.nmsrs,
-               "Unexpected result from KVM_SET_MSRS,r: %i (failed at %x)",
-               r, r == state->msrs.nmsrs ? -1 : state->msrs.entries[r].index);
+                   "Unexpected result from KVM_SET_MSRS,r: %i (failed at %x)",
+                   r, r == state->msrs.nmsrs ? -1 : state->msrs.entries[r].index);

        if (kvm_check_cap(KVM_CAP_XCRS)) {
                r = ioctl(vcpu->fd, KVM_SET_XCRS, &state->xcrs);
-               TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XCRS, r: %i",
-                           r);
+               TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XCRS, r: %i", r);
        }

        r = ioctl(vcpu->fd, KVM_SET_XSAVE, &state->xsave);
-       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XSAVE, r: %i",
-               r);
+       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XSAVE, r: %i", r);

        r = ioctl(vcpu->fd, KVM_SET_VCPU_EVENTS, &state->events);
-       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_VCPU_EVENTS, r: %i",
-               r);
+       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_VCPU_EVENTS, r: %i", r);

        r = ioctl(vcpu->fd, KVM_SET_MP_STATE, &state->mp_state);
-        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_MP_STATE, r: %i",
-                r);
+        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_MP_STATE, r: %i", r);

        r = ioctl(vcpu->fd, KVM_SET_DEBUGREGS, &state->debugregs);
-        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_DEBUGREGS, r: %i",
-                r);
+        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_DEBUGREGS, r: %i", r);

        r = ioctl(vcpu->fd, KVM_SET_REGS, &state->regs);
-       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_REGS, r: %i",
-               r);
+       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_REGS, r: %i", r);

        if (state->nested.size) {
                r = ioctl(vcpu->fd, KVM_SET_NESTED_STATE, &state->nested);
-               TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_NESTED_STATE, r: %i",
-                       r);
+               TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_NESTED_STATE, r: %i", r);
        }
 }



* Re: [PATCH v2 1/3] selftest: kvm: Reorder vcpu_load_state steps for AMX
  2022-01-12 21:05   ` Sean Christopherson
@ 2022-01-13  6:31     ` Yang Zhong
  0 siblings, 0 replies; 9+ messages in thread
From: Yang Zhong @ 2022-01-13  6:31 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, pbonzini, jun.nakajima, kevin.tian, jing2.liu, yang.zhong

Hi Sean,

On Wed, Jan 12, 2022 at 09:05:02PM +0000, Sean Christopherson wrote:
> On Wed, Dec 22, 2021, Yang Zhong wrote:
> > From: Paolo Bonzini <pbonzini@redhat.com>
> > 
> > For AMX support it is recommended to load XCR0 after XFD, so
> > that KVM does not see XFD=0, XCR=1 for a save state that will
> > eventually be disabled (which would lead to premature allocation
> > of the space required for that save state).
> 
> It would be very helpful to clarify that XFD is loaded via KVM_SET_MSRS.  It took
> me longer than it should have to understand what was going on.  The large amount of
> whitespace noise in this patch certainly didn't help.  E.g. just a simple tweak:
> 
>   For AMX support it is recommended to load XCR0 after XFD, i.e. after MSRs, so
> 

    Thanks, this is clearer.


> > It is also required to load XSAVE data after XCR0 and XFD, so
> > that KVM can trigger allocation of the extra space required to
> > store AMX state.
> > 
> > Adjust vcpu_load_state to obey these new requirements.
> > 
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > Signed-off-by: Yang Zhong <yang.zhong@intel.com>
> > ---
> >  .../selftests/kvm/lib/x86_64/processor.c      | 29 ++++++++++---------
> >  1 file changed, 15 insertions(+), 14 deletions(-)
> > 
> > diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> > index 00324d73c687..9b5abf488211 100644
> > --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> > +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> > @@ -1192,9 +1192,14 @@ void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_x86_state *s
> >  	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> >  	int r;
> >  
> > -	r = ioctl(vcpu->fd, KVM_SET_XSAVE, &state->xsave);
> > -        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XSAVE, r: %i",
> > -                r);
> > +	r = ioctl(vcpu->fd, KVM_SET_SREGS, &state->sregs);
> > +	TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_SREGS, r: %i",
> > +		r);
> 
> If we're going to bother replacing spaces with tabs, might as well get rid of all
> the gratuituous newlines as well.
> 
> > +
> > +	r = ioctl(vcpu->fd, KVM_SET_MSRS, &state->msrs);
> > +	TEST_ASSERT(r == state->msrs.nmsrs,
> > +		"Unexpected result from KVM_SET_MSRS,r: %i (failed at %x)",
> > +		r, r == state->msrs.nmsrs ? -1 : state->msrs.entries[r].index);
> 
> Most people not named "Paolo" prefer to align this with the opening "(" :-)
> 
> E.g.
> 
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 97f8c2f2df36..971f41afa689 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -1158,44 +1158,36 @@ void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_x86_state *s
>         int r;
> 
>         r = ioctl(vcpu->fd, KVM_SET_SREGS, &state->sregs);
> -       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_SREGS, r: %i",
> -               r);
> +       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_SREGS, r: %i", r);
> 
>         r = ioctl(vcpu->fd, KVM_SET_MSRS, &state->msrs);
>         TEST_ASSERT(r == state->msrs.nmsrs,
> -               "Unexpected result from KVM_SET_MSRS,r: %i (failed at %x)",
> -               r, r == state->msrs.nmsrs ? -1 : state->msrs.entries[r].index);
> +                   "Unexpected result from KVM_SET_MSRS,r: %i (failed at %x)",
> +                   r, r == state->msrs.nmsrs ? -1 : state->msrs.entries[r].index);
> 
>         if (kvm_check_cap(KVM_CAP_XCRS)) {
>                 r = ioctl(vcpu->fd, KVM_SET_XCRS, &state->xcrs);
> -               TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XCRS, r: %i",
> -                           r);
> +               TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XCRS, r: %i", r);
>         }
> 
>         r = ioctl(vcpu->fd, KVM_SET_XSAVE, &state->xsave);
> -       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XSAVE, r: %i",
> -               r);
> +       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XSAVE, r: %i", r);
> 
>         r = ioctl(vcpu->fd, KVM_SET_VCPU_EVENTS, &state->events);
> -       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_VCPU_EVENTS, r: %i",
> -               r);
> +       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_VCPU_EVENTS, r: %i", r);
> 
>         r = ioctl(vcpu->fd, KVM_SET_MP_STATE, &state->mp_state);
> -        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_MP_STATE, r: %i",
> -                r);
> +        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_MP_STATE, r: %i", r);
> 
>         r = ioctl(vcpu->fd, KVM_SET_DEBUGREGS, &state->debugregs);
> -        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_DEBUGREGS, r: %i",
> -                r);
> +        TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_DEBUGREGS, r: %i", r);
> 
>         r = ioctl(vcpu->fd, KVM_SET_REGS, &state->regs);
> -       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_REGS, r: %i",
> -               r);
> +       TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_REGS, r: %i", r);
> 
>         if (state->nested.size) {
>                 r = ioctl(vcpu->fd, KVM_SET_NESTED_STATE, &state->nested);
> -               TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_NESTED_STATE, r: %i",
> -                       r);
> +               TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_NESTED_STATE, r: %i", r);
>         }
>  }

   Thanks for pointing out those issues.
   In fact, the whitespace problems are not limited to this function; if we fix
   them, we had better clean them all up at once.
   Since this series has already been merged into KVM-Next, I can submit a separate
   patch to handle the remaining whitespace issues during the 5.17-rc cycle. Thanks!

   Yang

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2022-01-13  6:46 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-12-22 21:47 [PATCH v2 0/3] AMX KVM selftest Yang Zhong
2021-12-22 14:05 ` Paolo Bonzini
2021-12-22 21:47 ` [PATCH v2 1/3] selftest: kvm: Reorder vcpu_load_state steps for AMX Yang Zhong
2022-01-12 21:05   ` Sean Christopherson
2022-01-13  6:31     ` Yang Zhong
2021-12-22 21:47 ` [PATCH v2 2/3] selftest: kvm: Move struct kvm_x86_state to header Yang Zhong
2021-12-22 21:47 ` [PATCH v2 3/3] selftest: kvm: Support amx selftest Yang Zhong
2021-12-22 15:15   ` Paolo Bonzini
2021-12-23  2:38     ` Yang Zhong
