* [RFC PATCH 0/8] Derive XSAVE state component offsets from CPUID leaf 0xd where possible
@ 2021-07-05 10:46 ` David Edmondson
  0 siblings, 0 replies; 28+ messages in thread
From: David Edmondson @ 2021-07-05 10:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Michael Roth, kvm, Roman Bolshakov,
	Paolo Bonzini, Marcelo Tosatti, babu.moger, Cameron Esfahani,
	Eduardo Habkost, David Edmondson

The offsets of the XSAVE state components within the XSAVE state area
are currently hard-coded by reference to the X86XSaveArea structure.
This structure is accurate for Intel systems at the time of writing,
but incorrect for newer AMD systems, where the state component for
protection keys is located at a different offset (0x980 rather than
0xa80).

For KVM and HVF, replace the hard-coding of the state component
offsets with data derived from CPUID leaf 0xd information.

TCG still uses the X86XSaveArea structure, as there is no underlying
CPU to use in determining appropriate values.
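
For reference, the per-component offsets and sizes in question come
from CPUID leaf 0xd: for sub-leaf n >= 2, EAX reports the size of state
component n and EBX its offset from the start of the XSAVE area. The
snippet below is a minimal standalone sketch (it is not part of this
series and is independent of the helpers that the later patches add to
kvm-cpu.c and hvf-cpu.c) that dumps the user-state components on the
host; this is where the 0x980 versus 0xa80 difference for the
protection keys component shows up in practice.

/* Build with: cc -std=c11 -o xsave-offsets xsave-offsets.c (x86 host). */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    static const char *const name[] = {
        "x87", "SSE", "AVX", "BNDREGS", "BNDCSR",
        "opmask", "ZMM_Hi256", "Hi16_ZMM", "PT", "PKRU",
    };
    unsigned int eax, ebx, ecx, edx;

    for (unsigned int n = 2; n <= 9; n++) {
        if (!__get_cpuid_count(0xd, n, &eax, &ebx, &ecx, &edx) ||
            !eax || (ecx & 1)) {
            continue; /* component absent, or supervisor state (no offset) */
        }
        /* EAX = size in bytes, EBX = offset into the XSAVE area. */
        printf("component %u (%-9s): size 0x%03x offset 0x%03x\n",
               n, name[n], eax, ebx);
    }
    return 0;
}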

This is a replacement for the changes in
https://lore.kernel.org/r/20210520145647.3483809-1-david.edmondson@oracle.com,
which simply modified the hard-coded offsets for AMD systems.

Testing on HVF is minimal (it builds and, by observation, the XSAVE
state component offsets reported to a running VM are accurate on an
older Intel system).

David Edmondson (8):
  target/i386: Declare constants for XSAVE offsets
  target/i386: Consolidate the X86XSaveArea offset checks
  target/i386: Clarify the padding requirements of X86XSaveArea
  target/i386: Pass buffer and length to XSAVE helper
  target/i386: Make x86_ext_save_areas visible outside cpu.c
  target/i386: Observe XSAVE state area offsets
  target/i386: Populate x86_ext_save_areas offsets using cpuid where
    possible
  target/i386: Move X86XSaveArea into TCG

 target/i386/cpu.c            |  18 +--
 target/i386/cpu.h            |  41 ++----
 target/i386/hvf/hvf-cpu.c    |  34 +++++
 target/i386/hvf/hvf.c        |   3 +-
 target/i386/hvf/x86hvf.c     |  19 ++-
 target/i386/kvm/kvm-cpu.c    |  36 +++++
 target/i386/kvm/kvm.c        |  52 +------
 target/i386/tcg/fpu_helper.c |   1 +
 target/i386/tcg/tcg-cpu.c    |  20 +++
 target/i386/tcg/tcg-cpu.h    |  57 ++++++++
 target/i386/xsave_helper.c   | 267 ++++++++++++++++++++++++++---------
 11 files changed, 381 insertions(+), 167 deletions(-)

-- 
2.30.2


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [RFC PATCH 1/8] target/i386: Declare constants for XSAVE offsets
  2021-07-05 10:46 ` David Edmondson
@ 2021-07-05 10:46   ` David Edmondson
  -1 siblings, 0 replies; 28+ messages in thread
From: David Edmondson @ 2021-07-05 10:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Michael Roth, kvm, Roman Bolshakov,
	Paolo Bonzini, Marcelo Tosatti, babu.moger, Cameron Esfahani,
	Eduardo Habkost, David Edmondson

Declare and use manifest constants for the XSAVE state component
offsets.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
---
 target/i386/cpu.h | 30 +++++++++++++++++++++++-------
 1 file changed, 23 insertions(+), 7 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index f7fa5870b1..aedb8f2e01 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1305,6 +1305,22 @@ typedef struct XSavePKRU {
     uint32_t padding;
 } XSavePKRU;
 
+#define XSAVE_FCW_FSW_OFFSET    0x000
+#define XSAVE_FTW_FOP_OFFSET    0x004
+#define XSAVE_CWD_RIP_OFFSET    0x008
+#define XSAVE_CWD_RDP_OFFSET    0x010
+#define XSAVE_MXCSR_OFFSET      0x018
+#define XSAVE_ST_SPACE_OFFSET   0x020
+#define XSAVE_XMM_SPACE_OFFSET  0x0a0
+#define XSAVE_XSTATE_BV_OFFSET  0x200
+#define XSAVE_AVX_OFFSET        0x240
+#define XSAVE_BNDREG_OFFSET     0x3c0
+#define XSAVE_BNDCSR_OFFSET     0x400
+#define XSAVE_OPMASK_OFFSET     0x440
+#define XSAVE_ZMM_HI256_OFFSET  0x480
+#define XSAVE_HI16_ZMM_OFFSET   0x680
+#define XSAVE_PKRU_OFFSET       0xa80
+
 typedef struct X86XSaveArea {
     X86LegacyXSaveArea legacy;
     X86XSaveHeader header;
@@ -1325,19 +1341,19 @@ typedef struct X86XSaveArea {
     XSavePKRU pkru_state;
 } X86XSaveArea;
 
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != 0x240);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveAVX) != 0x100);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != 0x3c0);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDREG) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != 0x400);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDCSR) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != 0x440);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveOpmask) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != 0x480);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveZMM_Hi256) != 0x200);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != 0x680);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveHi16_ZMM) != 0x400);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != 0xA80);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSavePKRU) != 0x8);
 
 typedef enum TPRAccess {
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [RFC PATCH 2/8] target/i386: Consolidate the X86XSaveArea offset checks
  2021-07-05 10:46 ` David Edmondson
@ 2021-07-05 10:46   ` David Edmondson
  -1 siblings, 0 replies; 28+ messages in thread
From: David Edmondson @ 2021-07-05 10:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Michael Roth, kvm, Roman Bolshakov,
	Paolo Bonzini, Marcelo Tosatti, babu.moger, Cameron Esfahani,
	Eduardo Habkost, David Edmondson

Rather than having similar but different checks in cpu.h and kvm.c,
move them all to cpu.h.
---
 target/i386/cpu.h     | 22 +++++++++++++++-------
 target/i386/kvm/kvm.c | 39 ---------------------------------------
 2 files changed, 15 insertions(+), 46 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index aedb8f2e01..6590ad6391 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1341,21 +1341,29 @@ typedef struct X86XSaveArea {
     XSavePKRU pkru_state;
 } X86XSaveArea;
 
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveAVX) != 0x100);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDREG) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDCSR) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveOpmask) != 0x40);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveZMM_Hi256) != 0x200);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSaveHi16_ZMM) != 0x400);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
 QEMU_BUILD_BUG_ON(sizeof(XSavePKRU) != 0x8);
 
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
+
 typedef enum TPRAccess {
     TPR_ACCESS_READ,
     TPR_ACCESS_WRITE,
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 04e4ec063f..3ab1d71775 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -2466,45 +2466,6 @@ static int kvm_put_fpu(X86CPU *cpu)
     return kvm_vcpu_ioctl(CPU(cpu), KVM_SET_FPU, &fpu);
 }
 
-#define XSAVE_FCW_FSW     0
-#define XSAVE_FTW_FOP     1
-#define XSAVE_CWD_RIP     2
-#define XSAVE_CWD_RDP     4
-#define XSAVE_MXCSR       6
-#define XSAVE_ST_SPACE    8
-#define XSAVE_XMM_SPACE   40
-#define XSAVE_XSTATE_BV   128
-#define XSAVE_YMMH_SPACE  144
-#define XSAVE_BNDREGS     240
-#define XSAVE_BNDCSR      256
-#define XSAVE_OPMASK      272
-#define XSAVE_ZMM_Hi256   288
-#define XSAVE_Hi16_ZMM    416
-#define XSAVE_PKRU        672
-
-#define XSAVE_BYTE_OFFSET(word_offset) \
-    ((word_offset) * sizeof_field(struct kvm_xsave, region[0]))
-
-#define ASSERT_OFFSET(word_offset, field) \
-    QEMU_BUILD_BUG_ON(XSAVE_BYTE_OFFSET(word_offset) != \
-                      offsetof(X86XSaveArea, field))
-
-ASSERT_OFFSET(XSAVE_FCW_FSW, legacy.fcw);
-ASSERT_OFFSET(XSAVE_FTW_FOP, legacy.ftw);
-ASSERT_OFFSET(XSAVE_CWD_RIP, legacy.fpip);
-ASSERT_OFFSET(XSAVE_CWD_RDP, legacy.fpdp);
-ASSERT_OFFSET(XSAVE_MXCSR, legacy.mxcsr);
-ASSERT_OFFSET(XSAVE_ST_SPACE, legacy.fpregs);
-ASSERT_OFFSET(XSAVE_XMM_SPACE, legacy.xmm_regs);
-ASSERT_OFFSET(XSAVE_XSTATE_BV, header.xstate_bv);
-ASSERT_OFFSET(XSAVE_YMMH_SPACE, avx_state);
-ASSERT_OFFSET(XSAVE_BNDREGS, bndreg_state);
-ASSERT_OFFSET(XSAVE_BNDCSR, bndcsr_state);
-ASSERT_OFFSET(XSAVE_OPMASK, opmask_state);
-ASSERT_OFFSET(XSAVE_ZMM_Hi256, zmm_hi256_state);
-ASSERT_OFFSET(XSAVE_Hi16_ZMM, hi16_zmm_state);
-ASSERT_OFFSET(XSAVE_PKRU, pkru_state);
-
 static int kvm_put_xsave(X86CPU *cpu)
 {
     CPUX86State *env = &cpu->env;
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread
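
The constants removed from kvm.c above were indices into the 32-bit
region[] array of struct kvm_xsave, so each one is a quarter of the
corresponding byte-offset constant now asserted in cpu.h. A small
standalone check of that equivalence (a hand-written sketch, not code
taken from the patch):

/* The old kvm.c values indexed 32-bit words in kvm_xsave.region[], so
 * word_offset * 4 must match the byte-offset constants from patch 1. */
#include <assert.h>

#define WORD(n) ((n) * 4)

static_assert(WORD(128) == 0x200, "XSAVE_XSTATE_BV  -> XSAVE_XSTATE_BV_OFFSET");
static_assert(WORD(144) == 0x240, "XSAVE_YMMH_SPACE -> XSAVE_AVX_OFFSET");
static_assert(WORD(240) == 0x3c0, "XSAVE_BNDREGS    -> XSAVE_BNDREG_OFFSET");
static_assert(WORD(256) == 0x400, "XSAVE_BNDCSR     -> XSAVE_BNDCSR_OFFSET");
static_assert(WORD(272) == 0x440, "XSAVE_OPMASK     -> XSAVE_OPMASK_OFFSET");
static_assert(WORD(288) == 0x480, "XSAVE_ZMM_Hi256  -> XSAVE_ZMM_HI256_OFFSET");
static_assert(WORD(416) == 0x680, "XSAVE_Hi16_ZMM   -> XSAVE_HI16_ZMM_OFFSET");
static_assert(WORD(672) == 0xa80, "XSAVE_PKRU       -> XSAVE_PKRU_OFFSET");

int main(void) { return 0; }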

* [RFC PATCH 3/8] target/i386: Clarify the padding requirements of X86XSaveArea
  2021-07-05 10:46 ` David Edmondson
@ 2021-07-05 10:46   ` David Edmondson
  -1 siblings, 0 replies; 28+ messages in thread
From: David Edmondson @ 2021-07-05 10:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Michael Roth, kvm, Roman Bolshakov,
	Paolo Bonzini, Marcelo Tosatti, babu.moger, Cameron Esfahani,
	Eduardo Habkost, David Edmondson

Replace hard-coded offsets and structure-element sizes with defined
constants or sizeof().

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
---
 target/i386/cpu.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 6590ad6391..92f9ca264c 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1329,7 +1329,13 @@ typedef struct X86XSaveArea {
 
     /* AVX State: */
     XSaveAVX avx_state;
-    uint8_t padding[960 - 576 - sizeof(XSaveAVX)];
+
+    /* Ensure that XSaveBNDREG is properly aligned. */
+    uint8_t padding[XSAVE_BNDREG_OFFSET
+                    - sizeof(X86LegacyXSaveArea)
+                    - sizeof(X86XSaveHeader)
+                    - sizeof(XSaveAVX)];
+
     /* MPX State: */
     XSaveBNDREG bndreg_state;
     XSaveBNDCSR bndcsr_state;
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread
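
One note on the arithmetic being replaced above: XSAVE_BNDREG_OFFSET is
0x3c0 (960), and assuming the architectural sizes
sizeof(X86LegacyXSaveArea) == 512 and sizeof(X86XSaveHeader) == 64, the
old magic 960 - 576 and the new expression denote the same number of
padding bytes (sizeof(XSaveAVX) is subtracted in both). A small
standalone check under those assumptions:

/* Assumes sizeof(X86LegacyXSaveArea) == 512 and sizeof(X86XSaveHeader)
 * == 64, the architectural legacy-region and XSAVE-header sizes. */
#include <assert.h>

#define XSAVE_BNDREG_OFFSET 0x3c0 /* from patch 1 */
#define LEGACY_SIZE         512   /* assumed sizeof(X86LegacyXSaveArea) */
#define HEADER_SIZE         64    /* assumed sizeof(X86XSaveHeader) */

static_assert(960 - 576 == XSAVE_BNDREG_OFFSET - LEGACY_SIZE - HEADER_SIZE,
              "old and new padding expressions agree");

int main(void) { return 0; }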

* [RFC PATCH 4/8] target/i386: Pass buffer and length to XSAVE helper
  2021-07-05 10:46 ` David Edmondson
@ 2021-07-05 10:46   ` David Edmondson
  -1 siblings, 0 replies; 28+ messages in thread
From: David Edmondson @ 2021-07-05 10:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Michael Roth, kvm, Roman Bolshakov,
	Paolo Bonzini, Marcelo Tosatti, babu.moger, Cameron Esfahani,
	Eduardo Habkost, David Edmondson

In preparation for removing assumptions about XSAVE area offsets, pass
a buffer pointer and buffer length to the XSAVE helper functions.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
---
 target/i386/cpu.h          |  5 +++--
 target/i386/hvf/hvf.c      |  3 ++-
 target/i386/hvf/x86hvf.c   | 19 ++++++++-----------
 target/i386/kvm/kvm.c      | 13 +++++++------
 target/i386/xsave_helper.c | 17 +++++++++--------
 5 files changed, 29 insertions(+), 28 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 92f9ca264c..ada2941c6e 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1667,6 +1667,7 @@ typedef struct CPUX86State {
     uint64_t apic_bus_freq;
 #if defined(CONFIG_KVM) || defined(CONFIG_HVF)
     void *xsave_buf;
+    uint32_t xsave_buf_len;
 #endif
 #if defined(CONFIG_KVM)
     struct kvm_nested_state *nested_state;
@@ -2227,8 +2228,8 @@ void x86_cpu_dump_local_apic_state(CPUState *cs, int flags);
 /* cpu.c */
 bool cpu_is_bsp(X86CPU *cpu);
 
-void x86_cpu_xrstor_all_areas(X86CPU *cpu, const X86XSaveArea *buf);
-void x86_cpu_xsave_all_areas(X86CPU *cpu, X86XSaveArea *buf);
+void x86_cpu_xrstor_all_areas(X86CPU *cpu, const void *buf, uint32_t buflen);
+void x86_cpu_xsave_all_areas(X86CPU *cpu, void *buf, uint32_t buflen);
 void x86_update_hflags(CPUX86State* env);
 
 static inline bool hyperv_feat_enabled(X86CPU *cpu, int feat)
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index 346dbcc26f..e62e8df028 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -267,7 +267,8 @@ int hvf_arch_init_vcpu(CPUState *cpu)
     wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, 0);
 
     x86cpu = X86_CPU(cpu);
-    x86cpu->env.xsave_buf = qemu_memalign(4096, 4096);
+    x86cpu->env.xsave_buf_len = 4096;
+    x86cpu->env.xsave_buf = qemu_memalign(4096, x86cpu->env.xsave_buf_len);
 
     hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_STAR, 1);
     hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_LSTAR, 1);
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 2ced2c2478..05ec1bddc4 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -73,14 +73,12 @@ void hvf_get_segment(SegmentCache *qseg, struct vmx_segment *vmx_seg)
 
 void hvf_put_xsave(CPUState *cpu_state)
 {
+    void *xsave = X86_CPU(cpu_state)->env.xsave_buf;
+    uint32_t xsave_len = X86_CPU(cpu_state)->env.xsave_buf_len;
 
-    struct X86XSaveArea *xsave;
+    x86_cpu_xsave_all_areas(X86_CPU(cpu_state), xsave, xsave_len);
 
-    xsave = X86_CPU(cpu_state)->env.xsave_buf;
-
-    x86_cpu_xsave_all_areas(X86_CPU(cpu_state), xsave);
-
-    if (hv_vcpu_write_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {
+    if (hv_vcpu_write_fpstate(cpu_state->hvf->fd, xsave, xsave_len)) {
         abort();
     }
 }
@@ -158,15 +156,14 @@ void hvf_put_msrs(CPUState *cpu_state)
 
 void hvf_get_xsave(CPUState *cpu_state)
 {
-    struct X86XSaveArea *xsave;
-
-    xsave = X86_CPU(cpu_state)->env.xsave_buf;
+    void *xsave = X86_CPU(cpu_state)->env.xsave_buf;
+    uint32_t xsave_len = X86_CPU(cpu_state)->env.xsave_buf_len;
 
-    if (hv_vcpu_read_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) {
+    if (hv_vcpu_read_fpstate(cpu_state->hvf->fd, xsave, xsave_len)) {
         abort();
     }
 
-    x86_cpu_xrstor_all_areas(X86_CPU(cpu_state), xsave);
+    x86_cpu_xrstor_all_areas(X86_CPU(cpu_state), xsave, xsave_len);
 }
 
 void hvf_get_segments(CPUState *cpu_state)
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 3ab1d71775..41b0764ab7 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -1888,8 +1888,9 @@ int kvm_arch_init_vcpu(CPUState *cs)
     }
 
     if (has_xsave) {
-        env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
-        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
+        env->xsave_buf_len = sizeof(struct kvm_xsave);
+        env->xsave_buf = qemu_memalign(4096, env->xsave_buf_len);
+        memset(env->xsave_buf, 0, env->xsave_buf_len);
     }
 
     max_nested_state_len = kvm_max_nested_state_length();
@@ -2469,12 +2470,12 @@ static int kvm_put_fpu(X86CPU *cpu)
 static int kvm_put_xsave(X86CPU *cpu)
 {
     CPUX86State *env = &cpu->env;
-    X86XSaveArea *xsave = env->xsave_buf;
+    void *xsave = env->xsave_buf;
 
     if (!has_xsave) {
         return kvm_put_fpu(cpu);
     }
-    x86_cpu_xsave_all_areas(cpu, xsave);
+    x86_cpu_xsave_all_areas(cpu, xsave, env->xsave_buf_len);
 
     return kvm_vcpu_ioctl(CPU(cpu), KVM_SET_XSAVE, xsave);
 }
@@ -3119,7 +3120,7 @@ static int kvm_get_fpu(X86CPU *cpu)
 static int kvm_get_xsave(X86CPU *cpu)
 {
     CPUX86State *env = &cpu->env;
-    X86XSaveArea *xsave = env->xsave_buf;
+    void *xsave = env->xsave_buf;
     int ret;
 
     if (!has_xsave) {
@@ -3130,7 +3131,7 @@ static int kvm_get_xsave(X86CPU *cpu)
     if (ret < 0) {
         return ret;
     }
-    x86_cpu_xrstor_all_areas(cpu, xsave);
+    x86_cpu_xrstor_all_areas(cpu, xsave, env->xsave_buf_len);
 
     return 0;
 }
diff --git a/target/i386/xsave_helper.c b/target/i386/xsave_helper.c
index 818115e7d2..b16c6ac0fe 100644
--- a/target/i386/xsave_helper.c
+++ b/target/i386/xsave_helper.c
@@ -6,14 +6,16 @@
 
 #include "cpu.h"
 
-void x86_cpu_xsave_all_areas(X86CPU *cpu, X86XSaveArea *buf)
+void x86_cpu_xsave_all_areas(X86CPU *cpu, void *buf, uint32_t buflen)
 {
     CPUX86State *env = &cpu->env;
     X86XSaveArea *xsave = buf;
-
     uint16_t cwd, swd, twd;
     int i;
-    memset(xsave, 0, sizeof(X86XSaveArea));
+
+    assert(buflen >= sizeof(*xsave));
+
+    memset(xsave, 0, buflen);
     twd = 0;
     swd = env->fpus & ~(7 << 11);
     swd |= (env->fpstt & 7) << 11;
@@ -56,17 +58,17 @@ void x86_cpu_xsave_all_areas(X86CPU *cpu, X86XSaveArea *buf)
             16 * sizeof env->xmm_regs[16]);
     memcpy(&xsave->pkru_state, &env->pkru, sizeof env->pkru);
 #endif
-
 }
 
-void x86_cpu_xrstor_all_areas(X86CPU *cpu, const X86XSaveArea *buf)
+void x86_cpu_xrstor_all_areas(X86CPU *cpu, const void *buf, uint32_t buflen)
 {
-
     CPUX86State *env = &cpu->env;
     const X86XSaveArea *xsave = buf;
-
     int i;
     uint16_t cwd, swd, twd;
+
+    assert(buflen >= sizeof(*xsave));
+
     cwd = xsave->legacy.fcw;
     swd = xsave->legacy.fsw;
     twd = xsave->legacy.ftw;
@@ -108,5 +110,4 @@ void x86_cpu_xrstor_all_areas(X86CPU *cpu, const X86XSaveArea *buf)
            16 * sizeof env->xmm_regs[16]);
     memcpy(&env->pkru, &xsave->pkru_state, sizeof env->pkru);
 #endif
-
 }
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [RFC PATCH 5/8] target/i386: Make x86_ext_save_areas visible outside cpu.c
  2021-07-05 10:46 ` David Edmondson
@ 2021-07-05 10:46   ` David Edmondson
  -1 siblings, 0 replies; 28+ messages in thread
From: David Edmondson @ 2021-07-05 10:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Michael Roth, kvm, Roman Bolshakov,
	Paolo Bonzini, Marcelo Tosatti, babu.moger, Cameron Esfahani,
	Eduardo Habkost, David Edmondson

Provide visibility of the x86_ext_save_areas array and associated type
outside of cpu.c.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
---
 target/i386/cpu.c | 7 +------
 target/i386/cpu.h | 9 +++++++++
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index d8f3ab3192..13caa0de50 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1304,12 +1304,7 @@ static const X86RegisterInfo32 x86_reg_info_32[CPU_NB_REGS32] = {
 };
 #undef REGISTER
 
-typedef struct ExtSaveArea {
-    uint32_t feature, bits;
-    uint32_t offset, size;
-} ExtSaveArea;
-
-static const ExtSaveArea x86_ext_save_areas[] = {
+const ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT] = {
     [XSTATE_FP_BIT] = {
         /* x87 FP state component is always enabled if XSAVE is supported */
         .feature = FEAT_1_ECX, .bits = CPUID_EXT_XSAVE,
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index ada2941c6e..c9c0a34330 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1370,6 +1370,15 @@ QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFF
 QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
 QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
 
+typedef struct ExtSaveArea {
+    uint32_t feature, bits;
+    uint32_t offset, size;
+} ExtSaveArea;
+
+#define XSAVE_STATE_AREA_COUNT (XSTATE_PKRU_BIT + 1)
+
+extern const ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT];
+
 typedef enum TPRAccess {
     TPR_ACCESS_READ,
     TPR_ACCESS_WRITE,
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [RFC PATCH 6/8] target/i386: Observe XSAVE state area offsets
  2021-07-05 10:46 ` David Edmondson
@ 2021-07-05 10:46   ` David Edmondson
  -1 siblings, 0 replies; 28+ messages in thread
From: David Edmondson @ 2021-07-05 10:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Michael Roth, kvm, Roman Bolshakov,
	Paolo Bonzini, Marcelo Tosatti, babu.moger, Cameron Esfahani,
	Eduardo Habkost, David Edmondson

Rather than relying on the X86XSaveArea structure definition directly,
the routines that manipulate the XSAVE state area should observe the
offsets declared in the x86_ext_save_areas array.

Currently the offsets declared in the array are derived from the
structure definition, resulting in no functional change.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
---
 target/i386/xsave_helper.c | 262 ++++++++++++++++++++++++++++---------
 1 file changed, 200 insertions(+), 62 deletions(-)

diff --git a/target/i386/xsave_helper.c b/target/i386/xsave_helper.c
index b16c6ac0fe..ac61a96344 100644
--- a/target/i386/xsave_helper.c
+++ b/target/i386/xsave_helper.c
@@ -9,13 +9,20 @@
 void x86_cpu_xsave_all_areas(X86CPU *cpu, void *buf, uint32_t buflen)
 {
     CPUX86State *env = &cpu->env;
-    X86XSaveArea *xsave = buf;
-    uint16_t cwd, swd, twd;
+    const ExtSaveArea *e, *f;
     int i;
 
-    assert(buflen >= sizeof(*xsave));
+    X86LegacyXSaveArea *legacy;
+    X86XSaveHeader *header;
+    uint16_t cwd, swd, twd;
+
+    memset(buf, 0, buflen);
+
+    e = &x86_ext_save_areas[XSTATE_FP_BIT];
+
+    legacy = buf + e->offset;
+    header = buf + e->offset + sizeof(*legacy);
 
-    memset(xsave, 0, buflen);
     twd = 0;
     swd = env->fpus & ~(7 << 11);
     swd |= (env->fpstt & 7) << 11;
@@ -23,91 +30,222 @@ void x86_cpu_xsave_all_areas(X86CPU *cpu, void *buf, uint32_t buflen)
     for (i = 0; i < 8; ++i) {
         twd |= (!env->fptags[i]) << i;
     }
-    xsave->legacy.fcw = cwd;
-    xsave->legacy.fsw = swd;
-    xsave->legacy.ftw = twd;
-    xsave->legacy.fpop = env->fpop;
-    xsave->legacy.fpip = env->fpip;
-    xsave->legacy.fpdp = env->fpdp;
-    memcpy(&xsave->legacy.fpregs, env->fpregs,
-            sizeof env->fpregs);
-    xsave->legacy.mxcsr = env->mxcsr;
-    xsave->header.xstate_bv = env->xstate_bv;
-    memcpy(&xsave->bndreg_state.bnd_regs, env->bnd_regs,
-            sizeof env->bnd_regs);
-    xsave->bndcsr_state.bndcsr = env->bndcs_regs;
-    memcpy(&xsave->opmask_state.opmask_regs, env->opmask_regs,
-            sizeof env->opmask_regs);
+    legacy->fcw = cwd;
+    legacy->fsw = swd;
+    legacy->ftw = twd;
+    legacy->fpop = env->fpop;
+    legacy->fpip = env->fpip;
+    legacy->fpdp = env->fpdp;
+    memcpy(&legacy->fpregs, env->fpregs,
+           sizeof(env->fpregs));
+    legacy->mxcsr = env->mxcsr;
 
     for (i = 0; i < CPU_NB_REGS; i++) {
-        uint8_t *xmm = xsave->legacy.xmm_regs[i];
-        uint8_t *ymmh = xsave->avx_state.ymmh[i];
-        uint8_t *zmmh = xsave->zmm_hi256_state.zmm_hi256[i];
+        uint8_t *xmm = legacy->xmm_regs[i];
+
         stq_p(xmm,     env->xmm_regs[i].ZMM_Q(0));
-        stq_p(xmm+8,   env->xmm_regs[i].ZMM_Q(1));
-        stq_p(ymmh,    env->xmm_regs[i].ZMM_Q(2));
-        stq_p(ymmh+8,  env->xmm_regs[i].ZMM_Q(3));
-        stq_p(zmmh,    env->xmm_regs[i].ZMM_Q(4));
-        stq_p(zmmh+8,  env->xmm_regs[i].ZMM_Q(5));
-        stq_p(zmmh+16, env->xmm_regs[i].ZMM_Q(6));
-        stq_p(zmmh+24, env->xmm_regs[i].ZMM_Q(7));
+        stq_p(xmm + 8, env->xmm_regs[i].ZMM_Q(1));
+    }
+
+    header->xstate_bv = env->xstate_bv;
+
+    e = &x86_ext_save_areas[XSTATE_YMM_BIT];
+    if (e->size && e->offset) {
+        XSaveAVX *avx;
+
+        avx = buf + e->offset;
+
+        for (i = 0; i < CPU_NB_REGS; i++) {
+            uint8_t *ymmh = avx->ymmh[i];
+
+            stq_p(ymmh,     env->xmm_regs[i].ZMM_Q(2));
+            stq_p(ymmh + 8, env->xmm_regs[i].ZMM_Q(3));
+        }
+    }
+
+    e = &x86_ext_save_areas[XSTATE_BNDREGS_BIT];
+    if (e->size && e->offset) {
+        XSaveBNDREG *bndreg;
+        XSaveBNDCSR *bndcsr;
+
+        f = &x86_ext_save_areas[XSTATE_BNDCSR_BIT];
+        assert(f->size);
+        assert(f->offset);
+
+        bndreg = buf + e->offset;
+        bndcsr = buf + f->offset;
+
+        memcpy(&bndreg->bnd_regs, env->bnd_regs,
+               sizeof(env->bnd_regs));
+        bndcsr->bndcsr = env->bndcs_regs;
     }
 
+    e = &x86_ext_save_areas[XSTATE_OPMASK_BIT];
+    if (e->size && e->offset) {
+        XSaveOpmask *opmask;
+        XSaveZMM_Hi256 *zmm_hi256;
+#ifdef TARGET_X86_64
+        XSaveHi16_ZMM *hi16_zmm;
+#endif
+
+        f = &x86_ext_save_areas[XSTATE_ZMM_Hi256_BIT];
+        assert(f->size);
+        assert(f->offset);
+
+        opmask = buf + e->offset;
+        zmm_hi256 = buf + f->offset;
+
+        memcpy(&opmask->opmask_regs, env->opmask_regs,
+               sizeof(env->opmask_regs));
+
+        for (i = 0; i < CPU_NB_REGS; i++) {
+            uint8_t *zmmh = zmm_hi256->zmm_hi256[i];
+
+            stq_p(zmmh,      env->xmm_regs[i].ZMM_Q(4));
+            stq_p(zmmh + 8,  env->xmm_regs[i].ZMM_Q(5));
+            stq_p(zmmh + 16, env->xmm_regs[i].ZMM_Q(6));
+            stq_p(zmmh + 24, env->xmm_regs[i].ZMM_Q(7));
+        }
+
 #ifdef TARGET_X86_64
-    memcpy(&xsave->hi16_zmm_state.hi16_zmm, &env->xmm_regs[16],
-            16 * sizeof env->xmm_regs[16]);
-    memcpy(&xsave->pkru_state, &env->pkru, sizeof env->pkru);
+        f = &x86_ext_save_areas[XSTATE_Hi16_ZMM_BIT];
+        assert(f->size);
+        assert(f->offset);
+
+        hi16_zmm = buf + f->offset;
+
+        memcpy(&hi16_zmm->hi16_zmm, &env->xmm_regs[16],
+               16 * sizeof(env->xmm_regs[16]));
+#endif
+    }
+
+#ifdef TARGET_X86_64
+    e = &x86_ext_save_areas[XSTATE_PKRU_BIT];
+    if (e->size && e->offset) {
+        XSavePKRU *pkru = buf + e->offset;
+
+        memcpy(pkru, &env->pkru, sizeof(env->pkru));
+    }
 #endif
 }
 
 void x86_cpu_xrstor_all_areas(X86CPU *cpu, const void *buf, uint32_t buflen)
 {
     CPUX86State *env = &cpu->env;
-    const X86XSaveArea *xsave = buf;
+    const ExtSaveArea *e, *f, *g;
     int i;
+
+    const X86LegacyXSaveArea *legacy;
+    const X86XSaveHeader *header;
     uint16_t cwd, swd, twd;
 
-    assert(buflen >= sizeof(*xsave));
+    e = &x86_ext_save_areas[XSTATE_FP_BIT];
 
-    cwd = xsave->legacy.fcw;
-    swd = xsave->legacy.fsw;
-    twd = xsave->legacy.ftw;
-    env->fpop = xsave->legacy.fpop;
+    legacy = buf + e->offset;
+    header = buf + e->offset + sizeof(*legacy);
+
+    cwd = legacy->fcw;
+    swd = legacy->fsw;
+    twd = legacy->ftw;
+    env->fpop = legacy->fpop;
     env->fpstt = (swd >> 11) & 7;
     env->fpus = swd;
     env->fpuc = cwd;
     for (i = 0; i < 8; ++i) {
         env->fptags[i] = !((twd >> i) & 1);
     }
-    env->fpip = xsave->legacy.fpip;
-    env->fpdp = xsave->legacy.fpdp;
-    env->mxcsr = xsave->legacy.mxcsr;
-    memcpy(env->fpregs, &xsave->legacy.fpregs,
-            sizeof env->fpregs);
-    env->xstate_bv = xsave->header.xstate_bv;
-    memcpy(env->bnd_regs, &xsave->bndreg_state.bnd_regs,
-            sizeof env->bnd_regs);
-    env->bndcs_regs = xsave->bndcsr_state.bndcsr;
-    memcpy(env->opmask_regs, &xsave->opmask_state.opmask_regs,
-            sizeof env->opmask_regs);
+    env->fpip = legacy->fpip;
+    env->fpdp = legacy->fpdp;
+    env->mxcsr = legacy->mxcsr;
+    memcpy(env->fpregs, &legacy->fpregs,
+           sizeof(env->fpregs));
 
     for (i = 0; i < CPU_NB_REGS; i++) {
-        const uint8_t *xmm = xsave->legacy.xmm_regs[i];
-        const uint8_t *ymmh = xsave->avx_state.ymmh[i];
-        const uint8_t *zmmh = xsave->zmm_hi256_state.zmm_hi256[i];
+        const uint8_t *xmm = legacy->xmm_regs[i];
+
         env->xmm_regs[i].ZMM_Q(0) = ldq_p(xmm);
-        env->xmm_regs[i].ZMM_Q(1) = ldq_p(xmm+8);
-        env->xmm_regs[i].ZMM_Q(2) = ldq_p(ymmh);
-        env->xmm_regs[i].ZMM_Q(3) = ldq_p(ymmh+8);
-        env->xmm_regs[i].ZMM_Q(4) = ldq_p(zmmh);
-        env->xmm_regs[i].ZMM_Q(5) = ldq_p(zmmh+8);
-        env->xmm_regs[i].ZMM_Q(6) = ldq_p(zmmh+16);
-        env->xmm_regs[i].ZMM_Q(7) = ldq_p(zmmh+24);
+        env->xmm_regs[i].ZMM_Q(1) = ldq_p(xmm + 8);
+    }
+
+    env->xstate_bv = header->xstate_bv;
+
+    e = &x86_ext_save_areas[XSTATE_YMM_BIT];
+    if (e->size && e->offset) {
+        const XSaveAVX *avx;
+
+        avx = buf + e->offset;
+        for (i = 0; i < CPU_NB_REGS; i++) {
+            const uint8_t *ymmh = avx->ymmh[i];
+
+            env->xmm_regs[i].ZMM_Q(2) = ldq_p(ymmh);
+            env->xmm_regs[i].ZMM_Q(3) = ldq_p(ymmh + 8);
+        }
+    }
+
+    e = &x86_ext_save_areas[XSTATE_BNDREGS_BIT];
+    if (e->size && e->offset) {
+        const XSaveBNDREG *bndreg;
+        const XSaveBNDCSR *bndcsr;
+
+        f = &x86_ext_save_areas[XSTATE_BNDCSR_BIT];
+        assert(f->size);
+        assert(f->offset);
+
+        bndreg = buf + e->offset;
+        bndcsr = buf + f->offset;
+
+        memcpy(env->bnd_regs, &bndreg->bnd_regs,
+               sizeof(env->bnd_regs));
+        env->bndcs_regs = bndcsr->bndcsr;
     }
 
+    e = &x86_ext_save_areas[XSTATE_OPMASK_BIT];
+    if (e->size && e->offset) {
+        const XSaveOpmask *opmask;
+        const XSaveZMM_Hi256 *zmm_hi256;
 #ifdef TARGET_X86_64
-    memcpy(&env->xmm_regs[16], &xsave->hi16_zmm_state.hi16_zmm,
-           16 * sizeof env->xmm_regs[16]);
-    memcpy(&env->pkru, &xsave->pkru_state, sizeof env->pkru);
+        const XSaveHi16_ZMM *hi16_zmm;
+#endif
+
+        f = &x86_ext_save_areas[XSTATE_ZMM_Hi256_BIT];
+        assert(f->size);
+        assert(f->offset);
+
+        g = &x86_ext_save_areas[XSTATE_Hi16_ZMM_BIT];
+        assert(g->size);
+        assert(g->offset);
+
+        opmask = buf + e->offset;
+        zmm_hi256 = buf + f->offset;
+#ifdef TARGET_X86_64
+        hi16_zmm = buf + g->offset;
+#endif
+
+        memcpy(env->opmask_regs, &opmask->opmask_regs,
+               sizeof(env->opmask_regs));
+
+        for (i = 0; i < CPU_NB_REGS; i++) {
+            const uint8_t *zmmh = zmm_hi256->zmm_hi256[i];
+
+            env->xmm_regs[i].ZMM_Q(4) = ldq_p(zmmh);
+            env->xmm_regs[i].ZMM_Q(5) = ldq_p(zmmh + 8);
+            env->xmm_regs[i].ZMM_Q(6) = ldq_p(zmmh + 16);
+            env->xmm_regs[i].ZMM_Q(7) = ldq_p(zmmh + 24);
+        }
+
+#ifdef TARGET_X86_64
+        memcpy(&env->xmm_regs[16], &hi16_zmm->hi16_zmm,
+               16 * sizeof(env->xmm_regs[16]));
+#endif
+    }
+
+#ifdef TARGET_X86_64
+    e = &x86_ext_save_areas[XSTATE_PKRU_BIT];
+    if (e->size && e->offset) {
+        const XSavePKRU *pkru;
+
+        pkru = buf + e->offset;
+        memcpy(&env->pkru, pkru, sizeof(env->pkru));
+    }
 #endif
 }
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread

-        env->xmm_regs[i].ZMM_Q(2) = ldq_p(ymmh);
-        env->xmm_regs[i].ZMM_Q(3) = ldq_p(ymmh+8);
-        env->xmm_regs[i].ZMM_Q(4) = ldq_p(zmmh);
-        env->xmm_regs[i].ZMM_Q(5) = ldq_p(zmmh+8);
-        env->xmm_regs[i].ZMM_Q(6) = ldq_p(zmmh+16);
-        env->xmm_regs[i].ZMM_Q(7) = ldq_p(zmmh+24);
+        env->xmm_regs[i].ZMM_Q(1) = ldq_p(xmm + 8);
+    }
+
+    env->xstate_bv = header->xstate_bv;
+
+    e = &x86_ext_save_areas[XSTATE_YMM_BIT];
+    if (e->size && e->offset) {
+        const XSaveAVX *avx;
+
+        avx = buf + e->offset;
+        for (i = 0; i < CPU_NB_REGS; i++) {
+            const uint8_t *ymmh = avx->ymmh[i];
+
+            env->xmm_regs[i].ZMM_Q(2) = ldq_p(ymmh);
+            env->xmm_regs[i].ZMM_Q(3) = ldq_p(ymmh + 8);
+        }
+    }
+
+    e = &x86_ext_save_areas[XSTATE_BNDREGS_BIT];
+    if (e->size && e->offset) {
+        const XSaveBNDREG *bndreg;
+        const XSaveBNDCSR *bndcsr;
+
+        f = &x86_ext_save_areas[XSTATE_BNDCSR_BIT];
+        assert(f->size);
+        assert(f->offset);
+
+        bndreg = buf + e->offset;
+        bndcsr = buf + f->offset;
+
+        memcpy(env->bnd_regs, &bndreg->bnd_regs,
+               sizeof(env->bnd_regs));
+        env->bndcs_regs = bndcsr->bndcsr;
     }
 
+    e = &x86_ext_save_areas[XSTATE_OPMASK_BIT];
+    if (e->size && e->offset) {
+        const XSaveOpmask *opmask;
+        const XSaveZMM_Hi256 *zmm_hi256;
 #ifdef TARGET_X86_64
-    memcpy(&env->xmm_regs[16], &xsave->hi16_zmm_state.hi16_zmm,
-           16 * sizeof env->xmm_regs[16]);
-    memcpy(&env->pkru, &xsave->pkru_state, sizeof env->pkru);
+        const XSaveHi16_ZMM *hi16_zmm;
+#endif
+
+        f = &x86_ext_save_areas[XSTATE_ZMM_Hi256_BIT];
+        assert(f->size);
+        assert(f->offset);
+
+        g = &x86_ext_save_areas[XSTATE_Hi16_ZMM_BIT];
+        assert(g->size);
+        assert(g->offset);
+
+        opmask = buf + e->offset;
+        zmm_hi256 = buf + f->offset;
+#ifdef TARGET_X86_64
+        hi16_zmm = buf + g->offset;
+#endif
+
+        memcpy(env->opmask_regs, &opmask->opmask_regs,
+               sizeof(env->opmask_regs));
+
+        for (i = 0; i < CPU_NB_REGS; i++) {
+            const uint8_t *zmmh = zmm_hi256->zmm_hi256[i];
+
+            env->xmm_regs[i].ZMM_Q(4) = ldq_p(zmmh);
+            env->xmm_regs[i].ZMM_Q(5) = ldq_p(zmmh + 8);
+            env->xmm_regs[i].ZMM_Q(6) = ldq_p(zmmh + 16);
+            env->xmm_regs[i].ZMM_Q(7) = ldq_p(zmmh + 24);
+        }
+
+#ifdef TARGET_X86_64
+        memcpy(&env->xmm_regs[16], &hi16_zmm->hi16_zmm,
+               16 * sizeof(env->xmm_regs[16]));
+#endif
+    }
+
+#ifdef TARGET_X86_64
+    e = &x86_ext_save_areas[XSTATE_PKRU_BIT];
+    if (e->size && e->offset) {
+        const XSavePKRU *pkru;
+
+        pkru = buf + e->offset;
+        memcpy(&env->pkru, pkru, sizeof(env->pkru));
+    }
 #endif
 }
-- 
2.30.2



^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [RFC PATCH 7/8] target/i386: Populate x86_ext_save_areas offsets using cpuid where possible
  2021-07-05 10:46 ` David Edmondson
@ 2021-07-05 10:46   ` David Edmondson
  -1 siblings, 0 replies; 28+ messages in thread
From: David Edmondson @ 2021-07-05 10:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Michael Roth, kvm, Roman Bolshakov,
	Paolo Bonzini, Marcelo Tosatti, babu.moger, Cameron Esfahani,
	Eduardo Habkost, David Edmondson

Rather than relying on the X86XSaveArea structure definition,
determine the offset of XSAVE state areas using CPUID leaf 0xd where
possible (KVM and HVF).
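
For reference, the leaf 0xd layout consumed by the hooks below can be
inspected with a small host-side program along these lines (an
illustrative sketch only, not part of the patch: it queries the host
CPUID directly rather than the accelerator's supported-CPUID interface,
and assumes a GCC or Clang toolchain that provides <cpuid.h>):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx, i;

    /* State components above x87/SSE, i.e. bits 2 (AVX) to 9 (PKRU). */
    for (i = 2; i <= 9; i++) {
        if (!__get_cpuid_count(0xd, i, &eax, &ebx, &ecx, &edx)) {
            return 1; /* CPUID leaf 0xd is not available */
        }
        if (eax) { /* EAX is the component size, EBX its offset */
            printf("state component %u: offset 0x%x, size 0x%x\n",
                   i, ebx, eax);
        }
    }
    return 0;
}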

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
---
 target/i386/cpu.c         | 13 +------------
 target/i386/cpu.h         |  2 +-
 target/i386/hvf/hvf-cpu.c | 34 ++++++++++++++++++++++++++++++++++
 target/i386/kvm/kvm-cpu.c | 36 ++++++++++++++++++++++++++++++++++++
 target/i386/tcg/tcg-cpu.c | 20 ++++++++++++++++++++
 5 files changed, 92 insertions(+), 13 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 13caa0de50..5f595a0d7e 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1304,48 +1304,37 @@ static const X86RegisterInfo32 x86_reg_info_32[CPU_NB_REGS32] = {
 };
 #undef REGISTER
 
-const ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT] = {
+ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT] = {
     [XSTATE_FP_BIT] = {
         /* x87 FP state component is always enabled if XSAVE is supported */
         .feature = FEAT_1_ECX, .bits = CPUID_EXT_XSAVE,
-        /* x87 state is in the legacy region of the XSAVE area */
-        .offset = 0,
         .size = sizeof(X86LegacyXSaveArea) + sizeof(X86XSaveHeader),
     },
     [XSTATE_SSE_BIT] = {
         /* SSE state component is always enabled if XSAVE is supported */
         .feature = FEAT_1_ECX, .bits = CPUID_EXT_XSAVE,
-        /* SSE state is in the legacy region of the XSAVE area */
-        .offset = 0,
         .size = sizeof(X86LegacyXSaveArea) + sizeof(X86XSaveHeader),
     },
     [XSTATE_YMM_BIT] =
           { .feature = FEAT_1_ECX, .bits = CPUID_EXT_AVX,
-            .offset = offsetof(X86XSaveArea, avx_state),
             .size = sizeof(XSaveAVX) },
     [XSTATE_BNDREGS_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_MPX,
-            .offset = offsetof(X86XSaveArea, bndreg_state),
             .size = sizeof(XSaveBNDREG)  },
     [XSTATE_BNDCSR_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_MPX,
-            .offset = offsetof(X86XSaveArea, bndcsr_state),
             .size = sizeof(XSaveBNDCSR)  },
     [XSTATE_OPMASK_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_AVX512F,
-            .offset = offsetof(X86XSaveArea, opmask_state),
             .size = sizeof(XSaveOpmask) },
     [XSTATE_ZMM_Hi256_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_AVX512F,
-            .offset = offsetof(X86XSaveArea, zmm_hi256_state),
             .size = sizeof(XSaveZMM_Hi256) },
     [XSTATE_Hi16_ZMM_BIT] =
           { .feature = FEAT_7_0_EBX, .bits = CPUID_7_0_EBX_AVX512F,
-            .offset = offsetof(X86XSaveArea, hi16_zmm_state),
             .size = sizeof(XSaveHi16_ZMM) },
     [XSTATE_PKRU_BIT] =
           { .feature = FEAT_7_0_ECX, .bits = CPUID_7_0_ECX_PKU,
-            .offset = offsetof(X86XSaveArea, pkru_state),
             .size = sizeof(XSavePKRU) },
 };
 
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index c9c0a34330..96b672f8bd 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1377,7 +1377,7 @@ typedef struct ExtSaveArea {
 
 #define XSAVE_STATE_AREA_COUNT (XSTATE_PKRU_BIT + 1)
 
-extern const ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT];
+extern ExtSaveArea x86_ext_save_areas[XSAVE_STATE_AREA_COUNT];
 
 typedef enum TPRAccess {
     TPR_ACCESS_READ,
diff --git a/target/i386/hvf/hvf-cpu.c b/target/i386/hvf/hvf-cpu.c
index 8fbc423888..3c7c44698f 100644
--- a/target/i386/hvf/hvf-cpu.c
+++ b/target/i386/hvf/hvf-cpu.c
@@ -30,6 +30,38 @@ static void hvf_cpu_max_instance_init(X86CPU *cpu)
         hvf_get_supported_cpuid(0xC0000000, 0, R_EAX);
 }
 
+static void hvf_cpu_xsave_init(void)
+{
+    int i;
+
+    /*
+     * The allocated storage must be large enough for all of the
+     * possible XSAVE state components.
+     */
+    assert(hvf_get_supported_cpuid(0xd, 0, R_ECX) <= 4096);
+
+    /* x87 state is in the legacy region of the XSAVE area. */
+    x86_ext_save_areas[XSTATE_FP_BIT].offset = 0;
+    /* SSE state is in the legacy region of the XSAVE area. */
+    x86_ext_save_areas[XSTATE_SSE_BIT].offset = 0;
+
+    for (i = XSTATE_SSE_BIT + 1; i < XSAVE_STATE_AREA_COUNT; i++) {
+        ExtSaveArea *esa = &x86_ext_save_areas[i];
+
+        if (esa->size) {
+            int sz = hvf_get_supported_cpuid(0xd, i, R_EAX);
+
+            if (sz != 0) {
+                assert(esa->size == sz);
+
+                esa->offset = hvf_get_supported_cpuid(0xd, i, R_EBX);
+                fprintf(stderr, "%s: state area %d: offset 0x%x, size 0x%x\n",
+                        __func__, i, esa->offset, esa->size);
+            }
+        }
+    }
+}
+
 static void hvf_cpu_instance_init(CPUState *cs)
 {
     X86CPU *cpu = X86_CPU(cs);
@@ -42,6 +74,8 @@ static void hvf_cpu_instance_init(CPUState *cs)
     if (cpu->max_features) {
         hvf_cpu_max_instance_init(cpu);
     }
+
+    hvf_cpu_xsave_init();
 }
 
 static void hvf_cpu_accel_class_init(ObjectClass *oc, void *data)
diff --git a/target/i386/kvm/kvm-cpu.c b/target/i386/kvm/kvm-cpu.c
index 00369c2000..f474cc5b83 100644
--- a/target/i386/kvm/kvm-cpu.c
+++ b/target/i386/kvm/kvm-cpu.c
@@ -122,6 +122,40 @@ static void kvm_cpu_max_instance_init(X86CPU *cpu)
         kvm_arch_get_supported_cpuid(s, 0xC0000000, 0, R_EAX);
 }
 
+static void kvm_cpu_xsave_init(void)
+{
+    KVMState *s = kvm_state;
+    int i;
+
+    /*
+     * The allocated storage must be large enough for all of the
+     * possible XSAVE state components.
+     */
+    assert(sizeof(struct kvm_xsave) >=
+           kvm_arch_get_supported_cpuid(s, 0xd, 0, R_ECX));
+
+    /* x87 state is in the legacy region of the XSAVE area. */
+    x86_ext_save_areas[XSTATE_FP_BIT].offset = 0;
+    /* SSE state is in the legacy region of the XSAVE area. */
+    x86_ext_save_areas[XSTATE_SSE_BIT].offset = 0;
+
+    for (i = XSTATE_SSE_BIT + 1; i < XSAVE_STATE_AREA_COUNT; i++) {
+        ExtSaveArea *esa = &x86_ext_save_areas[i];
+
+        if (esa->size) {
+            int sz = kvm_arch_get_supported_cpuid(s, 0xd, i, R_EAX);
+
+            if (sz != 0) {
+                assert(esa->size == sz);
+
+                esa->offset = kvm_arch_get_supported_cpuid(s, 0xd, i, R_EBX);
+                fprintf(stderr, "%s: state area %d: offset 0x%x, size 0x%x\n",
+                        __func__, i, esa->offset, esa->size);
+            }
+        }
+    }
+}
+
 static void kvm_cpu_instance_init(CPUState *cs)
 {
     X86CPU *cpu = X86_CPU(cs);
@@ -141,6 +175,8 @@ static void kvm_cpu_instance_init(CPUState *cs)
     if (cpu->max_features) {
         kvm_cpu_max_instance_init(cpu);
     }
+
+    kvm_cpu_xsave_init();
 }
 
 static void kvm_cpu_accel_class_init(ObjectClass *oc, void *data)
diff --git a/target/i386/tcg/tcg-cpu.c b/target/i386/tcg/tcg-cpu.c
index 014ebea2f6..e96ec9bbcc 100644
--- a/target/i386/tcg/tcg-cpu.c
+++ b/target/i386/tcg/tcg-cpu.c
@@ -80,6 +80,24 @@ static void tcg_cpu_class_init(CPUClass *cc)
     cc->init_accel_cpu = tcg_cpu_init_ops;
 }
 
+static void tcg_cpu_xsave_init(void)
+{
+#define XO(bit, field) \
+    x86_ext_save_areas[bit].offset = offsetof(X86XSaveArea, field);
+
+    XO(XSTATE_FP_BIT, legacy);
+    XO(XSTATE_SSE_BIT, legacy);
+    XO(XSTATE_YMM_BIT, avx_state);
+    XO(XSTATE_BNDREGS_BIT, bndreg_state);
+    XO(XSTATE_BNDCSR_BIT, bndcsr_state);
+    XO(XSTATE_OPMASK_BIT, opmask_state);
+    XO(XSTATE_ZMM_Hi256_BIT, zmm_hi256_state);
+    XO(XSTATE_Hi16_ZMM_BIT, hi16_zmm_state);
+    XO(XSTATE_PKRU_BIT, pkru_state);
+
+#undef XO
+}
+
 /*
  * TCG-specific defaults that override all CPU models when using TCG
  */
@@ -93,6 +111,8 @@ static void tcg_cpu_instance_init(CPUState *cs)
     X86CPU *cpu = X86_CPU(cs);
     /* Special cases not set in the X86CPUDefinition structs: */
     x86_cpu_apply_props(cpu, tcg_default_props);
+
+    tcg_cpu_xsave_init();
 }
 
 static void tcg_cpu_accel_class_init(ObjectClass *oc, void *data)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [RFC PATCH 8/8] target/i386: Move X86XSaveArea into TCG
  2021-07-05 10:46 ` David Edmondson
@ 2021-07-05 10:46   ` David Edmondson
  -1 siblings, 0 replies; 28+ messages in thread
From: David Edmondson @ 2021-07-05 10:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Richard Henderson, Michael Roth, kvm, Roman Bolshakov,
	Paolo Bonzini, Marcelo Tosatti, babu.moger, Cameron Esfahani,
	Eduardo Habkost, David Edmondson

Given that TCG is now the only consumer of X86XSaveArea, move the
structure definition and associated offset declarations and checks to a
TCG-specific header.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
---
 target/i386/cpu.h            | 57 ------------------------------------
 target/i386/tcg/fpu_helper.c |  1 +
 target/i386/tcg/tcg-cpu.h    | 57 ++++++++++++++++++++++++++++++++++++
 3 files changed, 58 insertions(+), 57 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 96b672f8bd..0f7ddbfeae 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1305,48 +1305,6 @@ typedef struct XSavePKRU {
     uint32_t padding;
 } XSavePKRU;
 
-#define XSAVE_FCW_FSW_OFFSET    0x000
-#define XSAVE_FTW_FOP_OFFSET    0x004
-#define XSAVE_CWD_RIP_OFFSET    0x008
-#define XSAVE_CWD_RDP_OFFSET    0x010
-#define XSAVE_MXCSR_OFFSET      0x018
-#define XSAVE_ST_SPACE_OFFSET   0x020
-#define XSAVE_XMM_SPACE_OFFSET  0x0a0
-#define XSAVE_XSTATE_BV_OFFSET  0x200
-#define XSAVE_AVX_OFFSET        0x240
-#define XSAVE_BNDREG_OFFSET     0x3c0
-#define XSAVE_BNDCSR_OFFSET     0x400
-#define XSAVE_OPMASK_OFFSET     0x440
-#define XSAVE_ZMM_HI256_OFFSET  0x480
-#define XSAVE_HI16_ZMM_OFFSET   0x680
-#define XSAVE_PKRU_OFFSET       0xa80
-
-typedef struct X86XSaveArea {
-    X86LegacyXSaveArea legacy;
-    X86XSaveHeader header;
-
-    /* Extended save areas: */
-
-    /* AVX State: */
-    XSaveAVX avx_state;
-
-    /* Ensure that XSaveBNDREG is properly aligned. */
-    uint8_t padding[XSAVE_BNDREG_OFFSET
-                    - sizeof(X86LegacyXSaveArea)
-                    - sizeof(X86XSaveHeader)
-                    - sizeof(XSaveAVX)];
-
-    /* MPX State: */
-    XSaveBNDREG bndreg_state;
-    XSaveBNDCSR bndcsr_state;
-    /* AVX-512 State: */
-    XSaveOpmask opmask_state;
-    XSaveZMM_Hi256 zmm_hi256_state;
-    XSaveHi16_ZMM hi16_zmm_state;
-    /* PKRU State: */
-    XSavePKRU pkru_state;
-} X86XSaveArea;
-
 QEMU_BUILD_BUG_ON(sizeof(XSaveAVX) != 0x100);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDREG) != 0x40);
 QEMU_BUILD_BUG_ON(sizeof(XSaveBNDCSR) != 0x40);
@@ -1355,21 +1313,6 @@ QEMU_BUILD_BUG_ON(sizeof(XSaveZMM_Hi256) != 0x200);
 QEMU_BUILD_BUG_ON(sizeof(XSaveHi16_ZMM) != 0x400);
 QEMU_BUILD_BUG_ON(sizeof(XSavePKRU) != 0x8);
 
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
-QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
-
 typedef struct ExtSaveArea {
     uint32_t feature, bits;
     uint32_t offset, size;
diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
index 4e11965067..74bbe94b80 100644
--- a/target/i386/tcg/fpu_helper.c
+++ b/target/i386/tcg/fpu_helper.c
@@ -20,6 +20,7 @@
 #include "qemu/osdep.h"
 #include <math.h>
 #include "cpu.h"
+#include "tcg-cpu.h"
 #include "exec/helper-proto.h"
 #include "fpu/softfloat.h"
 #include "fpu/softfloat-macros.h"
diff --git a/target/i386/tcg/tcg-cpu.h b/target/i386/tcg/tcg-cpu.h
index 36bd300af0..53a8494455 100644
--- a/target/i386/tcg/tcg-cpu.h
+++ b/target/i386/tcg/tcg-cpu.h
@@ -19,6 +19,63 @@
 #ifndef TCG_CPU_H
 #define TCG_CPU_H
 
+#define XSAVE_FCW_FSW_OFFSET    0x000
+#define XSAVE_FTW_FOP_OFFSET    0x004
+#define XSAVE_CWD_RIP_OFFSET    0x008
+#define XSAVE_CWD_RDP_OFFSET    0x010
+#define XSAVE_MXCSR_OFFSET      0x018
+#define XSAVE_ST_SPACE_OFFSET   0x020
+#define XSAVE_XMM_SPACE_OFFSET  0x0a0
+#define XSAVE_XSTATE_BV_OFFSET  0x200
+#define XSAVE_AVX_OFFSET        0x240
+#define XSAVE_BNDREG_OFFSET     0x3c0
+#define XSAVE_BNDCSR_OFFSET     0x400
+#define XSAVE_OPMASK_OFFSET     0x440
+#define XSAVE_ZMM_HI256_OFFSET  0x480
+#define XSAVE_HI16_ZMM_OFFSET   0x680
+#define XSAVE_PKRU_OFFSET       0xa80
+
+typedef struct X86XSaveArea {
+    X86LegacyXSaveArea legacy;
+    X86XSaveHeader header;
+
+    /* Extended save areas: */
+
+    /* AVX State: */
+    XSaveAVX avx_state;
+
+    /* Ensure that XSaveBNDREG is properly aligned. */
+    uint8_t padding[XSAVE_BNDREG_OFFSET
+                    - sizeof(X86LegacyXSaveArea)
+                    - sizeof(X86XSaveHeader)
+                    - sizeof(XSaveAVX)];
+
+    /* MPX State: */
+    XSaveBNDREG bndreg_state;
+    XSaveBNDCSR bndcsr_state;
+    /* AVX-512 State: */
+    XSaveOpmask opmask_state;
+    XSaveZMM_Hi256 zmm_hi256_state;
+    XSaveHi16_ZMM hi16_zmm_state;
+    /* PKRU State: */
+    XSavePKRU pkru_state;
+} X86XSaveArea;
+
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
+QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
+
 bool tcg_cpu_realizefn(CPUState *cs, Error **errp);
 
 #endif /* TCG_CPU_H */
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 0/8] Derive XSAVE state component offsets from CPUID leaf 0xd where possible
  2021-07-05 10:46 ` David Edmondson
@ 2021-07-05 16:57   ` Paolo Bonzini
  -1 siblings, 0 replies; 28+ messages in thread
From: Paolo Bonzini @ 2021-07-05 16:57 UTC (permalink / raw)
  To: David Edmondson, qemu-devel
  Cc: Richard Henderson, Michael Roth, kvm, Roman Bolshakov,
	Marcelo Tosatti, babu.moger, Cameron Esfahani, Eduardo Habkost

On 05/07/21 12:46, David Edmondson wrote:
> The offset of XSAVE state components within the XSAVE state area is
> currently hard-coded via reference to the X86XSaveArea structure. This
> structure is accurate for Intel systems at the time of writing, but
> incorrect for newer AMD systems, as the state component for protection
> keys is located differently (offset 0x980 rather than offset 0xa80).
> 
> For KVM and HVF, replace the hard-coding of the state component
> offsets with data derived from CPUID leaf 0xd information.
> 
> TCG still uses the X86XSaveArea structure, as there is no underlying
> CPU to use in determining appropriate values.
> 
> This is a replacement for the changes in
> https://lore.kernel.org/r/20210520145647.3483809-1-david.edmondson@oracle.com,
> which simply modified the hard-coded offsets for AMD systems.
> 
> Testing on HVF is minimal (it builds and, by observation, the XSAVE
> state component offsets reported to a running VM are accurate on an
> older Intel system).

This looks great, thanks, so I am queuing it.

Paolo

> David Edmondson (8):
>    target/i386: Declare constants for XSAVE offsets
>    target/i386: Consolidate the X86XSaveArea offset checks
>    target/i386: Clarify the padding requirements of X86XSaveArea
>    target/i386: Pass buffer and length to XSAVE helper
>    target/i386: Make x86_ext_save_areas visible outside cpu.c
>    target/i386: Observe XSAVE state area offsets
>    target/i386: Populate x86_ext_save_areas offsets using cpuid where
>      possible
>    target/i386: Move X86XSaveArea into TCG
> 
>   target/i386/cpu.c            |  18 +--
>   target/i386/cpu.h            |  41 ++----
>   target/i386/hvf/hvf-cpu.c    |  34 +++++
>   target/i386/hvf/hvf.c        |   3 +-
>   target/i386/hvf/x86hvf.c     |  19 ++-
>   target/i386/kvm/kvm-cpu.c    |  36 +++++
>   target/i386/kvm/kvm.c        |  52 +------
>   target/i386/tcg/fpu_helper.c |   1 +
>   target/i386/tcg/tcg-cpu.c    |  20 +++
>   target/i386/tcg/tcg-cpu.h    |  57 ++++++++
>   target/i386/xsave_helper.c   | 267 ++++++++++++++++++++++++++---------
>   11 files changed, 381 insertions(+), 167 deletions(-)
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 8/8] target/i386: Move X86XSaveArea into TCG
  2021-07-05 10:46   ` David Edmondson
@ 2021-07-07  1:09     ` Richard Henderson
  -1 siblings, 0 replies; 28+ messages in thread
From: Richard Henderson @ 2021-07-07  1:09 UTC (permalink / raw)
  To: David Edmondson, qemu-devel
  Cc: Michael Roth, kvm, Roman Bolshakov, Paolo Bonzini,
	Marcelo Tosatti, babu.moger, Cameron Esfahani, Eduardo Habkost

On 7/5/21 3:46 AM, David Edmondson wrote:
> Given that TCG is now the only consumer of X86XSaveArea, move the
> structure definition and associated offset declarations and checks to a
> TCG specific header.
> 
> Signed-off-by: David Edmondson <david.edmondson@oracle.com>
> ---
>   target/i386/cpu.h            | 57 ------------------------------------
>   target/i386/tcg/fpu_helper.c |  1 +
>   target/i386/tcg/tcg-cpu.h    | 57 ++++++++++++++++++++++++++++++++++++
>   3 files changed, 58 insertions(+), 57 deletions(-)
> 
> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
> index 96b672f8bd..0f7ddbfeae 100644
> --- a/target/i386/cpu.h
> +++ b/target/i386/cpu.h
> @@ -1305,48 +1305,6 @@ typedef struct XSavePKRU {
>       uint32_t padding;
>   } XSavePKRU;
>   
> -#define XSAVE_FCW_FSW_OFFSET    0x000
> -#define XSAVE_FTW_FOP_OFFSET    0x004
> -#define XSAVE_CWD_RIP_OFFSET    0x008
> -#define XSAVE_CWD_RDP_OFFSET    0x010
> -#define XSAVE_MXCSR_OFFSET      0x018
> -#define XSAVE_ST_SPACE_OFFSET   0x020
> -#define XSAVE_XMM_SPACE_OFFSET  0x0a0
> -#define XSAVE_XSTATE_BV_OFFSET  0x200
> -#define XSAVE_AVX_OFFSET        0x240
> -#define XSAVE_BNDREG_OFFSET     0x3c0
> -#define XSAVE_BNDCSR_OFFSET     0x400
> -#define XSAVE_OPMASK_OFFSET     0x440
> -#define XSAVE_ZMM_HI256_OFFSET  0x480
> -#define XSAVE_HI16_ZMM_OFFSET   0x680
> -#define XSAVE_PKRU_OFFSET       0xa80
> -
> -typedef struct X86XSaveArea {
> -    X86LegacyXSaveArea legacy;
> -    X86XSaveHeader header;
> -
> -    /* Extended save areas: */
> -
> -    /* AVX State: */
> -    XSaveAVX avx_state;
> -
> -    /* Ensure that XSaveBNDREG is properly aligned. */
> -    uint8_t padding[XSAVE_BNDREG_OFFSET
> -                    - sizeof(X86LegacyXSaveArea)
> -                    - sizeof(X86XSaveHeader)
> -                    - sizeof(XSaveAVX)];
> -
> -    /* MPX State: */
> -    XSaveBNDREG bndreg_state;
> -    XSaveBNDCSR bndcsr_state;
> -    /* AVX-512 State: */
> -    XSaveOpmask opmask_state;
> -    XSaveZMM_Hi256 zmm_hi256_state;
> -    XSaveHi16_ZMM hi16_zmm_state;
> -    /* PKRU State: */
> -    XSavePKRU pkru_state;
> -} X86XSaveArea;
> -
>   QEMU_BUILD_BUG_ON(sizeof(XSaveAVX) != 0x100);
>   QEMU_BUILD_BUG_ON(sizeof(XSaveBNDREG) != 0x40);
>   QEMU_BUILD_BUG_ON(sizeof(XSaveBNDCSR) != 0x40);
> @@ -1355,21 +1313,6 @@ QEMU_BUILD_BUG_ON(sizeof(XSaveZMM_Hi256) != 0x200);
>   QEMU_BUILD_BUG_ON(sizeof(XSaveHi16_ZMM) != 0x400);
>   QEMU_BUILD_BUG_ON(sizeof(XSavePKRU) != 0x8);
>   
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
> -
>   typedef struct ExtSaveArea {
>       uint32_t feature, bits;
>       uint32_t offset, size;
> diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
> index 4e11965067..74bbe94b80 100644
> --- a/target/i386/tcg/fpu_helper.c
> +++ b/target/i386/tcg/fpu_helper.c
> @@ -20,6 +20,7 @@
>   #include "qemu/osdep.h"
>   #include <math.h>
>   #include "cpu.h"
> +#include "tcg-cpu.h"
>   #include "exec/helper-proto.h"
>   #include "fpu/softfloat.h"
>   #include "fpu/softfloat-macros.h"
> diff --git a/target/i386/tcg/tcg-cpu.h b/target/i386/tcg/tcg-cpu.h
> index 36bd300af0..53a8494455 100644
> --- a/target/i386/tcg/tcg-cpu.h
> +++ b/target/i386/tcg/tcg-cpu.h
> @@ -19,6 +19,63 @@
>   #ifndef TCG_CPU_H
>   #define TCG_CPU_H
>   
> +#define XSAVE_FCW_FSW_OFFSET    0x000
> +#define XSAVE_FTW_FOP_OFFSET    0x004
> +#define XSAVE_CWD_RIP_OFFSET    0x008
> +#define XSAVE_CWD_RDP_OFFSET    0x010
> +#define XSAVE_MXCSR_OFFSET      0x018
> +#define XSAVE_ST_SPACE_OFFSET   0x020
> +#define XSAVE_XMM_SPACE_OFFSET  0x0a0
> +#define XSAVE_XSTATE_BV_OFFSET  0x200
> +#define XSAVE_AVX_OFFSET        0x240
> +#define XSAVE_BNDREG_OFFSET     0x3c0
> +#define XSAVE_BNDCSR_OFFSET     0x400
> +#define XSAVE_OPMASK_OFFSET     0x440
> +#define XSAVE_ZMM_HI256_OFFSET  0x480
> +#define XSAVE_HI16_ZMM_OFFSET   0x680
> +#define XSAVE_PKRU_OFFSET       0xa80
> +
> +typedef struct X86XSaveArea {
> +    X86LegacyXSaveArea legacy;
> +    X86XSaveHeader header;
> +
> +    /* Extended save areas: */
> +
> +    /* AVX State: */
> +    XSaveAVX avx_state;
> +
> +    /* Ensure that XSaveBNDREG is properly aligned. */
> +    uint8_t padding[XSAVE_BNDREG_OFFSET
> +                    - sizeof(X86LegacyXSaveArea)
> +                    - sizeof(X86XSaveHeader)
> +                    - sizeof(XSaveAVX)];
> +
> +    /* MPX State: */
> +    XSaveBNDREG bndreg_state;
> +    XSaveBNDCSR bndcsr_state;
> +    /* AVX-512 State: */
> +    XSaveOpmask opmask_state;
> +    XSaveZMM_Hi256 zmm_hi256_state;
> +    XSaveHi16_ZMM hi16_zmm_state;
> +    /* PKRU State: */
> +    XSavePKRU pkru_state;
> +} X86XSaveArea;
> +
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);

My only quibble is that these offsets are otherwise unused.  This just becomes validation 
of compiler layout.

I presume that XSAVE_BNDREG_OFFSET is not merely ROUND_UP(offsetof(avx_state) + 
sizeof(avx_state), some_pow2)?
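
(For what it's worth, working through the constants above: offsetof of
avx_state is 0x240 and sizeof(XSaveAVX) is 0x100, giving 0x340, and no
power-of-two rounding of 0x340 lands on 0x3c0; the value appears to come
from the standard-format layout enumerated by CPUID leaf 0xd rather than
from alignment alone.)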

Do these offsets need to be migrated?  Otherwise, how can one start a vm with kvm and then 
migrate to tcg?  I presume the offsets above are constant for a given cpu, and that 
whatever cpu provides different offsets is not supported by tcg?  Given the lack of avx, 
that's trivial these days...


r~

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 8/8] target/i386: Move X86XSaveArea into TCG
  2021-07-07  1:09     ` Richard Henderson
  (?)
@ 2021-07-07  6:51     ` Paolo Bonzini
  -1 siblings, 0 replies; 28+ messages in thread
From: Paolo Bonzini @ 2021-07-07  6:51 UTC (permalink / raw)
  To: Richard Henderson
  Cc: Eduardo Habkost, kvm, Michael Roth, Marcelo Tosatti, qemu-devel,
	Cameron Esfahani, David Edmondson, Babu Moger, Roman Bolshakov

Migration from KVM to TCG is broken anyway. The changing offsets do break
migration of a KVM guest from Intel to AMD or vice versa, because of the
difference in CPUID. That however is not changed by this patch.

Paolo

On Wed, 7 Jul 2021 at 03:09, Richard Henderson <richard.henderson@linaro.org>
wrote:

> On 7/5/21 3:46 AM, David Edmondson wrote:
> > Given that TCG is now the only consumer of X86XSaveArea, move the
> > structure definition and associated offset declarations and checks to a
> > TCG specific header.
> >
> > Signed-off-by: David Edmondson <david.edmondson@oracle.com>
> > ---
> >   target/i386/cpu.h            | 57 ------------------------------------
> >   target/i386/tcg/fpu_helper.c |  1 +
> >   target/i386/tcg/tcg-cpu.h    | 57 ++++++++++++++++++++++++++++++++++++
> >   3 files changed, 58 insertions(+), 57 deletions(-)
> >
> > diff --git a/target/i386/cpu.h b/target/i386/cpu.h
> > index 96b672f8bd..0f7ddbfeae 100644
> > --- a/target/i386/cpu.h
> > +++ b/target/i386/cpu.h
> > @@ -1305,48 +1305,6 @@ typedef struct XSavePKRU {
> >       uint32_t padding;
> >   } XSavePKRU;
> >
> > -#define XSAVE_FCW_FSW_OFFSET    0x000
> > -#define XSAVE_FTW_FOP_OFFSET    0x004
> > -#define XSAVE_CWD_RIP_OFFSET    0x008
> > -#define XSAVE_CWD_RDP_OFFSET    0x010
> > -#define XSAVE_MXCSR_OFFSET      0x018
> > -#define XSAVE_ST_SPACE_OFFSET   0x020
> > -#define XSAVE_XMM_SPACE_OFFSET  0x0a0
> > -#define XSAVE_XSTATE_BV_OFFSET  0x200
> > -#define XSAVE_AVX_OFFSET        0x240
> > -#define XSAVE_BNDREG_OFFSET     0x3c0
> > -#define XSAVE_BNDCSR_OFFSET     0x400
> > -#define XSAVE_OPMASK_OFFSET     0x440
> > -#define XSAVE_ZMM_HI256_OFFSET  0x480
> > -#define XSAVE_HI16_ZMM_OFFSET   0x680
> > -#define XSAVE_PKRU_OFFSET       0xa80
> > -
> > -typedef struct X86XSaveArea {
> > -    X86LegacyXSaveArea legacy;
> > -    X86XSaveHeader header;
> > -
> > -    /* Extended save areas: */
> > -
> > -    /* AVX State: */
> > -    XSaveAVX avx_state;
> > -
> > -    /* Ensure that XSaveBNDREG is properly aligned. */
> > -    uint8_t padding[XSAVE_BNDREG_OFFSET
> > -                    - sizeof(X86LegacyXSaveArea)
> > -                    - sizeof(X86XSaveHeader)
> > -                    - sizeof(XSaveAVX)];
> > -
> > -    /* MPX State: */
> > -    XSaveBNDREG bndreg_state;
> > -    XSaveBNDCSR bndcsr_state;
> > -    /* AVX-512 State: */
> > -    XSaveOpmask opmask_state;
> > -    XSaveZMM_Hi256 zmm_hi256_state;
> > -    XSaveHi16_ZMM hi16_zmm_state;
> > -    /* PKRU State: */
> > -    XSavePKRU pkru_state;
> > -} X86XSaveArea;
> > -
> >   QEMU_BUILD_BUG_ON(sizeof(XSaveAVX) != 0x100);
> >   QEMU_BUILD_BUG_ON(sizeof(XSaveBNDREG) != 0x40);
> >   QEMU_BUILD_BUG_ON(sizeof(XSaveBNDCSR) != 0x40);
> > @@ -1355,21 +1313,6 @@ QEMU_BUILD_BUG_ON(sizeof(XSaveZMM_Hi256) != 0x200);
> >   QEMU_BUILD_BUG_ON(sizeof(XSaveHi16_ZMM) != 0x400);
> >   QEMU_BUILD_BUG_ON(sizeof(XSavePKRU) != 0x8);
> >
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
> > -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
> > -
> >   typedef struct ExtSaveArea {
> >       uint32_t feature, bits;
> >       uint32_t offset, size;
> > diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
> > index 4e11965067..74bbe94b80 100644
> > --- a/target/i386/tcg/fpu_helper.c
> > +++ b/target/i386/tcg/fpu_helper.c
> > @@ -20,6 +20,7 @@
> >   #include "qemu/osdep.h"
> >   #include <math.h>
> >   #include "cpu.h"
> > +#include "tcg-cpu.h"
> >   #include "exec/helper-proto.h"
> >   #include "fpu/softfloat.h"
> >   #include "fpu/softfloat-macros.h"
> > diff --git a/target/i386/tcg/tcg-cpu.h b/target/i386/tcg/tcg-cpu.h
> > index 36bd300af0..53a8494455 100644
> > --- a/target/i386/tcg/tcg-cpu.h
> > +++ b/target/i386/tcg/tcg-cpu.h
> > @@ -19,6 +19,63 @@
> >   #ifndef TCG_CPU_H
> >   #define TCG_CPU_H
> >
> > +#define XSAVE_FCW_FSW_OFFSET    0x000
> > +#define XSAVE_FTW_FOP_OFFSET    0x004
> > +#define XSAVE_CWD_RIP_OFFSET    0x008
> > +#define XSAVE_CWD_RDP_OFFSET    0x010
> > +#define XSAVE_MXCSR_OFFSET      0x018
> > +#define XSAVE_ST_SPACE_OFFSET   0x020
> > +#define XSAVE_XMM_SPACE_OFFSET  0x0a0
> > +#define XSAVE_XSTATE_BV_OFFSET  0x200
> > +#define XSAVE_AVX_OFFSET        0x240
> > +#define XSAVE_BNDREG_OFFSET     0x3c0
> > +#define XSAVE_BNDCSR_OFFSET     0x400
> > +#define XSAVE_OPMASK_OFFSET     0x440
> > +#define XSAVE_ZMM_HI256_OFFSET  0x480
> > +#define XSAVE_HI16_ZMM_OFFSET   0x680
> > +#define XSAVE_PKRU_OFFSET       0xa80
> > +
> > +typedef struct X86XSaveArea {
> > +    X86LegacyXSaveArea legacy;
> > +    X86XSaveHeader header;
> > +
> > +    /* Extended save areas: */
> > +
> > +    /* AVX State: */
> > +    XSaveAVX avx_state;
> > +
> > +    /* Ensure that XSaveBNDREG is properly aligned. */
> > +    uint8_t padding[XSAVE_BNDREG_OFFSET
> > +                    - sizeof(X86LegacyXSaveArea)
> > +                    - sizeof(X86XSaveHeader)
> > +                    - sizeof(XSaveAVX)];
> > +
> > +    /* MPX State: */
> > +    XSaveBNDREG bndreg_state;
> > +    XSaveBNDCSR bndcsr_state;
> > +    /* AVX-512 State: */
> > +    XSaveOpmask opmask_state;
> > +    XSaveZMM_Hi256 zmm_hi256_state;
> > +    XSaveHi16_ZMM hi16_zmm_state;
> > +    /* PKRU State: */
> > +    XSavePKRU pkru_state;
> > +} X86XSaveArea;
> > +
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
> > +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
>
> My only quibble is that these offsets are otherwise unused.  This just
> becomes validation
> of compiler layout.
>
> I presume that XSAVE_BNDREG_OFFSET is not merely
> ROUND_UP(offsetof(avx_state) +
> sizeof(avx_state), some_pow2)?
>
> Do these offsets need to be migrated?  Otherwise, how can one start a vm
> with kvm and then
> migrate to tcg?  I presume the offsets above are constant for a given cpu,
> and that
> whatever cpu provides different offsets is not supported by tcg?  Given
> the lack of avx,
> that's trivial these days...
>
>
> r~
>
>


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 8/8] target/i386: Move X86XSaveArea into TCG
  2021-07-07  1:09     ` Richard Henderson
@ 2021-07-07 10:10       ` David Edmondson
  -1 siblings, 0 replies; 28+ messages in thread
From: David Edmondson @ 2021-07-07 10:10 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: Michael Roth, kvm, Roman Bolshakov, Paolo Bonzini,
	Marcelo Tosatti, babu.moger, Cameron Esfahani, Eduardo Habkost

On Tuesday, 2021-07-06 at 18:09:42 -07, Richard Henderson wrote:

> On 7/5/21 3:46 AM, David Edmondson wrote:
>> Given that TCG is now the only consumer of X86XSaveArea, move the
>> structure definition and associated offset declarations and checks to a
>> TCG specific header.
>> 
>> Signed-off-by: David Edmondson <david.edmondson@oracle.com>
>> ---
>>   target/i386/cpu.h            | 57 ------------------------------------
>>   target/i386/tcg/fpu_helper.c |  1 +
>>   target/i386/tcg/tcg-cpu.h    | 57 ++++++++++++++++++++++++++++++++++++
>>   3 files changed, 58 insertions(+), 57 deletions(-)
>> 
>> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
>> index 96b672f8bd..0f7ddbfeae 100644
>> --- a/target/i386/cpu.h
>> +++ b/target/i386/cpu.h
>> @@ -1305,48 +1305,6 @@ typedef struct XSavePKRU {
>>       uint32_t padding;
>>   } XSavePKRU;
>>   
>> -#define XSAVE_FCW_FSW_OFFSET    0x000
>> -#define XSAVE_FTW_FOP_OFFSET    0x004
>> -#define XSAVE_CWD_RIP_OFFSET    0x008
>> -#define XSAVE_CWD_RDP_OFFSET    0x010
>> -#define XSAVE_MXCSR_OFFSET      0x018
>> -#define XSAVE_ST_SPACE_OFFSET   0x020
>> -#define XSAVE_XMM_SPACE_OFFSET  0x0a0
>> -#define XSAVE_XSTATE_BV_OFFSET  0x200
>> -#define XSAVE_AVX_OFFSET        0x240
>> -#define XSAVE_BNDREG_OFFSET     0x3c0
>> -#define XSAVE_BNDCSR_OFFSET     0x400
>> -#define XSAVE_OPMASK_OFFSET     0x440
>> -#define XSAVE_ZMM_HI256_OFFSET  0x480
>> -#define XSAVE_HI16_ZMM_OFFSET   0x680
>> -#define XSAVE_PKRU_OFFSET       0xa80
>> -
>> -typedef struct X86XSaveArea {
>> -    X86LegacyXSaveArea legacy;
>> -    X86XSaveHeader header;
>> -
>> -    /* Extended save areas: */
>> -
>> -    /* AVX State: */
>> -    XSaveAVX avx_state;
>> -
>> -    /* Ensure that XSaveBNDREG is properly aligned. */
>> -    uint8_t padding[XSAVE_BNDREG_OFFSET
>> -                    - sizeof(X86LegacyXSaveArea)
>> -                    - sizeof(X86XSaveHeader)
>> -                    - sizeof(XSaveAVX)];
>> -
>> -    /* MPX State: */
>> -    XSaveBNDREG bndreg_state;
>> -    XSaveBNDCSR bndcsr_state;
>> -    /* AVX-512 State: */
>> -    XSaveOpmask opmask_state;
>> -    XSaveZMM_Hi256 zmm_hi256_state;
>> -    XSaveHi16_ZMM hi16_zmm_state;
>> -    /* PKRU State: */
>> -    XSavePKRU pkru_state;
>> -} X86XSaveArea;
>> -
>>   QEMU_BUILD_BUG_ON(sizeof(XSaveAVX) != 0x100);
>>   QEMU_BUILD_BUG_ON(sizeof(XSaveBNDREG) != 0x40);
>>   QEMU_BUILD_BUG_ON(sizeof(XSaveBNDCSR) != 0x40);
>> @@ -1355,21 +1313,6 @@ QEMU_BUILD_BUG_ON(sizeof(XSaveZMM_Hi256) != 0x200);
>>   QEMU_BUILD_BUG_ON(sizeof(XSaveHi16_ZMM) != 0x400);
>>   QEMU_BUILD_BUG_ON(sizeof(XSavePKRU) != 0x8);
>>   
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
>> -
>>   typedef struct ExtSaveArea {
>>       uint32_t feature, bits;
>>       uint32_t offset, size;
>> diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
>> index 4e11965067..74bbe94b80 100644
>> --- a/target/i386/tcg/fpu_helper.c
>> +++ b/target/i386/tcg/fpu_helper.c
>> @@ -20,6 +20,7 @@
>>   #include "qemu/osdep.h"
>>   #include <math.h>
>>   #include "cpu.h"
>> +#include "tcg-cpu.h"
>>   #include "exec/helper-proto.h"
>>   #include "fpu/softfloat.h"
>>   #include "fpu/softfloat-macros.h"
>> diff --git a/target/i386/tcg/tcg-cpu.h b/target/i386/tcg/tcg-cpu.h
>> index 36bd300af0..53a8494455 100644
>> --- a/target/i386/tcg/tcg-cpu.h
>> +++ b/target/i386/tcg/tcg-cpu.h
>> @@ -19,6 +19,63 @@
>>   #ifndef TCG_CPU_H
>>   #define TCG_CPU_H
>>   
>> +#define XSAVE_FCW_FSW_OFFSET    0x000
>> +#define XSAVE_FTW_FOP_OFFSET    0x004
>> +#define XSAVE_CWD_RIP_OFFSET    0x008
>> +#define XSAVE_CWD_RDP_OFFSET    0x010
>> +#define XSAVE_MXCSR_OFFSET      0x018
>> +#define XSAVE_ST_SPACE_OFFSET   0x020
>> +#define XSAVE_XMM_SPACE_OFFSET  0x0a0
>> +#define XSAVE_XSTATE_BV_OFFSET  0x200
>> +#define XSAVE_AVX_OFFSET        0x240
>> +#define XSAVE_BNDREG_OFFSET     0x3c0
>> +#define XSAVE_BNDCSR_OFFSET     0x400
>> +#define XSAVE_OPMASK_OFFSET     0x440
>> +#define XSAVE_ZMM_HI256_OFFSET  0x480
>> +#define XSAVE_HI16_ZMM_OFFSET   0x680
>> +#define XSAVE_PKRU_OFFSET       0xa80
>> +
>> +typedef struct X86XSaveArea {
>> +    X86LegacyXSaveArea legacy;
>> +    X86XSaveHeader header;
>> +
>> +    /* Extended save areas: */
>> +
>> +    /* AVX State: */
>> +    XSaveAVX avx_state;
>> +
>> +    /* Ensure that XSaveBNDREG is properly aligned. */
>> +    uint8_t padding[XSAVE_BNDREG_OFFSET
>> +                    - sizeof(X86LegacyXSaveArea)
>> +                    - sizeof(X86XSaveHeader)
>> +                    - sizeof(XSaveAVX)];
>> +
>> +    /* MPX State: */
>> +    XSaveBNDREG bndreg_state;
>> +    XSaveBNDCSR bndcsr_state;
>> +    /* AVX-512 State: */
>> +    XSaveOpmask opmask_state;
>> +    XSaveZMM_Hi256 zmm_hi256_state;
>> +    XSaveHi16_ZMM hi16_zmm_state;
>> +    /* PKRU State: */
>> +    XSavePKRU pkru_state;
>> +} X86XSaveArea;
>> +
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
>
> My only quibble is that these offsets are otherwise unused.  This just becomes validation 
> of compiler layout.

Yes.

> I presume that XSAVE_BNDREG_OFFSET is not merely ROUND_UP(offsetof(avx_state) + 
> sizeof(avx_state), some_pow2)?

The offsets were just extracted from a CPU at some point in the
past. It's likely that things are as you describe, but that is not
defined anywhere.
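
For reference, the values can be read back from any CPU via CPUID leaf
0xd: for each state component n >= 2, sub-leaf n reports the component
size in EAX and the standard-format offset in EBX, which is what the
KVM and HVF changes earlier in the series consume. A rough stand-alone
sketch (not part of the series) that dumps them on the host:

  /*
   * Rough sketch: dump the XSAVE component sizes/offsets reported by
   * CPUID leaf 0xd on the host. x86 with GCC/Clang <cpuid.h> assumed.
   */
  #include <stdio.h>
  #include <cpuid.h>

  int main(void)
  {
      unsigned int eax, ebx, ecx, edx;

      for (unsigned int i = 2; i < 32; i++) {
          if (!__get_cpuid_count(0xd, i, &eax, &ebx, &ecx, &edx) || !eax) {
              continue; /* component i not supported on this CPU */
          }
          printf("component %2u: size 0x%03x offset 0x%03x%s\n",
                 i, eax, ebx, (ecx & 1) ? " (supervisor)" : "");
      }
      return 0;
  }

Those are the same values that the kvm/hvf accel code now feeds into
x86_ext_save_areas; the constants above are just a snapshot of that
output from one particular machine.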

> Do these offsets need to be migrated?  Otherwise, how can one start a vm with kvm and then 
> migrate to tcg?  I presume the offsets above are constant for a given cpu, and that 
> whatever cpu provides different offsets is not supported by tcg?  Given the lack of avx, 
> that's trivial these days...

For TCG I think that the offsets used should be derived from the CPU
model selected, rather than being the same for all CPU models.

If that was done, then in principle it should be possible to migrate
XSAVE state between same CPU model KVM and TCG environments.

dme.
-- 
Tonight I'm gonna bury that horse in the ground.

^ permalink raw reply	[flat|nested] 28+ messages in thread


* Re: [RFC PATCH 8/8] target/i386: Move X86XSaveArea into TCG
  2021-07-07 10:10       ` David Edmondson
  (?)
@ 2021-07-08  7:45       ` David Edmondson
  2021-07-08 15:22         ` Richard Henderson
  -1 siblings, 1 reply; 28+ messages in thread
From: David Edmondson @ 2021-07-08  7:45 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: Eduardo Habkost, kvm, Michael Roth, Marcelo Tosatti,
	Cameron Esfahani, babu.moger, Roman Bolshakov, Paolo Bonzini

On Wednesday, 2021-07-07 at 11:10:21 +01, David Edmondson wrote:

> On Tuesday, 2021-07-06 at 18:09:42 -07, Richard Henderson wrote:
>
>> On 7/5/21 3:46 AM, David Edmondson wrote:
>>> Given that TCG is now the only consumer of X86XSaveArea, move the
>>> structure definition and associated offset declarations and checks to a
>>> TCG specific header.
>>> 
>>> Signed-off-by: David Edmondson <david.edmondson@oracle.com>
>>> ---
>>>   target/i386/cpu.h            | 57 ------------------------------------
>>>   target/i386/tcg/fpu_helper.c |  1 +
>>>   target/i386/tcg/tcg-cpu.h    | 57 ++++++++++++++++++++++++++++++++++++
>>>   3 files changed, 58 insertions(+), 57 deletions(-)
>>> 
>>> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
>>> index 96b672f8bd..0f7ddbfeae 100644
>>> --- a/target/i386/cpu.h
>>> +++ b/target/i386/cpu.h
>>> @@ -1305,48 +1305,6 @@ typedef struct XSavePKRU {
>>>       uint32_t padding;
>>>   } XSavePKRU;
>>>   
>>> -#define XSAVE_FCW_FSW_OFFSET    0x000
>>> -#define XSAVE_FTW_FOP_OFFSET    0x004
>>> -#define XSAVE_CWD_RIP_OFFSET    0x008
>>> -#define XSAVE_CWD_RDP_OFFSET    0x010
>>> -#define XSAVE_MXCSR_OFFSET      0x018
>>> -#define XSAVE_ST_SPACE_OFFSET   0x020
>>> -#define XSAVE_XMM_SPACE_OFFSET  0x0a0
>>> -#define XSAVE_XSTATE_BV_OFFSET  0x200
>>> -#define XSAVE_AVX_OFFSET        0x240
>>> -#define XSAVE_BNDREG_OFFSET     0x3c0
>>> -#define XSAVE_BNDCSR_OFFSET     0x400
>>> -#define XSAVE_OPMASK_OFFSET     0x440
>>> -#define XSAVE_ZMM_HI256_OFFSET  0x480
>>> -#define XSAVE_HI16_ZMM_OFFSET   0x680
>>> -#define XSAVE_PKRU_OFFSET       0xa80
>>> -
>>> -typedef struct X86XSaveArea {
>>> -    X86LegacyXSaveArea legacy;
>>> -    X86XSaveHeader header;
>>> -
>>> -    /* Extended save areas: */
>>> -
>>> -    /* AVX State: */
>>> -    XSaveAVX avx_state;
>>> -
>>> -    /* Ensure that XSaveBNDREG is properly aligned. */
>>> -    uint8_t padding[XSAVE_BNDREG_OFFSET
>>> -                    - sizeof(X86LegacyXSaveArea)
>>> -                    - sizeof(X86XSaveHeader)
>>> -                    - sizeof(XSaveAVX)];
>>> -
>>> -    /* MPX State: */
>>> -    XSaveBNDREG bndreg_state;
>>> -    XSaveBNDCSR bndcsr_state;
>>> -    /* AVX-512 State: */
>>> -    XSaveOpmask opmask_state;
>>> -    XSaveZMM_Hi256 zmm_hi256_state;
>>> -    XSaveHi16_ZMM hi16_zmm_state;
>>> -    /* PKRU State: */
>>> -    XSavePKRU pkru_state;
>>> -} X86XSaveArea;
>>> -
>>>   QEMU_BUILD_BUG_ON(sizeof(XSaveAVX) != 0x100);
>>>   QEMU_BUILD_BUG_ON(sizeof(XSaveBNDREG) != 0x40);
>>>   QEMU_BUILD_BUG_ON(sizeof(XSaveBNDCSR) != 0x40);
>>> @@ -1355,21 +1313,6 @@ QEMU_BUILD_BUG_ON(sizeof(XSaveZMM_Hi256) != 0x200);
>>>   QEMU_BUILD_BUG_ON(sizeof(XSaveHi16_ZMM) != 0x400);
>>>   QEMU_BUILD_BUG_ON(sizeof(XSavePKRU) != 0x8);
>>>   
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
>>> -QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
>>> -
>>>   typedef struct ExtSaveArea {
>>>       uint32_t feature, bits;
>>>       uint32_t offset, size;
>>> diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c
>>> index 4e11965067..74bbe94b80 100644
>>> --- a/target/i386/tcg/fpu_helper.c
>>> +++ b/target/i386/tcg/fpu_helper.c
>>> @@ -20,6 +20,7 @@
>>>   #include "qemu/osdep.h"
>>>   #include <math.h>
>>>   #include "cpu.h"
>>> +#include "tcg-cpu.h"
>>>   #include "exec/helper-proto.h"
>>>   #include "fpu/softfloat.h"
>>>   #include "fpu/softfloat-macros.h"
>>> diff --git a/target/i386/tcg/tcg-cpu.h b/target/i386/tcg/tcg-cpu.h
>>> index 36bd300af0..53a8494455 100644
>>> --- a/target/i386/tcg/tcg-cpu.h
>>> +++ b/target/i386/tcg/tcg-cpu.h
>>> @@ -19,6 +19,63 @@
>>>   #ifndef TCG_CPU_H
>>>   #define TCG_CPU_H
>>>   
>>> +#define XSAVE_FCW_FSW_OFFSET    0x000
>>> +#define XSAVE_FTW_FOP_OFFSET    0x004
>>> +#define XSAVE_CWD_RIP_OFFSET    0x008
>>> +#define XSAVE_CWD_RDP_OFFSET    0x010
>>> +#define XSAVE_MXCSR_OFFSET      0x018
>>> +#define XSAVE_ST_SPACE_OFFSET   0x020
>>> +#define XSAVE_XMM_SPACE_OFFSET  0x0a0
>>> +#define XSAVE_XSTATE_BV_OFFSET  0x200
>>> +#define XSAVE_AVX_OFFSET        0x240
>>> +#define XSAVE_BNDREG_OFFSET     0x3c0
>>> +#define XSAVE_BNDCSR_OFFSET     0x400
>>> +#define XSAVE_OPMASK_OFFSET     0x440
>>> +#define XSAVE_ZMM_HI256_OFFSET  0x480
>>> +#define XSAVE_HI16_ZMM_OFFSET   0x680
>>> +#define XSAVE_PKRU_OFFSET       0xa80
>>> +
>>> +typedef struct X86XSaveArea {
>>> +    X86LegacyXSaveArea legacy;
>>> +    X86XSaveHeader header;
>>> +
>>> +    /* Extended save areas: */
>>> +
>>> +    /* AVX State: */
>>> +    XSaveAVX avx_state;
>>> +
>>> +    /* Ensure that XSaveBNDREG is properly aligned. */
>>> +    uint8_t padding[XSAVE_BNDREG_OFFSET
>>> +                    - sizeof(X86LegacyXSaveArea)
>>> +                    - sizeof(X86XSaveHeader)
>>> +                    - sizeof(XSaveAVX)];
>>> +
>>> +    /* MPX State: */
>>> +    XSaveBNDREG bndreg_state;
>>> +    XSaveBNDCSR bndcsr_state;
>>> +    /* AVX-512 State: */
>>> +    XSaveOpmask opmask_state;
>>> +    XSaveZMM_Hi256 zmm_hi256_state;
>>> +    XSaveHi16_ZMM hi16_zmm_state;
>>> +    /* PKRU State: */
>>> +    XSavePKRU pkru_state;
>>> +} X86XSaveArea;
>>> +
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fcw) != XSAVE_FCW_FSW_OFFSET);
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.ftw) != XSAVE_FTW_FOP_OFFSET);
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpip) != XSAVE_CWD_RIP_OFFSET);
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpdp) != XSAVE_CWD_RDP_OFFSET);
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.mxcsr) != XSAVE_MXCSR_OFFSET);
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.fpregs) != XSAVE_ST_SPACE_OFFSET);
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, legacy.xmm_regs) != XSAVE_XMM_SPACE_OFFSET);
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, avx_state) != XSAVE_AVX_OFFSET);
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndreg_state) != XSAVE_BNDREG_OFFSET);
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, bndcsr_state) != XSAVE_BNDCSR_OFFSET);
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, opmask_state) != XSAVE_OPMASK_OFFSET);
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, zmm_hi256_state) != XSAVE_ZMM_HI256_OFFSET);
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, hi16_zmm_state) != XSAVE_HI16_ZMM_OFFSET);
>>> +QEMU_BUILD_BUG_ON(offsetof(X86XSaveArea, pkru_state) != XSAVE_PKRU_OFFSET);
>>
>> My only quibble is that these offsets are otherwise unused.  This just becomes validation 
>> of compiler layout.
>
> Yes.
>
>> I presume that XSAVE_BNDREG_OFFSET is not merely ROUND_UP(offsetof(avx_state) + 
>> sizeof(avx_state), some_pow2)?
>
> The offsets were just extracted from a CPU at some point in the
> past. It's likely that things are as you describe, but that is not
> defined anywhere.
>
>> Do these offsets need to be migrated?  Otherwise, how can one start a vm with kvm and then 
>> migrate to tcg?  I presume the offsets above are constant for a given cpu, and that 
>> whatever cpu provides different offsets is not supported by tcg?  Given the lack of avx, 
>> that's trivial these days...
>
> For TCG I think that the offsets used should be derived from the CPU
> model selected, rather than being the same for all CPU models.
>
> If that was done, then in principle it should be possible to migrate
> XSAVE state between same CPU model KVM and TCG environments.

Actually, that's nonsense. With KVM or HVF we have to use the offsets of
the host CPU, as the hardware won't do anything else, irrespective of
the general CPU model chosen.

To have KVM -> TCG migration work it would be necessary to pass the
offsets in the migration stream and have TCG observe them, as you
originally said.

TCG -> KVM migration would only be possible if TCG was configured to use
the same offsets as would later be required by KVM (meaning, the host CPU).
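
Roughly, I imagine a subsection along these lines carrying the layout
(a sketch only: env.xsave_offsets[] and XSAVE_STATE_AREA_COUNT are made
up here, only the VMState plumbing itself is real):

  /*
   * Hypothetical sketch: migrate the source's XSAVE component offsets.
   * xsave_offsets[] and XSAVE_STATE_AREA_COUNT do not exist today.
   */
  #include "qemu/osdep.h"
  #include "cpu.h"
  #include "migration/vmstate.h"

  static bool xsave_layout_needed(void *opaque)
  {
      X86CPU *cpu = opaque;

      /* Only interesting when XSAVE is exposed to the guest. */
      return (cpu->env.features[FEAT_1_ECX] & CPUID_EXT_XSAVE) != 0;
  }

  static const VMStateDescription vmstate_xsave_layout = {
      .name = "cpu/xsave_layout",
      .version_id = 1,
      .minimum_version_id = 1,
      .needed = xsave_layout_needed,
      .fields = (VMStateField[]) {
          /* Filled from x86_ext_save_areas[i].offset on the source. */
          VMSTATE_UINT32_ARRAY(env.xsave_offsets, X86CPU,
                               XSAVE_STATE_AREA_COUNT),
          VMSTATE_END_OF_LIST()
      }
  };

TCG would then build its accesses from env.xsave_offsets rather than
from the compile-time constants.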

dme.
-- 
She's as sweet as Tupelo honey, she's an angel of the first degree.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 8/8] target/i386: Move X86XSaveArea into TCG
  2021-07-08  7:45       ` David Edmondson
@ 2021-07-08 15:22         ` Richard Henderson
  2021-07-08 16:13           ` David Edmondson
  0 siblings, 1 reply; 28+ messages in thread
From: Richard Henderson @ 2021-07-08 15:22 UTC (permalink / raw)
  To: David Edmondson, qemu-devel
  Cc: Eduardo Habkost, kvm, Michael Roth, Marcelo Tosatti,
	Cameron Esfahani, babu.moger, Roman Bolshakov, Paolo Bonzini

On 7/8/21 12:45 AM, David Edmondson wrote:
> Actually, that's nonsense. With KVM or HVF we have to use the offsets of
> the host CPU, as the hardware won't do anything else, irrespective of
> the general CPU model chosen.
> 
> To have KVM -> TCG migration work it would be necessary to pass the
> offsets in the migration stream and have TCG observe them, as you
> originally said.
> 
> TCG -> KVM migration would only be possible if TCG was configured to use
> the same offsets as would later be required by KVM (meaning, the host CPU).

And kvm -> kvm migration, with the same general cpu model chosen, but with different host 
cpus with different offsets?

It seems like we must migrate them and verify the offsets in that case, so that we can
fail the migration.


r~

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH 8/8] target/i386: Move X86XSaveArea into TCG
  2021-07-08 15:22         ` Richard Henderson
@ 2021-07-08 16:13           ` David Edmondson
  0 siblings, 0 replies; 28+ messages in thread
From: David Edmondson @ 2021-07-08 16:13 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: Eduardo Habkost, kvm, Michael Roth, Marcelo Tosatti,
	Cameron Esfahani, babu.moger, Roman Bolshakov, Paolo Bonzini

On Thursday, 2021-07-08 at 08:22:02 -07, Richard Henderson wrote:

> On 7/8/21 12:45 AM, David Edmondson wrote:
>> Actually, that's nonsense. With KVM or HVF we have to use the offsets of
>> the host CPU, as the hardware won't do anything else, irrespective of
>> the general CPU model chosen.
>> 
>> To have KVM -> TCG migration work it would be necessary to pass the
>> offsets in the migration stream and have TCG observe them, as you
>> originally said.
>> 
>> TCG -> KVM migration would only be possible if TCG was configured to use
>> the same offsets as would later be required by KVM (meaning, the host CPU).
>
> And kvm -> kvm migration, with the same general cpu model chosen, but with different host 
> cpus with different offsets?
>
> It seems like we must migrate them and verify the offsets in that case, so that we can
> fail the migration.

Agreed.

I will look into migrating the offsets.
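
Presumably it ends up as a post_load check comparing the incoming
layout against x86_ext_save_areas and failing the migration cleanly on
any mismatch. Again only a sketch; incoming_xsave_offsets[] and
XSAVE_STATE_AREA_COUNT are placeholders:

  /*
   * Sketch: reject an incoming stream whose XSAVE layout differs from
   * what this host/accelerator will use.
   */
  #include "qemu/osdep.h"
  #include "qemu/error-report.h"
  #include "cpu.h"

  static int xsave_layout_post_load(void *opaque, int version_id)
  {
      X86CPU *cpu = opaque;
      CPUX86State *env = &cpu->env;

      for (int i = 2; i < XSAVE_STATE_AREA_COUNT; i++) {
          uint32_t in = env->incoming_xsave_offsets[i];

          if (in && in != x86_ext_save_areas[i].offset) {
              error_report("XSAVE component %d offset mismatch "
                           "(source 0x%x, destination 0x%x)",
                           i, in, x86_ext_save_areas[i].offset);
              return -EINVAL;
          }
      }
      return 0;
  }

That way a layout difference fails the migration up front instead of
silently corrupting the guest's FPU/vector state.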

dme.
-- 
The sound of a barking dog on a loop, a plane rises in the crystal blue.

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2021-07-08 16:14 UTC | newest]

Thread overview: 28+ messages
2021-07-05 10:46 [RFC PATCH 0/8] Derive XSAVE state component offsets from CPUID leaf 0xd where possible David Edmondson
2021-07-05 10:46 ` David Edmondson
2021-07-05 10:46 ` [RFC PATCH 1/8] target/i386: Declare constants for XSAVE offsets David Edmondson
2021-07-05 10:46   ` David Edmondson
2021-07-05 10:46 ` [RFC PATCH 2/8] target/i386: Consolidate the X86XSaveArea offset checks David Edmondson
2021-07-05 10:46   ` David Edmondson
2021-07-05 10:46 ` [RFC PATCH 3/8] target/i386: Clarify the padding requirements of X86XSaveArea David Edmondson
2021-07-05 10:46   ` David Edmondson
2021-07-05 10:46 ` [RFC PATCH 4/8] target/i386: Pass buffer and length to XSAVE helper David Edmondson
2021-07-05 10:46   ` David Edmondson
2021-07-05 10:46 ` [RFC PATCH 5/8] target/i386: Make x86_ext_save_areas visible outside cpu.c David Edmondson
2021-07-05 10:46   ` David Edmondson
2021-07-05 10:46 ` [RFC PATCH 6/8] target/i386: Observe XSAVE state area offsets David Edmondson
2021-07-05 10:46   ` David Edmondson
2021-07-05 10:46 ` [RFC PATCH 7/8] target/i386: Populate x86_ext_save_areas offsets using cpuid where possible David Edmondson
2021-07-05 10:46   ` David Edmondson
2021-07-05 10:46 ` [RFC PATCH 8/8] target/i386: Move X86XSaveArea into TCG David Edmondson
2021-07-05 10:46   ` David Edmondson
2021-07-07  1:09   ` Richard Henderson
2021-07-07  1:09     ` Richard Henderson
2021-07-07  6:51     ` Paolo Bonzini
2021-07-07 10:10     ` David Edmondson
2021-07-07 10:10       ` David Edmondson
2021-07-08  7:45       ` David Edmondson
2021-07-08 15:22         ` Richard Henderson
2021-07-08 16:13           ` David Edmondson
2021-07-05 16:57 ` [RFC PATCH 0/8] Derive XSAVE state component offsets from CPUID leaf 0xd where possible Paolo Bonzini
2021-07-05 16:57   ` Paolo Bonzini
