* [PATCH][HVM] fix XSAVE leaf 0 EBX size calculation
@ 2011-02-04 17:49 Wei Huang
2011-02-09 5:32 ` [PATCH] x86: reduce magic number usage in XSAVE code (RE: [PATCH][HVM] fix XSAVE leaf 0 EBX size calculation) Wei, Gang
0 siblings, 1 reply; 3+ messages in thread
From: Wei Huang @ 2011-02-04 17:49 UTC (permalink / raw)
To: xen-devel; +Cc: gang.wei
[-- Attachment #1: Type: text/plain, Size: 376 bytes --]
Fixes a size calculation bug when the enabled bits in XFEATURE_MASK (xcr0)
aren't contiguous.
The current for loop stops at the first clear feature bit in xcr0. But in
reality, the enabled bits can be non-contiguous; for example, LWP is bit 62
on AMD platforms. This patch iterates through all bits to calculate the
size of the enabled features.
Signed-off-by: Wei Huang <wei.huang2@amd.com>
[-- Attachment #2: fix_xsave_leaf_0_size_bug.txt --]
[-- Type: text/plain, Size: 884 bytes --]
diff -r fc84efa61ef1 -r d42732f29bed xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c Wed Feb 02 15:26:29 2011 -0600
+++ b/xen/arch/x86/hvm/hvm.c Wed Feb 02 16:13:03 2011 -0600
@@ -2222,10 +2222,12 @@
/* EBX value of main leaf 0 depends on enabled xsave features */
if ( count == 0 && v->arch.xcr0 )
{
- for ( sub_leaf = 2;
- (sub_leaf < 64) && (v->arch.xcr0 & (1ULL << sub_leaf));
- sub_leaf++ )
+ /* reset EBX to default value first */
+ *ebx = 576;
+ for ( sub_leaf = 2; sub_leaf < 64; sub_leaf++ )
{
+ if ( !(v->arch.xcr0 & (1ULL << sub_leaf)) )
+ continue;
domain_cpuid(v->domain, input, sub_leaf, &_eax, &_ebx, &_ecx,
&_edx);
if ( (_eax + _ebx) > *ebx )
[-- Attachment #3: Type: text/plain, Size: 138 bytes --]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
* [PATCH] x86: reduce magic number usage in XSAVE code (RE: [PATCH][HVM] fix XSAVE leaf 0 EBX size calculation)
2011-02-04 17:49 [PATCH][HVM] fix XSAVE leaf 0 EBX size calculation Wei Huang
@ 2011-02-09 5:32 ` Wei, Gang
2011-02-09 16:41 ` Wei Huang
0 siblings, 1 reply; 3+ messages in thread
From: Wei, Gang @ 2011-02-09 5:32 UTC (permalink / raw)
To: wei.huang2, xen-devel; +Cc: Keir Fraser, Wei, Gang
[-- Attachment #1: Type: text/plain, Size: 2507 bytes --]
Wei Huang wrote on 2011-02-05:
> Fixes a size calculation bug when the enabled bits in XFEATURE_MASK (xcr0)
> aren't contiguous.
>
> The current for loop stops at the first clear feature bit in xcr0. But in
> reality, the enabled bits can be non-contiguous; for example, LWP is bit 62
> on AMD platforms. This patch iterates through all bits to calculate the
> size of the enabled features.
>
> Signed-off-by: Wei Huang <wei.huang2@amd.com>
Ack on the patch, even though it was already checked in. BTW, I would like to add a trivial patch to reduce magic number usage in the XSAVE code.
x86: reduce magic number usage in XSAVE code
Signed-off-by: Wei Gang <gang.wei@intel.com>
diff -r aeda4adecaf8 xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c Tue Feb 08 16:35:35 2011 +0000
+++ b/xen/arch/x86/hvm/hvm.c Thu Feb 10 19:12:29 2011 +0800
@@ -2223,7 +2223,7 @@ void hvm_cpuid(unsigned int input, unsig
if ( count == 0 && v->arch.xcr0 )
{
/* reset EBX to default value first */
- *ebx = 576;
+ *ebx = XSAVE_AREA_MIN_SIZE;
for ( sub_leaf = 2; sub_leaf < 64; sub_leaf++ )
{
if ( !(v->arch.xcr0 & (1ULL << sub_leaf)) )
diff -r aeda4adecaf8 xen/arch/x86/i387.c
--- a/xen/arch/x86/i387.c Tue Feb 08 16:35:35 2011 +0000
+++ b/xen/arch/x86/i387.c Thu Feb 10 19:14:02 2011 +0800
@@ -221,7 +221,6 @@ static void restore_fpu(struct vcpu *v)
}
#define XSTATE_CPUID 0xd
-#define XSAVE_AREA_MIN_SIZE (512 + 64) /* FP/SSE + XSAVE.HEADER */
/*
* Maximum size (in byte) of the XSAVE/XRSTOR save area required by all
diff -r aeda4adecaf8 xen/include/asm-x86/i387.h
--- a/xen/include/asm-x86/i387.h Tue Feb 08 16:35:35 2011 +0000
+++ b/xen/include/asm-x86/i387.h Thu Feb 10 19:11:37 2011 +0800
@@ -21,13 +21,14 @@ int xsave_alloc_save_area(struct vcpu *v
int xsave_alloc_save_area(struct vcpu *v);
void xsave_free_save_area(struct vcpu *v);
+#define XSAVE_AREA_MIN_SIZE (512 + 64) /* FP/SSE + XSAVE.HEADER */
#define XSTATE_FP (1ULL << 0)
#define XSTATE_SSE (1ULL << 1)
#define XSTATE_YMM (1ULL << 2)
#define XSTATE_LWP (1ULL << 62) /* AMD lightweight profiling */
#define XSTATE_FP_SSE (XSTATE_FP | XSTATE_SSE)
#define XCNTXT_MASK (XSTATE_FP | XSTATE_SSE | XSTATE_YMM | XSTATE_LWP)
-#define XSTATE_YMM_OFFSET (512 + 64)
+#define XSTATE_YMM_OFFSET XSAVE_AREA_MIN_SIZE
#define XSTATE_YMM_SIZE 256
#define XSAVEOPT (1 << 0)
Jimmy
* Re: [PATCH] x86: reduce magic number usage in XSAVE code (RE: [PATCH][HVM] fix XSAVE leaf 0 EBX size calculation)
2011-02-09 5:32 ` [PATCH] x86: reduce magic number usage in XSAVE code (RE: [PATCH][HVM] fix XSAVE leaf 0 EBX size calculation) Wei, Gang
@ 2011-02-09 16:41 ` Wei Huang
0 siblings, 0 replies; 3+ messages in thread
From: Wei Huang @ 2011-02-09 16:41 UTC (permalink / raw)
To: Wei, Gang; +Cc: Keir Fraser, xen-devel
Acked-by: Wei Huang <wei.huang2@amd.com>
On 02/08/2011 11:32 PM, Wei, Gang wrote:
> Wei Huang wrote on 2011-02-05:
>> Fixes a size calculation bug when the enabled bits in XFEATURE_MASK (xcr0)
>> aren't contiguous.
>>
>> The current for loop stops at the first clear feature bit in xcr0. But in
>> reality, the enabled bits can be non-contiguous; for example, LWP is bit 62
>> on AMD platforms. This patch iterates through all bits to calculate the
>> size of the enabled features.
>>
>> Signed-off-by: Wei Huang <wei.huang2@amd.com>
> Ack on the patch, even though it was already checked in. BTW, I would like to add a trivial patch to reduce magic number usage in the XSAVE code.
>
> x86: reduce magic number usage in XSAVE code
>
> Signed-off-by: Wei Gang <gang.wei@intel.com>
>
> diff -r aeda4adecaf8 xen/arch/x86/hvm/hvm.c
> --- a/xen/arch/x86/hvm/hvm.c Tue Feb 08 16:35:35 2011 +0000
> +++ b/xen/arch/x86/hvm/hvm.c Thu Feb 10 19:12:29 2011 +0800
> @@ -2223,7 +2223,7 @@ void hvm_cpuid(unsigned int input, unsig
> if ( count == 0 && v->arch.xcr0 )
> {
> /* reset EBX to default value first */
> - *ebx = 576;
> + *ebx = XSAVE_AREA_MIN_SIZE;
> for ( sub_leaf = 2; sub_leaf < 64; sub_leaf++ )
> {
> if ( !(v->arch.xcr0 & (1ULL << sub_leaf)) )
> diff -r aeda4adecaf8 xen/arch/x86/i387.c
> --- a/xen/arch/x86/i387.c Tue Feb 08 16:35:35 2011 +0000
> +++ b/xen/arch/x86/i387.c Thu Feb 10 19:14:02 2011 +0800
> @@ -221,7 +221,6 @@ static void restore_fpu(struct vcpu *v)
> }
>
> #define XSTATE_CPUID 0xd
> -#define XSAVE_AREA_MIN_SIZE (512 + 64) /* FP/SSE + XSAVE.HEADER */
>
> /*
> * Maximum size (in byte) of the XSAVE/XRSTOR save area required by all
> diff -r aeda4adecaf8 xen/include/asm-x86/i387.h
> --- a/xen/include/asm-x86/i387.h Tue Feb 08 16:35:35 2011 +0000
> +++ b/xen/include/asm-x86/i387.h Thu Feb 10 19:11:37 2011 +0800
> @@ -21,13 +21,14 @@ int xsave_alloc_save_area(struct vcpu *v
> int xsave_alloc_save_area(struct vcpu *v);
> void xsave_free_save_area(struct vcpu *v);
>
> +#define XSAVE_AREA_MIN_SIZE (512 + 64) /* FP/SSE + XSAVE.HEADER */
> #define XSTATE_FP (1ULL << 0)
> #define XSTATE_SSE (1ULL << 1)
> #define XSTATE_YMM (1ULL << 2)
> #define XSTATE_LWP (1ULL << 62) /* AMD lightweight profiling */
> #define XSTATE_FP_SSE (XSTATE_FP | XSTATE_SSE)
> #define XCNTXT_MASK (XSTATE_FP | XSTATE_SSE | XSTATE_YMM | XSTATE_LWP)
> -#define XSTATE_YMM_OFFSET (512 + 64)
> +#define XSTATE_YMM_OFFSET XSAVE_AREA_MIN_SIZE
> #define XSTATE_YMM_SIZE 256
> #define XSAVEOPT (1 << 0)
>
> Jimmy