* FW: Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit
@ 2009-03-31 0:51 Dong, Eddie
2009-03-31 4:12 ` Neiger, Gil
0 siblings, 1 reply; 10+ messages in thread
From: Dong, Eddie @ 2009-03-31 0:51 UTC (permalink / raw)
To: Neiger, Gil; +Cc: Avi Kivity, kvm, Dong, Eddie
Avi Kivity wrote:
> Dong, Eddie wrote:
>> @@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu
>> *vcpu, int level) context->rsvd_bits_mask[1][0] = 0;
>> break;
>> case PT32E_ROOT_LEVEL:
>> + context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
>> + rsvd_bits(maxphyaddr, 62) |
>> + rsvd_bits(7, 8) | rsvd_bits(1, 2); /* PDPTE */
>> context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
>> rsvd_bits(maxphyaddr, 62); /* PDE */
>> context->rsvd_bits_mask[0][0] = exb_bit_rsvd
>
> Are you sure that PDPTEs support NX? They don't support R/W and U/S,
> so it seems likely that NX is reserved as well even when EFER.NXE is
> enabled.
Gil:
Here is the original mail from the KVM mailing list. If you could help, that would be great.
thx, eddie
* RE: Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit
2009-03-31 0:51 FW: Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit Dong, Eddie
@ 2009-03-31 4:12 ` Neiger, Gil
2009-03-31 7:32 ` Dong, Eddie
From: Neiger, Gil @ 2009-03-31 4:12 UTC (permalink / raw)
To: Dong, Eddie; +Cc: Avi Kivity, kvm
PDPTEs are used only if CR0.PG=CR4.PAE=1.
In that situation, their format depends on the value of IA32_EFER.LMA.
If IA32_EFER.LMA=0, bit 63 is reserved and must be 0 in any PDPTE that is marked present. The execute-disable setting of a page is determined only by the PDE and PTE.
If IA32_EFER.LMA=1, bit 63 is used for the execute-disable in PML4 entries, PDPTEs, PDEs, and PTEs (assuming IA32_EFER.NXE=1).
- Gil
-----Original Message-----
From: Dong, Eddie
Sent: Monday, March 30, 2009 5:51 PM
To: Neiger, Gil
Cc: Avi Kivity; kvm@vger.kernel.org; Dong, Eddie
Subject: FW: Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit
Avi Kivity wrote:
> Dong, Eddie wrote:
>> @@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu
>> *vcpu, int level) context->rsvd_bits_mask[1][0] = 0;
>> break;
>> case PT32E_ROOT_LEVEL:
>> + context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
>> + rsvd_bits(maxphyaddr, 62) |
>> + rsvd_bits(7, 8) | rsvd_bits(1, 2); /* PDPTE */
>> context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
>> rsvd_bits(maxphyaddr, 62); /* PDE */
>> context->rsvd_bits_mask[0][0] = exb_bit_rsvd
>
> Are you sure that PDPTEs support NX? They don't support R/W and U/S,
> so it seems likely that NX is reserved as well even when EFER.NXE is
> enabled.
Gil:
Here is the original mail from the KVM mailing list. If you could help, that would be great.
thx, eddie
* RE: Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit
2009-03-31 4:12 ` Neiger, Gil
@ 2009-03-31 7:32 ` Dong, Eddie
2009-03-31 8:55 ` Avi Kivity
From: Dong, Eddie @ 2009-03-31 7:32 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm, Neiger, Gil, Dong, Eddie
Neiger, Gil wrote:
> PDPTEs are used only if CR0.PG=CR4.PAE=1.
>
> In that situation, their format depends on the value of IA32_EFER.LMA.
>
> If IA32_EFER.LMA=0, bit 63 is reserved and must be 0 in any PDPTE
> that is marked present. The execute-disable setting of a page is
> determined only by the PDE and PTE.
>
> If IA32_EFER.LMA=1, bit 63 is used for the execute-disable in PML4
> entries, PDPTEs, PDEs, and PTEs (assuming IA32_EFER.NXE=1).
>
> - Gil
Rebased.
Thanks, eddie
commit 032caed3da123950eeb3e192baf444d4eae80c85
Author: root <root@eddie-wb.localdomain>
Date: Tue Mar 31 16:22:49 2009 +0800
Use rsvd_bits_mask in load_pdptrs and remove bit 5-6 from rsvd_bits_mask per latest SDM.
Signed-off-by: Eddie Dong <Eddie.Dong@intel.com>
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2eab758..1bed3aa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -225,11 +225,6 @@ static int is_nx(struct kvm_vcpu *vcpu)
return vcpu->arch.shadow_efer & EFER_NX;
}
-static int is_present_pte(unsigned long pte)
-{
- return pte & PT_PRESENT_MASK;
-}
-
static int is_shadow_present_pte(u64 pte)
{
return pte != shadow_trap_nonpresent_pte
@@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
context->rsvd_bits_mask[1][0] = 0;
break;
case PT32E_ROOT_LEVEL:
+ context->rsvd_bits_mask[0][2] =
+ rsvd_bits(maxphyaddr, 63) |
+ rsvd_bits(7, 8) | rsvd_bits(1, 2); /* PDPTE */
context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
rsvd_bits(maxphyaddr, 62); /* PDE */
context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 258e5d5..2a6eb50 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -75,4 +75,9 @@ static inline int is_paging(struct kvm_vcpu *vcpu)
return vcpu->arch.cr0 & X86_CR0_PG;
}
+static inline int is_present_pte(unsigned long pte)
+{
+ return pte & PT_PRESENT_MASK;
+}
+
#endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 961bd2b..b449ff0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -233,7 +233,8 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
goto out;
}
for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
- if ((pdpte[i] & 1) && (pdpte[i] & 0xfffffff0000001e6ull)) {
+ if (is_present_pte(pdpte[i]) &&
+ (pdpte[i] & vcpu->arch.mmu.rsvd_bits_mask[0][2])) {
ret = 0;
goto out;
}
* Re: Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit
2009-03-31 7:32 ` Dong, Eddie
@ 2009-03-31 8:55 ` Avi Kivity
2009-03-31 15:03 ` Dong, Eddie
From: Avi Kivity @ 2009-03-31 8:55 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm, Neiger, Gil
Dong, Eddie wrote:
> Neiger, Gil wrote:
>
>> PDPTEs are used only if CR0.PG=CR4.PAE=1.
>>
>> In that situation, their format depends on the value of IA32_EFER.LMA.
>>
>> If IA32_EFER.LMA=0, bit 63 is reserved and must be 0 in any PDPTE
>> that is marked present. The execute-disable setting of a page is
>> determined only by the PDE and PTE.
>>
>> If IA32_EFER.LMA=1, bit 63 is used for the execute-disable in PML4
>> entries, PDPTEs, PDEs, and PTEs (assuming IA32_EFER.NXE=1).
>>
>> - Gil
>>
>
> Rebased.
> Thanks, eddie
>
>
>
Looks good, but doesn't apply; please check if you are working against
the latest version.
--
error compiling committee.c: too many arguments to function
* RE: Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit
2009-03-31 8:55 ` Avi Kivity
@ 2009-03-31 15:03 ` Dong, Eddie
2009-04-01 8:28 ` Avi Kivity
From: Dong, Eddie @ 2009-03-31 15:03 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm, Dong, Eddie
>
> Looks good, but doesn't apply; please check if you are working against
> the latest version.
Rebased on top of a317a1e496b22d1520218ecf16a02498b99645e2 + previous rsvd bits violation check patch.
thx, eddie
Use rsvd_bits_mask in load_pdptrs and remove bit 5-6 from rsvd_bits_mask per latest SDM.
Signed-off-by: Eddie Dong <Eddie.Dong@intel.com>
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 41a0482..400c056 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -225,11 +225,6 @@ static int is_nx(struct kvm_vcpu *vcpu)
return vcpu->arch.shadow_efer & EFER_NX;
}
-static int is_present_pte(unsigned long pte)
-{
- return pte & PT_PRESENT_MASK;
-}
-
static int is_shadow_present_pte(u64 pte)
{
return pte != shadow_trap_nonpresent_pte
@@ -2195,6 +2190,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
context->rsvd_bits_mask[1][0] = 0;
break;
case PT32E_ROOT_LEVEL:
+ context->rsvd_bits_mask[0][2] =
+ rsvd_bits(maxphyaddr, 63) |
+ rsvd_bits(7, 8) | rsvd_bits(1, 2); /* PDPTE */
context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
rsvd_bits(maxphyaddr, 62); /* PDE */
context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index eaab214..3494a2f 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -75,4 +75,9 @@ static inline int is_paging(struct kvm_vcpu *vcpu)
return vcpu->arch.cr0 & X86_CR0_PG;
}
+static inline int is_present_pte(unsigned long pte)
+{
+ return pte & PT_PRESENT_MASK;
+}
+
#endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9702353..3d07c9a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -234,7 +234,8 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
goto out;
}
for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
- if ((pdpte[i] & 1) && (pdpte[i] & 0xfffffff0000001e6ull)) {
+ if (is_present_pte(pdpte[i]) &&
+ (pdpte[i] & vcpu->arch.mmu.rsvd_bits_mask[0][2])) {
ret = 0;
goto out;
}
* Re: Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit
2009-03-31 15:03 ` Dong, Eddie
@ 2009-04-01 8:28 ` Avi Kivity
From: Avi Kivity @ 2009-04-01 8:28 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm
Dong, Eddie wrote:
>> Looks good, but doesn't apply; please check if you are working against
>> the latest version.
>>
>
> Rebased on top of a317a1e496b22d1520218ecf16a02498b99645e2 + previous rsvd bits violation check patch.
>
Applied, thanks.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* RE: Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit
2009-03-30 12:13 ` Avi Kivity
@ 2009-03-30 13:46 ` Dong, Eddie
From: Dong, Eddie @ 2009-03-30 13:46 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm, Dong, Eddie
Avi Kivity wrote:
> Dong, Eddie wrote:
>> @@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu
>> *vcpu, int level) context->rsvd_bits_mask[1][0] = 0;
>> break;
>> case PT32E_ROOT_LEVEL:
>> + context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
>> + rsvd_bits(maxphyaddr, 62) |
>> + rsvd_bits(7, 8) | rsvd_bits(1, 2); /* PDPTE */
>> context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
>> rsvd_bits(maxphyaddr, 62); /* PDE */
>> context->rsvd_bits_mask[0][0] = exb_bit_rsvd
>
> Are you sure that PDPTEs support NX? They don't support R/W and U/S,
> so it seems likely that NX is reserved as well even when EFER.NXE is
> enabled.
I am referring to Fig. 3-20/3-21 of SDM 3A, but I think Fig. 3-20/21 is missing the EXB bit, given Table 3-5 and Section 3.10.3.
I will double-check with an internal architect.
thx, eddie
* Re: Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit
2009-03-30 8:27 ` Dong, Eddie
@ 2009-03-30 12:13 ` Avi Kivity
2009-03-30 13:46 ` Dong, Eddie
From: Avi Kivity @ 2009-03-30 12:13 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm
Dong, Eddie wrote:
> @@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
> context->rsvd_bits_mask[1][0] = 0;
> break;
> case PT32E_ROOT_LEVEL:
> + context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
> + rsvd_bits(maxphyaddr, 62) |
> + rsvd_bits(7, 8) | rsvd_bits(1, 2); /* PDPTE */
> context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
> rsvd_bits(maxphyaddr, 62); /* PDE */
> context->rsvd_bits_mask[0][0] = exb_bit_rsvd
Are you sure that PDPTEs support NX? They don't support R/W and U/S, so
it seems likely that NX is reserved as well even when EFER.NXE is enabled.
--
error compiling committee.c: too many arguments to function
* RE: Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit
2009-03-30 2:49 ` Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit Dong, Eddie
@ 2009-03-30 8:27 ` Dong, Eddie
2009-03-30 12:13 ` Avi Kivity
From: Dong, Eddie @ 2009-03-30 8:27 UTC (permalink / raw)
To: Dong, Eddie, Avi Kivity; +Cc: kvm, Dong, Eddie
Dong, Eddie wrote:
> This is followup of rsvd_bits emulation.
>
Based on the new rsvd_bits emulation patch.
thx, eddie
commit 2c1472ef2b9fd87a261e8b58a7db11afd6a111dc
Author: root <root@eddie-wb.localdomain>
Date: Mon Mar 30 17:05:47 2009 +0800
Use rsvd_bits_mask in load_pdptrs for cleanup with EXB bit considered.
Signed-off-by: Eddie Dong <Eddie.Dong@intel.com>
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2eab758..eaf41c0 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -225,11 +225,6 @@ static int is_nx(struct kvm_vcpu *vcpu)
return vcpu->arch.shadow_efer & EFER_NX;
}
-static int is_present_pte(unsigned long pte)
-{
- return pte & PT_PRESENT_MASK;
-}
-
static int is_shadow_present_pte(u64 pte)
{
return pte != shadow_trap_nonpresent_pte
@@ -2199,6 +2194,9 @@ void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
context->rsvd_bits_mask[1][0] = 0;
break;
case PT32E_ROOT_LEVEL:
+ context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
+ rsvd_bits(maxphyaddr, 62) |
+ rsvd_bits(7, 8) | rsvd_bits(1, 2); /* PDPTE */
context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
rsvd_bits(maxphyaddr, 62); /* PDE */
context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 258e5d5..2a6eb50 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -75,4 +75,9 @@ static inline int is_paging(struct kvm_vcpu *vcpu)
return vcpu->arch.cr0 & X86_CR0_PG;
}
+static inline int is_present_pte(unsigned long pte)
+{
+ return pte & PT_PRESENT_MASK;
+}
+
#endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 961bd2b..b449ff0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -233,7 +233,8 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
goto out;
}
for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
- if ((pdpte[i] & 1) && (pdpte[i] & 0xfffffff0000001e6ull)) {
+ if (is_present_pte(pdpte[i]) &&
+ (pdpte[i] & vcpu->arch.mmu.rsvd_bits_mask[0][2])) {
ret = 0;
goto out;
}
* Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit
2009-03-30 1:53 ` Dong, Eddie
@ 2009-03-30 2:49 ` Dong, Eddie
2009-03-30 8:27 ` Dong, Eddie
From: Dong, Eddie @ 2009-03-30 2:49 UTC (permalink / raw)
To: Dong, Eddie, Avi Kivity; +Cc: kvm, Dong, Eddie
This is a followup to the rsvd_bits emulation patch.
thx, eddie
commit 171eb2b2d8282dd913a5d5c6c695fd64e1ddcf4c
Author: root <root@eddie-wb.localdomain>
Date: Mon Mar 30 11:39:50 2009 +0800
Use rsvd_bits_mask in load_pdptrs for cleanup and considing EXB bit.
Signed-off-by: Eddie Dong <Eddie.Dong@intel.com>
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 0a6f109..b0bf8b2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2255,6 +2255,9 @@ static int paging32E_init_context(struct kvm_vcpu *vcpu)
if (!is_nx(vcpu))
exb_bit_rsvd = rsvd_bits(63, 63);
+ context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
+ rsvd_bits(maxphyaddr, 62) |
+ rsvd_bits(7, 8) | rsvd_bits(1, 2); /* PDPTE */
context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
rsvd_bits(maxphyaddr, 62); /* PDE */
context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
@@ -2270,6 +2273,17 @@ static int paging32E_init_context(struct kvm_vcpu *vcpu)
static int init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
{
struct kvm_mmu *context = &vcpu->arch.mmu;
+ int maxphyaddr = cpuid_maxphyaddr(vcpu);
+ u64 exb_bit_rsvd = 0;
+
+ if (!is_long_mode(vcpu) && is_pae(vcpu) && is_paging(vcpu)) {
+ if (!is_nx(vcpu))
+ exb_bit_rsvd = rsvd_bits(63, 63);
+
+ context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
+ rsvd_bits(maxphyaddr, 62) |
+ rsvd_bits(7, 8) | rsvd_bits(1, 2); /* PDPTE */
+ }
context->new_cr3 = nonpaging_new_cr3;
context->page_fault = tdp_page_fault;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 961bd2b..ff178fd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -233,7 +233,8 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
goto out;
}
for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
- if ((pdpte[i] & 1) && (pdpte[i] & 0xfffffff0000001e6ull)) {
+ if ((pdpte[i] & PT_PRESENT_MASK) &&
+ (pdpte[i] & vcpu->arch.mmu.rsvd_bits_mask[0][2])) {
ret = 0;
goto out;
}