* [PATCH 0/6] various x86 kvm fixes
From: Joerg Roedel @ 2009-03-05 12:12 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, linux-kernel

Hi Avi, Marcelo,

this small series of patches collects some fixes I have gathered
during the last few days of KVM hacking. Please review and comment, or
apply where appropriate :)

Joerg





* [PATCH 1/6] kvm/x86/svm: force new asid on vcpu migration
From: Joerg Roedel @ 2009-03-05 12:12 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, linux-kernel, Joerg Roedel

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/svm.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 1821c20..0e66bca 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -180,7 +180,7 @@ static inline void kvm_write_cr2(unsigned long val)
 
 static inline void force_new_asid(struct kvm_vcpu *vcpu)
 {
-	to_svm(vcpu)->asid_generation--;
+	to_svm(vcpu)->asid_generation = 0;
 }
 
 static inline void flush_guest_tlb(struct kvm_vcpu *vcpu)
@@ -716,6 +716,7 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		svm->vmcb->control.tsc_offset += delta;
 		vcpu->cpu = cpu;
 		kvm_migrate_timers(vcpu);
+		force_new_asid(vcpu);
 	}
 
 	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
-- 
1.5.6.4
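
For context on why zeroing (rather than decrementing) is the robust
choice: the consumer of asid_generation is pre_svm_run(), which
allocates a fresh ASID whenever the vcpu's cached generation does not
match the current cpu's. A minimal sketch of that check, with names
assumed from the 2.6.29-era svm.c rather than quoted from this patch:

	/* Sketch (assumed-era code): per-cpu generations start at 1 and
	 * only grow, so a cached value of 0 can never match and a new
	 * ASID is guaranteed on the next guest entry. */
	static void pre_svm_run(struct vcpu_svm *svm)
	{
		int cpu = raw_smp_processor_id();
		struct svm_cpu_data *svm_data = per_cpu(svm_data, cpu);

		if (svm->vcpu.cpu != cpu ||
		    svm->asid_generation != svm_data->asid_generation)
			new_asid(svm, svm_data);
	}

With the old decrement, a migrated vcpu could in principle return to a
cpu whose generation happens to equal the decremented value and keep a
stale ASID.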




* [PATCH 2/6] kvm/x86/mmu: remove call to kvm_mmu_pte_write from walk_addr
From: Joerg Roedel @ 2009-03-05 12:12 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, linux-kernel, Joerg Roedel

There is no reason to update the shadow pte here because the guest pte
is only being marked dirty.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/paging_tmpl.h |    1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 0f11792..a0c11ea 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -209,7 +209,6 @@ walk:
 		if (ret)
 			goto walk;
 		pte |= PT_DIRTY_MASK;
-		kvm_mmu_pte_write(vcpu, pte_gpa, (u8 *)&pte, sizeof(pte), 0);
 		walker->ptes[walker->level - 1] = pte;
 	}
 
-- 
1.5.6.4
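
For readers without the tree at hand, a rough sketch of the surrounding
walk_addr() hunk (paraphrased from the 2.6.29-era paging_tmpl.h; exact
details assumed): the dirty bit has already been written back to the
guest pte in guest memory via cmpxchg_gpte() by the time the removed
call ran, so the call only re-triggered shadow pte write emulation.

	if (write_fault && !is_dirty_pte(pte)) {
		mark_page_dirty(vcpu->kvm, table_gfn);
		ret = FNAME(cmpxchg_gpte)(vcpu->kvm, table_gfn, index,
					  pte, pte | PT_DIRTY_MASK);
		if (ret)
			goto walk;
		pte |= PT_DIRTY_MASK;
		walker->ptes[walker->level - 1] = pte;
	}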




* [PATCH 3/6] kvm/x86/mmu: don't unnecessarily recalculate table_gfn in *fetch
From: Joerg Roedel @ 2009-03-05 12:12 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, linux-kernel, Joerg Roedel

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/paging_tmpl.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index a0c11ea..79668ba 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -315,7 +315,7 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 			direct = 1;
 			if (!is_dirty_pte(gw->ptes[level - 1]))
 				access &= ~ACC_WRITE_MASK;
-			table_gfn = gpte_to_gfn(gw->ptes[level - 1]);
+			table_gfn = gw->table_gfn[level - delta];
 		} else {
 			direct = 0;
 			table_gfn = gw->table_gfn[level - 2];
-- 
1.5.6.4
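
The table_gfn[] array indexed here is the walker's cache of the guest
page-table frames visited by walk_addr(). A sketch of the relevant
guest_walker fields (assumed from the same-era paging_tmpl.h):

	struct guest_walker {
		int level;
		gfn_t table_gfn[PT_MAX_FULL_LEVELS];	/* guest table frame per level */
		pt_element_t ptes[PT_MAX_FULL_LEVELS];	/* guest pte read at each level */
		/* ... access bits and fault details omitted ... */
	};

Indexing the cache avoids recomputing the gfn from the guest pte on
every fetch.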




* [PATCH 4/6] kvm/x86/mmu: handle invlpg on large pages
From: Joerg Roedel @ 2009-03-05 12:12 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, linux-kernel, Joerg Roedel

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/paging_tmpl.h |   12 +++++++++---
 1 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 79668ba..aa79396 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -441,6 +441,7 @@ out_unlock:
 static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 {
 	struct kvm_shadow_walk_iterator iterator;
+	struct kvm_mmu_page *sp;
 	pt_element_t gpte;
 	gpa_t pte_gpa = -1;
 	int level;
@@ -451,12 +452,17 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 	for_each_shadow_entry(vcpu, gva, iterator) {
 		level = iterator.level;
 		sptep = iterator.sptep;
+		sp = page_header(__pa(sptep));
+
+		if (sp->role.direct) {
+			/* mapped from a guest's large_pte */
+			kvm_mmu_zap_page(vcpu->kvm, sp);
+			kvm_flush_remote_tlbs(vcpu->kvm);
+			return;
+		}
 
-		/* FIXME: properly handle invlpg on large guest pages */
 		if (level == PT_PAGE_TABLE_LEVEL ||
 		    ((level == PT_DIRECTORY_LEVEL) && is_large_pte(*sptep))) {
-			struct kvm_mmu_page *sp = page_header(__pa(sptep));
-
 			pte_gpa = (sp->gfn << PAGE_SHIFT);
 			pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
 
-- 
1.5.6.4




* [PATCH 5/6] kvm/x86: call kvm_lapic_sync_from_vapic with preemption disabled
From: Joerg Roedel @ 2009-03-05 12:12 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, linux-kernel, Joerg Roedel

The function uses kmap_atomic. Calling it with preemption enabled is
racy.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/x86.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b556b6a..ff833f4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3203,6 +3203,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 
 	kvm_guest_exit();
 
+	kvm_lapic_sync_from_vapic(vcpu);
+
 	preempt_enable();
 
 	down_read(&vcpu->kvm->slots_lock);
@@ -3218,7 +3220,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 	if (vcpu->arch.exception.pending && kvm_x86_ops->exception_injected(vcpu))
 		vcpu->arch.exception.pending = false;
 
-	kvm_lapic_sync_from_vapic(vcpu);
 
 	r = kvm_x86_ops->handle_exit(kvm_run, vcpu);
 out:
-- 
1.5.6.4




* [PATCH 6/6] kvm/x86/mmu: include PT_PAGE_SIZE_MASK in PT64_PERM_MASK
From: Joerg Roedel @ 2009-03-05 12:12 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, linux-kernel, Joerg Roedel

We also need to do a remote tlb flush if the PSE bit changes. The
pte's pfn should also change when this bit changes, but we can't rely
on that. So check this bit too, to be on the safe side.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/mmu.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2a36f7f..055b181 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -121,7 +121,7 @@ module_param(oos_shadow, bool, 0644);
 	(PAGE_MASK & ~((1ULL << (PAGE_SHIFT + PT32_LEVEL_BITS)) - 1))
 
 #define PT64_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | PT_USER_MASK \
-			| PT64_NX_MASK)
+			| PT64_NX_MASK | PT_PAGE_SIZE_MASK)
 
 #define PFERR_PRESENT_MASK (1U << 0)
 #define PFERR_WRITE_MASK (1U << 1)
-- 
1.5.6.4




* Re: [PATCH 3/6] kvm/x86/mmu: don't unnecessarily recalculate table_gfn in *fetch
From: Joerg Roedel @ 2009-03-05 14:36 UTC (permalink / raw)
  To: Avi Kivity, Marcelo Tosatti; +Cc: kvm, linux-kernel

On Thu, Mar 05, 2009 at 01:12:30PM +0100, Joerg Roedel wrote:
> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> ---
>  arch/x86/kvm/paging_tmpl.h |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index a0c11ea..79668ba 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -315,7 +315,7 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
>  			direct = 1;
>  			if (!is_dirty_pte(gw->ptes[level - 1]))
>  				access &= ~ACC_WRITE_MASK;
> -			table_gfn = gpte_to_gfn(gw->ptes[level - 1]);
> +			table_gfn = gw->table_gfn[level - delta];
>  		} else {
>  			direct = 0;
>  			table_gfn = gw->table_gfn[level - 2];

Made a mistake here when rebasing these patches. Updated patch is below:

From de38a141a0ba44b3d46e333620d518cd831326c0 Mon Sep 17 00:00:00 2001
From: Joerg Roedel <joerg.roedel@amd.com>
Date: Wed, 4 Mar 2009 20:49:15 +0100
Subject: [PATCH 3/6] kvm/x86/mmu: don't unnecessarily recalculate table_gfn in *fetch

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/paging_tmpl.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index a0c11ea..79668ba 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -315,7 +315,7 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 			direct = 1;
 			if (!is_dirty_pte(gw->ptes[level - 1]))
 				access &= ~ACC_WRITE_MASK;
-			table_gfn = gpte_to_gfn(gw->ptes[level - 1]);
+			table_gfn = gw->table_gfn[level - 1];
 		} else {
 			direct = 0;
 			table_gfn = gw->table_gfn[level - 2];
-- 
1.5.6.4



* Re: [PATCH 1/6] kvm/x86/svm: force new asid on vcpu migration
From: Marcelo Tosatti @ 2009-03-05 17:53 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Avi Kivity, kvm, linux-kernel

On Thu, Mar 05, 2009 at 01:12:28PM +0100, Joerg Roedel wrote:
> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> ---
>  arch/x86/kvm/svm.c |    3 ++-
>  1 files changed, 2 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 1821c20..0e66bca 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -180,7 +180,7 @@ static inline void kvm_write_cr2(unsigned long val)
>  
>  static inline void force_new_asid(struct kvm_vcpu *vcpu)
>  {
> -	to_svm(vcpu)->asid_generation--;
> +	to_svm(vcpu)->asid_generation = 0;
>  }

Won't the per-cpu asid_generation overflow at some point?

And can this comparison in pre_svm_run ever be true:

        if (svm->vcpu.cpu != cpu

?
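
(For reference, a sketch of the allocator that generation feeds, with
names assumed from the era's svm.c rather than taken from this patch:

	static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *svm_data)
	{
		if (svm_data->next_asid > svm_data->max_asid) {
			++svm_data->asid_generation;
			svm_data->next_asid = 1;
			svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
		}

		svm->asid_generation = svm_data->asid_generation;
		svm->vmcb->control.asid = svm_data->next_asid++;
	}

The per-cpu counter increments on every ASID-space wrap and is never
reset, hence the overflow question.)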

>  static inline void flush_guest_tlb(struct kvm_vcpu *vcpu)
> @@ -716,6 +716,7 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  		svm->vmcb->control.tsc_offset += delta;
>  		vcpu->cpu = cpu;
>  		kvm_migrate_timers(vcpu);
> +		force_new_asid(vcpu);
>  	}
>  
>  	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)




* Re: [PATCH 4/6] kvm/x86/mmu: handle invlpg on large pages
From: Marcelo Tosatti @ 2009-03-05 21:11 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Avi Kivity, kvm, linux-kernel

On Thu, Mar 05, 2009 at 01:12:31PM +0100, Joerg Roedel wrote:
> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> ---
>  arch/x86/kvm/paging_tmpl.h |   12 +++++++++---
>  1 files changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index 79668ba..aa79396 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -441,6 +441,7 @@ out_unlock:
>  static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>  {
>  	struct kvm_shadow_walk_iterator iterator;
> +	struct kvm_mmu_page *sp;
>  	pt_element_t gpte;
>  	gpa_t pte_gpa = -1;
>  	int level;
> @@ -451,12 +452,17 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>  	for_each_shadow_entry(vcpu, gva, iterator) {
>  		level = iterator.level;
>  		sptep = iterator.sptep;
> +		sp = page_header(__pa(sptep));
> +
> +		if (sp->role.direct) {
> +			/* mapped from a guest's large_pte */
> +			kvm_mmu_zap_page(vcpu->kvm, sp);
> +			kvm_flush_remote_tlbs(vcpu->kvm);
> +			return;
> +		}

If the guest has 32-bit ptes there might be:

- two large shadow entries to cover 4MB
- one large shadow entry and one shadow page with 512 4k entries
- two shadow pages with 512 4k entries each

So we need to cover all these cases.
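
To make the mismatch concrete — a sketch of the size arithmetic only,
for a 32-bit PSE guest under PAE/long-mode shadow paging, not a
proposed fix:

	one 32-bit PSE gpte:   maps 4MB  (1024 guest pages)
	one shadow PDE (PAE):  maps 2MB  (512 pages)

	guest 4MB page -> shadow PDE #0 (large spte, or 512 4k sptes)
	               -> shadow PDE #1 (large spte, or 512 4k sptes)

So zapping only the direct sp found under the faulting gva, and then
returning, can leave the other 2MB half of the same guest mapping
stale.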



* Re: [PATCH 5/6] kvm/x86: call kvm_lapic_sync_from_vapic with preemption disabled
From: Marcelo Tosatti @ 2009-03-05 21:39 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Avi Kivity, kvm, linux-kernel

On Thu, Mar 05, 2009 at 01:12:32PM +0100, Joerg Roedel wrote:
> The function uses kmap_atomic. Calling it with preemption enabled is
> racy.

kmap_atomic disables preemption?

void *kmap_atomic_prot(struct page *page, enum km_type type, pgprot_t prot)
{
        enum fixed_addresses idx;
        unsigned long vaddr;

        /* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
        pagefault_disable();
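
A few lines further down (continuation assumed from the same-era
arch/x86/mm/highmem_32.c) the slot selection is per-cpu, which is why
the map/unmap pair must stay on one cpu — and pagefault_disable() does
increment the preempt count, so preemption is off between the two:

        idx = type + KM_TYPE_NR * smp_processor_id();
        vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
        set_pte(kmap_pte - idx, mk_pte(page, prot));

        return (void *)vaddr;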




* Re: [PATCH 0/6] various x86 kvm fixes
From: Marcelo Tosatti @ 2009-03-05 21:50 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Avi Kivity, kvm, linux-kernel

On Thu, Mar 05, 2009 at 01:12:27PM +0100, Joerg Roedel wrote:
> Hi Avi, Marcelo,
> 
> this small series of patches collects some fixes I have collected during
> the last days of KVM hacking. Please review and comment or apply if
> appropriate :)

Applied 2, 3 and 6.

Thanks.



* Re: [PATCH 0/6] various x86 kvm fixes
From: Joerg Roedel @ 2009-03-06 13:04 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: Avi Kivity, kvm, linux-kernel

On Thu, Mar 05, 2009 at 06:50:37PM -0300, Marcelo Tosatti wrote:
> On Thu, Mar 05, 2009 at 01:12:27PM +0100, Joerg Roedel wrote:
> > Hi Avi, Marcelo,
> > 
> > this small series of patches collects some fixes I have collected during
> > the last days of KVM hacking. Please review and comment or apply if
> > appropriate :)
> 
> Applied 2, 3 and 6.

Thanks. Sorry for the two false positives. After so much mmu debugging
over the last few days, it seems I am starting to see ghost bugs :(

	Joerg


* Re: [PATCH 4/6] kvm/x86/mmu: handle invlpg on large pages
From: Joerg Roedel @ 2009-03-06 13:06 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: Avi Kivity, kvm, linux-kernel

On Thu, Mar 05, 2009 at 06:11:22PM -0300, Marcelo Tosatti wrote:
> On Thu, Mar 05, 2009 at 01:12:31PM +0100, Joerg Roedel wrote:
> > Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> > ---
> >  arch/x86/kvm/paging_tmpl.h |   12 +++++++++---
> >  1 files changed, 9 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> > index 79668ba..aa79396 100644
> > --- a/arch/x86/kvm/paging_tmpl.h
> > +++ b/arch/x86/kvm/paging_tmpl.h
> > @@ -441,6 +441,7 @@ out_unlock:
> >  static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
> >  {
> >  	struct kvm_shadow_walk_iterator iterator;
> > +	struct kvm_mmu_page *sp;
> >  	pt_element_t gpte;
> >  	gpa_t pte_gpa = -1;
> >  	int level;
> > @@ -451,12 +452,17 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
> >  	for_each_shadow_entry(vcpu, gva, iterator) {
> >  		level = iterator.level;
> >  		sptep = iterator.sptep;
> > +		sp = page_header(__pa(sptep));
> > +
> > +		if (sp->role.direct) {
> > +			/* mapped from a guest's large_pte */
> > +			kvm_mmu_zap_page(vcpu->kvm, sp);
> > +			kvm_flush_remote_tlbs(vcpu->kvm);
> > +			return;
> > +		}
> 
> If the guest has 32-bit ptes there might be:
> 
> - two large shadow entries to cover 4MB
> - one large shadow entry and one shadow page with 512 4k entries
> - two shadow pages with 512 4k entries each
> 
> So we need to cover all these cases.

Right. Thanks for pointing this out. I will post an updated version of
this patch.

	Joerg


* Re: [PATCH 6/6] kvm/x86/mmu: include PT_PAGE_SIZE_MASK in PT64_PERM_MASK
From: Avi Kivity @ 2009-03-08 12:32 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Marcelo Tosatti, kvm, linux-kernel

Joerg Roedel wrote:
> We also need to do a remote tlb flush if the PSE bit changes. The
> pte's pfn should also change when this bit changes, but we can't rely
> on that. So check this bit too, to be on the safe side.
>
> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> ---
>  arch/x86/kvm/mmu.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 2a36f7f..055b181 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -121,7 +121,7 @@ module_param(oos_shadow, bool, 0644);
>  	(PAGE_MASK & ~((1ULL << (PAGE_SHIFT + PT32_LEVEL_BITS)) - 1))
>  
>  #define PT64_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | PT_USER_MASK \
> -			| PT64_NX_MASK)
> +			| PT64_NX_MASK | PT_PAGE_SIZE_MASK)
>  
>   

PT64_PERM_MASK is used in this way:

static bool need_remote_flush(u64 old, u64 new)
{
    if (!is_shadow_present_pte(old))
        return false;
    if (!is_shadow_present_pte(new))
        return true;
    if ((old ^ new) & PT64_BASE_ADDR_MASK)
        return true;
    old ^= PT64_NX_MASK;
    new ^= PT64_NX_MASK;
    return (old & ~new & PT64_PERM_MASK) != 0;
}

We don't check whether a bit changes; rather, we check whether a bit is
turned off (or on, in the case of nx). But I think we need a tlb flush
even if we change PSE from 0 to 1.

Thinking a bit more, this will never trigger, since an spte at a 
particular level cannot have its pse bit changed (at least in 
kvm_mmu_pte_write).
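
Spelled out with the two directions that matter here (illustrative
values, not from the mail):

	/* writable bit cleared: set in old, clear in new */
	old & ~new & PT_WRITABLE_MASK   != 0	-> flush
	/* PSE turned on: clear in old, set in new */
	old & ~new & PT_PAGE_SIZE_MASK  == 0	-> no flush

so adding PT_PAGE_SIZE_MASK to PT64_PERM_MASK only catches the 1->0
direction of the PSE bit.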


* Re: [PATCH 1/6] kvm/x86/svm: force new asid on vcpu migration
From: Avi Kivity @ 2009-07-26 15:23 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Marcelo Tosatti, kvm, linux-kernel

On 03/05/2009 02:12 PM, Joerg Roedel wrote:
> Signed-off-by: Joerg Roedel<joerg.roedel@amd.com>
> ---
>   arch/x86/kvm/svm.c |    3 ++-
>   1 files changed, 2 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 1821c20..0e66bca 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -180,7 +180,7 @@ static inline void kvm_write_cr2(unsigned long val)
>
>   static inline void force_new_asid(struct kvm_vcpu *vcpu)
>   {
> -	to_svm(vcpu)->asid_generation--;
> +	to_svm(vcpu)->asid_generation = 0;
>   }
>
>   static inline void flush_guest_tlb(struct kvm_vcpu *vcpu)
> @@ -716,6 +716,7 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>   		svm->vmcb->control.tsc_offset += delta;
>   		vcpu->cpu = cpu;
>   		kvm_migrate_timers(vcpu);
> +		force_new_asid(vcpu);
>   	}
>
>   	for (i = 0; i<  NR_HOST_SAVE_USER_MSRS; i++)
>    

Does this remove the need for 6eaa802c ("KVM: SVM: fix random segfaults 
with NPT enabled")?

