* KVM: MMU: always invalidate and flush on spte page size change
@ 2010-05-28 12:44 Marcelo Tosatti
  2010-05-30 10:28 ` Avi Kivity
  0 siblings, 1 reply; 4+ messages in thread
From: Marcelo Tosatti @ 2010-05-28 12:44 UTC (permalink / raw)
  To: kvm; +Cc: Avi Kivity, Andrea Arcangeli


Always invalidate spte and flush TLBs when changing page size, to make
sure different sized translations for the same address are never cached
in a CPU's TLB.

The first case where this occurs is when a non-leaf spte pointer is
overwritten by a leaf, large spte entry. This can happen after dirty
logging is disabled on a memslot, for example.

The second case is when a leaf, large spte entry is overwritten with a
non-leaf spte pointer, in __direct_map. Note this cannot happen now
because the only potential source of such an overwrite is dirty logging
being enabled, which zaps all MMU pages. But this might change
in the future, so better to be robust against it.

Noticed by Andrea.

KVM-Stable-Tag
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

Index: kvm/arch/x86/kvm/mmu.c
===================================================================
--- kvm.orig/arch/x86/kvm/mmu.c
+++ kvm/arch/x86/kvm/mmu.c
@@ -1952,6 +1952,8 @@ static void mmu_set_spte(struct kvm_vcpu
 
 			child = page_header(pte & PT64_BASE_ADDR_MASK);
 			mmu_page_remove_parent_pte(child, sptep);
+			__set_spte(sptep, shadow_trap_nonpresent_pte);
+			kvm_flush_remote_tlbs(vcpu->kvm);
 		} else if (pfn != spte_to_pfn(*sptep)) {
 			pgprintk("hfn old %lx new %lx\n",
 				 spte_to_pfn(*sptep), pfn);
@@ -2015,6 +2017,16 @@ static int __direct_map(struct kvm_vcpu 
 			break;
 		}
 
+		if (is_shadow_present_pte(*iterator.sptep) &&
+		    !is_large_pte(*iterator.sptep))
+			continue;
+
+		if (is_large_pte(*iterator.sptep)) {
+			rmap_remove(vcpu->kvm, iterator.sptep);
+			__set_spte(iterator.sptep, shadow_trap_nonpresent_pte);
+			kvm_flush_remote_tlbs(vcpu->kvm);
+		}
+
 		if (*iterator.sptep == shadow_trap_nonpresent_pte) {
 			u64 base_addr = iterator.addr;
 

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: KVM: MMU: always invalidate and flush on spte page size change
  2010-05-28 12:44 KVM: MMU: always invalidate and flush on spte page size change Marcelo Tosatti
@ 2010-05-30 10:28 ` Avi Kivity
  2010-05-30 15:19   ` Marcelo Tosatti
  0 siblings, 1 reply; 4+ messages in thread
From: Avi Kivity @ 2010-05-30 10:28 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: kvm, Andrea Arcangeli

On 05/28/2010 03:44 PM, Marcelo Tosatti wrote:
> Always invalidate spte and flush TLBs when changing page size, to make
> sure different sized translations for the same address are never cached
> in a CPU's TLB.
>
> The first case where this occurs is when a non-leaf spte pointer is
> overwritten by a leaf, large spte entry. This can happen after dirty
> logging is disabled on a memslot, for example.
>
> The second case is when a leaf, large spte entry is overwritten with a
> non-leaf spte pointer, in __direct_map. Note this cannot happen now
> because the only potential source of such an overwrite is dirty logging
> being enabled, which zaps all MMU pages. But this might change
> in the future, so better to be robust against it.
>
> Noticed by Andrea.
>
> KVM-Stable-Tag
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>
> Index: kvm/arch/x86/kvm/mmu.c
> ===================================================================
> --- kvm.orig/arch/x86/kvm/mmu.c
> +++ kvm/arch/x86/kvm/mmu.c
> @@ -1952,6 +1952,8 @@ static void mmu_set_spte(struct kvm_vcpu
>
>   			child = page_header(pte & PT64_BASE_ADDR_MASK);
>   			mmu_page_remove_parent_pte(child, sptep);
> +			__set_spte(sptep, shadow_trap_nonpresent_pte);
> +			kvm_flush_remote_tlbs(vcpu->kvm);
>   		} else if (pfn != spte_to_pfn(*sptep)) {
>   			pgprintk("hfn old %lx new %lx\n",
>   				 spte_to_pfn(*sptep), pfn);
>    

Applied this bit.

> @@ -2015,6 +2017,16 @@ static int __direct_map(struct kvm_vcpu
>   			break;
>   		}
>
> +		if (is_shadow_present_pte(*iterator.sptep) &&
> +		    !is_large_pte(*iterator.sptep))
> +			continue;
> +
> +		if (is_large_pte(*iterator.sptep)) {
> +			rmap_remove(vcpu->kvm, iterator.sptep);
> +			__set_spte(iterator.sptep, shadow_trap_nonpresent_pte);
> +			kvm_flush_remote_tlbs(vcpu->kvm);
> +		}
> +
>    

Don't we have exactly the same issue in FNAME(fetch)()?

-- 
error compiling committee.c: too many arguments to function



* Re: KVM: MMU: always invalidate and flush on spte page size change
  2010-05-30 10:28 ` Avi Kivity
@ 2010-05-30 15:19   ` Marcelo Tosatti
  2010-05-31 11:49     ` Avi Kivity
  0 siblings, 1 reply; 4+ messages in thread
From: Marcelo Tosatti @ 2010-05-30 15:19 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm, Andrea Arcangeli

On Sun, May 30, 2010 at 01:28:19PM +0300, Avi Kivity wrote:
> On 05/28/2010 03:44 PM, Marcelo Tosatti wrote:
> >Always invalidate spte and flush TLBs when changing page size, to make
> >sure different sized translations for the same address are never cached
> >in a CPU's TLB.
> >
> >The first case where this occurs is when a non-leaf spte pointer is
> >overwritten by a leaf, large spte entry. This can happen after dirty
> >logging is disabled on a memslot, for example.
> >
> >The second case is when a leaf, large spte entry is overwritten with a
> >non-leaf spte pointer, in __direct_map. Note this cannot happen now
> >because the only potential source of such an overwrite is dirty logging
> >being enabled, which zaps all MMU pages. But this might change
> >in the future, so better to be robust against it.
> >
> >Noticed by Andrea.
> >
> >KVM-Stable-Tag
> >Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> >
> >Index: kvm/arch/x86/kvm/mmu.c
> >===================================================================
> >--- kvm.orig/arch/x86/kvm/mmu.c
> >+++ kvm/arch/x86/kvm/mmu.c
> >@@ -1952,6 +1952,8 @@ static void mmu_set_spte(struct kvm_vcpu
> >
> >  			child = page_header(pte & PT64_BASE_ADDR_MASK);
> >  			mmu_page_remove_parent_pte(child, sptep);
> >+			__set_spte(sptep, shadow_trap_nonpresent_pte);
> >+			kvm_flush_remote_tlbs(vcpu->kvm);
> >  		} else if (pfn != spte_to_pfn(*sptep)) {
> >  			pgprintk("hfn old %lx new %lx\n",
> >  				 spte_to_pfn(*sptep), pfn);
> 
> Applied this bit.
> 
> >@@ -2015,6 +2017,16 @@ static int __direct_map(struct kvm_vcpu
> >  			break;
> >  		}
> >
> >+		if (is_shadow_present_pte(*iterator.sptep) &&
> >+		    !is_large_pte(*iterator.sptep))
> >+			continue;
> >+
> >+		if (is_large_pte(*iterator.sptep)) {
> >+			rmap_remove(vcpu->kvm, iterator.sptep);
> >+			__set_spte(iterator.sptep, shadow_trap_nonpresent_pte);
> >+			kvm_flush_remote_tlbs(vcpu->kvm);
> >+		}
> >+
> 
> Don't we have exactly the same issue in FNAME(fetch)()?

Yes, and it's already handled there:

                if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep))
                        continue;

                if (is_large_pte(*sptep)) {
                        rmap_remove(vcpu->kvm, sptep);
                        __set_spte(sptep, shadow_trap_nonpresent_pte);
                        kvm_flush_remote_tlbs(vcpu->kvm);
                }



* Re: KVM: MMU: always invalidate and flush on spte page size change
  2010-05-30 15:19   ` Marcelo Tosatti
@ 2010-05-31 11:49     ` Avi Kivity
  0 siblings, 0 replies; 4+ messages in thread
From: Avi Kivity @ 2010-05-31 11:49 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: kvm, Andrea Arcangeli

On 05/30/2010 06:19 PM, Marcelo Tosatti wrote:
> On Sun, May 30, 2010 at 01:28:19PM +0300, Avi Kivity wrote:
>    
>> On 05/28/2010 03:44 PM, Marcelo Tosatti wrote:
>>      
>>> Always invalidate spte and flush TLBs when changing page size, to make
>>> sure different sized translations for the same address are never cached
>>> in a CPU's TLB.
>>>
>>> The first case where this occurs is when a non-leaf spte pointer is
>>> overwritten by a leaf, large spte entry. This can happen after dirty
>>> logging is disabled on a memslot, for example.
>>>
>>> The second case is when a leaf, large spte entry is overwritten with a
>>> non-leaf spte pointer, in __direct_map. Note this cannot happen now
>>> because the only potential source of such an overwrite is dirty logging
>>> being enabled, which zaps all MMU pages. But this might change
>>> in the future, so better to be robust against it.
>>>
>>> Noticed by Andrea.
>>>
>>> KVM-Stable-Tag
>>> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>>>
>>> Index: kvm/arch/x86/kvm/mmu.c
>>> ===================================================================
>>> --- kvm.orig/arch/x86/kvm/mmu.c
>>> +++ kvm/arch/x86/kvm/mmu.c
>>> @@ -1952,6 +1952,8 @@ static void mmu_set_spte(struct kvm_vcpu
>>>
>>>   			child = page_header(pte & PT64_BASE_ADDR_MASK);
>>>   			mmu_page_remove_parent_pte(child, sptep);
>>> +			__set_spte(sptep, shadow_trap_nonpresent_pte);
>>> +			kvm_flush_remote_tlbs(vcpu->kvm);
>>>   		} else if (pfn != spte_to_pfn(*sptep)) {
>>>   			pgprintk("hfn old %lx new %lx\n",
>>>   				 spte_to_pfn(*sptep), pfn);
>>>        
>> Applied this bit.
>>
>>      
>>> @@ -2015,6 +2017,16 @@ static int __direct_map(struct kvm_vcpu
>>>   			break;
>>>   		}
>>>
>>> +		if (is_shadow_present_pte(*iterator.sptep) &&
>>> +		    !is_large_pte(*iterator.sptep))
>>> +			continue;
>>> +
>>> +		if (is_large_pte(*iterator.sptep)) {
>>> +			rmap_remove(vcpu->kvm, iterator.sptep);
>>> +			__set_spte(iterator.sptep, shadow_trap_nonpresent_pte);
>>> +			kvm_flush_remote_tlbs(vcpu->kvm);
>>> +		}
>>> +
>>>        
>> Don't we have exactly the same issue in FNAME(fetch)()?
>>      
> Yes, and it's already handled there:
>
>                  if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep))
>                          continue;
>
>                  if (is_large_pte(*sptep)) {
>                          rmap_remove(vcpu->kvm, sptep);
>                          __set_spte(sptep, shadow_trap_nonpresent_pte);
>                          kvm_flush_remote_tlbs(vcpu->kvm);
>                  }
>
>    

Well that bit of code wants deduplication under a good name.


-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


