* KVM: MMU: zero caches before entering mmu_lock protected section
From: Marcelo Tosatti @ 2009-01-08 18:26 UTC
  To: kvm; +Cc: Avi Kivity


Zero the pre-allocated cache pages before entering the mmu_lock protected region.
This is safe since the caches are per-vcpu.

Smaller chunks are already zeroed by kmem_cache_zalloc.
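
For reference, a sketch of that small-object path, based on the
mmu_topup_memory_cache() of this era (quoted from memory, details
assumed):

	static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
					  struct kmem_cache *base_cache, int min)
	{
		void *obj;

		if (cache->nobjs >= min)
			return 0;
		while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
			/* Zeroed at allocation time, outside mmu_lock. */
			obj = kmem_cache_zalloc(base_cache, GFP_KERNEL);
			if (!obj)
				return -ENOMEM;
			cache->objects[cache->nobjs++] = obj;
		}
		return 0;
	}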

This gives a ~0.90% reduction in system time with AIM7 on a RHEL3 guest with 4 vcpus.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
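---

The win comes from where the two halves run: the caches are topped up
before mmu_lock is taken, while mmu_memory_cache_alloc() runs under it,
so moving the zeroing to allocation time shortens the locked section.
A minimal sketch of the calling pattern, simplified from the page fault
path of this era (function name and call site are assumed, not part of
this patch):

	static int page_fault_sketch(struct kvm_vcpu *vcpu, gva_t gva,
				     u32 error_code)
	{
		int r;

		/* May sleep; with this patch, also hands back zeroed pages. */
		r = mmu_topup_memory_caches(vcpu);
		if (r)
			return r;

		spin_lock(&vcpu->kvm->mmu_lock);
		/*
		 * Allocations here are served from the pre-filled per-vcpu
		 * caches via mmu_memory_cache_alloc(), which no longer does
		 * a memset while the lock is held.
		 */
		r = do_handle_fault(vcpu, gva, error_code); /* placeholder */
		spin_unlock(&vcpu->kvm->mmu_lock);
		return r;
	}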

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 10bdb2a..823d0cd 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -301,7 +301,7 @@ static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache,
 	if (cache->nobjs >= min)
 		return 0;
 	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
-		page = alloc_page(GFP_KERNEL);
+		page = alloc_page(GFP_KERNEL|__GFP_ZERO);
 		if (!page)
 			return -ENOMEM;
 		set_page_private(page, 0);
@@ -352,7 +352,6 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc,
 
 	BUG_ON(!mc->nobjs);
 	p = mc->objects[--mc->nobjs];
-	memset(p, 0, size);
 	return p;
 }
 


* Re: KVM: MMU: zero caches before entering mmu_lock protected section
From: Avi Kivity @ 2009-01-08 18:35 UTC
  To: Marcelo Tosatti; +Cc: kvm

Marcelo Tosatti wrote:
> Zero the pre-allocated cache pages before entering the mmu_lock protected region.
> This is safe since the caches are per-vcpu.
>
> Smaller chunks are already zeroed by kmem_cache_zalloc.
>
> This gives a ~0.90% reduction in system time with AIM7 on a RHEL3 guest with 4 vcpus.
>
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 10bdb2a..823d0cd 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -301,7 +301,7 @@ static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache,
>  	if (cache->nobjs >= min)
>  		return 0;
>  	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
> -		page = alloc_page(GFP_KERNEL);
> +		page = alloc_page(GFP_KERNEL|__GFP_ZERO);
>  		if (!page)
>  			return -ENOMEM;
>  		set_page_private(page, 0);
> @@ -352,7 +352,6 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc,
>  
>  	BUG_ON(!mc->nobjs);
>  	p = mc->objects[--mc->nobjs];
> -	memset(p, 0, size);
>  	return p;
>  }
>  
>   

I think we can drop the memset altogether, since we will clear the page 
in ->prefetch_page() anyway.
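
For reference, a sketch of the nonpaging ->prefetch_page() of this era
(quoted from memory, names assumed), which overwrites every entry of
the page:

	static void nonpaging_prefetch_page(struct kvm_vcpu *vcpu,
					    struct kvm_mmu_page *sp)
	{
		int i;

		/* Every slot is written, so pre-zeroing is redundant. */
		for (i = 0; i < PT64_ENT_PER_PAGE; ++i)
			sp->spt[i] = shadow_trap_nonpresent_pte;
	}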


-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


