* [PATCH] KVM: x86/mmu: Complete prefetch for trailing SPTEs for direct, legacy MMU
From: Sean Christopherson @ 2021-08-18 23:56 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Sergey Senozhatsky, Ben Gardon

Make a final call to direct_pte_prefetch_many() if there are "trailing"
SPTEs to prefetch, i.e. SPTEs for GFNs following the faulting GFN.  The
call to direct_pte_prefetch_many() in the loop only handles the case
where there are !PRESENT SPTEs preceding a PRESENT SPTE.
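
For reference, here is a sketch of the pre-patch loop.  The hunk below
shows only the loop body, so the setup lines are reconstructed from
context and may not match the kernel source exactly:

	static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
					  struct kvm_mmu_page *sp, u64 *sptep)
	{
		u64 *spte, *start = NULL;
		int i;

		/* Align down to the start of the PTE_PREFETCH_NUM-sized
		 * (i.e. 8-SPTE) prefetch window containing the fault. */
		i = (sptep - sp->spt) & ~(PTE_PREFETCH_NUM - 1);
		spte = sp->spt + i;

		for (i = 0; i < PTE_PREFETCH_NUM; i++, spte++) {
			if (is_shadow_present_pte(*spte) || spte == sptep) {
				if (!start)
					continue;
				/* Flushes only a batch of !PRESENT SPTEs
				 * that precede a PRESENT SPTE. */
				if (direct_pte_prefetch_many(vcpu, sp, start, spte) < 0)
					break;
				start = NULL;
			} else if (!start)
				start = spte;
		}
		/* Bug: a batch still open here (start != NULL), i.e. the
		 * trailing !PRESENT SPTEs, is silently dropped. */
	}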

E.g. if the faulting GFN is a multiple of 8 (the prefetch size) and all
SPTEs for the following GFNs are !PRESENT, the loop will terminate with
"start = sptep+1" and not prefetch any SPTEs.

Prefetching trailing SPTEs as intended can drastically reduce the number
of guest page faults, e.g. when accessing the first byte of every 4kb page
in a 6gb chunk of virtual memory in a VM with 8gb of preallocated memory,
the number of pf_fixed events observed in L0 drops from ~1.75M to <0.27M.
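
For illustration, a minimal sketch of that access pattern as a guest
userspace program; the actual test isn't included in this thread, so the
details below are assumptions:

	#include <stdint.h>
	#include <sys/mman.h>

	#define CHUNK_BYTES	(6ULL << 30)	/* 6gb of virtual memory */
	#define STEP_BYTES	4096ULL		/* one 4kb page */

	int main(void)
	{
		volatile uint8_t *buf = mmap(NULL, CHUNK_BYTES,
					     PROT_READ | PROT_WRITE,
					     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED)
			return 1;

		/* Touch the first byte of every 4kb page.  Each touch of a
		 * not-yet-mapped page takes a guest #PF (a pf_fixed event
		 * in L0) unless a previous fault in the same 8-SPTE window
		 * already prefetched the SPTE. */
		for (uint64_t off = 0; off < CHUNK_BYTES; off += STEP_BYTES)
			(void)buf[off];

		return 0;
	}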

Note, this only affects memory that is backed by 4kb pages as KVM doesn't
prefetch when installing hugepages.  Shadow paging prefetching is not
affected as it does not batch the prefetches due to the need to process
the corresponding guest PTE.  The TDP MMU is not affected because it
doesn't have prefetching, yet...

Fixes: 957ed9effd80 ("KVM: MMU: prefetch ptes when intercepted guest #PF")
Cc: Sergey Senozhatsky <senozhatsky@google.com>
Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---

Cc'd Ben as this highlights a potential gap with the TDP MMU, which lacks
prefetching of any sort.  For large VMs, which are likely backed by
hugepages anyway, this is a non-issue as the benefit of holding mmu_lock
for read likely masks the cost of taking more VM-Exits.  But VMs with a
small number of vCPUs won't benefit as much from parallel page faults,
e.g. there's no benefit at all if there's a single vCPU.

 arch/x86/kvm/mmu/mmu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a272ccbddfa1..daf7df35f788 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2818,11 +2818,13 @@ static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
 			if (!start)
 				continue;
 			if (direct_pte_prefetch_many(vcpu, sp, start, spte) < 0)
-				break;
+				return;
 			start = NULL;
 		} else if (!start)
 			start = spte;
 	}
+	if (start)
+		direct_pte_prefetch_many(vcpu, sp, start, spte);
 }
 
 static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
-- 
2.33.0.rc1.237.g0d66db33f3-goog



* Re: [PATCH] KVM: x86/mmu: Complete prefetch for trailing SPTEs for direct, legacy MMU
From: Sergey Senozhatsky @ 2021-08-19  4:15 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Sergey Senozhatsky, Ben Gardon

[..]

> Make a final call to direct_pte_prefetch_many() if there are "trailing"
> SPTEs to prefetch, i.e. SPTEs for GFNs following the faulting GFN.  The
> call to direct_pte_prefetch_many() in the loop only handles the case
> where there are !PRESENT SPTEs preceding a PRESENT SPTE.
>
> [..]


Tested-by: Sergey Senozhatsky <senozhatsky@chromium.org>

I ran some tests.

- VM Boot up

From

EPT_VIOLATION    1192184    75.18%     4.40%      0.77us  18020.01us      4.32us ( +-   1.71% )

to

EPT_VIOLATION     947460    69.92%     4.64%      0.69us  34902.15us      5.06us ( +-   1.64% )



- Running test app (in VM)

From

EPT_VIOLATION    6550167    71.05%    11.76%      0.77us  32562.18us      3.51us ( +-   0.36% )

to

EPT_VIOLATION    5489904    68.32%    11.29%      0.71us  16564.19us      3.92us ( +-   0.29% )
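
The columns above have the shape of "perf kvm stat report" output:
samples, samples%, time%, and min/max/avg exit latency.  Assuming that is
the tool behind these numbers (the exact invocation isn't stated in the
thread), comparable data can be gathered on the host with something like:

	perf kvm stat record -p <qemu-pid> -- sleep 30
	perf kvm stat report --event=vmexit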


* Re: [PATCH] KVM: x86/mmu: Complete prefetch for trailing SPTEs for direct, legacy MMU
From: Lai Jiangshan @ 2021-08-25 22:49 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, LKML, Sergey Senozhatsky, Ben Gardon

On Thu, Aug 19, 2021 at 7:57 AM Sean Christopherson <seanjc@google.com> wrote:
>
> [..]
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index a272ccbddfa1..daf7df35f788 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2818,11 +2818,13 @@ static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
>                         if (!start)
>                                 continue;
>                         if (direct_pte_prefetch_many(vcpu, sp, start, spte) < 0)
> -                               break;
> +                               return;
>                         start = NULL;
>                 } else if (!start)
>                         start = spte;
>         }
> +       if (start)
> +               direct_pte_prefetch_many(vcpu, sp, start, spte);
>  }


Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>

>
>  static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
> --
> 2.33.0.rc1.237.g0d66db33f3-goog
>


* Re: [PATCH] KVM: x86/mmu: Complete prefetch for trailing SPTEs for direct, legacy MMU
From: Ben Gardon @ 2021-08-26 21:35 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, kvm, LKML, Sergey Senozhatsky

On Wed, Aug 25, 2021 at 3:49 PM Lai Jiangshan
<jiangshanlai+lkml@gmail.com> wrote:
>
> On Thu, Aug 19, 2021 at 7:57 AM Sean Christopherson <seanjc@google.com> wrote:
> >
> > [..]
> >
> > Fixes: 957ed9effd80 ("KVM: MMU: prefetch ptes when intercepted guest #PF")
> > Cc: Sergey Senozhatsky <senozhatsky@google.com>
> > Cc: Ben Gardon <bgardon@google.com>
> > Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> > ---
> >
> > Cc'd Ben as this highlights a potential gap with the TDP MMU, which lacks
> > prefetching of any sort.  For large VMs, which are likely backed by
> > hugepages anyway, this is a non-issue as the benefit of holding mmu_lock
> > for read likely masks the cost of taking more VM-Exits.  But VMs with a
> > small number of vCPUs won't benefit as much from parallel page faults,
> > e.g. there's no benefit at all if there's a single vCPU.

Yeah, that probably does represent a reduction in performance for very
small VMs. Besides keeping read critical sections small, there's no
reason not to do prefetching with the TDP MMU; it just needs to be
implemented.

> >
> >  arch/x86/kvm/mmu/mmu.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index a272ccbddfa1..daf7df35f788 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -2818,11 +2818,13 @@ static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
> >                         if (!start)
> >                                 continue;
> >                         if (direct_pte_prefetch_many(vcpu, sp, start, spte) < 0)
> > -                               break;
> > +                               return;
> >                         start = NULL;
> >                 } else if (!start)
> >                         start = spte;
> >         }
> > +       if (start)
> > +               direct_pte_prefetch_many(vcpu, sp, start, spte);

It might be worth explaining some of what you laid out in the commit
description here. This function's implementation is not the easiest to
read.
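
One possible shape for such a comment (suggested wording, not from the
thread):

	/*
	 * Prefetch the batch left open when the loop ends, i.e. the
	 * trailing !PRESENT SPTEs.  E.g. if the faulting GFN is the first
	 * in the window and all following SPTEs are !PRESENT, the loop
	 * never sees a PRESENT SPTE and so never prefetches.
	 */
	if (start)
		direct_pte_prefetch_many(vcpu, sp, start, spte);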

> >  }
>
>
> Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
>
> >
> >  static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
> > --
> > 2.33.0.rc1.237.g0d66db33f3-goog
> >


* Re: [PATCH] KVM: x86/mmu: Complete prefetch for trailing SPTEs for direct, legacy MMU
From: Paolo Bonzini @ 2021-09-23 16:27 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Sergey Senozhatsky, Ben Gardon

On 19/08/21 01:56, Sean Christopherson wrote:
> [..]

Queued, thanks.

Paolo

