kvm.vger.kernel.org archive mirror
* [PATCH] kvm: mmu: Use fast PF path for access tracking of huge pages when possible
@ 2021-11-02  3:29 Junaid Shahid
  2021-11-02 17:41 ` Ben Gardon
  2021-11-03 21:45 ` Sean Christopherson
  0 siblings, 2 replies; 3+ messages in thread
From: Junaid Shahid @ 2021-11-02  3:29 UTC (permalink / raw)
  To: kvm, pbonzini; +Cc: jmattson, seanjc, bgardon

The fast page fault path bails out on write faults to huge pages in
order to accommodate dirty logging. This change adds a check to do that
only when dirty logging is actually enabled, so that access tracking for
huge pages can still use the fast path for write faults in the common
case.

Signed-off-by: Junaid Shahid <junaids@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 354d2ca92df4..5df9181c5082 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3191,8 +3191,9 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 			new_spte |= PT_WRITABLE_MASK;
 
 			/*
-			 * Do not fix write-permission on the large spte.  Since
-			 * we only dirty the first page into the dirty-bitmap in
+			 * Do not fix write-permission on the large spte when
+			 * dirty logging is enabled. Since we only dirty the
+			 * first page into the dirty-bitmap in
 			 * fast_pf_fix_direct_spte(), other pages are missed
 			 * if its slot has dirty logging enabled.
 			 *
@@ -3201,7 +3202,8 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 			 *
 			 * See the comments in kvm_arch_commit_memory_region().
 			 */
-			if (sp->role.level > PG_LEVEL_4K)
+			if (sp->role.level > PG_LEVEL_4K &&
+			    kvm_slot_dirty_track_enabled(fault->slot))
 				break;
 		}
 
-- 
2.33.1.1089.g2158813163f-goog



* Re: [PATCH] kvm: mmu: Use fast PF path for access tracking of huge pages when possible
  2021-11-02  3:29 [PATCH] kvm: mmu: Use fast PF path for access tracking of huge pages when possible Junaid Shahid
@ 2021-11-02 17:41 ` Ben Gardon
  2021-11-03 21:45 ` Sean Christopherson
  1 sibling, 0 replies; 3+ messages in thread
From: Ben Gardon @ 2021-11-02 17:41 UTC (permalink / raw)
  To: Junaid Shahid; +Cc: kvm, pbonzini, jmattson, seanjc

On Mon, Nov 1, 2021 at 8:30 PM Junaid Shahid <junaids@google.com> wrote:
>
> The fast page fault path bails out on write faults to huge pages in
> order to accommodate dirty logging. This change adds a check to do that
> only when dirty logging is actually enabled, so that access tracking for
> huge pages can still use the fast path for write faults in the common
> case.
>
> Signed-off-by: Junaid Shahid <junaids@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>



* Re: [PATCH] kvm: mmu: Use fast PF path for access tracking of huge pages when possible
  2021-11-02  3:29 [PATCH] kvm: mmu: Use fast PF path for access tracking of huge pages when possible Junaid Shahid
  2021-11-02 17:41 ` Ben Gardon
@ 2021-11-03 21:45 ` Sean Christopherson
  1 sibling, 0 replies; 3+ messages in thread
From: Sean Christopherson @ 2021-11-03 21:45 UTC (permalink / raw)
  To: Junaid Shahid; +Cc: kvm, pbonzini, jmattson, bgardon

On Mon, Nov 01, 2021, Junaid Shahid wrote:
> The fast page fault path bails out on write faults to huge pages in
> order to accommodate dirty logging. This change adds a check to do that
> only when dirty logging is actually enabled, so that access tracking for
> huge pages can still use the fast path for write faults in the common
> case.
>
> Signed-off-by: Junaid Shahid <junaids@google.com>

One nit, otherwise

Reviewed-by: Sean Christopherson <seanjc@google.com>

> ---
>  arch/x86/kvm/mmu/mmu.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 354d2ca92df4..5df9181c5082 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3191,8 +3191,9 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  			new_spte |= PT_WRITABLE_MASK;
>  
>  			/*
> -			 * Do not fix write-permission on the large spte.  Since
> -			 * we only dirty the first page into the dirty-bitmap in
> +			 * Do not fix write-permission on the large spte when
> +			 * dirty logging is enabled. Since we only dirty the
> +			 * first page into the dirty-bitmap in
>  			 * fast_pf_fix_direct_spte(), other pages are missed
>  			 * if its slot has dirty logging enabled.
>  			 *
> @@ -3201,7 +3202,8 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  			 *
>  			 * See the comments in kvm_arch_commit_memory_region().

This part is slightly stale as kvm_mmu_slot_apply_flags() now has the comments.
Maybe just drop it entirely?  The comments there don't do a whole lot to make this
code more understandable.

>  			 */
> -			if (sp->role.level > PG_LEVEL_4K)
> +			if (sp->role.level > PG_LEVEL_4K &&
> +			    kvm_slot_dirty_track_enabled(fault->slot))
>  				break;
>  		}
>  
