From: Paolo Bonzini <pbonzini@redhat.com>
To: "Barret Rhoden" <brho@google.com>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Dave Jiang" <dave.jiang@intel.com>,
	"Ross Zwisler" <zwisler@kernel.org>,
	"Vishal Verma" <vishal.l.verma@intel.com>,
	"Radim Krčmář" <rkrcmar@redhat.com>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Ingo Molnar" <mingo@redhat.com>,
	"Borislav Petkov" <bp@alien8.de>
Cc: kvm@vger.kernel.org, yu.c.zhang@intel.com,
	linux-nvdimm@lists.01.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
	yi.z.zhang@intel.com
Subject: Re: [RFC PATCH] kvm: Use huge pages for DAX-backed files
Date: Tue, 6 Nov 2018 22:16:30 +0100
Message-ID: <13b9a5a5-6773-131e-8014-f1b1bc975794@redhat.com>
In-Reply-To: <20181106160553.5a8025ed@gnomeregan.cam.corp.google.com>

On 06/11/2018 22:05, Barret Rhoden wrote:
> On 2018-10-29 at 17:07 Barret Rhoden <brho@google.com> wrote:
>> Another issue is that kvm_mmu_zap_collapsible_spte() also uses
>> PageTransCompoundMap() to detect huge pages, but we don't have a way to
>> get the HVA easily.  Can we just aggressively zap DAX pages there?
> 
> Any thoughts about this?  Is there a way to determine the HVA or GFN in
> this function:

Yes: inside the loop, iter.gfn is the gfn and iter.level is the level
(1=PTE, 2=PDE, ...).  iter.level is of course unusable here, for the
same reason as *levelp in transparent_hugepage_adjust, but you can use
iter.gfn together with gfn_to_hva.
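
For example, with iter.gfn plumbed into kvm_mmu_zap_collapsible_spte()
as an extra "gfn" argument, the check could look roughly like this
(untested sketch; hva_is_dax_hugepage() is a made-up placeholder for
whatever DAX-aware huge-mapping test gets written):

        /* gfn would come from the slot rmap walk (iter.gfn in the caller). */
        unsigned long hva = gfn_to_hva(kvm, gfn);

        if (sp->role.direct && !kvm_is_reserved_pfn(pfn) &&
            (PageTransCompoundMap(pfn_to_page(pfn)) ||
             hva_is_dax_hugepage(hva))) {   /* hypothetical helper */
                pte_list_remove(rmap_head, sptep);
                need_tlb_flush = 1;
                goto restart;
        }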

Paolo

> static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
>                                          struct kvm_rmap_head *rmap_head)
> {
>         u64 *sptep;
>         struct rmap_iterator iter;
>         int need_tlb_flush = 0;
>         kvm_pfn_t pfn;
>         struct kvm_mmu_page *sp;
> 
> restart:
>         for_each_rmap_spte(rmap_head, &iter, sptep) {
>                 sp = page_header(__pa(sptep));
>                 pfn = spte_to_pfn(*sptep);
> 
>                 /*
>                  * We cannot do huge page mapping for indirect shadow pages,
>                  * which are found on the last rmap (level = 1) when not using
>                  * tdp; such shadow pages are synced with the page table in
>                  * the guest, and the guest page table is using 4K page size
>                  * mapping if the indirect sp has level = 1.
>                  */
>                 if (sp->role.direct &&
>                         !kvm_is_reserved_pfn(pfn) &&
>                         PageTransCompoundMap(pfn_to_page(pfn))) {
>                         pte_list_remove(rmap_head, sptep);
>                         need_tlb_flush = 1;
>                         goto restart;
>                 }
>         }
> 
>         return need_tlb_flush;
> }
> 
> If not, I was thinking of changing that loop to always remove PTEs for
> DAX mappings, with the understanding that they'll get faulted back in
> later.  Ideally we'd check whether the page is huge, but DAX pages
> aren't transparent huge pages, so the PageTransCompoundMap() check
> can't see them.
> 
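That fallback might be as simple as extending the existing condition
(untested sketch; is_zone_device_page() matches any ZONE_DEVICE page,
which is broader than fsdax alone, and it assumes DAX pages make it
past the kvm_is_reserved_pfn() check in the first place):

        /*
         * DAX pages are ZONE_DEVICE, never THP, so the
         * PageTransCompoundMap() check can't see them; zap them
         * unconditionally and let them be faulted back in later.
         */
        if (sp->role.direct && !kvm_is_reserved_pfn(pfn) &&
            (PageTransCompoundMap(pfn_to_page(pfn)) ||
             is_zone_device_page(pfn_to_page(pfn)))) {
                pte_list_remove(rmap_head, sptep);
                need_tlb_flush = 1;
                goto restart;
        }
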
> Thanks,
> 
> Barret
