linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/3] kvm: Use huge pages for DAX-backed files
@ 2019-04-04 20:23 Barret Rhoden
  2019-04-04 20:23 ` [PATCH v3 1/3] mm: make dev_pagemap_mapping_shift() externally visible Barret Rhoden
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Barret Rhoden @ 2019-04-04 20:23 UTC (permalink / raw)
  To: Paolo Bonzini, Dan Williams, David Hildenbrand, Dave Jiang,
	Alexander Duyck
  Cc: linux-nvdimm, x86, kvm, linux-kernel, yu.c.zhang, yi.z.zhang

This patch series depends on DAX pages not being PageReserved.  Once
that is in place, these changes will let KVM use huge pages with
DAX-backed files.

From previous discussions[1], it sounds like DAX might not need to keep
the PageReserved bit, but that has not been sorted out yet.

Without the PageReserved change, KVM and DAX still work with these
patches, simply without huge pages - which is the current situation.

If you want to test the huge-page functionality as if DAX pages weren't
PageReserved for KVM, this hack does the trick:

------------------
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c44985375e7f..ee539eec1fb8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -153,6 +153,10 @@ __weak int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
 
 bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
 {
+       // XXX hack: treat ZONE_DEVICE (DAX) pages as not reserved
+       if (pfn_valid(pfn) && is_zone_device_page(pfn_to_page(pfn)))
+               return false;
+
        if (pfn_valid(pfn))
                return PageReserved(pfn_to_page(pfn));
 
------------------

If we end up leaving DAX pages marked PageReserved, I can turn that
hack into a proper commit and have KVM alone treat DAX pages as not
reserved.
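
For reference, a rough, untested sketch of what that commit could look
like - the hack above folded into kvm_is_reserved_pfn() with a real
comment, shown here in full:

bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
{
	/*
	 * Sketch only: DAX/ZONE_DEVICE pages carry PageReserved today,
	 * but KVM can treat them as ordinary, non-reserved pages.
	 */
	if (pfn_valid(pfn) && is_zone_device_page(pfn_to_page(pfn)))
		return false;

	if (pfn_valid(pfn))
		return PageReserved(pfn_to_page(pfn));

	return true;
}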

v2 -> v3:
v2: https://lore.kernel.org/lkml/20181114215155.259978-1-brho@google.com/
- Updated Acks/Reviewed-by
- Rebased onto linux-next

v1 -> v2:
v1: https://lore.kernel.org/lkml/20181109203921.178363-1-brho@google.com/
- Updated Acks/Reviewed-by
- Minor touchups
- Added patch to remove redundant PageReserved() check
- Rebased onto linux-next

RFC/discussion thread:
https://lore.kernel.org/lkml/20181029210716.212159-1-brho@google.com/

[1] https://lore.kernel.org/lkml/ee8cc068-903c-d87e-f418-ade46786249e@redhat.com/

Barret Rhoden (3):
  mm: make dev_pagemap_mapping_shift() externally visible
  kvm: Use huge pages for DAX-backed files
  kvm: remove redundant PageReserved() check

 arch/x86/kvm/mmu.c  | 33 +++++++++++++++++++++++++++++++--
 include/linux/mm.h  |  3 +++
 mm/memory-failure.c | 38 +++-----------------------------------
 mm/util.c           | 34 ++++++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c |  8 ++------
 5 files changed, 73 insertions(+), 43 deletions(-)

-- 
2.21.0.392.gf8f6787159e-goog


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [PATCH v3 1/3] mm: make dev_pagemap_mapping_shift() externally visible
  2019-04-04 20:23 [PATCH v3 0/3] kvm: Use huge pages for DAX-backed files Barret Rhoden
@ 2019-04-04 20:23 ` Barret Rhoden
  2019-04-04 20:23 ` [PATCH v3 2/3] kvm: Use huge pages for DAX-backed files Barret Rhoden
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Barret Rhoden @ 2019-04-04 20:23 UTC (permalink / raw)
  To: Paolo Bonzini, Dan Williams, David Hildenbrand, Dave Jiang,
	Alexander Duyck
  Cc: linux-nvdimm, x86, kvm, linux-kernel, yu.c.zhang, yi.z.zhang

KVM has a use case for determining the size of a DAX mapping.  The KVM
code has easy access to the address and the mm; hence the change in
parameters.
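
For reference, the intended caller (patch 2 of this series) does
roughly the following; the hva and the use of current->mm are the
KVM-side details that motivated the new signature:

	/*
	 * hva comes from gfn_to_hva(); the vcpu thread runs in the VMM's
	 * process, so current->mm maps the guest's memory.
	 */
	switch (dev_pagemap_mapping_shift(hva, current->mm)) {
	case PMD_SHIFT:		/* 2 MiB mapping on x86 */
	case PUD_SHIFT:		/* 1 GiB mapping on x86 */
		return true;
	}
	return false;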

Signed-off-by: Barret Rhoden <brho@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
---
 include/linux/mm.h  |  3 +++
 mm/memory-failure.c | 38 +++-----------------------------------
 mm/util.c           | 34 ++++++++++++++++++++++++++++++++++
 3 files changed, 40 insertions(+), 35 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fe52e266016e..c09e2f4c9bac 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -963,6 +963,9 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
 }
 #endif /* CONFIG_DEV_PAGEMAP_OPS */
 
+unsigned long dev_pagemap_mapping_shift(unsigned long address,
+					struct mm_struct *mm);
+
 static inline void get_page(struct page *page)
 {
 	page = compound_head(page);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index fc8b51744579..6779c163c4f4 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -265,40 +265,6 @@ void shake_page(struct page *p, int access)
 }
 EXPORT_SYMBOL_GPL(shake_page);
 
-static unsigned long dev_pagemap_mapping_shift(struct page *page,
-		struct vm_area_struct *vma)
-{
-	unsigned long address = vma_address(page, vma);
-	pgd_t *pgd;
-	p4d_t *p4d;
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
-
-	pgd = pgd_offset(vma->vm_mm, address);
-	if (!pgd_present(*pgd))
-		return 0;
-	p4d = p4d_offset(pgd, address);
-	if (!p4d_present(*p4d))
-		return 0;
-	pud = pud_offset(p4d, address);
-	if (!pud_present(*pud))
-		return 0;
-	if (pud_devmap(*pud))
-		return PUD_SHIFT;
-	pmd = pmd_offset(pud, address);
-	if (!pmd_present(*pmd))
-		return 0;
-	if (pmd_devmap(*pmd))
-		return PMD_SHIFT;
-	pte = pte_offset_map(pmd, address);
-	if (!pte_present(*pte))
-		return 0;
-	if (pte_devmap(*pte))
-		return PAGE_SHIFT;
-	return 0;
-}
-
 /*
  * Failure handling: if we can't find or can't kill a process there's
  * not much we can do.	We just print a message and ignore otherwise.
@@ -329,7 +295,9 @@ static void add_to_kill(struct task_struct *tsk, struct page *p,
 	tk->addr = page_address_in_vma(p, vma);
 	tk->addr_valid = 1;
 	if (is_zone_device_page(p))
-		tk->size_shift = dev_pagemap_mapping_shift(p, vma);
+		tk->size_shift =
+			dev_pagemap_mapping_shift(vma_address(p, vma),
+						  vma->vm_mm);
 	else
 		tk->size_shift = compound_order(compound_head(p)) + PAGE_SHIFT;
 
diff --git a/mm/util.c b/mm/util.c
index f94a1ac09cd8..24444e35a7ed 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -795,3 +795,37 @@ int get_cmdline(struct task_struct *task, char *buffer, int buflen)
 out:
 	return res;
 }
+
+unsigned long dev_pagemap_mapping_shift(unsigned long address,
+					struct mm_struct *mm)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pgd = pgd_offset(mm, address);
+	if (!pgd_present(*pgd))
+		return 0;
+	p4d = p4d_offset(pgd, address);
+	if (!p4d_present(*p4d))
+		return 0;
+	pud = pud_offset(p4d, address);
+	if (!pud_present(*pud))
+		return 0;
+	if (pud_devmap(*pud))
+		return PUD_SHIFT;
+	pmd = pmd_offset(pud, address);
+	if (!pmd_present(*pmd))
+		return 0;
+	if (pmd_devmap(*pmd))
+		return PMD_SHIFT;
+	pte = pte_offset_map(pmd, address);
+	if (!pte_present(*pte))
+		return 0;
+	if (pte_devmap(*pte))
+		return PAGE_SHIFT;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dev_pagemap_mapping_shift);
-- 
2.21.0.392.gf8f6787159e-goog


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [PATCH v3 2/3] kvm: Use huge pages for DAX-backed files
  2019-04-04 20:23 [PATCH v3 0/3] kvm: Use huge pages for DAX-backed files Barret Rhoden
  2019-04-04 20:23 ` [PATCH v3 1/3] mm: make dev_pagemap_mapping_shift() externally visible Barret Rhoden
@ 2019-04-04 20:23 ` Barret Rhoden
  2019-04-04 20:23 ` [PATCH v3 3/3] kvm: remove redundant PageReserved() check Barret Rhoden
  2019-04-04 20:28 ` [PATCH v3 0/3] kvm: Use huge pages for DAX-backed files Barret Rhoden
  3 siblings, 0 replies; 6+ messages in thread
From: Barret Rhoden @ 2019-04-04 20:23 UTC (permalink / raw)
  To: Paolo Bonzini, Dan Williams, David Hildenbrand, Dave Jiang,
	Alexander Duyck
  Cc: linux-nvdimm, x86, kvm, linux-kernel, yu.c.zhang, yi.z.zhang

This change allows KVM to map DAX-backed files made of huge pages with
huge mappings in the EPT/TDP.

DAX pages are not PageTransCompound.  The existing check is trying to
determine whether the mapping for the pfn is a huge mapping.  For
non-DAX mappings, e.g. transparent huge pages, that means checking
PageTransCompoundMap().  For DAX, we can check the host page table
itself.

Note that KVM already faulted in the page (or huge page) in the host's
page table, and we hold the KVM mmu spinlock.  The fault path grabbed
that lock before checking the mmu notifier sequence number with
mmu_notifier_retry(), so the host mapping cannot be invalidated while
we walk it.
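
For reference, the fault-path pattern this relies on looks roughly like
the following sketch (paraphrased, not an exact quote of the code):

	mmu_seq = vcpu->kvm->mmu_notifier_seq;
	smp_rmb();

	/* ...fault the page into the host page table (gfn_to_pfn)... */

	spin_lock(&vcpu->kvm->mmu_lock);
	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
		goto out_unlock;	/* an invalidation raced; retry */

	/*
	 * Until we unlock, the host mapping for this pfn is stable, so
	 * pfn_is_huge_mapped() may safely walk the host page table.
	 */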

Signed-off-by: Barret Rhoden <brho@google.com>
---
 arch/x86/kvm/mmu.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index eee455a8a612..48cbf5553688 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3207,6 +3207,35 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
 	return -EFAULT;
 }
 
+static bool pfn_is_huge_mapped(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn)
+{
+	struct page *page = pfn_to_page(pfn);
+	unsigned long hva;
+
+	if (!is_zone_device_page(page))
+		return PageTransCompoundMap(page);
+
+	/*
+	 * DAX pages do not use compound pages.  The page should have already
+	 * been mapped into the host-side page table during try_async_pf(), so
+	 * we can check the page tables directly.
+	 */
+	hva = gfn_to_hva(kvm, gfn);
+	if (kvm_is_error_hva(hva))
+		return false;
+
+	/*
+	 * Our caller grabbed the KVM mmu_lock with a successful
+	 * mmu_notifier_retry, so we're safe to walk the page table.
+	 */
+	switch (dev_pagemap_mapping_shift(hva, current->mm)) {
+	case PMD_SHIFT:
+	case PUD_SHIFT:
+		return true;
+	}
+	return false;
+}
+
 static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
 					gfn_t *gfnp, kvm_pfn_t *pfnp,
 					int *levelp)
@@ -3223,7 +3252,7 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
 	 */
 	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
 	    level == PT_PAGE_TABLE_LEVEL &&
-	    PageTransCompoundMap(pfn_to_page(pfn)) &&
+	    pfn_is_huge_mapped(vcpu->kvm, gfn, pfn) &&
 	    !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) {
 		unsigned long mask;
 		/*
@@ -5774,7 +5803,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		 */
 		if (sp->role.direct &&
 			!kvm_is_reserved_pfn(pfn) &&
-			PageTransCompoundMap(pfn_to_page(pfn))) {
+			pfn_is_huge_mapped(kvm, sp->gfn, pfn)) {
 			pte_list_remove(rmap_head, sptep);
 
 			if (kvm_available_flush_tlb_with_range())
-- 
2.21.0.392.gf8f6787159e-goog


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [PATCH v3 3/3] kvm: remove redundant PageReserved() check
  2019-04-04 20:23 [PATCH v3 0/3] kvm: Use huge pages for DAX-backed files Barret Rhoden
  2019-04-04 20:23 ` [PATCH v3 1/3] mm: make dev_pagemap_mapping_shift() externally visible Barret Rhoden
  2019-04-04 20:23 ` [PATCH v3 2/3] kvm: Use huge pages for DAX-backed files Barret Rhoden
@ 2019-04-04 20:23 ` Barret Rhoden
  2019-04-04 20:37   ` David Hildenbrand
  2019-04-04 20:28 ` [PATCH v3 0/3] kvm: Use huge pages for DAX-backed files Barret Rhoden
  3 siblings, 1 reply; 6+ messages in thread
From: Barret Rhoden @ 2019-04-04 20:23 UTC (permalink / raw)
  To: Paolo Bonzini, Dan Williams, David Hildenbrand, Dave Jiang,
	Alexander Duyck
  Cc: linux-nvdimm, x86, kvm, linux-kernel, yu.c.zhang, yi.z.zhang

kvm_is_reserved_pfn() already checks PageReserved().  When it returns
false, the pfn is valid and its page is not reserved, so the second
PageReserved() check in kvm_set_pfn_dirty() can never fire.

Signed-off-by: Barret Rhoden <brho@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 virt/kvm/kvm_main.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 55fe8e20d8fd..c44985375e7f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1782,12 +1782,8 @@ EXPORT_SYMBOL_GPL(kvm_release_pfn_dirty);
 
 void kvm_set_pfn_dirty(kvm_pfn_t pfn)
 {
-	if (!kvm_is_reserved_pfn(pfn)) {
-		struct page *page = pfn_to_page(pfn);
-
-		if (!PageReserved(page))
-			SetPageDirty(page);
-	}
+	if (!kvm_is_reserved_pfn(pfn))
+		SetPageDirty(pfn_to_page(pfn));
 }
 EXPORT_SYMBOL_GPL(kvm_set_pfn_dirty);
 
-- 
2.21.0.392.gf8f6787159e-goog


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH v3 0/3] kvm: Use huge pages for DAX-backed files
  2019-04-04 20:23 [PATCH v3 0/3] kvm: Use huge pages for DAX-backed files Barret Rhoden
                   ` (2 preceding siblings ...)
  2019-04-04 20:23 ` [PATCH v3 3/3] kvm: remove redundant PageReserved() check Barret Rhoden
@ 2019-04-04 20:28 ` Barret Rhoden
  3 siblings, 0 replies; 6+ messages in thread
From: Barret Rhoden @ 2019-04-04 20:28 UTC (permalink / raw)
  To: Paolo Bonzini, Dan Williams, David Hildenbrand, Dave Jiang,
	Alexander Duyck
  Cc: linux-nvdimm, x86, kvm, linux-kernel, yu.c.zhang


-yi.z.zhang@intel.com (Bad email address / failure)

Sorry about that.



^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH v3 3/3] kvm: remove redundant PageReserved() check
  2019-04-04 20:23 ` [PATCH v3 3/3] kvm: remove redundant PageReserved() check Barret Rhoden
@ 2019-04-04 20:37   ` David Hildenbrand
  0 siblings, 0 replies; 6+ messages in thread
From: David Hildenbrand @ 2019-04-04 20:37 UTC (permalink / raw)
  To: Barret Rhoden, Paolo Bonzini, Dan Williams, Dave Jiang, Alexander Duyck
  Cc: linux-nvdimm, x86, kvm, linux-kernel, yu.c.zhang, yi.z.zhang

On 04.04.19 22:23, Barret Rhoden wrote:
> kvm_is_reserved_pfn() already checks PageReserved().

I guess this one can be picked up right away, right?

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2019-04-04 20:37 UTC | newest]

Thread overview: 6+ messages
2019-04-04 20:23 [PATCH v3 0/3] kvm: Use huge pages for DAX-backed files Barret Rhoden
2019-04-04 20:23 ` [PATCH v3 1/3] mm: make dev_pagemap_mapping_shift() externally visible Barret Rhoden
2019-04-04 20:23 ` [PATCH v3 2/3] kvm: Use huge pages for DAX-backed files Barret Rhoden
2019-04-04 20:23 ` [PATCH v3 3/3] kvm: remove redundant PageReserved() check Barret Rhoden
2019-04-04 20:37   ` David Hildenbrand
2019-04-04 20:28 ` [PATCH v3 0/3] kvm: Use huge pages for DAX-backed files Barret Rhoden
