* [PATCH v3 0/6] Support for contiguous pte hugepages
@ 2017-05-22 13:35 Punit Agrawal
  2017-05-22 13:35 ` [PATCH v3 1/6] mm, gup: Remove broken VM_BUG_ON_PAGE compound check for hugepages Punit Agrawal
                   ` (5 more replies)
  0 siblings, 6 replies; 20+ messages in thread
From: Punit Agrawal @ 2017-05-22 13:35 UTC (permalink / raw)
  To: akpm
  Cc: Punit Agrawal, linux-mm, linux-kernel, linux-arm-kernel,
	catalin.marinas, will.deacon, n-horiguchi, kirill.shutemov,
	mike.kravetz, steve.capper, mark.rutland, hillf.zj, linux-arch,
	aneesh.kumar

Hi,

This patchset updates the hugetlb code to fix issues arising from
contiguous pte hugepages (such as on arm64). These are the generic
code changes; the arm64 support based on these patches will be
posted separately. The patches are based on v4.12-rc2. Previous
related postings can be found at [0], [1] and [2].

The patches fall into two categories -

* Patches 1-2 address issues with gup

* Patches 3-6 relate to passing a size argument to hugepage helpers to
  disambiguate the size of the referred page. These changes are
  required to enable arch code to properly handle swap entries for
  contiguous pte hugepages.

  The changes to huge_pte_offset() (patch 3) touch multiple
  architectures but I've managed to minimise these changes for the
  other affected functions - huge_pte_clear() and set_huge_pte_at().
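
  For reference, the interface change at the heart of patches 3-6 is
  the extra size argument to huge_pte_offset(), as it appears in
  patch 3:

	-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr);
	+pte_t *huge_pte_offset(struct mm_struct *mm,
	+		       unsigned long addr, unsigned long sz);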

These patches gate the enabling of contiguous hugepage support on
arm64, which has been requested for systems using a !4k page granule.

Feedback welcome.

Thanks,
Punit

v2 -> v3
* Added gup fixes

v1 -> v2

* Switch huge_pte_offset() to use size instead of hstate for
  consistency with the rest of the API
* Expand the series to address huge_pte_clear() and set_huge_pte_at()

RFC -> v1

* Fixed a missing conversion of huge_pte_offset() prototype to add
  hstate parameter. Reported by 0-day.

[0] https://lkml.org/lkml/2017/3/23/293
[1] https://lkml.org/lkml/2017/3/30/770
[2] http://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1370686.html

Punit Agrawal (5):
  mm, gup: Ensure real head page is ref-counted when using hugepages
  mm/hugetlb: add size parameter to huge_pte_offset()
  mm/hugetlb: Allow architectures to override huge_pte_clear()
  mm/hugetlb: Introduce set_huge_swap_pte_at() helper
  mm: rmap: Use correct helper when poisoning hugepages

Will Deacon (1):
  mm, gup: Remove broken VM_BUG_ON_PAGE compound check for hugepages

 arch/arm64/mm/hugetlbpage.c     |  3 ++-
 arch/ia64/mm/hugetlbpage.c      |  4 ++--
 arch/metag/mm/hugetlbpage.c     |  3 ++-
 arch/mips/mm/hugetlbpage.c      |  3 ++-
 arch/parisc/mm/hugetlbpage.c    |  3 ++-
 arch/powerpc/mm/hugetlbpage.c   |  2 +-
 arch/s390/include/asm/hugetlb.h | 10 ++-------
 arch/s390/mm/hugetlbpage.c      | 12 ++++++++++-
 arch/sh/mm/hugetlbpage.c        |  3 ++-
 arch/sparc/mm/hugetlbpage.c     |  3 ++-
 arch/tile/mm/hugetlbpage.c      |  3 ++-
 arch/x86/mm/hugetlbpage.c       |  2 +-
 fs/userfaultfd.c                |  7 +++++--
 include/asm-generic/hugetlb.h   |  7 ++-----
 include/linux/hugetlb.h         |  7 +++++--
 mm/gup.c                        | 15 ++++++--------
 mm/hugetlb.c                    | 45 +++++++++++++++++++++++++++++------------
 mm/page_vma_mapped.c            |  3 ++-
 mm/pagewalk.c                   |  3 ++-
 mm/rmap.c                       |  7 +++++--
 20 files changed, 90 insertions(+), 55 deletions(-)

-- 
2.11.0


* [PATCH v3 1/6] mm, gup: Remove broken VM_BUG_ON_PAGE compound check for hugepages
  2017-05-22 13:35 [PATCH v3 0/6] Support for contiguous pte hugepages Punit Agrawal
@ 2017-05-22 13:35 ` Punit Agrawal
  2017-05-23 13:09   ` Kirill A. Shutemov
  2017-05-22 13:36 ` [PATCH v3 2/6] mm, gup: Ensure real head page is ref-counted when using hugepages Punit Agrawal
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Punit Agrawal @ 2017-05-22 13:35 UTC (permalink / raw)
  To: akpm
  Cc: Will Deacon, linux-mm, linux-kernel, linux-arm-kernel,
	catalin.marinas, n-horiguchi, kirill.shutemov, mike.kravetz,
	steve.capper, mark.rutland, hillf.zj, linux-arch, aneesh.kumar,
	Punit Agrawal

From: Will Deacon <will.deacon@arm.com>

When operating on hugepages with DEBUG_VM enabled, the GUP code checks the
compound head for each tail page prior to calling page_cache_add_speculative.
This is broken, because on the fast-GUP path (where we don't hold any page
table locks) we can be racing with a concurrent invocation of
split_huge_page_to_list.

split_huge_page_to_list deals with this race by using page_ref_freeze to
freeze the page and force concurrent GUPs to fail whilst the component
pages are modified. This modification includes clearing the compound_head
field for the tail pages, so checking this prior to a successful call
to page_cache_add_speculative can lead to false positives. In fact,
page_cache_add_speculative *already* has this check once the page refcount
has been successfully updated, so we can simply remove the broken calls
to VM_BUG_ON_PAGE.
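
To make the ordering concrete, the relevant logic looks roughly like
this (a simplified paraphrase of the v4.12-era code, not a verbatim
quote):

	/* split_huge_page_to_list(), simplified: freeze the refcount
	 * so concurrent speculative GUPs fail while the compound page
	 * is being taken apart. */
	if (!page_ref_freeze(head, 1 + extra_pins))
		return -EBUSY;
	/* ...now safe to rewrite compound_head of the tail pages... */

	/* page_cache_add_speculative(), simplified: the compound check
	 * only runs *after* the refcount has been raised, i.e. after
	 * any racing split has been excluded. */
	if (!page_ref_add_unless(page, count, 0))
		return 0;
	VM_BUG_ON_PAGE(PageCompound(page) && page != compound_head(page), page);
	return 1;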

Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
---
 mm/gup.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index d9e6fddcc51f..ccf8cb38234f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1361,7 +1361,6 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 	head = pmd_page(orig);
 	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
 		pages[*nr] = page;
 		(*nr)++;
 		page++;
@@ -1400,7 +1399,6 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 	head = pud_page(orig);
 	page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
 	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
 		pages[*nr] = page;
 		(*nr)++;
 		page++;
@@ -1438,7 +1436,6 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 	head = pgd_page(orig);
 	page = head + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT);
 	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
 		pages[*nr] = page;
 		(*nr)++;
 		page++;
-- 
2.11.0


* [PATCH v3 2/6] mm, gup: Ensure real head page is ref-counted when using hugepages
  2017-05-22 13:35 [PATCH v3 0/6] Support for contiguous pte hugepages Punit Agrawal
  2017-05-22 13:35 ` [PATCH v3 1/6] mm, gup: Remove broken VM_BUG_ON_PAGE compound check for hugepages Punit Agrawal
@ 2017-05-22 13:36 ` Punit Agrawal
  2017-05-23 13:13   ` Kirill A. Shutemov
  2017-05-22 13:36 ` [PATCH v3 3/6] mm/hugetlb: add size parameter to huge_pte_offset() Punit Agrawal
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Punit Agrawal @ 2017-05-22 13:36 UTC (permalink / raw)
  To: akpm
  Cc: Punit Agrawal, linux-mm, linux-kernel, linux-arm-kernel,
	catalin.marinas, will.deacon, n-horiguchi, kirill.shutemov,
	mike.kravetz, steve.capper, mark.rutland, hillf.zj, linux-arch,
	aneesh.kumar, Michal Hocko

When speculatively taking references to a hugepage using
page_cache_add_speculative() in gup_huge_pmd(), it is assumed that the
page returned by pmd_page() is the head page. Although normally true,
this assumption doesn't hold when the hugepage comprises successive
page table entries, such as when using the contiguous bit on arm64 at
the PTE or PMD level.

This can be addressed by ensuring that the page passed to
page_cache_add_speculative() is the real head or by de-referencing the
head page within the function.

We take the first approach to keep the usage pattern aligned with
page_cache_get_speculative() where users already pass the appropriate
page, i.e., the de-referenced head.

Apply the same logic to fix gup_huge_[pud|pgd]() as well.
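
To illustrate why pmd_page() is not necessarily the head here (a
sketch of the geometry, not code from the patch): for a hugepage built
from several contiguous pmd entries, each entry carries its own pfn,
so only the first entry's pmd_page() is the compound head.

	/* Entry k of an N-entry contiguous hugepage points at
	 * pfn(head) + k * (PMD_SIZE / PAGE_SIZE), so for k > 0
	 * pmd_page() returns a tail page of the compound page. */
	struct page *sub  = pmd_page(orig);       /* may be a tail page */
	struct page *head = compound_head(sub);   /* always the real head */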

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Acked-by: Steve Capper <steve.capper@arm.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 mm/gup.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index ccf8cb38234f..be67996513be 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1358,8 +1358,7 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 		return __gup_device_huge_pmd(orig, addr, end, pages, nr);
 
 	refs = 0;
-	head = pmd_page(orig);
-	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+	page = pmd_page(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	do {
 		pages[*nr] = page;
 		(*nr)++;
@@ -1367,6 +1366,7 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 		refs++;
 	} while (addr += PAGE_SIZE, addr != end);
 
+	head = compound_head(page);
 	if (!page_cache_add_speculative(head, refs)) {
 		*nr -= refs;
 		return 0;
@@ -1396,8 +1396,7 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 		return __gup_device_huge_pud(orig, addr, end, pages, nr);
 
 	refs = 0;
-	head = pud_page(orig);
-	page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
+	page = pud_page(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
 	do {
 		pages[*nr] = page;
 		(*nr)++;
@@ -1405,6 +1404,7 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 		refs++;
 	} while (addr += PAGE_SIZE, addr != end);
 
+	head = compound_head(page);
 	if (!page_cache_add_speculative(head, refs)) {
 		*nr -= refs;
 		return 0;
@@ -1433,8 +1433,7 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 
 	BUILD_BUG_ON(pgd_devmap(orig));
 	refs = 0;
-	head = pgd_page(orig);
-	page = head + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT);
+	page = pgd_page(orig) + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT);
 	do {
 		pages[*nr] = page;
 		(*nr)++;
@@ -1442,6 +1441,7 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 		refs++;
 	} while (addr += PAGE_SIZE, addr != end);
 
+	head = compound_head(page);
 	if (!page_cache_add_speculative(head, refs)) {
 		*nr -= refs;
 		return 0;
-- 
2.11.0


* [PATCH v3 3/6] mm/hugetlb: add size parameter to huge_pte_offset()
  2017-05-22 13:35 [PATCH v3 0/6] Support for contiguous pte hugepages Punit Agrawal
  2017-05-22 13:35 ` [PATCH v3 1/6] mm, gup: Remove broken VM_BUG_ON_PAGE compound check for hugepages Punit Agrawal
  2017-05-22 13:36 ` [PATCH v3 2/6] mm, gup: Ensure real head page is ref-counted when using hugepages Punit Agrawal
@ 2017-05-22 13:36 ` Punit Agrawal
  2017-05-23 10:04   ` kbuild test robot
  2017-05-22 13:36 ` [PATCH v3 4/6] mm/hugetlb: Allow architectures to override huge_pte_clear() Punit Agrawal
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Punit Agrawal @ 2017-05-22 13:36 UTC (permalink / raw)
  To: akpm
  Cc: Punit Agrawal, linux-mm, linux-kernel, linux-arm-kernel,
	catalin.marinas, will.deacon, n-horiguchi, kirill.shutemov,
	mike.kravetz, steve.capper, mark.rutland, hillf.zj, linux-arch,
	aneesh.kumar, Tony Luck, Fenghua Yu, James Hogan, Ralf Baechle,
	James E.J. Bottomley, Helge Deller, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman, Martin Schwidefsky,
	Heiko Carstens, Yoshinori Sato, Rich Felker, David S. Miller,
	Chris Metcalf, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Alexander Viro, Michal Hocko

A poisoned or migrated hugepage is stored as a swap entry in the page
tables. On architectures that support hugepages consisting of contiguous
page table entries (such as on arm64), this leads to ambiguity in
determining the page table entry to return in huge_pte_offset() when a
poisoned entry is encountered.

Let's remove the ambiguity by adding a size parameter to convey
additional information about the requested address. Also fix up the
definition/usage of huge_pte_offset() throughout the tree.
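
As an illustration of how an architecture might use the new parameter
(a hypothetical sketch, not the actual arm64 implementation, which is
posted separately):

	pte_t *huge_pte_offset(struct mm_struct *mm,
			       unsigned long addr, unsigned long sz)
	{
		pgd_t *pgd = pgd_offset(mm, addr);
		p4d_t *p4d;
		pud_t *pud;
		pmd_t *pmd;

		if (!pgd_present(*pgd))
			return NULL;
		p4d = p4d_offset(pgd, addr);
		if (!p4d_present(*p4d))
			return NULL;
		pud = pud_offset(p4d, addr);
		/*
		 * A swap (poisoned/migrated) entry carries no size
		 * information, so rely on the caller-supplied sz to
		 * pick the level instead of inspecting the entry.
		 */
		if (sz == PUD_SIZE)
			return (pte_t *)pud;
		if (!pud_present(*pud))
			return NULL;
		pmd = pmd_offset(pud, addr);
		if (sz == PMD_SIZE)
			return (pte_t *)pmd;
		if (!pmd_present(*pmd))
			return NULL;
		return pte_offset_kernel(pmd, addr);
	}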

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Acked-by: Steve Capper <steve.capper@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: James Hogan <james.hogan@imgtec.com> (odd fixer:METAG ARCHITECTURE)
Cc: Ralf Baechle <ralf@linux-mips.org> (supporter:MIPS)
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Rich Felker <dalias@libc.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
---
 arch/arm64/mm/hugetlbpage.c   |  3 ++-
 arch/ia64/mm/hugetlbpage.c    |  4 ++--
 arch/metag/mm/hugetlbpage.c   |  3 ++-
 arch/mips/mm/hugetlbpage.c    |  3 ++-
 arch/parisc/mm/hugetlbpage.c  |  3 ++-
 arch/powerpc/mm/hugetlbpage.c |  2 +-
 arch/s390/mm/hugetlbpage.c    |  3 ++-
 arch/sh/mm/hugetlbpage.c      |  3 ++-
 arch/sparc/mm/hugetlbpage.c   |  3 ++-
 arch/tile/mm/hugetlbpage.c    |  3 ++-
 arch/x86/mm/hugetlbpage.c     |  2 +-
 fs/userfaultfd.c              |  7 +++++--
 include/linux/hugetlb.h       |  5 +++--
 mm/hugetlb.c                  | 23 ++++++++++++++---------
 mm/page_vma_mapped.c          |  3 ++-
 mm/pagewalk.c                 |  3 ++-
 16 files changed, 46 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 69b8200b1cfd..425a8fcd3d38 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -132,7 +132,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 	return pte;
 }
 
-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+pte_t *huge_pte_offset(struct mm_struct *mm,
+		       unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
 	pud_t *pud;
diff --git a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
index 85de86d36fdf..ae35140332f7 100644
--- a/arch/ia64/mm/hugetlbpage.c
+++ b/arch/ia64/mm/hugetlbpage.c
@@ -44,7 +44,7 @@ huge_pte_alloc(struct mm_struct *mm, unsigned long addr, unsigned long sz)
 }
 
 pte_t *
-huge_pte_offset (struct mm_struct *mm, unsigned long addr)
+huge_pte_offset (struct mm_struct *mm, unsigned long addr, unsigned long sz)
 {
 	unsigned long taddr = htlbpage_to_page(addr);
 	pgd_t *pgd;
@@ -92,7 +92,7 @@ struct page *follow_huge_addr(struct mm_struct *mm, unsigned long addr, int writ
 	if (REGION_NUMBER(addr) != RGN_HPAGE)
 		return ERR_PTR(-EINVAL);
 
-	ptep = huge_pte_offset(mm, addr);
+	ptep = huge_pte_offset(mm, addr, HPAGE_SIZE);
 	if (!ptep || pte_none(*ptep))
 		return NULL;
 	page = pte_page(*ptep);
diff --git a/arch/metag/mm/hugetlbpage.c b/arch/metag/mm/hugetlbpage.c
index db1b7da91e4f..67fd53e2935a 100644
--- a/arch/metag/mm/hugetlbpage.c
+++ b/arch/metag/mm/hugetlbpage.c
@@ -74,7 +74,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 	return pte;
 }
 
-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+pte_t *huge_pte_offset(struct mm_struct *mm,
+		       unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
 	pud_t *pud;
diff --git a/arch/mips/mm/hugetlbpage.c b/arch/mips/mm/hugetlbpage.c
index 74aa6f62468f..cef152234312 100644
--- a/arch/mips/mm/hugetlbpage.c
+++ b/arch/mips/mm/hugetlbpage.c
@@ -36,7 +36,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, unsigned long addr,
 	return pte;
 }
 
-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
+		       unsigned long sz)
 {
 	pgd_t *pgd;
 	pud_t *pud;
diff --git a/arch/parisc/mm/hugetlbpage.c b/arch/parisc/mm/hugetlbpage.c
index aa50ac090e9b..5eb8f633b282 100644
--- a/arch/parisc/mm/hugetlbpage.c
+++ b/arch/parisc/mm/hugetlbpage.c
@@ -69,7 +69,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 	return pte;
 }
 
-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+pte_t *huge_pte_offset(struct mm_struct *mm,
+		       unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
 	pud_t *pud;
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index a4f33de4008e..e46744d3b4ae 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -55,7 +55,7 @@ static unsigned nr_gpages;
 
 #define hugepd_none(hpd)	(hpd_val(hpd) == 0)
 
-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr, unsigned long sz)
 {
 	/* Only called for hugetlbfs pages, hence can ignore THP */
 	return __find_linux_pte_or_hugepte(mm->pgd, addr, NULL, NULL);
diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index 9b4050caa4e9..ae23afc18493 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -176,7 +176,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 	return (pte_t *) pmdp;
 }
 
-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+pte_t *huge_pte_offset(struct mm_struct *mm,
+		       unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgdp;
 	pud_t *pudp;
diff --git a/arch/sh/mm/hugetlbpage.c b/arch/sh/mm/hugetlbpage.c
index cc948db74878..d2412d2d6462 100644
--- a/arch/sh/mm/hugetlbpage.c
+++ b/arch/sh/mm/hugetlbpage.c
@@ -42,7 +42,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 	return pte;
 }
 
-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+pte_t *huge_pte_offset(struct mm_struct *mm,
+		       unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
 	pud_t *pud;
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 7c29d38e6b99..8989c5e155b3 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -277,7 +277,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 	return pte;
 }
 
-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+pte_t *huge_pte_offset(struct mm_struct *mm,
+		       unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
 	pud_t *pud;
diff --git a/arch/tile/mm/hugetlbpage.c b/arch/tile/mm/hugetlbpage.c
index cb10153b5c9f..1f0993945521 100644
--- a/arch/tile/mm/hugetlbpage.c
+++ b/arch/tile/mm/hugetlbpage.c
@@ -102,7 +102,8 @@ static pte_t *get_pte(pte_t *base, int index, int level)
 	return ptep;
 }
 
-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+pte_t *huge_pte_offset(struct mm_struct *mm,
+		       unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
 	pud_t *pud;
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 302f43fd9c28..ccf509063dfd 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -33,7 +33,7 @@ follow_huge_addr(struct mm_struct *mm, unsigned long address, int write)
 	if (!vma || !is_vm_hugetlb_page(vma))
 		return ERR_PTR(-EINVAL);
 
-	pte = huge_pte_offset(mm, address);
+	pte = huge_pte_offset(mm, address, vma_mmu_pagesize(vma));
 
 	/* hugetlb should be locked, and hence, prefaulted */
 	WARN_ON(!pte || pte_none(*pte));
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index f7555fc25877..7b9c94837895 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -214,6 +214,7 @@ static inline struct uffd_msg userfault_msg(unsigned long address,
  * hugepmd ranges.
  */
 static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
+					 struct vm_area_struct *vma,
 					 unsigned long address,
 					 unsigned long flags,
 					 unsigned long reason)
@@ -224,7 +225,7 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
 
 	VM_BUG_ON(!rwsem_is_locked(&mm->mmap_sem));
 
-	pte = huge_pte_offset(mm, address);
+	pte = huge_pte_offset(mm, address, vma_mmu_pagesize(vma));
 	if (!pte)
 		goto out;
 
@@ -243,6 +244,7 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
 }
 #else
 static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
+					 struct vm_area_struct *vma,
 					 unsigned long address,
 					 unsigned long flags,
 					 unsigned long reason)
@@ -435,7 +437,8 @@ int handle_userfault(struct vm_fault *vmf, unsigned long reason)
 		must_wait = userfaultfd_must_wait(ctx, vmf->address, vmf->flags,
 						  reason);
 	else
-		must_wait = userfaultfd_huge_must_wait(ctx, vmf->address,
+		must_wait = userfaultfd_huge_must_wait(ctx, vmf->vma,
+						       vmf->address,
 						       vmf->flags, reason);
 	up_read(&mm->mmap_sem);
 
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index b857fc8cc2ec..23010a3b2047 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -113,7 +113,8 @@ extern struct list_head huge_boot_pages;
 
 pte_t *huge_pte_alloc(struct mm_struct *mm,
 			unsigned long addr, unsigned long sz);
-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr);
+pte_t *huge_pte_offset(struct mm_struct *mm,
+		       unsigned long addr, unsigned long sz);
 int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep);
 struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
 			      int write);
@@ -157,7 +158,7 @@ static inline void hugetlb_show_meminfo(void)
 #define hugetlb_fault(mm, vma, addr, flags)	({ BUG(); 0; })
 #define hugetlb_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma, dst_addr, \
 				src_addr, pagep)	({ BUG(); 0; })
-#define huge_pte_offset(mm, address)	0
+#define huge_pte_offset(mm, address, sz)	0
 static inline int dequeue_hwpoisoned_huge_page(struct page *page)
 {
 	return 0;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e5828875f7bb..0e4d1fb3122f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3233,7 +3233,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 
 	for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
 		spinlock_t *src_ptl, *dst_ptl;
-		src_pte = huge_pte_offset(src, addr);
+		src_pte = huge_pte_offset(src, addr, sz);
 		if (!src_pte)
 			continue;
 		dst_pte = huge_pte_alloc(dst, addr, sz);
@@ -3317,7 +3317,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
 	address = start;
 	for (; address < end; address += sz) {
-		ptep = huge_pte_offset(mm, address);
+		ptep = huge_pte_offset(mm, address, sz);
 		if (!ptep)
 			continue;
 
@@ -3535,7 +3535,8 @@ static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 			unmap_ref_private(mm, vma, old_page, address);
 			BUG_ON(huge_pte_none(pte));
 			spin_lock(ptl);
-			ptep = huge_pte_offset(mm, address & huge_page_mask(h));
+			ptep = huge_pte_offset(mm, address & huge_page_mask(h),
+					       huge_page_size(h));
 			if (likely(ptep &&
 				   pte_same(huge_ptep_get(ptep), pte)))
 				goto retry_avoidcopy;
@@ -3574,7 +3575,8 @@ static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * before the page tables are altered
 	 */
 	spin_lock(ptl);
-	ptep = huge_pte_offset(mm, address & huge_page_mask(h));
+	ptep = huge_pte_offset(mm, address & huge_page_mask(h),
+			       huge_page_size(h));
 	if (likely(ptep && pte_same(huge_ptep_get(ptep), pte))) {
 		ClearPagePrivate(new_page);
 
@@ -3861,7 +3863,7 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	address &= huge_page_mask(h);
 
-	ptep = huge_pte_offset(mm, address);
+	ptep = huge_pte_offset(mm, address, huge_page_size(h));
 	if (ptep) {
 		entry = huge_ptep_get(ptep);
 		if (unlikely(is_hugetlb_entry_migration(entry))) {
@@ -4118,7 +4120,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		 *
 		 * Note that page table lock is not held when pte is null.
 		 */
-		pte = huge_pte_offset(mm, vaddr & huge_page_mask(h));
+		pte = huge_pte_offset(mm, vaddr & huge_page_mask(h),
+				      huge_page_size(h));
 		if (pte)
 			ptl = huge_pte_lock(h, mm, pte);
 		absent = !pte || huge_pte_none(huge_ptep_get(pte));
@@ -4252,7 +4255,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 	i_mmap_lock_write(vma->vm_file->f_mapping);
 	for (; address < end; address += huge_page_size(h)) {
 		spinlock_t *ptl;
-		ptep = huge_pte_offset(mm, address);
+		ptep = huge_pte_offset(mm, address, huge_page_size(h));
 		if (!ptep)
 			continue;
 		ptl = huge_pte_lock(h, mm, ptep);
@@ -4516,7 +4519,8 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 
 		saddr = page_table_shareable(svma, vma, addr, idx);
 		if (saddr) {
-			spte = huge_pte_offset(svma->vm_mm, saddr);
+			spte = huge_pte_offset(svma->vm_mm, saddr,
+					       vma_mmu_pagesize(svma));
 			if (spte) {
 				get_page(virt_to_page(spte));
 				break;
@@ -4612,7 +4616,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 	return pte;
 }
 
-pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+pte_t *huge_pte_offset(struct mm_struct *mm,
+		       unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index de9c40d7304a..8ec6ba230bb9 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -116,7 +116,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 
 	if (unlikely(PageHuge(pvmw->page))) {
 		/* when pud is not present, pte will be NULL */
-		pvmw->pte = huge_pte_offset(mm, pvmw->address);
+		pvmw->pte = huge_pte_offset(mm, pvmw->address,
+					    PAGE_SIZE << compound_order(page));
 		if (!pvmw->pte)
 			return false;
 
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 60f7856e508f..1a4197965415 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -180,12 +180,13 @@ static int walk_hugetlb_range(unsigned long addr, unsigned long end,
 	struct hstate *h = hstate_vma(vma);
 	unsigned long next;
 	unsigned long hmask = huge_page_mask(h);
+	unsigned long sz = huge_page_size(h);
 	pte_t *pte;
 	int err = 0;
 
 	do {
 		next = hugetlb_entry_end(h, addr, end);
-		pte = huge_pte_offset(walk->mm, addr & hmask);
+		pte = huge_pte_offset(walk->mm, addr & hmask, sz);
 		if (pte && walk->hugetlb_entry)
 			err = walk->hugetlb_entry(pte, hmask, addr, next, walk);
 		if (err)
-- 
2.11.0


* [PATCH v3 4/6] mm/hugetlb: Allow architectures to override huge_pte_clear()
  2017-05-22 13:35 [PATCH v3 0/6] Support for contiguous pte hugepages Punit Agrawal
                   ` (2 preceding siblings ...)
  2017-05-22 13:36 ` [PATCH v3 3/6] mm/hugetlb: add size parameter to huge_pte_offset() Punit Agrawal
@ 2017-05-22 13:36 ` Punit Agrawal
  2017-05-22 13:59   ` Arnd Bergmann
  2017-05-22 16:25   ` [PATCH v3.1 " Punit Agrawal
  2017-05-22 13:36 ` [PATCH v3 5/6] mm/hugetlb: Introduce set_huge_swap_pte_at() helper Punit Agrawal
  2017-05-22 13:36 ` [PATCH v3 6/6] mm: rmap: Use correct helper when poisoning hugepages Punit Agrawal
  5 siblings, 2 replies; 20+ messages in thread
From: Punit Agrawal @ 2017-05-22 13:36 UTC (permalink / raw)
  To: akpm
  Cc: Punit Agrawal, linux-mm, linux-kernel, linux-arm-kernel,
	catalin.marinas, will.deacon, n-horiguchi, kirill.shutemov,
	mike.kravetz, steve.capper, mark.rutland, hillf.zj, linux-arch,
	aneesh.kumar, Martin Schwidefsky, Heiko Carstens, Arnd Bergmann

When unmapping a hugepage range, huge_pte_clear() is used to clear the
page table entries that are marked as not present. huge_pte_clear()
internally just ends up calling pte_clear() which does not correctly
deal with hugepages consisting of contiguous page table entries.

Add a size argument to address this issue and allow architectures to
override huge_pte_clear() in subsequent patches by making it a weak
function.

Update the s390 implementation to use the new mechanism to override
huge_pte_clear().

Note that the change only affects huge_pte_clear() - the other generic
hugetlb functions don't need any change.
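
For illustration, an architecture with contiguous-pte hugepages could
override it along these lines (a hypothetical sketch assuming a
PTE-level contiguous mapping; not part of this series):

	void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
			    pte_t *ptep, unsigned long sz)
	{
		/* number of contiguous ptes backing this hugepage */
		int i, ncontig = sz / PAGE_SIZE;

		for (i = 0; i < ncontig; i++, ptep++, addr += PAGE_SIZE)
			pte_clear(mm, addr, ptep);
	}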

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
---
 arch/s390/include/asm/hugetlb.h | 10 ++--------
 arch/s390/mm/hugetlbpage.c      |  9 +++++++++
 include/asm-generic/hugetlb.h   |  7 ++-----
 mm/hugetlb.c                    |  8 +++++++-
 4 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
index cd546a245c68..aa8489c07f24 100644
--- a/arch/s390/include/asm/hugetlb.h
+++ b/arch/s390/include/asm/hugetlb.h
@@ -38,14 +38,8 @@ static inline int prepare_hugepage_range(struct file *file,
 
 #define arch_clear_hugepage_flags(page)		do { } while (0)
 
-static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
-				  pte_t *ptep)
-{
-	if ((pte_val(*ptep) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3)
-		pte_val(*ptep) = _REGION3_ENTRY_EMPTY;
-	else
-		pte_val(*ptep) = _SEGMENT_ENTRY_EMPTY;
-}
+void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
+		    pte_t *ptep, unsigned long sz);
 
 static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
 					 unsigned long address, pte_t *ptep)
diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index ae23afc18493..48e19b324017 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -144,6 +144,15 @@ pte_t huge_ptep_get(pte_t *ptep)
 	return __rste_to_pte(pte_val(*ptep));
 }
 
+void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
+		    pte_t *ptep, unsigned long sz)
+{
+	if ((pte_val(*ptep) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3)
+		pte_val(*ptep) = _REGION3_ENTRY_EMPTY;
+	else
+		pte_val(*ptep) = _SEGMENT_ENTRY_EMPTY;
+}
+
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 			      unsigned long addr, pte_t *ptep)
 {
diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
index 99b490b4d05a..3138e126f43b 100644
--- a/include/asm-generic/hugetlb.h
+++ b/include/asm-generic/hugetlb.h
@@ -31,10 +31,7 @@ static inline pte_t huge_pte_modify(pte_t pte, pgprot_t newprot)
 	return pte_modify(pte, newprot);
 }
 
-static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
-				  pte_t *ptep)
-{
-	pte_clear(mm, addr, ptep);
-}
+void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
+		    pte_t *ptep, unsigned long sz);
 
 #endif /* _ASM_GENERIC_HUGETLB_H */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0e4d1fb3122f..2b0f6f96f2c1 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3289,6 +3289,12 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 	return ret;
 }
 
+void __weak huge_pte_clear(struct mm_struct *mm, unsigned long addr,
+			   pte_t *ptep, unsigned long sz)
+{
+	pte_clear(mm, addr, ptep);
+}
+
 void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			    unsigned long start, unsigned long end,
 			    struct page *ref_page)
@@ -3338,7 +3344,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 * unmapped and its refcount is dropped, so just clear pte here.
 		 */
 		if (unlikely(!pte_present(pte))) {
-			huge_pte_clear(mm, address, ptep);
+			huge_pte_clear(mm, address, ptep, sz);
 			spin_unlock(ptl);
 			continue;
 		}
-- 
2.11.0


* [PATCH v3 5/6] mm/hugetlb: Introduce set_huge_swap_pte_at() helper
  2017-05-22 13:35 [PATCH v3 0/6] Support for contiguous pte hugepages Punit Agrawal
                   ` (3 preceding siblings ...)
  2017-05-22 13:36 ` [PATCH v3 4/6] mm/hugetlb: Allow architectures to override huge_pte_clear() Punit Agrawal
@ 2017-05-22 13:36 ` Punit Agrawal
  2017-05-22 16:30   ` [PATCH v3.1 " Punit Agrawal
  2017-05-22 13:36 ` [PATCH v3 6/6] mm: rmap: Use correct helper when poisoning hugepages Punit Agrawal
  5 siblings, 1 reply; 20+ messages in thread
From: Punit Agrawal @ 2017-05-22 13:36 UTC (permalink / raw)
  To: akpm
  Cc: Punit Agrawal, linux-mm, linux-kernel, linux-arm-kernel,
	catalin.marinas, will.deacon, n-horiguchi, kirill.shutemov,
	mike.kravetz, steve.capper, mark.rutland, hillf.zj, linux-arch,
	aneesh.kumar

set_huge_pte_at(), an architecture callback to populate hugepage ptes,
does not provide the range of virtual memory that is targeted. This
leads to ambiguity when dealing with swap entries on architectures that
support hugepages consisting of contiguous ptes.

Fix the problem by introducing an overridable helper that is called when
populating the page tables with swap entries. The size of the targeted
region is provided to the helper to help determine the number of entries
to be updated.

Provide a default implementation that maintains the current behaviour.
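
For illustration, an override for contiguous-pte hugepages could look
roughly like this (a hypothetical sketch assuming a PTE-level
contiguous mapping; not part of this series):

	void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr,
				  pte_t *ptep, pte_t pte, unsigned long sz)
	{
		/* replicate the swap entry into each contiguous pte */
		int i, ncontig = sz / PAGE_SIZE;

		for (i = 0; i < ncontig; i++, ptep++, addr += PAGE_SIZE)
			set_pte_at(mm, addr, ptep, pte);
	}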

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Acked-by: Steve Capper <steve.capper@arm.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
---
 include/linux/hugetlb.h |  2 ++
 mm/hugetlb.c            | 14 +++++++++++---
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 23010a3b2047..fa65ad73a65f 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -127,6 +127,8 @@ int pud_huge(pud_t pud);
 unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end, pgprot_t newprot);
 
+void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr,
+			  pte_t *ptep, pte_t pte, unsigned long sz);
 #else /* !CONFIG_HUGETLB_PAGE */
 
 static inline void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2b0f6f96f2c1..a27e926913f4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3211,6 +3211,12 @@ static int is_hugetlb_entry_hwpoisoned(pte_t pte)
 		return 0;
 }
 
+void __weak set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr,
+				 pte_t *ptep, pte_t pte, unsigned long sz)
+{
+	set_huge_pte_at(mm, addr, ptep, pte);
+}
+
 int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			    struct vm_area_struct *vma)
 {
@@ -3263,9 +3269,10 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				 */
 				make_migration_entry_read(&swp_entry);
 				entry = swp_entry_to_pte(swp_entry);
-				set_huge_pte_at(src, addr, src_pte, entry);
+				set_huge_swap_pte_at(src, addr, src_pte,
+						     entry, sz);
 			}
-			set_huge_pte_at(dst, addr, dst_pte, entry);
+			set_huge_swap_pte_at(dst, addr, dst_pte, entry, sz);
 		} else {
 			if (cow) {
 				huge_ptep_set_wrprotect(src, addr, src_pte);
@@ -4283,7 +4290,8 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 
 				make_migration_entry_read(&entry);
 				newpte = swp_entry_to_pte(entry);
-				set_huge_pte_at(mm, address, ptep, newpte);
+				set_huge_swap_pte_at(mm, address, ptep,
+						     newpte, huge_page_size(h));
 				pages++;
 			}
 			spin_unlock(ptl);
-- 
2.11.0


* [PATCH v3 6/6] mm: rmap: Use correct helper when poisoning hugepages
  2017-05-22 13:35 [PATCH v3 0/6] Support for contiguous pte hugepages Punit Agrawal
                   ` (4 preceding siblings ...)
  2017-05-22 13:36 ` [PATCH v3 5/6] mm/hugetlb: Introduce set_huge_swap_pte_at() helper Punit Agrawal
@ 2017-05-22 13:36 ` Punit Agrawal
  5 siblings, 0 replies; 20+ messages in thread
From: Punit Agrawal @ 2017-05-22 13:36 UTC (permalink / raw)
  To: akpm
  Cc: Punit Agrawal, linux-mm, linux-kernel, linux-arm-kernel,
	catalin.marinas, will.deacon, n-horiguchi, kirill.shutemov,
	mike.kravetz, steve.capper, mark.rutland, hillf.zj, linux-arch,
	aneesh.kumar

Using set_pte_at() does not do the right thing when putting down
HWPOISON swap entries for hugepages on architectures that support
contiguous ptes.

Fix this by using set_huge_swap_pte_at(), which was introduced to
handle exactly this case.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Acked-by: Steve Capper <steve.capper@arm.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
---
 mm/rmap.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index d405f0e0ee96..feb2352aa95f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1379,15 +1379,18 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		update_hiwater_rss(mm);
 
 		if (PageHWPoison(page) && !(flags & TTU_IGNORE_HWPOISON)) {
+			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
 			if (PageHuge(page)) {
 				int nr = 1 << compound_order(page);
 				hugetlb_count_sub(nr, mm);
+				set_huge_swap_pte_at(mm, address,
+						     pvmw.pte, pteval,
+						     vma_mmu_pagesize(vma));
 			} else {
 				dec_mm_counter(mm, mm_counter(page));
+				set_pte_at(mm, address, pvmw.pte, pteval);
 			}
 
-			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
-			set_pte_at(mm, address, pvmw.pte, pteval);
 		} else if (pte_unused(pteval)) {
 			/*
 			 * The guest indicated that the page content is of no
-- 
2.11.0


* Re: [PATCH v3 4/6] mm/hugetlb: Allow architectures to override huge_pte_clear()
  2017-05-22 13:36 ` [PATCH v3 4/6] mm/hugetlb: Allow architectures to override huge_pte_clear() Punit Agrawal
@ 2017-05-22 13:59   ` Arnd Bergmann
  2017-05-22 15:40     ` Punit Agrawal
  2017-05-22 16:25   ` [PATCH v3.1 " Punit Agrawal
  1 sibling, 1 reply; 20+ messages in thread
From: Arnd Bergmann @ 2017-05-22 13:59 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: Andrew Morton, Linux-MM, Linux Kernel Mailing List, Linux ARM,
	Catalin Marinas, Will Deacon, n-horiguchi, Kirill A . Shutemov,
	mike.kravetz, steve.capper, Mark Rutland, Hillf Danton,
	linux-arch, Aneesh Kumar K.V, Martin Schwidefsky, Heiko Carstens

On Mon, May 22, 2017 at 3:36 PM, Punit Agrawal <punit.agrawal@arm.com> wrote:
> diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
> index 99b490b4d05a..3138e126f43b 100644
> --- a/include/asm-generic/hugetlb.h
> +++ b/include/asm-generic/hugetlb.h
> @@ -31,10 +31,7 @@ static inline pte_t huge_pte_modify(pte_t pte, pgprot_t newprot)
>         return pte_modify(pte, newprot);
>  }
>
> -static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
> -                                 pte_t *ptep)
> -{
> -       pte_clear(mm, addr, ptep);
> -}
> +void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
> +                   pte_t *ptep, unsigned long sz);
>
>  #endif /* _ASM_GENERIC_HUGETLB_H */
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 0e4d1fb3122f..2b0f6f96f2c1 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3289,6 +3289,12 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
>         return ret;
>  }
>
> +void __weak huge_pte_clear(struct mm_struct *mm, unsigned long addr,
> +                          pte_t *ptep, unsigned long sz)
> +{
> +       pte_clear(mm, addr, ptep);
> +}
> +
>  void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>                             unsigned long start, unsigned long end,
>                             struct page *ref_page)

I don't really like how this moves the inline version from asm-generic
into a __weak function here. I think it would be better to either stop
using asm-generic/hugetlb.h on s390, or enclose the generic definition
in

#ifndef huge_pte_clear

and then override by defining a macro in s390 as we do in other files
in asm-generic.
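
That is, something like the following (the arch side here is a
hypothetical sketch for illustration):

	/* arch header, e.g. arch/xxx/include/asm/hugetlb.h */
	#define huge_pte_clear huge_pte_clear
	static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
					  pte_t *ptep, unsigned long sz)
	{
		/* arch-specific clearing */
	}

	/* include/asm-generic/hugetlb.h */
	#ifndef huge_pte_clear
	static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
					  pte_t *ptep, unsigned long sz)
	{
		pte_clear(mm, addr, ptep);
	}
	#endif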

       Arnd


* Re: [PATCH v3 4/6] mm/hugetlb: Allow architectures to override huge_pte_clear()
  2017-05-22 13:59   ` Arnd Bergmann
@ 2017-05-22 15:40     ` Punit Agrawal
  0 siblings, 0 replies; 20+ messages in thread
From: Punit Agrawal @ 2017-05-22 15:40 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Andrew Morton, Linux-MM, Linux Kernel Mailing List, Linux ARM,
	Catalin Marinas, Will Deacon, n-horiguchi, Kirill A . Shutemov,
	mike.kravetz, steve.capper, Mark Rutland, Hillf Danton,
	linux-arch, Aneesh Kumar K.V, Martin Schwidefsky, Heiko Carstens

Arnd Bergmann <arnd@arndb.de> writes:

> On Mon, May 22, 2017 at 3:36 PM, Punit Agrawal <punit.agrawal@arm.com> wrote:
>> diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
>> index 99b490b4d05a..3138e126f43b 100644
>> --- a/include/asm-generic/hugetlb.h
>> +++ b/include/asm-generic/hugetlb.h
>> @@ -31,10 +31,7 @@ static inline pte_t huge_pte_modify(pte_t pte, pgprot_t newprot)
>>         return pte_modify(pte, newprot);
>>  }
>>
>> -static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
>> -                                 pte_t *ptep)
>> -{
>> -       pte_clear(mm, addr, ptep);
>> -}
>> +void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
>> +                   pte_t *ptep, unsigned long sz);
>>
>>  #endif /* _ASM_GENERIC_HUGETLB_H */
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 0e4d1fb3122f..2b0f6f96f2c1 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -3289,6 +3289,12 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
>>         return ret;
>>  }
>>
>> +void __weak huge_pte_clear(struct mm_struct *mm, unsigned long addr,
>> +                          pte_t *ptep, unsigned long sz)
>> +{
>> +       pte_clear(mm, addr, ptep);
>> +}
>> +
>>  void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>>                             unsigned long start, unsigned long end,
>>                             struct page *ref_page)
>
> I don't really like how this moves the inline version from asm-generic into
> a __weak function here. I think it would be better to either stop
> using asm-generic/hugetlb.h
> on s390, or enclose the generic definition in
>
> #ifndef huge_pte_clear
>
> and then override by defining a macro in s390 as we do in other files
> in asm-generic.

Nice! I wasn't aware asm-generic follows this as a standard pattern.

s390 doesn't use asm-generic, but I needed to update the prototype with
an additional parameter (size) and needlessly moved the function. I'll
update the patch.

The change is needed to enable contiguous pte hugepage support on arm64
[0].

Thanks for taking a look.

Punit

[0] https://www.spinics.net/lists/arm-kernel/msg582758.html


* [PATCH v3.1 4/6] mm/hugetlb: Allow architectures to override huge_pte_clear()
  2017-05-22 13:36 ` [PATCH v3 4/6] mm/hugetlb: Allow architectures to override huge_pte_clear() Punit Agrawal
  2017-05-22 13:59   ` Arnd Bergmann
@ 2017-05-22 16:25   ` Punit Agrawal
  2017-05-22 20:34     ` Arnd Bergmann
  2017-05-23  5:26     ` Martin Schwidefsky
  1 sibling, 2 replies; 20+ messages in thread
From: Punit Agrawal @ 2017-05-22 16:25 UTC (permalink / raw)
  To: akpm
  Cc: Punit Agrawal, linux-mm, linux-kernel, linux-arm-kernel,
	catalin.marinas, will.deacon, n-horiguchi, kirill.shutemov,
	mike.kravetz, steve.capper, mark.rutland, linux-arch,
	aneesh.kumar, Martin Schwidefsky, Heiko Carstens, Arnd Bergmann

When unmapping a hugepage range, huge_pte_clear() is used to clear the
page table entries that are marked as not present. huge_pte_clear()
internally just ends up calling pte_clear() which does not correctly
deal with hugepages consisting of contiguous page table entries.

Add a size argument to address this issue and allow architectures to
override huge_pte_clear() by wrapping it in a #ifndef block.

Update s390 implementation with the size parameter as well.

Note that the change only affects huge_pte_clear() - the other generic
hugetlb functions don't need any change.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
---

Changes since v3

* Drop weak function and use #ifndef block to allow architecture override
* Drop unnecessary move of s390 function definition

 arch/s390/include/asm/hugetlb.h | 2 +-
 include/asm-generic/hugetlb.h   | 4 +++-
 mm/hugetlb.c                    | 2 +-
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
index cd546a245c68..c0443500baec 100644
--- a/arch/s390/include/asm/hugetlb.h
+++ b/arch/s390/include/asm/hugetlb.h
@@ -39,7 +39,7 @@ static inline int prepare_hugepage_range(struct file *file,
 #define arch_clear_hugepage_flags(page)		do { } while (0)
 
 static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
-				  pte_t *ptep)
+				  pte_t *ptep, unsigned long sz)
 {
 	if ((pte_val(*ptep) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3)
 		pte_val(*ptep) = _REGION3_ENTRY_EMPTY;
diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
index 99b490b4d05a..540354f94f83 100644
--- a/include/asm-generic/hugetlb.h
+++ b/include/asm-generic/hugetlb.h
@@ -31,10 +31,12 @@ static inline pte_t huge_pte_modify(pte_t pte, pgprot_t newprot)
 	return pte_modify(pte, newprot);
 }
 
+#ifndef huge_pte_clear
 static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
-				  pte_t *ptep)
+		    pte_t *ptep, unsigned long sz)
 {
 	pte_clear(mm, addr, ptep);
 }
+#endif
 
 #endif /* _ASM_GENERIC_HUGETLB_H */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0e4d1fb3122f..ddfed20cd637 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3338,7 +3338,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 * unmapped and its refcount is dropped, so just clear pte here.
 		 */
 		if (unlikely(!pte_present(pte))) {
-			huge_pte_clear(mm, address, ptep);
+			huge_pte_clear(mm, address, ptep, sz);
 			spin_unlock(ptl);
 			continue;
 		}
-- 
2.11.0


* [PATCH v3.1 5/6] mm/hugetlb: Introduce set_huge_swap_pte_at() helper
  2017-05-22 13:36 ` [PATCH v3 5/6] mm/hugetlb: Introduce set_huge_swap_pte_at() helper Punit Agrawal
@ 2017-05-22 16:30   ` Punit Agrawal
  0 siblings, 0 replies; 20+ messages in thread
From: Punit Agrawal @ 2017-05-22 16:30 UTC (permalink / raw)
  To: akpm
  Cc: Punit Agrawal, linux-mm, linux-kernel, linux-arm-kernel,
	catalin.marinas, will.deacon, n-horiguchi, kirill.shutemov,
	mike.kravetz, steve.capper, mark.rutland, linux-arch,
	aneesh.kumar

set_huge_pte_at(), an architecture callback to populate hugepage ptes,
does not provide the range of virtual memory that is targeted. This
leads to ambiguity when dealing with swap entries on architectures that
support hugepages consisting of contiguous ptes.

Fix the problem by introducing an overridable helper for architectures
needing this support. The helper is called when populating the page
tables with swap entries. The size of the targeted region is provided to
the helper to help determine the number of entries to be updated.

Provide a default implementation that maintains the current behaviour.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Acked-by: Steve Capper <steve.capper@arm.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
---

Changes since v3:

* Use #ifndef block instead of weak function

 include/linux/hugetlb.h | 8 ++++++++
 mm/hugetlb.c            | 8 +++++---
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 23010a3b2047..879eb063fb95 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -435,6 +435,14 @@ static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 }
 #endif
 
+#ifndef set_huge_swap_pte_at
+static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr,
+					pte_t *ptep, pte_t pte, unsigned long sz)
+{
+	set_huge_pte_at(mm, addr, ptep, pte);
+}
+#endif
+
 static inline struct hstate *page_hstate(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ddfed20cd637..e3052c16d29a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3263,9 +3263,10 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				 */
 				make_migration_entry_read(&swp_entry);
 				entry = swp_entry_to_pte(swp_entry);
-				set_huge_pte_at(src, addr, src_pte, entry);
+				set_huge_swap_pte_at(src, addr, src_pte,
+						     entry, sz);
 			}
-			set_huge_pte_at(dst, addr, dst_pte, entry);
+			set_huge_swap_pte_at(dst, addr, dst_pte, entry, sz);
 		} else {
 			if (cow) {
 				huge_ptep_set_wrprotect(src, addr, src_pte);
@@ -4277,7 +4278,8 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 
 				make_migration_entry_read(&entry);
 				newpte = swp_entry_to_pte(entry);
-				set_huge_pte_at(mm, address, ptep, newpte);
+				set_huge_swap_pte_at(mm, address, ptep,
+						     newpte, huge_page_size(h));
 				pages++;
 			}
 			spin_unlock(ptl);
-- 
2.11.0


* Re: [PATCH v3.1 4/6] mm/hugetlb: Allow architectures to override huge_pte_clear()
  2017-05-22 16:25   ` [PATCH v3.1 " Punit Agrawal
@ 2017-05-22 20:34     ` Arnd Bergmann
  2017-05-23 14:53       ` Punit Agrawal
  2017-05-23  5:26     ` Martin Schwidefsky
  1 sibling, 1 reply; 20+ messages in thread
From: Arnd Bergmann @ 2017-05-22 20:34 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: Andrew Morton, Linux-MM, Linux Kernel Mailing List, Linux ARM,
	Catalin Marinas, Will Deacon, n-horiguchi, Kirill A . Shutemov,
	mike.kravetz, steve.capper, Mark Rutland, linux-arch,
	Aneesh Kumar K.V, Martin Schwidefsky, Heiko Carstens

On Mon, May 22, 2017 at 6:25 PM, Punit Agrawal <punit.agrawal@arm.com> wrote:
> When unmapping a hugepage range, huge_pte_clear() is used to clear the
> page table entries that are marked as not present. huge_pte_clear()
> internally just ends up calling pte_clear() which does not correctly
> deal with hugepages consisting of contiguous page table entries.
>
> Add a size argument to address this issue and allow architectures to
> override huge_pte_clear() by wrapping it in a #ifndef block.
>
> Update s390 implementation with the size parameter as well.
>
> Note that the change only affects huge_pte_clear() - the other generic
> hugetlb functions don't need any change.
>
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>

Acked-by: Arnd Bergmann <arnd@arndb.de>


* Re: [PATCH v3.1 4/6] mm/hugetlb: Allow architectures to override huge_pte_clear()
  2017-05-22 16:25   ` [PATCH v3.1 " Punit Agrawal
  2017-05-22 20:34     ` Arnd Bergmann
@ 2017-05-23  5:26     ` Martin Schwidefsky
  2017-05-23 14:53       ` Punit Agrawal
  1 sibling, 1 reply; 20+ messages in thread
From: Martin Schwidefsky @ 2017-05-23  5:26 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: akpm, linux-mm, linux-kernel, linux-arm-kernel, catalin.marinas,
	will.deacon, n-horiguchi, kirill.shutemov, mike.kravetz,
	steve.capper, mark.rutland, linux-arch, aneesh.kumar,
	Heiko Carstens, Arnd Bergmann

On Mon, 22 May 2017 17:25:55 +0100
Punit Agrawal <punit.agrawal@arm.com> wrote:

> When unmapping a hugepage range, huge_pte_clear() is used to clear the
> page table entries that are marked as not present. huge_pte_clear()
> internally just ends up calling pte_clear() which does not correctly
> deal with hugepages consisting of contiguous page table entries.
> 
> Add a size argument to address this issue and allow architectures to
> override huge_pte_clear() by wrapping it in a #ifndef block.
> 
> Update s390 implementation with the size parameter as well.
> 
> Note that the change only affects huge_pte_clear() - the other generic
> hugetlb functions don't need any change.
> 
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> ---
> 
> Changes since v3
> 
> * Drop weak function and use #ifndef block to allow architecture override
> * Drop unnecessary move of s390 function definition
> 
>  arch/s390/include/asm/hugetlb.h | 2 +-
>  include/asm-generic/hugetlb.h   | 4 +++-
>  mm/hugetlb.c                    | 2 +-
>  3 files changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
> index cd546a245c68..c0443500baec 100644
> --- a/arch/s390/include/asm/hugetlb.h
> +++ b/arch/s390/include/asm/hugetlb.h
> @@ -39,7 +39,7 @@ static inline int prepare_hugepage_range(struct file *file,
>  #define arch_clear_hugepage_flags(page)		do { } while (0)
> 
>  static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
> -				  pte_t *ptep)
> +				  pte_t *ptep, unsigned long sz)
>  {
>  	if ((pte_val(*ptep) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3)
>  		pte_val(*ptep) = _REGION3_ENTRY_EMPTY;

For the nop-change for s390:
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


* Re: [PATCH v3 3/6] mm/hugetlb: add size parameter to huge_pte_offset()
  2017-05-22 13:36 ` [PATCH v3 3/6] mm/hugetlb: add size parameter to huge_pte_offset() Punit Agrawal
@ 2017-05-23 10:04   ` kbuild test robot
  2017-05-23 16:13     ` Punit Agrawal
  0 siblings, 1 reply; 20+ messages in thread
From: kbuild test robot @ 2017-05-23 10:04 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: kbuild-all, akpm, linux-mm, linux-kernel, linux-arm-kernel,
	catalin.marinas, will.deacon, n-horiguchi, kirill.shutemov,
	mike.kravetz, steve.capper, mark.rutland, hillf.zj, linux-arch,
	aneesh.kumar, Tony Luck, Fenghua Yu, James Hogan, Ralf Baechle,
	James E.J. Bottomley, Helge Deller, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman, Martin Schwidefsky,
	Heiko Carstens, Yoshinori Sato, Rich Felker, David S. Miller,
	Chris Metcalf, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Alexander Viro, Michal Hocko

Hi Punit,

[auto build test ERROR on linus/master]
[also build test ERROR on v4.12-rc2 next-20170523]
[cannot apply to mmotm/master]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Punit-Agrawal/Support-for-contiguous-pte-hugepages/20170523-142407
config: arm64-defconfig (attached as .config)
compiler: aarch64-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
        wget https://raw.githubusercontent.com/01org/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=arm64 

All errors (new ones prefixed by >>):

   arch/arm64/mm/hugetlbpage.c: In function 'huge_ptep_get_and_clear':
>> arch/arm64/mm/hugetlbpage.c:200:10: error: too few arguments to function 'huge_pte_offset'
      cpte = huge_pte_offset(mm, addr);
             ^~~~~~~~~~~~~~~
   arch/arm64/mm/hugetlbpage.c:135:8: note: declared here
    pte_t *huge_pte_offset(struct mm_struct *mm,
           ^~~~~~~~~~~~~~~
   arch/arm64/mm/hugetlbpage.c: In function 'huge_ptep_set_access_flags':
   arch/arm64/mm/hugetlbpage.c:238:10: error: too few arguments to function 'huge_pte_offset'
      cpte = huge_pte_offset(vma->vm_mm, addr);
             ^~~~~~~~~~~~~~~
   arch/arm64/mm/hugetlbpage.c:135:8: note: declared here
    pte_t *huge_pte_offset(struct mm_struct *mm,
           ^~~~~~~~~~~~~~~
   arch/arm64/mm/hugetlbpage.c: In function 'huge_ptep_set_wrprotect':
   arch/arm64/mm/hugetlbpage.c:263:10: error: too few arguments to function 'huge_pte_offset'
      cpte = huge_pte_offset(mm, addr);
             ^~~~~~~~~~~~~~~
   arch/arm64/mm/hugetlbpage.c:135:8: note: declared here
    pte_t *huge_pte_offset(struct mm_struct *mm,
           ^~~~~~~~~~~~~~~
   arch/arm64/mm/hugetlbpage.c: In function 'huge_ptep_clear_flush':
   arch/arm64/mm/hugetlbpage.c:280:10: error: too few arguments to function 'huge_pte_offset'
      cpte = huge_pte_offset(vma->vm_mm, addr);
             ^~~~~~~~~~~~~~~
   arch/arm64/mm/hugetlbpage.c:135:8: note: declared here
    pte_t *huge_pte_offset(struct mm_struct *mm,
           ^~~~~~~~~~~~~~~

vim +/huge_pte_offset +200 arch/arm64/mm/hugetlbpage.c

66b3923a David Woods 2015-12-17  194  	if (pte_cont(*ptep)) {
66b3923a David Woods 2015-12-17  195  		int ncontig, i;
66b3923a David Woods 2015-12-17  196  		size_t pgsize;
66b3923a David Woods 2015-12-17  197  		pte_t *cpte;
66b3923a David Woods 2015-12-17  198  		bool is_dirty = false;
66b3923a David Woods 2015-12-17  199  
66b3923a David Woods 2015-12-17 @200  		cpte = huge_pte_offset(mm, addr);
66b3923a David Woods 2015-12-17  201  		ncontig = find_num_contig(mm, addr, cpte, *cpte, &pgsize);
66b3923a David Woods 2015-12-17  202  		/* save the 1st pte to return */
66b3923a David Woods 2015-12-17  203  		pte = ptep_get_and_clear(mm, addr, cpte);

:::::: The code at line 200 was first introduced by commit
:::::: 66b3923a1a0f77a563b43f43f6ad091354abbfe9 arm64: hugetlb: add support for PTE contiguous bit

:::::: TO: David Woods <dwoods@ezchip.com>
:::::: CC: Will Deacon <will.deacon@arm.com>


* Re: [PATCH v3 1/6] mm, gup: Remove broken VM_BUG_ON_PAGE compound check for hugepages
  2017-05-22 13:35 ` [PATCH v3 1/6] mm, gup: Remove broken VM_BUG_ON_PAGE compound check for hugepages Punit Agrawal
@ 2017-05-23 13:09   ` Kirill A. Shutemov
  0 siblings, 0 replies; 20+ messages in thread
From: Kirill A. Shutemov @ 2017-05-23 13:09 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: akpm, Will Deacon, linux-mm, linux-kernel, linux-arm-kernel,
	catalin.marinas, n-horiguchi, kirill.shutemov, mike.kravetz,
	steve.capper, mark.rutland, hillf.zj, linux-arch, aneesh.kumar

On Mon, May 22, 2017 at 02:35:59PM +0100, Punit Agrawal wrote:
> From: Will Deacon <will.deacon@arm.com>
> 
> When operating on hugepages with DEBUG_VM enabled, the GUP code checks the
> compound head for each tail page prior to calling page_cache_add_speculative.
> This is broken, because on the fast-GUP path (where we don't hold any page
> table locks) we can be racing with a concurrent invocation of
> split_huge_page_to_list.
> 
> split_huge_page_to_list deals with this race by using page_ref_freeze to
> freeze the page and force concurrent GUPs to fail whilst the component
> pages are modified. This modification includes clearing the compound_head
> field for the tail pages, so checking this prior to a successful call
> to page_cache_add_speculative can lead to false positives: In fact,
> page_cache_add_speculative *already* has this check once the page refcount
> has been successfully updated, so we can simply remove the broken calls
> to VM_BUG_ON_PAGE.
> 
> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> Acked-by: Steve Capper <steve.capper@arm.com>
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>

Looks reasonable to me:

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

-- 
 Kirill A. Shutemov
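
For context, the check being deleted sits in the tail-page loops of
gup_huge_pmd(), gup_huge_pud() and gup_huge_pgd() in mm/gup.c. A simplified
sketch of the pattern, with the VM_BUG_ON_PAGE() line being the one removed:

	/*
	 * Fast GUP runs without page table locks, so a concurrent
	 * split_huge_page_to_list() may clear compound_head on the tail
	 * pages before page_ref_freeze() makes the speculative reference
	 * fail; checking here can therefore trip spuriously. The same
	 * check in page_cache_add_speculative() runs only after the
	 * refcount has been taken successfully, which is race-free.
	 */
	do {
		VM_BUG_ON_PAGE(compound_head(page) != head, page); /* removed */
		pages[*nr] = page;
		(*nr)++;
		page++;
		refs++;
	} while (addr += PAGE_SIZE, addr != end);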


* Re: [PATCH v3 2/6] mm, gup: Ensure real head page is ref-counted when using hugepages
  2017-05-22 13:36 ` [PATCH v3 2/6] mm, gup: Ensure real head page is ref-counted when using hugepages Punit Agrawal
@ 2017-05-23 13:13   ` Kirill A. Shutemov
  2017-05-23 15:43     ` Punit Agrawal
  0 siblings, 1 reply; 20+ messages in thread
From: Kirill A. Shutemov @ 2017-05-23 13:13 UTC (permalink / raw)
  To: Punit Agrawal
  Cc: akpm, linux-mm, linux-kernel, linux-arm-kernel, catalin.marinas,
	will.deacon, n-horiguchi, kirill.shutemov, mike.kravetz,
	steve.capper, mark.rutland, hillf.zj, linux-arch, aneesh.kumar,
	Michal Hocko

On Mon, May 22, 2017 at 02:36:00PM +0100, Punit Agrawal wrote:
> When speculatively taking references to a hugepage using
> page_cache_add_speculative() in gup_huge_pmd(), it is assumed that the
> page returned by pmd_page() is the head page. Although normally true,
> this assumption doesn't hold when the hugepage comprises successive
> page table entries such as when using contiguous bit on arm64 at PTE or
> PMD levels.
> 
> This can be addressed by ensuring that the page passed to
> page_cache_add_speculative() is the real head or by de-referencing the
> head page within the function.
> 
> We take the first approach to keep the usage pattern aligned with
> page_cache_get_speculative() where users already pass the appropriate
> page, i.e., the de-referenced head.
> 
> Apply the same logic to fix gup_huge_[pud|pgd]() as well.

Hm. Okay. But I'm kinda surprised that this is the only place that needs to
be adjusted.

Have you validated all other pmd_page() use-cases?

-- 
 Kirill A. Shutemov


* Re: [PATCH v3.1 4/6] mm/hugetlb: Allow architectures to override huge_pte_clear()
  2017-05-22 20:34     ` Arnd Bergmann
@ 2017-05-23 14:53       ` Punit Agrawal
  0 siblings, 0 replies; 20+ messages in thread
From: Punit Agrawal @ 2017-05-23 14:53 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Andrew Morton, Linux-MM, Linux Kernel Mailing List, Linux ARM,
	Catalin Marinas, Will Deacon, n-horiguchi, Kirill A . Shutemov,
	mike.kravetz, steve.capper, Mark Rutland, linux-arch,
	Aneesh Kumar K.V, Martin Schwidefsky, Heiko Carstens

Arnd Bergmann <arnd@arndb.de> writes:

> On Mon, May 22, 2017 at 6:25 PM, Punit Agrawal <punit.agrawal@arm.com> wrote:
>> When unmapping a hugepage range, huge_pte_clear() is used to clear the
>> page table entries that are marked as not present. huge_pte_clear()
>> internally just ends up calling pte_clear() which does not correctly
>> deal with hugepages consisting of contiguous page table entries.
>>
>> Add a size argument to address this issue and allow architectures to
>> override huge_pte_clear() by wrapping it in a #ifndef block.
>>
>> Update s390 implementation with the size parameter as well.
>>
>> Note that the change only affects huge_pte_clear() - the other generic
>> hugetlb functions don't need any change.
>>
>> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
>> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
>> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
>> Cc: Arnd Bergmann <arnd@arndb.de>
>> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> Cc: Mike Kravetz <mike.kravetz@oracle.com>
>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

Thanks, Arnd. I've applied the tag locally.


* Re: [PATCH v3.1 4/6] mm/hugetlb: Allow architectures to override huge_pte_clear()
  2017-05-23  5:26     ` Martin Schwidefsky
@ 2017-05-23 14:53       ` Punit Agrawal
  0 siblings, 0 replies; 20+ messages in thread
From: Punit Agrawal @ 2017-05-23 14:53 UTC (permalink / raw)
  To: Martin Schwidefsky
  Cc: akpm, linux-mm, linux-kernel, linux-arm-kernel, catalin.marinas,
	will.deacon, n-horiguchi, kirill.shutemov, mike.kravetz,
	steve.capper, mark.rutland, linux-arch, aneesh.kumar,
	Heiko Carstens, Arnd Bergmann

Martin Schwidefsky <schwidefsky@de.ibm.com> writes:

> On Mon, 22 May 2017 17:25:55 +0100
> Punit Agrawal <punit.agrawal@arm.com> wrote:
>
>> When unmapping a hugepage range, huge_pte_clear() is used to clear the
>> page table entries that are marked as not present. huge_pte_clear()
>> internally just ends up calling pte_clear() which does not correctly
>> deal with hugepages consisting of contiguous page table entries.
>> 
>> Add a size argument to address this issue and allow architectures to
>> override huge_pte_clear() by wrapping it in a #ifndef block.
>> 
>> Update s390 implementation with the size parameter as well.
>> 
>> Note that the change only affects huge_pte_clear() - the other generic
>> hugetlb functions don't need any change.
>> 
>> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
>> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
>> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
>> Cc: Arnd Bergmann <arnd@arndb.de>
>> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> Cc: Mike Kravetz <mike.kravetz@oracle.com>
>> ---
>> 
>> Changes since v3
>> 
>> * Drop weak function and use #ifndef block to allow architecture override
>> * Drop unnecessary move of s390 function definition
>> 
>>  arch/s390/include/asm/hugetlb.h | 2 +-
>>  include/asm-generic/hugetlb.h   | 4 +++-
>>  mm/hugetlb.c                    | 2 +-
>>  3 files changed, 5 insertions(+), 3 deletions(-)
>> 
>> diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
>> index cd546a245c68..c0443500baec 100644
>> --- a/arch/s390/include/asm/hugetlb.h
>> +++ b/arch/s390/include/asm/hugetlb.h
>> @@ -39,7 +39,7 @@ static inline int prepare_hugepage_range(struct file *file,
>>  #define arch_clear_hugepage_flags(page)		do { } while (0)
>> 
>>  static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
>> -				  pte_t *ptep)
>> +				  pte_t *ptep, unsigned long sz)
>>  {
>>  	if ((pte_val(*ptep) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3)
>>  		pte_val(*ptep) = _REGION3_ENTRY_EMPTY;
>
> For the nop-change for s390:
> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

Applied the tag locally. Thanks! 


* Re: [PATCH v3 2/6] mm, gup: Ensure real head page is ref-counted when using hugepages
  2017-05-23 13:13   ` Kirill A. Shutemov
@ 2017-05-23 15:43     ` Punit Agrawal
  0 siblings, 0 replies; 20+ messages in thread
From: Punit Agrawal @ 2017-05-23 15:43 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: akpm, linux-mm, linux-kernel, linux-arm-kernel, catalin.marinas,
	will.deacon, n-horiguchi, kirill.shutemov, mike.kravetz,
	steve.capper, mark.rutland, hillf.zj, linux-arch, aneesh.kumar,
	Michal Hocko

"Kirill A. Shutemov" <kirill@shutemov.name> writes:

> On Mon, May 22, 2017 at 02:36:00PM +0100, Punit Agrawal wrote:
>> When speculatively taking references to a hugepage using
>> page_cache_add_speculative() in gup_huge_pmd(), it is assumed that the
>> page returned by pmd_page() is the head page. Although normally true,
>> this assumption doesn't hold when the hugepage comprises of successive
>> page table entries such as when using contiguous bit on arm64 at PTE or
>> PMD levels.
>> 
>> This can be addressed by ensuring that the page passed to
>> page_cache_add_speculative() is the real head or by de-referencing the
>> head page within the function.
>> 
>> We take the first approach to keep the usage pattern aligned with
>> page_cache_get_speculative() where users already pass the appropriate
>> page, i.e., the de-referenced head.
>> 
>> Apply the same logic to fix gup_huge_[pud|pgd]() as well.
>
> Hm. Okay. But I'm kinda surprised that this is the only place that needs to
> be adjusted.
>
> Have you validated all other pmd_page() use-cases?

I came across the gup issues while investigating a failing test from
mce-tests.

I think the problem here is not due to the use of pmd_page() but because
page_cache_[add|get]_speculative() don't ensure they ref-count the head
page as is done in get_page().

Having said that, I had a quick look at the other uses of pmd_page() -

Quite a few of them are followed by an explicit BUG_ON() to check that
the page returned is a head page. All other instances seem to be dealing
with transparent hugepages where contiguous hugepages are not supported.

I don't see any call sites that ring alarm bells.

Did you have any particular part of the code in mind where pmd_page()
usage might be a problem?

Thanks,
Punit
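
For context, a sketch of the resulting gup_huge_pmd() flow, simplified from
mm/gup.c per the commit message above (gup_huge_pud() and gup_huge_pgd()
follow the same pattern):

	/*
	 * pmd_page(orig) may itself be a tail page when the hugepage is
	 * built from contiguous entries, so resolve the real compound
	 * head before taking the speculative reference on it; the
	 * individual (possibly tail) pages are still what get returned.
	 */
	page = pmd_page(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
	do {
		pages[*nr] = page;
		(*nr)++;
		page++;
		refs++;
	} while (addr += PAGE_SIZE, addr != end);

	head = compound_head(pmd_page(orig));
	if (!page_cache_add_speculative(head, refs)) {
		*nr -= refs;
		return 0;
	}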


* Re: [PATCH v3 3/6] mm/hugetlb: add size parameter to huge_pte_offset()
  2017-05-23 10:04   ` kbuild test robot
@ 2017-05-23 16:13     ` Punit Agrawal
  0 siblings, 0 replies; 20+ messages in thread
From: Punit Agrawal @ 2017-05-23 16:13 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, akpm, linux-mm, linux-kernel, linux-arm-kernel,
	catalin.marinas, will.deacon, n-horiguchi, kirill.shutemov,
	mike.kravetz, steve.capper, mark.rutland, hillf.zj, linux-arch,
	aneesh.kumar, Tony Luck, Fenghua Yu, James Hogan, Ralf Baechle,
	James E.J. Bottomley, Helge Deller, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman, Martin Schwidefsky,
	Heiko Carstens, Yoshinori Sato, Rich Felker, David S. Miller,
	Chris Metcalf, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Alexander Viro, Michal Hocko

kbuild test robot <lkp@intel.com> writes:

> Hi Punit,
>
> [auto build test ERROR on linus/master]
> [also build test ERROR on v4.12-rc2 next-20170523]
> [cannot apply to mmotm/master]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
>
> url:    https://github.com/0day-ci/linux/commits/Punit-Agrawal/Support-for-contiguous-pte-hugepages/20170523-142407
> config: arm64-defconfig (attached as .config)
> compiler: aarch64-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
> reproduce:
>         wget https://raw.githubusercontent.com/01org/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # save the attached .config to linux build tree
>         make.cross ARCH=arm64 
>
> All errors (new ones prefixed by >>):
>
>    arch/arm64/mm/hugetlbpage.c: In function 'huge_ptep_get_and_clear':
>>> arch/arm64/mm/hugetlbpage.c:200:10: error: too few arguments to function 'huge_pte_offset'
>       cpte = huge_pte_offset(mm, addr);
>              ^~~~~~~~~~~~~~~
>    arch/arm64/mm/hugetlbpage.c:135:8: note: declared here
>     pte_t *huge_pte_offset(struct mm_struct *mm,
>            ^~~~~~~~~~~~~~~
>    arch/arm64/mm/hugetlbpage.c: In function 'huge_ptep_set_access_flags':
>    arch/arm64/mm/hugetlbpage.c:238:10: error: too few arguments to function 'huge_pte_offset'
>       cpte = huge_pte_offset(vma->vm_mm, addr);
>              ^~~~~~~~~~~~~~~
>    arch/arm64/mm/hugetlbpage.c:135:8: note: declared here
>     pte_t *huge_pte_offset(struct mm_struct *mm,
>            ^~~~~~~~~~~~~~~
>    arch/arm64/mm/hugetlbpage.c: In function 'huge_ptep_set_wrprotect':
>    arch/arm64/mm/hugetlbpage.c:263:10: error: too few arguments to function 'huge_pte_offset'
>       cpte = huge_pte_offset(mm, addr);
>              ^~~~~~~~~~~~~~~
>    arch/arm64/mm/hugetlbpage.c:135:8: note: declared here
>     pte_t *huge_pte_offset(struct mm_struct *mm,
>            ^~~~~~~~~~~~~~~
>    arch/arm64/mm/hugetlbpage.c: In function 'huge_ptep_clear_flush':
>    arch/arm64/mm/hugetlbpage.c:280:10: error: too few arguments to function 'huge_pte_offset'
>       cpte = huge_pte_offset(vma->vm_mm, addr);
>              ^~~~~~~~~~~~~~~
>    arch/arm64/mm/hugetlbpage.c:135:8: note: declared here
>     pte_t *huge_pte_offset(struct mm_struct *mm,
>            ^~~~~~~~~~~~~~~

Ok, so we haven't quite managed to remove the dependency of this patch
on the following arm64 changes[0].

I'll post a new version fixing this failure soon.

[0] https://www.spinics.net/lists/arm-kernel/msg582758.html

>
> vim +/huge_pte_offset +200 arch/arm64/mm/hugetlbpage.c
>
> 66b3923a David Woods 2015-12-17  194  	if (pte_cont(*ptep)) {
> 66b3923a David Woods 2015-12-17  195  		int ncontig, i;
> 66b3923a David Woods 2015-12-17  196  		size_t pgsize;
> 66b3923a David Woods 2015-12-17  197  		pte_t *cpte;
> 66b3923a David Woods 2015-12-17  198  		bool is_dirty = false;
> 66b3923a David Woods 2015-12-17  199  
> 66b3923a David Woods 2015-12-17 @200  		cpte = huge_pte_offset(mm, addr);
> 66b3923a David Woods 2015-12-17  201  		ncontig = find_num_contig(mm, addr, cpte, *cpte, &pgsize);
> 66b3923a David Woods 2015-12-17  202  		/* save the 1st pte to return */
> 66b3923a David Woods 2015-12-17  203  		pte = ptep_get_and_clear(mm, addr, cpte);
>
> :::::: The code at line 200 was first introduced by commit
> :::::: 66b3923a1a0f77a563b43f43f6ad091354abbfe9 arm64: hugetlb: add support for PTE contiguous bit
>
> :::::: TO: David Woods <dwoods@ezchip.com>
> :::::: CC: Will Deacon <will.deacon@arm.com>
>
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
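
For context, the build failures come down to the new three-argument prototype
that patch 3/6 introduces: callers must pass the size of the mapping so the
architecture can pick the right page-table level. Roughly (the parameter name
sz follows the huge_pte_clear() change, and the CONT_PTE_SIZE argument below
is illustrative for the arm64 contiguous-pte case, not the final fix, which
lands with the arch series at [0]):

	pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
			       unsigned long sz);

	/* e.g. for the first error above, assuming a contiguous-pte mapping: */
	cpte = huge_pte_offset(mm, addr, CONT_PTE_SIZE);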



Thread overview: 20+ messages
2017-05-22 13:35 [PATCH v3 0/6] Support for contiguous pte hugepages Punit Agrawal
2017-05-22 13:35 ` [PATCH v3 1/6] mm, gup: Remove broken VM_BUG_ON_PAGE compound check for hugepages Punit Agrawal
2017-05-23 13:09   ` Kirill A. Shutemov
2017-05-22 13:36 ` [PATCH v3 2/6] mm, gup: Ensure real head page is ref-counted when using hugepages Punit Agrawal
2017-05-23 13:13   ` Kirill A. Shutemov
2017-05-23 15:43     ` Punit Agrawal
2017-05-22 13:36 ` [PATCH v3 3/6] mm/hugetlb: add size parameter to huge_pte_offset() Punit Agrawal
2017-05-23 10:04   ` kbuild test robot
2017-05-23 16:13     ` Punit Agrawal
2017-05-22 13:36 ` [PATCH v3 4/6] mm/hugetlb: Allow architectures to override huge_pte_clear() Punit Agrawal
2017-05-22 13:59   ` Arnd Bergmann
2017-05-22 15:40     ` Punit Agrawal
2017-05-22 16:25   ` [PATCH v3.1 " Punit Agrawal
2017-05-22 20:34     ` Arnd Bergmann
2017-05-23 14:53       ` Punit Agrawal
2017-05-23  5:26     ` Martin Schwidefsky
2017-05-23 14:53       ` Punit Agrawal
2017-05-22 13:36 ` [PATCH v3 5/6] mm/hugetlb: Introduce set_huge_swap_pte_at() helper Punit Agrawal
2017-05-22 16:30   ` [PATCH v3.1 " Punit Agrawal
2017-05-22 13:36 ` [PATCH v3 6/6] mm: rmap: Use correct helper when poisoning hugepages Punit Agrawal
