* [PATCH 00/12] mm: userspace hugepage collapse
@ 2022-04-10 13:54 Zach O'Keefe
  2022-04-10 13:54 ` [PATCH 01/12] mm/khugepaged: record SCAN_PMD_MAPPED when scan_pmd() finds THP Zach O'Keefe
                   ` (11 more replies)
  0 siblings, 12 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-10 13:54 UTC (permalink / raw)
  To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
	Thomas Bogendoerfer, Zach O'Keefe

Introduction
--------------------------------

This series provides a mechanism for userspace to induce a collapse of
eligible ranges of memory into transparent hugepages in process context,
thus permitting users to more tightly control their own hugepage
utilization policy at their own expense.

This idea was introduced by David Rientjes[1], and the semantics and
implementation were introduced and discussed in a previous PATCH RFC[2].

Interface
--------------------------------

The proposed interface adds a new madvise(2) mode, MADV_COLLAPSE, and
leverages the new process_madvise(2) call.

(*) process_madvise(2)

	Performs a synchronous collapse of the native pages
	mapped by the list of iovecs into transparent hugepages.

	Allocation semantics are the same as khugepaged, and depend on
	(1) the active sysfs settings
	/sys/kernel/mm/transparent_hugepage/enabled and
	/sys/kernel/mm/transparent_hugepage/khugepaged/defrag, and (2)
	the VMA flags of the memory range being collapsed.

	Collapse eligibility criteria differ from khugepaged in that
	the sysfs files
	/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_[none|swap|shared]
	are ignored.

	When a range spans multiple hugepage-aligned/sized regions, the
	collapse of each region is attempted independently of the
	others.

	Caller must have CAP_SYS_ADMIN if not acting on self.

	Return value follows existing process_madvise(2) conventions.  A
	“success” indicates that all hugepage-sized/aligned regions
	covered by the provided range were either successfully
	collapsed, or were already pmd-mapped THPs.

(*) madvise(2)

	Equivalent to process_madvise(2) on self, with 0 returned on
	“success”.
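
To make the interface concrete, the sketch below exercises both entry
points.  It assumes a kernel carrying this series, a 2MiB pmd size
(x86_64), and the MADV_COLLAPSE value proposed in patch 5; it is
illustrative only, not part of any released uapi:

	#include <stdint.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <sys/uio.h>
	#include <unistd.h>

	#ifndef MADV_COLLAPSE
	#define MADV_COLLAPSE 25	/* value proposed in patch 5 */
	#endif

	int main(void)
	{
		size_t huge = 2UL << 20;	/* pmd size: an assumption */
		size_t len = 2 * huge;		/* two collapsible regions */
		char *raw, *buf;
		struct iovec iov;
		int pidfd;

		raw = mmap(NULL, len + huge, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (raw == MAP_FAILED)
			return 1;
		/* align so [buf, buf + len) covers exactly two
		 * hugepage-aligned/sized regions */
		buf = (char *)(((uintptr_t)raw + huge - 1) & ~(huge - 1));
		for (size_t i = 0; i < len; i += 4096)
			buf[i] = 1;		/* fault in native pages */

		/* madvise(2) on self: returns 0 iff every covered region
		 * is now, or already was, a pmd-mapped THP */
		if (madvise(buf, len, MADV_COLLAPSE))
			perror("madvise(MADV_COLLAPSE)");

		/* process_madvise(2): same semantics; CAP_SYS_ADMIN is
		 * needed only when acting on another process */
		iov.iov_base = buf;
		iov.iov_len = len;
		pidfd = (int)syscall(SYS_pidfd_open, getpid(), 0);
		if (pidfd < 0 ||
		    syscall(SYS_process_madvise, pidfd, &iov, 1,
			    MADV_COLLAPSE, 0) != (ssize_t)len)
			perror("process_madvise(MADV_COLLAPSE)");
		return 0;
	}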

Future work
--------------------------------

Only private anonymous memory is supported by this series. File and
shmem memory support will be added later.

One possible user of this functionality is a userspace agent that
attempts to optimize THP utilization system-wide by allocating THPs
based on, for example, task priority, task performance requirements, or
heatmaps.  For the latter, one idea that has already surfaced is using
DAMON to identify hot regions, and driving THP collapse through a new
DAMOS_COLLAPSE scheme[3].

Sequence of Patches
--------------------------------

Patches 1-4 perform refactoring of collapse logic within khugepaged.c
and introduce the notion of a collapse context.

Patches 5-9 introduce MADV_COLLAPSE, do some renaming, add support so
that the MADV_COLLAPSE context has the eligibility and allocation
semantics referenced above, and add process_madvise(2) support.

Patches 10-12 add selftests to test the new functionality.

Applies against next-20220408.

[1] https://lore.kernel.org/all/C8C89F13-3F04-456B-BA76-DE2C378D30BF@nvidia.com/
[2] https://lore.kernel.org/linux-mm/20220308213417.1407042-1-zokeefe@google.com/
[3] https://lore.kernel.org/lkml/bcc8d9a0-81d-5f34-5e4-fcc28eb7ce@google.com/T/

Zach O'Keefe (12):
  mm/khugepaged: record SCAN_PMD_MAPPED when scan_pmd() finds THP
  mm/khugepaged: add struct collapse_control
  mm/khugepaged: make hugepage allocation context-specific
  mm/khugepaged: add struct collapse_result
  mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse
  mm/khugepaged: remove khugepaged prefix from shared collapse functions
  mm/khugepaged: add flag to ignore khugepaged_max_ptes_*
  mm/khugepaged: add flag to ignore page young/referenced requirement
  mm/madvise: add MADV_COLLAPSE to process_madvise()
  selftests/vm: modularize collapse selftests
  selftests/vm: add MADV_COLLAPSE collapse context to selftests
  selftests/vm: add test to verify recollapse of THPs

 include/linux/huge_mm.h                 |  12 +
 include/trace/events/huge_memory.h      |   5 +-
 include/uapi/asm-generic/mman-common.h  |   2 +
 mm/internal.h                           |   1 +
 mm/khugepaged.c                         | 598 ++++++++++++++++--------
 mm/madvise.c                            |  11 +-
 mm/rmap.c                               |  15 +-
 tools/testing/selftests/vm/khugepaged.c | 417 +++++++++++------
 8 files changed, 702 insertions(+), 359 deletions(-)

-- 
2.35.1.1178.g4f1659d476-goog




* [PATCH 01/12] mm/khugepaged: record SCAN_PMD_MAPPED when scan_pmd() finds THP
  2022-04-10 13:54 [PATCH 00/12] mm: userspace hugepage collapse Zach O'Keefe
@ 2022-04-10 13:54 ` Zach O'Keefe
  2022-04-10 13:54 ` [PATCH 02/12] mm/khugepaged: add struct collapse_control Zach O'Keefe
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-10 13:54 UTC (permalink / raw)
  To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
	Thomas Bogendoerfer, Zach O'Keefe

When scanning an anon pmd to see if it's eligible for collapse, return
SCAN_PMD_MAPPED if the pmd already maps a THP.  Note that
SCAN_PMD_MAPPED is different from SCAN_PAGE_COMPOUND used in the
file-collapse path, since the latter might identify pte-mapped compound
pages.  This is required by MADV_COLLAPSE which necessarily needs to
know what hugepage-aligned/sized regions are already pmd-mapped.

Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 include/trace/events/huge_memory.h |  3 ++-
 mm/internal.h                      |  1 +
 mm/khugepaged.c                    | 30 ++++++++++++++++++++++++++----
 mm/rmap.c                          | 15 +++++++++++++--
 4 files changed, 42 insertions(+), 7 deletions(-)

diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index d651f3437367..9faa678e0a5b 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -33,7 +33,8 @@
 	EM( SCAN_ALLOC_HUGE_PAGE_FAIL,	"alloc_huge_page_failed")	\
 	EM( SCAN_CGROUP_CHARGE_FAIL,	"ccgroup_charge_failed")	\
 	EM( SCAN_TRUNCATED,		"truncated")			\
-	EMe(SCAN_PAGE_HAS_PRIVATE,	"page_has_private")		\
+	EM( SCAN_PAGE_HAS_PRIVATE,	"page_has_private")		\
+	EMe(SCAN_PMD_MAPPED,		"page_pmd_mapped")		\
 
 #undef EM
 #undef EMe
diff --git a/mm/internal.h b/mm/internal.h
index 1d3fb3c0f971..db594d611925 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -172,6 +172,7 @@ extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason
 /*
  * in mm/rmap.c:
  */
+pmd_t *mm_find_pmd_raw(struct mm_struct *mm, unsigned long address);
 extern pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
 
 /*
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0cde4b44d799..b403f056a847 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -51,6 +51,7 @@ enum scan_result {
 	SCAN_CGROUP_CHARGE_FAIL,
 	SCAN_TRUNCATED,
 	SCAN_PAGE_HAS_PRIVATE,
+	SCAN_PMD_MAPPED,
 };
 
 #define CREATE_TRACE_POINTS
@@ -987,6 +988,29 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 	return 0;
 }
 
+static int find_pmd_or_thp_or_none(struct mm_struct *mm,
+				   unsigned long address,
+				   pmd_t **pmd)
+{
+	pmd_t pmde;
+
+	*pmd = mm_find_pmd_raw(mm, address);
+	if (!*pmd)
+		return SCAN_PMD_NULL;
+
+	pmde = pmd_read_atomic(*pmd);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	/* See comments in pmd_none_or_trans_huge_or_clear_bad() */
+	barrier();
+#endif
+	if (!pmd_present(pmde) || pmd_none(pmde))
+		return SCAN_PMD_NULL;
+	if (pmd_trans_huge(pmde))
+		return SCAN_PMD_MAPPED;
+	return SCAN_SUCCEED;
+}
+
 /*
  * Bring missing pages in from swap, to complete THP collapse.
  * Only done if khugepaged_scan_pmd believes it is worthwhile.
@@ -1238,11 +1262,9 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 
-	pmd = mm_find_pmd(mm, address);
-	if (!pmd) {
-		result = SCAN_PMD_NULL;
+	result = find_pmd_or_thp_or_none(mm, address, &pmd);
+	if (result != SCAN_SUCCEED)
 		goto out;
-	}
 
 	memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
 	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
diff --git a/mm/rmap.c b/mm/rmap.c
index a1211fa879cf..fb47443f44c6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -758,13 +758,12 @@ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 	return vma_address(page, vma);
 }
 
-pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
+pmd_t *mm_find_pmd_raw(struct mm_struct *mm, unsigned long address)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd = NULL;
-	pmd_t pmde;
 
 	pgd = pgd_offset(mm, address);
 	if (!pgd_present(*pgd))
@@ -779,6 +778,18 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
 		goto out;
 
 	pmd = pmd_offset(pud, address);
+out:
+	return pmd;
+}
+
+pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
+{
+	pmd_t pmde;
+	pmd_t *pmd;
+
+	pmd = mm_find_pmd_raw(mm, address);
+	if (!pmd)
+		goto out;
 	/*
 	 * Some THP functions use the sequence pmdp_huge_clear_flush(), set_pmd_at()
 	 * without holding anon_vma lock for write.  So when looking for a
-- 
2.35.1.1178.g4f1659d476-goog




* [PATCH 02/12] mm/khugepaged: add struct collapse_control
  2022-04-10 13:54 [PATCH 00/12] mm: userspace hugepage collapse Zach O'Keefe
  2022-04-10 13:54 ` [PATCH 01/12] mm/khugepaged: record SCAN_PMD_MAPPED when scan_pmd() finds THP Zach O'Keefe
@ 2022-04-10 13:54 ` Zach O'Keefe
  2022-04-10 13:54 ` [PATCH 03/12] mm/khugepaged: make hugepage allocation context-specific Zach O'Keefe
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-10 13:54 UTC (permalink / raw)
  To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
	Thomas Bogendoerfer, Zach O'Keefe

Modularize hugepage collapse by introducing struct collapse_control.
This structure describes the properties of the requested collapse, and
also serves as a local scratchpad used during the collapse itself.

Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 mm/khugepaged.c | 79 ++++++++++++++++++++++++++++---------------------
 1 file changed, 46 insertions(+), 33 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b403f056a847..eca61eb88dda 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -86,6 +86,14 @@ static struct kmem_cache *mm_slot_cache __read_mostly;
 
 #define MAX_PTE_MAPPED_THP 8
 
+struct collapse_control {
+	/* Num pages scanned per node */
+	int node_load[MAX_NUMNODES];
+
+	/* Last target selected in khugepaged_find_target_node() for this scan */
+	int last_target_node;
+};
+
 /**
  * struct mm_slot - hash lookup from mm to mm_slot
  * @hash: hash collision list
@@ -796,9 +804,7 @@ static void khugepaged_alloc_sleep(void)
 	remove_wait_queue(&khugepaged_wait, &wait);
 }
 
-static int khugepaged_node_load[MAX_NUMNODES];
-
-static bool khugepaged_scan_abort(int nid)
+static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
 {
 	int i;
 
@@ -810,11 +816,11 @@ static bool khugepaged_scan_abort(int nid)
 		return false;
 
 	/* If there is a count for this node already, it must be acceptable */
-	if (khugepaged_node_load[nid])
+	if (cc->node_load[nid])
 		return false;
 
 	for (i = 0; i < MAX_NUMNODES; i++) {
-		if (!khugepaged_node_load[i])
+		if (!cc->node_load[i])
 			continue;
 		if (node_distance(nid, i) > node_reclaim_distance)
 			return true;
@@ -829,28 +835,28 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
 }
 
 #ifdef CONFIG_NUMA
-static int khugepaged_find_target_node(void)
+static int khugepaged_find_target_node(struct collapse_control *cc)
 {
-	static int last_khugepaged_target_node = NUMA_NO_NODE;
 	int nid, target_node = 0, max_value = 0;
 
 	/* find first node with max normal pages hit */
 	for (nid = 0; nid < MAX_NUMNODES; nid++)
-		if (khugepaged_node_load[nid] > max_value) {
-			max_value = khugepaged_node_load[nid];
+		if (cc->node_load[nid] > max_value) {
+			max_value = cc->node_load[nid];
 			target_node = nid;
 		}
 
 	/* do some balance if several nodes have the same hit record */
-	if (target_node <= last_khugepaged_target_node)
-		for (nid = last_khugepaged_target_node + 1; nid < MAX_NUMNODES;
-				nid++)
-			if (max_value == khugepaged_node_load[nid]) {
+	if (target_node <= cc->last_target_node)
+		for (nid = cc->last_target_node + 1; nid < MAX_NUMNODES;
+		     nid++) {
+			if (max_value == cc->node_load[nid]) {
 				target_node = nid;
 				break;
 			}
+		}
 
-	last_khugepaged_target_node = target_node;
+	cc->last_target_node = target_node;
 	return target_node;
 }
 
@@ -888,7 +894,7 @@ khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
 	return *hpage;
 }
 #else
-static int khugepaged_find_target_node(void)
+static int khugepaged_find_target_node(struct collapse_control *cc)
 {
 	return 0;
 }
@@ -1248,7 +1254,8 @@ static void collapse_huge_page(struct mm_struct *mm,
 static int khugepaged_scan_pmd(struct mm_struct *mm,
 			       struct vm_area_struct *vma,
 			       unsigned long address,
-			       struct page **hpage)
+			       struct page **hpage,
+			       struct collapse_control *cc)
 {
 	pmd_t *pmd;
 	pte_t *pte, *_pte;
@@ -1266,7 +1273,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 	if (result != SCAN_SUCCEED)
 		goto out;
 
-	memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
+	memset(cc->node_load, 0, sizeof(cc->node_load));
 	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
 	for (_address = address, _pte = pte; _pte < pte+HPAGE_PMD_NR;
 	     _pte++, _address += PAGE_SIZE) {
@@ -1332,16 +1339,16 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 
 		/*
 		 * Record which node the original page is from and save this
-		 * information to khugepaged_node_load[].
+		 * information to cc->node_load[].
 		 * Khugepaged will allocate hugepage from the node has the max
 		 * hit record.
 		 */
 		node = page_to_nid(page);
-		if (khugepaged_scan_abort(node)) {
+		if (khugepaged_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			goto out_unmap;
 		}
-		khugepaged_node_load[node]++;
+		cc->node_load[node]++;
 		if (!PageLRU(page)) {
 			result = SCAN_PAGE_LRU;
 			goto out_unmap;
@@ -1392,7 +1399,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
 	if (ret) {
-		node = khugepaged_find_target_node();
+		node = khugepaged_find_target_node(cc);
 		/* collapse_huge_page will return with the mmap_lock released */
 		collapse_huge_page(mm, address, hpage, node,
 				referenced, unmapped);
@@ -2032,7 +2039,8 @@ static void collapse_file(struct mm_struct *mm,
 }
 
 static void khugepaged_scan_file(struct mm_struct *mm,
-		struct file *file, pgoff_t start, struct page **hpage)
+		struct file *file, pgoff_t start, struct page **hpage,
+		struct collapse_control *cc)
 {
 	struct page *page = NULL;
 	struct address_space *mapping = file->f_mapping;
@@ -2043,7 +2051,7 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 
 	present = 0;
 	swap = 0;
-	memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
+	memset(cc->node_load, 0, sizeof(cc->node_load));
 	rcu_read_lock();
 	xas_for_each(&xas, page, start + HPAGE_PMD_NR - 1) {
 		if (xas_retry(&xas, page))
@@ -2068,11 +2076,11 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 		}
 
 		node = page_to_nid(page);
-		if (khugepaged_scan_abort(node)) {
+		if (khugepaged_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			break;
 		}
-		khugepaged_node_load[node]++;
+		cc->node_load[node]++;
 
 		if (!PageLRU(page)) {
 			result = SCAN_PAGE_LRU;
@@ -2105,7 +2113,7 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 			result = SCAN_EXCEED_NONE_PTE;
 			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 		} else {
-			node = khugepaged_find_target_node();
+			node = khugepaged_find_target_node(cc);
 			collapse_file(mm, file, start, hpage, node);
 		}
 	}
@@ -2114,7 +2122,8 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 }
 #else
 static void khugepaged_scan_file(struct mm_struct *mm,
-		struct file *file, pgoff_t start, struct page **hpage)
+		struct file *file, pgoff_t start, struct page **hpage,
+		struct collapse_control *cc)
 {
 	BUILD_BUG();
 }
@@ -2125,7 +2134,8 @@ static void khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)
 #endif
 
 static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
-					    struct page **hpage)
+					    struct page **hpage,
+					    struct collapse_control *cc)
 	__releases(&khugepaged_mm_lock)
 	__acquires(&khugepaged_mm_lock)
 {
@@ -2201,12 +2211,12 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 
 				mmap_read_unlock(mm);
 				ret = 1;
-				khugepaged_scan_file(mm, file, pgoff, hpage);
+				khugepaged_scan_file(mm, file, pgoff, hpage, cc);
 				fput(file);
 			} else {
 				ret = khugepaged_scan_pmd(mm, vma,
 						khugepaged_scan.address,
-						hpage);
+						hpage, cc);
 			}
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
@@ -2262,7 +2272,7 @@ static int khugepaged_wait_event(void)
 		kthread_should_stop();
 }
 
-static void khugepaged_do_scan(void)
+static void khugepaged_do_scan(struct collapse_control *cc)
 {
 	struct page *hpage = NULL;
 	unsigned int progress = 0, pass_through_head = 0;
@@ -2286,7 +2296,7 @@ static void khugepaged_do_scan(void)
 		if (khugepaged_has_work() &&
 		    pass_through_head < 2)
 			progress += khugepaged_scan_mm_slot(pages - progress,
-							    &hpage);
+							    &hpage, cc);
 		else
 			progress = pages;
 		spin_unlock(&khugepaged_mm_lock);
@@ -2325,12 +2335,15 @@ static void khugepaged_wait_work(void)
 static int khugepaged(void *none)
 {
 	struct mm_slot *mm_slot;
+	struct collapse_control cc = {
+		.last_target_node = NUMA_NO_NODE,
+	};
 
 	set_freezable();
 	set_user_nice(current, MAX_NICE);
 
 	while (!kthread_should_stop()) {
-		khugepaged_do_scan();
+		khugepaged_do_scan(&cc);
 		khugepaged_wait_work();
 	}
 
-- 
2.35.1.1178.g4f1659d476-goog




* [PATCH 03/12] mm/khugepaged: make hugepage allocation context-specific
  2022-04-10 13:54 [PATCH 00/12] mm: userspace hugepage collapse Zach O'Keefe
  2022-04-10 13:54 ` [PATCH 01/12] mm/khugepaged: record SCAN_PMD_MAPPED when scan_pmd() finds THP Zach O'Keefe
  2022-04-10 13:54 ` [PATCH 02/12] mm/khugepaged: add struct collapse_control Zach O'Keefe
@ 2022-04-10 13:54 ` Zach O'Keefe
  2022-04-10 17:47   ` kernel test robot
  2022-04-10 17:47   ` kernel test robot
  2022-04-10 13:54 ` [PATCH 04/12] mm/khugepaged: add struct collapse_result Zach O'Keefe
                   ` (8 subsequent siblings)
  11 siblings, 2 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-10 13:54 UTC (permalink / raw)
  To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
	Thomas Bogendoerfer, Zach O'Keefe

Add a hugepage allocation context to struct collapse_control, allowing
different collapse contexts to allocate hugepages differently.  For
example, khugepaged decides to allocate differently in NUMA and UMA
configurations, and other collapse contexts shouldn't be coupled to
this decision.

Additionally, move the [pre]allocated hugepage pointer into
struct collapse_control.

Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 mm/khugepaged.c | 96 ++++++++++++++++++++++++-------------------------
 1 file changed, 48 insertions(+), 48 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index eca61eb88dda..180d99a6b571 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -92,6 +92,10 @@ struct collapse_control {
 
 	/* Last target selected in khugepaged_find_target_node() for this scan */
 	int last_target_node;
+
+	struct page *hpage;
+	struct page* (*alloc_hpage)(struct collapse_control *cc, gfp_t gfp,
+				    int node);
 };
 
 /**
@@ -877,21 +881,21 @@ static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
 	return true;
 }
 
-static struct page *
-khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
+static struct page *khugepaged_alloc_page(struct collapse_control *cc,
+					  gfp_t gfp, int node)
 {
-	VM_BUG_ON_PAGE(*hpage, *hpage);
+	VM_BUG_ON_PAGE(cc->hpage, cc->hpage);
 
-	*hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);
-	if (unlikely(!*hpage)) {
+	cc->hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);
+	if (unlikely(!cc->hpage)) {
 		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
-		*hpage = ERR_PTR(-ENOMEM);
+		cc->hpage = ERR_PTR(-ENOMEM);
 		return NULL;
 	}
 
-	prep_transhuge_page(*hpage);
+	prep_transhuge_page(cc->hpage);
 	count_vm_event(THP_COLLAPSE_ALLOC);
-	return *hpage;
+	return cc->hpage;
 }
 #else
 static int khugepaged_find_target_node(struct collapse_control *cc)
@@ -953,12 +957,12 @@ static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
 	return true;
 }
 
-static struct page *
-khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
+static struct page *khugepaged_alloc_page(struct collapse_control *cc,
+					  gfp_t gfp, int node)
 {
-	VM_BUG_ON(!*hpage);
+	VM_BUG_ON(!cc->hpage);
 
-	return  *hpage;
+	return cc->hpage;
 }
 #endif
 
@@ -1080,10 +1084,9 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 	return true;
 }
 
-static void collapse_huge_page(struct mm_struct *mm,
-				   unsigned long address,
-				   struct page **hpage,
-				   int node, int referenced, int unmapped)
+static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
+			       struct collapse_control *cc, int referenced,
+			       int unmapped)
 {
 	LIST_HEAD(compound_pagelist);
 	pmd_t *pmd, _pmd;
@@ -1096,6 +1099,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	struct mmu_notifier_range range;
 	gfp_t gfp;
 	const struct cpumask *cpumask;
+	int node;
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
 
@@ -1110,13 +1114,14 @@ static void collapse_huge_page(struct mm_struct *mm,
 	 */
 	mmap_read_unlock(mm);
 
+	node = khugepaged_find_target_node(cc);
 	/* sched to specified node before huage page memory copy */
 	if (task_node(current) != node) {
 		cpumask = cpumask_of_node(node);
 		if (!cpumask_empty(cpumask))
 			set_cpus_allowed_ptr(current, cpumask);
 	}
-	new_page = khugepaged_alloc_page(hpage, gfp, node);
+	new_page = cc->alloc_hpage(cc, gfp, node);
 	if (!new_page) {
 		result = SCAN_ALLOC_HUGE_PAGE_FAIL;
 		goto out_nolock;
@@ -1238,15 +1243,15 @@ static void collapse_huge_page(struct mm_struct *mm,
 	update_mmu_cache_pmd(vma, address, pmd);
 	spin_unlock(pmd_ptl);
 
-	*hpage = NULL;
+	cc->hpage = NULL;
 
 	khugepaged_pages_collapsed++;
 	result = SCAN_SUCCEED;
 out_up_write:
 	mmap_write_unlock(mm);
 out_nolock:
-	if (!IS_ERR_OR_NULL(*hpage))
-		mem_cgroup_uncharge(page_folio(*hpage));
+	if (!IS_ERR_OR_NULL(cc->hpage))
+		mem_cgroup_uncharge(page_folio(cc->hpage));
 	trace_mm_collapse_huge_page(mm, isolated, result);
 	return;
 }
@@ -1254,7 +1259,6 @@ static void collapse_huge_page(struct mm_struct *mm,
 static int khugepaged_scan_pmd(struct mm_struct *mm,
 			       struct vm_area_struct *vma,
 			       unsigned long address,
-			       struct page **hpage,
 			       struct collapse_control *cc)
 {
 	pmd_t *pmd;
@@ -1399,10 +1403,8 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
 	if (ret) {
-		node = khugepaged_find_target_node(cc);
 		/* collapse_huge_page will return with the mmap_lock released */
-		collapse_huge_page(mm, address, hpage, node,
-				referenced, unmapped);
+		collapse_huge_page(mm, address, cc, referenced, unmapped);
 	}
 out:
 	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
@@ -1655,8 +1657,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
  * @mm: process address space where collapse happens
  * @file: file that collapse on
  * @start: collapse start address
- * @hpage: new allocated huge page for collapse
- * @node: appointed node the new huge page allocate from
+ * @collapse_control: collapse context and scratchpad
  *
  * Basic scheme is simple, details are more complex:
  *  - allocate and lock a new huge page;
@@ -1674,8 +1675,8 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
  *    + unlock and free huge page;
  */
 static void collapse_file(struct mm_struct *mm,
-		struct file *file, pgoff_t start,
-		struct page **hpage, int node)
+			  struct file *file, pgoff_t start,
+			  struct collapse_control *cc)
 {
 	struct address_space *mapping = file->f_mapping;
 	gfp_t gfp;
@@ -1685,15 +1686,16 @@ static void collapse_file(struct mm_struct *mm,
 	XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
 	int nr_none = 0, result = SCAN_SUCCEED;
 	bool is_shmem = shmem_file(file);
-	int nr;
+	int nr, node;
 
 	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
 	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
 
 	/* Only allocate from the target node */
 	gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
+	node = khugepaged_find_target_node(cc);
 
-	new_page = khugepaged_alloc_page(hpage, gfp, node);
+	new_page = cc->alloc_hpage(cc, gfp, node);
 	if (!new_page) {
 		result = SCAN_ALLOC_HUGE_PAGE_FAIL;
 		goto out;
@@ -1986,7 +1988,7 @@ static void collapse_file(struct mm_struct *mm,
 		 * Remove pte page tables, so we can re-fault the page as huge.
 		 */
 		retract_page_tables(mapping, start);
-		*hpage = NULL;
+		cc->hpage = NULL;
 
 		khugepaged_pages_collapsed++;
 	} else {
@@ -2033,14 +2035,14 @@ static void collapse_file(struct mm_struct *mm,
 	unlock_page(new_page);
 out:
 	VM_BUG_ON(!list_empty(&pagelist));
-	if (!IS_ERR_OR_NULL(*hpage))
-		mem_cgroup_uncharge(page_folio(*hpage));
+	if (!IS_ERR_OR_NULL(cc->hpage))
+		mem_cgroup_uncharge(page_folio(cc->hpage));
 	/* TODO: tracepoints */
 }
 
 static void khugepaged_scan_file(struct mm_struct *mm,
-		struct file *file, pgoff_t start, struct page **hpage,
-		struct collapse_control *cc)
+				 struct file *file, pgoff_t start,
+				 struct collapse_control *cc)
 {
 	struct page *page = NULL;
 	struct address_space *mapping = file->f_mapping;
@@ -2113,8 +2115,7 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 			result = SCAN_EXCEED_NONE_PTE;
 			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 		} else {
-			node = khugepaged_find_target_node(cc);
-			collapse_file(mm, file, start, hpage, node);
+			collapse_file(mm, file, start, cc);
 		}
 	}
 
@@ -2122,8 +2123,8 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 }
 #else
 static void khugepaged_scan_file(struct mm_struct *mm,
-		struct file *file, pgoff_t start, struct page **hpage,
-		struct collapse_control *cc)
+				 struct file *file, pgoff_t start,
+				 struct collapse_control *cc)
 {
 	BUILD_BUG();
 }
@@ -2134,7 +2135,6 @@ static void khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)
 #endif
 
 static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
-					    struct page **hpage,
 					    struct collapse_control *cc)
 	__releases(&khugepaged_mm_lock)
 	__acquires(&khugepaged_mm_lock)
@@ -2211,12 +2211,11 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 
 				mmap_read_unlock(mm);
 				ret = 1;
-				khugepaged_scan_file(mm, file, pgoff, hpage, cc);
+				khugepaged_scan_file(mm, file, pgoff, cc);
 				fput(file);
 			} else {
 				ret = khugepaged_scan_pmd(mm, vma,
-						khugepaged_scan.address,
-						hpage, cc);
+						khugepaged_scan.address, cc);
 			}
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
@@ -2274,15 +2273,15 @@ static int khugepaged_wait_event(void)
 
 static void khugepaged_do_scan(struct collapse_control *cc)
 {
-	struct page *hpage = NULL;
 	unsigned int progress = 0, pass_through_head = 0;
 	unsigned int pages = READ_ONCE(khugepaged_pages_to_scan);
 	bool wait = true;
 
+	cc->hpage = NULL;
 	lru_add_drain_all();
 
 	while (progress < pages) {
-		if (!khugepaged_prealloc_page(&hpage, &wait))
+		if (!khugepaged_prealloc_page(&cc->hpage, &wait))
 			break;
 
 		cond_resched();
@@ -2296,14 +2295,14 @@ static void khugepaged_do_scan(struct collapse_control *cc)
 		if (khugepaged_has_work() &&
 		    pass_through_head < 2)
 			progress += khugepaged_scan_mm_slot(pages - progress,
-							    &hpage, cc);
+							    cc);
 		else
 			progress = pages;
 		spin_unlock(&khugepaged_mm_lock);
 	}
 
-	if (!IS_ERR_OR_NULL(hpage))
-		put_page(hpage);
+	if (!IS_ERR_OR_NULL(cc->hpage))
+		put_page(cc->hpage);
 }
 
 static bool khugepaged_should_wakeup(void)
@@ -2337,6 +2336,7 @@ static int khugepaged(void *none)
 	struct mm_slot *mm_slot;
 	struct collapse_control cc = {
 		.last_target_node = NUMA_NO_NODE,
+		.alloc_hpage = &khugepaged_alloc_page,
 	};
 
 	set_freezable();
-- 
2.35.1.1178.g4f1659d476-goog




* [PATCH 04/12] mm/khugepaged: add struct collapse_result
  2022-04-10 13:54 [PATCH 00/12] mm: userspace hugepage collapse Zach O'Keefe
                   ` (2 preceding siblings ...)
  2022-04-10 13:54 ` [PATCH 03/12] mm/khugepaged: make hugepage allocation context-specific Zach O'Keefe
@ 2022-04-10 13:54 ` Zach O'Keefe
  2022-04-10 13:54 ` [PATCH 05/12] mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse Zach O'Keefe
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-10 13:54 UTC (permalink / raw)
  To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
	Thomas Bogendoerfer, Zach O'Keefe

Add struct collapse_result which aggregates data from a single
khugepaged_scan_pmd() or khugepaged_scan_file() request.  Change
khugepaged to take action based on this returned data instead of deep
within the collapsing functions themselves.

Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 mm/khugepaged.c | 186 ++++++++++++++++++++++++++----------------------
 1 file changed, 100 insertions(+), 86 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 180d99a6b571..ed025dbbd7e6 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -98,6 +98,14 @@ struct collapse_control {
 				    int node);
 };
 
+/* Gather information from one khugepaged_scan_[pmd|file]() request */
+struct collapse_result {
+	enum scan_result result;
+
+	/* Was mmap_lock dropped during request? */
+	bool dropped_mmap_lock;
+};
+
 /**
  * struct mm_slot - hash lookup from mm to mm_slot
  * @hash: hash collision list
@@ -742,13 +750,13 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		result = SCAN_SUCCEED;
 		trace_mm_collapse_huge_page_isolate(page, none_or_zero,
 						    referenced, writable, result);
-		return 1;
+		return SCAN_SUCCEED;
 	}
 out:
 	release_pte_pages(pte, _pte, compound_pagelist);
 	trace_mm_collapse_huge_page_isolate(page, none_or_zero,
 					    referenced, writable, result);
-	return 0;
+	return result;
 }
 
 static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
@@ -1086,7 +1094,7 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 
 static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 			       struct collapse_control *cc, int referenced,
-			       int unmapped)
+			       int unmapped, struct collapse_result *cr)
 {
 	LIST_HEAD(compound_pagelist);
 	pmd_t *pmd, _pmd;
@@ -1094,7 +1102,6 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	pgtable_t pgtable;
 	struct page *new_page;
 	spinlock_t *pmd_ptl, *pte_ptl;
-	int isolated = 0, result = 0;
 	struct vm_area_struct *vma;
 	struct mmu_notifier_range range;
 	gfp_t gfp;
@@ -1102,6 +1109,7 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	int node;
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	cr->result = SCAN_FAIL;
 
 	/* Only allocate from the target node */
 	gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
@@ -1113,6 +1121,7 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 * that. We will recheck the vma after taking it again in write mode.
 	 */
 	mmap_read_unlock(mm);
+	cr->dropped_mmap_lock = true;
 
 	node = khugepaged_find_target_node(cc);
 	/* sched to specified node before huage page memory copy */
@@ -1123,26 +1132,26 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	}
 	new_page = cc->alloc_hpage(cc, gfp, node);
 	if (!new_page) {
-		result = SCAN_ALLOC_HUGE_PAGE_FAIL;
+		cr->result = SCAN_ALLOC_HUGE_PAGE_FAIL;
 		goto out_nolock;
 	}
 
 	if (unlikely(mem_cgroup_charge(page_folio(new_page), mm, gfp))) {
-		result = SCAN_CGROUP_CHARGE_FAIL;
+		cr->result = SCAN_CGROUP_CHARGE_FAIL;
 		goto out_nolock;
 	}
 	count_memcg_page_event(new_page, THP_COLLAPSE_ALLOC);
 
 	mmap_read_lock(mm);
-	result = hugepage_vma_revalidate(mm, address, &vma);
-	if (result) {
+	cr->result = hugepage_vma_revalidate(mm, address, &vma);
+	if (cr->result) {
 		mmap_read_unlock(mm);
 		goto out_nolock;
 	}
 
 	pmd = mm_find_pmd(mm, address);
 	if (!pmd) {
-		result = SCAN_PMD_NULL;
+		cr->result = SCAN_PMD_NULL;
 		mmap_read_unlock(mm);
 		goto out_nolock;
 	}
@@ -1165,8 +1174,8 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 * handled by the anon_vma lock + PG_lock.
 	 */
 	mmap_write_lock(mm);
-	result = hugepage_vma_revalidate(mm, address, &vma);
-	if (result)
+	cr->result = hugepage_vma_revalidate(mm, address, &vma);
+	if (cr->result)
 		goto out_up_write;
 	/* check if the pmd is still valid */
 	if (mm_find_pmd(mm, address) != pmd)
@@ -1193,11 +1202,11 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	mmu_notifier_invalidate_range_end(&range);
 
 	spin_lock(pte_ptl);
-	isolated = __collapse_huge_page_isolate(vma, address, pte,
-			&compound_pagelist);
+	cr->result =  __collapse_huge_page_isolate(vma, address, pte,
+						   &compound_pagelist);
 	spin_unlock(pte_ptl);
 
-	if (unlikely(!isolated)) {
+	if (unlikely(cr->result != SCAN_SUCCEED)) {
 		pte_unmap(pte);
 		spin_lock(pmd_ptl);
 		BUG_ON(!pmd_none(*pmd));
@@ -1209,7 +1218,7 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 		pmd_populate(mm, pmd, pmd_pgtable(_pmd));
 		spin_unlock(pmd_ptl);
 		anon_vma_unlock_write(vma->anon_vma);
-		result = SCAN_FAIL;
+		cr->result = SCAN_FAIL;
 		goto out_up_write;
 	}
 
@@ -1245,25 +1254,25 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 
 	cc->hpage = NULL;
 
-	khugepaged_pages_collapsed++;
-	result = SCAN_SUCCEED;
+	cr->result = SCAN_SUCCEED;
 out_up_write:
 	mmap_write_unlock(mm);
 out_nolock:
 	if (!IS_ERR_OR_NULL(cc->hpage))
 		mem_cgroup_uncharge(page_folio(cc->hpage));
-	trace_mm_collapse_huge_page(mm, isolated, result);
+	trace_mm_collapse_huge_page(mm, cr->result == SCAN_SUCCEED, cr->result);
 	return;
 }
 
-static int khugepaged_scan_pmd(struct mm_struct *mm,
-			       struct vm_area_struct *vma,
-			       unsigned long address,
-			       struct collapse_control *cc)
+static void khugepaged_scan_pmd(struct mm_struct *mm,
+				struct vm_area_struct *vma,
+				unsigned long address,
+				struct collapse_control *cc,
+				struct collapse_result *cr)
 {
 	pmd_t *pmd;
 	pte_t *pte, *_pte;
-	int ret = 0, result = 0, referenced = 0;
+	int referenced = 0;
 	int none_or_zero = 0, shared = 0;
 	struct page *page = NULL;
 	unsigned long _address;
@@ -1272,9 +1281,10 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 	bool writable = false;
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	cr->result = SCAN_FAIL;
 
-	result = find_pmd_or_thp_or_none(mm, address, &pmd);
-	if (result != SCAN_SUCCEED)
+	cr->result = find_pmd_or_thp_or_none(mm, address, &pmd);
+	if (cr->result != SCAN_SUCCEED)
 		goto out;
 
 	memset(cc->node_load, 0, sizeof(cc->node_load));
@@ -1290,12 +1300,12 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 				 * comment below for pte_uffd_wp().
 				 */
 				if (pte_swp_uffd_wp(pteval)) {
-					result = SCAN_PTE_UFFD_WP;
+					cr->result = SCAN_PTE_UFFD_WP;
 					goto out_unmap;
 				}
 				continue;
 			} else {
-				result = SCAN_EXCEED_SWAP_PTE;
+				cr->result = SCAN_EXCEED_SWAP_PTE;
 				count_vm_event(THP_SCAN_EXCEED_SWAP_PTE);
 				goto out_unmap;
 			}
@@ -1305,7 +1315,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 			    ++none_or_zero <= khugepaged_max_ptes_none) {
 				continue;
 			} else {
-				result = SCAN_EXCEED_NONE_PTE;
+				cr->result = SCAN_EXCEED_NONE_PTE;
 				count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 				goto out_unmap;
 			}
@@ -1320,7 +1330,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 			 * userfault messages that falls outside of
 			 * the registered range.  So, just be simple.
 			 */
-			result = SCAN_PTE_UFFD_WP;
+			cr->result = SCAN_PTE_UFFD_WP;
 			goto out_unmap;
 		}
 		if (pte_write(pteval))
@@ -1328,13 +1338,13 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 
 		page = vm_normal_page(vma, _address, pteval);
 		if (unlikely(!page)) {
-			result = SCAN_PAGE_NULL;
+			cr->result = SCAN_PAGE_NULL;
 			goto out_unmap;
 		}
 
 		if (page_mapcount(page) > 1 &&
 				++shared > khugepaged_max_ptes_shared) {
-			result = SCAN_EXCEED_SHARED_PTE;
+			cr->result = SCAN_EXCEED_SHARED_PTE;
 			count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
 			goto out_unmap;
 		}
@@ -1349,20 +1359,20 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		 */
 		node = page_to_nid(page);
 		if (khugepaged_scan_abort(node, cc)) {
-			result = SCAN_SCAN_ABORT;
+			cr->result = SCAN_SCAN_ABORT;
 			goto out_unmap;
 		}
 		cc->node_load[node]++;
 		if (!PageLRU(page)) {
-			result = SCAN_PAGE_LRU;
+			cr->result = SCAN_PAGE_LRU;
 			goto out_unmap;
 		}
 		if (PageLocked(page)) {
-			result = SCAN_PAGE_LOCK;
+			cr->result = SCAN_PAGE_LOCK;
 			goto out_unmap;
 		}
 		if (!PageAnon(page)) {
-			result = SCAN_PAGE_ANON;
+			cr->result = SCAN_PAGE_ANON;
 			goto out_unmap;
 		}
 
@@ -1384,7 +1394,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		 * will be done again later the risk seems low.
 		 */
 		if (!is_refcount_suitable(page)) {
-			result = SCAN_PAGE_COUNT;
+			cr->result = SCAN_PAGE_COUNT;
 			goto out_unmap;
 		}
 		if (pte_young(pteval) ||
@@ -1393,23 +1403,20 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 			referenced++;
 	}
 	if (!writable) {
-		result = SCAN_PAGE_RO;
+		cr->result = SCAN_PAGE_RO;
 	} else if (!referenced || (unmapped && referenced < HPAGE_PMD_NR/2)) {
-		result = SCAN_LACK_REFERENCED_PAGE;
+		cr->result = SCAN_LACK_REFERENCED_PAGE;
 	} else {
-		result = SCAN_SUCCEED;
-		ret = 1;
+		cr->result = SCAN_SUCCEED;
 	}
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
-	if (ret) {
+	if (cr->result == SCAN_SUCCEED)
 		/* collapse_huge_page will return with the mmap_lock released */
-		collapse_huge_page(mm, address, cc, referenced, unmapped);
-	}
+		collapse_huge_page(mm, address, cc, referenced, unmapped, cr);
 out:
 	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
-				     none_or_zero, result, unmapped);
-	return ret;
+				     none_or_zero, cr->result, unmapped);
 }
 
 static void collect_mm_slot(struct mm_slot *mm_slot)
@@ -1676,7 +1683,9 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
  */
 static void collapse_file(struct mm_struct *mm,
 			  struct file *file, pgoff_t start,
-			  struct collapse_control *cc)
+			  struct collapse_control *cc,
+			  struct collapse_result *cr)
+
 {
 	struct address_space *mapping = file->f_mapping;
 	gfp_t gfp;
@@ -1684,25 +1693,27 @@ static void collapse_file(struct mm_struct *mm,
 	pgoff_t index, end = start + HPAGE_PMD_NR;
 	LIST_HEAD(pagelist);
 	XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
-	int nr_none = 0, result = SCAN_SUCCEED;
+	int nr_none = 0;
 	bool is_shmem = shmem_file(file);
 	int nr, node;
 
 	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
 	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
 
+	cr->result = SCAN_SUCCEED;
+
 	/* Only allocate from the target node */
 	gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
 	node = khugepaged_find_target_node(cc);
 
 	new_page = cc->alloc_hpage(cc, gfp, node);
 	if (!new_page) {
-		result = SCAN_ALLOC_HUGE_PAGE_FAIL;
+		cr->result = SCAN_ALLOC_HUGE_PAGE_FAIL;
 		goto out;
 	}
 
 	if (unlikely(mem_cgroup_charge(page_folio(new_page), mm, gfp))) {
-		result = SCAN_CGROUP_CHARGE_FAIL;
+		cr->result = SCAN_CGROUP_CHARGE_FAIL;
 		goto out;
 	}
 	count_memcg_page_event(new_page, THP_COLLAPSE_ALLOC);
@@ -1718,7 +1729,7 @@ static void collapse_file(struct mm_struct *mm,
 			break;
 		xas_unlock_irq(&xas);
 		if (!xas_nomem(&xas, GFP_KERNEL)) {
-			result = SCAN_FAIL;
+			cr->result = SCAN_FAIL;
 			goto out;
 		}
 	} while (1);
@@ -1749,13 +1760,13 @@ static void collapse_file(struct mm_struct *mm,
 				 */
 				if (index == start) {
 					if (!xas_next_entry(&xas, end - 1)) {
-						result = SCAN_TRUNCATED;
+						cr->result = SCAN_TRUNCATED;
 						goto xa_locked;
 					}
 					xas_set(&xas, index);
 				}
 				if (!shmem_charge(mapping->host, 1)) {
-					result = SCAN_FAIL;
+					cr->result = SCAN_FAIL;
 					goto xa_locked;
 				}
 				xas_store(&xas, new_page);
@@ -1768,14 +1779,14 @@ static void collapse_file(struct mm_struct *mm,
 				/* swap in or instantiate fallocated page */
 				if (shmem_getpage(mapping->host, index, &page,
 						  SGP_NOALLOC)) {
-					result = SCAN_FAIL;
+					cr->result = SCAN_FAIL;
 					goto xa_unlocked;
 				}
 			} else if (trylock_page(page)) {
 				get_page(page);
 				xas_unlock_irq(&xas);
 			} else {
-				result = SCAN_PAGE_LOCK;
+				cr->result = SCAN_PAGE_LOCK;
 				goto xa_locked;
 			}
 		} else {	/* !is_shmem */
@@ -1788,7 +1799,7 @@ static void collapse_file(struct mm_struct *mm,
 				lru_add_drain();
 				page = find_lock_page(mapping, index);
 				if (unlikely(page == NULL)) {
-					result = SCAN_FAIL;
+					cr->result = SCAN_FAIL;
 					goto xa_unlocked;
 				}
 			} else if (PageDirty(page)) {
@@ -1807,17 +1818,17 @@ static void collapse_file(struct mm_struct *mm,
 				 */
 				xas_unlock_irq(&xas);
 				filemap_flush(mapping);
-				result = SCAN_FAIL;
+				cr->result = SCAN_FAIL;
 				goto xa_unlocked;
 			} else if (PageWriteback(page)) {
 				xas_unlock_irq(&xas);
-				result = SCAN_FAIL;
+				cr->result = SCAN_FAIL;
 				goto xa_unlocked;
 			} else if (trylock_page(page)) {
 				get_page(page);
 				xas_unlock_irq(&xas);
 			} else {
-				result = SCAN_PAGE_LOCK;
+				cr->result = SCAN_PAGE_LOCK;
 				goto xa_locked;
 			}
 		}
@@ -1830,7 +1841,7 @@ static void collapse_file(struct mm_struct *mm,
 
 		/* make sure the page is up to date */
 		if (unlikely(!PageUptodate(page))) {
-			result = SCAN_FAIL;
+			cr->result = SCAN_FAIL;
 			goto out_unlock;
 		}
 
@@ -1839,12 +1850,12 @@ static void collapse_file(struct mm_struct *mm,
 		 * we locked the first page, then a THP might be there already.
 		 */
 		if (PageTransCompound(page)) {
-			result = SCAN_PAGE_COMPOUND;
+			cr->result = SCAN_PAGE_COMPOUND;
 			goto out_unlock;
 		}
 
 		if (page_mapping(page) != mapping) {
-			result = SCAN_TRUNCATED;
+			cr->result = SCAN_TRUNCATED;
 			goto out_unlock;
 		}
 
@@ -1855,18 +1866,18 @@ static void collapse_file(struct mm_struct *mm,
 			 * page is dirty because it hasn't been flushed
 			 * since first write.
 			 */
-			result = SCAN_FAIL;
+			cr->result = SCAN_FAIL;
 			goto out_unlock;
 		}
 
 		if (isolate_lru_page(page)) {
-			result = SCAN_DEL_PAGE_LRU;
+			cr->result = SCAN_DEL_PAGE_LRU;
 			goto out_unlock;
 		}
 
 		if (page_has_private(page) &&
 		    !try_to_release_page(page, GFP_KERNEL)) {
-			result = SCAN_PAGE_HAS_PRIVATE;
+			cr->result = SCAN_PAGE_HAS_PRIVATE;
 			putback_lru_page(page);
 			goto out_unlock;
 		}
@@ -1887,7 +1898,7 @@ static void collapse_file(struct mm_struct *mm,
 		 *  - one from isolate_lru_page;
 		 */
 		if (!page_ref_freeze(page, 3)) {
-			result = SCAN_PAGE_COUNT;
+			cr->result = SCAN_PAGE_COUNT;
 			xas_unlock_irq(&xas);
 			putback_lru_page(page);
 			goto out_unlock;
@@ -1922,7 +1933,7 @@ static void collapse_file(struct mm_struct *mm,
 		 */
 		smp_mb();
 		if (inode_is_open_for_write(mapping->host)) {
-			result = SCAN_FAIL;
+			cr->result = SCAN_FAIL;
 			__mod_lruvec_page_state(new_page, NR_FILE_THPS, -nr);
 			filemap_nr_thps_dec(mapping);
 			goto xa_locked;
@@ -1949,7 +1960,7 @@ static void collapse_file(struct mm_struct *mm,
 	 */
 	try_to_unmap_flush();
 
-	if (result == SCAN_SUCCEED) {
+	if (cr->result == SCAN_SUCCEED) {
 		struct page *page, *tmp;
 
 		/*
@@ -1989,8 +2000,6 @@ static void collapse_file(struct mm_struct *mm,
 		 */
 		retract_page_tables(mapping, start);
 		cc->hpage = NULL;
-
-		khugepaged_pages_collapsed++;
 	} else {
 		struct page *page;
 
@@ -2042,15 +2051,16 @@ static void collapse_file(struct mm_struct *mm,
 
 static void khugepaged_scan_file(struct mm_struct *mm,
 				 struct file *file, pgoff_t start,
-				 struct collapse_control *cc)
+				 struct collapse_control *cc,
+				 struct collapse_result *cr)
 {
 	struct page *page = NULL;
 	struct address_space *mapping = file->f_mapping;
 	XA_STATE(xas, &mapping->i_pages, start);
 	int present, swap;
 	int node = NUMA_NO_NODE;
-	int result = SCAN_SUCCEED;
 
+	cr->result = SCAN_SUCCEED;
 	present = 0;
 	swap = 0;
 	memset(cc->node_load, 0, sizeof(cc->node_load));
@@ -2061,7 +2071,7 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 
 		if (xa_is_value(page)) {
 			if (++swap > khugepaged_max_ptes_swap) {
-				result = SCAN_EXCEED_SWAP_PTE;
+				cr->result = SCAN_EXCEED_SWAP_PTE;
 				count_vm_event(THP_SCAN_EXCEED_SWAP_PTE);
 				break;
 			}
@@ -2073,25 +2083,25 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 		 * into a PMD sized page
 		 */
 		if (PageTransCompound(page)) {
-			result = SCAN_PAGE_COMPOUND;
+			cr->result = SCAN_PAGE_COMPOUND;
 			break;
 		}
 
 		node = page_to_nid(page);
 		if (khugepaged_scan_abort(node, cc)) {
-			result = SCAN_SCAN_ABORT;
+			cr->result = SCAN_SCAN_ABORT;
 			break;
 		}
 		cc->node_load[node]++;
 
 		if (!PageLRU(page)) {
-			result = SCAN_PAGE_LRU;
+			cr->result = SCAN_PAGE_LRU;
 			break;
 		}
 
 		if (page_count(page) !=
 		    1 + page_mapcount(page) + page_has_private(page)) {
-			result = SCAN_PAGE_COUNT;
+			cr->result = SCAN_PAGE_COUNT;
 			break;
 		}
 
@@ -2110,12 +2120,12 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 	}
 	rcu_read_unlock();
 
-	if (result == SCAN_SUCCEED) {
+	if (cr->result == SCAN_SUCCEED) {
 		if (present < HPAGE_PMD_NR - khugepaged_max_ptes_none) {
-			result = SCAN_EXCEED_NONE_PTE;
+			cr->result = SCAN_EXCEED_NONE_PTE;
 			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 		} else {
-			collapse_file(mm, file, start, cc);
+			collapse_file(mm, file, start, cc, cr);
 		}
 	}
 
@@ -2124,7 +2134,8 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 #else
 static void khugepaged_scan_file(struct mm_struct *mm,
 				 struct file *file, pgoff_t start,
-				 struct collapse_control *cc)
+				 struct collapse_control *cc,
+				 struct collapse_result *cr)
 {
 	BUILD_BUG();
 }
@@ -2196,7 +2207,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 			goto skip;
 
 		while (khugepaged_scan.address < hend) {
-			int ret;
+			struct collapse_result cr = {0};
 			cond_resched();
 			if (unlikely(khugepaged_test_exit(mm)))
 				goto breakouterloop;
@@ -2210,17 +2221,20 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 						khugepaged_scan.address);
 
 				mmap_read_unlock(mm);
-				ret = 1;
-				khugepaged_scan_file(mm, file, pgoff, cc);
+				cr.dropped_mmap_lock = true;
+				khugepaged_scan_file(mm, file, pgoff, cc, &cr);
 				fput(file);
 			} else {
-				ret = khugepaged_scan_pmd(mm, vma,
-						khugepaged_scan.address, cc);
+				khugepaged_scan_pmd(mm, vma,
+						    khugepaged_scan.address,
+						    cc, &cr);
 			}
+			if (cr.result == SCAN_SUCCEED)
+				++khugepaged_pages_collapsed;
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
 			progress += HPAGE_PMD_NR;
-			if (ret)
+			if (cr.dropped_mmap_lock)
 				/* we released mmap_lock so break loop */
 				goto breakouterloop_mmap_lock;
 			if (progress >= pages)
-- 
2.35.1.1178.g4f1659d476-goog




* [PATCH 05/12] mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse
  2022-04-10 13:54 [PATCH 00/12] mm: userspace hugepage collapse Zach O'Keefe
                   ` (3 preceding siblings ...)
  2022-04-10 13:54 ` [PATCH 04/12] mm/khugepaged: add struct collapse_result Zach O'Keefe
@ 2022-04-10 13:54 ` Zach O'Keefe
  2022-04-10 16:04   ` kernel test robot
  2022-04-10 16:14   ` kernel test robot
  2022-04-10 13:54 ` [PATCH 06/12] mm/khugepaged: remove khugepaged prefix from shared collapse functions Zach O'Keefe
                   ` (6 subsequent siblings)
  11 siblings, 2 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-10 13:54 UTC (permalink / raw)
  To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
	Thomas Bogendoerfer, Zach O'Keefe

This idea was introduced by David Rientjes[1], and the semantics and
implementation were introduced and discussed in a previous PATCH RFC[2].

Introduce a new madvise mode, MADV_COLLAPSE, that allows users to request a
synchronous collapse of memory at their own expense.

The benefits of this approach are:

* CPU is charged to the process that wants to spend the cycles for the
  THP
* avoid unpredictable timing of khugepaged collapse

Immediate users of this new functionality include:

* immediately back executable text with hugepages.  The current
  support provided by CONFIG_READ_ONLY_THP_FOR_FS may take too long on
  a large system.
* malloc implementations that manage memory in hugepage-sized chunks,
  but sometimes subrelease memory back to the system in native-sized
  chunks via MADV_DONTNEED, zapping the pmd.  Later, when the memory
  is hot, the implementation could madvise(MADV_COLLAPSE) to re-back
  the memory with a THP to regain TLB performance (see the sketch
  below).
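
  A minimal sketch of that lifecycle, assuming a 2MiB pmd size and the
  MADV_COLLAPSE value introduced by this patch (the helper names are
  hypothetical, for illustration only):

	#include <stddef.h>
	#include <sys/mman.h>

	#ifndef MADV_COLLAPSE
	#define MADV_COLLAPSE 25	/* introduced by this patch */
	#endif

	#define CHUNK	(2UL << 20)	/* one pmd-sized chunk */

	/* cold: subrelease one native-sized run back to the kernel,
	 * which zaps the pmd mapping the chunk */
	static void subrelease(char *chunk, size_t off)
	{
		madvise(chunk + off, 4096, MADV_DONTNEED);
	}

	/* hot again: synchronously re-back the whole chunk with a THP;
	 * fails with e.g. EAGAIN if hugepage allocation fails */
	static int recollapse(char *chunk)
	{
		return madvise(chunk, CHUNK, MADV_COLLAPSE);
	}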

Allocation semantics are the same as khugepaged, and depend on (1) the
active sysfs settings /sys/kernel/mm/transparent_hugepage/enabled and
/sys/kernel/mm/transparent_hugepage/khugepaged/defrag, and (2) the VMA
flags of the memory range being collapsed.

Only privately-mapped anon memory is supported for now.

[1] https://lore.kernel.org/linux-mm/d098c392-273a-36a4-1a29-59731cdf5d3d@google.com/
[2] https://lore.kernel.org/linux-mm/20220308213417.1407042-1-zokeefe@google.com/

Suggested-by: David Rientjes <rientjes@google.com>
Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 include/linux/huge_mm.h                |  12 ++
 include/uapi/asm-generic/mman-common.h |   2 +
 mm/khugepaged.c                        | 151 ++++++++++++++++++++++---
 mm/madvise.c                           |   5 +
 4 files changed, 157 insertions(+), 13 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 816a9937f30e..ddad7c7af44e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -236,6 +236,9 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 
 int hugepage_madvise(struct vm_area_struct *vma, unsigned long *vm_flags,
 		     int advice);
+int madvise_collapse(struct vm_area_struct *vma,
+		     struct vm_area_struct **prev,
+		     unsigned long start, unsigned long end);
 void vma_adjust_trans_huge(struct vm_area_struct *vma, unsigned long start,
 			   unsigned long end, long adjust_next);
 spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma);
@@ -392,6 +395,15 @@ static inline int hugepage_madvise(struct vm_area_struct *vma,
 	BUG();
 	return 0;
 }
+
+static inline int madvise_collapse(struct vm_area_struct *vma,
+				   struct vm_area_struct **prev,
+				   unsigned long start, unsigned long end)
+{
+	BUG();
+	return 0;
+}
+
 static inline void vma_adjust_trans_huge(struct vm_area_struct *vma,
 					 unsigned long start,
 					 unsigned long end,
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index 6c1aa92a92e4..6ce1f1ceb432 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -77,6 +77,8 @@
 
 #define MADV_DONTNEED_LOCKED	24	/* like DONTNEED, but drop locked pages too */
 
+#define MADV_COLLAPSE	25		/* Synchronous hugepage collapse */
+
 /* compatibility flags */
 #define MAP_FILE	0
 
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index ed025dbbd7e6..c5c484b7e394 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -846,7 +846,6 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
 	return khugepaged_defrag() ? GFP_TRANSHUGE : GFP_TRANSHUGE_LIGHT;
 }
 
-#ifdef CONFIG_NUMA
 static int khugepaged_find_target_node(struct collapse_control *cc)
 {
 	int nid, target_node = 0, max_value = 0;
@@ -872,6 +871,24 @@ static int khugepaged_find_target_node(struct collapse_control *cc)
 	return target_node;
 }
 
+static struct page *alloc_hpage(struct collapse_control *cc, gfp_t gfp,
+				int node)
+{
+	VM_BUG_ON_PAGE(cc->hpage, cc->hpage);
+
+	cc->hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);
+	if (unlikely(!cc->hpage)) {
+		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
+		cc->hpage = ERR_PTR(-ENOMEM);
+		return NULL;
+	}
+
+	prep_transhuge_page(cc->hpage);
+	count_vm_event(THP_COLLAPSE_ALLOC);
+	return cc->hpage;
+}
+
+#ifdef CONFIG_NUMA
 static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
 {
 	if (IS_ERR(*hpage)) {
@@ -892,18 +909,7 @@ static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
 static struct page *khugepaged_alloc_page(struct collapse_control *cc,
 					  gfp_t gfp, int node)
 {
-	VM_BUG_ON_PAGE(cc->hpage, cc->hpage);
-
-	cc->hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);
-	if (unlikely(!cc->hpage)) {
-		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
-		cc->hpage = ERR_PTR(-ENOMEM);
-		return NULL;
-	}
-
-	prep_transhuge_page(cc->hpage);
-	count_vm_event(THP_COLLAPSE_ALLOC);
-	return cc->hpage;
+	return alloc_hpage(cc, gfp, node);
 }
 #else
 static int khugepaged_find_target_node(struct collapse_control *cc)
@@ -2456,3 +2462,122 @@ void khugepaged_min_free_kbytes_update(void)
 		set_recommended_min_free_kbytes();
 	mutex_unlock(&khugepaged_mutex);
 }
+
+static void madvise_collapse_cleanup_page(struct page **hpage)
+{
+	if (!IS_ERR(*hpage) && *hpage)
+		put_page(*hpage);
+	*hpage = NULL;
+}
+
+int madvise_collapse_errno(enum scan_result r)
+{
+	switch (r) {
+	case SCAN_PMD_NULL:
+	case SCAN_ADDRESS_RANGE:
+	case SCAN_VMA_NULL:
+	case SCAN_PTE_NON_PRESENT:
+	case SCAN_PAGE_NULL:
+		/*
+		 * Addresses in the specified range are not currently mapped,
+		 * or are outside the AS of the process.
+		 */
+		return -ENOMEM;
+	case SCAN_ALLOC_HUGE_PAGE_FAIL:
+	case SCAN_CGROUP_CHARGE_FAIL:
+		/* A kernel resource was temporarily unavailable. */
+		return -EAGAIN;
+	default:
+		return -EINVAL;
+	}
+}
+
+int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
+		     unsigned long start, unsigned long end)
+{
+	struct collapse_control cc = {
+		.last_target_node = NUMA_NO_NODE,
+		.hpage = NULL,
+		.alloc_hpage = &alloc_hpage,
+	};
+	struct mm_struct *mm = vma->vm_mm;
+	struct collapse_result cr;
+	unsigned long hstart, hend, addr;
+	int thps = 0, nr_hpages = 0;
+
+	BUG_ON(vma->vm_start > start);
+	BUG_ON(vma->vm_end < end);
+
+	*prev = vma;
+
+	if (IS_ENABLED(CONFIG_SHMEM) && vma->vm_file)
+		return -EINVAL;
+
+	hstart = (start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
+	hend = end & HPAGE_PMD_MASK;
+	nr_hpages = (hend - hstart) >> HPAGE_PMD_SHIFT;
+
+	if (hstart >= hend || !transparent_hugepage_active(vma))
+		return -EINVAL;
+
+	mmgrab(mm);
+	lru_add_drain();
+
+	for (addr = hstart; ; ) {
+		mmap_assert_locked(mm);
+		cond_resched();
+		memset(&cr, 0, sizeof(cr));
+
+		if (unlikely(khugepaged_test_exit(mm)))
+			break;
+
+		memset(cc.node_load, 0, sizeof(cc.node_load));
+		khugepaged_scan_pmd(mm, vma, addr, &cc, &cr);
+		if (cr.dropped_mmap_lock)
+			*prev = NULL;  /* tell madvise we dropped mmap_lock */
+
+		switch (cr.result) {
+		/* Whitelisted set of results where continuing OK */
+		case SCAN_SUCCEED:
+		case SCAN_PMD_MAPPED:
+			++thps;
+		case SCAN_PMD_NULL:
+		case SCAN_PTE_NON_PRESENT:
+		case SCAN_PTE_UFFD_WP:
+		case SCAN_PAGE_RO:
+		case SCAN_LACK_REFERENCED_PAGE:
+		case SCAN_PAGE_NULL:
+		case SCAN_PAGE_COUNT:
+		case SCAN_PAGE_LOCK:
+		case SCAN_PAGE_COMPOUND:
+			break;
+		case SCAN_PAGE_LRU:
+			lru_add_drain_all();
+			goto retry;
+		default:
+			/* Other error, exit */
+			goto break_loop;
+		}
+		addr += HPAGE_PMD_SIZE;
+		if (addr >= hend)
+			break;
+retry:
+		if (cr.dropped_mmap_lock) {
+			mmap_read_lock(mm);
+			if (hugepage_vma_revalidate(mm, addr, &vma))
+				goto out;
+		}
+		madvise_collapse_cleanup_page(&cc.hpage);
+	}
+
+break_loop:
+	/* madvise_walk_vmas() expects us to hold mmap_lock on return */
+	if (cr.dropped_mmap_lock)
+		mmap_read_lock(mm);
+out:
+	mmap_assert_locked(mm);
+	madvise_collapse_cleanup_page(&cc.hpage);
+	mmdrop(mm);
+
+	return thps == nr_hpages ? 0 : madvise_collapse_errno(cr.result);
+}
diff --git a/mm/madvise.c b/mm/madvise.c
index ec03a76244b7..7ad53e5311cf 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -59,6 +59,7 @@ static int madvise_need_mmap_write(int behavior)
 	case MADV_FREE:
 	case MADV_POPULATE_READ:
 	case MADV_POPULATE_WRITE:
+	case MADV_COLLAPSE:
 		return 0;
 	default:
 		/* be safe, default to 1. list exceptions explicitly */
@@ -1051,6 +1052,8 @@ static int madvise_vma_behavior(struct vm_area_struct *vma,
 		if (error)
 			goto out;
 		break;
+	case MADV_COLLAPSE:
+		return madvise_collapse(vma, prev, start, end);
 	}
 
 	anon_name = anon_vma_name(vma);
@@ -1144,6 +1147,7 @@ madvise_behavior_valid(int behavior)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	case MADV_HUGEPAGE:
 	case MADV_NOHUGEPAGE:
+	case MADV_COLLAPSE:
 #endif
 	case MADV_DONTDUMP:
 	case MADV_DODUMP:
@@ -1333,6 +1337,7 @@ int madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
  *  MADV_NOHUGEPAGE - mark the given range as not worth being backed by
  *		transparent huge pages so the existing pages will not be
  *		coalesced into THP and new pages will not be allocated as THP.
+ *  MADV_COLLAPSE - synchronously coalesce pages into new THP.
  *  MADV_DONTDUMP - the application wants to prevent pages in the given range
  *		from being included in its core dump.
  *  MADV_DODUMP - cancel MADV_DONTDUMP: no longer exclude from core dump.
-- 
2.35.1.1178.g4f1659d476-goog
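
For illustration, here is a minimal userspace sketch of the interface this
patch adds (a sketch under stated assumptions, not code from the series:
it assumes a 2M PMD size and the MADV_COLLAPSE value proposed above):

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	#ifndef MADV_COLLAPSE
	#define MADV_COLLAPSE 25	/* value proposed by this series */
	#endif

	int main(void)
	{
		size_t hpage = 2UL << 20;	/* assumes 2M PMD-sized THPs */
		char *map, *p;

		map = mmap(NULL, 2 * hpage, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (map == MAP_FAILED)
			return 1;

		/* Collapse acts on hugepage-aligned/sized regions: align up. */
		p = (char *)(((unsigned long)map + hpage - 1) & ~(hpage - 1));
		memset(p, 1, hpage);			/* fault in native pages */
		madvise(p, hpage, MADV_HUGEPAGE);	/* if THP is madvise-only */

		/* 0 on success; errno is ENOMEM/EAGAIN/EINVAL on failure,
		 * following madvise_collapse_errno() above.
		 */
		if (madvise(p, hpage, MADV_COLLAPSE))
			perror("madvise(MADV_COLLAPSE)");
		return 0;
	}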



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 06/12] mm/khugepaged: remove khugepaged prefix from shared collapse functions
  2022-04-10 13:54 [PATCH 00/12] mm: userspace hugepage collapse Zach O'Keefe
                   ` (4 preceding siblings ...)
  2022-04-10 13:54 ` [PATCH 05/12] mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse Zach O'Keefe
@ 2022-04-10 13:54 ` Zach O'Keefe
  2022-04-10 17:06   ` kernel test robot
  2022-04-10 13:54 ` [PATCH 07/12] mm/khugepaged: add flag to ignore khugepaged_max_ptes_* Zach O'Keefe
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-10 13:54 UTC (permalink / raw)
  To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
	Thomas Bogendoerfer, Zach O'Keefe

The following functions/tracepoints are shared between khugepaged and
madvise collapse contexts.  Remove the khugepaged prefixes.

tracepoint:mm_khugepaged_scan_pmd -> tracepoint:mm_scan_pmd
khugepaged_test_exit() -> test_exit()
khugepaged_scan_abort() -> scan_abort()
khugepaged_scan_pmd() -> scan_pmd()
khugepaged_find_target_node() -> find_target_node()

Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 include/trace/events/huge_memory.h |  2 +-
 mm/khugepaged.c                    | 68 ++++++++++++++----------------
 2 files changed, 33 insertions(+), 37 deletions(-)

diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 9faa678e0a5b..09be0e2f76b1 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -48,7 +48,7 @@ SCAN_STATUS
 #define EM(a, b)	{a, b},
 #define EMe(a, b)	{a, b}
 
-TRACE_EVENT(mm_khugepaged_scan_pmd,
+TRACE_EVENT(mm_scan_pmd,
 
 	TP_PROTO(struct mm_struct *mm, struct page *page, bool writable,
 		 int referenced, int none_or_zero, int status, int unmapped),
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c5c484b7e394..2717262d1832 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -90,7 +90,7 @@ struct collapse_control {
 	/* Num pages scanned per node */
 	int node_load[MAX_NUMNODES];
 
-	/* Last target selected in khugepaged_find_target_node() for this scan */
+	/* Last target selected in find_target_node() for this scan */
 	int last_target_node;
 
 	struct page *hpage;
@@ -453,7 +453,7 @@ static void insert_to_mm_slots_hash(struct mm_struct *mm,
 	hash_add(mm_slots_hash, &mm_slot->hash, (long)mm);
 }
 
-static inline int khugepaged_test_exit(struct mm_struct *mm)
+static inline int test_exit(struct mm_struct *mm)
 {
 	return atomic_read(&mm->mm_users) == 0;
 }
@@ -505,7 +505,7 @@ void __khugepaged_enter(struct mm_struct *mm)
 		return;
 
 	/* __khugepaged_exit() must not run from under us */
-	VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
+	VM_BUG_ON_MM(test_exit(mm), mm);
 	if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))) {
 		free_mm_slot(mm_slot);
 		return;
@@ -557,12 +557,11 @@ void __khugepaged_exit(struct mm_struct *mm)
 		mmdrop(mm);
 	} else if (mm_slot) {
 		/*
-		 * This is required to serialize against
-		 * khugepaged_test_exit() (which is guaranteed to run
-		 * under mmap sem read mode). Stop here (after we
-		 * return all pagetables will be destroyed) until
-		 * khugepaged has finished working on the pagetables
-		 * under the mmap_lock.
+		 * This is required to serialize against test_exit() (which is
+		 * guaranteed to run under mmap sem read mode). Stop here
+		 * (after we return all pagetables will be destroyed) until
+		 * khugepaged has finished working on the pagetables under
+		 * the mmap_lock.
 		 */
 		mmap_write_lock(mm);
 		mmap_write_unlock(mm);
@@ -816,7 +815,7 @@ static void khugepaged_alloc_sleep(void)
 	remove_wait_queue(&khugepaged_wait, &wait);
 }
 
-static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
+static bool scan_abort(int nid, struct collapse_control *cc)
 {
 	int i;
 
@@ -846,7 +845,7 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
 	return khugepaged_defrag() ? GFP_TRANSHUGE : GFP_TRANSHUGE_LIGHT;
 }
 
-static int khugepaged_find_target_node(struct collapse_control *cc)
+static int find_target_node(struct collapse_control *cc)
 {
 	int nid, target_node = 0, max_value = 0;
 
@@ -993,7 +992,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 	struct vm_area_struct *vma;
 	unsigned long hstart, hend;
 
-	if (unlikely(khugepaged_test_exit(mm)))
+	if (unlikely(test_exit(mm)))
 		return SCAN_ANY_PROCESS;
 
 	*vmap = vma = find_vma(mm, address);
@@ -1037,7 +1036,7 @@ static int find_pmd_or_thp_or_none(struct mm_struct *mm,
 
 /*
  * Bring missing pages in from swap, to complete THP collapse.
- * Only done if khugepaged_scan_pmd believes it is worthwhile.
+ * Only done if scan_pmd believes it is worthwhile.
  *
  * Called and returns without pte mapped or spinlocks held,
  * but with mmap_lock held to protect against vma changes.
@@ -1129,7 +1128,7 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	mmap_read_unlock(mm);
 	cr->dropped_mmap_lock = true;
 
-	node = khugepaged_find_target_node(cc);
+	node = find_target_node(cc);
 	/* sched to specified node before huge page memory copy */
 	if (task_node(current) != node) {
 		cpumask = cpumask_of_node(node);
@@ -1270,11 +1269,9 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	return;
 }
 
-static void khugepaged_scan_pmd(struct mm_struct *mm,
-				struct vm_area_struct *vma,
-				unsigned long address,
-				struct collapse_control *cc,
-				struct collapse_result *cr)
+static void scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
+		     unsigned long address, struct collapse_control *cc,
+		     struct collapse_result *cr)
 {
 	pmd_t *pmd;
 	pte_t *pte, *_pte;
@@ -1364,7 +1361,7 @@ static void khugepaged_scan_pmd(struct mm_struct *mm,
 		 * hit record.
 		 */
 		node = page_to_nid(page);
-		if (khugepaged_scan_abort(node, cc)) {
+		if (scan_abort(node, cc)) {
 			cr->result = SCAN_SCAN_ABORT;
 			goto out_unmap;
 		}
@@ -1421,8 +1418,8 @@ static void khugepaged_scan_pmd(struct mm_struct *mm,
 		/* collapse_huge_page will return with the mmap_lock released */
 		collapse_huge_page(mm, address, cc, referenced, unmapped, cr);
 out:
-	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
-				     none_or_zero, cr->result, unmapped);
+	trace_mm_scan_pmd(mm, page, writable, referenced, none_or_zero,
+			  cr->result, unmapped);
 }
 
 static void collect_mm_slot(struct mm_slot *mm_slot)
@@ -1431,7 +1428,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
 
 	lockdep_assert_held(&khugepaged_mm_lock);
 
-	if (khugepaged_test_exit(mm)) {
+	if (test_exit(mm)) {
 		/* free mm_slot */
 		hash_del(&mm_slot->hash);
 		list_del(&mm_slot->mm_node);
@@ -1598,7 +1595,7 @@ static void khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)
 	if (!mmap_write_trylock(mm))
 		return;
 
-	if (unlikely(khugepaged_test_exit(mm)))
+	if (unlikely(test_exit(mm)))
 		goto out;
 
 	for (i = 0; i < mm_slot->nr_pte_mapped_thp; i++)
@@ -1653,7 +1650,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		 * reverse order. Trylock is a way to avoid deadlock.
 		 */
 		if (mmap_write_trylock(mm)) {
-			if (!khugepaged_test_exit(mm))
+			if (!test_exit(mm))
 				collapse_and_free_pmd(mm, vma, addr, pmd);
 			mmap_write_unlock(mm);
 		} else {
@@ -1710,7 +1707,7 @@ static void collapse_file(struct mm_struct *mm,
 
 	/* Only allocate from the target node */
 	gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
-	node = khugepaged_find_target_node(cc);
+	node = find_target_node(cc);
 
 	new_page = cc->alloc_hpage(cc, gfp, node);
 	if (!new_page) {
@@ -2094,7 +2091,7 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 		}
 
 		node = page_to_nid(page);
-		if (khugepaged_scan_abort(node, cc)) {
+		if (scan_abort(node, cc)) {
 			cr->result = SCAN_SCAN_ABORT;
 			break;
 		}
@@ -2183,7 +2180,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 	vma = NULL;
 	if (unlikely(!mmap_read_trylock(mm)))
 		goto breakouterloop_mmap_lock;
-	if (likely(!khugepaged_test_exit(mm)))
+	if (likely(!test_exit(mm)))
 		vma = find_vma(mm, khugepaged_scan.address);
 
 	progress++;
@@ -2191,7 +2188,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 		unsigned long hstart, hend;
 
 		cond_resched();
-		if (unlikely(khugepaged_test_exit(mm))) {
+		if (unlikely(test_exit(mm))) {
 			progress++;
 			break;
 		}
@@ -2215,7 +2212,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 		while (khugepaged_scan.address < hend) {
 			struct collapse_result cr = {0};
 			cond_resched();
-			if (unlikely(khugepaged_test_exit(mm)))
+			if (unlikely(test_exit(mm)))
 				goto breakouterloop;
 
 			VM_BUG_ON(khugepaged_scan.address < hstart ||
@@ -2231,9 +2228,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 				khugepaged_scan_file(mm, file, pgoff, cc, &cr);
 				fput(file);
 			} else {
-				khugepaged_scan_pmd(mm, vma,
-						    khugepaged_scan.address,
-						    cc, &cr);
+				scan_pmd(mm, vma, khugepaged_scan.address, cc,
+					 &cr);
 			}
 			if (cr.result == SCAN_SUCCEED)
 				++khugepaged_pages_collapsed;
@@ -2257,7 +2253,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 	 * Release the current mm_slot if this mm is about to die, or
 	 * if we scanned all vmas of this mm.
 	 */
-	if (khugepaged_test_exit(mm) || !vma) {
+	if (test_exit(mm) || !vma) {
 		/*
 		 * Make sure that if mm_users is reaching zero while
 		 * khugepaged runs here, khugepaged_exit will find
@@ -2528,11 +2524,11 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
 		cond_resched();
 		memset(&cr, 0, sizeof(cr));
 
-		if (unlikely(khugepaged_test_exit(mm)))
+		if (unlikely(test_exit(mm)))
 			break;
 
 		memset(cc.node_load, 0, sizeof(cc.node_load));
-		khugepaged_scan_pmd(mm, vma, addr, &cc, &cr);
+		scan_pmd(mm, vma, addr, &cc, &cr);
 		if (cr.dropped_mmap_lock)
 			*prev = NULL;  /* tell madvise we dropped mmap_lock */
 
-- 
2.35.1.1178.g4f1659d476-goog



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 07/12] mm/khugepaged: add flag to ignore khugepaged_max_ptes_*
  2022-04-10 13:54 [PATCH 00/12] mm: userspace hugepage collapse Zach O'Keefe
                   ` (5 preceding siblings ...)
  2022-04-10 13:54 ` [PATCH 06/12] mm/khugepaged: remove khugepaged prefix from shared collapse functions Zach O'Keefe
@ 2022-04-10 13:54 ` Zach O'Keefe
  2022-04-10 13:54 ` [PATCH 08/12] mm/khugepaged: add flag to ignore page young/referenced requirement Zach O'Keefe
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-10 13:54 UTC (permalink / raw)
  To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
	Thomas Bogendoerfer, Zach O'Keefe

Add an enforce_pte_scan_limits flag to struct collapse_control that
allows a context to ignore the sysfs-controlled knobs
khugepaged_max_ptes_[none|swap|shared].  Set this flag in the khugepaged
collapse context to preserve existing khugepaged behavior, and clear it
in the madvise collapse context, since the user presumably has reason to
believe the collapse will be beneficial.

Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 mm/khugepaged.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2717262d1832..7f555da26fdc 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -87,6 +87,9 @@ static struct kmem_cache *mm_slot_cache __read_mostly;
 #define MAX_PTE_MAPPED_THP 8
 
 struct collapse_control {
+	/* Respect khugepaged_max_ptes_[none|swap|shared] */
+	bool enforce_pte_scan_limits;
+
 	/* Num pages scanned per node */
 	int node_load[MAX_NUMNODES];
 
@@ -631,6 +634,7 @@ static bool is_refcount_suitable(struct page *page)
 static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 					unsigned long address,
 					pte_t *pte,
+					struct collapse_control *cc,
 					struct list_head *compound_pagelist)
 {
 	struct page *page = NULL;
@@ -644,7 +648,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		if (pte_none(pteval) || (pte_present(pteval) &&
 				is_zero_pfn(pte_pfn(pteval)))) {
 			if (!userfaultfd_armed(vma) &&
-			    ++none_or_zero <= khugepaged_max_ptes_none) {
+			    (++none_or_zero <= khugepaged_max_ptes_none ||
+			     !cc->enforce_pte_scan_limits)) {
 				continue;
 			} else {
 				result = SCAN_EXCEED_NONE_PTE;
@@ -664,8 +669,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 
 		VM_BUG_ON_PAGE(!PageAnon(page), page);
 
-		if (page_mapcount(page) > 1 &&
-				++shared > khugepaged_max_ptes_shared) {
+		if (cc->enforce_pte_scan_limits && page_mapcount(page) > 1 &&
+		    ++shared > khugepaged_max_ptes_shared) {
 			result = SCAN_EXCEED_SHARED_PTE;
 			count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
 			goto out;
@@ -1207,7 +1212,7 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	mmu_notifier_invalidate_range_end(&range);
 
 	spin_lock(pte_ptl);
-	cr->result =  __collapse_huge_page_isolate(vma, address, pte,
+	cr->result =  __collapse_huge_page_isolate(vma, address, pte, cc,
 						   &compound_pagelist);
 	spin_unlock(pte_ptl);
 
@@ -1296,7 +1301,8 @@ static void scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
 	     _pte++, _address += PAGE_SIZE) {
 		pte_t pteval = *_pte;
 		if (is_swap_pte(pteval)) {
-			if (++unmapped <= khugepaged_max_ptes_swap) {
+			if (++unmapped <= khugepaged_max_ptes_swap ||
+			    !cc->enforce_pte_scan_limits) {
 				/*
 				 * Always be strict with uffd-wp
 				 * enabled swap entries.  Please see
@@ -1315,7 +1321,8 @@ static void scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
 		}
 		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
 			if (!userfaultfd_armed(vma) &&
-			    ++none_or_zero <= khugepaged_max_ptes_none) {
+			    (++none_or_zero <= khugepaged_max_ptes_none ||
+			     !cc->enforce_pte_scan_limits)) {
 				continue;
 			} else {
 				cr->result = SCAN_EXCEED_NONE_PTE;
@@ -1345,8 +1352,9 @@ static void scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
 			goto out_unmap;
 		}
 
-		if (page_mapcount(page) > 1 &&
-				++shared > khugepaged_max_ptes_shared) {
+		if (cc->enforce_pte_scan_limits &&
+		    page_mapcount(page) > 1 &&
+		    ++shared > khugepaged_max_ptes_shared) {
 			cr->result = SCAN_EXCEED_SHARED_PTE;
 			count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
 			goto out_unmap;
@@ -2073,7 +2081,8 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 			continue;
 
 		if (xa_is_value(page)) {
-			if (++swap > khugepaged_max_ptes_swap) {
+			if (cc->enforce_pte_scan_limits &&
+			    ++swap > khugepaged_max_ptes_swap) {
 				cr->result = SCAN_EXCEED_SWAP_PTE;
 				count_vm_event(THP_SCAN_EXCEED_SWAP_PTE);
 				break;
@@ -2124,7 +2133,8 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 	rcu_read_unlock();
 
 	if (cr->result == SCAN_SUCCEED) {
-		if (present < HPAGE_PMD_NR - khugepaged_max_ptes_none) {
+		if (present < HPAGE_PMD_NR - khugepaged_max_ptes_none &&
+		    cc->enforce_pte_scan_limits) {
 			cr->result = SCAN_EXCEED_NONE_PTE;
 			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 		} else {
@@ -2351,6 +2361,7 @@ static int khugepaged(void *none)
 {
 	struct mm_slot *mm_slot;
 	struct collapse_control cc = {
+		.enforce_pte_scan_limits = true,
 		.last_target_node = NUMA_NO_NODE,
 		.alloc_hpage = &khugepaged_alloc_page,
 	};
@@ -2492,6 +2503,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
 		     unsigned long start, unsigned long end)
 {
 	struct collapse_control cc = {
+		.enforce_pte_scan_limits = false,
 		.last_target_node = NUMA_NO_NODE,
 		.hpage = NULL,
 		.alloc_hpage = &alloc_hpage,
-- 
2.35.1.1178.g4f1659d476-goog
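
To make the behavioral difference concrete, a hypothetical fragment (the
helper name and the MADV_COLLAPSE value are assumptions, not from the
series):

	#include <string.h>
	#include <sys/mman.h>

	#ifndef MADV_COLLAPSE
	#define MADV_COLLAPSE 25	/* value proposed by this series */
	#endif

	/*
	 * 'p' is a 2M-aligned anonymous mapping, as in the sketch after
	 * patch 05.  With khugepaged/max_ptes_none set to 0, khugepaged
	 * would skip this region (511 of 512 PTEs are none), but madvise
	 * collapse runs with enforce_pte_scan_limits == false and may
	 * collapse it anyway.
	 */
	static void collapse_sparse(char *p, size_t hpage)
	{
		memset(p, 1, 4096);	/* populate a single base page */
		madvise(p, hpage, MADV_COLLAPSE);
	}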



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 08/12] mm/khugepaged: add flag to ignore page young/referenced requirement
  2022-04-10 13:54 [PATCH 00/12] mm: userspace hugepage collapse Zach O'Keefe
                   ` (6 preceding siblings ...)
  2022-04-10 13:54 ` [PATCH 07/12] mm/khugepaged: add flag to ignore khugepaged_max_ptes_* Zach O'Keefe
@ 2022-04-10 13:54 ` Zach O'Keefe
  2022-04-10 13:54 ` [PATCH 09/12] mm/madvise: add MADV_COLLAPSE to process_madvise() Zach O'Keefe
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-10 13:54 UTC (permalink / raw)
  To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
	Thomas Bogendoerfer, Zach O'Keefe

Add an enforce_young flag to struct collapse_control that allows a
context to ignore the requirement that some pages in the region being
collapsed be young or referenced.  Set this flag in the khugepaged
collapse context to preserve existing khugepaged behavior, and clear it
in the madvise collapse context, since the user presumably has reason to
believe the collapse will be beneficial.

Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 mm/khugepaged.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 7f555da26fdc..8e5e45355c6d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -90,6 +90,9 @@ struct collapse_control {
 	/* Respect khugepaged_max_ptes_[none|swap|shared] */
 	bool enforce_pte_scan_limits;
 
+	/* Require memory to be young */
+	bool enforce_young;
+
 	/* Num pages scanned per node */
 	int node_load[MAX_NUMNODES];
 
@@ -737,9 +740,10 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 			list_add_tail(&page->lru, compound_pagelist);
 next:
 		/* There should be enough young pte to collapse the page */
-		if (pte_young(pteval) ||
-		    page_is_young(page) || PageReferenced(page) ||
-		    mmu_notifier_test_young(vma->vm_mm, address))
+		if (cc->enforce_young &&
+		    (pte_young(pteval) || page_is_young(page) ||
+		     PageReferenced(page) || mmu_notifier_test_young(vma->vm_mm,
+								     address)))
 			referenced++;
 
 		if (pte_write(pteval))
@@ -748,7 +752,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 
 	if (unlikely(!writable)) {
 		result = SCAN_PAGE_RO;
-	} else if (unlikely(!referenced)) {
+	} else if (unlikely(cc->enforce_young && !referenced)) {
 		result = SCAN_LACK_REFERENCED_PAGE;
 	} else {
 		result = SCAN_SUCCEED;
@@ -1408,14 +1412,16 @@ static void scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
 			cr->result = SCAN_PAGE_COUNT;
 			goto out_unmap;
 		}
-		if (pte_young(pteval) ||
-		    page_is_young(page) || PageReferenced(page) ||
-		    mmu_notifier_test_young(vma->vm_mm, address))
+		if (cc->enforce_young &&
+		    (pte_young(pteval) || page_is_young(page) ||
+		     PageReferenced(page) || mmu_notifier_test_young(vma->vm_mm,
+								     address)))
 			referenced++;
 	}
 	if (!writable) {
 		cr->result = SCAN_PAGE_RO;
-	} else if (!referenced || (unmapped && referenced < HPAGE_PMD_NR/2)) {
+	} else if (cc->enforce_young && (!referenced ||
+		   (unmapped && referenced < HPAGE_PMD_NR / 2))) {
 		cr->result = SCAN_LACK_REFERENCED_PAGE;
 	} else {
 		cr->result = SCAN_SUCCEED;
@@ -2362,6 +2368,7 @@ static int khugepaged(void *none)
 	struct mm_slot *mm_slot;
 	struct collapse_control cc = {
 		.enforce_pte_scan_limits = true,
+		.enforce_young = true,
 		.last_target_node = NUMA_NO_NODE,
 		.alloc_hpage = &khugepaged_alloc_page,
 	};
@@ -2504,6 +2511,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
 {
 	struct collapse_control cc = {
 		.enforce_pte_scan_limits = false,
+		.enforce_young = false,
 		.last_target_node = NUMA_NO_NODE,
 		.hpage = NULL,
 		.alloc_hpage = &alloc_hpage,
-- 
2.35.1.1178.g4f1659d476-goog



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 09/12] mm/madvise: add MADV_COLLAPSE to process_madvise()
  2022-04-10 13:54 [PATCH 00/12] mm: userspace hugepage collapse Zach O'Keefe
                   ` (7 preceding siblings ...)
  2022-04-10 13:54 ` [PATCH 08/12] mm/khugepaged: add flag to ignore page young/referenced requirement Zach O'Keefe
@ 2022-04-10 13:54 ` Zach O'Keefe
  2022-04-10 13:54 ` [PATCH 10/12] selftests/vm: modularize collapse selftests Zach O'Keefe
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-10 13:54 UTC (permalink / raw)
  To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
	Thomas Bogendoerfer, Zach O'Keefe

Allow MADV_COLLAPSE behavior for process_madvise(2) if the caller has
CAP_SYS_ADMIN or is requesting collapse of its own memory.

Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 mm/madvise.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 7ad53e5311cf..a5c82fa7972b 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1165,13 +1165,15 @@ madvise_behavior_valid(int behavior)
 }
 
 static bool
-process_madvise_behavior_valid(int behavior)
+process_madvise_behavior_valid(int behavior, struct task_struct *task)
 {
 	switch (behavior) {
 	case MADV_COLD:
 	case MADV_PAGEOUT:
 	case MADV_WILLNEED:
 		return true;
+	case MADV_COLLAPSE:
+		return task == current || capable(CAP_SYS_ADMIN);
 	default:
 		return false;
 	}
@@ -1449,7 +1451,7 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
 		goto free_iov;
 	}
 
-	if (!process_madvise_behavior_valid(behavior)) {
+	if (!process_madvise_behavior_valid(behavior, task)) {
 		ret = -EINVAL;
 		goto release_task;
 	}
-- 
2.35.1.1178.g4f1659d476-goog
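
For completeness, a sketch of driving the same behavior remotely via
process_madvise(2) (the helper is illustrative; only the raw syscall is
from the kernel ABI, and the caller needs CAP_SYS_ADMIN unless the pidfd
refers to itself):

	#define _GNU_SOURCE
	#include <sys/syscall.h>
	#include <sys/uio.h>
	#include <unistd.h>

	#ifndef MADV_COLLAPSE
	#define MADV_COLLAPSE 25	/* value proposed by this series */
	#endif

	/* 'pidfd' from pidfd_open(2); 'addr'/'len' should cover
	 * hugepage-aligned/sized regions in the target process.
	 */
	static long collapse_remote(int pidfd, void *addr, size_t len)
	{
		struct iovec iov = { .iov_base = addr, .iov_len = len };

		return syscall(__NR_process_madvise, pidfd, &iov, 1,
			       MADV_COLLAPSE, 0);
	}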



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 10/12] selftests/vm: modularize collapse selftests
  2022-04-10 13:54 [PATCH 00/12] mm: userspace hugepage collapse Zach O'Keefe
                   ` (8 preceding siblings ...)
  2022-04-10 13:54 ` [PATCH 09/12] mm/madvise: add MADV_COLLAPSE to process_madvise() Zach O'Keefe
@ 2022-04-10 13:54 ` Zach O'Keefe
  2022-04-10 13:54 ` [PATCH 11/12] selftests/vm: add MADV_COLLAPSE collapse context to selftests Zach O'Keefe
  2022-04-10 13:54 ` [PATCH 12/12] selftests/vm: add test to verify recollapse of THPs Zach O'Keefe
  11 siblings, 0 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-10 13:54 UTC (permalink / raw)
  To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
	Thomas Bogendoerfer, Zach O'Keefe

Modularize the collapse action of the khugepaged collapse selftests by
introducing a struct collapse_context, which specifies how to collapse a
given memory range and the expected semantics of the collapse.  This
can be reused later to test other collapse contexts.

Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 tools/testing/selftests/vm/khugepaged.c | 257 +++++++++++-------------
 1 file changed, 116 insertions(+), 141 deletions(-)

diff --git a/tools/testing/selftests/vm/khugepaged.c b/tools/testing/selftests/vm/khugepaged.c
index 155120b67a16..c59d832fee96 100644
--- a/tools/testing/selftests/vm/khugepaged.c
+++ b/tools/testing/selftests/vm/khugepaged.c
@@ -23,6 +23,12 @@ static int hpage_pmd_nr;
 #define THP_SYSFS "/sys/kernel/mm/transparent_hugepage/"
 #define PID_SMAPS "/proc/self/smaps"
 
+struct collapse_context {
+	const char *name;
+	void (*collapse)(const char *msg, char *p, bool expect);
+	bool enforce_pte_scan_limits;
+};
+
 enum thp_enabled {
 	THP_ALWAYS,
 	THP_MADVISE,
@@ -528,53 +534,39 @@ static void alloc_at_fault(void)
 	munmap(p, hpage_pmd_size);
 }
 
-static void collapse_full(void)
+static void collapse_full(struct collapse_context *context)
 {
 	void *p;
 
 	p = alloc_mapping();
 	fill_memory(p, 0, hpage_pmd_size);
-	if (wait_for_scan("Collapse fully populated PTE table", p))
-		fail("Timeout");
-	else if (check_huge(p))
-		success("OK");
-	else
-		fail("Fail");
+	context->collapse("Collapse fully populated PTE table", p, true);
 	validate_memory(p, 0, hpage_pmd_size);
 	munmap(p, hpage_pmd_size);
 }
 
-static void collapse_empty(void)
+static void collapse_empty(struct collapse_context *context)
 {
 	void *p;
 
 	p = alloc_mapping();
-	if (wait_for_scan("Do not collapse empty PTE table", p))
-		fail("Timeout");
-	else if (check_huge(p))
-		fail("Fail");
-	else
-		success("OK");
+	context->collapse("Do not collapse empty PTE table", p, false);
 	munmap(p, hpage_pmd_size);
 }
 
-static void collapse_single_pte_entry(void)
+static void collapse_single_pte_entry(struct collapse_context *context)
 {
 	void *p;
 
 	p = alloc_mapping();
 	fill_memory(p, 0, page_size);
-	if (wait_for_scan("Collapse PTE table with single PTE entry present", p))
-		fail("Timeout");
-	else if (check_huge(p))
-		success("OK");
-	else
-		fail("Fail");
+	context->collapse("Collapse PTE table with single PTE entry present", p,
+			  true);
 	validate_memory(p, 0, page_size);
 	munmap(p, hpage_pmd_size);
 }
 
-static void collapse_max_ptes_none(void)
+static void collapse_max_ptes_none(struct collapse_context *context)
 {
 	int max_ptes_none = hpage_pmd_nr / 2;
 	struct settings settings = default_settings;
@@ -586,28 +578,23 @@ static void collapse_max_ptes_none(void)
 	p = alloc_mapping();
 
 	fill_memory(p, 0, (hpage_pmd_nr - max_ptes_none - 1) * page_size);
-	if (wait_for_scan("Do not collapse with max_ptes_none exceeded", p))
-		fail("Timeout");
-	else if (check_huge(p))
-		fail("Fail");
-	else
-		success("OK");
+	context->collapse("Maybe collapse with max_ptes_none exceeded", p,
+			  !context->enforce_pte_scan_limits);
 	validate_memory(p, 0, (hpage_pmd_nr - max_ptes_none - 1) * page_size);
 
-	fill_memory(p, 0, (hpage_pmd_nr - max_ptes_none) * page_size);
-	if (wait_for_scan("Collapse with max_ptes_none PTEs empty", p))
-		fail("Timeout");
-	else if (check_huge(p))
-		success("OK");
-	else
-		fail("Fail");
-	validate_memory(p, 0, (hpage_pmd_nr - max_ptes_none) * page_size);
+	if (context->enforce_pte_scan_limits) {
+		fill_memory(p, 0, (hpage_pmd_nr - max_ptes_none) * page_size);
+		context->collapse("Collapse with max_ptes_none PTEs empty", p,
+				  true);
+		validate_memory(p, 0,
+				(hpage_pmd_nr - max_ptes_none) * page_size);
+	}
 
 	munmap(p, hpage_pmd_size);
 	write_settings(&default_settings);
 }
 
-static void collapse_swapin_single_pte(void)
+static void collapse_swapin_single_pte(struct collapse_context *context)
 {
 	void *p;
 	p = alloc_mapping();
@@ -625,18 +612,14 @@ static void collapse_swapin_single_pte(void)
 		goto out;
 	}
 
-	if (wait_for_scan("Collapse with swapping in single PTE entry", p))
-		fail("Timeout");
-	else if (check_huge(p))
-		success("OK");
-	else
-		fail("Fail");
+	context->collapse("Collapse with swapping in single PTE entry",
+			  p, true);
 	validate_memory(p, 0, hpage_pmd_size);
 out:
 	munmap(p, hpage_pmd_size);
 }
 
-static void collapse_max_ptes_swap(void)
+static void collapse_max_ptes_swap(struct collapse_context *context)
 {
 	int max_ptes_swap = read_num("khugepaged/max_ptes_swap");
 	void *p;
@@ -656,39 +639,34 @@ static void collapse_max_ptes_swap(void)
 		goto out;
 	}
 
-	if (wait_for_scan("Do not collapse with max_ptes_swap exceeded", p))
-		fail("Timeout");
-	else if (check_huge(p))
-		fail("Fail");
-	else
-		success("OK");
+	context->collapse("Maybe collapse with max_ptes_swap exceeded",
+			      p, !context->enforce_pte_scan_limits);
 	validate_memory(p, 0, hpage_pmd_size);
 
-	fill_memory(p, 0, hpage_pmd_size);
-	printf("Swapout %d of %d pages...", max_ptes_swap, hpage_pmd_nr);
-	if (madvise(p, max_ptes_swap * page_size, MADV_PAGEOUT)) {
-		perror("madvise(MADV_PAGEOUT)");
-		exit(EXIT_FAILURE);
-	}
-	if (check_swap(p, max_ptes_swap * page_size)) {
-		success("OK");
-	} else {
-		fail("Fail");
-		goto out;
-	}
+	if (context->enforce_pte_scan_limits) {
+		fill_memory(p, 0, hpage_pmd_size);
+		printf("Swapout %d of %d pages...", max_ptes_swap,
+		       hpage_pmd_nr);
+		if (madvise(p, max_ptes_swap * page_size, MADV_PAGEOUT)) {
+			perror("madvise(MADV_PAGEOUT)");
+			exit(EXIT_FAILURE);
+		}
+		if (check_swap(p, max_ptes_swap * page_size)) {
+			success("OK");
+		} else {
+			fail("Fail");
+			goto out;
+		}
 
-	if (wait_for_scan("Collapse with max_ptes_swap pages swapped out", p))
-		fail("Timeout");
-	else if (check_huge(p))
-		success("OK");
-	else
-		fail("Fail");
-	validate_memory(p, 0, hpage_pmd_size);
+		context->collapse("Collapse with max_ptes_swap pages swapped out",
+				  p, true);
+		validate_memory(p, 0, hpage_pmd_size);
+	}
 out:
 	munmap(p, hpage_pmd_size);
 }
 
-static void collapse_single_pte_entry_compound(void)
+static void collapse_single_pte_entry_compound(struct collapse_context *context)
 {
 	void *p;
 
@@ -710,17 +688,13 @@ static void collapse_single_pte_entry_compound(void)
 	else
 		fail("Fail");
 
-	if (wait_for_scan("Collapse PTE table with single PTE mapping compound page", p))
-		fail("Timeout");
-	else if (check_huge(p))
-		success("OK");
-	else
-		fail("Fail");
+	context->collapse("Collapse PTE table with single PTE mapping compound page",
+			  p, true);
 	validate_memory(p, 0, page_size);
 	munmap(p, hpage_pmd_size);
 }
 
-static void collapse_full_of_compound(void)
+static void collapse_full_of_compound(struct collapse_context *context)
 {
 	void *p;
 
@@ -742,17 +716,12 @@ static void collapse_full_of_compound(void)
 	else
 		fail("Fail");
 
-	if (wait_for_scan("Collapse PTE table full of compound pages", p))
-		fail("Timeout");
-	else if (check_huge(p))
-		success("OK");
-	else
-		fail("Fail");
+	context->collapse("Collapse PTE table full of compound pages", p, true);
 	validate_memory(p, 0, hpage_pmd_size);
 	munmap(p, hpage_pmd_size);
 }
 
-static void collapse_compound_extreme(void)
+static void collapse_compound_extreme(struct collapse_context *context)
 {
 	void *p;
 	int i;
@@ -798,18 +767,14 @@ static void collapse_compound_extreme(void)
 	else
 		fail("Fail");
 
-	if (wait_for_scan("Collapse PTE table full of different compound pages", p))
-		fail("Timeout");
-	else if (check_huge(p))
-		success("OK");
-	else
-		fail("Fail");
+	context->collapse("Collapse PTE table full of different compound pages",
+			  p, true);
 
 	validate_memory(p, 0, hpage_pmd_size);
 	munmap(p, hpage_pmd_size);
 }
 
-static void collapse_fork(void)
+static void collapse_fork(struct collapse_context *context)
 {
 	int wstatus;
 	void *p;
@@ -835,13 +800,8 @@ static void collapse_fork(void)
 			fail("Fail");
 
 		fill_memory(p, page_size, 2 * page_size);
-
-		if (wait_for_scan("Collapse PTE table with single page shared with parent process", p))
-			fail("Timeout");
-		else if (check_huge(p))
-			success("OK");
-		else
-			fail("Fail");
+		context->collapse("Collapse PTE table with single page shared with parent process",
+				  p, true);
 
 		validate_memory(p, 0, page_size);
 		munmap(p, hpage_pmd_size);
@@ -860,7 +820,7 @@ static void collapse_fork(void)
 	munmap(p, hpage_pmd_size);
 }
 
-static void collapse_fork_compound(void)
+static void collapse_fork_compound(struct collapse_context *context)
 {
 	int wstatus;
 	void *p;
@@ -896,14 +856,10 @@ static void collapse_fork_compound(void)
 		fill_memory(p, 0, page_size);
 
 		write_num("khugepaged/max_ptes_shared", hpage_pmd_nr - 1);
-		if (wait_for_scan("Collapse PTE table full of compound pages in child", p))
-			fail("Timeout");
-		else if (check_huge(p))
-			success("OK");
-		else
-			fail("Fail");
+		context->collapse("Collapse PTE table full of compound pages in child",
+				  p, true);
 		write_num("khugepaged/max_ptes_shared",
-				default_settings.khugepaged.max_ptes_shared);
+			  default_settings.khugepaged.max_ptes_shared);
 
 		validate_memory(p, 0, hpage_pmd_size);
 		munmap(p, hpage_pmd_size);
@@ -922,7 +878,7 @@ static void collapse_fork_compound(void)
 	munmap(p, hpage_pmd_size);
 }
 
-static void collapse_max_ptes_shared()
+static void collapse_max_ptes_shared(struct collapse_context *context)
 {
 	int max_ptes_shared = read_num("khugepaged/max_ptes_shared");
 	int wstatus;
@@ -957,28 +913,22 @@ static void collapse_max_ptes_shared()
 		else
 			fail("Fail");
 
-		if (wait_for_scan("Do not collapse with max_ptes_shared exceeded", p))
-			fail("Timeout");
-		else if (!check_huge(p))
-			success("OK");
-		else
-			fail("Fail");
-
-		printf("Trigger CoW on page %d of %d...",
-				hpage_pmd_nr - max_ptes_shared, hpage_pmd_nr);
-		fill_memory(p, 0, (hpage_pmd_nr - max_ptes_shared) * page_size);
-		if (!check_huge(p))
-			success("OK");
-		else
-			fail("Fail");
-
-
-		if (wait_for_scan("Collapse with max_ptes_shared PTEs shared", p))
-			fail("Timeout");
-		else if (check_huge(p))
-			success("OK");
-		else
-			fail("Fail");
+		context->collapse("Maybe collapse with max_ptes_shared exceeded",
+				  p, !context->enforce_pte_scan_limits);
+
+		if (context->enforce_pte_scan_limits) {
+			printf("Trigger CoW on page %d of %d...",
+			       hpage_pmd_nr - max_ptes_shared, hpage_pmd_nr);
+			fill_memory(p, 0, (hpage_pmd_nr - max_ptes_shared) *
+				    page_size);
+			if (!check_huge(p))
+				success("OK");
+			else
+				fail("Fail");
+
+			context->collapse("Collapse with max_ptes_shared PTEs shared",
+					  p, true);
+		}
 
 		validate_memory(p, 0, hpage_pmd_size);
 		munmap(p, hpage_pmd_size);
@@ -997,8 +947,27 @@ static void collapse_max_ptes_shared()
 	munmap(p, hpage_pmd_size);
 }
 
+static void khugepaged_collapse(const char *msg, char *p, bool expect)
+{
+	if (wait_for_scan(msg, p))
+		fail("Timeout");
+	else if (check_huge(p) == expect)
+		success("OK");
+	else
+		fail("Fail");
+}
+
 int main(void)
 {
+	struct collapse_context contexts[] = {
+		{
+			.name = "khugepaged",
+			.collapse = &khugepaged_collapse,
+			.enforce_pte_scan_limits = true,
+		},
+	};
+	int i;
+
 	setbuf(stdout, NULL);
 
 	page_size = getpagesize();
@@ -1014,18 +983,24 @@ int main(void)
 	adjust_settings();
 
 	alloc_at_fault();
-	collapse_full();
-	collapse_empty();
-	collapse_single_pte_entry();
-	collapse_max_ptes_none();
-	collapse_swapin_single_pte();
-	collapse_max_ptes_swap();
-	collapse_single_pte_entry_compound();
-	collapse_full_of_compound();
-	collapse_compound_extreme();
-	collapse_fork();
-	collapse_fork_compound();
-	collapse_max_ptes_shared();
+
+	for (i = 0; i < sizeof(contexts) / sizeof(contexts[0]); ++i) {
+		struct collapse_context *c = &contexts[i];
+
+		printf("\n*** Testing context: %s ***\n", c->name);
+		collapse_full(c);
+		collapse_empty(c);
+		collapse_single_pte_entry(c);
+		collapse_max_ptes_none(c);
+		collapse_swapin_single_pte(c);
+		collapse_max_ptes_swap(c);
+		collapse_single_pte_entry_compound(c);
+		collapse_full_of_compound(c);
+		collapse_compound_extreme(c);
+		collapse_fork(c);
+		collapse_fork_compound(c);
+		collapse_max_ptes_shared(c);
+	}
 
 	restore_settings(0);
 }
-- 
2.35.1.1178.g4f1659d476-goog



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 11/12] selftests/vm: add MADV_COLLAPSE collapse context to selftests
  2022-04-10 13:54 [PATCH 00/12] mm: userspace hugepage collapse Zach O'Keefe
                   ` (9 preceding siblings ...)
  2022-04-10 13:54 ` [PATCH 10/12] selftests/vm: modularize collapse selftests Zach O'Keefe
@ 2022-04-10 13:54 ` Zach O'Keefe
  2022-04-10 13:54 ` [PATCH 12/12] selftests/vm: add test to verify recollapse of THPs Zach O'Keefe
  11 siblings, 0 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-10 13:54 UTC (permalink / raw)
  To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
	Thomas Bogendoerfer, Zach O'Keefe

Add MADV_COLLAPSE selftests.  Extend struct collapse_context to support
context initialization/cleanup.  This is used by the madvise collapse
context to "disable" and "enable" khugepaged, since khugepaged would
otherwise interfere with the tests.

The mechanism used to "disable" khugepaged is a hack: it sets
/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs to a
large value and feeds khugepaged enough suitable VMAs/pages to keep
khugepaged sleeping for the duration of the madvise collapse tests.
Since khugepaged is woken when this file is written, enough VMAs must be
queued to put khugepaged back to sleep when the tests write to this file
in write_settings().
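
To make the __COUNTER__ trick concrete (an illustrative fragment, not
from the patch): every expansion of the macro consumes one value of the
preprocessor counter, in textual order, so reading __COUNTER__ once more
at the bottom of the file yields the number of callsites compiled in;
this is why the macro must be used at most once per callsite, and why
all callsites must appear before main() in the file:

	#define BUMP() do { __COUNTER__; } while (0)

	static void f(void)
	{
		BUMP();	/* expansion #0 */
		BUMP();	/* expansion #1 */
	}

	static int num_bumps = __COUNTER__;	/* == 2: BUMP() sites above */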

Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 tools/testing/selftests/vm/khugepaged.c | 133 ++++++++++++++++++++++--
 1 file changed, 125 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/vm/khugepaged.c b/tools/testing/selftests/vm/khugepaged.c
index c59d832fee96..e0ccc9443f78 100644
--- a/tools/testing/selftests/vm/khugepaged.c
+++ b/tools/testing/selftests/vm/khugepaged.c
@@ -14,17 +14,23 @@
 #ifndef MADV_PAGEOUT
 #define MADV_PAGEOUT 21
 #endif
+#ifndef MADV_COLLAPSE
+#define MADV_COLLAPSE 25
+#endif
 
 #define BASE_ADDR ((void *)(1UL << 30))
 static unsigned long hpage_pmd_size;
 static unsigned long page_size;
 static int hpage_pmd_nr;
+static int num_khugepaged_wakeups;
 
 #define THP_SYSFS "/sys/kernel/mm/transparent_hugepage/"
 #define PID_SMAPS "/proc/self/smaps"
 
 struct collapse_context {
 	const char *name;
+	bool (*init_context)(void);
+	bool (*cleanup_context)(void);
 	void (*collapse)(const char *msg, char *p, bool expect);
 	bool enforce_pte_scan_limits;
 };
@@ -264,6 +270,17 @@ static void write_num(const char *name, unsigned long num)
 	}
 }
 
+/*
+ * Use this macro instead of write_settings() inside tests; it should be
+ * called at most once per callsite.
+ *
+ * This hack uses __COUNTER__ to statically count the number of times
+ * khugepaged is woken up by writes to
+ * /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs;
+ * the total is read back via __COUNTER__ in main().
+ */
+#define WRITE_SETTINGS(s) do { __COUNTER__; write_settings(s); } while (0)
+
 static void write_settings(struct settings *settings)
 {
 	struct khugepaged_settings *khugepaged = &settings->khugepaged;
@@ -332,7 +349,7 @@ static void adjust_settings(void)
 {
 
 	printf("Adjust settings...");
-	write_settings(&default_settings);
+	WRITE_SETTINGS(&default_settings);
 	success("OK");
 }
 
@@ -440,20 +457,25 @@ static bool check_swap(void *addr, unsigned long size)
 	return swap;
 }
 
-static void *alloc_mapping(void)
+static void *alloc_mapping_at(void *at, size_t size)
 {
 	void *p;
 
-	p = mmap(BASE_ADDR, hpage_pmd_size, PROT_READ | PROT_WRITE,
-			MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
-	if (p != BASE_ADDR) {
-		printf("Failed to allocate VMA at %p\n", BASE_ADDR);
+	p = mmap(at, size, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE,
+		 -1, 0);
+	if (p != at) {
+		printf("Failed to allocate VMA at %p\n", at);
 		exit(EXIT_FAILURE);
 	}
 
 	return p;
 }
 
+static void *alloc_mapping(void)
+{
+	return alloc_mapping_at(BASE_ADDR, hpage_pmd_size);
+}
+
 static void fill_memory(int *p, unsigned long start, unsigned long end)
 {
 	int i;
@@ -573,7 +595,7 @@ static void collapse_max_ptes_none(struct collapse_context *context)
 	void *p;
 
 	settings.khugepaged.max_ptes_none = max_ptes_none;
-	write_settings(&settings);
+	WRITE_SETTINGS(&settings);
 
 	p = alloc_mapping();
 
@@ -591,7 +613,7 @@ static void collapse_max_ptes_none(struct collapse_context *context)
 	}
 
 	munmap(p, hpage_pmd_size);
-	write_settings(&default_settings);
+	WRITE_SETTINGS(&default_settings);
 }
 
 static void collapse_swapin_single_pte(struct collapse_context *context)
@@ -947,6 +969,87 @@ static void collapse_max_ptes_shared(struct collapse_context *context)
 	munmap(p, hpage_pmd_size);
 }
 
+static void madvise_collapse(const char *msg, char *p, bool expect)
+{
+	int ret;
+
+	printf("%s...", msg);
+	/* Sanity check */
+	if (check_huge(p)) {
+		printf("Unexpected huge page\n");
+		exit(EXIT_FAILURE);
+	}
+
+	madvise(p, hpage_pmd_size, MADV_HUGEPAGE);
+	ret = madvise(p, hpage_pmd_size, MADV_COLLAPSE);
+	if (((bool)ret) == expect)
+		fail("Fail: Bad return value");
+	else if (check_huge(p) != expect)
+		fail("Fail: check_huge()");
+	else
+		success("OK");
+}
+
+static struct khugepaged_disable_state {
+	void *p;
+	size_t map_size;
+} khugepaged_disable_state;
+
+static bool disable_khugepaged(void)
+{
+	/*
+	 * Hack to "disable" khugepaged by setting
+	 * /transparent_hugepage/khugepaged/scan_sleep_millisecs to some large
+	 * value, then feeding it enough suitable VMAs to scan and subsequently
+	 * sleep.
+	 *
+	 * khugepaged is woken up on writes to
+	 * /transparent_hugepage/khugepaged/scan_sleep_millisecs, so care must
+	 * be taken to not inadvertently wake khugepaged in these tests.
+	 *
+	 * Feed khugepaged 1 hugepage-sized VMA to scan and sleep on, then
+	 * N more for each time khugepaged would be woken up.
+	 */
+	size_t map_size = (num_khugepaged_wakeups + 1) * hpage_pmd_size;
+	void *p;
+	bool ret = true;
+	int full_scans;
+	int timeout = 6;  /* 3 seconds */
+
+	default_settings.khugepaged.scan_sleep_millisecs = 1000 * 60 * 10;
+	default_settings.khugepaged.pages_to_scan = 1;
+	write_settings(&default_settings);
+
+	p = alloc_mapping_at(((char *)BASE_ADDR) + (1UL << 30), map_size);
+	fill_memory(p, 0, map_size);
+
+	full_scans = read_num("khugepaged/full_scans") + 2;
+
+	printf("disabling khugepaged...");
+	while (timeout--) {
+		if (read_num("khugepaged/full_scans") >= full_scans) {
+			fail("Fail");
+			ret = false;
+			break;
+		}
+		printf(".");
+		usleep(TICK);
+	}
+	success("OK");
+	khugepaged_disable_state.p = p;
+	khugepaged_disable_state.map_size = map_size;
+	return ret;
+}
+
+static bool enable_khugepaged(void)
+{
+	printf("enabling khugepaged...");
+	munmap(khugepaged_disable_state.p, khugepaged_disable_state.map_size);
+	write_settings(&saved_settings);
+	success("OK");
+	return true;
+}
+
 static void khugepaged_collapse(const char *msg, char *p, bool expect)
 {
 	if (wait_for_scan(msg, p))
@@ -962,9 +1065,18 @@ int main(void)
 	struct collapse_context contexts[] = {
 		{
 			.name = "khugepaged",
+			.init_context = NULL,
+			.cleanup_context = NULL,
 			.collapse = &khugepaged_collapse,
 			.enforce_pte_scan_limits = true,
 		},
+		{
+			.name = "madvise",
+			.init_context = &disable_khugepaged,
+			.cleanup_context = &enable_khugepaged,
+			.collapse = &madvise_collapse,
+			.enforce_pte_scan_limits = false,
+		},
 	};
 	int i;
 
@@ -973,6 +1085,7 @@ int main(void)
 	page_size = getpagesize();
 	hpage_pmd_size = read_num("hpage_pmd_size");
 	hpage_pmd_nr = hpage_pmd_size / page_size;
+	num_khugepaged_wakeups = __COUNTER__;
 
 	default_settings.khugepaged.max_ptes_none = hpage_pmd_nr - 1;
 	default_settings.khugepaged.max_ptes_swap = hpage_pmd_nr / 8;
@@ -988,6 +1101,8 @@ int main(void)
 		struct collapse_context *c = &contexts[i];
 
 		printf("\n*** Testing context: %s ***\n", c->name);
+		if (c->init_context && !c->init_context())
+			continue;
 		collapse_full(c);
 		collapse_empty(c);
 		collapse_single_pte_entry(c);
@@ -1000,6 +1115,8 @@ int main(void)
 		collapse_fork(c);
 		collapse_fork_compound(c);
 		collapse_max_ptes_shared(c);
+		if (c->cleanup_context && !c->cleanup_context())
+			break;
 	}
 
 	restore_settings(0);
-- 
2.35.1.1178.g4f1659d476-goog
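
As with the other vm selftests, these are presumably built and run via
the usual kselftest flow, e.g. "make -C tools/testing/selftests/vm" and
then running ./khugepaged as root on a THP-enabled kernel, since the
tests write to the sysfs THP settings.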



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 12/12] selftests/vm: add test to verify recollapse of THPs
  2022-04-10 13:54 [PATCH 00/12] mm: userspace hugepage collapse Zach O'Keefe
                   ` (10 preceding siblings ...)
  2022-04-10 13:54 ` [PATCH 11/12] selftests/vm: add MADV_COLLAPSE collapse context to selftests Zach O'Keefe
@ 2022-04-10 13:54 ` Zach O'Keefe
  11 siblings, 0 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-10 13:54 UTC (permalink / raw)
  To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia, Pavel Begunkov, Peter Xu,
	Thomas Bogendoerfer, Zach O'Keefe

Add a selftest specific to the madvise collapse context that verifies
MADV_COLLAPSE is "successful" if a hugepage-aligned/sized region is
already pmd-mapped.

Signed-off-by: Zach O'Keefe <zokeefe@google.com>
---
 tools/testing/selftests/vm/khugepaged.c | 32 +++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/tools/testing/selftests/vm/khugepaged.c b/tools/testing/selftests/vm/khugepaged.c
index e0ccc9443f78..c36d04218083 100644
--- a/tools/testing/selftests/vm/khugepaged.c
+++ b/tools/testing/selftests/vm/khugepaged.c
@@ -969,6 +969,32 @@ static void collapse_max_ptes_shared(struct collapse_context *context)
 	munmap(p, hpage_pmd_size);
 }
 
+static void madvise_collapse_existing_thps(void)
+{
+	void *p;
+	int err;
+
+	p = alloc_mapping();
+	fill_memory(p, 0, hpage_pmd_size);
+
+	printf("Collapse fully populated PTE table...");
+	madvise(p, hpage_pmd_size, MADV_HUGEPAGE);
+	err = madvise(p, hpage_pmd_size, MADV_COLLAPSE);
+	if (err == 0 && check_huge(p)) {
+		success("OK");
+		printf("Re-collapse PMD-mapped hugepage");
+		err = madvise(p, hpage_pmd_size, MADV_COLLAPSE);
+		if (err == 0 && check_huge(p))
+			success("OK");
+		else
+			fail("Fail");
+	} else {
+		fail("Fail");
+	}
+	validate_memory(p, 0, hpage_pmd_size);
+	munmap(p, hpage_pmd_size);
+}
+
 static void madvise_collapse(const char *msg, char *p, bool expect)
 {
 	int ret;
@@ -1097,6 +1123,7 @@ int main(void)
 
 	alloc_at_fault();
 
+	/* Shared tests */
 	for (i = 0; i < sizeof(contexts) / sizeof(contexts[0]); ++i) {
 		struct collapse_context *c = &contexts[i];
 
@@ -1119,5 +1146,10 @@ int main(void)
 			break;
 	}
 
+	/* madvise-specific tests */
+	disable_khugepaged();
+	madvise_collapse_existing_thps();
+	enable_khugepaged();
+
 	restore_settings(0);
 }
-- 
2.35.1.1178.g4f1659d476-goog



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH 05/12] mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse
  2022-04-10 13:54 ` [PATCH 05/12] mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse Zach O'Keefe
@ 2022-04-10 16:04   ` kernel test robot
  2022-04-10 16:14   ` kernel test robot
  1 sibling, 0 replies; 24+ messages in thread
From: kernel test robot @ 2022-04-10 16:04 UTC (permalink / raw)
  To: Zach O'Keefe, Alex Shi, David Hildenbrand, David Rientjes,
	Matthew Wilcox, Michal Hocko, Pasha Tatashin, SeongJae Park,
	Song Liu, Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: kbuild-all, Andrea Arcangeli, Andrew Morton,
	Linux Memory Management List, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia

Hi Zach,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on hnaz-mm/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
base:   https://github.com/hnaz/linux-mm master
config: alpha-defconfig (https://download.01.org/0day-ci/archive/20220410/202204102324.NLVoP3qG-lkp@intel.com/config)
compiler: alpha-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/4f4775a3e4a722525787b2c309032810356473c2
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
        git checkout 4f4775a3e4a722525787b2c309032810356473c2
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=alpha SHELL=/bin/bash

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   mm/madvise.c: In function 'madvise_need_mmap_write':
>> mm/madvise.c:62:14: error: 'MADV_COLLAPSE' undeclared (first use in this function); did you mean 'MADV_COLD'?
      62 |         case MADV_COLLAPSE:
         |              ^~~~~~~~~~~~~
         |              MADV_COLD
   mm/madvise.c:62:14: note: each undeclared identifier is reported only once for each function it appears in
   mm/madvise.c: In function 'madvise_vma_behavior':
   mm/madvise.c:1055:14: error: 'MADV_COLLAPSE' undeclared (first use in this function); did you mean 'MADV_COLD'?
    1055 |         case MADV_COLLAPSE:
         |              ^~~~~~~~~~~~~
         |              MADV_COLD


vim +62 mm/madvise.c

    44	
    45	/*
    46	 * Any behaviour which results in changes to the vma->vm_flags needs to
    47	 * take mmap_lock for writing. Others, which simply traverse vmas, need
    48	 * to only take it for reading.
    49	 */
    50	static int madvise_need_mmap_write(int behavior)
    51	{
    52		switch (behavior) {
    53		case MADV_REMOVE:
    54		case MADV_WILLNEED:
    55		case MADV_DONTNEED:
    56		case MADV_DONTNEED_LOCKED:
    57		case MADV_COLD:
    58		case MADV_PAGEOUT:
    59		case MADV_FREE:
    60		case MADV_POPULATE_READ:
    61		case MADV_POPULATE_WRITE:
  > 62		case MADV_COLLAPSE:
    63			return 0;
    64		default:
    65			/* be safe, default to 1. list exceptions explicitly */
    66			return 1;
    67		}
    68	}
    69	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp
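
The failure is because alpha defines its MADV_* constants in its own
uapi header rather than including asm-generic/mman-common.h, so patch
05's addition is invisible there (the mips report below trips over the
same thing).  A likely fix, sketched by analogy with how
MADV_DONTNEED_LOCKED was wired up and not taken from this series, is to
mirror the definition into each such arch header (alpha, mips, parisc,
xtensa), keeping the value in sync with asm-generic:

	/* e.g. in arch/alpha/include/uapi/asm/mman.h */
	#define MADV_COLLAPSE	25	/* Synchronous hugepage collapse */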


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 05/12] mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse
  2022-04-10 13:54 ` [PATCH 05/12] mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse Zach O'Keefe
  2022-04-10 16:04   ` kernel test robot
@ 2022-04-10 16:14   ` kernel test robot
  2022-04-11 17:18       ` Zach O'Keefe
  1 sibling, 1 reply; 24+ messages in thread
From: kernel test robot @ 2022-04-10 16:14 UTC (permalink / raw)
  To: Zach O'Keefe, Alex Shi, David Hildenbrand, David Rientjes,
	Matthew Wilcox, Michal Hocko, Pasha Tatashin, SeongJae Park,
	Song Liu, Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: llvm, kbuild-all, Andrea Arcangeli, Andrew Morton,
	Linux Memory Management List, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia

Hi Zach,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on hnaz-mm/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
base:   https://github.com/hnaz/linux-mm master
config: mips-randconfig-r002-20220410 (https://download.01.org/0day-ci/archive/20220411/202204110059.a0PLTrVC-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 256c6b0ba14e8a7ab6373b61b7193ea8c0a3651c)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install mips cross compiling tool for clang build
        # apt-get install binutils-mips-linux-gnu
        # https://github.com/intel-lab-lkp/linux/commit/4f4775a3e4a722525787b2c309032810356473c2
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
        git checkout 4f4775a3e4a722525787b2c309032810356473c2
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=mips SHELL=/bin/bash

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> mm/madvise.c:62:7: error: use of undeclared identifier 'MADV_COLLAPSE'
           case MADV_COLLAPSE:
                ^
   mm/madvise.c:1055:7: error: use of undeclared identifier 'MADV_COLLAPSE'
           case MADV_COLLAPSE:
                ^
   2 errors generated.


vim +/MADV_COLLAPSE +62 mm/madvise.c

    44	
    45	/*
    46	 * Any behaviour which results in changes to the vma->vm_flags needs to
    47	 * take mmap_lock for writing. Others, which simply traverse vmas, need
    48	 * to only take it for reading.
    49	 */
    50	static int madvise_need_mmap_write(int behavior)
    51	{
    52		switch (behavior) {
    53		case MADV_REMOVE:
    54		case MADV_WILLNEED:
    55		case MADV_DONTNEED:
    56		case MADV_DONTNEED_LOCKED:
    57		case MADV_COLD:
    58		case MADV_PAGEOUT:
    59		case MADV_FREE:
    60		case MADV_POPULATE_READ:
    61		case MADV_POPULATE_WRITE:
  > 62		case MADV_COLLAPSE:
    63			return 0;
    64		default:
    65			/* be safe, default to 1. list exceptions explicitly */
    66			return 1;
    67		}
    68	}
    69	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 06/12] mm/khugepaged: remove khugepaged prefix from shared collapse functions
  2022-04-10 13:54 ` [PATCH 06/12] mm/khugepaged: remove khugepaged prefix from shared collapse functions Zach O'Keefe
@ 2022-04-10 17:06   ` kernel test robot
  2022-04-11 17:42       ` Zach O'Keefe
  0 siblings, 1 reply; 24+ messages in thread
From: kernel test robot @ 2022-04-10 17:06 UTC (permalink / raw)
  To: Zach O'Keefe, Alex Shi, David Hildenbrand, David Rientjes,
	Matthew Wilcox, Michal Hocko, Pasha Tatashin, SeongJae Park,
	Song Liu, Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: kbuild-all, Andrea Arcangeli, Andrew Morton,
	Linux Memory Management List, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia

Hi Zach,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on hnaz-mm/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
base:   https://github.com/hnaz/linux-mm master
config: arc-allyesconfig (https://download.01.org/0day-ci/archive/20220411/202204110041.MnMCeEi6-lkp@intel.com/config)
compiler: arceb-elf-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/18407cfcbdad0f4e11dfe2e40028687fc64093c5
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
        git checkout 18407cfcbdad0f4e11dfe2e40028687fc64093c5
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=arc SHELL=/bin/bash

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   mm/khugepaged.c: In function 'find_pmd_or_thp_or_none':
   mm/khugepaged.c:1019:9: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]
    1019 |         pmd_t pmde;
         |         ^~~~~
   mm/khugepaged.c: In function 'khugepaged':
   mm/khugepaged.c:2355:32: error: initialization of 'struct page * (*)(struct collapse_control *, gfp_t,  int)' {aka 'struct page * (*)(struct collapse_control *, unsigned int,  int)'} from incompatible pointer type 'struct page * (*)(struct collapse_control *, gfp_t)' {aka 'struct page * (*)(struct collapse_control *, unsigned int)'} [-Werror=incompatible-pointer-types]
    2355 |                 .alloc_hpage = &khugepaged_alloc_page,
         |                                ^
   mm/khugepaged.c:2355:32: note: (near initialization for 'cc.alloc_hpage')
   mm/khugepaged.c: At top level:
   mm/khugepaged.c:2469:5: warning: no previous prototype for 'madvise_collapse_errno' [-Wmissing-prototypes]
    2469 | int madvise_collapse_errno(enum scan_result r)
         |     ^~~~~~~~~~~~~~~~~~~~~~
   mm/khugepaged.c:914:12: warning: 'khugepaged_find_target_node' defined but not used [-Wunused-function]
     914 | static int khugepaged_find_target_node(struct collapse_control *cc)
         |            ^~~~~~~~~~~~~~~~~~~~~~~~~~~
   mm/khugepaged.c: In function 'collapse_file':
>> mm/khugepaged.c:863:55: warning: array subscript -2147483648 is below array bounds of 'int[1]' [-Warray-bounds]
     863 |                         if (max_value == cc->node_load[nid]) {
         |                                          ~~~~~~~~~~~~~^~~~~
   mm/khugepaged.c:91:13: note: while referencing 'node_load'
      91 |         int node_load[MAX_NUMNODES];
         |             ^~~~~~~~~
   mm/khugepaged.c: In function 'collapse_huge_page':
>> mm/khugepaged.c:863:55: warning: array subscript -2147483648 is below array bounds of 'int[1]' [-Warray-bounds]
     863 |                         if (max_value == cc->node_load[nid]) {
         |                                          ~~~~~~~~~~~~~^~~~~
   mm/khugepaged.c:91:13: note: while referencing 'node_load'
      91 |         int node_load[MAX_NUMNODES];
         |             ^~~~~~~~~
   cc1: some warnings being treated as errors


vim +863 mm/khugepaged.c

b46e756f5e4703 Kirill A. Shutemov 2016-07-26  847  
18407cfcbdad0f Zach O'Keefe       2022-04-10  848  static int find_target_node(struct collapse_control *cc)
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  849  {
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  850  	int nid, target_node = 0, max_value = 0;
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  851  
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  852  	/* find first node with max normal pages hit */
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  853  	for (nid = 0; nid < MAX_NUMNODES; nid++)
b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  854  		if (cc->node_load[nid] > max_value) {
b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  855  			max_value = cc->node_load[nid];
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  856  			target_node = nid;
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  857  		}
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  858  
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  859  	/* do some balance if several nodes have the same hit record */
b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  860  	if (target_node <= cc->last_target_node)
b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  861  		for (nid = cc->last_target_node + 1; nid < MAX_NUMNODES;
b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  862  		     nid++) {
b6a99a2eb2cc19 Zach O'Keefe       2022-04-10 @863  			if (max_value == cc->node_load[nid]) {
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  864  				target_node = nid;
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  865  				break;
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  866  			}
b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  867  		}
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  868  
b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  869  	cc->last_target_node = target_node;
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  870  	return target_node;
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  871  }
b46e756f5e4703 Kirill A. Shutemov 2016-07-26  872  

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 03/12] mm/khugepaged: make hugepage allocation context-specific
  2022-04-10 13:54 ` [PATCH 03/12] mm/khugepaged: make hugepage allocation context-specific Zach O'Keefe
@ 2022-04-10 17:47   ` kernel test robot
  2022-04-10 17:47   ` kernel test robot
  1 sibling, 0 replies; 24+ messages in thread
From: kernel test robot @ 2022-04-10 17:47 UTC (permalink / raw)
  To: Zach O'Keefe, Alex Shi, David Hildenbrand, David Rientjes,
	Matthew Wilcox, Michal Hocko, Pasha Tatashin, SeongJae Park,
	Song Liu, Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: llvm, kbuild-all, Andrea Arcangeli, Andrew Morton,
	Linux Memory Management List, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia

Hi Zach,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on hnaz-mm/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
base:   https://github.com/hnaz/linux-mm master
config: i386-randconfig-a002 (https://download.01.org/0day-ci/archive/20220411/202204110122.yKx76cVq-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 256c6b0ba14e8a7ab6373b61b7193ea8c0a3651c)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/93731be575c612b28ee4c7711ebab9e81960f213
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
        git checkout 93731be575c612b28ee4c7711ebab9e81960f213
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   mm/khugepaged.c:1006:8: warning: mixing declarations and code is incompatible with standards before C99 [-Wdeclaration-after-statement]
           pmd_t pmde;
                 ^
>> mm/khugepaged.c:2339:18: error: incompatible function pointer types initializing 'struct page *(*)(struct collapse_control *, gfp_t, int)' (aka 'struct page *(*)(struct collapse_control *, unsigned int, int)') with an expression of type 'struct page *(*)(struct collapse_control *, gfp_t)' (aka 'struct page *(*)(struct collapse_control *, unsigned int)') [-Werror,-Wincompatible-function-pointer-types]
                   .alloc_hpage = &khugepaged_alloc_page,
                                  ^~~~~~~~~~~~~~~~~~~~~~
   1 warning and 1 error generated.


vim +2339 mm/khugepaged.c

  2333	
  2334	static int khugepaged(void *none)
  2335	{
  2336		struct mm_slot *mm_slot;
  2337		struct collapse_control cc = {
  2338			.last_target_node = NUMA_NO_NODE,
> 2339			.alloc_hpage = &khugepaged_alloc_page,
  2340		};
  2341	
  2342		set_freezable();
  2343		set_user_nice(current, MAX_NICE);
  2344	
  2345		while (!kthread_should_stop()) {
  2346			khugepaged_do_scan(&cc);
  2347			khugepaged_wait_work();
  2348		}
  2349	
  2350		spin_lock(&khugepaged_mm_lock);
  2351		mm_slot = khugepaged_scan.mm_slot;
  2352		khugepaged_scan.mm_slot = NULL;
  2353		if (mm_slot)
  2354			collect_mm_slot(mm_slot);
  2355		spin_unlock(&khugepaged_mm_lock);
  2356		return 0;
  2357	}
  2358	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 03/12] mm/khugepaged: make hugepage allocation context-specific
  2022-04-10 13:54 ` [PATCH 03/12] mm/khugepaged: make hugepage allocation context-specific Zach O'Keefe
  2022-04-10 17:47   ` kernel test robot
@ 2022-04-10 17:47   ` kernel test robot
  2022-04-11 17:28       ` Zach O'Keefe
  1 sibling, 1 reply; 24+ messages in thread
From: kernel test robot @ 2022-04-10 17:47 UTC (permalink / raw)
  To: Zach O'Keefe, Alex Shi, David Hildenbrand, David Rientjes,
	Matthew Wilcox, Michal Hocko, Pasha Tatashin, SeongJae Park,
	Song Liu, Vlastimil Babka, Yang Shi, Zi Yan, linux-mm
  Cc: kbuild-all, Andrea Arcangeli, Andrew Morton,
	Linux Memory Management List, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia

Hi Zach,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on hnaz-mm/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
base:   https://github.com/hnaz/linux-mm master
config: i386-randconfig-a001 (https://download.01.org/0day-ci/archive/20220411/202204110146.7vOFQ9VD-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.2.0-19) 11.2.0
reproduce (this is a W=1 build):
        # https://github.com/intel-lab-lkp/linux/commit/93731be575c612b28ee4c7711ebab9e81960f213
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
        git checkout 93731be575c612b28ee4c7711ebab9e81960f213
        # save the config file to linux build tree
        mkdir build_dir
        make W=1 O=build_dir ARCH=i386 SHELL=/bin/bash

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   mm/khugepaged.c: In function 'find_pmd_or_thp_or_none':
   mm/khugepaged.c:1006:9: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]
    1006 |         pmd_t pmde;
         |         ^~~~~
   mm/khugepaged.c: In function 'khugepaged':
>> mm/khugepaged.c:2339:32: error: initialization of 'struct page * (*)(struct collapse_control *, gfp_t,  int)' {aka 'struct page * (*)(struct collapse_control *, unsigned int,  int)'} from incompatible pointer type 'struct page * (*)(struct collapse_control *, gfp_t)' {aka 'struct page * (*)(struct collapse_control *, unsigned int)'} [-Werror=incompatible-pointer-types]
    2339 |                 .alloc_hpage = &khugepaged_alloc_page,
         |                                ^
   mm/khugepaged.c:2339:32: note: (near initialization for 'cc.alloc_hpage')
   cc1: some warnings being treated as errors


vim +2339 mm/khugepaged.c

  2333	
  2334	static int khugepaged(void *none)
  2335	{
  2336		struct mm_slot *mm_slot;
  2337		struct collapse_control cc = {
  2338			.last_target_node = NUMA_NO_NODE,
> 2339			.alloc_hpage = &khugepaged_alloc_page,
  2340		};
  2341	
  2342		set_freezable();
  2343		set_user_nice(current, MAX_NICE);
  2344	
  2345		while (!kthread_should_stop()) {
  2346			khugepaged_do_scan(&cc);
  2347			khugepaged_wait_work();
  2348		}
  2349	
  2350		spin_lock(&khugepaged_mm_lock);
  2351		mm_slot = khugepaged_scan.mm_slot;
  2352		khugepaged_scan.mm_slot = NULL;
  2353		if (mm_slot)
  2354			collect_mm_slot(mm_slot);
  2355		spin_unlock(&khugepaged_mm_lock);
  2356		return 0;
  2357	}
  2358	

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 05/12] mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse
  2022-04-10 16:14   ` kernel test robot
@ 2022-04-11 17:18       ` Zach O'Keefe
  0 siblings, 0 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-11 17:18 UTC (permalink / raw)
  To: kernel test robot
  Cc: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm, llvm, kbuild-all,
	Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia

Sorry about this. I will add MADV_COLLAPSE support for:

alpha
mips
parisc
xtensa

in the respective arch/$ARCH/include/uapi/asm/mman.h files.
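
The per-arch change itself is just the new madvise mode next to the
existing definitions. A minimal sketch of what I have in mind (the
value 25 and its neighbors are my assumption here; the final value
must match whatever asm-generic/mman-common.h defines):

/* e.g. arch/alpha/include/uapi/asm/mman.h; likewise for mips, parisc
 * and xtensa. Values assumed to mirror asm-generic/mman-common.h. */
#define MADV_POPULATE_WRITE	23	/* populate (prefault) page tables writable */
#define MADV_DONTNEED_LOCKED	24	/* like DONTNEED, but drop locked pages too */
#define MADV_COLLAPSE		25	/* synchronous collapse into THPs */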

On Sun, Apr 10, 2022 at 11:15 AM kernel test robot <lkp@intel.com> wrote:
>
> Hi Zach,
>
> Thank you for the patch! Yet something to improve:
>
> [auto build test ERROR on hnaz-mm/master]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
> base:   https://github.com/hnaz/linux-mm master
> config: mips-randconfig-r002-20220410 (https://download.01.org/0day-ci/archive/20220411/202204110059.a0PLTrVC-lkp@intel.com/config)
> compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 256c6b0ba14e8a7ab6373b61b7193ea8c0a3651c)
> reproduce (this is a W=1 build):
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # install mips cross compiling tool for clang build
>         # apt-get install binutils-mips-linux-gnu
>         # https://github.com/intel-lab-lkp/linux/commit/4f4775a3e4a722525787b2c309032810356473c2
>         git remote add linux-review https://github.com/intel-lab-lkp/linux
>         git fetch --no-tags linux-review Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
>         git checkout 4f4775a3e4a722525787b2c309032810356473c2
>         # save the config file to linux build tree
>         mkdir build_dir
>         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=mips SHELL=/bin/bash
>
> If you fix the issue, kindly add the following tag as appropriate:
> Reported-by: kernel test robot <lkp@intel.com>
>
> All errors (new ones prefixed by >>):
>
> >> mm/madvise.c:62:7: error: use of undeclared identifier 'MADV_COLLAPSE'
>            case MADV_COLLAPSE:
>                 ^
>    mm/madvise.c:1055:7: error: use of undeclared identifier 'MADV_COLLAPSE'
>            case MADV_COLLAPSE:
>                 ^
>    2 errors generated.
>
>
> vim +/MADV_COLLAPSE +62 mm/madvise.c
>
>     44
>     45  /*
>     46   * Any behaviour which results in changes to the vma->vm_flags needs to
>     47   * take mmap_lock for writing. Others, which simply traverse vmas, need
>     48   * to only take it for reading.
>     49   */
>     50  static int madvise_need_mmap_write(int behavior)
>     51  {
>     52          switch (behavior) {
>     53          case MADV_REMOVE:
>     54          case MADV_WILLNEED:
>     55          case MADV_DONTNEED:
>     56          case MADV_DONTNEED_LOCKED:
>     57          case MADV_COLD:
>     58          case MADV_PAGEOUT:
>     59          case MADV_FREE:
>     60          case MADV_POPULATE_READ:
>     61          case MADV_POPULATE_WRITE:
>   > 62          case MADV_COLLAPSE:
>     63                  return 0;
>     64          default:
>     65                  /* be safe, default to 1. list exceptions explicitly */
>     66                  return 1;
>     67          }
>     68  }
>     69
>
> --
> 0-DAY CI Kernel Test Service
> https://01.org/lkp
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 03/12] mm/khugepaged: make hugepage allocation context-specific
  2022-04-10 17:47   ` kernel test robot
@ 2022-04-11 17:28       ` Zach O'Keefe
  0 siblings, 0 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-11 17:28 UTC (permalink / raw)
  To: kernel test robot
  Cc: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm, kbuild-all,
	Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia

Sorry about this. I thought I had build-tested the !NUMA +
TRANSPARENT_HUGEPAGE combination. Fixed.
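
For reference, the breakage reduces to a plain function-pointer type
mismatch: under !NUMA, khugepaged_alloc_page() lacks the node argument
that the alloc_hpage callback type expects. A self-contained sketch of
the mismatch and the fix (stub types and bodies, purely illustrative,
not the kernel code):

#include <stddef.h>

typedef unsigned int gfp_t;
struct page;

struct collapse_control {
	struct page *(*alloc_hpage)(struct collapse_control *cc,
				    gfp_t gfp, int node);
};

/* Broken (!NUMA) shape: the node argument is missing. */
static struct page *alloc_no_node(struct collapse_control *cc, gfp_t gfp)
{
	(void)cc; (void)gfp;
	return NULL;
}

/* Fixed shape: identical signature in both configs; the !NUMA variant
 * simply ignores the node hint. */
static struct page *alloc_with_node(struct collapse_control *cc,
				    gfp_t gfp, int node)
{
	(void)cc; (void)gfp; (void)node;
	return NULL;
}

int main(void)
{
	struct collapse_control cc = {
		/* .alloc_hpage = alloc_no_node, -- reproduces the
		 * reported -Werror=incompatible-pointer-types */
		.alloc_hpage = alloc_with_node,	/* types now match */
	};

	(void)alloc_no_node;	/* keep -Wunused-function quiet here */
	return cc.alloc_hpage(&cc, 0, 0) != NULL;
}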

On Sun, Apr 10, 2022 at 12:48 PM kernel test robot <lkp@intel.com> wrote:
>
> Hi Zach,
>
> Thank you for the patch! Yet something to improve:
>
> [auto build test ERROR on hnaz-mm/master]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
> base:   https://github.com/hnaz/linux-mm master
> config: i386-randconfig-a001 (https://download.01.org/0day-ci/archive/20220411/202204110146.7vOFQ9VD-lkp@intel.com/config)
> compiler: gcc-11 (Debian 11.2.0-19) 11.2.0
> reproduce (this is a W=1 build):
>         # https://github.com/intel-lab-lkp/linux/commit/93731be575c612b28ee4c7711ebab9e81960f213
>         git remote add linux-review https://github.com/intel-lab-lkp/linux
>         git fetch --no-tags linux-review Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
>         git checkout 93731be575c612b28ee4c7711ebab9e81960f213
>         # save the config file to linux build tree
>         mkdir build_dir
>         make W=1 O=build_dir ARCH=i386 SHELL=/bin/bash
>
> If you fix the issue, kindly add the following tag as appropriate:
> Reported-by: kernel test robot <lkp@intel.com>
>
> All errors (new ones prefixed by >>):
>
>    mm/khugepaged.c: In function 'find_pmd_or_thp_or_none':
>    mm/khugepaged.c:1006:9: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]
>     1006 |         pmd_t pmde;
>          |         ^~~~~
>    mm/khugepaged.c: In function 'khugepaged':
> >> mm/khugepaged.c:2339:32: error: initialization of 'struct page * (*)(struct collapse_control *, gfp_t,  int)' {aka 'struct page * (*)(struct collapse_control *, unsigned int,  int)'} from incompatible pointer type 'struct page * (*)(struct collapse_control *, gfp_t)' {aka 'struct page * (*)(struct collapse_control *, unsigned int)'} [-Werror=incompatible-pointer-types]
>     2339 |                 .alloc_hpage = &khugepaged_alloc_page,
>          |                                ^
>    mm/khugepaged.c:2339:32: note: (near initialization for 'cc.alloc_hpage')
>    cc1: some warnings being treated as errors
>
>
> vim +2339 mm/khugepaged.c
>
>   2333
>   2334  static int khugepaged(void *none)
>   2335  {
>   2336          struct mm_slot *mm_slot;
>   2337          struct collapse_control cc = {
>   2338                  .last_target_node = NUMA_NO_NODE,
> > 2339                  .alloc_hpage = &khugepaged_alloc_page,
>   2340          };
>   2341
>   2342          set_freezable();
>   2343          set_user_nice(current, MAX_NICE);
>   2344
>   2345          while (!kthread_should_stop()) {
>   2346                  khugepaged_do_scan(&cc);
>   2347                  khugepaged_wait_work();
>   2348          }
>   2349
>   2350          spin_lock(&khugepaged_mm_lock);
>   2351          mm_slot = khugepaged_scan.mm_slot;
>   2352          khugepaged_scan.mm_slot = NULL;
>   2353          if (mm_slot)
>   2354                  collect_mm_slot(mm_slot);
>   2355          spin_unlock(&khugepaged_mm_lock);
>   2356          return 0;
>   2357  }
>   2358
>
> --
> 0-DAY CI Kernel Test Service
> https://01.org/lkp
>


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 06/12] mm/khugepaged: remove khugepaged prefix from shared collapse functions
  2022-04-10 17:06   ` kernel test robot
@ 2022-04-11 17:42       ` Zach O'Keefe
  0 siblings, 0 replies; 24+ messages in thread
From: Zach O'Keefe @ 2022-04-11 17:42 UTC (permalink / raw)
  To: kernel test robot
  Cc: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
	Michal Hocko, Pasha Tatashin, SeongJae Park, Song Liu,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-mm, kbuild-all,
	Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
	Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
	Ivan Kokshaysky, James E.J. Bottomley, Jens Axboe,
	Kirill A. Shutemov, Matt Turner, Max Filippov, Miaohe Lin,
	Minchan Kim, Patrick Xia

Sorry about this. It was due to a misplaced "#ifdef CONFIG_NUMA" in
"[PATCH 05/12] mm/madvise: introduce MADV_COLLAPSE sync hugepage
collapse", and is fixed now.

On Sun, Apr 10, 2022 at 12:06 PM kernel test robot <lkp@intel.com> wrote:
>
> Hi Zach,
>
> Thank you for the patch! Perhaps something to improve:
>
> [auto build test WARNING on hnaz-mm/master]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
> base:   https://github.com/hnaz/linux-mm master
> config: arc-allyesconfig (https://download.01.org/0day-ci/archive/20220411/202204110041.MnMCeEi6-lkp@intel.com/config)
> compiler: arceb-elf-gcc (GCC) 11.2.0
> reproduce (this is a W=1 build):
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # https://github.com/intel-lab-lkp/linux/commit/18407cfcbdad0f4e11dfe2e40028687fc64093c5
>         git remote add linux-review https://github.com/intel-lab-lkp/linux
>         git fetch --no-tags linux-review Zach-O-Keefe/mm-userspace-hugepage-collapse/20220410-215722
>         git checkout 18407cfcbdad0f4e11dfe2e40028687fc64093c5
>         # save the config file to linux build tree
>         mkdir build_dir
>         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=arc SHELL=/bin/bash
>
> If you fix the issue, kindly add the following tag as appropriate:
> Reported-by: kernel test robot <lkp@intel.com>
>
> All warnings (new ones prefixed by >>):
>
>    mm/khugepaged.c: In function 'find_pmd_or_thp_or_none':
>    mm/khugepaged.c:1019:9: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]
>     1019 |         pmd_t pmde;
>          |         ^~~~~
>    mm/khugepaged.c: In function 'khugepaged':
>    mm/khugepaged.c:2355:32: error: initialization of 'struct page * (*)(struct collapse_control *, gfp_t,  int)' {aka 'struct page * (*)(struct collapse_control *, unsigned int,  int)'} from incompatible pointer type 'struct page * (*)(struct collapse_control *, gfp_t)' {aka 'struct page * (*)(struct collapse_control *, unsigned int)'} [-Werror=incompatible-pointer-types]
>     2355 |                 .alloc_hpage = &khugepaged_alloc_page,
>          |                                ^
>    mm/khugepaged.c:2355:32: note: (near initialization for 'cc.alloc_hpage')
>    mm/khugepaged.c: At top level:
>    mm/khugepaged.c:2469:5: warning: no previous prototype for 'madvise_collapse_errno' [-Wmissing-prototypes]
>     2469 | int madvise_collapse_errno(enum scan_result r)
>          |     ^~~~~~~~~~~~~~~~~~~~~~
>    mm/khugepaged.c:914:12: warning: 'khugepaged_find_target_node' defined but not used [-Wunused-function]
>      914 | static int khugepaged_find_target_node(struct collapse_control *cc)
>          |            ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>    mm/khugepaged.c: In function 'collapse_file':
> >> mm/khugepaged.c:863:55: warning: array subscript -2147483648 is below array bounds of 'int[1]' [-Warray-bounds]
>      863 |                         if (max_value == cc->node_load[nid]) {
>          |                                          ~~~~~~~~~~~~~^~~~~
>    mm/khugepaged.c:91:13: note: while referencing 'node_load'
>       91 |         int node_load[MAX_NUMNODES];
>          |             ^~~~~~~~~
>    mm/khugepaged.c: In function 'collapse_huge_page':
> >> mm/khugepaged.c:863:55: warning: array subscript -2147483648 is below array bounds of 'int[1]' [-Warray-bounds]
>      863 |                         if (max_value == cc->node_load[nid]) {
>          |                                          ~~~~~~~~~~~~~^~~~~
>    mm/khugepaged.c:91:13: note: while referencing 'node_load'
>       91 |         int node_load[MAX_NUMNODES];
>          |             ^~~~~~~~~
>    cc1: some warnings being treated as errors
>
>
> vim +863 mm/khugepaged.c
>
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  847
> 18407cfcbdad0f Zach O'Keefe       2022-04-10  848  static int find_target_node(struct collapse_control *cc)
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  849  {
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  850       int nid, target_node = 0, max_value = 0;
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  851
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  852       /* find first node with max normal pages hit */
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  853       for (nid = 0; nid < MAX_NUMNODES; nid++)
> b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  854               if (cc->node_load[nid] > max_value) {
> b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  855                       max_value = cc->node_load[nid];
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  856                       target_node = nid;
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  857               }
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  858
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  859       /* do some balance if several nodes have the same hit record */
> b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  860       if (target_node <= cc->last_target_node)
> b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  861               for (nid = cc->last_target_node + 1; nid < MAX_NUMNODES;
> b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  862                    nid++) {
> b6a99a2eb2cc19 Zach O'Keefe       2022-04-10 @863                       if (max_value == cc->node_load[nid]) {
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  864                               target_node = nid;
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  865                               break;
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  866                       }
> b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  867               }
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  868
> b6a99a2eb2cc19 Zach O'Keefe       2022-04-10  869       cc->last_target_node = target_node;
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  870       return target_node;
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  871  }
> b46e756f5e4703 Kirill A. Shutemov 2016-07-26  872
>
> --
> 0-DAY CI Kernel Test Service
> https://01.org/lkp
>


^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2022-04-11 17:42 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-10 13:54 [PATCH 00/12] mm: userspace hugepage collapse Zach O'Keefe
2022-04-10 13:54 ` [PATCH 01/12] mm/khugepaged: record SCAN_PMD_MAPPED when scan_pmd() finds THP Zach O'Keefe
2022-04-10 13:54 ` [PATCH 02/12] mm/khugepaged: add struct collapse_control Zach O'Keefe
2022-04-10 13:54 ` [PATCH 03/12] mm/khugepaged: make hugepage allocation context-specific Zach O'Keefe
2022-04-10 17:47   ` kernel test robot
2022-04-10 17:47   ` kernel test robot
2022-04-11 17:28     ` Zach O'Keefe
2022-04-10 13:54 ` [PATCH 04/12] mm/khugepaged: add struct collapse_result Zach O'Keefe
2022-04-10 13:54 ` [PATCH 05/12] mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse Zach O'Keefe
2022-04-10 16:04   ` kernel test robot
2022-04-10 16:14   ` kernel test robot
2022-04-11 17:18     ` Zach O'Keefe
2022-04-10 13:54 ` [PATCH 06/12] mm/khugepaged: remove khugepaged prefix from shared collapse functions Zach O'Keefe
2022-04-10 17:06   ` kernel test robot
2022-04-11 17:42     ` Zach O'Keefe
2022-04-10 13:54 ` [PATCH 07/12] mm/khugepaged: add flag to ignore khugepaged_max_ptes_* Zach O'Keefe
2022-04-10 13:54 ` [PATCH 08/12] mm/khugepaged: add flag to ignore page young/referenced requirement Zach O'Keefe
2022-04-10 13:54 ` [PATCH 09/12] mm/madvise: add MADV_COLLAPSE to process_madvise() Zach O'Keefe
2022-04-10 13:54 ` [PATCH 10/12] selftests/vm: modularize collapse selftests Zach O'Keefe
2022-04-10 13:54 ` [PATCH 11/12] selftests/vm: add MADV_COLLAPSE collapse context to selftests Zach O'Keefe
2022-04-10 13:54 ` [PATCH 12/12] selftests/vm: add test to verify recollapse of THPs Zach O'Keefe

This is an external index of several public inboxes;
see the mirroring instructions for how to clone and mirror
all data and code used by this external index.