* [PATCH v2 0/7] A few cleanup patches for khugepaged
@ 2022-06-25  9:28 Miaohe Lin
From: Miaohe Lin @ 2022-06-25  9:28 UTC (permalink / raw)
  To: akpm
  Cc: shy828301, zokeefe, aarcange, willy, vbabka, dhowells, neilb,
	apopple, david, surenb, peterx, linux-mm, linux-kernel,
	linmiaohe

Hi everyone,
This series contains a few cleanup patches to remove an unneeded return
value, use a helper macro, fix typos and so on. More details can be
found in the respective changelogs. Thanks!
---
v2
  rebase on linux-next-20220624
  collect Reviewed-by tags from Yang Shi and Zach. Thanks.
  tweak commit log of 1/7
  avoid relocking mmap_sem and adjust "swapped_in++" in 2/7
  add comment for nr_none and NR_SHMEM in 4/7
  align args with the opening brace in 5/7 and 6/7
  do free_swap_cache before put_page in 7/7
---
Miaohe Lin (7):
  mm/khugepaged: remove unneeded shmem_huge_enabled() check
  mm/khugepaged: stop swapping in page when VM_FAULT_RETRY occurs
  mm/khugepaged: trivial typo and codestyle cleanup
  mm/khugepaged: minor cleanup for collapse_file
  mm/khugepaged: use helper macro __ATTR_RW
  mm/khugepaged: remove unneeded return value of
    khugepaged_add_pte_mapped_thp()
  mm/khugepaged: try to free transhuge swapcache when possible

 include/linux/swap.h |   5 ++
 mm/khugepaged.c      | 137 ++++++++++++++++++++-----------------------
 mm/swap.h            |   5 --
 3 files changed, 69 insertions(+), 78 deletions(-)

-- 
2.23.0



* [PATCH v2 1/7] mm/khugepaged: remove unneeded shmem_huge_enabled() check
From: Miaohe Lin @ 2022-06-25  9:28 UTC (permalink / raw)
  To: akpm
  Cc: shy828301, zokeefe, aarcange, willy, vbabka, dhowells, neilb,
	apopple, david, surenb, peterx, linux-mm, linux-kernel,
	linmiaohe

If we reach here, khugepaged_scan_mm_slot() has already made sure that
hugepage is enabled for shmem, via its call to hugepage_vma_check().
Remove this duplicated check.
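
For reference, the scan path has already filtered these VMAs:
hugepage_vma_check() handles the shmem case along these lines
(paraphrased from memory, not part of this diff):

	/* Enabled via shmem mount options or sysfs settings. */
	if (shmem_file(vma->vm_file))
		return shmem_huge_enabled(vma);

so repeating the shmem_huge_enabled() test in khugepaged_scan_mm_slot()
adds nothing.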

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Reviewed-by: Zach O'Keefe <zokeefe@google.com>
---
 mm/khugepaged.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d8ebb60aae36..8a103e0f8d2b 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2119,8 +2119,6 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 		if (khugepaged_scan.address < hstart)
 			khugepaged_scan.address = hstart;
 		VM_BUG_ON(khugepaged_scan.address & ~HPAGE_PMD_MASK);
-		if (shmem_file(vma->vm_file) && !shmem_huge_enabled(vma))
-			goto skip;
 
 		while (khugepaged_scan.address < hend) {
 			int ret;
-- 
2.23.0



* [PATCH v2 2/7] mm/khugepaged: stop swapping in page when VM_FAULT_RETRY occurs
From: Miaohe Lin @ 2022-06-25  9:28 UTC (permalink / raw)
  To: akpm
  Cc: shy828301, zokeefe, aarcange, willy, vbabka, dhowells, neilb,
	apopple, david, surenb, peterx, linux-mm, linux-kernel,
	linmiaohe

When do_swap_page() returns VM_FAULT_RETRY, we do not retry the fault
here, so the swap entry will remain in the page table. This will result
in later failure. So stop swapping in pages in this case to save CPU
cycles. As a further optimization, mmap_lock is released when
__collapse_huge_page_swapin() fails, to avoid relocking mmap_lock. And
"swapped_in++" is moved after the error handling to make it more
accurate.
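
After this change, the caller-side contract looks roughly like the
sketch below (illustrative only; the exact code is in the diff that
follows):

	mmap_read_lock(mm);
	...
	if (unmapped && !__collapse_huge_page_swapin(mm, vma, address,
						     pmd, referenced))
		goto out_nolock;	/* mmap_lock already dropped by callee */
	/* still holding mmap_lock here on success */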

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/khugepaged.c | 32 ++++++++++++++------------------
 1 file changed, 14 insertions(+), 18 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 8a103e0f8d2b..c6fc4eb8d77b 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -940,8 +940,8 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
  * Bring missing pages in from swap, to complete THP collapse.
  * Only done if khugepaged_scan_pmd believes it is worthwhile.
  *
- * Called and returns without pte mapped or spinlocks held,
- * but with mmap_lock held to protect against vma changes.
+ * Called and returns without pte mapped or spinlocks held.
+ * Note that if false is returned, mmap_lock will be released.
  */
 
 static bool __collapse_huge_page_swapin(struct mm_struct *mm,
@@ -968,27 +968,24 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 			pte_unmap(vmf.pte);
 			continue;
 		}
-		swapped_in++;
 		ret = do_swap_page(&vmf);
 
-		/* do_swap_page returns VM_FAULT_RETRY with released mmap_lock */
+		/*
+		 * do_swap_page returns VM_FAULT_RETRY with released mmap_lock.
+		 * Note we treat VM_FAULT_RETRY as VM_FAULT_ERROR here because
+		 * we do not retry here and swap entry will remain in pagetable
+		 * resulting in later failure.
+		 */
 		if (ret & VM_FAULT_RETRY) {
-			mmap_read_lock(mm);
-			if (hugepage_vma_revalidate(mm, haddr, &vma)) {
-				/* vma is no longer available, don't continue to swapin */
-				trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
-				return false;
-			}
-			/* check if the pmd is still valid */
-			if (mm_find_pmd(mm, haddr) != pmd) {
-				trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
-				return false;
-			}
+			trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
+			return false;
 		}
 		if (ret & VM_FAULT_ERROR) {
+			mmap_read_unlock(mm);
 			trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
 			return false;
 		}
+		swapped_in++;
 	}
 
 	/* Drain LRU add pagevec to remove extra pin on the swapped in pages */
@@ -1054,13 +1051,12 @@ static void collapse_huge_page(struct mm_struct *mm,
 	}
 
 	/*
-	 * __collapse_huge_page_swapin always returns with mmap_lock locked.
-	 * If it fails, we release mmap_lock and jump out_nolock.
+	 * __collapse_huge_page_swapin will return with mmap_lock released
+	 * when it fails. So we jump out_nolock directly in that case.
 	 * Continuing to collapse causes inconsistency.
 	 */
 	if (unmapped && !__collapse_huge_page_swapin(mm, vma, address,
 						     pmd, referenced)) {
-		mmap_read_unlock(mm);
 		goto out_nolock;
 	}
 
-- 
2.23.0



* [PATCH v2 3/7] mm/khugepaged: trivial typo and codestyle cleanup
From: Miaohe Lin @ 2022-06-25  9:28 UTC (permalink / raw)
  To: akpm
  Cc: shy828301, zokeefe, aarcange, willy, vbabka, dhowells, neilb,
	apopple, david, surenb, peterx, linux-mm, linux-kernel,
	linmiaohe

Fix some typos and tweak the code to match the coding style. No
functional change intended.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Zach O'Keefe <zokeefe@google.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
---
 mm/khugepaged.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c6fc4eb8d77b..a36d9746c321 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -260,7 +260,7 @@ static ssize_t khugepaged_max_ptes_none_store(struct kobject *kobj,
 	unsigned long max_ptes_none;
 
 	err = kstrtoul(buf, 10, &max_ptes_none);
-	if (err || max_ptes_none > HPAGE_PMD_NR-1)
+	if (err || max_ptes_none > HPAGE_PMD_NR - 1)
 		return -EINVAL;
 
 	khugepaged_max_ptes_none = max_ptes_none;
@@ -286,7 +286,7 @@ static ssize_t khugepaged_max_ptes_swap_store(struct kobject *kobj,
 	unsigned long max_ptes_swap;
 
 	err  = kstrtoul(buf, 10, &max_ptes_swap);
-	if (err || max_ptes_swap > HPAGE_PMD_NR-1)
+	if (err || max_ptes_swap > HPAGE_PMD_NR - 1)
 		return -EINVAL;
 
 	khugepaged_max_ptes_swap = max_ptes_swap;
@@ -313,7 +313,7 @@ static ssize_t khugepaged_max_ptes_shared_store(struct kobject *kobj,
 	unsigned long max_ptes_shared;
 
 	err  = kstrtoul(buf, 10, &max_ptes_shared);
-	if (err || max_ptes_shared > HPAGE_PMD_NR-1)
+	if (err || max_ptes_shared > HPAGE_PMD_NR - 1)
 		return -EINVAL;
 
 	khugepaged_max_ptes_shared = max_ptes_shared;
@@ -560,7 +560,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 	int none_or_zero = 0, shared = 0, result = 0, referenced = 0;
 	bool writable = false;
 
-	for (_pte = pte; _pte < pte+HPAGE_PMD_NR;
+	for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
 	     _pte++, address += PAGE_SIZE) {
 		pte_t pteval = *_pte;
 		if (pte_none(pteval) || (pte_present(pteval) &&
@@ -1183,7 +1183,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 
 	memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
 	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
-	for (_address = address, _pte = pte; _pte < pte+HPAGE_PMD_NR;
+	for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR;
 	     _pte++, _address += PAGE_SIZE) {
 		pte_t pteval = *_pte;
 		if (is_swap_pte(pteval)) {
@@ -1273,7 +1273,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		/*
 		 * Check if the page has any GUP (or other external) pins.
 		 *
-		 * Here the check is racy it may see totmal_mapcount > refcount
+		 * Here the check is racy it may see total_mapcount > refcount
 		 * in some cases.
 		 * For example, one process with one forked child process.
 		 * The parent has the PMD split due to MADV_DONTNEED, then
@@ -1524,7 +1524,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		 * mmap_write_lock(mm) as PMD-mapping is likely to be split
 		 * later.
 		 *
-		 * Not that vma->anon_vma check is racy: it can be set up after
+		 * Note that vma->anon_vma check is racy: it can be set up after
 		 * the check but before we took mmap_lock by the fault path.
 		 * But page lock would prevent establishing any new ptes of the
 		 * page, so we are safe.
-- 
2.23.0



* [PATCH v2 4/7] mm/khugepaged: minor cleanup for collapse_file
From: Miaohe Lin @ 2022-06-25  9:28 UTC (permalink / raw)
  To: akpm
  Cc: shy828301, zokeefe, aarcange, willy, vbabka, dhowells, neilb,
	apopple, david, surenb, peterx, linux-mm, linux-kernel,
	linmiaohe

nr_none is always 0 in the non-shmem case because the page can be read
from the backing store. So when nr_none != 0, it must be the is_shmem
case. Also, only adjust nrpages and uncharge shmem when nr_none != 0,
to save CPU cycles.
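
For context, nr_none is only ever incremented in the shmem branch of
collapse_file(); the non-shmem branch instead reads the missing page
from the backing store and fails the collapse if that does not succeed,
roughly (paraphrased from memory, not part of this diff):

	if (is_shmem) {
		if (!page) {
			/* hole: charge shmem, install new_page, account it */
			nr_none++;
			continue;
		}
	} else {
		if (!page || xa_is_value(page)) {
			/* trigger readahead and look the page up again;
			 * bail out with SCAN_FAIL if it is still missing */
		}
	}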

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Zach O'Keefe <zokeefe@google.com>
---
 mm/khugepaged.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a36d9746c321..47514f2fabb9 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1852,8 +1852,8 @@ static void collapse_file(struct mm_struct *mm,
 
 	if (nr_none) {
 		__mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none);
-		if (is_shmem)
-			__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
+		/* nr_none is always 0 for non-shmem. */
+		__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
 	}
 
 	/* Join all the small entries into a single multi-index entry */
@@ -1917,10 +1917,10 @@ static void collapse_file(struct mm_struct *mm,
 
 		/* Something went wrong: roll back page cache changes */
 		xas_lock_irq(&xas);
-		mapping->nrpages -= nr_none;
-
-		if (is_shmem)
+		if (nr_none) {
+			mapping->nrpages -= nr_none;
 			shmem_uncharge(mapping->host, nr_none);
+		}
 
 		xas_set(&xas, start);
 		xas_for_each(&xas, page, end - 1) {
-- 
2.23.0



* [PATCH v2 5/7] mm/khugepaged: use helper macro __ATTR_RW
From: Miaohe Lin @ 2022-06-25  9:28 UTC (permalink / raw)
  To: akpm
  Cc: shy828301, zokeefe, aarcange, willy, vbabka, dhowells, neilb,
	apopple, david, surenb, peterx, linux-mm, linux-kernel,
	linmiaohe

Use helper macro __ATTR_RW to define the khugepaged attributes. Minor
readability improvement.
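
The show/store handlers are renamed because the macro derives their
names: __ATTR_RW(name) builds the attribute from name_show()/name_store(),
roughly (quoting include/linux/sysfs.h from memory):

	#define __ATTR_RW(_name) \
		__ATTR(_name, 0644, _name##_show, _name##_store)

so e.g. __ATTR_RW(defrag) expects defrag_show() and defrag_store().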

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
---
 mm/khugepaged.c | 67 ++++++++++++++++++++++---------------------------
 1 file changed, 30 insertions(+), 37 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 47514f2fabb9..aecd33ab2bbe 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -147,8 +147,7 @@ static ssize_t scan_sleep_millisecs_store(struct kobject *kobj,
 	return count;
 }
 static struct kobj_attribute scan_sleep_millisecs_attr =
-	__ATTR(scan_sleep_millisecs, 0644, scan_sleep_millisecs_show,
-	       scan_sleep_millisecs_store);
+	__ATTR_RW(scan_sleep_millisecs);
 
 static ssize_t alloc_sleep_millisecs_show(struct kobject *kobj,
 					  struct kobj_attribute *attr,
@@ -175,8 +174,7 @@ static ssize_t alloc_sleep_millisecs_store(struct kobject *kobj,
 	return count;
 }
 static struct kobj_attribute alloc_sleep_millisecs_attr =
-	__ATTR(alloc_sleep_millisecs, 0644, alloc_sleep_millisecs_show,
-	       alloc_sleep_millisecs_store);
+	__ATTR_RW(alloc_sleep_millisecs);
 
 static ssize_t pages_to_scan_show(struct kobject *kobj,
 				  struct kobj_attribute *attr,
@@ -200,8 +198,7 @@ static ssize_t pages_to_scan_store(struct kobject *kobj,
 	return count;
 }
 static struct kobj_attribute pages_to_scan_attr =
-	__ATTR(pages_to_scan, 0644, pages_to_scan_show,
-	       pages_to_scan_store);
+	__ATTR_RW(pages_to_scan);
 
 static ssize_t pages_collapsed_show(struct kobject *kobj,
 				    struct kobj_attribute *attr,
@@ -221,22 +218,21 @@ static ssize_t full_scans_show(struct kobject *kobj,
 static struct kobj_attribute full_scans_attr =
 	__ATTR_RO(full_scans);
 
-static ssize_t khugepaged_defrag_show(struct kobject *kobj,
-				      struct kobj_attribute *attr, char *buf)
+static ssize_t defrag_show(struct kobject *kobj,
+			   struct kobj_attribute *attr, char *buf)
 {
 	return single_hugepage_flag_show(kobj, attr, buf,
 					 TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG);
 }
-static ssize_t khugepaged_defrag_store(struct kobject *kobj,
-				       struct kobj_attribute *attr,
-				       const char *buf, size_t count)
+static ssize_t defrag_store(struct kobject *kobj,
+			    struct kobj_attribute *attr,
+			    const char *buf, size_t count)
 {
 	return single_hugepage_flag_store(kobj, attr, buf, count,
 				 TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG);
 }
 static struct kobj_attribute khugepaged_defrag_attr =
-	__ATTR(defrag, 0644, khugepaged_defrag_show,
-	       khugepaged_defrag_store);
+	__ATTR_RW(defrag);
 
 /*
  * max_ptes_none controls if khugepaged should collapse hugepages over
@@ -246,15 +242,15 @@ static struct kobj_attribute khugepaged_defrag_attr =
  * runs. Increasing max_ptes_none will instead potentially reduce the
  * free memory in the system during the khugepaged scan.
  */
-static ssize_t khugepaged_max_ptes_none_show(struct kobject *kobj,
-					     struct kobj_attribute *attr,
-					     char *buf)
+static ssize_t max_ptes_none_show(struct kobject *kobj,
+				  struct kobj_attribute *attr,
+				  char *buf)
 {
 	return sysfs_emit(buf, "%u\n", khugepaged_max_ptes_none);
 }
-static ssize_t khugepaged_max_ptes_none_store(struct kobject *kobj,
-					      struct kobj_attribute *attr,
-					      const char *buf, size_t count)
+static ssize_t max_ptes_none_store(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   const char *buf, size_t count)
 {
 	int err;
 	unsigned long max_ptes_none;
@@ -268,19 +264,18 @@ static ssize_t khugepaged_max_ptes_none_store(struct kobject *kobj,
 	return count;
 }
 static struct kobj_attribute khugepaged_max_ptes_none_attr =
-	__ATTR(max_ptes_none, 0644, khugepaged_max_ptes_none_show,
-	       khugepaged_max_ptes_none_store);
+	__ATTR_RW(max_ptes_none);
 
-static ssize_t khugepaged_max_ptes_swap_show(struct kobject *kobj,
-					     struct kobj_attribute *attr,
-					     char *buf)
+static ssize_t max_ptes_swap_show(struct kobject *kobj,
+				  struct kobj_attribute *attr,
+				  char *buf)
 {
 	return sysfs_emit(buf, "%u\n", khugepaged_max_ptes_swap);
 }
 
-static ssize_t khugepaged_max_ptes_swap_store(struct kobject *kobj,
-					      struct kobj_attribute *attr,
-					      const char *buf, size_t count)
+static ssize_t max_ptes_swap_store(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   const char *buf, size_t count)
 {
 	int err;
 	unsigned long max_ptes_swap;
@@ -295,19 +290,18 @@ static ssize_t khugepaged_max_ptes_swap_store(struct kobject *kobj,
 }
 
 static struct kobj_attribute khugepaged_max_ptes_swap_attr =
-	__ATTR(max_ptes_swap, 0644, khugepaged_max_ptes_swap_show,
-	       khugepaged_max_ptes_swap_store);
+	__ATTR_RW(max_ptes_swap);
 
-static ssize_t khugepaged_max_ptes_shared_show(struct kobject *kobj,
-					       struct kobj_attribute *attr,
-					       char *buf)
+static ssize_t max_ptes_shared_show(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    char *buf)
 {
 	return sysfs_emit(buf, "%u\n", khugepaged_max_ptes_shared);
 }
 
-static ssize_t khugepaged_max_ptes_shared_store(struct kobject *kobj,
-					      struct kobj_attribute *attr,
-					      const char *buf, size_t count)
+static ssize_t max_ptes_shared_store(struct kobject *kobj,
+				     struct kobj_attribute *attr,
+				     const char *buf, size_t count)
 {
 	int err;
 	unsigned long max_ptes_shared;
@@ -322,8 +316,7 @@ static ssize_t khugepaged_max_ptes_shared_store(struct kobject *kobj,
 }
 
 static struct kobj_attribute khugepaged_max_ptes_shared_attr =
-	__ATTR(max_ptes_shared, 0644, khugepaged_max_ptes_shared_show,
-	       khugepaged_max_ptes_shared_store);
+	__ATTR_RW(max_ptes_shared);
 
 static struct attribute *khugepaged_attr[] = {
 	&khugepaged_defrag_attr.attr,
-- 
2.23.0



* [PATCH v2 6/7] mm/khugepaged: remove unneeded return value of khugepaged_add_pte_mapped_thp()
From: Miaohe Lin @ 2022-06-25  9:28 UTC (permalink / raw)
  To: akpm
  Cc: shy828301, zokeefe, aarcange, willy, vbabka, dhowells, neilb,
	apopple, david, surenb, peterx, linux-mm, linux-kernel,
	linmiaohe

The return value of khugepaged_add_pte_mapped_thp() is always 0 and also
ignored. Remove it to clean up the code.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Zach O'Keefe <zokeefe@google.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
---
 mm/khugepaged.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index aecd33ab2bbe..6cb82a299eb2 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1339,8 +1339,8 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
  * Notify khugepaged that given addr of the mm is pte-mapped THP. Then
  * khugepaged should try to collapse the page table.
  */
-static int khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
-					 unsigned long addr)
+static void khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
+					  unsigned long addr)
 {
 	struct mm_slot *mm_slot;
 
@@ -1351,7 +1351,6 @@ static int khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
 	if (likely(mm_slot && mm_slot->nr_pte_mapped_thp < MAX_PTE_MAPPED_THP))
 		mm_slot->pte_mapped_thp[mm_slot->nr_pte_mapped_thp++] = addr;
 	spin_unlock(&khugepaged_mm_lock);
-	return 0;
 }
 
 static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
-- 
2.23.0



* [PATCH v2 7/7] mm/khugepaged: try to free transhuge swapcache when possible
From: Miaohe Lin @ 2022-06-25  9:28 UTC (permalink / raw)
  To: akpm
  Cc: shy828301, zokeefe, aarcange, willy, vbabka, dhowells, neilb,
	apopple, david, surenb, peterx, linux-mm, linux-kernel,
	linmiaohe

Transhuge swapcache pages won't be freed in __collapse_huge_page_copy()
because release_pte_page() is not called for these pages and thus
free_page_and_swap_cache() can't grab the page lock. These pages won't
be freed from the swap cache even if we are the only user, until the
next reclaim pass. That shouldn't hurt much, but we could try to free
these pages to save more memory for the system.
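
For context, free_swap_cache() (which free_page_and_swap_cache() calls)
only trylocks the page, so a page that is still locked is simply
skipped; it looks roughly like this (paraphrased from mm/swap_state.c,
not part of this diff):

	void free_swap_cache(struct page *page)
	{
		if (PageSwapCache(page) && !page_mapped(page) &&
		    trylock_page(page)) {
			try_to_free_swap(page);
			unlock_page(page);
		}
	}

Calling it right after unlock_page() in __collapse_huge_page_copy()
gives the trylock a chance to succeed, so the page can be dropped from
the swap cache immediately.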

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 include/linux/swap.h | 5 +++++
 mm/khugepaged.c      | 7 ++++++-
 mm/swap.h            | 5 -----
 3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 8672a7123ccd..ccb83b12b724 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -456,6 +456,7 @@ static inline unsigned long total_swapcache_pages(void)
 	return global_node_page_state(NR_SWAPCACHE);
 }
 
+extern void free_swap_cache(struct page *page);
 extern void free_page_and_swap_cache(struct page *);
 extern void free_pages_and_swap_cache(struct page **, int);
 /* linux/mm/swapfile.c */
@@ -540,6 +541,10 @@ static inline void put_swap_device(struct swap_info_struct *si)
 /* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
 #define free_swap_and_cache(e) is_pfn_swap_entry(e)
 
+static inline void free_swap_cache(struct page *page)
+{
+}
+
 static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask)
 {
 	return 0;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6cb82a299eb2..cfe231c5958f 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -716,7 +716,12 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
 
 	list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
 		list_del(&src_page->lru);
-		release_pte_page(src_page);
+		mod_node_page_state(page_pgdat(src_page),
+				    NR_ISOLATED_ANON + page_is_file_lru(src_page),
+				    -compound_nr(src_page));
+		unlock_page(src_page);
+		free_swap_cache(src_page);
+		putback_lru_page(src_page);
 	}
 }
 
diff --git a/mm/swap.h b/mm/swap.h
index fa0816af4712..17936e068c1c 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -41,7 +41,6 @@ void __delete_from_swap_cache(struct folio *folio,
 void delete_from_swap_cache(struct folio *folio);
 void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				  unsigned long end);
-void free_swap_cache(struct page *page);
 struct page *lookup_swap_cache(swp_entry_t entry,
 			       struct vm_area_struct *vma,
 			       unsigned long addr);
@@ -81,10 +80,6 @@ static inline struct address_space *swap_address_space(swp_entry_t entry)
 	return NULL;
 }
 
-static inline void free_swap_cache(struct page *page)
-{
-}
-
 static inline void show_swap_cache_info(void)
 {
 }
-- 
2.23.0


