* [PATCH 0/8] A few cleanup and fixup patches for vmscan
@ 2022-03-29 13:26 Miaohe Lin
  2022-03-29 13:26 ` [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru Miaohe Lin
                   ` (7 more replies)
  0 siblings, 8 replies; 30+ messages in thread
From: Miaohe Lin @ 2022-03-29 13:26 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, linmiaohe

Hi everyone,
This series contains a few patches to remove an obsolete comment, introduce
a helper to remove duplicated code, and so on. We also take all base pages
of a THP into account in a rare race condition. More details can be found
in the respective changelogs. Thanks!

Miaohe Lin (8):
  mm/vmscan: remove redundant folio_test_swapbacked check when folio is
    file lru
  mm/vmscan: remove unneeded can_split_huge_page check
  mm/vmscan: introduce helper function reclaim_page_list()
  mm/vmscan: save a bit of stack space in shrink_lruvec
  mm/vmscan: use helper folio_is_file_lru()
  mm/vmscan: take all base pages of THP into account when race with
    speculative reference
  mm/vmscan: take min_slab_pages into account when try to call
    shrink_node
  mm/vmscan: remove obsolete comment in kswapd_run

 mm/vmscan.c | 68 +++++++++++++++++++++++++----------------------------
 1 file changed, 32 insertions(+), 36 deletions(-)

-- 
2.23.0


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru
  2022-03-29 13:26 [PATCH 0/8] A few cleanup and fixup patches for vmscan Miaohe Lin
@ 2022-03-29 13:26 ` Miaohe Lin
  2022-03-29 13:46   ` Matthew Wilcox
  2022-03-30  8:13   ` Muchun Song
  2022-03-29 13:26 ` [PATCH 2/8] mm/vmscan: remove unneeded can_split_huge_page check Miaohe Lin
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 30+ messages in thread
From: Miaohe Lin @ 2022-03-29 13:26 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, linmiaohe

When folio is file lru, folio_test_swapbacked is guaranteed to be true. So
it's unnecessary to check it here again. No functional change intended.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/vmscan.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1678802e03e7..7c1a9713bfc9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1434,8 +1434,7 @@ static void folio_check_dirty_writeback(struct folio *folio,
 	 * Anonymous pages are not handled by flushers and must be written
 	 * from reclaim context. Do not stall reclaim based on them
 	 */
-	if (!folio_is_file_lru(folio) ||
-	    (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
+	if (!folio_is_file_lru(folio) || folio_test_anon(folio)) {
 		*dirty = false;
 		*writeback = false;
 		return;
-- 
2.23.0



* [PATCH 2/8] mm/vmscan: remove unneeded can_split_huge_page check
  2022-03-29 13:26 [PATCH 0/8] A few cleanup and fixup patches for vmscan Miaohe Lin
  2022-03-29 13:26 ` [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru Miaohe Lin
@ 2022-03-29 13:26 ` Miaohe Lin
  2022-03-31  8:38   ` Huang, Ying
  2022-03-29 13:26 ` [PATCH 3/8] mm/vmscan: introduce helper function reclaim_page_list() Miaohe Lin
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 30+ messages in thread
From: Miaohe Lin @ 2022-03-29 13:26 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, linmiaohe

can_split_huge_page is introduced via commit b8f593cd0896 ("mm, THP, swap:
check whether THP can be split firstly") to avoid deleting the THP from
the swap cache and freeing the swap cluster when the THP cannot be split.
But since commit bd4c82c22c36 ("mm, THP, swap: delay splitting THP after
swapped out"), splitting THP is delayed until THP is swapped out. There's
no need to delete the THP from the swap cache and free the swap cluster
anymore. Thus we can remove this unneeded can_split_huge_page check now to
simplify the code logic.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/vmscan.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7c1a9713bfc9..09b452c4d256 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1691,9 +1691,6 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 				if (folio_maybe_dma_pinned(folio))
 					goto keep_locked;
 				if (PageTransHuge(page)) {
-					/* cannot split THP, skip it */
-					if (!can_split_folio(folio, NULL))
-						goto activate_locked;
 					/*
 					 * Split pages without a PMD map right
 					 * away. Chances are some or all of the
-- 
2.23.0



* [PATCH 3/8] mm/vmscan: introduce helper function reclaim_page_list()
  2022-03-29 13:26 [PATCH 0/8] A few cleanup and fixup patches for vmscan Miaohe Lin
  2022-03-29 13:26 ` [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru Miaohe Lin
  2022-03-29 13:26 ` [PATCH 2/8] mm/vmscan: remove unneeded can_split_huge_page check Miaohe Lin
@ 2022-03-29 13:26 ` Miaohe Lin
  2022-03-29 13:50   ` Matthew Wilcox
  2022-03-29 14:00   ` Christoph Hellwig
  2022-03-29 13:26 ` [PATCH 4/8] mm/vmscan: save a bit of stack space in shrink_lruvec Miaohe Lin
                   ` (4 subsequent siblings)
  7 siblings, 2 replies; 30+ messages in thread
From: Miaohe Lin @ 2022-03-29 13:26 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, linmiaohe

Introduce the helper function reclaim_page_list() to eliminate the duplicated
code that calls shrink_page_list() and putback_lru_page(). We can also
separate node reclaim from the node page list operation this way. No
functional change intended.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/vmscan.c | 47 +++++++++++++++++++++++------------------------
 1 file changed, 23 insertions(+), 24 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 09b452c4d256..a6e60c78d058 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2513,14 +2513,11 @@ static void shrink_active_list(unsigned long nr_to_scan,
 			nr_deactivate, nr_rotated, sc->priority, file);
 }
 
-unsigned long reclaim_pages(struct list_head *page_list)
+static inline unsigned int reclaim_page_list(struct list_head *page_list, struct pglist_data *pgdat)
 {
-	int nid = NUMA_NO_NODE;
-	unsigned int nr_reclaimed = 0;
-	LIST_HEAD(node_page_list);
 	struct reclaim_stat dummy_stat;
+	unsigned int nr_reclaimed;
 	struct page *page;
-	unsigned int noreclaim_flag;
 	struct scan_control sc = {
 		.gfp_mask = GFP_KERNEL,
 		.may_writepage = 1,
@@ -2529,6 +2526,24 @@ unsigned long reclaim_pages(struct list_head *page_list)
 		.no_demotion = 1,
 	};
 
+	nr_reclaimed = shrink_page_list(page_list, pgdat, &sc, &dummy_stat, false);
+	while (!list_empty(page_list)) {
+		page = lru_to_page(page_list);
+		list_del(&page->lru);
+		putback_lru_page(page);
+	}
+
+	return nr_reclaimed;
+}
+
+unsigned long reclaim_pages(struct list_head *page_list)
+{
+	int nid = NUMA_NO_NODE;
+	unsigned int nr_reclaimed = 0;
+	LIST_HEAD(node_page_list);
+	struct page *page;
+	unsigned int noreclaim_flag;
+
 	noreclaim_flag = memalloc_noreclaim_save();
 
 	while (!list_empty(page_list)) {
@@ -2544,28 +2559,12 @@ unsigned long reclaim_pages(struct list_head *page_list)
 			continue;
 		}
 
-		nr_reclaimed += shrink_page_list(&node_page_list,
-						NODE_DATA(nid),
-						&sc, &dummy_stat, false);
-		while (!list_empty(&node_page_list)) {
-			page = lru_to_page(&node_page_list);
-			list_del(&page->lru);
-			putback_lru_page(page);
-		}
-
+		nr_reclaimed += reclaim_page_list(&node_page_list, NODE_DATA(nid));
 		nid = NUMA_NO_NODE;
 	}
 
-	if (!list_empty(&node_page_list)) {
-		nr_reclaimed += shrink_page_list(&node_page_list,
-						NODE_DATA(nid),
-						&sc, &dummy_stat, false);
-		while (!list_empty(&node_page_list)) {
-			page = lru_to_page(&node_page_list);
-			list_del(&page->lru);
-			putback_lru_page(page);
-		}
-	}
+	if (!list_empty(&node_page_list))
+		nr_reclaimed += reclaim_page_list(&node_page_list, NODE_DATA(nid));
 
 	memalloc_noreclaim_restore(noreclaim_flag);
 
-- 
2.23.0



* [PATCH 4/8] mm/vmscan: save a bit of stack space in shrink_lruvec
  2022-03-29 13:26 [PATCH 0/8] A few cleanup and fixup patches for vmscan Miaohe Lin
                   ` (2 preceding siblings ...)
  2022-03-29 13:26 ` [PATCH 3/8] mm/vmscan: introduce helper function reclaim_page_list() Miaohe Lin
@ 2022-03-29 13:26 ` Miaohe Lin
  2022-03-29 14:00   ` Christoph Hellwig
  2022-03-29 13:26 ` [PATCH 5/8] mm/vmscan: use helper folio_is_file_lru() Miaohe Lin
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 30+ messages in thread
From: Miaohe Lin @ 2022-03-29 13:26 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, linmiaohe

LRU_UNEVICTABLE is not taken into account when shrinking a lruvec, so we can
save a bit of stack space by shrinking the size of the nr and targets arrays
to NR_LRU_LISTS - 1. No functional change intended.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/vmscan.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a6e60c78d058..ebd8ffb63673 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2862,8 +2862,9 @@ static bool can_age_anon_pages(struct pglist_data *pgdat,
 
 static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 {
-	unsigned long nr[NR_LRU_LISTS];
-	unsigned long targets[NR_LRU_LISTS];
+	/* LRU_UNEVICTABLE is not taken into account. */
+	unsigned long nr[NR_LRU_LISTS - 1];
+	unsigned long targets[NR_LRU_LISTS - 1];
 	unsigned long nr_to_scan;
 	enum lru_list lru;
 	unsigned long nr_reclaimed = 0;
-- 
2.23.0



* [PATCH 5/8] mm/vmscan: use helper folio_is_file_lru()
  2022-03-29 13:26 [PATCH 0/8] A few cleanup and fixup patches for vmscan Miaohe Lin
                   ` (3 preceding siblings ...)
  2022-03-29 13:26 ` [PATCH 4/8] mm/vmscan: save a bit of stack space in shrink_lruvec Miaohe Lin
@ 2022-03-29 13:26 ` Miaohe Lin
  2022-04-01  3:20   ` Huang, Ying
  2022-03-29 13:26 ` [PATCH 6/8] mm/vmscan: take all base pages of THP into account when race with speculative reference Miaohe Lin
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 30+ messages in thread
From: Miaohe Lin @ 2022-03-29 13:26 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, linmiaohe

Use helper folio_is_file_lru() to check whether folio is file lru. Minor
readability improvement.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/vmscan.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index ebd8ffb63673..31e95d627448 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1411,14 +1411,14 @@ static enum page_references folio_check_references(struct folio *folio,
 		/*
 		 * Activate file-backed executable folios after first usage.
 		 */
-		if ((vm_flags & VM_EXEC) && !folio_test_swapbacked(folio))
+		if ((vm_flags & VM_EXEC) && folio_is_file_lru(folio))
 			return PAGEREF_ACTIVATE;
 
 		return PAGEREF_KEEP;
 	}
 
 	/* Reclaim if clean, defer dirty folios to writeback */
-	if (referenced_folio && !folio_test_swapbacked(folio))
+	if (referenced_folio && folio_is_file_lru(folio))
 		return PAGEREF_RECLAIM_CLEAN;
 
 	return PAGEREF_RECLAIM;
-- 
2.23.0



* [PATCH 6/8] mm/vmscan: take all base pages of THP into account when race with speculative reference
  2022-03-29 13:26 [PATCH 0/8] A few cleanup and fixup patches for vmscan Miaohe Lin
                   ` (4 preceding siblings ...)
  2022-03-29 13:26 ` [PATCH 5/8] mm/vmscan: use helper folio_is_file_lru() Miaohe Lin
@ 2022-03-29 13:26 ` Miaohe Lin
  2022-03-29 13:26 ` [PATCH 7/8] mm/vmscan: take min_slab_pages into account when try to call shrink_node Miaohe Lin
  2022-03-29 13:26 ` [PATCH 8/8] mm/vmscan: remove obsolete comment in kswapd_run Miaohe Lin
  7 siblings, 0 replies; 30+ messages in thread
From: Miaohe Lin @ 2022-03-29 13:26 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, linmiaohe

If the page has buffers, shrink_page_list() will try to free the buffer
mappings associated with the page and then try to free the page as well.
In the rare race with a speculative reference, the page will shortly be
freed by that speculative reference instead. But nr_reclaimed is not
incremented correctly when the page is a THP: we need to account for all
of its base pages in this case.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 31e95d627448..1145d23332a3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1862,7 +1862,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 					 * increment nr_reclaimed here (and
 					 * leave it off the LRU).
 					 */
-					nr_reclaimed++;
+					nr_reclaimed += nr_pages;
 					continue;
 				}
 			}
-- 
2.23.0



* [PATCH 7/8] mm/vmscan: take min_slab_pages into account when try to call shrink_node
  2022-03-29 13:26 [PATCH 0/8] A few cleanup and fixup patches for vmscan Miaohe Lin
                   ` (5 preceding siblings ...)
  2022-03-29 13:26 ` [PATCH 6/8] mm/vmscan: take all base pages of THP into account when race with speculative reference Miaohe Lin
@ 2022-03-29 13:26 ` Miaohe Lin
  2022-03-29 13:26 ` [PATCH 8/8] mm/vmscan: remove obsolete comment in kswapd_run Miaohe Lin
  7 siblings, 0 replies; 30+ messages in thread
From: Miaohe Lin @ 2022-03-29 13:26 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, linmiaohe

Since commit 6b4f7799c6a5 ("mm: vmscan: invoke slab shrinkers from
shrink_zone()"), slab reclaim and lru page reclaim are done together
in shrink_node(). So we should take min_slab_pages into account when
deciding whether to call shrink_node().

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/vmscan.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1145d23332a3..e4ef6f637aa9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4695,7 +4695,8 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
 	noreclaim_flag = memalloc_noreclaim_save();
 	set_task_reclaim_state(p, &sc.reclaim_state);
 
-	if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages) {
+	if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages ||
+	    node_page_state_pages(pgdat, NR_SLAB_RECLAIMABLE_B) > pgdat->min_slab_pages) {
 		/*
 		 * Free memory by calling shrink node with increasing
 		 * priorities until we have enough memory freed.
-- 
2.23.0



* [PATCH 8/8] mm/vmscan: remove obsolete comment in kswapd_run
  2022-03-29 13:26 [PATCH 0/8] A few cleanup and fixup patches for vmscan Miaohe Lin
                   ` (6 preceding siblings ...)
  2022-03-29 13:26 ` [PATCH 7/8] mm/vmscan: take min_slab_pages into account when try to call shrink_node Miaohe Lin
@ 2022-03-29 13:26 ` Miaohe Lin
  7 siblings, 0 replies; 30+ messages in thread
From: Miaohe Lin @ 2022-03-29 13:26 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, linmiaohe

Since commit 6b700b5b3c59 ("mm/vmscan.c: remove cpu online notification
for now"), the cpu online notification is gone, so kswapd won't be moved
to the proper cpus if cpus are hot-added. Remove this obsolete comment.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/vmscan.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index e4ef6f637aa9..b6aa28b34576 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4548,7 +4548,6 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
 
 /*
  * This kswapd start function will be called by init and node-hot-add.
- * On node-hot-add, kswapd will moved to proper cpus if cpus are hot-added.
  */
 void kswapd_run(int nid)
 {
-- 
2.23.0



* Re: [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru
  2022-03-29 13:26 ` [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru Miaohe Lin
@ 2022-03-29 13:46   ` Matthew Wilcox
  2022-03-30  1:46     ` Miaohe Lin
  2022-03-30  8:13   ` Muchun Song
  1 sibling, 1 reply; 30+ messages in thread
From: Matthew Wilcox @ 2022-03-29 13:46 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: akpm, linux-mm, linux-kernel

On Tue, Mar 29, 2022 at 09:26:12PM +0800, Miaohe Lin wrote:
> When folio is file lru, folio_test_swapbacked is guaranteed to be true. So
> it's unnecessary to check it here again. No functional change intended.

ummm ... is your logic right here?  The condition is:

	if (!a || (b && !c))

I don't see how it follows that a => c means we can do any
simplification at all.

> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/vmscan.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 1678802e03e7..7c1a9713bfc9 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1434,8 +1434,7 @@ static void folio_check_dirty_writeback(struct folio *folio,
>  	 * Anonymous pages are not handled by flushers and must be written
>  	 * from reclaim context. Do not stall reclaim based on them
>  	 */
> -	if (!folio_is_file_lru(folio) ||
> -	    (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
> +	if (!folio_is_file_lru(folio) || folio_test_anon(folio)) {
>  		*dirty = false;
>  		*writeback = false;
>  		return;
> -- 
> 2.23.0
> 
> 


* Re: [PATCH 3/8] mm/vmscan: introduce helper function reclaim_page_list()
  2022-03-29 13:26 ` [PATCH 3/8] mm/vmscan: introduce helper function reclaim_page_list() Miaohe Lin
@ 2022-03-29 13:50   ` Matthew Wilcox
  2022-03-30  2:04     ` Miaohe Lin
  2022-03-29 14:00   ` Christoph Hellwig
  1 sibling, 1 reply; 30+ messages in thread
From: Matthew Wilcox @ 2022-03-29 13:50 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: akpm, linux-mm, linux-kernel

On Tue, Mar 29, 2022 at 09:26:14PM +0800, Miaohe Lin wrote:
> -unsigned long reclaim_pages(struct list_head *page_list)
> +static inline unsigned int reclaim_page_list(struct list_head *page_list, struct pglist_data *pgdat)
>  {
> -	int nid = NUMA_NO_NODE;
> -	unsigned int nr_reclaimed = 0;
> -	LIST_HEAD(node_page_list);
>  	struct reclaim_stat dummy_stat;
> +	unsigned int nr_reclaimed;
>  	struct page *page;
> -	unsigned int noreclaim_flag;
>  	struct scan_control sc = {
>  		.gfp_mask = GFP_KERNEL,
>  		.may_writepage = 1,
> @@ -2529,6 +2526,24 @@ unsigned long reclaim_pages(struct list_head *page_list)
>  		.no_demotion = 1,
>  	};
>  
> +	nr_reclaimed = shrink_page_list(page_list, pgdat, &sc, &dummy_stat, false);
> +	while (!list_empty(page_list)) {
> +		page = lru_to_page(page_list);
> +		list_del(&page->lru);
> +		putback_lru_page(page);

Why wouldn't you use a folio here?



* Re: [PATCH 3/8] mm/vmscan: introduce helper function reclaim_page_list()
  2022-03-29 13:26 ` [PATCH 3/8] mm/vmscan: introduce helper function reclaim_page_list() Miaohe Lin
  2022-03-29 13:50   ` Matthew Wilcox
@ 2022-03-29 14:00   ` Christoph Hellwig
  2022-03-30  2:02     ` Miaohe Lin
  1 sibling, 1 reply; 30+ messages in thread
From: Christoph Hellwig @ 2022-03-29 14:00 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: akpm, linux-mm, linux-kernel

> +static inline unsigned int reclaim_page_list(struct list_head *page_list, struct pglist_data *pgdat)

This is completely unreadable with the crazy long lines.

Also why would you force inline this?


* Re: [PATCH 4/8] mm/vmscan: save a bit of stack space in shrink_lruvec
  2022-03-29 13:26 ` [PATCH 4/8] mm/vmscan: save a bit of stack space in shrink_lruvec Miaohe Lin
@ 2022-03-29 14:00   ` Christoph Hellwig
  2022-03-30  1:59     ` Miaohe Lin
  0 siblings, 1 reply; 30+ messages in thread
From: Christoph Hellwig @ 2022-03-29 14:00 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: akpm, linux-mm, linux-kernel

On Tue, Mar 29, 2022 at 09:26:15PM +0800, Miaohe Lin wrote:
> LRU_UNEVICTABLE is not taken into account when shrink lruvec. So we can
> save a bit of stack space by shrinking the array size of nr and targets
> to NR_LRU_LISTS - 1. No functional change intended.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/vmscan.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a6e60c78d058..ebd8ffb63673 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2862,8 +2862,9 @@ static bool can_age_anon_pages(struct pglist_data *pgdat,
>  
>  static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>  {
> -	unsigned long nr[NR_LRU_LISTS];
> -	unsigned long targets[NR_LRU_LISTS];
> +	/* LRU_UNEVICTABLE is not taken into account. */
> +	unsigned long nr[NR_LRU_LISTS - 1];
> +	unsigned long targets[NR_LRU_LISTS - 1];

This looks like a problem waiting to happen..



* Re: [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru
  2022-03-29 13:46   ` Matthew Wilcox
@ 2022-03-30  1:46     ` Miaohe Lin
  0 siblings, 0 replies; 30+ messages in thread
From: Miaohe Lin @ 2022-03-30  1:46 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: akpm, linux-mm, linux-kernel

On 2022/3/29 21:46, Matthew Wilcox wrote:
> On Tue, Mar 29, 2022 at 09:26:12PM +0800, Miaohe Lin wrote:
>> When folio is file lru, folio_test_swapbacked is guaranteed to be true. So
>> it's unnecessary to check it here again. No functional change intended.
> 
> ummm ... is your logic right here?  The condition is:
> 
> 	if (!a || (b && !c))
> 

Because a is !c, c is !a. Then we have:

!a || (b && !c) ==> !a || (b && !!a) ==> !a || (b && a) ==> !a || b.

Or am I missing something?

Thanks.

> I don't see how it follows that a => c means we can do any
> simplification at all.
> 
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/vmscan.c | 3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 1678802e03e7..7c1a9713bfc9 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1434,8 +1434,7 @@ static void folio_check_dirty_writeback(struct folio *folio,
>>  	 * Anonymous pages are not handled by flushers and must be written
>>  	 * from reclaim context. Do not stall reclaim based on them
>>  	 */
>> -	if (!folio_is_file_lru(folio) ||
>> -	    (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
>> +	if (!folio_is_file_lru(folio) || folio_test_anon(folio)) {
>>  		*dirty = false;
>>  		*writeback = false;
>>  		return;
>> -- 
>> 2.23.0
>>
>>
> 
> .
> 



* Re: [PATCH 4/8] mm/vmscan: save a bit of stack space in shrink_lruvec
  2022-03-29 14:00   ` Christoph Hellwig
@ 2022-03-30  1:59     ` Miaohe Lin
  0 siblings, 0 replies; 30+ messages in thread
From: Miaohe Lin @ 2022-03-30  1:59 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: akpm, linux-mm, linux-kernel

On 2022/3/29 22:00, Christoph Hellwig wrote:
> On Tue, Mar 29, 2022 at 09:26:15PM +0800, Miaohe Lin wrote:
>> LRU_UNEVICTABLE is not taken into account when shrink lruvec. So we can
>> save a bit of stack space by shrinking the array size of nr and targets
>> to NR_LRU_LISTS - 1. No functional change intended.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/vmscan.c | 5 +++--
>>  1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index a6e60c78d058..ebd8ffb63673 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -2862,8 +2862,9 @@ static bool can_age_anon_pages(struct pglist_data *pgdat,
>>  
>>  static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>>  {
>> -	unsigned long nr[NR_LRU_LISTS];
>> -	unsigned long targets[NR_LRU_LISTS];
>> +	/* LRU_UNEVICTABLE is not taken into account. */
>> +	unsigned long nr[NR_LRU_LISTS - 1];
>> +	unsigned long targets[NR_LRU_LISTS - 1];
> 
> This looks like a problem waiting to happen..

IIUC, I am changing the array size to exactly what is used now, and
LRU_UNEVICTABLE won't be used anyway. Could you please tell me what kind of
problem is waiting to happen? If this poses an actual risk, I will drop this
patch.

Thanks.

> 
> .
> 



* Re: [PATCH 3/8] mm/vmscan: introduce helper function reclaim_page_list()
  2022-03-29 14:00   ` Christoph Hellwig
@ 2022-03-30  2:02     ` Miaohe Lin
  0 siblings, 0 replies; 30+ messages in thread
From: Miaohe Lin @ 2022-03-30  2:02 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: akpm, linux-mm, linux-kernel

On 2022/3/29 22:00, Christoph Hellwig wrote:
>> +static inline unsigned int reclaim_page_list(struct list_head *page_list, struct pglist_data *pgdat)
> 
> This is completely unreadable with the crazy long lines.

Will break these long lines.

> 
> Also why would you force inline this?

It seems inline is indeed not needed. Will change this too.

> .
> 

Many thanks for your comment.


* Re: [PATCH 3/8] mm/vmscan: introduce helper function reclaim_page_list()
  2022-03-29 13:50   ` Matthew Wilcox
@ 2022-03-30  2:04     ` Miaohe Lin
  2022-03-30 16:23       ` Matthew Wilcox
  0 siblings, 1 reply; 30+ messages in thread
From: Miaohe Lin @ 2022-03-30  2:04 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: akpm, linux-mm, linux-kernel

On 2022/3/29 21:50, Matthew Wilcox wrote:
> On Tue, Mar 29, 2022 at 09:26:14PM +0800, Miaohe Lin wrote:
>> -unsigned long reclaim_pages(struct list_head *page_list)
>> +static inline unsigned int reclaim_page_list(struct list_head *page_list, struct pglist_data *pgdat)
>>  {
>> -	int nid = NUMA_NO_NODE;
>> -	unsigned int nr_reclaimed = 0;
>> -	LIST_HEAD(node_page_list);
>>  	struct reclaim_stat dummy_stat;
>> +	unsigned int nr_reclaimed;
>>  	struct page *page;
>> -	unsigned int noreclaim_flag;
>>  	struct scan_control sc = {
>>  		.gfp_mask = GFP_KERNEL,
>>  		.may_writepage = 1,
>> @@ -2529,6 +2526,24 @@ unsigned long reclaim_pages(struct list_head *page_list)
>>  		.no_demotion = 1,
>>  	};
>>  
>> +	nr_reclaimed = shrink_page_list(page_list, pgdat, &sc, &dummy_stat, false);
>> +	while (!list_empty(page_list)) {
>> +		page = lru_to_page(page_list);
>> +		list_del(&page->lru);
>> +		putback_lru_page(page);
> 
> Why wouldn't you use a folio here?
> 

I just wanted to keep the code consistent with the previous version. Am I
supposed to use a folio here? If so, I will do it in V2.

Thanks.

> 
> .
> 



* Re: [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru
  2022-03-29 13:26 ` [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru Miaohe Lin
  2022-03-29 13:46   ` Matthew Wilcox
@ 2022-03-30  8:13   ` Muchun Song
  2022-03-30  9:26     ` Miaohe Lin
  2022-03-31  6:37     ` Huang, Ying
  1 sibling, 2 replies; 30+ messages in thread
From: Muchun Song @ 2022-03-30  8:13 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Andrew Morton, Linux Memory Management List, LKML

On Tue, Mar 29, 2022 at 9:26 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> When folio is file lru, folio_test_swapbacked is guaranteed to be true. So
> it's unnecessary to check it here again. No functional change intended.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/vmscan.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 1678802e03e7..7c1a9713bfc9 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1434,8 +1434,7 @@ static void folio_check_dirty_writeback(struct folio *folio,
>          * Anonymous pages are not handled by flushers and must be written
>          * from reclaim context. Do not stall reclaim based on them
>          */
> -       if (!folio_is_file_lru(folio) ||
> -           (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
> +       if (!folio_is_file_lru(folio) || folio_test_anon(folio)) {

At least your logic is no problem, since folio_is_file_lru() is equal to
!folio_test_swapbacked(). But the new code is not clear to me. The old
code is easy to understand, e.g. folio_test_anon(folio) &&
!folio_test_swapbacked(folio) tells us that anon pages which do not need
to be swapped should be skipped. So I'm neutral on the patch.


* Re: [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru
  2022-03-30  8:13   ` Muchun Song
@ 2022-03-30  9:26     ` Miaohe Lin
  2022-03-31  6:37     ` Huang, Ying
  1 sibling, 0 replies; 30+ messages in thread
From: Miaohe Lin @ 2022-03-30  9:26 UTC (permalink / raw)
  To: Muchun Song; +Cc: Andrew Morton, Linux Memory Management List, LKML

On 2022/3/30 16:13, Muchun Song wrote:
> On Tue, Mar 29, 2022 at 9:26 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>
>> When folio is file lru, folio_test_swapbacked is guaranteed to be true. So
>> it's unnecessary to check it here again. No functional change intended.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/vmscan.c | 3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 1678802e03e7..7c1a9713bfc9 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1434,8 +1434,7 @@ static void folio_check_dirty_writeback(struct folio *folio,
>>          * Anonymous pages are not handled by flushers and must be written
>>          * from reclaim context. Do not stall reclaim based on them
>>          */
>> -       if (!folio_is_file_lru(folio) ||
>> -           (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
>> +       if (!folio_is_file_lru(folio) || folio_test_anon(folio)) {
> 
> At least your logic is no problem, since folio_is_file_lru() is equal to
> !folio_test_swapbacked(). But the new code is not clear to me. The old
> code is easy to understand, e.g. folio_test_anon(folio) &&
> !folio_test_swapbacked(folio) tells us that anon pages which do not need
> to be swapped should be skipped. So I'm neutral on the patch.

Thanks for your comment. The previous form might read more naturally:
folio_test_anon(folio) && !folio_test_swapbacked(folio) means the folio
is MADV_FREE and can be freed without swapping out. I removed the unneeded
!folio_test_swapbacked(folio) check at the expense of a minor loss of
readability. If it is not worth doing, I will drop this patch.

Thanks.

> 
> .
> 


* Re: [PATCH 3/8] mm/vmscan: introduce helper function reclaim_page_list()
  2022-03-30  2:04     ` Miaohe Lin
@ 2022-03-30 16:23       ` Matthew Wilcox
  2022-03-31  1:45         ` Miaohe Lin
  0 siblings, 1 reply; 30+ messages in thread
From: Matthew Wilcox @ 2022-03-30 16:23 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: akpm, linux-mm, linux-kernel

On Wed, Mar 30, 2022 at 10:04:54AM +0800, Miaohe Lin wrote:
> On 2022/3/29 21:50, Matthew Wilcox wrote:
> >> +	while (!list_empty(page_list)) {
> >> +		page = lru_to_page(page_list);
> >> +		list_del(&page->lru);
> >> +		putback_lru_page(page);
> > 
> > Why wouldn't you use a folio here?
> > 
> 
> I was just trying to keep the code consistent with the previous one. Am I supposed to
> use a folio here? If so, I will do it in V2.

The distinction we're trying to make (and obviously we have a long way
to go before we're finished) is between the structure that describes
PAGE_SIZE bytes of memory and the structure that manages the memory
allocation.  This function is clearly handling the entire memory
allocation, so it should be using a folio.  Eventually, page->lru
should disappear, but there's 180 places to figure out what we need to
do instead.



* Re: [PATCH 3/8] mm/vmscan: introduce helper function reclaim_page_list()
  2022-03-30 16:23       ` Matthew Wilcox
@ 2022-03-31  1:45         ` Miaohe Lin
  0 siblings, 0 replies; 30+ messages in thread
From: Miaohe Lin @ 2022-03-31  1:45 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: akpm, linux-mm, linux-kernel

On 2022/3/31 0:23, Matthew Wilcox wrote:
> On Wed, Mar 30, 2022 at 10:04:54AM +0800, Miaohe Lin wrote:
>> On 2022/3/29 21:50, Matthew Wilcox wrote:
>>>> +	while (!list_empty(page_list)) {
>>>> +		page = lru_to_page(page_list);
>>>> +		list_del(&page->lru);
>>>> +		putback_lru_page(page);
>>>
>>> Why wouldn't you use a folio here?
>>>
>>
>> I was just trying to keep the code consistent with the previous one. Am I supposed to
>> use a folio here? If so, I will do it in V2.
> 
> The distinction we're trying to make (and obviously we have a long way
> to go before we're finished) is between the structure that describes
> PAGE_SIZE bytes of memory and the structure that manages the memory
> allocation.  This function is clearly handling the entire memory
> allocation, so it should be using a folio.  Eventually, page->lru

I see. Thanks for clarifying.

> should disappear, but there's 180 places to figure out what we need to
> do instead.
> 
> 
> .
> 



* Re: [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru
  2022-03-30  8:13   ` Muchun Song
  2022-03-30  9:26     ` Miaohe Lin
@ 2022-03-31  6:37     ` Huang, Ying
  2022-03-31  7:44       ` Miaohe Lin
  1 sibling, 1 reply; 30+ messages in thread
From: Huang, Ying @ 2022-03-31  6:37 UTC (permalink / raw)
  To: Muchun Song; +Cc: Miaohe Lin, Andrew Morton, Linux Memory Management List, LKML

Muchun Song <songmuchun@bytedance.com> writes:

> On Tue, Mar 29, 2022 at 9:26 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>
>> When folio is file lru, folio_test_swapbacked is guaranteed to be false. So
>> it's unnecessary to check it here again. No functional change intended.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/vmscan.c | 3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 1678802e03e7..7c1a9713bfc9 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1434,8 +1434,7 @@ static void folio_check_dirty_writeback(struct folio *folio,
>>          * Anonymous pages are not handled by flushers and must be written
>>          * from reclaim context. Do not stall reclaim based on them
>>          */
>> -       if (!folio_is_file_lru(folio) ||
>> -           (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
>> +       if (!folio_is_file_lru(folio) || folio_test_anon(folio)) {
>
> At least your logic is no problem since folio_is_file_lru() is equal to
> !folio_test_swapbacked().  But the new code is not clear to me.
> The old code is easy to understand, e.g. folio_test_anon(folio) &&
> !folio_test_swapbacked(folio) tells us that the anon pages which
> do not need to be swapped should be skipped.

That is for MADV_FREE pages.  The code is introduced in commit
802a3a92ad7a ("mm: reclaim MADV_FREE pages").

So I think the original code is better.  It's an implementation detail
that folio_is_file_lru() equals !folio_test_swapbacked().  It may be
better to add some comments here for MADV_FREE pages.

> So I'm neutral on the patch.

Best Regards,
Huang, Ying


* Re: [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru
  2022-03-31  6:37     ` Huang, Ying
@ 2022-03-31  7:44       ` Miaohe Lin
  2022-03-31  8:02         ` Huang, Ying
  0 siblings, 1 reply; 30+ messages in thread
From: Miaohe Lin @ 2022-03-31  7:44 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Andrew Morton, Linux Memory Management List, LKML, Muchun Song

On 2022/3/31 14:37, Huang, Ying wrote:
> Muchun Song <songmuchun@bytedance.com> writes:
> 
>> On Tue, Mar 29, 2022 at 9:26 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>>
>>> When folio is file lru, folio_test_swapbacked is guaranteed to be false. So
>>> it's unnecessary to check it here again. No functional change intended.
>>>
>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>> ---
>>>  mm/vmscan.c | 3 +--
>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>
>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>> index 1678802e03e7..7c1a9713bfc9 100644
>>> --- a/mm/vmscan.c
>>> +++ b/mm/vmscan.c
>>> @@ -1434,8 +1434,7 @@ static void folio_check_dirty_writeback(struct folio *folio,
>>>          * Anonymous pages are not handled by flushers and must be written
>>>          * from reclaim context. Do not stall reclaim based on them
>>>          */
>>> -       if (!folio_is_file_lru(folio) ||
>>> -           (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
>>> +       if (!folio_is_file_lru(folio) || folio_test_anon(folio)) {
>>
>> At least your logic is no problem since folio_is_file_lru() is equal to
>> !folio_test_swapbacked().  But the new code is not clear to me.
>> The old code is easy to understand, e.g. folio_test_anon(folio) &&
>> !folio_test_swapbacked(folio) tells us that the anon pages which
>> do not need to be swapped should be skipped.
> 
> That is for MADV_FREE pages.  The code is introduced in commit
> 802a3a92ad7a ("mm: reclaim MADV_FREE pages").
> 
> So I think the original code is better.  It's an implementation detail
> that folio_is_file_lru() equals !folio_test_swapbacked().  It may be
> better to add some comments here for MADV_FREE pages.
> 

Do you prefer to drop this patch, add a comment along with the change, or something else?

Thanks.

>> So I'm neutral on the patch.
> 
> Best Regards,
> Huang, Ying
> .
> 



* Re: [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru
  2022-03-31  7:44       ` Miaohe Lin
@ 2022-03-31  8:02         ` Huang, Ying
  2022-03-31  8:07           ` Miaohe Lin
  0 siblings, 1 reply; 30+ messages in thread
From: Huang, Ying @ 2022-03-31  8:02 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Andrew Morton, Linux Memory Management List, LKML, Muchun Song

Miaohe Lin <linmiaohe@huawei.com> writes:

> On 2022/3/31 14:37, Huang, Ying wrote:
>> Muchun Song <songmuchun@bytedance.com> writes:
>> 
>>> On Tue, Mar 29, 2022 at 9:26 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>>>
>>>> When folio is file lru, folio_test_swapbacked is guaranteed to be false. So
>>>> it's unnecessary to check it here again. No functional change intended.
>>>>
>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>>> ---
>>>>  mm/vmscan.c | 3 +--
>>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>>
>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>> index 1678802e03e7..7c1a9713bfc9 100644
>>>> --- a/mm/vmscan.c
>>>> +++ b/mm/vmscan.c
>>>> @@ -1434,8 +1434,7 @@ static void folio_check_dirty_writeback(struct folio *folio,
>>>>          * Anonymous pages are not handled by flushers and must be written
>>>>          * from reclaim context. Do not stall reclaim based on them
>>>>          */
>>>> -       if (!folio_is_file_lru(folio) ||
>>>> -           (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
>>>> +       if (!folio_is_file_lru(folio) || folio_test_anon(folio)) {
>>>
>>> At least your logic is no problem since folio_is_file_lru() is equal to
>>> !folio_test_swapbacked().  But the new code is not clear to me.
>>> The old code is easy to understand, e.g. folio_test_anon(folio) &&
>>> !folio_test_swapbacked(folio) tells us that the anon pages which
>>> do not need to be swapped should be skipped.
>> 
>> That is for MADV_FREE pages.  The code is introduced in commit
>> 802a3a92ad7a ("mm: reclaim MADV_FREE pages").
>> 
>> So I think the original code is better.  It's an implementation detail
>> that folio_is_file_lru() equals !folio_test_swapbacked().  It may be
>> better to add some comments here for MADV_FREE pages.
>> 
>
> Do you prefer to drop this patch, add a comment along with the change, or something else?

I suggest dropping the code change and adding a comment about MADV_FREE.

Best Regards,
Huang, Ying

> Thanks.
>
>>> So I'm neutral on the patch.
>> 
>> Best Regards,
>> Huang, Ying
>> .
>> 


* Re: [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru
  2022-03-31  8:02         ` Huang, Ying
@ 2022-03-31  8:07           ` Miaohe Lin
  0 siblings, 0 replies; 30+ messages in thread
From: Miaohe Lin @ 2022-03-31  8:07 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Andrew Morton, Linux Memory Management List, LKML, Muchun Song

On 2022/3/31 16:02, Huang, Ying wrote:
> Miaohe Lin <linmiaohe@huawei.com> writes:
> 
>> On 2022/3/31 14:37, Huang, Ying wrote:
>>> Muchun Song <songmuchun@bytedance.com> writes:
>>>
>>>> On Tue, Mar 29, 2022 at 9:26 PM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>>>>
>>>>> When folio is file lru, folio_test_swapbacked is guaranteed to be false. So
>>>>> it's unnecessary to check it here again. No functional change intended.
>>>>>
>>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>>>> ---
>>>>>  mm/vmscan.c | 3 +--
>>>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>>> index 1678802e03e7..7c1a9713bfc9 100644
>>>>> --- a/mm/vmscan.c
>>>>> +++ b/mm/vmscan.c
>>>>> @@ -1434,8 +1434,7 @@ static void folio_check_dirty_writeback(struct folio *folio,
>>>>>          * Anonymous pages are not handled by flushers and must be written
>>>>>          * from reclaim context. Do not stall reclaim based on them
>>>>>          */
>>>>> -       if (!folio_is_file_lru(folio) ||
>>>>> -           (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
>>>>> +       if (!folio_is_file_lru(folio) || folio_test_anon(folio)) {
>>>>
>>>> At least your logic is no problem since folio_is_file_lru() is equal to
>>>> !folio_test_swapbacked().  But the new code is not clear to me.
>>>> The old code is easy to understand, e.g. folio_test_anon(folio) &&
>>>> !folio_test_swapbacked(folio) tells us that the anon pages which
>>>> do not need to be swapped should be skipped.
>>>
>>> That is for MADV_FREE pages.  The code is introduced in commit
>>> 802a3a92ad7a ("mm: reclaim MADV_FREE pages").
>>>
>>> So I think the original code is better.  It's an implementation detail
>>> that folio_is_file_lru() equals !folio_test_swapbacked().  It may be
>>> better to add some comments here for MADV_FREE pages.
>>>
>>
>> Do you prefer to drop this patch, add a comment along with the change, or something else?
> 
> I suggest dropping the code change and adding a comment about MADV_FREE.

Will do. Thanks.

> 
> Best Regards,
> Huang, Ying
> 
>> Thanks.
>>
>>>> So I'm neutral on the patch.
>>>
>>> Best Regards,
>>> Huang, Ying
>>> .
>>>
> .
> 



* Re: [PATCH 2/8] mm/vmscan: remove unneeded can_split_huge_page check
  2022-03-29 13:26 ` [PATCH 2/8] mm/vmscan: remove unneeded can_split_huge_page check Miaohe Lin
@ 2022-03-31  8:38   ` Huang, Ying
  2022-03-31 10:47     ` Miaohe Lin
  0 siblings, 1 reply; 30+ messages in thread
From: Huang, Ying @ 2022-03-31  8:38 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: akpm, linux-mm, linux-kernel

Miaohe Lin <linmiaohe@huawei.com> writes:

> can_split_huge_page is introduced via commit b8f593cd0896 ("mm, THP, swap:
> check whether THP can be split firstly") to avoid deleting the THP from
> the swap cache and freeing the swap cluster when the THP cannot be split.
> But since commit bd4c82c22c36 ("mm, THP, swap: delay splitting THP after
> swapped out"), splitting THP is delayed until THP is swapped out. There's
> no need to delete the THP from the swap cache and free the swap cluster
> anymore. Thus we can remove this unneeded can_split_huge_page check now to
> simplify the code logic.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/vmscan.c | 3 ---
>  1 file changed, 3 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 7c1a9713bfc9..09b452c4d256 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1691,9 +1691,6 @@ static unsigned int shrink_page_list(struct list_head *page_list,
>  				if (folio_maybe_dma_pinned(folio))
>  					goto keep_locked;
>  				if (PageTransHuge(page)) {
> -					/* cannot split THP, skip it */
> -					if (!can_split_folio(folio, NULL))
> -						goto activate_locked;
>  					/*
>  					 * Split pages without a PMD map right
>  					 * away. Chances are some or all of the

I'm OK with the change itself.  But THP still needs to be split after
being swapped out.  The reason we don't need to check can_split_folio()
is that folio_maybe_dma_pinned() is checked beforehand, which prevents
long-term pinned pages from being swapped out.  And we can live with
short-term pinned pages.  Without the can_split_folio() check we can
simplify the code as follows,

	/*
	 * Split pages without a PMD map right
	 * away. Chances are some or all of the
	 * tail pages can be freed without IO.
	 */
	if (PageTransHuge(page) && !compound_mapcount(page) &&
            split_huge_page_to_list(page, page_list))
		goto keep_locked;
                                                                
activate_locked can be changed to keep_locked too, because it's just
short term pinning.

Best Regards,
Huang, Ying


* Re: [PATCH 2/8] mm/vmscan: remove unneeded can_split_huge_page check
  2022-03-31  8:38   ` Huang, Ying
@ 2022-03-31 10:47     ` Miaohe Lin
  0 siblings, 0 replies; 30+ messages in thread
From: Miaohe Lin @ 2022-03-31 10:47 UTC (permalink / raw)
  To: Huang, Ying; +Cc: akpm, linux-mm, linux-kernel

On 2022/3/31 16:38, Huang, Ying wrote:
> Miaohe Lin <linmiaohe@huawei.com> writes:
> 
>> can_split_huge_page is introduced via commit b8f593cd0896 ("mm, THP, swap:
>> check whether THP can be split firstly") to avoid deleting the THP from
>> the swap cache and freeing the swap cluster when the THP cannot be split.
>> But since commit bd4c82c22c36 ("mm, THP, swap: delay splitting THP after
>> swapped out"), splitting THP is delayed until THP is swapped out. There's
>> no need to delete the THP from the swap cache and free the swap cluster
>> anymore. Thus we can remove this unneeded can_split_huge_page check now to
>> simplify the code logic.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/vmscan.c | 3 ---
>>  1 file changed, 3 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 7c1a9713bfc9..09b452c4d256 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1691,9 +1691,6 @@ static unsigned int shrink_page_list(struct list_head *page_list,
>>  				if (folio_maybe_dma_pinned(folio))
>>  					goto keep_locked;
>>  				if (PageTransHuge(page)) {
>> -					/* cannot split THP, skip it */
>> -					if (!can_split_folio(folio, NULL))
>> -						goto activate_locked;
>>  					/*
>>  					 * Split pages without a PMD map right
>>  					 * away. Chances are some or all of the
> 
> I'm OK with the change itself.  But THP still needs to be split after
> being swapped out.  The reason we don't need to check can_split_folio()

Could you please explain the relevant code path in a bit more detail?
IIUC, if a THP is swapped out, it will be freed via destroy_compound_page
after __remove_mapping in shrink_page_list. So the THP can be freed without
being split. Or am I missing something?

> is that folio_maybe_dma_pinned() is checked beforehand, which prevents
> long-term pinned pages from being swapped out.  And we can live with
> short-term pinned pages.  Without the can_split_folio() check we can
> simplify the code as follows,
> 
> 	/*
> 	 * Split pages without a PMD map right
> 	 * away. Chances are some or all of the
> 	 * tail pages can be freed without IO.
> 	 */
> 	if (PageTransHuge(page) && !compound_mapcount(page) &&
>             split_huge_page_to_list(page, page_list))
> 		goto keep_locked;
>                                                                 
> activate_locked can be changed to keep_locked too, because it's just
> short term pinning.
> 

The change above looks good to me. Many thanks. Should I add a Suggested-by
tag for you?

> Best Regards,
> Huang, Ying

Thanks.

> 
> .
> 


* Re: [PATCH 5/8] mm/vmscan: use helper folio_is_file_lru()
  2022-03-29 13:26 ` [PATCH 5/8] mm/vmscan: use helper folio_is_file_lru() Miaohe Lin
@ 2022-04-01  3:20   ` Huang, Ying
  2022-04-01  6:14     ` Miaohe Lin
  0 siblings, 1 reply; 30+ messages in thread
From: Huang, Ying @ 2022-04-01  3:20 UTC (permalink / raw)
  To: Miaohe Lin
  Cc: akpm, linux-mm, linux-kernel, Joonsoo Kim, Konstantin Khlebnikov,
	Wu Fengguang

Miaohe Lin <linmiaohe@huawei.com> writes:

> Use helper folio_is_file_lru() to check whether folio is file lru. Minor
> readability improvement.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/vmscan.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index ebd8ffb63673..31e95d627448 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1411,14 +1411,14 @@ static enum page_references folio_check_references(struct folio *folio,
>  		/*
>  		 * Activate file-backed executable folios after first usage.
>  		 */
> -		if ((vm_flags & VM_EXEC) && !folio_test_swapbacked(folio))
> +		if ((vm_flags & VM_EXEC) && folio_is_file_lru(folio))

I think that this should be converted to

		if (vm_flags & VM_EXEC)

We should activate swap-backed executable folios (e.g. tmpfs) after
first usage too.

Best Regards,
Huang, Ying

>  			return PAGEREF_ACTIVATE;
>  
>  		return PAGEREF_KEEP;
>  	}
>  
>  	/* Reclaim if clean, defer dirty folios to writeback */
> -	if (referenced_folio && !folio_test_swapbacked(folio))
> +	if (referenced_folio && folio_is_file_lru(folio))
>  		return PAGEREF_RECLAIM_CLEAN;
>  
>  	return PAGEREF_RECLAIM;


* Re: [PATCH 5/8] mm/vmscan: use helper folio_is_file_lru()
  2022-04-01  3:20   ` Huang, Ying
@ 2022-04-01  6:14     ` Miaohe Lin
  2022-04-01  6:40       ` Huang, Ying
  0 siblings, 1 reply; 30+ messages in thread
From: Miaohe Lin @ 2022-04-01  6:14 UTC (permalink / raw)
  To: Huang, Ying
  Cc: akpm, linux-mm, linux-kernel, Joonsoo Kim, Konstantin Khlebnikov,
	Wu Fengguang

On 2022/4/1 11:20, Huang, Ying wrote:
> Miaohe Lin <linmiaohe@huawei.com> writes:
> 
>> Use helper folio_is_file_lru() to check whether folio is file lru. Minor
>> readability improvement.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/vmscan.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index ebd8ffb63673..31e95d627448 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1411,14 +1411,14 @@ static enum page_references folio_check_references(struct folio *folio,
>>  		/*
>>  		 * Activate file-backed executable folios after first usage.
>>  		 */
>> -		if ((vm_flags & VM_EXEC) && !folio_test_swapbacked(folio))
>> +		if ((vm_flags & VM_EXEC) && folio_is_file_lru(folio))
> 
> I think that this should be converted to
> 
> 		if (vm_flags & VM_EXEC)
> 
> We should activate swap-backed executable folios (e.g. tmpfs) after
> first usage too.
> 

Digging into the git history, we can find that commit c909e99364c8 ("vmscan: activate executable pages after first usage")
activated swap-backed executable folios after first usage too. But swap-backed executable folios were later excluded from
activation by commit b518154e59aa ("mm/vmscan: protect the workingset on anonymous LRU") to protect the workingset on the
anonymous LRU. So the above change might not be wanted. Or am I missing something?

Many thanks.

> Best Regards,
> Huang, Ying
> 
>>  			return PAGEREF_ACTIVATE;
>>  
>>  		return PAGEREF_KEEP;
>>  	}
>>  
>>  	/* Reclaim if clean, defer dirty folios to writeback */
>> -	if (referenced_folio && !folio_test_swapbacked(folio))
>> +	if (referenced_folio && folio_is_file_lru(folio))
>>  		return PAGEREF_RECLAIM_CLEAN;
>>  
>>  	return PAGEREF_RECLAIM;
> .
> 



* Re: [PATCH 5/8] mm/vmscan: use helper folio_is_file_lru()
  2022-04-01  6:14     ` Miaohe Lin
@ 2022-04-01  6:40       ` Huang, Ying
  0 siblings, 0 replies; 30+ messages in thread
From: Huang, Ying @ 2022-04-01  6:40 UTC (permalink / raw)
  To: Miaohe Lin, Joonsoo Kim
  Cc: akpm, linux-mm, linux-kernel, Konstantin Khlebnikov, Wu Fengguang

Miaohe Lin <linmiaohe@huawei.com> writes:

> On 2022/4/1 11:20, Huang, Ying wrote:
>> Miaohe Lin <linmiaohe@huawei.com> writes:
>> 
>>> Use helper folio_is_file_lru() to check whether folio is file lru. Minor
>>> readability improvement.
>>>
>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>> ---
>>>  mm/vmscan.c | 4 ++--
>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>> index ebd8ffb63673..31e95d627448 100644
>>> --- a/mm/vmscan.c
>>> +++ b/mm/vmscan.c
>>> @@ -1411,14 +1411,14 @@ static enum page_references folio_check_references(struct folio *folio,
>>>  		/*
>>>  		 * Activate file-backed executable folios after first usage.
>>>  		 */
>>> -		if ((vm_flags & VM_EXEC) && !folio_test_swapbacked(folio))
>>> +		if ((vm_flags & VM_EXEC) && folio_is_file_lru(folio))
>> 
>> I think that this should be converted to
>> 
>> 		if (vm_flags & VM_EXEC)
>> 
>> We should activate swap-backed executable folios (e.g. tmpfs) after
>> first usage too.
>> 
>
> Digging into the git history, we can find that commit c909e99364c8 ("vmscan: activate executable pages after first usage")
> activated swap-backed executable folios after first usage too. But swap-backed executable folios were later excluded from
> activation by commit b518154e59aa ("mm/vmscan: protect the workingset on anonymous LRU") to protect the workingset on the
> anonymous LRU. So the above change might not be wanted. Or am I missing something?

I think we should restore the original behavior to protect swap-backed
executable folios too.

Hi, Kim,

What do you think about this?

Best Regards,
Huang, Ying

> Many thanks.
>
>> Best Regards,
>> Huang, Ying
>> 
>>>  			return PAGEREF_ACTIVATE;
>>>  
>>>  		return PAGEREF_KEEP;
>>>  	}
>>>  
>>>  	/* Reclaim if clean, defer dirty folios to writeback */
>>> -	if (referenced_folio && !folio_test_swapbacked(folio))
>>> +	if (referenced_folio && folio_is_file_lru(folio))
>>>  		return PAGEREF_RECLAIM_CLEAN;
>>>  
>>>  	return PAGEREF_RECLAIM;
>> .
>> 


end of thread, other threads:[~2022-04-01  6:43 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-29 13:26 [PATCH 0/8] A few cleanup and fixup patches for migration Miaohe Lin
2022-03-29 13:26 ` [PATCH 1/8] mm/vmscan: remove redundant folio_test_swapbacked check when folio is file lru Miaohe Lin
2022-03-29 13:46   ` Matthew Wilcox
2022-03-30  1:46     ` Miaohe Lin
2022-03-30  8:13   ` Muchun Song
2022-03-30  9:26     ` Miaohe Lin
2022-03-31  6:37     ` Huang, Ying
2022-03-31  7:44       ` Miaohe Lin
2022-03-31  8:02         ` Huang, Ying
2022-03-31  8:07           ` Miaohe Lin
2022-03-29 13:26 ` [PATCH 2/8] mm/vmscan: remove unneeded can_split_huge_page check Miaohe Lin
2022-03-31  8:38   ` Huang, Ying
2022-03-31 10:47     ` Miaohe Lin
2022-03-29 13:26 ` [PATCH 3/8] mm/vmscan: introduce helper function reclaim_page_list() Miaohe Lin
2022-03-29 13:50   ` Matthew Wilcox
2022-03-30  2:04     ` Miaohe Lin
2022-03-30 16:23       ` Matthew Wilcox
2022-03-31  1:45         ` Miaohe Lin
2022-03-29 14:00   ` Christoph Hellwig
2022-03-30  2:02     ` Miaohe Lin
2022-03-29 13:26 ` [PATCH 4/8] mm/vmscan: save a bit of stack space in shrink_lruvec Miaohe Lin
2022-03-29 14:00   ` Christoph Hellwig
2022-03-30  1:59     ` Miaohe Lin
2022-03-29 13:26 ` [PATCH 5/8] mm/vmscan: use helper folio_is_file_lru() Miaohe Lin
2022-04-01  3:20   ` Huang, Ying
2022-04-01  6:14     ` Miaohe Lin
2022-04-01  6:40       ` Huang, Ying
2022-03-29 13:26 ` [PATCH 6/8] mm/vmscan: take all base pages of THP into account when race with speculative reference Miaohe Lin
2022-03-29 13:26 ` [PATCH 7/8] mm/vmscan: take min_slab_pages into account when try to call shrink_node Miaohe Lin
2022-03-29 13:26 ` [PATCH 8/8] mm/vmscan: remove obsolete comment in kswapd_run Miaohe Lin
