Sorry, list_move() can cause problems when deleting an entry that is no longer on a list (i.e. with DEBUG_LIST enabled).
I have corrected the patch, as attached.

Best Regards,
Yongmei.

From: Yongmei Xie <yongmeixie@hotmail.com>
Sent: September 19, 2021, 23:25
To: akpm@linux-foundation.org <akpm@linux-foundation.org>; linux-mm@kvack.org <linux-mm@kvack.org>; linux-kernel@vger.kernel.org <linux-kernel@vger.kernel.org>
Cc: yongmeixie@hotmail.com <yongmeixie@hotmail.com>
Subject: [PATCH] mm:vmscan: fix extra adjustment for lruvec's nonresident_age in case of reactivation
 
Before commit 31d8fcac, the VM did not increase nonresident_age (i.e. the inactive
age for file pages) in shrink_page_list. When putback_inactive_pages was merged into
move_pages_to_lru, both shrink_active_list and shrink_page_list began to use the same
function to move pages to the appropriate lru under the lru lock's protection.

At that time, the VM did not increase nonresident_age for second-chance promotion.
Commit 31d8fcac fixed that problem, and we should indeed account the activation for
second chance. But move_pages_to_lru is also used by shrink_active_list to put
reactivated pages back on the active lru (to protect a code section), and those
moves should not age the nonresident working set. So I suggest adding another
parameter to indicate whether the move is a reactivation.

Signed-off-by: Yongmei Xie <yongmeixie@hotmail.com>
---
 mm/vmscan.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 74296c2d1fed..85ccafcd4912 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2152,7 +2152,8 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
  * Returns the number of pages moved to the given lruvec.
  */
 static unsigned int move_pages_to_lru(struct lruvec *lruvec,
-                                     struct list_head *list)
+                                     struct list_head *list,
+                                     bool reactivation)
 {
         int nr_pages, nr_moved = 0;
         LIST_HEAD(pages_to_free);
@@ -2203,7 +2204,7 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
                 add_page_to_lru_list(page, lruvec);
                 nr_pages = thp_nr_pages(page);
                 nr_moved += nr_pages;
-               if (PageActive(page))
+               if (PageActive(page) && !reactivation)
                         workingset_age_nonresident(lruvec, nr_pages);
         }
 
@@ -2281,7 +2282,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
         nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false);
 
         spin_lock_irq(&lruvec->lru_lock);
-       move_pages_to_lru(lruvec, &page_list);
+       move_pages_to_lru(lruvec, &page_list, false);
 
         __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
         item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
@@ -2418,8 +2419,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
          */
         spin_lock_irq(&lruvec->lru_lock);
 
-       nr_activate = move_pages_to_lru(lruvec, &l_active);
-       nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
+       nr_activate = move_pages_to_lru(lruvec, &l_active, true);
+       nr_deactivate = move_pages_to_lru(lruvec, &l_inactive, false);
         /* Keep all free pages in l_active list */
         list_splice(&l_inactive, &l_active);
 
--
2.18.2