From: Andrew Morton <akpm@linux-foundation.org>
To: aarcange@redhat.com, akpm@linux-foundation.org,
	alex.shi@linux.alibaba.com, alexander.duyck@gmail.com,
	aryabinin@virtuozzo.com, daniel.m.jordan@oracle.com,
	hannes@cmpxchg.org, hughd@google.com, iamjoonsoo.kim@lge.com,
	jannh@google.com, khlebnikov@yandex-team.ru,
	kirill.shutemov@linux.intel.com, kirill@shutemov.name,
	linux-mm@kvack.org, mgorman@techsingularity.net,
	mhocko@kernel.org, mhocko@suse.com, mika.penttila@nextfour.com,
	minchan@kernel.org, mm-commits@vger.kernel.org,
	richard.weiyang@gmail.com, rong.a.chen@intel.com,
	shakeelb@google.com, tglx@linutronix.de, tj@kernel.org,
	torvalds@linux-foundation.org, vbabka@suse.cz,
	vdavydov.dev@gmail.com, willy@infradead.org,
	yang.shi@linux.alibaba.com, ying.huang@intel.com
Subject: [patch 09/19] mm/swap.c: fold vm event PGROTATED into pagevec_move_tail_fn
Date: Tue, 15 Dec 2020 12:33:56 -0800
Message-ID: <20201215203356.TnUWPfD4b%akpm@linux-foundation.org>
In-Reply-To: <20201215123253.954eca9a5ef4c0d52fd381fa@linux-foundation.org>

From: Alex Shi <alex.shi@linux.alibaba.com>
Subject: mm/swap.c: fold vm event PGROTATED into pagevec_move_tail_fn

Fold the PGROTATED event accounting into the pagevec_move_tail_fn()
callback, as the other callbacks passed to pagevec_lru_move_fn() already
do.  This lets us drop the pagevec_move_tail() wrapper entirely.  All
users of pagevec_lru_move_fn() now behave the same way, so its third
parameter is no longer needed.

This only simplifies the call chain.  No functional change.
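
For reference, a condensed sketch of the result (assembled from the diff
below; not a complete listing):

	static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
	{
		if (PageLRU(page) && !PageUnevictable(page)) {
			del_page_from_lru_list(page, lruvec, page_lru(page));
			ClearPageActive(page);
			add_page_to_lru_list_tail(page, lruvec, page_lru(page));
			/*
			 * Counted here, while pagevec_lru_move_fn() holds
			 * pgdat->lru_lock with IRQs disabled, so the
			 * non-atomic __count_vm_events() is safe.
			 */
			__count_vm_events(PGROTATED, thp_nr_pages(page));
		}
	}

	/* All callers now pass the callback directly, with no third argument: */
	pagevec_lru_move_fn(pvec, pagevec_move_tail_fn);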

[lkp@intel.com: found a build issue in the original patch, thanks]
Link: https://lkml.kernel.org/r/1604566549-62481-10-git-send-email-alex.shi@linux.alibaba.com
Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Acked-by: Hugh Dickins <hughd@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: "Chen, Rong A" <rong.a.chen@intel.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Jann Horn <jannh@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mika Penttilä <mika.penttila@nextfour.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/swap.c |   65 ++++++++++++++++++----------------------------------
 1 file changed, 23 insertions(+), 42 deletions(-)

--- a/mm/swap.c~mm-swapc-fold-vm-event-pgrotated-into-pagevec_move_tail_fn
+++ a/mm/swap.c
@@ -204,8 +204,7 @@ int get_kernel_page(unsigned long start,
 EXPORT_SYMBOL_GPL(get_kernel_page);
 
 static void pagevec_lru_move_fn(struct pagevec *pvec,
-	void (*move_fn)(struct page *page, struct lruvec *lruvec, void *arg),
-	void *arg)
+	void (*move_fn)(struct page *page, struct lruvec *lruvec))
 {
 	int i;
 	struct pglist_data *pgdat = NULL;
@@ -224,7 +223,7 @@ static void pagevec_lru_move_fn(struct p
 		}
 
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		(*move_fn)(page, lruvec, arg);
+		(*move_fn)(page, lruvec);
 	}
 	if (pgdat)
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
@@ -232,35 +231,22 @@ static void pagevec_lru_move_fn(struct p
 	pagevec_reinit(pvec);
 }
 
-static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec,
-				 void *arg)
+static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
 {
-	int *pgmoved = arg;
-
 	if (PageLRU(page) && !PageUnevictable(page)) {
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		ClearPageActive(page);
 		add_page_to_lru_list_tail(page, lruvec, page_lru(page));
-		(*pgmoved) += thp_nr_pages(page);
+		__count_vm_events(PGROTATED, thp_nr_pages(page));
 	}
 }
 
 /*
- * pagevec_move_tail() must be called with IRQ disabled.
- * Otherwise this may cause nasty races.
- */
-static void pagevec_move_tail(struct pagevec *pvec)
-{
-	int pgmoved = 0;
-
-	pagevec_lru_move_fn(pvec, pagevec_move_tail_fn, &pgmoved);
-	__count_vm_events(PGROTATED, pgmoved);
-}
-
-/*
  * Writeback is about to end against a page which has been marked for immediate
  * reclaim.  If it still appears to be reclaimable, move it to the tail of the
  * inactive list.
+ *
+ * rotate_reclaimable_page() must disable IRQs, to prevent nasty races.
  */
 void rotate_reclaimable_page(struct page *page)
 {
@@ -273,7 +259,7 @@ void rotate_reclaimable_page(struct page
 		local_lock_irqsave(&lru_rotate.lock, flags);
 		pvec = this_cpu_ptr(&lru_rotate.pvec);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
-			pagevec_move_tail(pvec);
+			pagevec_lru_move_fn(pvec, pagevec_move_tail_fn);
 		local_unlock_irqrestore(&lru_rotate.lock, flags);
 	}
 }
@@ -315,8 +301,7 @@ void lru_note_cost_page(struct page *pag
 		      page_is_file_lru(page), thp_nr_pages(page));
 }
 
-static void __activate_page(struct page *page, struct lruvec *lruvec,
-			    void *arg)
+static void __activate_page(struct page *page, struct lruvec *lruvec)
 {
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
 		int lru = page_lru_base_type(page);
@@ -340,7 +325,7 @@ static void activate_page_drain(int cpu)
 	struct pagevec *pvec = &per_cpu(lru_pvecs.activate_page, cpu);
 
 	if (pagevec_count(pvec))
-		pagevec_lru_move_fn(pvec, __activate_page, NULL);
+		pagevec_lru_move_fn(pvec, __activate_page);
 }
 
 static bool need_activate_page_drain(int cpu)
@@ -358,7 +343,7 @@ static void activate_page(struct page *p
 		pvec = this_cpu_ptr(&lru_pvecs.activate_page);
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
-			pagevec_lru_move_fn(pvec, __activate_page, NULL);
+			pagevec_lru_move_fn(pvec, __activate_page);
 		local_unlock(&lru_pvecs.lock);
 	}
 }
@@ -374,7 +359,7 @@ static void activate_page(struct page *p
 
 	page = compound_head(page);
 	spin_lock_irq(&pgdat->lru_lock);
-	__activate_page(page, mem_cgroup_page_lruvec(page, pgdat), NULL);
+	__activate_page(page, mem_cgroup_page_lruvec(page, pgdat));
 	spin_unlock_irq(&pgdat->lru_lock);
 }
 #endif
@@ -525,8 +510,7 @@ void lru_cache_add_inactive_or_unevictab
  * be write it out by flusher threads as this is much more effective
  * than the single-page writeout from reclaim.
  */
-static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
-			      void *arg)
+static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 {
 	int lru;
 	bool active;
@@ -573,8 +557,7 @@ static void lru_deactivate_file_fn(struc
 	}
 }
 
-static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
-			    void *arg)
+static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
 {
 	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
 		int lru = page_lru_base_type(page);
@@ -591,8 +574,7 @@ static void lru_deactivate_fn(struct pag
 	}
 }
 
-static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
-			    void *arg)
+static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
@@ -636,21 +618,21 @@ void lru_add_drain_cpu(int cpu)
 
 		/* No harm done if a racing interrupt already did this */
 		local_lock_irqsave(&lru_rotate.lock, flags);
-		pagevec_move_tail(pvec);
+		pagevec_lru_move_fn(pvec, pagevec_move_tail_fn);
 		local_unlock_irqrestore(&lru_rotate.lock, flags);
 	}
 
 	pvec = &per_cpu(lru_pvecs.lru_deactivate_file, cpu);
 	if (pagevec_count(pvec))
-		pagevec_lru_move_fn(pvec, lru_deactivate_file_fn, NULL);
+		pagevec_lru_move_fn(pvec, lru_deactivate_file_fn);
 
 	pvec = &per_cpu(lru_pvecs.lru_deactivate, cpu);
 	if (pagevec_count(pvec))
-		pagevec_lru_move_fn(pvec, lru_deactivate_fn, NULL);
+		pagevec_lru_move_fn(pvec, lru_deactivate_fn);
 
 	pvec = &per_cpu(lru_pvecs.lru_lazyfree, cpu);
 	if (pagevec_count(pvec))
-		pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
+		pagevec_lru_move_fn(pvec, lru_lazyfree_fn);
 
 	activate_page_drain(cpu);
 }
@@ -679,7 +661,7 @@ void deactivate_file_page(struct page *p
 		pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate_file);
 
 		if (!pagevec_add(pvec, page) || PageCompound(page))
-			pagevec_lru_move_fn(pvec, lru_deactivate_file_fn, NULL);
+			pagevec_lru_move_fn(pvec, lru_deactivate_file_fn);
 		local_unlock(&lru_pvecs.lock);
 	}
 }
@@ -701,7 +683,7 @@ void deactivate_page(struct page *page)
 		pvec = this_cpu_ptr(&lru_pvecs.lru_deactivate);
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
-			pagevec_lru_move_fn(pvec, lru_deactivate_fn, NULL);
+			pagevec_lru_move_fn(pvec, lru_deactivate_fn);
 		local_unlock(&lru_pvecs.lock);
 	}
 }
@@ -723,7 +705,7 @@ void mark_page_lazyfree(struct page *pag
 		pvec = this_cpu_ptr(&lru_pvecs.lru_lazyfree);
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
-			pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
+			pagevec_lru_move_fn(pvec, lru_lazyfree_fn);
 		local_unlock(&lru_pvecs.lock);
 	}
 }
@@ -977,8 +959,7 @@ void __pagevec_release(struct pagevec *p
 }
 EXPORT_SYMBOL(__pagevec_release);
 
-static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
-				 void *arg)
+static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
 {
 	enum lru_list lru;
 	int was_unevictable = TestClearPageUnevictable(page);
@@ -1037,7 +1018,7 @@ static void __pagevec_lru_add_fn(struct
  */
 void __pagevec_lru_add(struct pagevec *pvec)
 {
-	pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn, NULL);
+	pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn);
 }
 
 /**
_

