From: Muchun Song <songmuchun@bytedance.com>
To: guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org,
	akpm@linux-foundation.org, shakeelb@google.com,
	vdavydov.dev@gmail.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	duanxiongchun@bytedance.com, fam.zheng@bytedance.com,
	bsingharora@gmail.com, shy828301@gmail.com, alexs@kernel.org,
	smuchun@gmail.com, zhengqi.arch@bytedance.com,
	Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v1 11/12] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance functions
Date: Sat, 14 Aug 2021 13:25:18 +0800
Message-ID: <20210814052519.86679-12-songmuchun@bytedance.com>
In-Reply-To: <20210814052519.86679-1-songmuchun@bytedance.com>

We need to make sure that a page is deleted from or added to the
correct lruvec list, so add a VM_BUG_ON_FOLIO() to each of the LRU
maintenance helpers to catch invalid users. With the check centralized
there, the open-coded VM_BUG_ON_PAGE() in move_pages_to_lru() becomes
redundant and is dropped. VM_BUG_ON_FOLIO() compiles to nothing unless
CONFIG_DEBUG_VM is enabled, so release builds pay no cost for the
check.
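
For reference, folio_matches_lruvec() verifies that the folio and the
lruvec agree on both the node and the memcg. A minimal sketch of the
check, roughly matching the in-tree helper:

	/* Requires a stable folio->memcg binding, see folio_memcg(). */
	static inline bool folio_matches_lruvec(struct folio *folio,
						struct lruvec *lruvec)
	{
		/* The folio must sit on the node the lruvec belongs to... */
		return lruvec_pgdat(lruvec) == folio_pgdat(folio) &&
		       /* ...and be charged to the same memcg. */
		       lruvec_memcg(lruvec) == folio_memcg(folio);
	}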

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/mm_inline.h | 15 ++++++++++++---
 mm/vmscan.c               |  1 -
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index e2ec68b0515c..60eb827a41fe 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -103,7 +103,10 @@ void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
 static __always_inline void add_page_to_lru_list(struct page *page,
 				struct lruvec *lruvec)
 {
-	lruvec_add_folio(lruvec, page_folio(page));
+	struct folio *folio = page_folio(page);
+
+	VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+	lruvec_add_folio(lruvec, folio);
 }
 
 static __always_inline
@@ -119,7 +122,10 @@ void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio)
 static __always_inline void add_page_to_lru_list_tail(struct page *page,
 				struct lruvec *lruvec)
 {
-	lruvec_add_folio_tail(lruvec, page_folio(page));
+	struct folio *folio = page_folio(page);
+
+	VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+	lruvec_add_folio_tail(lruvec, folio);
 }
 
 static __always_inline
@@ -133,6 +139,9 @@ void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
 static __always_inline void del_page_from_lru_list(struct page *page,
 				struct lruvec *lruvec)
 {
-	lruvec_del_folio(lruvec, page_folio(page));
+	struct folio *folio = page_folio(page);
+
+	VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+	lruvec_del_folio(lruvec, folio);
 }
 #endif
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8ce42858ad5d..902d36ec91a3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2204,7 +2204,6 @@ static unsigned int move_pages_to_lru(struct list_head *list)
 			continue;
 		}
 
-		VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
 		add_page_to_lru_list(page, lruvec);
 		nr_pages = thp_nr_pages(page);
 		nr_moved += nr_pages;
-- 
2.11.0

