linux-mm.kvack.org archive mirror
* [PATCH 00/13] mm: clean up some lru related pieces
@ 2020-09-18  3:00 Yu Zhao
  2020-09-18  3:00 ` [PATCH 01/13] mm: use add_page_to_lru_list() Yu Zhao
                   ` (14 more replies)
  0 siblings, 15 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

Hi Andrew,

I see you have taken this:
  mm: use add_page_to_lru_list()/page_lru()/page_off_lru()
Do you mind dropping it?

Michal asked me to do a bit of additional work on it, so I thought I
should create a series to do more of the cleanups I've been meaning to do.

This series contains the change in the patch above and goes a few
steps further. It's intended to improve readability and should not
have any performance impact. There are minor behavior changes in
terms of debugging and error reporting, all of which I have highlighted
in the individual patches. All patches were properly tested on 5.8
running Chrome OS, with various debug options turned on.

Michal,

Do you mind taking a look at the entire series?

Thank you.

Yu Zhao (13):
  mm: use add_page_to_lru_list()
  mm: use page_off_lru()
  mm: move __ClearPageLRU() into page_off_lru()
  mm: shuffle lru list addition and deletion functions
  mm: don't pass enum lru_list to lru list addition functions
  mm: don't pass enum lru_list to trace_mm_lru_insertion()
  mm: don't pass enum lru_list to del_page_from_lru_list()
  mm: rename page_off_lru() to __clear_page_lru_flags()
  mm: inline page_lru_base_type()
  mm: VM_BUG_ON lru page flags
  mm: inline __update_lru_size()
  mm: make lruvec_lru_size() static
  mm: enlarge the int parameter of update_lru_size()

 include/linux/memcontrol.h     |  14 ++--
 include/linux/mm_inline.h      | 115 ++++++++++++++-------------------
 include/linux/mmzone.h         |   2 -
 include/linux/vmstat.h         |   2 +-
 include/trace/events/pagemap.h |  11 ++--
 mm/compaction.c                |   2 +-
 mm/memcontrol.c                |  10 +--
 mm/mlock.c                     |   2 +-
 mm/swap.c                      |  53 ++++++---------
 mm/vmscan.c                    |  28 +++-----
 10 files changed, 95 insertions(+), 144 deletions(-)

-- 
2.28.0.681.g6f77f65b4e-goog




* [PATCH 01/13] mm: use add_page_to_lru_list()
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
@ 2020-09-18  3:00 ` Yu Zhao
  2020-09-18  7:32   ` Michal Hocko
  2020-09-18  3:00 ` [PATCH 02/13] mm: use page_off_lru() Yu Zhao
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

This patch replaces the only open-coded lru list addition with
add_page_to_lru_list().

Before this patch, we have:
	update_lru_size()
	list_move()

After this patch, we have:
	list_del()
	add_page_to_lru_list()
		update_lru_size()
		list_add()

The only side effect is that page->lru is temporarily poisoned
after a page is deleted from its old list, which shouldn't be a
problem.
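
For reference, the helper reads as follows in include/linux/mm_inline.h
(quoted from the tree this series is based on; patch 4 below moves it
verbatim, and the comments here are only added for illustration):

	static __always_inline void add_page_to_lru_list(struct page *page,
					struct lruvec *lruvec, enum lru_list lru)
	{
		/* bump the per-lruvec and per-zone lru counters first */
		update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
		/* then link the page at the head of its lru list */
		list_add(&page->lru, &lruvec->lists[lru]);
	}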

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/vmscan.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9727dd8e2581..503fc5e1fe32 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1850,8 +1850,8 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 	while (!list_empty(list)) {
 		page = lru_to_page(list);
 		VM_BUG_ON_PAGE(PageLRU(page), page);
+		list_del(&page->lru);
 		if (unlikely(!page_evictable(page))) {
-			list_del(&page->lru);
 			spin_unlock_irq(&pgdat->lru_lock);
 			putback_lru_page(page);
 			spin_lock_irq(&pgdat->lru_lock);
@@ -1862,9 +1862,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		SetPageLRU(page);
 		lru = page_lru(page);
 
-		nr_pages = thp_nr_pages(page);
-		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
-		list_move(&page->lru, &lruvec->lists[lru]);
+		add_page_to_lru_list(page, lruvec, lru);
 
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
@@ -1878,6 +1876,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 			} else
 				list_add(&page->lru, &pages_to_free);
 		} else {
+			nr_pages = thp_nr_pages(page);
 			nr_moved += nr_pages;
 			if (PageActive(page))
 				workingset_age_nonresident(lruvec, nr_pages);
-- 
2.28.0.681.g6f77f65b4e-goog




* [PATCH 02/13] mm: use page_off_lru()
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
  2020-09-18  3:00 ` [PATCH 01/13] mm: use add_page_to_lru_list() Yu Zhao
@ 2020-09-18  3:00 ` Yu Zhao
  2020-09-18  7:37   ` Michal Hocko
  2020-09-18  3:00 ` [PATCH 03/13] mm: move __ClearPageLRU() into page_off_lru() Yu Zhao
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

This patch replaces the only open-coded __ClearPageActive() with
page_off_lru(). There are no open-coded __ClearPageUnevictable()s.

Before this patch, we have:
	__ClearPageActive()
	add_page_to_lru_list()

After this patch, we have:
	page_off_lru()
		if PageUnevictable()
			__ClearPageUnevictable()
		else if PageActive()
			__ClearPageActive()
	add_page_to_lru_list()

Checking PageUnevictable() shouldn't be a problem because these two
flags are mutually exclusive. Leaking either will trigger bad_page().
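
For reference, page_off_lru() currently reads as below. This is a
sketch reconstructed from the hunks in patches 3 and 8, for readers
who don't have the tree at hand:

	static __always_inline enum lru_list page_off_lru(struct page *page)
	{
		enum lru_list lru;

		if (PageUnevictable(page)) {
			/* PG_unevictable and PG_active are mutually exclusive */
			__ClearPageUnevictable(page);
			lru = LRU_UNEVICTABLE;
		} else {
			lru = page_lru_base_type(page);
			if (PageActive(page)) {
				__ClearPageActive(page);
				lru += LRU_ACTIVE;
			}
		}
		return lru;
	}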

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/vmscan.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 503fc5e1fe32..f257d2f61574 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1845,7 +1845,6 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 	int nr_pages, nr_moved = 0;
 	LIST_HEAD(pages_to_free);
 	struct page *page;
-	enum lru_list lru;
 
 	while (!list_empty(list)) {
 		page = lru_to_page(list);
@@ -1860,14 +1859,11 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
 		SetPageLRU(page);
-		lru = page_lru(page);
-
 		add_page_to_lru_list(page, lruvec, lru);
 
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
-			__ClearPageActive(page);
-			del_page_from_lru_list(page, lruvec, lru);
+			del_page_from_lru_list(page, lruvec, page_off_lru(page));
 
 			if (unlikely(PageCompound(page))) {
 				spin_unlock_irq(&pgdat->lru_lock);
-- 
2.28.0.681.g6f77f65b4e-goog




* [PATCH 03/13] mm: move __ClearPageLRU() into page_off_lru()
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
  2020-09-18  3:00 ` [PATCH 01/13] mm: use add_page_to_lru_list() Yu Zhao
  2020-09-18  3:00 ` [PATCH 02/13] mm: use page_off_lru() Yu Zhao
@ 2020-09-18  3:00 ` Yu Zhao
  2020-09-18  7:38   ` Michal Hocko
  2020-09-18  3:00 ` [PATCH 04/13] mm: shuffle lru list addition and deletion functions Yu Zhao
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

Now we have a total of three places that free lru pages when their
references become zero (after we drop the reference from isolation).

Before this patch, they all do:
	__ClearPageLRU()
	page_off_lru()
	del_page_from_lru_list()

After this patch, they become:
	page_off_lru()
		__ClearPageLRU()
	del_page_from_lru_list()

This change should have no side effects.
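
As an example, __page_cache_release() ends up as the following sketch
of the result (taken from the hunk below):

	spin_lock_irqsave(&pgdat->lru_lock, flags);
	lruvec = mem_cgroup_page_lruvec(page, pgdat);
	VM_BUG_ON_PAGE(!PageLRU(page), page);
	/* page_off_lru() now clears PG_lru as well as PG_active/PG_unevictable */
	del_page_from_lru_list(page, lruvec, page_off_lru(page));
	spin_unlock_irqrestore(&pgdat->lru_lock, flags);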

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h | 1 +
 mm/swap.c                 | 2 --
 mm/vmscan.c               | 1 -
 3 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 8fc71e9d7bb0..be9418425e41 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -92,6 +92,7 @@ static __always_inline enum lru_list page_off_lru(struct page *page)
 {
 	enum lru_list lru;
 
+	__ClearPageLRU(page);
 	if (PageUnevictable(page)) {
 		__ClearPageUnevictable(page);
 		lru = LRU_UNEVICTABLE;
diff --git a/mm/swap.c b/mm/swap.c
index 40bf20a75278..8362083f00c9 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -86,7 +86,6 @@ static void __page_cache_release(struct page *page)
 		spin_lock_irqsave(&pgdat->lru_lock, flags);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
-		__ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec, page_off_lru(page));
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 	}
@@ -895,7 +894,6 @@ void release_pages(struct page **pages, int nr)
 
 			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
 			VM_BUG_ON_PAGE(!PageLRU(page), page);
-			__ClearPageLRU(page);
 			del_page_from_lru_list(page, lruvec, page_off_lru(page));
 		}
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f257d2f61574..f9a186a96410 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1862,7 +1862,6 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec, lru);
 
 		if (put_page_testzero(page)) {
-			__ClearPageLRU(page);
 			del_page_from_lru_list(page, lruvec, page_off_lru(page));
 
 			if (unlikely(PageCompound(page))) {
-- 
2.28.0.681.g6f77f65b4e-goog




* [PATCH 04/13] mm: shuffle lru list addition and deletion functions
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
                   ` (2 preceding siblings ...)
  2020-09-18  3:00 ` [PATCH 03/13] mm: move __ClearPageLRU() into page_off_lru() Yu Zhao
@ 2020-09-18  3:00 ` Yu Zhao
  2020-09-18  3:00 ` [PATCH 05/13] mm: don't pass enum lru_list to lru list addition functions Yu Zhao
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

This change should have no side effects.

We want it only because we prefer to avoid forward declarations in
the following patches.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h | 42 +++++++++++++++++++--------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index be9418425e41..bfa30c752804 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -45,27 +45,6 @@ static __always_inline void update_lru_size(struct lruvec *lruvec,
 #endif
 }
 
-static __always_inline void add_page_to_lru_list(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
-{
-	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
-	list_add(&page->lru, &lruvec->lists[lru]);
-}
-
-static __always_inline void add_page_to_lru_list_tail(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
-{
-	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
-	list_add_tail(&page->lru, &lruvec->lists[lru]);
-}
-
-static __always_inline void del_page_from_lru_list(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
-{
-	list_del(&page->lru);
-	update_lru_size(lruvec, lru, page_zonenum(page), -thp_nr_pages(page));
-}
-
 /**
  * page_lru_base_type - which LRU list type should a page be on?
  * @page: the page to test
@@ -126,4 +105,25 @@ static __always_inline enum lru_list page_lru(struct page *page)
 	}
 	return lru;
 }
+
+static __always_inline void add_page_to_lru_list(struct page *page,
+				struct lruvec *lruvec, enum lru_list lru)
+{
+	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
+	list_add(&page->lru, &lruvec->lists[lru]);
+}
+
+static __always_inline void add_page_to_lru_list_tail(struct page *page,
+				struct lruvec *lruvec, enum lru_list lru)
+{
+	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
+	list_add_tail(&page->lru, &lruvec->lists[lru]);
+}
+
+static __always_inline void del_page_from_lru_list(struct page *page,
+				struct lruvec *lruvec, enum lru_list lru)
+{
+	list_del(&page->lru);
+	update_lru_size(lruvec, lru, page_zonenum(page), -thp_nr_pages(page));
+}
 #endif
-- 
2.28.0.681.g6f77f65b4e-goog




* [PATCH 05/13] mm: don't pass enum lru_list to lru list addition functions
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
                   ` (3 preceding siblings ...)
  2020-09-18  3:00 ` [PATCH 04/13] mm: shuffle lru list addition and deletion functions Yu Zhao
@ 2020-09-18  3:00 ` Yu Zhao
  2020-09-18  3:00 ` [PATCH 06/13] mm: don't pass enum lru_list to trace_mm_lru_insertion() Yu Zhao
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

The enum lru_list parameter to add_page_to_lru_list() and
add_page_to_lru_list_tail() is redundant in the sense that it can
be extracted from the struct page parameter by page_lru().

A caveat is that we need to make sure PageActive() or
PageUnevictable() is correctly set or cleared before calling
these two functions. And indeed they are.

This change should have no side effects.
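
A typical caller change, sketched from the __activate_page() hunk
below:

	/* before: the caller computes the target lru by hand */
	del_page_from_lru_list(page, lruvec, lru);
	SetPageActive(page);
	lru += LRU_ACTIVE;
	add_page_to_lru_list(page, lruvec, lru);

	/* after: set the flag first; add_page_to_lru_list() derives the lru via page_lru() */
	del_page_from_lru_list(page, lruvec, lru);
	SetPageActive(page);
	add_page_to_lru_list(page, lruvec);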

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h |  8 ++++++--
 mm/swap.c                 | 18 ++++++++----------
 mm/vmscan.c               |  6 ++----
 3 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index bfa30c752804..199ff51bf2a0 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -107,15 +107,19 @@ static __always_inline enum lru_list page_lru(struct page *page)
 }
 
 static __always_inline void add_page_to_lru_list(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
+				struct lruvec *lruvec)
 {
+	enum lru_list lru = page_lru(page);
+
 	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
 	list_add(&page->lru, &lruvec->lists[lru]);
 }
 
 static __always_inline void add_page_to_lru_list_tail(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
+				struct lruvec *lruvec)
 {
+	enum lru_list lru = page_lru(page);
+
 	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
 	list_add_tail(&page->lru, &lruvec->lists[lru]);
 }
diff --git a/mm/swap.c b/mm/swap.c
index 8362083f00c9..8d0e31d43852 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -238,7 +238,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec,
 	if (PageLRU(page) && !PageUnevictable(page)) {
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		ClearPageActive(page);
-		add_page_to_lru_list_tail(page, lruvec, page_lru(page));
+		add_page_to_lru_list_tail(page, lruvec);
 		(*pgmoved) += thp_nr_pages(page);
 	}
 }
@@ -322,8 +322,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 
 		del_page_from_lru_list(page, lruvec, lru);
 		SetPageActive(page);
-		lru += LRU_ACTIVE;
-		add_page_to_lru_list(page, lruvec, lru);
+		add_page_to_lru_list(page, lruvec);
 		trace_mm_lru_activate(page);
 
 		__count_vm_events(PGACTIVATE, nr_pages);
@@ -555,14 +554,14 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		 * It can make readahead confusing.  But race window
 		 * is _really_ small and  it's non-critical problem.
 		 */
-		add_page_to_lru_list(page, lruvec, lru);
+		add_page_to_lru_list(page, lruvec);
 		SetPageReclaim(page);
 	} else {
 		/*
 		 * The page's writeback ends up during pagevec
 		 * We moves tha page into tail of inactive.
 		 */
-		add_page_to_lru_list_tail(page, lruvec, lru);
+		add_page_to_lru_list_tail(page, lruvec);
 		__count_vm_events(PGROTATED, nr_pages);
 	}
 
@@ -583,7 +582,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
-		add_page_to_lru_list(page, lruvec, lru);
+		add_page_to_lru_list(page, lruvec);
 
 		__count_vm_events(PGDEACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
@@ -609,7 +608,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 		 * anonymous pages
 		 */
 		ClearPageSwapBacked(page);
-		add_page_to_lru_list(page, lruvec, LRU_INACTIVE_FILE);
+		add_page_to_lru_list(page, lruvec);
 
 		__count_vm_events(PGLAZYFREE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE,
@@ -955,8 +954,7 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 		 * Put page_tail on the list at the correct position
 		 * so they all end up in order.
 		 */
-		add_page_to_lru_list_tail(page_tail, lruvec,
-					  page_lru(page_tail));
+		add_page_to_lru_list_tail(page_tail, lruvec);
 	}
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -1011,7 +1009,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 			__count_vm_events(UNEVICTABLE_PGCULLED, nr_pages);
 	}
 
-	add_page_to_lru_list(page, lruvec, lru);
+	add_page_to_lru_list(page, lruvec);
 	trace_mm_lru_insertion(page, lru);
 }
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f9a186a96410..895be9fb96ec 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1859,7 +1859,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
 		SetPageLRU(page);
-		add_page_to_lru_list(page, lruvec, lru);
+		add_page_to_lru_list(page, lruvec);
 
 		if (put_page_testzero(page)) {
 			del_page_from_lru_list(page, lruvec, page_off_lru(page));
@@ -4276,12 +4276,10 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 			continue;
 
 		if (page_evictable(page)) {
-			enum lru_list lru = page_lru_base_type(page);
-
 			VM_BUG_ON_PAGE(PageActive(page), page);
 			ClearPageUnevictable(page);
 			del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
-			add_page_to_lru_list(page, lruvec, lru);
+			add_page_to_lru_list(page, lruvec);
 			pgrescued++;
 		}
 	}
-- 
2.28.0.681.g6f77f65b4e-goog




* [PATCH 06/13] mm: don't pass enum lru_list to trace_mm_lru_insertion()
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
                   ` (4 preceding siblings ...)
  2020-09-18  3:00 ` [PATCH 05/13] mm: don't pass enum lru_list to lru list addition functions Yu Zhao
@ 2020-09-18  3:00 ` Yu Zhao
  2020-09-18  3:00 ` [PATCH 07/13] mm: don't pass enum lru_list to del_page_from_lru_list() Yu Zhao
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

The parameter is redundant in the sense that it can be correctly
extracted from the struct page parameter by page_lru().

This change should have no side effects.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/trace/events/pagemap.h | 11 ++++-------
 mm/swap.c                      |  5 +----
 2 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/include/trace/events/pagemap.h b/include/trace/events/pagemap.h
index 8fd1babae761..e1735fe7c76a 100644
--- a/include/trace/events/pagemap.h
+++ b/include/trace/events/pagemap.h
@@ -27,24 +27,21 @@
 
 TRACE_EVENT(mm_lru_insertion,
 
-	TP_PROTO(
-		struct page *page,
-		int lru
-	),
+	TP_PROTO(struct page *page),
 
-	TP_ARGS(page, lru),
+	TP_ARGS(page),
 
 	TP_STRUCT__entry(
 		__field(struct page *,	page	)
 		__field(unsigned long,	pfn	)
-		__field(int,		lru	)
+		__field(enum lru_list,	lru	)
 		__field(unsigned long,	flags	)
 	),
 
 	TP_fast_assign(
 		__entry->page	= page;
 		__entry->pfn	= page_to_pfn(page);
-		__entry->lru	= lru;
+		__entry->lru	= page_lru(page);
 		__entry->flags	= trace_pagemap_flags(page);
 	),
 
diff --git a/mm/swap.c b/mm/swap.c
index 8d0e31d43852..3c89a7276359 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -962,7 +962,6 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 				 void *arg)
 {
-	enum lru_list lru;
 	int was_unevictable = TestClearPageUnevictable(page);
 	int nr_pages = thp_nr_pages(page);
 
@@ -998,11 +997,9 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 	smp_mb__after_atomic();
 
 	if (page_evictable(page)) {
-		lru = page_lru(page);
 		if (was_unevictable)
 			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	} else {
-		lru = LRU_UNEVICTABLE;
 		ClearPageActive(page);
 		SetPageUnevictable(page);
 		if (!was_unevictable)
@@ -1010,7 +1007,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 	}
 
 	add_page_to_lru_list(page, lruvec);
-	trace_mm_lru_insertion(page, lru);
+	trace_mm_lru_insertion(page);
 }
 
 /*
-- 
2.28.0.681.g6f77f65b4e-goog




* [PATCH 07/13] mm: don't pass enum lru_list to del_page_from_lru_list()
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
                   ` (5 preceding siblings ...)
  2020-09-18  3:00 ` [PATCH 06/13] mm: don't pass enum lru_list to trace_mm_lru_insertion() Yu Zhao
@ 2020-09-18  3:00 ` Yu Zhao
  2020-09-18  3:00 ` [PATCH 08/13] mm: rename page_off_lru() to __clear_page_lru_flags() Yu Zhao
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

The parameter is redundant in the sense that it can be extracted
from the struct page parameter by page_lru().

To do this, we need to make sure PageActive() or PageUnevictable()
is correctly set or cleared before calling the function. In
check_move_unevictable_pages(), we have:
	ClearPageUnevictable()
	del_page_from_lru_list(lru_list = LRU_UNEVICTABLE)

And we need to reorder them to make page_lru() return
LRU_UNEVICTABLE:
	del_page_from_lru_list()
		page_lru()
	ClearPageUnevictable()

We also need to deal with the deletions on releasing paths that
clear PageLRU() and PageActive()/PageUnevictable():
	del_page_from_lru_list(lru_list = page_off_lru())

It's done by another reordering like this:
	del_page_from_lru_list()
		page_lru()
	page_off_lru()

In both cases, the reordering should have no side effects.
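
Concretely, check_move_unevictable_pages() becomes (a sketch based on
the hunk below):

	if (page_evictable(page)) {
		VM_BUG_ON_PAGE(PageActive(page), page);
		/* delete first: page_lru() still sees PG_unevictable set */
		del_page_from_lru_list(page, lruvec);
		ClearPageUnevictable(page);
		/* add after clearing: page_lru() now returns an evictable lru */
		add_page_to_lru_list(page, lruvec);
		pgrescued++;
	}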

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h |  5 +++--
 mm/compaction.c           |  2 +-
 mm/mlock.c                |  2 +-
 mm/swap.c                 | 26 ++++++++++----------------
 mm/vmscan.c               |  8 ++++----
 5 files changed, 19 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 199ff51bf2a0..03796021f0fe 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -125,9 +125,10 @@ static __always_inline void add_page_to_lru_list_tail(struct page *page,
 }
 
 static __always_inline void del_page_from_lru_list(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
+				struct lruvec *lruvec)
 {
 	list_del(&page->lru);
-	update_lru_size(lruvec, lru, page_zonenum(page), -thp_nr_pages(page));
+	update_lru_size(lruvec, page_lru(page), page_zonenum(page),
+			-thp_nr_pages(page));
 }
 #endif
diff --git a/mm/compaction.c b/mm/compaction.c
index 176dcded298e..ec4af21d2867 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1006,7 +1006,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			low_pfn += compound_nr(page) - 1;
 
 		/* Successfully isolated */
-		del_page_from_lru_list(page, lruvec, page_lru(page));
+		del_page_from_lru_list(page, lruvec);
 		mod_node_page_state(page_pgdat(page),
 				NR_ISOLATED_ANON + page_is_file_lru(page),
 				thp_nr_pages(page));
diff --git a/mm/mlock.c b/mm/mlock.c
index 93ca2bf30b4f..647487912d0a 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -114,7 +114,7 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
 		if (getpage)
 			get_page(page);
 		ClearPageLRU(page);
-		del_page_from_lru_list(page, lruvec, page_lru(page));
+		del_page_from_lru_list(page, lruvec);
 		return true;
 	}
 
diff --git a/mm/swap.c b/mm/swap.c
index 3c89a7276359..8bbeabc582c1 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -86,7 +86,8 @@ static void __page_cache_release(struct page *page)
 		spin_lock_irqsave(&pgdat->lru_lock, flags);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
-		del_page_from_lru_list(page, lruvec, page_off_lru(page));
+		del_page_from_lru_list(page, lruvec);
+		page_off_lru(page);
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 	}
 }
@@ -236,7 +237,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec,
 	int *pgmoved = arg;
 
 	if (PageLRU(page) && !PageUnevictable(page)) {
-		del_page_from_lru_list(page, lruvec, page_lru(page));
+		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		add_page_to_lru_list_tail(page, lruvec);
 		(*pgmoved) += thp_nr_pages(page);
@@ -317,10 +318,9 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 			    void *arg)
 {
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
-		int lru = page_lru_base_type(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec, lru);
+		del_page_from_lru_list(page, lruvec);
 		SetPageActive(page);
 		add_page_to_lru_list(page, lruvec);
 		trace_mm_lru_activate(page);
@@ -527,8 +527,7 @@ void lru_cache_add_inactive_or_unevictable(struct page *page,
 static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 			      void *arg)
 {
-	int lru;
-	bool active;
+	bool active = PageActive(page);
 	int nr_pages = thp_nr_pages(page);
 
 	if (!PageLRU(page))
@@ -541,10 +540,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 	if (page_mapped(page))
 		return;
 
-	active = PageActive(page);
-	lru = page_lru_base_type(page);
-
-	del_page_from_lru_list(page, lruvec, lru + active);
+	del_page_from_lru_list(page, lruvec);
 	ClearPageActive(page);
 	ClearPageReferenced(page);
 
@@ -576,10 +572,9 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 			    void *arg)
 {
 	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
-		int lru = page_lru_base_type(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
+		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		add_page_to_lru_list(page, lruvec);
@@ -595,11 +590,9 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
-		bool active = PageActive(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec,
-				       LRU_INACTIVE_ANON + active);
+		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		/*
@@ -893,7 +886,8 @@ void release_pages(struct page **pages, int nr)
 
 			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
 			VM_BUG_ON_PAGE(!PageLRU(page), page);
-			del_page_from_lru_list(page, lruvec, page_off_lru(page));
+			del_page_from_lru_list(page, lruvec);
+			page_off_lru(page);
 		}
 
 		list_add(&page->lru, &pages_to_free);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 895be9fb96ec..47a4e8ba150f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1770,10 +1770,9 @@ int isolate_lru_page(struct page *page)
 		spin_lock_irq(&pgdat->lru_lock);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		if (PageLRU(page)) {
-			int lru = page_lru(page);
 			get_page(page);
 			ClearPageLRU(page);
-			del_page_from_lru_list(page, lruvec, lru);
+			del_page_from_lru_list(page, lruvec);
 			ret = 0;
 		}
 		spin_unlock_irq(&pgdat->lru_lock);
@@ -1862,7 +1861,8 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec);
 
 		if (put_page_testzero(page)) {
-			del_page_from_lru_list(page, lruvec, page_off_lru(page));
+			del_page_from_lru_list(page, lruvec);
+			page_off_lru(page);
 
 			if (unlikely(PageCompound(page))) {
 				spin_unlock_irq(&pgdat->lru_lock);
@@ -4277,8 +4277,8 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 
 		if (page_evictable(page)) {
 			VM_BUG_ON_PAGE(PageActive(page), page);
+			del_page_from_lru_list(page, lruvec);
 			ClearPageUnevictable(page);
-			del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
 			add_page_to_lru_list(page, lruvec);
 			pgrescued++;
 		}
-- 
2.28.0.681.g6f77f65b4e-goog




* [PATCH 08/13] mm: rename page_off_lru() to __clear_page_lru_flags()
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
                   ` (6 preceding siblings ...)
  2020-09-18  3:00 ` [PATCH 07/13] mm: don't pass enum lru_list to del_page_from_lru_list() Yu Zhao
@ 2020-09-18  3:00 ` Yu Zhao
  2020-09-18  3:00 ` [PATCH 09/13] mm: inline page_lru_base_type() Yu Zhao
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

Rename the function according to what it really does. And make it
return void since the return value is not needed anymore.

If PageActive() and PageUnevictable() are both true, refuse to clear
either and leave them to bad_page().
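
The releasing paths then pair the two helpers like this (a sketch
taken from the hunks below):

	del_page_from_lru_list(page, lruvec);
	/* clears PG_lru, and PG_active/PG_unevictable unless both are (bogusly) set */
	__clear_page_lru_flags(page);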

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h | 29 ++++++++++-------------------
 mm/swap.c                 |  4 ++--
 mm/vmscan.c               |  2 +-
 3 files changed, 13 insertions(+), 22 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 03796021f0fe..ef3fd79222e5 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -61,28 +61,19 @@ static inline enum lru_list page_lru_base_type(struct page *page)
 }
 
 /**
- * page_off_lru - which LRU list was page on? clearing its lru flags.
- * @page: the page to test
- *
- * Returns the LRU list a page was on, as an index into the array of LRU
- * lists; and clears its Unevictable or Active flags, ready for freeing.
+ * __clear_page_lru_flags - clear page lru flags before releasing a page
+ * @page: the page that was on lru and now has a zero reference
  */
-static __always_inline enum lru_list page_off_lru(struct page *page)
+static __always_inline void __clear_page_lru_flags(struct page *page)
 {
-	enum lru_list lru;
-
 	__ClearPageLRU(page);
-	if (PageUnevictable(page)) {
-		__ClearPageUnevictable(page);
-		lru = LRU_UNEVICTABLE;
-	} else {
-		lru = page_lru_base_type(page);
-		if (PageActive(page)) {
-			__ClearPageActive(page);
-			lru += LRU_ACTIVE;
-		}
-	}
-	return lru;
+
+	/* this shouldn't happen, so leave the flags to bad_page() */
+	if (PageActive(page) && PageUnevictable(page))
+		return;
+
+	__ClearPageActive(page);
+	__ClearPageUnevictable(page);
 }
 
 /**
diff --git a/mm/swap.c b/mm/swap.c
index 8bbeabc582c1..b252f3593c57 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -87,7 +87,7 @@ static void __page_cache_release(struct page *page)
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
 		del_page_from_lru_list(page, lruvec);
-		page_off_lru(page);
+		__clear_page_lru_flags(page);
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 	}
 }
@@ -887,7 +887,7 @@ void release_pages(struct page **pages, int nr)
 			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
 			VM_BUG_ON_PAGE(!PageLRU(page), page);
 			del_page_from_lru_list(page, lruvec);
-			page_off_lru(page);
+			__clear_page_lru_flags(page);
 		}
 
 		list_add(&page->lru, &pages_to_free);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 47a4e8ba150f..d93033407200 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1862,7 +1862,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 
 		if (put_page_testzero(page)) {
 			del_page_from_lru_list(page, lruvec);
-			page_off_lru(page);
+			__clear_page_lru_flags(page);
 
 			if (unlikely(PageCompound(page))) {
 				spin_unlock_irq(&pgdat->lru_lock);
-- 
2.28.0.681.g6f77f65b4e-goog




* [PATCH 09/13] mm: inline page_lru_base_type()
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
                   ` (7 preceding siblings ...)
  2020-09-18  3:00 ` [PATCH 08/13] mm: rename page_off_lru() to __clear_page_lru_flags() Yu Zhao
@ 2020-09-18  3:00 ` Yu Zhao
  2020-09-18  3:00 ` [PATCH 10/13] mm: VM_BUG_ON lru page flags Yu Zhao
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

We've removed all other references to this function.

This change should have no side effects.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h | 27 ++++++---------------------
 1 file changed, 6 insertions(+), 21 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index ef3fd79222e5..07d9a0286635 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -45,21 +45,6 @@ static __always_inline void update_lru_size(struct lruvec *lruvec,
 #endif
 }
 
-/**
- * page_lru_base_type - which LRU list type should a page be on?
- * @page: the page to test
- *
- * Used for LRU list index arithmetic.
- *
- * Returns the base LRU type - file or anon - @page should be on.
- */
-static inline enum lru_list page_lru_base_type(struct page *page)
-{
-	if (page_is_file_lru(page))
-		return LRU_INACTIVE_FILE;
-	return LRU_INACTIVE_ANON;
-}
-
 /**
  * __clear_page_lru_flags - clear page lru flags before releasing a page
  * @page: the page that was on lru and now has a zero reference
@@ -88,12 +73,12 @@ static __always_inline enum lru_list page_lru(struct page *page)
 	enum lru_list lru;
 
 	if (PageUnevictable(page))
-		lru = LRU_UNEVICTABLE;
-	else {
-		lru = page_lru_base_type(page);
-		if (PageActive(page))
-			lru += LRU_ACTIVE;
-	}
+		return LRU_UNEVICTABLE;
+
+	lru = page_is_file_lru(page) ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON;
+	if (PageActive(page))
+		lru += LRU_ACTIVE;
+
 	return lru;
 }
 
-- 
2.28.0.681.g6f77f65b4e-goog




* [PATCH 10/13] mm: VM_BUG_ON lru page flags
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
                   ` (8 preceding siblings ...)
  2020-09-18  3:00 ` [PATCH 09/13] mm: inline page_lru_base_type() Yu Zhao
@ 2020-09-18  3:00 ` Yu Zhao
  2020-09-18  3:00 ` [PATCH 11/13] mm: inline __update_lru_size() Yu Zhao
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

Move scattered VM_BUG_ONs to two essential places that cover all
lru list additions and deletions.
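
The two places are __clear_page_lru_flags() and page_lru(), which
after this patch read roughly as below (a sketch combining the hunks
here with patches 8 and 9):

	static __always_inline void __clear_page_lru_flags(struct page *page)
	{
		VM_BUG_ON_PAGE(!PageLRU(page), page);

		__ClearPageLRU(page);

		/* this shouldn't happen, so leave the flags to bad_page() */
		if (PageActive(page) && PageUnevictable(page))
			return;

		__ClearPageActive(page);
		__ClearPageUnevictable(page);
	}

	static __always_inline enum lru_list page_lru(struct page *page)
	{
		enum lru_list lru;

		VM_BUG_ON_PAGE(PageActive(page) && PageUnevictable(page), page);

		if (PageUnevictable(page))
			return LRU_UNEVICTABLE;

		lru = page_is_file_lru(page) ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON;
		if (PageActive(page))
			lru += LRU_ACTIVE;

		return lru;
	}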

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h | 4 ++++
 mm/swap.c                 | 2 --
 mm/vmscan.c               | 1 -
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 07d9a0286635..7183c7a03f09 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -51,6 +51,8 @@ static __always_inline void update_lru_size(struct lruvec *lruvec,
  */
 static __always_inline void __clear_page_lru_flags(struct page *page)
 {
+	VM_BUG_ON_PAGE(!PageLRU(page), page);
+
 	__ClearPageLRU(page);
 
 	/* this shouldn't happen, so leave the flags to bad_page() */
@@ -72,6 +74,8 @@ static __always_inline enum lru_list page_lru(struct page *page)
 {
 	enum lru_list lru;
 
+	VM_BUG_ON_PAGE(PageActive(page) && PageUnevictable(page), page);
+
 	if (PageUnevictable(page))
 		return LRU_UNEVICTABLE;
 
diff --git a/mm/swap.c b/mm/swap.c
index b252f3593c57..4daa46907dd5 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -85,7 +85,6 @@ static void __page_cache_release(struct page *page)
 
 		spin_lock_irqsave(&pgdat->lru_lock, flags);
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		VM_BUG_ON_PAGE(!PageLRU(page), page);
 		del_page_from_lru_list(page, lruvec);
 		__clear_page_lru_flags(page);
 		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
@@ -885,7 +884,6 @@ void release_pages(struct page **pages, int nr)
 			}
 
 			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
-			VM_BUG_ON_PAGE(!PageLRU(page), page);
 			del_page_from_lru_list(page, lruvec);
 			__clear_page_lru_flags(page);
 		}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d93033407200..4688e495c242 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4276,7 +4276,6 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 			continue;
 
 		if (page_evictable(page)) {
-			VM_BUG_ON_PAGE(PageActive(page), page);
 			del_page_from_lru_list(page, lruvec);
 			ClearPageUnevictable(page);
 			add_page_to_lru_list(page, lruvec);
-- 
2.28.0.681.g6f77f65b4e-goog




* [PATCH 11/13] mm: inline __update_lru_size()
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
                   ` (9 preceding siblings ...)
  2020-09-18  3:00 ` [PATCH 10/13] mm: VM_BUG_ON lru page flags Yu Zhao
@ 2020-09-18  3:00 ` Yu Zhao
  2020-09-18  3:00 ` [PATCH 12/13] mm: make lruvec_lru_size() static Yu Zhao
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

All other references to the function were removed after
commit a892cb6b977f ("mm/vmscan.c: use update_lru_size() in update_lru_sizes()").

This change should have no side effects.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 7183c7a03f09..355ea1ee32bd 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -24,7 +24,7 @@ static inline int page_is_file_lru(struct page *page)
 	return !PageSwapBacked(page);
 }
 
-static __always_inline void __update_lru_size(struct lruvec *lruvec,
+static __always_inline void update_lru_size(struct lruvec *lruvec,
 				enum lru_list lru, enum zone_type zid,
 				int nr_pages)
 {
@@ -33,13 +33,6 @@ static __always_inline void __update_lru_size(struct lruvec *lruvec,
 	__mod_lruvec_state(lruvec, NR_LRU_BASE + lru, nr_pages);
 	__mod_zone_page_state(&pgdat->node_zones[zid],
 				NR_ZONE_LRU_BASE + lru, nr_pages);
-}
-
-static __always_inline void update_lru_size(struct lruvec *lruvec,
-				enum lru_list lru, enum zone_type zid,
-				int nr_pages)
-{
-	__update_lru_size(lruvec, lru, zid, nr_pages);
 #ifdef CONFIG_MEMCG
 	mem_cgroup_update_lru_size(lruvec, lru, zid, nr_pages);
 #endif
-- 
2.28.0.681.g6f77f65b4e-goog




* [PATCH 12/13] mm: make lruvec_lru_size() static
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
                   ` (10 preceding siblings ...)
  2020-09-18  3:00 ` [PATCH 11/13] mm: inline __update_lru_size() Yu Zhao
@ 2020-09-18  3:00 ` Yu Zhao
  2020-09-18  3:00 ` [PATCH 13/13] mm: enlarge the int parameter of update_lru_size() Yu Zhao
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

All other references to the function were removed after
commit b910718a948a ("mm: vmscan: detect file thrashing at the reclaim root").

This change should have no side effects.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mmzone.h | 2 --
 mm/vmscan.c            | 3 ++-
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8379432f4f2f..c2b1f1d363cc 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -842,8 +842,6 @@ static inline struct pglist_data *lruvec_pgdat(struct lruvec *lruvec)
 #endif
 }
 
-extern unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx);
-
 #ifdef CONFIG_HAVE_MEMORYLESS_NODES
 int local_memory_node(int node_id);
 #else
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4688e495c242..367843296c21 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -312,7 +312,8 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
  * @lru: lru to use
  * @zone_idx: zones to consider (use MAX_NR_ZONES for the whole LRU list)
  */
-unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
+static unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru,
+				     int zone_idx)
 {
 	unsigned long size = 0;
 	int zid;
-- 
2.28.0.681.g6f77f65b4e-goog




* [PATCH 13/13] mm: enlarge the int parameter of update_lru_size()
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
                   ` (11 preceding siblings ...)
  2020-09-18  3:00 ` [PATCH 12/13] mm: make lruvec_lru_size() static Yu Zhao
@ 2020-09-18  3:00 ` Yu Zhao
  2020-09-18  7:45 ` [PATCH 00/13] mm: clean up some lru related pieces Michal Hocko
  2020-09-18 20:46 ` Hugh Dickins
  14 siblings, 0 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  3:00 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko
  Cc: Alex Shi, Steven Rostedt, Ingo Molnar, Johannes Weiner,
	Vladimir Davydov, Roman Gushchin, Shakeel Butt, Chris Down,
	Yafang Shao, Vlastimil Babka, Huang Ying, Pankaj Gupta,
	Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim, Jaewon Kim,
	cgroups, linux-mm, linux-kernel, Yu Zhao

In update_lru_sizes(), we call update_lru_size() with a long
argument, whereas the callee only takes an int parameter. Though this
isn't causing any overflow that I'm aware of, it's not a good idea to
go through the truncation since the underlying counters already use
long.
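
For example, update_lru_sizes() does roughly the following (a sketch,
assuming its current form where nr_zone_taken[] is unsigned long):

	/*
	 * nr_zone_taken[zid] is unsigned long; before this patch the
	 * negated value is silently truncated to int at the
	 * update_lru_size() boundary.
	 */
	update_lru_size(lruvec, lru, zid, -nr_zone_taken[zid]);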

This patch enlarges all relevant parameters on the path to the final
underlying counters:
	update_lru_size(int -> long)
		if memcg:
			__mod_lruvec_state(int -> long)
				if smp:
					__mod_node_page_state(long)
				else:
					__mod_node_page_state(int -> long)
			__mod_memcg_lruvec_state(int -> long)
				__mod_memcg_state(int -> long)
		else:
			__mod_lruvec_state(int -> long)
				if smp:
					__mod_node_page_state(long)
				else:
					__mod_node_page_state(int -> long)

		__mod_zone_page_state(long)

		if memcg:
			mem_cgroup_update_lru_size(int -> long)

Note that __mod_node_page_state() for the smp case and
__mod_zone_page_state() already use long. So this change also fixes
the inconsistency.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/memcontrol.h | 14 +++++++-------
 include/linux/mm_inline.h  |  2 +-
 include/linux/vmstat.h     |  2 +-
 mm/memcontrol.c            | 10 +++++-----
 4 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d0b036123c6a..fcd1829f8382 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -621,7 +621,7 @@ static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
 int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
 
 void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
-		int zid, int nr_pages);
+		int zid, long nr_pages);
 
 static inline
 unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
@@ -707,7 +707,7 @@ static inline unsigned long memcg_page_state_local(struct mem_cgroup *memcg,
 	return x;
 }
 
-void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val);
+void __mod_memcg_state(struct mem_cgroup *memcg, int idx, long val);
 
 /* idx can be of type enum memcg_stat_item or node_stat_item */
 static inline void mod_memcg_state(struct mem_cgroup *memcg,
@@ -790,9 +790,9 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 }
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
-			      int val);
+			      long val);
 void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
-			int val);
+			long val);
 void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val);
 
 void mod_memcg_obj_state(void *p, int idx, int val);
@@ -1166,7 +1166,7 @@ static inline unsigned long memcg_page_state_local(struct mem_cgroup *memcg,
 
 static inline void __mod_memcg_state(struct mem_cgroup *memcg,
 				     int idx,
-				     int nr)
+				     long nr)
 {
 }
 
@@ -1201,12 +1201,12 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 }
 
 static inline void __mod_memcg_lruvec_state(struct lruvec *lruvec,
-					    enum node_stat_item idx, int val)
+					    enum node_stat_item idx, long val)
 {
 }
 
 static inline void __mod_lruvec_state(struct lruvec *lruvec,
-				      enum node_stat_item idx, int val)
+				      enum node_stat_item idx, long val)
 {
 	__mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
 }
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 355ea1ee32bd..18e85071b44a 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -26,7 +26,7 @@ static inline int page_is_file_lru(struct page *page)
 
 static __always_inline void update_lru_size(struct lruvec *lruvec,
 				enum lru_list lru, enum zone_type zid,
-				int nr_pages)
+				long nr_pages)
 {
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 91220ace31da..2ae35e8c45f0 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -310,7 +310,7 @@ static inline void __mod_zone_page_state(struct zone *zone,
 }
 
 static inline void __mod_node_page_state(struct pglist_data *pgdat,
-			enum node_stat_item item, int delta)
+			enum node_stat_item item, long delta)
 {
 	node_page_state_add(delta, pgdat, item);
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index cfa6cbad21d5..11bc4bb36882 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -774,7 +774,7 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
  * @idx: the stat item - can be enum memcg_stat_item or enum node_stat_item
  * @val: delta to add to the counter, can be negative
  */
-void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
+void __mod_memcg_state(struct mem_cgroup *memcg, int idx, long val)
 {
 	long x, threshold = MEMCG_CHARGE_BATCH;
 
@@ -812,7 +812,7 @@ parent_nodeinfo(struct mem_cgroup_per_node *pn, int nid)
 }
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
-			      int val)
+			      long val)
 {
 	struct mem_cgroup_per_node *pn;
 	struct mem_cgroup *memcg;
@@ -853,7 +853,7 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
  * change of state at this level: per-node, per-cgroup, per-lruvec.
  */
 void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
-			int val)
+			long val)
 {
 	/* Update node */
 	__mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
@@ -1354,7 +1354,7 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
  * so as to allow it to check that lru_size 0 is consistent with list_empty).
  */
 void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
-				int zid, int nr_pages)
+				int zid, long nr_pages)
 {
 	struct mem_cgroup_per_node *mz;
 	unsigned long *lru_size;
@@ -1371,7 +1371,7 @@ void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
 
 	size = *lru_size;
 	if (WARN_ONCE(size < 0,
-		"%s(%p, %d, %d): lru_size %ld\n",
+		"%s(%p, %d, %ld): lru_size %ld\n",
 		__func__, lruvec, lru, nr_pages, size)) {
 		VM_BUG_ON(1);
 		*lru_size = 0;
-- 
2.28.0.681.g6f77f65b4e-goog




* Re: [PATCH 01/13] mm: use add_page_to_lru_list()
  2020-09-18  3:00 ` [PATCH 01/13] mm: use add_page_to_lru_list() Yu Zhao
@ 2020-09-18  7:32   ` Michal Hocko
  2020-09-18 10:05     ` Yu Zhao
  0 siblings, 1 reply; 30+ messages in thread
From: Michal Hocko @ 2020-09-18  7:32 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Andrew Morton, Alex Shi, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Vladimir Davydov, Roman Gushchin, Shakeel Butt,
	Chris Down, Yafang Shao, Vlastimil Babka, Huang Ying,
	Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim,
	Jaewon Kim, cgroups, linux-mm, linux-kernel

On Thu 17-09-20 21:00:39, Yu Zhao wrote:
> This patch replaces the only open-coded lru list addition with
> add_page_to_lru_list().
> 
> Before this patch, we have:
> 	update_lru_size()
> 	list_move()
> 
> After this patch, we have:
> 	list_del()
> 	add_page_to_lru_list()
> 		update_lru_size()
> 		list_add()
> 
> The only side effect is that page->lru is temporarily poisoned
> after a page is deleted from its old list, which shouldn't be a
> problem.

"because the lru lock is held" right?

Please always be explicit about your reasoning. "It shouldn't be a
problem" without further justification is just pointless for anybody
reading the code later on.
 
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  mm/vmscan.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9727dd8e2581..503fc5e1fe32 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1850,8 +1850,8 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>  	while (!list_empty(list)) {
>  		page = lru_to_page(list);
>  		VM_BUG_ON_PAGE(PageLRU(page), page);
> +		list_del(&page->lru);
>  		if (unlikely(!page_evictable(page))) {
> -			list_del(&page->lru);
>  			spin_unlock_irq(&pgdat->lru_lock);
>  			putback_lru_page(page);
>  			spin_lock_irq(&pgdat->lru_lock);
> @@ -1862,9 +1862,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>  		SetPageLRU(page);
>  		lru = page_lru(page);
>  
> -		nr_pages = thp_nr_pages(page);
> -		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
> -		list_move(&page->lru, &lruvec->lists[lru]);
> +		add_page_to_lru_list(page, lruvec, lru);
>  
>  		if (put_page_testzero(page)) {
>  			__ClearPageLRU(page);
> @@ -1878,6 +1876,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>  			} else
>  				list_add(&page->lru, &pages_to_free);
>  		} else {
> +			nr_pages = thp_nr_pages(page);
>  			nr_moved += nr_pages;
>  			if (PageActive(page))
>  				workingset_age_nonresident(lruvec, nr_pages);
> -- 
> 2.28.0.681.g6f77f65b4e-goog

-- 
Michal Hocko
SUSE Labs



* Re: [PATCH 02/13] mm: use page_off_lru()
  2020-09-18  3:00 ` [PATCH 02/13] mm: use page_off_lru() Yu Zhao
@ 2020-09-18  7:37   ` Michal Hocko
  2020-09-18 10:27     ` Yu Zhao
  0 siblings, 1 reply; 30+ messages in thread
From: Michal Hocko @ 2020-09-18  7:37 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Andrew Morton, Alex Shi, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Vladimir Davydov, Roman Gushchin, Shakeel Butt,
	Chris Down, Yafang Shao, Vlastimil Babka, Huang Ying,
	Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim,
	Jaewon Kim, cgroups, linux-mm, linux-kernel

On Thu 17-09-20 21:00:40, Yu Zhao wrote:
> This patch replaces the only open-coded __ClearPageActive() with
> page_off_lru(). There is no open-coded __ClearPageUnevictable()s.
> 
> Before this patch, we have:
> 	__ClearPageActive()
> 	add_page_to_lru_list()
> 
> After this patch, we have:
> 	page_off_lru()
> 		if PageUnevictable()
> 			__ClearPageUnevictable()
> 		else if PageActive()
> 			__ClearPageActive()
> 	add_page_to_lru_list()
> 
> Checking PageUnevictable() shouldn't be a problem because these two
> flags are mutually exclusive. Leaking either will trigger bad_page().

I am sorry but the changelog is really hard to grasp. What are you
trying to achieve, why and why it is safe. This should be a general
outline for any patch. I have already commented on the previous patch
and asked you for the explanation why removing __ClearPageActive from
this path is desirable and safe. I have specifically asked to clarify
the compound page situation as that is using its own destructor in the
freeing path and that might result in page_off_lru not being called.
 
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  mm/vmscan.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 503fc5e1fe32..f257d2f61574 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1845,7 +1845,6 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>  	int nr_pages, nr_moved = 0;
>  	LIST_HEAD(pages_to_free);
>  	struct page *page;
> -	enum lru_list lru;
>  
>  	while (!list_empty(list)) {
>  		page = lru_to_page(list);
> @@ -1860,14 +1859,11 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>  		lruvec = mem_cgroup_page_lruvec(page, pgdat);
>  
>  		SetPageLRU(page);
> -		lru = page_lru(page);
> -
>  		add_page_to_lru_list(page, lruvec, lru);
>  
>  		if (put_page_testzero(page)) {
>  			__ClearPageLRU(page);
> -			__ClearPageActive(page);
> -			del_page_from_lru_list(page, lruvec, lru);
> +			del_page_from_lru_list(page, lruvec, page_off_lru(page));
>  
>  			if (unlikely(PageCompound(page))) {
>  				spin_unlock_irq(&pgdat->lru_lock);
> -- 
> 2.28.0.681.g6f77f65b4e-goog

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 03/13] mm: move __ClearPageLRU() into page_off_lru()
  2020-09-18  3:00 ` [PATCH 03/13] mm: move __ClearPageLRU() into page_off_lru() Yu Zhao
@ 2020-09-18  7:38   ` Michal Hocko
  0 siblings, 0 replies; 30+ messages in thread
From: Michal Hocko @ 2020-09-18  7:38 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Andrew Morton, Alex Shi, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Vladimir Davydov, Roman Gushchin, Shakeel Butt,
	Chris Down, Yafang Shao, Vlastimil Babka, Huang Ying,
	Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim,
	Jaewon Kim, cgroups, linux-mm, linux-kernel

On Thu 17-09-20 21:00:41, Yu Zhao wrote:
> Now we have a total of three places that free lru pages when their
> references become zero (after we drop the reference from isolation).
> 
> Before this patch, they all do:
> 	__ClearPageLRU()
> 	page_off_lru()
> 	del_page_from_lru_list()
> 
> After this patch, they become:
> 	page_off_lru()
> 		__ClearPageLRU()
> 	del_page_from_lru_list()
> 
> This change should have no side effects.

Again, why is this desirable?

> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  include/linux/mm_inline.h | 1 +
>  mm/swap.c                 | 2 --
>  mm/vmscan.c               | 1 -
>  3 files changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> index 8fc71e9d7bb0..be9418425e41 100644
> --- a/include/linux/mm_inline.h
> +++ b/include/linux/mm_inline.h
> @@ -92,6 +92,7 @@ static __always_inline enum lru_list page_off_lru(struct page *page)
>  {
>  	enum lru_list lru;
>  
> +	__ClearPageLRU(page);
>  	if (PageUnevictable(page)) {
>  		__ClearPageUnevictable(page);
>  		lru = LRU_UNEVICTABLE;
> diff --git a/mm/swap.c b/mm/swap.c
> index 40bf20a75278..8362083f00c9 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -86,7 +86,6 @@ static void __page_cache_release(struct page *page)
>  		spin_lock_irqsave(&pgdat->lru_lock, flags);
>  		lruvec = mem_cgroup_page_lruvec(page, pgdat);
>  		VM_BUG_ON_PAGE(!PageLRU(page), page);
> -		__ClearPageLRU(page);
>  		del_page_from_lru_list(page, lruvec, page_off_lru(page));
>  		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
>  	}
> @@ -895,7 +894,6 @@ void release_pages(struct page **pages, int nr)
>  
>  			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
>  			VM_BUG_ON_PAGE(!PageLRU(page), page);
> -			__ClearPageLRU(page);
>  			del_page_from_lru_list(page, lruvec, page_off_lru(page));
>  		}
>  
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index f257d2f61574..f9a186a96410 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1862,7 +1862,6 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>  		add_page_to_lru_list(page, lruvec, lru);
>  
>  		if (put_page_testzero(page)) {
> -			__ClearPageLRU(page);
>  			del_page_from_lru_list(page, lruvec, page_off_lru(page));
>  
>  			if (unlikely(PageCompound(page))) {
> -- 
> 2.28.0.681.g6f77f65b4e-goog

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 00/13] mm: clean up some lru related pieces
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
                   ` (12 preceding siblings ...)
  2020-09-18  3:00 ` [PATCH 13/13] mm: enlarge the int parameter of update_lru_size() Yu Zhao
@ 2020-09-18  7:45 ` Michal Hocko
  2020-09-18  9:36   ` Yu Zhao
  2020-09-18 20:46 ` Hugh Dickins
  14 siblings, 1 reply; 30+ messages in thread
From: Michal Hocko @ 2020-09-18  7:45 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Andrew Morton, Alex Shi, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Vladimir Davydov, Roman Gushchin, Shakeel Butt,
	Chris Down, Yafang Shao, Vlastimil Babka, Huang Ying,
	Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim,
	Jaewon Kim, cgroups, linux-mm, linux-kernel

On Thu 17-09-20 21:00:38, Yu Zhao wrote:
> Hi Andrew,
> 
> I see you have taken this:
>   mm: use add_page_to_lru_list()/page_lru()/page_off_lru()
> Do you mind dropping it?
> 
> Michal asked to do a bit of additional work. So I thought I probably
> should create a series to do more cleanups I've been meaning to.
> 
> This series contains the change in the patch above and goes a few
> more steps farther. It's intended to improve readability and should
> not have any performance impacts. There are minor behavior changes in
> terms of debugging and error reporting, which I have all highlighted
> in the individual patches. All patches were properly tested on 5.8
> running Chrome OS, with various debug options turned on.
> 
> Michal,
> 
> Do you mind taking a look at the entire series?

I have stopped at patch 3 as all patches until then are really missing
any justification. What is the point of all this being done? The code
is far from trivial and just shifting around sounds like a risk. You are
removing ~50 LOC which is always nice but I am not sure the resulting
code is more maintainable or easier to read and understand. Just
consider the patch moving __ClearPageLRU into page_off_lru. What is the
additional value of having the flag moved and buried in a function
that has even more side effects? I found the way __ClearPageLRU sits
nicely close to the removal from the LRU easier to follow. This is likely
subjective and others might think differently, but as it is not clear what
your actual goal is here, it is hard to judge pros and cons.

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 00/13] mm: clean up some lru related pieces
  2020-09-18  7:45 ` [PATCH 00/13] mm: clean up some lru related pieces Michal Hocko
@ 2020-09-18  9:36   ` Yu Zhao
  0 siblings, 0 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18  9:36 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Alex Shi, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Vladimir Davydov, Roman Gushchin, Shakeel Butt,
	Chris Down, Yafang Shao, Vlastimil Babka, Huang Ying,
	Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim,
	Jaewon Kim, cgroups, linux-mm, linux-kernel

On Fri, Sep 18, 2020 at 09:45:49AM +0200, Michal Hocko wrote:
> On Thu 17-09-20 21:00:38, Yu Zhao wrote:
> > Hi Andrew,
> > 
> > I see you have taken this:
> >   mm: use add_page_to_lru_list()/page_lru()/page_off_lru()
> > Do you mind dropping it?
> > 
> > Michal asked to do a bit of additional work. So I thought I probably
> > should create a series to do more cleanups I've been meaning to.
> > 
> > This series contains the change in the patch above and goes a few
> > more steps farther. It's intended to improve readability and should
> > not have any performance impacts. There are minor behavior changes in
> > terms of debugging and error reporting, which I have all highlighted
> > in the individual patches. All patches were properly tested on 5.8
> > running Chrome OS, with various debug options turned on.
> > 
> > Michal,
> > 
> > Do you mind taking a look at the entire series?
> 
> I have stopped at patch 3 as all patches until then are really missing
> any justification. What is the point of all this being done? The code
> is far from trivial and just shifting around sounds like a risk. You are

I appreciate your caution, and if you let me know what exactly your
concerns are, we could probably work them out together.

> removing ~50 LOC which is always nice but I am not sure the resulting
> code is more maintainable or easier to read and understand. Just
> consider the patch moving __ClearPageLRU into page_off_lru. What is the
> additional value of having the flag moved and buried in a function
> that has even more side effects? I found the way __ClearPageLRU sits

Mind elaborating the side effects?

> nicely close to the removal from the LRU easier to follow. This is likely
> subjective and others might think differently, but as it is not clear what
> your actual goal is here, it is hard to judge pros and cons.

I like this specific example from patch 3. Here is what it does: we
have three places using the same boilerplate, i.e., page_off_lru() +
__ClearPageLRU(). The patch moves __ClearPageLRU() into page_off_lru(),
which already does __ClearPageActive() and __ClearPageUnevictable().
Later on, we rename page_off_lru() to __clear_page_lru_flags() (patch
8).

Its point seems quite clear to me. Why would *anybody* want to use
two helper functions *repeatedly* when the job can be done with just
one? Nobody is paid by the number of lines they add, right? :) And
for that matter, why would anybody want any boilerplate to be open
coded from the same group of helper functions arranged in various
ways? I don't think the answer is subjective, but I don't expect
everybody to agree with me.
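
To make that concrete, here is roughly what the helper ends up looking
like once patch 3 folds __ClearPageLRU() in (sketched from the current
page_off_lru() in include/linux/mm_inline.h, not copied verbatim from
the series):

	static __always_inline enum lru_list page_off_lru(struct page *page)
	{
		enum lru_list lru;

		/* patch 3: clear PG_lru here instead of at every call site */
		__ClearPageLRU(page);
		if (PageUnevictable(page)) {
			__ClearPageUnevictable(page);
			lru = LRU_UNEVICTABLE;
		} else {
			lru = page_lru_base_type(page);
			if (PageActive(page)) {
				__ClearPageActive(page);
				lru += LRU_ACTIVE;
			}
		}
		return lru;
	}

With that, each of the three call sites collapses to a single
	del_page_from_lru_list(page, lruvec, page_off_lru(page));
under the lru lock.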

Now back to your general question: what's the point of this series?
Readability -- less error prone and easier to maintain. This series
consolidates open-coded boilerplate like the following in many places.
Take lru_lazyfree_fn() as an example:

-		bool active = PageActive(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec,
-				       LRU_INACTIVE_ANON + active);
+		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
		<snipped>
 		ClearPageSwapBacked(page);
-		add_page_to_lru_list(page, lruvec, LRU_INACTIVE_FILE);
+		add_page_to_lru_list(page, lruvec);
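
In other words, the direction of the series is for the lru helpers to
work the list out themselves. A rough sketch of the end result (my
shorthand for where patch 5 is heading, and similarly patch 7 for the
deletion side; not a verbatim copy of any patch in the series):

	static __always_inline void add_page_to_lru_list(struct page *page,
							 struct lruvec *lruvec)
	{
		enum lru_list lru = page_lru(page);

		update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
		list_add(&page->lru, &lruvec->lists[lru]);
	}

so callers no longer compute, cache or pass enum lru_list at all, and
cannot pass a stale or wrong value either.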

I hope this helps, but if it doesn't, I'd be more than happy to have
more discussions on the details. And not that I don't appreciate your
review, but please be more specific than 'sounds like a risk' or 'have
even more side effects' so I can address your concerns effectively.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 01/13] mm: use add_page_to_lru_list()
  2020-09-18  7:32   ` Michal Hocko
@ 2020-09-18 10:05     ` Yu Zhao
  0 siblings, 0 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18 10:05 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Alex Shi, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Vladimir Davydov, Roman Gushchin, Shakeel Butt,
	Chris Down, Yafang Shao, Vlastimil Babka, Huang Ying,
	Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim,
	Jaewon Kim, cgroups, linux-mm, linux-kernel

On Fri, Sep 18, 2020 at 09:32:40AM +0200, Michal Hocko wrote:
> On Thu 17-09-20 21:00:39, Yu Zhao wrote:
> > This patch replaces the only open-coded lru list addition with
> > add_page_to_lru_list().
> > 
> > Before this patch, we have:
> > 	update_lru_size()
> > 	list_move()
> > 
> > After this patch, we have:
> > 	list_del()
> > 	add_page_to_lru_list()
> > 		update_lru_size()
> > 		list_add()
> > 
> > The only side effect is that page->lru is temporarily poisoned
> > after a page is deleted from its old list, which shouldn't be a
> > problem.
> 
> "because the lru lock is held" right?
> 
> Please always be explicit about your reasoning. "It shouldn't be a
> problem" without further justification is just pointless for anybody
> reading the code later on.

It's not my reasoning. We are simply assuming different contexts for
this discussion. In the context I'm assuming, we all know we are under
the lru lock, which is rule 101 of lru list operations. But I'd be
happy to drop that assumption and update the commit message.

Any concern with the code change?
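
To spell the window out, annotating the sequence from the patch above
(my own notes, assuming the usual list_del() poisoning semantics from
include/linux/list.h):

	spin_lock_irq(&pgdat->lru_lock);
	list_del(&page->lru);		/* lru.next/prev now hold LIST_POISON1/2 */
	...
	SetPageLRU(page);
	add_page_to_lru_list(page, lruvec, lru);	/* list_add() overwrites the poison */
	spin_unlock_irq(&pgdat->lru_lock);

Nothing else can observe the poisoned pointers: in the common case the
lru lock is held across the whole window, and in the unevictable case
the page is off the LRU and still owned by this thread until
putback_lru_page() re-adds it.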

>  
> > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > ---
> >  mm/vmscan.c | 7 +++----
> >  1 file changed, 3 insertions(+), 4 deletions(-)
> > 
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 9727dd8e2581..503fc5e1fe32 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1850,8 +1850,8 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> >  	while (!list_empty(list)) {
> >  		page = lru_to_page(list);
> >  		VM_BUG_ON_PAGE(PageLRU(page), page);
> > +		list_del(&page->lru);
> >  		if (unlikely(!page_evictable(page))) {
> > -			list_del(&page->lru);
> >  			spin_unlock_irq(&pgdat->lru_lock);
> >  			putback_lru_page(page);
> >  			spin_lock_irq(&pgdat->lru_lock);
> > @@ -1862,9 +1862,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> >  		SetPageLRU(page);
> >  		lru = page_lru(page);
> >  
> > -		nr_pages = thp_nr_pages(page);
> > -		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
> > -		list_move(&page->lru, &lruvec->lists[lru]);
> > +		add_page_to_lru_list(page, lruvec, lru);
> >  
> >  		if (put_page_testzero(page)) {
> >  			__ClearPageLRU(page);
> > @@ -1878,6 +1876,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> >  			} else
> >  				list_add(&page->lru, &pages_to_free);
> >  		} else {
> > +			nr_pages = thp_nr_pages(page);
> >  			nr_moved += nr_pages;
> >  			if (PageActive(page))
> >  				workingset_age_nonresident(lruvec, nr_pages);
> > -- 
> > 2.28.0.681.g6f77f65b4e-goog
> 
> -- 
> Michal Hocko
> SUSE Labs


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 02/13] mm: use page_off_lru()
  2020-09-18  7:37   ` Michal Hocko
@ 2020-09-18 10:27     ` Yu Zhao
  2020-09-18 11:09       ` Michal Hocko
  2020-09-18 11:24       ` Michal Hocko
  0 siblings, 2 replies; 30+ messages in thread
From: Yu Zhao @ 2020-09-18 10:27 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Alex Shi, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Vladimir Davydov, Roman Gushchin, Shakeel Butt,
	Chris Down, Yafang Shao, Vlastimil Babka, Huang Ying,
	Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim,
	Jaewon Kim, cgroups, linux-mm, linux-kernel

On Fri, Sep 18, 2020 at 09:37:00AM +0200, Michal Hocko wrote:
> On Thu 17-09-20 21:00:40, Yu Zhao wrote:
> > This patch replaces the only open-coded __ClearPageActive() with
> > page_off_lru(). There is no open-coded __ClearPageUnevictable()s.
> > 
> > Before this patch, we have:
> > 	__ClearPageActive()
> > 	add_page_to_lru_list()
> > 
> > After this patch, we have:
> > 	page_off_lru()
> > 		if PageUnevictable()
> > 			__ClearPageUnevictable()
> > 		else if PageActive()
> > 			__ClearPageActive()
> > 	add_page_to_lru_list()
> > 
> > Checking PageUnevictable() shouldn't be a problem because these two
> > flags are mutually exclusive. Leaking either will trigger bad_page().
> 
> I am sorry but the changelog is really hard to grasp. What are you
> trying to achieve, why and why it is safe. This should be a general
> outline for any patch. I have already commented on the previous patch
> and asked you for the explanation why removing __ClearPageActive from
> this path is desirable and safe. I have specifically asked to clarify
> the compound page situation as that is using its own destructor in the
> freeing path and that might result in page_off_lru not being called.

Haven't I explained we are NOT removing __ClearPageActive()? Is my
notion of the code structure above confusing you? Or 'open-coded'
could mean different things?

And I have asked this before: why does 'the compound page situation'
even matter here? Perhaps if you could give a concrete example related
to the code change and help me understand your concern?

> > Signed-off-by: Yu Zhao <yuzhao@google.com>
> > ---
> >  mm/vmscan.c | 6 +-----
> >  1 file changed, 1 insertion(+), 5 deletions(-)
> > 
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 503fc5e1fe32..f257d2f61574 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1845,7 +1845,6 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> >  	int nr_pages, nr_moved = 0;
> >  	LIST_HEAD(pages_to_free);
> >  	struct page *page;
> > -	enum lru_list lru;
> >  
> >  	while (!list_empty(list)) {
> >  		page = lru_to_page(list);
> > @@ -1860,14 +1859,11 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> >  		lruvec = mem_cgroup_page_lruvec(page, pgdat);
> >  
> >  		SetPageLRU(page);
> > -		lru = page_lru(page);
> > -
> >  		add_page_to_lru_list(page, lruvec, lru);
> >  
> >  		if (put_page_testzero(page)) {
> >  			__ClearPageLRU(page);
> > -			__ClearPageActive(page);
> > -			del_page_from_lru_list(page, lruvec, lru);
> > +			del_page_from_lru_list(page, lruvec, page_off_lru(page));
> >  
> >  			if (unlikely(PageCompound(page))) {
> >  				spin_unlock_irq(&pgdat->lru_lock);
> > -- 
> > 2.28.0.681.g6f77f65b4e-goog
> 
> -- 
> Michal Hocko
> SUSE Labs


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 02/13] mm: use page_off_lru()
  2020-09-18 10:27     ` Yu Zhao
@ 2020-09-18 11:09       ` Michal Hocko
  2020-09-18 18:53         ` Yu Zhao
  2020-09-18 11:24       ` Michal Hocko
  1 sibling, 1 reply; 30+ messages in thread
From: Michal Hocko @ 2020-09-18 11:09 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Andrew Morton, Alex Shi, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Vladimir Davydov, Roman Gushchin, Shakeel Butt,
	Chris Down, Yafang Shao, Vlastimil Babka, Huang Ying,
	Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim,
	Jaewon Kim, cgroups, linux-mm, linux-kernel

On Fri 18-09-20 04:27:13, Yu Zhao wrote:
> On Fri, Sep 18, 2020 at 09:37:00AM +0200, Michal Hocko wrote:
> > On Thu 17-09-20 21:00:40, Yu Zhao wrote:
> > > This patch replaces the only open-coded __ClearPageActive() with
> > > page_off_lru(). There is no open-coded __ClearPageUnevictable()s.
> > > 
> > > Before this patch, we have:
> > > 	__ClearPageActive()
> > > 	add_page_to_lru_list()
> > > 
> > > After this patch, we have:
> > > 	page_off_lru()
> > > 		if PageUnevictable()
> > > 			__ClearPageUnevictable()
> > > 		else if PageActive()
> > > 			__ClearPageActive()
> > > 	add_page_to_lru_list()
> > > 
> > > Checking PageUnevictable() shouldn't be a problem because these two
> > > flags are mutually exclusive. Leaking either will trigger bad_page().
> > 
> > I am sorry but the changelog is really hard to grasp. What are you
> > trying to achieve, why and why it is safe. This should be a general
> > outline for any patch. I have already commented on the previous patch
> > and asked you for the explanation why removing __ClearPageActive from
> > this path is desirable and safe. I have specifically asked to clarify
> > the compound page situation as that is using its own destructor in the
> > freeing path and that might result in page_off_lru not being called.
> 
> Haven't I explained we are NOT removing __ClearPageActive()? Is my
> notion of the code structure above confusing you? Or 'open-coded'
> could mean different things?

Please read through my reply carefully. I am not saying what you are
doing is wrong. I am expressing a lack of justification which is the
case throughout this patch series. You do not explain why we need it and
why reviewers should spend time on this. Because the review is not as
trivial as looking at the diff.
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 02/13] mm: use page_off_lru()
  2020-09-18 10:27     ` Yu Zhao
  2020-09-18 11:09       ` Michal Hocko
@ 2020-09-18 11:24       ` Michal Hocko
  1 sibling, 0 replies; 30+ messages in thread
From: Michal Hocko @ 2020-09-18 11:24 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Andrew Morton, Alex Shi, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Vladimir Davydov, Roman Gushchin, Shakeel Butt,
	Chris Down, Yafang Shao, Vlastimil Babka, Huang Ying,
	Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim,
	Jaewon Kim, cgroups, linux-mm, linux-kernel

On Fri 18-09-20 04:27:13, Yu Zhao wrote:
> On Fri, Sep 18, 2020 at 09:37:00AM +0200, Michal Hocko wrote:
[...]
> And I have asked this before: why does 'the compound page situation'
> even matter here? Perhaps if you could give a concrete example related
> to the code change and help me understand your concern?

Forgot to answer this part. The compound page concern is a misreading of
the patch on my end. I had missed that you are using page_off_lru in this
(move_pages_to_lru) path and that you rely on release_pages to do the
cleanup for you. I still find it rather awkward that page_off_lru has
such side effects, but I am not deeply familiar with the reasoning
behind it, so I will rather shut up now.

[...]
> > > @@ -1860,14 +1859,11 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> > >  		lruvec = mem_cgroup_page_lruvec(page, pgdat);
> > >  
> > >  		SetPageLRU(page);
> > > -		lru = page_lru(page);
> > > -
> > >  		add_page_to_lru_list(page, lruvec, lru);
> > >  
> > >  		if (put_page_testzero(page)) {
> > >  			__ClearPageLRU(page);
> > > -			__ClearPageActive(page);
> > > -			del_page_from_lru_list(page, lruvec, lru);
> > > +			del_page_from_lru_list(page, lruvec, page_off_lru(page));
> > >  
> > >  			if (unlikely(PageCompound(page))) {
> > >  				spin_unlock_irq(&pgdat->lru_lock);
> > > -- 
> > > 2.28.0.681.g6f77f65b4e-goog
> > 
> > -- 
> > Michal Hocko
> > SUSE Labs

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 02/13] mm: use page_off_lru()
  2020-09-18 11:09       ` Michal Hocko
@ 2020-09-18 18:53         ` Yu Zhao
  2020-09-21 11:16           ` Michal Hocko
  0 siblings, 1 reply; 30+ messages in thread
From: Yu Zhao @ 2020-09-18 18:53 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Alex Shi, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Vladimir Davydov, Roman Gushchin, Shakeel Butt,
	Chris Down, Yafang Shao, Vlastimil Babka, Huang Ying,
	Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim,
	Jaewon Kim, cgroups, linux-mm, linux-kernel

On Fri, Sep 18, 2020 at 01:09:14PM +0200, Michal Hocko wrote:
> On Fri 18-09-20 04:27:13, Yu Zhao wrote:
> > On Fri, Sep 18, 2020 at 09:37:00AM +0200, Michal Hocko wrote:
> > > On Thu 17-09-20 21:00:40, Yu Zhao wrote:
> > > > This patch replaces the only open-coded __ClearPageActive() with
> > > > page_off_lru(). There is no open-coded __ClearPageUnevictable()s.
> > > > 
> > > > Before this patch, we have:
> > > > 	__ClearPageActive()
> > > > 	add_page_to_lru_list()
> > > > 
> > > > After this patch, we have:
> > > > 	page_off_lru()
> > > > 		if PageUnevictable()
> > > > 			__ClearPageUnevictable()
> > > > 		else if PageActive()
> > > > 			__ClearPageActive()
> > > > 	add_page_to_lru_list()
> > > > 
> > > > Checking PageUnevictable() shouldn't be a problem because these two
> > > > flags are mutually exclusive. Leaking either will trigger bad_page().
> > > 
> > > I am sorry but the changelog is really hard to grasp. What are you
> > > trying to achieve, why and why it is safe. This should be a general
> > > outline for any patch. I have already commented on the previous patch
> > > and asked you for the explanation why removing __ClearPageActive from
> > > this path is desirable and safe. I have specifically asked to clarify
> > > the compound page situation as that is using its own destructor in the
> > > freeing path and that might result in page_off_lru not being called.
> > 
> > Haven't I explained we are NOT removing __ClearPageActive()? Is my
> > notion of the code structure above confusing you? Or 'open-coded'
> > could mean different things?
> 
> Please read through my reply carefully. I am not saying what you are
> doing is wrong. I am expressing a lack of justification which is the
> case throughout this patch series. You do not explain why we need it and
> why reviewers should spend time on this. Because the review is not as
> trivial as looking at the diff.

I appreciate your time. But if you are looking for some grand
justification, I'm afraid I won't be able to give one, because, as it's
titled, this is just a series of cleanup patches.

My questions above are meant to determine which parts are not clear.
Well, I still don't know. So let's try this. What's your impression upon
reading the first sentence of this patch?

> > > > This patch replaces the only open-coded __ClearPageActive() with
> > > > page_off_lru().

Here is how I would think when I read it (which is purely subjective
since I wrote it):

  'replaces the only (an outlier) open-coded (bad) __ClearPageActive()
   with page_off_lru() (the norm)'

It seems to me it has everything I need to know (or say). Of course I
could spell them all out for you if that's how you'd prefer. And if
it's not enough, then please show me some examples and I'll study
them carefully and try my best to follow them.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 00/13] mm: clean up some lru related pieces
  2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
                   ` (13 preceding siblings ...)
  2020-09-18  7:45 ` [PATCH 00/13] mm: clean up some lru related pieces Michal Hocko
@ 2020-09-18 20:46 ` Hugh Dickins
  2020-09-18 21:01   ` Yu Zhao
  14 siblings, 1 reply; 30+ messages in thread
From: Hugh Dickins @ 2020-09-18 20:46 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Andrew Morton, Michal Hocko, Alex Shi, Steven Rostedt,
	Ingo Molnar, Johannes Weiner, Vladimir Davydov, Roman Gushchin,
	Shakeel Butt, Chris Down, Yafang Shao, Vlastimil Babka,
	Huang Ying, Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov,
	Minchan Kim, Jaewon Kim, cgroups, linux-mm, linux-kernel

On Thu, 17 Sep 2020, Yu Zhao wrote:

> Hi Andrew,
> 
> I see you have taken this:
>   mm: use add_page_to_lru_list()/page_lru()/page_off_lru()
> Do you mind dropping it?
> 
> Michal asked to do a bit of additional work. So I thought I probably
> should create a series to do more cleanups I've been meaning to.
> 
> This series contains the change in the patch above and goes a few
> more steps farther. It's intended to improve readability and should
> not have any performance impacts. There are minor behavior changes in
> terms of debugging and error reporting, which I have all highlighted
> in the individual patches. All patches were properly tested on 5.8
> running Chrome OS, with various debug options turned on.
> 
> Michal,
> 
> Do you mind taking a look at the entire series?
> 
> Thank you.
> 
> Yu Zhao (13):
>   mm: use add_page_to_lru_list()
>   mm: use page_off_lru()
>   mm: move __ClearPageLRU() into page_off_lru()
>   mm: shuffle lru list addition and deletion functions
>   mm: don't pass enum lru_list to lru list addition functions
>   mm: don't pass enum lru_list to trace_mm_lru_insertion()
>   mm: don't pass enum lru_list to del_page_from_lru_list()
>   mm: rename page_off_lru() to __clear_page_lru_flags()
>   mm: inline page_lru_base_type()
>   mm: VM_BUG_ON lru page flags
>   mm: inline __update_lru_size()
>   mm: make lruvec_lru_size() static
>   mm: enlarge the int parameter of update_lru_size()
> 
>  include/linux/memcontrol.h     |  14 ++--
>  include/linux/mm_inline.h      | 115 ++++++++++++++-------------------
>  include/linux/mmzone.h         |   2 -
>  include/linux/vmstat.h         |   2 +-
>  include/trace/events/pagemap.h |  11 ++--
>  mm/compaction.c                |   2 +-
>  mm/memcontrol.c                |  10 +--
>  mm/mlock.c                     |   2 +-
>  mm/swap.c                      |  53 ++++++---------
>  mm/vmscan.c                    |  28 +++-----
>  10 files changed, 95 insertions(+), 144 deletions(-)
> 
> -- 
> 2.28.0.681.g6f77f65b4e-goog

Sorry, Yu, I may be out-of-line in sending this: but as you know,
Alex Shi has a long per-memcg lru_lock series playing in much the
same area (particularly conflicting in mm/swap.c and mm/vmscan.c):
a patchset that makes useful changes, that I'm very keen to help
into mmotm a.s.a.p (but not before I've completed diligence).

We've put a lot of effort into its testing, I'm currently reviewing
it patch by patch (my general silence indicating that I'm busy on that,
but slow as ever): so I'm a bit discouraged to have its stability
potentially undermined by conflicting cleanups at this stage.

If there's general agreement that your cleanups are safe and welcome
(Michal's initial reaction sheds some doubt on that), great: I hope
that Andrew can fast-track them into mmotm, then Alex rebase on top
of them, and I then re-test and re-review.

But if that quick agreement is not forthcoming, may I ask you please
to hold back, and resend based on top of Alex's next posting?

Thanks,
Hugh


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 00/13] mm: clean up some lru related pieces
  2020-09-18 20:46 ` Hugh Dickins
@ 2020-09-18 21:01   ` Yu Zhao
  2020-09-18 21:19     ` Hugh Dickins
  0 siblings, 1 reply; 30+ messages in thread
From: Yu Zhao @ 2020-09-18 21:01 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Michal Hocko, Alex Shi, Steven Rostedt,
	Ingo Molnar, Johannes Weiner, Vladimir Davydov, Roman Gushchin,
	Shakeel Butt, Chris Down, Yafang Shao, Vlastimil Babka,
	Huang Ying, Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov,
	Minchan Kim, Jaewon Kim, cgroups, linux-mm, linux-kernel

On Fri, Sep 18, 2020 at 01:46:59PM -0700, Hugh Dickins wrote:
> On Thu, 17 Sep 2020, Yu Zhao wrote:
> 
> > Hi Andrew,
> > 
> > I see you have taken this:
> >   mm: use add_page_to_lru_list()/page_lru()/page_off_lru()
> > Do you mind dropping it?
> > 
> > Michal asked to do a bit of additional work. So I thought I probably
> > should create a series to do more cleanups I've been meaning to.
> > 
> > This series contains the change in the patch above and goes a few
> > more steps farther. It's intended to improve readability and should
> > not have any performance impacts. There are minor behavior changes in
> > terms of debugging and error reporting, which I have all highlighted
> > in the individual patches. All patches were properly tested on 5.8
> > running Chrome OS, with various debug options turned on.
> > 
> > Michal,
> > 
> > Do you mind taking a look at the entire series?
> > 
> > Thank you.
> > 
> > Yu Zhao (13):
> >   mm: use add_page_to_lru_list()
> >   mm: use page_off_lru()
> >   mm: move __ClearPageLRU() into page_off_lru()
> >   mm: shuffle lru list addition and deletion functions
> >   mm: don't pass enum lru_list to lru list addition functions
> >   mm: don't pass enum lru_list to trace_mm_lru_insertion()
> >   mm: don't pass enum lru_list to del_page_from_lru_list()
> >   mm: rename page_off_lru() to __clear_page_lru_flags()
> >   mm: inline page_lru_base_type()
> >   mm: VM_BUG_ON lru page flags
> >   mm: inline __update_lru_size()
> >   mm: make lruvec_lru_size() static
> >   mm: enlarge the int parameter of update_lru_size()
> > 
> >  include/linux/memcontrol.h     |  14 ++--
> >  include/linux/mm_inline.h      | 115 ++++++++++++++-------------------
> >  include/linux/mmzone.h         |   2 -
> >  include/linux/vmstat.h         |   2 +-
> >  include/trace/events/pagemap.h |  11 ++--
> >  mm/compaction.c                |   2 +-
> >  mm/memcontrol.c                |  10 +--
> >  mm/mlock.c                     |   2 +-
> >  mm/swap.c                      |  53 ++++++---------
> >  mm/vmscan.c                    |  28 +++-----
> >  10 files changed, 95 insertions(+), 144 deletions(-)
> > 
> > -- 
> > 2.28.0.681.g6f77f65b4e-goog
> 
> Sorry, Yu, I may be out-of-line in sending this: but as you know,
> Alex Shi has a long per-memcg lru_lock series playing in much the
> same area (particularly conflicting in mm/swap.c and mm/vmscan.c):
> a patchset that makes useful changes, that I'm very keen to help
> into mmotm a.s.a.p (but not before I've completed diligence).
> 
> We've put a lot of effort into its testing, I'm currently reviewing
> it patch by patch (my general silence indicating that I'm busy on that,
> but slow as ever): so I'm a bit discouraged to have its stability
> potentially undermined by conflicting cleanups at this stage.
> 
> If there's general agreement that your cleanups are safe and welcome
> (Michal's initial reaction sheds some doubt on that), great: I hope
> that Andrew can fast-track them into mmotm, then Alex rebase on top
> of them, and I then re-test and re-review.
> 
> But if that quick agreement is not forthcoming, may I ask you please
> to hold back, and resend based on top of Alex's next posting?

The per-memcg lru lock series seems a high priority, and I have
absolutely no problem accommodating your request.

In return, may I ask you or Alex to review this series after you
have finished with per-memcg lru lock (to make sure that I resolve
all the conflicts correctly at least)?


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 00/13] mm: clean up some lru related pieces
  2020-09-18 21:01   ` Yu Zhao
@ 2020-09-18 21:19     ` Hugh Dickins
  2020-09-19  0:31       ` Alex Shi
  2020-10-13  2:30       ` Hugh Dickins
  0 siblings, 2 replies; 30+ messages in thread
From: Hugh Dickins @ 2020-09-18 21:19 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Hugh Dickins, Andrew Morton, Michal Hocko, Alex Shi,
	Steven Rostedt, Ingo Molnar, Johannes Weiner, Vladimir Davydov,
	Roman Gushchin, Shakeel Butt, Chris Down, Yafang Shao,
	Vlastimil Babka, Huang Ying, Pankaj Gupta, Matthew Wilcox,
	Konstantin Khlebnikov, Minchan Kim, Jaewon Kim, cgroups,
	linux-mm, linux-kernel

On Fri, 18 Sep 2020, Yu Zhao wrote:
> On Fri, Sep 18, 2020 at 01:46:59PM -0700, Hugh Dickins wrote:
> > On Thu, 17 Sep 2020, Yu Zhao wrote:
> > 
> > > Hi Andrew,
> > > 
> > > I see you have taken this:
> > >   mm: use add_page_to_lru_list()/page_lru()/page_off_lru()
> > > Do you mind dropping it?
> > > 
> > > Michal asked to do a bit of additional work. So I thought I probably
> > > should create a series to do more cleanups I've been meaning to.
> > > 
> > > This series contains the change in the patch above and goes a few
> > > more steps farther. It's intended to improve readability and should
> > > not have any performance impacts. There are minor behavior changes in
> > > terms of debugging and error reporting, which I have all highlighted
> > > in the individual patches. All patches were properly tested on 5.8
> > > running Chrome OS, with various debug options turned on.
> > > 
> > > Michal,
> > > 
> > > Do you mind taking a look at the entire series?
> > > 
> > > Thank you.
> > > 
> > > Yu Zhao (13):
> > >   mm: use add_page_to_lru_list()
> > >   mm: use page_off_lru()
> > >   mm: move __ClearPageLRU() into page_off_lru()
> > >   mm: shuffle lru list addition and deletion functions
> > >   mm: don't pass enum lru_list to lru list addition functions
> > >   mm: don't pass enum lru_list to trace_mm_lru_insertion()
> > >   mm: don't pass enum lru_list to del_page_from_lru_list()
> > >   mm: rename page_off_lru() to __clear_page_lru_flags()
> > >   mm: inline page_lru_base_type()
> > >   mm: VM_BUG_ON lru page flags
> > >   mm: inline __update_lru_size()
> > >   mm: make lruvec_lru_size() static
> > >   mm: enlarge the int parameter of update_lru_size()
> > > 
> > >  include/linux/memcontrol.h     |  14 ++--
> > >  include/linux/mm_inline.h      | 115 ++++++++++++++-------------------
> > >  include/linux/mmzone.h         |   2 -
> > >  include/linux/vmstat.h         |   2 +-
> > >  include/trace/events/pagemap.h |  11 ++--
> > >  mm/compaction.c                |   2 +-
> > >  mm/memcontrol.c                |  10 +--
> > >  mm/mlock.c                     |   2 +-
> > >  mm/swap.c                      |  53 ++++++---------
> > >  mm/vmscan.c                    |  28 +++-----
> > >  10 files changed, 95 insertions(+), 144 deletions(-)
> > > 
> > > -- 
> > > 2.28.0.681.g6f77f65b4e-goog
> > 
> > Sorry, Yu, I may be out-of-line in sending this: but as you know,
> > Alex Shi has a long per-memcg lru_lock series playing in much the
> > same area (particularly conflicting in mm/swap.c and mm/vmscan.c):
> > a patchset that makes useful changes, that I'm very keen to help
> > into mmotm a.s.a.p (but not before I've completed diligence).
> > 
> > We've put a lot of effort into its testing, I'm currently reviewing
> > it patch by patch (my general silence indicating that I'm busy on that,
> > but slow as ever): so I'm a bit discouraged to have its stability
> > potentially undermined by conflicting cleanups at this stage.
> > 
> > If there's general agreement that your cleanups are safe and welcome
> > (Michal's initial reaction sheds some doubt on that), great: I hope
> > that Andrew can fast-track them into mmotm, then Alex rebase on top
> > of them, and I then re-test and re-review.
> > 
> > But if that quick agreement is not forthcoming, may I ask you please
> > to hold back, and resend based on top of Alex's next posting?
> 
> The per-memcg lru lock series seems a high priority, and I have
> absolutely no problem accommodating your request.

Many thanks!

> 
> In return, may I ask you or Alex to review this series after you
> have finished with per-memcg lru lock (to make sure that I resolve
> all the conflicts correctly at least)?

Fair enough: I promise to do so.

And your rebasing will necessarily lead you to review some parts
of Alex's patchset, which will help us all too.

Andrew, Yu asked at the start:
> > > I see you have taken this:
> > >   mm: use add_page_to_lru_list()/page_lru()/page_off_lru()
> > > Do you mind dropping it?
Dropping that for now will help too.

Thanks,
Hugh


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 00/13] mm: clean up some lru related pieces
  2020-09-18 21:19     ` Hugh Dickins
@ 2020-09-19  0:31       ` Alex Shi
  2020-10-13  2:30       ` Hugh Dickins
  1 sibling, 0 replies; 30+ messages in thread
From: Alex Shi @ 2020-09-19  0:31 UTC (permalink / raw)
  To: Hugh Dickins, Yu Zhao
  Cc: Andrew Morton, Michal Hocko, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Vladimir Davydov, Roman Gushchin, Shakeel Butt,
	Chris Down, Yafang Shao, Vlastimil Babka, Huang Ying,
	Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim,
	Jaewon Kim, cgroups, linux-mm, linux-kernel



On 2020/9/19 5:19 AM, Hugh Dickins wrote:
>>>> 2.28.0.681.g6f77f65b4e-goog
>>> Sorry, Yu, I may be out-of-line in sending this: but as you know,
>>> Alex Shi has a long per-memcg lru_lock series playing in much the
>>> same area (particularly conflicting in mm/swap.c and mm/vmscan.c):
>>> a patchset that makes useful changes, that I'm very keen to help
>>> into mmotm a.s.a.p (but not before I've completed diligence).
>>>
>>> We've put a lot of effort into its testing, I'm currently reviewing
>>> it patch by patch (my general silence indicating that I'm busy on that,
>>> but slow as ever): so I'm a bit discouraged to have its stability
>>> potentially undermined by conflicting cleanups at this stage.
>>>
>>> If there's general agreement that your cleanups are safe and welcome
>>> (Michal's initial reaction sheds some doubt on that), great: I hope
>>> that Andrew can fast-track them into mmotm, then Alex rebase on top
>>> of them, and I then re-test and re-review.
>>>
>>> But if that quick agreement is not forthcoming, may I ask you please
>>> to hold back, and resend based on top of Alex's next posting?
>> The per-memcg lru lock series seems a high priority, and I have
>> absolutely no problem accommodating your request.
> Many thanks!
> 
>> In return, may I ask you or Alex to review this series after you
>> have finished with per-memcg lru lock (to make sure that I resolve
>> all the conflicts correctly at least)?
> Fair enough: I promise to do so.
> 
> And your rebasing will necessarily lead you to review some parts
> of Alex's patchset, which will help us all too.
> 
> Andrew, Yu asked at the start:
>>>> I see you have taken this:
>>>>   mm: use add_page_to_lru_list()/page_lru()/page_off_lru()
>>>> Do you mind dropping it?
> Dropping that for now will help too.

Hi Hugh & Yu,

Thanks for all your consideration! I will look into this series after the
lru_lock work is finished.

Thanks a lot!
Alex


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 02/13] mm: use page_off_lru()
  2020-09-18 18:53         ` Yu Zhao
@ 2020-09-21 11:16           ` Michal Hocko
  0 siblings, 0 replies; 30+ messages in thread
From: Michal Hocko @ 2020-09-21 11:16 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Andrew Morton, Alex Shi, Steven Rostedt, Ingo Molnar,
	Johannes Weiner, Vladimir Davydov, Roman Gushchin, Shakeel Butt,
	Chris Down, Yafang Shao, Vlastimil Babka, Huang Ying,
	Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov, Minchan Kim,
	Jaewon Kim, cgroups, linux-mm, linux-kernel

On Fri 18-09-20 12:53:58, Yu Zhao wrote:
> On Fri, Sep 18, 2020 at 01:09:14PM +0200, Michal Hocko wrote:
> > On Fri 18-09-20 04:27:13, Yu Zhao wrote:
> > > On Fri, Sep 18, 2020 at 09:37:00AM +0200, Michal Hocko wrote:
> > > > On Thu 17-09-20 21:00:40, Yu Zhao wrote:
> > > > > This patch replaces the only open-coded __ClearPageActive() with
> > > > > page_off_lru(). There is no open-coded __ClearPageUnevictable()s.
> > > > > 
> > > > > Before this patch, we have:
> > > > > 	__ClearPageActive()
> > > > > 	add_page_to_lru_list()
> > > > > 
> > > > > After this patch, we have:
> > > > > 	page_off_lru()
> > > > > 		if PageUnevictable()
> > > > > 			__ClearPageUnevictable()
> > > > > 		else if PageActive()
> > > > > 			__ClearPageActive()
> > > > > 	add_page_to_lru_list()
> > > > > 
> > > > > Checking PageUnevictable() shouldn't be a problem because these two
> > > > > flags are mutually exclusive. Leaking either will trigger bad_page().
> > > > 
> > > > I am sorry but the changelog is really hard to grasp. What are you
> > > > trying to achieve, why and why it is safe. This should be a general
> > > > outline for any patch. I have already commented on the previous patch
> > > > and asked you for the explanation why removing __ClearPageActive from
> > > > this path is desirable and safe. I have specifically asked to clarify
> > > > the compound page situation as that is using its own destructor in the
> > > > freeing path and that might result in page_off_lru not being called.
> > > 
> > > Haven't I explained we are NOT removing __ClearPageActive()? Is my
> > > notion of the code structure above confusing you? Or 'open-coded'
> > > could mean different things?
> > 
> > Please read through my reply carefully. I am not saying what you are
> > doing is wrong. I am expressing a lack of justification which is the
> > case throughout this patch series. You do not explain why we need it and
> > why reviewers should spend time on this. Because the review is not as
> > trivial as looking at the diff.
> 
> I appreciate your time. But if you are looking for some grand
> justification, I'm afraid I won't be able to give one, because, as it's
> titled, this is just a series of cleanup patches.

You likely had some reason to do that cleanup, right? What was it? An
inconsistency in handling some of the page flags when pages are moved
around LRU lists? Was the code too hard to reason about? Was it error prone?

Please do not get me wrong, I am not trying to discourage you from
cleanup work. There is a lot of code that would benefit from cleanups.
But it certainly helps to outline your motivation and the goal you would
like to achieve. Without that it boils down to guessing what you
might have thought, or simply moving things around without a very good
long-term reason.

Thanks!
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 00/13] mm: clean up some lru related pieces
  2020-09-18 21:19     ` Hugh Dickins
  2020-09-19  0:31       ` Alex Shi
@ 2020-10-13  2:30       ` Hugh Dickins
  1 sibling, 0 replies; 30+ messages in thread
From: Hugh Dickins @ 2020-10-13  2:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Yu Zhao, Hugh Dickins, Michal Hocko, Alex Shi, Steven Rostedt,
	Ingo Molnar, Johannes Weiner, Vladimir Davydov, Roman Gushchin,
	Shakeel Butt, Chris Down, Yafang Shao, Vlastimil Babka,
	Huang Ying, Pankaj Gupta, Matthew Wilcox, Konstantin Khlebnikov,
	Minchan Kim, Jaewon Kim, cgroups, linux-mm, linux-kernel

On Fri, 18 Sep 2020, Hugh Dickins wrote:
> On Fri, 18 Sep 2020, Yu Zhao wrote:
> > On Fri, Sep 18, 2020 at 01:46:59PM -0700, Hugh Dickins wrote:
> > > On Thu, 17 Sep 2020, Yu Zhao wrote:
> > > 
> > > > Hi Andrew,
> > > > 
> > > > I see you have taken this:
> > > >   mm: use add_page_to_lru_list()/page_lru()/page_off_lru()
> > > > Do you mind dropping it?
> > > > 
> > > > Michal asked to do a bit of additional work. So I thought I probably
> > > > should create a series to do more cleanups I've been meaning to.
> > > > 
> > > > This series contains the change in the patch above and goes a few
> > > > more steps farther. It's intended to improve readability and should
> > > > not have any performance impacts. There are minor behavior changes in
> > > > terms of debugging and error reporting, which I have all highlighted
> > > > in the individual patches. All patches were properly tested on 5.8
> > > > running Chrome OS, with various debug options turned on.
> > > > 
> > > > Michal,
> > > > 
> > > > Do you mind taking a look at the entire series?
> > > > 
> > > > Thank you.
> > > > 
> > > > Yu Zhao (13):
> > > >   mm: use add_page_to_lru_list()
> > > >   mm: use page_off_lru()
> > > >   mm: move __ClearPageLRU() into page_off_lru()
> > > >   mm: shuffle lru list addition and deletion functions
> > > >   mm: don't pass enum lru_list to lru list addition functions
> > > >   mm: don't pass enum lru_list to trace_mm_lru_insertion()
> > > >   mm: don't pass enum lru_list to del_page_from_lru_list()
> > > >   mm: rename page_off_lru() to __clear_page_lru_flags()
> > > >   mm: inline page_lru_base_type()
> > > >   mm: VM_BUG_ON lru page flags
> > > >   mm: inline __update_lru_size()
> > > >   mm: make lruvec_lru_size() static
> > > >   mm: enlarge the int parameter of update_lru_size()
> > > > 
> > > >  include/linux/memcontrol.h     |  14 ++--
> > > >  include/linux/mm_inline.h      | 115 ++++++++++++++-------------------
> > > >  include/linux/mmzone.h         |   2 -
> > > >  include/linux/vmstat.h         |   2 +-
> > > >  include/trace/events/pagemap.h |  11 ++--
> > > >  mm/compaction.c                |   2 +-
> > > >  mm/memcontrol.c                |  10 +--
> > > >  mm/mlock.c                     |   2 +-
> > > >  mm/swap.c                      |  53 ++++++---------
> > > >  mm/vmscan.c                    |  28 +++-----
> > > >  10 files changed, 95 insertions(+), 144 deletions(-)
> > > > 
> > > > -- 
> > > > 2.28.0.681.g6f77f65b4e-goog
> > > 
> > > Sorry, Yu, I may be out-of-line in sending this: but as you know,
> > > Alex Shi has a long per-memcg lru_lock series playing in much the
> > > same area (particularly conflicting in mm/swap.c and mm/vmscan.c):
> > > a patchset that makes useful changes, that I'm very keen to help
> > > into mmotm a.s.a.p (but not before I've completed diligence).
> > > 
> > > We've put a lot of effort into its testing, I'm currently reviewing
> > > it patch by patch (my general silence indicating that I'm busy on that,
> > > but slow as ever): so I'm a bit discouraged to have its stability
> > > potentially undermined by conflicting cleanups at this stage.
> > > 
> > > If there's general agreement that your cleanups are safe and welcome
> > > (Michal's initial reaction sheds some doubt on that), great: I hope
> > > that Andrew can fast-track them into mmotm, then Alex rebase on top
> > > of them, and I then re-test and re-review.
> > > 
> > > But if that quick agreement is not forthcoming, may I ask you please
> > > to hold back, and resend based on top of Alex's next posting?
> > 
> > The per-memcg lru lock series seems a high priority, and I have
> > absolutely no problem accommodating your request.
> 
> Many thanks!
> 
> > 
> > In return, may I ask you or Alex to review this series after you
> > have finished with per-memcg lru lock (to make sure that I resolve
> > all the conflicts correctly at least)?
> 
> Fair enough: I promise to do so.
> 
> And your rebasing will necessarily lead you to review some parts
> of Alex's patchset, which will help us all too.
> 
> Andrew, Yu asked at the start:
> > > > I see you have taken this:
> > > >   mm: use add_page_to_lru_list()/page_lru()/page_off_lru()
> > > > Do you mind dropping it?
> Dropping that for now will help too.

Andrew, please drop Yu Zhao's
mm-use-add_page_to_lru_list-page_lru-page_off_lru.patch
from the mmotm tree.

Yu wants to replace it by this cleanup series, and he has graciously
agreed to rebase his series on top of Alex Shi's per-memcg lru_lock
series; but mm-use-add_page_to_lru_list-page_lru-page_off_lru.patch
is getting in the way of adding them to mmotm.  The three of us are
all hoping that Alex's v19 series can go into mmotm when the merge
window closes, then I'll review Yu's series rebased on top of it.

Thanks,
Hugh


^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2020-10-13  2:30 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-18  3:00 [PATCH 00/13] mm: clean up some lru related pieces Yu Zhao
2020-09-18  3:00 ` [PATCH 01/13] mm: use add_page_to_lru_list() Yu Zhao
2020-09-18  7:32   ` Michal Hocko
2020-09-18 10:05     ` Yu Zhao
2020-09-18  3:00 ` [PATCH 02/13] mm: use page_off_lru() Yu Zhao
2020-09-18  7:37   ` Michal Hocko
2020-09-18 10:27     ` Yu Zhao
2020-09-18 11:09       ` Michal Hocko
2020-09-18 18:53         ` Yu Zhao
2020-09-21 11:16           ` Michal Hocko
2020-09-18 11:24       ` Michal Hocko
2020-09-18  3:00 ` [PATCH 03/13] mm: move __ClearPageLRU() into page_off_lru() Yu Zhao
2020-09-18  7:38   ` Michal Hocko
2020-09-18  3:00 ` [PATCH 04/13] mm: shuffle lru list addition and deletion functions Yu Zhao
2020-09-18  3:00 ` [PATCH 05/13] mm: don't pass enum lru_list to lru list addition functions Yu Zhao
2020-09-18  3:00 ` [PATCH 06/13] mm: don't pass enum lru_list to trace_mm_lru_insertion() Yu Zhao
2020-09-18  3:00 ` [PATCH 07/13] mm: don't pass enum lru_list to del_page_from_lru_list() Yu Zhao
2020-09-18  3:00 ` [PATCH 08/13] mm: rename page_off_lru() to __clear_page_lru_flags() Yu Zhao
2020-09-18  3:00 ` [PATCH 09/13] mm: inline page_lru_base_type() Yu Zhao
2020-09-18  3:00 ` [PATCH 10/13] mm: VM_BUG_ON lru page flags Yu Zhao
2020-09-18  3:00 ` [PATCH 11/13] mm: inline __update_lru_size() Yu Zhao
2020-09-18  3:00 ` [PATCH 12/13] mm: make lruvec_lru_size() static Yu Zhao
2020-09-18  3:00 ` [PATCH 13/13] mm: enlarge the int parameter of update_lru_size() Yu Zhao
2020-09-18  7:45 ` [PATCH 00/13] mm: clean up some lru related pieces Michal Hocko
2020-09-18  9:36   ` Yu Zhao
2020-09-18 20:46 ` Hugh Dickins
2020-09-18 21:01   ` Yu Zhao
2020-09-18 21:19     ` Hugh Dickins
2020-09-19  0:31       ` Alex Shi
2020-10-13  2:30       ` Hugh Dickins
