* [PATCH REBASED 0/4] mm: Generalize putback functions
From: Kirill Tkhai @ 2019-03-18  9:27 UTC
  To: akpm, daniel.m.jordan, mhocko, ktkhai, linux-mm, linux-kernel

(This is a resend of the patchset, rebased on next-20190318.)

The functions putback_inactive_pages() and move_active_pages_to_lru()
are nearly identical, so this patchset merges them into a single
function.
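
Both already follow roughly the same skeleton (a simplified sketch of
the common pattern, not the exact kernel code; the real versions
appear in the patches below):

	while (!list_empty(list)) {
		page = lru_to_page(list);
		if (unlikely(!page_evictable(page))) {
			/* drop lru_lock, putback_lru_page(), retake lock */
			continue;
		}
		SetPageLRU(page);
		/* link the page into its lruvec's LRU list */
		if (put_page_testzero(page)) {
			/* last reference dropped: free the page instead */
		}
	}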

v3:   Replace list_del_init() with list_del()
v2.5: Update comment
v2:   Fix tracing. Return VM_BUG_ON() check on the old place. Improve spelling.
---

Kirill Tkhai (4):
      mm: Move recent_rotated pages calculation to shrink_inactive_list()
      mm: Move nr_deactivate accounting to shrink_active_list()
      mm: Remove pages_to_free argument of move_active_pages_to_lru()
      mm: Generalize putback scan functions


 .../trace/postprocess/trace-vmscan-postprocess.pl  |    7 +
 include/linux/vmstat.h                             |    2 
 include/trace/events/vmscan.h                      |   13 +-
 mm/vmscan.c                                        |  148 +++++++-------------
 4 files changed, 68 insertions(+), 102 deletions(-)

--
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>



* [PATCH REBASED 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list()
From: Kirill Tkhai @ 2019-03-18  9:27 UTC
  To: akpm, daniel.m.jordan, mhocko, ktkhai, linux-mm, linux-kernel

This patch moves the recent_rotated calculation from
putback_inactive_pages() to shrink_inactive_list(). This makes
putback_inactive_pages() look more similar to
move_active_pages_to_lru().

To do that, we account activated pages in reclaim_stat::nr_activate.
Since a page may change its LRU type from anon to file cache inside
shrink_page_list() (see ClearPageSwapBacked()), we have to account
pages of both types, so nr_activate becomes a two-element array.

Previously we used nr_activate to account PGACTIVATE events, but now
they are accounted in a separate pgactivate variable, since PGACTIVATE
counts pages in general rather than the sum of hpage_nr_pages().
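
A worked example of the distinction (hypothetical numbers): if
shrink_page_list() activates one anonymous 2MB THP (512 base pages on
x86-64) and one regular file page, then afterwards:

	stat->nr_activate[0] == 512;	/* anon: hpage_nr_pages() of the THP */
	stat->nr_activate[1] == 1;	/* file: one base page */
	pgactivate == 2;		/* two PGACTIVATE events, one per page */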

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>

v2.5: Update comment.
v2:   Update trace events.
---
 .../trace/postprocess/trace-vmscan-postprocess.pl  |    7 ++++---
 include/linux/vmstat.h                             |    2 +-
 include/trace/events/vmscan.h                      |   13 ++++++++-----
 mm/vmscan.c                                        |   15 +++++++--------
 4 files changed, 20 insertions(+), 17 deletions(-)

diff --git a/Documentation/trace/postprocess/trace-vmscan-postprocess.pl b/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
index 66bfd8396877..995da15b16ca 100644
--- a/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
+++ b/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
@@ -113,7 +113,7 @@ my $regex_kswapd_wake_default = 'nid=([0-9]*) order=([0-9]*)';
 my $regex_kswapd_sleep_default = 'nid=([0-9]*)';
 my $regex_wakeup_kswapd_default = 'nid=([0-9]*) zid=([0-9]*) order=([0-9]*) gfp_flags=([A-Z_|]*)';
 my $regex_lru_isolate_default = 'isolate_mode=([0-9]*) classzone_idx=([0-9]*) order=([0-9]*) nr_requested=([0-9]*) nr_scanned=([0-9]*) nr_skipped=([0-9]*) nr_taken=([0-9]*) lru=([a-z_]*)';
-my $regex_lru_shrink_inactive_default = 'nid=([0-9]*) nr_scanned=([0-9]*) nr_reclaimed=([0-9]*) nr_dirty=([0-9]*) nr_writeback=([0-9]*) nr_congested=([0-9]*) nr_immediate=([0-9]*) nr_activate=([0-9]*) nr_ref_keep=([0-9]*) nr_unmap_fail=([0-9]*) priority=([0-9]*) flags=([A-Z_|]*)';
+my $regex_lru_shrink_inactive_default = 'nid=([0-9]*) nr_scanned=([0-9]*) nr_reclaimed=([0-9]*) nr_dirty=([0-9]*) nr_writeback=([0-9]*) nr_congested=([0-9]*) nr_immediate=([0-9]*) nr_activate_anon=([0-9]*) nr_activate_file=([0-9]*) nr_ref_keep=([0-9]*) nr_unmap_fail=([0-9]*) priority=([0-9]*) flags=([A-Z_|]*)';
 my $regex_lru_shrink_active_default = 'lru=([A-Z_]*) nr_scanned=([0-9]*) nr_rotated=([0-9]*) priority=([0-9]*)';
 my $regex_writepage_default = 'page=([0-9a-f]*) pfn=([0-9]*) flags=([A-Z_|]*)';
 
@@ -212,7 +212,8 @@ $regex_lru_shrink_inactive = generate_traceevent_regex(
 			"vmscan/mm_vmscan_lru_shrink_inactive",
 			$regex_lru_shrink_inactive_default,
 			"nid", "nr_scanned", "nr_reclaimed", "nr_dirty", "nr_writeback",
-			"nr_congested", "nr_immediate", "nr_activate", "nr_ref_keep",
+			"nr_congested", "nr_immediate", "nr_activate_anon",
+			"nr_activate_file", "nr_ref_keep",
 			"nr_unmap_fail", "priority", "flags");
 $regex_lru_shrink_active = generate_traceevent_regex(
 			"vmscan/mm_vmscan_lru_shrink_active",
@@ -407,7 +408,7 @@ sub process_events {
 			}
 
 			my $nr_reclaimed = $3;
-			my $flags = $12;
+			my $flags = $13;
 			my $file = 0;
 			if ($flags =~ /RECLAIM_WB_FILE/) {
 				$file = 1;
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 2db8d60981fe..bdeda4b079fe 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -26,7 +26,7 @@ struct reclaim_stat {
 	unsigned nr_congested;
 	unsigned nr_writeback;
 	unsigned nr_immediate;
-	unsigned nr_activate;
+	unsigned nr_activate[2];
 	unsigned nr_ref_keep;
 	unsigned nr_unmap_fail;
 };
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index a1cb91342231..4f0e45e90cfc 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -358,7 +358,8 @@ TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
 		__field(unsigned long, nr_writeback)
 		__field(unsigned long, nr_congested)
 		__field(unsigned long, nr_immediate)
-		__field(unsigned long, nr_activate)
+		__field(unsigned int, nr_activate0)
+		__field(unsigned int, nr_activate1)
 		__field(unsigned long, nr_ref_keep)
 		__field(unsigned long, nr_unmap_fail)
 		__field(int, priority)
@@ -373,20 +374,22 @@ TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
 		__entry->nr_writeback = stat->nr_writeback;
 		__entry->nr_congested = stat->nr_congested;
 		__entry->nr_immediate = stat->nr_immediate;
-		__entry->nr_activate = stat->nr_activate;
+		__entry->nr_activate0 = stat->nr_activate[0];
+		__entry->nr_activate1 = stat->nr_activate[1];
 		__entry->nr_ref_keep = stat->nr_ref_keep;
 		__entry->nr_unmap_fail = stat->nr_unmap_fail;
 		__entry->priority = priority;
 		__entry->reclaim_flags = trace_shrink_flags(file);
 	),
 
-	TP_printk("nid=%d nr_scanned=%ld nr_reclaimed=%ld nr_dirty=%ld nr_writeback=%ld nr_congested=%ld nr_immediate=%ld nr_activate=%ld nr_ref_keep=%ld nr_unmap_fail=%ld priority=%d flags=%s",
+	TP_printk("nid=%d nr_scanned=%ld nr_reclaimed=%ld nr_dirty=%ld nr_writeback=%ld nr_congested=%ld nr_immediate=%ld nr_activate_anon=%d nr_activate_file=%d nr_ref_keep=%ld nr_unmap_fail=%ld priority=%d flags=%s",
 		__entry->nid,
 		__entry->nr_scanned, __entry->nr_reclaimed,
 		__entry->nr_dirty, __entry->nr_writeback,
 		__entry->nr_congested, __entry->nr_immediate,
-		__entry->nr_activate, __entry->nr_ref_keep,
-		__entry->nr_unmap_fail, __entry->priority,
+		__entry->nr_activate0, __entry->nr_activate1,
+		__entry->nr_ref_keep, __entry->nr_unmap_fail,
+		__entry->priority,
 		show_reclaim_flags(__entry->reclaim_flags))
 );
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 07f74e9507b6..e610737b36df 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1107,6 +1107,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 	LIST_HEAD(ret_pages);
 	LIST_HEAD(free_pages);
 	unsigned nr_reclaimed = 0;
+	unsigned pgactivate = 0;
 
 	memset(stat, 0, sizeof(*stat));
 	cond_resched();
@@ -1466,8 +1467,10 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			try_to_free_swap(page);
 		VM_BUG_ON_PAGE(PageActive(page), page);
 		if (!PageMlocked(page)) {
+			int type = page_is_file_cache(page);
 			SetPageActive(page);
-			stat->nr_activate++;
+			pgactivate++;
+			stat->nr_activate[type] += hpage_nr_pages(page);
 			count_memcg_page_event(page, PGACTIVATE);
 		}
 keep_locked:
@@ -1482,7 +1485,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 	free_unref_page_list(&free_pages);
 
 	list_splice(&ret_pages, page_list);
-	count_vm_events(PGACTIVATE, stat->nr_activate);
+	count_vm_events(PGACTIVATE, pgactivate);
 
 	return nr_reclaimed;
 }
@@ -1807,7 +1810,6 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 static noinline_for_stack void
 putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 {
-	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	LIST_HEAD(pages_to_free);
 
@@ -1833,11 +1835,6 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 		lru = page_lru(page);
 		add_page_to_lru_list(page, lruvec, lru);
 
-		if (is_active_lru(lru)) {
-			int file = is_file_lru(lru);
-			int numpages = hpage_nr_pages(page);
-			reclaim_stat->recent_rotated[file] += numpages;
-		}
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
@@ -1945,6 +1942,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 		count_memcg_events(lruvec_memcg(lruvec), PGSTEAL_DIRECT,
 				   nr_reclaimed);
 	}
+	reclaim_stat->recent_rotated[0] = stat.nr_activate[0];
+	reclaim_stat->recent_rotated[1] = stat.nr_activate[1];
 
 	putback_inactive_pages(lruvec, &page_list);
 



* [PATCH REBASED 2/4] mm: Move nr_deactivate accounting to shrink_active_list()
From: Kirill Tkhai @ 2019-03-18  9:28 UTC
  To: akpm, daniel.m.jordan, mhocko, ktkhai, linux-mm, linux-kernel

In shrink_active_list() we already know which LRU list the deactivated
pages go to, so the nr_deactivate accounting can be done there rather
than inside move_active_pages_to_lru().
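
For reference, this relies on the layout of the LRU list indices (as
defined in include/linux/mmzone.h around this time; reproduced here
only for illustration):

	#define LRU_BASE	0
	#define LRU_ACTIVE	1
	#define LRU_FILE	2

	enum lru_list {
		LRU_INACTIVE_ANON = LRU_BASE,
		LRU_ACTIVE_ANON = LRU_BASE + LRU_ACTIVE,
		LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,
		LRU_ACTIVE_FILE = LRU_BASE + LRU_FILE + LRU_ACTIVE,
		LRU_UNEVICTABLE,
		NR_LRU_LISTS
	};

so (lru - LRU_ACTIVE) in the hunk below is always the inactive twin of
an active list, and the caller can count PGDEACTIVATE itself.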

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
---
 mm/vmscan.c |   10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index e610737b36df..d2adabe4457d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2040,12 +2040,6 @@ static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
 		}
 	}
 
-	if (!is_active_lru(lru)) {
-		__count_vm_events(PGDEACTIVATE, nr_moved);
-		count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
-				   nr_moved);
-	}
-
 	return nr_moved;
 }
 
@@ -2137,6 +2131,10 @@ static void shrink_active_list(unsigned long nr_to_scan,
 
 	nr_activate = move_active_pages_to_lru(lruvec, &l_active, &l_hold, lru);
 	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, &l_hold, lru - LRU_ACTIVE);
+
+	__count_vm_events(PGDEACTIVATE, nr_deactivate);
+	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
+
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 	spin_unlock_irq(&pgdat->lru_lock);
 



* [PATCH REBASED 3/4] mm: Remove pages_to_free argument of move_active_pages_to_lru()
From: Kirill Tkhai @ 2019-03-18  9:28 UTC
  To: akpm, daniel.m.jordan, mhocko, ktkhai, linux-mm, linux-kernel

We can use the input list argument as the output argument too. This
makes the function more similar to putback_inactive_pages().
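
A sketch of the resulting calling convention in shrink_active_list()
(simplified from the hunks below):

	/* before: the caller supplied a separate pages_to_free list */
	nr_activate = move_active_pages_to_lru(lruvec, &l_active, &l_hold, lru);
	free_unref_page_list(&l_hold);

	/* after: the input list doubles as the list of pages to free */
	nr_activate = move_active_pages_to_lru(lruvec, &l_active, lru);
	list_splice(&l_inactive, &l_active);	/* collect free pages on one list */
	free_unref_page_list(&l_active);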

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>

v2: Fix comment spelling.
---
 mm/vmscan.c |   19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d2adabe4457d..1794ec7b21d8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2004,10 +2004,10 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 
 static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
 				     struct list_head *list,
-				     struct list_head *pages_to_free,
 				     enum lru_list lru)
 {
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+	LIST_HEAD(pages_to_free);
 	struct page *page;
 	int nr_pages;
 	int nr_moved = 0;
@@ -2034,12 +2034,17 @@ static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
 				(*get_compound_page_dtor(page))(page);
 				spin_lock_irq(&pgdat->lru_lock);
 			} else
-				list_add(&page->lru, pages_to_free);
+				list_add(&page->lru, &pages_to_free);
 		} else {
 			nr_moved += nr_pages;
 		}
 	}
 
+	/*
+	 * To save our caller's stack, now use input list for pages to free.
+	 */
+	list_splice(&pages_to_free, list);
+
 	return nr_moved;
 }
 
@@ -2129,8 +2134,10 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	reclaim_stat->recent_rotated[file] += nr_rotated;
 
-	nr_activate = move_active_pages_to_lru(lruvec, &l_active, &l_hold, lru);
-	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, &l_hold, lru - LRU_ACTIVE);
+	nr_activate = move_active_pages_to_lru(lruvec, &l_active, lru);
+	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, lru - LRU_ACTIVE);
+	/* Keep all free pages in l_active list */
+	list_splice(&l_inactive, &l_active);
 
 	__count_vm_events(PGDEACTIVATE, nr_deactivate);
 	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
@@ -2138,8 +2145,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 	spin_unlock_irq(&pgdat->lru_lock);
 
-	mem_cgroup_uncharge_list(&l_hold);
-	free_unref_page_list(&l_hold);
+	mem_cgroup_uncharge_list(&l_active);
+	free_unref_page_list(&l_active);
 	trace_mm_vmscan_lru_shrink_active(pgdat->node_id, nr_taken, nr_activate,
 			nr_deactivate, nr_rotated, sc->priority, file);
 }



* [PATCH REBASED 4/4] mm: Generalize putback scan functions
From: Kirill Tkhai @ 2019-03-18  9:28 UTC
  To: akpm, daniel.m.jordan, mhocko, ktkhai, linux-mm, linux-kernel

This combines the two similar functions move_active_pages_to_lru()
and putback_inactive_pages() into a single move_pages_to_lru(). This
removes duplicate code and makes the object file smaller.

Before:
   text	   data	    bss	    dec	    hex	filename
  57082	   4732	    128	  61942	   f1f6	mm/vmscan.o
After:
   text	   data	    bss	    dec	    hex	filename
  55112	   4600	    128	  59840	   e9c0	mm/vmscan.o

Note that the path coming from shrink_active_list() now also checks
!page_evictable(). This shouldn't change any behavior, since that path
works with evictable pages only.

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>

v3: Replace list_del_init() with list_del()
v2: Move VM_BUG_ON() up.
---
 mm/vmscan.c |  122 +++++++++++++++++++----------------------------------------
 1 file changed, 40 insertions(+), 82 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1794ec7b21d8..f6b9b45f731d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1807,33 +1807,53 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 	return isolated > inactive;
 }
 
-static noinline_for_stack void
-putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
+/*
+ * This moves pages from @list to corresponding LRU list.
+ *
+ * We move them the other way if the page is referenced by one or more
+ * processes, from rmap.
+ *
+ * If the pages are mostly unmapped, the processing is fast and it is
+ * appropriate to hold zone_lru_lock across the whole operation.  But if
+ * the pages are mapped, the processing is slow (page_referenced()) so we
+ * should drop zone_lru_lock around each page.  It's impossible to balance
+ * this, so instead we remove the pages from the LRU while processing them.
+ * It is safe to rely on PG_active against the non-LRU pages in here because
+ * nobody will play with that bit on a non-LRU page.
+ *
+ * The downside is that we have to touch page->_refcount against each page.
+ * But we had to alter page->flags anyway.
+ *
+ * Returns the number of pages moved to the given lruvec.
+ */
+
+static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
+						     struct list_head *list)
 {
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+	int nr_pages, nr_moved = 0;
 	LIST_HEAD(pages_to_free);
+	struct page *page;
+	enum lru_list lru;
 
-	/*
-	 * Put back any unfreeable pages.
-	 */
-	while (!list_empty(page_list)) {
-		struct page *page = lru_to_page(page_list);
-		int lru;
-
+	while (!list_empty(list)) {
+		page = lru_to_page(list);
 		VM_BUG_ON_PAGE(PageLRU(page), page);
-		list_del(&page->lru);
 		if (unlikely(!page_evictable(page))) {
+			list_del(&page->lru);
 			spin_unlock_irq(&pgdat->lru_lock);
 			putback_lru_page(page);
 			spin_lock_irq(&pgdat->lru_lock);
 			continue;
 		}
-
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
 		SetPageLRU(page);
 		lru = page_lru(page);
-		add_page_to_lru_list(page, lruvec, lru);
+
+		nr_pages = hpage_nr_pages(page);
+		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
+		list_move(&page->lru, &lruvec->lists[lru]);
 
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
@@ -1847,13 +1867,17 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 				spin_lock_irq(&pgdat->lru_lock);
 			} else
 				list_add(&page->lru, &pages_to_free);
+		} else {
+			nr_moved += nr_pages;
 		}
 	}
 
 	/*
 	 * To save our caller's stack, now use input list for pages to free.
 	 */
-	list_splice(&pages_to_free, page_list);
+	list_splice(&pages_to_free, list);
+
+	return nr_moved;
 }
 
 /*
@@ -1945,7 +1969,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	reclaim_stat->recent_rotated[0] = stat.nr_activate[0];
 	reclaim_stat->recent_rotated[1] = stat.nr_activate[1];
 
-	putback_inactive_pages(lruvec, &page_list);
+	move_pages_to_lru(lruvec, &page_list);
 
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 
@@ -1982,72 +2006,6 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	return nr_reclaimed;
 }
 
-/*
- * This moves pages from the active list to the inactive list.
- *
- * We move them the other way if the page is referenced by one or more
- * processes, from rmap.
- *
- * If the pages are mostly unmapped, the processing is fast and it is
- * appropriate to hold pgdat->lru_lock across the whole operation.  But if
- * the pages are mapped, the processing is slow (page_referenced()) so we
- * should drop pgdat->lru_lock around each page.  It's impossible to balance
- * this, so instead we remove the pages from the LRU while processing them.
- * It is safe to rely on PG_active against the non-LRU pages in here because
- * nobody will play with that bit on a non-LRU page.
- *
- * The downside is that we have to touch page->_refcount against each page.
- * But we had to alter page->flags anyway.
- *
- * Returns the number of pages moved to the given lru.
- */
-
-static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
-				     struct list_head *list,
-				     enum lru_list lru)
-{
-	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
-	LIST_HEAD(pages_to_free);
-	struct page *page;
-	int nr_pages;
-	int nr_moved = 0;
-
-	while (!list_empty(list)) {
-		page = lru_to_page(list);
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-
-		VM_BUG_ON_PAGE(PageLRU(page), page);
-		SetPageLRU(page);
-
-		nr_pages = hpage_nr_pages(page);
-		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
-		list_move(&page->lru, &lruvec->lists[lru]);
-
-		if (put_page_testzero(page)) {
-			__ClearPageLRU(page);
-			__ClearPageActive(page);
-			del_page_from_lru_list(page, lruvec, lru);
-
-			if (unlikely(PageCompound(page))) {
-				spin_unlock_irq(&pgdat->lru_lock);
-				mem_cgroup_uncharge(page);
-				(*get_compound_page_dtor(page))(page);
-				spin_lock_irq(&pgdat->lru_lock);
-			} else
-				list_add(&page->lru, &pages_to_free);
-		} else {
-			nr_moved += nr_pages;
-		}
-	}
-
-	/*
-	 * To save our caller's stack, now use input list for pages to free.
-	 */
-	list_splice(&pages_to_free, list);
-
-	return nr_moved;
-}
-
 static void shrink_active_list(unsigned long nr_to_scan,
 			       struct lruvec *lruvec,
 			       struct scan_control *sc,
@@ -2134,8 +2092,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	reclaim_stat->recent_rotated[file] += nr_rotated;
 
-	nr_activate = move_active_pages_to_lru(lruvec, &l_active, lru);
-	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, lru - LRU_ACTIVE);
+	nr_activate = move_pages_to_lru(lruvec, &l_active);
+	nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
 	/* Keep all free pages in l_active list */
 	list_splice(&l_inactive, &l_active);
 



* [PATCH] fixup: vmscan: Fix build on !CONFIG_MEMCG from nr_deactivate changes
From: Chris Down @ 2019-03-22 15:05 UTC
  To: Andrew Morton
  Cc: Kirill Tkhai, Johannes Weiner, Tejun Heo, linux-kernel, cgroups,
	linux-mm, kernel-team

"mm: move nr_deactivate accounting to shrink_active_list()" uses the
non-irqsaved version of count_memcg_events (__count_memcg_events), but
we've only exported the irqsaving version of it to userspace, so the
build breaks:

    mm/vmscan.c: In function ‘shrink_active_list’:
    mm/vmscan.c:2101:2: error: implicit declaration of function ‘__count_memcg_events’; did you mean ‘count_memcg_events’? [-Werror=implicit-function-declaration]

This fixup makes it build with !CONFIG_MEMCG.

Signed-off-by: Chris Down <chris@chrisdown.name>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: linux-kernel@vger.kernel.org
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: kernel-team@fb.com
---
 include/linux/memcontrol.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 534267947664..b226c4bafc93 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1147,6 +1147,12 @@ static inline void count_memcg_events(struct mem_cgroup *memcg,
 {
 }
 
+static inline void __count_memcg_events(struct mem_cgroup *memcg,
+					enum vm_event_item idx,
+					unsigned long count)
+{
+}
+
 static inline void count_memcg_page_event(struct page *page,
 					  int idx)
 {
-- 
2.21.0



* Re: [PATCH] fixup: vmscan: Fix build on !CONFIG_MEMCG from nr_deactivate changes
From: Chris Down @ 2019-03-22 15:06 UTC
  To: Andrew Morton
  Cc: Kirill Tkhai, Johannes Weiner, Tejun Heo, linux-kernel, cgroups,
	linux-mm, kernel-team

Chris Down writes:
>"mm: move nr_deactivate accounting to shrink_active_list()" uses the
>non-irqsaved version of count_memcg_events (__count_memcg_events), but
>we've only exported the irqsaving version of it to userspace, so the
>build breaks:

Er, "with !CONFIG_MEMCG", not "to userspace". No idea where that came from...



* Re: [PATCH] fixup: vmscan: Fix build on !CONFIG_MEMCG from nr_deactivate changes
From: Kirill Tkhai @ 2019-03-22 15:16 UTC
  To: Chris Down, Andrew Morton
  Cc: Johannes Weiner, Tejun Heo, linux-kernel, cgroups, linux-mm, kernel-team

On 22.03.2019 18:05, Chris Down wrote:
> "mm: move nr_deactivate accounting to shrink_active_list()" uses the
> non-irqsaved version of count_memcg_events (__count_memcg_events), but
> we've only exported the irqsaving version of it to userspace, so the
> build breaks:
> 
>     mm/vmscan.c: In function ‘shrink_active_list’:
>     mm/vmscan.c:2101:2: error: implicit declaration of function ‘__count_memcg_events’; did you mean ‘count_memcg_events’? [-Werror=implicit-function-declaration]
> 
> This fixup makes it build with !CONFIG_MEMCG.

Yeah, thanks, Chris.

> Signed-off-by: Chris Down <chris@chrisdown.name>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Tejun Heo <tj@kernel.org>
> Cc: linux-kernel@vger.kernel.org
> Cc: cgroups@vger.kernel.org
> Cc: linux-mm@kvack.org
> Cc: kernel-team@fb.com
> ---
>  include/linux/memcontrol.h | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 534267947664..b226c4bafc93 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1147,6 +1147,12 @@ static inline void count_memcg_events(struct mem_cgroup *memcg,
>  {
>  }
>  
> +static inline void __count_memcg_events(struct mem_cgroup *memcg,
> +					enum vm_event_item idx,
> +					unsigned long count)
> +{
> +}
> +
>  static inline void count_memcg_page_event(struct page *page,
>  					  int idx)
>  {
> 



* Re: [PATCH REBASED 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list()
From: Johannes Weiner @ 2019-05-28 15:51 UTC
  To: Kirill Tkhai; +Cc: akpm, daniel.m.jordan, mhocko, linux-mm, linux-kernel

On Mon, Mar 18, 2019 at 12:27:59PM +0300, Kirill Tkhai wrote:
> @@ -1945,6 +1942,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
>  		count_memcg_events(lruvec_memcg(lruvec), PGSTEAL_DIRECT,
>  				   nr_reclaimed);
>  	}
> +	reclaim_stat->recent_rotated[0] = stat.nr_activate[0];
> +	reclaim_stat->recent_rotated[1] = stat.nr_activate[1];

Surely this should be +=, right?

Otherwise we maintain essentially no history of page rotations and
that wreaks havoc on the page cache vs. swapping reclaim balance.
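
In other words, the suggested change (a sketch of what '+=' would mean
here):

	reclaim_stat->recent_rotated[0] += stat.nr_activate[0];
	reclaim_stat->recent_rotated[1] += stat.nr_activate[1];

so that rotations accumulate across shrink_inactive_list() calls
instead of being overwritten each time.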



* Re: [PATCH REBASED 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list()
From: Kirill Tkhai @ 2019-05-28 16:07 UTC
  To: Johannes Weiner; +Cc: akpm, daniel.m.jordan, mhocko, linux-mm, linux-kernel

On 28.05.2019 18:51, Johannes Weiner wrote:
> On Mon, Mar 18, 2019 at 12:27:59PM +0300, Kirill Tkhai wrote:
>> @@ -1945,6 +1942,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
>>  		count_memcg_events(lruvec_memcg(lruvec), PGSTEAL_DIRECT,
>>  				   nr_reclaimed);
>>  	}
>> +	reclaim_stat->recent_rotated[0] = stat.nr_activate[0];
>> +	reclaim_stat->recent_rotated[1] = stat.nr_activate[1];
> 
> Surely this should be +=, right?
> 
> Otherwise we maintain essentially no history of page rotations and
> that wreaks havoc on the page cache vs. swapping reclaim balance.

Sure, thanks.

Kirill



Thread overview: 10+ messages
2019-03-18  9:27 [PATCH REBASED 0/4] mm: Generalize putback functions Kirill Tkhai
2019-03-18  9:27 ` [PATCH REBASED 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list() Kirill Tkhai
2019-05-28 15:51   ` Johannes Weiner
2019-05-28 16:07     ` Kirill Tkhai
2019-03-18  9:28 ` [PATCH REBASED 2/4] mm: Move nr_deactivate accounting to shrink_active_list() Kirill Tkhai
2019-03-22 15:05   ` [PATCH] fixup: vmscan: Fix build on !CONFIG_MEMCG from nr_deactivate changes Chris Down
2019-03-22 15:06     ` Chris Down
2019-03-22 15:16     ` Kirill Tkhai
2019-03-18  9:28 ` [PATCH REBASED 3/4] mm: Remove pages_to_free argument of move_active_pages_to_lru() Kirill Tkhai
2019-03-18  9:28 ` [PATCH REBASED 4/4] mm: Generalize putback scan functions Kirill Tkhai
