* [PATCH v2 0/4] mm: Generalize putback functions
@ 2019-02-14 10:35 Kirill Tkhai
  2019-02-14 10:35 ` [PATCH v2 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list() Kirill Tkhai
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Kirill Tkhai @ 2019-02-14 10:35 UTC (permalink / raw)
  To: akpm, daniel.m.jordan, mhocko, ktkhai, linux-mm, linux-kernel

Functions putback_inactive_pages() and move_active_pages_to_lru()
are very similar, so this patchset merges them into a single function.

v2: Fix tracing. Restore the VM_BUG_ON() check to its old place. Improve spelling.
---

Kirill Tkhai (4):
      mm: Move recent_rotated pages calculation to shrink_inactive_list()
      mm: Move nr_deactivate accounting to shrink_active_list()
      mm: Remove pages_to_free argument of move_active_pages_to_lru()
      mm: Generalize putback scan functions


 .../trace/postprocess/trace-vmscan-postprocess.pl  |    7 +
 include/linux/vmstat.h                             |    2 
 include/trace/events/vmscan.h                      |   13 +-
 mm/vmscan.c                                        |  148 +++++++-------------
 4 files changed, 68 insertions(+), 102 deletions(-)

--
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>


* [PATCH v2 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list()
  2019-02-14 10:35 [PATCH v2 0/4] mm: Generalize putback functions Kirill Tkhai
@ 2019-02-14 10:35 ` Kirill Tkhai
  2019-02-14 20:57   ` Andrew Morton
  2019-02-14 10:35 ` [PATCH v2 2/4] mm: Move nr_deactivate accounting to shrink_active_list() Kirill Tkhai
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Kirill Tkhai @ 2019-02-14 10:35 UTC (permalink / raw)
  To: akpm, daniel.m.jordan, mhocko, ktkhai, linux-mm, linux-kernel

Currently, struct reclaim_stat::nr_activate is a local variable,
used only in shrink_page_list(). This patch introduces another
local variable, pgactivate, to use instead of it, and reuses
nr_activate to account the number of active pages.

Note that we need nr_activate to be an array, since the type of a page
may change during shrink_page_list() (see ClearPageSwapBacked()).

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>

v2: Update trace events.
---
 .../trace/postprocess/trace-vmscan-postprocess.pl  |    7 ++++---
 include/linux/vmstat.h                             |    2 +-
 include/trace/events/vmscan.h                      |   13 ++++++++-----
 mm/vmscan.c                                        |   15 +++++++--------
 4 files changed, 20 insertions(+), 17 deletions(-)

diff --git a/Documentation/trace/postprocess/trace-vmscan-postprocess.pl b/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
index 66bfd8396877..995da15b16ca 100644
--- a/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
+++ b/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
@@ -113,7 +113,7 @@ my $regex_kswapd_wake_default = 'nid=([0-9]*) order=([0-9]*)';
 my $regex_kswapd_sleep_default = 'nid=([0-9]*)';
 my $regex_wakeup_kswapd_default = 'nid=([0-9]*) zid=([0-9]*) order=([0-9]*) gfp_flags=([A-Z_|]*)';
 my $regex_lru_isolate_default = 'isolate_mode=([0-9]*) classzone_idx=([0-9]*) order=([0-9]*) nr_requested=([0-9]*) nr_scanned=([0-9]*) nr_skipped=([0-9]*) nr_taken=([0-9]*) lru=([a-z_]*)';
-my $regex_lru_shrink_inactive_default = 'nid=([0-9]*) nr_scanned=([0-9]*) nr_reclaimed=([0-9]*) nr_dirty=([0-9]*) nr_writeback=([0-9]*) nr_congested=([0-9]*) nr_immediate=([0-9]*) nr_activate=([0-9]*) nr_ref_keep=([0-9]*) nr_unmap_fail=([0-9]*) priority=([0-9]*) flags=([A-Z_|]*)';
+my $regex_lru_shrink_inactive_default = 'nid=([0-9]*) nr_scanned=([0-9]*) nr_reclaimed=([0-9]*) nr_dirty=([0-9]*) nr_writeback=([0-9]*) nr_congested=([0-9]*) nr_immediate=([0-9]*) nr_activate_anon=([0-9]*) nr_activate_file=([0-9]*) nr_ref_keep=([0-9]*) nr_unmap_fail=([0-9]*) priority=([0-9]*) flags=([A-Z_|]*)';
 my $regex_lru_shrink_active_default = 'lru=([A-Z_]*) nr_scanned=([0-9]*) nr_rotated=([0-9]*) priority=([0-9]*)';
 my $regex_writepage_default = 'page=([0-9a-f]*) pfn=([0-9]*) flags=([A-Z_|]*)';
 
@@ -212,7 +212,8 @@ $regex_lru_shrink_inactive = generate_traceevent_regex(
 			"vmscan/mm_vmscan_lru_shrink_inactive",
 			$regex_lru_shrink_inactive_default,
 			"nid", "nr_scanned", "nr_reclaimed", "nr_dirty", "nr_writeback",
-			"nr_congested", "nr_immediate", "nr_activate", "nr_ref_keep",
+			"nr_congested", "nr_immediate", "nr_activate_anon",
+			"nr_activate_file", "nr_ref_keep",
 			"nr_unmap_fail", "priority", "flags");
 $regex_lru_shrink_active = generate_traceevent_regex(
 			"vmscan/mm_vmscan_lru_shrink_active",
@@ -407,7 +408,7 @@ sub process_events {
 			}
 
 			my $nr_reclaimed = $3;
-			my $flags = $12;
+			my $flags = $13;
 			my $file = 0;
 			if ($flags =~ /RECLAIM_WB_FILE/) {
 				$file = 1;
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 2db8d60981fe..bdeda4b079fe 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -26,7 +26,7 @@ struct reclaim_stat {
 	unsigned nr_congested;
 	unsigned nr_writeback;
 	unsigned nr_immediate;
-	unsigned nr_activate;
+	unsigned nr_activate[2];
 	unsigned nr_ref_keep;
 	unsigned nr_unmap_fail;
 };
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index a1cb91342231..4f0e45e90cfc 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -358,7 +358,8 @@ TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
 		__field(unsigned long, nr_writeback)
 		__field(unsigned long, nr_congested)
 		__field(unsigned long, nr_immediate)
-		__field(unsigned long, nr_activate)
+		__field(unsigned int, nr_activate0)
+		__field(unsigned int, nr_activate1)
 		__field(unsigned long, nr_ref_keep)
 		__field(unsigned long, nr_unmap_fail)
 		__field(int, priority)
@@ -373,20 +374,22 @@ TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
 		__entry->nr_writeback = stat->nr_writeback;
 		__entry->nr_congested = stat->nr_congested;
 		__entry->nr_immediate = stat->nr_immediate;
-		__entry->nr_activate = stat->nr_activate;
+		__entry->nr_activate0 = stat->nr_activate[0];
+		__entry->nr_activate1 = stat->nr_activate[1];
 		__entry->nr_ref_keep = stat->nr_ref_keep;
 		__entry->nr_unmap_fail = stat->nr_unmap_fail;
 		__entry->priority = priority;
 		__entry->reclaim_flags = trace_shrink_flags(file);
 	),
 
-	TP_printk("nid=%d nr_scanned=%ld nr_reclaimed=%ld nr_dirty=%ld nr_writeback=%ld nr_congested=%ld nr_immediate=%ld nr_activate=%ld nr_ref_keep=%ld nr_unmap_fail=%ld priority=%d flags=%s",
+	TP_printk("nid=%d nr_scanned=%ld nr_reclaimed=%ld nr_dirty=%ld nr_writeback=%ld nr_congested=%ld nr_immediate=%ld nr_activate_anon=%d nr_activate_file=%d nr_ref_keep=%ld nr_unmap_fail=%ld priority=%d flags=%s",
 		__entry->nid,
 		__entry->nr_scanned, __entry->nr_reclaimed,
 		__entry->nr_dirty, __entry->nr_writeback,
 		__entry->nr_congested, __entry->nr_immediate,
-		__entry->nr_activate, __entry->nr_ref_keep,
-		__entry->nr_unmap_fail, __entry->priority,
+		__entry->nr_activate0, __entry->nr_activate1,
+		__entry->nr_ref_keep, __entry->nr_unmap_fail,
+		__entry->priority,
 		show_reclaim_flags(__entry->reclaim_flags))
 );
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ac4806f0f332..84542004a277 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1107,6 +1107,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 	LIST_HEAD(ret_pages);
 	LIST_HEAD(free_pages);
 	unsigned nr_reclaimed = 0;
+	unsigned pgactivate = 0;
 
 	memset(stat, 0, sizeof(*stat));
 	cond_resched();
@@ -1466,8 +1467,10 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			try_to_free_swap(page);
 		VM_BUG_ON_PAGE(PageActive(page), page);
 		if (!PageMlocked(page)) {
+			int type = page_is_file_cache(page);
 			SetPageActive(page);
-			stat->nr_activate++;
+			pgactivate++;
+			stat->nr_activate[type] += hpage_nr_pages(page);
 			count_memcg_page_event(page, PGACTIVATE);
 		}
 keep_locked:
@@ -1482,7 +1485,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 	free_unref_page_list(&free_pages);
 
 	list_splice(&ret_pages, page_list);
-	count_vm_events(PGACTIVATE, stat->nr_activate);
+	count_vm_events(PGACTIVATE, pgactivate);
 
 	return nr_reclaimed;
 }
@@ -1807,7 +1810,6 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 static noinline_for_stack void
 putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 {
-	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	LIST_HEAD(pages_to_free);
 
@@ -1833,11 +1835,6 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 		lru = page_lru(page);
 		add_page_to_lru_list(page, lruvec, lru);
 
-		if (is_active_lru(lru)) {
-			int file = is_file_lru(lru);
-			int numpages = hpage_nr_pages(page);
-			reclaim_stat->recent_rotated[file] += numpages;
-		}
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
@@ -1945,6 +1942,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 		count_memcg_events(lruvec_memcg(lruvec), PGSTEAL_DIRECT,
 				   nr_reclaimed);
 	}
+	reclaim_stat->recent_rotated[0] = stat.nr_activate[0];
+	reclaim_stat->recent_rotated[1] = stat.nr_activate[1];
 
 	putback_inactive_pages(lruvec, &page_list);
 


* [PATCH v2 2/4] mm: Move nr_deactivate accounting to shrink_active_list()
  2019-02-14 10:35 [PATCH v2 0/4] mm: Generalize putback functions Kirill Tkhai
  2019-02-14 10:35 ` [PATCH v2 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list() Kirill Tkhai
@ 2019-02-14 10:35 ` Kirill Tkhai
  2019-02-14 10:35 ` [PATCH v2 3/4] mm: Remove pages_to_free argument of move_active_pages_to_lru() Kirill Tkhai
  2019-02-14 10:35 ` [PATCH v2 4/4] mm: Generalize putback scan functions Kirill Tkhai
  3 siblings, 0 replies; 11+ messages in thread
From: Kirill Tkhai @ 2019-02-14 10:35 UTC (permalink / raw)
  To: akpm, daniel.m.jordan, mhocko, ktkhai, linux-mm, linux-kernel

We know which LRU is not active in shrink_active_list(), so the
PGDEACTIVATE accounting can be done there directly instead of in
move_active_pages_to_lru().

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/vmscan.c |   10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 84542004a277..8d7d55e71511 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2040,12 +2040,6 @@ static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
 		}
 	}
 
-	if (!is_active_lru(lru)) {
-		__count_vm_events(PGDEACTIVATE, nr_moved);
-		count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
-				   nr_moved);
-	}
-
 	return nr_moved;
 }
 
@@ -2137,6 +2131,10 @@ static void shrink_active_list(unsigned long nr_to_scan,
 
 	nr_activate = move_active_pages_to_lru(lruvec, &l_active, &l_hold, lru);
 	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, &l_hold, lru - LRU_ACTIVE);
+
+	__count_vm_events(PGDEACTIVATE, nr_deactivate);
+	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
+
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 	spin_unlock_irq(&pgdat->lru_lock);
 


* [PATCH v2 3/4] mm: Remove pages_to_free argument of move_active_pages_to_lru()
  2019-02-14 10:35 [PATCH v2 0/4] mm: Generalize putback functions Kirill Tkhai
  2019-02-14 10:35 ` [PATCH v2 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list() Kirill Tkhai
  2019-02-14 10:35 ` [PATCH v2 2/4] mm: Move nr_deactivate accounting to shrink_active_list() Kirill Tkhai
@ 2019-02-14 10:35 ` Kirill Tkhai
  2019-02-14 10:35 ` [PATCH v2 4/4] mm: Generalize putback scan functions Kirill Tkhai
  3 siblings, 0 replies; 11+ messages in thread
From: Kirill Tkhai @ 2019-02-14 10:35 UTC (permalink / raw)
  To: akpm, daniel.m.jordan, mhocko, ktkhai, linux-mm, linux-kernel

We can use the input argument list as the output argument too.
This makes the function more similar to putback_inactive_pages().
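
To make the new calling convention concrete, here is a condensed view
of how shrink_active_list() ends up using the function (taken from the
diff below; surrounding code elided, nothing beyond the diff assumed):

	nr_activate = move_active_pages_to_lru(lruvec, &l_active, lru);
	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, lru - LRU_ACTIVE);

	/*
	 * Each call splices its leftover (freeable) pages back onto its
	 * input list, so gather everything on l_active and free from
	 * there; the separate l_hold/pages_to_free argument is gone.
	 */
	list_splice(&l_inactive, &l_active);

	mem_cgroup_uncharge_list(&l_active);
	free_unref_page_list(&l_active);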

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>

v2: Fix comment spelling.
---
 mm/vmscan.c |   19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8d7d55e71511..656a9b5af2a4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2004,10 +2004,10 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 
 static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
 				     struct list_head *list,
-				     struct list_head *pages_to_free,
 				     enum lru_list lru)
 {
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+	LIST_HEAD(pages_to_free);
 	struct page *page;
 	int nr_pages;
 	int nr_moved = 0;
@@ -2034,12 +2034,17 @@ static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
 				(*get_compound_page_dtor(page))(page);
 				spin_lock_irq(&pgdat->lru_lock);
 			} else
-				list_add(&page->lru, pages_to_free);
+				list_add(&page->lru, &pages_to_free);
 		} else {
 			nr_moved += nr_pages;
 		}
 	}
 
+	/*
+	 * To save our caller's stack, now use input list for pages to free.
+	 */
+	list_splice(&pages_to_free, list);
+
 	return nr_moved;
 }
 
@@ -2129,8 +2134,10 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	reclaim_stat->recent_rotated[file] += nr_rotated;
 
-	nr_activate = move_active_pages_to_lru(lruvec, &l_active, &l_hold, lru);
-	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, &l_hold, lru - LRU_ACTIVE);
+	nr_activate = move_active_pages_to_lru(lruvec, &l_active, lru);
+	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, lru - LRU_ACTIVE);
+	/* Keep all free pages in l_active list */
+	list_splice(&l_inactive, &l_active);
 
 	__count_vm_events(PGDEACTIVATE, nr_deactivate);
 	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
@@ -2138,8 +2145,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 	spin_unlock_irq(&pgdat->lru_lock);
 
-	mem_cgroup_uncharge_list(&l_hold);
-	free_unref_page_list(&l_hold);
+	mem_cgroup_uncharge_list(&l_active);
+	free_unref_page_list(&l_active);
 	trace_mm_vmscan_lru_shrink_active(pgdat->node_id, nr_taken, nr_activate,
 			nr_deactivate, nr_rotated, sc->priority, file);
 }


* [PATCH v2 4/4] mm: Generalize putback scan functions
  2019-02-14 10:35 [PATCH v2 0/4] mm: Generalize putback functions Kirill Tkhai
                   ` (2 preceding siblings ...)
  2019-02-14 10:35 ` [PATCH v2 3/4] mm: Remove pages_to_free argument of move_active_pages_to_lru() Kirill Tkhai
@ 2019-02-14 10:35 ` Kirill Tkhai
  2019-02-15 20:39   ` Daniel Jordan
  3 siblings, 1 reply; 11+ messages in thread
From: Kirill Tkhai @ 2019-02-14 10:35 UTC (permalink / raw)
  To: akpm, daniel.m.jordan, mhocko, ktkhai, linux-mm, linux-kernel

This combines the two similar functions move_active_pages_to_lru()
and putback_inactive_pages() into a single move_pages_to_lru().
This removes duplicate code and makes the object file smaller.

Before:
   text	   data	    bss	    dec	    hex	filename
  57082	   4732	    128	  61942	   f1f6	mm/vmscan.o
After:
   text	   data	    bss	    dec	    hex	filename
  55112	   4600	    128	  59840	   e9c0	mm/vmscan.o

Note that we now also check !page_evictable() for pages coming
from shrink_active_list(); this shouldn't change any behavior,
since that path works with evictable pages only.
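
For reference, this is the check the note above refers to, as it
appears in the unified helper in the diff below (nothing new here,
just pulled out for visibility):

	if (unlikely(!page_evictable(page))) {
		list_del_init(&page->lru);
		spin_unlock_irq(&pgdat->lru_lock);
		putback_lru_page(page);
		spin_lock_irq(&pgdat->lru_lock);
		continue;
	}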

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>

v2: Move VM_BUG_ON() up.
---
 mm/vmscan.c |  122 +++++++++++++++++++----------------------------------------
 1 file changed, 40 insertions(+), 82 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 656a9b5af2a4..fcec2385dda2 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1807,33 +1807,53 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 	return isolated > inactive;
 }
 
-static noinline_for_stack void
-putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
+/*
+ * This moves pages from @list to corresponding LRU list.
+ *
+ * We move them the other way if the page is referenced by one or more
+ * processes, from rmap.
+ *
+ * If the pages are mostly unmapped, the processing is fast and it is
+ * appropriate to hold zone_lru_lock across the whole operation.  But if
+ * the pages are mapped, the processing is slow (page_referenced()) so we
+ * should drop zone_lru_lock around each page.  It's impossible to balance
+ * this, so instead we remove the pages from the LRU while processing them.
+ * It is safe to rely on PG_active against the non-LRU pages in here because
+ * nobody will play with that bit on a non-LRU page.
+ *
+ * The downside is that we have to touch page->_refcount against each page.
+ * But we had to alter page->flags anyway.
+ *
+ * Returns the number of pages moved to the given lruvec.
+ */
+
+static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
+						     struct list_head *list)
 {
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+	int nr_pages, nr_moved = 0;
 	LIST_HEAD(pages_to_free);
+	struct page *page;
+	enum lru_list lru;
 
-	/*
-	 * Put back any unfreeable pages.
-	 */
-	while (!list_empty(page_list)) {
-		struct page *page = lru_to_page(page_list);
-		int lru;
-
+	while (!list_empty(list)) {
+		page = lru_to_page(list);
 		VM_BUG_ON_PAGE(PageLRU(page), page);
-		list_del(&page->lru);
 		if (unlikely(!page_evictable(page))) {
+			list_del_init(&page->lru);
 			spin_unlock_irq(&pgdat->lru_lock);
 			putback_lru_page(page);
 			spin_lock_irq(&pgdat->lru_lock);
 			continue;
 		}
-
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
 		SetPageLRU(page);
 		lru = page_lru(page);
-		add_page_to_lru_list(page, lruvec, lru);
+
+		nr_pages = hpage_nr_pages(page);
+		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
+		list_move(&page->lru, &lruvec->lists[lru]);
 
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
@@ -1847,13 +1867,17 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 				spin_lock_irq(&pgdat->lru_lock);
 			} else
 				list_add(&page->lru, &pages_to_free);
+		} else {
+			nr_moved += nr_pages;
 		}
 	}
 
 	/*
 	 * To save our caller's stack, now use input list for pages to free.
 	 */
-	list_splice(&pages_to_free, page_list);
+	list_splice(&pages_to_free, list);
+
+	return nr_moved;
 }
 
 /*
@@ -1945,7 +1969,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	reclaim_stat->recent_rotated[0] = stat.nr_activate[0];
 	reclaim_stat->recent_rotated[1] = stat.nr_activate[1];
 
-	putback_inactive_pages(lruvec, &page_list);
+	move_pages_to_lru(lruvec, &page_list);
 
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 
@@ -1982,72 +2006,6 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	return nr_reclaimed;
 }
 
-/*
- * This moves pages from the active list to the inactive list.
- *
- * We move them the other way if the page is referenced by one or more
- * processes, from rmap.
- *
- * If the pages are mostly unmapped, the processing is fast and it is
- * appropriate to hold zone_lru_lock across the whole operation.  But if
- * the pages are mapped, the processing is slow (page_referenced()) so we
- * should drop zone_lru_lock around each page.  It's impossible to balance
- * this, so instead we remove the pages from the LRU while processing them.
- * It is safe to rely on PG_active against the non-LRU pages in here because
- * nobody will play with that bit on a non-LRU page.
- *
- * The downside is that we have to touch page->_refcount against each page.
- * But we had to alter page->flags anyway.
- *
- * Returns the number of pages moved to the given lru.
- */
-
-static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
-				     struct list_head *list,
-				     enum lru_list lru)
-{
-	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
-	LIST_HEAD(pages_to_free);
-	struct page *page;
-	int nr_pages;
-	int nr_moved = 0;
-
-	while (!list_empty(list)) {
-		page = lru_to_page(list);
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-
-		VM_BUG_ON_PAGE(PageLRU(page), page);
-		SetPageLRU(page);
-
-		nr_pages = hpage_nr_pages(page);
-		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
-		list_move(&page->lru, &lruvec->lists[lru]);
-
-		if (put_page_testzero(page)) {
-			__ClearPageLRU(page);
-			__ClearPageActive(page);
-			del_page_from_lru_list(page, lruvec, lru);
-
-			if (unlikely(PageCompound(page))) {
-				spin_unlock_irq(&pgdat->lru_lock);
-				mem_cgroup_uncharge(page);
-				(*get_compound_page_dtor(page))(page);
-				spin_lock_irq(&pgdat->lru_lock);
-			} else
-				list_add(&page->lru, &pages_to_free);
-		} else {
-			nr_moved += nr_pages;
-		}
-	}
-
-	/*
-	 * To save our caller's stack, now use input list for pages to free.
-	 */
-	list_splice(&pages_to_free, list);
-
-	return nr_moved;
-}
-
 static void shrink_active_list(unsigned long nr_to_scan,
 			       struct lruvec *lruvec,
 			       struct scan_control *sc,
@@ -2134,8 +2092,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	reclaim_stat->recent_rotated[file] += nr_rotated;
 
-	nr_activate = move_active_pages_to_lru(lruvec, &l_active, lru);
-	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, lru - LRU_ACTIVE);
+	nr_activate = move_pages_to_lru(lruvec, &l_active);
+	nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
 	/* Keep all free pages in l_active list */
 	list_splice(&l_inactive, &l_active);
 


* Re: [PATCH v2 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list()
  2019-02-14 10:35 ` [PATCH v2 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list() Kirill Tkhai
@ 2019-02-14 20:57   ` Andrew Morton
  2019-02-15  7:38     ` [PATCH v2.5 " Kirill Tkhai
  0 siblings, 1 reply; 11+ messages in thread
From: Andrew Morton @ 2019-02-14 20:57 UTC (permalink / raw)
  To: Kirill Tkhai; +Cc: daniel.m.jordan, mhocko, linux-mm, linux-kernel

On Thu, 14 Feb 2019 13:35:21 +0300 Kirill Tkhai <ktkhai@virtuozzo.com> wrote:

> Currently, struct reclaim_stat::nr_activate is a local variable,
> used only in shrink_page_list(). This patch introduces another
> local variable, pgactivate, to use instead of it, and reuses
> nr_activate to account the number of active pages.
> 
> Note that we need nr_activate to be an array, since the type of a page
> may change during shrink_page_list() (see ClearPageSwapBacked()).

The patch has nothing to do with the Subject:

reclaim_stat::nr_activate is not a local variable - it is a struct member.

I can kinda see what the patch is doing but the changelog needs more
care, please.


* [PATCH v2.5 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list()
  2019-02-14 20:57   ` Andrew Morton
@ 2019-02-15  7:38     ` Kirill Tkhai
  0 siblings, 0 replies; 11+ messages in thread
From: Kirill Tkhai @ 2019-02-15  7:38 UTC (permalink / raw)
  To: Andrew Morton; +Cc: daniel.m.jordan, mhocko, linux-mm, linux-kernel

mm: Move recent_rotated pages calculation to shrink_inactive_list()

From: Kirill Tkhai <ktkhai@virtuozzo.com>

The patch moves the recent_rotated calculation from putback_inactive_pages()
to shrink_inactive_list(). This makes putback_inactive_pages()
look more similar to move_active_pages_to_lru().

To do that, we account activated pages in reclaim_stat::nr_activate.
Since a page may change its LRU type from anon to file cache
inside shrink_page_list() (see ClearPageSwapBacked()), we have to
account pages of both types. So, nr_activate becomes an array.

Previously we used nr_activate to account PGACTIVATE events, but now
we account them in the local pgactivate variable (since PGACTIVATE is
about the number of pages in general, not the sum of hpage_nr_pages()).
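
To make the split concrete, the activation accounting in
shrink_page_list() ends up looking like this (a condensed view of the
diff below, not an additional change):

	if (!PageMlocked(page)) {
		int type = page_is_file_cache(page);	/* 0 = anon, 1 = file */

		SetPageActive(page);
		/* PGACTIVATE event count stays per-page, via a local counter */
		pgactivate++;
		/* nr_activate[] is split by type and counts hpage_nr_pages() */
		stat->nr_activate[type] += hpage_nr_pages(page);
		count_memcg_page_event(page, PGACTIVATE);
	}

and shrink_inactive_list() then feeds it into recent_rotated:

	reclaim_stat->recent_rotated[0] = stat.nr_activate[0];
	reclaim_stat->recent_rotated[1] = stat.nr_activate[1];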

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>

v2.5: Update comment.
v2:   Update trace events.
---
 .../trace/postprocess/trace-vmscan-postprocess.pl  |    7 ++++---
 include/linux/vmstat.h                             |    2 +-
 include/trace/events/vmscan.h                      |   13 ++++++++-----
 mm/vmscan.c                                        |   15 +++++++--------
 4 files changed, 20 insertions(+), 17 deletions(-)

diff --git a/Documentation/trace/postprocess/trace-vmscan-postprocess.pl b/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
index 66bfd8396877..995da15b16ca 100644
--- a/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
+++ b/Documentation/trace/postprocess/trace-vmscan-postprocess.pl
@@ -113,7 +113,7 @@ my $regex_kswapd_wake_default = 'nid=([0-9]*) order=([0-9]*)';
 my $regex_kswapd_sleep_default = 'nid=([0-9]*)';
 my $regex_wakeup_kswapd_default = 'nid=([0-9]*) zid=([0-9]*) order=([0-9]*) gfp_flags=([A-Z_|]*)';
 my $regex_lru_isolate_default = 'isolate_mode=([0-9]*) classzone_idx=([0-9]*) order=([0-9]*) nr_requested=([0-9]*) nr_scanned=([0-9]*) nr_skipped=([0-9]*) nr_taken=([0-9]*) lru=([a-z_]*)';
-my $regex_lru_shrink_inactive_default = 'nid=([0-9]*) nr_scanned=([0-9]*) nr_reclaimed=([0-9]*) nr_dirty=([0-9]*) nr_writeback=([0-9]*) nr_congested=([0-9]*) nr_immediate=([0-9]*) nr_activate=([0-9]*) nr_ref_keep=([0-9]*) nr_unmap_fail=([0-9]*) priority=([0-9]*) flags=([A-Z_|]*)';
+my $regex_lru_shrink_inactive_default = 'nid=([0-9]*) nr_scanned=([0-9]*) nr_reclaimed=([0-9]*) nr_dirty=([0-9]*) nr_writeback=([0-9]*) nr_congested=([0-9]*) nr_immediate=([0-9]*) nr_activate_anon=([0-9]*) nr_activate_file=([0-9]*) nr_ref_keep=([0-9]*) nr_unmap_fail=([0-9]*) priority=([0-9]*) flags=([A-Z_|]*)';
 my $regex_lru_shrink_active_default = 'lru=([A-Z_]*) nr_scanned=([0-9]*) nr_rotated=([0-9]*) priority=([0-9]*)';
 my $regex_writepage_default = 'page=([0-9a-f]*) pfn=([0-9]*) flags=([A-Z_|]*)';
 
@@ -212,7 +212,8 @@ $regex_lru_shrink_inactive = generate_traceevent_regex(
 			"vmscan/mm_vmscan_lru_shrink_inactive",
 			$regex_lru_shrink_inactive_default,
 			"nid", "nr_scanned", "nr_reclaimed", "nr_dirty", "nr_writeback",
-			"nr_congested", "nr_immediate", "nr_activate", "nr_ref_keep",
+			"nr_congested", "nr_immediate", "nr_activate_anon",
+			"nr_activate_file", "nr_ref_keep",
 			"nr_unmap_fail", "priority", "flags");
 $regex_lru_shrink_active = generate_traceevent_regex(
 			"vmscan/mm_vmscan_lru_shrink_active",
@@ -407,7 +408,7 @@ sub process_events {
 			}
 
 			my $nr_reclaimed = $3;
-			my $flags = $12;
+			my $flags = $13;
 			my $file = 0;
 			if ($flags =~ /RECLAIM_WB_FILE/) {
 				$file = 1;
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 2db8d60981fe..bdeda4b079fe 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -26,7 +26,7 @@ struct reclaim_stat {
 	unsigned nr_congested;
 	unsigned nr_writeback;
 	unsigned nr_immediate;
-	unsigned nr_activate;
+	unsigned nr_activate[2];
 	unsigned nr_ref_keep;
 	unsigned nr_unmap_fail;
 };
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index a1cb91342231..4f0e45e90cfc 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -358,7 +358,8 @@ TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
 		__field(unsigned long, nr_writeback)
 		__field(unsigned long, nr_congested)
 		__field(unsigned long, nr_immediate)
-		__field(unsigned long, nr_activate)
+		__field(unsigned int, nr_activate0)
+		__field(unsigned int, nr_activate1)
 		__field(unsigned long, nr_ref_keep)
 		__field(unsigned long, nr_unmap_fail)
 		__field(int, priority)
@@ -373,20 +374,22 @@ TRACE_EVENT(mm_vmscan_lru_shrink_inactive,
 		__entry->nr_writeback = stat->nr_writeback;
 		__entry->nr_congested = stat->nr_congested;
 		__entry->nr_immediate = stat->nr_immediate;
-		__entry->nr_activate = stat->nr_activate;
+		__entry->nr_activate0 = stat->nr_activate[0];
+		__entry->nr_activate1 = stat->nr_activate[1];
 		__entry->nr_ref_keep = stat->nr_ref_keep;
 		__entry->nr_unmap_fail = stat->nr_unmap_fail;
 		__entry->priority = priority;
 		__entry->reclaim_flags = trace_shrink_flags(file);
 	),
 
-	TP_printk("nid=%d nr_scanned=%ld nr_reclaimed=%ld nr_dirty=%ld nr_writeback=%ld nr_congested=%ld nr_immediate=%ld nr_activate=%ld nr_ref_keep=%ld nr_unmap_fail=%ld priority=%d flags=%s",
+	TP_printk("nid=%d nr_scanned=%ld nr_reclaimed=%ld nr_dirty=%ld nr_writeback=%ld nr_congested=%ld nr_immediate=%ld nr_activate_anon=%d nr_activate_file=%d nr_ref_keep=%ld nr_unmap_fail=%ld priority=%d flags=%s",
 		__entry->nid,
 		__entry->nr_scanned, __entry->nr_reclaimed,
 		__entry->nr_dirty, __entry->nr_writeback,
 		__entry->nr_congested, __entry->nr_immediate,
-		__entry->nr_activate, __entry->nr_ref_keep,
-		__entry->nr_unmap_fail, __entry->priority,
+		__entry->nr_activate0, __entry->nr_activate1,
+		__entry->nr_ref_keep, __entry->nr_unmap_fail,
+		__entry->priority,
 		show_reclaim_flags(__entry->reclaim_flags))
 );
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ac4806f0f332..84542004a277 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1107,6 +1107,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 	LIST_HEAD(ret_pages);
 	LIST_HEAD(free_pages);
 	unsigned nr_reclaimed = 0;
+	unsigned pgactivate = 0;
 
 	memset(stat, 0, sizeof(*stat));
 	cond_resched();
@@ -1466,8 +1467,10 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			try_to_free_swap(page);
 		VM_BUG_ON_PAGE(PageActive(page), page);
 		if (!PageMlocked(page)) {
+			int type = page_is_file_cache(page);
 			SetPageActive(page);
-			stat->nr_activate++;
+			pgactivate++;
+			stat->nr_activate[type] += hpage_nr_pages(page);
 			count_memcg_page_event(page, PGACTIVATE);
 		}
 keep_locked:
@@ -1482,7 +1485,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 	free_unref_page_list(&free_pages);
 
 	list_splice(&ret_pages, page_list);
-	count_vm_events(PGACTIVATE, stat->nr_activate);
+	count_vm_events(PGACTIVATE, pgactivate);
 
 	return nr_reclaimed;
 }
@@ -1807,7 +1810,6 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 static noinline_for_stack void
 putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 {
-	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	LIST_HEAD(pages_to_free);
 
@@ -1833,11 +1835,6 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 		lru = page_lru(page);
 		add_page_to_lru_list(page, lruvec, lru);
 
-		if (is_active_lru(lru)) {
-			int file = is_file_lru(lru);
-			int numpages = hpage_nr_pages(page);
-			reclaim_stat->recent_rotated[file] += numpages;
-		}
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
@@ -1945,6 +1942,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 		count_memcg_events(lruvec_memcg(lruvec), PGSTEAL_DIRECT,
 				   nr_reclaimed);
 	}
+	reclaim_stat->recent_rotated[0] = stat.nr_activate[0];
+	reclaim_stat->recent_rotated[1] = stat.nr_activate[1];
 
 	putback_inactive_pages(lruvec, &page_list);
 


* Re: [PATCH v2 4/4] mm: Generalize putback scan functions
  2019-02-14 10:35 ` [PATCH v2 4/4] mm: Generalize putback scan functions Kirill Tkhai
@ 2019-02-15 20:39   ` Daniel Jordan
  2019-02-15 22:01     ` Kirill Tkhai
  0 siblings, 1 reply; 11+ messages in thread
From: Daniel Jordan @ 2019-02-15 20:39 UTC (permalink / raw)
  To: Kirill Tkhai; +Cc: akpm, daniel.m.jordan, mhocko, linux-mm, linux-kernel

On Thu, Feb 14, 2019 at 01:35:37PM +0300, Kirill Tkhai wrote:
> +static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> +						     struct list_head *list)
>  {
>  	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> +	int nr_pages, nr_moved = 0;
>  	LIST_HEAD(pages_to_free);
> +	struct page *page;
> +	enum lru_list lru;
>  
> -	/*
> -	 * Put back any unfreeable pages.
> -	 */
> -	while (!list_empty(page_list)) {
> -		struct page *page = lru_to_page(page_list);
> -		int lru;
> -
> +	while (!list_empty(list)) {
> +		page = lru_to_page(list);
>  		VM_BUG_ON_PAGE(PageLRU(page), page);
> -		list_del(&page->lru);
>  		if (unlikely(!page_evictable(page))) {
> +			list_del_init(&page->lru);

Why change to list_del_init?  It's more special than list_del but doesn't seem
needed since the page is list_add()ed later.
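
For reference, the difference between the two (a simplified sketch of
the usual <linux/list.h> definitions, not code from this patch):

	static inline void list_del(struct list_head *entry)
	{
		__list_del_entry(entry);
		entry->next = LIST_POISON1;	/* poisoned; unsafe to test or delete again */
		entry->prev = LIST_POISON2;
	}

	static inline void list_del_init(struct list_head *entry)
	{
		__list_del_entry(entry);
		INIT_LIST_HEAD(entry);		/* stays a valid empty list; safe to test/re-add */
	}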

That postprocess script from patch 1 seems kinda broken before this series, and
still is.  Not that it should block this change.  Out of curiosity did you get
it to run?


* Re: [PATCH v2 4/4] mm: Generalize putback scan functions
  2019-02-15 20:39   ` Daniel Jordan
@ 2019-02-15 22:01     ` Kirill Tkhai
  2019-02-15 22:13       ` Daniel Jordan
  0 siblings, 1 reply; 11+ messages in thread
From: Kirill Tkhai @ 2019-02-15 22:01 UTC (permalink / raw)
  To: Daniel Jordan; +Cc: akpm, mhocko, linux-mm, linux-kernel

On 15.02.2019 23:39, Daniel Jordan wrote:
> On Thu, Feb 14, 2019 at 01:35:37PM +0300, Kirill Tkhai wrote:
>> +static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>> +						     struct list_head *list)
>>  {
>>  	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>> +	int nr_pages, nr_moved = 0;
>>  	LIST_HEAD(pages_to_free);
>> +	struct page *page;
>> +	enum lru_list lru;
>>  
>> -	/*
>> -	 * Put back any unfreeable pages.
>> -	 */
>> -	while (!list_empty(page_list)) {
>> -		struct page *page = lru_to_page(page_list);
>> -		int lru;
>> -
>> +	while (!list_empty(list)) {
>> +		page = lru_to_page(list);
>>  		VM_BUG_ON_PAGE(PageLRU(page), page);
>> -		list_del(&page->lru);
>>  		if (unlikely(!page_evictable(page))) {
>> +			list_del_init(&page->lru);
> 
> Why change to list_del_init?  It's more special than list_del but doesn't seem
> needed since the page is list_add()ed later.

Nothing special here, I'll remove the _init.
 
> That postprocess script from patch 1 seems kinda broken before this series, and
> still is.  Not that it should block this change.  Out of curiosity did you get
> it to run?

I fixed all the new warnings that come with my changes, so the patch does not
make the script worse.

If you fix all the already existing warnings by renaming variables in the
appropriate places, the script will work after a fashion. But I'm not sure that
is enough to get correct results, and I have no great desire to dive into Perl
to fix warnings introduced by other people, so I don't plan to do anything else
with this script.

* Re: [PATCH v2 4/4] mm: Generalize putback scan functions
  2019-02-15 22:01     ` Kirill Tkhai
@ 2019-02-15 22:13       ` Daniel Jordan
  2019-02-18  8:20         ` Kirill Tkhai
  0 siblings, 1 reply; 11+ messages in thread
From: Daniel Jordan @ 2019-02-15 22:13 UTC (permalink / raw)
  To: Kirill Tkhai; +Cc: Daniel Jordan, akpm, mhocko, linux-mm, linux-kernel

On Fri, Feb 15, 2019 at 10:01:05PM +0000, Kirill Tkhai wrote:
> On 15.02.2019 23:39, Daniel Jordan wrote:
> > On Thu, Feb 14, 2019 at 01:35:37PM +0300, Kirill Tkhai wrote:
> >> +static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> >> +						     struct list_head *list)
> >>  {
> >>  	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> >> +	int nr_pages, nr_moved = 0;
> >>  	LIST_HEAD(pages_to_free);
> >> +	struct page *page;
> >> +	enum lru_list lru;
> >>  
> >> -	/*
> >> -	 * Put back any unfreeable pages.
> >> -	 */
> >> -	while (!list_empty(page_list)) {
> >> -		struct page *page = lru_to_page(page_list);
> >> -		int lru;
> >> -
> >> +	while (!list_empty(list)) {
> >> +		page = lru_to_page(list);
> >>  		VM_BUG_ON_PAGE(PageLRU(page), page);
> >> -		list_del(&page->lru);
> >>  		if (unlikely(!page_evictable(page))) {
> >> +			list_del_init(&page->lru);
> > 
> > Why change to list_del_init?  It's more special than list_del but doesn't seem
> > needed since the page is list_add()ed later.
> 
> Nothing special here, I'll remove the _init.
>  
> > That postprocess script from patch 1 seems kinda broken before this series, and
> > still is.  Not that it should block this change.  Out of curiosity did you get
> > it to run?
> 
> I fixed all the new warnings that come with my changes, so the patch does not
> make the script worse.
> 
> If you fix all the already existing warnings by renaming variables in the
> appropriate places, the script will work after a fashion. But I'm not sure that
> is enough to get correct results, and I have no great desire to dive into Perl
> to fix warnings introduced by other people, so I don't plan to do anything else
> with this script.

Ok, was asking in case I was doing something wrong.

With the above change, for the series, you can add

Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>


* Re: [PATCH v2 4/4] mm: Generalize putback scan functions
  2019-02-15 22:13       ` Daniel Jordan
@ 2019-02-18  8:20         ` Kirill Tkhai
  0 siblings, 0 replies; 11+ messages in thread
From: Kirill Tkhai @ 2019-02-18  8:20 UTC (permalink / raw)
  To: Daniel Jordan; +Cc: akpm, mhocko, linux-mm, linux-kernel



On 16.02.2019 01:13, Daniel Jordan wrote:
> On Fri, Feb 15, 2019 at 10:01:05PM +0000, Kirill Tkhai wrote:
>> On 15.02.2019 23:39, Daniel Jordan wrote:
>>> On Thu, Feb 14, 2019 at 01:35:37PM +0300, Kirill Tkhai wrote:
>>>> +static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>>>> +						     struct list_head *list)
>>>>  {
>>>>  	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>>>> +	int nr_pages, nr_moved = 0;
>>>>  	LIST_HEAD(pages_to_free);
>>>> +	struct page *page;
>>>> +	enum lru_list lru;
>>>>  
>>>> -	/*
>>>> -	 * Put back any unfreeable pages.
>>>> -	 */
>>>> -	while (!list_empty(page_list)) {
>>>> -		struct page *page = lru_to_page(page_list);
>>>> -		int lru;
>>>> -
>>>> +	while (!list_empty(list)) {
>>>> +		page = lru_to_page(list);
>>>>  		VM_BUG_ON_PAGE(PageLRU(page), page);
>>>> -		list_del(&page->lru);
>>>>  		if (unlikely(!page_evictable(page))) {
>>>> +			list_del_init(&page->lru);
>>>
>>> Why change to list_del_init?  It's more special than list_del but doesn't seem
>>> needed since the page is list_add()ed later.
>>
>> Nothing special here, I'll remove the _init.
>>  
>>> That postprocess script from patch 1 seems kinda broken before this series, and
>>> still is.  Not that it should block this change.  Out of curiosity did you get
>>> it to run?
>>
>> I fixed all the new warnings that come with my changes, so the patch does not
>> make the script worse.
>>
>> If you fix all the already existing warnings by renaming variables in the
>> appropriate places, the script will work after a fashion. But I'm not sure that
>> is enough to get correct results, and I have no great desire to dive into Perl
>> to fix warnings introduced by other people, so I don't plan to do anything else
>> with this script.
> 
> Ok, was asking in case I was doing something wrong.
> 
> With the above change, for the series, you can add
> 
> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>

Ok, thanks.

