* [PATCH 0/4] mm: Generalize putback functions
From: Kirill Tkhai @ 2019-02-12 15:13 UTC
  To: akpm, mhocko, ktkhai, linux-mm, linux-kernel

Functions putback_inactive_pages() and move_active_pages_to_lru()
are very similar, so this patchset merges them into a single function.

---

Kirill Tkhai (4):
      mm: Move recent_rotated pages calculation to shrink_inactive_list()
      mm: Move nr_deactivate accounting to shrink_active_list()
      mm: Remove pages_to_free argument of move_active_pages_to_lru()
      mm: Generalize putback scan functions


 include/linux/vmstat.h |    2 -
 mm/vmscan.c            |  150 ++++++++++++++++++------------------------------
 2 files changed, 57 insertions(+), 95 deletions(-)

--
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>


* [PATCH 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list()
From: Kirill Tkhai @ 2019-02-12 15:14 UTC
  To: akpm, mhocko, ktkhai, linux-mm, linux-kernel

Currently, struct reclaim_stat::nr_activate is effectively a local
variable, used only in shrink_page_list(). This patch introduces
another local variable, pgactivate, to use instead of it, and reuses
nr_activate to account the number of activated pages.

Note that nr_activate needs to be an array, since the type of a page
may change during shrink_page_list() (see ClearPageSwapBacked()).
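
For illustration, a sketch of the accounting this patch adds (assuming
page_is_file_cache() == !PageSwapBacked(), as in this kernel): the type
index has to be sampled per page at activation time, because the
lazy-free machinery can flip PageSwapBacked while shrink_page_list()
runs:

	int type = page_is_file_cache(page);	/* 0 == anon, 1 == file */
	SetPageActive(page);
	stat->nr_activate[type] += hpage_nr_pages(page);	/* THP-aware count */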

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 include/linux/vmstat.h |    2 +-
 mm/vmscan.c            |   15 +++++++--------
 2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 2db8d60981fe..bdeda4b079fe 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -26,7 +26,7 @@ struct reclaim_stat {
 	unsigned nr_congested;
 	unsigned nr_writeback;
 	unsigned nr_immediate;
-	unsigned nr_activate;
+	unsigned nr_activate[2];
 	unsigned nr_ref_keep;
 	unsigned nr_unmap_fail;
 };
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ac4806f0f332..84542004a277 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1107,6 +1107,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 	LIST_HEAD(ret_pages);
 	LIST_HEAD(free_pages);
 	unsigned nr_reclaimed = 0;
+	unsigned pgactivate = 0;
 
 	memset(stat, 0, sizeof(*stat));
 	cond_resched();
@@ -1466,8 +1467,10 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			try_to_free_swap(page);
 		VM_BUG_ON_PAGE(PageActive(page), page);
 		if (!PageMlocked(page)) {
+			int type = page_is_file_cache(page);
 			SetPageActive(page);
-			stat->nr_activate++;
+			pgactivate++;
+			stat->nr_activate[type] += hpage_nr_pages(page);
 			count_memcg_page_event(page, PGACTIVATE);
 		}
 keep_locked:
@@ -1482,7 +1485,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 	free_unref_page_list(&free_pages);
 
 	list_splice(&ret_pages, page_list);
-	count_vm_events(PGACTIVATE, stat->nr_activate);
+	count_vm_events(PGACTIVATE, pgactivate);
 
 	return nr_reclaimed;
 }
@@ -1807,7 +1810,6 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 static noinline_for_stack void
 putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 {
-	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	LIST_HEAD(pages_to_free);
 
@@ -1833,11 +1835,6 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 		lru = page_lru(page);
 		add_page_to_lru_list(page, lruvec, lru);
 
-		if (is_active_lru(lru)) {
-			int file = is_file_lru(lru);
-			int numpages = hpage_nr_pages(page);
-			reclaim_stat->recent_rotated[file] += numpages;
-		}
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
@@ -1945,6 +1942,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 		count_memcg_events(lruvec_memcg(lruvec), PGSTEAL_DIRECT,
 				   nr_reclaimed);
 	}
+	reclaim_stat->recent_rotated[0] = stat.nr_activate[0];
+	reclaim_stat->recent_rotated[1] = stat.nr_activate[1];
 
 	putback_inactive_pages(lruvec, &page_list);
 



* [PATCH 2/4] mm: Move nr_deactivate accounting to shrink_active_list()
From: Kirill Tkhai @ 2019-02-12 15:14 UTC
  To: akpm, mhocko, ktkhai, linux-mm, linux-kernel

We know, which LRU is not active.
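
For context, the enum layout this relies on (quoted from mmzone.h of
that era, for illustration):

	enum lru_list {
		LRU_INACTIVE_ANON = LRU_BASE,
		LRU_ACTIVE_ANON = LRU_BASE + LRU_ACTIVE,
		LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,
		LRU_ACTIVE_FILE = LRU_BASE + LRU_FILE + LRU_ACTIVE,
		LRU_UNEVICTABLE,
		NR_LRU_LISTS
	};

So "lru - LRU_ACTIVE" in shrink_active_list() always names the inactive
counterpart of the LRU being scanned, and the PGDEACTIVATE accounting can
move to that single, known-inactive call site.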

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/vmscan.c |   10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 84542004a277..8d7d55e71511 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2040,12 +2040,6 @@ static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
 		}
 	}
 
-	if (!is_active_lru(lru)) {
-		__count_vm_events(PGDEACTIVATE, nr_moved);
-		count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
-				   nr_moved);
-	}
-
 	return nr_moved;
 }
 
@@ -2137,6 +2131,10 @@ static void shrink_active_list(unsigned long nr_to_scan,
 
 	nr_activate = move_active_pages_to_lru(lruvec, &l_active, &l_hold, lru);
 	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, &l_hold, lru - LRU_ACTIVE);
+
+	__count_vm_events(PGDEACTIVATE, nr_deactivate);
+	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
+
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 	spin_unlock_irq(&pgdat->lru_lock);
 



* [PATCH 3/4] mm: Remove pages_to_free argument of move_active_pages_to_lru()
From: Kirill Tkhai @ 2019-02-12 15:14 UTC
  To: akpm, mhocko, ktkhai, linux-mm, linux-kernel

We may use the input list argument as an output argument too.
This makes the function more similar to putback_inactive_pages().
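
A minimal sketch of the resulting convention (names as in the patch): the
caller passes one list in, and the same list head carries the leftover
pages back out for freeing:

	LIST_HEAD(l_active);
	/* ... isolate pages onto l_active under lru_lock ... */
	nr_activate = move_active_pages_to_lru(lruvec, &l_active, lru);
	/* l_active now holds only pages whose last reference we hold */
	mem_cgroup_uncharge_list(&l_active);
	free_unref_page_list(&l_active);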

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/vmscan.c |   19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8d7d55e71511..88fa71e4c28f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2004,10 +2004,10 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 
 static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
 				     struct list_head *list,
-				     struct list_head *pages_to_free,
 				     enum lru_list lru)
 {
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+	LIST_HEAD(pages_to_free);
 	struct page *page;
 	int nr_pages;
 	int nr_moved = 0;
@@ -2034,12 +2034,17 @@ static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
 				(*get_compound_page_dtor(page))(page);
 				spin_lock_irq(&pgdat->lru_lock);
 			} else
-				list_add(&page->lru, pages_to_free);
+				list_add(&page->lru, &pages_to_free);
 		} else {
 			nr_moved += nr_pages;
 		}
 	}
 
+	/*
+	 * To save our caller's stack, now use input list for pages to free.
+	 */
+	list_splice(&pages_to_free, list);
+
 	return nr_moved;
 }
 
@@ -2129,8 +2134,10 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	reclaim_stat->recent_rotated[file] += nr_rotated;
 
-	nr_activate = move_active_pages_to_lru(lruvec, &l_active, &l_hold, lru);
-	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, &l_hold, lru - LRU_ACTIVE);
+	nr_activate = move_active_pages_to_lru(lruvec, &l_active, lru);
+	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, lru - LRU_ACTIVE);
+	/* Keep all free pages are in l_active list */
+	list_splice(&l_inactive, &l_active);
 
 	__count_vm_events(PGDEACTIVATE, nr_deactivate);
 	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
@@ -2138,8 +2145,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 	spin_unlock_irq(&pgdat->lru_lock);
 
-	mem_cgroup_uncharge_list(&l_hold);
-	free_unref_page_list(&l_hold);
+	mem_cgroup_uncharge_list(&l_active);
+	free_unref_page_list(&l_active);
 	trace_mm_vmscan_lru_shrink_active(pgdat->node_id, nr_taken, nr_activate,
 			nr_deactivate, nr_rotated, sc->priority, file);
 }



* [PATCH 4/4] mm: Generalize putback scan functions
From: Kirill Tkhai @ 2019-02-12 15:14 UTC
  To: akpm, mhocko, ktkhai, linux-mm, linux-kernel

This combines the two similar functions move_active_pages_to_lru()
and putback_inactive_pages() into a single move_pages_to_lru().
This removes duplicate code and makes the object file smaller.

Before:
   text	   data	    bss	    dec	    hex	filename
  57082	   4732	    128	  61942	   f1f6	mm/vmscan.o
After:
   text	   data	    bss	    dec	    hex	filename
  55112	   4600	    128	  59840	   e9c0	mm/vmscan.o
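
After the series, both reclaim paths call the same helper. A condensed
sketch of the resulting call sites (taken from the diffs below):

	/* shrink_inactive_list() */
	move_pages_to_lru(lruvec, &page_list);

	/* shrink_active_list() */
	nr_activate = move_pages_to_lru(lruvec, &l_active);
	nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
	/* free pages from both lists gather in l_active */
	list_splice(&l_inactive, &l_active);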

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 mm/vmscan.c |  124 ++++++++++++++++++++---------------------------------------
 1 file changed, 41 insertions(+), 83 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 88fa71e4c28f..66e70cf1dd94 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1807,33 +1807,53 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 	return isolated > inactive;
 }
 
-static noinline_for_stack void
-putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
+/*
+ * This moves pages from @list to corresponding LRU list.
+ *
+ * We move them the other way if the page is referenced by one or more
+ * processes, from rmap.
+ *
+ * If the pages are mostly unmapped, the processing is fast and it is
+ * appropriate to hold zone_lru_lock across the whole operation.  But if
+ * the pages are mapped, the processing is slow (page_referenced()) so we
+ * should drop zone_lru_lock around each page.  It's impossible to balance
+ * this, so instead we remove the pages from the LRU while processing them.
+ * It is safe to rely on PG_active against the non-LRU pages in here because
+ * nobody will play with that bit on a non-LRU page.
+ *
+ * The downside is that we have to touch page->_refcount against each page.
+ * But we had to alter page->flags anyway.
+ *
+ * Returns the number of pages moved to the given lruvec.
+ */
+
+static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
+						     struct list_head *list)
 {
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+	int nr_pages, nr_moved = 0;
 	LIST_HEAD(pages_to_free);
+	struct page *page;
+	enum lru_list lru;
 
-	/*
-	 * Put back any unfreeable pages.
-	 */
-	while (!list_empty(page_list)) {
-		struct page *page = lru_to_page(page_list);
-		int lru;
-
-		VM_BUG_ON_PAGE(PageLRU(page), page);
-		list_del(&page->lru);
+	while (!list_empty(list)) {
+		page = lru_to_page(list);
 		if (unlikely(!page_evictable(page))) {
+			list_del_init(&page->lru);
 			spin_unlock_irq(&pgdat->lru_lock);
 			putback_lru_page(page);
 			spin_lock_irq(&pgdat->lru_lock);
 			continue;
 		}
-
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
+		VM_BUG_ON_PAGE(PageLRU(page), page);
 		SetPageLRU(page);
 		lru = page_lru(page);
-		add_page_to_lru_list(page, lruvec, lru);
+
+		nr_pages = hpage_nr_pages(page);
+		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
+		list_move(&page->lru, &lruvec->lists[lru]);
 
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
@@ -1847,13 +1867,17 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 				spin_lock_irq(&pgdat->lru_lock);
 			} else
 				list_add(&page->lru, &pages_to_free);
+		} else {
+			nr_moved += nr_pages;
 		}
 	}
 
 	/*
 	 * To save our caller's stack, now use input list for pages to free.
 	 */
-	list_splice(&pages_to_free, page_list);
+	list_splice(&pages_to_free, list);
+
+	return nr_moved;
 }
 
 /*
@@ -1945,7 +1969,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	reclaim_stat->recent_rotated[0] = stat.nr_activate[0];
 	reclaim_stat->recent_rotated[1] = stat.nr_activate[1];
 
-	putback_inactive_pages(lruvec, &page_list);
+	move_pages_to_lru(lruvec, &page_list);
 
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 
@@ -1982,72 +2006,6 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	return nr_reclaimed;
 }
 
-/*
- * This moves pages from the active list to the inactive list.
- *
- * We move them the other way if the page is referenced by one or more
- * processes, from rmap.
- *
- * If the pages are mostly unmapped, the processing is fast and it is
- * appropriate to hold zone_lru_lock across the whole operation.  But if
- * the pages are mapped, the processing is slow (page_referenced()) so we
- * should drop zone_lru_lock around each page.  It's impossible to balance
- * this, so instead we remove the pages from the LRU while processing them.
- * It is safe to rely on PG_active against the non-LRU pages in here because
- * nobody will play with that bit on a non-LRU page.
- *
- * The downside is that we have to touch page->_refcount against each page.
- * But we had to alter page->flags anyway.
- *
- * Returns the number of pages moved to the given lru.
- */
-
-static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
-				     struct list_head *list,
-				     enum lru_list lru)
-{
-	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
-	LIST_HEAD(pages_to_free);
-	struct page *page;
-	int nr_pages;
-	int nr_moved = 0;
-
-	while (!list_empty(list)) {
-		page = lru_to_page(list);
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-
-		VM_BUG_ON_PAGE(PageLRU(page), page);
-		SetPageLRU(page);
-
-		nr_pages = hpage_nr_pages(page);
-		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
-		list_move(&page->lru, &lruvec->lists[lru]);
-
-		if (put_page_testzero(page)) {
-			__ClearPageLRU(page);
-			__ClearPageActive(page);
-			del_page_from_lru_list(page, lruvec, lru);
-
-			if (unlikely(PageCompound(page))) {
-				spin_unlock_irq(&pgdat->lru_lock);
-				mem_cgroup_uncharge(page);
-				(*get_compound_page_dtor(page))(page);
-				spin_lock_irq(&pgdat->lru_lock);
-			} else
-				list_add(&page->lru, &pages_to_free);
-		} else {
-			nr_moved += nr_pages;
-		}
-	}
-
-	/*
-	 * To save our caller's stack, now use input list for pages to free.
-	 */
-	list_splice(&pages_to_free, list);
-
-	return nr_moved;
-}
-
 static void shrink_active_list(unsigned long nr_to_scan,
 			       struct lruvec *lruvec,
 			       struct scan_control *sc,
@@ -2134,8 +2092,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	reclaim_stat->recent_rotated[file] += nr_rotated;
 
-	nr_activate = move_active_pages_to_lru(lruvec, &l_active, lru);
-	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, lru - LRU_ACTIVE);
+	nr_activate = move_pages_to_lru(lruvec, &l_active);
+	nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
 	/* Keep all free pages are in l_active list */
 	list_splice(&l_inactive, &l_active);
 



* Re: [PATCH 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list()
From: Daniel Jordan @ 2019-02-13 17:33 UTC
  To: Kirill Tkhai; +Cc: akpm, mhocko, linux-mm, linux-kernel

On Tue, Feb 12, 2019 at 06:14:00PM +0300, Kirill Tkhai wrote:
> Currently, struct reclaim_stat::nr_activate is effectively a local
> variable, used only in shrink_page_list(). This patch introduces
> another local variable, pgactivate, to use instead of it, and reuses
> nr_activate to account the number of activated pages.
> 
> Note that nr_activate needs to be an array, since the type of a page
> may change during shrink_page_list() (see ClearPageSwapBacked()).
> 
> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
> ---
>  include/linux/vmstat.h |    2 +-
>  mm/vmscan.c            |   15 +++++++--------

include/trace/events/vmscan.h needs to account for the array-ification of
nr_activate too.
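
(For illustration, one hypothetical shape for that fixup -- the field
names below are invented here, not taken from a posted patch:

	__field(unsigned int, nr_activate0)
	__field(unsigned int, nr_activate1)
	...
	__entry->nr_activate0 = stat->nr_activate[0];
	__entry->nr_activate1 = stat->nr_activate[1];
)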


* Re: [PATCH 2/4] mm: Move nr_deactivate accounting to shrink_active_list()
From: Daniel Jordan @ 2019-02-13 19:13 UTC
  To: Kirill Tkhai; +Cc: akpm, mhocko, linux-mm, linux-kernel

On Tue, Feb 12, 2019 at 06:14:05PM +0300, Kirill Tkhai wrote:
> We know, which LRU is not active.

s/,//

> 
> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
> ---
>  mm/vmscan.c |   10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 84542004a277..8d7d55e71511 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2040,12 +2040,6 @@ static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
>  		}
>  	}
>  
> -	if (!is_active_lru(lru)) {
> -		__count_vm_events(PGDEACTIVATE, nr_moved);
> -		count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
> -				   nr_moved);
> -	}
> -
>  	return nr_moved;
>  }
>  
> @@ -2137,6 +2131,10 @@ static void shrink_active_list(unsigned long nr_to_scan,
>  
>  	nr_activate = move_active_pages_to_lru(lruvec, &l_active, &l_hold, lru);
>  	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, &l_hold, lru - LRU_ACTIVE);
> +
> +	__count_vm_events(PGDEACTIVATE, nr_deactivate);
> +	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);

Nice, you're using the irq-unsafe one since irqs are already disabled.  I guess
this was missed in c3cc39118c361.  Do you want to insert a patch before this
one that converts all instances of this pattern in vmscan.c over?

There's a similar oversight in lru_lazyfree_fn with count_memcg_page_event, but
that'd mean __count_memcg_page_event which is probably overkill.
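
(For reference, a sketch of why the __-prefixed variants are safe here:
both call sites sit inside a spin_lock_irq(&pgdat->lru_lock) section, so
interrupts are already disabled:

	spin_lock_irq(&pgdat->lru_lock);
	...
	__count_vm_events(PGDEACTIVATE, nr_deactivate);	/* no irq-save needed */
	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
	...
	spin_unlock_irq(&pgdat->lru_lock);
)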


* Re: [PATCH 3/4] mm: Remove pages_to_free argument of move_active_pages_to_lru()
From: Daniel Jordan @ 2019-02-13 19:14 UTC
  To: Kirill Tkhai; +Cc: akpm, mhocko, linux-mm, linux-kernel

On Tue, Feb 12, 2019 at 06:14:11PM +0300, Kirill Tkhai wrote:
> +	/* Keep all free pages are in l_active list */

s/are//


* Re: [PATCH 4/4] mm: Generalize putback scan functions
From: Daniel Jordan @ 2019-02-13 19:20 UTC
  To: Kirill Tkhai; +Cc: akpm, mhocko, linux-mm, linux-kernel

On Tue, Feb 12, 2019 at 06:14:16PM +0300, Kirill Tkhai wrote:
> +static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> +						     struct list_head *list)
>  {
>  	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> +	int nr_pages, nr_moved = 0;
>  	LIST_HEAD(pages_to_free);
> +	struct page *page;
> +	enum lru_list lru;
>  
> -	/*
> -	 * Put back any unfreeable pages.
> -	 */
> -	while (!list_empty(page_list)) {
> -		struct page *page = lru_to_page(page_list);
> -		int lru;
> -
> -		VM_BUG_ON_PAGE(PageLRU(page), page);
> -		list_del(&page->lru);
> +	while (!list_empty(list)) {
> +		page = lru_to_page(list);
>  		if (unlikely(!page_evictable(page))) {
> +			list_del_init(&page->lru);
>  			spin_unlock_irq(&pgdat->lru_lock);
>  			putback_lru_page(page);
>  			spin_lock_irq(&pgdat->lru_lock);
>  			continue;
>  		}
> -
>  		lruvec = mem_cgroup_page_lruvec(page, pgdat);
>  
> +		VM_BUG_ON_PAGE(PageLRU(page), page);

Nit, but moving the BUG down here weakens it a little bit since we miss
checking it if the page is unevictable.


Maybe worth pointing out in the changelog that the main difference from
combining these two functions is that we're now checking for !page_evictable
coming from shrink_active_list, which shouldn't change any behavior since that
path works with evictable pages only.
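
(A sketch of one way to keep the assertion strong -- not code posted in
this thread: do the check before the evictable test, so it covers every
page on the list:

	while (!list_empty(list)) {
		page = lru_to_page(list);
		VM_BUG_ON_PAGE(PageLRU(page), page);	/* now hit for unevictable pages too */
		if (unlikely(!page_evictable(page))) {
			list_del_init(&page->lru);
			spin_unlock_irq(&pgdat->lru_lock);
			putback_lru_page(page);
			spin_lock_irq(&pgdat->lru_lock);
			continue;
		}
		...
	}
)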


* Re: [PATCH 1/4] mm: Move recent_rotated pages calculation to shrink_inactive_list()
From: Kirill Tkhai @ 2019-02-14 10:19 UTC
  To: Daniel Jordan; +Cc: akpm, mhocko, linux-mm, linux-kernel

On 13.02.2019 20:33, Daniel Jordan wrote:
> On Tue, Feb 12, 2019 at 06:14:00PM +0300, Kirill Tkhai wrote:
>> Currently, struct reclaim_stat::nr_activate is effectively a local
>> variable, used only in shrink_page_list(). This patch introduces
>> another local variable, pgactivate, to use instead of it, and reuses
>> nr_activate to account the number of activated pages.
>>
>> Note that nr_activate needs to be an array, since the type of a page
>> may change during shrink_page_list() (see ClearPageSwapBacked()).
>>
>> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
>> ---
>>  include/linux/vmstat.h |    2 +-
>>  mm/vmscan.c            |   15 +++++++--------
> 
> include/trace/events/vmscan.h needs to account for the array-ification of
> nr_activate too.

Yeah, thanks.


* Re: [PATCH 4/4] mm: Generalize putback scan functions
From: Kirill Tkhai @ 2019-02-14 10:29 UTC
  To: Daniel Jordan; +Cc: akpm, mhocko, linux-mm, linux-kernel

On 13.02.2019 22:20, Daniel Jordan wrote:
> On Tue, Feb 12, 2019 at 06:14:16PM +0300, Kirill Tkhai wrote:
>> +static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>> +						     struct list_head *list)
>>  {
>>  	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>> +	int nr_pages, nr_moved = 0;
>>  	LIST_HEAD(pages_to_free);
>> +	struct page *page;
>> +	enum lru_list lru;
>>  
>> -	/*
>> -	 * Put back any unfreeable pages.
>> -	 */
>> -	while (!list_empty(page_list)) {
>> -		struct page *page = lru_to_page(page_list);
>> -		int lru;
>> -
>> -		VM_BUG_ON_PAGE(PageLRU(page), page);
>> -		list_del(&page->lru);
>> +	while (!list_empty(list)) {
>> +		page = lru_to_page(list);
>>  		if (unlikely(!page_evictable(page))) {
>> +			list_del_init(&page->lru);
>>  			spin_unlock_irq(&pgdat->lru_lock);
>>  			putback_lru_page(page);
>>  			spin_lock_irq(&pgdat->lru_lock);
>>  			continue;
>>  		}
>> -
>>  		lruvec = mem_cgroup_page_lruvec(page, pgdat);
>>  
>> +		VM_BUG_ON_PAGE(PageLRU(page), page);
> 
> Nit, but moving the BUG down here weakens it a little bit since we miss
> checking it if the page is unevictable.

Yeah, this is bad.
 
> Maybe worth pointing out in the changelog that the main difference from
> combining these two functions is that we're now checking for !page_evictable
> coming from shrink_active_list, which shouldn't change any behavior since that
> path works with evictable pages only.

Sounds good.


* Re: [PATCH 2/4] mm: Move nr_deactivate accounting to shrink_active_list()
From: Kirill Tkhai @ 2019-02-14 10:30 UTC
  To: Daniel Jordan; +Cc: akpm, mhocko, linux-mm, linux-kernel

On 13.02.2019 22:13, Daniel Jordan wrote:
> On Tue, Feb 12, 2019 at 06:14:05PM +0300, Kirill Tkhai wrote:
>> We know, which LRU is not active.
> 
> s/,//
> 
>>
>> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
>> ---
>>  mm/vmscan.c |   10 ++++------
>>  1 file changed, 4 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 84542004a277..8d7d55e71511 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -2040,12 +2040,6 @@ static unsigned move_active_pages_to_lru(struct lruvec *lruvec,
>>  		}
>>  	}
>>  
>> -	if (!is_active_lru(lru)) {
>> -		__count_vm_events(PGDEACTIVATE, nr_moved);
>> -		count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
>> -				   nr_moved);
>> -	}
>> -
>>  	return nr_moved;
>>  }
>>  
>> @@ -2137,6 +2131,10 @@ static void shrink_active_list(unsigned long nr_to_scan,
>>  
>>  	nr_activate = move_active_pages_to_lru(lruvec, &l_active, &l_hold, lru);
>>  	nr_deactivate = move_active_pages_to_lru(lruvec, &l_inactive, &l_hold, lru - LRU_ACTIVE);
>> +
>> +	__count_vm_events(PGDEACTIVATE, nr_deactivate);
>> +	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
> 
> Nice, you're using the irq-unsafe one since irqs are already disabled.  I guess
> this was missed in c3cc39118c361.  Do you want to insert a patch before this
> one that converts all instances of this pattern in vmscan.c over?

I had that in my plans, but I'm not sure I want to do it in this
patchset. Maybe as a follow-up later on top of this.

> There's a similar oversight in lru_lazyfree_fn with count_memcg_page_event, but
> that'd mean __count_memcg_page_event which is probably overkill.


* Re: [PATCH 3/4] mm: Remove pages_to_free argument of move_active_pages_to_lru()
From: Kirill Tkhai @ 2019-02-14 10:36 UTC
  To: Daniel Jordan; +Cc: akpm, mhocko, linux-mm, linux-kernel

On 13.02.2019 22:14, Daniel Jordan wrote:
> On Tue, Feb 12, 2019 at 06:14:11PM +0300, Kirill Tkhai wrote:
>> +	/* Keep all free pages are in l_active list */
> 
> s/are//

Yeah, thanks.

