* [PATCH 1/3] mm: swap: fix vmstats for huge pages
  From: Shakeel Butt @ 2020-05-08 21:22 UTC
  To: Mel Gorman, Johannes Weiner, Roman Gushchin, Michal Hocko
  Cc: Andrew Morton, Yafang Shao, linux-mm, cgroups, linux-kernel, Shakeel Butt

Many of the callbacks called by pagevec_lru_move_fn() do not correctly
update the vmstats for huge pages. Fix that. Also make
__pagevec_lru_add_fn() use the irq-unsafe alternative to update the
stats, as irqs are already disabled there.

Signed-off-by: Shakeel Butt <shakeelb@google.com>
---
 mm/swap.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index a37bd7b202ac..3dbef6517cac 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -225,7 +225,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec,
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		ClearPageActive(page);
 		add_page_to_lru_list_tail(page, lruvec, page_lru(page));
-		(*pgmoved)++;
+		(*pgmoved) += hpage_nr_pages(page);
 	}
 }
 
@@ -285,7 +285,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec, lru);
 		trace_mm_lru_activate(page);
 
-		__count_vm_event(PGACTIVATE);
+		__count_vm_events(PGACTIVATE, hpage_nr_pages(page));
 		update_page_reclaim_stat(lruvec, file, 1);
 	}
 }
@@ -503,6 +503,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 {
 	int lru, file;
 	bool active;
+	int nr_pages = hpage_nr_pages(page);
 
 	if (!PageLRU(page))
 		return;
@@ -536,11 +537,11 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		 * We moves tha page into tail of inactive.
 		 */
 		add_page_to_lru_list_tail(page, lruvec, lru);
-		__count_vm_event(PGROTATED);
+		__count_vm_events(PGROTATED, nr_pages);
 	}
 
 	if (active)
-		__count_vm_event(PGDEACTIVATE);
+		__count_vm_events(PGDEACTIVATE, nr_pages);
 	update_page_reclaim_stat(lruvec, file, 0);
 }
 
@@ -929,6 +930,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 {
 	enum lru_list lru;
 	int was_unevictable = TestClearPageUnevictable(page);
+	int nr_pages = hpage_nr_pages(page);
 
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
@@ -966,13 +968,13 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 		update_page_reclaim_stat(lruvec, page_is_file_lru(page),
 					 PageActive(page));
 		if (was_unevictable)
-			count_vm_event(UNEVICTABLE_PGRESCUED);
+			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	} else {
 		lru = LRU_UNEVICTABLE;
 		ClearPageActive(page);
 		SetPageUnevictable(page);
 		if (!was_unevictable)
-			count_vm_event(UNEVICTABLE_PGCULLED);
+			__count_vm_events(UNEVICTABLE_PGCULLED, nr_pages);
 	}
 
 	add_page_to_lru_list(page, lruvec, lru);
-- 
2.26.2.645.ge9eca65c58-goog
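For context, the series leans on two kernel primitives. hpage_nr_pages()
reports how many base pages a page spans, and the double-underscore
counter variants skip interrupt protection, which is safe here because
pagevec_lru_move_fn() invokes these callbacks under the irq-disabled
pgdat lru_lock. A simplified sketch of both, adapted from the v5.7-era
include/linux/huge_mm.h and include/linux/vmstat.h (config guards and
percpu fallbacks elided):

	/* Sketch: the real helper is stubbed to return 1 when
	 * CONFIG_TRANSPARENT_HUGEPAGE is off. */
	static inline int hpage_nr_pages(struct page *page)
	{
		if (unlikely(PageTransHuge(page)))
			return HPAGE_PMD_NR; /* 512 with 4K pages, 2M THP */
		return 1;
	}

	/* Sketch: plain per-cpu add; the caller must already have irqs
	 * disabled. count_vm_events() is the irq-safe sibling. */
	static inline void __count_vm_events(enum vm_event_item item,
					     long delta)
	{
		raw_cpu_add(vm_event_states.event[item], delta);
	}

So every callback above that previously did __count_vm_event(FOO) now
adds hpage_nr_pages(page), making a 2MB THP count as 512 events
instead of one.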
* [PATCH 2/3] mm: swap: memcg: fix memcg stats for huge pages
  From: Shakeel Butt @ 2020-05-08 21:22 UTC
  To: Mel Gorman, Johannes Weiner, Roman Gushchin, Michal Hocko
  Cc: Andrew Morton, Yafang Shao, linux-mm, cgroups, linux-kernel, Shakeel Butt

Commit 2262185c5b28 ("mm: per-cgroup memory reclaim stats") added the
PGLAZYFREE, PGACTIVATE & PGDEACTIVATE stats for cgroups but missed a
couple of places, and the PGLAZYFREE accounting missed huge page
handling. Fix that. Also, for PGLAZYFREE, use the irq-unsafe function
to update the stat, as irqs are already disabled.

Fixes: 2262185c5b28 ("mm: per-cgroup memory reclaim stats")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
---
 mm/swap.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 3dbef6517cac..4eb179ee0b72 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -278,6 +278,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
 		int file = page_is_file_lru(page);
 		int lru = page_lru_base_type(page);
+		int nr_pages = hpage_nr_pages(page);
 
 		del_page_from_lru_list(page, lruvec, lru);
 		SetPageActive(page);
@@ -285,7 +286,8 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec, lru);
 		trace_mm_lru_activate(page);
 
-		__count_vm_events(PGACTIVATE, hpage_nr_pages(page));
+		__count_vm_events(PGACTIVATE, nr_pages);
+		__count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE, nr_pages);
 		update_page_reclaim_stat(lruvec, file, 1);
 	}
 }
@@ -540,8 +542,10 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(PGROTATED, nr_pages);
 	}
 
-	if (active)
+	if (active) {
 		__count_vm_events(PGDEACTIVATE, nr_pages);
+		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_pages);
+	}
 	update_page_reclaim_stat(lruvec, file, 0);
 }
 
@@ -551,13 +555,15 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
 		int file = page_is_file_lru(page);
 		int lru = page_lru_base_type(page);
+		int nr_pages = hpage_nr_pages(page);
 
 		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		add_page_to_lru_list(page, lruvec, lru);
 
-		__count_vm_events(PGDEACTIVATE, hpage_nr_pages(page));
+		__count_vm_events(PGDEACTIVATE, nr_pages);
+		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_pages);
 		update_page_reclaim_stat(lruvec, file, 0);
 	}
 }
@@ -568,6 +574,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		bool active = PageActive(page);
+		int nr_pages = hpage_nr_pages(page);
 
 		del_page_from_lru_list(page, lruvec,
 				       LRU_INACTIVE_ANON + active);
@@ -581,8 +588,8 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 		ClearPageSwapBacked(page);
 		add_page_to_lru_list(page, lruvec, LRU_INACTIVE_FILE);
 
-		__count_vm_events(PGLAZYFREE, hpage_nr_pages(page));
-		count_memcg_page_event(page, PGLAZYFREE);
+		__count_vm_events(PGLAZYFREE, nr_pages);
+		__count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE, nr_pages);
 		update_page_reclaim_stat(lruvec, 1, 0);
 	}
 }
-- 
2.26.2.645.ge9eca65c58-goog
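For context, the replaced count_memcg_page_event() has no count
argument, so a THP could only ever be accounted as a single memcg
event; __count_memcg_events() takes an explicit count and the memcg
straight from the lruvec. A simplified sketch of the two interfaces,
adapted from the v5.7-era include/linux/memcontrol.h (stat batching
and the !CONFIG_MEMCG stubs elided):

	/* Old helper: hardcodes one event per page, huge or not. */
	static inline void count_memcg_page_event(struct page *page,
						  enum vm_event_item idx)
	{
		if (page->mem_cgroup)
			count_memcg_events(page->mem_cgroup, idx, 1);
	}

	/* New call site: explicit count, irq-unsafe per-cpu update. */
	static inline void __count_memcg_events(struct mem_cgroup *memcg,
						enum vm_event_item idx,
						unsigned long count)
	{
		if (likely(memcg))
			__this_cpu_add(memcg->vmstats_percpu->events[idx],
				       count);
	}

With this, a lazy-freed 2MB THP bumps the cgroup's PGLAZYFREE by 512
rather than 1, matching the global vmstat fix in patch 1.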
* Re: [PATCH 2/3] mm: swap: memcg: fix memcg stats for huge pages
  From: Johannes Weiner @ 2020-05-08 21:56 UTC
  To: Shakeel Butt
  Cc: Mel Gorman, Roman Gushchin, Michal Hocko, Andrew Morton, Yafang Shao, linux-mm, cgroups, linux-kernel

On Fri, May 08, 2020 at 02:22:14PM -0700, Shakeel Butt wrote:
> Commit 2262185c5b28 ("mm: per-cgroup memory reclaim stats") added the
> PGLAZYFREE, PGACTIVATE & PGDEACTIVATE stats for cgroups but missed a
> couple of places, and the PGLAZYFREE accounting missed huge page
> handling. Fix that. Also, for PGLAZYFREE, use the irq-unsafe function
> to update the stat, as irqs are already disabled.
>
> Fixes: 2262185c5b28 ("mm: per-cgroup memory reclaim stats")
> Signed-off-by: Shakeel Butt <shakeelb@google.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
* [PATCH 3/3] mm: swap: fix update_page_reclaim_stat for huge pages
  From: Shakeel Butt @ 2020-05-08 21:22 UTC
  To: Mel Gorman, Johannes Weiner, Roman Gushchin, Michal Hocko
  Cc: Andrew Morton, Yafang Shao, linux-mm, cgroups, linux-kernel, Shakeel Butt

Currently update_page_reclaim_stat() updates lruvec->reclaim_stat
just once per page, irrespective of whether the page is huge or not.
Fix that by passing hpage_nr_pages(page) to it.

Signed-off-by: Shakeel Butt <shakeelb@google.com>
---
 mm/swap.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 4eb179ee0b72..dc7297cb76a0 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -262,14 +262,14 @@ void rotate_reclaimable_page(struct page *page)
 	}
 }
 
-static void update_page_reclaim_stat(struct lruvec *lruvec,
-				     int file, int rotated)
+static void update_page_reclaim_stat(struct lruvec *lruvec, int file,
+				     int rotated, int nr_pages)
 {
 	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
 
-	reclaim_stat->recent_scanned[file]++;
+	reclaim_stat->recent_scanned[file] += nr_pages;
 	if (rotated)
-		reclaim_stat->recent_rotated[file]++;
+		reclaim_stat->recent_rotated[file] += nr_pages;
 }
 
 static void __activate_page(struct page *page, struct lruvec *lruvec,
@@ -288,7 +288,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 
 		__count_vm_events(PGACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE, nr_pages);
-		update_page_reclaim_stat(lruvec, file, 1);
+		update_page_reclaim_stat(lruvec, file, 1, nr_pages);
 	}
 }
 
@@ -546,7 +546,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(PGDEACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_pages);
 	}
-	update_page_reclaim_stat(lruvec, file, 0);
+	update_page_reclaim_stat(lruvec, file, 0, nr_pages);
 }
 
 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
@@ -564,7 +564,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 
 		__count_vm_events(PGDEACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_pages);
-		update_page_reclaim_stat(lruvec, file, 0);
+		update_page_reclaim_stat(lruvec, file, 0, nr_pages);
 	}
 }
 
@@ -590,7 +590,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 
 		__count_vm_events(PGLAZYFREE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE, nr_pages);
-		update_page_reclaim_stat(lruvec, 1, 0);
+		update_page_reclaim_stat(lruvec, 1, 0, nr_pages);
 	}
 }
 
@@ -928,7 +928,7 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 	}
 
 	if (!PageUnevictable(page))
-		update_page_reclaim_stat(lruvec, file, PageActive(page_tail));
+		update_page_reclaim_stat(lruvec, file, PageActive(page_tail), 1);
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
@@ -973,7 +973,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 	if (page_evictable(page)) {
 		lru = page_lru(page);
 		update_page_reclaim_stat(lruvec, page_is_file_lru(page),
-					 PageActive(page));
+					 PageActive(page), nr_pages);
 		if (was_unevictable)
 			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	} else {
-- 
2.26.2.645.ge9eca65c58-goog
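Why this matters beyond the numbers in /proc/vmstat: reclaim_stat's
rotated/scanned ratios feed get_scan_count(), which balances reclaim
pressure between the anon and file LRU lists, so undercounting a THP
by a factor of 512 skews actual reclaim decisions, not just the
statistics. The structure being updated, from the v5.7-era
include/linux/mmzone.h:

	struct zone_reclaim_stat {
		/*
		 * The pageout code in vmscan.c keeps track of how many of the
		 * mem/swap backed and file backed pages are referenced.
		 * The higher the rotated/scanned ratio, the more valuable
		 * that cache is.
		 *
		 * The anon LRU stats live in [0], file LRU stats in [1]
		 */
		unsigned long	recent_rotated[2];
		unsigned long	recent_scanned[2];
	};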
* Re: [PATCH 3/3] mm: swap: fix update_page_reclaim_stat for huge pages
  From: Johannes Weiner @ 2020-05-08 21:51 UTC
  To: Shakeel Butt
  Cc: Mel Gorman, Roman Gushchin, Michal Hocko, Andrew Morton, Yafang Shao, linux-mm, cgroups, linux-kernel

On Fri, May 08, 2020 at 02:22:15PM -0700, Shakeel Butt wrote:
> Currently update_page_reclaim_stat() updates lruvec->reclaim_stat
> just once per page, irrespective of whether the page is huge or not.
> Fix that by passing hpage_nr_pages(page) to it.
>
> Signed-off-by: Shakeel Butt <shakeelb@google.com>

https://lore.kernel.org/patchwork/patch/685703/

Laughs, then cries.

> @@ -928,7 +928,7 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
>  	}
>
>  	if (!PageUnevictable(page))
> -		update_page_reclaim_stat(lruvec, file, PageActive(page_tail));
> +		update_page_reclaim_stat(lruvec, file, PageActive(page_tail), 1);

The change to __pagevec_lru_add_fn() below makes sure the tail pages
are already accounted. This would make them count twice.

>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> @@ -973,7 +973,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
>  	if (page_evictable(page)) {
>  		lru = page_lru(page);
>  		update_page_reclaim_stat(lruvec, page_is_file_lru(page),
> -					 PageActive(page));
> +					 PageActive(page), nr_pages);
>  		if (was_unevictable)
>  			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
>  	} else {
* Re: [PATCH 3/3] mm: swap: fix update_page_reclaim_stat for huge pages
  From: Shakeel Butt @ 2020-05-09 14:06 UTC
  To: Johannes Weiner
  Cc: Mel Gorman, Roman Gushchin, Michal Hocko, Andrew Morton, Yafang Shao, Linux MM, Cgroups, LKML

On Fri, May 8, 2020 at 2:51 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Fri, May 08, 2020 at 02:22:15PM -0700, Shakeel Butt wrote:
> > Currently update_page_reclaim_stat() updates lruvec->reclaim_stat
> > just once per page, irrespective of whether the page is huge or not.
> > Fix that by passing hpage_nr_pages(page) to it.
> >
> > Signed-off-by: Shakeel Butt <shakeelb@google.com>
>
> https://lore.kernel.org/patchwork/patch/685703/
>
> Laughs, then cries.
>

What happened to that patch? Fell through the cracks?

> > @@ -928,7 +928,7 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
> >  	}
> >
> >  	if (!PageUnevictable(page))
> > -		update_page_reclaim_stat(lruvec, file, PageActive(page_tail));
> > +		update_page_reclaim_stat(lruvec, file, PageActive(page_tail), 1);
>
> The change to __pagevec_lru_add_fn() below makes sure the tail pages
> are already accounted. This would make them count twice.
>

Yes, you are right. I will just re-send your patch after rebase.

> >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> >
> > @@ -973,7 +973,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
> >  	if (page_evictable(page)) {
> >  		lru = page_lru(page);
> >  		update_page_reclaim_stat(lruvec, page_is_file_lru(page),
> > -					 PageActive(page));
> > +					 PageActive(page), nr_pages);
> >  		if (was_unevictable)
> >  			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
> >  	} else {
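To make the double count concrete, here is how the patch as posted
would account a single 2MB THP (an illustrative trace, not taken from
the thread; 512 assumes 4K base pages):

	/*
	 * 1. THP added to the LRU; __pagevec_lru_add_fn() now does:
	 *        recent_scanned[file] += 512;  // head accounts for tails
	 *
	 * 2. THP later split; lru_add_page_tail() runs per tail page:
	 *        recent_scanned[file] += 1;    // x511, second time around
	 *
	 * Net: 1023 pages recorded for 512 real ones, which is why the
	 * lru_add_page_tail() hunk had to be dropped before the re-send.
	 */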
* Re: [PATCH 1/3] mm: swap: fix vmstats for huge pages
  From: Johannes Weiner @ 2020-05-08 21:52 UTC
  To: Shakeel Butt
  Cc: Mel Gorman, Roman Gushchin, Michal Hocko, Andrew Morton, Yafang Shao, linux-mm, cgroups, linux-kernel

On Fri, May 08, 2020 at 02:22:13PM -0700, Shakeel Butt wrote:
> Many of the callbacks called by pagevec_lru_move_fn() do not correctly
> update the vmstats for huge pages. Fix that. Also make
> __pagevec_lru_add_fn() use the irq-unsafe alternative to update the
> stats, as irqs are already disabled there.
>
> Signed-off-by: Shakeel Butt <shakeelb@google.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
End of thread (newest: ~2020-05-09 14:07 UTC)

Thread overview: 7+ messages

2020-05-08 21:22 [PATCH 1/3] mm: swap: fix vmstats for huge pages Shakeel Butt
2020-05-08 21:22 ` [PATCH 2/3] mm: swap: memcg: fix memcg stats for huge pages Shakeel Butt
2020-05-08 21:56   ` Johannes Weiner
2020-05-08 21:22 ` [PATCH 3/3] mm: swap: fix update_page_reclaim_stat for huge pages Shakeel Butt
2020-05-08 21:51   ` Johannes Weiner
2020-05-09 14:06     ` Shakeel Butt
2020-05-08 21:52 ` [PATCH 1/3] mm: swap: fix vmstats for huge pages Johannes Weiner