From: Yu Zhao <yuzhao@google.com> To: Andrew Morton <akpm@linux-foundation.org>, Johannes Weiner <hannes@cmpxchg.org>, Mel Gorman <mgorman@suse.de>, Michal Hocko <mhocko@kernel.org> Cc: "Andi Kleen" <ak@linux.intel.com>, "Aneesh Kumar" <aneesh.kumar@linux.ibm.com>, "Barry Song" <21cnbao@gmail.com>, "Catalin Marinas" <catalin.marinas@arm.com>, "Dave Hansen" <dave.hansen@linux.intel.com>, "Hillf Danton" <hdanton@sina.com>, "Jens Axboe" <axboe@kernel.dk>, "Jesse Barnes" <jsbarnes@google.com>, "Jonathan Corbet" <corbet@lwn.net>, "Linus Torvalds" <torvalds@linux-foundation.org>, "Matthew Wilcox" <willy@infradead.org>, "Michael Larabel" <Michael@michaellarabel.com>, "Mike Rapoport" <rppt@kernel.org>, "Rik van Riel" <riel@surriel.com>, "Vlastimil Babka" <vbabka@suse.cz>, "Will Deacon" <will@kernel.org>, "Ying Huang" <ying.huang@intel.com>, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, page-reclaim@google.com, x86@kernel.org, "Yu Zhao" <yuzhao@google.com>, "Brian Geffon" <bgeffon@google.com>, "Jan Alexander Steffens" <heftig@archlinux.org>, "Oleksandr Natalenko" <oleksandr@natalenko.name>, "Steven Barrett" <steven@liquorix.net>, "Suleiman Souhlal" <suleiman@google.com>, "Daniel Byrne" <djbyrne@mtu.edu>, "Donald Carr" <d@chaos-reins.com>, "Holger Hoffstätte" <holger@applied-asynchrony.com>, "Konstantin Kharlamov" <Hi-Angel@yandex.ru>, "Shuang Zhai" <szhai2@cs.rochester.edu>, "Sofia Trinh" <sofia.trinh@edi.works> Subject: [PATCH v7 03/12] mm/vmscan.c: refactor shrink_node() Date: Tue, 8 Feb 2022 01:18:53 -0700 [thread overview] Message-ID: <20220208081902.3550911-4-yuzhao@google.com> (raw) In-Reply-To: <20220208081902.3550911-1-yuzhao@google.com> This patch refactors shrink_node() to improve readability for the upcoming changes to mm/vmscan.c. 
Signed-off-by: Yu Zhao <yuzhao@google.com> Acked-by: Brian Geffon <bgeffon@google.com> Acked-by: Jan Alexander Steffens (heftig) <heftig@archlinux.org> Acked-by: Oleksandr Natalenko <oleksandr@natalenko.name> Acked-by: Steven Barrett <steven@liquorix.net> Acked-by: Suleiman Souhlal <suleiman@google.com> Tested-by: Daniel Byrne <djbyrne@mtu.edu> Tested-by: Donald Carr <d@chaos-reins.com> Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com> Tested-by: Konstantin Kharlamov <Hi-Angel@yandex.ru> Tested-by: Shuang Zhai <szhai2@cs.rochester.edu> Tested-by: Sofia Trinh <sofia.trinh@edi.works> --- mm/vmscan.c | 198 +++++++++++++++++++++++++++------------------------- 1 file changed, 104 insertions(+), 94 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 090bfb605ecf..b7228b73e1b3 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2716,6 +2716,109 @@ enum scan_balance { SCAN_FILE, }; +static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc) +{ + unsigned long file; + struct lruvec *target_lruvec; + + target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat); + + /* + * Flush the memory cgroup stats, so that we read accurate per-memcg + * lruvec stats for heuristics. + */ + mem_cgroup_flush_stats(); + + /* + * Determine the scan balance between anon and file LRUs. + */ + spin_lock_irq(&target_lruvec->lru_lock); + sc->anon_cost = target_lruvec->anon_cost; + sc->file_cost = target_lruvec->file_cost; + spin_unlock_irq(&target_lruvec->lru_lock); + + /* + * Target desirable inactive:active list ratios for the anon + * and file LRU lists. 
+ */ + if (!sc->force_deactivate) { + unsigned long refaults; + + refaults = lruvec_page_state(target_lruvec, + WORKINGSET_ACTIVATE_ANON); + if (refaults != target_lruvec->refaults[0] || + inactive_is_low(target_lruvec, LRU_INACTIVE_ANON)) + sc->may_deactivate |= DEACTIVATE_ANON; + else + sc->may_deactivate &= ~DEACTIVATE_ANON; + + /* + * When refaults are being observed, it means a new + * workingset is being established. Deactivate to get + * rid of any stale active pages quickly. + */ + refaults = lruvec_page_state(target_lruvec, + WORKINGSET_ACTIVATE_FILE); + if (refaults != target_lruvec->refaults[1] || + inactive_is_low(target_lruvec, LRU_INACTIVE_FILE)) + sc->may_deactivate |= DEACTIVATE_FILE; + else + sc->may_deactivate &= ~DEACTIVATE_FILE; + } else + sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE; + + /* + * If we have plenty of inactive file pages that aren't + * thrashing, try to reclaim those first before touching + * anonymous pages. + */ + file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE); + if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE)) + sc->cache_trim_mode = 1; + else + sc->cache_trim_mode = 0; + + /* + * Prevent the reclaimer from falling into the cache trap: as + * cache pages start out inactive, every cache fault will tip + * the scan balance towards the file LRU. And as the file LRU + * shrinks, so does the window for rotation from references. + * This means we have a runaway feedback loop where a tiny + * thrashing file LRU becomes infinitely more attractive than + * anon pages. Try to detect this based on file LRU size. 
+ */ + if (!cgroup_reclaim(sc)) { + unsigned long total_high_wmark = 0; + unsigned long free, anon; + int z; + + free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES); + file = node_page_state(pgdat, NR_ACTIVE_FILE) + + node_page_state(pgdat, NR_INACTIVE_FILE); + + for (z = 0; z < MAX_NR_ZONES; z++) { + struct zone *zone = &pgdat->node_zones[z]; + + if (!managed_zone(zone)) + continue; + + total_high_wmark += high_wmark_pages(zone); + } + + /* + * Consider anon: if that's low too, this isn't a + * runaway file reclaim problem, but rather just + * extreme pressure. Reclaim as per usual then. + */ + anon = node_page_state(pgdat, NR_INACTIVE_ANON); + + sc->file_is_tiny = + file + free <= total_high_wmark && + !(sc->may_deactivate & DEACTIVATE_ANON) && + anon >> sc->priority; + } +} + /* * Determine how aggressively the anon and file LRU lists should be * scanned. The relative value of each set of LRU lists is determined @@ -3186,109 +3289,16 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc) unsigned long nr_reclaimed, nr_scanned; struct lruvec *target_lruvec; bool reclaimable = false; - unsigned long file; target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat); again: - /* - * Flush the memory cgroup stats, so that we read accurate per-memcg - * lruvec stats for heuristics. - */ - mem_cgroup_flush_stats(); - memset(&sc->nr, 0, sizeof(sc->nr)); nr_reclaimed = sc->nr_reclaimed; nr_scanned = sc->nr_scanned; - /* - * Determine the scan balance between anon and file LRUs. - */ - spin_lock_irq(&target_lruvec->lru_lock); - sc->anon_cost = target_lruvec->anon_cost; - sc->file_cost = target_lruvec->file_cost; - spin_unlock_irq(&target_lruvec->lru_lock); - - /* - * Target desirable inactive:active list ratios for the anon - * and file LRU lists. 
- */ - if (!sc->force_deactivate) { - unsigned long refaults; - - refaults = lruvec_page_state(target_lruvec, - WORKINGSET_ACTIVATE_ANON); - if (refaults != target_lruvec->refaults[0] || - inactive_is_low(target_lruvec, LRU_INACTIVE_ANON)) - sc->may_deactivate |= DEACTIVATE_ANON; - else - sc->may_deactivate &= ~DEACTIVATE_ANON; - - /* - * When refaults are being observed, it means a new - * workingset is being established. Deactivate to get - * rid of any stale active pages quickly. - */ - refaults = lruvec_page_state(target_lruvec, - WORKINGSET_ACTIVATE_FILE); - if (refaults != target_lruvec->refaults[1] || - inactive_is_low(target_lruvec, LRU_INACTIVE_FILE)) - sc->may_deactivate |= DEACTIVATE_FILE; - else - sc->may_deactivate &= ~DEACTIVATE_FILE; - } else - sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE; - - /* - * If we have plenty of inactive file pages that aren't - * thrashing, try to reclaim those first before touching - * anonymous pages. - */ - file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE); - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE)) - sc->cache_trim_mode = 1; - else - sc->cache_trim_mode = 0; - - /* - * Prevent the reclaimer from falling into the cache trap: as - * cache pages start out inactive, every cache fault will tip - * the scan balance towards the file LRU. And as the file LRU - * shrinks, so does the window for rotation from references. - * This means we have a runaway feedback loop where a tiny - * thrashing file LRU becomes infinitely more attractive than - * anon pages. Try to detect this based on file LRU size. 
- */ - if (!cgroup_reclaim(sc)) { - unsigned long total_high_wmark = 0; - unsigned long free, anon; - int z; - - free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES); - file = node_page_state(pgdat, NR_ACTIVE_FILE) + - node_page_state(pgdat, NR_INACTIVE_FILE); - - for (z = 0; z < MAX_NR_ZONES; z++) { - struct zone *zone = &pgdat->node_zones[z]; - if (!managed_zone(zone)) - continue; - - total_high_wmark += high_wmark_pages(zone); - } - - /* - * Consider anon: if that's low too, this isn't a - * runaway file reclaim problem, but rather just - * extreme pressure. Reclaim as per usual then. - */ - anon = node_page_state(pgdat, NR_INACTIVE_ANON); - - sc->file_is_tiny = - file + free <= total_high_wmark && - !(sc->may_deactivate & DEACTIVATE_ANON) && - anon >> sc->priority; - } + prepare_scan_count(pgdat, sc); shrink_node_memcgs(pgdat, sc); -- 2.35.0.263.gb82422642f-goog