From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andi Kleen <ak@linux.intel.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Hillf Danton <hdanton@sina.com>,
	Jens Axboe <axboe@kernel.dk>,
	Jesse Barnes <jsbarnes@google.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Matthew Wilcox <willy@infradead.org>,
	Mel Gorman <mgorman@suse.de>,
	Michael Larabel <Michael@michaellarabel.com>,
	Michal Hocko <mhocko@kernel.org>,
	Rik van Riel <riel@surriel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Will Deacon <will@kernel.org>,
	Ying Huang <ying.huang@intel.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-mm@kvack.org,
	page-reclaim@google.com,
	x86@kernel.org,
	Yu Zhao <yuzhao@google.com>,
	Konstantin Kharlamov <Hi-Angel@yandex.ru>
Subject: [PATCH v6 3/9] mm/vmscan.c: refactor shrink_node()
Date: Tue, 4 Jan 2022 13:22:22 -0700
Message-ID: <20220104202227.2903605-4-yuzhao@google.com>
In-Reply-To: <20220104202227.2903605-1-yuzhao@google.com>

This patch refactors shrink_node() to improve readability for the
upcoming changes to mm/vmscan.c.

Signed-off-by: Yu Zhao <yuzhao@google.com>
Tested-by: Konstantin Kharlamov <Hi-Angel@yandex.ru>
---
 mm/vmscan.c | 198 +++++++++++++++++++++++++++-------------------------
 1 file changed, 104 insertions(+), 94 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 700434db5735..b6c5fd885216 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2716,6 +2716,109 @@ enum scan_balance {
 	SCAN_FILE,
 };
 
+static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
+{
+	unsigned long file;
+	struct lruvec *target_lruvec;
+
+	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
+
+	/*
+	 * Flush the memory cgroup stats, so that we read accurate per-memcg
+	 * lruvec stats for heuristics.
+	 */
+	mem_cgroup_flush_stats();
+
+	/*
+	 * Determine the scan balance between anon and file LRUs.
+	 */
+	spin_lock_irq(&target_lruvec->lru_lock);
+	sc->anon_cost = target_lruvec->anon_cost;
+	sc->file_cost = target_lruvec->file_cost;
+	spin_unlock_irq(&target_lruvec->lru_lock);
+
+	/*
+	 * Target desirable inactive:active list ratios for the anon
+	 * and file LRU lists.
+	 */
+	if (!sc->force_deactivate) {
+		unsigned long refaults;
+
+		refaults = lruvec_page_state(target_lruvec,
+				WORKINGSET_ACTIVATE_ANON);
+		if (refaults != target_lruvec->refaults[0] ||
+		    inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
+			sc->may_deactivate |= DEACTIVATE_ANON;
+		else
+			sc->may_deactivate &= ~DEACTIVATE_ANON;
+
+		/*
+		 * When refaults are being observed, it means a new
+		 * workingset is being established. Deactivate to get
+		 * rid of any stale active pages quickly.
+		 */
+		refaults = lruvec_page_state(target_lruvec,
+				WORKINGSET_ACTIVATE_FILE);
+		if (refaults != target_lruvec->refaults[1] ||
+		    inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
+			sc->may_deactivate |= DEACTIVATE_FILE;
+		else
+			sc->may_deactivate &= ~DEACTIVATE_FILE;
+	} else
+		sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
+
+	/*
+	 * If we have plenty of inactive file pages that aren't
+	 * thrashing, try to reclaim those first before touching
+	 * anonymous pages.
+	 */
+	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
+	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
+		sc->cache_trim_mode = 1;
+	else
+		sc->cache_trim_mode = 0;
+
+	/*
+	 * Prevent the reclaimer from falling into the cache trap: as
+	 * cache pages start out inactive, every cache fault will tip
+	 * the scan balance towards the file LRU. And as the file LRU
+	 * shrinks, so does the window for rotation from references.
+	 * This means we have a runaway feedback loop where a tiny
+	 * thrashing file LRU becomes infinitely more attractive than
+	 * anon pages. Try to detect this based on file LRU size.
+	 */
+	if (!cgroup_reclaim(sc)) {
+		unsigned long total_high_wmark = 0;
+		unsigned long free, anon;
+		int z;
+
+		free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
+		file = node_page_state(pgdat, NR_ACTIVE_FILE) +
+			node_page_state(pgdat, NR_INACTIVE_FILE);
+
+		for (z = 0; z < MAX_NR_ZONES; z++) {
+			struct zone *zone = &pgdat->node_zones[z];
+
+			if (!managed_zone(zone))
+				continue;
+
+			total_high_wmark += high_wmark_pages(zone);
+		}
+
+		/*
+		 * Consider anon: if that's low too, this isn't a
+		 * runaway file reclaim problem, but rather just
+		 * extreme pressure. Reclaim as per usual then.
+		 */
+		anon = node_page_state(pgdat, NR_INACTIVE_ANON);
+
+		sc->file_is_tiny =
+			file + free <= total_high_wmark &&
+			!(sc->may_deactivate & DEACTIVATE_ANON) &&
+			anon >> sc->priority;
+	}
+}
+
 /*
  * Determine how aggressively the anon and file LRU lists should be
  * scanned. The relative value of each set of LRU lists is determined
@@ -3186,109 +3289,16 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	unsigned long nr_reclaimed, nr_scanned;
 	struct lruvec *target_lruvec;
 	bool reclaimable = false;
-	unsigned long file;
 
 	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
 again:
-	/*
-	 * Flush the memory cgroup stats, so that we read accurate per-memcg
-	 * lruvec stats for heuristics.
-	 */
-	mem_cgroup_flush_stats();
-
 	memset(&sc->nr, 0, sizeof(sc->nr));
 
 	nr_reclaimed = sc->nr_reclaimed;
 	nr_scanned = sc->nr_scanned;
 
-	/*
-	 * Determine the scan balance between anon and file LRUs.
-	 */
-	spin_lock_irq(&target_lruvec->lru_lock);
-	sc->anon_cost = target_lruvec->anon_cost;
-	sc->file_cost = target_lruvec->file_cost;
-	spin_unlock_irq(&target_lruvec->lru_lock);
-
-	/*
-	 * Target desirable inactive:active list ratios for the anon
-	 * and file LRU lists.
-	 */
-	if (!sc->force_deactivate) {
-		unsigned long refaults;
-
-		refaults = lruvec_page_state(target_lruvec,
-				WORKINGSET_ACTIVATE_ANON);
-		if (refaults != target_lruvec->refaults[0] ||
-		    inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
-			sc->may_deactivate |= DEACTIVATE_ANON;
-		else
-			sc->may_deactivate &= ~DEACTIVATE_ANON;
-
-		/*
-		 * When refaults are being observed, it means a new
-		 * workingset is being established. Deactivate to get
-		 * rid of any stale active pages quickly.
-		 */
-		refaults = lruvec_page_state(target_lruvec,
-				WORKINGSET_ACTIVATE_FILE);
-		if (refaults != target_lruvec->refaults[1] ||
-		    inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
-			sc->may_deactivate |= DEACTIVATE_FILE;
-		else
-			sc->may_deactivate &= ~DEACTIVATE_FILE;
-	} else
-		sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
-
-	/*
-	 * If we have plenty of inactive file pages that aren't
-	 * thrashing, try to reclaim those first before touching
-	 * anonymous pages.
-	 */
-	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
-	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
-		sc->cache_trim_mode = 1;
-	else
-		sc->cache_trim_mode = 0;
-
-	/*
-	 * Prevent the reclaimer from falling into the cache trap: as
-	 * cache pages start out inactive, every cache fault will tip
-	 * the scan balance towards the file LRU. And as the file LRU
-	 * shrinks, so does the window for rotation from references.
-	 * This means we have a runaway feedback loop where a tiny
-	 * thrashing file LRU becomes infinitely more attractive than
-	 * anon pages. Try to detect this based on file LRU size.
-	 */
-	if (!cgroup_reclaim(sc)) {
-		unsigned long total_high_wmark = 0;
-		unsigned long free, anon;
-		int z;
-
-		free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
-		file = node_page_state(pgdat, NR_ACTIVE_FILE) +
-			node_page_state(pgdat, NR_INACTIVE_FILE);
-
-		for (z = 0; z < MAX_NR_ZONES; z++) {
-			struct zone *zone = &pgdat->node_zones[z];
-			if (!managed_zone(zone))
-				continue;
-
-			total_high_wmark += high_wmark_pages(zone);
-		}
-
-		/*
-		 * Consider anon: if that's low too, this isn't a
-		 * runaway file reclaim problem, but rather just
-		 * extreme pressure. Reclaim as per usual then.
-		 */
-		anon = node_page_state(pgdat, NR_INACTIVE_ANON);
-
-		sc->file_is_tiny =
-			file + free <= total_high_wmark &&
-			!(sc->may_deactivate & DEACTIVATE_ANON) &&
-			anon >> sc->priority;
-	}
+	prepare_scan_count(pgdat, sc);
 
 	shrink_node_memcgs(pgdat, sc);
-- 
2.34.1.448.ga2b2bfdf31-goog
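For readers tracing the logic being moved: the two flags that
prepare_scan_count() computes, sc->cache_trim_mode and sc->file_is_tiny,
both reduce to threshold arithmetic against sc->priority. The standalone
C sketch below walks that arithmetic with made-up page counts. It is
illustrative only and not part of the patch; struct toy_scan_control and
every number in it are simplified stand-ins for the real struct
scan_control and the per-node LRU statistics.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the scan_control fields used here. */
struct toy_scan_control {
	int priority;			/* DEF_PRIORITY (12) down to 0 */
	bool may_deactivate_file;
	bool may_deactivate_anon;
};

int main(void)
{
	struct toy_scan_control sc = { .priority = 12 };

	/* Hypothetical per-node page counts, in pages. */
	unsigned long inactive_file = 100000;
	unsigned long active_file = 50000;
	unsigned long inactive_anon = 8000;
	unsigned long free = 3000;
	unsigned long total_high_wmark = 100000;

	/*
	 * cache_trim_mode: "plenty" of inactive file pages means the
	 * count is still nonzero after shifting right by priority,
	 * i.e. inactive_file >= 2^priority. 100000 >> 12 == 24, so
	 * file cache gets trimmed first.
	 */
	bool cache_trim_mode = (inactive_file >> sc.priority) &&
			       !sc.may_deactivate_file;

	/*
	 * file_is_tiny: the whole file LRU plus free memory fits under
	 * the summed high watermarks, anon is not being deactivated,
	 * and there is a meaningful amount of inactive anon to go
	 * after instead. 153000 > 100000 here, so it stays false.
	 */
	unsigned long file = inactive_file + active_file;
	bool file_is_tiny = file + free <= total_high_wmark &&
			    !sc.may_deactivate_anon &&
			    (inactive_anon >> sc.priority);

	printf("cache_trim_mode=%d file_is_tiny=%d\n",
	       cache_trim_mode, file_is_tiny);
	return 0;
}

Lowering sc.priority toward 0, as the kernel does when earlier reclaim
passes fail, shrinks the 2^priority threshold so that ever-smaller LRUs
count as "plenty", which is how these heuristics relax under mounting
memory pressure.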