From: Yu Zhao <yuzhao@google.com>
To: Stephen Rothwell <sfr@rothwell.id.au>, linux-mm@kvack.org
Cc: "Andi Kleen" <ak@linux.intel.com>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Aneesh Kumar" <aneesh.kumar@linux.ibm.com>,
	"Barry Song" <21cnbao@gmail.com>,
	"Catalin Marinas" <catalin.marinas@arm.com>,
	"Dave Hansen" <dave.hansen@linux.intel.com>,
	"Hillf Danton" <hdanton@sina.com>,
	"Jens Axboe" <axboe@kernel.dk>,
	"Jesse Barnes" <jsbarnes@google.com>,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Linus Torvalds" <torvalds@linux-foundation.org>,
	"Matthew Wilcox" <willy@infradead.org>,
	"Mel Gorman" <mgorman@suse.de>,
	"Michael Larabel" <Michael@michaellarabel.com>,
	"Michal Hocko" <mhocko@kernel.org>,
	"Mike Rapoport" <rppt@kernel.org>,
	"Rik van Riel" <riel@surriel.com>,
	"Vlastimil Babka" <vbabka@suse.cz>,
	"Will Deacon" <will@kernel.org>,
	"Ying Huang" <ying.huang@intel.com>,
	linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, page-reclaim@google.com,
	x86@kernel.org, "Yu Zhao" <yuzhao@google.com>,
	"Brian Geffon" <bgeffon@google.com>,
	"Jan Alexander Steffens" <heftig@archlinux.org>,
	"Oleksandr Natalenko" <oleksandr@natalenko.name>,
	"Steven Barrett" <steven@liquorix.net>,
	"Suleiman Souhlal" <suleiman@google.com>,
	"Daniel Byrne" <djbyrne@mtu.edu>,
	"Donald Carr" <d@chaos-reins.com>,
	"Holger Hoffstätte" <holger@applied-asynchrony.com>,
	"Konstantin Kharlamov" <Hi-Angel@yandex.ru>,
	"Shuang Zhai" <szhai2@cs.rochester.edu>,
	"Sofia Trinh" <sofia.trinh@edi.works>,
	"Vaibhav Jain" <vaibhav@linux.ibm.com>
Subject: [PATCH v10 09/14] mm: multi-gen LRU: optimize multiple memcgs
Date: Wed, 6 Apr 2022 21:15:21 -0600
Message-ID: <20220407031525.2368067-10-yuzhao@google.com>
In-Reply-To: <20220407031525.2368067-1-yuzhao@google.com>

When multiple memcgs are available, it is possible to make better
choices based on generations and tiers and therefore improve the
overall performance under global memory pressure.

This patch adds a rudimentary optimization to select memcgs that can
drop single-use unmapped clean pages first. Doing so reduces the
chance of going into the aging path or swapping. These two operations
can be costly.

A typical example that benefits from this optimization is a server
running mixed types of workloads, e.g., heavy anon workload in one
memcg and heavy buffered I/O workload in the other.

Though this optimization can be applied to both kswapd and direct
reclaim, it is only added to kswapd to keep the patchset manageable.
Later improvements will cover the direct reclaim path.

Server benchmark results:
  Mixed workloads:
    fio (buffered I/O): +[1, 3]%
                IOPS         BW
      patch1-8: 2154k        8415MiB/s
      patch1-9: 2205k        8613MiB/s

    memcached (anon): +[132, 136]%
                Ops/sec      KB/sec
      patch1-8: 819618.49    31838.48
      patch1-9: 1916516.06   74447.92

  Mixed workloads:
    fio (buffered I/O): +[59, 61]%
                IOPS         BW
      5.18-rc1: 1378k        5385MiB/s
      patch1-9: 2205k        8613MiB/s

    memcached (anon): +[229, 233]%
                Ops/sec      KB/sec
      5.18-rc1: 578946.00    22489.44
      patch1-9: 1916516.06   74447.92

  Configurations:
    (changes since patch 6)

    cat mixed.sh
    modprobe brd rd_nr=2 rd_size=56623104

    swapoff -a
    mkswap /dev/ram0
    swapon /dev/ram0

    mkfs.ext4 /dev/ram1
    mount -t ext4 /dev/ram1 /mnt

    memtier_benchmark -S /var/run/memcached/memcached.sock \
      -P memcache_binary -n allkeys --key-minimum=1 \
      --key-maximum=50000000 --key-pattern=P:P -c 1 -t 36 \
      --ratio 1:0 --pipeline 8 -d 2000

    fio -name=mglru --numjobs=36 --directory=/mnt --size=1408m \
      --buffered=1 --ioengine=io_uring --iodepth=128 \
      --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
      --rw=randread --random_distribution=random --norandommap \
      --time_based --ramp_time=10m --runtime=90m --group_reporting &
    pid=$!
    sleep 200

    memtier_benchmark -S /var/run/memcached/memcached.sock \
      -P memcache_binary -n allkeys --key-minimum=1 \
      --key-maximum=50000000 --key-pattern=R:R -c 1 -t 36 \
      --ratio 0:1 --pipeline 8 --randomize --distinct-client-seed

    kill -INT $pid
    wait

Client benchmark results: no change (CONFIG_MEMCG=n)

Signed-off-by: Yu Zhao <yuzhao@google.com>
Acked-by: Brian Geffon <bgeffon@google.com>
Acked-by: Jan Alexander Steffens (heftig) <heftig@archlinux.org>
Acked-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Acked-by: Steven Barrett <steven@liquorix.net>
Acked-by: Suleiman Souhlal <suleiman@google.com>
Tested-by: Daniel Byrne <djbyrne@mtu.edu>
Tested-by: Donald Carr <d@chaos-reins.com>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Tested-by: Konstantin Kharlamov <Hi-Angel@yandex.ru>
Tested-by: Shuang Zhai <szhai2@cs.rochester.edu>
Tested-by: Sofia Trinh <sofia.trinh@edi.works>
Tested-by: Vaibhav Jain <vaibhav@linux.ibm.com>
---
 mm/vmscan.c | 45 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 41 insertions(+), 4 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9e2810a230a4..0663f1a3f72a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -128,6 +128,13 @@ struct scan_control {
 	/* Always discard instead of demoting to lower tier memory */
 	unsigned int no_demotion:1;
 
+#ifdef CONFIG_LRU_GEN
+	/* help make better choices when multiple memcgs are available */
+	unsigned int memcgs_need_aging:1;
+	unsigned int memcgs_need_swapping:1;
+	unsigned int memcgs_avoid_swapping:1;
+#endif
+
 	/* Allocation order */
 	s8 order;
 
@@ -4309,6 +4316,22 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 
 	VM_BUG_ON(!current_is_kswapd());
 
+	/*
+	 * To reduce the chance of going into the aging path or swapping, which
+	 * can be costly, optimistically skip them unless their corresponding
+	 * flags were cleared in the eviction path. This improves the overall
+	 * performance when multiple memcgs are available.
+	 */
+	if (!sc->memcgs_need_aging) {
+		sc->memcgs_need_aging = true;
+		sc->memcgs_avoid_swapping = !sc->memcgs_need_swapping;
+		sc->memcgs_need_swapping = true;
+		return;
+	}
+
+	sc->memcgs_need_swapping = true;
+	sc->memcgs_avoid_swapping = true;
+
 	current->reclaim_state->mm_walk = &pgdat->mm_walk;
 
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
@@ -4714,7 +4737,8 @@ static int isolate_folios(struct lruvec *lruvec, struct scan_control *sc, int sw
 	return scanned;
 }
 
-static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
+static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness,
+			bool *swapped)
 {
 	int type;
 	int scanned;
@@ -4780,6 +4804,9 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
 
 	sc->nr_reclaimed += reclaimed;
 
+	if (type == LRU_GEN_ANON && swapped)
+		*swapped = true;
+
 	return scanned;
 }
 
@@ -4808,8 +4835,10 @@ static long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, bool
 	if (!nr_to_scan)
 		return 0;
 
-	if (!need_aging)
+	if (!need_aging) {
+		sc->memcgs_need_aging = false;
 		return nr_to_scan;
+	}
 
 	/* leave the work to lru_gen_age_node() */
 	if (current_is_kswapd())
@@ -4831,6 +4860,8 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc
 {
 	struct blk_plug plug;
 	long scanned = 0;
+	bool swapped = false;
+	unsigned long reclaimed = sc->nr_reclaimed;
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 
 	lru_add_drain();
 
@@ -4856,13 +4887,19 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc
 		if (!nr_to_scan)
 			break;
 
-		delta = evict_folios(lruvec, sc, swappiness);
+		delta = evict_folios(lruvec, sc, swappiness, &swapped);
 		if (!delta)
 			break;
 
+		if (sc->memcgs_avoid_swapping && swappiness < 200 && swapped)
+			break;
+
 		scanned += delta;
-		if (scanned >= nr_to_scan)
+		if (scanned >= nr_to_scan) {
+			if (!swapped && sc->nr_reclaimed - reclaimed >= MIN_LRU_BATCH)
+				sc->memcgs_need_swapping = false;
 			break;
+		}
 
 		cond_resched();
 	}
-- 
2.35.1.1094.g7c7d902a7c-goog