From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Andi Kleen" <ak@linux.intel.com>, "Aneesh Kumar" <aneesh.kumar@linux.ibm.com>,
	"Catalin Marinas" <catalin.marinas@arm.com>, "Dave Hansen" <dave.hansen@linux.intel.com>,
	"Hillf Danton" <hdanton@sina.com>, "Jens Axboe" <axboe@kernel.dk>,
	"Johannes Weiner" <hannes@cmpxchg.org>, "Jonathan Corbet" <corbet@lwn.net>,
	"Linus Torvalds" <torvalds@linux-foundation.org>, "Matthew Wilcox" <willy@infradead.org>,
	"Mel Gorman" <mgorman@suse.de>, "Michael Larabel" <Michael@michaellarabel.com>,
	"Michal Hocko" <mhocko@kernel.org>, "Mike Rapoport" <rppt@kernel.org>,
	"Peter Zijlstra" <peterz@infradead.org>, "Tejun Heo" <tj@kernel.org>,
	"Vlastimil Babka" <vbabka@suse.cz>, "Will Deacon" <will@kernel.org>,
	linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
	page-reclaim@google.com, "Yu Zhao" <yuzhao@google.com>,
	"Brian Geffon" <bgeffon@google.com>, "Jan Alexander Steffens" <heftig@archlinux.org>,
	"Oleksandr Natalenko" <oleksandr@natalenko.name>, "Steven Barrett" <steven@liquorix.net>,
	"Suleiman Souhlal" <suleiman@google.com>, "Daniel Byrne" <djbyrne@mtu.edu>,
	"Donald Carr" <d@chaos-reins.com>, "Holger Hoffstätte" <holger@applied-asynchrony.com>,
	"Konstantin Kharlamov" <Hi-Angel@yandex.ru>, "Shuang Zhai" <szhai2@cs.rochester.edu>,
	"Sofia Trinh" <sofia.trinh@edi.works>, "Vaibhav Jain" <vaibhav@linux.ibm.com>
Subject: [PATCH v14 09/14] mm: multi-gen LRU: optimize multiple memcgs
Date: Mon, 15 Aug 2022 01:13:28 -0600
Message-ID: <20220815071332.627393-10-yuzhao@google.com> (raw)
In-Reply-To: <20220815071332.627393-1-yuzhao@google.com>

When multiple memcgs are available, it is possible to make better choices
based on generations and tiers and therefore improve the overall
performance under global memory pressure. This patch adds a rudimentary
optimization to select memcgs that can drop single-use unmapped clean
pages first. Doing so reduces the chance of going into the aging path or
swapping, both of which can be costly.

A typical example that benefits from this optimization is a server running
mixed types of workloads, e.g., a heavy anon workload in one memcg and a
heavy buffered I/O workload in the other.

Though this optimization can be applied to both kswapd and direct reclaim,
it is only added to kswapd to keep the patchset manageable. Later
improvements will cover the direct reclaim path.

Server benchmark results:
  Mixed workloads:
    fio (buffered I/O): +[19, 21]%
                IOPS         BW
      patch1-8: 1880k        7343MiB/s
      patch1-9: 2252k        8796MiB/s

    memcached (anon): +[119, 123]%
                Ops/sec      KB/sec
      patch1-8: 862768.65    33514.68
      patch1-9: 1911022.12   74234.54

  Mixed workloads:
    fio (buffered I/O): +[75, 77]%
                IOPS         BW
      5.19-rc1: 1279k        4996MiB/s
      patch1-9: 2252k        8796MiB/s

    memcached (anon): +[13, 15]%
                Ops/sec      KB/sec
      5.19-rc1: 1673524.04   65008.87
      patch1-9: 1911022.12   74234.54

  Configurations:
    (changes since patch 6)

    cat mixed.sh
    modprobe brd rd_nr=2 rd_size=56623104

    swapoff -a
    mkswap /dev/ram0
    swapon /dev/ram0

    mkfs.ext4 /dev/ram1
    mount -t ext4 /dev/ram1 /mnt

    memtier_benchmark -S /var/run/memcached/memcached.sock \
      -P memcache_binary -n allkeys --key-minimum=1 \
      --key-maximum=50000000 --key-pattern=P:P -c 1 -t 36 \
      --ratio 1:0 --pipeline 8 -d 2000

    fio -name=mglru --numjobs=36 --directory=/mnt --size=1408m \
      --buffered=1 --ioengine=io_uring --iodepth=128 \
      --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
      --rw=randread --random_distribution=random --norandommap \
      --time_based --ramp_time=10m --runtime=90m --group_reporting &
    pid=$!

    sleep 200
    memtier_benchmark -S /var/run/memcached/memcached.sock \
      -P memcache_binary -n allkeys --key-minimum=1 \
      --key-maximum=50000000 --key-pattern=R:R -c 1 -t 36 \
      --ratio 0:1 --pipeline 8 --randomize --distinct-client-seed

    kill -INT $pid
    wait

Client benchmark results:
  no change (CONFIG_MEMCG=n)

Signed-off-by: Yu Zhao <yuzhao@google.com>
Acked-by: Brian Geffon <bgeffon@google.com>
Acked-by: Jan Alexander Steffens (heftig) <heftig@archlinux.org>
Acked-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Acked-by: Steven Barrett <steven@liquorix.net>
Acked-by: Suleiman Souhlal <suleiman@google.com>
Tested-by: Daniel Byrne <djbyrne@mtu.edu>
Tested-by: Donald Carr <d@chaos-reins.com>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Tested-by: Konstantin Kharlamov <Hi-Angel@yandex.ru>
Tested-by: Shuang Zhai <szhai2@cs.rochester.edu>
Tested-by: Sofia Trinh <sofia.trinh@edi.works>
Tested-by: Vaibhav Jain <vaibhav@linux.ibm.com>
---
 mm/vmscan.c | 55 ++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 46 insertions(+), 9 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d1dfc0a77b6f..ee51c752a3af 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -131,6 +131,13 @@ struct scan_control {
 	/* Always discard instead of demoting to lower tier memory */
 	unsigned int no_demotion:1;
 
+#ifdef CONFIG_LRU_GEN
+	/* help make better choices when multiple memcgs are available */
+	unsigned int memcgs_need_aging:1;
+	unsigned int memcgs_need_swapping:1;
+	unsigned int memcgs_avoid_swapping:1;
+#endif
+
 	/* Allocation order */
 	s8 order;
 
@@ -4437,6 +4444,22 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 
 	VM_WARN_ON_ONCE(!current_is_kswapd());
 
+	/*
+	 * To reduce the chance of going into the aging path or swapping, which
+	 * can be costly, optimistically skip them unless their corresponding
+	 * flags were cleared in the eviction path. This improves the overall
+	 * performance when multiple memcgs are available.
+	 */
+	if (!sc->memcgs_need_aging) {
+		sc->memcgs_need_aging = true;
+		sc->memcgs_avoid_swapping = !sc->memcgs_need_swapping;
+		sc->memcgs_need_swapping = true;
+		return;
+	}
+
+	sc->memcgs_need_swapping = true;
+	sc->memcgs_avoid_swapping = true;
+
 	set_mm_walk(pgdat);
 
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
@@ -4846,7 +4869,8 @@ static int isolate_folios(struct lruvec *lruvec, struct scan_control *sc, int sw
 	return scanned;
 }
 
-static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
+static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness,
+			bool *need_swapping)
 {
 	int type;
 	int scanned;
@@ -4909,6 +4933,9 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
 
 	sc->nr_reclaimed += reclaimed;
 
+	if (type == LRU_GEN_ANON && need_swapping)
+		*need_swapping = true;
+
 	return scanned;
 }
 
@@ -4918,10 +4945,9 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
  * reclaim.
  */
 static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc,
-				    bool can_swap, unsigned long reclaimed)
+				    bool can_swap, unsigned long reclaimed, bool *need_aging)
 {
 	int priority;
-	bool need_aging;
 	unsigned long nr_to_scan;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	DEFINE_MAX_SEQ(lruvec);
@@ -4936,7 +4962,7 @@ static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *
 	    (mem_cgroup_below_low(memcg) && !sc->memcg_low_reclaim))
 		return 0;
 
-	nr_to_scan = get_nr_evictable(lruvec, max_seq, min_seq, can_swap, &need_aging);
+	nr_to_scan = get_nr_evictable(lruvec, max_seq, min_seq, can_swap, need_aging);
 	if (!nr_to_scan)
 		return 0;
 
@@ -4952,7 +4978,7 @@ static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *
 	if (!nr_to_scan)
 		return 0;
 
-	if (!need_aging)
+	if (!*need_aging)
 		return nr_to_scan;
 
 	/* skip the aging path at the default priority */
@@ -4972,6 +4998,8 @@ static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *
 static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 {
 	struct blk_plug plug;
+	bool need_aging = false;
+	bool need_swapping = false;
 	unsigned long scanned = 0;
 	unsigned long reclaimed = sc->nr_reclaimed;
 
@@ -4993,21 +5021,30 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc
 		else
 			swappiness = 0;
 
-		nr_to_scan = get_nr_to_scan(lruvec, sc, swappiness, reclaimed);
+		nr_to_scan = get_nr_to_scan(lruvec, sc, swappiness, reclaimed, &need_aging);
 		if (!nr_to_scan)
-			break;
+			goto done;
 
-		delta = evict_folios(lruvec, sc, swappiness);
+		delta = evict_folios(lruvec, sc, swappiness, &need_swapping);
 		if (!delta)
-			break;
+			goto done;
 
 		scanned += delta;
 		if (scanned >= nr_to_scan)
 			break;
 
+		if (sc->memcgs_avoid_swapping && swappiness < 200 && need_swapping)
+			break;
+
 		cond_resched();
 	}
 
+	/* see the comment in lru_gen_age_node() */
+	if (!need_aging)
+		sc->memcgs_need_aging = false;
+	if (!need_swapping)
+		sc->memcgs_need_swapping = false;
+done:
 	clear_mm_walk();
 	blk_finish_plug(&plug);
-- 
2.37.1.595.g718a3a8f04-goog