From: Johannes Weiner <hannes@cmpxchg.org>
To: Yu Zhao <yuzhao@google.com>
Cc: "Andrew Morton" <akpm@linux-foundation.org>, "Mel Gorman" <mgorman@suse.de>, "Michal Hocko" <mhocko@kernel.org>, "Andi Kleen" <ak@linux.intel.com>, "Aneesh Kumar" <aneesh.kumar@linux.ibm.com>, "Barry Song" <21cnbao@gmail.com>, "Catalin Marinas" <catalin.marinas@arm.com>, "Dave Hansen" <dave.hansen@linux.intel.com>, "Hillf Danton" <hdanton@sina.com>, "Jens Axboe" <axboe@kernel.dk>, "Jesse Barnes" <jsbarnes@google.com>, "Jonathan Corbet" <corbet@lwn.net>, "Linus Torvalds" <torvalds@linux-foundation.org>, "Matthew Wilcox" <willy@infradead.org>, "Michael Larabel" <Michael@michaellarabel.com>, "Mike Rapoport" <rppt@kernel.org>, "Rik van Riel" <riel@surriel.com>, "Vlastimil Babka" <vbabka@suse.cz>, "Will Deacon" <will@kernel.org>, "Ying Huang" <ying.huang@intel.com>, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, page-reclaim@google.com, x86@kernel.org, "Brian Geffon" <bgeffon@google.com>, "Jan Alexander Steffens" <heftig@archlinux.org>, "Oleksandr Natalenko" <oleksandr@natalenko.name>, "Steven Barrett" <steven@liquorix.net>, "Suleiman Souhlal" <suleiman@google.com>, "Daniel Byrne" <djbyrne@mtu.edu>, "Donald Carr" <d@chaos-reins.com>, "Holger Hoffstätte" <holger@applied-asynchrony.com>, "Konstantin Kharlamov" <Hi-Angel@yandex.ru>, "Shuang Zhai" <szhai2@cs.rochester.edu>, "Sofia Trinh" <sofia.trinh@edi.works>
Subject: Re: [PATCH v7 04/12] mm: multigenerational LRU: groundwork
Date: Thu, 3 Mar 2022 10:29:44 -0500
Message-ID: <YiDe6DcLGEfTTKD5@cmpxchg.org> (raw)
In-Reply-To: <YhNJ4LVWpmZgLh4I@google.com>

Hi Yu,

On Mon, Feb 21, 2022 at 01:14:24AM -0700, Yu Zhao wrote:
> On Tue, Feb 15, 2022 at 04:53:56PM -0500, Johannes Weiner wrote:
> > On Tue, Feb 15, 2022 at 02:43:05AM -0700, Yu Zhao wrote:
> > > On Thu, Feb 10, 2022 at 03:41:57PM -0500, Johannes Weiner wrote:
> > > > > +static inline bool lru_gen_is_active(struct lruvec *lruvec, int gen)
> > > > > +{
> > > > > +	unsigned long max_seq = lruvec->lrugen.max_seq;
> > > > > +
> > > > > +	VM_BUG_ON(gen >= MAX_NR_GENS);
> > > > > +
> > > > > +	/* see the comment on MIN_NR_GENS */
> > > > > +	return gen == lru_gen_from_seq(max_seq) || gen == lru_gen_from_seq(max_seq - 1);
> > > > > +}
> > > >
> > > > I'm still reading the series, so correct me if I'm wrong: the "active"
> > > > set is split into two generations for the sole purpose of the
> > > > second-chance policy for fresh faults, right?
> > >
> > > To be precise, the active/inactive notion on top of generations is
> > > just for ABI compatibility, e.g., the counters in /proc/vmstat.
> > > Otherwise, this function wouldn't be needed.
> >
> > Ah! would you mind adding this as a comment to the function?
>
> Will do.
>
> > But AFAICS there is the lru_gen_del_folio() callsite that maps it to
> > the PG_active flag - which in turn gets used by add_folio() to place
> > the thing back on the max_seq generation. So I suppose there is a
> > secondary purpose of the function for remembering the page's rough age
> > for non-reclaim isolation.
>
> Yes, e.g., migration.

Ok, thanks for clarifying. That should also be in the comment.

On scan resistance:

> > The concern isn't the scan overhead, but jankiness from the workingset
> > being flooded out by streaming IO.
>
> Yes, MGLRU uses a different approach to solve this problem, and for
> its approach, the scan overhead is the concern.
>
> MGLRU detects (defines) the working set by scanning the entire memory
> for each generation, and it counters the flooding by accelerating the
> creation of generations. IOW, all mapped pages have an equal chance to
> get scanned, no matter which generation they are in. This is a design
> difference compared with the active/inactive LRU, which tries to scan
> the active/inactive lists less/more frequently.
> > The concrete usecase at the time was a torrent client hashing a
> > downloaded file and thereby kicking out the desktop environment, which
> > caused jankiness. The hashing didn't benefit from caching - the file
> > wouldn't have fit into RAM anyway - so this was pointless to boot.
> >
> > Essentially, the tradeoff is this:
> >
> > 1) If you treat new pages as hot, you accelerate workingset
> >    transitions, but on the flipside you risk unnecessary refaults in
> >    running applications when those new pages are one-off.
> >
> > 2) If you take new pages with a grain of salt, you protect existing
> >    applications better from one-off floods, but risk refaults in NEW
> >    applications while they're trying to start up.
>
> Agreed.
>
> > There are two arguments for why 2) is preferable:
> >
> > 1) Users are tolerant of cache misses when applications first launch,
> >    much less so after they've been running for hours.
>
> Our CUJs (Critical User Journeys) respectfully disagree :)
>
> They are built on the observation that once users have moved onto
> another tab/app, they are more likely to stay with the new tab/app
> rather than go back to the old ones. Speaking for myself, this is
> generally the case.

That's in line with what I said. Where is the disagreement?

> > 2) Workingset transitions (and associated jankiness) are bounded by
> >    the amount of RAM you need to repopulate. But streaming IO is
> >    bounded by storage, and datasets are routinely several times the
> >    amount of RAM. Uncacheable sets in excess of RAM can produce an
> >    infinite stream of "new" references; not protecting the workingset
> >    from that means longer or even sustained jankiness.
>
> I'd argue the opposite -- we shouldn't risk refaulting fresh hot pages
> just to accommodate this concrete yet minor use case, especially
> considering torrent has been given the means (MADV_SEQUENTIAL) to help
> itself.
>
> I appreciate all your points here. The bottom line is we agree this is
> a trade off. For what we disagree about, we could both be right -- it
> comes down to what workloads we care about *more*.

It's a straight-forward question: How does MGLRU avoid cache pollution
from scans?

Your answer above seems to be "it just does". Your answer here seems to
be "it doesn't, but it doesn't matter". Forgive me if I'm misreading
what you're saying.

But it's not a minor concern. Read the motivation behind any modern
cache algorithm - ARC, LIRS, Clock-Pro, LRU-K, 2Q - and scan resistance
is the reason why they all exist in the first place.

  "The LRU-K algorithm surpasses conventional buffering algorithms in
  discriminating between frequently and infrequently referenced pages."
  - The LRU-K page replacement algorithm for database disk buffering,
  O'Neil et al, 1993

  "Although LRU replacement policy has been commonly used in the buffer
  cache management, it is well known for its inability to cope with
  access patterns with weak locality." - LIRS: an efficient low
  inter-reference recency set replacement policy to improve buffer
  cache performance, Jiang, Zhang, 2002

  "The self-tuning, low-overhead, scan-resistant adaptive replacement
  cache algorithm outperforms the least-recently-used algorithm by
  dynamically responding to changing access patterns and continually
  balancing between workload recency and frequency features." -
  Outperforming LRU with an adaptive replacement cache algorithm,
  Megiddo, Modha, 2004

  "Over the last three decades, the inability of LRU as well as CLOCK
  to handle weak locality accesses has become increasingly serious, and
  an effective fix becomes increasingly desirable." - CLOCK-Pro: An
  Effective Improvement of the CLOCK Replacement, Jiang et al, 2005

We can't rely on MADV_SEQUENTIAL alone. Not all accesses know in
advance that they'll be one-off; it can be a group of uncoordinated
tasks causing the pattern etc.

This is a pretty fundamental issue. It would be good to get a more
satisfying answer on this.
> > > > You can drop the memcg parameter and use lruvec_memcg().
> > >
> > > lruvec_memcg() isn't available yet when pgdat_init_internals() calls
> > > this function because mem_cgroup_disabled() is initialized afterward.
> >
> > Good catch. That'll container_of() into garbage. However, we have to
> > assume that somebody's going to try that simplification again, so we
> > should set up the code now to prevent issues.
> >
> > cgroup_disable parsing is self-contained, so we can pull it ahead in
> > the init sequence. How about this?
> >
> > diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> > index 9d05c3ca2d5e..b544d768edc8 100644
> > --- a/kernel/cgroup/cgroup.c
> > +++ b/kernel/cgroup/cgroup.c
> > @@ -6464,9 +6464,9 @@ static int __init cgroup_disable(char *str)
> >  			break;
> >  		}
> >  	}
> > -	return 1;
> > +	return 0;
> >  }
> > -__setup("cgroup_disable=", cgroup_disable);
> > +early_param("cgroup_disable", cgroup_disable);
>
> I think early_param() is still after pgdat_init_internals(), no?

It's called twice for some reason, but AFAICS the first one is always
called before pgdat_init_internals():

start_kernel()
  setup_arch()
    parse_early_param()
    x86_init.paging.pagetable_init();
      paging_init()
        zone_sizes_init()
          free_area_init()
            free_area_init_node()
              free_area_init_core()
                pgdat_init_internals()
  parse_early_param()

It's the same/similar for arm, sparc and mips.