From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton <akpm@linux-foundation.org>, Tejun Heo <tj@kernel.org>
Cc: "Stephen Rothwell" <sfr@rothwell.id.au>, Linux-MM <linux-mm@kvack.org>, "Andi Kleen" <ak@linux.intel.com>, "Aneesh Kumar" <aneesh.kumar@linux.ibm.com>, "Barry Song" <21cnbao@gmail.com>, "Catalin Marinas" <catalin.marinas@arm.com>, "Dave Hansen" <dave.hansen@linux.intel.com>, "Hillf Danton" <hdanton@sina.com>, "Jens Axboe" <axboe@kernel.dk>, "Jesse Barnes" <jsbarnes@google.com>, "Johannes Weiner" <hannes@cmpxchg.org>, "Jonathan Corbet" <corbet@lwn.net>, "Linus Torvalds" <torvalds@linux-foundation.org>, "Matthew Wilcox" <willy@infradead.org>, "Mel Gorman" <mgorman@suse.de>, "Michael Larabel" <Michael@michaellarabel.com>, "Michal Hocko" <mhocko@kernel.org>, "Mike Rapoport" <rppt@kernel.org>, "Rik van Riel" <riel@surriel.com>, "Vlastimil Babka" <vbabka@suse.cz>, "Will Deacon" <will@kernel.org>, "Ying Huang" <ying.huang@intel.com>, "Linux ARM" <linux-arm-kernel@lists.infradead.org>, "open list:DOCUMENTATION" <linux-doc@vger.kernel.org>, linux-kernel <linux-kernel@vger.kernel.org>, "Kernel Page Reclaim v2" <page-reclaim@google.com>, "the arch/x86 maintainers" <x86@kernel.org>, "Brian Geffon" <bgeffon@google.com>, "Jan Alexander Steffens" <heftig@archlinux.org>, "Oleksandr Natalenko" <oleksandr@natalenko.name>, "Steven Barrett" <steven@liquorix.net>, "Suleiman Souhlal" <suleiman@google.com>, "Daniel Byrne" <djbyrne@mtu.edu>, "Donald Carr" <d@chaos-reins.com>, "Holger Hoffstätte" <holger@applied-asynchrony.com>, "Konstantin Kharlamov" <Hi-Angel@yandex.ru>, "Shuang Zhai" <szhai2@cs.rochester.edu>, "Sofia Trinh" <sofia.trinh@edi.works>, "Vaibhav Jain" <vaibhav@linux.ibm.com>
Subject: Re: [PATCH v10 10/14] mm: multi-gen LRU: kill switch
Date: Tue, 26 Apr 2022 14:57:15 -0600
Message-ID: <CAOUHufbtFj0Hez7wkw3DHGDwo6wudCzCvACt2GfgrFcubW_DYg@mail.gmail.com>
In-Reply-To: <20220411191627.629f21de83cd0a520ef4a142@linux-foundation.org>

On Mon, Apr 11, 2022 at 8:16 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Wed, 6 Apr 2022 21:15:22 -0600 Yu Zhao <yuzhao@google.com> wrote:
>
> > Add /sys/kernel/mm/lru_gen/enabled as a kill switch. Components that
> > can be disabled include:
> >   0x0001: the multi-gen LRU core
> >   0x0002: walking page table, when arch_has_hw_pte_young() returns
> >           true
> >   0x0004: clearing the accessed bit in non-leaf PMD entries, when
> >           CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG=y
> >   [yYnN]: apply to all the components above
> > E.g.,
> >   echo y >/sys/kernel/mm/lru_gen/enabled
> >   cat /sys/kernel/mm/lru_gen/enabled
> >   0x0007
> >   echo 5 >/sys/kernel/mm/lru_gen/enabled
> >   cat /sys/kernel/mm/lru_gen/enabled
> >   0x0005
>
> I'm shocked that this actually works.  How does it work?  Existing
> pages & folios are drained over time or synchronously?

Basically we have a double-throw switch: once flipped, new (isolated)
pages can only be added to the lists of the current implementation.
Existing pages on the lists of the previous implementation are
synchronously drained (isolated and then re-added), with
cond_resched() of course.

> Supporting structures remain allocated, available for reenablement?

Correct.

> Why is it thought necessary to have this?  Is it expected to be
> permanent?

This is almost a must for large-scale deployments/experiments.

For deployments, we need to keep fix rollouts (high priority) and
feature enabling (low priority) separate. Rolling out multiple
binaries works but makes the process slower and more painful. So
generally, for each release there is only one binary to roll out, and
unless it's impossible, new features are disabled by default. Once a
rollout completes, i.e., reaches enough population and remains stable,
new features are turned on gradually. If something goes wrong with a
new feature, we turn off that feature rather than roll back the
kernel.

Similarly, for A/B experiments, we don't want to use two binaries.
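Since the file takes a plain bitmask, individual capabilities can be
masked in or out with ordinary shell arithmetic. A small sketch of how
the 0x0005 value quoted above can be derived (the bit values are the
ones listed in the commit message; the final write is shown as a
comment because it needs root and a kernel with this patch):

```shell
# Compute the value that turns off only the page-table-walk capability
# (bit 0x0002) while keeping the other two capabilities enabled.
all_caps=0x0007     # core | pte walks | non-leaf PMD clearing
walk_bit=0x0002     # page-table walks (arch_has_hw_pte_young())
new_caps=$(( all_caps & ~walk_bit ))
printf '0x%04x\n' "$new_caps"   # prints 0x0005
# As root, on a kernel with this patch applied:
#   echo "$(printf '0x%04x' "$new_caps")" >/sys/kernel/mm/lru_gen/enabled
```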
> > NB: the page table walks happen on the scale of seconds under heavy
> > memory pressure, in which case the mmap_lock contention is a lesser
> > concern, compared with the LRU lock contention and the I/O
> > congestion. So far the only well-known case of the mmap_lock
> > contention happens on Android, due to Scudo [1] which allocates
> > several thousand VMAs for merely a few hundred MBs. The SPF and the
> > Maple Tree also have provided their own assessments [2][3]. However,
> > if walking page tables does worsen the mmap_lock contention, the
> > kill switch can be used to disable it. In this case the multi-gen
> > LRU will suffer a minor performance degradation, as shown
> > previously.
> >
> > Clearing the accessed bit in non-leaf PMD entries can also be
> > disabled, since this behavior was not tested on x86 varieties other
> > than Intel and AMD.
> >
> > ...
> >
> > --- a/include/linux/cgroup.h
> > +++ b/include/linux/cgroup.h
> > @@ -432,6 +432,18 @@ static inline void cgroup_put(struct cgroup *cgrp)
> >  	css_put(&cgrp->self);
> >  }
> >
> > +extern struct mutex cgroup_mutex;
> > +
> > +static inline void cgroup_lock(void)
> > +{
> > +	mutex_lock(&cgroup_mutex);
> > +}
> > +
> > +static inline void cgroup_unlock(void)
> > +{
> > +	mutex_unlock(&cgroup_mutex);
> > +}
>
> It's a tad rude to export cgroup_mutex like this without (apparently)
> informing its owner (Tejun).

Looping in Tejun.

> And if we're going to wrap its operations via helper functions then
>
> - presumably all cgroup_mutex operations should be wrapped and
>
> - existing open-coded operations on this mutex should be converted.

I wrapped cgroup_mutex here because I'm not a big fan of #ifdefs
(CONFIG_CGROUPS). Internally, for cgroup code, it seems superfluous to
me to use these wrappers; e.g., developers who work on cgroup code
might not be interested in looking up these wrappers.
> > +static bool drain_evictable(struct lruvec *lruvec)
> > +{
> > +	int gen, type, zone;
> > +	int remaining = MAX_LRU_BATCH;
> > +
> > +	for_each_gen_type_zone(gen, type, zone) {
> > +		struct list_head *head = &lruvec->lrugen.lists[gen][type][zone];
> > +
> > +		while (!list_empty(head)) {
> > +			bool success;
> > +			struct folio *folio = lru_to_folio(head);
> > +
> > +			VM_BUG_ON_FOLIO(folio_test_unevictable(folio), folio);
> > +			VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
> > +			VM_BUG_ON_FOLIO(folio_is_file_lru(folio) != type, folio);
> > +			VM_BUG_ON_FOLIO(folio_zonenum(folio) != zone, folio);
>
> So many new BUG_ONs to upset Linus :(

I'll replace them with VM_WARN_ON_ONCE_FOLIO(), based on the previous
discussion.

> > +			success = lru_gen_del_folio(lruvec, folio, false);
> > +			VM_BUG_ON(!success);
> > +			lruvec_add_folio(lruvec, folio);
> > +
> > +			if (!--remaining)
> > +				return false;
> > +		}
> > +	}
> > +
> > +	return true;
> > +}
> > +
> >
> > ...
> >
> > +static ssize_t store_enable(struct kobject *kobj, struct kobj_attribute *attr,
> > +			    const char *buf, size_t len)
> > +{
> > +	int i;
> > +	unsigned int caps;
> > +
> > +	if (tolower(*buf) == 'n')
> > +		caps = 0;
> > +	else if (tolower(*buf) == 'y')
> > +		caps = -1;
> > +	else if (kstrtouint(buf, 0, &caps))
> > +		return -EINVAL;
>
> See kstrtobool()

`caps` is not a boolean, hence the plural and the below.

> > +	for (i = 0; i < NR_LRU_GEN_CAPS; i++) {
> > +		bool enable = caps & BIT(i);
> > +
> > +		if (i == LRU_GEN_CORE)
> > +			lru_gen_change_state(enable);
> > +		else if (enable)
> > +			static_branch_enable(&lru_gen_caps[i]);
> > +		else
> > +			static_branch_disable(&lru_gen_caps[i]);
> > +	}
> > +
> > +	return len;
> > +}
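For what it's worth, the accepted-input semantics of store_enable() --
'n'/'N' clears every capability, 'y'/'Y' sets them all, and anything
else is parsed as a number the way kstrtouint(buf, 0, ...) would parse
it -- can be mimicked in userspace. A hypothetical shell sketch (the
parse_caps function and the hard-coded cap count of 3 are illustrative,
not kernel code; the kernel uses caps = -1 for 'y', which is equivalent
since only NR_LRU_GEN_CAPS bits are consulted):

```shell
nr_caps=3   # stands in for NR_LRU_GEN_CAPS in this sketch
# Mimic of store_enable()'s input parsing: print the resulting bitmask.
parse_caps() {
	case "$1" in
	[nN]*) echo 0 ;;                          # disable everything
	[yY]*) echo $(( (1 << nr_caps) - 1 )) ;;  # enable everything
	*)     echo $(( $1 )) ;;                  # decimal or 0x... hex
	esac
}
parse_caps y     # prints 7, i.e. 0x0007
parse_caps 0x5   # prints 5, i.e. 0x0005
```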