From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton <akpm@linux-foundation.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Mel Gorman <mgorman@suse.de>, Michal Hocko <mhocko@kernel.org>
Cc: "Andi Kleen" <ak@linux.intel.com>,
"Aneesh Kumar" <aneesh.kumar@linux.ibm.com>,
"Barry Song" <21cnbao@gmail.com>,
"Catalin Marinas" <catalin.marinas@arm.com>,
"Dave Hansen" <dave.hansen@linux.intel.com>,
"Hillf Danton" <hdanton@sina.com>, "Jens Axboe" <axboe@kernel.dk>,
"Jesse Barnes" <jsbarnes@google.com>,
"Jonathan Corbet" <corbet@lwn.net>,
"Linus Torvalds" <torvalds@linux-foundation.org>,
"Matthew Wilcox" <willy@infradead.org>,
"Michael Larabel" <Michael@michaellarabel.com>,
"Mike Rapoport" <rppt@kernel.org>,
"Rik van Riel" <riel@surriel.com>,
"Vlastimil Babka" <vbabka@suse.cz>,
"Will Deacon" <will@kernel.org>,
"Ying Huang" <ying.huang@intel.com>,
linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
page-reclaim@google.com, x86@kernel.org,
"Yu Zhao" <yuzhao@google.com>,
"Brian Geffon" <bgeffon@google.com>,
"Jan Alexander Steffens" <heftig@archlinux.org>,
"Oleksandr Natalenko" <oleksandr@natalenko.name>,
"Steven Barrett" <steven@liquorix.net>,
"Suleiman Souhlal" <suleiman@google.com>,
"Daniel Byrne" <djbyrne@mtu.edu>,
"Donald Carr" <d@chaos-reins.com>,
"Holger Hoffstätte" <holger@applied-asynchrony.com>,
"Konstantin Kharlamov" <Hi-Angel@yandex.ru>,
"Shuang Zhai" <szhai2@cs.rochester.edu>,
"Sofia Trinh" <sofia.trinh@edi.works>
Subject: [PATCH v7 09/12] mm: multigenerational LRU: runtime switch
Date: Tue, 8 Feb 2022 01:18:59 -0700
Message-ID: <20220208081902.3550911-10-yuzhao@google.com>
In-Reply-To: <20220208081902.3550911-1-yuzhao@google.com>
Add /sys/kernel/mm/lru_gen/enabled as a runtime switch. Features that
can be enabled or disabled include:
  0x0001: the multigenerational LRU
  0x0002: the page table walks, when arch_has_hw_pte_young() returns
          true
  0x0004: the use of the accessed bit in non-leaf PMD entries, when
          CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG=y
  [yYnN]: apply to all the features above

E.g.,
  echo y >/sys/kernel/mm/lru_gen/enabled
  cat /sys/kernel/mm/lru_gen/enabled
  0x0007
  echo 5 >/sys/kernel/mm/lru_gen/enabled
  cat /sys/kernel/mm/lru_gen/enabled
  0x0005

NB: the page table walks happen on the scale of seconds under heavy
memory pressure. Under such a condition, mmap_lock contention is a
lesser concern compared with LRU lock contention and I/O congestion.
So far the only well-known case of mmap_lock contention is on Android,
due to Scudo [1], which allocates several thousand VMAs for merely a
few hundred MBs. The SPF and Maple Tree patchsets have also provided
their own assessments [2][3]. However, if the page table walks do
worsen mmap_lock contention, the runtime switch can be used to disable
this feature; in that case, the multigenerational LRU suffers only a
minor performance degradation, as shown previously. The use of the
accessed bit in non-leaf PMD entries can also be disabled, since this
feature has not been tested on x86 varieties other than Intel and AMD.
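
For example, a minimal sketch of selectively disabling both optional
features while keeping the multigenerational LRU itself enabled,
assuming only the feature bits listed above:
  # keep 0x0001, clear 0x0002 and 0x0004
  echo 1 >/sys/kernel/mm/lru_gen/enabled
  cat /sys/kernel/mm/lru_gen/enabled
  0x0001
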
[1] https://source.android.com/devices/tech/debug/scudo
[2] https://lore.kernel.org/lkml/20220128131006.67712-1-michel@lespinasse.org/
[3] https://lore.kernel.org/lkml/20220202024137.2516438-1-Liam.Howlett@oracle.com/
Signed-off-by: Yu Zhao <yuzhao@google.com>
Acked-by: Brian Geffon <bgeffon@google.com>
Acked-by: Jan Alexander Steffens (heftig) <heftig@archlinux.org>
Acked-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Acked-by: Steven Barrett <steven@liquorix.net>
Acked-by: Suleiman Souhlal <suleiman@google.com>
Tested-by: Daniel Byrne <djbyrne@mtu.edu>
Tested-by: Donald Carr <d@chaos-reins.com>
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Tested-by: Konstantin Kharlamov <Hi-Angel@yandex.ru>
Tested-by: Shuang Zhai <szhai2@cs.rochester.edu>
Tested-by: Sofia Trinh <sofia.trinh@edi.works>
---
include/linux/cgroup.h | 15 +-
include/linux/mm_inline.h | 10 +-
include/linux/mmzone.h | 7 +
kernel/cgroup/cgroup-internal.h | 1 -
mm/Kconfig | 6 +
mm/vmscan.c | 236 +++++++++++++++++++++++++++++++-
6 files changed, 267 insertions(+), 8 deletions(-)
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 75c151413fda..b145025f3eac 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -432,6 +432,18 @@ static inline void cgroup_put(struct cgroup *cgrp)
css_put(&cgrp->self);
}
+extern struct mutex cgroup_mutex;
+
+static inline void cgroup_lock(void)
+{
+ mutex_lock(&cgroup_mutex);
+}
+
+static inline void cgroup_unlock(void)
+{
+ mutex_unlock(&cgroup_mutex);
+}
+
/**
* task_css_set_check - obtain a task's css_set with extra access conditions
* @task: the task to obtain css_set for
@@ -446,7 +458,6 @@ static inline void cgroup_put(struct cgroup *cgrp)
* as locks used during the cgroup_subsys::attach() methods.
*/
#ifdef CONFIG_PROVE_RCU
-extern struct mutex cgroup_mutex;
extern spinlock_t css_set_lock;
#define task_css_set_check(task, __c) \
rcu_dereference_check((task)->cgroups, \
@@ -707,6 +718,8 @@ struct cgroup;
static inline u64 cgroup_id(const struct cgroup *cgrp) { return 1; }
static inline void css_get(struct cgroup_subsys_state *css) {}
static inline void css_put(struct cgroup_subsys_state *css) {}
+static inline void cgroup_lock(void) {}
+static inline void cgroup_unlock(void) {}
static inline int cgroup_attach_task_all(struct task_struct *from,
struct task_struct *t) { return 0; }
static inline int cgroupstats_build(struct cgroupstats *stats,
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 37c8a0ede4ff..130d62751e05 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -96,7 +96,15 @@ static __always_inline enum lru_list folio_lru_list(struct folio *folio)
static inline bool lru_gen_enabled(void)
{
- return true;
+#ifdef CONFIG_LRU_GEN_ENABLED
+ DECLARE_STATIC_KEY_TRUE(lru_gen_caps[NR_LRU_GEN_CAPS]);
+
+ return static_branch_likely(&lru_gen_caps[LRU_GEN_CORE]);
+#else
+ DECLARE_STATIC_KEY_FALSE(lru_gen_caps[NR_LRU_GEN_CAPS]);
+
+ return static_branch_unlikely(&lru_gen_caps[LRU_GEN_CORE]);
+#endif
}
static inline bool lru_gen_in_fault(void)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fa0a7a84ee58..4ecec9152761 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -311,6 +311,13 @@ struct page_vma_mapped_walk;
#ifdef CONFIG_LRU_GEN
+enum {
+ LRU_GEN_CORE,
+ LRU_GEN_MM_WALK,
+ LRU_GEN_NONLEAF_YOUNG,
+ NR_LRU_GEN_CAPS
+};
+
#define MIN_LRU_BATCH BITS_PER_LONG
#define MAX_LRU_BATCH (MIN_LRU_BATCH * 128)
diff --git a/kernel/cgroup/cgroup-internal.h b/kernel/cgroup/cgroup-internal.h
index 6e36e854b512..929ed3bf1a7c 100644
--- a/kernel/cgroup/cgroup-internal.h
+++ b/kernel/cgroup/cgroup-internal.h
@@ -165,7 +165,6 @@ struct cgroup_mgctx {
#define DEFINE_CGROUP_MGCTX(name) \
struct cgroup_mgctx name = CGROUP_MGCTX_INIT(name)
-extern struct mutex cgroup_mutex;
extern spinlock_t css_set_lock;
extern struct cgroup_subsys *cgroup_subsys[];
extern struct list_head cgroup_roots;
diff --git a/mm/Kconfig b/mm/Kconfig
index e899623d5df0..aae72b740d8a 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -903,6 +903,12 @@ config LRU_GEN
Documentation/admin-guide/mm/multigen_lru.rst and
Documentation/vm/multigen_lru.rst for details.
+config LRU_GEN_ENABLED
+ bool "Enable by default"
+ depends on LRU_GEN
+ help
+ This option enables the multigenerational LRU by default.
+
config NR_LRU_GENS
int "Max number of generations"
depends on LRU_GEN
diff --git a/mm/vmscan.c b/mm/vmscan.c
index fc09b6c10624..700c35f2a030 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3066,6 +3066,12 @@ enum {
TYPE_FILE,
};
+#ifdef CONFIG_LRU_GEN_ENABLED
+DEFINE_STATIC_KEY_ARRAY_TRUE(lru_gen_caps, NR_LRU_GEN_CAPS);
+#else
+DEFINE_STATIC_KEY_ARRAY_FALSE(lru_gen_caps, NR_LRU_GEN_CAPS);
+#endif
+
/******************************************************************************
* shorthand helpers
******************************************************************************/
@@ -3102,6 +3108,15 @@ static int folio_lru_tier(struct folio *folio)
return lru_tier_from_refs(refs);
}
+static bool get_cap(int cap)
+{
+#ifdef CONFIG_LRU_GEN_ENABLED
+ return static_branch_likely(&lru_gen_caps[cap]);
+#else
+ return static_branch_unlikely(&lru_gen_caps[cap]);
+#endif
+}
+
static struct lruvec *get_lruvec(struct mem_cgroup *memcg, int nid)
{
struct pglist_data *pgdat = NODE_DATA(nid);
@@ -3893,7 +3908,8 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long next, struct vm_area
goto next;
if (!pmd_trans_huge(pmd[i])) {
- if (IS_ENABLED(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG))
+ if (IS_ENABLED(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG) &&
+ get_cap(LRU_GEN_NONLEAF_YOUNG))
pmdp_test_and_clear_young(vma, addr, pmd + i);
goto next;
}
@@ -4000,10 +4016,12 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
priv->mm_stats[MM_PMD_TOTAL]++;
#ifdef CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG
- if (!pmd_young(val))
- continue;
+ if (get_cap(LRU_GEN_NONLEAF_YOUNG)) {
+ if (!pmd_young(val))
+ continue;
- walk_pmd_range_locked(pud, addr, vma, walk, &pos);
+ walk_pmd_range_locked(pud, addr, vma, walk, &pos);
+ }
#endif
if (!priv->full_scan && !test_bloom_filter(priv->lruvec, priv->max_seq, pmd + i))
continue;
@@ -4235,7 +4253,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
* handful of PTEs. Spreading the work out over a period of time usually
* is less efficient, but it avoids bursty page faults.
*/
- if (!full_scan && !arch_has_hw_pte_young()) {
+ if (!full_scan && (!arch_has_hw_pte_young() || !get_cap(LRU_GEN_MM_WALK))) {
success = iterate_mm_list_nowalk(lruvec, max_seq);
goto done;
}
@@ -4941,6 +4959,211 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc
current->reclaim_state->mm_walk = NULL;
}
+/******************************************************************************
+ * state change
+ ******************************************************************************/
+
+static bool __maybe_unused state_is_valid(struct lruvec *lruvec)
+{
+ struct lru_gen_struct *lrugen = &lruvec->lrugen;
+
+ if (lrugen->enabled) {
+ enum lru_list lru;
+
+ for_each_evictable_lru(lru) {
+ if (!list_empty(&lruvec->lists[lru]))
+ return false;
+ }
+ } else {
+ int gen, type, zone;
+
+ for_each_gen_type_zone(gen, type, zone) {
+ if (!list_empty(&lrugen->lists[gen][type][zone]))
+ return false;
+
+ /* unlikely but not a bug when reset_batch_size() is pending */
+ VM_WARN_ON(lrugen->nr_pages[gen][type][zone]);
+ }
+ }
+
+ return true;
+}
+
+static bool fill_evictable(struct lruvec *lruvec)
+{
+ enum lru_list lru;
+ int remaining = MAX_LRU_BATCH;
+
+ for_each_evictable_lru(lru) {
+ int type = is_file_lru(lru);
+ bool active = is_active_lru(lru);
+ struct list_head *head = &lruvec->lists[lru];
+
+ while (!list_empty(head)) {
+ bool success;
+ struct folio *folio = lru_to_folio(head);
+
+ VM_BUG_ON_FOLIO(folio_test_unevictable(folio), folio);
+ VM_BUG_ON_FOLIO(folio_test_active(folio) != active, folio);
+ VM_BUG_ON_FOLIO(folio_is_file_lru(folio) != type, folio);
+ VM_BUG_ON_FOLIO(folio_lru_gen(folio) < MAX_NR_GENS, folio);
+
+ lruvec_del_folio(lruvec, folio);
+ success = lru_gen_add_folio(lruvec, folio, false);
+ VM_BUG_ON(!success);
+
+ if (!--remaining)
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static bool drain_evictable(struct lruvec *lruvec)
+{
+ int gen, type, zone;
+ int remaining = MAX_LRU_BATCH;
+
+ for_each_gen_type_zone(gen, type, zone) {
+ struct list_head *head = &lruvec->lrugen.lists[gen][type][zone];
+
+ while (!list_empty(head)) {
+ bool success;
+ struct folio *folio = lru_to_folio(head);
+
+ VM_BUG_ON_FOLIO(folio_test_unevictable(folio), folio);
+ VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
+ VM_BUG_ON_FOLIO(folio_is_file_lru(folio) != type, folio);
+ VM_BUG_ON_FOLIO(folio_zonenum(folio) != zone, folio);
+
+ success = lru_gen_del_folio(lruvec, folio, false);
+ VM_BUG_ON(!success);
+ lruvec_add_folio(lruvec, folio);
+
+ if (!--remaining)
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static void lru_gen_change_state(bool enable)
+{
+ static DEFINE_MUTEX(state_mutex);
+
+ struct mem_cgroup *memcg;
+
+ cgroup_lock();
+ cpus_read_lock();
+ get_online_mems();
+ mutex_lock(&state_mutex);
+
+ if (enable == lru_gen_enabled())
+ goto unlock;
+
+ if (enable)
+ static_branch_enable_cpuslocked(&lru_gen_caps[LRU_GEN_CORE]);
+ else
+ static_branch_disable_cpuslocked(&lru_gen_caps[LRU_GEN_CORE]);
+
+ memcg = mem_cgroup_iter(NULL, NULL, NULL);
+ do {
+ int nid;
+
+ for_each_node(nid) {
+ struct lruvec *lruvec = get_lruvec(memcg, nid);
+
+ if (!lruvec)
+ continue;
+
+ spin_lock_irq(&lruvec->lru_lock);
+
+ VM_BUG_ON(!seq_is_valid(lruvec));
+ VM_BUG_ON(!state_is_valid(lruvec));
+
+ lruvec->lrugen.enabled = enable;
+
+ while (!(enable ? fill_evictable(lruvec) : drain_evictable(lruvec))) {
+ spin_unlock_irq(&lruvec->lru_lock);
+ cond_resched();
+ spin_lock_irq(&lruvec->lru_lock);
+ }
+
+ spin_unlock_irq(&lruvec->lru_lock);
+ }
+
+ cond_resched();
+ } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
+unlock:
+ mutex_unlock(&state_mutex);
+ put_online_mems();
+ cpus_read_unlock();
+ cgroup_unlock();
+}
+
+/******************************************************************************
+ * sysfs interface
+ ******************************************************************************/
+
+static ssize_t show_enable(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
+{
+ unsigned int caps = 0;
+
+ if (get_cap(LRU_GEN_CORE))
+ caps |= BIT(LRU_GEN_CORE);
+
+ if (arch_has_hw_pte_young() && get_cap(LRU_GEN_MM_WALK))
+ caps |= BIT(LRU_GEN_MM_WALK);
+
+ if (IS_ENABLED(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG) && get_cap(LRU_GEN_NONLEAF_YOUNG))
+ caps |= BIT(LRU_GEN_NONLEAF_YOUNG);
+
+ return snprintf(buf, PAGE_SIZE, "0x%04x\n", caps);
+}
+
+static ssize_t store_enable(struct kobject *kobj, struct kobj_attribute *attr,
+ const char *buf, size_t len)
+{
+ int i;
+ unsigned int caps;
+
+ if (tolower(*buf) == 'n')
+ caps = 0;
+ else if (tolower(*buf) == 'y')
+ caps = -1;
+ else if (kstrtouint(buf, 0, &caps))
+ return -EINVAL;
+
+ for (i = 0; i < NR_LRU_GEN_CAPS; i++) {
+ bool enable = caps & BIT(i);
+
+ if (i == LRU_GEN_CORE)
+ lru_gen_change_state(enable);
+ else if (enable)
+ static_branch_enable(&lru_gen_caps[i]);
+ else
+ static_branch_disable(&lru_gen_caps[i]);
+ }
+
+ return len;
+}
+
+static struct kobj_attribute lru_gen_enabled_attr = __ATTR(
+ enabled, 0644, show_enable, store_enable
+);
+
+static struct attribute *lru_gen_attrs[] = {
+ &lru_gen_enabled_attr.attr,
+ NULL
+};
+
+static struct attribute_group lru_gen_attr_group = {
+ .name = "lru_gen",
+ .attrs = lru_gen_attrs,
+};
+
/******************************************************************************
* initialization
******************************************************************************/
@@ -5007,6 +5230,9 @@ static int __init init_lru_gen(void)
BUILD_BUG_ON(BIT(LRU_GEN_WIDTH) <= MAX_NR_GENS);
BUILD_BUG_ON(sizeof(MM_STAT_CODES) != NR_MM_STATS + 1);
+ if (sysfs_create_group(mm_kobj, &lru_gen_attr_group))
+ pr_err("lru_gen: failed to create sysfs group\n");
+
return 0;
};
late_initcall(init_lru_gen);
--
2.35.0.263.gb82422642f-goog