linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 00/16] Multigenerational LRU Framework
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

What's new in v2
================
Special thanks to Jens Axboe for reporting a regression in buffered
I/O and helping test the fix.

This version includes support for tiers, which represent levels of
usage from file descriptors only. Pages accessed N times via file
descriptors belong to tier order_base_2(N). Each generation contains
at most MAX_NR_TIERS tiers, and they require additional MAX_NR_TIERS-2
bits in page->flags. In contrast to moving across generations, which
requires the lru lock, moving across tiers only involves an atomic
operation on page->flags and therefore has a negligible cost. A
feedback loop modeled after the well-known PID controller monitors the
refault rates across all tiers and decides, on the reclaim path, when
to activate pages from which tiers.
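
As an illustration only (not code from this patchset), a user-space
sketch of the access-count-to-tier mapping; the value 4 for
MAX_NR_TIERS and the clamping of large counts are assumptions:

  #include <stdio.h>

  #define MAX_NR_TIERS 4 /* assumed default for CONFIG_TIERS_PER_GEN */

  /* order_base_2(n): ceil(log2(n)), with order_base_2(1) == 0 */
  static int order_base_2(unsigned long n)
  {
          int order = 0;

          while ((1UL << order) < n)
                  order++;
          return order;
  }

  int main(void)
  {
          for (unsigned long n = 1; n <= 8; n++) {
                  int tier = order_base_2(n);

                  /* clamp counts that exceed the spare bits (an assumption) */
                  if (tier >= MAX_NR_TIERS)
                          tier = MAX_NR_TIERS - 1;
                  printf("%lu accesses via file descriptors -> tier %d\n",
                         n, tier);
          }
          return 0;
  }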

This feedback model has a few advantages over the current feedforward
model:
1) It has a negligible overhead in the buffered I/O access path
   because activations are done in the reclaim path.
2) It takes mapped pages into account and avoids overprotecting pages
   accessed multiple times via file descriptors.
3) More tiers offer better protection to pages accessed more than
   twice when buffered-I/O-intensive workloads are under memory
   pressure.

The fio/io_uring benchmark shows a 14% improvement in IOPS when
randomly accessing Samsung PM981a in buffered I/O mode.

Highlights from the discussions on v1
=====================================
Thanks to Ying Huang and Dave Hansen for the comments and suggestions
on page table scanning.

A simple worst-case scenario test did not find page table scanning
underperforming the rmap, thanks to the following optimizations:
1) It will not scan page tables from processes that have been sleeping
   since the last scan.
2) It will not scan PTE tables under non-leaf PMD entries that do not
   have the accessed bit set, when
   CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
3) It will not zigzag between the PGD table and the same PMD or PTE
   table spanning multiple VMAs. In other words, it finishes all the
   VMAs within the range of the same PMD or PTE table before it
   returns to the PGD table. This optimizes workloads that have large
   numbers of tiny VMAs, especially when CONFIG_PGTABLE_LEVELS=5.

TLDR
====
The current page reclaim is too expensive in terms of CPU usage and
often makes poor choices about what to evict. We would like to offer
an alternative framework that is performant, versatile and
straightforward.

Repo
====
git fetch https://linux-mm.googlesource.com/page-reclaim refs/changes/73/1173/1

Gerrit https://linux-mm-review.googlesource.com/c/page-reclaim/+/1173

Background
==========
DRAM is a major factor in total cost of ownership, and improving
memory overcommit brings a high return on investment. Over the past
decade of research and experimentation in memory overcommit, we
observed a distinct trend across millions of servers and clients: the
size of page cache has been decreasing because of the growing
popularity of cloud storage. Nowadays anon pages account for more than
90% of our memory consumption and page cache contains mostly
executable pages.

Problems
========
Notion of active/inactive
-------------------------
For servers equipped with hundreds of gigabytes of memory, the
active/inactive granularity is too coarse to be useful for job
scheduling. False active/inactive rates are relatively high, and thus
the assumed savings may not materialize.

For phones and laptops, executable pages are frequently evicted
despite the fact that there are many less recently used anon pages.
Major faults on executable pages cause "janks" (slow UI renderings)
and negatively impact user experience.

For lruvecs from different memcgs or nodes, comparisons are impossible
due to the lack of a common frame of reference.

Incremental scans via rmap
--------------------------
Each incremental scan picks up where the last scan left off and stops
after it has found a handful of unreferenced pages. For workloads
using a large amount of anon memory, incremental scans lose their
advantage under sustained memory pressure due to high ratios of
scanned pages to reclaimed pages. In our case, the average ratio of
pgscan to pgsteal is above 7.

On top of that, the rmap has poor memory locality due to its complex
data structures. The combined effects typically result in a high
amount of CPU usage in the reclaim path. For example, with zram, a
typical kswapd profile on v5.11 looks like:
  31.03%  page_vma_mapped_walk
  25.59%  lzo1x_1_do_compress
   4.63%  do_raw_spin_lock
   3.89%  vma_interval_tree_iter_next
   3.33%  vma_interval_tree_subtree_search

And with real swap, it looks like:
  45.16%  page_vma_mapped_walk
   7.61%  do_raw_spin_lock
   5.69%  vma_interval_tree_iter_next
   4.91%  vma_interval_tree_subtree_search
   3.71%  page_referenced_one

Solutions
=========
Notion of generation numbers
----------------------------
The notion of generation numbers introduces a quantitative approach to
memory overcommit. A larger number of pages can be spread out across
a configurable number of generations, and each generation includes all
pages that have been referenced since the last generation. This
improved granularity yields relatively low false active/inactive
rates.

Given an lruvec, scans of anon and file types and selections between
them are all based on direct comparisons of generation numbers, which
are simple yet effective. For different lruvecs, comparisons are still
possible based on the birth times of generations.
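
As a rough sketch of such a cross-lruvec comparison, assuming each
lruvec records the birth time of each generation (as the lrugen struct
added later in this series does); the structure and helper below are
illustrative, not the actual API:

  #include <stdio.h>

  #define MAX_NR_GENS 4

  struct lruvec_sketch {
          unsigned long min_seq;            /* oldest generation */
          unsigned long birth[MAX_NR_GENS]; /* birth time, e.g., jiffies */
  };

  /* Return the lruvec whose oldest generation was born earlier, i.e.,
   * whose coldest pages have gone the longest without being aged. */
  static struct lruvec_sketch *older_lruvec(struct lruvec_sketch *a,
                                            struct lruvec_sketch *b)
  {
          unsigned long ta = a->birth[a->min_seq % MAX_NR_GENS];
          unsigned long tb = b->birth[b->min_seq % MAX_NR_GENS];

          return ta <= tb ? a : b;
  }

  int main(void)
  {
          struct lruvec_sketch a = { .min_seq = 5, .birth = { 0, 900, 300, 600 } };
          struct lruvec_sketch b = { .min_seq = 6, .birth = { 0, 0, 450, 750 } };

          /* a's oldest generation (5 % 4 == 1) was born at 900,
           * b's oldest generation (6 % 4 == 2) was born at 450 */
          printf("reclaim from %s first\n",
                 older_lruvec(&a, &b) == &a ? "a" : "b");
          return 0;
  }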

Differential scans via page tables
----------------------------------
Each differential scan discovers all pages that have been referenced
since the last scan. Specifically, it walks the mm_struct list
associated with an lruvec to scan page tables of processes that have
been scheduled since the last scan. The cost of each differential scan
is roughly proportional to the number of referenced pages it
discovers. Unless address spaces are extremely sparse, page tables
usually have better memory locality than the rmap. The end result is
generally a significant reduction in CPU usage, for workloads using a
large amount of anon memory.

Our real-world benchmark that browses popular websites in multiple
Chrome tabs demonstrates 51% less CPU usage from kswapd and 52% less
PSI (full) on v5.11. With this patchset, the kswapd profile looks
like:
  49.36%  lzo1x_1_do_compress
   4.54%  page_vma_mapped_walk
   4.45%  memset_erms
   3.47%  walk_pte_range
   2.88%  zram_bvec_rw

In addition, direct reclaim latency is reduced by 22% at 99th
percentile and the number of refaults is reduced by 7%. Both metrics
are important to phones and laptops as they are correlated to user
experience.

Framework
=========
For each lruvec, evictable pages are divided into multiple
generations. The youngest generation number is stored in
lruvec->evictable.max_seq for both anon and file types as they are
aged on an equal footing. The oldest generation numbers are stored in
lruvec->evictable.min_seq[2] separately for anon and file types as
clean file pages can be evicted regardless of may_swap or
may_writepage. Generation numbers are truncated into
order_base_2(MAX_NR_GENS+1) bits in order to fit into page->flags. The
sliding window technique is used to prevent truncated generation
numbers from overlapping. Each truncated generation number is an index
to lruvec->evictable.lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES].
Evictable pages are added to the per-zone lists indexed by max_seq or
min_seq[2] (modulo MAX_NR_GENS), depending on whether they are being
faulted in.
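
A minimal user-space sketch of the truncation and the gen+1 encoding
described above; MAX_NR_GENS=4 is an assumed value of
CONFIG_NR_LRU_GENS for this example:

  #include <assert.h>
  #include <stdio.h>

  #define MAX_NR_GENS 4UL /* assumed; CONFIG_NR_LRU_GENS in the patchset */

  /* truncate a monotonically increasing sequence number into an index
   * into lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES] */
  static unsigned long lru_gen_from_seq(unsigned long seq)
  {
          return seq % MAX_NR_GENS;
  }

  int main(void)
  {
          unsigned long max_seq = 7, min_seq = 5;

          /* the sliding window keeps live generations from colliding */
          assert(max_seq - min_seq < MAX_NR_GENS);

          /* page->flags stores gen+1 so that 0 means "not on the lists" */
          printf("young page stores %lu, old page stores %lu\n",
                 lru_gen_from_seq(max_seq) + 1, lru_gen_from_seq(min_seq) + 1);
          return 0;
  }

Running it prints 4 and 2, i.e., the two live generations occupy
distinct slots of the window.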

Each generation is then divided into multiple tiers. Tiers represent
levels of usage from file descriptors only. Pages accessed N times via
file descriptors belong to tier order_base_2(N). In contrast to moving
across generations which requires the lru lock, moving across tiers
only involves an atomic operation on page->flags and therefore has a
lower cost. A feedback loop modeled after the well-known PID
controller monitors the refault rates across all tiers and decides
when to activate pages from which tiers on the reclaim path.
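
The decision the feedback loop makes can be approximated by comparing
each upper tier's refault rate with tier 0's; the proportional-only
comparison and the sample numbers below are simplifications, not the
controller implemented by this series:

  #include <stdbool.h>
  #include <stdio.h>

  #define MAX_NR_TIERS 4

  struct tier_stats {
          unsigned long refaulted;
          unsigned long evicted;
  };

  /* Should pages from this tier be activated rather than evicted? */
  static bool tier_should_activate(const struct tier_stats *tiers, int tier)
  {
          /* refaulted[k] / evicted[k] > refaulted[0] / evicted[0],
           * compared without division */
          return tiers[tier].refaulted * tiers[0].evicted >
                 tiers[0].refaulted * tiers[tier].evicted;
  }

  int main(void)
  {
          /* made-up sample counters */
          struct tier_stats tiers[MAX_NR_TIERS] = {
                  { .refaulted = 100, .evicted = 10000 },
                  { .refaulted =  20, .evicted =  4000 },
                  { .refaulted =  50, .evicted =  1000 },
                  { .refaulted =   5, .evicted =   100 },
          };

          for (int tier = 1; tier < MAX_NR_TIERS; tier++)
                  printf("tier %d: %s\n", tier,
                         tier_should_activate(tiers, tier) ?
                         "activate (move to next generation)" : "evict");
          return 0;
  }

With these numbers, tiers 2 and 3 refault proportionally more often
than tier 0 and would be moved to the next generation instead of being
evicted.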

The framework comprises two conceptually independent components: the
aging and the eviction, which can be invoked separately from user
space.

Aging
-----
The aging produces young generations. Given an lruvec, the aging scans
page tables for referenced pages of this lruvec. Upon finding one, the
aging updates its generation number to max_seq. After each round of
scan, the aging increments max_seq.

The aging maintains either a system-wide mm_struct list or per-memcg
mm_struct lists and tracks whether an mm_struct is being used or has
been used since the last scan. Multiple threads can concurrently work
on the same mm_struct list, and each of them will be given a different
mm_struct belonging to a process that has been scheduled since the
last scan.
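
A sketch of how multiple walkers could share such a list; the per-mm
flag and the atomic cursor below are illustrative assumptions, not the
patchset's data structures:

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  struct mm_sketch {
          const char *name;
          bool scheduled_since_last_scan;
  };

  static struct mm_sketch mms[] = {
          { "mm A", true }, { "mm B", false }, { "mm C", true },
  };
  static atomic_int cursor;

  /* Each walker claims the next mm_struct that has been scheduled since
   * the last scan; NULL means the whole list has been handed out. */
  static struct mm_sketch *get_next_mm(void)
  {
          int i, nr = sizeof(mms) / sizeof(mms[0]);

          while ((i = atomic_fetch_add(&cursor, 1)) < nr) {
                  if (mms[i].scheduled_since_last_scan)
                          return &mms[i];
          }
          return NULL;
  }

  int main(void)
  {
          struct mm_sketch *mm;

          while ((mm = get_next_mm()))
                  printf("a walker scans %s\n", mm->name);
          return 0;
  }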

The aging is due when both of min_seq[2] reach max_seq-1, assuming
both anon and file types are reclaimable.
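
A sketch of this check, assuming both types are reclaimable:

  #include <stdbool.h>
  #include <stdio.h>

  /* min_seq[0] is for anon, min_seq[1] is for file */
  static bool aging_is_due(unsigned long max_seq, const unsigned long min_seq[2])
  {
          /* both oldest generations have caught up to max_seq-1 */
          return min_seq[0] + 1 >= max_seq && min_seq[1] + 1 >= max_seq;
  }

  int main(void)
  {
          unsigned long min_seq[2] = { 6, 6 };

          printf("max_seq=7: aging due? %d\n", aging_is_due(7, min_seq));
          printf("max_seq=9: aging due? %d\n", aging_is_due(9, min_seq));
          return 0;
  }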

Eviction
--------
The eviction consumes old generations. Given an lruvec, the eviction
scans the pages on the per-zone lists indexed by either of min_seq[2].
It first tries to select a type based on the values of min_seq[2].
When anon and file types are both available from the same generation,
it selects the one that has a lower refault rate.
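
A sketch of this type selection, with made-up refault rates:

  #include <stdio.h>

  enum { ANON, FILE_TYPE }; /* indexes into min_seq[2] */

  static int select_type(const unsigned long min_seq[2],
                         const double refault_rate[2])
  {
          /* prefer the type whose oldest generation is older */
          if (min_seq[ANON] != min_seq[FILE_TYPE])
                  return min_seq[ANON] < min_seq[FILE_TYPE] ? ANON : FILE_TYPE;
          /* same generation: prefer the lower refault rate */
          return refault_rate[ANON] <= refault_rate[FILE_TYPE] ? ANON : FILE_TYPE;
  }

  int main(void)
  {
          unsigned long min_seq[2] = { 5, 5 };
          double refault_rate[2] = { 0.02, 0.008 }; /* made-up numbers */

          printf("evict %s first\n",
                 select_type(min_seq, refault_rate) == ANON ? "anon" : "file");
          return 0;
  }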

During a scan, the eviction sorts pages according to their generation
numbers, if the aging has found them referenced. It also moves pages
from the tiers that have higher refault rates than tier 0 to the next
generation.

When it finds all the per-zone lists of a selected type are empty, the
eviction increments min_seq[2] indexed by this selected type.

Use cases
=========
On Android, our most advanced simulation that generates memory
pressure from realistic user behavior shows 18% fewer low-memory
kills, which in turn reduces cold starts by 16%.

On Borg, a similar approach enables us to identify jobs that
underutilize their memory and downsize them considerably without
compromising any of our service level indicators.

On Chrome OS, our field telemetry reports 96% fewer low-memory tab
discards and 59% fewer OOM kills from fully-utilized devices and no
regressions in monitored user experience from underutilized devices.

Working set estimation
----------------------
User space can invoke the aging by writing "+ memcg_id node_id gen
[swappiness]" to /sys/kernel/debug/lru_gen. This debugfs interface
also provides the birth time and the size of each generation.

Proactive reclaim
-----------------
User space can invoke the eviction by writing "- memcg_id node_id gen
[swappiness] [nr_to_reclaim]" to /sys/kernel/debug/lru_gen. Multiple
command lines are supported, as is concatenation with delimiters.
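
A hedged usage sketch; the memcg_id, node_id, gen, swappiness and
nr_to_reclaim values are placeholders, and it assumes CONFIG_LRU_GEN
is enabled and debugfs is mounted at /sys/kernel/debug:

  #include <stdio.h>

  int main(void)
  {
          FILE *f = fopen("/sys/kernel/debug/lru_gen", "w");

          if (!f) {
                  perror("lru_gen");
                  return 1;
          }
          /* invoke the aging on memcg 1, node 0, generation 5 */
          fprintf(f, "+ 1 0 5\n");
          /* invoke the eviction on the same lruvec: swappiness 100,
           * at most 4096 pages */
          fprintf(f, "- 1 0 5 100 4096\n");
          fclose(f);
          return 0;
  }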

Intensive buffered I/O
----------------------
Tiers are specifically designed to improve the performance of
intensive buffered I/O under memory pressure. The fio/io_uring
benchmark shows 14% improvement in IOPS when randomly accessing
Samsung PM981a in buffered I/O mode.

For far memory tiering and NUMA-aware job scheduling, please refer to
the reference section.

FAQ
===
Why not try to improve the existing code?
-----------------------------------------
We have tried, but we concluded that the aforementioned problems are
fundamental, and therefore changes made on top of them will not result
in substantial gains.

What particular workloads does it help?
---------------------------------------
This framework is designed to improve the performance of page reclaim
under any type of workload.

How would it benefit the community?
-----------------------------------
Google is committed to promoting sustainable development of the
community. We hope successful adoptions of this framework will
steadily climb over time. To that end, we would be happy to learn your
workloads and work with you case by case, and we will do our best to
keep the repo fully maintained. For those whose workloads rely on the
existing code, we will make sure you will not be affected in any way.

References
==========
1. Long-term SLOs for reclaimed cloud computing resources
   https://research.google/pubs/pub43017/
2. Profiling a warehouse-scale computer
   https://research.google/pubs/pub44271/
3. Evaluation of NUMA-Aware Scheduling in Warehouse-Scale Clusters
   https://research.google/pubs/pub48329/
4. Software-defined far memory in warehouse-scale computers
   https://research.google/pubs/pub48551/
5. Borg: the Next Generation
   https://research.google/pubs/pub49065/

Yu Zhao (16):
  include/linux/memcontrol.h: do not warn in page_memcg_rcu() if
    !CONFIG_MEMCG
  include/linux/nodemask.h: define next_memory_node() if !CONFIG_NUMA
  include/linux/huge_mm.h: define is_huge_zero_pmd() if
    !CONFIG_TRANSPARENT_HUGEPAGE
  include/linux/cgroup.h: export cgroup_mutex
  mm/swap.c: export activate_page()
  mm, x86: support the access bit on non-leaf PMD entries
  mm/vmscan.c: refactor shrink_node()
  mm: multigenerational lru: groundwork
  mm: multigenerational lru: activation
  mm: multigenerational lru: mm_struct list
  mm: multigenerational lru: aging
  mm: multigenerational lru: eviction
  mm: multigenerational lru: page reclaim
  mm: multigenerational lru: user interface
  mm: multigenerational lru: Kconfig
  mm: multigenerational lru: documentation

 Documentation/vm/index.rst        |    1 +
 Documentation/vm/multigen_lru.rst |  192 +++
 arch/Kconfig                      |    9 +
 arch/x86/Kconfig                  |    1 +
 arch/x86/include/asm/pgtable.h    |    2 +-
 arch/x86/mm/pgtable.c             |    5 +-
 fs/exec.c                         |    2 +
 fs/fuse/dev.c                     |    3 +-
 fs/proc/task_mmu.c                |    3 +-
 include/linux/cgroup.h            |   15 +-
 include/linux/huge_mm.h           |    5 +
 include/linux/memcontrol.h        |    7 +-
 include/linux/mm.h                |    2 +
 include/linux/mm_inline.h         |  294 ++++
 include/linux/mm_types.h          |  117 ++
 include/linux/mmzone.h            |  118 +-
 include/linux/nodemask.h          |    1 +
 include/linux/page-flags-layout.h |   20 +-
 include/linux/page-flags.h        |    4 +-
 include/linux/pgtable.h           |    4 +-
 include/linux/swap.h              |    5 +-
 kernel/bounds.c                   |    6 +
 kernel/events/uprobes.c           |    2 +-
 kernel/exit.c                     |    1 +
 kernel/fork.c                     |   10 +
 kernel/kthread.c                  |    1 +
 kernel/sched/core.c               |    2 +
 mm/Kconfig                        |   55 +
 mm/huge_memory.c                  |    5 +-
 mm/khugepaged.c                   |    2 +-
 mm/memcontrol.c                   |   28 +
 mm/memory.c                       |   14 +-
 mm/migrate.c                      |    2 +-
 mm/mm_init.c                      |   16 +-
 mm/mmzone.c                       |    2 +
 mm/rmap.c                         |    6 +
 mm/swap.c                         |   54 +-
 mm/swapfile.c                     |    6 +-
 mm/userfaultfd.c                  |    2 +-
 mm/vmscan.c                       | 2580 ++++++++++++++++++++++++++++-
 mm/workingset.c                   |  179 +-
 41 files changed, 3603 insertions(+), 180 deletions(-)
 create mode 100644 Documentation/vm/multigen_lru.rst

-- 
2.31.1.295.g9ea45b61b8-goog



* [PATCH v2 01/16] include/linux/memcontrol.h: do not warn in page_memcg_rcu() if !CONFIG_MEMCG
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

page_memcg_rcu() warns on !rcu_read_lock_held() regardless of
CONFIG_MEMCG. The following code is legit, but it triggers the warning
when !CONFIG_MEMCG, since lock_page_memcg() and unlock_page_memcg()
are empty for this config.

  memcg = lock_page_memcg(page1)
    (rcu_read_lock() if CONFIG_MEMCG=y)

  do something to page1

  if (page_memcg_rcu(page2) == memcg)
    do something to page2 too as it cannot be migrated away from the
    memcg either.

  unlock_page_memcg(page1)
    (rcu_read_unlock() if CONFIG_MEMCG=y)

Locking/unlocking rcu consistently for both configs is rigorous but it
also forces unnecessary locking upon users who have no interest in
CONFIG_MEMCG.

This patch removes the assertion for !CONFIG_MEMCG, because
page_memcg_rcu() has a few callers and there are no concerns regarding
their correctness at the moment.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/memcontrol.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0c04d39a7967..f13dc02cf277 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1077,7 +1077,6 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 
 static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
 {
-	WARN_ON_ONCE(!rcu_read_lock_held());
 	return NULL;
 }
 
-- 
2.31.1.295.g9ea45b61b8-goog



* [PATCH v2 02/16] include/linux/nodemask.h: define next_memory_node() if !CONFIG_NUMA
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

Currently next_memory_node only exists when CONFIG_NUMA=y. This patch
adds the macro for !CONFIG_NUMA.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/nodemask.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index ac398e143c9a..89fe4e3592f9 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -486,6 +486,7 @@ static inline int num_node_state(enum node_states state)
 #define first_online_node	0
 #define first_memory_node	0
 #define next_online_node(nid)	(MAX_NUMNODES)
+#define next_memory_node(nid)	(MAX_NUMNODES)
 #define nr_node_ids		1U
 #define nr_online_nodes		1U
 
-- 
2.31.1.295.g9ea45b61b8-goog



* [PATCH v2 03/16] include/linux/huge_mm.h: define is_huge_zero_pmd() if !CONFIG_TRANSPARENT_HUGEPAGE
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

Currently is_huge_zero_pmd() only exists when
CONFIG_TRANSPARENT_HUGEPAGE=y. This patch adds the function for
!CONFIG_TRANSPARENT_HUGEPAGE.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/huge_mm.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index ba973efcd369..0ba7b3f9029c 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -443,6 +443,11 @@ static inline bool is_huge_zero_page(struct page *page)
 	return false;
 }
 
+static inline bool is_huge_zero_pmd(pmd_t pmd)
+{
+	return false;
+}
+
 static inline bool is_huge_zero_pud(pud_t pud)
 {
 	return false;
-- 
2.31.1.295.g9ea45b61b8-goog



* [PATCH v2 04/16] include/linux/cgroup.h: export cgroup_mutex
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

cgroup_mutex is needed to synchronize with memcg creations.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/cgroup.h | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 4f2f79de083e..bd5744360cfa 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -432,6 +432,18 @@ static inline void cgroup_put(struct cgroup *cgrp)
 	css_put(&cgrp->self);
 }
 
+extern struct mutex cgroup_mutex;
+
+static inline void cgroup_lock(void)
+{
+	mutex_lock(&cgroup_mutex);
+}
+
+static inline void cgroup_unlock(void)
+{
+	mutex_unlock(&cgroup_mutex);
+}
+
 /**
  * task_css_set_check - obtain a task's css_set with extra access conditions
  * @task: the task to obtain css_set for
@@ -446,7 +458,6 @@ static inline void cgroup_put(struct cgroup *cgrp)
  * as locks used during the cgroup_subsys::attach() methods.
  */
 #ifdef CONFIG_PROVE_RCU
-extern struct mutex cgroup_mutex;
 extern spinlock_t css_set_lock;
 #define task_css_set_check(task, __c)					\
 	rcu_dereference_check((task)->cgroups,				\
@@ -704,6 +715,8 @@ struct cgroup;
 static inline u64 cgroup_id(const struct cgroup *cgrp) { return 1; }
 static inline void css_get(struct cgroup_subsys_state *css) {}
 static inline void css_put(struct cgroup_subsys_state *css) {}
+static inline void cgroup_lock(void) {}
+static inline void cgroup_unlock(void) {}
 static inline int cgroup_attach_task_all(struct task_struct *from,
 					 struct task_struct *t) { return 0; }
 static inline int cgroupstats_build(struct cgroupstats *stats,
-- 
2.31.1.295.g9ea45b61b8-goog



* [PATCH v2 05/16] mm/swap.c: export activate_page()
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

activate_page() is needed to activate pages that are already on lru or
queued in lru_pvecs.lru_add. The exported function is a merger between
the existing activate_page() and __lru_cache_activate_page().

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/swap.h |  1 +
 mm/swap.c            | 28 +++++++++++++++-------------
 2 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4cc6ec3bf0ab..de2bbbf181ba 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -344,6 +344,7 @@ extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
 extern void rotate_reclaimable_page(struct page *page);
+extern void activate_page(struct page *page);
 extern void deactivate_file_page(struct page *page);
 extern void deactivate_page(struct page *page);
 extern void mark_page_lazyfree(struct page *page);
diff --git a/mm/swap.c b/mm/swap.c
index 31b844d4ed94..f20ed56ebbbf 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -334,7 +334,7 @@ static bool need_activate_page_drain(int cpu)
 	return pagevec_count(&per_cpu(lru_pvecs.activate_page, cpu)) != 0;
 }
 
-static void activate_page(struct page *page)
+static void activate_page_on_lru(struct page *page)
 {
 	page = compound_head(page);
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
@@ -354,7 +354,7 @@ static inline void activate_page_drain(int cpu)
 {
 }
 
-static void activate_page(struct page *page)
+static void activate_page_on_lru(struct page *page)
 {
 	struct lruvec *lruvec;
 
@@ -368,11 +368,22 @@ static void activate_page(struct page *page)
 }
 #endif
 
-static void __lru_cache_activate_page(struct page *page)
+/*
+ * If the page is on the LRU, queue it for activation via
+ * lru_pvecs.activate_page. Otherwise, assume the page is on a
+ * pagevec, mark it active and it'll be moved to the active
+ * LRU on the next drain.
+ */
+void activate_page(struct page *page)
 {
 	struct pagevec *pvec;
 	int i;
 
+	if (PageLRU(page)) {
+		activate_page_on_lru(page);
+		return;
+	}
+
 	local_lock(&lru_pvecs.lock);
 	pvec = this_cpu_ptr(&lru_pvecs.lru_add);
 
@@ -421,16 +432,7 @@ void mark_page_accessed(struct page *page)
 		 * evictable page accessed has no effect.
 		 */
 	} else if (!PageActive(page)) {
-		/*
-		 * If the page is on the LRU, queue it for activation via
-		 * lru_pvecs.activate_page. Otherwise, assume the page is on a
-		 * pagevec, mark it active and it'll be moved to the active
-		 * LRU on the next drain.
-		 */
-		if (PageLRU(page))
-			activate_page(page);
-		else
-			__lru_cache_activate_page(page);
+		activate_page(page);
 		ClearPageReferenced(page);
 		workingset_activation(page);
 	}
-- 
2.31.1.295.g9ea45b61b8-goog



* [PATCH v2 06/16] mm, x86: support the access bit on non-leaf PMD entries
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

Some architectures support the accessed bit on non-leaf PMD entries
(parents) in addition to leaf PTE entries (children) where pages are
mapped, e.g., x86_64 sets the accessed bit on a parent when using it
as part of linear-address translation [1]. Page table walkers that are
interested in the accessed bit on children can take advantage of this:
they do not need to search the children when the accessed bit is not
set on a parent, given that they have previously cleared the accessed
bit on this parent.

[1]: Intel 64 and IA-32 Architectures Software Developer's Manual
     Volume 3 (October 2019), section 4.8
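
For intuition, a toy user-space model of the skip this enables, with
made-up table sizes; it only shows that a clear accessed bit on a
parent lets a walker skip the entire child table it covers:

  #include <stdbool.h>
  #include <stdio.h>

  #define PTRS_PER_PMD 8 /* made up; the real x86_64 value is 512 */
  #define PTRS_PER_PTE 8

  struct pte { bool accessed; };
  struct pmd { bool accessed; struct pte ptes[PTRS_PER_PTE]; };

  static int scan_pmd_range(struct pmd *pmds, int nr_pmds)
  {
          int found = 0;

          for (int i = 0; i < nr_pmds; i++) {
                  /* a clear parent bit means no child has been accessed
                   * since the parent bit was last cleared: skip the table */
                  if (!pmds[i].accessed)
                          continue;
                  pmds[i].accessed = false;
                  for (int j = 0; j < PTRS_PER_PTE; j++) {
                          if (pmds[i].ptes[j].accessed) {
                                  pmds[i].ptes[j].accessed = false;
                                  found++;
                          }
                  }
          }
          return found;
  }

  int main(void)
  {
          struct pmd pmds[PTRS_PER_PMD] = { 0 };

          pmds[3].accessed = true;
          pmds[3].ptes[5].accessed = true;
          printf("referenced pages found: %d\n",
                 scan_pmd_range(pmds, PTRS_PER_PMD));
          return 0;
  }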

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 arch/Kconfig                   | 9 +++++++++
 arch/x86/Kconfig               | 1 +
 arch/x86/include/asm/pgtable.h | 2 +-
 arch/x86/mm/pgtable.c          | 5 ++++-
 include/linux/pgtable.h        | 4 ++--
 5 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index ecfd3520b676..cbd7f66734ee 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -782,6 +782,15 @@ config HAVE_ARCH_TRANSPARENT_HUGEPAGE
 config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 	bool
 
+config HAVE_ARCH_PARENT_PMD_YOUNG
+	bool
+	depends on PGTABLE_LEVELS > 2
+	help
+	  Architectures that select this are able to set the accessed bit on
+	  non-leaf PMD entries in addition to leaf PTE entries where pages are
+	  mapped. For them, page table walkers that clear the accessed bit may
+	  stop at non-leaf PMD entries when they do not see the accessed bit.
+
 config HAVE_ARCH_HUGE_VMAP
 	bool
 
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2792879d398e..b5972eb82337 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -163,6 +163,7 @@ config X86
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD if X86_64
+	select HAVE_ARCH_PARENT_PMD_YOUNG	if X86_64
 	select HAVE_ARCH_USERFAULTFD_WP         if X86_64 && USERFAULTFD
 	select HAVE_ARCH_VMAP_STACK		if X86_64
 	select HAVE_ARCH_WITHIN_STACK_FRAMES
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index a02c67291cfc..a6b5cfe1fc5a 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -846,7 +846,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 
 static inline int pmd_bad(pmd_t pmd)
 {
-	return (pmd_flags(pmd) & ~_PAGE_USER) != _KERNPG_TABLE;
+	return ((pmd_flags(pmd) | _PAGE_ACCESSED) & ~_PAGE_USER) != _KERNPG_TABLE;
 }
 
 static inline unsigned long pages_to_mb(unsigned long npg)
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index f6a9e2e36642..1c27e6f43f80 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -550,7 +550,7 @@ int ptep_test_and_clear_young(struct vm_area_struct *vma,
 	return ret;
 }
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG)
 int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 			      unsigned long addr, pmd_t *pmdp)
 {
@@ -562,6 +562,9 @@ int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 
 	return ret;
 }
+#endif
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 int pudp_test_and_clear_young(struct vm_area_struct *vma,
 			      unsigned long addr, pud_t *pudp)
 {
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 5e772392a379..08dd9b8c055a 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -193,7 +193,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG)
 static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 					    unsigned long address,
 					    pmd_t *pmdp)
@@ -214,7 +214,7 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 	BUILD_BUG();
 	return 0;
 }
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG */
 #endif
 
 #ifndef __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
-- 
2.31.1.295.g9ea45b61b8-goog



* [PATCH v2 07/16] mm/vmscan.c: refactor shrink_node()
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

Heuristics that determine scan balance between anon and file LRUs are
rather independent. Move them into a separate function to improve
readability.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/vmscan.c | 186 +++++++++++++++++++++++++++-------------------------
 1 file changed, 98 insertions(+), 88 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 562e87cbd7a1..1a24d2e0a4cb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2224,6 +2224,103 @@ enum scan_balance {
 	SCAN_FILE,
 };
 
+static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
+{
+	unsigned long file;
+	struct lruvec *target_lruvec;
+
+	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
+
+	/*
+	 * Determine the scan balance between anon and file LRUs.
+	 */
+	spin_lock_irq(&target_lruvec->lru_lock);
+	sc->anon_cost = target_lruvec->anon_cost;
+	sc->file_cost = target_lruvec->file_cost;
+	spin_unlock_irq(&target_lruvec->lru_lock);
+
+	/*
+	 * Target desirable inactive:active list ratios for the anon
+	 * and file LRU lists.
+	 */
+	if (!sc->force_deactivate) {
+		unsigned long refaults;
+
+		refaults = lruvec_page_state(target_lruvec,
+				WORKINGSET_ACTIVATE_ANON);
+		if (refaults != target_lruvec->refaults[0] ||
+			inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
+			sc->may_deactivate |= DEACTIVATE_ANON;
+		else
+			sc->may_deactivate &= ~DEACTIVATE_ANON;
+
+		/*
+		 * When refaults are being observed, it means a new
+		 * workingset is being established. Deactivate to get
+		 * rid of any stale active pages quickly.
+		 */
+		refaults = lruvec_page_state(target_lruvec,
+				WORKINGSET_ACTIVATE_FILE);
+		if (refaults != target_lruvec->refaults[1] ||
+		    inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
+			sc->may_deactivate |= DEACTIVATE_FILE;
+		else
+			sc->may_deactivate &= ~DEACTIVATE_FILE;
+	} else
+		sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
+
+	/*
+	 * If we have plenty of inactive file pages that aren't
+	 * thrashing, try to reclaim those first before touching
+	 * anonymous pages.
+	 */
+	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
+	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
+		sc->cache_trim_mode = 1;
+	else
+		sc->cache_trim_mode = 0;
+
+	/*
+	 * Prevent the reclaimer from falling into the cache trap: as
+	 * cache pages start out inactive, every cache fault will tip
+	 * the scan balance towards the file LRU.  And as the file LRU
+	 * shrinks, so does the window for rotation from references.
+	 * This means we have a runaway feedback loop where a tiny
+	 * thrashing file LRU becomes infinitely more attractive than
+	 * anon pages.  Try to detect this based on file LRU size.
+	 */
+	if (!cgroup_reclaim(sc)) {
+		unsigned long total_high_wmark = 0;
+		unsigned long free, anon;
+		int z;
+
+		free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
+		file = node_page_state(pgdat, NR_ACTIVE_FILE) +
+			   node_page_state(pgdat, NR_INACTIVE_FILE);
+
+		for (z = 0; z < MAX_NR_ZONES; z++) {
+			struct zone *zone = &pgdat->node_zones[z];
+
+			if (!managed_zone(zone))
+				continue;
+
+			total_high_wmark += high_wmark_pages(zone);
+		}
+
+		/*
+		 * Consider anon: if that's low too, this isn't a
+		 * runaway file reclaim problem, but rather just
+		 * extreme pressure. Reclaim as per usual then.
+		 */
+		anon = node_page_state(pgdat, NR_INACTIVE_ANON);
+
+		sc->file_is_tiny =
+			file + free <= total_high_wmark &&
+			!(sc->may_deactivate & DEACTIVATE_ANON) &&
+			anon >> sc->priority;
+	}
+}
+
 /*
  * Determine how aggressively the anon and file LRU lists should be
  * scanned.  The relative value of each set of LRU lists is determined
@@ -2669,7 +2766,6 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	unsigned long nr_reclaimed, nr_scanned;
 	struct lruvec *target_lruvec;
 	bool reclaimable = false;
-	unsigned long file;
 
 	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
 
@@ -2679,93 +2775,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	nr_reclaimed = sc->nr_reclaimed;
 	nr_scanned = sc->nr_scanned;
 
-	/*
-	 * Determine the scan balance between anon and file LRUs.
-	 */
-	spin_lock_irq(&target_lruvec->lru_lock);
-	sc->anon_cost = target_lruvec->anon_cost;
-	sc->file_cost = target_lruvec->file_cost;
-	spin_unlock_irq(&target_lruvec->lru_lock);
-
-	/*
-	 * Target desirable inactive:active list ratios for the anon
-	 * and file LRU lists.
-	 */
-	if (!sc->force_deactivate) {
-		unsigned long refaults;
-
-		refaults = lruvec_page_state(target_lruvec,
-				WORKINGSET_ACTIVATE_ANON);
-		if (refaults != target_lruvec->refaults[0] ||
-			inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
-			sc->may_deactivate |= DEACTIVATE_ANON;
-		else
-			sc->may_deactivate &= ~DEACTIVATE_ANON;
-
-		/*
-		 * When refaults are being observed, it means a new
-		 * workingset is being established. Deactivate to get
-		 * rid of any stale active pages quickly.
-		 */
-		refaults = lruvec_page_state(target_lruvec,
-				WORKINGSET_ACTIVATE_FILE);
-		if (refaults != target_lruvec->refaults[1] ||
-		    inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
-			sc->may_deactivate |= DEACTIVATE_FILE;
-		else
-			sc->may_deactivate &= ~DEACTIVATE_FILE;
-	} else
-		sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
-
-	/*
-	 * If we have plenty of inactive file pages that aren't
-	 * thrashing, try to reclaim those first before touching
-	 * anonymous pages.
-	 */
-	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
-	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
-		sc->cache_trim_mode = 1;
-	else
-		sc->cache_trim_mode = 0;
-
-	/*
-	 * Prevent the reclaimer from falling into the cache trap: as
-	 * cache pages start out inactive, every cache fault will tip
-	 * the scan balance towards the file LRU.  And as the file LRU
-	 * shrinks, so does the window for rotation from references.
-	 * This means we have a runaway feedback loop where a tiny
-	 * thrashing file LRU becomes infinitely more attractive than
-	 * anon pages.  Try to detect this based on file LRU size.
-	 */
-	if (!cgroup_reclaim(sc)) {
-		unsigned long total_high_wmark = 0;
-		unsigned long free, anon;
-		int z;
-
-		free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
-		file = node_page_state(pgdat, NR_ACTIVE_FILE) +
-			   node_page_state(pgdat, NR_INACTIVE_FILE);
-
-		for (z = 0; z < MAX_NR_ZONES; z++) {
-			struct zone *zone = &pgdat->node_zones[z];
-			if (!managed_zone(zone))
-				continue;
-
-			total_high_wmark += high_wmark_pages(zone);
-		}
-
-		/*
-		 * Consider anon: if that's low too, this isn't a
-		 * runaway file reclaim problem, but rather just
-		 * extreme pressure. Reclaim as per usual then.
-		 */
-		anon = node_page_state(pgdat, NR_INACTIVE_ANON);
-
-		sc->file_is_tiny =
-			file + free <= total_high_wmark &&
-			!(sc->may_deactivate & DEACTIVATE_ANON) &&
-			anon >> sc->priority;
-	}
+	prepare_scan_count(pgdat, sc);
 
 	shrink_node_memcgs(pgdat, sc);
 
-- 
2.31.1.295.g9ea45b61b8-goog



* [PATCH v2 08/16] mm: multigenerational lru: groundwork
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

For each lruvec, evictable pages are divided into multiple
generations. The youngest generation number is stored in max_seq for
both anon and file types as they are aged on an equal footing. The
oldest generation numbers are stored in min_seq[2] separately for anon
and file types as clean file pages can be evicted regardless of
may_swap or may_writepage. Generation numbers are truncated into
order_base_2(MAX_NR_GENS+1) bits in order to fit into page->flags. The
sliding window technique is used to prevent truncated generation
numbers from overlapping. Each truncated generation number is an index
to lruvec->evictable.lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES].
Evictable pages are added to the per-zone lists indexed by max_seq or
min_seq[2] (modulo MAX_NR_GENS), depending on whether they are being
faulted in.

The workflow comprises two conceptually independent functions: the
aging and the eviction. The aging produces young generations. Given an
lruvec, the aging scans page tables for referenced pages of this
lruvec. Upon finding one, the aging updates its generation number to
max_seq. After each round of scan, the aging increments max_seq. The
aging is due when both of min_seq[2] reach max_seq-1, assuming both
anon and file types are reclaimable.

The eviction consumes old generations. Given an lruvec, the eviction
scans the pages on the per-zone lists indexed by either of min_seq[2].
It tries to select a type based on the values of min_seq[2] and
swappiness. During a scan, the eviction sorts pages according to their
generation numbers, if the aging has found them referenced. When it
finds all the per-zone lists of a selected type are empty, the
eviction increments min_seq[2] indexed by this selected type.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 fs/fuse/dev.c                     |   3 +-
 include/linux/mm.h                |   2 +
 include/linux/mm_inline.h         | 193 +++++++++++++++++++
 include/linux/mmzone.h            | 110 +++++++++++
 include/linux/page-flags-layout.h |  20 +-
 include/linux/page-flags.h        |   4 +-
 kernel/bounds.c                   |   6 +
 mm/huge_memory.c                  |   3 +-
 mm/mm_init.c                      |  16 +-
 mm/mmzone.c                       |   2 +
 mm/swapfile.c                     |   4 +
 mm/vmscan.c                       | 305 ++++++++++++++++++++++++++++++
 12 files changed, 656 insertions(+), 12 deletions(-)

diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index c0fee830a34e..27c83f557794 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -784,7 +784,8 @@ static int fuse_check_page(struct page *page)
 	       1 << PG_lru |
 	       1 << PG_active |
 	       1 << PG_reclaim |
-	       1 << PG_waiters))) {
+	       1 << PG_waiters |
+	       LRU_GEN_MASK | LRU_USAGE_MASK))) {
 		dump_page(page, "fuse: trying to steal weird page");
 		return 1;
 	}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8ba434287387..2c8a2db78ce9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1070,6 +1070,8 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
 #define ZONES_PGOFF		(NODES_PGOFF - ZONES_WIDTH)
 #define LAST_CPUPID_PGOFF	(ZONES_PGOFF - LAST_CPUPID_WIDTH)
 #define KASAN_TAG_PGOFF		(LAST_CPUPID_PGOFF - KASAN_TAG_WIDTH)
+#define LRU_GEN_PGOFF		(KASAN_TAG_PGOFF - LRU_GEN_WIDTH)
+#define LRU_USAGE_PGOFF		(LRU_GEN_PGOFF - LRU_USAGE_WIDTH)
 
 /*
  * Define the bit shifts to access each section.  For non-existent
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 355ea1ee32bd..2bf910eb3dd7 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -79,11 +79,198 @@ static __always_inline enum lru_list page_lru(struct page *page)
 	return lru;
 }
 
+#ifdef CONFIG_LRU_GEN
+
+#ifdef CONFIG_LRU_GEN_ENABLED
+DECLARE_STATIC_KEY_TRUE(lru_gen_static_key);
+#define lru_gen_enabled() static_branch_likely(&lru_gen_static_key)
+#else
+DECLARE_STATIC_KEY_FALSE(lru_gen_static_key);
+#define lru_gen_enabled() static_branch_unlikely(&lru_gen_static_key)
+#endif
+
+/* We track at most MAX_NR_GENS generations using the sliding window technique. */
+static inline int lru_gen_from_seq(unsigned long seq)
+{
+	return seq % MAX_NR_GENS;
+}
+
+/* Return a proper index regardless whether we keep a full history of stats. */
+static inline int sid_from_seq_or_gen(int seq_or_gen)
+{
+	return seq_or_gen % NR_STAT_GENS;
+}
+
+/* The youngest and the second youngest generations are considered active. */
+static inline bool lru_gen_is_active(struct lruvec *lruvec, int gen)
+{
+	unsigned long max_seq = READ_ONCE(lruvec->evictable.max_seq);
+
+	VM_BUG_ON(!max_seq);
+	VM_BUG_ON(gen >= MAX_NR_GENS);
+
+	return gen == lru_gen_from_seq(max_seq) || gen == lru_gen_from_seq(max_seq - 1);
+}
+
+/* Update the sizes of the multigenerational lru. */
+static inline void lru_gen_update_size(struct page *page, struct lruvec *lruvec,
+				       int old_gen, int new_gen)
+{
+	int file = page_is_file_lru(page);
+	int zone = page_zonenum(page);
+	int delta = thp_nr_pages(page);
+	enum lru_list lru = LRU_FILE * file;
+	struct lrugen *lrugen = &lruvec->evictable;
+
+	lockdep_assert_held(&lruvec->lru_lock);
+	VM_BUG_ON(old_gen != -1 && old_gen >= MAX_NR_GENS);
+	VM_BUG_ON(new_gen != -1 && new_gen >= MAX_NR_GENS);
+	VM_BUG_ON(old_gen == -1 && new_gen == -1);
+
+	if (old_gen >= 0)
+		WRITE_ONCE(lrugen->sizes[old_gen][file][zone],
+			   lrugen->sizes[old_gen][file][zone] - delta);
+	if (new_gen >= 0)
+		WRITE_ONCE(lrugen->sizes[new_gen][file][zone],
+			   lrugen->sizes[new_gen][file][zone] + delta);
+
+	if (old_gen < 0) {
+		if (lru_gen_is_active(lruvec, new_gen))
+			lru += LRU_ACTIVE;
+		update_lru_size(lruvec, lru, zone, delta);
+		return;
+	}
+
+	if (new_gen < 0) {
+		if (lru_gen_is_active(lruvec, old_gen))
+			lru += LRU_ACTIVE;
+		update_lru_size(lruvec, lru, zone, -delta);
+		return;
+	}
+
+	if (!lru_gen_is_active(lruvec, old_gen) && lru_gen_is_active(lruvec, new_gen)) {
+		update_lru_size(lruvec, lru, zone, -delta);
+		update_lru_size(lruvec, lru + LRU_ACTIVE, zone, delta);
+	}
+
+	VM_BUG_ON(lru_gen_is_active(lruvec, old_gen) && !lru_gen_is_active(lruvec, new_gen));
+}
+
+/* Add a page to a list of the multigenerational lru. Return true on success. */
+static inline bool lru_gen_addition(struct page *page, struct lruvec *lruvec, bool front)
+{
+	int gen;
+	unsigned long old_flags, new_flags;
+	int file = page_is_file_lru(page);
+	int zone = page_zonenum(page);
+	struct lrugen *lrugen = &lruvec->evictable;
+
+	if (PageUnevictable(page) || !lrugen->enabled[file])
+		return false;
+	/*
+	 * If a page is being faulted in, add it to the youngest generation.
+	 * try_walk_mm_list() may look at the size of the youngest generation to
+	 * determine if the aging is due.
+	 *
+	 * If a page can't be evicted immediately, i.e., a shmem page not in
+	 * swap cache, a dirty page waiting on writeback, or a page rejected by
+	 * evict_lru_gen_pages() due to races, dirty buffer heads, etc., add it
+	 * to the second oldest generation.
+	 *
+	 * If a page could be evicted immediately, i.e., deactivated, rotated by
+	 * writeback, or allocated for buffered io, add it to the oldest
+	 * generation.
+	 */
+	if (PageActive(page))
+		gen = lru_gen_from_seq(lrugen->max_seq);
+	else if ((!file && !PageSwapCache(page)) ||
+		 (PageReclaim(page) && (PageDirty(page) || PageWriteback(page))) ||
+		 (!PageReferenced(page) && PageWorkingset(page)))
+		gen = lru_gen_from_seq(lrugen->min_seq[file] + 1);
+	else
+		gen = lru_gen_from_seq(lrugen->min_seq[file]);
+
+	do {
+		old_flags = READ_ONCE(page->flags);
+		VM_BUG_ON_PAGE(old_flags & LRU_GEN_MASK, page);
+
+		new_flags = (old_flags & ~(LRU_GEN_MASK | BIT(PG_active))) |
+			    ((gen + 1UL) << LRU_GEN_PGOFF);
+		/* see the comment in evict_lru_gen_pages() */
+		if (!(old_flags & BIT(PG_referenced)))
+			new_flags &= ~(LRU_USAGE_MASK | LRU_TIER_FLAGS);
+	} while (cmpxchg(&page->flags, old_flags, new_flags) != old_flags);
+
+	lru_gen_update_size(page, lruvec, -1, gen);
+	if (front)
+		list_add(&page->lru, &lrugen->lists[gen][file][zone]);
+	else
+		list_add_tail(&page->lru, &lrugen->lists[gen][file][zone]);
+
+	return true;
+}
+
+/* Delete a page from a list of the multigenerational lru. Return true on success. */
+static inline bool lru_gen_deletion(struct page *page, struct lruvec *lruvec)
+{
+	int gen;
+	unsigned long old_flags, new_flags;
+
+	do {
+		old_flags = READ_ONCE(page->flags);
+		if (!(old_flags & LRU_GEN_MASK))
+			return false;
+
+		VM_BUG_ON_PAGE(PageActive(page), page);
+		VM_BUG_ON_PAGE(PageUnevictable(page), page);
+
+		gen = ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
+
+		new_flags = old_flags & ~LRU_GEN_MASK;
+		/* mark page active accordingly */
+		if (lru_gen_is_active(lruvec, gen))
+			new_flags |= BIT(PG_active);
+	} while (cmpxchg(&page->flags, old_flags, new_flags) != old_flags);
+
+	lru_gen_update_size(page, lruvec, gen, -1);
+	list_del(&page->lru);
+
+	return true;
+}
+
+/* Return -1 when a page is not on a list of the multigenerational lru. */
+static inline int page_lru_gen(struct page *page)
+{
+	return ((READ_ONCE(page->flags) & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
+}
+
+#else /* CONFIG_LRU_GEN */
+
+static inline bool lru_gen_enabled(void)
+{
+	return false;
+}
+
+static inline bool lru_gen_addition(struct page *page, struct lruvec *lruvec, bool front)
+{
+	return false;
+}
+
+static inline bool lru_gen_deletion(struct page *page, struct lruvec *lruvec)
+{
+	return false;
+}
+
+#endif /* CONFIG_LRU_GEN */
+
 static __always_inline void add_page_to_lru_list(struct page *page,
 				struct lruvec *lruvec)
 {
 	enum lru_list lru = page_lru(page);
 
+	if (lru_gen_addition(page, lruvec, true))
+		return;
+
 	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
 	list_add(&page->lru, &lruvec->lists[lru]);
 }
@@ -93,6 +280,9 @@ static __always_inline void add_page_to_lru_list_tail(struct page *page,
 {
 	enum lru_list lru = page_lru(page);
 
+	if (lru_gen_addition(page, lruvec, false))
+		return;
+
 	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
 	list_add_tail(&page->lru, &lruvec->lists[lru]);
 }
@@ -100,6 +290,9 @@ static __always_inline void add_page_to_lru_list_tail(struct page *page,
 static __always_inline void del_page_from_lru_list(struct page *page,
 				struct lruvec *lruvec)
 {
+	if (lru_gen_deletion(page, lruvec))
+		return;
+
 	list_del(&page->lru);
 	update_lru_size(lruvec, page_lru(page), page_zonenum(page),
 			-thp_nr_pages(page));
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 47946cec7584..a60c7498afd7 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -293,6 +293,112 @@ enum lruvec_flags {
 					 */
 };
 
+struct lruvec;
+
+#define LRU_GEN_MASK		((BIT(LRU_GEN_WIDTH) - 1) << LRU_GEN_PGOFF)
+#define LRU_USAGE_MASK		((BIT(LRU_USAGE_WIDTH) - 1) << LRU_USAGE_PGOFF)
+
+#ifdef CONFIG_LRU_GEN
+
+/*
+ * For each lruvec, evictable pages are divided into multiple generations. The
+ * youngest and the oldest generation numbers, AKA max_seq and min_seq, are
+ * monotonically increasing. The sliding window technique is used to track at
+ * most MAX_NR_GENS and at least MIN_NR_GENS generations. An offset within the
+ * window, AKA gen, indexes an array of per-type and per-zone lists for the
+ * corresponding generation. All pages from this array of lists have gen+1
+ * stored in page->flags. 0 is reserved to indicate that pages are not on the
+ * lists.
+ */
+#define MAX_NR_GENS		((unsigned int)CONFIG_NR_LRU_GENS)
+
+/*
+ * Each generation is then divided into multiple tiers. Tiers represent levels
+ * of usage from file descriptors, i.e., mark_page_accessed(). In contrast to
+ * moving across generations which requires the lru lock, moving across tiers
+ * only involves an atomic operation on page->flags and therefore has a
+ * negligible cost.
+ *
+ * The purposes of tiers are to:
+ *   1) estimate whether pages accessed multiple times via file descriptors are
+ *   more active than pages accessed only via page tables by separating the two
+ *   access types into upper tiers and the base tier and comparing refault rates
+ *   across tiers.
+ *   2) improve buffered io performance by deferring activations of pages
+ *   accessed multiple times until the eviction. That is activations happen in
+ *   the reclaim path, not the access path.
+ *
+ * Pages accessed N times via file descriptors belong to tier order_base_2(N).
+ * The base tier uses the following page flag:
+ *   !PageReferenced() -- readahead pages
+ *   PageReferenced() -- single-access pages
+ * All upper tiers use the following page flags:
+ *   PageReferenced() && PageWorkingset() -- multi-access pages
+ * in addition to the bits storing N-2 accesses. Therefore, we can support one
+ * upper tier without using additional bits in page->flags.
+ *
+ * Note that
+ *   1) PageWorkingset() is always set for upper tiers because we want to
+ *    maintain the existing psi behavior.
+ *   2) !PageReferenced() && PageWorkingset() is not a valid tier. See the
+ *   comment in evict_lru_gen_pages().
+ *   3) pages accessed only via page tables belong to the base tier.
+ *
+ * Pages from the base tier are evicted regardless of the refault rate. Pages
+ * from upper tiers will be moved to the next generation, if their refault rates
+ * are higher than that of the base tier.
+ */
+#define MAX_NR_TIERS		((unsigned int)CONFIG_TIERS_PER_GEN)
+#define LRU_TIER_FLAGS		(BIT(PG_referenced) | BIT(PG_workingset))
+#define LRU_USAGE_SHIFT		(CONFIG_TIERS_PER_GEN - 1)
+
+/* Whether to keep historical stats for each generation. */
+#ifdef CONFIG_LRU_GEN_STATS
+#define NR_STAT_GENS		((unsigned int)CONFIG_NR_LRU_GENS)
+#else
+#define NR_STAT_GENS		1U
+#endif
+
+struct lrugen {
+	/* the aging increments the max generation number */
+	unsigned long max_seq;
+	/* the eviction increments the min generation numbers */
+	unsigned long min_seq[ANON_AND_FILE];
+	/* the birth time of each generation in jiffies */
+	unsigned long timestamps[MAX_NR_GENS];
+	/* the lists of the multigenerational lru */
+	struct list_head lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
+	/* the sizes of the multigenerational lru in pages */
+	unsigned long sizes[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
+	/* to determine which type and its tiers to evict */
+	atomic_long_t evicted[NR_STAT_GENS][ANON_AND_FILE][MAX_NR_TIERS];
+	atomic_long_t refaulted[NR_STAT_GENS][ANON_AND_FILE][MAX_NR_TIERS];
+	/* the base tier is inactive and won't be activated */
+	unsigned long activated[NR_STAT_GENS][ANON_AND_FILE][MAX_NR_TIERS - 1];
+	/* arithmetic mean weighted by geometric series 1/2, 1/4, ... */
+	unsigned long avg_total[ANON_AND_FILE][MAX_NR_TIERS];
+	unsigned long avg_refaulted[ANON_AND_FILE][MAX_NR_TIERS];
+	/* reclaim priority to compare across memcgs */
+	atomic_t priority;
+	/* whether the multigenerational lru is enabled */
+	bool enabled[ANON_AND_FILE];
+};
+
+void lru_gen_init_lruvec(struct lruvec *lruvec);
+void lru_gen_set_state(bool enable, bool main, bool swap);
+
+#else /* CONFIG_LRU_GEN */
+
+static inline void lru_gen_init_lruvec(struct lruvec *lruvec)
+{
+}
+
+static inline void lru_gen_set_state(bool enable, bool main, bool swap)
+{
+}
+
+#endif /* CONFIG_LRU_GEN */
+
 struct lruvec {
 	struct list_head		lists[NR_LRU_LISTS];
 	/* per lruvec lru_lock for memcg */
@@ -310,6 +416,10 @@ struct lruvec {
 	unsigned long			refaults[ANON_AND_FILE];
 	/* Various lruvec state flags (enum lruvec_flags) */
 	unsigned long			flags;
+#ifdef CONFIG_LRU_GEN
+	/* unevictable pages are on LRU_UNEVICTABLE */
+	struct lrugen			evictable;
+#endif
 #ifdef CONFIG_MEMCG
 	struct pglist_data *pgdat;
 #endif
diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
index 7d4ec26d8a3e..df83aaec8498 100644
--- a/include/linux/page-flags-layout.h
+++ b/include/linux/page-flags-layout.h
@@ -24,6 +24,17 @@
 #error ZONES_SHIFT -- too many zones configured adjust calculation
 #endif
 
+#ifdef CONFIG_LRU_GEN
+/*
+ * LRU_GEN_WIDTH is generated from order_base_2(CONFIG_NR_LRU_GENS + 1). The
+ * comment on MAX_NR_TIERS explains why we offset by 2 here.
+ */
+#define LRU_USAGE_WIDTH		(CONFIG_TIERS_PER_GEN - 2)
+#else
+#define LRU_GEN_WIDTH		0
+#define LRU_USAGE_WIDTH		0
+#endif
+
 #ifdef CONFIG_SPARSEMEM
 #include <asm/sparsemem.h>
 
@@ -56,7 +67,8 @@
 
 #define ZONES_WIDTH		ZONES_SHIFT
 
-#if SECTIONS_WIDTH+ZONES_WIDTH+NODES_SHIFT <= BITS_PER_LONG - NR_PAGEFLAGS
+#if SECTIONS_WIDTH+ZONES_WIDTH+LRU_GEN_WIDTH+LRU_USAGE_WIDTH+NODES_SHIFT \
+	<= BITS_PER_LONG - NR_PAGEFLAGS
 #define NODES_WIDTH		NODES_SHIFT
 #else
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
@@ -83,14 +95,16 @@
 #define KASAN_TAG_WIDTH 0
 #endif
 
-#if SECTIONS_WIDTH+ZONES_WIDTH+NODES_SHIFT+LAST_CPUPID_SHIFT+KASAN_TAG_WIDTH \
+#if SECTIONS_WIDTH+ZONES_WIDTH+LRU_GEN_WIDTH+LRU_USAGE_WIDTH+ \
+	NODES_WIDTH+KASAN_TAG_WIDTH+LAST_CPUPID_SHIFT \
 	<= BITS_PER_LONG - NR_PAGEFLAGS
 #define LAST_CPUPID_WIDTH LAST_CPUPID_SHIFT
 #else
 #define LAST_CPUPID_WIDTH 0
 #endif
 
-#if SECTIONS_WIDTH+NODES_WIDTH+ZONES_WIDTH+LAST_CPUPID_WIDTH+KASAN_TAG_WIDTH \
+#if SECTIONS_WIDTH+ZONES_WIDTH+LRU_GEN_WIDTH+LRU_USAGE_WIDTH+ \
+	NODES_WIDTH+KASAN_TAG_WIDTH+LAST_CPUPID_WIDTH \
 	> BITS_PER_LONG - NR_PAGEFLAGS
 #error "Not enough bits in page flags"
 #endif
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 04a34c08e0a6..e58984fca32a 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -817,7 +817,7 @@ static inline void ClearPageSlabPfmemalloc(struct page *page)
 	 1UL << PG_private	| 1UL << PG_private_2	|	\
 	 1UL << PG_writeback	| 1UL << PG_reserved	|	\
 	 1UL << PG_slab		| 1UL << PG_active 	|	\
-	 1UL << PG_unevictable	| __PG_MLOCKED)
+	 1UL << PG_unevictable	| __PG_MLOCKED | LRU_GEN_MASK)
 
 /*
  * Flags checked when a page is prepped for return by the page allocator.
@@ -828,7 +828,7 @@ static inline void ClearPageSlabPfmemalloc(struct page *page)
  * alloc-free cycle to prevent from reusing the page.
  */
 #define PAGE_FLAGS_CHECK_AT_PREP	\
-	(((1UL << NR_PAGEFLAGS) - 1) & ~__PG_HWPOISON)
+	((((1UL << NR_PAGEFLAGS) - 1) & ~__PG_HWPOISON) | LRU_GEN_MASK | LRU_USAGE_MASK)
 
 #define PAGE_FLAGS_PRIVATE				\
 	(1UL << PG_private | 1UL << PG_private_2)
diff --git a/kernel/bounds.c b/kernel/bounds.c
index 9795d75b09b2..a8cbf2d0b11a 100644
--- a/kernel/bounds.c
+++ b/kernel/bounds.c
@@ -22,6 +22,12 @@ int main(void)
 	DEFINE(NR_CPUS_BITS, ilog2(CONFIG_NR_CPUS));
 #endif
 	DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t));
+#ifdef CONFIG_LRU_GEN
+	/* bits needed to represent internal values stored in page->flags */
+	DEFINE(LRU_GEN_WIDTH, order_base_2(CONFIG_NR_LRU_GENS + 1));
+	/* bits needed to represent normalized values for external uses */
+	DEFINE(LRU_GEN_SHIFT, order_base_2(CONFIG_NR_LRU_GENS));
+#endif
 	/* End of constants */
 
 	return 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ae907a9c2050..26d3cc4a7a0b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2418,7 +2418,8 @@ static void __split_huge_page_tail(struct page *head, int tail,
 #ifdef CONFIG_64BIT
 			 (1L << PG_arch_2) |
 #endif
-			 (1L << PG_dirty)));
+			 (1L << PG_dirty) |
+			 LRU_GEN_MASK | LRU_USAGE_MASK));
 
 	/* ->mapping in first tail page is compound_mapcount */
 	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 8e02e865cc65..6303ed7aa511 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -71,27 +71,33 @@ void __init mminit_verify_pageflags_layout(void)
 	width = shift - SECTIONS_WIDTH - NODES_WIDTH - ZONES_WIDTH
 		- LAST_CPUPID_SHIFT - KASAN_TAG_WIDTH;
 	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_widths",
-		"Section %d Node %d Zone %d Lastcpupid %d Kasantag %d Flags %d\n",
+		"Section %d Node %d Zone %d Lastcpupid %d Kasantag %d lru gen %d tier %d Flags %d\n",
 		SECTIONS_WIDTH,
 		NODES_WIDTH,
 		ZONES_WIDTH,
 		LAST_CPUPID_WIDTH,
 		KASAN_TAG_WIDTH,
+		LRU_GEN_WIDTH,
+		LRU_USAGE_WIDTH,
 		NR_PAGEFLAGS);
 	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_shifts",
-		"Section %d Node %d Zone %d Lastcpupid %d Kasantag %d\n",
+		"Section %d Node %d Zone %d Lastcpupid %d Kasantag %d lru gen %d tier %d\n",
 		SECTIONS_SHIFT,
 		NODES_SHIFT,
 		ZONES_SHIFT,
 		LAST_CPUPID_SHIFT,
-		KASAN_TAG_WIDTH);
+		KASAN_TAG_WIDTH,
+		LRU_GEN_WIDTH,
+		LRU_USAGE_WIDTH);
 	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_pgshifts",
-		"Section %lu Node %lu Zone %lu Lastcpupid %lu Kasantag %lu\n",
+		"Section %lu Node %lu Zone %lu Lastcpupid %lu Kasantag %lu lru gen %lu tier %lu\n",
 		(unsigned long)SECTIONS_PGSHIFT,
 		(unsigned long)NODES_PGSHIFT,
 		(unsigned long)ZONES_PGSHIFT,
 		(unsigned long)LAST_CPUPID_PGSHIFT,
-		(unsigned long)KASAN_TAG_PGSHIFT);
+		(unsigned long)KASAN_TAG_PGSHIFT,
+		(unsigned long)LRU_GEN_PGOFF,
+		(unsigned long)LRU_USAGE_PGOFF);
 	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_nodezoneid",
 		"Node/Zone ID: %lu -> %lu\n",
 		(unsigned long)(ZONEID_PGOFF + ZONEID_SHIFT),
diff --git a/mm/mmzone.c b/mm/mmzone.c
index eb89d6e018e2..2ec0d7793424 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -81,6 +81,8 @@ void lruvec_init(struct lruvec *lruvec)
 
 	for_each_lru(lru)
 		INIT_LIST_HEAD(&lruvec->lists[lru]);
+
+	lru_gen_init_lruvec(lruvec);
 }
 
 #if defined(CONFIG_NUMA_BALANCING) && !defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 084a5b9a18e5..c6041d10a73a 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2702,6 +2702,8 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	err = 0;
 	atomic_inc(&proc_poll_event);
 	wake_up_interruptible(&proc_poll_wait);
+	/* stop tracking anon if the multigenerational lru is enabled */
+	lru_gen_set_state(false, false, true);
 
 out_dput:
 	filp_close(victim, NULL);
@@ -3348,6 +3350,8 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	mutex_unlock(&swapon_mutex);
 	atomic_inc(&proc_poll_event);
 	wake_up_interruptible(&proc_poll_wait);
+	/* start tracking anon if the multigenerational lru is enabled */
+	lru_gen_set_state(true, false, true);
 
 	error = 0;
 	goto out;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1a24d2e0a4cb..8559bb94d452 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -49,6 +49,7 @@
 #include <linux/printk.h>
 #include <linux/dax.h>
 #include <linux/psi.h>
+#include <linux/memory.h>
 
 #include <asm/tlbflush.h>
 #include <asm/div64.h>
@@ -4314,3 +4315,307 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 	}
 }
 EXPORT_SYMBOL_GPL(check_move_unevictable_pages);
+
+#ifdef CONFIG_LRU_GEN
+
+/*
+ * After pages are faulted in, the aging must scan them twice before the
+ * eviction can evict them. The first scan clears the accessed bit set during
+ * the initial faults, and the second scan makes sure they haven't been used
+ * since the first.
+ */
+#define MIN_NR_GENS	2
+
+#define MAX_BATCH_SIZE	8192
+
+/******************************************************************************
+ *                          shorthand helpers
+ ******************************************************************************/
+
+#define DEFINE_MAX_SEQ()						\
+	unsigned long max_seq = READ_ONCE(lruvec->evictable.max_seq)
+
+#define DEFINE_MIN_SEQ()						\
+	unsigned long min_seq[ANON_AND_FILE] = {			\
+		READ_ONCE(lruvec->evictable.min_seq[0]),		\
+		READ_ONCE(lruvec->evictable.min_seq[1]),		\
+	}
+
+#define for_each_type_zone(file, zone)					\
+	for ((file) = 0; (file) < ANON_AND_FILE; (file)++)		\
+		for ((zone) = 0; (zone) < MAX_NR_ZONES; (zone)++)
+
+#define for_each_gen_type_zone(gen, file, zone)				\
+	for ((gen) = 0; (gen) < MAX_NR_GENS; (gen)++)			\
+		for ((file) = 0; (file) < ANON_AND_FILE; (file)++)	\
+			for ((zone) = 0; (zone) < MAX_NR_ZONES; (zone)++)
+
+static int get_nr_gens(struct lruvec *lruvec, int file)
+{
+	return lruvec->evictable.max_seq - lruvec->evictable.min_seq[file] + 1;
+}
+
+static int min_nr_gens(unsigned long max_seq, unsigned long *min_seq, int swappiness)
+{
+	return max_seq - max(min_seq[!swappiness], min_seq[1]) + 1;
+}
+
+static int max_nr_gens(unsigned long max_seq, unsigned long *min_seq, int swappiness)
+{
+	return max_seq - min(min_seq[!swappiness], min_seq[1]) + 1;
+}
+
+static bool __maybe_unused seq_is_valid(struct lruvec *lruvec)
+{
+	lockdep_assert_held(&lruvec->lru_lock);
+
+	return get_nr_gens(lruvec, 0) >= MIN_NR_GENS &&
+	       get_nr_gens(lruvec, 0) <= MAX_NR_GENS &&
+	       get_nr_gens(lruvec, 1) >= MIN_NR_GENS &&
+	       get_nr_gens(lruvec, 1) <= MAX_NR_GENS;
+}
+
+/******************************************************************************
+ *                          state change
+ ******************************************************************************/
+
+#ifdef CONFIG_LRU_GEN_ENABLED
+DEFINE_STATIC_KEY_TRUE(lru_gen_static_key);
+#else
+DEFINE_STATIC_KEY_FALSE(lru_gen_static_key);
+#endif
+
+static DEFINE_MUTEX(lru_gen_state_mutex);
+static int lru_gen_nr_swapfiles __read_mostly;
+
+static bool __maybe_unused state_is_valid(struct lruvec *lruvec)
+{
+	int gen, file, zone;
+	enum lru_list lru;
+	struct lrugen *lrugen = &lruvec->evictable;
+
+	for_each_evictable_lru(lru) {
+		file = is_file_lru(lru);
+
+		if (lrugen->enabled[file] && !list_empty(&lruvec->lists[lru]))
+			return false;
+	}
+
+	for_each_gen_type_zone(gen, file, zone) {
+		if (!lrugen->enabled[file] && !list_empty(&lrugen->lists[gen][file][zone]))
+			return false;
+
+		VM_WARN_ONCE(!lrugen->enabled[file] && lrugen->sizes[gen][file][zone],
+			     "lru_gen: possible unbalanced number of pages");
+	}
+
+	return true;
+}
+
+static bool fill_lru_gen_lists(struct lruvec *lruvec)
+{
+	enum lru_list lru;
+	int batch_size = 0;
+
+	for_each_evictable_lru(lru) {
+		int file = is_file_lru(lru);
+		bool active = is_active_lru(lru);
+		struct list_head *head = &lruvec->lists[lru];
+
+		if (!lruvec->evictable.enabled[file])
+			continue;
+
+		while (!list_empty(head)) {
+			bool success;
+			struct page *page = lru_to_page(head);
+
+			VM_BUG_ON_PAGE(PageTail(page), page);
+			VM_BUG_ON_PAGE(PageUnevictable(page), page);
+			VM_BUG_ON_PAGE(PageActive(page) != active, page);
+			VM_BUG_ON_PAGE(page_lru_gen(page) != -1, page);
+			VM_BUG_ON_PAGE(page_is_file_lru(page) != file, page);
+
+			prefetchw_prev_lru_page(page, head, flags);
+
+			del_page_from_lru_list(page, lruvec);
+			success = lru_gen_addition(page, lruvec, true);
+			VM_BUG_ON(!success);
+
+			if (++batch_size == MAX_BATCH_SIZE)
+				return false;
+		}
+	}
+
+	return true;
+}
+
+static bool drain_lru_gen_lists(struct lruvec *lruvec)
+{
+	int gen, file, zone;
+	int batch_size = 0;
+
+	for_each_gen_type_zone(gen, file, zone) {
+		struct list_head *head = &lruvec->evictable.lists[gen][file][zone];
+
+		if (lruvec->evictable.enabled[file])
+			continue;
+
+		while (!list_empty(head)) {
+			bool success;
+			struct page *page = lru_to_page(head);
+
+			VM_BUG_ON_PAGE(PageTail(page), page);
+			VM_BUG_ON_PAGE(PageUnevictable(page), page);
+			VM_BUG_ON_PAGE(PageActive(page), page);
+			VM_BUG_ON_PAGE(page_is_file_lru(page) != file, page);
+			VM_BUG_ON_PAGE(page_zonenum(page) != zone, page);
+
+			prefetchw_prev_lru_page(page, head, flags);
+
+			success = lru_gen_deletion(page, lruvec);
+			VM_BUG_ON(!success);
+			add_page_to_lru_list(page, lruvec);
+
+			if (++batch_size == MAX_BATCH_SIZE)
+				return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * For file page tracking, we enable/disable it according to the main switch.
+ * For anon page tracking, we only enable it when the main switch is on and
+ * there is at least one swapfile; we disable it when there are no swapfiles
+ * regardless of the value of the main switch. Otherwise, we will eventually
+ * reach the max size of the sliding window and have to call inc_min_seq(),
+ * which brings an unnecessary overhead.
+ */
+void lru_gen_set_state(bool enable, bool main, bool swap)
+{
+	struct mem_cgroup *memcg;
+
+	mem_hotplug_begin();
+	mutex_lock(&lru_gen_state_mutex);
+	cgroup_lock();
+
+	main = main && enable != lru_gen_enabled();
+	swap = swap && !(enable ? lru_gen_nr_swapfiles++ : --lru_gen_nr_swapfiles);
+	swap = swap && lru_gen_enabled();
+	if (!main && !swap)
+		goto unlock;
+
+	if (main) {
+		if (enable)
+			static_branch_enable(&lru_gen_static_key);
+		else
+			static_branch_disable(&lru_gen_static_key);
+	}
+
+	memcg = mem_cgroup_iter(NULL, NULL, NULL);
+	do {
+		int nid;
+
+		for_each_node_state(nid, N_MEMORY) {
+			struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+			struct lrugen *lrugen = &lruvec->evictable;
+
+			spin_lock_irq(&lruvec->lru_lock);
+
+			VM_BUG_ON(!seq_is_valid(lruvec));
+			VM_BUG_ON(!state_is_valid(lruvec));
+
+			WRITE_ONCE(lrugen->enabled[0], lru_gen_enabled() && lru_gen_nr_swapfiles);
+			WRITE_ONCE(lrugen->enabled[1], lru_gen_enabled());
+
+			while (!(enable ? fill_lru_gen_lists(lruvec) :
+					  drain_lru_gen_lists(lruvec))) {
+				spin_unlock_irq(&lruvec->lru_lock);
+				cond_resched();
+				spin_lock_irq(&lruvec->lru_lock);
+			}
+
+			spin_unlock_irq(&lruvec->lru_lock);
+		}
+
+		cond_resched();
+	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
+unlock:
+	cgroup_unlock();
+	mutex_unlock(&lru_gen_state_mutex);
+	mem_hotplug_done();
+}
+
+static int __meminit __maybe_unused lru_gen_online_mem(struct notifier_block *self,
+						       unsigned long action, void *arg)
+{
+	struct mem_cgroup *memcg;
+	struct memory_notify *mnb = arg;
+	int nid = mnb->status_change_nid;
+
+	if (action != MEM_GOING_ONLINE || nid == NUMA_NO_NODE)
+		return NOTIFY_DONE;
+
+	mutex_lock(&lru_gen_state_mutex);
+	cgroup_lock();
+
+	memcg = mem_cgroup_iter(NULL, NULL, NULL);
+	do {
+		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+		struct lrugen *lrugen = &lruvec->evictable;
+
+		VM_BUG_ON(!seq_is_valid(lruvec));
+		VM_BUG_ON(!state_is_valid(lruvec));
+
+		WRITE_ONCE(lrugen->enabled[0], lru_gen_enabled() && lru_gen_nr_swapfiles);
+		WRITE_ONCE(lrugen->enabled[1], lru_gen_enabled());
+	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
+
+	cgroup_unlock();
+	mutex_unlock(&lru_gen_state_mutex);
+
+	return NOTIFY_DONE;
+}
+
+/******************************************************************************
+ *                          initialization
+ ******************************************************************************/
+
+void lru_gen_init_lruvec(struct lruvec *lruvec)
+{
+	int i;
+	int gen, file, zone;
+	struct lrugen *lrugen = &lruvec->evictable;
+
+	atomic_set(&lrugen->priority, DEF_PRIORITY);
+
+	lrugen->max_seq = MIN_NR_GENS + 1;
+	lrugen->enabled[0] = lru_gen_enabled() && lru_gen_nr_swapfiles;
+	lrugen->enabled[1] = lru_gen_enabled();
+
+	for (i = 0; i <= MIN_NR_GENS + 1; i++)
+		lrugen->timestamps[i] = jiffies;
+
+	for_each_gen_type_zone(gen, file, zone)
+		INIT_LIST_HEAD(&lrugen->lists[gen][file][zone]);
+}
+
+static int __init init_lru_gen(void)
+{
+	BUILD_BUG_ON(MIN_NR_GENS + 1 >= MAX_NR_GENS);
+	BUILD_BUG_ON(BIT(LRU_GEN_WIDTH) <= MAX_NR_GENS);
+
+	if (hotplug_memory_notifier(lru_gen_online_mem, 0))
+		pr_err("lru_gen: failed to subscribe hotplug notifications\n");
+
+	return 0;
+}
+/*
+ * We want to run as early as possible because some debug code, e.g.,
+ * dma_resv_lockdep(), calls mm_alloc() and mmput(). We only depend on mm_kobj,
+ * which is initialized one stage earlier.
+ */
+arch_initcall(init_lru_gen);
+
+#endif /* CONFIG_LRU_GEN */
-- 
2.31.1.295.g9ea45b61b8-goog


* [PATCH v2 09/16] mm: multigenerational lru: activation
  2021-04-13  6:56 [PATCH v2 00/16] Multigenerational LRU Framework Yu Zhao
                   ` (7 preceding siblings ...)
  2021-04-13  6:56 ` [PATCH v2 08/16] mm: multigenerational lru: groundwork Yu Zhao
@ 2021-04-13  6:56 ` Yu Zhao
  2021-04-13  6:56 ` [PATCH v2 10/16] mm: multigenerational lru: mm_struct list Yu Zhao
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

For pages accessed multiple times via file descriptors, instead of
activating them upon the second access, we activate them based on the
refault rates of their tiers. Pages accessed N times via file
descriptors belong to tier order_base_2(N); a small sketch of this
mapping follows the list below. Pages from tier 0, i.e., those read
ahead, accessed once via file descriptors, or accessed only via page
tables, are evicted regardless of the refault rate. Pages from other
tiers are moved to the next generation, i.e., activated, if the
refault rates of their tiers are higher than that of tier 0. Each
generation contains at most MAX_NR_TIERS tiers, and they require
MAX_NR_TIERS-2 additional bits in page->flags. This feedback model
has a few advantages over the current feedforward model:
  1) It has a negligible overhead in the access path because
  activations are done in the reclaim path.
  2) It takes mapped pages into account and avoids overprotecting
  pages accessed multiple times via file descriptors.
  3) More tiers offer better protection to pages accessed more than
  twice when buffered-I/O-intensive workloads are under memory
  pressure.
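
As a rough illustration of the tier mapping above, here is a
standalone sketch in plain C (not kernel code); it only mirrors what
order_base_2(N) evaluates to, and the number of tiers is bounded by
MAX_NR_TIERS in the actual implementation:

static int tier_from_access_count(unsigned int n)
{
	/* tier(N) = order_base_2(N) = ceil(log2(N)), with tier(1) = 0:
	 *   N = 1    -> tier 0
	 *   N = 2    -> tier 1
	 *   N = 3, 4 -> tier 2
	 *   N = 5..8 -> tier 3, and so on
	 */
	int tier = 0;

	while ((1u << tier) < n)
		tier++;
	return tier;
}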

For pages mapped upon page faults, the accessed bit is set and they
must be properly aged. We add them to the per-zone lists indexed by
max_seq, i.e., the youngest generation. For pages not in page cache
or swap cache, this can be done easily in the page fault path: we
rename lru_cache_add_inactive_or_unevictable() to
lru_cache_add_page_vma() and add a new parameter, which is set to true
for pages mapped upon page faults. For pages in page cache or swap
cache, we cannot differentiate the page fault path from the readahead
path at the time we call lru_cache_add() in add_to_page_cache_lru()
and __read_swap_cache_async(). So we add a new function
lru_gen_activation(), which is essentially activate_page(), to move
pages to the per-zone lists indexed by max_seq at a later time.
Ideally we find those pages still in lru_pvecs.lru_add and simply set
PageActive() on them without having to actually move them.
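
As a usage sketch, mirroring the call sites changed in the diff below
(the true/false arguments are taken from this patch):

	/* anon page mapped in the page fault path, not yet on any lru */
	page_add_new_anon_rmap(page, vma, vmf->address, false);
	lru_cache_add_page_vma(page, vma, true);

	/* anon page added outside the fault path, e.g., fork or migration */
	page_add_new_anon_rmap(new_page, vma, addr, false);
	lru_cache_add_page_vma(new_page, vma, false);

	/* page already in page cache or swap cache, mapped in the fault path */
	page_add_file_rmap(page, false);
	lru_gen_activation(page, vma);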

Finally, we need to be compatible with the existing notion of active
and inactive. We cannot use PageActive() because it is not set on
active pages unless they are isolated, in order to spare the aging the
trouble of clearing it when an active generation becomes inactive. A
new function page_is_active() compares the generation number of a page
with max_seq and max_seq-1 (modulo MAX_NR_GENS), which are considered
active and protected from the eviction. Other generations, which may
or may not exist, are considered inactive.
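
Ignoring wraparound, which the actual code avoids by comparing
generation offsets modulo MAX_NR_GENS rather than raw sequence
numbers, the comparison boils down to this standalone sketch:

static bool seq_is_active(unsigned long seq, unsigned long max_seq)
{
	/* the two youngest generations are considered active */
	return seq + 1 >= max_seq;
}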

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 fs/proc/task_mmu.c        |   3 +-
 include/linux/mm_inline.h | 101 +++++++++++++++++++++
 include/linux/swap.h      |   4 +-
 kernel/events/uprobes.c   |   2 +-
 mm/huge_memory.c          |   2 +-
 mm/khugepaged.c           |   2 +-
 mm/memory.c               |  14 +--
 mm/migrate.c              |   2 +-
 mm/swap.c                 |  26 +++---
 mm/swapfile.c             |   2 +-
 mm/userfaultfd.c          |   2 +-
 mm/vmscan.c               |  91 ++++++++++++++++++-
 mm/workingset.c           | 179 +++++++++++++++++++++++++++++++-------
 13 files changed, 371 insertions(+), 59 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index e862cab69583..d292f20c4e3d 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -19,6 +19,7 @@
 #include <linux/shmem_fs.h>
 #include <linux/uaccess.h>
 #include <linux/pkeys.h>
+#include <linux/mm_inline.h>
 
 #include <asm/elf.h>
 #include <asm/tlb.h>
@@ -1718,7 +1719,7 @@ static void gather_stats(struct page *page, struct numa_maps *md, int pte_dirty,
 	if (PageSwapCache(page))
 		md->swapcache += nr_pages;
 
-	if (PageActive(page) || PageUnevictable(page))
+	if (PageUnevictable(page) || page_is_active(compound_head(page), NULL))
 		md->active += nr_pages;
 
 	if (PageWriteback(page))
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 2bf910eb3dd7..5eb4b12972ec 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -95,6 +95,12 @@ static inline int lru_gen_from_seq(unsigned long seq)
 	return seq % MAX_NR_GENS;
 }
 
+/* Convert the level of usage to a tier. See the comment on MAX_NR_TIERS. */
+static inline int lru_tier_from_usage(int usage)
+{
+	return order_base_2(usage + 1);
+}
+
 /* Return a proper index regardless whether we keep a full history of stats. */
 static inline int sid_from_seq_or_gen(int seq_or_gen)
 {
@@ -238,12 +244,93 @@ static inline bool lru_gen_deletion(struct page *page, struct lruvec *lruvec)
 	return true;
 }
 
+/* Activate a page from page cache or swap cache after it's mapped. */
+static inline void lru_gen_activation(struct page *page, struct vm_area_struct *vma)
+{
+	if (!lru_gen_enabled())
+		return;
+
+	if (PageActive(page) || PageUnevictable(page) || vma_is_dax(vma) ||
+	    (vma->vm_flags & (VM_LOCKED | VM_SPECIAL)))
+		return;
+	/*
+	 * TODO: pass vm_fault to add_to_page_cache_lru() and
+	 * __read_swap_cache_async() so they can activate pages directly when in
+	 * the page fault path.
+	 */
+	activate_page(page);
+}
+
 /* Return -1 when a page is not on a list of the multigenerational lru. */
 static inline int page_lru_gen(struct page *page)
 {
 	return ((READ_ONCE(page->flags) & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
 }
 
+/* This function works regardless of whether the multigenerational lru is enabled. */
+static inline bool page_is_active(struct page *page, struct lruvec *lruvec)
+{
+	struct mem_cgroup *memcg;
+	int gen = page_lru_gen(page);
+	bool active = false;
+
+	VM_BUG_ON_PAGE(PageTail(page), page);
+
+	if (gen < 0)
+		return PageActive(page);
+
+	if (lruvec) {
+		VM_BUG_ON_PAGE(PageUnevictable(page), page);
+		VM_BUG_ON_PAGE(PageActive(page), page);
+		lockdep_assert_held(&lruvec->lru_lock);
+
+		return lru_gen_is_active(lruvec, gen);
+	}
+
+	rcu_read_lock();
+
+	memcg = page_memcg_rcu(page);
+	lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
+	active = lru_gen_is_active(lruvec, gen);
+
+	rcu_read_unlock();
+
+	return active;
+}
+
+/* Return the level of usage of a page. See the comment on MAX_NR_TIERS. */
+static inline int page_tier_usage(struct page *page)
+{
+	unsigned long flags = READ_ONCE(page->flags);
+
+	return flags & BIT(PG_workingset) ?
+	       ((flags & LRU_USAGE_MASK) >> LRU_USAGE_PGOFF) + 1 : 0;
+}
+
+/* Increment the usage counter after a page is accessed via file descriptors. */
+static inline bool page_inc_usage(struct page *page)
+{
+	unsigned long old_flags, new_flags;
+
+	if (!lru_gen_enabled())
+		return PageActive(page);
+
+	do {
+		old_flags = READ_ONCE(page->flags);
+
+		if (!(old_flags & BIT(PG_workingset)))
+			new_flags = old_flags | BIT(PG_workingset);
+		else
+			new_flags = (old_flags & ~LRU_USAGE_MASK) | min(LRU_USAGE_MASK,
+				    (old_flags & LRU_USAGE_MASK) + BIT(LRU_USAGE_PGOFF));
+
+		if (old_flags == new_flags)
+			break;
+	} while (cmpxchg(&page->flags, old_flags, new_flags) != old_flags);
+
+	return true;
+}
+
 #else /* CONFIG_LRU_GEN */
 
 static inline bool lru_gen_enabled(void)
@@ -261,6 +348,20 @@ static inline bool lru_gen_deletion(struct page *page, struct lruvec *lruvec)
 	return false;
 }
 
+static inline void lru_gen_activation(struct page *page, struct vm_area_struct *vma)
+{
+}
+
+static inline bool page_is_active(struct page *page, struct lruvec *lruvec)
+{
+	return PageActive(page);
+}
+
+static inline bool page_inc_usage(struct page *page)
+{
+	return PageActive(page);
+}
+
 #endif /* CONFIG_LRU_GEN */
 
 static __always_inline void add_page_to_lru_list(struct page *page,
diff --git a/include/linux/swap.h b/include/linux/swap.h
index de2bbbf181ba..0e7532c7db22 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -350,8 +350,8 @@ extern void deactivate_page(struct page *page);
 extern void mark_page_lazyfree(struct page *page);
 extern void swap_setup(void);
 
-extern void lru_cache_add_inactive_or_unevictable(struct page *page,
-						struct vm_area_struct *vma);
+extern void lru_cache_add_page_vma(struct page *page, struct vm_area_struct *vma,
+				   bool faulting);
 
 /* linux/mm/vmscan.c */
 extern unsigned long zone_reclaimable_pages(struct zone *zone);
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 6addc9780319..4e93e5602723 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -184,7 +184,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 	if (new_page) {
 		get_page(new_page);
 		page_add_new_anon_rmap(new_page, vma, addr, false);
-		lru_cache_add_inactive_or_unevictable(new_page, vma);
+		lru_cache_add_page_vma(new_page, vma, false);
 	} else
 		/* no new page, just dec_mm_counter for old_page */
 		dec_mm_counter(mm, MM_ANONPAGES);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 26d3cc4a7a0b..2cf46270c84b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -637,7 +637,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		entry = mk_huge_pmd(page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 		page_add_new_anon_rmap(page, vma, haddr, true);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		lru_cache_add_page_vma(page, vma, true);
 		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
 		set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
 		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a7d6cb912b05..08a43910f232 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1199,7 +1199,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	spin_lock(pmd_ptl);
 	BUG_ON(!pmd_none(*pmd));
 	page_add_new_anon_rmap(new_page, vma, address, true);
-	lru_cache_add_inactive_or_unevictable(new_page, vma);
+	lru_cache_add_page_vma(new_page, vma, true);
 	pgtable_trans_huge_deposit(mm, pmd, pgtable);
 	set_pmd_at(mm, address, pmd, _pmd);
 	update_mmu_cache_pmd(vma, address, pmd);
diff --git a/mm/memory.c b/mm/memory.c
index 550405fc3b5e..9a6cb6d31430 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -73,6 +73,7 @@
 #include <linux/perf_event.h>
 #include <linux/ptrace.h>
 #include <linux/vmalloc.h>
+#include <linux/mm_inline.h>
 
 #include <trace/events/kmem.h>
 
@@ -839,7 +840,7 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 	copy_user_highpage(new_page, page, addr, src_vma);
 	__SetPageUptodate(new_page);
 	page_add_new_anon_rmap(new_page, dst_vma, addr, false);
-	lru_cache_add_inactive_or_unevictable(new_page, dst_vma);
+	lru_cache_add_page_vma(new_page, dst_vma, false);
 	rss[mm_counter(new_page)]++;
 
 	/* All done, just insert the new page copy in the child */
@@ -2907,7 +2908,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		 */
 		ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
 		page_add_new_anon_rmap(new_page, vma, vmf->address, false);
-		lru_cache_add_inactive_or_unevictable(new_page, vma);
+		lru_cache_add_page_vma(new_page, vma, true);
 		/*
 		 * We call the notify macro here because, when using secondary
 		 * mmu page tables (such as kvm shadow page tables), we want the
@@ -3438,9 +3439,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	/* ksm created a completely new copy */
 	if (unlikely(page != swapcache && swapcache)) {
 		page_add_new_anon_rmap(page, vma, vmf->address, false);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		lru_cache_add_page_vma(page, vma, true);
 	} else {
 		do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
+		lru_gen_activation(page, vma);
 	}
 
 	swap_free(entry);
@@ -3584,7 +3586,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 	page_add_new_anon_rmap(page, vma, vmf->address, false);
-	lru_cache_add_inactive_or_unevictable(page, vma);
+	lru_cache_add_page_vma(page, vma, true);
 setpte:
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
 
@@ -3709,6 +3711,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 
 	add_mm_counter(vma->vm_mm, mm_counter_file(page), HPAGE_PMD_NR);
 	page_add_file_rmap(page, true);
+	lru_gen_activation(page, vma);
 	/*
 	 * deposit and withdraw with pmd lock held
 	 */
@@ -3752,10 +3755,11 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 	if (write && !(vma->vm_flags & VM_SHARED)) {
 		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 		page_add_new_anon_rmap(page, vma, addr, false);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		lru_cache_add_page_vma(page, vma, true);
 	} else {
 		inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page));
 		page_add_file_rmap(page, false);
+		lru_gen_activation(page, vma);
 	}
 	set_pte_at(vma->vm_mm, addr, vmf->pte, entry);
 }
diff --git a/mm/migrate.c b/mm/migrate.c
index 62b81d5257aa..1064b03cac33 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -3004,7 +3004,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	inc_mm_counter(mm, MM_ANONPAGES);
 	page_add_new_anon_rmap(page, vma, addr, false);
 	if (!is_zone_device_page(page))
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		lru_cache_add_page_vma(page, vma, false);
 	get_page(page);
 
 	if (flush) {
diff --git a/mm/swap.c b/mm/swap.c
index f20ed56ebbbf..d6458ee1e9f8 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -306,7 +306,7 @@ void lru_note_cost_page(struct page *page)
 
 static void __activate_page(struct page *page, struct lruvec *lruvec)
 {
-	if (!PageActive(page) && !PageUnevictable(page)) {
+	if (!PageUnevictable(page) && !page_is_active(page, lruvec)) {
 		int nr_pages = thp_nr_pages(page);
 
 		del_page_from_lru_list(page, lruvec);
@@ -337,7 +337,7 @@ static bool need_activate_page_drain(int cpu)
 static void activate_page_on_lru(struct page *page)
 {
 	page = compound_head(page);
-	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
+	if (PageLRU(page) && !PageUnevictable(page) && !page_is_active(page, NULL)) {
 		struct pagevec *pvec;
 
 		local_lock(&lru_pvecs.lock);
@@ -431,7 +431,7 @@ void mark_page_accessed(struct page *page)
 		 * this list is never rotated or maintained, so marking an
 		 * evictable page accessed has no effect.
 		 */
-	} else if (!PageActive(page)) {
+	} else if (!page_inc_usage(page)) {
 		activate_page(page);
 		ClearPageReferenced(page);
 		workingset_activation(page);
@@ -467,15 +467,14 @@ void lru_cache_add(struct page *page)
 EXPORT_SYMBOL(lru_cache_add);
 
 /**
- * lru_cache_add_inactive_or_unevictable
+ * lru_cache_add_page_vma
  * @page:  the page to be added to LRU
  * @vma:   vma in which page is mapped for determining reclaimability
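+ * @faulting: whether @page is being mapped in the page fault path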
  *
- * Place @page on the inactive or unevictable LRU list, depending on its
- * evictability.
+ * Place @page on an LRU list, depending on its evictability.
  */
-void lru_cache_add_inactive_or_unevictable(struct page *page,
-					 struct vm_area_struct *vma)
+void lru_cache_add_page_vma(struct page *page, struct vm_area_struct *vma,
+			    bool faulting)
 {
 	bool unevictable;
 
@@ -492,6 +491,11 @@ void lru_cache_add_inactive_or_unevictable(struct page *page,
 		__mod_zone_page_state(page_zone(page), NR_MLOCK, nr_pages);
 		count_vm_events(UNEVICTABLE_PGMLOCKED, nr_pages);
 	}
+
+	/* tell the multigenerational lru that the page is being faulted in */
+	if (lru_gen_enabled() && !unevictable && faulting)
+		SetPageActive(page);
+
 	lru_cache_add(page);
 }
 
@@ -518,7 +522,7 @@ void lru_cache_add_inactive_or_unevictable(struct page *page,
  */
 static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 {
-	bool active = PageActive(page);
+	bool active = page_is_active(page, lruvec);
 	int nr_pages = thp_nr_pages(page);
 
 	if (PageUnevictable(page))
@@ -558,7 +562,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 
 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
 {
-	if (PageActive(page) && !PageUnevictable(page)) {
+	if (!PageUnevictable(page) && page_is_active(page, lruvec)) {
 		int nr_pages = thp_nr_pages(page);
 
 		del_page_from_lru_list(page, lruvec);
@@ -672,7 +676,7 @@ void deactivate_file_page(struct page *page)
  */
 void deactivate_page(struct page *page)
 {
-	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
+	if (PageLRU(page) && !PageUnevictable(page) && page_is_active(page, NULL)) {
 		struct pagevec *pvec;
 
 		local_lock(&lru_pvecs.lock);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index c6041d10a73a..ab3b5ca404fd 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1936,7 +1936,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		page_add_anon_rmap(page, vma, addr, false);
 	} else { /* ksm created a completely new copy */
 		page_add_new_anon_rmap(page, vma, addr, false);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		lru_cache_add_page_vma(page, vma, false);
 	}
 	swap_free(entry);
 out:
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 9a3d451402d7..e1d4cd3103b8 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -123,7 +123,7 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 
 	inc_mm_counter(dst_mm, MM_ANONPAGES);
 	page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
-	lru_cache_add_inactive_or_unevictable(page, dst_vma);
+	lru_cache_add_page_vma(page, dst_vma, true);
 
 	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8559bb94d452..c74ebe2039f7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -898,9 +898,11 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 
 	if (PageSwapCache(page)) {
 		swp_entry_t swap = { .val = page_private(page) };
-		mem_cgroup_swapout(page, swap);
+
+		/* get a shadow entry before page_memcg() is cleared */
 		if (reclaimed && !mapping_exiting(mapping))
 			shadow = workingset_eviction(page, target_memcg);
+		mem_cgroup_swapout(page, swap);
 		__delete_from_swap_cache(page, swap, shadow);
 		xa_unlock_irqrestore(&mapping->i_pages, flags);
 		put_swap_page(page, swap);
@@ -4375,6 +4377,93 @@ static bool __maybe_unused seq_is_valid(struct lruvec *lruvec)
 	       get_nr_gens(lruvec, 1) <= MAX_NR_GENS;
 }
 
+/******************************************************************************
+ *                          refault feedback loop
+ ******************************************************************************/
+
+/*
+ * A feedback loop modeled after the PID controller. Currently supports the
+ * proportional (P) and the integral (I) terms; the derivative (D) term can be
+ * added if necessary. The setpoint (SP) is the desired position; the process
+ * variable (PV) is the measured position. The error is the difference between
+ * the SP and the PV. A positive error results in a positive control output
+ * correction, which, in our case, is to allow eviction.
+ *
+ * The P term is the current refault rate refaulted/(evicted+activated), which
+ * has a weight of 1. The I term is the arithmetic mean of the last N refault
+ * rates, weighted by geometric series 1/2, 1/4, ..., 1/(1<<N).
+ *
+ * Our goal is to make sure upper tiers have refault rates similar to that of
+ * the base tier. That is, we try to be fair to all tiers by maintaining
+ * similar refault rates across them.
+ */
+struct controller_pos {
+	unsigned long refaulted;
+	unsigned long total;
+	int gain;
+};
+
+static void read_controller_pos(struct controller_pos *pos, struct lruvec *lruvec,
+				int file, int tier, int gain)
+{
+	struct lrugen *lrugen = &lruvec->evictable;
+	int sid = sid_from_seq_or_gen(lrugen->min_seq[file]);
+
+	pos->refaulted = lrugen->avg_refaulted[file][tier] +
+			 atomic_long_read(&lrugen->refaulted[sid][file][tier]);
+	pos->total = lrugen->avg_total[file][tier] +
+		     atomic_long_read(&lrugen->evicted[sid][file][tier]);
+	if (tier)
+		pos->total += lrugen->activated[sid][file][tier - 1];
+	pos->gain = gain;
+}
+
+static void reset_controller_pos(struct lruvec *lruvec, int gen, int file)
+{
+	int tier;
+	int sid = sid_from_seq_or_gen(gen);
+	struct lrugen *lrugen = &lruvec->evictable;
+	bool carryover = gen == lru_gen_from_seq(lrugen->min_seq[file]);
+
+	if (!carryover && NR_STAT_GENS == 1)
+		return;
+
+	for (tier = 0; tier < MAX_NR_TIERS; tier++) {
+		if (carryover) {
+			unsigned long sum;
+
+			sum = lrugen->avg_refaulted[file][tier] +
+			      atomic_long_read(&lrugen->refaulted[sid][file][tier]);
+			WRITE_ONCE(lrugen->avg_refaulted[file][tier], sum >> 1);
+
+			sum = lrugen->avg_total[file][tier] +
+			      atomic_long_read(&lrugen->evicted[sid][file][tier]);
+			if (tier)
+				sum += lrugen->activated[sid][file][tier - 1];
+			WRITE_ONCE(lrugen->avg_total[file][tier], sum >> 1);
+
+			if (NR_STAT_GENS > 1)
+				continue;
+		}
+
+		atomic_long_set(&lrugen->refaulted[sid][file][tier], 0);
+		atomic_long_set(&lrugen->evicted[sid][file][tier], 0);
+		if (tier)
+			WRITE_ONCE(lrugen->activated[sid][file][tier - 1], 0);
+	}
+}
+
+static bool positive_ctrl_err(struct controller_pos *sp, struct controller_pos *pv)
+{
+	/*
+	 * Allow eviction if the PV has a limited number of refaulted pages or a
+	 * lower refault rate than the SP.
+	 */
+	return pv->refaulted < SWAP_CLUSTER_MAX ||
+	       pv->refaulted * max(sp->total, 1UL) * sp->gain <=
+	       sp->refaulted * max(pv->total, 1UL) * pv->gain;
+}
+
 /******************************************************************************
  *                          state change
  ******************************************************************************/
diff --git a/mm/workingset.c b/mm/workingset.c
index cd39902c1062..df363f9419fc 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -168,9 +168,9 @@
  * refault distance will immediately activate the refaulting page.
  */
 
-#define EVICTION_SHIFT	((BITS_PER_LONG - BITS_PER_XA_VALUE) +	\
-			 1 + NODES_SHIFT + MEM_CGROUP_ID_SHIFT)
-#define EVICTION_MASK	(~0UL >> EVICTION_SHIFT)
+#define EVICTION_SHIFT		(BITS_PER_XA_VALUE - MEM_CGROUP_ID_SHIFT - NODES_SHIFT)
+#define EVICTION_MASK		(BIT(EVICTION_SHIFT) - 1)
+#define WORKINGSET_WIDTH	1
 
 /*
  * Eviction timestamps need to be able to cover the full range of
@@ -182,38 +182,139 @@
  */
 static unsigned int bucket_order __read_mostly;
 
-static void *pack_shadow(int memcgid, pg_data_t *pgdat, unsigned long eviction,
-			 bool workingset)
+static void *pack_shadow(int memcg_id, struct pglist_data *pgdat, unsigned long val)
 {
-	eviction >>= bucket_order;
-	eviction &= EVICTION_MASK;
-	eviction = (eviction << MEM_CGROUP_ID_SHIFT) | memcgid;
-	eviction = (eviction << NODES_SHIFT) | pgdat->node_id;
-	eviction = (eviction << 1) | workingset;
+	val = (val << MEM_CGROUP_ID_SHIFT) | memcg_id;
+	val = (val << NODES_SHIFT) | pgdat->node_id;
 
-	return xa_mk_value(eviction);
+	return xa_mk_value(val);
 }
 
-static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat,
-			  unsigned long *evictionp, bool *workingsetp)
+static unsigned long unpack_shadow(void *shadow, int *memcg_id, struct pglist_data **pgdat)
 {
-	unsigned long entry = xa_to_value(shadow);
-	int memcgid, nid;
-	bool workingset;
-
-	workingset = entry & 1;
-	entry >>= 1;
-	nid = entry & ((1UL << NODES_SHIFT) - 1);
-	entry >>= NODES_SHIFT;
-	memcgid = entry & ((1UL << MEM_CGROUP_ID_SHIFT) - 1);
-	entry >>= MEM_CGROUP_ID_SHIFT;
-
-	*memcgidp = memcgid;
-	*pgdat = NODE_DATA(nid);
-	*evictionp = entry << bucket_order;
-	*workingsetp = workingset;
+	unsigned long val = xa_to_value(shadow);
+
+	*pgdat = NODE_DATA(val & (BIT(NODES_SHIFT) - 1));
+	val >>= NODES_SHIFT;
+	*memcg_id = val & (BIT(MEM_CGROUP_ID_SHIFT) - 1);
+
+	return val >> MEM_CGROUP_ID_SHIFT;
+}
+
+#ifdef CONFIG_LRU_GEN
+
+#if LRU_GEN_SHIFT + LRU_USAGE_SHIFT >= EVICTION_SHIFT
+#error "Please try smaller NODES_SHIFT, NR_LRU_GENS and TIERS_PER_GEN configurations"
+#endif
+
+static void page_set_usage(struct page *page, int usage)
+{
+	unsigned long old_flags, new_flags;
+
+	VM_BUG_ON(usage > BIT(LRU_USAGE_WIDTH));
+
+	if (!usage)
+		return;
+
+	do {
+		old_flags = READ_ONCE(page->flags);
+		new_flags = (old_flags & ~LRU_USAGE_MASK) | LRU_TIER_FLAGS |
+			    ((usage - 1UL) << LRU_USAGE_PGOFF);
+		if (old_flags == new_flags)
+			break;
+	} while (cmpxchg(&page->flags, old_flags, new_flags) != old_flags);
+}
+
+/* Return a token to be stored in the shadow entry of a page being evicted. */
+static void *lru_gen_eviction(struct page *page)
+{
+	int sid, tier;
+	unsigned long token;
+	unsigned long min_seq;
+	struct lruvec *lruvec;
+	struct lrugen *lrugen;
+	int file = page_is_file_lru(page);
+	int usage = page_tier_usage(page);
+	struct mem_cgroup *memcg = page_memcg(page);
+	struct pglist_data *pgdat = page_pgdat(page);
+
+	if (!lru_gen_enabled())
+		return NULL;
+
+	lruvec = mem_cgroup_lruvec(memcg, pgdat);
+	lrugen = &lruvec->evictable;
+	min_seq = READ_ONCE(lrugen->min_seq[file]);
+	token = (min_seq << LRU_USAGE_SHIFT) | usage;
+
+	sid = sid_from_seq_or_gen(min_seq);
+	tier = lru_tier_from_usage(usage);
+	atomic_long_add(thp_nr_pages(page), &lrugen->evicted[sid][file][tier]);
+
+	return pack_shadow(mem_cgroup_id(memcg), pgdat, token);
+}
+
+/* Account a refaulted page based on the token stored in its shadow entry. */
+static bool lru_gen_refault(struct page *page, void *shadow)
+{
+	int sid, tier, usage;
+	int memcg_id;
+	unsigned long token;
+	unsigned long min_seq;
+	struct lruvec *lruvec;
+	struct lrugen *lrugen;
+	struct pglist_data *pgdat;
+	struct mem_cgroup *memcg;
+	int file = page_is_file_lru(page);
+
+	if (!lru_gen_enabled())
+		return false;
+
+	token = unpack_shadow(shadow, &memcg_id, &pgdat);
+	if (page_pgdat(page) != pgdat)
+		return true;
+
+	rcu_read_lock();
+	memcg = page_memcg_rcu(page);
+	if (mem_cgroup_id(memcg) != memcg_id)
+		goto unlock;
+
+	usage = token & (BIT(LRU_USAGE_SHIFT) - 1);
+	token >>= LRU_USAGE_SHIFT;
+
+	lruvec = mem_cgroup_lruvec(memcg, pgdat);
+	lrugen = &lruvec->evictable;
+	min_seq = READ_ONCE(lrugen->min_seq[file]);
+	if (token != (min_seq & (EVICTION_MASK >> LRU_USAGE_SHIFT)))
+		goto unlock;
+
+	page_set_usage(page, usage);
+
+	sid = sid_from_seq_or_gen(min_seq);
+	tier = lru_tier_from_usage(usage);
+	atomic_long_add(thp_nr_pages(page), &lrugen->refaulted[sid][file][tier]);
+	inc_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file);
+	if (tier)
+		inc_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file);
+unlock:
+	rcu_read_unlock();
+
+	return true;
+}
+
+#else /* CONFIG_LRU_GEN */
+
+static void *lru_gen_eviction(struct page *page)
+{
+	return NULL;
 }
 
+static bool lru_gen_refault(struct page *page, void *shadow)
+{
+	return false;
+}
+
+#endif /* CONFIG_LRU_GEN */
+
 /**
  * workingset_age_nonresident - age non-resident entries as LRU ages
  * @lruvec: the lruvec that was aged
@@ -256,18 +357,25 @@ void *workingset_eviction(struct page *page, struct mem_cgroup *target_memcg)
 	unsigned long eviction;
 	struct lruvec *lruvec;
 	int memcgid;
+	void *shadow;
 
 	/* Page is fully exclusive and pins page's memory cgroup pointer */
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 	VM_BUG_ON_PAGE(page_count(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
+	shadow = lru_gen_eviction(page);
+	if (shadow)
+		return shadow;
+
 	lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
 	/* XXX: target_memcg can be NULL, go through lruvec */
 	memcgid = mem_cgroup_id(lruvec_memcg(lruvec));
 	eviction = atomic_long_read(&lruvec->nonresident_age);
+	eviction >>= bucket_order;
+	eviction = (eviction << WORKINGSET_WIDTH) | PageWorkingset(page);
 	workingset_age_nonresident(lruvec, thp_nr_pages(page));
-	return pack_shadow(memcgid, pgdat, eviction, PageWorkingset(page));
+	return pack_shadow(memcgid, pgdat, eviction);
 }
 
 /**
@@ -294,7 +402,10 @@ void workingset_refault(struct page *page, void *shadow)
 	bool workingset;
 	int memcgid;
 
-	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset);
+	if (lru_gen_refault(page, shadow))
+		return;
+
+	eviction = unpack_shadow(shadow, &memcgid, &pgdat);
 
 	rcu_read_lock();
 	/*
@@ -318,6 +429,8 @@ void workingset_refault(struct page *page, void *shadow)
 		goto out;
 	eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
 	refault = atomic_long_read(&eviction_lruvec->nonresident_age);
+	workingset = eviction & (BIT(WORKINGSET_WIDTH) - 1);
+	eviction = (eviction >> WORKINGSET_WIDTH) << bucket_order;
 
 	/*
 	 * Calculate the refault distance
@@ -335,7 +448,7 @@ void workingset_refault(struct page *page, void *shadow)
 	 * longest time, so the occasional inappropriate activation
 	 * leading to pressure on the active list is not a problem.
 	 */
-	refault_distance = (refault - eviction) & EVICTION_MASK;
+	refault_distance = (refault - eviction) & (EVICTION_MASK >> WORKINGSET_WIDTH);
 
 	/*
 	 * The activation decision for this page is made at the level
@@ -594,7 +707,7 @@ static int __init workingset_init(void)
 	unsigned int max_order;
 	int ret;
 
-	BUILD_BUG_ON(BITS_PER_LONG < EVICTION_SHIFT);
+	BUILD_BUG_ON(EVICTION_SHIFT < WORKINGSET_WIDTH);
 	/*
 	 * Calculate the eviction bucket size to cover the longest
 	 * actionable refault distance, which is currently half of
@@ -602,7 +715,7 @@ static int __init workingset_init(void)
 	 * some more pages at runtime, so keep working with up to
 	 * double the initial memory by using totalram_pages as-is.
 	 */
-	timestamp_bits = BITS_PER_LONG - EVICTION_SHIFT;
+	timestamp_bits = EVICTION_SHIFT - WORKINGSET_WIDTH;
 	max_order = fls_long(totalram_pages() - 1);
 	if (max_order > timestamp_bits)
 		bucket_order = max_order - timestamp_bits;
-- 
2.31.1.295.g9ea45b61b8-goog


* [PATCH v2 10/16] mm: multigenerational lru: mm_struct list
  2021-04-13  6:56 [PATCH v2 00/16] Multigenerational LRU Framework Yu Zhao
                   ` (8 preceding siblings ...)
  2021-04-13  6:56 ` [PATCH v2 09/16] mm: multigenerational lru: activation Yu Zhao
@ 2021-04-13  6:56 ` Yu Zhao
  2021-04-14 14:36   ` Matthew Wilcox
  2021-04-13  6:56 ` [PATCH v2 11/16] mm: multigenerational lru: aging Yu Zhao
                   ` (8 subsequent siblings)
  18 siblings, 1 reply; 57+ messages in thread
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

In order to scan page tables, we add infrastructure to maintain
either a system-wide mm_struct list or per-memcg mm_struct lists.
Multiple threads can concurrently work on the same mm_struct list, and
each of them is given a different mm_struct.

This infrastructure also tracks whether an mm_struct is being used on
any CPUs or has been used since the last time a worker looked at it.
In other words, workers will not be given an mm_struct that belongs to
a process that has been sleeping since the last walk.
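
Conceptually, a walker's view of the list looks like the sketch below.
This is hypothetical pseudocode, not the interface added by this
patch: mm_list_head(), mm_was_used_since_last_walk() and claim_mm()
are made-up names for illustration; the real helpers are added to
mm/vmscan.c further down.

struct mm_struct *get_next_mm_sketch(struct mem_cgroup *memcg)
{
	struct mm_struct *mm;

	/* concurrent walkers each claim a different mm_struct */
	list_for_each_entry(mm, mm_list_head(memcg), lrugen.list) {
		/*
		 * Skip an mm_struct that no CPU is using and that has
		 * not been used since the last walk, i.e., one that
		 * belongs to a sleeping process.
		 */
		if (!lru_gen_mm_is_active(mm) &&
		    !mm_was_used_since_last_walk(mm))
			continue;
		if (claim_mm(mm))
			return mm;
	}
	return NULL;
}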

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 fs/exec.c                  |   2 +
 include/linux/memcontrol.h |   6 +
 include/linux/mm_types.h   | 117 ++++++++++++++
 include/linux/mmzone.h     |   2 -
 kernel/exit.c              |   1 +
 kernel/fork.c              |  10 ++
 kernel/kthread.c           |   1 +
 kernel/sched/core.c        |   2 +
 mm/memcontrol.c            |  28 ++++
 mm/vmscan.c                | 316 +++++++++++++++++++++++++++++++++++++
 10 files changed, 483 insertions(+), 2 deletions(-)

diff --git a/fs/exec.c b/fs/exec.c
index 18594f11c31f..c691d4d7720c 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1008,6 +1008,7 @@ static int exec_mmap(struct mm_struct *mm)
 	active_mm = tsk->active_mm;
 	tsk->active_mm = mm;
 	tsk->mm = mm;
+	lru_gen_add_mm(mm);
 	/*
 	 * This prevents preemption while active_mm is being loaded and
 	 * it and mm are being updated, which could cause problems for
@@ -1018,6 +1019,7 @@ static int exec_mmap(struct mm_struct *mm)
 	if (!IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM))
 		local_irq_enable();
 	activate_mm(active_mm, mm);
+	lru_gen_switch_mm(active_mm, mm);
 	if (IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM))
 		local_irq_enable();
 	tsk->mm->vmacache_seqnum = 0;
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index f13dc02cf277..cff95ed1ee2b 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -212,6 +212,8 @@ struct obj_cgroup {
 	};
 };
 
+struct lru_gen_mm_list;
+
 /*
  * The memory controller data structure. The memory controller controls both
  * page cache and RSS per cgroup. We would eventually like to provide
@@ -335,6 +337,10 @@ struct mem_cgroup {
 	struct deferred_split deferred_split_queue;
 #endif
 
+#ifdef CONFIG_LRU_GEN
+	struct lru_gen_mm_list *mm_list;
+#endif
+
 	struct mem_cgroup_per_node *nodeinfo[0];
 	/* WARNING: nodeinfo must be the last member here */
 };
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6613b26a8894..f8a239fbb958 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -15,6 +15,8 @@
 #include <linux/page-flags-layout.h>
 #include <linux/workqueue.h>
 #include <linux/seqlock.h>
+#include <linux/nodemask.h>
+#include <linux/mmdebug.h>
 
 #include <asm/mmu.h>
 
@@ -383,6 +385,8 @@ struct core_state {
 	struct completion startup;
 };
 
+#define ANON_AND_FILE 2
+
 struct kioctx_table;
 struct mm_struct {
 	struct {
@@ -561,6 +565,22 @@ struct mm_struct {
 
 #ifdef CONFIG_IOMMU_SUPPORT
 		u32 pasid;
+#endif
+#ifdef CONFIG_LRU_GEN
+		struct {
+			/* the node of a global or per-memcg mm_struct list */
+			struct list_head list;
+#ifdef CONFIG_MEMCG
+			/* points to memcg of the owner task above */
+			struct mem_cgroup *memcg;
+#endif
+			/* whether this mm_struct has been used since the last walk */
+			nodemask_t nodes[ANON_AND_FILE];
+#ifndef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+			/* the number of CPUs using this mm_struct */
+			atomic_t nr_cpus;
+#endif
+		} lrugen;
 #endif
 	} __randomize_layout;
 
@@ -588,6 +608,103 @@ static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
 	return (struct cpumask *)&mm->cpu_bitmap;
 }
 
+#ifdef CONFIG_LRU_GEN
+
+void lru_gen_init_mm(struct mm_struct *mm);
+void lru_gen_add_mm(struct mm_struct *mm);
+void lru_gen_del_mm(struct mm_struct *mm);
+#ifdef CONFIG_MEMCG
+int lru_gen_alloc_mm_list(struct mem_cgroup *memcg);
+void lru_gen_free_mm_list(struct mem_cgroup *memcg);
+void lru_gen_migrate_mm(struct mm_struct *mm);
+#endif
+
+/*
+ * Track the usage so mm_structs that haven't been used since the last walk can
+ * be skipped. This function adds a theoretical overhead to each context switch,
+ * though it hasn't been measurable in practice.
+ */
+static inline void lru_gen_switch_mm(struct mm_struct *old, struct mm_struct *new)
+{
+	int file;
+
+	/* exclude init_mm, efi_mm, etc. */
+	if (!core_kernel_data((unsigned long)old)) {
+		VM_BUG_ON(old == &init_mm);
+
+		for (file = 0; file < ANON_AND_FILE; file++)
+			nodes_setall(old->lrugen.nodes[file]);
+
+#ifndef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+		atomic_dec(&old->lrugen.nr_cpus);
+		VM_BUG_ON_MM(atomic_read(&old->lrugen.nr_cpus) < 0, old);
+#endif
+	} else
+		VM_BUG_ON_MM(READ_ONCE(old->lrugen.list.prev) ||
+			     READ_ONCE(old->lrugen.list.next), old);
+
+	if (!core_kernel_data((unsigned long)new)) {
+		VM_BUG_ON(new == &init_mm);
+
+#ifndef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+		atomic_inc(&new->lrugen.nr_cpus);
+		VM_BUG_ON_MM(atomic_read(&new->lrugen.nr_cpus) < 0, new);
+#endif
+	} else
+		VM_BUG_ON_MM(READ_ONCE(new->lrugen.list.prev) ||
+			     READ_ONCE(new->lrugen.list.next), new);
+}
+
+/* Return whether this mm_struct is being used on any CPUs. */
+static inline bool lru_gen_mm_is_active(struct mm_struct *mm)
+{
+#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+	return !cpumask_empty(mm_cpumask(mm));
+#else
+	return atomic_read(&mm->lrugen.nr_cpus);
+#endif
+}
+
+#else /* CONFIG_LRU_GEN */
+
+static inline void lru_gen_init_mm(struct mm_struct *mm)
+{
+}
+
+static inline void lru_gen_add_mm(struct mm_struct *mm)
+{
+}
+
+static inline void lru_gen_del_mm(struct mm_struct *mm)
+{
+}
+
+#ifdef CONFIG_MEMCG
+static inline int lru_gen_alloc_mm_list(struct mem_cgroup *memcg)
+{
+	return 0;
+}
+
+static inline void lru_gen_free_mm_list(struct mem_cgroup *memcg)
+{
+}
+
+static inline void lru_gen_migrate_mm(struct mm_struct *mm)
+{
+}
+#endif
+
+static inline void lru_gen_switch_mm(struct mm_struct *old, struct mm_struct *new)
+{
+}
+
+static inline bool lru_gen_mm_is_active(struct mm_struct *mm)
+{
+	return false;
+}
+
+#endif /* CONFIG_LRU_GEN */
+
 struct mmu_gather;
 extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm);
 extern void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm);
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index a60c7498afd7..dcfadf6a8c07 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -285,8 +285,6 @@ static inline bool is_active_lru(enum lru_list lru)
 	return (lru == LRU_ACTIVE_ANON || lru == LRU_ACTIVE_FILE);
 }
 
-#define ANON_AND_FILE 2
-
 enum lruvec_flags {
 	LRUVEC_CONGESTED,		/* lruvec has many dirty pages
 					 * backed by a congested BDI
diff --git a/kernel/exit.c b/kernel/exit.c
index 04029e35e69a..e4292717ce37 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -422,6 +422,7 @@ void mm_update_next_owner(struct mm_struct *mm)
 		goto retry;
 	}
 	WRITE_ONCE(mm->owner, c);
+	lru_gen_migrate_mm(mm);
 	task_unlock(c);
 	put_task_struct(c);
 }
diff --git a/kernel/fork.c b/kernel/fork.c
index 426cd0c51f9e..dfa84200229f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -665,6 +665,7 @@ static void check_mm(struct mm_struct *mm)
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !USE_SPLIT_PMD_PTLOCKS
 	VM_BUG_ON_MM(mm->pmd_huge_pte, mm);
 #endif
+	VM_BUG_ON_MM(lru_gen_mm_is_active(mm), mm);
 }
 
 #define allocate_mm()	(kmem_cache_alloc(mm_cachep, GFP_KERNEL))
@@ -1055,6 +1056,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 		goto fail_nocontext;
 
 	mm->user_ns = get_user_ns(user_ns);
+	lru_gen_init_mm(mm);
 	return mm;
 
 fail_nocontext:
@@ -1097,6 +1099,7 @@ static inline void __mmput(struct mm_struct *mm)
 	}
 	if (mm->binfmt)
 		module_put(mm->binfmt->module);
+	lru_gen_del_mm(mm);
 	mmdrop(mm);
 }
 
@@ -2521,6 +2524,13 @@ pid_t kernel_clone(struct kernel_clone_args *args)
 		get_task_struct(p);
 	}
 
+	if (IS_ENABLED(CONFIG_LRU_GEN) && !(clone_flags & CLONE_VM)) {
+		/* lock the task to synchronize with memcg migration */
+		task_lock(p);
+		lru_gen_add_mm(p->mm);
+		task_unlock(p);
+	}
+
 	wake_up_new_task(p);
 
 	/* forking complete and child started to run, tell ptracer */
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 1578973c5740..8da7767bb06a 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1303,6 +1303,7 @@ void kthread_use_mm(struct mm_struct *mm)
 	tsk->mm = mm;
 	membarrier_update_current_mm(mm);
 	switch_mm_irqs_off(active_mm, mm, tsk);
+	lru_gen_switch_mm(active_mm, mm);
 	local_irq_enable();
 	task_unlock(tsk);
 #ifdef finish_arch_post_lock_switch
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 98191218d891..bd626dbdb816 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4306,6 +4306,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
 		 * finish_task_switch()'s mmdrop().
 		 */
 		switch_mm_irqs_off(prev->active_mm, next->mm, next);
+		lru_gen_switch_mm(prev->active_mm, next->mm);
 
 		if (!prev->mm) {                        // from kernel
 			/* will mmdrop() in finish_task_switch(). */
@@ -7597,6 +7598,7 @@ void idle_task_exit(void)
 
 	if (mm != &init_mm) {
 		switch_mm(mm, &init_mm, current);
+		lru_gen_switch_mm(mm, &init_mm);
 		finish_arch_post_lock_switch();
 	}
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e064ac0d850a..496e91e813af 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5206,6 +5206,7 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
 		free_mem_cgroup_per_node_info(memcg, node);
 	free_percpu(memcg->vmstats_percpu);
 	free_percpu(memcg->vmstats_local);
+	lru_gen_free_mm_list(memcg);
 	kfree(memcg);
 }
 
@@ -5258,6 +5259,9 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
 		if (alloc_mem_cgroup_per_node_info(memcg, node))
 			goto fail;
 
+	if (lru_gen_alloc_mm_list(memcg))
+		goto fail;
+
 	if (memcg_wb_domain_init(memcg, GFP_KERNEL))
 		goto fail;
 
@@ -6162,6 +6166,29 @@ static void mem_cgroup_move_task(void)
 }
 #endif
 
+#ifdef CONFIG_LRU_GEN
+static void mem_cgroup_attach(struct cgroup_taskset *tset)
+{
+	struct cgroup_subsys_state *css;
+	struct task_struct *task = NULL;
+
+	cgroup_taskset_for_each_leader(task, css, tset)
+		;
+
+	if (!task)
+		return;
+
+	task_lock(task);
+	if (task->mm && task->mm->owner == task)
+		lru_gen_migrate_mm(task->mm);
+	task_unlock(task);
+}
+#else
+static void mem_cgroup_attach(struct cgroup_taskset *tset)
+{
+}
+#endif
+
 static int seq_puts_memcg_tunable(struct seq_file *m, unsigned long value)
 {
 	if (value == PAGE_COUNTER_MAX)
@@ -6502,6 +6529,7 @@ struct cgroup_subsys memory_cgrp_subsys = {
 	.css_free = mem_cgroup_css_free,
 	.css_reset = mem_cgroup_css_reset,
 	.can_attach = mem_cgroup_can_attach,
+	.attach = mem_cgroup_attach,
 	.cancel_attach = mem_cgroup_cancel_attach,
 	.post_attach = mem_cgroup_move_task,
 	.dfl_cftypes = memory_files,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c74ebe2039f7..d67dfd1e3930 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4464,6 +4464,313 @@ static bool positive_ctrl_err(struct controller_pos *sp, struct controller_pos *
 	       sp->refaulted * max(pv->total, 1UL) * pv->gain;
 }
 
+/******************************************************************************
+ *                          mm_struct list
+ ******************************************************************************/
+
+enum {
+	MM_SCHED_ACTIVE,	/* running processes */
+	MM_SCHED_INACTIVE,	/* sleeping processes */
+	MM_LOCK_CONTENTION,	/* lock contentions */
+	MM_VMA_INTERVAL,	/* VMAs within the range of current table */
+	MM_LEAF_OTHER_NODE,	/* entries not from node under reclaim */
+	MM_LEAF_OTHER_MEMCG,	/* entries not from memcg under reclaim */
+	MM_LEAF_OLD,		/* old entries */
+	MM_LEAF_YOUNG,		/* young entries */
+	MM_LEAF_DIRTY,		/* dirty entries */
+	MM_LEAF_HOLE,		/* non-present entries */
+	MM_NONLEAF_OLD,		/* old non-leaf pmd entries */
+	MM_NONLEAF_YOUNG,	/* young non-leaf pmd entries */
+	NR_MM_STATS
+};
+
+/* mnemonic codes for the stats above */
+#define MM_STAT_CODES		"aicvnmoydhlu"
+
+struct lru_gen_mm_list {
+	/* the head of a global or per-memcg mm_struct list */
+	struct list_head head;
+	/* protects the list */
+	spinlock_t lock;
+	struct {
+		/* set to max_seq after each round of walk */
+		unsigned long cur_seq;
+		/* the next mm on the list to walk */
+		struct list_head *iter;
+		/* to wait for the last worker to finish */
+		struct wait_queue_head wait;
+		/* the number of concurrent workers */
+		int nr_workers;
+		/* stats for debugging */
+		unsigned long stats[NR_STAT_GENS][NR_MM_STATS];
+	} nodes[0];
+};
+
+static struct lru_gen_mm_list *global_mm_list;
+
+static struct lru_gen_mm_list *alloc_mm_list(void)
+{
+	int nid;
+	struct lru_gen_mm_list *mm_list;
+
+	mm_list = kzalloc(struct_size(mm_list, nodes, nr_node_ids), GFP_KERNEL);
+	if (!mm_list)
+		return NULL;
+
+	INIT_LIST_HEAD(&mm_list->head);
+	spin_lock_init(&mm_list->lock);
+
+	for_each_node(nid) {
+		mm_list->nodes[nid].cur_seq = MIN_NR_GENS;
+		mm_list->nodes[nid].iter = &mm_list->head;
+		init_waitqueue_head(&mm_list->nodes[nid].wait);
+	}
+
+	return mm_list;
+}
+
+static struct lru_gen_mm_list *get_mm_list(struct mem_cgroup *memcg)
+{
+#ifdef CONFIG_MEMCG
+	if (!mem_cgroup_disabled())
+		return memcg ? memcg->mm_list : root_mem_cgroup->mm_list;
+#endif
+	VM_BUG_ON(memcg);
+
+	return global_mm_list;
+}
+
+void lru_gen_init_mm(struct mm_struct *mm)
+{
+	int file;
+
+	INIT_LIST_HEAD(&mm->lrugen.list);
+#ifdef CONFIG_MEMCG
+	mm->lrugen.memcg = NULL;
+#endif
+#ifndef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+	atomic_set(&mm->lrugen.nr_cpus, 0);
+#endif
+	for (file = 0; file < ANON_AND_FILE; file++)
+		nodes_clear(mm->lrugen.nodes[file]);
+}
+
+void lru_gen_add_mm(struct mm_struct *mm)
+{
+	struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
+	struct lru_gen_mm_list *mm_list = get_mm_list(memcg);
+
+	VM_BUG_ON_MM(!list_empty(&mm->lrugen.list), mm);
+#ifdef CONFIG_MEMCG
+	VM_BUG_ON_MM(mm->lrugen.memcg, mm);
+	WRITE_ONCE(mm->lrugen.memcg, memcg);
+#endif
+	spin_lock(&mm_list->lock);
+	list_add_tail(&mm->lrugen.list, &mm_list->head);
+	spin_unlock(&mm_list->lock);
+}
+
+void lru_gen_del_mm(struct mm_struct *mm)
+{
+	int nid;
+#ifdef CONFIG_MEMCG
+	struct lru_gen_mm_list *mm_list = get_mm_list(mm->lrugen.memcg);
+#else
+	struct lru_gen_mm_list *mm_list = get_mm_list(NULL);
+#endif
+
+	spin_lock(&mm_list->lock);
+
+	for_each_node(nid) {
+		if (mm_list->nodes[nid].iter != &mm->lrugen.list)
+			continue;
+
+		mm_list->nodes[nid].iter = mm_list->nodes[nid].iter->next;
+		if (mm_list->nodes[nid].iter == &mm_list->head)
+			WRITE_ONCE(mm_list->nodes[nid].cur_seq,
+				   mm_list->nodes[nid].cur_seq + 1);
+	}
+
+	list_del_init(&mm->lrugen.list);
+
+	spin_unlock(&mm_list->lock);
+
+#ifdef CONFIG_MEMCG
+	mem_cgroup_put(mm->lrugen.memcg);
+	WRITE_ONCE(mm->lrugen.memcg, NULL);
+#endif
+}
+
+#ifdef CONFIG_MEMCG
+int lru_gen_alloc_mm_list(struct mem_cgroup *memcg)
+{
+	if (mem_cgroup_disabled())
+		return 0;
+
+	memcg->mm_list = alloc_mm_list();
+
+	return memcg->mm_list ? 0 : -ENOMEM;
+}
+
+void lru_gen_free_mm_list(struct mem_cgroup *memcg)
+{
+	kfree(memcg->mm_list);
+	memcg->mm_list = NULL;
+}
+
+void lru_gen_migrate_mm(struct mm_struct *mm)
+{
+	struct mem_cgroup *memcg;
+
+	lockdep_assert_held(&mm->owner->alloc_lock);
+
+	if (mem_cgroup_disabled())
+		return;
+
+	rcu_read_lock();
+	memcg = mem_cgroup_from_task(mm->owner);
+	rcu_read_unlock();
+	if (memcg == mm->lrugen.memcg)
+		return;
+
+	VM_BUG_ON_MM(!mm->lrugen.memcg, mm);
+	VM_BUG_ON_MM(list_empty(&mm->lrugen.list), mm);
+
+	lru_gen_del_mm(mm);
+	lru_gen_add_mm(mm);
+}
+
+static bool mm_has_migrated(struct mm_struct *mm, struct mem_cgroup *memcg)
+{
+	return READ_ONCE(mm->lrugen.memcg) != memcg;
+}
+#else
+static bool mm_has_migrated(struct mm_struct *mm, struct mem_cgroup *memcg)
+{
+	return false;
+}
+#endif
+
+struct mm_walk_args {
+	struct mem_cgroup *memcg;
+	unsigned long max_seq;
+	unsigned long next_addr;
+	unsigned long start_pfn;
+	unsigned long end_pfn;
+	int node_id;
+	int batch_size;
+	int mm_stats[NR_MM_STATS];
+	int nr_pages[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
+	bool should_walk[ANON_AND_FILE];
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG)
+	unsigned long bitmap[BITS_TO_LONGS(PTRS_PER_PMD)];
+#endif
+};
+
+static void reset_mm_stats(struct lru_gen_mm_list *mm_list, bool last,
+			   struct mm_walk_args *args)
+{
+	int i;
+	int nid = args->node_id;
+	int sid = sid_from_seq_or_gen(args->max_seq);
+
+	lockdep_assert_held(&mm_list->lock);
+
+	for (i = 0; i < NR_MM_STATS; i++) {
+		WRITE_ONCE(mm_list->nodes[nid].stats[sid][i],
+			   mm_list->nodes[nid].stats[sid][i] + args->mm_stats[i]);
+		args->mm_stats[i] = 0;
+	}
+
+	if (!last || NR_STAT_GENS == 1)
+		return;
+
+	sid = sid_from_seq_or_gen(args->max_seq + 1);
+	for (i = 0; i < NR_MM_STATS; i++)
+		WRITE_ONCE(mm_list->nodes[nid].stats[sid][i], 0);
+}
+
+static bool should_skip_mm(struct mm_struct *mm, int nid, int swappiness)
+{
+	int file;
+	unsigned long size = 0;
+
+	if (mm_is_oom_victim(mm))
+		return true;
+
+	for (file = !swappiness; file < ANON_AND_FILE; file++) {
+		if (lru_gen_mm_is_active(mm) || node_isset(nid, mm->lrugen.nodes[file]))
+			size += file ? get_mm_counter(mm, MM_FILEPAGES) :
+				       get_mm_counter(mm, MM_ANONPAGES) +
+				       get_mm_counter(mm, MM_SHMEMPAGES);
+	}
+
+	/* leave the legwork to the rmap if mapped pages are too sparse */
+	if (size < max(SWAP_CLUSTER_MAX, mm_pgtables_bytes(mm) / PAGE_SIZE))
+		return true;
+
+	return !mmget_not_zero(mm);
+}
+
+/* To support multiple workers that concurrently walk the mm_struct list. */
+static bool get_next_mm(struct mm_walk_args *args, int swappiness, struct mm_struct **iter)
+{
+	bool last = true;
+	struct mm_struct *mm = NULL;
+	int nid = args->node_id;
+	struct lru_gen_mm_list *mm_list = get_mm_list(args->memcg);
+
+	if (*iter)
+		mmput_async(*iter);
+	else if (args->max_seq <= READ_ONCE(mm_list->nodes[nid].cur_seq))
+		return false;
+
+	spin_lock(&mm_list->lock);
+
+	VM_BUG_ON(args->max_seq > mm_list->nodes[nid].cur_seq + 1);
+	VM_BUG_ON(*iter && args->max_seq < mm_list->nodes[nid].cur_seq);
+	VM_BUG_ON(*iter && !mm_list->nodes[nid].nr_workers);
+
+	if (args->max_seq <= mm_list->nodes[nid].cur_seq) {
+		last = *iter;
+		goto done;
+	}
+
+	if (mm_list->nodes[nid].iter == &mm_list->head) {
+		VM_BUG_ON(*iter || mm_list->nodes[nid].nr_workers);
+		mm_list->nodes[nid].iter = mm_list->nodes[nid].iter->next;
+	}
+
+	while (!mm && mm_list->nodes[nid].iter != &mm_list->head) {
+		mm = list_entry(mm_list->nodes[nid].iter, struct mm_struct, lrugen.list);
+		mm_list->nodes[nid].iter = mm_list->nodes[nid].iter->next;
+		if (should_skip_mm(mm, nid, swappiness))
+			mm = NULL;
+
+		args->mm_stats[mm ? MM_SCHED_ACTIVE : MM_SCHED_INACTIVE]++;
+	}
+
+	if (mm_list->nodes[nid].iter == &mm_list->head)
+		WRITE_ONCE(mm_list->nodes[nid].cur_seq,
+			   mm_list->nodes[nid].cur_seq + 1);
+done:
+	if (*iter && !mm)
+		mm_list->nodes[nid].nr_workers--;
+	if (!*iter && mm)
+		mm_list->nodes[nid].nr_workers++;
+
+	last = last && !mm_list->nodes[nid].nr_workers &&
+	       mm_list->nodes[nid].iter == &mm_list->head;
+
+	reset_mm_stats(mm_list, last, args);
+
+	spin_unlock(&mm_list->lock);
+
+	*iter = mm;
+
+	return last;
+}
+
 /******************************************************************************
  *                          state change
  ******************************************************************************/
@@ -4694,6 +5001,15 @@ static int __init init_lru_gen(void)
 {
 	BUILD_BUG_ON(MIN_NR_GENS + 1 >= MAX_NR_GENS);
 	BUILD_BUG_ON(BIT(LRU_GEN_WIDTH) <= MAX_NR_GENS);
+	BUILD_BUG_ON(sizeof(MM_STAT_CODES) != NR_MM_STATS + 1);
+
+	if (mem_cgroup_disabled()) {
+		global_mm_list = alloc_mm_list();
+		if (!global_mm_list) {
+			pr_err("lru_gen: failed to allocate global mm_struct list\n");
+			return -ENOMEM;
+		}
+	}
 
 	if (hotplug_memory_notifier(lru_gen_online_mem, 0))
 		pr_err("lru_gen: failed to subscribe hotplug notifications\n");
-- 
2.31.1.295.g9ea45b61b8-goog


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 11/16] mm: multigenerational lru: aging
  2021-04-13  6:56 [PATCH v2 00/16] Multigenerational LRU Framework Yu Zhao
                   ` (9 preceding siblings ...)
  2021-04-13  6:56 ` [PATCH v2 10/16] mm: multigenerational lru: mm_struct list Yu Zhao
@ 2021-04-13  6:56 ` Yu Zhao
  2021-04-13  6:56 ` [PATCH v2 12/16] mm: multigenerational lru: eviction Yu Zhao
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

The aging produces young generations. Given an lruvec, the aging walks
the mm_struct list associated with this lruvec to scan page tables for
referenced pages. Upon finding one, the aging updates the generation
number of this page to max_seq. After each round of scanning, the aging
increments max_seq. The aging is due when both min_seq[2] values reach
max_seq-1, assuming both anon and file types are reclaimable.
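
The generation update itself does not need the lru lock, mirroring the
cmpxchg loop in page_update_gen() below. As a rough userspace sketch of
that idea (an assumed simplification; none of these names come from the
patch), tagging a page with the generation derived from max_seq only
takes a compare-and-swap on a flags word:

#include <stdatomic.h>

#define MAX_NR_GENS	4UL			/* illustrative value */
#define GEN_MASK	(MAX_NR_GENS - 1)	/* assumes a power-of-2 count */

struct fake_page {
	_Atomic unsigned long flags;		/* stand-in for page->flags */
};

/* store the generation for @max_seq in the low bits of @page->flags */
static void page_set_gen(struct fake_page *page, unsigned long max_seq)
{
	unsigned long gen = max_seq % MAX_NR_GENS;
	unsigned long old = atomic_load(&page->flags);
	unsigned long new;

	do {
		new = (old & ~GEN_MASK) | gen;
	} while (!atomic_compare_exchange_weak(&page->flags, &old, new));
}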

The aging uses the following optimizations when scanning page tables:
  1) It will not scan page tables from processes that have been
  sleeping since the last scan.
  2) It will not scan PTE tables under non-leaf PMD entries that do
  not have the accessed bit set, when
  CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
  3) It will not zigzag between the PGD table and the same PMD or PTE
  table spanning multiple VMAs. In other words, it finishes all the
  VMAs within the range of the same PMD or PTE table before it returns
  to the PGD table. This optimizes workloads that have large numbers
  of tiny VMAs, especially when CONFIG_PGTABLE_LEVELS=5.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/vmscan.c | 700 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 700 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d67dfd1e3930..31e1b4155677 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -50,6 +50,7 @@
 #include <linux/dax.h>
 #include <linux/psi.h>
 #include <linux/memory.h>
+#include <linux/pagewalk.h>
 
 #include <asm/tlbflush.h>
 #include <asm/div64.h>
@@ -4771,6 +4772,702 @@ static bool get_next_mm(struct mm_walk_args *args, int swappiness, struct mm_str
 	return last;
 }
 
+/******************************************************************************
+ *                          the aging
+ ******************************************************************************/
+
+static void update_batch_size(struct page *page, int old_gen, int new_gen,
+			      struct mm_walk_args *args)
+{
+	int file = page_is_file_lru(page);
+	int zone = page_zonenum(page);
+	int delta = thp_nr_pages(page);
+
+	VM_BUG_ON(old_gen >= MAX_NR_GENS);
+	VM_BUG_ON(new_gen >= MAX_NR_GENS);
+
+	args->batch_size++;
+
+	args->nr_pages[old_gen][file][zone] -= delta;
+	args->nr_pages[new_gen][file][zone] += delta;
+}
+
+static void reset_batch_size(struct lruvec *lruvec, struct mm_walk_args *args)
+{
+	int gen, file, zone;
+	struct lrugen *lrugen = &lruvec->evictable;
+
+	args->batch_size = 0;
+
+	spin_lock_irq(&lruvec->lru_lock);
+
+	for_each_gen_type_zone(gen, file, zone) {
+		enum lru_list lru = LRU_FILE * file;
+		int total = args->nr_pages[gen][file][zone];
+
+		if (!total)
+			continue;
+
+		args->nr_pages[gen][file][zone] = 0;
+		WRITE_ONCE(lrugen->sizes[gen][file][zone],
+			   lrugen->sizes[gen][file][zone] + total);
+
+		if (lru_gen_is_active(lruvec, gen))
+			lru += LRU_ACTIVE;
+		update_lru_size(lruvec, lru, zone, total);
+	}
+
+	spin_unlock_irq(&lruvec->lru_lock);
+}
+
+static int page_update_gen(struct page *page, int new_gen)
+{
+	int old_gen;
+	unsigned long old_flags, new_flags;
+
+	VM_BUG_ON(new_gen >= MAX_NR_GENS);
+
+	do {
+		old_flags = READ_ONCE(page->flags);
+
+		old_gen = ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
+		if (old_gen < 0)
+			new_flags = old_flags | BIT(PG_referenced);
+		else
+			new_flags = (old_flags & ~(LRU_GEN_MASK | LRU_USAGE_MASK |
+				     LRU_TIER_FLAGS)) | ((new_gen + 1UL) << LRU_GEN_PGOFF);
+
+		if (old_flags == new_flags)
+			break;
+	} while (cmpxchg(&page->flags, old_flags, new_flags) != old_flags);
+
+	return old_gen;
+}
+
+static int should_skip_vma(unsigned long start, unsigned long end, struct mm_walk *walk)
+{
+	struct vm_area_struct *vma = walk->vma;
+	struct mm_walk_args *args = walk->private;
+
+	if (!vma_is_accessible(vma) || is_vm_hugetlb_page(vma) ||
+	    (vma->vm_flags & (VM_LOCKED | VM_SPECIAL)))
+		return true;
+
+	if (vma_is_anonymous(vma))
+		return !args->should_walk[0];
+
+	if (vma_is_shmem(vma))
+		return !args->should_walk[0] ||
+		       mapping_unevictable(vma->vm_file->f_mapping);
+
+	return !args->should_walk[1] || vma_is_dax(vma) ||
+	       vma == get_gate_vma(vma->vm_mm) ||
+	       mapping_unevictable(vma->vm_file->f_mapping);
+}
+
+/*
+ * Some userspace memory allocators create many single-page VMAs. So instead of
+ * returning to the PGD table for each such VMA, we finish at least an entire
+ * PMD table and therefore avoid many zigzags. This optimizes page table walks
+ * for workloads that have large numbers of tiny VMAs.
+ *
+ * We scan PMD tables in two passes. The first pass reaches PTE tables and
+ * doesn't take the PMD lock. The second pass clears the accessed bit on PMD
+ * entries and needs to take the PMD lock. The second pass is only done on the
+ * PMD entries for which the first pass found the accessed bit set, and they
+ * must be:
+ *   1) leaf entries mapping huge pages from the node under reclaim
+ *   2) non-leaf entries whose leaf entries only map pages from the node under
+ *   reclaim, when CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
+ */
+static bool get_next_interval(struct mm_walk *walk, unsigned long mask, unsigned long size,
+			      unsigned long *start, unsigned long *end)
+{
+	unsigned long next = round_up(*end, size);
+	struct mm_walk_args *args = walk->private;
+
+	VM_BUG_ON(mask & size);
+	VM_BUG_ON(*start != *end);
+	VM_BUG_ON(!(*end & ~mask));
+	VM_BUG_ON((*end & mask) != (next & mask));
+
+	while (walk->vma) {
+		if (next >= walk->vma->vm_end) {
+			walk->vma = walk->vma->vm_next;
+			continue;
+		}
+
+		if ((next & mask) != (walk->vma->vm_start & mask))
+			return false;
+
+		if (next <= walk->vma->vm_start &&
+		    should_skip_vma(walk->vma->vm_start, walk->vma->vm_end, walk)) {
+			walk->vma = walk->vma->vm_next;
+			continue;
+		}
+
+		args->mm_stats[MM_VMA_INTERVAL]++;
+
+		*start = max(next, walk->vma->vm_start);
+		next = (next | ~mask) + 1;
+		/* rounded-up boundaries can wrap to 0 */
+		*end = next && next < walk->vma->vm_end ? next : walk->vma->vm_end;
+
+		return true;
+	}
+
+	return false;
+}
+
+static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
+			   struct mm_walk *walk)
+{
+	int i;
+	pte_t *pte;
+	spinlock_t *ptl;
+	int remote = 0;
+	struct mm_walk_args *args = walk->private;
+	int old_gen, new_gen = lru_gen_from_seq(args->max_seq);
+
+	VM_BUG_ON(pmd_leaf(*pmd));
+
+	pte = pte_offset_map_lock(walk->mm, pmd, start & PMD_MASK, &ptl);
+	arch_enter_lazy_mmu_mode();
+restart:
+	for (i = pte_index(start); start != end; i++, start += PAGE_SIZE) {
+		struct page *page;
+		unsigned long pfn = pte_pfn(pte[i]);
+
+		if (!pte_present(pte[i]) || is_zero_pfn(pfn)) {
+			args->mm_stats[MM_LEAF_HOLE]++;
+			continue;
+		}
+
+		if (!pte_young(pte[i])) {
+			args->mm_stats[MM_LEAF_OLD]++;
+			continue;
+		}
+
+		if (pfn < args->start_pfn || pfn >= args->end_pfn) {
+			remote++;
+			args->mm_stats[MM_LEAF_OTHER_NODE]++;
+			continue;
+		}
+
+		page = compound_head(pfn_to_page(pfn));
+		if (page_to_nid(page) != args->node_id) {
+			remote++;
+			args->mm_stats[MM_LEAF_OTHER_NODE]++;
+			continue;
+		}
+
+		if (!ptep_test_and_clear_young(walk->vma, start, pte + i))
+			continue;
+
+		if (pte_dirty(pte[i]) && !PageDirty(page) &&
+		    !(PageAnon(page) && PageSwapBacked(page) && !PageSwapCache(page))) {
+			set_page_dirty(page);
+			args->mm_stats[MM_LEAF_DIRTY]++;
+		}
+
+		if (page_memcg_rcu(page) != args->memcg) {
+			args->mm_stats[MM_LEAF_OTHER_MEMCG]++;
+			continue;
+		}
+
+		old_gen = page_update_gen(page, new_gen);
+		if (old_gen >= 0 && old_gen != new_gen)
+			update_batch_size(page, old_gen, new_gen, args);
+		args->mm_stats[MM_LEAF_YOUNG]++;
+	}
+
+	if (i < PTRS_PER_PTE && get_next_interval(walk, PMD_MASK, PAGE_SIZE, &start, &end))
+		goto restart;
+
+	arch_leave_lazy_mmu_mode();
+	pte_unmap_unlock(pte, ptl);
+
+	return !remote;
+}
+
+static bool walk_pmd_range_unlocked(pud_t *pud, unsigned long start, unsigned long end,
+				    struct mm_walk *walk)
+{
+	int i;
+	pmd_t *pmd;
+	unsigned long next;
+	int young = 0;
+	struct mm_walk_args *args = walk->private;
+
+	VM_BUG_ON(pud_leaf(*pud));
+
+	pmd = pmd_offset(pud, start & PUD_MASK);
+restart:
+	for (i = pmd_index(start); start != end; i++, start = next) {
+		pmd_t val = pmd_read_atomic(pmd + i);
+
+		next = pmd_addr_end(start, end);
+
+		barrier();
+		if (!pmd_present(val) || is_huge_zero_pmd(val)) {
+			args->mm_stats[MM_LEAF_HOLE]++;
+			continue;
+		}
+
+		if (pmd_trans_huge(val)) {
+			unsigned long pfn = pmd_pfn(val);
+
+			if (!pmd_young(val)) {
+				args->mm_stats[MM_LEAF_OLD]++;
+				continue;
+			}
+
+			if (pfn < args->start_pfn || pfn >= args->end_pfn) {
+				args->mm_stats[MM_LEAF_OTHER_NODE]++;
+				continue;
+			}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+			young++;
+			__set_bit(i, args->bitmap);
+#endif
+			continue;
+		}
+
+#ifdef CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG
+		if (!pmd_young(val)) {
+			args->mm_stats[MM_NONLEAF_OLD]++;
+			continue;
+		}
+#endif
+
+		if (walk_pte_range(&val, start, next, walk)) {
+#ifdef CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG
+			young++;
+			__set_bit(i, args->bitmap);
+#endif
+		}
+	}
+
+	if (i < PTRS_PER_PMD && get_next_interval(walk, PUD_MASK, PMD_SIZE, &start, &end))
+		goto restart;
+
+	return young;
+}
+
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG)
+static void walk_pmd_range_locked(pud_t *pud, unsigned long start, unsigned long end,
+				  struct mm_walk *walk)
+{
+	int i;
+	pmd_t *pmd;
+	spinlock_t *ptl;
+	struct mm_walk_args *args = walk->private;
+	int old_gen, new_gen = lru_gen_from_seq(args->max_seq);
+
+	VM_BUG_ON(pud_leaf(*pud));
+
+	start &= PUD_MASK;
+	pmd = pmd_offset(pud, start);
+	ptl = pmd_lock(walk->mm, pmd);
+	arch_enter_lazy_mmu_mode();
+
+	for_each_set_bit(i, args->bitmap, PTRS_PER_PMD) {
+		struct page *page;
+		unsigned long pfn = pmd_pfn(pmd[i]);
+		unsigned long addr = start + PMD_SIZE * i;
+
+		if (!pmd_present(pmd[i]) || is_huge_zero_pmd(pmd[i])) {
+			args->mm_stats[MM_LEAF_HOLE]++;
+			continue;
+		}
+
+		if (!pmd_young(pmd[i])) {
+			args->mm_stats[MM_LEAF_OLD]++;
+			continue;
+		}
+
+		if (!pmd_trans_huge(pmd[i])) {
+#ifdef CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG
+			args->mm_stats[MM_NONLEAF_YOUNG]++;
+			pmdp_test_and_clear_young(walk->vma, addr, pmd + i);
+#endif
+			continue;
+		}
+
+		if (pfn < args->start_pfn || pfn >= args->end_pfn) {
+			args->mm_stats[MM_LEAF_OTHER_NODE]++;
+			continue;
+		}
+
+		page = pfn_to_page(pfn);
+		VM_BUG_ON_PAGE(PageTail(page), page);
+		if (page_to_nid(page) != args->node_id) {
+			args->mm_stats[MM_LEAF_OTHER_NODE]++;
+			continue;
+		}
+
+		if (!pmdp_test_and_clear_young(walk->vma, addr, pmd + i))
+			continue;
+
+		if (pmd_dirty(pmd[i]) && !PageDirty(page) &&
+		    !(PageAnon(page) && PageSwapBacked(page) && !PageSwapCache(page))) {
+			set_page_dirty(page);
+			args->mm_stats[MM_LEAF_DIRTY]++;
+		}
+
+		if (page_memcg_rcu(page) != args->memcg) {
+			args->mm_stats[MM_LEAF_OTHER_MEMCG]++;
+			continue;
+		}
+
+		old_gen = page_update_gen(page, new_gen);
+		if (old_gen >= 0 && old_gen != new_gen)
+			update_batch_size(page, old_gen, new_gen, args);
+		args->mm_stats[MM_LEAF_YOUNG]++;
+	}
+
+	arch_leave_lazy_mmu_mode();
+	spin_unlock(ptl);
+
+	memset(args->bitmap, 0, sizeof(args->bitmap));
+}
+#else
+static void walk_pmd_range_locked(pud_t *pud, unsigned long start, unsigned long end,
+				  struct mm_walk *walk)
+{
+}
+#endif
+
+static int walk_pud_range(p4d_t *p4d, unsigned long start, unsigned long end,
+			  struct mm_walk *walk)
+{
+	int i;
+	pud_t *pud;
+	unsigned long next;
+	struct mm_walk_args *args = walk->private;
+
+	VM_BUG_ON(p4d_leaf(*p4d));
+
+	pud = pud_offset(p4d, start & P4D_MASK);
+restart:
+	for (i = pud_index(start); start != end; i++, start = next) {
+		pud_t val = READ_ONCE(pud[i]);
+
+		next = pud_addr_end(start, end);
+
+		if (!pud_present(val) || WARN_ON_ONCE(pud_leaf(val)))
+			continue;
+
+		if (walk_pmd_range_unlocked(&val, start, next, walk))
+			walk_pmd_range_locked(&val, start, next, walk);
+
+		if (args->batch_size >= MAX_BATCH_SIZE) {
+			end = (start | ~PUD_MASK) + 1;
+			goto done;
+		}
+	}
+
+	if (i < PTRS_PER_PUD && get_next_interval(walk, P4D_MASK, PUD_SIZE, &start, &end))
+		goto restart;
+
+	end = round_up(end, P4D_SIZE);
+done:
+	/* rounded-up boundaries can wrap to 0 */
+	args->next_addr = end && walk->vma ? max(end, walk->vma->vm_start) : 0;
+
+	return -EAGAIN;
+}
+
+static void walk_mm(struct mm_walk_args *args, int swappiness, struct mm_struct *mm)
+{
+	static const struct mm_walk_ops mm_walk_ops = {
+		.test_walk = should_skip_vma,
+		.p4d_entry = walk_pud_range,
+	};
+
+	int err;
+	int file;
+	int nid = args->node_id;
+	struct mem_cgroup *memcg = args->memcg;
+	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+
+	args->next_addr = FIRST_USER_ADDRESS;
+	for (file = !swappiness; file < ANON_AND_FILE; file++)
+		args->should_walk[file] = lru_gen_mm_is_active(mm) ||
+					  node_isset(nid, mm->lrugen.nodes[file]);
+
+	do {
+		unsigned long start = args->next_addr;
+		unsigned long end = mm->highest_vm_end;
+
+		err = -EBUSY;
+
+		preempt_disable();
+		rcu_read_lock();
+
+#ifdef CONFIG_MEMCG
+		if (memcg && atomic_read(&memcg->moving_account)) {
+			args->mm_stats[MM_LOCK_CONTENTION]++;
+			goto contended;
+		}
+#endif
+		if (!mmap_read_trylock(mm)) {
+			args->mm_stats[MM_LOCK_CONTENTION]++;
+			goto contended;
+		}
+
+		err = walk_page_range(mm, start, end, &mm_walk_ops, args);
+
+		mmap_read_unlock(mm);
+
+		if (args->batch_size)
+			reset_batch_size(lruvec, args);
+contended:
+		rcu_read_unlock();
+		preempt_enable();
+
+		cond_resched();
+	} while (err == -EAGAIN && args->next_addr &&
+		 !mm_is_oom_victim(mm) && !mm_has_migrated(mm, memcg));
+
+	if (err == -EBUSY)
+		return;
+
+	for (file = !swappiness; file < ANON_AND_FILE; file++) {
+		if (args->should_walk[file])
+			node_clear(nid, mm->lrugen.nodes[file]);
+	}
+}
+
+static void page_inc_gen(struct page *page, struct lruvec *lruvec, bool front)
+{
+	int old_gen, new_gen;
+	unsigned long old_flags, new_flags;
+	int file = page_is_file_lru(page);
+	int zone = page_zonenum(page);
+	struct lrugen *lrugen = &lruvec->evictable;
+
+	old_gen = lru_gen_from_seq(lrugen->min_seq[file]);
+
+	do {
+		old_flags = READ_ONCE(page->flags);
+		new_gen = ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
+		VM_BUG_ON_PAGE(new_gen < 0, page);
+		if (new_gen >= 0 && new_gen != old_gen)
+			goto sort;
+
+		new_gen = (old_gen + 1) % MAX_NR_GENS;
+		new_flags = (old_flags & ~(LRU_GEN_MASK | LRU_USAGE_MASK | LRU_TIER_FLAGS)) |
+			    ((new_gen + 1UL) << LRU_GEN_PGOFF);
+		/* mark the page for reclaim if it's pending writeback */
+		if (front)
+			new_flags |= BIT(PG_reclaim);
+	} while (cmpxchg(&page->flags, old_flags, new_flags) != old_flags);
+
+	lru_gen_update_size(page, lruvec, old_gen, new_gen);
+sort:
+	if (front)
+		list_move(&page->lru, &lrugen->lists[new_gen][file][zone]);
+	else
+		list_move_tail(&page->lru, &lrugen->lists[new_gen][file][zone]);
+}
+
+static bool try_inc_min_seq(struct lruvec *lruvec, int file)
+{
+	int gen, zone;
+	bool success = false;
+	struct lrugen *lrugen = &lruvec->evictable;
+
+	VM_BUG_ON(!seq_is_valid(lruvec));
+
+	while (get_nr_gens(lruvec, file) > MIN_NR_GENS) {
+		gen = lru_gen_from_seq(lrugen->min_seq[file]);
+
+		for (zone = 0; zone < MAX_NR_ZONES; zone++) {
+			if (!list_empty(&lrugen->lists[gen][file][zone]))
+				return success;
+		}
+
+		reset_controller_pos(lruvec, gen, file);
+		WRITE_ONCE(lrugen->min_seq[file], lrugen->min_seq[file] + 1);
+
+		success = true;
+	}
+
+	return success;
+}
+
+static bool inc_min_seq(struct lruvec *lruvec, int file)
+{
+	int gen, zone;
+	int batch_size = 0;
+	struct lrugen *lrugen = &lruvec->evictable;
+
+	VM_BUG_ON(!seq_is_valid(lruvec));
+
+	if (get_nr_gens(lruvec, file) != MAX_NR_GENS)
+		return true;
+
+	gen = lru_gen_from_seq(lrugen->min_seq[file]);
+
+	for (zone = 0; zone < MAX_NR_ZONES; zone++) {
+		struct list_head *head = &lrugen->lists[gen][file][zone];
+
+		while (!list_empty(head)) {
+			struct page *page = lru_to_page(head);
+
+			VM_BUG_ON_PAGE(PageTail(page), page);
+			VM_BUG_ON_PAGE(PageUnevictable(page), page);
+			VM_BUG_ON_PAGE(PageActive(page), page);
+			VM_BUG_ON_PAGE(page_is_file_lru(page) != file, page);
+			VM_BUG_ON_PAGE(page_zonenum(page) != zone, page);
+
+			prefetchw_prev_lru_page(page, head, flags);
+
+			page_inc_gen(page, lruvec, false);
+
+			if (++batch_size == MAX_BATCH_SIZE)
+				return false;
+		}
+
+		VM_BUG_ON(lrugen->sizes[gen][file][zone]);
+	}
+
+	reset_controller_pos(lruvec, gen, file);
+	WRITE_ONCE(lrugen->min_seq[file], lrugen->min_seq[file] + 1);
+
+	return true;
+}
+
+static void inc_max_seq(struct lruvec *lruvec)
+{
+	int gen, file, zone;
+	struct lrugen *lrugen = &lruvec->evictable;
+
+	spin_lock_irq(&lruvec->lru_lock);
+
+	VM_BUG_ON(!seq_is_valid(lruvec));
+
+	for (file = 0; file < ANON_AND_FILE; file++) {
+		if (try_inc_min_seq(lruvec, file))
+			continue;
+
+		while (!inc_min_seq(lruvec, file)) {
+			spin_unlock_irq(&lruvec->lru_lock);
+			cond_resched();
+			spin_lock_irq(&lruvec->lru_lock);
+		}
+	}
+
+	gen = lru_gen_from_seq(lrugen->max_seq - 1);
+	for_each_type_zone(file, zone) {
+		enum lru_list lru = LRU_FILE * file;
+		long total = lrugen->sizes[gen][file][zone];
+
+		if (!total)
+			continue;
+
+		WARN_ON_ONCE(total != (int)total);
+
+		update_lru_size(lruvec, lru, zone, total);
+		update_lru_size(lruvec, lru + LRU_ACTIVE, zone, -total);
+	}
+
+	gen = lru_gen_from_seq(lrugen->max_seq + 1);
+	for_each_type_zone(file, zone) {
+		VM_BUG_ON(lrugen->sizes[gen][file][zone]);
+		VM_BUG_ON(!list_empty(&lrugen->lists[gen][file][zone]));
+	}
+
+	for (file = 0; file < ANON_AND_FILE; file++)
+		reset_controller_pos(lruvec, gen, file);
+
+	WRITE_ONCE(lrugen->timestamps[gen], jiffies);
+	/* make sure all preceding modifications appear first */
+	smp_store_release(&lrugen->max_seq, lrugen->max_seq + 1);
+
+	spin_unlock_irq(&lruvec->lru_lock);
+}
+
+/* Main function used by foreground, background and user-triggered aging. */
+static bool walk_mm_list(struct lruvec *lruvec, unsigned long max_seq,
+			 struct scan_control *sc, int swappiness, struct mm_walk_args *args)
+{
+	bool last;
+	bool alloc = !args;
+	struct mm_struct *mm = NULL;
+	struct lrugen *lrugen = &lruvec->evictable;
+	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+	int nid = pgdat->node_id;
+	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+	struct lru_gen_mm_list *mm_list = get_mm_list(memcg);
+
+	VM_BUG_ON(max_seq > READ_ONCE(lrugen->max_seq));
+
+	/*
+	 * For each walk of the mm_struct list of a memcg, we decrement the
+	 * priority of its lrugen. For each walk of all memcgs in kswapd, we
+	 * increment the priority of every lrugen.
+	 *
+	 * So if this lrugen has a higher priority (smaller value), it means
+	 * other concurrent reclaimers have walked its mm list, and we skip it
+	 * for this priority in order to balance the pressure on all memcgs.
+	 */
+	if (!mem_cgroup_disabled() && !cgroup_reclaim(sc) &&
+	    sc->priority > atomic_read(&lrugen->priority))
+		return false;
+
+	if (alloc) {
+		args = kvzalloc_node(sizeof(*args), GFP_KERNEL, nid);
+		if (!args)
+			return false;
+	}
+
+	args->memcg = memcg;
+	args->max_seq = max_seq;
+	args->start_pfn = pgdat->node_start_pfn;
+	args->end_pfn = pgdat_end_pfn(pgdat);
+	args->node_id = nid;
+
+	do {
+		last = get_next_mm(args, swappiness, &mm);
+		if (mm)
+			walk_mm(args, swappiness, mm);
+
+		cond_resched();
+	} while (mm);
+
+	if (alloc)
+		kvfree(args);
+
+	if (!last) {
+		/* foreground aging prefers not to wait unless "necessary" */
+		if (!current_is_kswapd() && sc->priority < DEF_PRIORITY - 2)
+			wait_event_killable(mm_list->nodes[nid].wait,
+					    max_seq < READ_ONCE(lrugen->max_seq));
+
+		return max_seq < READ_ONCE(lrugen->max_seq);
+	}
+
+	VM_BUG_ON(max_seq != READ_ONCE(lrugen->max_seq));
+
+	inc_max_seq(lruvec);
+
+	if (!mem_cgroup_disabled())
+		atomic_add_unless(&lrugen->priority, -1, 0);
+
+	/* order against inc_max_seq() */
+	smp_mb();
+	/* either we see any waiters or they will see the updated max_seq */
+	if (waitqueue_active(&mm_list->nodes[nid].wait))
+		wake_up_all(&mm_list->nodes[nid].wait);
+
+	wakeup_flusher_threads(WB_REASON_VMSCAN);
+
+	return true;
+}
+
 /******************************************************************************
  *                          state change
  ******************************************************************************/
@@ -5002,6 +5699,9 @@ static int __init init_lru_gen(void)
 	BUILD_BUG_ON(MIN_NR_GENS + 1 >= MAX_NR_GENS);
 	BUILD_BUG_ON(BIT(LRU_GEN_WIDTH) <= MAX_NR_GENS);
 	BUILD_BUG_ON(sizeof(MM_STAT_CODES) != NR_MM_STATS + 1);
+	BUILD_BUG_ON(PMD_SIZE / PAGE_SIZE != PTRS_PER_PTE);
+	BUILD_BUG_ON(PUD_SIZE / PMD_SIZE != PTRS_PER_PMD);
+	BUILD_BUG_ON(P4D_SIZE / PUD_SIZE != PTRS_PER_PUD);
 
 	if (mem_cgroup_disabled()) {
 		global_mm_list = alloc_mm_list();
-- 
2.31.1.295.g9ea45b61b8-goog


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 12/16] mm: multigenerational lru: eviction
  2021-04-13  6:56 [PATCH v2 00/16] Multigenerational LRU Framework Yu Zhao
                   ` (10 preceding siblings ...)
  2021-04-13  6:56 ` [PATCH v2 11/16] mm: multigenerational lru: aging Yu Zhao
@ 2021-04-13  6:56 ` Yu Zhao
  2021-04-13  6:56 ` [PATCH v2 13/16] mm: multigenerational lru: page reclaim Yu Zhao
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

The eviction consumes old generations. Given an lruvec, the eviction
scans the pages on the per-zone lists indexed by either min_seq[2] value.
It first tries to select a type based on the values of min_seq[2].
When anon and file types are both available from the same generation,
it selects the one that has a lower refault rate.

During a scan, the eviction sorts pages according to their generation
numbers, if the aging has found them referenced. It also moves pages
from the tiers that have higher refault rates than tier 0 to the next
generation. When it finds all the per-zone lists of a selected type
are empty, the eviction increments the min_seq[2] element indexed by
the selected type.
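
The "lower refault rate" comparison can be pictured as cross-multiplying
the refaulted/evicted counters so that no division is needed, similar in
spirit to positive_ctrl_err() earlier in the series. The sketch below is
an assumed userspace simplification: the gain factors and the per-tier
loop of the real code are left out, and all names are made up.

#include <stdbool.h>

struct tier_stats {
	unsigned long refaulted;	/* refaults seen after eviction */
	unsigned long total;		/* pages evicted from this tier */
};

/* true: evict file pages first; false: evict anon pages first */
static bool file_has_lower_refault_rate(const struct tier_stats *anon,
					const struct tier_stats *file)
{
	/* compare refaulted/total by cross-multiplying, avoiding division by 0 */
	unsigned long file_scaled = file->refaulted * (anon->total ? anon->total : 1);
	unsigned long anon_scaled = anon->refaulted * (file->total ? file->total : 1);

	return file_scaled <= anon_scaled;
}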

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/vmscan.c | 341 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 341 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 31e1b4155677..6239b1acd84f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -5468,6 +5468,347 @@ static bool walk_mm_list(struct lruvec *lruvec, unsigned long max_seq,
 	return true;
 }
 
+/******************************************************************************
+ *                          the eviction
+ ******************************************************************************/
+
+static bool sort_page(struct page *page, struct lruvec *lruvec, int tier_to_isolate)
+{
+	bool success;
+	int gen = page_lru_gen(page);
+	int file = page_is_file_lru(page);
+	int zone = page_zonenum(page);
+	int tier = lru_tier_from_usage(page_tier_usage(page));
+	struct lrugen *lrugen = &lruvec->evictable;
+
+	VM_BUG_ON_PAGE(gen == -1, page);
+	VM_BUG_ON_PAGE(tier_to_isolate < 0, page);
+
+	/* a lazy-free page that has been written into? */
+	if (file && PageDirty(page) && PageAnon(page)) {
+		success = lru_gen_deletion(page, lruvec);
+		VM_BUG_ON_PAGE(!success, page);
+		SetPageSwapBacked(page);
+		add_page_to_lru_list_tail(page, lruvec);
+		return true;
+	}
+
+	/* page_update_gen() has updated the page? */
+	if (gen != lru_gen_from_seq(lrugen->min_seq[file])) {
+		list_move(&page->lru, &lrugen->lists[gen][file][zone]);
+		return true;
+	}
+
+	/* activate the page if its tier has a higher refault rate */
+	if (tier_to_isolate < tier) {
+		int sid = sid_from_seq_or_gen(gen);
+
+		page_inc_gen(page, lruvec, false);
+		WRITE_ONCE(lrugen->activated[sid][file][tier - 1],
+			   lrugen->activated[sid][file][tier - 1] + thp_nr_pages(page));
+		inc_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + file);
+		return true;
+	}
+
+	/*
+	 * A page can't be immediately evicted, and page_inc_gen() will mark it
+	 * for reclaim and hopefully writeback will write it soon if it's dirty.
+	 */
+	if (PageLocked(page) || PageWriteback(page) || (file && PageDirty(page))) {
+		page_inc_gen(page, lruvec, true);
+		return true;
+	}
+
+	return false;
+}
+
+static bool should_skip_page(struct page *page, struct scan_control *sc)
+{
+	if (!sc->may_unmap && page_mapped(page))
+		return true;
+
+	if (!(sc->may_writepage && (sc->gfp_mask & __GFP_IO)) &&
+	    (PageDirty(page) || (PageAnon(page) && !PageSwapCache(page))))
+		return true;
+
+	if (!get_page_unless_zero(page))
+		return true;
+
+	if (!TestClearPageLRU(page)) {
+		put_page(page);
+		return true;
+	}
+
+	return false;
+}
+
+static void isolate_page(struct page *page, struct lruvec *lruvec)
+{
+	bool success;
+
+	success = lru_gen_deletion(page, lruvec);
+	VM_BUG_ON_PAGE(!success, page);
+
+	if (PageActive(page)) {
+		ClearPageActive(page);
+		/* make sure shrink_page_list() rejects this page */
+		SetPageReferenced(page);
+		return;
+	}
+
+	/* make sure shrink_page_list() doesn't try to write this page */
+	ClearPageReclaim(page);
+	/* make sure shrink_page_list() doesn't reject this page */
+	ClearPageReferenced(page);
+}
+
+static int scan_lru_gen_pages(struct lruvec *lruvec, struct scan_control *sc,
+			      long *nr_to_scan, int file, int tier,
+			      struct list_head *list)
+{
+	bool success;
+	int gen, zone;
+	enum vm_event_item item;
+	int sorted = 0;
+	int scanned = 0;
+	int isolated = 0;
+	int batch_size = 0;
+	struct lrugen *lrugen = &lruvec->evictable;
+
+	VM_BUG_ON(!list_empty(list));
+
+	if (get_nr_gens(lruvec, file) == MIN_NR_GENS)
+		return -ENOENT;
+
+	gen = lru_gen_from_seq(lrugen->min_seq[file]);
+
+	for (zone = sc->reclaim_idx; zone >= 0; zone--) {
+		LIST_HEAD(moved);
+		int skipped = 0;
+		struct list_head *head = &lrugen->lists[gen][file][zone];
+
+		while (!list_empty(head)) {
+			struct page *page = lru_to_page(head);
+			int delta = thp_nr_pages(page);
+
+			VM_BUG_ON_PAGE(PageTail(page), page);
+			VM_BUG_ON_PAGE(PageUnevictable(page), page);
+			VM_BUG_ON_PAGE(PageActive(page), page);
+			VM_BUG_ON_PAGE(page_is_file_lru(page) != file, page);
+			VM_BUG_ON_PAGE(page_zonenum(page) != zone, page);
+
+			prefetchw_prev_lru_page(page, head, flags);
+
+			scanned += delta;
+
+			if (sort_page(page, lruvec, tier))
+				sorted += delta;
+			else if (should_skip_page(page, sc)) {
+				list_move(&page->lru, &moved);
+				skipped += delta;
+			} else {
+				isolate_page(page, lruvec);
+				list_add(&page->lru, list);
+				isolated += delta;
+			}
+
+			if (scanned >= *nr_to_scan || isolated >= SWAP_CLUSTER_MAX ||
+			    ++batch_size == MAX_BATCH_SIZE)
+				break;
+		}
+
+		list_splice(&moved, head);
+		__count_zid_vm_events(PGSCAN_SKIP, zone, skipped);
+
+		if (scanned >= *nr_to_scan || isolated >= SWAP_CLUSTER_MAX ||
+		    batch_size == MAX_BATCH_SIZE)
+			break;
+	}
+
+	success = try_inc_min_seq(lruvec, file);
+
+	item = current_is_kswapd() ? PGSCAN_KSWAPD : PGSCAN_DIRECT;
+	if (!cgroup_reclaim(sc))
+		__count_vm_events(item, scanned);
+	__count_memcg_events(lruvec_memcg(lruvec), item, scanned);
+	__count_vm_events(PGSCAN_ANON + file, scanned);
+
+	*nr_to_scan -= scanned;
+
+	if (*nr_to_scan <= 0 || success || isolated)
+		return isolated;
+	/*
+	 * We may have trouble finding eligible pages due to reclaim_idx,
+	 * may_unmap and may_writepage. The following check makes sure we won't
+	 * be stuck if we aren't making enough progress.
+	 */
+	return batch_size == MAX_BATCH_SIZE && sorted >= SWAP_CLUSTER_MAX ? 0 : -ENOENT;
+}
+
+static int get_tier_to_isolate(struct lruvec *lruvec, int file)
+{
+	int tier;
+	struct controller_pos sp, pv;
+
+	/*
+	 * Ideally we don't want to evict upper tiers that have higher refault
+	 * rates. However, we need to leave some margin for the fluctuation in
+	 * refault rates. So we use a larger gain factor to make sure upper
+	 * tiers are indeed more active. We choose 2 because the lowest upper
+ * tier would have twice the refault rate of the base tier, according
+	 * to their numbers of accesses.
+	 */
+	read_controller_pos(&sp, lruvec, file, 0, 1);
+	for (tier = 1; tier < MAX_NR_TIERS; tier++) {
+		read_controller_pos(&pv, lruvec, file, tier, 2);
+		if (!positive_ctrl_err(&sp, &pv))
+			break;
+	}
+
+	return tier - 1;
+}
+
+static int get_type_to_scan(struct lruvec *lruvec, int swappiness, int *tier_to_isolate)
+{
+	int file, tier;
+	struct controller_pos sp, pv;
+	int gain[ANON_AND_FILE] = { swappiness, 200 - swappiness };
+
+	/*
+	 * Compare the refault rates between the base tiers of anon and file to
+	 * determine which type to evict. Also need to compare the refault rates
+	 * of the upper tiers of the selected type with that of the base tier to
+	 * determine which tier of the selected type to evict.
+	 */
+	read_controller_pos(&sp, lruvec, 0, 0, gain[0]);
+	read_controller_pos(&pv, lruvec, 1, 0, gain[1]);
+	file = positive_ctrl_err(&sp, &pv);
+
+	read_controller_pos(&sp, lruvec, !file, 0, gain[!file]);
+	for (tier = 1; tier < MAX_NR_TIERS; tier++) {
+		read_controller_pos(&pv, lruvec, file, tier, gain[file]);
+		if (!positive_ctrl_err(&sp, &pv))
+			break;
+	}
+
+	*tier_to_isolate = tier - 1;
+
+	return file;
+}
+
+static int isolate_lru_gen_pages(struct lruvec *lruvec, struct scan_control *sc,
+				 int swappiness, long *nr_to_scan, int *type_to_scan,
+				 struct list_head *list)
+{
+	int i;
+	int file;
+	int isolated;
+	int tier = -1;
+	DEFINE_MAX_SEQ();
+	DEFINE_MIN_SEQ();
+
+	VM_BUG_ON(!seq_is_valid(lruvec));
+
+	if (max_nr_gens(max_seq, min_seq, swappiness) == MIN_NR_GENS)
+		return 0;
+	/*
+	 * Try to select a type based on generations and swappiness, and if that
+	 * fails, fall back to get_type_to_scan(). When anon and file are both
+	 * available from the same generation, swappiness 200 is interpreted as
+	 * anon first and swappiness 1 is interpreted as file first.
+	 */
+	file = !swappiness || min_seq[0] > min_seq[1] ||
+	       (min_seq[0] == min_seq[1] && swappiness != 200 &&
+		(swappiness == 1 || get_type_to_scan(lruvec, swappiness, &tier)));
+
+	if (tier == -1)
+		tier = get_tier_to_isolate(lruvec, file);
+
+	for (i = !swappiness; i < ANON_AND_FILE; i++) {
+		isolated = scan_lru_gen_pages(lruvec, sc, nr_to_scan, file, tier, list);
+		if (isolated >= 0)
+			break;
+
+		file = !file;
+		tier = get_tier_to_isolate(lruvec, file);
+	}
+
+	if (isolated < 0)
+		isolated = *nr_to_scan = 0;
+
+	*type_to_scan = file;
+
+	return isolated;
+}
+
+/* Main function used by foreground, background and user-triggered eviction. */
+static bool evict_lru_gen_pages(struct lruvec *lruvec, struct scan_control *sc,
+				int swappiness, long *nr_to_scan)
+{
+	int file;
+	int isolated;
+	int reclaimed;
+	LIST_HEAD(list);
+	struct page *page;
+	enum vm_event_item item;
+	struct reclaim_stat stat;
+	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+
+	spin_lock_irq(&lruvec->lru_lock);
+
+	isolated = isolate_lru_gen_pages(lruvec, sc, swappiness, nr_to_scan, &file, &list);
+	VM_BUG_ON(list_empty(&list) == !!isolated);
+
+	if (isolated)
+		__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, isolated);
+
+	spin_unlock_irq(&lruvec->lru_lock);
+
+	if (!isolated)
+		goto done;
+
+	reclaimed = shrink_page_list(&list, pgdat, sc, &stat, false);
+	/*
+	 * We need to prevent rejected pages from being added back to the same
+	 * lists they were isolated from. Otherwise we may risk looping on them
+	 * forever. We use PageActive() or !PageReferenced() && PageWorkingset()
+	 * to tell lru_gen_addition() not to add them to the oldest generation.
+	 */
+	list_for_each_entry(page, &list, lru) {
+		if (PageMlocked(page))
+			continue;
+
+		if (PageReferenced(page)) {
+			SetPageActive(page);
+			ClearPageReferenced(page);
+		} else {
+			ClearPageActive(page);
+			SetPageWorkingset(page);
+		}
+	}
+
+	spin_lock_irq(&lruvec->lru_lock);
+
+	move_pages_to_lru(lruvec, &list);
+
+	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -isolated);
+
+	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
+	if (!cgroup_reclaim(sc))
+		__count_vm_events(item, reclaimed);
+	__count_memcg_events(lruvec_memcg(lruvec), item, reclaimed);
+	__count_vm_events(PGSTEAL_ANON + file, reclaimed);
+
+	spin_unlock_irq(&lruvec->lru_lock);
+
+	mem_cgroup_uncharge_list(&list);
+	free_unref_page_list(&list);
+
+	sc->nr_reclaimed += reclaimed;
+done:
+	return *nr_to_scan > 0 && sc->nr_reclaimed < sc->nr_to_reclaim;
+}
+
 /******************************************************************************
  *                          state change
  ******************************************************************************/
-- 
2.31.1.295.g9ea45b61b8-goog


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH v2 13/16] mm: multigenerational lru: page reclaim
  2021-04-13  6:56 [PATCH v2 00/16] Multigenerational LRU Framework Yu Zhao
                   ` (11 preceding siblings ...)
  2021-04-13  6:56 ` [PATCH v2 12/16] mm: multigenerational lru: eviction Yu Zhao
@ 2021-04-13  6:56 ` Yu Zhao
  2021-04-13  6:56 ` [PATCH v2 14/16] mm: multigenerational lru: user interface Yu Zhao
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

With the aging and the eviction in place, we can build the page
reclaim in a straightforward manner:
  1) In order to reduce the latency, direct reclaim only invokes the
  aging when both min_seq[2] values reach max_seq-1; otherwise it
  invokes the eviction.
  2) In order to avoid the aging in the direct reclaim path, kswapd
  does the background aging more proactively. It invokes the aging
  when either min_seq[2] value reaches max_seq-1; otherwise it invokes
  the eviction. Both conditions are sketched below.
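
A condensed userspace sketch of the two aging conditions above (an
assumed simplification; the helper names are illustrative and not part
of the patch):

#include <stdbool.h>

enum { LRU_GEN_ANON, LRU_GEN_FILE, LRU_GEN_TYPES };

/* direct reclaim: age only once both types are out of older generations */
static bool direct_reclaim_should_age(unsigned long max_seq,
				      const unsigned long min_seq[LRU_GEN_TYPES])
{
	return min_seq[LRU_GEN_ANON] + 1 >= max_seq &&
	       min_seq[LRU_GEN_FILE] + 1 >= max_seq;
}

/* kswapd: age proactively as soon as either type is out of older generations */
static bool kswapd_should_age(unsigned long max_seq,
			      const unsigned long min_seq[LRU_GEN_TYPES])
{
	return min_seq[LRU_GEN_ANON] + 1 >= max_seq ||
	       min_seq[LRU_GEN_FILE] + 1 >= max_seq;
}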

And we add another optimization: pages mapped around a referenced PTE
may also have been referenced due to spatial locality. In the
reclaim path, if the rmap finds the PTE mapping a page under reclaim
referenced, it calls a new function lru_gen_scan_around() to scan the
vicinity of the PTE. And if this new function finds other referenced
PTEs, it updates the generation number of the pages mapped by those
PTEs.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mmzone.h |   6 ++
 mm/rmap.c              |   6 ++
 mm/vmscan.c            | 236 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 248 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index dcfadf6a8c07..a22e9e40083f 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -292,6 +292,7 @@ enum lruvec_flags {
 };
 
 struct lruvec;
+struct page_vma_mapped_walk;
 
 #define LRU_GEN_MASK		((BIT(LRU_GEN_WIDTH) - 1) << LRU_GEN_PGOFF)
 #define LRU_USAGE_MASK		((BIT(LRU_USAGE_WIDTH) - 1) << LRU_USAGE_PGOFF)
@@ -384,6 +385,7 @@ struct lrugen {
 
 void lru_gen_init_lruvec(struct lruvec *lruvec);
 void lru_gen_set_state(bool enable, bool main, bool swap);
+void lru_gen_scan_around(struct page_vma_mapped_walk *pvmw);
 
 #else /* CONFIG_LRU_GEN */
 
@@ -395,6 +397,10 @@ static inline void lru_gen_set_state(bool enable, bool main, bool swap)
 {
 }
 
+static inline void lru_gen_scan_around(struct page_vma_mapped_walk *pvmw)
+{
+}
+
 #endif /* CONFIG_LRU_GEN */
 
 struct lruvec {
diff --git a/mm/rmap.c b/mm/rmap.c
index b0fc27e77d6d..d600b282ced5 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -72,6 +72,7 @@
 #include <linux/page_idle.h>
 #include <linux/memremap.h>
 #include <linux/userfaultfd_k.h>
+#include <linux/mm_inline.h>
 
 #include <asm/tlbflush.h>
 
@@ -792,6 +793,11 @@ static bool page_referenced_one(struct page *page, struct vm_area_struct *vma,
 		}
 
 		if (pvmw.pte) {
+			/* the multigenerational lru exploits spatial locality */
+			if (lru_gen_enabled() && pte_young(*pvmw.pte)) {
+				lru_gen_scan_around(&pvmw);
+				referenced++;
+			}
 			if (ptep_clear_flush_young_notify(vma, address,
 						pvmw.pte)) {
 				/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6239b1acd84f..01c475386379 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1114,6 +1114,10 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		if (!sc->may_unmap && page_mapped(page))
 			goto keep_locked;
 
+		/* in case the page was found accessed by lru_gen_scan_around() */
+		if (lru_gen_enabled() && !ignore_references && PageReferenced(page))
+			goto keep_locked;
+
 		may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
 			(PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));
 
@@ -2233,6 +2237,10 @@ static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
 	unsigned long file;
 	struct lruvec *target_lruvec;
 
+	/* the multigenerational lru doesn't use these counters */
+	if (lru_gen_enabled())
+		return;
+
 	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
 
 	/*
@@ -2522,6 +2530,19 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 	}
 }
 
+#ifdef CONFIG_LRU_GEN
+static void age_lru_gens(struct pglist_data *pgdat, struct scan_control *sc);
+static void shrink_lru_gens(struct lruvec *lruvec, struct scan_control *sc);
+#else
+static void age_lru_gens(struct pglist_data *pgdat, struct scan_control *sc)
+{
+}
+
+static void shrink_lru_gens(struct lruvec *lruvec, struct scan_control *sc)
+{
+}
+#endif
+
 static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 {
 	unsigned long nr[NR_LRU_LISTS];
@@ -2533,6 +2554,11 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 	struct blk_plug plug;
 	bool scan_adjusted;
 
+	if (lru_gen_enabled()) {
+		shrink_lru_gens(lruvec, sc);
+		return;
+	}
+
 	get_scan_count(lruvec, sc, nr);
 
 	/* Record the original scan target for proportional adjustments later */
@@ -2999,6 +3025,10 @@ static void snapshot_refaults(struct mem_cgroup *target_memcg, pg_data_t *pgdat)
 	struct lruvec *target_lruvec;
 	unsigned long refaults;
 
+	/* the multigenerational lru doesn't use these counters */
+	if (lru_gen_enabled())
+		return;
+
 	target_lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
 	refaults = lruvec_page_state(target_lruvec, WORKINGSET_ACTIVATE_ANON);
 	target_lruvec->refaults[0] = refaults;
@@ -3373,6 +3403,11 @@ static void age_active_anon(struct pglist_data *pgdat,
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 
+	if (lru_gen_enabled()) {
+		age_lru_gens(pgdat, sc);
+		return;
+	}
+
 	if (!total_swap_pages)
 		return;
 
@@ -5468,6 +5503,57 @@ static bool walk_mm_list(struct lruvec *lruvec, unsigned long max_seq,
 	return true;
 }
 
+void lru_gen_scan_around(struct page_vma_mapped_walk *pvmw)
+{
+	pte_t *pte;
+	unsigned long start, end;
+	int old_gen, new_gen;
+	unsigned long flags;
+	struct lruvec *lruvec;
+	struct mem_cgroup *memcg;
+	struct pglist_data *pgdat = page_pgdat(pvmw->page);
+
+	lockdep_assert_held(pvmw->ptl);
+
+	start = max(pvmw->address & PMD_MASK, pvmw->vma->vm_start);
+	end = pmd_addr_end(pvmw->address, pvmw->vma->vm_end);
+	pte = pvmw->pte - ((pvmw->address - start) >> PAGE_SHIFT);
+
+	memcg = lock_page_memcg(pvmw->page);
+	lruvec = lock_page_lruvec_irqsave(pvmw->page, &flags);
+
+	new_gen = lru_gen_from_seq(lruvec->evictable.max_seq);
+
+	for (; start != end; pte++, start += PAGE_SIZE) {
+		struct page *page;
+		unsigned long pfn = pte_pfn(*pte);
+
+		if (!pte_present(*pte) || !pte_young(*pte) || is_zero_pfn(pfn))
+			continue;
+
+		if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat))
+			continue;
+
+		page = compound_head(pfn_to_page(pfn));
+		if (page_to_nid(page) != pgdat->node_id)
+			continue;
+
+		if (page_memcg_rcu(page) != memcg)
+			continue;
+		/*
+		 * We may be holding many locks. So try to finish as fast as
+		 * possible and leave the accessed and the dirty bits to page
+		 * table walks.
+		 */
+		old_gen = page_update_gen(page, new_gen);
+		if (old_gen >= 0 && old_gen != new_gen)
+			lru_gen_update_size(page, lruvec, old_gen, new_gen);
+	}
+
+	unlock_page_lruvec_irqrestore(lruvec, flags);
+	unlock_page_memcg(pvmw->page);
+}
+
 /******************************************************************************
  *                          the eviction
  ******************************************************************************/
@@ -5809,6 +5895,156 @@ static bool evict_lru_gen_pages(struct lruvec *lruvec, struct scan_control *sc,
 	return *nr_to_scan > 0 && sc->nr_reclaimed < sc->nr_to_reclaim;
 }
 
+/******************************************************************************
+ *                          page reclaim
+ ******************************************************************************/
+
+static int get_swappiness(struct lruvec *lruvec)
+{
+	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+	int swappiness = mem_cgroup_get_nr_swap_pages(memcg) >= (long)SWAP_CLUSTER_MAX ?
+			 mem_cgroup_swappiness(memcg) : 0;
+
+	VM_BUG_ON(swappiness > 200U);
+
+	return swappiness;
+}
+
+static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc,
+				    int swappiness)
+{
+	int gen, file, zone;
+	long nr_to_scan = 0;
+	struct lrugen *lrugen = &lruvec->evictable;
+	DEFINE_MAX_SEQ();
+	DEFINE_MIN_SEQ();
+
+	lru_add_drain();
+
+	for (file = !swappiness; file < ANON_AND_FILE; file++) {
+		unsigned long seq;
+
+		for (seq = min_seq[file]; seq <= max_seq; seq++) {
+			gen = lru_gen_from_seq(seq);
+
+			for (zone = 0; zone <= sc->reclaim_idx; zone++)
+				nr_to_scan += READ_ONCE(lrugen->sizes[gen][file][zone]);
+		}
+	}
+
+	nr_to_scan = max(nr_to_scan, 0L);
+	nr_to_scan = round_up(nr_to_scan >> sc->priority, SWAP_CLUSTER_MAX);
+
+	if (max_nr_gens(max_seq, min_seq, swappiness) > MIN_NR_GENS)
+		return nr_to_scan;
+
+	/* kswapd uses age_lru_gens() */
+	if (current_is_kswapd())
+		return 0;
+
+	return walk_mm_list(lruvec, max_seq, sc, swappiness, NULL) ? nr_to_scan : 0;
+}
+
+static void shrink_lru_gens(struct lruvec *lruvec, struct scan_control *sc)
+{
+	struct blk_plug plug;
+	unsigned long scanned = 0;
+	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+
+	blk_start_plug(&plug);
+
+	while (true) {
+		long nr_to_scan;
+		int swappiness = sc->may_swap ? get_swappiness(lruvec) : 0;
+
+		nr_to_scan = get_nr_to_scan(lruvec, sc, swappiness) - scanned;
+		if (nr_to_scan < (long)SWAP_CLUSTER_MAX)
+			break;
+
+		scanned += nr_to_scan;
+
+		if (!evict_lru_gen_pages(lruvec, sc, swappiness, &nr_to_scan))
+			break;
+
+		scanned -= nr_to_scan;
+
+		if (mem_cgroup_below_min(memcg) ||
+		    (mem_cgroup_below_low(memcg) && !sc->memcg_low_reclaim))
+			break;
+
+		cond_resched();
+	}
+
+	blk_finish_plug(&plug);
+}
+
+/******************************************************************************
+ *                          the background aging
+ ******************************************************************************/
+
+static int lru_gen_spread = MIN_NR_GENS;
+
+static void try_walk_mm_list(struct lruvec *lruvec, struct scan_control *sc)
+{
+	int gen, file, zone;
+	long old_and_young[2] = {};
+	struct mm_walk_args args = {};
+	int spread = READ_ONCE(lru_gen_spread);
+	int swappiness = get_swappiness(lruvec);
+	struct lrugen *lrugen = &lruvec->evictable;
+	DEFINE_MAX_SEQ();
+	DEFINE_MIN_SEQ();
+
+	lru_add_drain();
+
+	for (file = !swappiness; file < ANON_AND_FILE; file++) {
+		unsigned long seq;
+
+		for (seq = min_seq[file]; seq <= max_seq; seq++) {
+			gen = lru_gen_from_seq(seq);
+
+			for (zone = 0; zone < MAX_NR_ZONES; zone++)
+				old_and_young[seq == max_seq] +=
+					READ_ONCE(lrugen->sizes[gen][file][zone]);
+		}
+	}
+
+	old_and_young[0] = max(old_and_young[0], 0L);
+	old_and_young[1] = max(old_and_young[1], 0L);
+
+	if (old_and_young[0] + old_and_young[1] < SWAP_CLUSTER_MAX)
+		return;
+
+	/* try to spread pages out across spread+1 generations */
+	if (old_and_young[0] >= old_and_young[1] * spread &&
+	    min_nr_gens(max_seq, min_seq, swappiness) > max(spread, MIN_NR_GENS))
+		return;
+
+	walk_mm_list(lruvec, max_seq, sc, swappiness, &args);
+}
+
+static void age_lru_gens(struct pglist_data *pgdat, struct scan_control *sc)
+{
+	struct mem_cgroup *memcg;
+
+	VM_BUG_ON(!current_is_kswapd());
+
+	memcg = mem_cgroup_iter(NULL, NULL, NULL);
+	do {
+		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
+		struct lrugen *lrugen = &lruvec->evictable;
+
+		if (!mem_cgroup_below_min(memcg) &&
+		    (!mem_cgroup_below_low(memcg) || sc->memcg_low_reclaim))
+			try_walk_mm_list(lruvec, sc);
+
+		if (!mem_cgroup_disabled())
+			atomic_add_unless(&lrugen->priority, 1, DEF_PRIORITY);
+
+		cond_resched();
+	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
+}
+
 /******************************************************************************
  *                          state change
  ******************************************************************************/
-- 
2.31.1.295.g9ea45b61b8-goog



* [PATCH v2 14/16] mm: multigenerational lru: user interface
  2021-04-13  6:56 [PATCH v2 00/16] Multigenerational LRU Framework Yu Zhao
                   ` (12 preceding siblings ...)
  2021-04-13  6:56 ` [PATCH v2 13/16] mm: multigenerational lru: page reclaim Yu Zhao
@ 2021-04-13  6:56 ` Yu Zhao
  2021-04-13  6:56 ` [PATCH v2 15/16] mm: multigenerational lru: Kconfig Yu Zhao
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

Add a sysfs file /sys/kernel/mm/lru_gen/enabled so users can enable
and disable the multigenerational lru at runtime.

Add a sysfs file /sys/kernel/mm/lru_gen/spread so users can spread
pages out across multiple generations. More generations make the
background aging more aggressive.

Add a debugfs file /sys/kernel/debug/lru_gen so users can monitor the
multigenerational lru and trigger the aging and the eviction. This
file has the following output:
  memcg  memcg_id  memcg_path
    node  node_id
      min_gen  birth_time  anon_size  file_size
      ...
      max_gen  birth_time  anon_size  file_size

Given a memcg and a node, "min_gen" is the oldest generation (number)
and "max_gen" is the youngest. Birth time is in milliseconds. The
sizes of anon and file types are in pages.

This file takes the following input:
  + memcg_id node_id gen [swappiness]
  - memcg_id node_id gen [swappiness] [nr_to_reclaim]

The first command line accounts referenced pages to generation
"max_gen" and creates the next generation "max_gen"+1. In this case,
"gen" should be equal to "max_gen". A swap file and a non-zero
"swappiness" are required to scan anon type. If swapping is not
desired, set vm.swappiness to 0. The second command line evicts
generations less than or equal to "gen". In this case, "gen" should be
less than "max_gen"-1 as "max_gen" and "max_gen"-1 are active
generations and therefore protected from the eviction. Use
"nr_to_reclaim" to limit the number of pages to be evicted. Multiple
command lines are supported, as is concatenation with the delimiters ","
and ";".

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/vmscan.c | 405 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 405 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 01c475386379..284e32d897cf 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -51,6 +51,8 @@
 #include <linux/psi.h>
 #include <linux/memory.h>
 #include <linux/pagewalk.h>
+#include <linux/ctype.h>
+#include <linux/debugfs.h>
 
 #include <asm/tlbflush.h>
 #include <asm/div64.h>
@@ -6248,6 +6250,403 @@ static int __meminit __maybe_unused lru_gen_online_mem(struct notifier_block *se
 	return NOTIFY_DONE;
 }
 
+/******************************************************************************
+ *                          sysfs interface
+ ******************************************************************************/
+
+static ssize_t show_lru_gen_spread(struct kobject *kobj, struct kobj_attribute *attr,
+				   char *buf)
+{
+	return sprintf(buf, "%d\n", READ_ONCE(lru_gen_spread));
+}
+
+static ssize_t store_lru_gen_spread(struct kobject *kobj, struct kobj_attribute *attr,
+				    const char *buf, size_t len)
+{
+	int spread;
+
+	if (kstrtoint(buf, 10, &spread) || spread >= MAX_NR_GENS)
+		return -EINVAL;
+
+	WRITE_ONCE(lru_gen_spread, spread);
+
+	return len;
+}
+
+static struct kobj_attribute lru_gen_spread_attr = __ATTR(
+	spread, 0644, show_lru_gen_spread, store_lru_gen_spread
+);
+
+static ssize_t show_lru_gen_enabled(struct kobject *kobj, struct kobj_attribute *attr,
+				    char *buf)
+{
+	return snprintf(buf, PAGE_SIZE, "%ld\n", lru_gen_enabled());
+}
+
+static ssize_t store_lru_gen_enabled(struct kobject *kobj, struct kobj_attribute *attr,
+				     const char *buf, size_t len)
+{
+	int enable;
+
+	if (kstrtoint(buf, 10, &enable))
+		return -EINVAL;
+
+	lru_gen_set_state(enable, true, false);
+
+	return len;
+}
+
+static struct kobj_attribute lru_gen_enabled_attr = __ATTR(
+	enabled, 0644, show_lru_gen_enabled, store_lru_gen_enabled
+);
+
+static struct attribute *lru_gen_attrs[] = {
+	&lru_gen_spread_attr.attr,
+	&lru_gen_enabled_attr.attr,
+	NULL
+};
+
+static struct attribute_group lru_gen_attr_group = {
+	.name = "lru_gen",
+	.attrs = lru_gen_attrs,
+};
+
+/******************************************************************************
+ *                          debugfs interface
+ ******************************************************************************/
+
+static void *lru_gen_seq_start(struct seq_file *m, loff_t *pos)
+{
+	struct mem_cgroup *memcg;
+	loff_t nr_to_skip = *pos;
+
+	m->private = kzalloc(PATH_MAX, GFP_KERNEL);
+	if (!m->private)
+		return ERR_PTR(-ENOMEM);
+
+	memcg = mem_cgroup_iter(NULL, NULL, NULL);
+	do {
+		int nid;
+
+		for_each_node_state(nid, N_MEMORY) {
+			if (!nr_to_skip--)
+				return mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+		}
+	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
+
+	return NULL;
+}
+
+static void lru_gen_seq_stop(struct seq_file *m, void *v)
+{
+	if (!IS_ERR_OR_NULL(v))
+		mem_cgroup_iter_break(NULL, lruvec_memcg(v));
+
+	kfree(m->private);
+	m->private = NULL;
+}
+
+static void *lru_gen_seq_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	int nid = lruvec_pgdat(v)->node_id;
+	struct mem_cgroup *memcg = lruvec_memcg(v);
+
+	++*pos;
+
+	nid = next_memory_node(nid);
+	if (nid == MAX_NUMNODES) {
+		memcg = mem_cgroup_iter(NULL, memcg, NULL);
+		if (!memcg)
+			return NULL;
+
+		nid = first_memory_node;
+	}
+
+	return mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+}
+
+static void lru_gen_seq_show_full(struct seq_file *m, struct lruvec *lruvec,
+				  unsigned long max_seq, unsigned long *min_seq,
+				  unsigned long seq)
+{
+	int i;
+	int file, tier;
+	int sid = sid_from_seq_or_gen(seq);
+	struct lrugen *lrugen = &lruvec->evictable;
+	int nid = lruvec_pgdat(lruvec)->node_id;
+	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+	struct lru_gen_mm_list *mm_list = get_mm_list(memcg);
+
+	for (tier = 0; tier < MAX_NR_TIERS; tier++) {
+		seq_printf(m, "            %10d", tier);
+		for (file = 0; file < ANON_AND_FILE; file++) {
+			unsigned long n[3] = {};
+
+			if (seq == max_seq) {
+				n[0] = READ_ONCE(lrugen->avg_refaulted[file][tier]);
+				n[1] = READ_ONCE(lrugen->avg_total[file][tier]);
+
+				seq_printf(m, " %10luR %10luT %10lu ", n[0], n[1], n[2]);
+			} else if (seq == min_seq[file] || NR_STAT_GENS > 1) {
+				n[0] = atomic_long_read(&lrugen->refaulted[sid][file][tier]);
+				n[1] = atomic_long_read(&lrugen->evicted[sid][file][tier]);
+				if (tier)
+					n[2] = READ_ONCE(lrugen->activated[sid][file][tier - 1]);
+
+				seq_printf(m, " %10lur %10lue %10lua", n[0], n[1], n[2]);
+			} else
+				seq_puts(m, "          0           0           0 ");
+		}
+		seq_putc(m, '\n');
+	}
+
+	seq_puts(m, "                      ");
+	for (i = 0; i < NR_MM_STATS; i++) {
+		if (seq == max_seq && NR_STAT_GENS == 1)
+			seq_printf(m, " %10lu%c", READ_ONCE(mm_list->nodes[nid].stats[sid][i]),
+				   toupper(MM_STAT_CODES[i]));
+		else if (seq != max_seq && NR_STAT_GENS > 1)
+			seq_printf(m, " %10lu%c", READ_ONCE(mm_list->nodes[nid].stats[sid][i]),
+				   MM_STAT_CODES[i]);
+		else
+			seq_puts(m, "          0 ");
+	}
+	seq_putc(m, '\n');
+}
+
+static int lru_gen_seq_show(struct seq_file *m, void *v)
+{
+	unsigned long seq;
+	bool full = !debugfs_real_fops(m->file)->write;
+	struct lruvec *lruvec = v;
+	struct lrugen *lrugen = &lruvec->evictable;
+	int nid = lruvec_pgdat(lruvec)->node_id;
+	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
+	DEFINE_MAX_SEQ();
+	DEFINE_MIN_SEQ();
+
+	if (nid == first_memory_node) {
+#ifdef CONFIG_MEMCG
+		if (memcg)
+			cgroup_path(memcg->css.cgroup, m->private, PATH_MAX);
+#endif
+		seq_printf(m, "memcg %5hu %s\n",
+			   mem_cgroup_id(memcg), (char *)m->private);
+	}
+
+	seq_printf(m, " node %5d %10d\n", nid, atomic_read(&lrugen->priority));
+
+	seq = full ? (max_seq < MAX_NR_GENS ? 0 : max_seq - MAX_NR_GENS + 1) :
+		     min(min_seq[0], min_seq[1]);
+
+	for (; seq <= max_seq; seq++) {
+		int gen, file, zone;
+		unsigned int msecs;
+
+		gen = lru_gen_from_seq(seq);
+		msecs = jiffies_to_msecs(jiffies - READ_ONCE(lrugen->timestamps[gen]));
+
+		seq_printf(m, " %10lu %10u", seq, msecs);
+
+		for (file = 0; file < ANON_AND_FILE; file++) {
+			long size = 0;
+
+			if (seq < min_seq[file]) {
+				seq_puts(m, "         -0 ");
+				continue;
+			}
+
+			for (zone = 0; zone < MAX_NR_ZONES; zone++)
+				size += READ_ONCE(lrugen->sizes[gen][file][zone]);
+
+			seq_printf(m, " %10lu ", max(size, 0L));
+		}
+
+		seq_putc(m, '\n');
+
+		if (full)
+			lru_gen_seq_show_full(m, lruvec, max_seq, min_seq, seq);
+	}
+
+	return 0;
+}
+
+static const struct seq_operations lru_gen_seq_ops = {
+	.start = lru_gen_seq_start,
+	.stop = lru_gen_seq_stop,
+	.next = lru_gen_seq_next,
+	.show = lru_gen_seq_show,
+};
+
+static int advance_max_seq(struct lruvec *lruvec, unsigned long seq, int swappiness)
+{
+	struct mm_walk_args args = {};
+	struct scan_control sc = {
+		.target_mem_cgroup = lruvec_memcg(lruvec),
+	};
+	DEFINE_MAX_SEQ();
+
+	if (seq == max_seq)
+		walk_mm_list(lruvec, max_seq, &sc, swappiness, &args);
+
+	return seq > max_seq ? -EINVAL : 0;
+}
+
+static int advance_min_seq(struct lruvec *lruvec, unsigned long seq, int swappiness,
+			   unsigned long nr_to_reclaim)
+{
+	struct blk_plug plug;
+	int err = -EINTR;
+	long nr_to_scan = LONG_MAX;
+	struct scan_control sc = {
+		.nr_to_reclaim = nr_to_reclaim,
+		.target_mem_cgroup = lruvec_memcg(lruvec),
+		.may_writepage = 1,
+		.may_unmap = 1,
+		.may_swap = 1,
+		.reclaim_idx = MAX_NR_ZONES - 1,
+		.gfp_mask = GFP_KERNEL,
+	};
+	DEFINE_MAX_SEQ();
+
+	if (seq >= max_seq - 1)
+		return -EINVAL;
+
+	blk_start_plug(&plug);
+
+	while (!signal_pending(current)) {
+		DEFINE_MIN_SEQ();
+
+		if (seq < min(min_seq[!swappiness], min_seq[swappiness < 200]) ||
+		    !evict_lru_gen_pages(lruvec, &sc, swappiness, &nr_to_scan)) {
+			err = 0;
+			break;
+		}
+
+		cond_resched();
+	}
+
+	blk_finish_plug(&plug);
+
+	return err;
+}
+
+static int advance_seq(char cmd, int memcg_id, int nid, unsigned long seq,
+		       int swappiness, unsigned long nr_to_reclaim)
+{
+	struct lruvec *lruvec;
+	int err = -EINVAL;
+	struct mem_cgroup *memcg = NULL;
+
+	if (!mem_cgroup_disabled()) {
+		rcu_read_lock();
+		memcg = mem_cgroup_from_id(memcg_id);
+#ifdef CONFIG_MEMCG
+		if (memcg && !css_tryget(&memcg->css))
+			memcg = NULL;
+#endif
+		rcu_read_unlock();
+
+		if (!memcg)
+			goto done;
+	}
+	if (memcg_id != mem_cgroup_id(memcg))
+		goto done;
+
+	if (nid < 0 || nid >= MAX_NUMNODES || !node_state(nid, N_MEMORY))
+		goto done;
+
+	lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+
+	if (swappiness == -1)
+		swappiness = get_swappiness(lruvec);
+	else if (swappiness > 200U)
+		goto done;
+
+	switch (cmd) {
+	case '+':
+		err = advance_max_seq(lruvec, seq, swappiness);
+		break;
+	case '-':
+		err = advance_min_seq(lruvec, seq, swappiness, nr_to_reclaim);
+		break;
+	}
+done:
+	mem_cgroup_put(memcg);
+
+	return err;
+}
+
+static ssize_t lru_gen_seq_write(struct file *file, const char __user *src,
+				 size_t len, loff_t *pos)
+{
+	void *buf;
+	char *cur, *next;
+	int err = 0;
+
+	buf = kvmalloc(len + 1, GFP_USER);
+	if (!buf)
+		return -ENOMEM;
+
+	if (copy_from_user(buf, src, len)) {
+		kvfree(buf);
+		return -EFAULT;
+	}
+
+	next = buf;
+	next[len] = '\0';
+
+	while ((cur = strsep(&next, ",;\n"))) {
+		int n;
+		int end;
+		char cmd;
+		int memcg_id;
+		int nid;
+		unsigned long seq;
+		int swappiness = -1;
+		unsigned long nr_to_reclaim = -1;
+
+		cur = skip_spaces(cur);
+		if (!*cur)
+			continue;
+
+		n = sscanf(cur, "%c %u %u %lu %n %u %n %lu %n", &cmd, &memcg_id, &nid,
+			   &seq, &end, &swappiness, &end, &nr_to_reclaim, &end);
+		if (n < 4 || cur[end]) {
+			err = -EINVAL;
+			break;
+		}
+
+		err = advance_seq(cmd, memcg_id, nid, seq, swappiness, nr_to_reclaim);
+		if (err)
+			break;
+	}
+
+	kvfree(buf);
+
+	return err ? : len;
+}
+
+static int lru_gen_seq_open(struct inode *inode, struct file *file)
+{
+	return seq_open(file, &lru_gen_seq_ops);
+}
+
+static const struct file_operations lru_gen_rw_fops = {
+	.open = lru_gen_seq_open,
+	.read = seq_read,
+	.write = lru_gen_seq_write,
+	.llseek = seq_lseek,
+	.release = seq_release,
+};
+
+static const struct file_operations lru_gen_ro_fops = {
+	.open = lru_gen_seq_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = seq_release,
+};
+
 /******************************************************************************
  *                          initialization
  ******************************************************************************/
@@ -6291,6 +6690,12 @@ static int __init init_lru_gen(void)
 	if (hotplug_memory_notifier(lru_gen_online_mem, 0))
 		pr_err("lru_gen: failed to subscribe hotplug notifications\n");
 
+	if (sysfs_create_group(mm_kobj, &lru_gen_attr_group))
+		pr_err("lru_gen: failed to create sysfs group\n");
+
+	debugfs_create_file("lru_gen", 0644, NULL, NULL, &lru_gen_rw_fops);
+	debugfs_create_file("lru_gen_full", 0444, NULL, NULL, &lru_gen_ro_fops);
+
 	return 0;
 };
 /*
-- 
2.31.1.295.g9ea45b61b8-goog



* [PATCH v2 15/16] mm: multigenerational lru: Kconfig
  2021-04-13  6:56 [PATCH v2 00/16] Multigenerational LRU Framework Yu Zhao
                   ` (13 preceding siblings ...)
  2021-04-13  6:56 ` [PATCH v2 14/16] mm: multigenerational lru: user interface Yu Zhao
@ 2021-04-13  6:56 ` Yu Zhao
  2021-04-13  6:56 ` [PATCH v2 16/16] mm: multigenerational lru: documentation Yu Zhao
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

Add configuration options for the multigenerational lru.
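
For reference, enabling the feature with the defaults added here
amounts to a configuration fragment like:

  CONFIG_LRU_GEN=y
  CONFIG_LRU_GEN_ENABLED=y
  CONFIG_NR_LRU_GENS=7
  CONFIG_TIERS_PER_GEN=4

CONFIG_LRU_GEN_ENABLED=y is optional; without it, the feature stays off
until /sys/kernel/mm/lru_gen/enabled is set at runtime.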

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/Kconfig | 55 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index 24c045b24b95..0be1c6c90cc0 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -872,4 +872,59 @@ config MAPPING_DIRTY_HELPERS
 config KMAP_LOCAL
 	bool
 
+config LRU_GEN
+	bool "Multigenerational LRU"
+	depends on MMU
+	help
+	  A high performance LRU implementation for heavily overcommitted
+	  workloads that are not IO bound. See Documentation/vm/multigen_lru.rst
+	  for details.
+
+	  Warning: do not enable this option unless you plan to use it because
+	  it introduces a small per-process, per-memcg and per-node memory
+	  overhead.
+
+config NR_LRU_GENS
+	int "Max number of generations"
+	depends on LRU_GEN
+	range 4 31
+	default 7
+	help
+	  This will use order_base_2(N+1) spare bits from page flags.
+
+	  Warning: do not use numbers larger than necessary because each
+	  generation introduces a small per-node and per-memcg memory overhead.
+
+config TIERS_PER_GEN
+	int "Number of tiers per generation"
+	depends on LRU_GEN
+	range 2 5
+	default 4
+	help
+	  This will use N-2 spare bits from page flags.
+
+	  Higher values generally offer better protection to active pages under
+	  heavy buffered I/O workloads.
+
+config LRU_GEN_ENABLED
+	bool "Turn on by default"
+	depends on LRU_GEN
+	help
+	  The default value of /sys/kernel/mm/lru_gen/enabled is 0. This option
+	  changes it to 1.
+
+	  Warning: the default value chosen here becomes the static key fast
+	  path. See Documentation/static-keys.txt for details.
+
+config LRU_GEN_STATS
+	bool "Full stats for debugging"
+	depends on LRU_GEN
+	help
+	  This option keeps full stats for each generation, which can be read
+	  from /sys/kernel/debug/lru_gen_full.
+
+	  Warning: do not enable this option unless you plan to use it because
+	  it introduces an additional small per-process, per-memcg and
+	  per-node memory overhead.
+
 endmenu
-- 
2.31.1.295.g9ea45b61b8-goog



* [PATCH v2 16/16] mm: multigenerational lru: documentation
  2021-04-13  6:56 [PATCH v2 00/16] Multigenerational LRU Framework Yu Zhao
                   ` (14 preceding siblings ...)
  2021-04-13  6:56 ` [PATCH v2 15/16] mm: multigenerational lru: Kconfig Yu Zhao
@ 2021-04-13  6:56 ` Yu Zhao
  2021-04-13  7:51 ` [PATCH v2 00/16] Multigenerational LRU Framework SeongJae Park
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-13  6:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim, Yu Zhao

Add Documentation/vm/multigen_lru.rst.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 Documentation/vm/index.rst        |   1 +
 Documentation/vm/multigen_lru.rst | 192 ++++++++++++++++++++++++++++++
 2 files changed, 193 insertions(+)
 create mode 100644 Documentation/vm/multigen_lru.rst

diff --git a/Documentation/vm/index.rst b/Documentation/vm/index.rst
index eff5fbd492d0..c353b3f55924 100644
--- a/Documentation/vm/index.rst
+++ b/Documentation/vm/index.rst
@@ -17,6 +17,7 @@ various features of the Linux memory management
 
    swap_numa
    zswap
+   multigen_lru
 
 Kernel developers MM documentation
 ==================================
diff --git a/Documentation/vm/multigen_lru.rst b/Documentation/vm/multigen_lru.rst
new file mode 100644
index 000000000000..cf772aeca317
--- /dev/null
+++ b/Documentation/vm/multigen_lru.rst
@@ -0,0 +1,192 @@
+=====================
+Multigenerational LRU
+=====================
+
+Quick Start
+===========
+Build Options
+-------------
+:Required: Set ``CONFIG_LRU_GEN=y``.
+
+:Optional: Change ``CONFIG_NR_LRU_GENS`` to a number ``X`` to support
+ a maximum of ``X`` generations.
+
+:Optional: Change ``CONFIG_TIERS_PER_GEN`` to a number ``Y`` to support
+ a maximum of ``Y`` tiers per generation.
+
+:Optional: Set ``CONFIG_LRU_GEN_ENABLED=y`` to turn the feature on by
+ default.
+
+Runtime Options
+---------------
+:Required: Write ``1`` to ``/sys/kernel/mm/lru_gen/enabled`` if the
+ feature was not turned on by default.
+
+:Optional: Change ``/sys/kernel/mm/lru_gen/spread`` to a number ``N``
+ to spread pages out across ``N+1`` generations. ``N`` should be less
+ than ``X``. Larger values make the background aging more aggressive.
+
+:Optional: Read ``/sys/kernel/debug/lru_gen`` to verify the feature.
+ This file has the following output:
+
+::
+
+  memcg  memcg_id  memcg_path
+    node  node_id
+      min_gen  birth_time  anon_size  file_size
+      ...
+      max_gen  birth_time  anon_size  file_size
+
+Given a memcg and a node, ``min_gen`` is the oldest generation
+(number) and ``max_gen`` is the youngest. Birth time is in
+milliseconds. The sizes of anon and file types are in pages.
+
+Recipes
+-------
+:Android on ARMv8.1+: ``X=4``, ``N=0``
+
+:Android on pre-ARMv8.1 CPUs: Not recommended due to the lack of
+ ``ARM64_HW_AFDBM``
+
+:Laptops running Chrome on x86_64: ``X=7``, ``N=2``
+
+:Working set estimation: Write ``+ memcg_id node_id gen [swappiness]``
+ to ``/sys/kernel/debug/lru_gen`` to account referenced pages to
+ generation ``max_gen`` and create the next generation ``max_gen+1``.
+ ``gen`` should be equal to ``max_gen``. A swap file and a non-zero
+ ``swappiness`` are required to scan anon type. If swapping is not
+ desired, set ``vm.swappiness`` to ``0``.
+
+:Proactive reclaim: Write ``- memcg_id node_id gen [swappiness]
+ [nr_to_reclaim]`` to ``/sys/kernel/debug/lru_gen`` to evict
+ generations less than or equal to ``gen``. ``gen`` should be less
+ than ``max_gen-1`` as ``max_gen`` and ``max_gen-1`` are active
+ generations and therefore protected from the eviction. Use
+ ``nr_to_reclaim`` to limit the number of pages to be evicted.
+ Multiple command lines are supported, as is concatenation with the
+ delimiters ``,`` and ``;``.
+
+Framework
+=========
+For each ``lruvec``, evictable pages are divided into multiple
+generations. The youngest generation number is stored in ``max_seq``
+for both anon and file types as they are aged on an equal footing. The
+oldest generation numbers are stored in ``min_seq[2]`` separately for
+anon and file types as clean file pages can be evicted regardless of
+swap and write-back constraints. Generation numbers are truncated into
+``order_base_2(CONFIG_NR_LRU_GENS+1)`` bits in order to fit into
+``page->flags``. The sliding window technique is used to prevent
+truncated generation numbers from overlapping. Each truncated
+generation number is an index to an array of per-type and per-zone
+lists. Evictable pages are added to the per-zone lists indexed by
+``max_seq`` or ``min_seq[2]`` (modulo ``CONFIG_NR_LRU_GENS``),
+depending on whether they are being faulted in.
+
+Each generation is then divided into multiple tiers. Tiers represent
+levels of usage from file descriptors only. Pages accessed N times via
+file descriptors belong to tier order_base_2(N). In contrast to moving
+across generations which requires the lru lock, moving across tiers
+only involves an atomic operation on ``page->flags`` and therefore has
+a negligible cost.
+
+The workflow comprises two conceptually independent functions: the
+aging and the eviction.
+
+Aging
+-----
+The aging produces young generations. Given an ``lruvec``, the aging
+scans page tables for referenced pages of this ``lruvec``. Upon
+finding one, the aging updates its generation number to ``max_seq``.
+After each round of scan, the aging increments ``max_seq``.
+
+The aging maintains either a system-wide ``mm_struct`` list or
+per-memcg ``mm_struct`` lists, and it only scans page tables of
+processes that have been scheduled since the last scan. Since scans
+are differential with respect to referenced pages, the cost is roughly
+proportional to their number.
+
+The aging is due when both ``min_seq[2]`` values reach ``max_seq-1``,
+assuming both anon and file types are reclaimable.
+
+Eviction
+--------
+The eviction consumes old generations. Given an ``lruvec``, the
+eviction scans the pages on the per-zone lists indexed by either of
+``min_seq[2]``. It first tries to select a type based on the values of
+``min_seq[2]``. When anon and file types are both available from the
+same generation, it selects the one that has a lower refault rate.
+
+During a scan, the eviction sorts pages according to their generation
+numbers, if the aging has found them referenced.  It also moves pages
+from the tiers that have higher refault rates than tier 0 to the next
+generation.
+
+When it finds all the per-zone lists of a selected type are empty, the
+eviction increments ``min_seq[2]`` indexed by this selected type.
+
+Rationale
+=========
+Limitations of Current Implementation
+-------------------------------------
+Notion of Active/Inactive
+~~~~~~~~~~~~~~~~~~~~~~~~~
+For servers equipped with hundreds of gigabytes of memory, the
+granularity of the active/inactive is too coarse to be useful for job
+scheduling. False active/inactive rates are relatively high, and thus
+the assumed savings may not materialize.
+
+For phones and laptops, executable pages are frequently evicted even
+though there are many anon pages that have been used less recently.
+Major faults on executable pages cause ``janks`` (slow UI renderings)
+and negatively impact user experience.
+
+For ``lruvec``\s from different memcgs or nodes, comparisons are
+impossible due to the lack of a common frame of reference.
+
+Incremental Scans via ``rmap``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Each incremental scan picks up at where the last scan left off and
+stops after it has found a handful of unreferenced pages. For
+workloads using a large amount of anon memory, incremental scans lose
+the advantage under sustained memory pressure due to high ratios of
+the number of scanned pages to the number of reclaimed pages. On top
+of that, the ``rmap`` has poor memory locality due to its complex data
+structures. The combined effects typically result in a high amount of
+CPU usage in the reclaim path.
+
+Benefits of Multigenerational LRU
+---------------------------------
+Notion of Generation Numbers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The notion of generation numbers introduces a quantitative approach to
+memory overcommit. A larger number of pages can be spread out across
+configurable generations, and thus they have relatively low false
+active/inactive rates. Each generation includes all pages that have
+been referenced since the last generation.
+
+Given an ``lruvec``, scans and the selections between anon and file
+types are all based on generation numbers, which are simple and yet
+effective. For different ``lruvec``\s, comparisons are still possible
+based on birth times of generations.
+
+Differential Scans via Page Tables
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Each differential scan discovers all pages that have been referenced
+since the last scan. Specifically, it walks the ``mm_struct`` list
+associated with an ``lruvec`` to scan page tables of processes that
+have been scheduled since the last scan. The cost of each differential
+scan is roughly proportional to the number of referenced pages it
+discovers. Unless address spaces are extremely sparse, page tables
+usually have better memory locality than the ``rmap``. The end result
+is generally a significant reduction in CPU usage, for workloads
+using a large amount of anon memory.
+
+To-do List
+==========
+KVM Optimization
+----------------
+Support shadow page table scanning.
+
+NUMA Optimization
+-----------------
+Support NUMA policies and per-node RSS counters.
-- 
2.31.1.295.g9ea45b61b8-goog



* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-13  6:56 [PATCH v2 00/16] Multigenerational LRU Framework Yu Zhao
                   ` (15 preceding siblings ...)
  2021-04-13  6:56 ` [PATCH v2 16/16] mm: multigenerational lru: documentation Yu Zhao
@ 2021-04-13  7:51 ` SeongJae Park
  2021-04-13 16:13   ` Jens Axboe
  2021-04-14 17:43 ` Johannes Weiner
  2021-04-29 23:46 ` Konstantin Kharlamov
  18 siblings, 1 reply; 57+ messages in thread
From: SeongJae Park @ 2021-04-13  7:51 UTC (permalink / raw)
  To: Yu Zhao
  Cc: linux-mm, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim

From: SeongJae Park <sjpark@amazon.de>

Hello,


Very interesting work, thank you for sharing this :)

On Tue, 13 Apr 2021 00:56:17 -0600 Yu Zhao <yuzhao@google.com> wrote:

> What's new in v2
> ================
> Special thanks to Jens Axboe for reporting a regression in buffered
> I/O and helping test the fix.

Is the discussion open?  If so, could you please give me a link?

> 
> This version includes the support of tiers, which represent levels of
> usage from file descriptors only. Pages accessed N times via file
> descriptors belong to tier order_base_2(N). Each generation contains
> at most MAX_NR_TIERS tiers, and they require additional MAX_NR_TIERS-2
> bits in page->flags. In contrast to moving across generations which
> requires the lru lock, moving across tiers only involves an atomic
> operation on page->flags and therefore has a negligible cost. A
> feedback loop modeled after the well-known PID controller monitors the
> refault rates across all tiers and decides when to activate pages from
> which tiers, on the reclaim path.
> 
> This feedback model has a few advantages over the current feedforward
> model:
> 1) It has a negligible overhead in the buffered I/O access path
>    because activations are done in the reclaim path.
> 2) It takes mapped pages into account and avoids overprotecting pages
>    accessed multiple times via file descriptors.
> 3) More tiers offer better protection to pages accessed more than
>    twice when buffered-I/O-intensive workloads are under memory
>    pressure.
> 
> The fio/io_uring benchmark shows 14% improvement in IOPS when randomly
> accessing Samsung PM981a in the buffered I/O mode.

Improvement under memory pressure, right?  How much pressure?

[...]
> 
> Differential scans via page tables
> ----------------------------------
> Each differential scan discovers all pages that have been referenced
> since the last scan. Specifically, it walks the mm_struct list
> associated with an lruvec to scan page tables of processes that have
> been scheduled since the last scan.

Does this mean it scans only the virtual address spaces of processes, and
therefore pages in the page cache that are not mmap()-ed will not be scanned?

> The cost of each differential scan
> is roughly proportional to the number of referenced pages it
> discovers. Unless address spaces are extremely sparse, page tables
> usually have better memory locality than the rmap. The end result is
> generally a significant reduction in CPU usage, for workloads using a
> large amount of anon memory.

When and how frequently does it scan?


Thanks,
SeongJae Park

[...]


* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-13  7:51 ` [PATCH v2 00/16] Multigenerational LRU Framework SeongJae Park
@ 2021-04-13 16:13   ` Jens Axboe
  2021-04-13 16:42     ` SeongJae Park
  2021-04-13 23:14     ` Dave Chinner
  0 siblings, 2 replies; 57+ messages in thread
From: Jens Axboe @ 2021-04-13 16:13 UTC (permalink / raw)
  To: SeongJae Park, Yu Zhao
  Cc: linux-mm, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	page-reclaim

On 4/13/21 1:51 AM, SeongJae Park wrote:
> From: SeongJae Park <sjpark@amazon.de>
> 
> Hello,
> 
> 
> Very interesting work, thank you for sharing this :)
> 
> On Tue, 13 Apr 2021 00:56:17 -0600 Yu Zhao <yuzhao@google.com> wrote:
> 
>> What's new in v2
>> ================
>> Special thanks to Jens Axboe for reporting a regression in buffered
>> I/O and helping test the fix.
> 
> Is the discussion open?  If so, could you please give me a link?

I wasn't on the initial post (or any of the lists it was posted to), but
it's on the google page reclaim list. Not sure if that is public or not.

tldr is that I was pretty excited about this work, as buffered IO tends
to suck (a lot) for high throughput applications. My test case was
pretty simple:

Randomly read a fast device, using 4k buffered IO, and watch what
happens when the page cache gets filled up. For this particular test,
we'll initially be doing 2.1GB/sec of IO, and then drop to 1.5-1.6GB/sec
with kswapd using a lot of CPU trying to keep up. That's mainline
behavior.

The initial posting of this patchset did no better, in fact it did a bit
worse. Performance dropped to the same levels and kswapd was using as
much CPU as before, but on top of that we also got excessive swapping.
Not at a high rate, but 5-10MB/sec continually.

I had some back and forths with Yu Zhao and tested a few new revisions,
and the current series does much better in this regard. Performance
still dips a bit when page cache fills, but not nearly as much, and
kswapd is using less CPU than before.

Hope that helps,
-- 
Jens Axboe



* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-13 16:13   ` Jens Axboe
@ 2021-04-13 16:42     ` SeongJae Park
  2021-04-13 23:14     ` Dave Chinner
  1 sibling, 0 replies; 57+ messages in thread
From: SeongJae Park @ 2021-04-13 16:42 UTC (permalink / raw)
  To: Jens Axboe
  Cc: SeongJae Park, Yu Zhao, linux-mm, Andi Kleen, Andrew Morton,
	Benjamin Manes, Dave Chinner, Dave Hansen, Hillf Danton,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim

From: SeongJae Park <sjpark@amazon.de>

On Tue, 13 Apr 2021 10:13:24 -0600 Jens Axboe <axboe@kernel.dk> wrote:

> On 4/13/21 1:51 AM, SeongJae Park wrote:
> > From: SeongJae Park <sjpark@amazon.de>
> > 
> > Hello,
> > 
> > 
> > Very interesting work, thank you for sharing this :)
> > 
> > On Tue, 13 Apr 2021 00:56:17 -0600 Yu Zhao <yuzhao@google.com> wrote:
> > 
> >> What's new in v2
> >> ================
> >> Special thanks to Jens Axboe for reporting a regression in buffered
> >> I/O and helping test the fix.
> > 
> > Is the discussion open?  If so, could you please give me a link?
> 
> I wasn't on the initial post (or any of the lists it was posted to), but
> it's on the google page reclaim list. Not sure if that is public or not.
> 
> tldr is that I was pretty excited about this work, as buffered IO tends
> to suck (a lot) for high throughput applications. My test case was
> pretty simple:
> 
> Randomly read a fast device, using 4k buffered IO, and watch what
> happens when the page cache gets filled up. For this particular test,
> we'll initially be doing 2.1GB/sec of IO, and then drop to 1.5-1.6GB/sec
> with kswapd using a lot of CPU trying to keep up. That's mainline
> behavior.
> 
> The initial posting of this patchset did no better, in fact it did a bit
> worse. Performance dropped to the same levels and kswapd was using as
> much CPU as before, but on top of that we also got excessive swapping.
> Not at a high rate, but 5-10MB/sec continually.
> 
> I had some back and forths with Yu Zhao and tested a few new revisions,
> and the current series does much better in this regard. Performance
> still dips a bit when page cache fills, but not nearly as much, and
> kswapd is using less CPU than before.
> 
> Hope that helps,

Appreciate this kind and detailed explanation, Jens!

So, my understanding is that v2 of this patchset improved the performance by
using frequency (tier) in addition to recency (generation number) for buffered
I/O pages.  That makes sense to me.  If I'm misunderstanding, please let me
know.


Thanks,
SeongJae Park

> -- 
> Jens Axboe
> 


* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-13 16:13   ` Jens Axboe
  2021-04-13 16:42     ` SeongJae Park
@ 2021-04-13 23:14     ` Dave Chinner
  2021-04-14  2:29       ` Rik van Riel
                         ` (2 more replies)
  1 sibling, 3 replies; 57+ messages in thread
From: Dave Chinner @ 2021-04-13 23:14 UTC (permalink / raw)
  To: Jens Axboe
  Cc: SeongJae Park, Yu Zhao, linux-mm, Andi Kleen, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	page-reclaim

On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
> On 4/13/21 1:51 AM, SeongJae Park wrote:
> > From: SeongJae Park <sjpark@amazon.de>
> > 
> > Hello,
> > 
> > 
> > Very interesting work, thank you for sharing this :)
> > 
> > On Tue, 13 Apr 2021 00:56:17 -0600 Yu Zhao <yuzhao@google.com> wrote:
> > 
> >> What's new in v2
> >> ================
> >> Special thanks to Jens Axboe for reporting a regression in buffered
> >> I/O and helping test the fix.
> > 
> > Is the discussion open?  If so, could you please give me a link?
> 
> I wasn't on the initial post (or any of the lists it was posted to), but
> it's on the google page reclaim list. Not sure if that is public or not.
> 
> tldr is that I was pretty excited about this work, as buffered IO tends
> to suck (a lot) for high throughput applications. My test case was
> pretty simple:
> 
> Randomly read a fast device, using 4k buffered IO, and watch what
> happens when the page cache gets filled up. For this particular test,
> we'll initially be doing 2.1GB/sec of IO, and then drop to 1.5-1.6GB/sec
> with kswapd using a lot of CPU trying to keep up. That's mainline
> behavior.

I see this exact same behaviour here, too, but I RCA'd it to
contention between the inode and memory reclaim for the mapping
structure that indexes the page cache. Basically the mapping tree
lock is the contention point here - you can either be adding pages
to the mapping during IO, or memory reclaim can be removing pages
from the mapping, but we can't do both at once.

So we end up with kswapd spinning on the mapping tree lock like so
when doing 1.6GB/s in 4kB buffered IO:

-   20.06%     0.00%  [kernel]               [k] kswapd
   - 20.06% kswapd
      - 20.05% balance_pgdat
         - 20.03% shrink_node
            - 19.92% shrink_lruvec
               - 19.91% shrink_inactive_list
                  - 19.22% shrink_page_list
                     - 17.51% __remove_mapping
                        - 14.16% _raw_spin_lock_irqsave
                           - 14.14% do_raw_spin_lock
                                __pv_queued_spin_lock_slowpath
                        - 1.56% __delete_from_page_cache
                             0.63% xas_store
                        - 0.78% _raw_spin_unlock_irqrestore
                           - 0.69% do_raw_spin_unlock
                                __raw_callee_save___pv_queued_spin_unlock
                     - 0.82% free_unref_page_list
                        - 0.72% free_unref_page_commit
                             0.57% free_pcppages_bulk

And these are the processes consuming CPU:

   5171 root      20   0 1442496   5696   1284 R  99.7   0.0   1:07.78 fio
   1150 root      20   0       0      0      0 S  47.4   0.0   0:22.70 kswapd1
   1146 root      20   0       0      0      0 S  44.0   0.0   0:21.85 kswapd0
   1152 root      20   0       0      0      0 S  39.7   0.0   0:18.28 kswapd3
   1151 root      20   0       0      0      0 S  15.2   0.0   0:12.14 kswapd2

i.e. when memory reclaim kicks in, the read process has 20% less
time with exclusive access to the mapping tree to insert new pages.
Hence buffered read performance goes down quite substantially when
memory reclaim kicks in, and this really has nothing to do with the
memory reclaim LRU scanning algorithm.

I can actually get this machine to pin those 5 processes to 100% CPU
under certain conditions. Each process is spinning all that extra
time on the mapping tree lock, and performance degrades further.
Changing the LRU reclaim algorithm won't fix this - the workload is
solidly bound by the exclusive nature of the mapping tree lock and
the number of tasks trying to obtain it exclusively...

> The initial posting of this patchset did no better, in fact it did a bit
> worse. Performance dropped to the same levels and kswapd was using as
> much CPU as before, but on top of that we also got excessive swapping.
> Not at a high rate, but 5-10MB/sec continually.
>
> I had some back and forths with Yu Zhao and tested a few new revisions,
> and the current series does much better in this regard. Performance
> still dips a bit when page cache fills, but not nearly as much, and
> kswapd is using less CPU than before.

Profiles would be interesting, because it sounds to me like reclaim
*might* be batching page cache removal better (e.g. fewer, larger
batches) and so spending less time contending on the mapping tree
lock...

IOWs, I suspect this result might actually be a result of less lock
contention due to a change in batch processing characteristics of
the new algorithm rather than it being a "better" algorithm...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-13 23:14     ` Dave Chinner
@ 2021-04-14  2:29       ` Rik van Riel
       [not found]         ` <CAOUHufafMcaG8sOS=1YMy2P_6p0R1FzP16bCwpUau7g1-PybBQ@mail.gmail.com>
  2021-04-14  3:40       ` Yu Zhao
  2021-04-14 14:43       ` Jens Axboe
  2 siblings, 1 reply; 57+ messages in thread
From: Rik van Riel @ 2021-04-14  2:29 UTC (permalink / raw)
  To: Dave Chinner, Jens Axboe
  Cc: SeongJae Park, Yu Zhao, linux-mm, Andi Kleen, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	page-reclaim

On Wed, 2021-04-14 at 09:14 +1000, Dave Chinner wrote:
> On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
> 
> > The initial posting of this patchset did no better, in fact it did
> > a bit
> > worse. Performance dropped to the same levels and kswapd was using
> > as
> > much CPU as before, but on top of that we also got excessive
> > swapping.
> > Not at a high rate, but 5-10MB/sec continually.
> > 
> > I had some back and forths with Yu Zhao and tested a few new
> > revisions,
> > and the current series does much better in this regard. Performance
> > still dips a bit when page cache fills, but not nearly as much, and
> > kswapd is using less CPU than before.
> 
> Profiles would be interesting, because it sounds to me like reclaim
> *might* be batching page cache removal better (e.g. fewer, larger
> batches) and so spending less time contending on the mapping tree
> lock...
> 
> IOWs, I suspect this result might actually be a result of less lock
> contention due to a change in batch processing characteristics of
> the new algorithm rather than it being a "better" algorithm...

That seems quite likely to me, given the issues we have
had with virtual scan reclaim algorithms in the past.

SeongJae, what is this algorithm supposed to do when faced
with situations like this:
1) Running on a system with 8 NUMA nodes, and memory
   pressure in one of those nodes.
2) Running PostgreSQL or Oracle, with hundreds of
   processes mapping the same (very large) shared
   memory segment.

How do you keep your algorithm from falling into the worst
case virtual scanning scenarios that were crippling the
2.4 kernel 15+ years ago on systems with just a few GB of
memory?

-- 
All Rights Reversed.



* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-13 23:14     ` Dave Chinner
  2021-04-14  2:29       ` Rik van Riel
@ 2021-04-14  3:40       ` Yu Zhao
  2021-04-14  4:50         ` Dave Chinner
  2021-04-14 14:43       ` Jens Axboe
  2 siblings, 1 reply; 57+ messages in thread
From: Yu Zhao @ 2021-04-14  3:40 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Jens Axboe, SeongJae Park, Linux-MM, Andi Kleen, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	Kernel Page Reclaim v2

On Tue, Apr 13, 2021 at 5:14 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
> > On 4/13/21 1:51 AM, SeongJae Park wrote:
> > > From: SeongJae Park <sjpark@amazon.de>
> > >
> > > Hello,
> > >
> > >
> > > Very interesting work, thank you for sharing this :)
> > >
> > > On Tue, 13 Apr 2021 00:56:17 -0600 Yu Zhao <yuzhao@google.com> wrote:
> > >
> > >> What's new in v2
> > >> ================
> > >> Special thanks to Jens Axboe for reporting a regression in buffered
> > >> I/O and helping test the fix.
> > >
> > > Is the discussion open?  If so, could you please give me a link?
> >
> > I wasn't on the initial post (or any of the lists it was posted to), but
> > it's on the google page reclaim list. Not sure if that is public or not.
> >
> > tldr is that I was pretty excited about this work, as buffered IO tends
> > to suck (a lot) for high throughput applications. My test case was
> > pretty simple:
> >
> > Randomly read a fast device, using 4k buffered IO, and watch what
> > happens when the page cache gets filled up. For this particular test,
> > we'll initially be doing 2.1GB/sec of IO, and then drop to 1.5-1.6GB/sec
> > with kswapd using a lot of CPU trying to keep up. That's mainline
> > behavior.
>
> I see this exact same behaviour here, too, but I RCA'd it to
> contention between the inode and memory reclaim for the mapping
> structure that indexes the page cache. Basically the mapping tree
> lock is the contention point here - you can either be adding pages
> to the mapping during IO, or memory reclaim can be removing pages
> from the mapping, but we can't do both at once.
>
> So we end up with kswapd spinning on the mapping tree lock like so
> when doing 1.6GB/s in 4kB buffered IO:
>
> -   20.06%     0.00%  [kernel]               [k] kswapd                                                                                                        ▒
>    - 20.06% kswapd                                                                                                                                             ▒
>       - 20.05% balance_pgdat                                                                                                                                   ▒
>          - 20.03% shrink_node                                                                                                                                  ▒
>             - 19.92% shrink_lruvec                                                                                                                             ▒
>                - 19.91% shrink_inactive_list                                                                                                                   ▒
>                   - 19.22% shrink_page_list                                                                                                                    ▒
>                      - 17.51% __remove_mapping                                                                                                                 ▒
>                         - 14.16% _raw_spin_lock_irqsave                                                                                                        ▒
>                            - 14.14% do_raw_spin_lock                                                                                                           ▒
>                                 __pv_queued_spin_lock_slowpath                                                                                                 ▒
>                         - 1.56% __delete_from_page_cache                                                                                                       ▒
>                              0.63% xas_store                                                                                                                   ▒
>                         - 0.78% _raw_spin_unlock_irqrestore                                                                                                    ▒
>                            - 0.69% do_raw_spin_unlock                                                                                                          ▒
>                                 __raw_callee_save___pv_queued_spin_unlock                                                                                      ▒
>                      - 0.82% free_unref_page_list                                                                                                              ▒
>                         - 0.72% free_unref_page_commit                                                                                                         ▒
>                              0.57% free_pcppages_bulk                                                                                                          ▒
>
> And these are the processes consuming CPU:
>
>    5171 root      20   0 1442496   5696   1284 R  99.7   0.0   1:07.78 fio
>    1150 root      20   0       0      0      0 S  47.4   0.0   0:22.70 kswapd1
>    1146 root      20   0       0      0      0 S  44.0   0.0   0:21.85 kswapd0
>    1152 root      20   0       0      0      0 S  39.7   0.0   0:18.28 kswapd3
>    1151 root      20   0       0      0      0 S  15.2   0.0   0:12.14 kswapd2
>
> i.e. when memory reclaim kicks in, the read process has 20% less
> time with exclusive access to the mapping tree to insert new pages.
> Hence buffered read performance goes down quite substantially when
> memory reclaim kicks in, and this really has nothing to do with the
> memory reclaim LRU scanning algorithm.
>
> I can actually get this machine to pin those 5 processes to 100% CPU
> under certain conditions. Each process is spinning all that extra
> time on the mapping tree lock, and performance degrades further.
> Changing the LRU reclaim algorithm won't fix this - the workload is
> solidly bound by the exclusive nature of the mapping tree lock and
> the number of tasks trying to obtain it exclusively...
>
> > The initial posting of this patchset did no better, in fact it did a bit
> > worse. Performance dropped to the same levels and kswapd was using as
> > much CPU as before, but on top of that we also got excessive swapping.
> > Not at a high rate, but 5-10MB/sec continually.
> >
> > I had some back and forths with Yu Zhao and tested a few new revisions,
> > and the current series does much better in this regard. Performance
> > still dips a bit when page cache fills, but not nearly as much, and
> > kswapd is using less CPU than before.
>
> Profiles would be interesting, because it sounds to me like reclaim
> *might* be batching page cache removal better (e.g. fewer, larger
> batches) and so spending less time contending on the mapping tree
> lock...
>
> IOWs, I suspect this result might actually be a result of less lock
> contention due to a change in batch processing characteristics of
> the new algorithm rather than it being a "better" algorithm...

I appreciate the profile. But there is no batching in
__remove_mapping() -- it locks the mapping for each page, and
therefore the lock contention penalizes the mainline and this patchset
equally. It looks worse on your system because the four kswapd threads
from different nodes were working on the same file.

And kswapd is only one of two paths that could affect the performance.
The kernel context of the test process is where the improvement mainly
comes from.

I also suspect you were testing a file much larger than your memory
size. If so, sorry to tell you that a file only a few times larger,
e.g. twice, would be worse.

Here is my take:

Claim
-----
This patchset is a "better" algorithm. (Technically it's not an
algorithm, it's a feedback loop.)

Theoretical basis
-----------------
An open-loop control (the mainline) can only be better if the margin
of error in its prediction of future events is less than that from
the trial-and-error of a closed-loop control (this patchset). For
simple machines, it surely can. For page reclaim, AFAIK, it can't.

A typical example: when randomly accessing a (not infinitely) large
file via buffered io long enough, we're bound to hit the same blocks
multiple times. Should we activate the pages containing those blocks,
i.e., to move them to the active lru list?  No.

RCA
---
For the fio/io_uring benchmark, the "No" is the key.

The mainline activates pages accessed multiple times. This is done in
the buffered io access path by mark_page_accessed(), and it takes the
lru lock, which is contended under memory pressure. This contention
slows down both the access path and kswapd. But kswapd is not the
problem here because we are measuring the io_uring process, not kswapd.

For this patchset, there are no activations since the refault rates of
pages accessed multiple times are similar to those of pages accessed only once
-- activations will only be done to pages from tiers with higher
refault rates.
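
To make that concrete, here is a toy user-space model of the per-tier
decision. It is only a sketch for this discussion -- the tier count,
the refault/eviction numbers and the comparison itself are simplified
for illustration, not the kernel code:

/*
 * Toy model of the per-tier activation decision; a sketch for this
 * discussion, not the kernel code. Tier N holds pages accessed
 * roughly 2^N times via file descriptors; the stats are invented.
 */
#include <stdio.h>

#define NR_TIERS        4       /* illustrative only */

struct tier_stats {
        unsigned long refaulted;        /* refaults charged to this tier */
        unsigned long evicted;          /* evictions charged to this tier */
};

/*
 * Activate pages of a tier only if its measured refault rate is higher
 * than that of the base tier (pages accessed once). Cross-multiply to
 * avoid division; +1 sidesteps empty tiers.
 */
static int should_activate(const struct tier_stats *tier,
                           const struct tier_stats *base)
{
        return tier->refaulted * (base->evicted + 1) >
               base->refaulted * (tier->evicted + 1);
}

int main(void)
{
        struct tier_stats tiers[NR_TIERS] = {
                { .refaulted = 100, .evicted = 1000 },  /* tier 0 */
                { .refaulted =  95, .evicted = 1000 },  /* tier 1: similar */
                { .refaulted = 400, .evicted = 1000 },  /* tier 2: higher */
                { .refaulted = 500, .evicted = 1000 },  /* tier 3: higher */
        };

        for (int i = 1; i < NR_TIERS; i++)
                printf("tier %d: %s\n", i,
                       should_activate(&tiers[i], &tiers[0]) ?
                       "activate" : "leave alone");
        return 0;
}

In the fio/io_uring case all tiers see similar refault rates, so nothing
gets activated and the lru lock stays out of the buffered io path.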

If you wish to debunk
---------------------
git fetch https://linux-mm.googlesource.com/page-reclaim refs/changes/73/1173/1

CONFIG_LRU_GEN=y
CONFIG_LRU_GEN_ENABLED=y

Run your benchmarks

Profiles (200G mem + 400G file)
-------------------------------
A quick test from Jens' fio/io_uring:

-rc7
    13.30%  io_uring  xas_load
    13.22%  io_uring  _copy_to_iter
    12.30%  io_uring  __add_to_page_cache_locked
     7.43%  io_uring  clear_page_erms
     4.18%  io_uring  filemap_get_read_batch
     3.54%  io_uring  get_page_from_freelist
     2.98%  io_uring  ***native_queued_spin_lock_slowpath***
     1.61%  io_uring  page_cache_ra_unbounded
     1.16%  io_uring  xas_start
     1.08%  io_uring  filemap_read
     1.07%  io_uring  ***__activate_page***

lru lock: 2.98% (lru addition + activation)
activation: 1.07%

-rc7 + this patchset
    14.44%  io_uring  xas_load
    14.14%  io_uring  _copy_to_iter
    11.15%  io_uring  __add_to_page_cache_locked
     6.56%  io_uring  clear_page_erms
     4.44%  io_uring  filemap_get_read_batch
     2.14%  io_uring  get_page_from_freelist
     1.32%  io_uring  page_cache_ra_unbounded
     1.20%  io_uring  psi_group_change
     1.18%  io_uring  filemap_read
     1.09%  io_uring  ****native_queued_spin_lock_slowpath****
     1.08%  io_uring  do_mpage_readpage

lru lock: 1.09% (lru addition only)

And I plan to reach out to other communities, e.g., PostgreSQL, to
benchmark the patchset. I heard they have been complaining about the
buffered io performance under memory pressure. Any other benchmarks
you'd suggest?

BTW, you might find another surprise in how much less frequently slab
shrinkers are called under memory pressure, because this patchset is a
lot better at finding pages to reclaim and therefore doesn't overkill
slabs.

Thanks.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14  3:40       ` Yu Zhao
@ 2021-04-14  4:50         ` Dave Chinner
  2021-04-14  7:16           ` Yu Zhao
  0 siblings, 1 reply; 57+ messages in thread
From: Dave Chinner @ 2021-04-14  4:50 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Jens Axboe, SeongJae Park, Linux-MM, Andi Kleen, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	Kernel Page Reclaim v2

On Tue, Apr 13, 2021 at 09:40:12PM -0600, Yu Zhao wrote:
> On Tue, Apr 13, 2021 at 5:14 PM Dave Chinner <david@fromorbit.com> wrote:
> > On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
> > > On 4/13/21 1:51 AM, SeongJae Park wrote:
> > > > From: SeongJae Park <sjpark@amazon.de>
> > > >
> > > > Hello,
> > > >
> > > >
> > > > Very interesting work, thank you for sharing this :)
> > > >
> > > > On Tue, 13 Apr 2021 00:56:17 -0600 Yu Zhao <yuzhao@google.com> wrote:
> > > >
> > > >> What's new in v2
> > > >> ================
> > > >> Special thanks to Jens Axboe for reporting a regression in buffered
> > > >> I/O and helping test the fix.
> > > >
> > > > Is the discussion open?  If so, could you please give me a link?
> > >
> > > I wasn't on the initial post (or any of the lists it was posted to), but
> > > it's on the google page reclaim list. Not sure if that is public or not.
> > >
> > > tldr is that I was pretty excited about this work, as buffered IO tends
> > > to suck (a lot) for high throughput applications. My test case was
> > > pretty simple:
> > >
> > > Randomly read a fast device, using 4k buffered IO, and watch what
> > > happens when the page cache gets filled up. For this particular test,
> > > we'll initially be doing 2.1GB/sec of IO, and then drop to 1.5-1.6GB/sec
> > > with kswapd using a lot of CPU trying to keep up. That's mainline
> > > behavior.
> >
> > I see this exact same behaviour here, too, but I RCA'd it to
> > contention between the inode and memory reclaim for the mapping
> > structure that indexes the page cache. Basically the mapping tree
> > lock is the contention point here - you can either be adding pages
> > to the mapping during IO, or memory reclaim can be removing pages
> > from the mapping, but we can't do both at once.
> >
> > So we end up with kswapd spinning on the mapping tree lock like so
> > when doing 1.6GB/s in 4kB buffered IO:
> >
> > -   20.06%     0.00%  [kernel]               [k] kswapd                                                                                                        ▒
> >    - 20.06% kswapd                                                                                                                                             ▒
> >       - 20.05% balance_pgdat                                                                                                                                   ▒
> >          - 20.03% shrink_node                                                                                                                                  ▒
> >             - 19.92% shrink_lruvec                                                                                                                             ▒
> >                - 19.91% shrink_inactive_list                                                                                                                   ▒
> >                   - 19.22% shrink_page_list                                                                                                                    ▒
> >                      - 17.51% __remove_mapping                                                                                                                 ▒
> >                         - 14.16% _raw_spin_lock_irqsave                                                                                                        ▒
> >                            - 14.14% do_raw_spin_lock                                                                                                           ▒
> >                                 __pv_queued_spin_lock_slowpath                                                                                                 ▒
> >                         - 1.56% __delete_from_page_cache                                                                                                       ▒
> >                              0.63% xas_store                                                                                                                   ▒
> >                         - 0.78% _raw_spin_unlock_irqrestore                                                                                                    ▒
> >                            - 0.69% do_raw_spin_unlock                                                                                                          ▒
> >                                 __raw_callee_save___pv_queued_spin_unlock                                                                                      ▒
> >                      - 0.82% free_unref_page_list                                                                                                              ▒
> >                         - 0.72% free_unref_page_commit                                                                                                         ▒
> >                              0.57% free_pcppages_bulk                                                                                                          ▒
> >
> > And these are the processes consuming CPU:
> >
> >    5171 root      20   0 1442496   5696   1284 R  99.7   0.0   1:07.78 fio
> >    1150 root      20   0       0      0      0 S  47.4   0.0   0:22.70 kswapd1
> >    1146 root      20   0       0      0      0 S  44.0   0.0   0:21.85 kswapd0
> >    1152 root      20   0       0      0      0 S  39.7   0.0   0:18.28 kswapd3
> >    1151 root      20   0       0      0      0 S  15.2   0.0   0:12.14 kswapd2
> >
> > i.e. when memory reclaim kicks in, the read process has 20% less
> > time with exclusive access to the mapping tree to insert new pages.
> > Hence buffered read performance goes down quite substantially when
> > memory reclaim kicks in, and this really has nothing to do with the
> > memory reclaim LRU scanning algorithm.
> >
> > I can actually get this machine to pin those 5 processes to 100% CPU
> > under certain conditions. Each process is spinning all that extra
> > time on the mapping tree lock, and performance degrades further.
> > Changing the LRU reclaim algorithm won't fix this - the workload is
> > solidly bound by the exclusive nature of the mapping tree lock and
> > the number of tasks trying to obtain it exclusively...
> >
> > > The initial posting of this patchset did no better, in fact it did a bit
> > > worse. Performance dropped to the same levels and kswapd was using as
> > > much CPU as before, but on top of that we also got excessive swapping.
> > > Not at a high rate, but 5-10MB/sec continually.
> > >
> > > I had some back and forths with Yu Zhao and tested a few new revisions,
> > > and the current series does much better in this regard. Performance
> > > still dips a bit when page cache fills, but not nearly as much, and
> > > kswapd is using less CPU than before.
> >
> > Profiles would be interesting, because it sounds to me like reclaim
> > *might* be batching page cache removal better (e.g. fewer, larger
> > batches) and so spending less time contending on the mapping tree
> > lock...
> >
> > IOWs, I suspect this result might actually be a result of less lock
> > contention due to a change in batch processing characteristics of
> > the new algorithm rather than it being a "better" algorithm...
> 
> I appreciate the profile. But there is no batching in
> __remove_mapping() -- it locks the mapping for each page, and
> therefore the lock contention penalizes the mainline and this patchset
> equally. It looks worse on your system because the four kswapd threads
> from different nodes were working on the same file.

I think you misunderstand exactly what I mean by "batching" here.
I'm not talking about doing multiple pieces of work under a single
lock. What I mean is that the overall amount of work done in a
single reclaim scan (i.e. a "reclaim batch") is packaged differently.

We already batch up page reclaim via building a page list and then
passing it to shrink_page_list() to process the batch of pages in a
single pass. Each page in this page list batch then calls
remove_mapping() to pull the page from the LRU, so we have a run of
contention between the foreground read() thread and the background
kswapd.

If the size or nature of the pages in the batch passed to
shrink_page_list() changes, then the amount of time a reclaim batch
is going to put pressure on the mapping tree lock will also change.
That's the "change in batching behaviour" I'm referring to here. I
haven't read through the patchset to determine if you change the
shrink_page_list() algorithm, but it likely changes what is passed
to be reclaimed and that in turn changes the locking patterns that
fall out of shrink_page_list...

> And kswapd is only one of two paths that could affect the performance.
> The kernel context of the test process is where the improvement mainly
> comes from.
> 
> I also suspect you were testing a file much larger than your memory
> size. If so, sorry to tell you that a file only a few times larger
> than memory, e.g. twice as large, would fare even worse.
> 
> Here is my take:
> 
> Claim
> -----
> This patchset is a "better" algorithm. (Technically it's not an
> algorithm, it's a feedback loop.)
> 
> Theoretical basis
> -----------------
> An open-loop control (the mainline) can only be better if the margin
> of error in its prediction of future events is less than that from
> the trial-and-error of a closed-loop control (this patchset). For
> simple machines, it surely can. For page reclaim, AFAIK, it can't.
> 
> A typical example: when randomly accessing a (not infinitely) large
> file via buffered io long enough, we're bound to hit the same blocks
> multiple times. Should we activate the pages containing those blocks,
> i.e., to move them to the active lru list?  No.
> 
> RCA
> ---
> For the fio/io_uring benchmark, the "No" is the key.
> 
> The mainline activates pages accessed multiple times. This is done in
> the buffered io access path by mark_page_accessed(), and it takes the
> lru lock, which is contended under memory pressure. This contention
> slows down both the access path and kswapd. But kswapd is not the
> problem here because we are measuring the io_uring process, not kswapd.
> 
> For this patchset, there are no activations since the refault rates of
> pages accessed multiple times are similar to those of pages accessed only once
> -- activations will only be done to pages from tiers with higher
> refault rates.
> 
> If you wish to debunk
> ---------------------

Nope, it's your job to convince us that it works, not the other way
around. It's up to you to prove that your assertions are correct,
not for us to prove they are false.

> git fetch https://linux-mm.googlesource.com/page-reclaim refs/changes/73/1173/1
> 
> CONFIG_LRU_GEN=y
> CONFIG_LRU_GEN_ENABLED=y
> 
> Run your benchmarks
> 
> Profiles (200G mem + 400G file)
> -------------------------------
> A quick test from Jens' fio/io_uring:
> 
> -rc7
>     13.30%  io_uring  xas_load
>     13.22%  io_uring  _copy_to_iter
>     12.30%  io_uring  __add_to_page_cache_locked
>      7.43%  io_uring  clear_page_erms
>      4.18%  io_uring  filemap_get_read_batch
>      3.54%  io_uring  get_page_from_freelist
>      2.98%  io_uring  ***native_queued_spin_lock_slowpath***
>      1.61%  io_uring  page_cache_ra_unbounded
>      1.16%  io_uring  xas_start
>      1.08%  io_uring  filemap_read
>      1.07%  io_uring  ***__activate_page***
> 
> lru lock: 2.98% (lru addition + activation)
> activation: 1.07%
> 
> -rc7 + this patchset
>     14.44%  io_uring  xas_load
>     14.14%  io_uring  _copy_to_iter
>     11.15%  io_uring  __add_to_page_cache_locked
>      6.56%  io_uring  clear_page_erms
>      4.44%  io_uring  filemap_get_read_batch
>      2.14%  io_uring  get_page_from_freelist
>      1.32%  io_uring  page_cache_ra_unbounded
>      1.20%  io_uring  psi_group_change
>      1.18%  io_uring  filemap_read
>      1.09%  io_uring  ****native_queued_spin_lock_slowpath****
>      1.08%  io_uring  do_mpage_readpage
> 
> lru lock: 1.09% (lru addition only)

All this tells us is that there was *less contention on the mapping
tree lock*. It does not tell us why there was less contention.

You've handily omitted the kswapd profile, which is really the one
of interest to the discussion here - how did the memory reclaim CPU
usage profile also change at the same time?

> And I plan to reach out to other communities, e.g., PostgreSQL, to
> benchmark the patchset. I heard they have been complaining about the
> buffered io performance under memory pressure. Any other benchmarks
> you'd suggest?
> 
> BTW, you might find another surprise in how much less frequently slab
> shrinkers are called under memory pressure, because this patchset is a
> lot better at finding pages to reclaim and therefore doesn't overkill
> slabs.

That's actually very likely to be a Bad Thing and cause unexpected
performance and OOM-based regressions. When the machine finally runs
out of page cache it can easily reclaim, it's going to get stuck
with long tail latencies reclaiming huge slab caches as they've had
no substantial ongoing pressure put on them to keep them in balance
with the overall memory pressure the system is under...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
       [not found]         ` <CAOUHufafMcaG8sOS=1YMy2P_6p0R1FzP16bCwpUau7g1-PybBQ@mail.gmail.com>
@ 2021-04-14  6:15           ` Huang, Ying
  2021-04-14  7:58             ` Yu Zhao
  2021-04-14 15:51           ` Andi Kleen
  1 sibling, 1 reply; 57+ messages in thread
From: Huang, Ying @ 2021-04-14  6:15 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Rik van Riel, Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM,
	Andi Kleen, Andrew Morton, Benjamin Manes, Dave Hansen,
	Hillf Danton, Johannes Weiner, Jonathan Corbet, Joonsoo Kim,
	Matthew Wilcox, Mel Gorman, Miaohe Lin, Michael Larabel,
	Michal Hocko, Michel Lespinasse, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Zi Yan,
	linux-kernel, lkp, Kernel Page Reclaim v2

Yu Zhao <yuzhao@google.com> writes:

> On Tue, Apr 13, 2021 at 8:30 PM Rik van Riel <riel@surriel.com> wrote:
>>
>> On Wed, 2021-04-14 at 09:14 +1000, Dave Chinner wrote:
>> > On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
>> >
>> > > The initial posting of this patchset did no better, in fact it did
>> > > a bit
>> > > worse. Performance dropped to the same levels and kswapd was using
>> > > as
>> > > much CPU as before, but on top of that we also got excessive
>> > > swapping.
>> > > Not at a high rate, but 5-10MB/sec continually.
>> > >
>> > > I had some back and forths with Yu Zhao and tested a few new
>> > > revisions,
>> > > and the current series does much better in this regard. Performance
>> > > still dips a bit when page cache fills, but not nearly as much, and
>> > > kswapd is using less CPU than before.
>> >
>> > Profiles would be interesting, because it sounds to me like reclaim
>> > *might* be batching page cache removal better (e.g. fewer, larger
>> > batches) and so spending less time contending on the mapping tree
>> > lock...
>> >
>> > IOWs, I suspect this result might actually be a result of less lock
>> > contention due to a change in batch processing characteristics of
>> > the new algorithm rather than it being a "better" algorithm...
>>
>> That seems quite likely to me, given the issues we have
>> had with virtual scan reclaim algorithms in the past.
>
> Hi Rik,
>
> Let me paste the code so we can move beyond the "batching" hypothesis:
>
> static int __remove_mapping(struct address_space *mapping, struct page *page,
>                             bool reclaimed, struct mem_cgroup *target_memcg)
> {
>         unsigned long flags;
>         int refcount;
>         void *shadow = NULL;
>
>         BUG_ON(!PageLocked(page));
>         BUG_ON(mapping != page_mapping(page));
>
>         xa_lock_irqsave(&mapping->i_pages, flags);
>
>> SeongJae, what is this algorithm supposed to do when faced
>> with situations like this:
>
> I'll assume the questions were directed at me, not SeongJae.
>
>> 1) Running on a system with 8 NUMA nodes, and
>> memory
>>    pressure in one of those nodes.
>> 2) Running PostgresQL or Oracle, with hundreds of
>>    processes mapping the same (very large) shared
>>    memory segment.
>>
>> How do you keep your algorithm from falling into the worst
>> case virtual scanning scenarios that were crippling the
>> 2.4 kernel 15+ years ago on systems with just a few GB of
>> memory?
>
> There is a fundamental shift: that time we were scanning for cold pages,
> and nowadays we are scanning for hot pages.
>
> I'd be surprised if scanning for cold pages didn't fall apart, because it'd
> find most of the entries accessed, if they are present at all.
>
> Scanning for hot pages, on the other hand, is way better. Let me just
> reiterate:
> 1) It will not scan page tables from processes that have been sleeping
>    since the last scan.
> 2) It will not scan PTE tables under non-leaf PMD entries that do not
>    have the accessed bit set, when
>    CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
> 3) It will not zigzag between the PGD table and the same PMD or PTE
>    table spanning multiple VMAs. In other words, it finishes all the
>    VMAs with the range of the same PMD or PTE table before it returns
>    to the PGD table. This optimizes workloads that have large numbers
>    of tiny VMAs, especially when CONFIG_PGTABLE_LEVELS=5.
>
> So the cost is roughly proportional to the number of referenced pages it
> discovers. If there is no memory pressure, no scanning at all. For a system
> under heavy memory pressure, most of the pages are referenced (otherwise
> why would it be under memory pressure?), and if we use the rmap, we need to
> scan a lot of pages anyway. Why not just scan them all?

This may not be the case.  For rmap scanning, it's possible to scan only
a small portion of memory.  But with page table scanning, you need to
scan almost all of it (I understand you have some optimizations, as
listed above).  As Rik showed in the test case above, there may be
memory pressure on only one of 8 NUMA nodes (because of NUMA binding?).
Then rmap scanning only needs to scan pages in this node, while page
table scanning may need to scan pages in other nodes too.

Best Regards,
Huang, Ying

> This way you save a
> lot because of batching (now it's time to talk about batching). Besides,
> page tables have far better memory locality than the rmap. For the shared
> memory example you gave, the rmap needs to lock *each* page it scans. How
> many 4KB pages does your large file have? I'll leave the math to you.
>
> Here are some profiles:
>
> zram with the rmap (mainline)
>   31.03%  page_vma_mapped_walk
>   25.59%  lzo1x_1_do_compress
>    4.63%  do_raw_spin_lock
>    3.89%  vma_interval_tree_iter_next
>    3.33%  vma_interval_tree_subtree_search
>
> zram with page table scanning (this patchset)
>   49.36%  lzo1x_1_do_compress
>    4.54%  page_vma_mapped_walk
>    4.45%  memset_erms
>    3.47%  walk_pte_range
>    2.88%  zram_bvec_rw
>
> Note that these are not just what I saw from some local benchmarks. We have
> observed *millions* of machines in our fleet.
>
> I encourage you to try it and see for yourself. It's as simple as:
>
> git fetch https://linux-mm.googlesource.com/page-reclaim refs/changes/73/1173/1
>
> CONFIG_LRU_GEN=y
> CONFIG_LRU_GEN_ENABLED=y
>
> and build and run your favorite benchmarks.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14  4:50         ` Dave Chinner
@ 2021-04-14  7:16           ` Yu Zhao
  2021-04-14 10:00             ` Yu Zhao
  2021-04-15  1:36             ` Dave Chinner
  0 siblings, 2 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-14  7:16 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Jens Axboe, SeongJae Park, Linux-MM, Andi Kleen, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	Kernel Page Reclaim v2

[-- Attachment #1: Type: text/plain, Size: 15692 bytes --]

On Tue, Apr 13, 2021 at 10:50 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Tue, Apr 13, 2021 at 09:40:12PM -0600, Yu Zhao wrote:
> > On Tue, Apr 13, 2021 at 5:14 PM Dave Chinner <david@fromorbit.com> wrote:
> > > On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
> > > > On 4/13/21 1:51 AM, SeongJae Park wrote:
> > > > > From: SeongJae Park <sjpark@amazon.de>
> > > > >
> > > > > Hello,
> > > > >
> > > > >
> > > > > Very interesting work, thank you for sharing this :)
> > > > >
> > > > > On Tue, 13 Apr 2021 00:56:17 -0600 Yu Zhao <yuzhao@google.com> wrote:
> > > > >
> > > > >> What's new in v2
> > > > >> ================
> > > > >> Special thanks to Jens Axboe for reporting a regression in buffered
> > > > >> I/O and helping test the fix.
> > > > >
> > > > > Is the discussion open?  If so, could you please give me a link?
> > > >
> > > > I wasn't on the initial post (or any of the lists it was posted to), but
> > > > it's on the google page reclaim list. Not sure if that is public or not.
> > > >
> > > > tldr is that I was pretty excited about this work, as buffered IO tends
> > > > to suck (a lot) for high throughput applications. My test case was
> > > > pretty simple:
> > > >
> > > > Randomly read a fast device, using 4k buffered IO, and watch what
> > > > happens when the page cache gets filled up. For this particular test,
> > > > we'll initially be doing 2.1GB/sec of IO, and then drop to 1.5-1.6GB/sec
> > > > with kswapd using a lot of CPU trying to keep up. That's mainline
> > > > behavior.
> > >
> > > I see this exact same behaviour here, too, but I RCA'd it to
> > > contention between the inode and memory reclaim for the mapping
> > > structure that indexes the page cache. Basically the mapping tree
> > > lock is the contention point here - you can either be adding pages
> > > to the mapping during IO, or memory reclaim can be removing pages
> > > from the mapping, but we can't do both at once.
> > >
> > > So we end up with kswapd spinning on the mapping tree lock like so
> > > when doing 1.6GB/s in 4kB buffered IO:
> > >
> > > -   20.06%     0.00%  [kernel]               [k] kswapd                                                                                                        ▒
> > >    - 20.06% kswapd                                                                                                                                             ▒
> > >       - 20.05% balance_pgdat                                                                                                                                   ▒
> > >          - 20.03% shrink_node                                                                                                                                  ▒
> > >             - 19.92% shrink_lruvec                                                                                                                             ▒
> > >                - 19.91% shrink_inactive_list                                                                                                                   ▒
> > >                   - 19.22% shrink_page_list                                                                                                                    ▒
> > >                      - 17.51% __remove_mapping                                                                                                                 ▒
> > >                         - 14.16% _raw_spin_lock_irqsave                                                                                                        ▒
> > >                            - 14.14% do_raw_spin_lock                                                                                                           ▒
> > >                                 __pv_queued_spin_lock_slowpath                                                                                                 ▒
> > >                         - 1.56% __delete_from_page_cache                                                                                                       ▒
> > >                              0.63% xas_store                                                                                                                   ▒
> > >                         - 0.78% _raw_spin_unlock_irqrestore                                                                                                    ▒
> > >                            - 0.69% do_raw_spin_unlock                                                                                                          ▒
> > >                                 __raw_callee_save___pv_queued_spin_unlock                                                                                      ▒
> > >                      - 0.82% free_unref_page_list                                                                                                              ▒
> > >                         - 0.72% free_unref_page_commit                                                                                                         ▒
> > >                              0.57% free_pcppages_bulk                                                                                                          ▒
> > >
> > > And these are the processes consuming CPU:
> > >
> > >    5171 root      20   0 1442496   5696   1284 R  99.7   0.0   1:07.78 fio
> > >    1150 root      20   0       0      0      0 S  47.4   0.0   0:22.70 kswapd1
> > >    1146 root      20   0       0      0      0 S  44.0   0.0   0:21.85 kswapd0
> > >    1152 root      20   0       0      0      0 S  39.7   0.0   0:18.28 kswapd3
> > >    1151 root      20   0       0      0      0 S  15.2   0.0   0:12.14 kswapd2
> > >
> > > i.e. when memory reclaim kicks in, the read process has 20% less
> > > time with exclusive access to the mapping tree to insert new pages.
> > > Hence buffered read performance goes down quite substantially when
> > > memory reclaim kicks in, and this really has nothing to do with the
> > > memory reclaim LRU scanning algorithm.
> > >
> > > I can actually get this machine to pin those 5 processes to 100% CPU
> > > under certain conditions. Each process is spinning all that extra
> > > time on the mapping tree lock, and performance degrades further.
> > > Changing the LRU reclaim algorithm won't fix this - the workload is
> > > solidly bound by the exclusive nature of the mapping tree lock and
> > > the number of tasks trying to obtain it exclusively...
> > >
> > > > The initial posting of this patchset did no better, in fact it did a bit
> > > > worse. Performance dropped to the same levels and kswapd was using as
> > > > much CPU as before, but on top of that we also got excessive swapping.
> > > > Not at a high rate, but 5-10MB/sec continually.
> > > >
> > > > I had some back and forths with Yu Zhao and tested a few new revisions,
> > > > and the current series does much better in this regard. Performance
> > > > still dips a bit when page cache fills, but not nearly as much, and
> > > > kswapd is using less CPU than before.
> > >
> > > Profiles would be interesting, because it sounds to me like reclaim
> > > *might* be batching page cache removal better (e.g. fewer, larger
> > > batches) and so spending less time contending on the mapping tree
> > > lock...
> > >
> > > IOWs, I suspect this result might actually be a result of less lock
> > > contention due to a change in batch processing characteristics of
> > > the new algorithm rather than it being a "better" algorithm...
> >
> > I appreciate the profile. But there is no batching in
> > __remove_mapping() -- it locks the mapping for each page, and
> > therefore the lock contention penalizes the mainline and this patchset
> > equally. It looks worse on your system because the four kswapd threads
> > from different nodes were working on the same file.
>
> I think you misunderstand exactly what I mean by "batching" here.
> I'm not talking about doing multiple pieces of work under a single
> lock. What I mean is that the overall amount of work done in a
> single reclaim scan (i.e. a "reclaim batch") is packaged differently.
>
> We already batch up page reclaim via building a page list and then
> passing it to shrink_page_list() to process the batch of pages in a
> single pass. Each page in this page list batch then calls
> remove_mapping() to pull the page from the LRU, so we have a run of
> contention between the foreground read() thread and the background
> kswapd.
>
> If the size or nature of the pages in the batch passed to
> shrink_page_list() changes, then the amount of time a reclaim batch
> is going to put pressure on the mapping tree lock will also change.
> That's the "change in batching behaviour" I'm referring to here. I
> haven't read through the patchset to determine if you change the
> shrink_page_list() algorithm, but it likely changes what is passed
> to be reclaimed and that in turn changes the locking patterns that
> fall out of shrink_page_list...

Ok, if we are talking about the size of the batch passed to
shrink_page_list(), both the mainline and this patchset cap it at
SWAP_CLUSTER_MAX, which is 32. There are corner cases, but when
running fio/io_uring, it's safe to say both use 32.
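
To spell out what that means for the mapping lock, here is a user-space
caricature of the loop in question -- not the kernel code, just its
shape: each page in a batch of 32 takes and drops the mapping's lock
individually, so how the batch was assembled doesn't change the locking
pattern against the read() side.

/*
 * User-space caricature of the per-page locking in shrink_page_list();
 * not kernel code, just the shape of the loop under discussion.
 */
#include <pthread.h>
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32     /* reclaim batch size in both kernels */

struct mapping {
        pthread_mutex_t lock;   /* stands in for the i_pages lock */
        unsigned long nr_pages;
        unsigned long lock_acquisitions;
};

struct page {
        struct mapping *mapping;
};

/* Mimics __remove_mapping(): the mapping is locked for each page... */
static void remove_mapping(struct page *page)
{
        struct mapping *m = page->mapping;

        pthread_mutex_lock(&m->lock);
        m->nr_pages--;
        m->lock_acquisitions++;
        pthread_mutex_unlock(&m->lock);
}

/*
 * ...so a 32-page batch against a single file is 32 lock round trips,
 * no matter how the batch was put together.
 */
static void shrink_page_batch(struct page *batch, int nr)
{
        for (int i = 0; i < nr; i++)
                remove_mapping(&batch[i]);
}

int main(void)
{
        struct mapping file = { .nr_pages = SWAP_CLUSTER_MAX };
        struct page batch[SWAP_CLUSTER_MAX];

        pthread_mutex_init(&file.lock, NULL);
        for (int i = 0; i < SWAP_CLUSTER_MAX; i++)
                batch[i].mapping = &file;

        shrink_page_batch(batch, SWAP_CLUSTER_MAX);
        printf("lock acquisitions for one batch: %lu\n",
               file.lock_acquisitions);
        return 0;
}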

> > And kswapd is only one of two paths that could affect the performance.
> > The kernel context of the test process is where the improvement mainly
> > comes from.
> >
> > I also suspect you were testing a file much larger than your memory
> > size. If so, sorry to tell you that a file only a few times larger
> > than memory, e.g. twice as large, would fare even worse.
> >
> > Here is my take:
> >
> > Claim
> > -----
> > This patchset is a "better" algorithm. (Technically it's not an
> > algorithm, it's a feedback loop.)
> >
> > Theoretical basis
> > -----------------
> > An open-loop control (the mainline) can only be better if the margin
> > of error in its prediction of future events is less than that from
> > the trial-and-error of a closed-loop control (this patchset). For
> > simple machines, it surely can. For page reclaim, AFAIK, it can't.
> >
> > A typical example: when randomly accessing a (not infinitely) large
> > file via buffered io long enough, we're bound to hit the same blocks
> > multiple times. Should we activate the pages containing those blocks,
> > i.e., to move them to the active lru list?  No.
> >
> > RCA
> > ---
> > For the fio/io_uring benchmark, the "No" is the key.
> >
> > The mainline activates pages accessed multiple times. This is done in
> > the buffered io access path by mark_page_accessed(), and it takes the
> > lru lock, which is contended under memory pressure. This contention
> > slows down both the access path and kswapd. But kswapd is not the
> > problem here because we are measuring the io_uring process, not kswapd.
> >
> > For this patchset, there are no activations since the refault rates of
> > pages accessed multiple times are similar to those of pages accessed only once
> > -- activations will only be done to pages from tiers with higher
> > refault rates.
> >
> > If you wish to debunk
> > ---------------------
>
> Nope, it's your job to convince us that it works, not the other way
> around. It's up to you to prove that your assertions are correct,
> not for us to prove they are false.

Just trying to keep people motivated; my homework is my own.

> > git fetch https://linux-mm.googlesource.com/page-reclaim refs/changes/73/1173/1
> >
> > CONFIG_LRU_GEN=y
> > CONFIG_LRU_GEN_ENABLED=y
> >
> > Run your benchmarks
> >
> > Profiles (200G mem + 400G file)
> > -------------------------------
> > A quick test from Jens' fio/io_uring:
> >
> > -rc7
> >     13.30%  io_uring  xas_load
> >     13.22%  io_uring  _copy_to_iter
> >     12.30%  io_uring  __add_to_page_cache_locked
> >      7.43%  io_uring  clear_page_erms
> >      4.18%  io_uring  filemap_get_read_batch
> >      3.54%  io_uring  get_page_from_freelist
> >      2.98%  io_uring  ***native_queued_spin_lock_slowpath***
> >      1.61%  io_uring  page_cache_ra_unbounded
> >      1.16%  io_uring  xas_start
> >      1.08%  io_uring  filemap_read
> >      1.07%  io_uring  ***__activate_page***
> >
> > lru lock: 2.98% (lru addition + activation)
> > activation: 1.07%
> >
> > -rc7 + this patchset
> >     14.44%  io_uring  xas_load
> >     14.14%  io_uring  _copy_to_iter
> >     11.15%  io_uring  __add_to_page_cache_locked
> >      6.56%  io_uring  clear_page_erms
> >      4.44%  io_uring  filemap_get_read_batch
> >      2.14%  io_uring  get_page_from_freelist
> >      1.32%  io_uring  page_cache_ra_unbounded
> >      1.20%  io_uring  psi_group_change
> >      1.18%  io_uring  filemap_read
> >      1.09%  io_uring  ****native_queued_spin_lock_slowpath****
> >      1.08%  io_uring  do_mpage_readpage
> >
> > lru lock: 1.09% (lru addition only)
>
> All this tells us is that there was *less contention on the mapping
> tree lock*. It does not tell us why there was less contention.
>
> You've handily omitted the kswapd profile, which is really the one
> of interest to the discussion here - how did the memory reclaim CPU
> usage profile also change at the same time?

Well, let me attach them. Suffix -1 is the mainline, -2 is the patchset.

  mainline
     57.65%  kswapd0  __remove_mapping
  this patchset
     61.61%  kswapd0  __remove_mapping

As I said, the mapping lock contention penalizes both heavily. Its
share of the profile is even higher with the patchset because the
patchset has less overhead elsewhere. I'm trying to explain the "less
overhead" part: it's the activations that make the mainline worse.

  mainline
    6.53%  kswapd0  shrink_active_list
  this patchset
    0

From the io_uring context:
  mainline
     2.53%  io_uring  mark_page_accessed
  this patchset
     0.52%  io_uring  mark_page_accessed

mark_page_accessed() moves pages accessed multiple times to the active
lru list. Then shrink_active_list() moves them back to the inactive
list. All for nothing.
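
In case it helps, the churn boils down to something like this -- a
user-space simplification of the state transitions for one page, not
the actual mm/swap.c code:

/*
 * Simplified model of the promotion/demotion churn described above;
 * not the actual mm/swap.c code.
 */
#include <stdbool.h>
#include <stdio.h>

struct page {
        bool referenced;        /* PG_referenced */
        bool active;            /* PG_active: which lru list it is on */
};

static unsigned long lru_lock_trips;    /* the cost being discussed */

/* A second access promotes the page to the active list (lru lock). */
static void mark_page_accessed(struct page *page)
{
        if (!page->referenced) {
                page->referenced = true;
        } else if (!page->active) {
                page->active = true;            /* activate_page() */
                page->referenced = false;
                lru_lock_trips++;
        }
}

/* Under pressure, shrink_active_list() demotes it again (lru lock). */
static void shrink_active_list(struct page *page)
{
        if (page->active) {
                page->active = false;
                lru_lock_trips++;
        }
}

int main(void)
{
        struct page page = { 0 };

        mark_page_accessed(&page);      /* 1st hit: sets PG_referenced */
        mark_page_accessed(&page);      /* 2nd hit: activation */
        shrink_active_list(&page);      /* reclaim: deactivation */

        printf("lru lock trips for one doomed page: %lu\n",
               lru_lock_trips);
        return 0;
}

Two lru lock trips per page that ends up being reclaimed anyway; with
this patchset only the initial lru addition remains.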

I don't want to paste everything here -- it would clutter the thread.
Please see all the detailed profiles in the attachment. Let me know if
their format is not to your liking. I still have the raw perf.data.

> > And I plan to reach out to other communities, e.g., PostgreSQL, to
> > benchmark the patchset. I heard they have been complaining about the
> > buffered io performance under memory pressure. Any other benchmarks
> > you'd suggest?
> >
> > BTW, you might find another surprise in how much less frequently slab
> > shrinkers are called under memory pressure, because this patchset is a
> > lot better at finding pages to reclaim and therefore doesn't overkill
> > slabs.
>
> That's actually very likely to be a Bad Thing and cause unexpected
> performance and OOM-based regressions. When the machine finally runs
> out of page cache it can easily reclaim, it's going to get stuck
> with long tail latencies reclaiming huge slab caches as they've had
> no substantial ongoing pressure put on them to keep them in balance
> with the overall memory pressure the system is under...

Well, it does use the existing equation: if it scans X% of pages, then
it scans X% of slab objects. But 1) it often finds enough pages to
reclaim at a lower X%, and 2) the pages it reclaims are less likely to
refault. So the side effect is that the overall number of slab objects
it scans is also reduced. I do see your point, but I don't see any
other options at the moment.
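
For what it's worth, the proportionality is roughly the following -- a
simplification for illustration, not the exact do_shrink_slab()
arithmetic, which also factors in things like shrinker seeks and batch
sizes:

/*
 * Rough illustration of "scan X% of pages => scan X% of slab objects";
 * the percentages below are made up.
 */
#include <stdio.h>

static unsigned long objects_to_scan(unsigned long freeable_objects,
                                     unsigned long pages_scanned,
                                     unsigned long pages_eligible)
{
        return freeable_objects * pages_scanned / pages_eligible;
}

int main(void)
{
        unsigned long freeable = 1000000;       /* e.g. dentries + inodes */

        printf("12%% of pages scanned -> %lu slab objects scanned\n",
               objects_to_scan(freeable, 12, 100));
        printf(" 4%% of pages scanned -> %lu slab objects scanned\n",
               objects_to_scan(freeable, 4, 100));
        return 0;
}

The lower X% at which this patchset finds enough pages to reclaim is the
whole difference; the equation itself is unchanged.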

[-- Attachment #2: io_uring-2.txt --]
[-- Type: text/plain, Size: 67381 bytes --]

# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 954K of event 'cycles'
# Event count (approx.): 856006306336
#
# Children      Self  Command       Shared Object      Symbol                                      
# ........  ........  ............  .................  ............................................
#
    99.90%     0.00%  io_uring      [unknown]          [k] 0x0000000000000005
    99.90%     0.00%  io_uring      [unknown]          [k] 0x0000564cf9afc450
    99.50%     0.02%  io_uring      libc-2.32.so       [.] syscall
    99.19%     0.01%  io_uring      [kernel.kallsyms]  [k] entry_SYSCALL_64_after_hwframe
    99.16%     0.00%  io_uring      [kernel.kallsyms]  [k] do_syscall_64
    94.78%     0.18%  io_uring      [kernel.kallsyms]  [k] __io_queue_sqe
    94.41%     0.25%  io_uring      [kernel.kallsyms]  [k] io_issue_sqe
    93.60%     0.48%  io_uring      [kernel.kallsyms]  [k] io_read
    89.35%     0.96%  io_uring      [kernel.kallsyms]  [k] blkdev_read_iter
    88.44%     0.12%  io_uring      [kernel.kallsyms]  [k] io_iter_do_read
    88.25%     0.16%  io_uring      [kernel.kallsyms]  [k] generic_file_read_iter
    88.00%     1.20%  io_uring      [kernel.kallsyms]  [k] filemap_read
    84.01%     0.01%  io_uring      [kernel.kallsyms]  [k] __x64_sys_io_uring_enter
    83.91%     0.01%  io_uring      [kernel.kallsyms]  [k] __do_sys_io_uring_enter
    83.74%     0.37%  io_uring      [kernel.kallsyms]  [k] io_submit_sqes
    81.28%     0.07%  io_uring      [kernel.kallsyms]  [k] io_queue_sqe
    74.65%     0.96%  io_uring      [kernel.kallsyms]  [k] filemap_get_pages
    55.92%     0.35%  io_uring      [kernel.kallsyms]  [k] ondemand_readahead
    54.57%     1.34%  io_uring      [kernel.kallsyms]  [k] page_cache_ra_unbounded
    51.57%     0.12%  io_uring      [kernel.kallsyms]  [k] page_cache_sync_ra
    24.14%     0.51%  io_uring      [kernel.kallsyms]  [k] add_to_page_cache_lru
    19.04%    11.51%  io_uring      [kernel.kallsyms]  [k] __add_to_page_cache_locked
    18.48%     0.13%  io_uring      [kernel.kallsyms]  [k] read_pages
    18.42%     0.18%  io_uring      [kernel.kallsyms]  [k] blkdev_readahead
    18.20%     0.55%  io_uring      [kernel.kallsyms]  [k] mpage_readahead
    16.81%     2.31%  io_uring      [kernel.kallsyms]  [k] filemap_get_read_batch
    16.37%    14.83%  io_uring      [kernel.kallsyms]  [k] xas_load
    15.40%     0.02%  io_uring      [kernel.kallsyms]  [k] task_work_run
    15.38%     0.03%  io_uring      [kernel.kallsyms]  [k] exit_to_user_mode_prepare
    15.31%     0.05%  io_uring      [kernel.kallsyms]  [k] tctx_task_work
    15.14%     0.15%  io_uring      [kernel.kallsyms]  [k] syscall_exit_to_user_mode
    14.05%     0.04%  io_uring      [kernel.kallsyms]  [k] io_req_task_submit
    13.86%     0.05%  io_uring      [kernel.kallsyms]  [k] __io_req_task_submit
    12.92%     0.12%  io_uring      [kernel.kallsyms]  [k] submit_bio
    11.40%     0.13%  io_uring      [kernel.kallsyms]  [k] copy_page_to_iter
    10.65%     9.61%  io_uring      [kernel.kallsyms]  [k] _copy_to_iter
     9.45%     0.03%  io_uring      [kernel.kallsyms]  [k] __page_cache_alloc
     9.42%     0.16%  io_uring      [kernel.kallsyms]  [k] submit_bio_noacct
     9.40%     0.11%  io_uring      [kernel.kallsyms]  [k] alloc_pages_current
     9.11%     0.30%  io_uring      [kernel.kallsyms]  [k] __alloc_pages_nodemask
     8.53%     1.81%  io_uring      [kernel.kallsyms]  [k] get_page_from_freelist
     8.38%     0.10%  io_uring      [kernel.kallsyms]  [k] asm_common_interrupt
     8.26%     0.06%  io_uring      [kernel.kallsyms]  [k] common_interrupt
     7.75%     0.05%  io_uring      [kernel.kallsyms]  [k] __common_interrupt
     7.62%     0.44%  io_uring      [kernel.kallsyms]  [k] blk_mq_submit_bio
     7.56%     0.20%  io_uring      [kernel.kallsyms]  [k] handle_edge_irq
     6.45%     5.90%  io_uring      [kernel.kallsyms]  [k] clear_page_erms
     5.25%     0.10%  io_uring      [kernel.kallsyms]  [k] handle_irq_event
     4.88%     0.19%  io_uring      [kernel.kallsyms]  [k] nvme_irq
     4.83%     0.07%  io_uring      [kernel.kallsyms]  [k] __handle_irq_event_percpu
     4.73%     0.52%  io_uring      [kernel.kallsyms]  [k] nvme_process_cq
     4.52%     0.01%  io_uring      [kernel.kallsyms]  [k] page_cache_async_ra
     4.00%     0.04%  io_uring      [kernel.kallsyms]  [k] nvme_pci_complete_rq
     3.82%     0.04%  io_uring      [kernel.kallsyms]  [k] nvme_complete_rq
     3.76%     1.11%  io_uring      [kernel.kallsyms]  [k] do_mpage_readpage
     3.74%     0.06%  io_uring      [kernel.kallsyms]  [k] blk_mq_end_request
     3.03%     0.01%  io_uring      [kernel.kallsyms]  [k] blk_flush_plug_list
     3.02%     0.06%  io_uring      [kernel.kallsyms]  [k] blk_mq_flush_plug_list
     2.96%     0.01%  io_uring      [kernel.kallsyms]  [k] blk_mq_sched_insert_requests
     2.94%     0.10%  io_uring      [kernel.kallsyms]  [k] blk_mq_try_issue_list_directly
     2.89%     0.00%  io_uring      [kernel.kallsyms]  [k] __irqentry_text_start
     2.71%     0.41%  io_uring      [kernel.kallsyms]  [k] psi_task_change
     2.67%     0.21%  io_uring      [kernel.kallsyms]  [k] lru_cache_add
     2.65%     0.14%  io_uring      [kernel.kallsyms]  [k] __blk_mq_try_issue_directly
     2.53%     0.54%  io_uring      [kernel.kallsyms]  [k] nvme_queue_rq
     2.43%     0.17%  io_uring      [kernel.kallsyms]  [k] blk_update_request
     2.42%     0.58%  io_uring      [kernel.kallsyms]  [k] __pagevec_lru_add
     2.29%     1.42%  io_uring      [kernel.kallsyms]  [k] psi_group_change
     2.22%     0.85%  io_uring      [kernel.kallsyms]  [k] blk_attempt_plug_merge
     2.14%     0.04%  io_uring      [kernel.kallsyms]  [k] bio_endio
     2.13%     0.11%  io_uring      [kernel.kallsyms]  [k] rw_verify_area
     2.08%     0.18%  io_uring      [kernel.kallsyms]  [k] mpage_end_io
     2.01%     0.08%  io_uring      [kernel.kallsyms]  [k] mpage_alloc
     1.71%     0.56%  io_uring      [kernel.kallsyms]  [k] _raw_spin_lock_irq
     1.65%     0.98%  io_uring      [kernel.kallsyms]  [k] workingset_refault
     1.65%     0.09%  io_uring      [kernel.kallsyms]  [k] psi_memstall_leave
     1.64%     0.20%  io_uring      [kernel.kallsyms]  [k] security_file_permission
     1.61%     1.59%  io_uring      [kernel.kallsyms]  [k] _raw_spin_lock
     1.58%     0.08%  io_uring      [kernel.kallsyms]  [k] psi_memstall_enter
     1.44%     0.37%  io_uring      [kernel.kallsyms]  [k] xa_get_order
     1.43%     0.13%  io_uring      [kernel.kallsyms]  [k] __blk_mq_alloc_request
     1.37%     0.26%  io_uring      [kernel.kallsyms]  [k] io_submit_flush_completions
     1.36%     0.14%  io_uring      [kernel.kallsyms]  [k] xa_load
     1.34%     0.31%  io_uring      [kernel.kallsyms]  [k] bio_alloc_bioset
     1.31%     0.26%  io_uring      [kernel.kallsyms]  [k] submit_bio_checks
     1.29%     0.04%  io_uring      [kernel.kallsyms]  [k] blk_finish_plug
     1.28%     1.27%  io_uring      [kernel.kallsyms]  [k] native_queued_spin_lock_slowpath
     1.24%     0.19%  io_uring      [kernel.kallsyms]  [k] page_endio
     1.13%     0.10%  io_uring      [kernel.kallsyms]  [k] unlock_page
     1.09%     0.99%  io_uring      [kernel.kallsyms]  [k] read_tsc
     1.07%     0.74%  io_uring      [kernel.kallsyms]  [k] lru_gen_addition
     1.03%     0.09%  io_uring      [kernel.kallsyms]  [k] mempool_alloc
     1.02%     0.13%  io_uring      [kernel.kallsyms]  [k] wake_up_page_bit
     1.02%     0.92%  io_uring      [kernel.kallsyms]  [k] xas_start
     1.01%     0.78%  io_uring      [kernel.kallsyms]  [k] apparmor_file_permission
     0.99%     0.99%  io_uring      [kernel.kallsyms]  [k] native_irq_return_iret
     0.94%     0.16%  io_uring      [kernel.kallsyms]  [k] __mod_lruvec_state
     0.93%     0.55%  io_uring      [kernel.kallsyms]  [k] blk_rq_merge_ok
     0.93%     0.16%  io_uring      [kernel.kallsyms]  [k] irq_chip_ack_parent
     0.91%     0.22%  io_uring      [kernel.kallsyms]  [k] mem_cgroup_charge
     0.88%     0.24%  io_uring      [kernel.kallsyms]  [k] record_times
     0.86%     0.17%  io_uring      [kernel.kallsyms]  [k] page_cache_prev_miss
     0.84%     0.15%  io_uring      [kernel.kallsyms]  [k] PageHuge
     0.84%     0.09%  io_uring      [kernel.kallsyms]  [k] io_req_free_batch
     0.82%     0.20%  io_uring      [kernel.kallsyms]  [k] xas_store
     0.82%     0.06%  io_uring      [kernel.kallsyms]  [k] mempool_alloc_slab
     0.78%     0.11%  io_uring      [kernel.kallsyms]  [k] io_setup_async_rw
     0.77%     0.76%  io_uring      [kernel.kallsyms]  [k] workingset_update_node
     0.73%     0.72%  io_uring      [kernel.kallsyms]  [k] native_apic_msr_eoi_write
     0.73%     0.35%  io_uring      [kernel.kallsyms]  [k] __fsnotify_parent
     0.73%     0.10%  io_uring      [kernel.kallsyms]  [k] io_dismantle_req
     0.71%     0.01%  io_uring      [kernel.kallsyms]  [k] __wake_up_locked_key_bookmark
     0.69%     0.17%  io_uring      [kernel.kallsyms]  [k] blk_mq_get_tag
     0.69%     0.04%  io_uring      [kernel.kallsyms]  [k] __xas_prev
     0.67%     0.09%  io_uring      [kernel.kallsyms]  [k] blk_mq_start_request
     0.65%     0.34%  io_uring      [kernel.kallsyms]  [k] __mod_memcg_lruvec_state
     0.64%     0.09%  io_uring      [kernel.kallsyms]  [k] sched_clock_cpu
     0.63%     0.31%  io_uring      [kernel.kallsyms]  [k] io_async_buf_func
     0.62%     0.23%  io_uring      [kernel.kallsyms]  [k] kfree
     0.60%     0.21%  io_uring      [kernel.kallsyms]  [k] blk_mq_rq_ctx_init
     0.59%     0.08%  io_uring      [kernel.kallsyms]  [k] bio_associate_blkg
     0.59%     0.42%  io_uring      [kernel.kallsyms]  [k] __x86_retpoline_rax
     0.58%     0.13%  io_uring      [kernel.kallsyms]  [k] __check_object_size
     0.58%     0.37%  io_uring      [kernel.kallsyms]  [k] kmem_cache_alloc
     0.57%     0.20%  io_uring      [kernel.kallsyms]  [k] __mod_lruvec_page_state
     0.57%     0.06%  io_uring      [kernel.kallsyms]  [k] __wake_up_common
     0.57%     0.01%  io_uring      [kernel.kallsyms]  [k] bio_put
     0.57%     0.35%  io_uring      [kernel.kallsyms]  [k] __lock_page_async
     0.55%     0.14%  io_uring      [kernel.kallsyms]  [k] blk_mq_free_request
     0.55%     0.27%  io_uring      [kernel.kallsyms]  [k] blk_throtl_bio
     0.54%     0.02%  io_uring      [kernel.kallsyms]  [k] bio_free
     0.54%     0.47%  io_uring      [kernel.kallsyms]  [k] io_file_supports_async
     0.53%     0.40%  io_uring      [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
     0.52%     0.26%  io_uring      [kernel.kallsyms]  [k] __kmalloc
     0.52%     0.04%  io_uring      [kernel.kallsyms]  [k] __blk_mq_get_tag
     0.52%     0.46%  io_uring      [kernel.kallsyms]  [k] mark_page_accessed
     0.50%     0.49%  io_uring      [kernel.kallsyms]  [k] native_sched_clock
     0.49%     0.06%  io_uring      [kernel.kallsyms]  [k] __sbitmap_queue_get
     0.47%     0.42%  io_uring      [kernel.kallsyms]  [k] memset_erms
     0.45%     0.01%  io_uring      [kernel.kallsyms]  [k] irqentry_exit
     0.45%     0.08%  io_uring      [kernel.kallsyms]  [k] mempool_free_slab
     0.45%     0.40%  io_uring      [kernel.kallsyms]  [k] bio_associate_blkg_from_css
     0.45%     0.03%  io_uring      [kernel.kallsyms]  [k] mempool_free
     0.44%     0.41%  io_uring      [kernel.kallsyms]  [k] xas_find_conflict
     0.44%     0.05%  io_uring      [kernel.kallsyms]  [k] irqentry_exit_to_user_mode
     0.43%     0.38%  io_uring      [kernel.kallsyms]  [k] __virt_addr_valid
     0.42%     0.39%  io_uring      [kernel.kallsyms]  [k] nvme_setup_cmd
     0.41%     0.25%  io_uring      [kernel.kallsyms]  [k] rcu_read_unlock_strict
     0.41%     0.15%  io_uring      [kernel.kallsyms]  [k] sbitmap_get
     0.39%     0.37%  io_uring      [kernel.kallsyms]  [k] __mod_node_page_state
     0.39%     0.31%  io_uring      [kernel.kallsyms]  [k] kmem_cache_free
     0.37%     0.34%  io_uring      [kernel.kallsyms]  [k] ktime_get
     0.37%     0.33%  io_uring      [kernel.kallsyms]  [k] io_import_iovec
     0.36%     0.32%  io_uring      [kernel.kallsyms]  [k] io_prep_rw
     0.36%     0.01%  io_uring      io_uring           [.] submitter_fn
     0.36%     0.32%  io_uring      [kernel.kallsyms]  [k] blk_attempt_bio_merge.part.0
     0.34%     0.30%  io_uring      [kernel.kallsyms]  [k] fsnotify
     0.32%     0.32%  io_uring      [kernel.kallsyms]  [k] __mod_memcg_state.part.0
     0.32%     0.18%  io_uring      [kernel.kallsyms]  [k] try_charge
     0.31%     0.01%  io_uring      [kernel.kallsyms]  [k] io_req_task_queue
     0.31%     0.27%  io_uring      [kernel.kallsyms]  [k] percpu_counter_add_batch
     0.31%     0.26%  io_uring      [kernel.kallsyms]  [k] blk_integrity_merge_bio
     0.31%     0.27%  io_uring      [kernel.kallsyms]  [k] wbt_wait
     0.30%     0.26%  io_uring      [kernel.kallsyms]  [k] io_file_get
     0.30%     0.26%  io_uring      [kernel.kallsyms]  [k] blkdev_get_block
     0.29%     0.18%  io_uring      [kernel.kallsyms]  [k] __cond_resched
     0.28%     0.25%  io_uring      [kernel.kallsyms]  [k] wbt_track
     0.28%     0.07%  io_uring      [kernel.kallsyms]  [k] io_req_task_work_add
     0.28%     0.01%  io_uring      [kernel.kallsyms]  [k] asm_sysvec_reschedule_ipi
     0.28%     0.18%  io_uring      [kernel.kallsyms]  [k] apic_ack_irq
     0.27%     0.24%  io_uring      [kernel.kallsyms]  [k] mutex_unlock
     0.26%     0.24%  io_uring      [kernel.kallsyms]  [k] __blk_mq_sched_bio_merge
     0.26%     0.06%  io_uring      [kernel.kallsyms]  [k] __lock_text_start
     0.25%     0.22%  io_uring      [kernel.kallsyms]  [k] __blk_queue_split
     0.25%     0.22%  io_uring      [kernel.kallsyms]  [k] __slab_free
     0.25%     0.05%  io_uring      [kernel.kallsyms]  [k] __blk_mq_free_request
     0.25%     0.10%  io_uring      [kernel.kallsyms]  [k] bio_add_page
     0.25%     0.16%  io_uring      [kernel.kallsyms]  [k] __sbitmap_get_word
     0.23%     0.16%  io_uring      [kernel.kallsyms]  [k] mutex_lock
     0.22%     0.20%  io_uring      [kernel.kallsyms]  [k] __next_zones_zonelist
     0.21%     0.15%  io_uring      [kernel.kallsyms]  [k] __x86_indirect_thunk_rax
     0.21%     0.05%  io_uring      [kernel.kallsyms]  [k] __rq_qos_throttle
     0.20%     0.09%  io_uring      [kernel.kallsyms]  [k] kiocb_done
     0.19%     0.16%  io_uring      [kernel.kallsyms]  [k] bio_crypt_rq_ctx_compatible
     0.19%     0.14%  io_uring      [kernel.kallsyms]  [k] slab_pre_alloc_hook.constprop.0
     0.19%     0.18%  io_uring      [kernel.kallsyms]  [k] slab_free_freelist_hook
     0.19%     0.16%  io_uring      [kernel.kallsyms]  [k] error_entry
     0.19%     0.02%  io_uring      [kernel.kallsyms]  [k] lock_page_lruvec_irqsave
     0.19%     0.16%  io_uring      [kernel.kallsyms]  [k] release_pages
     0.18%     0.02%  io_uring      [kernel.kallsyms]  [k] blk_mq_put_tag
     0.18%     0.15%  io_uring      [kernel.kallsyms]  [k] get_mem_cgroup_from_mm
     0.17%     0.16%  io_uring      [kernel.kallsyms]  [k] sbitmap_queue_clear
     0.16%     0.04%  io_uring      [kernel.kallsyms]  [k] blk_account_io_start
     0.16%     0.14%  io_uring      [kernel.kallsyms]  [k] io_put_req
     0.16%     0.12%  io_uring      [kernel.kallsyms]  [k] blk_account_io_done
     0.16%     0.15%  io_uring      [kernel.kallsyms]  [k] update_io_ticks
     0.16%     0.14%  io_uring      [kernel.kallsyms]  [k] blk_stat_add
     0.15%     0.12%  io_uring      [kernel.kallsyms]  [k] aa_file_perm
     0.15%     0.11%  io_uring      [kernel.kallsyms]  [k] add_interrupt_randomness
     0.15%     0.14%  io_uring      [kernel.kallsyms]  [k] page_mapping
     0.14%     0.00%  io_uring      [kernel.kallsyms]  [k] sysvec_reschedule_ipi
     0.13%     0.11%  io_uring      [kernel.kallsyms]  [k] rcu_all_qs
     0.13%     0.01%  io_uring      [kernel.kallsyms]  [k] mempool_kmalloc
     0.13%     0.12%  io_uring      [kernel.kallsyms]  [k] wbt_issue
     0.13%     0.05%  io_uring      [kernel.kallsyms]  [k] __rq_qos_track
     0.13%     0.12%  io_uring      [kernel.kallsyms]  [k] wbt_done
     0.12%     0.11%  io_uring      [kernel.kallsyms]  [k] blk_add_timer
     0.12%     0.12%  io_uring      [kernel.kallsyms]  [k] __io_cqring_fill_event
     0.12%     0.10%  io_uring      [kernel.kallsyms]  [k] page_counter_try_charge
     0.12%     0.10%  io_uring      [kernel.kallsyms]  [k] nvme_submit_cmd
     0.11%     0.09%  io_uring      [kernel.kallsyms]  [k] memcg_slab_post_alloc_hook
     0.11%     0.05%  io_uring      [kernel.kallsyms]  [k] mem_cgroup_charge_statistics.constprop.0
     0.11%     0.09%  io_uring      [kernel.kallsyms]  [k] blk_queue_enter
     0.11%     0.06%  io_uring      [kernel.kallsyms]  [k] kernel_init_free_pages
     0.11%     0.10%  io_uring      [kernel.kallsyms]  [k] __blk_rq_map_sg
     0.11%     0.10%  io_uring      [kernel.kallsyms]  [k] nvme_setup_rw
     0.11%     0.09%  io_uring      [kernel.kallsyms]  [k] __bio_try_merge_page
     0.11%     0.10%  io_uring      [kernel.kallsyms]  [k] bio_integrity_prep
     0.10%     0.08%  io_uring      [kernel.kallsyms]  [k] __xas_next
     0.10%     0.09%  io_uring      [kernel.kallsyms]  [k] __bio_add_page
     0.10%     0.08%  io_uring      [kernel.kallsyms]  [k] __io_complete_rw.constprop.0
     0.10%     0.09%  io_uring      [kernel.kallsyms]  [k] blk_mq_complete_request_remote
     0.10%     0.10%  io_uring      [kernel.kallsyms]  [k] __count_memcg_events.part.0
     0.09%     0.07%  io_uring      [kernel.kallsyms]  [k] kthread_blkcg
     0.09%     0.08%  io_uring      [kernel.kallsyms]  [k] blk_queue_bounce
     0.09%     0.00%  io_uring      [unknown]          [k] 0xbaff630d0ccd3500
     0.09%     0.09%  io_uring      [kernel.kallsyms]  [k] blk_mq_tag_to_rq
     0.09%     0.08%  io_uring      [kernel.kallsyms]  [k] dma_map_page_attrs
     0.09%     0.08%  io_uring      [kernel.kallsyms]  [k] wbt_data_dir
     0.09%     0.03%  io_uring      [kernel.kallsyms]  [k] __rq_qos_done
     0.09%     0.08%  io_uring      [kernel.kallsyms]  [k] blk_cgroup_bio_start
     0.08%     0.06%  io_uring      [kernel.kallsyms]  [k] sched_clock
     0.08%     0.06%  io_uring      [kernel.kallsyms]  [k] __entry_text_start
     0.08%     0.08%  io_uring      [kernel.kallsyms]  [k] xas_create
     0.08%     0.04%  io_uring      [kernel.kallsyms]  [k] __rq_qos_issue
     0.08%     0.07%  io_uring      [kernel.kallsyms]  [k] blk_mq_get_driver_tag
     0.08%     0.06%  io_uring      [kernel.kallsyms]  [k] policy_nodemask
     0.07%     0.01%  io_uring      [kernel.kallsyms]  [k] nvme_unmap_data.part.0
     0.07%     0.05%  io_uring      [kernel.kallsyms]  [k] __inc_numa_state
     0.07%     0.06%  io_uring      [kernel.kallsyms]  [k] bio_uninit
     0.07%     0.05%  io_uring      [kernel.kallsyms]  [k] blk_add_rq_to_plug
     0.06%     0.06%  io_uring      [kernel.kallsyms]  [k] mem_cgroup_update_lru_size
     0.06%     0.00%  io_uring      libc-2.32.so       [.] __nrand48_r
     0.06%     0.06%  io_uring      [kernel.kallsyms]  [k] __mod_zone_page_state
     0.06%     0.06%  io_uring      [kernel.kallsyms]  [k] bdev_read_page
     0.06%     0.05%  io_uring      [kernel.kallsyms]  [k] _find_next_bit.constprop.0
     0.06%     0.06%  io_uring      [kernel.kallsyms]  [k] syscall_return_via_sysret
     0.06%     0.06%  io_uring      [kernel.kallsyms]  [k] guard_bio_eod
     0.06%     0.01%  io_uring      [kernel.kallsyms]  [k] __slab_alloc
     0.06%     0.05%  io_uring      [kernel.kallsyms]  [k] dma_unmap_page_attrs
     0.05%     0.03%  io_uring      [kernel.kallsyms]  [k] I_BDEV
     0.05%     0.04%  io_uring      [kernel.kallsyms]  [k] bio_advance
     0.05%     0.00%  io_uring      libc-2.32.so       [.] __drand48_iterate
     0.04%     0.04%  io_uring      [kernel.kallsyms]  [k] kmalloc_slab
     0.04%     0.01%  io_uring      [kernel.kallsyms]  [k] wake_up_process
     0.04%     0.04%  io_uring      [kernel.kallsyms]  [k] ___slab_alloc
     0.04%     0.03%  io_uring      [kernel.kallsyms]  [k] dput
     0.04%     0.04%  io_uring      [kernel.kallsyms]  [k] io_cqring_ev_posted
     0.04%     0.03%  io_uring      [kernel.kallsyms]  [k] irq_exit_rcu
     0.04%     0.03%  io_uring      [kernel.kallsyms]  [k] xas_nomem
     0.04%     0.04%  io_uring      [kernel.kallsyms]  [k] note_interrupt
     0.04%     0.02%  io_uring      [kernel.kallsyms]  [k] memcg_check_events
     0.04%     0.04%  io_uring      [kernel.kallsyms]  [k] psi_flags_change
     0.04%     0.03%  io_uring      [kernel.kallsyms]  [k] try_to_wake_up
     0.04%     0.03%  io_uring      [kernel.kallsyms]  [k] should_failslab
     0.04%     0.03%  io_uring      [kernel.kallsyms]  [k] policy_node
     0.03%     0.03%  io_uring      [kernel.kallsyms]  [k] check_stack_object
     0.03%     0.01%  io_uring      [kernel.kallsyms]  [k] mempool_kfree
     0.03%     0.02%  io_uring      [kernel.kallsyms]  [k] hctx_unlock
     0.03%     0.02%  io_uring      [kernel.kallsyms]  [k] page_cache_next_miss
     0.03%     0.00%  io_uring      libc-2.32.so       [.] lrand48
     0.03%     0.02%  io_uring      [kernel.kallsyms]  [k] irq_enter_rcu
     0.03%     0.02%  io_uring      [kernel.kallsyms]  [k] find_next_zero_bit
     0.03%     0.02%  io_uring      [kernel.kallsyms]  [k] __rq_qos_done_bio
     0.03%     0.01%  io_uring      [kernel.kallsyms]  [k] memset
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] __fget_light
     0.02%     0.01%  io_uring      [kernel.kallsyms]  [k] dma_pool_alloc
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] asm_sysvec_apic_timer_interrupt
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] sysvec_apic_timer_interrupt
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] iov_iter_bvec
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] cpuset_nodemask_valid_mems_allowed
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] blk_start_plug
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] task_work_add
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] irqentry_enter
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] __fdget
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] restore_regs_and_return_to_kernel
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] _raw_spin_trylock
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] __fget_files
     0.02%     0.01%  io_uring      [kernel.kallsyms]  [k] nvme_error_status
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] __mix_pool_bytes
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] should_fail_alloc_page
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] sync_regs
     0.02%     0.01%  io_uring      [kernel.kallsyms]  [k] io_commit_cqring
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] _mix_pool_bytes
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] hrtimer_interrupt
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] __sysvec_apic_timer_interrupt
     0.02%     0.01%  io_uring      [kernel.kallsyms]  [k] bvec_free
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] __irqentry_text_end
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] syscall_enter_from_user_mode
     0.02%     0.01%  io_uring      [kernel.kallsyms]  [k] iov_iter_revert
     0.02%     0.01%  io_uring      [kernel.kallsyms]  [k] blk_status_to_errno
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] idle_cpu
     0.02%     0.01%  io_uring      [kernel.kallsyms]  [k] blk_queue_exit
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] dma_map_sg_attrs
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] dma_unmap_sg_attrs
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] __hrtimer_run_queues
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] dma_direct_unmap_sg
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] tick_sched_timer
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] bvec_alloc
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] arch_do_signal_or_restart
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] should_fail_bio
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] __mem_cgroup_threshold
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] dma_direct_map_sg
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] propagate_protected_usage
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] __sbq_wake_up
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] dma_pool_free
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] entry_SYSCALL_64_safe_stack
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] nvme_cleanup_cmd
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] free_unref_page_list
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] blk_mq_sched_restart
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] update_process_times
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] error_return
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] bvec_split_segs
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] tick_sched_handle
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] sg_init_table
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] scheduler_tick
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] put_cpu_partial
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] refill_stock
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] __softirqentry_text_start
     0.01%     0.00%  io_uring      io_uring           [.] 0x0000564cf7f80320
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] native_iret
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] mem_cgroup_uncharge_list
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] inode_congested
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] sg_next
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] fpregs_assert_state_consistent
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] fput
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] task_tick_fair
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] tick_do_update_jiffies64
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] wake_up_state
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] allocate_slab
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_run_task_work_sig
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_wall_time
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] timekeeping_advance
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] run_rebalance_domains
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __schedule
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] perf_event_task_tick
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] tick_program_event
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] credit_entropy_bits.constprop.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __alloc_pages_slowpath.constprop.0
     0.00%     0.00%  io_uring      io_uring           [.] 0x0000564cf7f802d0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_uring_add_task_file
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rebalance_domains
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_curr
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] native_write_msr
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] schedule
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] timekeeping_update
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rcu_core
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] schedule_timeout
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] clockevents_program_event
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] nvme_free_prps
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rcu_core_si
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_load_avg
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] load_balance
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ttwu_do_activate
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] run_timer_softirq
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] enqueue_task
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rcu_report_qs_rnp
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __queue_work
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_sd_lb_stats.constprop.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] find_busiest_group
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_blocked_averages
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] pvclock_gtod_notify
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __run_timers.part.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] call_timer_fn
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ktime_get_update_offsets_now
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] account_system_time
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] vma_migratable
     0.00%     0.00%  io_uring      libc-2.32.so       [.] __GI___libc_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] get_random_u32
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __wake_up_common_lock
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __wake_up
     0.00%     0.00%  iou-mgr-3119  [unknown]          [k] 0000000000000000
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] ret_from_fork
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] io_wq_manager
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ksys_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] vfs_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] intel_tfa_pmu_enable_all
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] x86_pmu_enable
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_cfs_group
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] hrtimer_active
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] lapic_next_deadline
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] delayed_work_timer_fn
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] arch_scale_freq_tick
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rcu_sched_clock_irq
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] account_process_tick
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] wakeup_kswapd
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] _warn_unseeded_randomness
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] pick_next_task_fair
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] new_sync_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] tty_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] file_tty_write.constprop.0
     0.00%     0.00%  iou-mgr-3116  [unknown]          [k] 0000000000000000
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] io_wq_manager
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] ret_from_fork
     0.00%     0.00%  io_uring      libc-2.32.so       [.] clock_nanosleep@@GLIBC_2.17
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] schedule_timeout
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] account_system_index_time
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __update_load_avg_se
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] put_prev_entity
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] trigger_load_balance
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rcu_gp_kthread_wake
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] swake_up_one
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_clock_nanosleep
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] schedule
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] __schedule
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] common_nsleep
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] io_wqe_worker
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] ret_from_fork
     0.00%     0.00%  iou-wrk-3119  [unknown]          [k] 0000000000000000
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] schedule
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] schedule_timeout
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] hrtimer_nanosleep
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] reweight_entity
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] enqueue_hrtimer
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __cgroup_account_cputime_field
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] enqueue_entity
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] enqueue_task_fair
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] kick_process
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] autoremove_wake_function
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] default_wake_function
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] wake_all_kswapds
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __zone_watermark_ok
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] n_tty_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] insert_work
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] finish_task_switch.isra.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] do_nanosleep
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] __schedule
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] dequeue_task
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_rt_rq_load_avg
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] pty_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __calc_delta
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] dequeue_task_fair
     0.00%     0.00%  io_uring      [unknown]          [k] 0000000000000000
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] pick_next_task_fair
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_min_vruntime
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_vsyscall
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rb_next
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rb_insert_color
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] x86_pmu_disable
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __update_load_avg_cfs_rq
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __radix_tree_lookup
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] idr_find
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] radix_tree_lookup
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_queue_async_work
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_rw_reissue
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_wq_enqueue
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_wqe_activate_free_worker.isra.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_wqe_enqueue
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_wqe_wake_worker
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] task_numa_work
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __note_gp_changes
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] note_gp_changes
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] psi_task_switch
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] get_partial_node.part.0
     0.00%     0.00%  io_uring      io_uring           [.] 0x0000564cf7f80324
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] newidle_balance
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] schedule_timeout
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] __schedule
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] schedule
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_syscall_64
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] entry_SYSCALL_64_after_hwframe
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_rq_clock
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] io_wq_check_workers
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] io_worker_handle_work
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] queue_work_on
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] tty_flip_buffer_push
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] intel_pmu_disable_all
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] newidle_balance
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] pick_next_task_fair
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rb_erase
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] timerqueue_add
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] acct_account_cputime
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] group_balance_cpu
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_fast_timekeeper
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_dl_rq_load_avg
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] blkcg_maybe_throttle_current
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] cpuacct_charge
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] put_prev_task_fair
     0.00%     0.00%  io_uring      io_uring           [.] 0x0000564cf7f802d4
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] cgroup_rstat_updated
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] calc_global_load
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] send_call_function_single_ipi
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] generic_exec_single
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] kick_ilb
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] smp_call_function_single_async
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] native_read_msr
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] run_posix_cpu_timers
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] cpuacct_account_field
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] irq_work_tick
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] mod_node_page_state
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] zone_watermark_ok_safe
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rcu_qs
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ttwu_queue_wakelist
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] dequeue_entity
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] update_sd_lb_stats.constprop.0
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] find_busiest_group
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] load_balance
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] newidle_balance
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] dequeue_task
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] _nohz_idle_balance
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __handle_mm_fault
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] handle_mm_fault
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] io_wq_submit_work
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] dequeue_task
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] update_sd_lb_stats.constprop.0
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] find_busiest_group
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] io_wq_check_workers
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] load_balance
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] dequeue_entity
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] dequeue_task_fair
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] switch_fpu_return
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] prepare_to_wait_exclusive
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] io_issue_sqe
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] io_read
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] pick_next_task_fair
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] vm_mmap_pgoff
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] blkdev_read_iter
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] generic_file_read_iter
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] io_iter_do_read
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] perf_event_alloc
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] copy_process
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] inherit_event.constprop.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] inherit_task_group.isra.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] perf_event_init_task
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] asm_exc_page_fault
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] do_user_addr_fault
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] exc_page_fault
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __x64_sys_execve
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] bprm_execve
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_execveat_common
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] load_elf_binary
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] dequeue_task
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] copy_process
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] create_io_thread
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] create_io_worker.isra.0
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] inherit_event.constprop.0
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] inherit_task_group.isra.0
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] perf_event_init_task
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] update_curr
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] update_rq_clock
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] create_io_thread
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_uring_alloc_task_context
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_wq_create
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_wq_fork_manager
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] tty_insert_flip_string_fixed_flag
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] filemap_read
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c383030
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] newidle_balance
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c343234
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __io_uring_register
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_io_uring_register
     0.00%     0.00%  io_uring      [unknown]          [k] 0x000000000000cc81
     0.00%     0.00%  io_uring      [unknown]          [k] 0x00007f5c1654bc00
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_mmap
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] do_mmap
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ksys_mmap_pgoff
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] alloc_pages_vma
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] psi_group_change
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] psi_task_change
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_io_uring_setup
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_uring_setup
     0.00%     0.00%  io_uring      [unknown]          [k] 0x6966206465646441
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] _nohz_idle_balance
     0.00%     0.00%  numactl       [unknown]          [k] 0x00007f3fff40bb7b
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] finish_wait
     0.00%     0.00%  numactl       [unknown]          [.] 0000000000000000
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] _find_next_bit.constprop.0
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] __update_idle_core
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] _raw_spin_lock_irq
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] __perf_event_task_sched_in
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] _find_next_bit.constprop.0
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] update_curr
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] _nohz_idle_balance
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] dequeue_entity
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] dequeue_task_fair
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] finish_task_switch.isra.0
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] pick_next_task_idle
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] sched_clock_cpu
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] vm_mmap_pgoff
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] copy_page_to_iter
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] perf_iterate_ctx
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] perf_iterate_sb
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] update_sd_lb_stats.constprop.0
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] find_busiest_group
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] load_balance
     0.00%     0.00%  io_uring      libc-2.32.so       [.] __clone
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] __io_free_req
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] io_free_work
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] native_irq_return_iret
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] copy_user_generic_unrolled
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __tty_buffer_request_room
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ktime_get_real_seconds
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] do_output_char
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c303237
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c323933
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c333030
     0.00%     0.00%  io_uring      [unknown]          [.] 0x534f49202c343636
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c353937
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c373834
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c383237
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] tty_paranoia_check
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c303032
     0.00%     0.00%  io_uring      [unknown]          [.] 0x534f49202c303637
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c363534
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c363731
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] get_unmapped_area
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __do_sys_clone
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_clone
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] kernel_clone
     0.00%     0.00%  io_uring      libc-2.32.so       [.] __mmap
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __get_user_pages
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] kvfree
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] pin_user_pages
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] finish_task_switch.isra.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] copy_user_generic_unrolled
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] get_timespec64
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c353333
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c363136
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] kmem_cache_alloc_trace
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __alloc_percpu_gfp
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __alloc_file
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __memcg_kmem_charge
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_openat
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] alloc_empty_file
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] do_filp_open
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] do_sys_openat2
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] obj_cgroup_charge
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] path_openat
     0.00%     0.00%  io_uring      libc-2.32.so       [.] __GI___libc_open
     0.00%     0.00%  io_uring      [unknown]          [k] 0x4c003270316e3065
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] kthread_blkcg
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __ext4_get_inode_loc
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __ext4_iget
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __x64_sys_openat
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] bio_associate_blkg
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_filp_open
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_sys_openat2
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] ext4_lookup
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] ext4_read_bh_lock
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] ext4_read_bh_nowait
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] ext4_sb_breadahead_unmovable
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] path_openat
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] submit_bh
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] submit_bh_wbc
     0.00%     0.00%  numactl       ld-2.32.so         [.] __GI___open64_nocancel
     0.00%     0.00%  numactl       ld-2.32.so         [.] _dl_map_object
     0.00%     0.00%  io_uring      libc-2.32.so       [.] _int_malloc
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] security_task_getsecid
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_execve
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] bprm_execve
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] do_execveat_common
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] elf_map
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ima_file_mmap
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] load_elf_binary
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] security_mmap_file
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] vm_mmap
     0.00%     0.00%  io_uring      libc-2.32.so       [.] __GI___execve
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] add_mm_counter_fast
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] do_wp_page
     0.00%     0.00%  io_uring      ld-2.32.so         [.] _dl_relocate_object
     0.00%     0.00%  io_uring      ld-2.32.so         [.] _dl_sysdep_start
     0.00%     0.00%  io_uring      ld-2.32.so         [.] dl_main
     0.00%     0.00%  io_uring      libc-2.32.so       [.] sysmalloc
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] kthread_is_per_cpu
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c303030
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c323332
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c323931
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c343833
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] mmap_region
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] perf_event_mmap
     0.00%     0.00%  io_uring      ld-2.32.so         [.] mmap64
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] aa_get_task_label
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __x64_sys_mmap
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] apparmor_task_getsecid
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] ima_file_mmap
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] ksys_mmap_pgoff
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] security_mmap_file
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] security_task_getsecid
     0.00%     0.00%  numactl       ld-2.32.so         [.] mmap64
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] flush_tlb_batched_pending
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] begin_new_exec
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] exit_mmap
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] mmput
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] unmap_page_range
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] unmap_single_vma
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] unmap_vmas
     0.00%     0.00%  numactl       libc-2.32.so       [.] __GI___execve
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __count_memcg_events.part.0
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __count_memcg_events
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] asm_exc_page_fault
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_user_addr_fault
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] exc_page_fault
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] handle_mm_fault
     0.00%     0.00%  numactl       libc-2.32.so       [.] sysmalloc
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __fsnotify_parent
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] ____fput
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __fput
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] exit_to_user_mode_prepare
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] syscall_exit_to_user_mode
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] task_work_run
     0.00%     0.00%  numactl       libc-2.32.so       [.] __close_nocancel
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] cpumask_next_and
     0.00%     0.00%  numactl       ld-2.32.so         [.] init_tls
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __entry_text_start
     0.00%     0.00%  numactl       ld-2.32.so         [.] _dl_allocate_tls_storage
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] put_dec_trunc8
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __x64_sys_read
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] dev_attr_show
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] kernfs_fop_read_iter
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] kernfs_seq_show
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] ksys_read
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] new_sync_read
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] node_read_meminfo
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] number
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] seq_read_iter
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] sysfs_emit_at
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] sysfs_kf_seq_show
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] vfs_read
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] vscnprintf
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] vsnprintf
     0.00%     0.00%  numactl       libc-2.32.so       [.] read
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] tty_hung_up_p
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __perf_event_task_sched_out
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] remove_wait_queue
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c323139
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c323732
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c343439
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] deactivate_slab
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] ___slab_alloc
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] __slab_alloc
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] allocate_fake_cpuc
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] intel_cpuc_prepare
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] kmem_cache_alloc_node_trace
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] perf_event_alloc
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] perf_try_init_event
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] x86_pmu_event_init
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] _raw_spin_lock
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] del_timer_sync
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] native_write_msr
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] put_prev_task_fair
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] update_load_avg
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] x86_pmu_disable
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] cpumask_next
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] intel_tfa_pmu_enable_all
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] rcu_read_unlock_strict
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] update_blocked_averages
     0.00%     0.00%  iou-mgr-3116  [kernel.kallsyms]  [k] x86_pmu_enable
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] native_sched_clock
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] _nohz_idle_balance
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] update_rq_clock
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] __mutex_init
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] native_sched_clock
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] check_stack_object
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] cpuacct_charge
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] hrtick_update
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] io_wq_worker_running
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] record_times
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] update_curr
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] cpuacct_charge
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] io_wq_worker_sleeping
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] psi_group_change
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] update_min_vruntime
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] x86_pmu_disable
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] del_timer_sync
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] psi_task_change
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] sched_clock_cpu
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] kmem_cache_alloc_node_trace
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] allocate_fake_cpuc
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] intel_cpuc_prepare
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] perf_try_init_event
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] x86_pmu_event_init
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] pgd_free
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] update_blocked_averages
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] __mmdrop
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] __update_load_avg_cfs_rq
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] update_load_avg
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] update_cfs_group
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] dequeue_task_fair
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] perf_iterate_ctx
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_mmap
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] elf_map
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] mmap_region
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] perf_event_mmap
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] perf_iterate_sb
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] vm_mmap
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] __update_idle_core
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] block_read_full_page
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] kfree
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] refill_stock
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] update_blocked_averages
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] drain_obj_stock
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] filemap_get_pages
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] filemap_read_page
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] io_dismantle_req
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] kmem_cache_free
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] obj_cgroup_uncharge
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] pick_next_task_idle
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] rcu_read_unlock_strict
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] refill_obj_stock
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ret_from_fork
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] tsk_fork_get_node
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __set_task_comm
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_wq_manager
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] perf_event_comm
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __perf_event_task_sched_in
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] perf_iterate_ctx
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] __set_task_comm
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] io_wqe_worker
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] perf_event_comm
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] perf_iterate_sb
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] __x64_sys_execve
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] begin_new_exec
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] bprm_execve
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] do_execveat_common
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] do_syscall_64
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] entry_SYSCALL_64_after_hwframe
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] load_elf_binary
     0.00%     0.00%  perf          [unknown]          [k] 0x00007f3fff40bb7b
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] apparmor_file_permission
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] rw_verify_area
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] security_file_permission
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] _raw_spin_lock_irq
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] native_write_msr
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] __perf_event_task_sched_in
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] intel_tfa_pmu_enable_all
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] x86_pmu_enable
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] perf_iterate_sb
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] __set_task_comm
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] perf_event_comm
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] schedule_tail
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] schedule_tail
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] native_flush_tlb_one_user
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] native_flush_tlb_one_user
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] end_repeat_nmi
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] native_write_msr
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] __perf_event_task_sched_in
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] finish_task_switch.isra.0
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] intel_tfa_pmu_enable_all
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] x86_pmu_enable
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] native_set_fixmap
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] native_write_msr
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] ctx_resched
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] intel_tfa_pmu_enable_all
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] perf_event_exec
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] x86_pmu_enable
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] native_flush_tlb_one_user
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] native_flush_tlb_one_user
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] memcpy_fromio
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] nmi_cpu_backtrace
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] acpi_os_read_memory
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] acpi_os_read_memory
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] nmi_cpu_backtrace
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] nmi_cpu_backtrace
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] native_sched_clock
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] native_apic_msr_write
     0.00%     0.00%  iou-mgr-3119  [kernel.kallsyms]  [k] native_apic_msr_write
     0.00%     0.00%  iou-wrk-3119  [kernel.kallsyms]  [k] native_apic_msr_write
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] native_apic_msr_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __intel_pmu_enable_all.constprop.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] intel_pmu_handle_irq


#
# (Tip: To count events in every 1000 msec: perf stat -I 1000)
#
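
(For reference: a profile with the Children/Self columns shown in these
attachments comes from call-graph sampling. The exact invocation used by the
reporter is not given in the thread; a minimal sketch, assuming perf was
attached to the running io_uring benchmark by pid, with the pid and duration
below purely illustrative:

    # record call graphs for the target process for 30 seconds
    perf record -g -p <pid> -- sleep 30
    # print the inclusive (Children) / exclusive (Self) breakdown
    perf report --children --stdio

The "Children" column aggregates time spent in a function plus all of its
callees, while "Self" counts only the function's own samples.)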

[-- Attachment #3: kswapd-1.svg --]
[-- Type: image/svg+xml, Size: 61324 bytes --]

[-- Attachment #4: io_uring-1.txt --]
[-- Type: text/plain, Size: 63989 bytes --]

# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 841K of event 'cycles'
# Event count (approx.): 753744677630
#
# Children      Self  Command       Shared Object      Symbol                                      
# ........  ........  ............  .................  ............................................
#
    99.91%     0.00%  io_uring      [unknown]          [k] 0x0000000000000005
    99.91%     0.00%  io_uring      [unknown]          [k] 0x000055c42c2c2450
    99.45%     0.02%  io_uring      libc-2.32.so       [.] syscall
    99.24%     0.01%  io_uring      [kernel.kallsyms]  [k] entry_SYSCALL_64_after_hwframe
    99.22%     0.00%  io_uring      [kernel.kallsyms]  [k] do_syscall_64
    94.48%     0.22%  io_uring      [kernel.kallsyms]  [k] __io_queue_sqe
    94.09%     0.31%  io_uring      [kernel.kallsyms]  [k] io_issue_sqe
    93.33%     0.62%  io_uring      [kernel.kallsyms]  [k] io_read
    88.57%     0.93%  io_uring      [kernel.kallsyms]  [k] blkdev_read_iter
    87.80%     0.15%  io_uring      [kernel.kallsyms]  [k] io_iter_do_read
    87.57%     0.16%  io_uring      [kernel.kallsyms]  [k] generic_file_read_iter
    87.35%     1.08%  io_uring      [kernel.kallsyms]  [k] filemap_read
    82.44%     0.01%  io_uring      [kernel.kallsyms]  [k] __x64_sys_io_uring_enter
    82.34%     0.01%  io_uring      [kernel.kallsyms]  [k] __do_sys_io_uring_enter
    82.09%     0.47%  io_uring      [kernel.kallsyms]  [k] io_submit_sqes
    79.50%     0.08%  io_uring      [kernel.kallsyms]  [k] io_queue_sqe
    71.47%     1.08%  io_uring      [kernel.kallsyms]  [k] filemap_get_pages
    53.08%     0.35%  io_uring      [kernel.kallsyms]  [k] ondemand_readahead
    51.74%     1.49%  io_uring      [kernel.kallsyms]  [k] page_cache_ra_unbounded
    49.92%     0.13%  io_uring      [kernel.kallsyms]  [k] page_cache_sync_ra
    21.26%     0.63%  io_uring      [kernel.kallsyms]  [k] add_to_page_cache_lru
    17.08%     0.02%  io_uring      [kernel.kallsyms]  [k] task_work_run
    16.98%     0.03%  io_uring      [kernel.kallsyms]  [k] exit_to_user_mode_prepare
    16.94%    10.52%  io_uring      [kernel.kallsyms]  [k] __add_to_page_cache_locked
    16.94%     0.06%  io_uring      [kernel.kallsyms]  [k] tctx_task_work
    16.79%     0.19%  io_uring      [kernel.kallsyms]  [k] syscall_exit_to_user_mode
    16.41%     2.61%  io_uring      [kernel.kallsyms]  [k] filemap_get_read_batch
    16.08%    15.72%  io_uring      [kernel.kallsyms]  [k] xas_load
    15.99%     0.15%  io_uring      [kernel.kallsyms]  [k] read_pages
    15.87%     0.13%  io_uring      [kernel.kallsyms]  [k] blkdev_readahead
    15.72%     0.56%  io_uring      [kernel.kallsyms]  [k] mpage_readahead
    15.58%     0.09%  io_uring      [kernel.kallsyms]  [k] io_req_task_submit
    15.37%     0.07%  io_uring      [kernel.kallsyms]  [k] __io_req_task_submit
    12.14%     0.15%  io_uring      [kernel.kallsyms]  [k] copy_page_to_iter
    12.03%     0.03%  io_uring      [kernel.kallsyms]  [k] __page_cache_alloc
    11.98%     0.14%  io_uring      [kernel.kallsyms]  [k] alloc_pages_current
    11.66%     0.30%  io_uring      [kernel.kallsyms]  [k] __alloc_pages_nodemask
    11.28%    11.05%  io_uring      [kernel.kallsyms]  [k] _copy_to_iter
    11.06%     3.16%  io_uring      [kernel.kallsyms]  [k] get_page_from_freelist
    10.53%     0.10%  io_uring      [kernel.kallsyms]  [k] submit_bio
    10.27%     0.21%  io_uring      [kernel.kallsyms]  [k] submit_bio_noacct
     8.30%     0.51%  io_uring      [kernel.kallsyms]  [k] blk_mq_submit_bio
     7.73%     7.62%  io_uring      [kernel.kallsyms]  [k] clear_page_erms
     3.68%     1.02%  io_uring      [kernel.kallsyms]  [k] do_mpage_readpage
     3.33%     0.01%  io_uring      [kernel.kallsyms]  [k] page_cache_async_ra
     3.25%     0.01%  io_uring      [kernel.kallsyms]  [k] blk_flush_plug_list
     3.24%     0.06%  io_uring      [kernel.kallsyms]  [k] blk_mq_flush_plug_list
     3.18%     0.01%  io_uring      [kernel.kallsyms]  [k] blk_mq_sched_insert_requests
     3.15%     0.13%  io_uring      [kernel.kallsyms]  [k] blk_mq_try_issue_list_directly
     2.84%     0.15%  io_uring      [kernel.kallsyms]  [k] __blk_mq_try_issue_directly
     2.69%     0.58%  io_uring      [kernel.kallsyms]  [k] nvme_queue_rq
     2.54%     1.05%  io_uring      [kernel.kallsyms]  [k] blk_attempt_plug_merge
     2.53%     0.44%  io_uring      [kernel.kallsyms]  [k] mark_page_accessed
     2.36%     0.12%  io_uring      [kernel.kallsyms]  [k] rw_verify_area
     2.16%     0.09%  io_uring      [kernel.kallsyms]  [k] mpage_alloc
     2.10%     0.27%  io_uring      [kernel.kallsyms]  [k] lru_cache_add
     1.81%     0.27%  io_uring      [kernel.kallsyms]  [k] security_file_permission
     1.75%     0.87%  io_uring      [kernel.kallsyms]  [k] __pagevec_lru_add
     1.70%     0.63%  io_uring      [kernel.kallsyms]  [k] _raw_spin_lock_irq
     1.53%     0.40%  io_uring      [kernel.kallsyms]  [k] xa_get_order
     1.52%     0.15%  io_uring      [kernel.kallsyms]  [k] __blk_mq_alloc_request
     1.50%     0.65%  io_uring      [kernel.kallsyms]  [k] workingset_refault
     1.50%     0.07%  io_uring      [kernel.kallsyms]  [k] activate_page
     1.48%     0.29%  io_uring      [kernel.kallsyms]  [k] io_submit_flush_completions
     1.46%     0.32%  io_uring      [kernel.kallsyms]  [k] bio_alloc_bioset
     1.42%     0.15%  io_uring      [kernel.kallsyms]  [k] xa_load
     1.40%     0.16%  io_uring      [kernel.kallsyms]  [k] pagevec_lru_move_fn
     1.39%     0.06%  io_uring      [kernel.kallsyms]  [k] blk_finish_plug
     1.35%     0.28%  io_uring      [kernel.kallsyms]  [k] submit_bio_checks
     1.26%     0.02%  io_uring      [kernel.kallsyms]  [k] asm_common_interrupt
     1.25%     0.01%  io_uring      [kernel.kallsyms]  [k] common_interrupt
     1.21%     1.21%  io_uring      [kernel.kallsyms]  [k] native_queued_spin_lock_slowpath
     1.18%     0.10%  io_uring      [kernel.kallsyms]  [k] mempool_alloc
     1.17%     0.00%  io_uring      [kernel.kallsyms]  [k] __common_interrupt
     1.14%     0.03%  io_uring      [kernel.kallsyms]  [k] handle_edge_irq
     1.08%     0.87%  io_uring      [kernel.kallsyms]  [k] apparmor_file_permission
     1.03%     0.19%  io_uring      [kernel.kallsyms]  [k] __mod_lruvec_state
     1.02%     0.65%  io_uring      [kernel.kallsyms]  [k] blk_rq_merge_ok
     1.01%     0.94%  io_uring      [kernel.kallsyms]  [k] xas_start
     0.95%     0.11%  io_uring      [kernel.kallsyms]  [k] mempool_alloc_slab
     0.93%     0.89%  io_uring      [kernel.kallsyms]  [k] read_tsc
     0.93%     0.24%  io_uring      [kernel.kallsyms]  [k] mem_cgroup_charge
     0.92%     0.10%  io_uring      [kernel.kallsyms]  [k] io_req_free_batch
     0.92%     0.13%  io_uring      [kernel.kallsyms]  [k] io_setup_async_rw
     0.91%     0.24%  io_uring      [kernel.kallsyms]  [k] xas_store
     0.89%     0.19%  io_uring      [kernel.kallsyms]  [k] page_cache_prev_miss
     0.88%     0.63%  io_uring      [kernel.kallsyms]  [k] __activate_page
     0.88%     0.02%  io_uring      [kernel.kallsyms]  [k] asm_sysvec_reschedule_ipi
     0.84%     0.46%  io_uring      [kernel.kallsyms]  [k] __fsnotify_parent
     0.82%     0.11%  io_uring      [kernel.kallsyms]  [k] io_dismantle_req
     0.80%     0.02%  io_uring      [kernel.kallsyms]  [k] handle_irq_event
     0.78%     0.77%  io_uring      [kernel.kallsyms]  [k] workingset_update_node
     0.77%     0.14%  io_uring      [kernel.kallsyms]  [k] PageHuge
     0.74%     0.03%  io_uring      [kernel.kallsyms]  [k] nvme_irq
     0.73%     0.01%  io_uring      [kernel.kallsyms]  [k] __handle_irq_event_percpu
     0.73%     0.10%  io_uring      [kernel.kallsyms]  [k] blk_mq_start_request
     0.72%     0.07%  io_uring      [kernel.kallsyms]  [k] nvme_process_cq
     0.71%     0.04%  io_uring      [kernel.kallsyms]  [k] __xas_prev
     0.71%     0.35%  io_uring      [kernel.kallsyms]  [k] __mod_memcg_lruvec_state
     0.70%     0.15%  io_uring      [kernel.kallsyms]  [k] __check_object_size
     0.69%     0.19%  io_uring      [kernel.kallsyms]  [k] blk_mq_get_tag
     0.69%     0.29%  io_uring      [kernel.kallsyms]  [k] kfree
     0.66%     0.26%  io_uring      [kernel.kallsyms]  [k] blk_mq_rq_ctx_init
     0.65%     0.35%  io_uring      [kernel.kallsyms]  [k] __kmalloc
     0.64%     0.43%  io_uring      [kernel.kallsyms]  [k] kmem_cache_alloc
     0.64%     0.21%  io_uring      [kernel.kallsyms]  [k] __mod_lruvec_page_state
     0.61%     0.01%  io_uring      [kernel.kallsyms]  [k] nvme_pci_complete_rq
     0.60%     0.39%  io_uring      [kernel.kallsyms]  [k] __lock_page_async
     0.58%     0.00%  io_uring      [kernel.kallsyms]  [k] nvme_complete_rq
     0.58%     0.08%  io_uring      [kernel.kallsyms]  [k] bio_associate_blkg
     0.57%     0.31%  io_uring      [kernel.kallsyms]  [k] blk_throtl_bio
     0.57%     0.01%  io_uring      [kernel.kallsyms]  [k] blk_mq_end_request
     0.57%     0.14%  io_uring      [kernel.kallsyms]  [k] workingset_activation
     0.55%     0.53%  io_uring      [kernel.kallsyms]  [k] __virt_addr_valid
     0.55%     0.52%  io_uring      [kernel.kallsyms]  [k] memset_erms
     0.52%     0.39%  io_uring      [kernel.kallsyms]  [k] __x86_retpoline_rax
     0.51%     0.04%  io_uring      [kernel.kallsyms]  [k] __blk_mq_get_tag
     0.49%     0.49%  io_uring      [kernel.kallsyms]  [k] workingset_age_nonresident
     0.48%     0.06%  io_uring      [kernel.kallsyms]  [k] __sbitmap_queue_get
     0.48%     0.09%  io_uring      [kernel.kallsyms]  [k] irqentry_exit_to_user_mode
     0.48%     0.00%  io_uring      [kernel.kallsyms]  [k] irqentry_exit
     0.48%     0.46%  io_uring      [kernel.kallsyms]  [k] bio_associate_blkg_from_css
     0.46%     0.02%  io_uring      [kernel.kallsyms]  [k] sysvec_reschedule_ipi
     0.43%     0.00%  io_uring      [kernel.kallsyms]  [k] __irqentry_text_start
     0.43%     0.41%  io_uring      [kernel.kallsyms]  [k] xas_find_conflict
     0.43%     0.41%  io_uring      [kernel.kallsyms]  [k] io_file_supports_async
     0.42%     0.26%  io_uring      [kernel.kallsyms]  [k] rcu_read_unlock_strict
     0.42%     0.40%  io_uring      [kernel.kallsyms]  [k] blk_attempt_bio_merge.part.0
     0.42%     0.01%  io_uring      io_uring           [.] submitter_fn
     0.41%     0.14%  io_uring      [kernel.kallsyms]  [k] sbitmap_get
     0.39%     0.37%  io_uring      [kernel.kallsyms]  [k] io_import_iovec
     0.37%     0.36%  io_uring      [kernel.kallsyms]  [k] nvme_setup_cmd
     0.37%     0.35%  io_uring      [kernel.kallsyms]  [k] fsnotify
     0.37%     0.36%  io_uring      [kernel.kallsyms]  [k] __mod_memcg_state.part.0
     0.37%     0.35%  io_uring      [kernel.kallsyms]  [k] __mod_node_page_state
     0.36%     0.02%  io_uring      [kernel.kallsyms]  [k] mem_cgroup_from_id
     0.36%     0.35%  io_uring      [kernel.kallsyms]  [k] _raw_spin_lock
     0.36%     0.35%  io_uring      [kernel.kallsyms]  [k] io_prep_rw
     0.36%     0.03%  io_uring      [kernel.kallsyms]  [k] idr_find
     0.36%     0.02%  io_uring      [kernel.kallsyms]  [k] blk_update_request
     0.34%     0.23%  io_uring      [kernel.kallsyms]  [k] __cond_resched
     0.34%     0.32%  io_uring      [kernel.kallsyms]  [k] ktime_get
     0.34%     0.31%  io_uring      [kernel.kallsyms]  [k] blk_integrity_merge_bio
     0.33%     0.30%  io_uring      [kernel.kallsyms]  [k] mem_cgroup_update_lru_size
     0.32%     0.01%  io_uring      [kernel.kallsyms]  [k] bio_endio
     0.32%     0.30%  io_uring      [kernel.kallsyms]  [k] percpu_counter_add_batch
     0.32%     0.31%  io_uring      [kernel.kallsyms]  [k] blkdev_get_block
     0.32%     0.02%  io_uring      [kernel.kallsyms]  [k] radix_tree_lookup
     0.31%     0.30%  io_uring      [kernel.kallsyms]  [k] wbt_wait
     0.31%     0.03%  io_uring      [kernel.kallsyms]  [k] mpage_end_io
     0.31%     0.29%  io_uring      [kernel.kallsyms]  [k] io_file_get
     0.30%     0.30%  io_uring      [kernel.kallsyms]  [k] __radix_tree_lookup
     0.30%     0.30%  io_uring      [kernel.kallsyms]  [k] __blk_queue_split
     0.30%     0.18%  io_uring      [kernel.kallsyms]  [k] try_charge
     0.30%     0.30%  io_uring      [kernel.kallsyms]  [k] __blk_mq_sched_bio_merge
     0.28%     0.26%  io_uring      [kernel.kallsyms]  [k] wbt_track
     0.27%     0.19%  io_uring      [kernel.kallsyms]  [k] __sbitmap_get_word
     0.26%     0.25%  io_uring      [kernel.kallsyms]  [k] __slab_free
     0.25%     0.20%  io_uring      [kernel.kallsyms]  [k] mutex_lock
     0.25%     0.24%  io_uring      [kernel.kallsyms]  [k] mutex_unlock
     0.24%     0.24%  io_uring      [kernel.kallsyms]  [k] native_irq_return_iret
     0.24%     0.22%  io_uring      [kernel.kallsyms]  [k] release_pages
     0.24%     0.10%  io_uring      [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
     0.23%     0.09%  io_uring      [kernel.kallsyms]  [k] bio_add_page
     0.23%     0.20%  io_uring      [kernel.kallsyms]  [k] bio_crypt_rq_ctx_compatible
     0.23%     0.22%  io_uring      [kernel.kallsyms]  [k] __next_zones_zonelist
     0.22%     0.17%  io_uring      [kernel.kallsyms]  [k] slab_pre_alloc_hook.constprop.0
     0.20%     0.20%  io_uring      [kernel.kallsyms]  [k] native_apic_msr_eoi_write
     0.20%     0.10%  io_uring      [kernel.kallsyms]  [k] kiocb_done
     0.19%     0.01%  io_uring      [kernel.kallsyms]  [k] mempool_kmalloc
     0.19%     0.17%  io_uring      [kernel.kallsyms]  [k] get_mem_cgroup_from_mm
     0.19%     0.03%  io_uring      [kernel.kallsyms]  [k] lock_page_lruvec_irqsave
     0.19%     0.12%  io_uring      [kernel.kallsyms]  [k] __x86_indirect_thunk_rax
     0.19%     0.08%  io_uring      [kernel.kallsyms]  [k] __rq_qos_throttle
     0.18%     0.03%  io_uring      [kernel.kallsyms]  [k] page_endio
     0.18%     0.15%  io_uring      [kernel.kallsyms]  [k] aa_file_perm
     0.17%     0.05%  io_uring      [kernel.kallsyms]  [k] blk_account_io_start
     0.16%     0.01%  io_uring      [kernel.kallsyms]  [k] unlock_page
     0.16%     0.15%  io_uring      [kernel.kallsyms]  [k] io_put_req
     0.16%     0.15%  io_uring      [kernel.kallsyms]  [k] wbt_issue
     0.16%     0.13%  io_uring      [kernel.kallsyms]  [k] rcu_all_qs
     0.16%     0.06%  io_uring      [kernel.kallsyms]  [k] __rq_qos_track
     0.15%     0.08%  io_uring      [kernel.kallsyms]  [k] kernel_init_free_pages
     0.15%     0.15%  io_uring      [kernel.kallsyms]  [k] __io_cqring_fill_event
     0.15%     0.02%  io_uring      [kernel.kallsyms]  [k] wake_up_page_bit
     0.15%     0.14%  io_uring      [kernel.kallsyms]  [k] __mod_zone_page_state
     0.15%     0.14%  io_uring      [kernel.kallsyms]  [k] page_mapping
     0.15%     0.14%  io_uring      [kernel.kallsyms]  [k] __blk_rq_map_sg
     0.14%     0.14%  io_uring      [kernel.kallsyms]  [k] slab_free_freelist_hook
     0.14%     0.14%  io_uring      [kernel.kallsyms]  [k] blk_cgroup_bio_start
     0.14%     0.13%  io_uring      [kernel.kallsyms]  [k] nvme_submit_cmd
     0.14%     0.02%  io_uring      [kernel.kallsyms]  [k] irq_chip_ack_parent
     0.14%     0.03%  io_uring      [kernel.kallsyms]  [k] psi_task_change
     0.13%     0.06%  io_uring      [kernel.kallsyms]  [k] mem_cgroup_charge_statistics.constprop.0
     0.13%     0.12%  io_uring      [kernel.kallsyms]  [k] blk_queue_enter
     0.13%     0.12%  io_uring      [kernel.kallsyms]  [k] nvme_setup_rw
     0.13%     0.13%  io_uring      [kernel.kallsyms]  [k] __count_memcg_events.part.0
     0.13%     0.12%  io_uring      [kernel.kallsyms]  [k] update_io_ticks
     0.12%     0.11%  io_uring      [kernel.kallsyms]  [k] memcg_slab_post_alloc_hook
     0.11%     0.11%  io_uring      [kernel.kallsyms]  [k] blk_mq_get_driver_tag
     0.11%     0.08%  io_uring      [kernel.kallsyms]  [k] psi_group_change
     0.11%     0.11%  io_uring      [kernel.kallsyms]  [k] blk_queue_bounce
     0.11%     0.10%  io_uring      [kernel.kallsyms]  [k] page_counter_try_charge
     0.10%     0.09%  io_uring      [kernel.kallsyms]  [k] kthread_blkcg
     0.10%     0.00%  io_uring      [kernel.kallsyms]  [k] __wake_up_locked_key_bookmark
     0.10%     0.10%  io_uring      [kernel.kallsyms]  [k] __bio_add_page
     0.10%     0.09%  io_uring      [kernel.kallsyms]  [k] __io_complete_rw.constprop.0
     0.10%     0.09%  io_uring      [kernel.kallsyms]  [k] bio_integrity_prep
     0.09%     0.08%  io_uring      [kernel.kallsyms]  [k] __xas_next
     0.09%     0.08%  io_uring      [kernel.kallsyms]  [k] __entry_text_start
     0.09%     0.09%  io_uring      [kernel.kallsyms]  [k] dma_map_page_attrs
     0.09%     0.04%  io_uring      [kernel.kallsyms]  [k] io_async_buf_func
     0.09%     0.09%  io_uring      [kernel.kallsyms]  [k] xas_create
     0.09%     0.02%  io_uring      [kernel.kallsyms]  [k] __slab_alloc
     0.09%     0.00%  io_uring      [kernel.kallsyms]  [k] asm_sysvec_apic_timer_interrupt
     0.09%     0.00%  io_uring      [kernel.kallsyms]  [k] bio_put
     0.09%     0.01%  io_uring      [kernel.kallsyms]  [k] __wake_up_common
     0.09%     0.00%  io_uring      [kernel.kallsyms]  [k] sysvec_apic_timer_interrupt
     0.09%     0.08%  io_uring      [kernel.kallsyms]  [k] policy_nodemask
     0.09%     0.01%  io_uring      [kernel.kallsyms]  [k] bio_free
     0.09%     0.08%  io_uring      [kernel.kallsyms]  [k] blk_add_timer
     0.08%     0.04%  io_uring      [kernel.kallsyms]  [k] __rq_qos_issue
     0.08%     0.00%  io_uring      [kernel.kallsyms]  [k] psi_memstall_enter
     0.08%     0.00%  io_uring      [unknown]          [k] 0x43a3da3afeb43b00
     0.08%     0.02%  io_uring      [kernel.kallsyms]  [k] blk_mq_free_request
     0.08%     0.07%  io_uring      [kernel.kallsyms]  [k] __bio_try_merge_page
     0.08%     0.07%  io_uring      [kernel.kallsyms]  [k] __inc_numa_state
     0.08%     0.00%  io_uring      [kernel.kallsyms]  [k] __sysvec_apic_timer_interrupt
     0.07%     0.00%  io_uring      [kernel.kallsyms]  [k] hrtimer_interrupt
     0.07%     0.03%  io_uring      [kernel.kallsyms]  [k] __lock_text_start
     0.07%     0.01%  io_uring      [kernel.kallsyms]  [k] mempool_free_slab
     0.07%     0.06%  io_uring      [kernel.kallsyms]  [k] ___slab_alloc
     0.07%     0.07%  io_uring      [kernel.kallsyms]  [k] syscall_return_via_sysret
     0.07%     0.00%  io_uring      [kernel.kallsyms]  [k] mempool_free
     0.07%     0.00%  io_uring      [kernel.kallsyms]  [k] psi_memstall_leave
     0.06%     0.00%  io_uring      [kernel.kallsyms]  [k] __hrtimer_run_queues
     0.06%     0.00%  io_uring      libc-2.32.so       [.] __nrand48_r
     0.06%     0.05%  io_uring      [kernel.kallsyms]  [k] error_entry
     0.06%     0.00%  io_uring      [kernel.kallsyms]  [k] tick_sched_timer
     0.06%     0.06%  io_uring      [kernel.kallsyms]  [k] blk_add_rq_to_plug
     0.06%     0.05%  io_uring      [kernel.kallsyms]  [k] kmem_cache_free
     0.06%     0.05%  io_uring      [kernel.kallsyms]  [k] _find_next_bit.constprop.0
     0.06%     0.00%  io_uring      [kernel.kallsyms]  [k] __alloc_pages_slowpath.constprop.0
     0.05%     0.05%  io_uring      [kernel.kallsyms]  [k] kmalloc_slab
     0.05%     0.01%  io_uring      [kernel.kallsyms]  [k] lru_note_cost_page
     0.05%     0.05%  io_uring      [kernel.kallsyms]  [k] bdev_read_page
     0.05%     0.03%  io_uring      [kernel.kallsyms]  [k] I_BDEV
     0.05%     0.03%  io_uring      [kernel.kallsyms]  [k] memcg_check_events
     0.05%     0.00%  io_uring      [kernel.kallsyms]  [k] io_req_task_queue
     0.05%     0.00%  io_uring      [kernel.kallsyms]  [k] tick_sched_handle
     0.05%     0.01%  io_uring      [kernel.kallsyms]  [k] lru_note_cost
     0.05%     0.00%  io_uring      [kernel.kallsyms]  [k] update_process_times
     0.05%     0.04%  io_uring      [kernel.kallsyms]  [k] should_failslab
     0.04%     0.00%  io_uring      libc-2.32.so       [.] __drand48_iterate
     0.04%     0.03%  io_uring      [kernel.kallsyms]  [k] apic_ack_irq
     0.04%     0.04%  io_uring      [kernel.kallsyms]  [k] io_cqring_ev_posted
     0.04%     0.01%  io_uring      [kernel.kallsyms]  [k] io_req_task_work_add
     0.04%     0.03%  io_uring      [kernel.kallsyms]  [k] xas_nomem
     0.04%     0.00%  io_uring      [kernel.kallsyms]  [k] scheduler_tick
     0.04%     0.00%  io_uring      [kernel.kallsyms]  [k] dma_map_sg_attrs
     0.04%     0.00%  io_uring      libc-2.32.so       [.] lrand48
     0.04%     0.02%  io_uring      [kernel.kallsyms]  [k] hctx_unlock
     0.04%     0.03%  io_uring      [kernel.kallsyms]  [k] policy_node
     0.04%     0.03%  io_uring      [kernel.kallsyms]  [k] check_stack_object
     0.04%     0.01%  io_uring      [kernel.kallsyms]  [k] __blk_mq_free_request
     0.03%     0.03%  io_uring      [kernel.kallsyms]  [k] dput
     0.03%     0.01%  io_uring      [kernel.kallsyms]  [k] record_times
     0.03%     0.03%  io_uring      [kernel.kallsyms]  [k] blk_stat_add
     0.03%     0.02%  io_uring      [kernel.kallsyms]  [k] dma_pool_alloc
     0.03%     0.03%  io_uring      [kernel.kallsyms]  [k] find_next_zero_bit
     0.03%     0.03%  io_uring      [kernel.kallsyms]  [k] dma_direct_map_sg
     0.03%     0.01%  io_uring      [kernel.kallsyms]  [k] __fget_light
     0.03%     0.02%  io_uring      [kernel.kallsyms]  [k] blk_account_io_done
     0.03%     0.03%  io_uring      [kernel.kallsyms]  [k] guard_bio_eod
     0.03%     0.02%  io_uring      [kernel.kallsyms]  [k] add_interrupt_randomness
     0.03%     0.02%  io_uring      [kernel.kallsyms]  [k] memset
     0.03%     0.02%  io_uring      [kernel.kallsyms]  [k] iov_iter_bvec
     0.03%     0.00%  io_uring      [kernel.kallsyms]  [k] blk_mq_put_tag
     0.03%     0.02%  io_uring      [kernel.kallsyms]  [k] cpuset_nodemask_valid_mems_allowed
     0.03%     0.00%  io_uring      [kernel.kallsyms]  [k] sched_clock_cpu
     0.03%     0.00%  io_uring      [kernel.kallsyms]  [k] __count_memcg_events
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] mem_cgroup_get_nr_swap_pages
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] blk_start_plug
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] sbitmap_queue_clear
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] io_commit_cqring
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] should_fail_bio
     0.02%     0.01%  io_uring      [kernel.kallsyms]  [k] page_cache_next_miss
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] task_tick_fair
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] should_fail_alloc_page
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] native_sched_clock
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] __fdget
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] wbt_data_dir
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] sync_regs
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] wbt_done
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] __irqentry_text_end
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] __fget_files
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] do_try_to_free_pages
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] shrink_node
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] try_to_free_pages
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] shrink_lruvec
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] sg_init_table
     0.02%     0.02%  io_uring      [kernel.kallsyms]  [k] blk_mq_complete_request_remote
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] shrink_inactive_list
     0.02%     0.01%  io_uring      [kernel.kallsyms]  [k] sg_next
     0.02%     0.00%  io_uring      [kernel.kallsyms]  [k] __rq_qos_done
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] irq_exit_rcu
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] free_unref_page_list
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] __mem_cgroup_threshold
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] nvme_unmap_data.part.0
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] bvec_alloc
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] shrink_page_list
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] tick_do_update_jiffies64
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] propagate_protected_usage
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] update_wall_time
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] timekeeping_advance
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] iov_iter_revert
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] dma_unmap_page_attrs
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] entry_SYSCALL_64_safe_stack
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] arch_do_signal_or_restart
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] __remove_mapping
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] blk_mq_tag_to_rq
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] try_to_wake_up
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] __softirqentry_text_start
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] perf_event_task_tick
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] wake_up_process
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] bio_uninit
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] note_interrupt
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] irqentry_enter
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] fput
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] update_load_avg
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] __x86_retpoline_r13
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] allocate_slab
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] refill_stock
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] syscall_enter_from_user_mode
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] tick_program_event
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] mempool_kfree
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] update_curr
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] fpregs_assert_state_consistent
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] restore_regs_and_return_to_kernel
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] bio_advance
     0.01%     0.00%  io_uring      [kernel.kallsyms]  [k] mem_cgroup_uncharge_list
     0.01%     0.00%  io_uring      io_uring           [.] 0x000055c42b500320
     0.01%     0.01%  io_uring      [kernel.kallsyms]  [k] native_write_msr
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] inode_congested
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] timekeeping_update
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] run_rebalance_domains
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] task_work_add
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] _raw_spin_trylock
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] irq_enter_rcu
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x86_indirect_thunk_r13
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] dma_unmap_sg_attrs
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] clockevents_program_event
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __rq_qos_done_bio
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __delete_from_page_cache
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] wake_all_kswapds
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rebalance_domains
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] trigger_load_balance
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] page_cache_delete
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] blk_queue_exit
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] error_return
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] idle_cpu
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] put_cpu_partial
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] x86_pmu_enable
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] dma_direct_unmap_sg
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] intel_tfa_pmu_enable_all
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __mix_pool_bytes
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] sched_clock
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] _mix_pool_bytes
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] bvec_free
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __update_load_avg_cfs_rq
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __queue_work
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __sbq_wake_up
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __run_timers.part.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] call_timer_fn
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] run_timer_softirq
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] hrtimer_active
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __schedule
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] blk_status_to_errno
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] arch_scale_freq_tick
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_fast_timekeeper
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] native_iret
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_vsyscall
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_cfs_group
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] delayed_work_timer_fn
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] load_balance
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ttwu_do_activate
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] insert_work
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] dma_pool_free
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] pvclock_gtod_notify
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] account_process_tick
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __zone_watermark_ok
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] lapic_next_deadline
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __update_load_avg_se
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] account_system_index_time
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] account_system_time
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] wakeup_kswapd
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_run_task_work_sig
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] isolate_lru_pages
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] psi_flags_change
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] find_busiest_group
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] enqueue_task
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_uring_add_task_file
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_sd_lb_stats.constprop.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] free_unref_page_commit
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] nvme_error_status
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] nvme_cleanup_cmd
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __wake_up
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] enqueue_task_fair
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] schedule
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] free_pcppages_bulk
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] _warn_unseeded_randomness
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __wake_up_common_lock
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] autoremove_wake_function
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] default_wake_function
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_blocked_averages
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] kick_ilb
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] calc_global_load_tick
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_rq_clock
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] wake_up_state
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] native_read_msr
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rcu_segcblist_ready_cbs
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] x86_pmu_disable
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rcu_sched_clock_irq
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __hrtimer_next_event_base
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_min_vruntime
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] dequeue_task
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] schedule_timeout
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] shrink_active_list
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ktime_get_update_offsets_now
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] enqueue_entity
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] io_wq_manager
     0.00%     0.00%  iou-mgr-2493  [unknown]          [k] 0000000000000000
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] ret_from_fork
     0.00%     0.00%  io_uring      libc-2.32.so       [.] __GI___libc_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] unaccount_page_cache_page
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] smp_call_function_single_async
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __cgroup_account_cputime_field
     0.00%     0.00%  io_uring      io_uring           [.] 0x000055c42b500324
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] run_posix_cpu_timers
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] schedule_timeout
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_write
     0.00%     0.00%  io_uring      libc-2.32.so       [.] clock_nanosleep@@GLIBC_2.17
     0.00%     0.00%  iou-mgr-2496  [unknown]          [k] 0000000000000000
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] ret_from_fork
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] queue_work_on
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] io_wq_manager
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ksys_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] credit_entropy_bits.constprop.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] blk_mq_sched_restart
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] send_call_function_single_ipi
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] generic_exec_single
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] raw_notifier_call_chain
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_clock_nanosleep
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] profile_tick
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] bvec_split_segs
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] pick_next_task_fair
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] dequeue_task_fair
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] __schedule
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] schedule
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] vfs_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] common_nsleep
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] do_nanosleep
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] hrtimer_nanosleep
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] schedule
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] schedule_timeout
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] kthread_is_per_cpu
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] file_tty_write.constprop.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] new_sync_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] tty_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __free_one_page
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] __schedule
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] workingset_eviction
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] enqueue_hrtimer
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rb_next
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] blk_rq_timed_out_timer
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __calc_delta
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] cpumask_next_and
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] vma_migratable
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] timerqueue_del
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] nohz_balance_exit_idle
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] get_partial_node.part.0
     0.00%     0.00%  iou-wrk-2496  [unknown]          [k] 0000000000000000
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] io_wqe_worker
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] ret_from_fork
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] newidle_balance
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] pick_next_task_fair
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] update_rt_rq_load_avg
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] dequeue_entity
     0.00%     0.00%  io_uring      [unknown]          [k] 0000000000000000
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] uncharge_batch
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] newidle_balance
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] pick_next_task_fair
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] timerqueue_add
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] move_pages_to_lru
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] hrtimer_run_queues
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] watchdog_timer_fn
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] nvme_free_prps
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] set_next_entity
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] task_numa_work
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] select_task_rq_fair
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rcu_core_si
     0.00%     0.00%  io_uring      io_uring           [.] 0x0000000000001320
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] reweight_entity
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] get_random_u32
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] mod_node_page_state
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] newidle_balance
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] switch_fpu_return
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] io_worker_handle_work
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] n_tty_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] _nohz_idle_balance
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] _nohz_idle_balance
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] pty_write
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] cpuacct_charge
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] update_rq_clock
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __handle_mm_fault
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] asm_exc_page_fault
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_user_addr_fault
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] exc_page_fault
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] handle_mm_fault
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] io_read
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] io_issue_sqe
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] io_wq_submit_work
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] finish_task_switch.isra.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] tty_flip_buffer_push
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __perf_event_task_sched_in
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] dequeue_task
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] cpumask_next
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] do_shrink_slab
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] shrink_slab
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] completion_done
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] update_sd_lb_stats.constprop.0
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] _nohz_idle_balance
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] find_busiest_group
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] load_balance
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] xas_clear_mark
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] sched_clock_tick
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] lru_add_drain
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] lru_add_drain_cpu
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] group_balance_cpu
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] kick_process
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __perf_event_task_sched_out
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] cpu_stop_queue_work
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] stop_one_cpu_nowait
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] wake_up_q
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] find_next_and_bit
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rcu_nmi_enter
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __intel_pmu_enable_all.constprop.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] account_user_time
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] psi_task_switch
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] pick_next_entity
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rb_erase
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] irq_work_tick
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] raise_softirq
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] prep_compound_page
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rcu_core
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rcu_gp_kthread_wake
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rcu_report_qs_rnp
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] swake_up_one
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] setup_object.isra.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] acct_account_cputime
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_rw_should_reissue
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_complete_rw
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] vma_policy_mof
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] llist_add_batch
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] pgdat_balanced
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] mempolicy_slab_node
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] zone_watermark_ok_safe
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] list_lru_del
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] irqentry_enter_from_user_mode
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] check_preempt_curr
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ttwu_do_wakeup
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] finish_wait
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] dequeue_task_fair
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] update_sd_lb_stats.constprop.0
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] find_busiest_group
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] load_balance
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] copy_process
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] inherit_event.constprop.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] inherit_task_group.isra.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] perf_event_init_task
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_syscall_64
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] entry_SYSCALL_64_after_hwframe
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] dequeue_task
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __handle_mm_fault
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] handle_mm_fault
     0.00%     0.00%  io_uring      libc-2.32.so       [.] _int_malloc
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] kmem_cache_free
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] __io_free_req
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] io_free_work
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] schedule_timeout
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] inherit_event.constprop.0
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] copy_process
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] create_io_thread
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] create_io_worker.isra.0
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] inherit_task_group.isra.0
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] io_wq_check_workers
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] perf_event_init_task
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] dequeue_entity
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] filemap_map_pages
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c383436
     0.00%     0.00%  io_uring      [unknown]          [.] 0x534f49202c343039
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] create_io_thread
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_uring_alloc_task_context
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_wq_create
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_wq_fork_manager
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] hrtimer_start_range_ns
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c323939
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c363733
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] kmem_cache_alloc_trace
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] perf_event_alloc
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __get_user_pages
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __io_uring_register
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_io_uring_register
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] pin_user_pages
     0.00%     0.00%  io_uring      [unknown]          [k] 0x000000000000cc81
     0.00%     0.00%  io_uring      [unknown]          [k] 0x00007f130eedcc00
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] update_rq_clock
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] update_cfs_group
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] dequeue_task_fair
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] asm_exc_page_fault
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] perf_event_alloc
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] blkdev_read_iter
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] kiocb_done
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] __schedule
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] pick_next_task_fair
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] schedule
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __x64_sys_execve
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] bprm_execve
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_execveat_common
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] load_elf_binary
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] update_cfs_group
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] __update_idle_core
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __drain_all_pages
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] flush_work
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __add_to_page_cache_locked
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __ext4_get_inode_loc
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __ext4_iget
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __getblk_gfp
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __getblk_slow
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __x64_sys_openat
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] add_to_page_cache_lru
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_filp_open
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_sys_openat2
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] ext4_lookup
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] ext4_sb_breadahead_unmovable
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] pagecache_get_page
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] path_openat
     0.00%     0.00%  numactl       ld-2.32.so         [.] __GI___open64_nocancel
     0.00%     0.00%  numactl       ld-2.32.so         [.] _dl_map_object
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] native_set_pte
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_set_pte
     0.00%     0.00%  numactl       ld-2.32.so         [.] _dl_check_map_versions
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] page_remove_rmap
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_wp_page
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] wp_page_copy
     0.00%     0.00%  numactl       ld-2.32.so         [.] _dl_start
     0.00%     0.00%  numactl       ld-2.32.so         [.] _dl_start_user
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] clear_page_erms
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] __alloc_pages_nodemask
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] alloc_pages_vma
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] get_page_from_freelist
     0.00%     0.00%  numactl       libc-2.32.so       [.] _int_malloc
     0.00%     0.00%  numactl       [unknown]          [k] 0000000000000000
     0.00%     0.00%  io_uring      libc-2.32.so       [.] __clone
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] process_echoes
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c383830
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] next_uptodate_page
     0.00%     0.00%  numactl       libc-2.32.so       [.] _getopt_internal_r
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] vma_interval_tree_remove
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __split_vma
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __vma_adjust
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_mprotect
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] do_mprotect_pkey
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] mprotect_fixup
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] split_vma
     0.00%     0.00%  io_uring      ld-2.32.so         [.] _dl_sysdep_start
     0.00%     0.00%  io_uring      ld-2.32.so         [.] dl_main
     0.00%     0.00%  io_uring      ld-2.32.so         [.] mprotect
     0.00%     0.00%  io_uring      ld-2.32.so         [.] mmap64
     0.00%     0.00%  io_uring      ld-2.32.so         [.] _dl_new_object
     0.00%     0.00%  io_uring      [unknown]          [.] 0x00007f130ef48fb0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __wait_for_common
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] wait_for_completion
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_close_on_exec
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] begin_new_exec
     0.00%     0.00%  numactl       libc-2.32.so       [.] __GI___execve
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] rb_insert_color
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c303034
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c303639
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ldsem_up_read
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c303237
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c323338
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c343232
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c343632
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c363337
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c373337
     0.00%     0.00%  io_uring      [unknown]          [k] 0x534f49202c383239
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __do_sys_clone
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_clone
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] kernel_clone
     0.00%     0.00%  io_uring      [unknown]          [k] 0x7830203d20727470
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] lru_cache_add_page_vma
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] __lock_text_start
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] __update_idle_core
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] idle_cpu
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] psi_task_change
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] dequeue_entity
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] pick_next_task_idle
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __x64_sys_io_uring_setup
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_uring_setup
     0.00%     0.00%  io_uring      [unknown]          [k] 0x6966206465646441
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] p4d_offset
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] do_user_addr_fault
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] exc_page_fault
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] clear_page_erms
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] ___slab_alloc
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] __alloc_pages_nodemask
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] __slab_alloc
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] alloc_pages_current
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] allocate_slab
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] get_page_from_freelist
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] kmem_cache_alloc_trace
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] __mod_timer
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] filemap_read
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] io_cqring_ev_posted
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] io_wq_worker_running
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] newidle_balance
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] page_counter_uncharge
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] __calc_delta
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] __io_complete_rw.constprop.0
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] drain_obj_stock
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] generic_file_read_iter
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] io_iter_do_read
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] io_req_complete_post
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] obj_cgroup_uncharge
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] refill_obj_stock
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] update_curr
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] memset_erms
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] allocate_fake_cpuc
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] perf_try_init_event
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] x86_pmu_event_init
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] prepare_to_wait_exclusive
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] __update_load_avg_se
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] find_next_and_bit
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] idle_cpu
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] io_wq_worker_running
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] lock_timer_base
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] psi_task_change
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] reweight_entity
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] _find_next_bit.constprop.0
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] calc_wheel_index
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] cpumask_next_and
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] pick_next_task_idle
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] update_load_avg
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] kmem_cache_alloc_bulk
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] strlen
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] do_mmap
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] elf_map
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] mmap_region
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] vm_mmap
     0.00%     0.00%  numactl       [kernel.kallsyms]  [k] vm_mmap_pgoff
     0.00%     0.00%  numactl       [unknown]          [k] 0x00007fccebb22b7b
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] perf_iterate_ctx
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] __set_task_comm
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] perf_event_comm
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] perf_iterate_sb
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __mutex_init
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] ret_from_fork
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] memcpy_erms
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] __perf_event__output_id_sample
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] perf_event_comm_output
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] perf_iterate_ctx
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] __set_task_comm
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] perf_event_comm
     0.00%     0.00%  iou-wrk-2496  [kernel.kallsyms]  [k] perf_iterate_sb
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] io_wqe_worker
     0.00%     0.00%  io_uring      libc-2.32.so       [.] __GI___gettid
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] start_flush_work.constprop.0
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] perf_iterate_ctx
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] __set_task_comm
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] io_wq_manager
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] perf_event_comm
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] perf_iterate_sb
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] __x64_sys_execve
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] begin_new_exec
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] bprm_execve
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] do_execveat_common
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] do_syscall_64
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] entry_SYSCALL_64_after_hwframe
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] load_elf_binary
     0.00%     0.00%  perf          [unknown]          [k] 0x00007fccebb22b7b
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] schedule_tail
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] get_nohz_timer_target
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] perf_iterate_sb
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] __set_task_comm
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] perf_event_comm
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] schedule_tail
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] native_write_msr
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] __perf_event_task_sched_in
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] finish_task_switch.isra.0
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] intel_tfa_pmu_enable_all
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] x86_pmu_enable
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] nmi_restore
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] nmi_restore
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] recalc_sigpending
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] end_repeat_nmi
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] native_write_msr
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] __perf_event_task_sched_in
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] finish_task_switch.isra.0
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] intel_tfa_pmu_enable_all
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] x86_pmu_enable
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] native_write_msr
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] ctx_resched
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] intel_tfa_pmu_enable_all
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] perf_event_exec
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] x86_pmu_enable
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] native_flush_tlb_one_user
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] native_flush_tlb_one_user
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] native_set_fixmap
     0.00%     0.00%  iou-mgr-2496  [kernel.kallsyms]  [k] ghes_notify_nmi
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] nmi_cpu_backtrace
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] nmi_cpu_backtrace_handler
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] sched_clock
     0.00%     0.00%  io_uring      [kernel.kallsyms]  [k] native_apic_msr_write
     0.00%     0.00%  iou-mgr-2493  [kernel.kallsyms]  [k] native_apic_msr_write
     0.00%     0.00%  perf          [kernel.kallsyms]  [k] native_apic_msr_write


#
# (Tip: System-wide collection from all CPUs: perf record -a)
#

[-- Attachment #5: io_uring-1.svg --]
[-- Type: image/svg+xml, Size: 529992 bytes --]

[-- Attachment #6: kswapd-1.txt --]
[-- Type: text/plain, Size: 38048 bytes --]

# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 773K of event 'cycles'
# Event count (approx.): 456054447930
#
# Children      Self  Command  Shared Object      Symbol                                     
# ........  ........  .......  .................  ...........................................
#
   100.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kthread
   100.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ret_from_fork
   100.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kswapd
    99.90%     0.01%  kswapd0  [kernel.kallsyms]  [k] balance_pgdat
    99.86%     0.05%  kswapd0  [kernel.kallsyms]  [k] shrink_node
    98.00%     0.19%  kswapd0  [kernel.kallsyms]  [k] shrink_lruvec
    91.13%     0.10%  kswapd0  [kernel.kallsyms]  [k] shrink_inactive_list
    75.67%     5.97%  kswapd0  [kernel.kallsyms]  [k] shrink_page_list
    57.65%     2.59%  kswapd0  [kernel.kallsyms]  [k] __remove_mapping
    45.99%     0.27%  kswapd0  [kernel.kallsyms]  [k] __delete_from_page_cache
    42.88%     0.88%  kswapd0  [kernel.kallsyms]  [k] page_cache_delete
    39.05%     1.04%  kswapd0  [kernel.kallsyms]  [k] xas_store
    37.71%    37.67%  kswapd0  [kernel.kallsyms]  [k] xas_create
    12.62%    11.79%  kswapd0  [kernel.kallsyms]  [k] isolate_lru_pages
     8.52%     0.98%  kswapd0  [kernel.kallsyms]  [k] free_unref_page_list
     7.38%     0.61%  kswapd0  [kernel.kallsyms]  [k] free_unref_page_commit
     6.68%     1.78%  kswapd0  [kernel.kallsyms]  [k] free_pcppages_bulk
     6.53%     0.83%  kswapd0  [kernel.kallsyms]  [k] shrink_active_list
     6.45%     3.21%  kswapd0  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
     4.82%     4.60%  kswapd0  [kernel.kallsyms]  [k] __free_one_page
     4.58%     4.21%  kswapd0  [kernel.kallsyms]  [k] unlock_page
     3.62%     3.55%  kswapd0  [kernel.kallsyms]  [k] native_queued_spin_lock_slowpath
     2.49%     0.71%  kswapd0  [kernel.kallsyms]  [k] unaccount_page_cache_page
     2.46%     0.81%  kswapd0  [kernel.kallsyms]  [k] workingset_eviction
     2.14%     0.33%  kswapd0  [kernel.kallsyms]  [k] __mod_lruvec_state
     1.97%     1.88%  kswapd0  [kernel.kallsyms]  [k] xas_clear_mark
     1.73%     0.26%  kswapd0  [kernel.kallsyms]  [k] __mod_lruvec_page_state
     1.71%     1.06%  kswapd0  [kernel.kallsyms]  [k] move_pages_to_lru
     1.66%     1.62%  kswapd0  [kernel.kallsyms]  [k] workingset_age_nonresident
     1.60%     0.85%  kswapd0  [kernel.kallsyms]  [k] __mod_memcg_lruvec_state
     1.58%     0.02%  kswapd0  [kernel.kallsyms]  [k] shrink_slab
     1.49%     0.13%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_uncharge_list
     1.45%     0.06%  kswapd0  [kernel.kallsyms]  [k] do_shrink_slab
     1.37%     1.32%  kswapd0  [kernel.kallsyms]  [k] page_mapping
     1.06%     0.76%  kswapd0  [kernel.kallsyms]  [k] count_shadow_nodes
     0.89%     0.85%  kswapd0  [kernel.kallsyms]  [k] xas_init_marks
     0.84%     0.58%  kswapd0  [kernel.kallsyms]  [k] _raw_spin_lock_irq
     0.82%     0.49%  kswapd0  [kernel.kallsyms]  [k] page_referenced
     0.81%     0.81%  kswapd0  [kernel.kallsyms]  [k] __mod_memcg_state.part.0
     0.76%     0.08%  kswapd0  [kernel.kallsyms]  [k] uncharge_batch
     0.73%     0.65%  kswapd0  [kernel.kallsyms]  [k] uncharge_page
     0.50%     0.45%  kswapd0  [kernel.kallsyms]  [k] __mod_zone_page_state
     0.50%     0.43%  kswapd0  [kernel.kallsyms]  [k] page_counter_uncharge
     0.48%     0.43%  kswapd0  [kernel.kallsyms]  [k] __isolate_lru_page_prepare
     0.47%     0.47%  kswapd0  [kernel.kallsyms]  [k] __count_memcg_events.part.0
     0.46%     0.11%  kswapd0  [kernel.kallsyms]  [k] lru_note_cost
     0.45%     0.41%  kswapd0  [kernel.kallsyms]  [k] workingset_update_node
     0.42%     0.36%  kswapd0  [kernel.kallsyms]  [k] __mod_node_page_state
     0.37%     0.29%  kswapd0  [kernel.kallsyms]  [k] __lock_text_start
     0.37%     0.06%  kswapd0  [kernel.kallsyms]  [k] wake_up_page_bit
     0.33%     0.12%  kswapd0  [kernel.kallsyms]  [k] cpumask_next
     0.32%     0.22%  kswapd0  [kernel.kallsyms]  [k] __cond_resched
     0.32%     0.28%  kswapd0  [kernel.kallsyms]  [k] free_pcp_prepare
     0.31%     0.01%  kswapd0  [kernel.kallsyms]  [k] __count_memcg_events
     0.30%     0.00%  kswapd0  [kernel.kallsyms]  [k] rmap_walk
     0.29%     0.25%  kswapd0  [kernel.kallsyms]  [k] PageHuge
     0.25%     0.00%  kswapd0  [kernel.kallsyms]  [k] rmap_walk_file
     0.23%     0.06%  kswapd0  [kernel.kallsyms]  [k] super_cache_count
     0.23%     0.17%  kswapd0  [kernel.kallsyms]  [k] rcu_read_unlock_strict
     0.21%     0.01%  kswapd0  [kernel.kallsyms]  [k] page_referenced_one
     0.21%     0.16%  kswapd0  [kernel.kallsyms]  [k] page_mapped
     0.20%     0.20%  kswapd0  [kernel.kallsyms]  [k] list_lru_count_one
     0.20%     0.20%  kswapd0  [kernel.kallsyms]  [k] page_vma_mapped_walk
     0.17%     0.12%  kswapd0  [kernel.kallsyms]  [k] rcu_all_qs
     0.16%     0.16%  kswapd0  [kernel.kallsyms]  [k] _find_next_bit.constprop.0
     0.13%     0.13%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_iter
     0.13%     0.09%  kswapd0  [kernel.kallsyms]  [k] __x86_retpoline_rax
     0.13%     0.13%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_update_lru_size
     0.13%     0.08%  kswapd0  [kernel.kallsyms]  [k] _raw_spin_lock
     0.12%     0.08%  kswapd0  [kernel.kallsyms]  [k] find_next_bit
     0.12%     0.11%  kswapd0  [kernel.kallsyms]  [k] total_mapcount
     0.09%     0.05%  kswapd0  [kernel.kallsyms]  [k] __x86_indirect_thunk_rax
     0.09%     0.00%  kswapd0  [kernel.kallsyms]  [k] asm_sysvec_apic_timer_interrupt
     0.09%     0.01%  kswapd0  [kernel.kallsyms]  [k] __schedule
     0.08%     0.00%  kswapd0  [kernel.kallsyms]  [k] sysvec_apic_timer_interrupt
     0.08%     0.06%  kswapd0  [kernel.kallsyms]  [k] propagate_protected_usage
     0.07%     0.00%  kswapd0  [kernel.kallsyms]  [k] __sysvec_apic_timer_interrupt
     0.07%     0.00%  kswapd0  [kernel.kallsyms]  [k] hrtimer_interrupt
     0.06%     0.00%  kswapd0  [kernel.kallsyms]  [k] schedule
     0.06%     0.01%  kswapd0  [kernel.kallsyms]  [k] __wake_up_locked_key_bookmark
     0.06%     0.00%  kswapd0  [kernel.kallsyms]  [k] schedule_timeout
     0.06%     0.03%  kswapd0  [kernel.kallsyms]  [k] memcg_check_events
     0.05%     0.00%  kswapd0  [kernel.kallsyms]  [k] __hrtimer_run_queues
     0.05%     0.00%  kswapd0  [kernel.kallsyms]  [k] pick_next_task_fair
     0.05%     0.01%  kswapd0  [kernel.kallsyms]  [k] lru_add_drain
     0.05%     0.00%  kswapd0  [kernel.kallsyms]  [k] tick_sched_timer
     0.05%     0.05%  kswapd0  [kernel.kallsyms]  [k] lru_add_drain_cpu
     0.05%     0.00%  kswapd0  [kernel.kallsyms]  [k] newidle_balance
     0.04%     0.00%  kswapd0  [kernel.kallsyms]  [k] tick_sched_handle
     0.04%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_process_times
     0.04%     0.04%  kswapd0  [kernel.kallsyms]  [k] find_first_bit
     0.04%     0.04%  kswapd0  [kernel.kallsyms]  [k] __wake_up_common
     0.04%     0.02%  kswapd0  [kernel.kallsyms]  [k] vma_interval_tree_iter_next
     0.04%     0.00%  kswapd0  [kernel.kallsyms]  [k] load_balance
     0.04%     0.00%  kswapd0  [kernel.kallsyms]  [k] find_busiest_group
     0.04%     0.03%  kswapd0  [kernel.kallsyms]  [k] update_sd_lb_stats.constprop.0
     0.03%     0.00%  kswapd0  [kernel.kallsyms]  [k] scheduler_tick
     0.03%     0.03%  kswapd0  [kernel.kallsyms]  [k] __mem_cgroup_threshold
     0.03%     0.00%  kswapd0  [kernel.kallsyms]  [k] pageout
     0.03%     0.03%  kswapd0  [kernel.kallsyms]  [k] deferred_split_count
     0.03%     0.00%  kswapd0  [kernel.kallsyms]  [k] vmpressure
     0.03%     0.03%  kswapd0  [kernel.kallsyms]  [k] vma_interval_tree_subtree_search
     0.03%     0.00%  kswapd0  [kernel.kallsyms]  [k] swap_writepage
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] idr_find
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] queue_work_on
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] __swap_writepage
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] try_to_unmap
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] set_pgdat_percpu_threshold
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] __queue_work
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] finish_task_switch.isra.0
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] radix_tree_lookup
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] rmap_walk_anon
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] __perf_event_task_sched_in
     0.02%     0.02%  kswapd0  [kernel.kallsyms]  [k] __radix_tree_lookup
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] insert_work
     0.02%     0.02%  kswapd0  [kernel.kallsyms]  [k] kfree_rcu_shrink_count
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] wake_up_process
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] try_to_wake_up
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] perf_event_sched_in
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] ctx_sched_in
     0.02%     0.01%  kswapd0  [kernel.kallsyms]  [k] try_to_unmap_one
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_finish_plug
     0.02%     0.02%  kswapd0  [kernel.kallsyms]  [k] io_async_buf_func
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] ttwu_do_activate
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_flush_plug_list
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] task_tick_fair
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] shmem_unused_huge_count
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] super_cache_scan
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] submit_bio
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] up_read
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] submit_bio_noacct
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] enqueue_task
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] visit_groups_merge.constprop.0.isra.0
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] down_read_trylock
     0.01%     0.01%  kswapd0  [unknown]          [.] 0000000000000000
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] prune_dcache_sb
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] merge_sched_in
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_sched_insert_requests
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_flush_plug_list
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] shrink_dentry_list
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] vma_interval_tree_iter_first
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_try_issue_list_directly
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_submit_bio
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_load_avg
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] __blk_mq_try_issue_directly
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] enqueue_task_fair
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_queue_rq
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] perf_event_task_tick
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] add_to_swap
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] perf_swevent_add
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] try_to_unmap_flush
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] __dentry_kill
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] irq_exit_rcu
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] native_write_msr
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_calculate_protection
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_task_change
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] anon_vma_interval_tree_iter_first
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] __softirqentry_text_start
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] enqueue_entity
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] ktime_get_update_offsets_now
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] intel_tfa_pmu_enable_all
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] x86_pmu_enable
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] css_next_descendant_pre
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_rq_clock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_blocked_averages
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] idle_cpu
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_curr
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] _nohz_idle_balance
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __update_load_avg_se
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] trigger_load_balance
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] perf_event_groups_first
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] add_to_swap_cache
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] tick_program_event
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] inactive_is_low
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] backend_shrink_memory_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] iput
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dentry_unlink_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_group_change
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] iput.part.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] evict
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] cpumask_next_and
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_swapout
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bus_for_each_dev
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ttm_pool_shrinker_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ktime_get
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] run_rebalance_domains
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ttwu_do_wakeup
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] page_lock_anon_vma_read
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] clockevents_program_event
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] check_preempt_curr
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_memstall_leave
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] event_sched_in
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __delete_from_swap_cache
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] cpumask_any_but
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mempool_alloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] read_tsc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] try_to_release_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] perf_event_update_userpage
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] _raw_spin_trylock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] prepare_kswapd_sleep
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __blk_mq_alloc_request
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __update_load_avg_cfs_rq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bdev_try_to_free_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] native_irq_return_iret
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] offset_to_swap_extent
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] page_remove_rmap
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blkdev_releasepage
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] put_prev_entity
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] arch_scale_freq_tick
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __perf_event_task_sched_out
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __frontswap_store
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] prune_icache_sb
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ptep_clear_flush
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rb_next
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] account_process_tick
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] swap_page_sector
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] jbd2_journal_try_to_free_buffers
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mmu_shrink_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_get_tag
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] cgroup_e_css
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] down_read
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] sched_clock_cpu
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] submit_bio_checks
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] hrtimer_active
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] set_page_dirty
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] shrink_lock_dentry.part.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_core_si
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kmem_cache_free
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_alloc_bioset
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] check_preempt_wakeup
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_start_plug
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kick_ilb
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] pgdat_balanced
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_associate_blkg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] account_system_time
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] x86_pmu_disable
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ___d_drop
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] native_read_msr
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_core
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] native_apic_msr_eoi_write
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] swap_duplicate
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] buffer_check_dirty_writeback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] swap_cgroup_record
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dequeue_task
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rebalance_domains
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __mod_timer
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_memstall_tick
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] native_sched_clock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] shrink_huge_zero_page_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_sched_clock_irq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] account_system_index_time
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mempool_kmalloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] check_pte
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] cpuacct_charge
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_setup_rw
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] zone_watermark_ok_safe
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_attempt_plug_merge
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] lapic_next_deadline
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mempool_alloc_slab
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] PageHeadHuge
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_associate_blkg_from_css
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] klist_next
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] memcg_to_vmpressure
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dqcache_shrink_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_task_switch
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __rq_qos_throttle
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_get_nr_swap_pages
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] xas_create_range
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __blk_rq_map_sg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __swap_duplicate
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kthread_is_per_cpu
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wbt_wait
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] flush_tlb_mm_range
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] smp_call_function_single_async
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_dl_rq_load_avg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] get_swap_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kmem_cache_alloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_soft_limit_reclaim
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nohz_balance_exit_idle
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] try_to_free_buffers
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] run_posix_cpu_timers
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] zswap_frontswap_store
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ptep_clear_flush_young
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_throtl_bio
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __blk_mq_sched_bio_merge
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] call_rcu
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] put_swap_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] vmpressure_calc_level
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] _swap_info_get
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __sbitmap_queue_get
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __list_lru_walk_one
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_attempt_back_merge
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_attempt_bio_merge.part.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] set_task_reclaim_state
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] find_next_and_bit
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] perf_pmu_nop_txn
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] shmem_writepage
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __run_timers.part.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] run_timer_softirq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] record_times
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] swap_set_page_dirty
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_memstall_enter
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] set_next_entity
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] list_lru_walk_one
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dma_pool_alloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] call_timer_fn
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] try_to_free_swap
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] set_next_buddy
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __kmalloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] pick_next_entity
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mb_cache_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __slab_free
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_start_request
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __anon_vma_interval_tree_subtree_search
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] prepare_to_wait
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_cgroup_bio_start
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kthread_should_stop
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] reset_isolation_suitable
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] generic_exec_single
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] destroy_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] enqueue_timer
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __test_set_page_writeback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] memset_erms
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dentry_free
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bdev_write_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dequeue_task_fair
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_flags_change
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __es_shrink
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ext4_es_scan
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __blk_mq_get_tag
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ext4_es_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __d_free
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_segcblist_ready_cbs
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] send_call_function_single_ipi
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] drop_buffers
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] truncate_inode_pages_final
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] inc_node_page_state
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_add_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] sum_zone_node_page_state
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] free_buffer_head
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wbt_track
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ll_back_merge_fn
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] delayed_work_timer_fn
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] i_callback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] enqueue_hrtimer
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __set_page_dirty_no_writeback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] sbitmap_get
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] page_swap_info
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rb_erase
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] clear_page_dirty_for_io
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] proc_evict_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] del_timer_sync
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] pagevec_lru_move_fn
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_submit_cmd
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __update_idle_core
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] page_not_mapped
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] lock_timer_base
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kernfs_evict_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dma_map_page_attrs
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __blk_queue_split
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] list_lru_add
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dentry_lru_isolate
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __sbitmap_get_word
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_min_vruntime
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rb_insert_color
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_rq_ctx_init
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] get_swap_pages
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] fsnotify_grab_connector
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __destroy_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __fsnotify_inode_delete
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] fsnotify_destroy_marks
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rq_qos_wait
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] clear_buddies
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] timerqueue_del
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_try_charge_swap
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] klist_iter_exit
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __hrtimer_next_event_base
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_segcblist_enqueue
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_qs
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_note_context_switch
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] pick_next_task_idle
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] put_prev_task_fair
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] calculate_pressure_threshold
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dequeue_entity
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_queue_bounce
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] resched_curr
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] scan_swap_map_slots
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] llist_add_batch
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] error_entry
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] intel_pmu_disable_all
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] unlock_page_memcg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] irq_work_tick
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] inode_wait_for_writeback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] try_to_unmap_flush_dirty
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __common_interrupt
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __handle_irq_event_percpu
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] asm_common_interrupt
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] common_interrupt
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] handle_edge_irq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] handle_irq_event
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_irq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_pci_complete_rq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_process_cq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] cpuacct_account_field
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] es_reclaim_extents
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_stat_timer_fn
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] sched_clock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_cblist_dequeue
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __rq_qos_track
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] scan_shadow_nodes
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_get_last_bvec
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dma_map_sg_attrs
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] irq_enter_rcu
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_get_driver_tag
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] release_pages
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __remove_inode_hash
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] pde_put
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_rt_rq_load_avg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bdev_read_only
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] get_swap_device
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] jbd2_journal_grab_journal_head
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __intel_pmu_enable_all.constprop.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] _atomic_dec_and_lock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_from_obj
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] hrtimer_forward
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __inode_wait_for_writeback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wbt_issue
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] calc_global_load_tick
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_account_io_merge_bio
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kernfs_put
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mutex_unlock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] percpu_counter_add_batch
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kthread_blkcg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ext4_drop_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] proc_free_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] refill_obj_stock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] obj_cgroup_uncharge
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __srcu_read_lock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ext4_evict_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] calc_wheel_index
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] generic_delete_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] finish_wait
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __bio_add_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wake_up_bit
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] irqentry_exit
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __srcu_read_unlock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] lock_page_memcg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] swap_shmem_alloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] hrtick_update
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __bio_try_merge_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] acct_account_cputime
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __es_tree_search.isra.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] es_do_reclaim_extents
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ext4_es_free_extent
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_queue_enter
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ___slab_alloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __slab_alloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] memcg_slab_post_alloc_hook
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_accelerate_cbs_unlocked
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_gp_kthread_wake
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] swake_up_one
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] irqentry_enter
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] timerqueue_add
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] housekeeping_cpumask
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __irqentry_text_start
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __x86_indirect_thunk_r12
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __x86_retpoline_r12
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] xas_start
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] scan_swap_map_try_ssd_cluster
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __rq_qos_issue
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] slab_pre_alloc_hook.constprop.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] asm_sysvec_reschedule_ipi
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __swap_entry_free_locked
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_cleanup_cmd
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] error_return
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kmalloc_slab
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] note_gp_changes
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] jiffies_to_usecs
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ext4_releasepage
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_id_get_online
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_cfs_group
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bit_waitqueue
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_endio
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_free
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_put
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_end_request
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_update_request
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] end_swap_bio_write
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mempool_free
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mempool_free_slab
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_complete_rq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_unmap_data.part.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] list_lru_del
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __dput_to_list
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] d_lru_del
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dma_direct_map_sg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] list_lru_walk_one_irq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] shadow_lru_isolate
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] xa_delete_node
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] xas_free_nodes
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wbt_inflight_cb
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] _raw_write_trylock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] arch_perf_update_userpage
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] calc_timer_values
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_setup_cmd
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] clear_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] perf_pmu_nop_int
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] node_page_state
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_group_capacity
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_add_timer
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_accelerate_cbs
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] native_flush_tlb_one_user
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] acpi_os_read_memory
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ghes_notify_nmi
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] set_pte_vaddr_p4d
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nmi_cpu_backtrace
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] end_repeat_nmi
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] intel_pmu_handle_irq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] intel_bts_enable_local


#
# (Tip: To browse sample contexts use perf report --sample 10 and select in context menu)
#

[-- Attachment #7: io_uring-2.svg --]
[-- Type: image/svg+xml, Size: 884919 bytes --]

[-- Attachment #8: kswapd-2.svg --]
[-- Type: image/svg+xml, Size: 56483 bytes --]

[-- Attachment #9: kswapd-2.txt --]
[-- Type: text/plain, Size: 45423 bytes --]

# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 723K of event 'cycles'
# Event count (approx.): 440094111395
#
# Children      Self  Command  Shared Object      Symbol                                      
# ........  ........  .......  .................  ............................................
#
   100.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kthread
   100.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ret_from_fork
   100.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kswapd
    99.92%     0.11%  kswapd0  [kernel.kallsyms]  [k] balance_pgdat
    99.32%     0.03%  kswapd0  [kernel.kallsyms]  [k] shrink_node
    97.25%     0.32%  kswapd0  [kernel.kallsyms]  [k] shrink_lruvec
    96.80%     0.09%  kswapd0  [kernel.kallsyms]  [k] evict_lru_gen_pages
    77.82%     6.28%  kswapd0  [kernel.kallsyms]  [k] shrink_page_list
    61.61%     2.76%  kswapd0  [kernel.kallsyms]  [k] __remove_mapping
    50.28%     0.33%  kswapd0  [kernel.kallsyms]  [k] __delete_from_page_cache
    46.63%     1.08%  kswapd0  [kernel.kallsyms]  [k] page_cache_delete
    42.20%     1.16%  kswapd0  [kernel.kallsyms]  [k] xas_store
    40.71%    40.67%  kswapd0  [kernel.kallsyms]  [k] xas_create
    12.54%     7.76%  kswapd0  [kernel.kallsyms]  [k] isolate_lru_gen_pages
     6.42%     3.19%  kswapd0  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
     6.15%     0.91%  kswapd0  [kernel.kallsyms]  [k] free_unref_page_list
     5.62%     5.45%  kswapd0  [kernel.kallsyms]  [k] unlock_page
     5.05%     0.59%  kswapd0  [kernel.kallsyms]  [k] free_unref_page_commit
     4.35%     2.04%  kswapd0  [kernel.kallsyms]  [k] lru_gen_update_size
     4.31%     1.41%  kswapd0  [kernel.kallsyms]  [k] free_pcppages_bulk
     3.43%     3.36%  kswapd0  [kernel.kallsyms]  [k] native_queued_spin_lock_slowpath
     3.38%     0.59%  kswapd0  [kernel.kallsyms]  [k] __mod_lruvec_state
     2.97%     0.78%  kswapd0  [kernel.kallsyms]  [k] unaccount_page_cache_page
     2.82%     2.52%  kswapd0  [kernel.kallsyms]  [k] __free_one_page
     2.33%     1.18%  kswapd0  [kernel.kallsyms]  [k] __mod_memcg_lruvec_state
     2.28%     2.17%  kswapd0  [kernel.kallsyms]  [k] xas_clear_mark
     2.13%     0.30%  kswapd0  [kernel.kallsyms]  [k] __mod_lruvec_page_state
     1.88%     0.04%  kswapd0  [kernel.kallsyms]  [k] shrink_slab
     1.82%     1.78%  kswapd0  [kernel.kallsyms]  [k] workingset_eviction
     1.74%     0.06%  kswapd0  [kernel.kallsyms]  [k] do_shrink_slab
     1.70%     0.15%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_uncharge_list
     1.39%     1.01%  kswapd0  [kernel.kallsyms]  [k] count_shadow_nodes
     1.22%     1.18%  kswapd0  [kernel.kallsyms]  [k] __mod_memcg_state.part.0
     1.16%     1.11%  kswapd0  [kernel.kallsyms]  [k] page_mapping
     1.02%     0.98%  kswapd0  [kernel.kallsyms]  [k] xas_init_marks
     0.93%     0.08%  kswapd0  [kernel.kallsyms]  [k] uncharge_batch
     0.84%     0.79%  kswapd0  [kernel.kallsyms]  [k] __mod_node_page_state
     0.74%     0.67%  kswapd0  [kernel.kallsyms]  [k] __mod_zone_page_state
     0.71%     0.61%  kswapd0  [kernel.kallsyms]  [k] uncharge_page
     0.64%     0.56%  kswapd0  [kernel.kallsyms]  [k] page_counter_uncharge
     0.63%     0.45%  kswapd0  [kernel.kallsyms]  [k] page_referenced
     0.48%     0.43%  kswapd0  [kernel.kallsyms]  [k] workingset_update_node
     0.41%     0.31%  kswapd0  [kernel.kallsyms]  [k] __lock_text_start
     0.39%     0.12%  kswapd0  [kernel.kallsyms]  [k] cpumask_next
     0.37%     0.33%  kswapd0  [kernel.kallsyms]  [k] free_pcp_prepare
     0.35%     0.35%  kswapd0  [kernel.kallsyms]  [k] __count_memcg_events.part.0
     0.34%     0.30%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_update_lru_size
     0.33%     0.13%  kswapd0  [kernel.kallsyms]  [k] try_walk_mm_list
     0.32%     0.28%  kswapd0  [kernel.kallsyms]  [k] PageHuge
     0.27%     0.20%  kswapd0  [kernel.kallsyms]  [k] __cond_resched
     0.25%     0.16%  kswapd0  [kernel.kallsyms]  [k] _raw_spin_lock_irq
     0.25%     0.20%  kswapd0  [kernel.kallsyms]  [k] page_mapped
     0.22%     0.00%  kswapd0  [kernel.kallsyms]  [k] rmap_walk
     0.21%     0.13%  kswapd0  [kernel.kallsyms]  [k] rcu_read_unlock_strict
     0.20%     0.20%  kswapd0  [kernel.kallsyms]  [k] _find_next_bit.constprop.0
     0.20%     0.20%  kswapd0  [kernel.kallsyms]  [k] get_tier_to_isolate
     0.19%     0.18%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_iter
     0.19%     0.05%  kswapd0  [kernel.kallsyms]  [k] super_cache_count
     0.17%     0.01%  kswapd0  [kernel.kallsyms]  [k] __count_memcg_events
     0.17%     0.02%  kswapd0  [kernel.kallsyms]  [k] wake_up_page_bit
     0.17%     0.16%  kswapd0  [kernel.kallsyms]  [k] list_lru_count_one
     0.16%     0.10%  kswapd0  [kernel.kallsyms]  [k] find_next_bit
     0.15%     0.10%  kswapd0  [kernel.kallsyms]  [k] rcu_all_qs
     0.14%     0.14%  kswapd0  [kernel.kallsyms]  [k] get_swappiness
     0.14%     0.01%  kswapd0  [kernel.kallsyms]  [k] rmap_walk_file
     0.14%     0.09%  kswapd0  [kernel.kallsyms]  [k] __x86_retpoline_rax
     0.13%     0.11%  kswapd0  [kernel.kallsyms]  [k] _raw_spin_lock
     0.09%     0.01%  kswapd0  [kernel.kallsyms]  [k] page_referenced_one
     0.09%     0.04%  kswapd0  [kernel.kallsyms]  [k] __x86_indirect_thunk_rax
     0.09%     0.00%  kswapd0  [kernel.kallsyms]  [k] try_to_unmap
     0.09%     0.00%  kswapd0  [kernel.kallsyms]  [k] asm_sysvec_apic_timer_interrupt
     0.09%     0.08%  kswapd0  [kernel.kallsyms]  [k] page_vma_mapped_walk
     0.09%     0.08%  kswapd0  [kernel.kallsyms]  [k] propagate_protected_usage
     0.08%     0.00%  kswapd0  [kernel.kallsyms]  [k] sysvec_apic_timer_interrupt
     0.08%     0.00%  kswapd0  [kernel.kallsyms]  [k] __schedule
     0.08%     0.00%  kswapd0  [kernel.kallsyms]  [k] walk_mm_list
     0.08%     0.00%  kswapd0  [kernel.kallsyms]  [k] walk_page_range
     0.08%     0.00%  kswapd0  [kernel.kallsyms]  [k] __walk_page_range
     0.07%     0.05%  kswapd0  [kernel.kallsyms]  [k] walk_pud_range
     0.07%     0.02%  kswapd0  [kernel.kallsyms]  [k] try_to_unmap_one
     0.07%     0.00%  kswapd0  [kernel.kallsyms]  [k] pageout
     0.07%     0.00%  kswapd0  [kernel.kallsyms]  [k] __sysvec_apic_timer_interrupt
     0.07%     0.00%  kswapd0  [kernel.kallsyms]  [k] hrtimer_interrupt
     0.07%     0.00%  kswapd0  [kernel.kallsyms]  [k] schedule_timeout
     0.07%     0.00%  kswapd0  [kernel.kallsyms]  [k] swap_writepage
     0.06%     0.03%  kswapd0  [kernel.kallsyms]  [k] memcg_check_events
     0.06%     0.00%  kswapd0  [kernel.kallsyms]  [k] schedule
     0.06%     0.00%  kswapd0  [kernel.kallsyms]  [k] __swap_writepage
     0.05%     0.00%  kswapd0  [kernel.kallsyms]  [k] pick_next_task_fair
     0.05%     0.00%  kswapd0  [kernel.kallsyms]  [k] __hrtimer_run_queues
     0.05%     0.00%  kswapd0  [kernel.kallsyms]  [k] rmap_walk_anon
     0.05%     0.05%  kswapd0  [kernel.kallsyms]  [k] try_inc_min_seq
     0.05%     0.00%  kswapd0  [kernel.kallsyms]  [k] newidle_balance
     0.05%     0.00%  kswapd0  [kernel.kallsyms]  [k] tick_sched_timer
     0.05%     0.01%  kswapd0  [kernel.kallsyms]  [k] lru_add_drain
     0.04%     0.00%  kswapd0  [kernel.kallsyms]  [k] load_balance
     0.04%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_process_times
     0.04%     0.00%  kswapd0  [kernel.kallsyms]  [k] tick_sched_handle
     0.04%     0.00%  kswapd0  [kernel.kallsyms]  [k] find_busiest_group
     0.04%     0.04%  kswapd0  [kernel.kallsyms]  [k] lru_add_drain_cpu
     0.04%     0.03%  kswapd0  [kernel.kallsyms]  [k] update_sd_lb_stats.constprop.0
     0.04%     0.03%  kswapd0  [kernel.kallsyms]  [k] __mem_cgroup_threshold
     0.03%     0.00%  kswapd0  [kernel.kallsyms]  [k] scheduler_tick
     0.03%     0.00%  kswapd0  [kernel.kallsyms]  [k] vmpressure
     0.03%     0.03%  kswapd0  [kernel.kallsyms]  [k] find_first_bit
     0.03%     0.03%  kswapd0  [kernel.kallsyms]  [k] deferred_split_count
     0.03%     0.00%  kswapd0  [kernel.kallsyms]  [k] submit_bio
     0.03%     0.00%  kswapd0  [kernel.kallsyms]  [k] submit_bio_noacct
     0.03%     0.00%  kswapd0  [kernel.kallsyms]  [k] queue_work_on
     0.03%     0.00%  kswapd0  [kernel.kallsyms]  [k] __queue_work
     0.03%     0.00%  kswapd0  [kernel.kallsyms]  [k] __wake_up_locked_key_bookmark
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] insert_work
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_finish_plug
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] wake_up_process
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] try_to_wake_up
     0.02%     0.02%  kswapd0  [kernel.kallsyms]  [k] kfree_rcu_shrink_count
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_flush_plug_list
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] idr_find
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] __delete_from_swap_cache
     0.02%     0.02%  kswapd0  [kernel.kallsyms]  [k] move_pages_to_lru
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] add_to_swap
     0.02%     0.01%  kswapd0  [kernel.kallsyms]  [k] get_next_interval
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] radix_tree_lookup
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_submit_bio
     0.02%     0.02%  kswapd0  [kernel.kallsyms]  [k] __radix_tree_lookup
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] finish_task_switch.isra.0
     0.02%     0.02%  kswapd0  [kernel.kallsyms]  [k] __wake_up_common
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] __perf_event_task_sched_in
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] super_cache_scan
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] ttwu_do_activate
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_flush_plug_list
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_swapout
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_sched_insert_requests
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_try_issue_list_directly
     0.02%     0.00%  kswapd0  [kernel.kallsyms]  [k] __blk_mq_try_issue_directly
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_calculate_protection
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_queue_rq
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] ctx_sched_in
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] perf_event_sched_in
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] up_read
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] vma_interval_tree_iter_next
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] prune_dcache_sb
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] put_swap_page
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] irq_exit_rcu
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_get_nr_swap_pages
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] shrink_dentry_list
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] task_tick_fair
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] __softirqentry_text_start
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] add_to_swap_cache
     0.01%     0.01%  kswapd0  [unknown]          [.] 0000000000000000
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] enqueue_task
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] down_read_trylock
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] total_mapcount
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] should_skip_vma
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] vma_interval_tree_subtree_search
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] shmem_unused_huge_count
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] __dentry_kill
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] try_to_unmap_flush
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] mempool_alloc
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] free_swap_slot
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] update_blocked_averages
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] swap_cgroup_record
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] page_remove_rmap
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] dentry_unlink_inode
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_alloc_bioset
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] enqueue_task_fair
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] css_next_descendant_pre
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] swapcache_free_entries
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] io_async_buf_func
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] iput
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_attempt_plug_merge
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] iput.part.0
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_load_avg
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] ptep_clear_flush
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] trigger_load_balance
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] perf_event_task_tick
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] submit_bio_checks
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] lru_gen_scan_around
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] evict
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] anon_vma_interval_tree_iter_first
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] idle_cpu
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_task_change
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] page_lock_anon_vma_read
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] visit_groups_merge.constprop.0.isra.0
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] perf_event_groups_first
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] swap_duplicate
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_core_si
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] native_write_msr
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] kmem_cache_free
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] merge_sched_in
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] tick_program_event
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_attempt_bio_merge.part.0
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_core
     0.01%     0.01%  kswapd0  [kernel.kallsyms]  [k] ktime_get_update_offsets_now
     0.01%     0.00%  kswapd0  [kernel.kallsyms]  [k] mempool_alloc_slab
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] get_swap_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] _swap_info_get
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_attempt_back_merge
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_group_change
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] flush_tlb_mm_range
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_curr
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] enqueue_entity
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] perf_swevent_add
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ktime_get
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __common_interrupt
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] asm_common_interrupt
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] common_interrupt
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ptep_test_and_clear_young
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] arch_scale_freq_tick
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] read_tsc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] intel_tfa_pmu_enable_all
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] x86_pmu_enable
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] handle_edge_irq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] handle_irq_event
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] set_page_dirty
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] page_update_gen
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] vma_interval_tree_iter_first
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __test_set_page_writeback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] swap_range_free
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __handle_irq_event_percpu
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] run_rebalance_domains
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] clockevents_program_event
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ttwu_do_wakeup
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] try_to_release_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] cpumask_any_but
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_associate_blkg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] lru_gen_addition
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __blk_rq_map_sg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __frontswap_store
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __update_load_avg_cfs_rq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] xas_create_range
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] jbd2_journal_try_to_free_buffers
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kmem_cache_alloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] swap_page_sector
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] check_preempt_curr
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] backend_shrink_memory_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_pci_complete_rq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_irq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_process_cq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ll_back_merge_fn
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_associate_blkg_from_css
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bus_for_each_dev
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __blk_mq_alloc_request
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] offset_to_swap_extent
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] clear_shadow_from_swap_cache
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_uncharge_swap
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] prepare_kswapd_sleep
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_add_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] page_not_mapped
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] cpumask_next_and
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mempool_kmalloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] memset_erms
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] pgdat_balanced
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __rq_qos_throttle
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_rq_clock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_end_request
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] get_swap_pages
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_complete_rq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] native_irq_return_iret
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] _raw_spin_trylock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_batch_size
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __list_lru_walk_one
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mmu_shrink_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __update_load_avg_se
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dequeue_task
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] shmem_writepage
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] prune_icache_sb
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] event_sched_in
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_throtl_bio
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] account_process_tick
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] perf_event_update_userpage
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_get_tag
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] scan_swap_map_slots
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_update_request
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __swap_duplicate
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __perf_event_task_sched_out
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] try_to_free_swap
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dqcache_shrink_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] hrtimer_active
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wbt_wait
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __mod_timer
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ext4_releasepage
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] account_system_time
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rebalance_domains
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __run_timers.part.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] run_timer_softirq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] account_system_index_time
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] try_to_free_buffers
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nohz_balance_exit_idle
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] native_read_msr
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kick_ilb
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] sched_clock_cpu
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] swap_set_page_dirty
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __blk_mq_sched_bio_merge
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] list_lru_walk_one
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] cpuacct_charge
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] cgroup_e_css
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rb_next
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] native_apic_msr_eoi_write
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] check_preempt_wakeup
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] i_callback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __anon_vma_interval_tree_subtree_search
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] free_buffer_head
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] zswap_frontswap_store
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_endio
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] end_swap_bio_write
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ttm_pool_shrinker_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] shrink_lock_dentry.part.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] down_read
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_soft_limit_reclaim
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] put_prev_entity
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blkdev_releasepage
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] list_lru_add
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] clear_page_dirty_for_io
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] check_pte
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] smp_call_function_single_async
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] klist_next
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] call_timer_fn
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __kmalloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] pagevec_lru_move_fn
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] native_sched_clock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_start_plug
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bdev_try_to_free_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] PageHeadHuge
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] shrink_huge_zero_page_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_rt_rq_load_avg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_sched_clock_irq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_memstall_tick
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kthread_blkcg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_start_request
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] truncate_inode_pages_final
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] xas_find
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __blk_mq_get_tag
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] destroy_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_memstall_enter
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dequeue_task_fair
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] set_task_reclaim_state
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] memcg_to_vmpressure
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __slab_free
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] end_page_writeback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] pagevec_move_tail_fn
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ___slab_alloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __d_free
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] try_to_unmap_flush_dirty
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] fsnotify_destroy_marks
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __destroy_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wakeup_flusher_threads
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] record_times
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] pick_next_entity
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_task_switch
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] call_rcu
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] zone_watermark_ok_safe
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __slab_alloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_stat_timer_fn
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] pick_next_task_idle
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __sbitmap_queue_get
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] _nohz_idle_balance
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] lapic_next_deadline
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] slab_free_freelist_hook
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_submit_cmd
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] page_swapcount
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bdev_write_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] percpu_counter_add_batch
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kernfs_evict_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] prepare_to_wait
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] sbitmap_get
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] run_posix_cpu_timers
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] refill_obj_stock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_cblist_dequeue
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_dl_rq_load_avg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ___d_drop
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] fsnotify_grab_connector
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __fsnotify_inode_delete
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_account_io_merge_bio
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ext4_es_scan
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] jbd2_journal_grab_journal_head
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] xas_load
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] error_entry
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_get_last_bvec
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_charge_statistics.constprop.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rotate_reclaimable_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] obj_cgroup_uncharge
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] lru_gen_addition
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mempool_free
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __es_shrink
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dma_pool_alloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mutex_lock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] proc_free_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wakeup_kcompactd
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_queue_enter
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rq_qos_wait
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dentry_free
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] lock_timer_base
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] klist_iter_exit
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_segcblist_enqueue
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mb_cache_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] send_call_function_single_ipi
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] generic_exec_single
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] page_swap_info
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_rq_merge_ok
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __blk_queue_split
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] get_swap_device
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __update_idle_core
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_setup_rw
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] enqueue_timer
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __set_page_dirty_no_writeback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_flags_change
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ptep_clear_flush_young
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wbt_inflight_cb
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] psi_memstall_leave
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_rq_ctx_init
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] shadow_lru_isolate
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] list_lru_walk_one_irq
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] scan_shadow_nodes
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_from_obj
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] set_next_buddy
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dequeue_entity
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mutex_unlock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_segcblist_ready_cbs
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __bio_add_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] xas_start
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] lock_page_memcg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] clear_buddies
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] unlock_page_memcg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] inc_node_page_state
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kmalloc_slab
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_queue_bounce
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] cpuacct_account_field
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __remove_inode_hash
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] reset_isolation_suitable
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] set_next_entity
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] x86_pmu_disable
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kthread_is_per_cpu
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] memcg_slab_post_alloc_hook
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __frontswap_invalidate_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] default_wake_function
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __wake_up
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __wake_up_common_lock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] autoremove_wake_function
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dma_map_sg_attrs
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ext4_evict_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] find_next_and_bit
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wbt_track
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] es_reclaim_extents
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] arch_tlbbatch_flush
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] native_flush_tlb_others
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] should_failslab
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dentry_lru_isolate
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] find_vma
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] restore_regs_and_return_to_kernel
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_cgroup_bio_start
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] llist_add_batch
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] xas_alloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] slab_pre_alloc_hook.constprop.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] _atomic_dec_and_lock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_free
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_put
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mempool_free_slab
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ext4_es_count
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] smp_call_function_many_cond
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] jiffies_to_usecs
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_integrity_prep
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_min_vruntime
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_id_get_online
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __queue_delayed_work
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mod_delayed_work_on
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] delayed_work_timer_fn
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bdev_read_only
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvkm_mc_intr
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvkm_pci_intr
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __zone_watermark_ok
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __sbitmap_get_word
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rb_insert_color
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] walk_page_test
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] on_each_cpu_cond_mask
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kfree
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mempool_kfree
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_unmap_data.part.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] timerqueue_del
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_accelerate_cbs
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ext4_es_free_extent
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] es_do_reclaim_extents
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] hrtimer_run_queues
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __next_timer_interrupt
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] inc_zone_page_state
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] scan_swap_map_try_ssd_cluster
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __srcu_read_unlock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] page_inc_gen
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kthread_should_stop
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __irqentry_text_start
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] swap_range_alloc
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wbt_issue
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_integrity_merge_bio
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] calc_global_load_tick
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] smp_call_function_single
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] finish_wait
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] native_set_pte
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] proc_evict_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ext4_clear_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] truncate_inode_pages_range
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] test_clear_page_writeback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] allocate_slab
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rb_erase
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] resched_curr
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] xa_delete_node
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] compaction_suitable
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_cfs_group
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] drop_buffers
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] calc_wheel_index
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ioread32
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] alarm_timer_callback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nv04_timer_intr
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvkm_subdev_intr
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvkm_timer_alarm
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvkm_timer_alarm_trigger
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvkm_timer_intr
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] inode_lru_isolate
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_io_ticks
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_account_io_start
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_crypt_rq_ctx_compatible
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_add_timer
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wake_up_bit
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] kernfs_put
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] vmpressure_calc_level
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_id_put_many
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __hrtimer_next_event_base
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] alloc_pages_current
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] enqueue_hrtimer
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] clear_page_erms
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] intel_pmu_disable_all
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __mod_lruvec_kmem_state
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] calc_timer_values
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_segcblist_accelerate
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __note_gp_changes
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] note_gp_changes
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] get_nohz_timer_target
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] xas_free_nodes
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] add_interrupt_randomness
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] put_prev_task_fair
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] select_task_rq_fair
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] zone_watermark_ok
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ext4_drop_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] jbd2_journal_put_journal_head
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mem_cgroup_try_charge_swap
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] mm_trace_rss_stat
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] lru_gen_update_size
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] page_rmapping
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_free_request
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] update_group_capacity
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __x86_retpoline_rdx
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] vma_is_shmem
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] tick_do_update_jiffies64
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] drain_obj_stock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] release_pages
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] native_iret
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] radix_tree_node_rcu_free
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] cyc2ns_read_end
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] arch_perf_update_userpage
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] timerqueue_add
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] swake_up_one
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_accelerate_cbs_unlocked
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rcu_gp_kthread_wake
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bio_crypt_ctx_mergeable
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __xas_next
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] anon_vma_interval_tree_iter_next
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __rq_qos_track
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] klist_iter_init_node
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __rq_qos_done_bio
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] blk_mq_get_driver_tag
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __pagevec_lru_add
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __es_tree_search.isra.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __rq_qos_issue
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] get_gate_vma
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nvme_setup_cmd
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __alloc_pages_nodemask
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] get_page_from_freelist
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] rq_wait_inc_below
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dma_direct_map_sg
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wbt_data_dir
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] perf_pmu_nop_int
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] dma_map_page_attrs
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __intel_pmu_enable_all.constprop.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] inode_add_lru
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] generic_delete_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] bit_waitqueue
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __inode_wait_for_writeback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] inode_wait_for_writeback
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] _raw_write_trylock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] _raw_write_lock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] put_pid.part.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] proc_pid_evict_inode
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] put_pid
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __rq_qos_merge
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] prandom_u32
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] detach_if_pending
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] del_timer_sync
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] wb_timer_fn
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] credit_entropy_bits.constprop.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] hrtimer_forward
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] irq_work_tick
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __x86_retpoline_r12
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] d_shrink_del
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] error_return
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] invoke_rcu_core
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] perf_event_set_state.part.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] fragmentation_index
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __bio_try_merge_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] radix_tree_node_ctor
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] sched_clock
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] should_fail_bio
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] pmdp_test_and_clear_young
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] set_pgdat_percpu_threshold
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __activate_page
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] acpi_os_read_memory
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] native_flush_tlb_one_user
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nmi_restore
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] ghes_notify_nmi
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] __ghes_peek_estatus.isra.0
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] nmi_cpu_backtrace_handler
     0.00%     0.00%  kswapd0  [kernel.kallsyms]  [k] intel_bts_enable_local


#
# (Tip: To record callchains for each sample: perf record -g)
#

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14  6:15           ` Huang, Ying
@ 2021-04-14  7:58             ` Yu Zhao
  2021-04-14  8:27               ` Huang, Ying
  0 siblings, 1 reply; 57+ messages in thread
From: Yu Zhao @ 2021-04-14  7:58 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Rik van Riel, Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM,
	Andi Kleen, Andrew Morton, Benjamin Manes, Dave Hansen,
	Hillf Danton, Johannes Weiner, Jonathan Corbet, Joonsoo Kim,
	Matthew Wilcox, Mel Gorman, Miaohe Lin, Michael Larabel,
	Michal Hocko, Michel Lespinasse, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Zi Yan,
	linux-kernel, lkp, Kernel Page Reclaim v2

On Wed, Apr 14, 2021 at 12:15 AM Huang, Ying <ying.huang@intel.com> wrote:
>
> Yu Zhao <yuzhao@google.com> writes:
>
> > On Tue, Apr 13, 2021 at 8:30 PM Rik van Riel <riel@surriel.com> wrote:
> >>
> >> On Wed, 2021-04-14 at 09:14 +1000, Dave Chinner wrote:
> >> > On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
> >> >
> >> > > The initial posting of this patchset did no better, in fact it did
> >> > > a bit
> >> > > worse. Performance dropped to the same levels and kswapd was using
> >> > > as
> >> > > much CPU as before, but on top of that we also got excessive
> >> > > swapping.
> >> > > Not at a high rate, but 5-10MB/sec continually.
> >> > >
> >> > > I had some back and forths with Yu Zhao and tested a few new
> >> > > revisions,
> >> > > and the current series does much better in this regard. Performance
> >> > > still dips a bit when page cache fills, but not nearly as much, and
> >> > > kswapd is using less CPU than before.
> >> >
> >> > Profiles would be interesting, because it sounds to me like reclaim
> >> > *might* be batching page cache removal better (e.g. fewer, larger
> >> > batches) and so spending less time contending on the mapping tree
> >> > lock...
> >> >
> >> > IOWs, I suspect this result might actually be a result of less lock
> >> > contention due to a change in batch processing characteristics of
> >> > the new algorithm rather than it being a "better" algorithm...
> >>
> >> That seems quite likely to me, given the issues we have
> >> had with virtual scan reclaim algorithms in the past.
> >
> > Hi Rik,
> >
> > Let me paste the code so we can move beyond the "batching" hypothesis:
> >
> > static int __remove_mapping(struct address_space *mapping, struct page
> > *page,
> >                             bool reclaimed, struct mem_cgroup *target_memcg)
> > {
> >         unsigned long flags;
> >         int refcount;
> >         void *shadow = NULL;
> >
> >         BUG_ON(!PageLocked(page));
> >         BUG_ON(mapping != page_mapping(page));
> >
> >         xa_lock_irqsave(&mapping->i_pages, flags);
> >
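
(For context, a simplified and partly hypothetical sketch of the caller side: shrink_page_list() walks its private page list and calls __remove_mapping() once per page, so the i_pages lock shown above is taken and released per page. Only the kernel functions named in the thread are real; the wrapper name and elided steps are made up for illustration.)

static void reclaim_batch_sketch(struct list_head *page_list,
				 struct scan_control *sc)
{
	while (!list_empty(page_list)) {
		struct page *page = lru_to_page(page_list);
		struct address_space *mapping = page_mapping(page);

		list_del(&page->lru);
		/* try_to_unmap(), pageout(), etc. are elided here. */

		/*
		 * One call, and thus one xa_lock_irqsave() on
		 * mapping->i_pages, per page: the lock is not held
		 * across pages.
		 */
		if (!mapping ||
		    !__remove_mapping(mapping, page, true,
				      sc->target_mem_cgroup))
			continue;	/* keep the page */

		/* success: the page has been detached and can be freed */
	}
}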
> >> SeongJae, what is this algorithm supposed to do when faced
> >> with situations like this:
> >
> > I'll assume the questions were directed at me, not SeongJae.
> >
> >> 1) Running on a system with 8 NUMA nodes, and memory
> >>    pressure in one of those nodes.
> >> 2) Running PostgreSQL or Oracle, with hundreds of
> >>    processes mapping the same (very large) shared
> >>    memory segment.
> >>
> >> How do you keep your algorithm from falling into the worst
> >> case virtual scanning scenarios that were crippling the
> >> 2.4 kernel 15+ years ago on systems with just a few GB of
> >> memory?
> >
> > There is a fundamental shift: that time we were scanning for cold pages,
> > and nowadays we are scanning for hot pages.
> >
> > I'd be surprised if scanning for cold pages didn't fall apart, because it'd
> > find most of the entries accessed, if they are present at all.
> >
> > Scanning for hot pages, on the other hand, is way better. Let me just
> > reiterate:
> > 1) It will not scan page tables from processes that have been sleeping
> >    since the last scan.
> > 2) It will not scan PTE tables under non-leaf PMD entries that do not
> >    have the accessed bit set, when
> >    CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
> > 3) It will not zigzag between the PGD table and the same PMD or PTE
> >    table spanning multiple VMAs. In other words, it finishes all the
> >    VMAs within the range of the same PMD or PTE table before it returns
> >    to the PGD table. This optimizes workloads that have large numbers
> >    of tiny VMAs, especially when CONFIG_PGTABLE_LEVELS=5.
> >
> > So the cost is roughly proportional to the number of referenced pages it
> > discovers. If there is no memory pressure, no scanning at all. For a system
> > under heavy memory pressure, most of the pages are referenced (otherwise
> > why would it be under memory pressure?), and if we use the rmap, we need to
> > scan a lot of pages anyway. Why not just scan them all?
>
> This may not be the case.  For rmap scanning, it's possible to scan only
> a small portion of memory.  But with the page table scanning, you need
> to scan almost all of it (I understand you have some optimizations, as above).

Hi Ying,

Let's take a step back.

For the sake of discussion, when does the scanning have to happen? Can
we agree that the simplest answer is when we have evicted all inactive
pages?

If so, my next question is who's filled in the memory space previously
occupied by those inactive pages? Newly faulted in pages, right? They
have the accessed bit set, and we can't evict them without scanning
them first, would you agree?
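
(For readers less familiar with the mechanics, "scanning" here means visiting the page table entries and testing/clearing the hardware accessed bit. A minimal, hypothetical sketch of that per-PTE step is below; it ignores pte locking and huge pages, count_young_ptes is a made-up name, and ptep_test_and_clear_young() is the existing kernel primitive.)

static int count_young_ptes(struct vm_area_struct *vma, pmd_t *pmd,
			    unsigned long start, unsigned long end)
{
	unsigned long addr;
	pte_t *ptep, *pte;
	int young = 0;

	ptep = pte = pte_offset_map(pmd, start);
	for (addr = start; addr != end; addr += PAGE_SIZE, pte++) {
		if (!pte_present(*pte))
			continue;
		/* Test and clear the hardware accessed ("young") bit. */
		if (ptep_test_and_clear_young(vma, addr, pte))
			young++;
	}
	pte_unmap(ptep);

	return young;
}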

And there are also existing active pages, and they were protected from
eviction. But now we need to deactivate some of them. Do you think
they'd have been used since the last scan or not? (Remember they were
active.)

You mentioned "a small portion" and "almost all". How do you interpret
them in terms of these steps?

Intuitively, "a small portion" and "almost all" seem right. But our
observations from *millions* of machines say the ratio of
pgscan_kswapd to pgsteal_kswapd is well over 7 when anon percentage is
>90%. Unlike streaming files, it doesn't make sense to "stream" anon
memory.
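
(For reference, the two counters quoted above come straight from /proc/vmstat; a minimal user-space sketch that computes the same ratio on a live system follows, with error handling kept to a minimum.)

#include <stdio.h>
#include <string.h>

int main(void)
{
	char key[64];
	unsigned long long val, scan = 0, steal = 0;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 1;
	while (fscanf(f, "%63s %llu", key, &val) == 2) {
		if (!strcmp(key, "pgscan_kswapd"))
			scan = val;
		else if (!strcmp(key, "pgsteal_kswapd"))
			steal = val;
	}
	fclose(f);

	if (steal)
		printf("pgscan_kswapd / pgsteal_kswapd = %.2f\n",
		       (double)scan / steal);
	return 0;
}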

> As Rik showed in the test case above, there may be memory pressure on
> only one of 8 NUMA nodes (because of NUMA binding?).  Then rmap scanning
> only needs to scan pages in this node, while the page table scanning may
> need to scan pages in other nodes too.

Yes, and this is on my to-do list in the patchset:

To-do List
==========
KVM Optimization
----------------
Support shadow page table scanning.

NUMA Optimization
-----------------
Support NUMA policies and per-node RSS counters.

We can only move forward one step at a time. Fair?

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14  7:58             ` Yu Zhao
@ 2021-04-14  8:27               ` Huang, Ying
  2021-04-14 13:51                 ` Rik van Riel
  0 siblings, 1 reply; 57+ messages in thread
From: Huang, Ying @ 2021-04-14  8:27 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Rik van Riel, Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM,
	Andi Kleen, Andrew Morton, Benjamin Manes, Dave Hansen,
	Hillf Danton, Johannes Weiner, Jonathan Corbet, Joonsoo Kim,
	Matthew Wilcox, Mel Gorman, Miaohe Lin, Michael Larabel,
	Michal Hocko, Michel Lespinasse, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Zi Yan,
	linux-kernel, lkp, Kernel Page Reclaim v2

Yu Zhao <yuzhao@google.com> writes:

> On Wed, Apr 14, 2021 at 12:15 AM Huang, Ying <ying.huang@intel.com> wrote:
>>
>> Yu Zhao <yuzhao@google.com> writes:
>>
>> > On Tue, Apr 13, 2021 at 8:30 PM Rik van Riel <riel@surriel.com> wrote:
>> >>
>> >> On Wed, 2021-04-14 at 09:14 +1000, Dave Chinner wrote:
>> >> > On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
>> >> >
>> >> > > The initial posting of this patchset did no better, in fact it did
>> >> > > a bit
>> >> > > worse. Performance dropped to the same levels and kswapd was using
>> >> > > as
>> >> > > much CPU as before, but on top of that we also got excessive
>> >> > > swapping.
>> >> > > Not at a high rate, but 5-10MB/sec continually.
>> >> > >
>> >> > > I had some back and forths with Yu Zhao and tested a few new
>> >> > > revisions,
>> >> > > and the current series does much better in this regard. Performance
>> >> > > still dips a bit when page cache fills, but not nearly as much, and
>> >> > > kswapd is using less CPU than before.
>> >> >
>> >> > Profiles would be interesting, because it sounds to me like reclaim
>> >> > *might* be batching page cache removal better (e.g. fewer, larger
>> >> > batches) and so spending less time contending on the mapping tree
>> >> > lock...
>> >> >
>> >> > IOWs, I suspect this result might actually be a result of less lock
>> >> > contention due to a change in batch processing characteristics of
>> >> > the new algorithm rather than it being a "better" algorithm...
>> >>
>> >> That seems quite likely to me, given the issues we have
>> >> had with virtual scan reclaim algorithms in the past.
>> >
>> > Hi Rik,
>> >
>> > Let me paste the code so we can move beyond the "batching" hypothesis:
>> >
>> > static int __remove_mapping(struct address_space *mapping, struct page
>> > *page,
>> >                             bool reclaimed, struct mem_cgroup *target_memcg)
>> > {
>> >         unsigned long flags;
>> >         int refcount;
>> >         void *shadow = NULL;
>> >
>> >         BUG_ON(!PageLocked(page));
>> >         BUG_ON(mapping != page_mapping(page));
>> >
>> >         xa_lock_irqsave(&mapping->i_pages, flags);
>> >
>> >> SeongJae, what is this algorithm supposed to do when faced
>> >> with situations like this:
>> >
>> > I'll assume the questions were directed at me, not SeongJae.
>> >
>> >> 1) Running on a system with 8 NUMA nodes, and memory
>> >>    pressure in one of those nodes.
>> >> 2) Running PostgreSQL or Oracle, with hundreds of
>> >>    processes mapping the same (very large) shared
>> >>    memory segment.
>> >>
>> >> How do you keep your algorithm from falling into the worst
>> >> case virtual scanning scenarios that were crippling the
>> >> 2.4 kernel 15+ years ago on systems with just a few GB of
>> >> memory?
>> >
>> > There is a fundamental shift: that time we were scanning for cold pages,
>> > and nowadays we are scanning for hot pages.
>> >
>> > I'd be surprised if scanning for cold pages didn't fall apart, because it'd
>> > find most of the entries accessed, if they are present at all.
>> >
>> > Scanning for hot pages, on the other hand, is way better. Let me just
>> > reiterate:
>> > 1) It will not scan page tables from processes that have been sleeping
>> >    since the last scan.
>> > 2) It will not scan PTE tables under non-leaf PMD entries that do not
>> >    have the accessed bit set, when
>> >    CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
>> > 3) It will not zigzag between the PGD table and the same PMD or PTE
>> >    table spanning multiple VMAs. In other words, it finishes all the
>> >    VMAs within the range of the same PMD or PTE table before it returns
>> >    to the PGD table. This optimizes workloads that have large numbers
>> >    of tiny VMAs, especially when CONFIG_PGTABLE_LEVELS=5.
>> >
>> > So the cost is roughly proportional to the number of referenced pages it
>> > discovers. If there is no memory pressure, no scanning at all. For a system
>> > under heavy memory pressure, most of the pages are referenced (otherwise
>> > why would it be under memory pressure?), and if we use the rmap, we need to
>> > scan a lot of pages anyway. Why not just scan them all?
>>
>> This may not be the case.  For rmap scanning, it's possible to scan only
>> a small portion of memory.  But with the page table scanning, you need
>> to scan almost all of it (I understand you have some optimizations, as above).
>
> Hi Ying,
>
> Let's take a step back.
>
> For the sake of discussion, when does the scanning have to happen? Can
> we agree that the simplest answer is when we have evicted all inactive
> pages?
>
> If so, my next question is who's filled in the memory space previously
> occupied by those inactive pages? Newly faulted in pages, right? They
> have the accessed bit set, and we can't evict them without scanning
> them first, would you agree?
>
> And there are also existing active pages, and they were protected from
> eviction. But now we need to deactivate some of them. Do you think
> they'd have been used since the last scan or not? (Remember they were
> active.)
>
> You mentioned "a small portion" and "almost all". How do you interpret
> them in terms of these steps?
>
> Intuitively, "a small portion" and "almost all" seem right. But our
> observations from *millions* of machines say the ratio of
> pgscan_kswapd to pgsteal_kswapd is well over 7 when anon percentage is
> >90%. Unlike streaming files, it doesn't make sense to "stream" anon
> memory.

What I said is that it is "POSSIBLE" to scan only a small portion of
memory.  Whether and in which cases to do that depends on the policy we
choose.  I didn't say we have chosen to do that for all cases.

>> As Rik showed in the test case above, there may be memory pressure on
>> only one of 8 NUMA nodes (because of NUMA binding?).  Then rmap scanning
>> only needs to scan pages in this node, while the page table scanning may
>> need to scan pages in other nodes too.
>
> Yes, and this is on my to-do list in the patchset:
>
> To-do List
> ==========
> KVM Optimization
> ----------------
> Support shadow page table scanning.
>
> NUMA Optimization
> -----------------
> Support NUMA policies and per-node RSS counters.
>
> We can only move forward one step at a time. Fair?

You definitely don't need to implement that now.  But we can discuss the
possible solution now.

Note that it's possible that only some processes are bound to some NUMA
nodes, while other processes aren't bound.

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14  7:16           ` Yu Zhao
@ 2021-04-14 10:00             ` Yu Zhao
  2021-04-15  1:36             ` Dave Chinner
  1 sibling, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-14 10:00 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Jens Axboe, SeongJae Park, Linux-MM, Andi Kleen, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	Kernel Page Reclaim v2

On Wed, Apr 14, 2021 at 01:16:52AM -0600, Yu Zhao wrote:
> On Tue, Apr 13, 2021 at 10:50 PM Dave Chinner <david@fromorbit.com> wrote:
> >
> > On Tue, Apr 13, 2021 at 09:40:12PM -0600, Yu Zhao wrote:
> > > On Tue, Apr 13, 2021 at 5:14 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
> > > > > On 4/13/21 1:51 AM, SeongJae Park wrote:
> > > > > > From: SeongJae Park <sjpark@amazon.de>
> > > > > >
> > > > > > Hello,
> > > > > >
> > > > > >
> > > > > > Very interesting work, thank you for sharing this :)
> > > > > >
> > > > > > On Tue, 13 Apr 2021 00:56:17 -0600 Yu Zhao <yuzhao@google.com> wrote:
> > > > > >
> > > > > >> What's new in v2
> > > > > >> ================
> > > > > >> Special thanks to Jens Axboe for reporting a regression in buffered
> > > > > >> I/O and helping test the fix.
> > > > > >
> > > > > > Is the discussion open?  If so, could you please give me a link?
> > > > >
> > > > > I wasn't on the initial post (or any of the lists it was posted to), but
> > > > > it's on the google page reclaim list. Not sure if that is public or not.
> > > > >
> > > > > tldr is that I was pretty excited about this work, as buffered IO tends
> > > > > to suck (a lot) for high throughput applications. My test case was
> > > > > pretty simple:
> > > > >
> > > > > Randomly read a fast device, using 4k buffered IO, and watch what
> > > > > happens when the page cache gets filled up. For this particular test,
> > > > > we'll initially be doing 2.1GB/sec of IO, and then drop to 1.5-1.6GB/sec
> > > > > with kswapd using a lot of CPU trying to keep up. That's mainline
> > > > > behavior.
> > > >
> > > > I see this exact same behaviour here, too, but I RCA'd it to
> > > > contention between the inode and memory reclaim for the mapping
> > > > structure that indexes the page cache. Basically the mapping tree
> > > > lock is the contention point here - you can either be adding pages
> > > > to the mapping during IO, or memory reclaim can be removing pages
> > > > from the mapping, but we can't do both at once.
> > > >
> > > > So we end up with kswapd spinning on the mapping tree lock like so
> > > > when doing 1.6GB/s in 4kB buffered IO:
> > > >
> > > > -   20.06%     0.00%  [kernel]               [k] kswapd
> > > >    - 20.06% kswapd
> > > >       - 20.05% balance_pgdat
> > > >          - 20.03% shrink_node
> > > >             - 19.92% shrink_lruvec
> > > >                - 19.91% shrink_inactive_list
> > > >                   - 19.22% shrink_page_list
> > > >                      - 17.51% __remove_mapping
> > > >                         - 14.16% _raw_spin_lock_irqsave
> > > >                            - 14.14% do_raw_spin_lock
> > > >                                 __pv_queued_spin_lock_slowpath
> > > >                         - 1.56% __delete_from_page_cache
> > > >                              0.63% xas_store
> > > >                         - 0.78% _raw_spin_unlock_irqrestore
> > > >                            - 0.69% do_raw_spin_unlock
> > > >                                 __raw_callee_save___pv_queued_spin_unlock
> > > >                      - 0.82% free_unref_page_list
> > > >                         - 0.72% free_unref_page_commit
> > > >                              0.57% free_pcppages_bulk
> > > >
> > > > And these are the processes consuming CPU:
> > > >
> > > >    5171 root      20   0 1442496   5696   1284 R  99.7   0.0   1:07.78 fio
> > > >    1150 root      20   0       0      0      0 S  47.4   0.0   0:22.70 kswapd1
> > > >    1146 root      20   0       0      0      0 S  44.0   0.0   0:21.85 kswapd0
> > > >    1152 root      20   0       0      0      0 S  39.7   0.0   0:18.28 kswapd3
> > > >    1151 root      20   0       0      0      0 S  15.2   0.0   0:12.14 kswapd2
> > > >
> > > > i.e. when memory reclaim kicks in, the read process has 20% less
> > > > time with exclusive access to the mapping tree to insert new pages.
> > > > Hence buffered read performance goes down quite substantially when
> > > > memory reclaim kicks in, and this really has nothing to do with the
> > > > memory reclaim LRU scanning algorithm.
> > > >
> > > > I can actually get this machine to pin those 5 processes to 100% CPU
> > > > under certain conditions. Each process is spinning all that extra
> > > > time on the mapping tree lock, and performance degrades further.
> > > > Changing the LRU reclaim algorithm won't fix this - the workload is
> > > > solidly bound by the exclusive nature of the mapping tree lock and
> > > > the number of tasks trying to obtain it exclusively...
> > > >
> > > > > The initial posting of this patchset did no better, in fact it did a bit
> > > > > worse. Performance dropped to the same levels and kswapd was using as
> > > > > much CPU as before, but on top of that we also got excessive swapping.
> > > > > Not at a high rate, but 5-10MB/sec continually.
> > > > >
> > > > > I had some back and forths with Yu Zhao and tested a few new revisions,
> > > > > and the current series does much better in this regard. Performance
> > > > > still dips a bit when page cache fills, but not nearly as much, and
> > > > > kswapd is using less CPU than before.
> > > >
> > > > Profiles would be interesting, because it sounds to me like reclaim
> > > > *might* be batching page cache removal better (e.g. fewer, larger
> > > > batches) and so spending less time contending on the mapping tree
> > > > lock...
> > > >
> > > > IOWs, I suspect this result might actually be a result of less lock
> > > > contention due to a change in batch processing characteristics of
> > > > the new algorithm rather than it being a "better" algorithm...
> > >
> > > I appreciate the profile. But there is no batching in
> > > __remove_mapping() -- it locks the mapping for each page, and
> > > therefore the lock contention penalizes the mainline and this patchset
> > > equally. It looks worse on your system because the four kswapd threads
> > > from different nodes were working on the same file.
> >
> > I think you misunderstand exactly what I mean by "batching" here.
> > I'm not talking about doing multiple pieces of work under a single
> > lock. What I mean is that the overall amount of work done in a
> > single reclaim scan (i.e a "reclaim batch") is packaged differently.
> >
> > We already batch up page reclaim via building a page list and then
> > passing it to shrink_page_list() to process the batch of pages in a
> > single pass. Each page in this page list batch then calls
> > remove_mapping() to pull the page form the LRU, we have a run of
> > contention between the foreground read() thread and the background
> > kswapd.
> >
> > If the size or nature of the pages in the batch passed to
> > shrink_page_list() changes, then the amount of time a reclaim batch
> > is going to put pressure on the mapping tree lock will also change.
> > That's the "change in batching behaviour" I'm referring to here. I
> > haven't read through the patchset to determine if you change the
> > shrink_page_list() algorithm, but it likely changes what is passed
> > to be reclaimed and that in turn changes the locking patterns that
> > fall out of shrink_page_list...
> 
> Ok, if we are talking about the size of the batch passed to
> shrink_page_list(), both the mainline and this patchset cap it at
> SWAP_CLUSTER_MAX, which is 32. There are corner cases, but when
> running fio/io_uring, it's safe to say both use 32.
> 
> > > And kswapd is only one of two paths that could affect the performance.
> > > The kernel context of the test process is where the improvement mainly
> > > comes from.
> > >
> > > I also suspect you were testing a file much larger than your memory
> > > size. If so, sorry to tell you that a file only a few times larger,
> > > e.g. twice, would be worse.
> > >
> > > Here is my take:
> > >
> > > Claim
> > > -----
> > > This patchset is a "better" algorithm. (Technically it's not an
> > > algorithm, it's a feedback loop.)
> > >
> > > Theoretical basis
> > > -----------------
> > > An open-loop control (the mainline) can only be better if the margin
> > > of error in its prediction of the future events is less than that from
> > > the trial-and-error of a closed-loop control (this patchset). For
> > > simple machines, it surely can. For page reclaim, AFAIK, it can't.
> > >
> > > A typical example: when randomly accessing a (not infinitely) large
> > > file via buffered io long enough, we're bound to hit the same blocks
> > > multiple times. Should we activate the pages containing those blocks,
> > > i.e., to move them to the active lru list?  No.
> > >
> > > RCA
> > > ---
> > > For the fio/io_uring benchmark, the "No" is the key.
> > >
> > > The mainline activates pages accessed multiple times. This is done in
> > > the buffered io access path by mark_page_accessed(), and it takes the
> > > lru lock, which is contended under memory pressure. This contention
> > > slows down both the access path and kswapd. But kswapd is not the
> > > problem here because we are measuring the io_uring process, not kswapd.
> > >
> > > For this patchset, there are no activations since the refault rates of
> > > pages accessed multiple times are similar to those accessed only once
> > > -- activations will only be done to pages from tiers with higher
> > > refault rates.
> > >
> > > If you wish to debunk
> > > ---------------------
> >
> > Nope, it's your job to convince us that it works, not the other way
> > around. It's up to you to prove that your assertions are correct,
> > not for us to prove they are false.
> 
> Just trying to keep people motivated, my homework is my own.
> 
> > > git fetch https://linux-mm.googlesource.com/page-reclaim refs/changes/73/1173/1
> > >
> > > CONFIG_LRU_GEN=y
> > > CONFIG_LRU_GEN_ENABLED=y
> > >
> > > Run your benchmarks
> > >
> > > Profiles (200G mem + 400G file)
> > > -------------------------------
> > > A quick test from Jens' fio/io_uring:
> > >
> > > -rc7
> > >     13.30%  io_uring  xas_load
> > >     13.22%  io_uring  _copy_to_iter
> > >     12.30%  io_uring  __add_to_page_cache_locked
> > >      7.43%  io_uring  clear_page_erms
> > >      4.18%  io_uring  filemap_get_read_batch
> > >      3.54%  io_uring  get_page_from_freelist
> > >      2.98%  io_uring  ***native_queued_spin_lock_slowpath***
> > >      1.61%  io_uring  page_cache_ra_unbounded
> > >      1.16%  io_uring  xas_start
> > >      1.08%  io_uring  filemap_read
> > >      1.07%  io_uring  ***__activate_page***
> > >
> > > lru lock: 2.98% (lru addition + activation)
> > > activation: 1.07%
> > >
> > > -rc7 + this patchset
> > >     14.44%  io_uring  xas_load
> > >     14.14%  io_uring  _copy_to_iter
> > >     11.15%  io_uring  __add_to_page_cache_locked
> > >      6.56%  io_uring  clear_page_erms
> > >      4.44%  io_uring  filemap_get_read_batch
> > >      2.14%  io_uring  get_page_from_freelist
> > >      1.32%  io_uring  page_cache_ra_unbounded
> > >      1.20%  io_uring  psi_group_change
> > >      1.18%  io_uring  filemap_read
> > >      1.09%  io_uring  ***native_queued_spin_lock_slowpath***
> > >      1.08%  io_uring  do_mpage_readpage
> > >
> > > lru lock: 1.09% (lru addition only)
> >
> > All this tells us is that there was *less contention on the mapping
> > tree lock*. It does not tell us why there was less contention.
> >
> > You've handily omitted the kswapd profile, which is really the one
> > of interest to the discussion here - how did the memory reclaim CPU
> > usage profile also change at the same time?
> 
> Well, let me attach them. Suffix -1 is the mainline, -2 is the patchset.
> 
>   mainline
>      57.65%  kswapd0  __remove_mapping
>   this patchset
>      61.61%  kswapd0  __remove_mapping
> 
> As I said, the mapping lock contention penalizes both heavily. Its
> percentage is even higher with the patchset, because it has less
> overhead. I'm trying to explain "the less overhead" part: it's the
> activations that make the mainline worse.
> 
>   mainline
>     6.53%  kswapd0  shrink_active_list
>   this patchset
>     0
> 
> From the io_uring context:
>   mainline
>      2.53%  io_uring  mark_page_accessed
>   this patchset
>      0.52%  io_uring  mark_page_accessed
> 
> mark_page_accessed() moves pages accessed multiple times to the active
> lru list. Then shrink_active_list() moves them back to the inactive
> list. All for nothing.
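
(A rough paraphrase of that mainline behaviour, not the exact kernel source: the first access only sets PG_referenced, the second access activates the page, and the activation is what takes the contended lru lock. mark_accessed_sketch is a made-up name; the page-flag helpers and activate_page() are existing kernel primitives.)

static void mark_accessed_sketch(struct page *page)
{
	if (!PageReferenced(page)) {
		/* first access: cheap, only sets a page flag */
		SetPageReferenced(page);
	} else if (!PageActive(page)) {
		/*
		 * second access: moves the page to the active list,
		 * which is what contends on the lru lock
		 */
		activate_page(page);
		ClearPageReferenced(page);
	}
}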
> 
> I don't want to paste everything here -- they'd clutter. Please see
> all the detailed profiles in the attachment. Let me know if their
> formats are not to your liking. I still have the raw perf.data.
> 
> > > And I plan to reach out to other communities, e.g., PostgreSQL, to
> > > benchmark the patchset. I heard they have been complaining about the
> > > buffered io performance under memory pressure. Any other benchmarks
> > > you'd suggest?
> > >
> > > BTW, you might find another surprise in how much less frequently slab
> > > shrinkers are called under memory pressure, because this patchset is a
> > > lot better at finding pages to reclaim and therefore doesn't overkill
> > > slabs.
> >
> > That's actually very likely to be a Bad Thing and cause unexpected
> > performance and OOM-based regressions. When the machine finally runs
> > out of page cache it can easily reclaim, it's going to get stuck
> > with long tail latencies reclaiming huge slab caches as they've had
> > no substantial ongoing pressure put on them to keep them in balance
> > with the overall memory pressure the system is under...
> 
> Well, it does use the existing equation. That is, if it scans X% of
> pages, then it scans X% of slab objects. But 1) it often finds pages
> to reclaim at a lower X%, and 2) the pages it reclaims are less likely
> to refault. So the side effect is that the overall number of slab
> objects it scans is also reduced. I do see your point but don't see
> any other options at the moment.

I apologize for the spam. Apparently the attachment in my previous
email didn't reach everybody. I hope this works:

git clone https://linux-mm.googlesource.com/benchmarks

The repo contains profiles collected while running fio/io_uring:
  mainline:
    kswapd-1.txt
    kswapd-1.svg
    io_uring-1.txt
    io_uring-1.svg
  
  patched:
    kswapd-2.txt
    kswapd-2.svg
    io_uring-2.txt
    io_uring-2.svg

Thanks.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14  8:27               ` Huang, Ying
@ 2021-04-14 13:51                 ` Rik van Riel
  2021-04-14 15:56                   ` Andi Kleen
                                     ` (2 more replies)
  0 siblings, 3 replies; 57+ messages in thread
From: Rik van Riel @ 2021-04-14 13:51 UTC (permalink / raw)
  To: Huang, Ying, Yu Zhao
  Cc: Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM, Andi Kleen,
	Andrew Morton, Benjamin Manes, Dave Hansen, Hillf Danton,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Roman Gushchin, Rong Chen, SeongJae Park,
	Tim Chen, Vlastimil Babka, Yang Shi, Zi Yan, linux-kernel, lkp,
	Kernel Page Reclaim v2

[-- Attachment #1: Type: text/plain, Size: 1888 bytes --]

On Wed, 2021-04-14 at 16:27 +0800, Huang, Ying wrote:
> Yu Zhao <yuzhao@google.com> writes:
> 
> > On Wed, Apr 14, 2021 at 12:15 AM Huang, Ying <ying.huang@intel.com>
> > wrote:
> > > 
> > NUMA Optimization
> > -----------------
> > Support NUMA policies and per-node RSS counters.
> > 
> > We can only move forward one step at a time. Fair?
> 
> You definitely don't need to implement that now. But we can discuss
> the possible solution now.

That was my intention, too. I want to make sure we don't
end up "painting ourselves into a corner" by moving in some
direction we have no way to get out of.

The patch set looks promising, but we need some plan to
avoid the worst case behaviors that forced us into rmap
based scanning initially.

> Note that it's possible that only some processes are bound to some
> NUMA nodes, while other processes aren't bound.

> For workloads like PostgreSQL or Oracle, it is common
to have maybe 70% of memory in a large shared memory
segment, spread between all the NUMA nodes, and mapped
into hundreds, if not thousands, of processes in the
system.

Now imagine we have an 8 node system, and memory
pressure in the DMA32 zone of node 0.

How will the current VM behave?

What will the virtual scanning need to do?

If we can come up with a solution to make virtual
scanning scale for that kind of workload, great.

If not ... if it turns out most of the benefits of
the multigenerational LRU framework come from sorting
the pages into multiple LRUs, and from being able
to easily reclaim unmapped pages before having to
scan mapped ones, could it be an idea to implement
that first, independently from virtual scanning?

I am all for improving our page reclaim system, I just want to make
sure we don't revisit the old traps that forced us where we are today :)

-- 
All Rights Reversed.

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 10/16] mm: multigenerational lru: mm_struct list
  2021-04-13  6:56 ` [PATCH v2 10/16] mm: multigenerational lru: mm_struct list Yu Zhao
@ 2021-04-14 14:36   ` Matthew Wilcox
  0 siblings, 0 replies; 57+ messages in thread
From: Matthew Wilcox @ 2021-04-14 14:36 UTC (permalink / raw)
  To: Yu Zhao
  Cc: linux-mm, Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	page-reclaim

On Tue, Apr 13, 2021 at 12:56:27AM -0600, Yu Zhao wrote:
> In order to scan page tables, we add an infrastructure to maintain
> either a system-wide mm_struct list or per-memcg mm_struct lists.
> Multiple threads can concurrently work on the same mm_struct list, and
> each of them will be given a different mm_struct.
> 
> This infrastructure also tracks whether an mm_struct is being used on
> any CPUs or has been used since the last time a worker looked at it.
> In other words, workers will not be given an mm_struct that belongs to
> a process that has been sleeping.

This seems like a great use for an allocating XArray.  You can use a
search mark to indicate whether it's been used since the last time a
worker looked at it.
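
A minimal sketch of what that could look like, assuming an allocating
XArray with XA_MARK_0 repurposed as a "used since the last walk" mark;
the names below are illustrative, not from the patchset:

#include <linux/gfp.h>
#include <linux/mm_types.h>
#include <linux/xarray.h>

static DEFINE_XARRAY_ALLOC(mm_xa);

/* register an mm_struct; the XArray allocates the index for us */
static int mm_xa_add(struct mm_struct *mm, u32 *id)
{
	return xa_alloc(&mm_xa, id, mm, xa_limit_32b, GFP_KERNEL);
}

/* called when the mm is scheduled, i.e., has been used */
static void mm_xa_mark_used(u32 id)
{
	xa_set_mark(&mm_xa, id, XA_MARK_0);
}

/* a worker only visits mm_structs used since the last walk */
static void mm_xa_walk(void (*fn)(struct mm_struct *mm))
{
	struct mm_struct *mm;
	unsigned long id;

	xa_for_each_marked(&mm_xa, id, mm, XA_MARK_0) {
		xa_clear_mark(&mm_xa, id, XA_MARK_0);
		fn(mm);
	}
}

Handing different mm_structs to concurrent workers would still need its
own coordination; the mark only covers the "has it been used since the
last walk" bookkeeping.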


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-13 23:14     ` Dave Chinner
  2021-04-14  2:29       ` Rik van Riel
  2021-04-14  3:40       ` Yu Zhao
@ 2021-04-14 14:43       ` Jens Axboe
  2021-04-14 19:42         ` Yu Zhao
  2021-04-15  1:21         ` Dave Chinner
  2 siblings, 2 replies; 57+ messages in thread
From: Jens Axboe @ 2021-04-14 14:43 UTC (permalink / raw)
  To: Dave Chinner
  Cc: SeongJae Park, Yu Zhao, linux-mm, Andi Kleen, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	page-reclaim

On 4/13/21 5:14 PM, Dave Chinner wrote:
> On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
>> On 4/13/21 1:51 AM, SeongJae Park wrote:
>>> From: SeongJae Park <sjpark@amazon.de>
>>>
>>> Hello,
>>>
>>>
>>> Very interesting work, thank you for sharing this :)
>>>
>>> On Tue, 13 Apr 2021 00:56:17 -0600 Yu Zhao <yuzhao@google.com> wrote:
>>>
>>>> What's new in v2
>>>> ================
>>>> Special thanks to Jens Axboe for reporting a regression in buffered
>>>> I/O and helping test the fix.
>>>
>>> Is the discussion open?  If so, could you please give me a link?
>>
>> I wasn't on the initial post (or any of the lists it was posted to), but
>> it's on the google page reclaim list. Not sure if that is public or not.
>>
>> tldr is that I was pretty excited about this work, as buffered IO tends
>> to suck (a lot) for high throughput applications. My test case was
>> pretty simple:
>>
>> Randomly read a fast device, using 4k buffered IO, and watch what
>> happens when the page cache gets filled up. For this particular test,
>> we'll initially be doing 2.1GB/sec of IO, and then drop to 1.5-1.6GB/sec
>> with kswapd using a lot of CPU trying to keep up. That's mainline
>> behavior.
> 
> I see this exact same behaviour here, too, but I RCA'd it to
> contention between the inode and memory reclaim for the mapping
> structure that indexes the page cache. Basically the mapping tree
> lock is the contention point here - you can either be adding pages
> to the mapping during IO, or memory reclaim can be removing pages
> from the mapping, but we can't do both at once.
> 
> So we end up with kswapd spinning on the mapping tree lock like so
> when doing 1.6GB/s in 4kB buffered IO:
> 
> -   20.06%     0.00%  [kernel]               [k] kswapd
>    - 20.06% kswapd
>       - 20.05% balance_pgdat
>          - 20.03% shrink_node
>             - 19.92% shrink_lruvec
>                - 19.91% shrink_inactive_list
>                   - 19.22% shrink_page_list
>                      - 17.51% __remove_mapping
>                         - 14.16% _raw_spin_lock_irqsave
>                            - 14.14% do_raw_spin_lock
>                                 __pv_queued_spin_lock_slowpath
>                         - 1.56% __delete_from_page_cache
>                              0.63% xas_store
>                         - 0.78% _raw_spin_unlock_irqrestore
>                            - 0.69% do_raw_spin_unlock
>                                 __raw_callee_save___pv_queued_spin_unlock
>                      - 0.82% free_unref_page_list
>                         - 0.72% free_unref_page_commit
>                              0.57% free_pcppages_bulk
> 
> And these are the processes consuming CPU:
> 
>    5171 root      20   0 1442496   5696   1284 R  99.7   0.0   1:07.78 fio
>    1150 root      20   0       0      0      0 S  47.4   0.0   0:22.70 kswapd1
>    1146 root      20   0       0      0      0 S  44.0   0.0   0:21.85 kswapd0
>    1152 root      20   0       0      0      0 S  39.7   0.0   0:18.28 kswapd3
>    1151 root      20   0       0      0      0 S  15.2   0.0   0:12.14 kswapd2

Here's my profile when memory reclaim is active for the above mentioned
test case. This is a single node system, so just kswapd. It's using around
40-45% CPU:

    43.69%  kswapd0  [kernel.vmlinux]  [k] xas_create
            |
            ---ret_from_fork
               kthread
               kswapd
               balance_pgdat
               shrink_node
               shrink_lruvec
               shrink_inactive_list
               shrink_page_list
               __delete_from_page_cache
               xas_store
               xas_create

    16.88%  kswapd0  [kernel.vmlinux]  [k] queued_spin_lock_slowpath
            |
            ---ret_from_fork
               kthread
               kswapd
               balance_pgdat
               shrink_node
               shrink_lruvec
               |          
                --16.82%--shrink_inactive_list
                          |          
                           --16.55%--shrink_page_list
                                     |          
                                      --16.26%--_raw_spin_lock_irqsave
                                                queued_spin_lock_slowpath

     9.89%  kswapd0  [kernel.vmlinux]  [k] shrink_page_list
            |
            ---ret_from_fork
               kthread
               kswapd
               balance_pgdat
               shrink_node
               shrink_lruvec
               shrink_inactive_list
               shrink_page_list

     5.46%  kswapd0  [kernel.vmlinux]  [k] xas_init_marks
            |
            ---ret_from_fork
               kthread
               kswapd
               balance_pgdat
               shrink_node
               shrink_lruvec
               shrink_inactive_list
               shrink_page_list
               |          
                --5.41%--__delete_from_page_cache
                          xas_init_marks

     4.42%  kswapd0  [kernel.vmlinux]  [k] __delete_from_page_cache
            |
            ---ret_from_fork
               kthread
               kswapd
               balance_pgdat
               shrink_node
               shrink_lruvec
               shrink_inactive_list
               |          
                --4.40%--shrink_page_list
                          __delete_from_page_cache

     2.82%  kswapd0  [kernel.vmlinux]  [k] isolate_lru_pages
            |
            ---ret_from_fork
               kthread
               kswapd
               balance_pgdat
               shrink_node
               shrink_lruvec
               |          
               |--1.43%--shrink_active_list
               |          isolate_lru_pages
               |          
                --1.39%--shrink_inactive_list
                          isolate_lru_pages

     1.99%  kswapd0  [kernel.vmlinux]  [k] free_pcppages_bulk
            |
            ---ret_from_fork
               kthread
               kswapd
               balance_pgdat
               shrink_node
               shrink_lruvec
               shrink_inactive_list
               shrink_page_list
               free_unref_page_list
               free_unref_page_commit
               free_pcppages_bulk

     1.79%  kswapd0  [kernel.vmlinux]  [k] _raw_spin_lock_irqsave
            |
            ---ret_from_fork
               kthread
               kswapd
               balance_pgdat
               |          
                --1.76%--shrink_node
                          shrink_lruvec
                          shrink_inactive_list
                          |          
                           --1.72%--shrink_page_list
                                     _raw_spin_lock_irqsave

     1.02%  kswapd0  [kernel.vmlinux]  [k] workingset_eviction
            |
            ---ret_from_fork
               kthread
               kswapd
               balance_pgdat
               shrink_node
               shrink_lruvec
               shrink_inactive_list
               |          
                --1.00%--shrink_page_list
                          workingset_eviction

> i.e. when memory reclaim kicks in, the read process has 20% less
> time with exclusive access to the mapping tree to insert new pages.
> Hence buffered read performance goes down quite substantially when
> memory reclaim kicks in, and this really has nothing to do with the
> memory reclaim LRU scanning algorithm.
> 
> I can actually get this machine to pin those 5 processes to 100% CPU
> under certain conditions. Each process is spinning all that extra
> time on the mapping tree lock, and performance degrades further.
> Changing the LRU reclaim algorithm won't fix this - the workload is
> solidly bound by the exclusive nature of the mapping tree lock and
> the number of tasks trying to obtain it exclusively...

I've seen way worse than the above as well, it's just my go-to easy test
case for "man I wish buffered IO didn't suck so much".

>> The initial posting of this patchset did no better, in fact it did a bit
>> worse. Performance dropped to the same levels and kswapd was using as
>> much CPU as before, but on top of that we also got excessive swapping.
>> Not at a high rate, but 5-10MB/sec continually.
>>
>> I had some back and forths with Yu Zhao and tested a few new revisions,
>> and the current series does much better in this regard. Performance
>> still dips a bit when page cache fills, but not nearly as much, and
>> kswapd is using less CPU than before.
> 
> Profiles would be interesting, because it sounds to me like reclaim
> *might* be batching page cache removal better (e.g. fewer, larger
> batches) and so spending less time contending on the mapping tree
> lock...
> 
> IOWs, I suspect this result might actually be a result of less lock
> contention due to a change in batch processing characteristics of
> the new algorithm rather than it being a "better" algorithm...

See above - let me know if you want to see more specific profiling as
well.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
       [not found]         ` <CAOUHufafMcaG8sOS=1YMy2P_6p0R1FzP16bCwpUau7g1-PybBQ@mail.gmail.com>
  2021-04-14  6:15           ` Huang, Ying
@ 2021-04-14 15:51           ` Andi Kleen
  2021-04-14 15:58             ` Rik van Riel
  2021-04-14 19:04             ` Yu Zhao
  1 sibling, 2 replies; 57+ messages in thread
From: Andi Kleen @ 2021-04-14 15:51 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Rik van Riel, Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM,
	Andrew Morton, Benjamin Manes, Dave Hansen, Hillf Danton,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Roman Gushchin, Rong Chen, SeongJae Park,
	Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan,
	linux-kernel, lkp, Kernel Page Reclaim v2

>    2) It will not scan PTE tables under non-leaf PMD entries that do not
>       have the accessed bit set, when
>       CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.

This assumes  that workloads have reasonable locality. Could there
be a worst case where only one or two pages in each PTE are used,
so this PTE skipping trick doesn't work?

-Andi

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14 13:51                 ` Rik van Riel
@ 2021-04-14 15:56                   ` Andi Kleen
  2021-04-14 15:58                   ` [page-reclaim] " Shakeel Butt
  2021-04-14 18:45                   ` Yu Zhao
  2 siblings, 0 replies; 57+ messages in thread
From: Andi Kleen @ 2021-04-14 15:56 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Huang, Ying, Yu Zhao, Dave Chinner, Jens Axboe, SeongJae Park,
	Linux-MM, Andrew Morton, Benjamin Manes, Dave Hansen,
	Hillf Danton, Johannes Weiner, Jonathan Corbet, Joonsoo Kim,
	Matthew Wilcox, Mel Gorman, Miaohe Lin, Michael Larabel,
	Michal Hocko, Michel Lespinasse, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Zi Yan,
	linux-kernel, lkp, Kernel Page Reclaim v2

> Now imagine we have an 8 node system, and memory
> pressure in the DMA32 zone of node 0.

The question is how much we still care about DMA32.
If there are problems they can probably just turn on the IOMMU for
these IO mappings.

-Andi

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14 15:51           ` Andi Kleen
@ 2021-04-14 15:58             ` Rik van Riel
  2021-04-14 19:14               ` Yu Zhao
  2021-04-14 19:04             ` Yu Zhao
  1 sibling, 1 reply; 57+ messages in thread
From: Rik van Riel @ 2021-04-14 15:58 UTC (permalink / raw)
  To: Andi Kleen, Yu Zhao
  Cc: Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	Kernel Page Reclaim v2

[-- Attachment #1: Type: text/plain, Size: 586 bytes --]

On Wed, 2021-04-14 at 08:51 -0700, Andi Kleen wrote:
> >    2) It will not scan PTE tables under non-leaf PMD entries that
> > do not
> >       have the accessed bit set, when
> >       CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
> 
> This assumes  that workloads have reasonable locality. Could there
> be a worst case where only one or two pages in each PTE are used,
> so this PTE skipping trick doesn't work?

Databases with large shared memory segments shared between
many processes come to mind as a real-world example of a
worst case scenario.

-- 
All Rights Reversed.

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [page-reclaim] Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14 13:51                 ` Rik van Riel
  2021-04-14 15:56                   ` Andi Kleen
@ 2021-04-14 15:58                   ` Shakeel Butt
  2021-04-14 18:45                   ` Yu Zhao
  2 siblings, 0 replies; 57+ messages in thread
From: Shakeel Butt @ 2021-04-14 15:58 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Huang, Ying, Yu Zhao, Dave Chinner, Jens Axboe, SeongJae Park,
	Linux-MM, Andi Kleen, Andrew Morton, Benjamin Manes, Dave Hansen,
	Hillf Danton, Johannes Weiner, Jonathan Corbet, Joonsoo Kim,
	Matthew Wilcox, Mel Gorman, Miaohe Lin, Michael Larabel,
	Michal Hocko, Michel Lespinasse, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Zi Yan,
	linux-kernel, lkp, Kernel Page Reclaim v2

On Wed, Apr 14, 2021 at 6:52 AM Rik van Riel <riel@surriel.com> wrote:
>
> On Wed, 2021-04-14 at 16:27 +0800, Huang, Ying wrote:
> > Yu Zhao <yuzhao@google.com> writes:
> >
> > > On Wed, Apr 14, 2021 at 12:15 AM Huang, Ying <ying.huang@intel.com>
> > > wrote:
> > > >
> > > NUMA Optimization
> > > -----------------
> > > Support NUMA policies and per-node RSS counters.
> > >
> > > We can only move forward one step at a time. Fair?
> >
> > You definitely don't need to implement that now. But we can discuss
> > the possible solution now.
>
> That was my intention, too. I want to make sure we don't
> end up "painting ourselves into a corner" by moving in some
> direction we have no way to get out of.
>
> The patch set looks promising, but we need some plan to
> avoid the worst case behaviors that forced us into rmap
> based scanning initially.
>
> > Note that it's possible that only some processes are bound to some
> > NUMA nodes, while other processes aren't bound.
>
> For workloads like PostgreSQL or Oracle, it is common
> to have maybe 70% of memory in a large shared memory
> segment, spread between all the NUMA nodes, and mapped
> into hundreds, if not thousands, of processes in the
> system.
>
> Now imagine we have an 8 node system, and memory
> pressure in the DMA32 zone of node 0.
>
> How will the current VM behave?
>
> What will the virtual scanning need to do?
>
> If we can come up with a solution to make virtual
> scanning scale for that kind of workload, great.
>
> If not ... if it turns out most of the benefits of
> the multigenerational LRU framework come from sorting
> the pages into multiple LRUs, and from being able
> to easily reclaim unmapped pages before having to
> scan mapped ones, could it be an idea to implement
> that first, independently from virtual scanning?
>
> I am all for improving our page reclaim system, I just want to make
> sure we don't revisit the old traps that forced us where we are today :)
>

One potential idea is to take a hybrid approach of rmap and virtual
scanning: if the number of pages targeted to be scanned is below some
threshold, do rmap, otherwise virtual scanning. I think we can
experimentally find a good value for that threshold.
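
As a minimal sketch of that policy (the threshold and names here are
hypothetical, purely to illustrate the idea):

enum scan_method {
	SCAN_RMAP,		/* per-page rmap walks */
	SCAN_PAGE_TABLES,	/* bulk virtual scanning */
};

static enum scan_method choose_scan_method(unsigned long nr_to_scan,
					   unsigned long threshold)
{
	/* small targets: per-page rmap walks are cheaper than walking
	 * every page table on the mm_struct list */
	return nr_to_scan < threshold ? SCAN_RMAP : SCAN_PAGE_TABLES;
}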

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-13  6:56 [PATCH v2 00/16] Multigenerational LRU Framework Yu Zhao
                   ` (16 preceding siblings ...)
  2021-04-13  7:51 ` [PATCH v2 00/16] Multigenerational LRU Framework SeongJae Park
@ 2021-04-14 17:43 ` Johannes Weiner
  2021-04-27 10:35   ` Yu Zhao
  2021-04-29 23:46 ` Konstantin Kharlamov
  18 siblings, 1 reply; 57+ messages in thread
From: Johannes Weiner @ 2021-04-14 17:43 UTC (permalink / raw)
  To: Yu Zhao
  Cc: linux-mm, Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	page-reclaim

Hello Yu,

On Tue, Apr 13, 2021 at 12:56:17AM -0600, Yu Zhao wrote:
> What's new in v2
> ================
> Special thanks to Jens Axboe for reporting a regression in buffered
> I/O and helping test the fix.
> 
> This version includes the support of tiers, which represent levels of
> usage from file descriptors only. Pages accessed N times via file
> descriptors belong to tier order_base_2(N). Each generation contains
> at most MAX_NR_TIERS tiers, and they require additional MAX_NR_TIERS-2
> bits in page->flags. In contrast to moving across generations which
> requires the lru lock, moving across tiers only involves an atomic
> operation on page->flags and therefore has a negligible cost. A
> feedback loop modeled after the well-known PID controller monitors the
> refault rates across all tiers and decides when to activate pages from
> which tiers, on the reclaim path.

Could you elaborate a bit more on the difference between generations
and tiers?

A refault, a page table reference, or a buffered read through a file
descriptor ultimately all boil down to a memory access. The value of
having that memory resident and the cost of bringing it in from
backing storage should be the same regardless of how it's accessed by
userspace; and whether it's an in-memory reference or a non-resident
reference should have the same relative impact on the page's age.

With that context, I don't understand why file descriptor refs and
refaults get such special treatment. Could you shed some light here?

> This feedback model has a few advantages over the current feedforward
> model:
> 1) It has a negligible overhead in the buffered I/O access path
>    because activations are done in the reclaim path.

This is useful if the workload isn't reclaim bound, but it can be
hazardous to defer work to reclaim, too.

If you go through the git history, there have been several patches to
soften access recognition inside reclaim because it can come with
large latencies when page reclaim kicks in after a longer period with
no memory pressure and doesn't have uptodate reference information -
to the point where eating a few extra IOs tends to add less latency to
the workload than waiting for reclaim to refresh its aging data.

Could you elaborate a bit more on the tradeoff here?

> Highlights from the discussions on v1
> =====================================
> Thanks to Ying Huang and Dave Hansen for the comments and suggestions
> on page table scanning.
> 
> A simple worst-case scenario test did not find page table scanning
> underperforms the rmap because of the following optimizations:
> 1) It will not scan page tables from processes that have been sleeping
>    since the last scan.
> 2) It will not scan PTE tables under non-leaf PMD entries that do not
>    have the accessed bit set, when
>    CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
> 3) It will not zigzag between the PGD table and the same PMD or PTE
>    table spanning multiple VMAs. In other words, it finishes all the
>    VMAs with the range of the same PMD or PTE table before it returns
>    to the PGD table. This optimizes workloads that have large numbers
>    of tiny VMAs, especially when CONFIG_PGTABLE_LEVELS=5.
> 
> TLDR
> ====
> The current page reclaim is too expensive in terms of CPU usage and
> often making poor choices about what to evict. We would like to offer
> an alternative framework that is performant, versatile and
> straightforward.
> 
> Repo
> ====
> git fetch https://linux-mm.googlesource.com/page-reclaim refs/changes/73/1173/1
> 
> Gerrit https://linux-mm-review.googlesource.com/c/page-reclaim/+/1173
> 
> Background
> ==========
> DRAM is a major factor in total cost of ownership, and improving
> memory overcommit brings a high return on investment.

RAM cost on one hand.

On the other, paging backends have seen a revolutionary explosion in
iop/s capacity from solid state devices and CPUs that allow in-memory
compression at scale, so a higher rate of paging (semi-random IO) and
thus larger levels of overcommit are possible than ever before.

There is a lot of new opportunity here.

> Over the past decade of research and experimentation in memory
> overcommit, we observed a distinct trend across millions of servers
> and clients: the size of page cache has been decreasing because of
> the growing popularity of cloud storage. Nowadays anon pages account
> for more than 90% of our memory consumption and page cache contains
> mostly executable pages.

This gives the impression that because the number of setups heavily
using the page cache has reduced somewhat, its significance is waning
as well. I don't think that's true. I think we'll continue to have
mainstream workloads for which the page cache is significant.

Yes, the importance of paging anon memory more efficiently (or paging
it at all again, for that matter), has increased dramatically. But IMO
not because it's more prevalent, but rather because of the increase in
paging capacity from the hardware side. It's not like we've been
heavily paging filesystem data beyond cold starts either when it was
more prevalent - workloads quickly fall apart when you do that on
rotating drives.

So that increase in paging capacity also applies to filesystem data,
and makes local filesystems an option again where they might have been
replaced by anonymous blobs managed by a userspace network filesystem.

Take disaggregated storage for example. It's an attractive measure for
reducing per-host CAPEX when the alternative is a local spindle, whose
seekiness doesn't make the network distance look so bad, and prevents
significant memory overcommit anyway. You have to spec the same RAM in
either case.

The equation is different for flash. You can *significantly* reduce
RAM needs of even latency-sensitive, interactive workloads with cheap,
consumer-grade local SSD drives. Disaggregating those drives and
adding the network to the paging path would directly eat into the much
higher RAM savings. It's a much less attractive proposition now. And
that's bringing larger data sets back to local filesystems.

And of course, even in cloud and disaggregated environments, there ARE
those systems that deal with things like source code trees -
development machines, build hosts etc. For those, filesystem data
continues to be the primary workload.

So while I agree with what you say about anon pages, I don't expect
non-trivial (local) filesystem loads to go away anytime soon. The
kernel needs to continue treating it as a first-class citizen.

> Problems
> ========
> Notion of active/inactive
> -------------------------
> For servers equipped with hundreds of gigabytes of memory, the
> granularity of the active/inactive is too coarse to be useful for job
> scheduling. False active/inactive rates are relatively high, and thus
> the assumed savings may not materialize.

The inactive/active naming is certainly confusing for users of the
system. The kernel uses it to preselect reclaim candidates, it's not
meant to indicate how much memory capacity is idle and available.

But a confusion around naming doesn't necessarily indicate it's bad at
what it is actually designed to do.

Fundamentally, LRU ordering is susceptible to a flood of recent pages
with no reuse pushing out the established frequent pages. The split
into inactive and active is simply there to address this shortcoming,
and protect frequent pages from recent ones - where pages that are
only accessed once get reclaimed before pages used twice or more.

Obviously, 'twice or more' is a coarse category, and it's not hard to
imagine that it might go wrong. But please, don't leave it up to the
imagination ;-) It's been in use for two decades or so; it needs a bit
more in-depth analysis of its limitations to justify replacing it.

> For phones and laptops, executable pages are frequently evicted
> despite the fact that there are many less recently used anon pages.
> Major faults on executable pages cause "janks" (slow UI renderings)
> and negatively impact user experience.

This is not because of the inactive/active scheme but rather because
of the anon/file split, which has evolved over the years to just not
swap onto iop-anemic rotational drives.

We ran into the same issue at FB too, where even with painfully
obvious anon candidates and a fast paging backend the kernel would
happily thrash on the page cache instead.

There has been significant work in this area recently to address this
(see commit 5df741963d52506a985b14c4bcd9a25beb9d1981). We've added
extensive testing and production time onto these patches since and
have not found the kernel to be thrashing executables or be reluctant
to go after anonymous pages anymore.

I wonder if your observation takes these recent changes into account?

> For lruvecs from different memcgs or nodes, comparisons are impossible
> due to the lack of a common frame of reference.

My first thought is that this is expected. Workloads running under
different memory constraints, IO priority levels etc. will not have
comparable workingsets: an access frequency that is considered high in
one domain could be considered quite cold in another.

Could you elaborate a bit on the situations where you would want to
compare, and how this is possible by having more generations?

> Solutions
> =========
> Notion of generation numbers
> ----------------------------
> The notion of generation numbers introduces a quantitative approach to
> memory overcommit. A larger number of pages can be spread out across
> a configurable number of generations, and each generation includes all
> pages that have been referenced since the last generation. This
> improved granularity yields relatively low false active/inactive
> rates.
> 
> Given an lruvec, scans of anon and file types and selections between
> them are all based on direct comparisons of generation numbers, which
> are simple and yet effective. For different lruvecs, comparisons are
> still possible based on birth times of generations.

This describes *what* it's doing, but could you elaborate more on how
to think about generations in relation to workload behavior and what
you can predict based on how your workload gets bucketed into these?

If we accept that the current two generations are not enough, how many
should there be instead? Four? Ten?

What determines this? Is it the workload's access pattern? Or the
memory size?

How do I know whether the number of generations I have chosen is right
for my setup? How do I detect when the underlying factors changed and
it no longer is?

How does it manifest if I have too few generations? What about too
many?

What about systems that host a variety of workloads that come and go?
Is there a generation number that will be good for any combination of
workloads on the system as jobs come and go?

For a general purpose OS like Linux, it's nice to be *able* to tune to
your specific requirements, but it's always bad to *have* to. Whatever
we end up doing, there needs to be some reasonable default behavior
that works acceptably for a broad range of workloads out of the box.

> Differential scans via page tables
> ----------------------------------
> Each differential scan discovers all pages that have been referenced
> since the last scan. Specifically, it walks the mm_struct list
> associated with an lruvec to scan page tables of processes that have
> been scheduled since the last scan. The cost of each differential scan
> is roughly proportional to the number of referenced pages it
> discovers. Unless address spaces are extremely sparse, page tables
> usually have better memory locality than the rmap. The end result is
> generally a significant reduction in CPU usage, for workloads using a
> large amount of anon memory.
> 
> Our real-world benchmark that browses popular websites in multiple
> Chrome tabs demonstrates 51% less CPU usage from kswapd and 52% (full)
> less PSI on v5.11. With this patchset, kswapd profile looks like:
>   49.36%  lzo1x_1_do_compress
>    4.54%  page_vma_mapped_walk
>    4.45%  memset_erms
>    3.47%  walk_pte_range
>    2.88%  zram_bvec_rw
> 
> In addition, direct reclaim latency is reduced by 22% at 99th
> percentile and the number of refaults is reduced by 7%. Both metrics
> are important to phones and laptops as they are correlated to user
> experience.

This looks very exciting!

However, this seems to be an improvement completely in its own right:
getting the mapped page access information in a more efficient way.

Is there anything that ties it to the multi-generation LRU that I may
be missing here? Or could it simply be a drop-in replacement for rmap
that gives us the CPU savings right away?

> Framework
> =========
> For each lruvec, evictable pages are divided into multiple
> generations. The youngest generation number is stored in
> lruvec->evictable.max_seq for both anon and file types as they are
> aged on an equal footing. The oldest generation numbers are stored in
> lruvec->evictable.min_seq[2] separately for anon and file types as
> clean file pages can be evicted regardless of may_swap or
> may_writepage. Generation numbers are truncated into
> order_base_2(MAX_NR_GENS+1) bits in order to fit into page->flags. The
> sliding window technique is used to prevent truncated generation
> numbers from overlapping. Each truncated generation number is an index
> to lruvec->evictable.lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES].
> Evictable pages are added to the per-zone lists indexed by max_seq or
> min_seq[2] (modulo MAX_NR_GENS), depending on whether they are being
> faulted in.
> 
> Each generation is then divided into multiple tiers. Tiers represent
> levels of usage from file descriptors only. Pages accessed N times via
> file descriptors belong to tier order_base_2(N). In contrast to moving
> across generations which requires the lru lock, moving across tiers
> only involves an atomic operation on page->flags and therefore has a
> lower cost. A feedback loop modeled after the well-known PID
> controller monitors the refault rates across all tiers and decides
> when to activate pages from which tiers on the reclaim path.
> 
> The framework comprises two conceptually independent components: the
> aging and the eviction, which can be invoked separately from user
> space.

Why from userspace?

> Aging
> -----
> The aging produces young generations. Given an lruvec, the aging scans
> page tables for referenced pages of this lruvec. Upon finding one, the
> aging updates its generation number to max_seq. After each round of
> scan, the aging increments max_seq.
> 
> The aging maintains either a system-wide mm_struct list or per-memcg
> mm_struct lists and tracks whether an mm_struct is being used or has
> been used since the last scan. Multiple threads can concurrently work
> on the same mm_struct list, and each of them will be given a different
> mm_struct belonging to a process that has been scheduled since the
> last scan.
> 
> The aging is due when both of min_seq[2] reach max_seq-1, assuming
> both anon and file types are reclaimable.

As per above, this is centered around mapped pages, but it really
needs to include a detailed answer for unmapped pages, such as page
cache and shmem/tmpfs data, as well as how sampled page table
references behave wrt realtime syscall references.

> Eviction
> --------
> The eviction consumes old generations. Given an lruvec, the eviction
> scans the pages on the per-zone lists indexed by either of min_seq[2].
> It first tries to select a type based on the values of min_seq[2].
> When anon and file types are both available from the same generation,
> it selects the one that has a lower refault rate.
> 
> During a scan, the eviction sorts pages according to their generation
> numbers, if the aging has found them referenced. It also moves pages
> from the tiers that have higher refault rates than tier 0 to the next
> generation.
> 
> When it finds all the per-zone lists of a selected type are empty, the
> eviction increments min_seq[2] indexed by this selected type.
> 
> Use cases
> =========
> On Android, our most advanced simulation that generates memory
> pressure from realistic user behavior shows 18% fewer low-memory
> kills, which in turn reduces cold starts by 16%.

I assume you refer to pressure-induced lmkd kills rather than
conventional kernel OOM kills?

I.e. multi-gen LRU does a better job of identifying the workingset,
rather than giving up too early.

Again, I would be interested if the baseline here includes the recent
anon/file balancing rework or not.

> On Borg, a similar approach enables us to identify jobs that
> underutilize their memory and downsize them considerably without
> compromising any of our service level indicators.

This is doable with the current reclaim implementation as well. At FB
we drive proactive reclaim through cgroup control, in a feedback loop
with psi metrics.

Obviously, this would benefit from better workingset identification in
the kernel, as more memory could be offloaded under the same pressure
tolerances from the workload, but it's more of an optimization than
enabling a uniquely new usecase.

> On Chrome OS, our field telemetry reports 96% fewer low-memory tab
> discards and 59% fewer OOM kills from fully-utilized devices and no
> regressions in monitored user experience from underutilized devices.

Again, lmkd rather than kernel OOM kills, right? And with or without
the anon/file rework?

> Working set estimation
> ----------------------
> User space can invoke the aging by writing "+ memcg_id node_id gen
> [swappiness]" to /sys/kernel/debug/lru_gen. This debugfs interface
> also provides the birth time and the size of each generation.
> 
> Proactive reclaim
> -----------------
> User space can invoke the eviction by writing "- memcg_id node_id gen
> [swappiness] [nr_to_reclaim]" to /sys/kernel/debug/lru_gen. Multiple
> command lines are supported, as is concatenation with delimiters.

Can you explain a bit more how these two are supposed to be used?

The memcg id is self-explanatory: Age or evict pages from this
particular workload.

The node is a bit less intuitive. In most setups, the distance to a
remote NUMA node is much smaller than the distance to the storage
backend, and users would prefer finding and evicting the coldest
memory between multiple nodes, not within an individual node.

Swappiness raises a similar question. Why would the user prefer one
type of data to be reclaimed over the other? Shouldn't it want to
reclaim the pages that are least likely to be used again soon?

> FAQ
> ===
> Why not try to improve the existing code?
> -----------------------------------------
> We have tried but concluded the aforementioned problems are
> fundamental, and therefore changes made on top of them will not result
> in substantial gains.

Realistically, I think incremental changes are unavoidable to get this
merged upstream.

Not just in the sense that they need to be smaller changes, but also
in the sense that they need to replace old code. It would be
impossible to maintain both, focus development and testing resources,
and provide a reasonably stable experience with both systems tugging
at a complicated shared code base.

On the other hand, the existing code also has billions of hours of
production testing and tuning. We can't throw this all out overnight -
it needs to be surgical and the broader consequences of each step need
to be well understood.

We also have millions of servers relying on being able to do upgrades
for drivers and fixes in other subsystems that we can't put on hold
until we stabilized a new reclaim implementation from scratch.

The good thing is that swap really hasn't been used much
recently. There definitely is room to maneuver without being too
disruptive. There *are* swap configurations today, but for the most
part, users don't expect the kernel to swap until the machine is under
heavy pressure. Few have expectations of it doing a nuanced and
efficient memory offloading job under nominal loads. So the anon side
could well be a testbed for the multigen LRU that has a more
reasonable blast radius than doing everything at once.

And if the rmap replacement for mapped pages could be split out as a
CPU optimization for getting MMU info, without changing how those are
interpreted in the same step, I think we'd get into a more manageable
territory with this proposal.

Thanks!
Johannes

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14 13:51                 ` Rik van Riel
  2021-04-14 15:56                   ` Andi Kleen
  2021-04-14 15:58                   ` [page-reclaim] " Shakeel Butt
@ 2021-04-14 18:45                   ` Yu Zhao
  2 siblings, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-14 18:45 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Huang, Ying, Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM,
	Andi Kleen, Andrew Morton, Benjamin Manes, Dave Hansen,
	Hillf Danton, Johannes Weiner, Jonathan Corbet, Joonsoo Kim,
	Matthew Wilcox, Mel Gorman, Miaohe Lin, Michael Larabel,
	Michal Hocko, Michel Lespinasse, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Zi Yan,
	linux-kernel, lkp, Kernel Page Reclaim v2

On Wed, Apr 14, 2021 at 7:52 AM Rik van Riel <riel@surriel.com> wrote:
>
> On Wed, 2021-04-14 at 16:27 +0800, Huang, Ying wrote:
> > Yu Zhao <yuzhao@google.com> writes:
> >
> > > On Wed, Apr 14, 2021 at 12:15 AM Huang, Ying <ying.huang@intel.com>
> > > wrote:
> > > >
> > > NUMA Optimization
> > > -----------------
> > > Support NUMA policies and per-node RSS counters.
> > >
> > > We can only move forward one step at a time. Fair?
> >
> > You definitely don't need to implement that now. But we can discuss
> > the possible solution now.
>
> That was my intention, too. I want to make sure we don't
> end up "painting ourselves into a corner" by moving in some
> direction we have no way to get out of.
>
> The patch set looks promising, but we need some plan to
> avoid the worst case behaviors that forced us into rmap
> based scanning initially.

Hi Rik,

By design, we voluntarily fall back to the rmap when page tables of a
process are too sparse. At the moment, we have

bool should_skip_mm(struct mm_struct *mm)
{
    ...
    /* leave the legwork to the rmap if mapped pages are too sparse */
    if (get_mm_rss(mm) < mm_pgtables_bytes(mm) / PAGE_SIZE)
        return true;
    ...
}

So yes, I agree we have more work to do in this direction: the
fallback should be per-VMA and NUMA-aware. Note that once the fallback
happens, it shares the same path with the existing implementation.

Probably I should have clarified that this patchset does not replace
the rmap with page table scanning. It conditionally uses page table
scanning when it thinks most of the pages on a system could have been
referenced, i.e., when it thinks walking the rmap would be less
efficient, based on generations.

It *unconditionally* walks the rmap to scan each of the pages it
eventually tries to evict, because scanning page tables for a small
batch of pages it wants to evict is too costly.

One of the simple ways to look at how the mixture of page table
scanning and the rmap works is:
  1) it scans page tables (but might fall back to the rmap) to
deactivate pages from the active list to the inactive list, when the
inactive list becomes empty
  2) it walks the rmap (not page table scanning) when it evicts
individual pages from the inactive list.
Does it make sense?
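
A rough sketch of that division of labor, with hypothetical helper
names (they are placeholders for illustration, not the patchset's
actual functions):

struct lruvec;
struct page;

bool scan_page_tables(struct lruvec *lruvec);	/* 1) aging, may decline */
void scan_rmap(struct lruvec *lruvec);		/* 1) aging fallback */
struct page *isolate_oldest(struct lruvec *lruvec);
void evict_one(struct page *page);		/* unmaps via the rmap */

static void age_then_evict(struct lruvec *lruvec, unsigned long nr_to_reclaim)
{
	struct page *page;

	/* 1) refill younger generations when the oldest one runs dry,
	 *    preferably via page table scanning */
	if (!scan_page_tables(lruvec))
		scan_rmap(lruvec);

	/* 2) evict from the oldest generation one page at a time; each
	 *    page is unmapped through the rmap, not by rescanning page
	 *    tables */
	while (nr_to_reclaim-- && (page = isolate_oldest(lruvec)))
		evict_one(page);
}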

I fully agree "the mixture" is currently statistically decided, and it
must be made worst-case scenario proof.

> > Note that it's possible that only some processes are bound to some
> > NUMA nodes, while other processes aren't bound.
>
> For workloads like PostgreSQL or Oracle, it is common
> to have maybe 70% of memory in a large shared memory
> segment, spread between all the NUMA nodes, and mapped
> into hundreds, if not thousands, of processes in the
> system.

I do plan to reach out to the PostgreSQL community and ask for help to
benchmark this patchset. Will keep everybody posted.

> Now imagine we have an 8 node system, and memory
> pressure in the DMA32 zone of node 0.
>
> How will the current VM behave?

At the moment, we don't plan to make the DMA32 zone reclaim a
priority. Rather, I'd suggest
  1) stay with the existing implementation
  2) boost the watermark for DMA32

> What will the virtual scanning need to do?

The high priority items are:

To-do List
==========
KVM Optimization
----------------
Support shadow page table scanning.

NUMA Optimization
-----------------
Support NUMA policies and per-node RSS counters.

We are just trying to focus our resources on the trending use cases. Reasonable?

> If we can come up with a solution to make virtual
> scanning scale for that kind of workload, great.

It won't be easy, but IMO nothing worth doing is easy :)

> If not ... if it turns out most of the benefits of
> the multigenerational LRU framework come from sorting
> the pages into multiple LRUs, and from being able
> to easily reclaim unmapped pages before having to
> scan mapped ones, could it be an idea to implement
> that first, independently from virtual scanning?

This option is on the table, considering the possibilities that
  1) there are unforeseeable problems we can't solve
  2) sorting pages alone has demonstrated its standalone value

I guess 2) alone will help people heavily using page cache. Google
isn't one of them though. Personally I'm neutral (at least trying to
be), and my goal is to accommodate everybody as best as I can.

> I am all for improving our page reclaim system, I just want to make
> sure we don't revisit the old traps that forced us where we are today :)

Yeah, I do see your concerns and we need more data. Any suggestions on
benchmarks you'd be interested in?

Thanks.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14 15:51           ` Andi Kleen
  2021-04-14 15:58             ` Rik van Riel
@ 2021-04-14 19:04             ` Yu Zhao
  2021-04-15  3:00               ` Andi Kleen
  1 sibling, 1 reply; 57+ messages in thread
From: Yu Zhao @ 2021-04-14 19:04 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Rik van Riel, Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM,
	Andrew Morton, Benjamin Manes, Dave Hansen, Hillf Danton,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Roman Gushchin, Rong Chen, SeongJae Park,
	Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan,
	linux-kernel, lkp, Kernel Page Reclaim v2

On Wed, Apr 14, 2021 at 9:51 AM Andi Kleen <ak@linux.intel.com> wrote:
>
> >    2) It will not scan PTE tables under non-leaf PMD entries that do not
> >       have the accessed bit set, when
> >       CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
>
> This assumes  that workloads have reasonable locality. Could there
> be a worst case where only one or two pages in each PTE are used,
> so this PTE skipping trick doesn't work?

Hi Andi,

Yes, it does make that assumption. And yes, there could. AFAIK, only
x86 supports this.

I wrote a crude test to verify this, and it maps exactly one page
within each PTE table. And I found page table scanning didn't
underperform the rmap:

https://lore.kernel.org/linux-mm/YHFuL%2FDdtiml4biw@google.com/#t

The reason (sorry for repeating this) is page table scanning is conditional:

bool should_skip_mm(struct mm_struct *mm)
{
    ...
    /* leave the legwork to the rmap if mapped pages are too sparse */
    if (get_mm_rss(mm) < mm_pgtables_bytes(mm) / PAGE_SIZE)
        return true;
    ...
}

We fall back to the rmap when it's obviously not smart to do so. There
is still a lot of room for improvement in this function though, i.e.,
it should be per-VMA and NUMA-aware.

Note that page table scanning doesn't replace the existing rmap scan.
It's complementary, and it happens when there is a good chance that
most of the pages on a system under pressure have been referenced.
IOW, scanning them one by one with the rmap would cost more than
scanning them all at once via page tables.
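
For reference, the PTE-table skipping Andi asked about boils down to
roughly the following check; this is an illustrative sketch assuming
pmd_young() as provided on x86, not the patchset's exact code:

#include <linux/pgtable.h>

static bool worth_scanning_pte_table(pmd_t pmd)
{
	/*
	 * On x86 the CPU sets the accessed bit in non-leaf PMD entries,
	 * so an entire PTE table can be skipped if its parent entry is
	 * still "old".
	 */
	if (IS_ENABLED(CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG) && !pmd_young(pmd))
		return false;

	return true;
}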

Sounds reasonable?

Thanks.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14 15:58             ` Rik van Riel
@ 2021-04-14 19:14               ` Yu Zhao
  2021-04-14 19:41                 ` Rik van Riel
  0 siblings, 1 reply; 57+ messages in thread
From: Yu Zhao @ 2021-04-14 19:14 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Andi Kleen, Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM,
	Andrew Morton, Benjamin Manes, Dave Hansen, Hillf Danton,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Roman Gushchin, Rong Chen, SeongJae Park,
	Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan,
	linux-kernel, lkp, Kernel Page Reclaim v2

On Wed, Apr 14, 2021 at 9:59 AM Rik van Riel <riel@surriel.com> wrote:
>
> On Wed, 2021-04-14 at 08:51 -0700, Andi Kleen wrote:
> > >    2) It will not scan PTE tables under non-leaf PMD entries that
> > > do not
> > >       have the accessed bit set, when
> > >       CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
> >
> > This assumes  that workloads have reasonable locality. Could there
> > be a worst case where only one or two pages in each PTE are used,
> > so this PTE skipping trick doesn't work?
>
> Databases with large shared memory segments shared between
> many processes come to mind as a real-world example of a
> worst case scenario.

Well, I don't think you two are talking about the same thing. Andi was
focusing on sparsity. Your example seems to be about sharing, i.e.,
high mapcount. Of course both can happen at the same time, as I tested
here:
https://lore.kernel.org/linux-mm/YHFuL%2FDdtiml4biw@google.com/#t

I'm skeptical that shared memory used by databases is that sparse,
i.e., one page per PTE table, because the extremely low locality would
heavily penalize their performance. But my knowledge in databases is
close to zero. So feel free to enlighten me or just ignore what I
said.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14 19:14               ` Yu Zhao
@ 2021-04-14 19:41                 ` Rik van Riel
  2021-04-14 20:08                   ` Yu Zhao
  0 siblings, 1 reply; 57+ messages in thread
From: Rik van Riel @ 2021-04-14 19:41 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Andi Kleen, Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM,
	Andrew Morton, Benjamin Manes, Dave Hansen, Hillf Danton,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Roman Gushchin, Rong Chen, SeongJae Park,
	Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan,
	linux-kernel, lkp, Kernel Page Reclaim v2

[-- Attachment #1: Type: text/plain, Size: 1847 bytes --]

On Wed, 2021-04-14 at 13:14 -0600, Yu Zhao wrote:
> On Wed, Apr 14, 2021 at 9:59 AM Rik van Riel <riel@surriel.com>
> wrote:
> > On Wed, 2021-04-14 at 08:51 -0700, Andi Kleen wrote:
> > > >    2) It will not scan PTE tables under non-leaf PMD entries
> > > > that
> > > > do not
> > > >       have the accessed bit set, when
> > > >       CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
> > > 
> > > This assumes  that workloads have reasonable locality. Could
> > > there
> > > be a worst case where only one or two pages in each PTE are used,
> > > so this PTE skipping trick doesn't work?
> > 
> > Databases with large shared memory segments shared between
> > many processes come to mind as a real-world example of a
> > worst case scenario.
> 
> Well, I don't think you two are talking about the same thing. Andi
> was
> focusing on sparsity. Your example seems to be about sharing, i.e.,
> high mapcount. Of course both can happen at the same time, as I
> tested
> here:
> https://lore.kernel.org/linux-mm/YHFuL%2FDdtiml4biw@google.com/#t
> 
> I'm skeptical that shared memory used by databases is that sparse,
> i.e., one page per PTE table, because the extremely low locality
> would
> heavily penalize their performance. But my knowledge in databases is
> close to zero. So feel free to enlighten me or just ignore what I
> said.

A database may have a 200GB shared memory segment,
and a worker task that gets spun up to handle a
query might access only 1MB of memory to answer
that query.

That memory could be from anywhere inside the
shared memory segment. Maybe some of the accesses
are more dense, and others more sparse, who knows?

A lot of the locality will depend on how memory space inside the
shared memory segment is reclaimed and recycled inside the database.

-- 
All Rights Reversed.

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14 14:43       ` Jens Axboe
@ 2021-04-14 19:42         ` Yu Zhao
  2021-04-15  1:21         ` Dave Chinner
  1 sibling, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-14 19:42 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Dave Chinner, SeongJae Park, Linux-MM, Andi Kleen, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	Kernel Page Reclaim v2

On Wed, Apr 14, 2021 at 8:43 AM Jens Axboe <axboe@kernel.dk> wrote:
>
> On 4/13/21 5:14 PM, Dave Chinner wrote:
> > On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
> >> On 4/13/21 1:51 AM, SeongJae Park wrote:
> >>> From: SeongJae Park <sjpark@amazon.de>
> >>>
> >>> Hello,
> >>>
> >>>
> >>> Very interesting work, thank you for sharing this :)
> >>>
> >>> On Tue, 13 Apr 2021 00:56:17 -0600 Yu Zhao <yuzhao@google.com> wrote:
> >>>
> >>>> What's new in v2
> >>>> ================
> >>>> Special thanks to Jens Axboe for reporting a regression in buffered
> >>>> I/O and helping test the fix.
> >>>
> >>> Is the discussion open?  If so, could you please give me a link?
> >>
> >> I wasn't on the initial post (or any of the lists it was posted to), but
> >> it's on the google page reclaim list. Not sure if that is public or not.
> >>
> >> tldr is that I was pretty excited about this work, as buffered IO tends
> >> to suck (a lot) for high throughput applications. My test case was
> >> pretty simple:
> >>
> >> Randomly read a fast device, using 4k buffered IO, and watch what
> >> happens when the page cache gets filled up. For this particular test,
> >> we'll initially be doing 2.1GB/sec of IO, and then drop to 1.5-1.6GB/sec
> >> with kswapd using a lot of CPU trying to keep up. That's mainline
> >> behavior.
> >
> > I see this exact same behaviour here, too, but I RCA'd it to
> > contention between the inode and memory reclaim for the mapping
> > structure that indexes the page cache. Basically the mapping tree
> > lock is the contention point here - you can either be adding pages
> > to the mapping during IO, or memory reclaim can be removing pages
> > from the mapping, but we can't do both at once.
> >
> > So we end up with kswapd spinning on the mapping tree lock like so
> > when doing 1.6GB/s in 4kB buffered IO:
> >
> > -   20.06%     0.00%  [kernel]               [k] kswapd
> >    - 20.06% kswapd
> >       - 20.05% balance_pgdat
> >          - 20.03% shrink_node
> >             - 19.92% shrink_lruvec
> >                - 19.91% shrink_inactive_list
> >                   - 19.22% shrink_page_list
> >                      - 17.51% __remove_mapping
> >                         - 14.16% _raw_spin_lock_irqsave
> >                            - 14.14% do_raw_spin_lock
> >                                 __pv_queued_spin_lock_slowpath
> >                         - 1.56% __delete_from_page_cache
> >                              0.63% xas_store
> >                         - 0.78% _raw_spin_unlock_irqrestore
> >                            - 0.69% do_raw_spin_unlock
> >                                 __raw_callee_save___pv_queued_spin_unlock
> >                      - 0.82% free_unref_page_list
> >                         - 0.72% free_unref_page_commit
> >                              0.57% free_pcppages_bulk
> >
> > And these are the processes consuming CPU:
> >
> >    5171 root      20   0 1442496   5696   1284 R  99.7   0.0   1:07.78 fio
> >    1150 root      20   0       0      0      0 S  47.4   0.0   0:22.70 kswapd1
> >    1146 root      20   0       0      0      0 S  44.0   0.0   0:21.85 kswapd0
> >    1152 root      20   0       0      0      0 S  39.7   0.0   0:18.28 kswapd3
> >    1151 root      20   0       0      0      0 S  15.2   0.0   0:12.14 kswapd2
>
> Here's my profile when memory reclaim is active for the above mentioned
> test case. This is a single node system, so just kswapd. It's using around
> 40-45% CPU:
>
>     43.69%  kswapd0  [kernel.vmlinux]  [k] xas_create
>             |
>             ---ret_from_fork
>                kthread
>                kswapd
>                balance_pgdat
>                shrink_node
>                shrink_lruvec
>                shrink_inactive_list
>                shrink_page_list
>                __delete_from_page_cache
>                xas_store
>                xas_create
>
>     16.88%  kswapd0  [kernel.vmlinux]  [k] queued_spin_lock_slowpath
>             |
>             ---ret_from_fork
>                kthread
>                kswapd
>                balance_pgdat
>                shrink_node
>                shrink_lruvec
>                |
>                 --16.82%--shrink_inactive_list
>                           |
>                            --16.55%--shrink_page_list
>                                      |
>                                       --16.26%--_raw_spin_lock_irqsave
>                                                 queued_spin_lock_slowpath
>
>      9.89%  kswapd0  [kernel.vmlinux]  [k] shrink_page_list
>             |
>             ---ret_from_fork
>                kthread
>                kswapd
>                balance_pgdat
>                shrink_node
>                shrink_lruvec
>                shrink_inactive_list
>                shrink_page_list
>
>      5.46%  kswapd0  [kernel.vmlinux]  [k] xas_init_marks
>             |
>             ---ret_from_fork
>                kthread
>                kswapd
>                balance_pgdat
>                shrink_node
>                shrink_lruvec
>                shrink_inactive_list
>                shrink_page_list
>                |
>                 --5.41%--__delete_from_page_cache
>                           xas_init_marks
>
>      4.42%  kswapd0  [kernel.vmlinux]  [k] __delete_from_page_cache
>             |
>             ---ret_from_fork
>                kthread
>                kswapd
>                balance_pgdat
>                shrink_node
>                shrink_lruvec
>                shrink_inactive_list
>                |
>                 --4.40%--shrink_page_list
>                           __delete_from_page_cache
>
>      2.82%  kswapd0  [kernel.vmlinux]  [k] isolate_lru_pages
>             |
>             ---ret_from_fork
>                kthread
>                kswapd
>                balance_pgdat
>                shrink_node
>                shrink_lruvec
>                |
>                |--1.43%--shrink_active_list
>                |          isolate_lru_pages
>                |
>                 --1.39%--shrink_inactive_list
>                           isolate_lru_pages
>
>      1.99%  kswapd0  [kernel.vmlinux]  [k] free_pcppages_bulk
>             |
>             ---ret_from_fork
>                kthread
>                kswapd
>                balance_pgdat
>                shrink_node
>                shrink_lruvec
>                shrink_inactive_list
>                shrink_page_list
>                free_unref_page_list
>                free_unref_page_commit
>                free_pcppages_bulk
>
>      1.79%  kswapd0  [kernel.vmlinux]  [k] _raw_spin_lock_irqsave
>             |
>             ---ret_from_fork
>                kthread
>                kswapd
>                balance_pgdat
>                |
>                 --1.76%--shrink_node
>                           shrink_lruvec
>                           shrink_inactive_list
>                           |
>                            --1.72%--shrink_page_list
>                                      _raw_spin_lock_irqsave
>
>      1.02%  kswapd0  [kernel.vmlinux]  [k] workingset_eviction
>             |
>             ---ret_from_fork
>                kthread
>                kswapd
>                balance_pgdat
>                shrink_node
>                shrink_lruvec
>                shrink_inactive_list
>                |
>                 --1.00%--shrink_page_list
>                           workingset_eviction
>
> > i.e. when memory reclaim kicks in, the read process has 20% less
> > time with exclusive access to the mapping tree to insert new pages.
> > Hence buffered read performance goes down quite substantially when
> > memory reclaim kicks in, and this really has nothing to do with the
> > memory reclaim LRU scanning algorithm.
> >
> > I can actually get this machine to pin those 5 processes to 100% CPU
> > under certain conditions. Each process is spinning all that extra
> > time on the mapping tree lock, and performance degrades further.
> > Changing the LRU reclaim algorithm won't fix this - the workload is
> > solidly bound by the exclusive nature of the mapping tree lock and
> > the number of tasks trying to obtain it exclusively...
>
> I've seen way worse than the above as well, it's just my go-to easy test
> case for "man I wish buffered IO didn't suck so much".
>
> >> The initial posting of this patchset did no better, in fact it did a bit
> >> worse. Performance dropped to the same levels and kswapd was using as
> >> much CPU as before, but on top of that we also got excessive swapping.
> >> Not at a high rate, but 5-10MB/sec continually.
> >>
> >> I had some back and forths with Yu Zhao and tested a few new revisions,
> >> and the current series does much better in this regard. Performance
> >> still dips a bit when page cache fills, but not nearly as much, and
> >> kswapd is using less CPU than before.
> >
> > Profiles would be interesting, because it sounds to me like reclaim
> > *might* be batching page cache removal better (e.g. fewer, larger
> > batches) and so spending less time contending on the mapping tree
> > lock...
> >
> > IOWs, I suspect this result might actually be a result of less lock
> > contention due to a change in batch processing characteristics of
> > the new algorithm rather than it being a "better" algorithm...
>
> See above - let me know if you want to see more specific profiling as
> well.

Hi Jens,

Thanks for the profiles.

Does the code path I've demonstrated seem clear to you?

Recap:

When randomly accessing a (not infinitely) large file long enough,
some blocks are bound to be accessed multiple times. In the buffered
I/O access path, mark_page_accessed() activates them, i.e., moves them
to the active list. Once memory is filled and kswapd starts
reclaiming, shrink_active_list() deactivates them, i.e., moves them
back to the inactive list. Both take the lru lock to add/remove pages
to/from the active/inactive lists.
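
For anyone unfamiliar with that path, here is a simplified sketch of
the activation side -- not the exact mainline code; compound pages,
unevictable pages and the per-CPU batching are left out:

void mark_page_accessed(struct page *page)
{
    if (!PageReferenced(page)) {
        /* first access: only remember that we saw the page */
        SetPageReferenced(page);
    } else if (!PageActive(page)) {
        /* second access: promote it, which takes the lru lock */
        activate_page(page);
        ClearPageReferenced(page);
    }
}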

IOW, pages accessed multiple times bounce between the active and the
inactive lists when random accesses put a system under memory
pressure. For random accesses, pages accessed multiple times are not
different from those accessed once, in terms of page reclaim.
(Statistically speaking, they would be no more likely to be used
again.)

I'd be happy to give it another try if there is anything unclear.

Thanks.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14 19:41                 ` Rik van Riel
@ 2021-04-14 20:08                   ` Yu Zhao
  0 siblings, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-14 20:08 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Andi Kleen, Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM,
	Andrew Morton, Benjamin Manes, Dave Hansen, Hillf Danton,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Roman Gushchin, Rong Chen, SeongJae Park,
	Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan,
	linux-kernel, lkp, Kernel Page Reclaim v2

On Wed, Apr 14, 2021 at 1:42 PM Rik van Riel <riel@surriel.com> wrote:
>
> On Wed, 2021-04-14 at 13:14 -0600, Yu Zhao wrote:
> > On Wed, Apr 14, 2021 at 9:59 AM Rik van Riel <riel@surriel.com>
> > wrote:
> > > On Wed, 2021-04-14 at 08:51 -0700, Andi Kleen wrote:
> > > > >    2) It will not scan PTE tables under non-leaf PMD entries
> > > > > that
> > > > > do not
> > > > >       have the accessed bit set, when
> > > > >       CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
> > > >
> > > > This assumes  that workloads have reasonable locality. Could
> > > > there
> > > > be a worst case where only one or two pages in each PTE are used,
> > > > so this PTE skipping trick doesn't work?
> > >
> > > Databases with large shared memory segments shared between
> > > many processes come to mind as a real-world example of a
> > > worst case scenario.
> >
> > Well, I don't think you two are talking about the same thing. Andi
> > was
> > focusing on sparsity. Your example seems to be about sharing, i.e.,
> > high mapcount. Of course both can happen at the same time, as I
> > tested
> > here:
> > https://lore.kernel.org/linux-mm/YHFuL%2FDdtiml4biw@google.com/#t
> >
> > I'm skeptical that shared memory used by databases is that sparse,
> > i.e., one page per PTE table, because the extremely low locality
> > would
> > heavily penalize their performance. But my knowledge in databases is
> > close to zero. So feel free to enlighten me or just ignore what I
> > said.
>
> A database may have a 200GB shared memory segment,
> and a worker task that gets spun up to handle a
> query might access only 1MB of memory to answer
> that query.
>
> That memory could be from anywhere inside the
> shared memory segment. Maybe some of the accesses
> are more dense, and others more sparse, who knows?
>
> A lot of the locality will depend on how memory space inside the
> shared memory segment is reclaimed and recycled inside the database.

Thanks. Yeah, I guess we'll just need to see more benchmarks from the
database realm. Stay tuned :)

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14 14:43       ` Jens Axboe
  2021-04-14 19:42         ` Yu Zhao
@ 2021-04-15  1:21         ` Dave Chinner
  1 sibling, 0 replies; 57+ messages in thread
From: Dave Chinner @ 2021-04-15  1:21 UTC (permalink / raw)
  To: Jens Axboe
  Cc: SeongJae Park, Yu Zhao, linux-mm, Andi Kleen, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	page-reclaim

On Wed, Apr 14, 2021 at 08:43:36AM -0600, Jens Axboe wrote:
> On 4/13/21 5:14 PM, Dave Chinner wrote:
> > On Tue, Apr 13, 2021 at 10:13:24AM -0600, Jens Axboe wrote:
> >> On 4/13/21 1:51 AM, SeongJae Park wrote:
> >>> From: SeongJae Park <sjpark@amazon.de>
> >>>
> >>> Hello,
> >>>
> >>>
> >>> Very interesting work, thank you for sharing this :)
> >>>
> >>> On Tue, 13 Apr 2021 00:56:17 -0600 Yu Zhao <yuzhao@google.com> wrote:
> >>>
> >>>> What's new in v2
> >>>> ================
> >>>> Special thanks to Jens Axboe for reporting a regression in buffered
> >>>> I/O and helping test the fix.
> >>>
> >>> Is the discussion open?  If so, could you please give me a link?
> >>
> >> I wasn't on the initial post (or any of the lists it was posted to), but
> >> it's on the google page reclaim list. Not sure if that is public or not.
> >>
> >> tldr is that I was pretty excited about this work, as buffered IO tends
> >> to suck (a lot) for high throughput applications. My test case was
> >> pretty simple:
> >>
> >> Randomly read a fast device, using 4k buffered IO, and watch what
> >> happens when the page cache gets filled up. For this particular test,
> >> we'll initially be doing 2.1GB/sec of IO, and then drop to 1.5-1.6GB/sec
> >> with kswapd using a lot of CPU trying to keep up. That's mainline
> >> behavior.
> > 
> > I see this exact same behaviour here, too, but I RCA'd it to
> > contention between the inode and memory reclaim for the mapping
> > structure that indexes the page cache. Basically the mapping tree
> > lock is the contention point here - you can either be adding pages
> > to the mapping during IO, or memory reclaim can be removing pages
> > from the mapping, but we can't do both at once.
> > 
> > So we end up with kswapd spinning on the mapping tree lock like so
> > when doing 1.6GB/s in 4kB buffered IO:
> > 
> > -   20.06%     0.00%  [kernel]               [k] kswapd
> >    - 20.06% kswapd
> >       - 20.05% balance_pgdat
> >          - 20.03% shrink_node
> >             - 19.92% shrink_lruvec
> >                - 19.91% shrink_inactive_list
> >                   - 19.22% shrink_page_list
> >                      - 17.51% __remove_mapping
> >                         - 14.16% _raw_spin_lock_irqsave
> >                            - 14.14% do_raw_spin_lock
> >                                 __pv_queued_spin_lock_slowpath
> >                         - 1.56% __delete_from_page_cache
> >                              0.63% xas_store
> >                         - 0.78% _raw_spin_unlock_irqrestore
> >                            - 0.69% do_raw_spin_unlock
> >                                 __raw_callee_save___pv_queued_spin_unlock
> >                      - 0.82% free_unref_page_list
> >                         - 0.72% free_unref_page_commit
> >                              0.57% free_pcppages_bulk
> > 
> > And these are the processes consuming CPU:
> > 
> >    5171 root      20   0 1442496   5696   1284 R  99.7   0.0   1:07.78 fio
> >    1150 root      20   0       0      0      0 S  47.4   0.0   0:22.70 kswapd1
> >    1146 root      20   0       0      0      0 S  44.0   0.0   0:21.85 kswapd0
> >    1152 root      20   0       0      0      0 S  39.7   0.0   0:18.28 kswapd3
> >    1151 root      20   0       0      0      0 S  15.2   0.0   0:12.14 kswapd2
> 
> Here's my profile when memory reclaim is active for the above mentioned
> test case. This is a single node system, so just kswapd. It's using around
> 40-45% CPU:
> 
>     43.69%  kswapd0  [kernel.vmlinux]  [k] xas_create
>             |
>             ---ret_from_fork
>                kthread
>                kswapd
>                balance_pgdat
>                shrink_node
>                shrink_lruvec
>                shrink_inactive_list
>                shrink_page_list
>                __delete_from_page_cache
>                xas_store
>                xas_create
> 
>     16.88%  kswapd0  [kernel.vmlinux]  [k] queued_spin_lock_slowpath
>             |
>             ---ret_from_fork
>                kthread
>                kswapd
>                balance_pgdat
>                shrink_node
>                shrink_lruvec
>                |          
>                 --16.82%--shrink_inactive_list
>                           |          
>                            --16.55%--shrink_page_list
>                                      |          
>                                       --16.26%--_raw_spin_lock_irqsave
>                                                 queued_spin_lock_slowpath

Yeah, so it largely ends up in the same place, with the spinlock
contention dominating the CPU usage and efficiency of memory
reclaim.

> > i.e. when memory reclaim kicks in, the read process has 20% less
> > time with exclusive access to the mapping tree to insert new pages.
> > Hence buffered read performance goes down quite substantially when
> > memory reclaim kicks in, and this really has nothing to do with the
> > memory reclaim LRU scanning algorithm.
> > 
> > I can actually get this machine to pin those 5 processes to 100% CPU
> > under certain conditions. Each process is spinning all that extra
> > time on the mapping tree lock, and performance degrades further.
> > Changing the LRU reclaim algorithm won't fix this - the workload is
> > solidly bound by the exclusive nature of the mapping tree lock and
> > the number of tasks trying to obtain it exclusively...
> 
> I've seen way worse than the above as well, it's just my go-to easy test
> case for "man I wish buffered IO didn't suck so much".

*nod*

> >> The initial posting of this patchset did no better, in fact it did a bit
> >> worse. Performance dropped to the same levels and kswapd was using as
> >> much CPU as before, but on top of that we also got excessive swapping.
> >> Not at a high rate, but 5-10MB/sec continually.
> >>
> >> I had some back and forths with Yu Zhao and tested a few new revisions,
> >> and the current series does much better in this regard. Performance
> >> still dips a bit when page cache fills, but not nearly as much, and
> >> kswapd is using less CPU than before.
> > 
> > Profiles would be interesting, because it sounds to me like reclaim
> > *might* be batching page cache removal better (e.g. fewer, larger
> > batches) and so spending less time contending on the mapping tree
> > lock...
> > 
> > IOWs, I suspect this result might actually be a result of less lock
> > contention due to a change in batch processing characteristics of
> > the new algorithm rather than it being a "better" algorithm...
> 
> See above - let me know if you want to see more specific profiling as
> well.

I don't think that profiles are going to give us the level of detail
required to determine how this algorithm is improving performance.
That would require careful instrumentation of the memory reclaim
algorithms to demonstrate any significant change in behaviour, and
then proof that it's a predictable, consistent improvement across all
types of machines rather than just a freak of interactions between a
specific workload and specific hardware.

When it comes to lock contention like this, you can't infer anything
about external algorithm changes because better algorithms often
make contention worse because the locks are hit harder and so
performance goes the wrong way. Similarly, if the external algorithm
change takes more time to do something because it is less efficient,
then locks are hit less hard, so they contend less, and performance
goes up.

I often see that an external change causing a small reduction in lock
contention and an increase in throughput through a heavily contended
path is a sign something is slower or behaving worse, not better. The
only way to determine if the external change is any good is to first
fix the lock contention problem, then do back-to-back testing of the
change.

Hence I'd be very hesitant to use this test in any way as a measure
of whether the multi-gen LRU is any better for this workload or
not...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14  7:16           ` Yu Zhao
  2021-04-14 10:00             ` Yu Zhao
@ 2021-04-15  1:36             ` Dave Chinner
  2021-04-24 21:21               ` Yu Zhao
  1 sibling, 1 reply; 57+ messages in thread
From: Dave Chinner @ 2021-04-15  1:36 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Jens Axboe, SeongJae Park, Linux-MM, Andi Kleen, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	Kernel Page Reclaim v2

On Wed, Apr 14, 2021 at 01:16:52AM -0600, Yu Zhao wrote:
> On Tue, Apr 13, 2021 at 10:50 PM Dave Chinner <david@fromorbit.com> wrote:
> > On Tue, Apr 13, 2021 at 09:40:12PM -0600, Yu Zhao wrote:
> > > On Tue, Apr 13, 2021 at 5:14 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > Profiles would be interesting, because it sounds to me like reclaim
> > > > *might* be batching page cache removal better (e.g. fewer, larger
> > > > batches) and so spending less time contending on the mapping tree
> > > > lock...
> > > >
> > > > IOWs, I suspect this result might actually be a result of less lock
> > > > contention due to a change in batch processing characteristics of
> > > > the new algorithm rather than it being a "better" algorithm...
> > >
> > > I appreciate the profile. But there is no batching in
> > > __remove_mapping() -- it locks the mapping for each page, and
> > > therefore the lock contention penalizes the mainline and this patchset
> > > equally. It looks worse on your system because the four kswapd threads
> > > from different nodes were working on the same file.
> >
> > I think you misunderstand exactly what I mean by "batching" here.
> > I'm not talking about doing multiple pieces of work under a single
> > lock. What I mean is that the overall amount of work done in a
> > single reclaim scan (i.e a "reclaim batch") is packaged differently.
> >
> > We already batch up page reclaim via building a page list and then
> > passing it to shrink_page_list() to process the batch of pages in a
> > single pass. Each page in this page list batch then calls
> > remove_mapping() to pull the page from the LRU, and we have a run of
> > contention between the foreground read() thread and the background
> > kswapd.
> >
> > If the size or nature of the pages in the batch passed to
> > shrink_page_list() changes, then the amount of time a reclaim batch
> > is going to put pressure on the mapping tree lock will also change.
> > That's the "change in batching behaviour" I'm referring to here. I
> > haven't read through the patchset to determine if you change the
> > shrink_page_list() algorithm, but it likely changes what is passed
> > to be reclaimed and that in turn changes the locking patterns that
> > fall out of shrink_page_list...
> 
> Ok, if we are talking about the size of the batch passed to
> shrink_page_list(), both the mainline and this patchset cap it at
> SWAP_CLUSTER_MAX, which is 32. There are corner cases, but when
> running fio/io_uring, it's safe to say both use 32.

You're still looking at micro-scale behaviour, not the larger-scale
batching effects. Are we passing SWAP_CLUSTER_MAX groups of pages to
shrink_page_list() at a different rate?

When I say "batch of work" when talking about the page cache cycling
*500 thousand pages a second* through the cache, I'm not talking
about batches of 32 pages. I'm talking about the entire batch of
work kswapd does in an invocation cycle.

Is it scanning 100k pages 10 times a second? Or 10k pages a hundred
times a second? How long does a batch take to run? How long does it
sleep between processing batches? Is there any change in these
metrics as a result of the multi-gen LRU patches?
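
(For what it's worth, the existing vmscan tracepoints -- e.g.
vmscan:mm_vmscan_lru_shrink_inactive and vmscan:mm_vmscan_kswapd_sleep,
captured with something like "perf record -e 'vmscan:*' -a" while the
test runs -- should be enough to answer these batch-level questions.)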

Basically, we're looking at how access to the mapping lock is
changing the contention profile, and whether that is significant or
not. I suspect it is, because when you have highly contended locks
and you do something external that reduces unrelated lock
contention, it's because that external thing is taking more time to
do and so there's less time to spend hitting locks hard...

As such, I don't think this test is a good measure of the multi-gen
LRU patches at all - performance is dominated by the severity of
lock contention external to the LRU scanning algorithm, and it's
hard to infer anything through such lock contention....

> I don't want to paste everything here -- they'd clutter. Please see
> all the detailed profiles in the attachment. Let me know if their
> formats are not to your liking. I still have the raw perf.data.

Which makes the discussion thread just about impossible to follow or
comment on. Please just post the relevant excerpt of the stack
profile that you are commenting on.

> > > And I plan to reach out to other communities, e.g., PostgreSQL, to
> > > benchmark the patchset. I heard they have been complaining about the
> > > buffered io performance under memory pressure. Any other benchmarks
> > > you'd suggest?
> > >
> > > BTW, you might find another surprise in how less frequently slab
> > > shrinkers are called under memory pressure, because this patchset is a
> > > lot better at finding pages to reclaim and therefore doesn't overkill
> > > slabs.
> >
> > That's actually very likely to be a Bad Thing and cause unexpected
> > performance and OOM-based regressions. When the machine finally runs
> > out of page cache it can easily reclaim, it's going to get stuck
> > with long tail latencies reclaiming huge slab caches as they've had
> > no substantial ongoing pressure put on them to keep them in balance
> > with the overall memory pressure the system is under...
> 
> Well, it does use the existing equation. That is, if it scans X% of
> pages, then it scans X% of slab objects. But 1) it often finds pages
> to reclaim at a lower X%, and 2) the pages it reclaims are less likely
> to refault. So the side effect is that the overall number of slab
> objects it scans is also reduced. I do see your point but don't see
> any options at the moment.

You'll have to rebalance the memory reclaim algorithms to either:

a) make the shrinkers more aggressive so they do more reclaim when
called less often, or

b) lower the threshold at which shrinkers are called.
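
(For the dentry and inode caches, option (a) roughly corresponds to
raising vm.vfs_cache_pressure above its default of 100, which makes the
superblock shrinker report proportionally more reclaimable objects and
therefore do more work each time it is invoked.)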

Keeping the slab caches in balance with page cache memory pressure
is fairly important for the performance of workloads that generate
inode and dentry cache load, especially those that don't actually
generate page cache pressure. This is the hardest part about making
fundamental changes to memory reclaim behaviour: ensuring that the
system remains balanced over a wide range of differing workloads and
reacts sanely to sudden step changes in workload behaviour...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14 19:04             ` Yu Zhao
@ 2021-04-15  3:00               ` Andi Kleen
  2021-04-15  7:13                 ` Yu Zhao
  0 siblings, 1 reply; 57+ messages in thread
From: Andi Kleen @ 2021-04-15  3:00 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Rik van Riel, Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM,
	Andrew Morton, Benjamin Manes, Dave Hansen, Hillf Danton,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Roman Gushchin, Rong Chen, SeongJae Park,
	Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan,
	linux-kernel, lkp, Kernel Page Reclaim v2

> We fall back to the rmap when it's obviously not smart to do so. There
> is still a lot of room for improvement in this function though, i.e.,
> it should be per VMA and NUMA aware.

Okay so it's more a question to tune the cross over heuristic. That
sounds much easier than replacing everything.

Of course long term it might be a problem to maintain too many 
different ways to do things, but I suppose short term it's a reasonable
strategy.

-Andi


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-15  3:00               ` Andi Kleen
@ 2021-04-15  7:13                 ` Yu Zhao
  2021-04-15  8:19                   ` Huang, Ying
  2021-04-15  9:57                   ` Michel Lespinasse
  0 siblings, 2 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-15  7:13 UTC (permalink / raw)
  To: Rik van Riel, Ying Huang
  Cc: Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Zi Yan, linux-kernel, lkp,
	Kernel Page Reclaim v2, Andi Kleen

On Wed, Apr 14, 2021 at 9:00 PM Andi Kleen <ak@linux.intel.com> wrote:
>
> > We fall back to the rmap when it's obviously not smart to do so. There
> > is still a lot of room for improvement in this function though, i.e.,
> > it should be per VMA and NUMA aware.
>
> Okay so it's more a question to tune the cross over heuristic. That
> sounds much easier than replacing everything.
>
> Of course long term it might be a problem to maintain too many
> different ways to do things, but I suppose short term it's a reasonable
> strategy.

Hi Rik, Ying,

Sorry for being persistent. I want to make sure we are on the same page:

Page table scanning doesn't replace the existing rmap walk. It is
complementary and only happens when it is likely that most of the
pages on a system under pressure have been referenced, i.e., out of
*inactive* pages, by definition of the existing implementation. Under
such a condition, scanning *active* pages one by one with the rmap is
likely to cost more than scanning them all at once via page tables.
When we evict *inactive* pages, we still use the rmap and share a
common path with the existing code.

Page table scanning falls back to the rmap walk if the page tables of
a process are apparently sparse, i.e., rss < size of the page tables.

I should have clarified this at the very beginning of the discussion.
But it has become so natural to me and I assumed we'd all see it this
way.

Your concern regarding the NUMA optimization is still valid, and it's
a high priority.

Thanks.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-15  7:13                 ` Yu Zhao
@ 2021-04-15  8:19                   ` Huang, Ying
  2021-04-15  9:57                   ` Michel Lespinasse
  1 sibling, 0 replies; 57+ messages in thread
From: Huang, Ying @ 2021-04-15  8:19 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Rik van Riel, Dave Chinner, Jens Axboe, SeongJae Park, Linux-MM,
	Andrew Morton, Benjamin Manes, Dave Hansen, Hillf Danton,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Roman Gushchin, Rong Chen, SeongJae Park,
	Tim Chen, Vlastimil Babka, Yang Shi, Zi Yan, linux-kernel, lkp,
	Kernel Page Reclaim v2, Andi Kleen

Yu Zhao <yuzhao@google.com> writes:

> On Wed, Apr 14, 2021 at 9:00 PM Andi Kleen <ak@linux.intel.com> wrote:
>>
>> > We fall back to the rmap when it's obviously not smart to do so. There
>> > is still a lot of room for improvement in this function though, i.e.,
>> > it should be per VMA and NUMA aware.
>>
>> Okay so it's more a question to tune the cross over heuristic. That
>> sounds much easier than replacing everything.
>>
>> Of course long term it might be a problem to maintain too many
>> different ways to do things, but I suppose short term it's a reasonable
>> strategy.
>
> Hi Rik, Ying,
>
> Sorry for being persistent. I want to make sure we are on the same page:
>
> Page table scanning doesn't replace the existing rmap walk. It is
> complementary and only happens when it is likely that most of the
> pages on a system under pressure have been referenced, i.e., out of
> *inactive* pages, by definition of the existing implementation. Under
> such a condition, scanning *active* pages one by one with the rmap is
> likely to cost more than scanning them all at once via page tables.
> When we evict *inactive* pages, we still use the rmap and share a
> common path with the existing code.
>
> Page table scanning falls back to the rmap walk if the page tables of
> a process are apparently sparse, i.e., rss < size of the page tables.
>
> I should have clarified this at the very beginning of the discussion.
> But it has become so natural to me and I assumed we'd all see it this
> way.
>
> Your concern regarding the NUMA optimization is still valid, and it's
> a high priority.

Hi, Yu,

In general, I think it's a good idea to combine the page table scanning
and the rmap scanning in page reclaim.  For example, when the working
set transitions, we can take advantage of the fast page table scanning
to identify the new working set quickly, while we can fall back to the
rmap scanning if the page table scanning doesn't help.

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-15  7:13                 ` Yu Zhao
  2021-04-15  8:19                   ` Huang, Ying
@ 2021-04-15  9:57                   ` Michel Lespinasse
  2021-04-24  2:33                     ` Yu Zhao
  1 sibling, 1 reply; 57+ messages in thread
From: Michel Lespinasse @ 2021-04-15  9:57 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Rik van Riel, Ying Huang, Dave Chinner, Jens Axboe,
	SeongJae Park, Linux-MM, Andrew Morton, Benjamin Manes,
	Dave Hansen, Hillf Danton, Johannes Weiner, Jonathan Corbet,
	Joonsoo Kim, Matthew Wilcox, Mel Gorman, Miaohe Lin,
	Michael Larabel, Michal Hocko, Michel Lespinasse, Roman Gushchin,
	Rong Chen, SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi,
	Zi Yan, linux-kernel, lkp, Kernel Page Reclaim v2, Andi Kleen

On Thu, Apr 15, 2021 at 01:13:13AM -0600, Yu Zhao wrote:
> Page table scanning doesn't replace the existing rmap walk. It is
> complementary and only happens when it is likely that most of the
> pages on a system under pressure have been referenced, i.e., out of
> *inactive* pages, by definition of the existing implementation. Under
> such a condition, scanning *active* pages one by one with the rmap is
> likely to cost more than scanning them all at once via page tables.
> When we evict *inactive* pages, we still use the rmap and share a
> common path with the existing code.
> 
> Page table scanning falls back to the rmap walk if the page tables of
> a process are apparently sparse, i.e., rss < size of the page tables.

Could you expand a bit more as to how page table scanning and rmap
scanning coexist? Say, there is some memory pressure and you want to
identify good candidate pages to reclaim. You could scan processes with
the page table scanning method, or you could scan the lru list through
the rmap method. How do you mix the two - when you use the lru/rmap
method, won't you encounter both pages that are mapped in "dense"
processes where scanning page tables would have been better, and pages
that are mapped in "sparse" processes where you are happy to be using
rmap, and even pages that are mapped into both types of processes at
once?  Or, can you change the lru/rmap scan so that it will efficiently
skip over all dense processes when you use it?

Thanks,

--
Michel "walken" Lespinasse

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-15  9:57                   ` Michel Lespinasse
@ 2021-04-24  2:33                     ` Yu Zhao
  2021-04-24  3:30                       ` Andi Kleen
  0 siblings, 1 reply; 57+ messages in thread
From: Yu Zhao @ 2021-04-24  2:33 UTC (permalink / raw)
  To: Michel Lespinasse
  Cc: Rik van Riel, Ying Huang, Dave Chinner, Jens Axboe,
	SeongJae Park, Linux-MM, Andrew Morton, Benjamin Manes,
	Dave Hansen, Hillf Danton, Johannes Weiner, Jonathan Corbet,
	Joonsoo Kim, Matthew Wilcox, Mel Gorman, Miaohe Lin,
	Michael Larabel, Michal Hocko, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Zi Yan,
	linux-kernel, lkp, Kernel Page Reclaim v2, Andi Kleen

On Sun, Apr 18, 2021 at 12:48 AM Michel Lespinasse
<michel@lespinasse.org> wrote:
> On Thu, Apr 15, 2021 at 01:13:13AM -0600, Yu Zhao wrote:
> > Page table scanning doesn't replace the existing rmap walk. It is
> > complementary and only happens when it is likely that most of the
> > pages on a system under pressure have been referenced, i.e., out of
> > *inactive* pages, by definition of the existing implementation. Under
> > such a condition, scanning *active* pages one by one with the rmap is
> > likely to cost more than scanning them all at once via page tables.
> > When we evict *inactive* pages, we still use the rmap and share a
> > common path with the existing code.
> >
> > Page table scanning falls back to the rmap walk if the page tables of
> > a process are apparently sparse, i.e., rss < size of the page tables.
>
> Could you expand a bit more as to how page table scanning and rmap
> scanning coexist? Say, there is some memory pressure and you want to
> identify good candidate pages to reclaim. You could scan processes with
> the page table scanning method, or you could scan the lru list through
> the rmap method. How do you mix the two - when you use the lru/rmap
> method, won't you encounter both pages that are mapped in "dense"
> processes where scanning page tables would have been better, and pages
> that are mapped in "sparse" processes where you are happy to be using
> rmap, and even pages that are mapped into both types of processes at
> once?  Or, can you change the lru/rmap scan so that it will efficiently
> skip over all dense processes when you use it?

Hi Michel,

Sorry for the late reply. I was out of town and am still catching up on emails.

That's a great question. Currently the page table scanning isn't smart
enough to know where dense regions are. My plan was to improve it
gradually but it seems it couldn't wait because people have major
concerns over this.

At the moment, the page table scanning decides whether a process is
worth scanning by checking its RSS against the size of its page tables.
This can only avoid extremely sparse regions, meaning the page table
scanning will still scan regions that ideally should be covered by the
rmap in some worst-case scenarios. My next step is to add a bloom
filter so it can quickly determine dense regions and target them only.

Given what I just said, the rmap is unlikely to encounter dense
regions, and that's why the perf profile shows its cpu usage drops
from ~30% to ~5%.

Now the question is how we build the bloom filter. A simple answer is
to let the rmap do the legwork, i.e., when it encounters dense
regions, add them to the filter. Of course this means we'll have to
use the rmap more than we do now, which is not ideal for some
workloads but necessary to avoid worst case scenarios.

Does it make sense?

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-24  2:33                     ` Yu Zhao
@ 2021-04-24  3:30                       ` Andi Kleen
  2021-04-24  4:16                         ` Yu Zhao
  0 siblings, 1 reply; 57+ messages in thread
From: Andi Kleen @ 2021-04-24  3:30 UTC (permalink / raw)
  To: Yu Zhao
  Cc: Michel Lespinasse, Rik van Riel, Ying Huang, Dave Chinner,
	Jens Axboe, SeongJae Park, Linux-MM, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Roman Gushchin,
	Rong Chen, SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi,
	Zi Yan, linux-kernel, lkp, Kernel Page Reclaim v2

> Now the question is how we build the bloom filter. A simple answer is
> to let the rmap do the legwork, i.e., when it encounters dense
> regions, add them to the filter. Of course this means we'll have to
> use the rmap more than we do now, which is not ideal for some
> workloads but necessary to avoid worst case scenarios.

How would you maintain the bloom filter over time? Assume a process
that always creates new mappings and unmaps old mappings. How 
do the stale old mappings get removed and avoid polluting it over time?

Or are you thinking of one of the fancier bloom filter variants
that support deletion? As I understand they're significantly less
space efficient and more complicated.

-Andi

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-24  3:30                       ` Andi Kleen
@ 2021-04-24  4:16                         ` Yu Zhao
  0 siblings, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-24  4:16 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Michel Lespinasse, Rik van Riel, Ying Huang, Dave Chinner,
	Jens Axboe, SeongJae Park, Linux-MM, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Roman Gushchin,
	Rong Chen, SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi,
	Zi Yan, linux-kernel, lkp, Kernel Page Reclaim v2

On Fri, Apr 23, 2021 at 9:30 PM Andi Kleen <ak@linux.intel.com> wrote:
>
> > Now the question is how we build the bloom filter. A simple answer is
> > to let the rmap do the legwork, i.e., when it encounters dense
> > regions, add them to the filter. Of course this means we'll have to
> > use the rmap more than we do now, which is not ideal for some
> > workloads but necessary to avoid worst case scenarios.
>
> How would you maintain the bloom filter over time? Assume a process
> that always creates new mappings and unmaps old mappings. How
> do the stale old mappings get removed and avoid polluting it over time?
>
> Or are you thinking of one of the fancier bloom filter variants
> that support deletion? As I understand they're significantly less
> space efficient and more complicated.

Hi Andi,

That's where the double buffering technique comes in :)

Recap: the creation of each new generation starts with scanning page
tables to clear the accessed bit of pages referenced since the last
scan.

We scan page tables according to the current bloom filter, and at the
same time, we build a new one and write it to the second buffer.
During this step, we eliminate regions that have become invalid, e.g.,
too sparse or completely unmapped. Note that the scan *will* miss
newly mapped regions, i.e., dense regions that the rmap hasn't
discovered. Once this step is done, we flip to the second buffer. And
from now on, all the new dense regions discovered by the rmap will be
recorded into this buffer.

Each element in the bloom filter is a hash value from the address of a
page table and a node id, indicating that this page table maps a
worthwhile number of pages from this node.

A single counting bloom filter works too but it doesn't seem to offer
any advantage over double buffering. And we need to handle overflow
too.
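
For concreteness, here is a minimal, self-contained sketch of the
double buffering described above. It is purely an illustration, not
the patchset's code: the hash, the filter size and every name below
are made up.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define BLOOM_BITS (1u << 15)

struct pt_bloom {
	uint64_t bits[2][BLOOM_BITS / 64];
	int cur;	/* the filter the current page table walk reads */
};

static uint32_t pt_hash(uintptr_t pt_addr, int nid, int seed)
{
	/* placeholder mix of (page table address, node id); two seeds
	   give two more or less independent hashes */
	uint64_t h = pt_addr ^ ((uint64_t)nid << 48) ^ (uint64_t)seed;

	h *= 0x9e3779b97f4a7c15ull;
	return (uint32_t)(h >> 40) % BLOOM_BITS;
}

static void pt_bloom_add(struct pt_bloom *b, int which, uintptr_t pt, int nid)
{
	/* during the walk, still-dense page tables go into the other
	   buffer; after the flip, the rmap adds new ones to the current */
	for (int seed = 0; seed < 2; seed++) {
		uint32_t bit = pt_hash(pt, nid, seed);

		b->bits[which][bit / 64] |= 1ull << (bit % 64);
	}
}

static bool pt_bloom_test(const struct pt_bloom *b, uintptr_t pt, int nid)
{
	/* the walk only descends into page tables recorded as dense */
	for (int seed = 0; seed < 2; seed++) {
		uint32_t bit = pt_hash(pt, nid, seed);

		if (!(b->bits[b->cur][bit / 64] & (1ull << (bit % 64))))
			return false;
	}
	return true;
}

static void pt_bloom_flip(struct pt_bloom *b)
{
	/* start of a new generation: switch buffers, clear the stale one */
	b->cur ^= 1;
	memset(b->bits[b->cur ^ 1], 0, sizeof(b->bits[0]));
}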

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-15  1:36             ` Dave Chinner
@ 2021-04-24 21:21               ` Yu Zhao
  0 siblings, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-24 21:21 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Jens Axboe, SeongJae Park, Linux-MM, Andi Kleen, Andrew Morton,
	Benjamin Manes, Dave Hansen, Hillf Danton, Johannes Weiner,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	Kernel Page Reclaim v2

On Wed, Apr 14, 2021 at 7:36 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Wed, Apr 14, 2021 at 01:16:52AM -0600, Yu Zhao wrote:
> > On Tue, Apr 13, 2021 at 10:50 PM Dave Chinner <david@fromorbit.com> wrote:
> > > On Tue, Apr 13, 2021 at 09:40:12PM -0600, Yu Zhao wrote:
> > > > On Tue, Apr 13, 2021 at 5:14 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > > Profiles would be interesting, because it sounds to me like reclaim
> > > > > *might* be batching page cache removal better (e.g. fewer, larger
> > > > > batches) and so spending less time contending on the mapping tree
> > > > > lock...
> > > > >
> > > > > IOWs, I suspect this result might actually be a result of less lock
> > > > > contention due to a change in batch processing characteristics of
> > > > > the new algorithm rather than it being a "better" algorithm...
> > > >
> > > > I appreciate the profile. But there is no batching in
> > > > __remove_mapping() -- it locks the mapping for each page, and
> > > > therefore the lock contention penalizes the mainline and this patchset
> > > > equally. It looks worse on your system because the four kswapd threads
> > > > from different nodes were working on the same file.
> > >
> > > I think you misunderstand exactly what I mean by "batching" here.
> > > I'm not talking about doing multiple pieces of work under a single
> > > lock. What I mean is that the overall amount of work done in a
> > > single reclaim scan (i.e a "reclaim batch") is packaged differently.
> > >
> > > We already batch up page reclaim via building a page list and then
> > > passing it to shrink_page_list() to process the batch of pages in a
> > > single pass. Each page in this page list batch then calls
> > > remove_mapping() to pull the page from the LRU, and we have a run of
> > > contention between the foreground read() thread and the background
> > > kswapd.
> > >
> > > If the size or nature of the pages in the batch passed to
> > > shrink_page_list() changes, then the amount of time a reclaim batch
> > > is going to put pressure on the mapping tree lock will also change.
> > > That's the "change in batching behaviour" I'm referring to here. I
> > > haven't read through the patchset to determine if you change the
> > > shrink_page_list() algorithm, but it likely changes what is passed
> > > to be reclaimed and that in turn changes the locking patterns that
> > > fall out of shrink_page_list...
> >
> > Ok, if we are talking about the size of the batch passed to
> > shrink_page_list(), both the mainline and this patchset cap it at
> > SWAP_CLUSTER_MAX, which is 32. There are corner cases, but when
> > running fio/io_uring, it's safe to say both use 32.
>
> You're still looking at micro-scale behaviour, not the larger-scale
> batching effects. Are we passing SWAP_CLUSTER_MAX groups of pages to
> shrink_page_list() at a different rate?
>
> When I say "batch of work" when talking about the page cache cycling
> *500 thousand pages a second* through the cache, I'm not talking
> about batches of 32 pages. I'm talking about the entire batch of
> work kswapd does in an invocation cycle.
>
> Is it scanning 100k pages 10 times a second? Or 10k pages a hundred
> times a second? How long does a batch take to run? How long does it
> sleep between processing batches? Is there any change in these
> metrics as a result of the multi-gen LRU patches?

Hi Dave,

Sorry for the late reply. Still catching up on emails.

Well, it doesn't really work that way. Yes, I agree that batching can
theoretically affect performance, but the patchset doesn't change
anything in this respect. The number of pages to reclaim is determined
by a common code path shared between the existing implementation and
this patchset. Specifically, kswapd sets "sc->nr_to_reclaim" based on
the high watermark, and passes "sc" down to both code paths:

 static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 {
 <snipped>
+	if (lru_gen_enabled()) {
+		shrink_lru_gens(lruvec, sc);
+		return;
+	}
 <snipped>

And there isn't really any new algorithm. It's just the old plain LRU.
The improvement is purely from a feedback loop that helps avoid
unnecessary activations and deactivations. By activations, I mean the
following work done in the buffered io read path:
  generic_file_read_iter()
    filemap_read()
      mark_page_accessed()
        activate_page()
Simply put, on the second access, the current implementation
unconditionally moves a page from the inactive lru to the active lru,
if it is not already there.
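
For reference, the activation logic in mm/swap.c looks roughly like
this (a simplified sketch; the unevictable and pagevec cases are
elided):

  void mark_page_accessed(struct page *page)
  {
  	if (!PageReferenced(page)) {
  		/* first access via a file descriptor: just remember it */
  		SetPageReferenced(page);
  	} else if (!PageActive(page)) {
  		/* second access: activate unconditionally */
  		activate_page(page);
  		ClearPageReferenced(page);
  		workingset_activation(page);
  	}
  }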

And by deactivations, I mean the following work done in kswapd:
  kswapd_shrink_node()
    shrink_node()
      shrink_node_memcgs()
        shrink_lruvec()
          shrink_active_list()
i.e., kswapd moves activated pages back to the inactive list before it
can evict them.

For random accesses, every page is equal and none is truly active.
But how can we tell? Because the refault rate, i.e., how often pages
are accessed again after being evicted, is the same for all pages, no
matter how many times they have been accessed before. This is exactly
what the feedback loop measures.
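
To put it in code (a sketch only -- the names and data layout are made
up and do not match the patchset), the decision in the reclaim path
boils down to a comparison like:

  struct tier_stats {
  	unsigned long refaulted;
  	unsigned long evicted;
  };

  /* activate pages from a tier only if it refaults more than tier 0 */
  static int worth_activating(const struct tier_stats *tiers, int tier)
  {
  	/* cross-multiply to compare refaulted/evicted without dividing */
  	return tiers[tier].refaulted * (tiers[0].evicted + 1) >
  	       tiers[0].refaulted * (tiers[tier].evicted + 1);
  }

For a purely random workload, all tiers end up with roughly the same
refault rate, so nothing gets activated and neither
shrink_active_list() nor mark_page_accessed() has any useful work to
do.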

> Basically, we're looking at how access to the mapping lock is
> changing the contention profile, and whether that is significant or
> not. I suspect it is, because when you have highly contended locks
> and you do something external that reduces unrelated lock
> contention, it's because that external thing is taking more time to
> do and so there's less time to spend hitting locks hard...
>
> As such, I don't think this test is a good measure of the multi-gen
> LRU patches at all - performance is dominated by the severity of
> lock contention external to the LRU scanning algorithm, and it's
> hard to infer anything through such lock contention....

The lock contention you saw earlier is because of the four-node
system you used -- each node has a kswapd thread but there is only one
file, so the four kswapd threads keep banging on the same mapping
lock. This can be easily fixed if we just run the same test with a
single node *or* with multiple files.

Here is what I got (it's basically what I've attached in the previous email).

Before the patchset:

kswapd
# Children      Self  Symbol
# ........  ........  .......................................
   100.00%     0.00%  kswapd
    99.90%     0.01%  balance_pgdat
    99.86%     0.05%  shrink_node
    98.00%     0.19%  shrink_lruvec
    91.13%     0.10%  shrink_inactive_list
    75.67%     5.97%  shrink_page_list
    57.65%     2.59%  __remove_mapping
    45.99%     0.27%  __delete_from_page_cache
    42.88%     0.88%  page_cache_delete
    39.05%     1.04%  xas_store
    37.71%    37.67%  xas_create
    12.62%    11.79%  isolate_lru_pages
     8.52%     0.98%  free_unref_page_list
     7.38%     0.61%  free_unref_page_commit
     6.68%     1.78%  free_pcppages_bulk
     6.53%     0.83%  ***shrink_active_list***
     6.45%     3.21%  _raw_spin_lock_irqsave
     4.82%     4.60%  __free_one_page
     4.58%     4.21%  unlock_page
     3.62%     3.55%  native_queued_spin_lock_slowpath
     2.49%     0.71%  unaccount_page_cache_page
     2.46%     0.81%  workingset_eviction
     2.14%     0.33%  __mod_lruvec_state
     1.97%     1.88%  xas_clear_mark
     1.73%     0.26%  __mod_lruvec_page_state
     1.71%     1.06%  move_pages_to_lru
     1.66%     1.62%  workingset_age_nonresident
     1.60%     0.85%  __mod_memcg_lruvec_state
     1.58%     0.02%  shrink_slab
     1.49%     0.13%  mem_cgroup_uncharge_list
     1.45%     0.06%  do_shrink_slab
     1.37%     1.32%  page_mapping
     1.06%     0.76%  count_shadow_nodes

io_uring
# Children      Self  Symbol
# ........  ........  ........................................
    99.22%     0.00%  do_syscall_64
    94.48%     0.22%  __io_queue_sqe
    94.09%     0.31%  io_issue_sqe
    93.33%     0.62%  io_read
    88.57%     0.93%  blkdev_read_iter
    87.80%     0.15%  io_iter_do_read
    87.57%     0.16%  generic_file_read_iter
    87.35%     1.08%  filemap_read
    82.44%     0.01%  __x64_sys_io_uring_enter
    82.34%     0.01%  __do_sys_io_uring_enter
    82.09%     0.47%  io_submit_sqes
    79.50%     0.08%  io_queue_sqe
    71.47%     1.08%  filemap_get_pages
    53.08%     0.35%  ondemand_readahead
    51.74%     1.49%  page_cache_ra_unbounded
    49.92%     0.13%  page_cache_sync_ra
    21.26%     0.63%  add_to_page_cache_lru
    17.08%     0.02%  task_work_run
    16.98%     0.03%  exit_to_user_mode_prepare
    16.94%    10.52%  __add_to_page_cache_locked
    16.94%     0.06%  tctx_task_work
    16.79%     0.19%  syscall_exit_to_user_mode
    16.41%     2.61%  filemap_get_read_batch
    16.08%    15.72%  xas_load
    15.99%     0.15%  read_pages
    15.87%     0.13%  blkdev_readahead
    15.72%     0.56%  mpage_readahead
    15.58%     0.09%  io_req_task_submit
    15.37%     0.07%  __io_req_task_submit
    12.14%     0.15%  copy_page_to_iter
    12.03%     0.03%  ***__page_cache_alloc***
    11.98%     0.14%  alloc_pages_current
    11.66%     0.30%  __alloc_pages_nodemask
    11.28%    11.05%  _copy_to_iter
    11.06%     3.16%  get_page_from_freelist
    10.53%     0.10%  submit_bio
    10.27%     0.21%  submit_bio_noacct
     8.30%     0.51%  blk_mq_submit_bio
     7.73%     7.62%  clear_page_erms
     3.68%     1.02%  do_mpage_readpage
     3.33%     0.01%  page_cache_async_ra
     3.25%     0.01%  blk_flush_plug_list
     3.24%     0.06%  blk_mq_flush_plug_list
     3.18%     0.01%  blk_mq_sched_insert_requests
     3.15%     0.13%  blk_mq_try_issue_list_directly
     2.84%     0.15%  __blk_mq_try_issue_directly
     2.69%     0.58%  nvme_queue_rq
     2.54%     1.05%  blk_attempt_plug_merge
     2.53%     0.44%  ***mark_page_accessed***
     2.36%     0.12%  rw_verify_area
     2.16%     0.09%  mpage_alloc
     2.10%     0.27%  lru_cache_add
     1.81%     0.27%  security_file_permission
     1.75%     0.87%  __pagevec_lru_add
     1.70%     0.63%  _raw_spin_lock_irq
     1.53%     0.40%  xa_get_order
     1.52%     0.15%  __blk_mq_alloc_request
     1.50%     0.65%  workingset_refault
     1.50%     0.07%  activate_page
     1.48%     0.29%  io_submit_flush_completions
     1.46%     0.32%  bio_alloc_bioset
     1.42%     0.15%  xa_load
     1.40%     0.16%  pagevec_lru_move_fn
     1.39%     0.06%  blk_finish_plug
     1.35%     0.28%  submit_bio_checks
     1.26%     0.02%  asm_common_interrupt
     1.25%     0.01%  common_interrupt
     1.21%     1.21%  native_queued_spin_lock_slowpath
     1.18%     0.10%  mempool_alloc
     1.17%     0.00%  __common_interrupt
     1.14%     0.03%  handle_edge_irq
     1.08%     0.87%  apparmor_file_permission
     1.03%     0.19%  __mod_lruvec_state
     1.02%     0.65%  blk_rq_merge_ok
     1.01%     0.94%  xas_start

After the patchset:

kswapd
# Children      Self  Symbol
# ........  ........  ........................................
   100.00%     0.00%  kswapd
    99.92%     0.11%  balance_pgdat
    99.32%     0.03%  shrink_node
    97.25%     0.32%  shrink_lruvec
    96.80%     0.09%  evict_lru_gen_pages
    77.82%     6.28%  shrink_page_list
    61.61%     2.76%  __remove_mapping
    50.28%     0.33%  __delete_from_page_cache
    46.63%     1.08%  page_cache_delete
    42.20%     1.16%  xas_store
    40.71%    40.67%  xas_create
    12.54%     7.76%  isolate_lru_gen_pages
     6.42%     3.19%  _raw_spin_lock_irqsave
     6.15%     0.91%  free_unref_page_list
     5.62%     5.45%  unlock_page
     5.05%     0.59%  free_unref_page_commit
     4.35%     2.04%  lru_gen_update_size
     4.31%     1.41%  free_pcppages_bulk
     3.43%     3.36%  native_queued_spin_lock_slowpath
     3.38%     0.59%  __mod_lruvec_state
     2.97%     0.78%  unaccount_page_cache_page
     2.82%     2.52%  __free_one_page
     2.33%     1.18%  __mod_memcg_lruvec_state
     2.28%     2.17%  xas_clear_mark
     2.13%     0.30%  __mod_lruvec_page_state
     1.88%     0.04%  shrink_slab
     1.82%     1.78%  workingset_eviction
     1.74%     0.06%  do_shrink_slab
     1.70%     0.15%  mem_cgroup_uncharge_list
     1.39%     1.01%  count_shadow_nodes
     1.22%     1.18%  __mod_memcg_state.part.0
     1.16%     1.11%  page_mapping
     1.02%     0.98%  xas_init_marks

io_uring
# Children      Self  Symbol
# ........  ........  ........................................
    99.19%     0.01%  entry_SYSCALL_64_after_hwframe
    99.16%     0.00%  do_syscall_64
    94.78%     0.18%  __io_queue_sqe
    94.41%     0.25%  io_issue_sqe
    93.60%     0.48%  io_read
    89.35%     0.96%  blkdev_read_iter
    88.44%     0.12%  io_iter_do_read
    88.25%     0.16%  generic_file_read_iter
    88.00%     1.20%  filemap_read
    84.01%     0.01%  __x64_sys_io_uring_enter
    83.91%     0.01%  __do_sys_io_uring_enter
    83.74%     0.37%  io_submit_sqes
    81.28%     0.07%  io_queue_sqe
    74.65%     0.96%  filemap_get_pages
    55.92%     0.35%  ondemand_readahead
    54.57%     1.34%  page_cache_ra_unbounded
    51.57%     0.12%  page_cache_sync_ra
    24.14%     0.51%  add_to_page_cache_lru
    19.04%    11.51%  __add_to_page_cache_locked
    18.48%     0.13%  read_pages
    18.42%     0.18%  blkdev_readahead
    18.20%     0.55%  mpage_readahead
    16.81%     2.31%  filemap_get_read_batch
    16.37%    14.83%  xas_load
    15.40%     0.02%  task_work_run
    15.38%     0.03%  exit_to_user_mode_prepare
    15.31%     0.05%  tctx_task_work
    15.14%     0.15%  syscall_exit_to_user_mode
    14.05%     0.04%  io_req_task_submit
    13.86%     0.05%  __io_req_task_submit
    12.92%     0.12%  submit_bio
    11.40%     0.13%  copy_page_to_iter
    10.65%     9.61%  _copy_to_iter
     9.45%     0.03%  ***__page_cache_alloc***
     9.42%     0.16%  submit_bio_noacct
     9.40%     0.11%  alloc_pages_current
     9.11%     0.30%  __alloc_pages_nodemask
     8.53%     1.81%  get_page_from_freelist
     8.38%     0.10%  asm_common_interrupt
     8.26%     0.06%  common_interrupt
     7.75%     0.05%  __common_interrupt
     7.62%     0.44%  blk_mq_submit_bio
     7.56%     0.20%  handle_edge_irq
     6.45%     5.90%  clear_page_erms
     5.25%     0.10%  handle_irq_event
     4.88%     0.19%  nvme_irq
     4.83%     0.07%  __handle_irq_event_percpu
     4.73%     0.52%  nvme_process_cq
     4.52%     0.01%  page_cache_async_ra
     4.00%     0.04%  nvme_pci_complete_rq
     3.82%     0.04%  nvme_complete_rq
     3.76%     1.11%  do_mpage_readpage
     3.74%     0.06%  blk_mq_end_request
     3.03%     0.01%  blk_flush_plug_list
     3.02%     0.06%  blk_mq_flush_plug_list
     2.96%     0.01%  blk_mq_sched_insert_requests
     2.94%     0.10%  blk_mq_try_issue_list_directly
     2.89%     0.00%  __irqentry_text_start
     2.71%     0.41%  psi_task_change
     2.67%     0.21%  lru_cache_add
     2.65%     0.14%  __blk_mq_try_issue_directly
     2.53%     0.54%  nvme_queue_rq
     2.43%     0.17%  blk_update_request
     2.42%     0.58%  __pagevec_lru_add
     2.29%     1.42%  psi_group_change
     2.22%     0.85%  blk_attempt_plug_merge
     2.14%     0.04%  bio_endio
     2.13%     0.11%  rw_verify_area
     2.08%     0.18%  mpage_end_io
     2.01%     0.08%  mpage_alloc
     1.71%     0.56%  _raw_spin_lock_irq
     1.65%     0.98%  workingset_refault
     1.65%     0.09%  psi_memstall_leave
     1.64%     0.20%  security_file_permission
     1.61%     1.59%  _raw_spin_lock
     1.58%     0.08%  psi_memstall_enter
     1.44%     0.37%  xa_get_order
     1.43%     0.13%  __blk_mq_alloc_request
     1.37%     0.26%  io_submit_flush_completions
     1.36%     0.14%  xa_load
     1.34%     0.31%  bio_alloc_bioset
     1.31%     0.26%  submit_bio_checks
     1.29%     0.04%  blk_finish_plug
     1.28%     1.27%  native_queued_spin_lock_slowpath
     1.24%     0.19%  page_endio
     1.13%     0.10%  unlock_page
     1.09%     0.99%  read_tsc
     1.07%     0.74%  lru_gen_addition
     1.03%     0.09%  mempool_alloc
     1.02%     0.13%  wake_up_page_bit
     1.02%     0.92%  xas_start
     1.01%     0.78%  apparmor_file_permission

By comparing the two sets, we can clearly see what's changed:

Before the patchset:
     6.53%     0.83%  ***shrink_active_list***
    12.03%     0.03%  ***__page_cache_alloc***
     2.53%     0.44%  ***mark_page_accessed***

After the patchset:
     9.45%     0.03%  ***__page_cache_alloc***
(There is no shrink_active_list() or mark_page_accessed() anymore
because we no longer activate or deactivate pages, for this test
case.)

Hopefully this is clear enough. But I do see where your skepticism
comes from and I don't want to dismiss it out of hand. So if you have
any other benchmarks, I'd be happy to try them. What do you think?

Thanks.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-14 17:43 ` Johannes Weiner
@ 2021-04-27 10:35   ` Yu Zhao
  0 siblings, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-27 10:35 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Linux-MM, Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
	Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
	Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
	Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan, linux-kernel, lkp,
	Kernel Page Reclaim v2

On Wed, Apr 14, 2021 at 11:43 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> Hello Yu,

Hi Johannes,

I appreciate the detailed review. Hopefully I have addressed all your
comments below.

> On Tue, Apr 13, 2021 at 12:56:17AM -0600, Yu Zhao wrote:
> > What's new in v2
> > ================
> > Special thanks to Jens Axboe for reporting a regression in buffered
> > I/O and helping test the fix.
> >
> > This version includes the support of tiers, which represent levels of
> > usage from file descriptors only. Pages accessed N times via file
> > descriptors belong to tier order_base_2(N). Each generation contains
> > at most MAX_NR_TIERS tiers, and they require additional MAX_NR_TIERS-2
> > bits in page->flags. In contrast to moving across generations which
> > requires the lru lock, moving across tiers only involves an atomic
> > operation on page->flags and therefore has a negligible cost. A
> > feedback loop modeled after the well-known PID controller monitors the
> > refault rates across all tiers and decides when to activate pages from
> > which tiers, on the reclaim path.
>
> Could you elaborate a bit more on the difference between generations
> and tiers?
>
> A refault, a page table reference, or a buffered read through a file
> descriptor ultimately all boil down to a memory access. The value of
> having that memory resident and the cost of bringing it in from
> backing storage should be the same regardless of how it's accessed by
> userspace; and whether it's an in-memory reference or a non-resident
> reference should have the same relative impact on the page's age.
>
> With that context, I don't understand why file descriptor refs and
> refaults get such special treatment. Could you shed some light here?
>
> > This feedback model has a few advantages over the current feedforward
> > model:
> > 1) It has a negligible overhead in the buffered I/O access path
> >    because activations are done in the reclaim path.
>
> This is useful if the workload isn't reclaim bound, but it can be
> hazardous to defer work to reclaim, too.
>
> If you go through the git history, there have been several patches to
> soften access recognition inside reclaim because it can come with
> large latencies when page reclaim kicks in after a longer period with
> no memory pressure and doesn't have uptodate reference information -
> to the point where eating a few extra IOs tends to add less latency to
> the workload than waiting for reclaim to refresh its aging data.
>
> Could you elaborate a bit more on the tradeoff here?

=== Tiers ===

I agree with all you said. Let me summarize.

Remark 1: a refault, *a page fault* or a buffered read is exactly one
memory reference. A page table reference, as we count it, i.e., the
accessed bit is set, could represent one or a thousand memory
references. So the accessed bit for a mapped page and PageReferenced()
for an unmapped page may carry different weights.

Remark 2: the cost of bringing a page back, regardless of how it is
referenced, is the same.

Remark 3: not using extra aging information may be preferable, if
obtaining or maintaining such information would cost more.

Starting with remark 3.

For pages referenced multiple times via file descriptors, we currently
activate them in mark_page_accessed(), regardless of memory pressure.
If we defer their activations, we may be penalized for it. But, based
on remark 3, it is still a win if activating them on the spot has a
higher overall cost.

The proposal here is that we do not move them to the active lru list
upon the second reference. Instead, we simply increment a counter in
page->flags, just like SetPageReferenced() without activate_page() in
mark_page_accessed(). For the sake of discussion, let us assume each
possible value of the counter is a tier. Pages read ahead are in tier
0; pages referenced once are in tier 1; pages referenced twice are in
tier 2, etc. Note that we are talking about references via file
descriptors.

Then we record the refaults for each tier, and we compare the refault
rates, i.e., refaulted/evicted, across all tiers in the reclaim path.
For example, if we see tier 2 has a higher refault rate, we activate
pages from this tier. Otherwise, we keep evicting pages from this
tier. This allows us to shift the cost of activations from the
buffered read path to the reclaim path. This is likely to be a win,
and I will explain why at the end of this section.
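
As a sketch (the bit layout and helper below are made up; the patchset
packs the counter into page->flags differently), the per-reference
bookkeeping is nothing more than a saturating increment of a few bits:

  #include <stdatomic.h>

  #define TIER_SHIFT	24
  #define TIER_MASK	(3UL << TIER_SHIFT)	/* a 2-bit usage counter */

  /* saturating increment; the tier is order_base_2() of this counter */
  static void inc_usage(_Atomic unsigned long *flags)
  {
  	unsigned long old = atomic_load(flags);
  	unsigned long new;

  	do {
  		if ((old & TIER_MASK) == TIER_MASK)
  			return;	/* already in the highest tier */
  		new = old + (1UL << TIER_SHIFT);
  	} while (!atomic_compare_exchange_weak(flags, &old, new));
  }

No lru lock is taken; this is the entire cost on the buffered read
path.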

Next let us look at remark 1, and how tiers can help us with the
different weights carried by the accessed bit.

For pages referenced via page tables only, we can assign them a tier,
say tier 0. Then we are able to compare their refault rate with those
referenced multiple times via file descriptors. Even though the
accessed bit carries a different weight, a refault has exactly the
same weight, because of remark 2.

For example, if pages referenced via page tables have a higher refault
rate than pages referenced twice via file descriptors, we will not
activate the latter and therefore would provide better protection to
the former by not flooding the active list. The current implementation
will activate the latter on the spot, which is suboptimal for this
example.

Another example: if we find that pages referenced four times via file
descriptors have a higher refault rate than the rest, we activate only
them. The current implementation activates pages accessed twice and
three times too, and if there are a large number of them, they will
flood the active lru list and weaken the protection of pages accessed
four times.

Now, an additional remark.

Remark 4: tracking references of mapped pages by clearing the accessed
bit is more expensive than tracking references of unmapped pages by
mark_page_accessed().

The creation of a generation begins with scanning page tables (if they
are not too sparse) of each active process to find all referenced
pages since the last scan. So it is expensive.

If we moved a page to the next generation upon the second reference
via file descriptor, old generations would run out of pages sooner and
we would have to create new generations at a faster pace to keep up,
which increases the cost. In addition, moving pages across generations
is also expensive because, at the data structure level, it is the same
as moving pages between the active and the inactive lists, which
requires the lru lock. On the other hand, tiers are lightweight.
Changing tiers within a generation is only an atomic operation on
page->flags.

With the current implementation, randomly reading (buffered io) a
large file, e.g., twice as large as memory, from fast storage for
long enough will demonstrate both problems. In kswapd,
shrink_active_list() costs >6% of CPU. In the buffered read path,
mark_page_accessed() costs >2%. Statistically speaking, pages accessed
multiple times are not more active than pages accessed once, in this
case. Therefore, both functions are wasted work.

Finally, the tradeoff part.

Fundamentally, the idea of tiers is based on a feedback loop, which is
essentially trial and error. So it will perform worse than the current
open-loop control, i.e., activating upon the second reference, if we
know for sure that pages referenced twice need to be protected. IOW,
knowing what is going to happen can avoid the error part of the
feedback loop. But in the realm of page reclaim, I bet we cannot
predict the future, for any workload. Does it make sense?

> > Highlights from the discussions on v1
> > =====================================
> > Thanks to Ying Huang and Dave Hansen for the comments and suggestions
> > on page table scanning.
> >
> > A simple worst-case scenario test did not find page table scanning
> > underperforms the rmap because of the following optimizations:
> > 1) It will not scan page tables from processes that have been sleeping
> >    since the last scan.
> > 2) It will not scan PTE tables under non-leaf PMD entries that do not
> >    have the accessed bit set, when
> >    CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
> > 3) It will not zigzag between the PGD table and the same PMD or PTE
> >    table spanning multiple VMAs. In other words, it finishes all the
> >    VMAs with the range of the same PMD or PTE table before it returns
> >    to the PGD table. This optimizes workloads that have large numbers
> >    of tiny VMAs, especially when CONFIG_PGTABLE_LEVELS=5.
> >
> > TLDR
> > ====
> > The current page reclaim is too expensive in terms of CPU usage and
> > often making poor choices about what to evict. We would like to offer
> > an alternative framework that is performant, versatile and
> > straightforward.
> >
> > Repo
> > ====
> > git fetch https://linux-mm.googlesource.com/page-reclaim refs/changes/73/1173/1
> >
> > Gerrit https://linux-mm-review.googlesource.com/c/page-reclaim/+/1173
> >
> > Background
> > ==========
> > DRAM is a major factor in total cost of ownership, and improving
> > memory overcommit brings a high return on investment.
>
> RAM cost on one hand.
>
> On the other, paging backends have seen a revolutionary explosion in
> iop/s capacity from solid state devices and CPUs that allow in-memory
> compression at scale, so a higher rate of paging (semi-random IO) and
> thus larger levels of overcommit are possible than ever before.
>
> There is a lot of new opportunity here.
>
> > Over the past decade of research and experimentation in memory
> > overcommit, we observed a distinct trend across millions of servers
> > and clients: the size of page cache has been decreasing because of
> > the growing popularity of cloud storage. Nowadays anon pages account
> > for more than 90% of our memory consumption and page cache contains
> > mostly executable pages.
>
> This gives the impression that because the number of setups heavily
> using the page cache has reduced somewhat, its significance is waning
> as well. I don't think that's true. I think we'll continue to have
> mainstream workloads for which the page cache is significant.
>
> Yes, the importance of paging anon memory more efficiently (or paging
> it at all again, for that matter), has increased dramatically. But IMO
> not because it's more prevalent, but rather because of the increase in
> paging capacity from the hardware side. It's not like we've been
> heavily paging filesystem data beyond cold starts either when it was
> more prevalent - workloads quickly fall apart when you do that on
> rotating drives.
>
> So that increase in paging capacity also applies to filesystem data,
> and makes local filesystems an option again where they might have been
> replaced by anonymous blobs managed by a userspace network filesystem.
>
> Take disaggregated storage for example. It's an attractive measure for
> reducing per-host CAPEX when the alternative is a local spindle, whose
> seekiness doesn't make the network distance look so bad, and prevents
> significant memory overcommit anyway. You have to spec the same RAM in
> either case.
>
> The equation is different for flash. You can *significantly* reduce
> RAM needs of even latency-sensitive, interactive workloads with cheap,
> consumer-grade local SSD drives. Disaggregating those drives and
> adding the network to the paging path would directly eat into the much
> higher RAM savings. It's a much less attractive proposition now. And
> that's bringing larger data sets back to local filesystems.
>
> And of course, even in cloud and disaggregated environments, there ARE
> those systems that deal with things like source code trees -
> development machines, build hosts etc. For those, filesystem data
> continues to be the primary workload.
>
> So while I agree with what you say about anon pages, I don't expect
> non-trivial (local) filesystem loads to go away anytime soon. The
> kernel needs to continue treating it as a first-class citizen.
>
> > Problems
> > ========
> > Notion of active/inactive
> > -------------------------
> > For servers equipped with hundreds of gigabytes of memory, the
> > granularity of the active/inactive is too coarse to be useful for job
> > scheduling. False active/inactive rates are relatively high, and thus
> > the assumed savings may not materialize.
>
> The inactive/active naming is certainly confusing for users of the
> system. The kernel uses it to preselect reclaim candidates, it's not
> meant to indicate how much memory capacity is idle and available.
>
> But a confusion around naming doesn't necessarily indicate it's bad at
> what it is actually designed to do.
>
> Fundamentally, LRU ordering is susceptible to a flood of recent pages
> with no reuse pushing out the established frequent pages. The split
> into inactive and active is simply there to address this shortcoming,
> and protect frequent pages from recent ones - where pages that are
> only accessed once get reclaimed before pages used twice or more.
>
> Obviously, 'twice or more' is a coarse category, and it's not hard to
> imagine that it might go wrong. But please, don't leave it up to the
> imagination ;-) It's been in use for two decades or so, it needs a bit
> more in-depth analysis of its limitations to justify replacing it.
>
> > For phones and laptops, executable pages are frequently evicted
> > despite the fact that there are many less recently used anon pages.
> > Major faults on executable pages cause "janks" (slow UI renderings)
> > and negatively impact user experience.
>
> This is not because of the inactive/active scheme but rather because
> of the anon/file split, which has evolved over the years to just not
> swap onto iop-anemic rotational drives.
>
> We ran into the same issue at FB too, where even with painfully
> obvious anon candidates and a fast paging backend the kernel would
> happily thrash on the page cache instead.
>
> There has been significant work in this area recently to address this
> (see commit 5df741963d52506a985b14c4bcd9a25beb9d1981). We've added
> extensive testing and production time onto these patches since and
> have not found the kernel to be thrashing executables or be reluctant
> to go after anonymous pages anymore.
>
> I wonder if your observation takes these recent changes into account?

Again, I agree with all you said above. And I can confirm your series
has generally fixed the problem for the following test case.

When our most common 4GB Chromebook model is zram-ing under memory
pressure, the size of the file lru is
  ~80MB without that series
  ~120MB with that series
  ~140MB with this series

User experience is acceptable as long as the size is above 100MB. For
optimal user experience, the size is 200MB. But we do not expect the
optimal user experience under memory pressure.

> > For lruvecs from different memcgs or nodes, comparisons are impossible
> > due to the lack of a common frame of reference.
>
> My first thought is that this is expected. Workloads running under
> different memory constraints, IO priority levels etc. will not have
> comparable workingsets: an access frequency that is considered high in
> one domain could be considered quite cold in another.
>
> Could you elaborate a bit on the situations where you would want to
> compare, and how this is possible by having more generations?

Will cover this in the discussion of generations.

> > Solutions
> > =========
> > Notion of generation numbers
> > ----------------------------
> > The notion of generation numbers introduces a quantitative approach to
> > memory overcommit. A larger number of pages can be spread out across
> > a configurable number of generations, and each generation includes all
> > pages that have been referenced since the last generation. This
> > improved granularity yields relatively low false active/inactive
> > rates.
> >
> > Given an lruvec, scans of anon and file types and selections between
> > them are all based on direct comparisons of generation numbers, which
> > are simple and yet effective. For different lruvecs, comparisons are
> > still possible based on birth times of generations.
>
> This describes *what* it's doing, but could you elaborate more on how
> to think about generations in relation to workload behavior and what
> you can predict based on how your workload gets bucketed into these?
>
> If we accept that the current two generations are not enough, how many
> should there be instead? Four? Ten?
>
> What determines this? Is it the workload's access pattern? Or the
> memory size?
>
> How do I know whether the number of generations I have chosen is right
> for my setup? How do I detect when the underlying factors changed and
> it no longer is?
>
> How does it manifest if I have too few generations? What about too
> many?
>
> What about systems that host a variety of workloads that come and go?
> Is there a generation number that will be good for any combination of
> workloads on the system as jobs come and go?
>
> For a general purpose OS like Linux, it's nice to be *able* to tune to
> your specific requirements, but it's always bad to *have* to. Whatever
> we end up doing, there needs to be some reasonable default behavior
> that works acceptably for a broad range of workloads out of the box.

=== generations ===

All good questions. Let me start abstractly and give concrete examples
afterward.

Remark 1: the number of generations only naturally grows to three,
unless users artificially create more for the purpose of working set
estimation.

Why three? We add pages mapped upon page faults to the youngest
generation, since we need to age them before we can evict them. After
we scan them once and clear the accessed bit set during the initial
faults, they become the second youngest generation. And we still
cannot evict them because we have not ascertained whether they are
inactive. We can only be sure after the second scan. Thereafter they
become the third youngest generation, if the accessed bit is not set.
The third youngest generation is also the oldest, in this case.

I suppose this is not surprising, as it simply follows the current
implementation. This is also why only the youngest and second youngest
generations are considered active, in order to be compatible with the
active/inactive notion. As long as we have something to evict, we do
not need to create more generations. IOW, we only create a new
generation when we are down to the minimum number of generations,
i.e., two, which is equivalent to being out of inactive pages, when
compared with the current implementation.
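
In other words (a sketch only; MIN_NR_GENS and the exact condition are
assumptions for illustration, not the patchset's code):

  /* create a new generation only when we are down to the minimum */
  static int aging_is_due(unsigned long max_seq, unsigned long min_seq)
  {
  	return max_seq - min_seq + 1 <= 2;	/* i.e., MIN_NR_GENS */
  }

which is equivalent to min_seq reaching max_seq-1, as described in the
cover letter.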

And why do we need generations in this case? It is because they help
answer the question of when we need to scan active pages. We could
reuse inactive_is_low(). But the number of generations seems to be
more deterministic than the magic numbers in inactive_is_low().
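
(For reference, the heuristic I mean is roughly the following,
paraphrased from mm/vmscan.c around v5.11; int_sqrt() is the kernel's
integer square root helper:)

  static bool inactive_is_low(unsigned long inactive, unsigned long active)
  {
  	unsigned long gb = (inactive + active) >> (30 - PAGE_SHIFT);
  	unsigned long inactive_ratio = gb ? int_sqrt(10 * gb) : 1;

  	return inactive * inactive_ratio < active;
  }

The square-root ratio is the magic number I am referring to.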

But do users need to configure the number of generations? The answer
is no. Everything works out of the box, unless they are interested in
the following.
following.

Remark 2: generations provide a temporal dimension; each generation is
a dot on the timeline.

This is designed for large-scale deployments, i.e., data centers that
want to monitor their memory utilization for resource planning and do
fleetwide working set estimation for optimal job scheduling; basically,
it is for users who need a set of stats that they can aggregate.

Aggregating the active/inactive across a fleet of machines yields
nothing interesting. But generations are associated with timestamps,
and if they are artificially created at a steady pace, say every two
minutes, then their aggregation tells a lot. I will cover this more in
the use case section.

This principle also applies to memcgs or nodes, from the same machine
or different ones.

The same type of job can run concurrently on different machines and
each machine has a memcg for this job. To gain some insight into this
type of job, users collect a set of stats from those memcgs, and based
on this set, they want to predict how much memory this type of job
typically requires. In our case, this system is called Autopilot.
Users would not be able to achieve this without a metric system, i.e.,
a common frame of reference, for the stats in this set.

Similarly, if users want to select an optimal node for a job, they
need to compare all nodes, in order to determine which one has the
least amount of active pages.

Remark 3: architecturally, generations glue everything together.

When we scan page tables, we only update the generation number counter
in page->flags, without isolating the page. This is different from
what we have been doing, e.g., in activate_page(). Tiers also rely on
generations, because they need a temporal dimension to sort out
refaults from different generations. Needless to say, refaults from
younger generations are worse than those from older generations, i.e.,
the former have shorter refault distances than the latter. (Refault
distance is a metric we use internally to measure page selection
quality.)

So, generally, it would only be more difficult if we split things up
while trying to retain the same benefits.

> > Differential scans via page tables
> > ----------------------------------
> > Each differential scan discovers all pages that have been referenced
> > since the last scan. Specifically, it walks the mm_struct list
> > associated with an lruvec to scan page tables of processes that have
> > been scheduled since the last scan. The cost of each differential scan
> > is roughly proportional to the number of referenced pages it
> > discovers. Unless address spaces are extremely sparse, page tables
> > usually have better memory locality than the rmap. The end result is
> > generally a significant reduction in CPU usage, for workloads using a
> > large amount of anon memory.
> >
> > Our real-world benchmark that browses popular websites in multiple
> > Chrome tabs demonstrates 51% less CPU usage from kswapd and 52% (full)
> > less PSI on v5.11. With this patchset, kswapd profile looks like:
> >   49.36%  lzo1x_1_do_compress
> >    4.54%  page_vma_mapped_walk
> >    4.45%  memset_erms
> >    3.47%  walk_pte_range
> >    2.88%  zram_bvec_rw
> >
> > In addition, direct reclaim latency is reduced by 22% at 99th
> > percentile and the number of refaults is reduced by 7%. Both metrics
> > are important to phones and laptops as they are correlated to user
> > experience.
>
> This looks very exciting!
>
> However, this seems to be an improvement completely in its own right:
> getting the mapped page access information in a more efficient way.
>
> Is there anything that ties it to the multi-generation LRU that I may
> be missing here? Or could it simply be a drop-in replacement for rmap
> that gives us the CPU savings right away?

Covered in the discussion of generations.

> > Framework
> > =========
> > For each lruvec, evictable pages are divided into multiple
> > generations. The youngest generation number is stored in
> > lruvec->evictable.max_seq for both anon and file types as they are
> > aged on an equal footing. The oldest generation numbers are stored in
> > lruvec->evictable.min_seq[2] separately for anon and file types as
> > clean file pages can be evicted regardless of may_swap or
> > may_writepage. Generation numbers are truncated into
> > order_base_2(MAX_NR_GENS+1) bits in order to fit into page->flags. The
> > sliding window technique is used to prevent truncated generation
> > numbers from overlapping. Each truncated generation number is an inde
> > to lruvec->evictable.lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES].
> > Evictable pages are added to the per-zone lists indexed by max_seq or
> > min_seq[2] (modulo MAX_NR_GENS), depending on whether they are being
> > faulted in.
> >
> > Each generation is then divided into multiple tiers. Tiers represent
> > levels of usage from file descriptors only. Pages accessed N times via
> > file descriptors belong to tier order_base_2(N). In contrast to moving
> > across generations which requires the lru lock, moving across tiers
> > only involves an atomic operation on page->flags and therefore has a
> > lower cost. A feedback loop modeled after the well-known PID
> > controller monitors the refault rates across all tiers and decides
> > when to activate pages from which tiers on the reclaim path.
> >
> > The framework comprises two conceptually independent components: the
> > aging and the eviction, which can be invoked separately from user
> > space.
>
> Why from userspace?

Will cover this in the discussion of use cases.

> > Aging
> > -----
> > The aging produces young generations. Given an lruvec, the aging scans
> > page tables for referenced pages of this lruvec. Upon finding one, the
> > aging updates its generation number to max_seq. After each round of
> > scan, the aging increments max_seq.
> >
> > The aging maintains either a system-wide mm_struct list or per-memcg
> > mm_struct lists and tracks whether an mm_struct is being used or has
> > been used since the last scan. Multiple threads can concurrently work
> > on the same mm_struct list, and each of them will be given a different
> > mm_struct belonging to a process that has been scheduled since the
> > last scan.
> >
> > The aging is due when both of min_seq[2] reaches max_seq-1, assuming
> > both anon and file types are reclaimable.
>
> As per above, this is centered around mapped pages, but it really
> needs to include a detailed answer for unmapped pages, such as page
> cache and shmem/tmpfs data, as well as how sampled page table
> references behave wrt realtime syscall references.

Covered in the discussion of tiers.

> > Eviction
> > --------
> > The eviction consumes old generations. Given an lruvec, the eviction
> > scans the pages on the per-zone lists indexed by either of min_seq[2].
> > It first tries to select a type based on the values of min_seq[2].
> > When anon and file types are both available from the same generation,
> > it selects the one that has a lower refault rate.
> >
> > During a scan, the eviction sorts pages according to their generation
> > numbers, if the aging has found them referenced. It also moves pages
> > from the tiers that have higher refault rates than tier 0 to the next
> > generation.
> >
> > When it finds all the per-zone lists of a selected type are empty, the
> > eviction increments min_seq[2] indexed by this selected type.
> >
> > Use cases
> > =========
> > On Android, our most advanced simulation that generates memory
> > pressure from realistic user behavior shows 18% fewer low-memory
> > kills, which in turn reduces cold starts by 16%.
>
> I assume you refer to pressure-induced lmkd kills rather than
> conventional kernel OOM kills?
>
> I.e. multi-gen LRU does a better job of identifying the workingset,
> rather than giving up too early.
>
> Again, I would be interested if the baseline here includes the recent
> anon/file balancing rework or not.

Yes, lmkd, which is based on PSI.

No, the baseline did not include the rework. I will rerun the
simulation once we have enough devices running 5.10.

BTW, does the rework also improve PSI? If so, the Android team might
be interested in backporting it.

> > On Borg, a similar approach enables us to identify jobs that
> > underutilize their memory and downsize them considerably without
> > compromising any of our service level indicators.
>
> This is doable with the current reclaim implementation as well. At FB
> we drive proactive reclaim through cgroup control, in a feedback loop
> with psi metrics.
>
> Obviously, this would benefit from better workingset identification in
> the kernel, as more memory could be offloaded under the same pressure
> tolerances from the workload, but it's more of an optimization than
> enabling a uniquely new usecase.

=== use case ===

Thanks for sharing this information. Fleetwide efficiency is my
favorite topic! And I like your model -- it is very straightforward.

However, there are a few constraints that prohibit us from adopting it.

Remark 1: for systems with almost all of the pages mapped, proactive
reclaim using the current interface is unaffordable because of the
overhead from the rmap.

For systems with a fair number of unmapped pages, proactive reclaim
can drop some of them at a low cost. But for systems with almost all
of the pages mapped, proactive reclaim needs to walk the rmap to clear
the accessed bit. The following profile demonstrates such an overhead
when we proactively reclaim, to zram, pages that have not been used
for more than two minutes, on a system that has 99% of its pages
mapped (~500GB, moderate pressure):

 41.23%  page_vma_mapped_walk
  6.12%  do_raw_spin_lock
  5.23%  vma_interval_tree_iter_next
  4.23%  vma_interval_tree_subtree_search
  2.97%  page_referenced_one
  2.29%  lzo1x_1_do_compress

Of all the kernel functions we profile, page_vma_mapped_walk()
consumes the most CPU.

Remark 2: for optimal job scheduling, users need to predict whether a
job can land on a machine successfully without actually impacting the
existing jobs.

For example, given a pool of candidates, a job scheduler periodically
calls an aging interface provided by the kernel, in order to estimate
the working set of each candidate. And it ranks the candidates based
on their working sets. Candidates can be individual machines or nodes,
in case this job scheduler is NUMA aware. (Ours is.)
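
As a sketch of what such a probe could look like on the scheduler side
(illustrative only: it simply writes the "+ memcg_id node_id gen
[swappiness]" command of the lru_gen debugfs interface quoted further
down, and the ids here are placeholders):

  #include <stdio.h>

  /* ask the kernel to age one lruvec so its generation sizes refresh */
  static int request_aging(int memcg_id, int node_id, int gen)
  {
  	FILE *f = fopen("/sys/kernel/debug/lru_gen", "w");

  	if (!f)
  		return -1;
  	fprintf(f, "+ %d %d %d\n", memcg_id, node_id, gen);
  	return fclose(f);
  }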

This means that working set estimation and proactive reclaim have to
be separate functions. If we bundle them, this job scheduler would
have to sacrifice the performance of the existing jobs for something
that may or may not come true.

Remark 3: for optimal fleet efficiency, users need to avoid proactive
reclaim unless they plan to use the savings for additional workloads.

Why would users want to proactively reclaim memory if they have no
plan to run additional workloads? The only reason might be that they
are not confident in the ability of page reclaim, i.e., they do
not know whether it will give them what they need quickly enough when
they really need it. I cannot think of any other reason at the moment
:)

> > On Chrome OS, our field telemetry reports 96% fewer low-memory tab
> > discards and 59% fewer OOM kills from fully-utilized devices and no
> > regressions in monitored user experience from underutilized devices.
>
> Again, lkmd rather than kernel oom kills, right? And with or without
> the anon/file rework?

Yes, lmkd.

No, the baseline does not include the rework. But in this case it
should not matter. We have been carrying the following patch, which
protects the file lru from going below a certain threshold. Let me run
an a/b experiment on 5.10, i.e., with/without the patch, to make sure.

https://lore.kernel.org/linux-mm/20101028191523.GA14972@google.com/

> > Working set estimation
> > ----------------------
> > User space can invoke the aging by writing "+ memcg_id node_id gen
> > [swappiness]" to /sys/kernel/debug/lru_gen. This debugfs interface
> > also provides the birth time and the size of each generation.
> >
> > Proactive reclaim
> > -----------------
> > User space can invoke the eviction by writing "- memcg_id node_id gen
> > [swappiness] [nr_to_reclaim]" to /sys/kernel/debug/lru_gen. Multiple
> > command lines are supported, so does concatenation with delimiters.
>
> Can you explain a bit more how these two are supposed to be used?
>
> The memcg id is self-explanatory: Age or evict pages from this
> particular workload.
>
> The node is a bit less intuitive. In most setups, the distance to a
> remote NUMA node is much smaller than the distance to the storage
> backend, and users would prefer finding and evicting the coldest
> memory between multiple nodes, not within individual node.

But storage backends could be something fast, e.g., zram or zswap in
our case. And we prefer to save cold pages in zram or zswap, so when
they become hot, they will be brought back to the same node. If we
migrate them to a different node, we have no way to migrate them back
instantaneously when they become hot.

> Swappiness raises a similar question. Why would the user prefer one
> type of data to be reclaimed over the other? Shouldn't it want to
> reclaim the pages that are least likely to be used again soon?

We also need to consider how applications perceive the delays from an
anonymous page fault and a buffered io read differently. Even though
these two have the same cost, the delay from an anonymous page fault
may hurt applications more. For example, Chrome is aware that buffered
io reads can be blocking, and it delegates the work to io threads,
e.g., non-UI threads, so the delay will not affect user experience.
Does it make sense?

> > FAQ
> > ===
> > Why not try to improve the existing code?
> > -----------------------------------------
> > We have tried but concluded the aforementioned problems are
> > fundamental, and therefore changes made on top of them will not result
> > in substantial gains.
>
> Realistically, I think incremental changes are unavoidable to get this
> merged upstream.
>
> Not just in the sense that they need to be smaller changes, but also
> in the sense that they need to replace old code. It would be
> impossible to maintain both, focus development and testing resources,
> and provide a reasonably stable experience with both systems tugging
> at a complicated shared code base.
>
> On the other hand, the existing code also has billions of hours of
> production testing and tuning. We can't throw this all out overnight -
> it needs to be surgical and the broader consequences of each step need
> to be well understood.
>
> We also have millions of servers relying on being able to do upgrades
> for drivers and fixes in other subsystems that we can't put on hold
> until we stabilized a new reclaim implementation from scratch.
>
> The good thing is that swap really hasn't been used much
> recently. There definitely is room to maneuver without being too
> disruptive. There *are* swap configurations today, but for the most
> part, users don't expect the kernel to swap until the machine is under
> heavy pressure. Few have expectations of it doing a nuanced and
> efficient memory offloading job under nominal loads. So the anon side
> could well be a testbed for the multigen LRU that has a more
> reasonable blast radius than doing everything at once.
>
> And if the rmap replacement for mapped pages could be split out as a
> CPU optimzation for getting MMU info, without changing how those are
> interpreted in the same step, I think we'd get into a more manageable
> territory with this proposal.

Yeah, I hear you loud and clear. We are not really writing off any
options here, just weighing them in terms of opportunity cost. The
engineering effort is one of the major factors, but the performance
gain and the lead time are also very important to us.

IMO, it would be hard to make substantial progress if we just float
ideas around. We could use something concrete to keep the discussion
going. I am not saying this patchset should be the storyline. But at
least it can serve as the springboard, hopefully launching us to a
middle ground. Does it sound reasonable?

Again, thanks for the detailed review. You have made some excellent
points. I think I have made some good ones too. Hopefully you
would agree. In any case, feel free to let me know.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-13  6:56 [PATCH v2 00/16] Multigenerational LRU Framework Yu Zhao
                   ` (17 preceding siblings ...)
  2021-04-14 17:43 ` Johannes Weiner
@ 2021-04-29 23:46 ` Konstantin Kharlamov
  2021-04-30  6:37   ` Konstantin Kharlamov
  18 siblings, 1 reply; 57+ messages in thread
From: Konstantin Kharlamov @ 2021-04-29 23:46 UTC (permalink / raw)
  To: Yu Zhao, linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim

In case you need it yet, this series is:

Tested-by: Konstantin Kharlamov <Hi-Angel@yandex.ru>

My success story: I have Archlinux with 8G RAM + zswap + swap. While developing,
I have lots of apps open, such as multiple LSP-servers for different langs,
chats, two browsers, etc… Usually, my system quickly gets to the point of SWAP-
storms, where I have to kill LSP-servers, restart browsers to free memory, etc.,
otherwise the system lags heavily and is barely usable.

1.5 days ago I migrated from the 5.11.15 kernel to 5.12 + the LRU patchset, and I
started by opening lots of apps to create memory pressure, and have worked like
this for a day. So far I have had *not a single SWAP-storm*, and mind you, I have
3.4G in SWAP. Before, I never got to 3G in SWAP without hitting a SWAP-storm.

Right now my gf on Fedora 33 also suffers from SWAP-storms on her old Macbook
2013 with 4G RAM + zswap + swap; I think next week I'll build 5.12 + the LRU
patchset for her as well. We'll see how it goes; I expect it will improve her
experience by a lot too.

P.S.: upon replying please keep me CCed, I'm not subscribed to the list

On Tue, 2021-04-13 at 00:56 -0600, Yu Zhao wrote:
> What's new in v2
> ================
> Special thanks to Jens Axboe for reporting a regression in buffered
> I/O and helping test the fix.
> 
> This version includes the support of tiers, which represent levels of
> usage from file descriptors only. Pages accessed N times via file
> descriptors belong to tier order_base_2(N). Each generation contains
> at most MAX_NR_TIERS tiers, and they require additional MAX_NR_TIERS-2
> bits in page->flags. In contrast to moving across generations which
> requires the lru lock, moving across tiers only involves an atomic
> operation on page->flags and therefore has a negligible cost. A
> feedback loop modeled after the well-known PID controller monitors the
> refault rates across all tiers and decides when to activate pages from
> which tiers, on the reclaim path.
> 
> This feedback model has a few advantages over the current feedforward
> model:
> 1) It has a negligible overhead in the buffered I/O access path
>    because activations are done in the reclaim path.
> 2) It takes mapped pages into account and avoids overprotecting pages
>    accessed multiple times via file descriptors.
> 3) More tiers offer better protection to pages accessed more than
>    twice when buffered-I/O-intensive workloads are under memory
>    pressure.
> 
> The fio/io_uring benchmark shows 14% improvement in IOPS when randomly
> accessing Samsung PM981a in the buffered I/O mode.
> 
> Highlights from the discussions on v1
> =====================================
> Thanks to Ying Huang and Dave Hansen for the comments and suggestions
> on page table scanning.
> 
> A simple worst-case scenario test did not find page table scanning
> underperforms the rmap because of the following optimizations:
> 1) It will not scan page tables from processes that have been sleeping
>    since the last scan.
> 2) It will not scan PTE tables under non-leaf PMD entries that do not
>    have the accessed bit set, when
>    CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG=y.
> 3) It will not zigzag between the PGD table and the same PMD or PTE
>    table spanning multiple VMAs. In other words, it finishes all the
>    VMAs with the range of the same PMD or PTE table before it returns
>    to the PGD table. This optimizes workloads that have large numbers
>    of tiny VMAs, especially when CONFIG_PGTABLE_LEVELS=5.
> 
> TLDR
> ====
> The current page reclaim is too expensive in terms of CPU usage and
> often making poor choices about what to evict. We would like to offer
> an alternative framework that is performant, versatile and
> straightforward.
> 
> Repo
> ====
> git fetch https://linux-mm.googlesource.com/page-reclaim refs/changes/73/1173/1
> 
> Gerrit https://linux-mm-review.googlesource.com/c/page-reclaim/+/1173
> 
> Background
> ==========
> DRAM is a major factor in total cost of ownership, and improving
> memory overcommit brings a high return on investment. Over the past
> decade of research and experimentation in memory overcommit, we
> observed a distinct trend across millions of servers and clients: the
> size of page cache has been decreasing because of the growing
> popularity of cloud storage. Nowadays anon pages account for more than
> 90% of our memory consumption and page cache contains mostly
> executable pages.
> 
> Problems
> ========
> Notion of active/inactive
> -------------------------
> For servers equipped with hundreds of gigabytes of memory, the
> granularity of the active/inactive is too coarse to be useful for job
> scheduling. False active/inactive rates are relatively high, and thus
> the assumed savings may not materialize.
> 
> For phones and laptops, executable pages are frequently evicted
> despite the fact that there are many less recently used anon pages.
> Major faults on executable pages cause "janks" (slow UI renderings)
> and negatively impact user experience.
> 
> For lruvecs from different memcgs or nodes, comparisons are impossible
> due to the lack of a common frame of reference.
> 
> Incremental scans via rmap
> --------------------------
> Each incremental scan picks up at where the last scan left off and
> stops after it has found a handful of unreferenced pages. For
> workloads using a large amount of anon memory, incremental scans lose
> the advantage under sustained memory pressure due to high ratios of
> the number of scanned pages to the number of reclaimed pages. In our
> case, the average ratio of pgscan to pgsteal is above 7.
> 
> On top of that, the rmap has poor memory locality due to its complex
> data structures. The combined effects typically result in a high
> amount of CPU usage in the reclaim path. For example, with zram, a
> typical kswapd profile on v5.11 looks like:
>   31.03%  page_vma_mapped_walk
>   25.59%  lzo1x_1_do_compress
>    4.63%  do_raw_spin_lock
>    3.89%  vma_interval_tree_iter_next
>    3.33%  vma_interval_tree_subtree_search
> 
> And with real swap, it looks like:
>   45.16%  page_vma_mapped_walk
>    7.61%  do_raw_spin_lock
>    5.69%  vma_interval_tree_iter_next
>    4.91%  vma_interval_tree_subtree_search
>    3.71%  page_referenced_one
> 
> Solutions
> =========
> Notion of generation numbers
> ----------------------------
> The notion of generation numbers introduces a quantitative approach to
> memory overcommit. A larger number of pages can be spread out across
> a configurable number of generations, and each generation includes all
> pages that have been referenced since the last generation. This
> improved granularity yields relatively low false active/inactive
> rates.
> 
> Given an lruvec, scans of anon and file types and selections between
> them are all based on direct comparisons of generation numbers, which
> are simple and yet effective. For different lruvecs, comparisons are
> still possible based on birth times of generations.
> 
> Differential scans via page tables
> ----------------------------------
> Each differential scan discovers all pages that have been referenced
> since the last scan. Specifically, it walks the mm_struct list
> associated with an lruvec to scan page tables of processes that have
> been scheduled since the last scan. The cost of each differential scan
> is roughly proportional to the number of referenced pages it
> discovers. Unless address spaces are extremely sparse, page tables
> usually have better memory locality than the rmap. The end result is
> generally a significant reduction in CPU usage, for workloads using a
> large amount of anon memory.
> 
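> As a rough sketch of the control flow just described (self-contained
> pseudocode; the struct and helper names are hypothetical stand-ins,
> not the patchset's API):
>
>   #include <stdbool.h>
>
>   /* Hypothetical, simplified model of one differential scan. */
>   struct mm_model {
>           bool scheduled_since_last_scan;
>           struct mm_model *next;
>   };
>
>   static void differential_scan(struct mm_model *mm_list,
>                                 void (*scan_fn)(struct mm_model *))
>   {
>           for (struct mm_model *mm = mm_list; mm; mm = mm->next) {
>                   /* skip mm_structs that slept since the last scan */
>                   if (!mm->scheduled_since_last_scan)
>                           continue;
>                   /* cost ~ the number of referenced pages it finds */
>                   scan_fn(mm);
>                   mm->scheduled_since_last_scan = false;
>           }
>   }
>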
> Our real-world benchmark that browses popular websites in multiple
> Chrome tabs demonstrates 51% less CPU usage from kswapd and 52% (full)
> less PSI on v5.11. With this patchset, kswapd profile looks like:
>   49.36%  lzo1x_1_do_compress
>    4.54%  page_vma_mapped_walk
>    4.45%  memset_erms
>    3.47%  walk_pte_range
>    2.88%  zram_bvec_rw
> 
> In addition, direct reclaim latency is reduced by 22% at 99th
> percentile and the number of refaults is reduced by 7%. Both metrics
> are important to phones and laptops as they are correlated to user
> experience.
> 
> Framework
> =========
> For each lruvec, evictable pages are divided into multiple
> generations. The youngest generation number is stored in
> lruvec->evictable.max_seq for both anon and file types as they are
> aged on an equal footing. The oldest generation numbers are stored in
> lruvec->evictable.min_seq[2] separately for anon and file types as
> clean file pages can be evicted regardless of may_swap or
> may_writepage. Generation numbers are truncated into
> order_base_2(MAX_NR_GENS+1) bits in order to fit into page->flags. The
> sliding window technique is used to prevent truncated generation
> numbers from overlapping. Each truncated generation number is an index
> into lruvec->evictable.lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES].
> Evictable pages are added to the per-zone lists indexed by max_seq or
> min_seq[2] (modulo MAX_NR_GENS), depending on whether they are being
> faulted in.
> 
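> A simplified, self-contained model of this layout (the field and
> constant names follow the description above; the constant values are
> placeholders, not the patchset's):
>
>   #define MAX_NR_GENS     4       /* placeholder value */
>   #define ANON_AND_FILE   2       /* anon and file */
>   #define MAX_NR_ZONES    4       /* placeholder value */
>
>   struct list_head { struct list_head *prev, *next; }; /* stand-in */
>
>   struct lrugen_model {
>           unsigned long max_seq;                  /* youngest, both types */
>           unsigned long min_seq[ANON_AND_FILE];   /* oldest, per type */
>           /* per-generation, per-type, per-zone lists of pages */
>           struct list_head lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
>   };
>
>   /* Truncated generation number: the value kept in page->flags and used
>    * as a list index; the sliding window keeps at most MAX_NR_GENS live
>    * generations so the truncated values in use never collide. */
>   static unsigned long gen_index(unsigned long seq)
>   {
>           return seq % MAX_NR_GENS;
>   }
>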
> Each generation is then divided into multiple tiers. Tiers represent
> levels of usage from file descriptors only. Pages accessed N times via
> file descriptors belong to tier order_base_2(N). In contrast to moving
> across generations which requires the lru lock, moving across tiers
> only involves an atomic operation on page->flags and therefore has a
> lower cost. A feedback loop modeled after the well-known PID
> controller monitors the refault rates across all tiers and decides
> when to activate pages from which tiers on the reclaim path.
> 
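> For illustration, a minimal sketch of the tier arithmetic and of one
> simplified reading of the refault-rate comparison (order_base_2() is
> reimplemented here for self-containment; the real decision is made by
> the PID-modeled feedback loop, not by a helper like this):
>
>   /* order_base_2(n): the smallest order such that 2^order >= n */
>   static unsigned int order_base_2(unsigned long n)
>   {
>           unsigned int order = 0;
>
>           while ((1UL << order) < n)
>                   order++;
>           return order;
>   }
>
>   /* Pages accessed N times via file descriptors belong to this tier. */
>   static unsigned int tier_of(unsigned long nr_accesses)
>   {
>           return order_base_2(nr_accesses);
>   }
>
>   /* Simplified: a tier refaulting more often than tier 0 has its pages
>    * moved to the next generation instead of being evicted; the caller
>    * must ensure both eviction counts are nonzero. */
>   static int hotter_than_tier0(unsigned long refaulted,
>                                unsigned long evicted,
>                                unsigned long refaulted_t0,
>                                unsigned long evicted_t0)
>   {
>           /* refaulted/evicted > refaulted_t0/evicted_t0, no division */
>           return refaulted * evicted_t0 > refaulted_t0 * evicted;
>   }
>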
> The framework comprises two conceptually independent components: the
> aging and the eviction, which can be invoked separately from user
> space.
> 
> Aging
> -----
> The aging produces young generations. Given an lruvec, the aging scans
> page tables for referenced pages of this lruvec. Upon finding one, the
> aging updates its generation number to max_seq. After each round of
> scanning, the aging increments max_seq.
> 
> The aging maintains either a system-wide mm_struct list or per-memcg
> mm_struct lists and tracks whether an mm_struct is being used or has
> been used since the last scan. Multiple threads can concurrently work
> on the same mm_struct list, and each of them will be given a different
> mm_struct belonging to a process that has been scheduled since the
> last scan.
> 
> The aging is due when both elements of min_seq[2] reach max_seq-1,
> assuming both anon and file types are reclaimable.
> 
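> The "aging is due" condition can be written down directly (a
> self-contained restatement of the rule above; the enum names are
> hypothetical):
>
>   #include <stdbool.h>
>
>   enum { LRU_ANON, LRU_FILE, ANON_AND_FILE }; /* hypothetical names */
>
>   /* The aging is due once both oldest generations have caught up to
>    * max_seq - 1, i.e. the eviction has nothing older left to consume. */
>   static bool aging_is_due(unsigned long max_seq,
>                            const unsigned long min_seq[ANON_AND_FILE])
>   {
>           return min_seq[LRU_ANON] + 1 == max_seq &&
>                  min_seq[LRU_FILE] + 1 == max_seq;
>   }
>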
> Eviction
> --------
> The eviction consumes old generations. Given an lruvec, the eviction
> scans the pages on the per-zone lists indexed by either of min_seq[2].
> It first tries to select a type based on the values of min_seq[2].
> When anon and file types are both available from the same generation,
> it selects the one that has a lower refault rate.
> 
> During a scan, the eviction sorts pages according to their generation
> numbers, if the aging has found them referenced. It also moves pages
> from the tiers that have higher refault rates than tier 0 to the next
> generation.
> 
> When it finds all the per-zone lists of a selected type are empty, the
> eviction increments min_seq[2] indexed by this selected type.
> 
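> Put as a sketch (self-contained pseudocode; the selection rule below
> is one simplified reading of the description above, and the names are
> hypothetical, not the actual implementation):
>
>   enum lru_type { TYPE_ANON, TYPE_FILE, NR_TYPES }; /* hypothetical */
>
>   /* Pick the type to evict from: prefer the one whose oldest generation
>    * is older; when the oldest generations are the same, prefer the type
>    * with the lower refault rate. */
>   static enum lru_type select_type(const unsigned long *min_seq,
>                                    const unsigned long *refault_rate)
>   {
>           if (min_seq[TYPE_ANON] != min_seq[TYPE_FILE])
>                   return min_seq[TYPE_ANON] < min_seq[TYPE_FILE] ?
>                          TYPE_ANON : TYPE_FILE;
>           return refault_rate[TYPE_ANON] <= refault_rate[TYPE_FILE] ?
>                  TYPE_ANON : TYPE_FILE;
>   }
>
>   /* Once every per-zone list of the selected type's oldest generation
>    * is empty, that type's min_seq moves forward. */
>   static void advance_min_seq(unsigned long *min_seq, enum lru_type type)
>   {
>           min_seq[type]++;
>   }
>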
> Use cases
> =========
> On Android, our most advanced simulation that generates memory
> pressure from realistic user behavior shows 18% fewer low-memory
> kills, which in turn reduces cold starts by 16%.
> 
> On Borg, a similar approach enables us to identify jobs that
> underutilize their memory and downsize them considerably without
> compromising any of our service level indicators.
> 
> On Chrome OS, our field telemetry reports 96% fewer low-memory tab
> discards and 59% fewer OOM kills from fully-utilized devices and no
> regressions in monitored user experience from underutilized devices.
> 
> Working set estimation
> ----------------------
> User space can invoke the aging by writing "+ memcg_id node_id gen
> [swappiness]" to /sys/kernel/debug/lru_gen. This debugfs interface
> also provides the birth time and the size of each generation.
> 
> Proactive reclaim
> -----------------
> User space can invoke the eviction by writing "- memcg_id node_id gen
> [swappiness] [nr_to_reclaim]" to /sys/kernel/debug/lru_gen. Multiple
> command lines are supported, as is concatenation with delimiters.
> 
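> For example, a small userspace helper could drive both interfaces (a
> sketch only; the memcg_id/node_id/gen/swappiness/nr_to_reclaim values
> below are placeholders, and error handling is minimal):
>
>   #include <fcntl.h>
>   #include <stdio.h>
>   #include <string.h>
>   #include <unistd.h>
>
>   int main(void)
>   {
>           /* aging: "+ memcg_id node_id gen [swappiness]" */
>           const char *age = "+ 1 0 100\n";
>           /* eviction: "- memcg_id node_id gen swappiness nr_to_reclaim" */
>           const char *evict = "- 1 0 98 200 1024\n";
>           int fd = open("/sys/kernel/debug/lru_gen", O_WRONLY);
>
>           if (fd < 0) {
>                   perror("open");
>                   return 1;
>           }
>           if (write(fd, age, strlen(age)) < 0)
>                   perror("write aging");
>           if (write(fd, evict, strlen(evict)) < 0)
>                   perror("write eviction");
>           close(fd);
>           return 0;
>   }
>
> Since multiple command lines and concatenation with delimiters are
> supported, the two strings above could also be joined and written with
> a single write().
>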
> Intensive buffered I/O
> ----------------------
> Tiers are specifically designed to improve the performance of
> intensive buffered I/O under memory pressure. The fio/io_uring
> benchmark shows 14% improvement in IOPS when randomly accessing
> Samsung PM981a in buffered I/O mode.
> 
> For far memory tiering and NUMA-aware job scheduling, please refer to
> the reference section.
> 
> FAQ
> ===
> Why not try to improve the existing code?
> -----------------------------------------
> We have tried but concluded the aforementioned problems are
> fundamental, and therefore changes made on top of them will not result
> in substantial gains.
> 
> What particular workloads does it help?
> ---------------------------------------
> This framework is designed to improve the performance of the page
> reclaim under all types of workloads.
> 
> How would it benefit the community?
> -----------------------------------
> Google is committed to promoting sustainable development of the
> community. We hope successful adoptions of this framework will
> steadily climb over time. To that end, we would be happy to learn about
> your workloads and work with you case by case, and we will do our best to
> keep the repo fully maintained. For those whose workloads rely on the
> existing code, we will make sure you will not be affected in any way.
> 
> References
> ==========
> 1. Long-term SLOs for reclaimed cloud computing resources
>    https://research.google/pubs/pub43017/
> 2. Profiling a warehouse-scale computer
>    https://research.google/pubs/pub44271/
> 3. Evaluation of NUMA-Aware Scheduling in Warehouse-Scale Clusters
>    https://research.google/pubs/pub48329/
> 4. Software-defined far memory in warehouse-scale computers
>    https://research.google/pubs/pub48551/
> 5. Borg: the Next Generation
>    https://research.google/pubs/pub49065/
> 
> Yu Zhao (16):
>   include/linux/memcontrol.h: do not warn in page_memcg_rcu() if
>     !CONFIG_MEMCG
>   include/linux/nodemask.h: define next_memory_node() if !CONFIG_NUMA
>   include/linux/huge_mm.h: define is_huge_zero_pmd() if
>     !CONFIG_TRANSPARENT_HUGEPAGE
>   include/linux/cgroup.h: export cgroup_mutex
>   mm/swap.c: export activate_page()
>   mm, x86: support the access bit on non-leaf PMD entries
>   mm/vmscan.c: refactor shrink_node()
>   mm: multigenerational lru: groundwork
>   mm: multigenerational lru: activation
>   mm: multigenerational lru: mm_struct list
>   mm: multigenerational lru: aging
>   mm: multigenerational lru: eviction
>   mm: multigenerational lru: page reclaim
>   mm: multigenerational lru: user interface
>   mm: multigenerational lru: Kconfig
>   mm: multigenerational lru: documentation
> 
>  Documentation/vm/index.rst        |    1 +
>  Documentation/vm/multigen_lru.rst |  192 +++
>  arch/Kconfig                      |    9 +
>  arch/x86/Kconfig                  |    1 +
>  arch/x86/include/asm/pgtable.h    |    2 +-
>  arch/x86/mm/pgtable.c             |    5 +-
>  fs/exec.c                         |    2 +
>  fs/fuse/dev.c                     |    3 +-
>  fs/proc/task_mmu.c                |    3 +-
>  include/linux/cgroup.h            |   15 +-
>  include/linux/huge_mm.h           |    5 +
>  include/linux/memcontrol.h        |    7 +-
>  include/linux/mm.h                |    2 +
>  include/linux/mm_inline.h         |  294 ++++
>  include/linux/mm_types.h          |  117 ++
>  include/linux/mmzone.h            |  118 +-
>  include/linux/nodemask.h          |    1 +
>  include/linux/page-flags-layout.h |   20 +-
>  include/linux/page-flags.h        |    4 +-
>  include/linux/pgtable.h           |    4 +-
>  include/linux/swap.h              |    5 +-
>  kernel/bounds.c                   |    6 +
>  kernel/events/uprobes.c           |    2 +-
>  kernel/exit.c                     |    1 +
>  kernel/fork.c                     |   10 +
>  kernel/kthread.c                  |    1 +
>  kernel/sched/core.c               |    2 +
>  mm/Kconfig                        |   55 +
>  mm/huge_memory.c                  |    5 +-
>  mm/khugepaged.c                   |    2 +-
>  mm/memcontrol.c                   |   28 +
>  mm/memory.c                       |   14 +-
>  mm/migrate.c                      |    2 +-
>  mm/mm_init.c                      |   16 +-
>  mm/mmzone.c                       |    2 +
>  mm/rmap.c                         |    6 +
>  mm/swap.c                         |   54 +-
>  mm/swapfile.c                     |    6 +-
>  mm/userfaultfd.c                  |    2 +-
>  mm/vmscan.c                       | 2580 ++++++++++++++++++++++++++++-
>  mm/workingset.c                   |  179 +-
>  41 files changed, 3603 insertions(+), 180 deletions(-)
>  create mode 100644 Documentation/vm/multigen_lru.rst
> 



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-29 23:46 ` Konstantin Kharlamov
@ 2021-04-30  6:37   ` Konstantin Kharlamov
  2021-04-30 19:31     ` Yu Zhao
  0 siblings, 1 reply; 57+ messages in thread
From: Konstantin Kharlamov @ 2021-04-30  6:37 UTC (permalink / raw)
  To: Yu Zhao, linux-mm
  Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, page-reclaim

Btw, I noticed a fun thing, an improvement. I don't know yet if it can be
attributed to 5.12 (which I haven't tried on its own yet) or to the LRU
patchset, but I'd assume the latter, because 5.12 doesn't seem to have had
anything interesting regarding memory performance¹.

I usually have Skype running in the background for work purposes, and it is
only used 2-3 times a week. So one would expect it to be one of the first
victims of memory reclaim. Unfortunately, I had never seen this actually
happen (till now, that is): all skypeforlinux processes routinely have 0
bytes in SWAP, and the only circumstance under which its processes can get
into SWAP is after experiencing many SWAP-storms. It was so hard for the
kernel to move these unused processes to SWAP that at some point I even
tried to research whether there are any odd flags userspace may have set on
a process to keep it in RAM, just in case that's what happens to Skype
(A: no, that wasn't the case; running Skype in a memory-limited cgroup makes
it swap. It's just that the kernel's decisions were lacking for some reason).

So, anyway, I am delighted to see that while testing this patchset, and
without encountering even a single SWAP-storm yet, the skypeforlinux
processes are among those residing in SWAP!!

     λ smem -kc "name user pid pss swap" | grep skype    
    skypeforlinux            constantine  1151    60.0K     7.5M 
    skypeforlinux            constantine  1215   195.0K     8.1M 
    skypeforlinux            constantine  1149   706.0K     7.5M 
    skypeforlinux            constantine  1148   743.0K     7.3M 
    skypeforlinux            constantine  1307     1.4M     8.0M 
    skypeforlinux            constantine  1213     2.1M    46.1M 
    skypeforlinux            constantine  1206    14.0M    10.8M 
    skypeforlinux            constantine   818    38.5M    34.3M 
    skypeforlinux            constantine  1242   103.2M    46.8M 

!!!

1: https://kernelnewbies.org/Linux_5.12#Memory_management

On Fri, 2021-04-30 at 02:46 +0300, Konstantin Kharlamov wrote:
> In case you need it yet, this series is:
> 
> Tested-by: Konstantin Kharlamov <Hi-Angel@yandex.ru>
> 
> My success story: I have Archlinux with 8G RAM + zswap + swap. While developing,
> I have lots of apps opened such as multiple LSP-servers for different langs,
> chats, two browsers, etc… Usually, my system gets quickly to a point of SWAP-
> storms, where I have to kill LSP-servers, restart browsers to free memory, etc,
> otherwise the system lags heavily and is barely usable.
> 
> 1.5 days ago I migrated from the 5.11.15 kernel to 5.12 + the LRU patchset, and I
> started up by opening lots of apps to create memory pressure, and worked for a
> day like this. Till now I had *not a single SWAP-storm*, and mind you I got 3.4G
> in SWAP. I was never getting to the point of 3G in SWAP before without a single
> SWAP-storm.
> 
> Right now my gf on Fedora 33 also suffers from SWAP-storms on her old Macbook
> 2013 with 4G RAM + zswap + swap, I think the next week I'll build for her 5.12 +
> LRU patchset as well. Will see how it goes, I expect it will improve her
> experience by a lot too.
> 
> P.S.: upon replying please keep me CCed, I'm not subscribed to the list


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH v2 00/16] Multigenerational LRU Framework
  2021-04-30  6:37   ` Konstantin Kharlamov
@ 2021-04-30 19:31     ` Yu Zhao
  0 siblings, 0 replies; 57+ messages in thread
From: Yu Zhao @ 2021-04-30 19:31 UTC (permalink / raw)
  To: Konstantin Kharlamov
  Cc: Linux-MM, Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes,
	Dave Chinner, Dave Hansen, Hillf Danton, Jens Axboe,
	Johannes Weiner, Jonathan Corbet, Joonsoo Kim, Matthew Wilcox,
	Mel Gorman, Miaohe Lin, Michael Larabel, Michal Hocko,
	Michel Lespinasse, Rik van Riel, Roman Gushchin, Rong Chen,
	SeongJae Park, Tim Chen, Vlastimil Babka, Yang Shi, Ying Huang,
	Zi Yan, linux-kernel, lkp, Kernel Page Reclaim v2

On Fri, Apr 30, 2021 at 12:38 AM Konstantin Kharlamov
<hi-angel@yandex.ru> wrote:
>
> Btw, I noticed a fun thing, an improvement. I don't know yet if it can be
> attributed to 5.12 (which I haven't tried on its own yet) or to the LRU
> patchset, but I'd assume the latter, because 5.12 doesn't seem to have had
> anything interesting regarding memory performance¹.

I appreciate the testing and the report. They mean a lot to us.

This improvement is to be expected, and it works both ways. There are
cases where swapping is not a good idea, for example, when building
large repos. Without this patchset, some of my browser memory usually
gets swapped out while tons of memory is used to cache files I don't
really care about.

I completely agree with you on the memory cgroup part: theoretically
it could work around the problem but nobody knows how much memory to
reserve for Skype or Firefox :)

I will keep you posted on the following developments.

Thanks!

> I usually have Skype running in the background for work purposes, and it is
> only used 2-3 times a week. So one would expect it to be one of the first
> victims of memory reclaim. Unfortunately, I had never seen this actually
> happen (till now, that is): all skypeforlinux processes routinely have 0
> bytes in SWAP, and the only circumstance under which its processes can get
> into SWAP is after experiencing many SWAP-storms. It was so hard for the
> kernel to move these unused processes to SWAP that at some point I even
> tried to research whether there are any odd flags userspace may have set on
> a process to keep it in RAM, just in case that's what happens to Skype
> (A: no, that wasn't the case; running Skype in a memory-limited cgroup makes
> it swap. It's just that the kernel's decisions were lacking for some reason).
>
> So, anyway, I am delighted to see that while testing this patchset, and
> without encountering even a single SWAP-storm yet, the skypeforlinux
> processes are among those residing in SWAP!!
>
>      λ smem -kc "name user pid pss swap" | grep skype
>     skypeforlinux            constantine  1151    60.0K     7.5M
>     skypeforlinux            constantine  1215   195.0K     8.1M
>     skypeforlinux            constantine  1149   706.0K     7.5M
>     skypeforlinux            constantine  1148   743.0K     7.3M
>     skypeforlinux            constantine  1307     1.4M     8.0M
>     skypeforlinux            constantine  1213     2.1M    46.1M
>     skypeforlinux            constantine  1206    14.0M    10.8M
>     skypeforlinux            constantine   818    38.5M    34.3M
>     skypeforlinux            constantine  1242   103.2M    46.8M
>
> !!!
>
> 1: https://kernelnewbies.org/Linux_5.12#Memory_management
>
> On Fri, 2021-04-30 at 02:46 +0300, Konstantin Kharlamov wrote:
> > In case you need it yet, this series is:
> >
> > Tested-by: Konstantin Kharlamov <Hi-Angel@yandex.ru>
> >
> > My success story: I have Archlinux with 8G RAM + zswap + swap. While developing,
> > I have lots of apps opened such as multiple LSP-servers for different langs,
> > chats, two browsers, etc… Usually, my system gets quickly to a point of SWAP-
> > storms, where I have to kill LSP-servers, restart browsers to free memory, etc,
> > otherwise the system lags heavily and is barely usable.
> >
> > 1.5 days ago I migrated from the 5.11.15 kernel to 5.12 + the LRU patchset, and I
> > started up by opening lots of apps to create memory pressure, and worked for a
> > day like this. Till now I had *not a single SWAP-storm*, and mind you I got 3.4G
> > in SWAP. I was never getting to the point of 3G in SWAP before without a single
> > SWAP-storm.
> >
> > Right now my gf on Fedora 33 also suffers from SWAP-storms on her old Macbook
> > 2013 with 4G RAM + zswap + swap, I think the next week I'll build for her 5.12 +
> > LRU patchset as well. Will see how it goes, I expect it will improve her
> > experience by a lot too.
> >
> > P.S.: upon replying please keep me CCed, I'm not subscribed to the list
>

^ permalink raw reply	[flat|nested] 57+ messages in thread

end of thread, other threads:[~2021-04-30 19:32 UTC | newest]

Thread overview: 57+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-13  6:56 [PATCH v2 00/16] Multigenerational LRU Framework Yu Zhao
2021-04-13  6:56 ` [PATCH v2 01/16] include/linux/memcontrol.h: do not warn in page_memcg_rcu() if !CONFIG_MEMCG Yu Zhao
2021-04-13  6:56 ` [PATCH v2 02/16] include/linux/nodemask.h: define next_memory_node() if !CONFIG_NUMA Yu Zhao
2021-04-13  6:56 ` [PATCH v2 03/16] include/linux/huge_mm.h: define is_huge_zero_pmd() if !CONFIG_TRANSPARENT_HUGEPAGE Yu Zhao
2021-04-13  6:56 ` [PATCH v2 04/16] include/linux/cgroup.h: export cgroup_mutex Yu Zhao
2021-04-13  6:56 ` [PATCH v2 05/16] mm/swap.c: export activate_page() Yu Zhao
2021-04-13  6:56 ` [PATCH v2 06/16] mm, x86: support the access bit on non-leaf PMD entries Yu Zhao
2021-04-13  6:56 ` [PATCH v2 07/16] mm/vmscan.c: refactor shrink_node() Yu Zhao
2021-04-13  6:56 ` [PATCH v2 08/16] mm: multigenerational lru: groundwork Yu Zhao
2021-04-13  6:56 ` [PATCH v2 09/16] mm: multigenerational lru: activation Yu Zhao
2021-04-13  6:56 ` [PATCH v2 10/16] mm: multigenerational lru: mm_struct list Yu Zhao
2021-04-14 14:36   ` Matthew Wilcox
2021-04-13  6:56 ` [PATCH v2 11/16] mm: multigenerational lru: aging Yu Zhao
2021-04-13  6:56 ` [PATCH v2 12/16] mm: multigenerational lru: eviction Yu Zhao
2021-04-13  6:56 ` [PATCH v2 13/16] mm: multigenerational lru: page reclaim Yu Zhao
2021-04-13  6:56 ` [PATCH v2 14/16] mm: multigenerational lru: user interface Yu Zhao
2021-04-13  6:56 ` [PATCH v2 15/16] mm: multigenerational lru: Kconfig Yu Zhao
2021-04-13  6:56 ` [PATCH v2 16/16] mm: multigenerational lru: documentation Yu Zhao
2021-04-13  7:51 ` [PATCH v2 00/16] Multigenerational LRU Framework SeongJae Park
2021-04-13 16:13   ` Jens Axboe
2021-04-13 16:42     ` SeongJae Park
2021-04-13 23:14     ` Dave Chinner
2021-04-14  2:29       ` Rik van Riel
     [not found]         ` <CAOUHufafMcaG8sOS=1YMy2P_6p0R1FzP16bCwpUau7g1-PybBQ@mail.gmail.com>
2021-04-14  6:15           ` Huang, Ying
2021-04-14  7:58             ` Yu Zhao
2021-04-14  8:27               ` Huang, Ying
2021-04-14 13:51                 ` Rik van Riel
2021-04-14 15:56                   ` Andi Kleen
2021-04-14 15:58                   ` [page-reclaim] " Shakeel Butt
2021-04-14 18:45                   ` Yu Zhao
2021-04-14 15:51           ` Andi Kleen
2021-04-14 15:58             ` Rik van Riel
2021-04-14 19:14               ` Yu Zhao
2021-04-14 19:41                 ` Rik van Riel
2021-04-14 20:08                   ` Yu Zhao
2021-04-14 19:04             ` Yu Zhao
2021-04-15  3:00               ` Andi Kleen
2021-04-15  7:13                 ` Yu Zhao
2021-04-15  8:19                   ` Huang, Ying
2021-04-15  9:57                   ` Michel Lespinasse
2021-04-24  2:33                     ` Yu Zhao
2021-04-24  3:30                       ` Andi Kleen
2021-04-24  4:16                         ` Yu Zhao
2021-04-14  3:40       ` Yu Zhao
2021-04-14  4:50         ` Dave Chinner
2021-04-14  7:16           ` Yu Zhao
2021-04-14 10:00             ` Yu Zhao
2021-04-15  1:36             ` Dave Chinner
2021-04-24 21:21               ` Yu Zhao
2021-04-14 14:43       ` Jens Axboe
2021-04-14 19:42         ` Yu Zhao
2021-04-15  1:21         ` Dave Chinner
2021-04-14 17:43 ` Johannes Weiner
2021-04-27 10:35   ` Yu Zhao
2021-04-29 23:46 ` Konstantin Kharlamov
2021-04-30  6:37   ` Konstantin Kharlamov
2021-04-30 19:31     ` Yu Zhao

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).