From: Yu Zhao <yuzhao@google.com>
To: Dave Hansen <dave.hansen@intel.com>
Cc: linux-mm@kvack.org, Alex Shi <alex.shi@linux.alibaba.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Hillf Danton <hdanton@sina.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Matthew Wilcox <willy@infradead.org>,
	Mel Gorman <mgorman@suse.de>, Michal Hocko <mhocko@suse.com>,
	Roman Gushchin <guro@fb.com>, Vlastimil Babka <vbabka@suse.cz>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Yang Shi <shy828301@gmail.com>, Ying Huang <ying.huang@intel.com>,
	linux-kernel@vger.kernel.org, page-reclaim@google.com
Subject: Re: [PATCH v1 00/14] Multigenerational LRU
Date: Mon, 15 Mar 2021 20:24:56 -0600
Message-ID: <YFAW+PtJS7DEngFZ@google.com>
In-Reply-To: <5f621dd6-4bbd-dbf7-8fa1-d63d9a5bfc16@intel.com>

On Mon, Mar 15, 2021 at 11:00:06AM -0700, Dave Hansen wrote:
> On 3/12/21 11:57 PM, Yu Zhao wrote:
> > Background
> > ==========
> > DRAM is a major factor in total cost of ownership, and improving
> > memory overcommit brings a high return on investment. Over the past
> > decade of research and experimentation in memory overcommit, we
> > observed a distinct trend across millions of servers and clients: the
> > size of page cache has been decreasing because of the growing
> > popularity of cloud storage. Nowadays anon pages account for more than
> > 90% of our memory consumption and page cache contains mostly
> > executable pages.
> 
> This makes a compelling argument that current reclaim is not well
> optimized for anonymous memory with low rates of sharing.  Basically,
> anonymous rmap is very powerful, but we're not getting enough bang for
> our buck out of it.
> 
> I also understand that the workloads you reference are anonymous-heavy
> and that page cache isn't a *major* component.
> 
> But, what happens to page-cache-heavy workloads?  Does this just
> effectively force databases that want to use shmem over to hugetlbfs?

No, they should benefit too. In terms of page reclaim, shmem pages
are basically considered anon: they sit on the anon LRU, and dirty
shmem pages can only be swapped out (we can safely assume clean shmem
pages are virtually nonexistent), in contrast to file pages, which
have backing storage and are written back to it.
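
For reference, this is roughly how the kernel classifies a page for
LRU purposes today (a simplified excerpt of page_is_file_lru() from
include/linux/mm_inline.h, shown only as an illustration): shmem
pages have PG_swapbacked set, so they are not "file" here and end up
on the anon LRU.

  /*
   * Shmem pages are PageSwapBacked, so this returns false for them:
   * they go on the anon LRU and can only be reclaimed via swap.
   */
  static inline int page_is_file_lru(struct page *page)
  {
  	return !PageSwapBacked(page);
  }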

I should have phrased it better: our accounting is based on what the
kernel provides, i.e., the anon/file (LRU) sizes you listed below.

> How bad does this scanning get in the worst case if there's a lot of
> sharing?

Actually, the improvement is larger when there is more sharing, i.e.,
the higher the map_count, the larger the improvement. Let's assume we
have a shmem page mapped by two processes. To reclaim this page, we
need to make sure neither PTE from the two sets of page tables has
the accessed bit set. The current page reclaim uses the rmap, i.e.,
rmap_walk_file(). It first looks up the two VMAs (from the two
processes mapping this shmem file) in the interval tree of this shmem
file, then from each VMA it walks down PGD/PUD/PMD to reach the PTE.
The page can't be reclaimed if either of the PTEs has the accessed
bit set; therefore the cost of the scanning is more than proportional
to the number of accesses when there is a lot of sharing.
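
To make that cost concrete, here is a heavily simplified sketch of
what the rmap-based check amounts to for a shared shmem page. This is
illustrative pseudocode only, loosely modeled on rmap_walk_file() and
page_referenced(); the real code uses page_vma_mapped_walk(), takes
the proper locks and clears the accessed bit, all of which is omitted
here, and walk_to_pte() is a made-up helper standing in for the
PGD/P4D/PUD/PMD descent.

  /* Illustrative only: one full page table descent per mapping. */
  static bool page_was_referenced(struct page *page)
  {
  	struct address_space *mapping = page->mapping;
  	pgoff_t pgoff = page->index;
  	struct vm_area_struct *vma;
  	bool referenced = false;

  	/* find every VMA that covers this offset of the shmem file */
  	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
  		unsigned long addr = vma->vm_start +
  			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);

  		/* walk_to_pte(): hypothetical PGD/P4D/PUD/PMD descent */
  		pte_t *pte = walk_to_pte(vma->vm_mm, addr);

  		if (pte && pte_young(*pte))
  			referenced = true;
  	}

  	/* any young PTE keeps the page, so the work spent on the other
  	   mappings is wasted whenever there is a lot of sharing */
  	return referenced;
  }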

Why does this series make it better? We track the usage of page
tables. Specifically, we work alongside switch_mm(): if one of the
processes above hasn't been scheduled since the last scan, we don't
need to scan its page tables. So the cost is roughly proportional to
the number of accesses, regardless of how many processes are
involved. And instead of scanning pages one by one, we do it in large
batches. However, page tables can be very sparse -- this is not a
problem for the rmap, which knows exactly where the PTEs are (via
vma_address()); we only know ranges (via vma->vm_start/vm_end). This
is where the accessed bit on non-leaf PMDs can help (see the sketch
below).
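
In other words, the walk looks roughly like this. It is an
illustrative sketch only, not the actual patch: mm_list,
lru_gen_link, last_sched_seq, last_scan_seq, walk_to_pmd() and
scan_pte_batch() are made-up names standing in for the bookkeeping
this series does around switch_mm() and for the batched PTE scan.

  /* Illustrative only: skip whole mm's and whole PMD ranges. */
  static void scan_mm_list(struct lruvec *lruvec)
  {
  	struct mm_struct *mm;
  	struct vm_area_struct *vma;
  	unsigned long addr;

  	list_for_each_entry(mm, &lruvec->mm_list, lru_gen_link) {
  		/*
  		 * switch_mm() records when an mm last ran; if it hasn't
  		 * been scheduled since the last scan, its accessed bits
  		 * can't have changed, so skip all of its page tables.
  		 */
  		if (mm->last_sched_seq <= lruvec->last_scan_seq)
  			continue;

  		for (vma = mm->mmap; vma; vma = vma->vm_next) {
  			for (addr = vma->vm_start; addr < vma->vm_end;
  			     addr += PMD_SIZE) {
  				/* walk_to_pmd(): PGD/P4D/PUD descent */
  				pmd_t *pmd = walk_to_pmd(mm, addr);

  				/*
  				 * The accessed bit on the non-leaf PMD says
  				 * whether any PTE underneath was used, so a
  				 * cold or empty 2MB range costs one test.
  				 */
  				if (!pmd || pmd_none(*pmd) || !pmd_young(*pmd))
  					continue;

  				/* check all PTEs under this PMD in one batch */
  				scan_pte_batch(mm, pmd, addr);
  			}
  		}
  	}
  }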

But I guess you are wondering what the downsides are. Well, we
haven't seen any (yet). We do have page cache (non-shmem) heavy
workloads, but not at a scale large enough to make any statistically
meaningful observations. We are very interested in working with
anybody who has page cache (non-shmem) heavy workloads and is willing
to try out this series.

> I'm kinda surprised by this, but my 16GB laptop has a lot more page
> cache than I would have guessed:
> 
> > Active(anon):    4065088 kB
> > Inactive(anon):  3981928 kB
> > Active(file):    2260580 kB
> > Inactive(file):  3738096 kB
> > AnonPages:       6624776 kB
> > Mapped:           692036 kB
> > Shmem:            776276 kB
> 
> Most of it isn't mapped, but it's far from all being used for text.

We have categorized users into two groups:
  1) Average users who haven't experienced memory pressure since
  their systems booted. The boot process fills up the page cache with
  one-off file pages, and they remain until users experience memory
  pressure. This can be confirmed by looking at those counters on a
  freshly rebooted and idle system. My guess is that this is the case
  for your laptop.
  2) Engineering users who store git repos and compile locally. With
  the current page reclaim, they complained about their browsers
  being janky because anon memory got swapped even though their
  systems had a lot of stale file pages in the page cache. They are
  what we consider part of the page cache (non-shmem) heavy group.
