From: Yu Zhao <yuzhao@google.com>
To: Dave Hansen <dave.hansen@intel.com>
Cc: linux-mm@kvack.org, Alex Shi <alex.shi@linux.alibaba.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Hillf Danton <hdanton@sina.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Matthew Wilcox <willy@infradead.org>,
	Mel Gorman <mgorman@suse.de>, Michal Hocko <mhocko@suse.com>,
	Roman Gushchin <guro@fb.com>, Vlastimil Babka <vbabka@suse.cz>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Yang Shi <shy828301@gmail.com>, Ying Huang <ying.huang@intel.com>,
	linux-kernel@vger.kernel.org, page-reclaim@google.com
Subject: Re: [PATCH v1 00/14] Multigenerational LRU
Date: Tue, 16 Mar 2021 14:30:33 -0600	[thread overview]
Message-ID: <YFEVaZvsVt7nfhdM@google.com> (raw)
In-Reply-To: <7378f56e-4bc0-51d0-4a61-26aa6969c0de@intel.com>

On Tue, Mar 16, 2021 at 07:50:23AM -0700, Dave Hansen wrote:
> On 3/15/21 7:24 PM, Yu Zhao wrote:
> > On Mon, Mar 15, 2021 at 11:00:06AM -0700, Dave Hansen wrote:
> >> How bad does this scanning get in the worst case if there's a lot of
> >> sharing?
> > 
> > Actually the improvement is larger when there is more sharing, i.e.,
> > the higher the map_count, the larger the improvement. Let's assume we
> > have a shmem page mapped by two processes. To reclaim this page, we
> > need to make sure neither of the PTEs in the two sets of page tables
> > has the accessed bit set. The current page reclaim uses the rmap,
> > i.e., rmap_walk_file(). It first looks up the two VMAs (from the two
> > processes mapping this shmem file) in the interval tree of this shmem
> > file, then from each VMA, it goes through PGD/PUD/PMD to reach the
> > PTE. The page can't be reclaimed if either of the PTEs has the
> > accessed bit set; therefore the cost of the scanning is more than
> > proportional to the number of accesses when there is a lot of sharing.
> > 
> > Why does this series make it better? We track the usage of page
> > tables. Specifically, we work alongside switch_mm(): if one of the
> > processes above hasn't been scheduled since the last scan, we don't
> > need to scan its page tables, so the cost is roughly proportional to
> > the number of accesses, regardless of how many processes share the
> > pages. And instead of scanning pages one by one, we do it in large
> > batches. However, page tables can be very sparse -- this is not a
> > problem for the rmap because it knows exactly where the PTEs are (via
> > vma_address()); we only know ranges (via vma->vm_start/vm_end). This
> > is where the accessed bit on non-leaf PMDs can be of help.
> 
> That's an interesting argument.  *But*, this pivoted into describing an
> optimization.  My takeaway from this is that large amounts of sharing
> are probably only handled well if the processes doing the sharing are
> not running constantly.
> 
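To make the skipping part concrete, here is a toy userspace model of
the non-leaf accessed bit idea -- this is not kernel code, and all
names below are made up for illustration:

/*
 * Toy model: each "pmd" covers 512 "ptes". A scanner that honors an
 * accessed bit at the non-leaf level can skip 512 entries at a time
 * whenever nothing underneath was touched since the last scan.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PMDS		8
#define PTES_PER_PMD	512

struct toy_pmd {
	bool accessed;			/* set when any pte below is touched */
	bool pte_accessed[PTES_PER_PMD];
};

static struct toy_pmd page_table[NR_PMDS];

/* A memory access sets the accessed bit at both levels. */
static void touch(int pmd, int pte)
{
	page_table[pmd].accessed = true;
	page_table[pmd].pte_accessed[pte] = true;
}

/* Scan and clear accessed bits; return how many ptes we had to visit. */
static long scan(void)
{
	long visited = 0;

	for (int i = 0; i < NR_PMDS; i++) {
		if (!page_table[i].accessed)
			continue;	/* skip PTES_PER_PMD entries at once */
		page_table[i].accessed = false;
		for (int j = 0; j < PTES_PER_PMD; j++) {
			visited++;
			page_table[i].pte_accessed[j] = false;
		}
	}
	return visited;
}

int main(void)
{
	touch(3, 42);				/* sparse: one access in one pmd */
	printf("visited %ld ptes\n", scan());	/* 512, not 8*512 */
	printf("visited %ld ptes\n", scan());	/* 0: nothing re-accessed */
	return 0;
}

The point is that a single clear bit at the PMD level lets the scanner
skip 512 PTEs at once, which is what keeps very sparse page tables
cheap to walk.
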
> > But I guess you are wondering what the downsides are. Well, we haven't
> > seen any (yet). We do have page cache (non-shmem) heavy workloads,
> > but not at a scale large enough to make any statistically meaningful
> > observations. We are very interested in working with anybody who has
> > page cache (non-shmem) heavy workloads and is willing to try out this
> > series.
> 
> I would also be very interested to see some synthetic, worst-case
> micros.  Maybe take a few thousand processes with very sparse page
> tables that all map some shared memory.  They wake up long enough to
> touch a few pages, then go back to sleep.
> 
> What happens if we do that?  I'm not saying this is a good workload or
> that things must behave well, but I do find it interesting to watch the
> worst case.

That is a reasonable request, thank you. I've just opened a bug to cover
this case (a large, sparse, shared shmem mapping) and we'll have
something soon.
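
In the meantime, something along these lines is what I have in mind --
a rough userspace sketch of the micro you describe, with all parameters
arbitrary and untuned:

/*
 * Many processes share one large, sparsely touched shmem mapping;
 * each wakes up, touches a few pages, then goes back to sleep.
 * Run it under memory pressure and watch the pgscan_* counters;
 * the children never exit, so stop it manually.
 */
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

#define NR_PROCS	1024
#define MAP_SIZE	(4UL << 30)	/* 4G of shared shmem */
#define PAGE_SIZE	4096UL
#define NR_TOUCH	4		/* pages touched per wakeup */

int main(void)
{
	char *map = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (map == MAP_FAILED)
		return 1;

	for (int i = 0; i < NR_PROCS; i++) {
		if (fork())
			continue;
		/* child: touch a few scattered pages, then sleep */
		srand(getpid());
		for (;;) {
			for (int j = 0; j < NR_TOUCH; j++) {
				size_t page = (size_t)rand() %
					      (MAP_SIZE / PAGE_SIZE);
				map[page * PAGE_SIZE]++;
			}
			sleep(10);
		}
	}
	for (int i = 0; i < NR_PROCS; i++)
		wait(NULL);
	return 0;
}

Each child populates only the PTEs it touches, so its page tables for
the shared mapping stay very sparse, which should exercise the worst
case you are after.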

> I think it would also be very worthwhile to include some research in
> this series about why the kernel moved away from page table scanning.
> What has changed?  Are the workloads we were concerned about way back
> then not around any more?  Has faster I/O or larger memory sizes with a
> stagnating page size changed something?

Sure. Hugh suggested this too, but I personally found that ancient
pre-2.4 history too irrelevant (and uninteresting) to the modern age
and decided to spare the audience the boredom.

> >> I'm kinda surprised by this, but my 16GB laptop has a lot more page
> >> cache than I would have guessed:
> >>
> >>> Active(anon):    4065088 kB
> >>> Inactive(anon):  3981928 kB
> >>> Active(file):    2260580 kB
> >>> Inactive(file):  3738096 kB
> >>> AnonPages:       6624776 kB
> >>> Mapped:           692036 kB
> >>> Shmem:            776276 kB
> >>
> >> Most of it isn't mapped, but it's far from all being used for text.
> > 
> > We have identified two groups:
> >   1) average users who haven't experienced memory pressure since
> >   their systems booted. The booting process fills up the page cache
> >   with one-off file pages, and they remain until users experience
> >   memory pressure. This can be confirmed by looking at those counters
> >   on a freshly rebooted and idle system. My guess is that this is the
> >   case for your laptop.
> 
> It's been up ~12 days.  There is ~10GB of data in swap, and there's been
> a lot of scanning activity which I would associate with memory pressure:
> 
> > SwapCached:      1187596 kB
> > SwapTotal:      51199996 kB
> > SwapFree:       40419428 kB
> ...
> > nr_vmscan_write 24900719
> > nr_vmscan_immediate_reclaim 115535
> > pgscan_kswapd 320831544
> > pgscan_direct 23396383
> > pgscan_direct_throttle 0
> > pgscan_anon 127491077
> > pgscan_file 216736850
> > slabs_scanned 400469680
> > compact_migrate_scanned 1092813949
> > compact_free_scanned 4919523035
> > compact_daemon_migrate_scanned 2372223
> > compact_daemon_free_scanned 20989310
> > unevictable_pgs_scanned 307388545

10G swap + 8G anon rss + 6G file rss, hmm... an interesting workload.
The file rss does seem a bit high to me; my wild speculation is that
there have been git/make activities in addition to a VM?
