From: Yosry Ahmed <yosryahmed@google.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Chris Li <chrisl@kernel.org>,
	lsf-pc@lists.linux-foundation.org,
	 Johannes Weiner <hannes@cmpxchg.org>,
	Linux-MM <linux-mm@kvack.org>,  Michal Hocko <mhocko@kernel.org>,
	Shakeel Butt <shakeelb@google.com>,
	 David Rientjes <rientjes@google.com>,
	Hugh Dickins <hughd@google.com>,
	 Seth Jennings <sjenning@redhat.com>,
	Dan Streetman <ddstreet@ieee.org>,
	 Vitaly Wool <vitaly.wool@konsulko.com>,
	Yang Shi <shy828301@gmail.com>,  Peter Xu <peterx@redhat.com>,
	Minchan Kim <minchan@kernel.org>,
	 Andrew Morton <akpm@linux-foundation.org>,
	Aneesh Kumar K V <aneesh.kumar@linux.ibm.com>,
	 Michal Hocko <mhocko@suse.com>, Wei Xu <weixugc@google.com>
Subject: Re: [LSF/MM/BPF TOPIC] Swap Abstraction / Native Zswap
Date: Fri, 17 Mar 2023 03:19:09 -0700
Message-ID: <CAJD7tkbgS1NHbmvNH8B9f-E7b6tSoE2hFssoDD54KG7u6eBWkw@mail.gmail.com>
In-Reply-To: <87bkkt5e4o.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Thu, Mar 16, 2023 at 12:51 AM Huang, Ying <ying.huang@intel.com> wrote:
>
> Yosry Ahmed <yosryahmed@google.com> writes:
>
> > On Sun, Mar 12, 2023 at 7:13 PM Huang, Ying <ying.huang@intel.com> wrote:
> >>
> >> Yosry Ahmed <yosryahmed@google.com> writes:
> >>
> >> <snip>
> >> >
> >> > My current idea is to have one xarray that stores the swap_descs
> >> > (which include swap_entry, swapcache, swap_count, etc), and only for
> >> > rotating disks have an additional xarray that maps swap_entry ->
> >> > swap_desc for cluster readahead, assuming we can eliminate all other
> >> > situations requiring a reverse mapping.
> >> >
> >> > I am not sure how having separate xarrays helps. If we have one
> >> > xarray, we might as well save the other lookups and put everything in
> >> > swap_desc. In fact, this should improve the locking today, as
> >> > swapcache / swap_count operations can be lockless or very lightly
> >> > contended.
> >>
> >> The condition of the proposal is "reverse mapping cannot be avoided
> >> for enough situations".  So, if reverse mapping (or cluster readahead)
> >> can be avoided in enough situations, I think your proposal is good.
> >> Otherwise, I propose to use 2 xarrays.  You don't need another reverse
> >> mapping xarray, because you just need to read the next several swap
> >> entries into the swap cache for cluster readahead.  swap_desc isn't
> >> needed for cluster readahead.
> >
> > swap_desc would be needed for cluster readahead in my original
> > proposal as the swap cache lives in swap_descs. Based on the current
> > implementation, we would need a reverse mapping (swap entry ->
> > swap_desc) in 3 situations:
> >
> > 1) __try_to_reclaim_swap(): when trying to find an empty swap slot and
> > failing, we fall back to trying to find swap entries that only have a
> > page in the swap cache (no references in page tables or page cache)
> > and free them. This would require a reverse mapping.
> >
> > 2) swapoff: we need to swap in all entries in a swapfile, so we need
> > to get all swap_descs associated with that swapfile.
> >
> > 3) swap cluster readahead.
> >
> > For (1), I think we can drop the dependency on a reverse mapping if we
> > free swap entries once we swap a page in and add it to the swap cache,
> > even if the swap count does not drop to 0.
>
> Now, we will not drop the swap cache even if the swap count becomes 0,
> as long as swap space utilization is < 50%.  Per my understanding, this
> avoids swap page writing for read accesses.  So I don't think we can
> change this directly without the necessary discussion first.


Right. I am not sure I understand why we do this today. Is it to save
the overhead of allocating a new swap entry if the page is swapped out
again soon? I am also not sure I understand the statement "this avoids
swap page writing for read accesses".
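
To check that we are talking about the same thing, I am assuming this
is the vm_swap_full()-style heuristic. A rough sketch of the condition
as I understand it (not the exact kernel code):

static bool keep_swap_slot_after_read(long nr_free_swap_pages,
                                      long total_swap_pages)
{
        /*
         * Sketch only: after a swap-in for a read access, keep the swap
         * slot (and the swap cache copy) unless swap space is more than
         * half used.  A still-clean page can then be reclaimed again
         * later without being written out a second time.
         */
        return nr_free_swap_pages * 2 >= total_swap_pages;
}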

>
>
> > For (2), instead of scanning page tables and the shmem page cache to
> > find swapped-out pages for the swapfile, we can scan all swap_descs,
> > which should be more efficient. This is one of the proposal's
> > potential advantages.
>
> Good.
>
> > (3) is the one that would still need a reverse mapping with the
> > current proposal. Today we use swap cluster readahead for anon pages
> > if we have a spinning disk or if vma readahead is disabled. For shmem,
> > we always use cluster readahead. If we can limit cluster readahead to
> > rotating disks only, then the reverse mapping only needs to be
> > maintained for swapfiles on rotating disks. Otherwise, we will need to
> > maintain a reverse mapping for all swapfiles.
>
> For shmem, I think that it should be good to do readahead based on the
> shmem file offset instead of the swap device offset.
>
> It's possible that some pages in the readahead window are from an HDD
> while some other pages aren't.  So it's a little hard to enable cluster
> readahead for HDDs only.  Anyway, it's not common to use an HDD for
> swap now.
>
> >>
> >> > If the point is to store the swap_desc directly inside the xarray to
> >> > save 8 bytes, I am concerned that having multiple xarrays for
> >> > swapcache, swap_count, etc. will use more than that.
> >>
> >> The idea is to save the memory used by the reverse mapping xarray.
> >
> > I see.
> >
> >>
> >> >> >>
> >> >> >> > Keep in mind that the current overhead is 1 byte per slot, i.e.
> >> >> >> > O(max swap pages), not O(swapped pages). Also, 1 byte is assuming
> >> >> >> > we do not use the swap continuation pages. If we do, it may end up
> >> >> >> > being more. We also allocate continuation in full 4k pages, so even
> >> >> >> > if one swap_map element in a page requires continuation, we will
> >> >> >> > allocate an entire page. What I am trying to say is that to get an
> >> >> >> > actual comparison you need to also factor in the swap utilization
> >> >> >> > and the rate of usage of swap continuation. I don't know how to
> >> >> >> > come up with a formula for this tbh.
> >> >> >> >
> >> >> >> > Also, like Johannes said, the worst case overhead (32 bytes if you
> >> >> >> > count the reverse mapping) is 0.8% of swapped memory, aka 8M for every
> >> >> >> > 1G swapped. It doesn't sound *very* bad. I understand that it is pure
> >> >> >> > overhead for people not using zswap, but it is not very awful.
> >> >> >> >
> >> >> >> >>
> >> >> >> >> It seems what you really need is one bit of information to indicate
> >> >> >> >> this page is backed by zswap. Then you can have a separate pointer
> >> >> >> >> for the zswap entry.
> >> >> >> >
> >> >> >> > If you use one bit in swp_entry_t (or one of the available swap types)
> >> >> >> > to indicate whether the page is backed by a swapfile or by zswap, it
> >> >> >> > doesn't really work. We lose the indirection layer. How do we move the
> >> >> >> > page from zswap to a swapfile? We would need to go update the page
> >> >> >> > tables and the shmem page cache, similar to swapoff.
> >> >> >> >
> >> >> >> > Instead, if we store some other key in swp_entry_t and use it to look
> >> >> >> > up the swp_entry_t or zswap_entry pointer, then that's essentially what
> >> >> >> > the swap_desc does. It just goes the extra mile of unifying the
> >> >> >> > swapcache as well and storing it directly in the swap_desc instead of
> >> >> >> > storing it in another lookup structure.
> >> >> >>
> >> >> >> If we choose to make sizeof(struct swap_desc) == 8, that is, store only
> >> >> >> swap_entry in swap_desc, the added indirection appears to be like another
> >> >> >> level of page table with 1 entry.  Then, we may use a method similar to
> >> >> >> how systems with 2-level and 3-level page tables are supported, like the
> >> >> >> code in include/asm-generic/pgtable-nopmd.h.  But I haven't thought about
> >> >> >> this deeply.
> >> >> >
> >> >> > Can you expand further on this idea? I am not sure I fully understand.
> >> >>
> >> >> OK.  The goal is to avoid the overhead if indirection isn't enabled via
> >> >> kconfig.
> >> >>
> >> >> If indirection isn't enabled, store swap_entry in the PTE directly.
> >> >> Otherwise, store the index of the swap_desc in the PTE.  Different
> >> >> functions (e.g., to get/set swap_entry in the PTE) are implemented
> >> >> based on kconfig.
> >> >
> >> >
> >> > I thought about this; the problem is that we will have multiple
> >> > implementations of multiple things. For example, swap_count without
> >> > the indirection layer lives in the swap_map (with continuation logic).
> >> > With the indirection layer, it lives in the swap_desc (or somewhere
> >> > else). Same for the swapcache. Even if we keep the swapcache in an
> >> > xarray and not inside swap_desc, it would be indexed by swap_entry if
> >> > the indirection is disabled, and by swap_desc (or similar) if the
> >> > indirection is enabled. I think maintaining separate implementations
> >> > for when the indirection is enabled/disabled would be adding too much
> >> > complexity.
> >> >
> >> > WDYT?
> >>
> >> If we go this way, swap cache and swap_count will always be indexed by
> >> swap_entry.  swap_desc just provides an indirection to make it possible
> >> to move between swap devices.
> >>
> >> Why must we index swap cache and swap_count by swap_desc if indirection
> >> is enabled?  Yes, we can save one xarray indexing if we do so, but I
> >> don't think the overhead of one xarray indexing is a showstopper.
> >>
> >> I think this can be one intermediate step towards your final target.
> >> The changes to the current implementation can be smaller.
> >
> > IIUC, the idea is to have two xarrays:
> > (a) xarray that stores a pointer to a struct containing swap_count and
> > swap cache.
> > (b) xarray that stores the underlying swap entry or zswap entry.
> >
> > When indirection is disabled:
> > page tables & page cache have the swap entry directly, like today;
> > xarray (a) is indexed by swap entry; xarray (b) does not exist. No
> > reverse mapping needed.
> >
> > In this case we have an extra overhead of 12-16 bytes (the struct
> > containing swap_count and swap cache) vs. 24 bytes of the swap_desc.
> >
> > When indirection is enabled:
> > page tables & page cache have a swap id (or swap_desc index), xarray
> > (a) is indexed by swap id,
>
> xarray (a) is indexed by swap entry.


How so? With the indirection enabled, the page tables & page cache
have the swap id (or swap_desc index), which can point to a swap entry
or a zswap entry -- and that can change when the page is moved between
zswap & swapfiles. How is xarray (a) indexed by the swap entry in this
case? Shouldn't it be indexed by the abstract swap id, so that
writeback from zswap is transparent?
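
To make this concrete, this is what I mean by indexing by the swap id
(a sketch only, with made-up names, not the exact layout I would
propose):

struct folio;   /* kernel type, forward-declared just for the sketch */

/*
 * xarray (a): swap id -> per-id metadata that never moves.
 * xarray (b): swap id -> backend, i.e. a swapfile slot or a zswap entry.
 *
 * The swap id stored in the PTE / page cache stays stable; only the
 * value in xarray (b) changes when a page moves between zswap and a
 * swapfile, which is what would make the writeback transparent.
 */
struct swap_meta {
        unsigned int swap_count;
        struct folio *swapcache;        /* NULL if not in the swap cache */
};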

>
>
> > xarray (b) is indexed by swap id as well
> > and contains the swap entry or zswap entry. Reverse mapping might be
> > needed.
>
> Reverse mapping isn't needed.


It would be needed if xarray (a) is indexed by the swap id. I am not
sure I understand how it can be indexed by the swap entry if the
indirection is enabled.

>
>
> > In this case we have an extra overhead of 12-16 bytes + 8 bytes for
> > xarray (b) entry + memory overhead from 2nd xarray + reverse mapping
> > where needed.
> >
> > There is also the extra cpu overhead for an extra lookup in certain paths.
> >
> > Is my analysis correct? If yes, I agree that the original proposal is
> > good if the reverse mapping can be avoided in enough situations, and
> > that we should consider such alternatives otherwise. As I mentioned
> > above, I think it comes down to whether we can completely restrict
> > cluster readahead to rotating disks or not -- in which case we need to
> > decide what to do for shmem and for anon when vma readahead is
> > disabled.
>
> We can even have a minimal indirection implementation, where the swap
> cache and swap_map[] are kept as they were before and just one xarray
> is added.  The xarray is indexed by swap id (or swap_desc index) and
> stores the corresponding swap entry.
>
> When indirection is disabled, no extra overhead.
>
> When indirection is enabled, the extra overhead is just 8 bytes per
> swapped page.
>
> The basic migration support can be built on top of this.
>
> I think that this could be a baseline for indirection support.  Then
> further optimization can be built on top of it step by step with
> supporting data.


I am not sure how this works with zswap. Currently the swap_map[]
implementation is specific to swapfiles; it does not work for zswap
unless we implement separate swap counting logic for zswap &
swapfiles. The same goes for the swapcache: it currently only supports
being indexed by a swap entry, so it would need to support being
indexed by a swap id, or we would need a separate swap cache for
zswap. Having separate implementations would add complexity, and we
would need to perform handoffs of the swap count/cache when a page is
moved from zswap to a swapfile.
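
For reference, this is roughly the swap_desc layout I have been
assuming in this proposal (a sketch with made-up field names, nothing
final):

typedef struct { unsigned long val; } swp_entry_t; /* as in mm_types.h */
struct folio;
struct zswap_entry;

/*
 * Sketch only.  The swap count and the swap cache pointer live in the
 * descriptor itself, so they never need to be handed off when a page
 * moves between zswap and a swapfile; only the backend union below
 * changes.
 */
struct swap_desc {
        unsigned int swap_count;
        struct folio *swapcache;           /* NULL if not in the swap cache */
        unsigned long flags;               /* e.g. which backend is in use */
        union {
                swp_entry_t slot;          /* backed by a swapfile slot */
                struct zswap_entry *zswap; /* backed by zswap */
        };
};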

>
>
> >>
> >> >> >> >>
> >> >> >> >> Depending on how much you are going to reuse the swap cache, you might
> >> >> >> >> need to have something like a swap_info_struct to keep the locks happy.
> >> >> >> >
> >> >> >> > My current intention is to reimplement the swapcache completely as a
> >> >> >> > pointer in struct swap_desc. This would eliminate this need and a lot
> >> >> >> > of the locking we do today if I get things right.
> >> >> >> >
> >> >> >> >>
> >> >> >> >> > Another potential concern is readahead. With this design, we have no
> >> >> >> >>
> >> >> >> >> Readahead is for spinning disk :-) Even a normal swap file with an SSD can
> >> >> >> >> use some modernization.
> >> >> >> >
> >> >> >> > Yeah, I initially thought we would only need the swp_entry_t ->
> >> >> >> > swap_desc reverse mapping for readahead, and that we could store it
> >> >> >> > only for spinning disks, but I was wrong. We need it for other things
> >> >> >> > as well today: swapoff, and when trying to find an empty swap slot and
> >> >> >> > we start trying to free swap slots used only by the swapcache. However,
> >> >> >> > I think both of these cases can be fixed (I can share more details if
> >> >> >> > you want). If everything goes well, we should only need to maintain the
> >> >> >> > reverse mapping (extra overhead above 24 bytes) for swap files on
> >> >> >> > spinning disks, for readahead.
> >> >> >> >
> >> >> >> >>
> >> >> >> >> Looking forward to your discussion.
> >> >>
> >> >> Per my understanding, the indirection is to make it easy to move
> >> >> (swapped) pages among swap devices based on hot/cold.  This is similar
> >> >> to the target of memory tiering.  It appears that we can extend the
> >> >> memory tiering (mm/memory-tiers.c) framework to cover swap devices too?
> >> >> Is it possible for zswap to be faster than some slow memory media?
> >> >
> >> >
> >> > Agree with Chris that this may require a much larger overhaul. A slow
> >> > memory tier is still addressable memory, while swap/zswap requires a
> >> > page fault to read the pages. I think (at least for now) there is a
> >> > fundamental difference. We want reclaim to eventually treat slow
> >> > memory & swap as just different tiers to place cold memory in with
> >> > different characteristics, but otherwise I think the swapping
> >> > implementation itself is very different.  Am I missing something?
> >>
> >> Is it possible that zswap is faster than a really slow memory
> >> addressable device backed by NAND?  TBH, I don't have the answer.
> >
> > I am not sure either.
> >
> >>
> >> Anyway, do you need a way to describe the tiers of the swap devices,
> >> so that you can move the cold pages among the swap devices based on
> >> that?
> >
> > For now I think the "tiers" in this proposal are just zswap and normal
> > swapfiles. We can later extend it to support more explicit tiering.
>
> IIUC, in the original zswap implementation, there's a 1:1 relationship
> between zswap and the normal swapfile.  But now you make demotion among
> swap devices more general.  Then we need some general way to specify
> which swap devices are fast and which are slow, and the demotion
> relationship among them.  It can be memory tiers or something else, but
> we need one.


I think for this proposal there are only 2 hardcoded tiers: zswap is
fast, the swapfile is slow. In the future, we can support more dynamic
tiering if the need arises.
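
In other words, something like this is all the tiering I am assuming
for now (purely illustrative):

/* Sketch: the only two hardcoded "tiers" in this proposal for now. */
enum swap_tier {
        SWAP_TIER_ZSWAP,        /* fast: compressed copy in memory */
        SWAP_TIER_SWAPFILE,     /* slow: on a swap device */
};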

>
>
> Best Regards,
> Huang, Ying
>

