From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand <david@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Huang Ying <ying.huang@intel.com>, Gao Xiang <xiang@kernel.org>,
Yu Zhao <yuzhao@google.com>, Yang Shi <shy828301@gmail.com>,
Michal Hocko <mhocko@suse.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v3 1/4] mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
Date: Fri, 1 Mar 2024 16:31:49 +0000 [thread overview]
Message-ID: <f4453904-6e6a-4b81-bce3-8926cdfaddfc@arm.com> (raw)
In-Reply-To: <5ebac77a-5c61-481f-8ac1-03bc4f4e2b1d@arm.com>
On 01/03/2024 16:27, Ryan Roberts wrote:
> On 28/02/2024 15:12, David Hildenbrand wrote:
>> On 28.02.24 15:57, Ryan Roberts wrote:
>>> On 28/02/2024 12:12, David Hildenbrand wrote:
>>>>>> How relevant is it? Relevant enough that someone decided to put that
>>>>>> optimization in? I don't know :)
>>>>>
>>>>> I'll have one last go at convincing you: Huang Ying (original author) commented
>>>>> "I believe this should be OK. Better to compare the performance too." at [1].
>>>>> That implies to me that perhaps the optimization wasn't in response to a
>>>>> specific problem after all. Do you have any thoughts, Huang?
>>>>
>>>> Might make sense to include that in the patch description!
>>>>
>>>>> OK so if we really do need to keep this optimization, here are some ideas:
>>>>>
>>>>> Fundamentally, we would like to be able to figure out the size of the swap slot
>>>>> from the swap entry. Today swap supports 2 sizes: PAGE_SIZE and PMD_SIZE. For
>>>>> PMD_SIZE, it always uses a full cluster, so can easily add a flag to the
>>>>> cluster
>>>>> to mark it as PMD_SIZE.
>>>>>
>>>>> Going forwards, we want to support all sizes (power-of-2). Most of the time, a
>>>>> cluster will contain only one size of THPs, but this is not the case when a THP
>>>>> in the swapcache gets split or when an order-0 slot gets stolen. We expect
>>>>> these
>>>>> cases to be rare.
>>>>>
>>>>> 1) Keep the size of the smallest swap entry in the cluster header. Most of the
>>>>> time it will be the full size of the swap entry, but sometimes it will cover
>>>>> only a portion. In the latter case you may see a false negative for
>>>>> swap_page_trans_huge_swapped() meaning we take the slow path, but that is rare.
>>>>> There is one wrinkle: currently the HUGE flag is cleared in
>>>>> put_swap_folio(). We
>>>>> wouldn't want to do the equivalent in the new scheme (i.e. set the whole
>>>>> cluster
>>>>> to order-0). I think that is safe, but haven't completely convinced myself yet.
>>>>>
>>>>> 2) Allocate 4 bits per (small) swap slot to hold the order. This will give
>>>>> precise information and is conceptually simpler to understand, but will cost
>>>>> more memory (half as much as the initial swap_map[] again).
>>>>>
>>>>> I still prefer to avoid this at all if we can (and would like to hear Huang's
>>>>> thoughts). But if it's a choice between 1 and 2, I prefer 1 - I'll do some
>>>>> prototyping.
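As a userspace sketch of idea 2 (all names here are hypothetical, not the kernel API): pack one 4-bit order per slot, two slots per byte, which is how it ends up costing half as much memory as the 1-byte-per-slot swap_map[] again:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Sketch only: one 4-bit order per swap slot, packed two per byte.
 * Orders 0..15 cover PAGE_SIZE up to well past PMD_SIZE on arm64/x86.
 */
struct order_map {
	size_t nr_slots;
	uint8_t *bytes;		/* (nr_slots + 1) / 2 bytes */
};

static struct order_map *order_map_alloc(size_t nr_slots)
{
	struct order_map *m = malloc(sizeof(*m));

	m->nr_slots = nr_slots;
	m->bytes = calloc((nr_slots + 1) / 2, 1);
	return m;
}

/* Record the folio order backing this slot (0..15). */
static void order_map_set(struct order_map *m, size_t slot, uint8_t order)
{
	uint8_t *b = &m->bytes[slot / 2];

	if (slot & 1)
		*b = (*b & 0x0f) | (uint8_t)(order << 4);	/* high nibble */
	else
		*b = (*b & 0xf0) | (order & 0x0f);		/* low nibble */
}

static uint8_t order_map_get(const struct order_map *m, size_t slot)
{
	uint8_t b = m->bytes[slot / 2];

	return (slot & 1) ? (b >> 4) : (b & 0x0f);
}
```

The upside over idea 1 is that a lookup is exact for every slot, even in a mixed-order cluster; the downside is the permanent memory cost for every swap device.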
>>>>
>>>> Taking a step back: what about we simply batch unmapping of swap entries?
>>>>
>>>> That is, if we're unmapping a PTE range, we'll collect swap entries (under PT
>>>> lock) that reference consecutive swap offsets in the same swap file.
>>>
>>> Yes in principle, but there are 4 places where free_swap_and_cache() is called,
>>> and only 2 of those are really amenable to batching (zap_pte_range() and
>>> madvise_free_pte_range()). So the other two users will still take the "slow"
>>> path. Maybe those 2 callsites are the only ones that really matter? I can
>>> certainly have a stab at this approach.
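For concreteness, the batching you suggest amounts to scanning (under the PT lock) for a run of entries with the same swap type and consecutive offsets. A minimal userspace model of that scan (struct and function names are made up, not kernel code):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace stand-in for swp_entry_t: swap file + slot within it. */
struct swp {
	unsigned int type;	/* which swap file */
	unsigned long offset;	/* slot within it */
};

/*
 * Length of the run of same-type, consecutive-offset entries starting
 * at e[0]. The caller would then decrement all their swapcounts and do
 * one batched reclaim pass instead of n independent ones.
 */
static size_t swap_batch_len(const struct swp *e, size_t n)
{
	size_t i;

	if (n == 0)
		return 0;
	for (i = 1; i < n; i++) {
		if (e[i].type != e[0].type ||
		    e[i].offset != e[0].offset + i)
			break;
	}
	return i;
}
```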
>>
>> We can ignore the s390x one. That s390x code should only apply to KVM guest
>> memory where ordinary THP are not even supported. (and nobody uses mTHP there yet).
>>
>> Long story short: the VM can hint that some memory pages are now unused and the
>> hypervisor can reclaim them. That's what that callback does (zap guest-provided
>> guest memory). No need to worry about any batching for now.
>>
>> Then, there is the shmem one in shmem_free_swap(). I really don't know how shmem
>> handles THP+swapout.
>>
>> But looking at shmem_writepage(), we split any large folios before moving them
>> to the swapcache, so likely we don't care at all, because THP don't apply.
>>
>>>
>>>>
>>>> There, we can then first decrement all the swap counts, and then try minimizing
>>>> how often we actually have to try reclaiming swap space (lookup folio, see it's
>>>> a large folio that we cannot reclaim or could reclaim, ...).
>>>>
>>>> Might need some fine-tuning in swap code to "advance" to the next entry to try
>>>> freeing up, but we certainly can do better than what we would do right now.
>>>
>>> I'm not sure I've understood this. Isn't advancing just a matter of:
>>>
>>> entry = swp_entry(swp_type(entry), swp_offset(entry) + 1);
>>
>> I was talking about the advancing swapslot processing after decrementing the
>> swapcounts.
>>
>> Assume you decremented 512 swapcounts and some of them went to 0. AFAIU, you'd
>> have to start with the first swapslot that now has a swapcount of 0 and try to
>> reclaim swap.
>>
>> Assume you get a small folio, then you'll have to proceed with the next swap
>> slot and try to reclaim swap.
>>
>> Assume you get a large folio, then you can skip more swapslots (depending on
>> offset into the folio etc).
>>
>> If you get what I mean. :)
>>
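To make sure I've understood: a userspace model of that skip logic would look something like the below (names are hypothetical; folios are assumed naturally aligned within the cluster, which matches how large folios are allocated today):

```c
#include <assert.h>
#include <stddef.h>

/*
 * count[i] is the remaining swapcount of slot i after the batched
 * decrement; order[i] is the order of the swapcache folio covering
 * slot i. Returns how many reclaim attempts a walk over n slots makes:
 * one per folio whose first count==0 slot we hit, skipping the rest of
 * that folio's slots instead of retrying each one.
 */
static unsigned int reclaim_attempts(const unsigned char *count,
				     const unsigned char *order, size_t n)
{
	unsigned int attempts = 0;
	size_t i = 0;

	while (i < n) {
		if (count[i] == 0) {
			size_t nr = 1ul << order[i];

			attempts++;			/* try reclaim once */
			i = (i & ~(nr - 1)) + nr;	/* skip to folio end */
		} else {
			i++;		/* still referenced: just move on */
		}
	}
	return attempts;
}
```

So a fully-free 512-slot run backed by a single large folio would cost one reclaim attempt rather than 512, which is the whole point of the batching.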
>
> I've implemented the batching as David suggested, and I'm pretty confident it's
> correct. The only problem is that during testing I can't provoke the code to
> take the path. I've been poring over the code but struggling to figure out
> under what situation you would expect the swap entry passed to
> free_swap_and_cache() to still have a cached folio? Does anyone have any idea?
>
> This is the original (unbatched) function, after my change, which caused David's
> concern that we would end up calling __try_to_reclaim_swap() far too much:
>
> int free_swap_and_cache(swp_entry_t entry)
> {
> 	struct swap_info_struct *p;
> 	unsigned char count;
>
> 	if (non_swap_entry(entry))
> 		return 1;
>
> 	p = _swap_info_get(entry);
> 	if (p) {
> 		count = __swap_entry_free(p, entry);
> 		if (count == SWAP_HAS_CACHE)
> 			__try_to_reclaim_swap(p, swp_offset(entry),
> 					      TTRS_UNMAPPED | TTRS_FULL);
> 	}
> 	return p != NULL;
> }
>
> The trouble is, whenever it's called, count is always 0, so
> __try_to_reclaim_swap() never gets called.
>
> My test case is allocating 1G anon memory, then doing madvise(MADV_PAGEOUT) over
> it. Then doing either a munmap() or madvise(MADV_FREE), both of which cause this
> function to be called for every PTE, but count is always 0 after
> __swap_entry_free() so __try_to_reclaim_swap() is never called. I've tried for
> order-0 as well as PTE- and PMD-mapped 2M THP.
>
> I'm guessing the swapcache was already reclaimed as part of MADV_PAGEOUT? I'm
> using a block ram device as my backing store - I think this does synchronous IO
> so perhaps if I have a real block device with async IO I might have more luck?
Ahh, I just switched to an SSD as the swap device and now it's getting called. I
guess that's the reason. Sorry for the noise.
> Just a guess...
>
> Or perhaps this code path is a corner case? In which case, perhaps it's not worth
> adding the batching optimization after all?
>
> Thanks,
> Ryan
>