From: Ryan Roberts <ryan.roberts@arm.com>
To: Barry Song <21cnbao@gmail.com>
Cc: David Hildenbrand <david@redhat.com>,
	akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, mhocko@suse.com, shy828301@gmail.com,
	wangkefeng.wang@huawei.com, willy@infradead.org,
	xiang@kernel.org, ying.huang@intel.com, yuzhao@google.com,
	chrisl@kernel.org, surenb@google.com, hanchuanhua@oppo.com
Subject: Re: [PATCH v3 4/4] mm: swap: Swap-out small-sized THP without splitting
Date: Wed, 28 Feb 2024 15:57:40 +0000	[thread overview]
Message-ID: <0761feb8-638c-4b3f-8d91-c0dc9dbbf0c6@arm.com> (raw)
In-Reply-To: <CAGsJ_4zEKDVM==0KaFOb_UgO3GZ7ag2DW3sBLA-t9Tf0gAAnww@mail.gmail.com>

On 28/02/2024 01:23, Barry Song wrote:
> On Wed, Feb 28, 2024 at 1:06 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> On 23/02/2024 09:46, Barry Song wrote:
>>> On Thu, Feb 22, 2024 at 11:09 PM David Hildenbrand <david@redhat.com> wrote:
>>>>
>>>> On 22.02.24 08:05, Barry Song wrote:
>>>>> Hi Ryan,
>>>>>
>>>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>>>> index 2cc0cb41fb32..ea19710aa4cd 100644
>>>>>> --- a/mm/vmscan.c
>>>>>> +++ b/mm/vmscan.c
>>>>>> @@ -1212,11 +1212,13 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>>>>>>                                      if (!can_split_folio(folio, NULL))
>>>>>>                                              goto activate_locked;
>>>>>>                                      /*
>>>>>> -                                     * Split folios without a PMD map right
>>>>>> -                                     * away. Chances are some or all of the
>>>>>> -                                     * tail pages can be freed without IO.
>>>>>> +                                     * Split PMD-mappable folios without a
>>>>>> +                                     * PMD map right away. Chances are some
>>>>>> +                                     * or all of the tail pages can be freed
>>>>>> +                                     * without IO.
>>>>>>                                       */
>>>>>> -                                    if (!folio_entire_mapcount(folio) &&
>>>>>> +                                    if (folio_test_pmd_mappable(folio) &&
>>>>>> +                                        !folio_entire_mapcount(folio) &&
>>>>>>                                          split_folio_to_list(folio,
>>>>>>                                                              folio_list))
>>>>>>                                              goto activate_locked;
>>>>>
>>>>> I ran a test to investigate what happens while reclaiming a partially
>>>>> unmapped large folio. For example, for a 64KiB large folio, MADV_DONTNEED
>>>>> the range 4KiB~64KiB and keep only the first subpage (0~4KiB) mapped.
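>>>>>
>>>>> For illustration, a minimal userspace sketch of that setup might look like
>>>>> the below (hypothetical reproducer; mTHP sysfs setup, alignment details and
>>>>> error handling are omitted, and MADV_PAGEOUT is just one way to force the
>>>>> folio through reclaim):
>>>>>
>>>>> #include <sys/mman.h>
>>>>> #include <string.h>
>>>>>
>>>>> int main(void)
>>>>> {
>>>>> 	size_t sz = 64 * 1024;
>>>>> 	char *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
>>>>> 		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>>>>>
>>>>> 	memset(p, 1, sz);                            /* fault in as one 64KiB folio */
>>>>> 	madvise(p + 4096, sz - 4096, MADV_DONTNEED); /* zap subpages 1..15 */
>>>>> 	madvise(p, 4096, MADV_PAGEOUT);              /* reclaim what is left */
>>>>> 	return 0;
>>>>> }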
>>>>
>>>> IOW, something that already happens with ordinary THP, IIRC.
>>>>
>>>>>
>>>>> My test is meant to address three concerns:
>>>>> a. whether we leak swap slots
>>>>> b. whether we do redundant I/O
>>>>> c. whether we cause races on the swapcache
>>>>>
>>>>> What I did is print folio->_nr_pages_mapped and dump the 16 swap_map[]
>>>>> entries at specific stages:
>>>>> 1. just after add_to_swap   (swap slots are allocated)
>>>>> 2. before and after try_to_unmap   (PTEs are set to swap entries)
>>>>> 3. before and after pageout (also added a printk in the zram driver to dump all I/O writes)
>>>>> 4. before and after remove_mapping
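>>>>>
>>>>> The swap_map dump comes from a hypothetical debug helper along these lines
>>>>> (names and placement are illustrative only; it assumes folio->swap is
>>>>> already valid, i.e. after add_to_swap):
>>>>>
>>>>> static void dump_swap_map(struct folio *folio, int nr)
>>>>> {
>>>>> 	struct swap_info_struct *si = swp_swap_info(folio->swap);
>>>>> 	unsigned long offset = swp_offset(folio->swap);
>>>>> 	int i;
>>>>>
>>>>> 	/* print the folio's swap offset followed by nr swap_map bytes */
>>>>> 	pr_info("vmscan: offset:%lx swp_map", offset);
>>>>> 	for (i = 0; i < nr; i++)
>>>>> 		pr_cont(" %02x", si->swap_map[offset + i]);
>>>>> 	pr_cont("\n");
>>>>> }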
>>>>>
>>>>> The below is the dumped info for a particular large folio,
>>>>>
>>>>> 1. after add_to_swap
>>>>> [   27.267357] vmscan: After add_to_swap shrink_folio_list 1947 mapnr:1
>>>>> [   27.267650] vmscan: offset:101b0 swp_map 40-40-40-40-40-40-40-40-40-40-40-40-40-40-40-40
>>>>>
>>>>> As you can see,
>>>>> _nr_pages_mapped is 1 and all 16 swap_map entries are SWAP_HAS_CACHE (0x40).
>>>>>
>>>>>
>>>>> 2. before and after try_to_unmap
>>>>> [   27.268067] vmscan: before try to unmap shrink_folio_list 1991 mapnr:1
>>>>> [   27.268372] try_to_unmap_one address:ffff731f0000 pte:e8000103cd0b43 pte_p:ffff0000c36a8f80
>>>>> [   27.268854] vmscan: after try to unmap shrink_folio_list 1997 mapnr:0
>>>>> [   27.269180] vmscan: offset:101b0 swp_map 41-40-40-40-40-40-40-40-40-40-40-40-40-40-40-40
>>>>>
>>>>> As you can see, one PTE is converted to a swap entry, and _nr_pages_mapped
>>>>> drops from 1 to 0. The 1st swap_map entry becomes 0x41, i.e. SWAP_HAS_CACHE
>>>>> plus a swap count of 1.
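>>>>>
>>>>> (For reference, assuming the definitions in include/linux/swap.h haven't
>>>>> moved, the swap_map byte encodes a count in the low bits plus these flags:
>>>>>
>>>>> #define SWAP_HAS_CACHE	0x40	/* Flag page is cached, in first swap_map */
>>>>> #define COUNT_CONTINUED	0x80	/* Flag swap_map continuation for full count */
>>>>>
>>>>> so 0x41 = SWAP_HAS_CACHE | 1, and 0x40 is SWAP_HAS_CACHE with count 0.)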
>>>>>
>>>>> 3. before and after pageout
>>>>> [   27.269602] vmscan: before pageout shrink_folio_list 2065 mapnr:0
>>>>> [   27.269880] vmscan: offset:101b0 swp_map 41-40-40-40-40-40-40-40-40-40-40-40-40-40-40-40
>>>>> [   27.270691] zram: zram_write_page page:fffffc00030f3400 index:101b0
>>>>> [   27.271061] zram: zram_write_page page:fffffc00030f3440 index:101b1
>>>>> [   27.271416] zram: zram_write_page page:fffffc00030f3480 index:101b2
>>>>> [   27.271751] zram: zram_write_page page:fffffc00030f34c0 index:101b3
>>>>> [   27.272046] zram: zram_write_page page:fffffc00030f3500 index:101b4
>>>>> [   27.272384] zram: zram_write_page page:fffffc00030f3540 index:101b5
>>>>> [   27.272746] zram: zram_write_page page:fffffc00030f3580 index:101b6
>>>>> [   27.273042] zram: zram_write_page page:fffffc00030f35c0 index:101b7
>>>>> [   27.273339] zram: zram_write_page page:fffffc00030f3600 index:101b8
>>>>> [   27.273676] zram: zram_write_page page:fffffc00030f3640 index:101b9
>>>>> [   27.274044] zram: zram_write_page page:fffffc00030f3680 index:101ba
>>>>> [   27.274554] zram: zram_write_page page:fffffc00030f36c0 index:101bb
>>>>> [   27.274870] zram: zram_write_page page:fffffc00030f3700 index:101bc
>>>>> [   27.275166] zram: zram_write_page page:fffffc00030f3740 index:101bd
>>>>> [   27.275463] zram: zram_write_page page:fffffc00030f3780 index:101be
>>>>> [   27.275760] zram: zram_write_page page:fffffc00030f37c0 index:101bf
>>>>> [   27.276102] vmscan: after pageout and before needs_release shrink_folio_list 2124 mapnr:0
>>>>>
>>>>> As you can see, we have done redundant I/O: 16 zram_write_page calls, even
>>>>> though the 4KiB~64KiB range was already zapped by zap_pte_range; we still
>>>>> write those 15 subpages to zRAM.
>>>>>
>>>>> 4. before and after remove_mapping
>>>>> [   27.276428] vmscan: offset:101b0 swp_map 41-40-40-40-40-40-40-40-40-40-40-40-40-40-40-40
>>>>> [   27.277485] vmscan: after remove_mapping shrink_folio_list 2169 mapnr:0 offset:0
>>>>> [   27.277802] vmscan: offset:101b0 01-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
>>>>>
>>>>> As you can see, swap_map entries 1-15 become 0 and only the first is 1;
>>>>> all SWAP_HAS_CACHE flags have been cleared. This is perfect, and there is
>>>>> no swap slot leak at all!
>>>>>
>>>>> Thus, only two concerns are left for me:
>>>>> 1. as we don't split, we do 15 unnecessary I/Os when a large folio is
>>>>> partially unmapped like this.
>>
>> So the cost of this is increased IO and swap storage, correct? Is this a big
>> problem in practice? i.e. do you see a lot of partially mapped large folios in
>> your workload? (I agree the proposed fix below is simple, so I think we should
>> do it anyway - I'm just interested in the scale of the problem).
>>
>>>>> 2. the large folio is added to the swapcache as a whole, covering a range
>>>>> that has been partially zapped. I am not quite sure whether this causes
>>>>> problems if concurrent do_anonymous_page, swap-in and swap-out occur
>>>>> between steps 3 and 4 on the zapped subpages 1~15. Still struggling... my
>>>>> brain is exploding...
>>
>> Yes, mine too. I would expect only the PTEs that map the folio to get replaced
>> with swap entries, so I would expect it to be safe. Although I understand the
>> concern about the extra swap consumption.
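>>
>> (Roughly speaking, for each PTE that actually maps the folio,
>> try_to_unmap_one does something along these lines - simplified from the anon
>> swap path in mm/rmap.c, details omitted:
>>
>> 	swp_entry_t entry = { .val = page_private(subpage) };
>> 	pte_t pteval, swp_pte;
>>
>> 	pteval = ptep_clear_flush(vma, address, pvmw.pte);
>> 	swap_duplicate(entry);
>> 	swp_pte = swp_entry_to_pte(entry);
>> 	if (pte_soft_dirty(pteval))
>> 		swp_pte = pte_swp_mksoft_dirty(swp_pte);
>> 	set_pte_at(mm, address, pvmw.pte, swp_pte);
>>
>> PTEs already zapped by MADV_DONTNEED are simply never visited, so no swap
>> entries get installed for them.)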
> 
> Yes, it should still be safe; just more I/O and more swap space. But those
> entries will be removed when remove_mapping happens, provided try_to_unmap_one
> leaves the folio fully unmapped.
> 
> But given the possibility that even mapped PTEs can be skipped by
> try_to_unmap_one (the reported intermediate-PTE issue: the PTL is held from
> the first valid PTE onwards, so some PTEs might be skipped by try_to_unmap
> without being set to swap entries), folio_mapped() may still be true after
> try_to_unmap_one, so we can't get to __remove_mapping() for a long time. But
> even that doesn't cause a crash.
> 
>>
>> [...]
>>>>>
>>>>> To me, it seems safer to split, or do some other similar optimization, if
>>>>> we find a large folio is partially mapped and partially unmapped.
>>>>
>>>> I'm hoping that we can avoid any new direct users of _nr_pages_mapped if
>>>> possible.
>>>>
>>>
>>> Is _nr_pages_mapped < nr_pages a reasonable condition for splitting, since
>>> we then know the folio has at least some subpages zapped?
>>
>> I'm not sure we need this - the folio's presence on the split list will tell
>> us everything we need to know, I think?
> 
> I agree; this was just a question to David, not my proposal. If deferred_list
> is sufficient, I prefer we use deferred_list.
> 
> I actually don't quite understand why David dislikes using _nr_pages_mapped.
> I do agree that _nr_pages_mapped cannot precisely reflect how a folio is
> mapped by multiple processes, but _nr_pages_mapped < nr_pages seems a safe
> way to tell that the folio is partially unmapped :-)
> 
>>
>>>
>>>> If we find that the folio is on the deferred split list, we might as
>>>> well just split it right away, before swapping it out. That might be a
>>>> reasonable optimization for the case you describe.
>>
>> Yes, agreed. I think there is still a chance of a race though; some other
>> thread could be munmapping in parallel. But in that case, I think we just end
>> up with the increased IO and swap storage? That's not the end of the world if
>> it's a corner case.
> 
> I agree. BTW, do we need to take the ds_queue->split_queue_lock spinlock for
> checking the list? deferred_split_folio() itself holds no spinlock while
> checking if (!list_empty(&folio->_deferred_list)) - but why not? Don't the
> read and the write need to be exclusive?

I don't think so. It's safe to check if the folio is on the queue like this; but
if it isn't then you need to recheck under the lock, as is done here. So for us,
I think we can also do this safely. It is certainly preferable to avoid taking
the lock.

The commit message for the original change says:

Before acquiring split_queue_lock, check and bail out early if the THP
head page is already in the queue. Checking without holding
split_queue_lock could race with deferred_split_scan, but that doesn't
impact correctness here.

> 
> void deferred_split_folio(struct folio *folio)
> {
>         ...
> 
>         if (!list_empty(&folio->_deferred_list))
>                 return;
> 
>         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>         if (list_empty(&folio->_deferred_list)) {
>                 count_vm_event(THP_DEFERRED_SPLIT_PAGE);
>                 list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
>                 ds_queue->split_queue_len++;
> #ifdef CONFIG_MEMCG
>                 if (memcg)
>                         set_shrinker_bit(memcg, folio_nid(folio),
>                                          deferred_split_shrinker->id);
> #endif
>         }
>         spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
> }
> 
>>
>>>
>>> I tried changing Ryan's code as below:
>>>
>>> @@ -1905,11 +1922,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>>>                                          * PMD map right away. Chances are some
>>>                                          * or all of the tail pages can be freed
>>>                                          * without IO.
>>> +                                        * Similarly, split PTE-mapped folios if
>>> +                                        * they have already been deferred_split.
>>>                                          */
>>> -                                       if (folio_test_pmd_mappable(folio) &&
>>> -                                           !folio_entire_mapcount(folio) &&
>>> -                                           split_folio_to_list(folio,
>>> -                                                               folio_list))
>>> +                                       if (((folio_test_pmd_mappable(folio) &&
>>> +                                             !folio_entire_mapcount(folio)) ||
>>> +                                            (!folio_test_pmd_mappable(folio) &&
>>> +                                             !list_empty(&folio->_deferred_list)))
>>
>> I'm not sure we need the different tests for pmd_mappable vs !pmd_mappable. I
>> think presence on the deferred list is a sufficient indicator that there are
>> unmapped subpages?
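>>
>> Something like this minimal sketch is what I have in mind (illustrative
>> only, not tested):
>>
>> 	/* Presence on the deferred split list implies partially unmapped */
>> 	if (!list_empty(&folio->_deferred_list) &&
>> 	    split_folio_to_list(folio, folio_list))
>> 		goto activate_locked;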
> 
> I don't think there is a fundamental difference between pmd and pte. I was
> testing pte-mapped folios at the time, so I kept the pmd behaviour as is.
> 
>>
>> I'll incorporate this into my next version.
> 
> Great!
> 
>>
>>> +                                           && split_folio_to_list(folio, folio_list))
>>>                                                 goto activate_locked;
>>>                                 }
>>>                                 if (!add_to_swap(folio)) {
>>>
>>> It seems to work as expected: only one I/O is left for a large folio with
>>> 16 PTEs of which 15 have been zapped before.
>>>
>>>>
>>>> --
>>>> Cheers,
>>>>
>>>> David / dhildenb
>>>>
>>>
> 
> Thanks
> Barry

