linux-mm.kvack.org archive mirror
From: "Huang, Ying" <ying.huang@intel.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	 David Hildenbrand <david@redhat.com>,
	 Matthew Wilcox <willy@infradead.org>,
	 Gao Xiang <xiang@kernel.org>,  Yu Zhao <yuzhao@google.com>,
	 Yang Shi <shy828301@gmail.com>,  Michal Hocko <mhocko@suse.com>,
	<linux-kernel@vger.kernel.org>,  <linux-mm@kvack.org>
Subject: Re: [RFC PATCH v1 2/2] mm: swap: Swap-out small-sized THP without splitting
Date: Tue, 17 Oct 2023 13:44:18 +0800	[thread overview]
Message-ID: <87a5shajm5.fsf@yhuang6-desk2.ccr.corp.intel.com> (raw)
In-Reply-To: <f9265349-3199-44b7-81b1-802c50e95713@arm.com> (Ryan Roberts's message of "Mon, 16 Oct 2023 13:10:21 +0100")

Ryan Roberts <ryan.roberts@arm.com> writes:

> On 16/10/2023 07:17, Huang, Ying wrote:
>> Ryan Roberts <ryan.roberts@arm.com> writes:
>> 
>>> On 11/10/2023 11:36, Ryan Roberts wrote:
>>>> On 11/10/2023 09:25, Huang, Ying wrote:
>>>>> Ryan Roberts <ryan.roberts@arm.com> writes:
>>>>>
>>>>>> The upcoming anonymous small-sized THP feature enables performance
>>>>>> improvements by allocating large folios for anonymous memory. However,
>>>>>> I've observed that on an arm64 system running a parallel workload (e.g.
>>>>>> kernel compilation) across many cores, under high memory pressure, the
>>>>>> speed regresses. This is due to bottlenecking on the increased number of
>>>>>> TLBIs caused by all the extra folio splitting.
>>>>>>
>>>>>> Therefore, solve this regression by adding support for swapping out
>>>>>> small-sized THP without needing to split the folio, just as is already
>>>>>> done for PMD-sized THP. This change only applies when CONFIG_THP_SWAP is
>>>>>> enabled, and when the swap backing store is a non-rotating block device
>>>>>> - these are the same constraints as for the existing PMD-sized THP
>>>>>> swap-out support.
>>>>>>
>>>>>> Note that no attempt is made to swap in THP here - this is still done
>>>>>> page-by-page, like for PMD-sized THP.
>>>>>>
>>>>>> The main change here is to improve the swap entry allocator so that it
>>>>>> can allocate any power-of-2 number of contiguous entries between [4, (1
>>>>>> << PMD_ORDER)]. This is done by allocating a cluster for each distinct
>>>>>> order and allocating sequentially from it until the cluster is full.
>>>>>> This ensures that we don't need to search the map and we get no
>>>>>> fragmentation due to alignment padding for different orders in the
>>>>>> cluster. If there is no current cluster for a given order, we attempt to
>>>>>> allocate a free cluster from the list. If there are no free clusters, we
>>>>>> fail the allocation, and the caller falls back to splitting the folio and
>>>>>> allocating individual entries (as per the existing PMD-sized THP fallback).
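
For illustration, here is a minimal user-space model of the scheme
described above. This is a sketch only: the constants assume 4KB pages
with CONFIG_THP_SWAP (so a cluster is HPAGE_PMD_NR = 512 entries), and
the real patch manages clusters inside the swap map under si->lock.

	#include <limits.h>
	#include <stdio.h>

	#define CLUSTER_PAGES	512	/* SWAPFILE_CLUSTER == HPAGE_PMD_NR here */
	#define PMD_ORDER	9	/* 2MB PMD with 4KB pages */

	static unsigned int free_clusters[] = { 0, 1, 2, 3 };
	static int nr_free = 4;
	/* Next free offset per order, UINT_MAX if no current cluster;
	 * index is (order - 1), exactly as in the patch. */
	static unsigned int large_next[PMD_ORDER];

	/* Allocate 2^order contiguous entries; returns the first offset,
	 * or UINT_MAX on failure (the caller would then split the folio). */
	static unsigned int alloc_large(int order)
	{
		unsigned int nr = 1u << order;
		unsigned int off = large_next[order - 1];

		if (off == UINT_MAX) {
			/* No current cluster for this order: take a free one. */
			if (nr_free == 0)
				return UINT_MAX;
			off = free_clusters[--nr_free] * CLUSTER_PAGES;
		}
		/* Allocate sequentially and retire the cluster once full;
		 * power-of-2 sizes keep same-order allocations aligned, so
		 * no padding is needed within a cluster. */
		large_next[order - 1] =
			((off + nr) % CLUSTER_PAGES == 0) ? UINT_MAX : off + nr;
		return off;
	}

	int main(void)
	{
		for (int i = 0; i < PMD_ORDER; i++)
			large_next[i] = UINT_MAX;

		printf("order-2: %u\n", alloc_large(2));	/* 1536: new cluster */
		printf("order-2: %u\n", alloc_large(2));	/* 1540: contiguous */
		printf("order-4: %u\n", alloc_large(4));	/* 1024: own cluster */
		return 0;
	}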
>>>>>>
>>>>>> As far as I can tell, this should not cause any extra fragmentation
>>>>>> concerns, given how similar it is to the existing PMD-sized THP
>>>>>> allocation mechanism. There will be up to (PMD_ORDER-1) clusters in
>>>>>> concurrent use, though. In practice, the number of orders in use will be
>>>>>> small.
>>>>>>
>>>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>>>> ---
>>>>>>  include/linux/swap.h |  7 ++++++
>>>>>>  mm/swapfile.c        | 60 +++++++++++++++++++++++++++++++++-----------
>>>>>>  mm/vmscan.c          | 10 +++++---
>>>>>>  3 files changed, 59 insertions(+), 18 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>>>>> index a073366a227c..fc55b760aeff 100644
>>>>>> --- a/include/linux/swap.h
>>>>>> +++ b/include/linux/swap.h
>>>>>> @@ -320,6 +320,13 @@ struct swap_info_struct {
>>>>>>  					 */
>>>>>>  	struct work_struct discard_work; /* discard worker */
>>>>>>  	struct swap_cluster_list discard_clusters; /* discard clusters list */
>>>>>> +	unsigned int large_next[PMD_ORDER]; /*
>>>>>> +					     * next free offset within current
>>>>>> +					     * allocation cluster for large
>>>>>> +					     * folios, or UINT_MAX if no current
>>>>>> +					     * cluster. Index is (order - 1).
>>>>>> +					     * Only when cluster_info is used.
>>>>>> +					     */
>>>>>
>>>>> I think that it is better to make this per-CPU.  That is, extend the
>>>>> percpu_cluster mechanism.  Otherwise, we may have a scalability issue.
>>>>
>>>> Is your concern that the swap_info spinlock will get too contended as it's
>>>> currently written? From briefly looking at percpu_cluster, it looks like that
>>>> spinlock is always held when accessing the per-cpu structures - presumably
>>>> that's what's disabling preemption and making sure the thread is not migrated?
>>>> So I'm not sure what the benefit is currently? Surely you want to just disable
>>>> preemption but not hold the lock? I'm sure I've missed something crucial...
>>>
>>> I looked a bit further at how to implement what you are suggesting.
>>> get_swap_pages() is currently taking the swap_info lock which it needs to check
>>> and update some other parts of the swap_info - I'm not sure that part can be
>>> removed. swap_alloc_large() (my new function) is not doing an awful lot of work,
>>> so I'm not convinced that you would save too much by releasing the lock for that
>>> part. In contrast there is a lot more going on in scan_swap_map_slots() so there
>>> is more benefit to releasing the lock and using the percpu stuff - correct me if
>>> I've misunderstood.
>>>
>>> As an alternative approach, perhaps it makes more sense to beef up the caching
>>> layer in swap_slots.c to handle large folios too? Then you would avoid taking
>>> the swap_info lock most of the time, as is currently the case for single-entry
>>> allocations.
>>>
>>> What do you think?
>> 
>> Sorry for the late reply.
>> 
>> percpu_cluster was introduced in commit ebc2a1a69111 ("swap: make cluster
>> allocation per-cpu").  Please check that changelog for why it was
>> introduced.  Sorry about my incorrect memory regarding scalability:
>> percpu_cluster was introduced mainly for disk performance, not for
>> scalability.
>
> Thanks for the pointer. I'm not sure if you are still suggesting that I make my
> small-sized THP allocation mechanism per-cpu though?

Yes.  I think that the reason why we introduced percpu_cluster still
applies now.

> I anticipate that by virtue of allocating multiple contiguous swap entries for a
> small-sized THP we already get a lot of the benefits that percpu_cluster gives
> order-0 allocations. (Although obviously it will only give contiguity matching
> the size of the THP rather than a full cluster).

I think that you will still introduce "interleaved disk access" when
multiple CPUs allocate from and write to the swap device simultaneously,
right?  Yes, a 16KB block is better than a 4KB block, but I don't think
it solves the problem.
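
Concretely, something along the following lines is what I have in mind.
This is a hypothetical sketch, not tested code: the first two fields are
from today's struct percpu_cluster, only large_next[] is new.

	struct percpu_cluster {
		struct swap_cluster_info index;	/* current order-0 cluster */
		unsigned int next;		/* likely next order-0 offset */
		unsigned int large_next[PMD_ORDER];
						/* next offset in this CPU's
						 * current cluster for each
						 * large order, or UINT_MAX if
						 * none; index is (order - 1),
						 * as in your patch. */
	};

With one instance per CPU, each CPU fills its current cluster for a
given order sequentially, so the writes from different CPUs are not
interleaved within a cluster on disk.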

> The downside of making this percpu would be keeping more free clusters
> tied up in the percpu caches, potentially causing a need to scan for
> free entries more often.

Yes.  We may waste several MB of swap space per CPU.  Is that a
practical issue, though, given that swap device capacity keeps growing?
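
For a rough sense of scale (my arithmetic, assuming 4KB pages with
CONFIG_THP_SWAP): a cluster is HPAGE_PMD_NR = 512 entries, that is 2MB
of swap, and per your changelog at most (PMD_ORDER-1) = 8 orders can
each hold a cluster, so the worst case is about 16MB pinned per CPU,
e.g. about 1GB on a 64-CPU machine.  That seems tolerable as swap
devices grow into the tens or hundreds of GB.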

--
Best Regards,
Huang, Ying


Thread overview: 17+ messages
2023-10-10 14:21 [RFC PATCH v1 0/2] Swap-out small-sized THP without splitting Ryan Roberts
2023-10-10 14:21 ` [RFC PATCH v1 1/2] mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags Ryan Roberts
2023-10-11  7:43   ` Huang, Ying
2023-10-11  8:17   ` Kefeng Wang
2023-10-11 10:15     ` Ryan Roberts
2023-10-11 10:16     ` Ryan Roberts
2023-10-10 14:21 ` [RFC PATCH v1 2/2] mm: swap: Swap-out small-sized THP without splitting Ryan Roberts
2023-10-11  7:44   ` Ryan Roberts
2023-10-11  8:25   ` Huang, Ying
2023-10-11 10:36     ` Ryan Roberts
2023-10-11 17:14       ` Ryan Roberts
2023-10-16  6:17         ` Huang, Ying
2023-10-16 12:10           ` Ryan Roberts
2023-10-17  5:44             ` Huang, Ying [this message]
2023-10-11  6:37 ` [RFC PATCH v1 0/2] " Huang, Ying
2023-10-11  7:42   ` Ryan Roberts
2023-10-13 16:31   ` Ryan Roberts
