From: Qi Zheng <zhengqi.arch@bytedance.com>
To: Anshuman Khandual <anshuman.khandual@arm.com>,
	mike.kravetz@oracle.com, songmuchun@bytedance.com,
	akpm@linux-foundation.org, catalin.marinas@arm.com,
	will@kernel.org
Cc: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] mm: hugetlb: kill set_huge_swap_pte_at()
Date: Mon, 27 Jun 2022 14:55:50 +0800	[thread overview]
Message-ID: <037fc8c3-9b71-cb83-8882-95d5459a494f@bytedance.com> (raw)
In-Reply-To: <f0cfe169-44fa-5653-d454-149ef286d3bb@arm.com>



On 2022/6/27 14:18, Anshuman Khandual wrote:
> 
> 
> On 6/26/22 20:27, Qi Zheng wrote:
>> Commit e5251fd43007 ("mm/hugetlb: introduce set_huge_swap_pte_at()
>> helper") added set_huge_swap_pte_at() to handle swap entries on
>> architectures that support hugepages consisting of contiguous ptes.
>> Currently, set_huge_swap_pte_at() is only overridden by arm64.
>>
>> set_huge_swap_pte_at() provides a sz parameter to help determine
>> the number of entries to be updated. But in fact, all hugetlb swap
>> entries contain pfn information, so we can find the corresponding
>> folio through the pfn recorded in the swap entry, and then
>> folio_size() determines the number of entries that need to be updated.
>>
>> And considering that users can easily introduce bugs by ignoring the
>> difference between set_huge_swap_pte_at() and set_huge_pte_at(),
>> let's handle swap entries in set_huge_pte_at() and remove
>> set_huge_swap_pte_at(). Then we can call set_huge_pte_at()
>> anywhere, which simplifies the code.
>>
>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>> ---
>>   arch/arm64/include/asm/hugetlb.h |  3 ---
>>   arch/arm64/mm/hugetlbpage.c      | 34 ++++++++++++++++----------------
>>   include/linux/hugetlb.h          | 13 ------------
>>   mm/hugetlb.c                     |  8 +++-----
>>   mm/rmap.c                        | 11 +++--------
>>   5 files changed, 23 insertions(+), 46 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
>> index 1fd2846dbefe..d20f5da2d76f 100644
>> --- a/arch/arm64/include/asm/hugetlb.h
>> +++ b/arch/arm64/include/asm/hugetlb.h
>> @@ -46,9 +46,6 @@ extern void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
>>   			   pte_t *ptep, unsigned long sz);
>>   #define __HAVE_ARCH_HUGE_PTEP_GET
>>   extern pte_t huge_ptep_get(pte_t *ptep);
>> -extern void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr,
>> -				 pte_t *ptep, pte_t pte, unsigned long sz);
>> -#define set_huge_swap_pte_at set_huge_swap_pte_at
>>   
>>   void __init arm64_hugetlb_cma_reserve(void);
>>   
>> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
>> index c9e076683e5d..58b89b9d13e0 100644
>> --- a/arch/arm64/mm/hugetlbpage.c
>> +++ b/arch/arm64/mm/hugetlbpage.c
>> @@ -238,6 +238,13 @@ static void clear_flush(struct mm_struct *mm,
>>   	flush_tlb_range(&vma, saddr, addr);
>>   }
>>   
>> +static inline struct folio *hugetlb_swap_entry_to_folio(swp_entry_t entry)
>> +{
>> +	VM_BUG_ON(!is_migration_entry(entry) && !is_hwpoison_entry(entry));
>> +
>> +	return page_folio(pfn_to_page(swp_offset(entry)));
>> +}
> 
> Extracting the huge page size from the swap entry is an additional operation which
> will increase the overall cost of set_huge_swap_pte_at(). At present the size

Hmm, I think this cost is very small. And replacing
set_huge_swap_pte_at() with transparent handling of swap entries in
set_huge_pte_at() helps reduce possible bugs, which is worthwhile.

> value is readily available near set_huge_swap_pte_at() call sites.

-- 
Thanks,
Qi


Thread overview:
2022-06-26 14:57 [PATCH] mm: hugetlb: kill set_huge_swap_pte_at() Qi Zheng
2022-06-27  3:32 ` Muchun Song
2022-06-27  6:18 ` Anshuman Khandual
2022-06-27  6:55   ` Qi Zheng [this message]
2022-06-27  7:14     ` Anshuman Khandual
2022-06-27  7:29       ` Qi Zheng
2022-06-27  7:35         ` Anshuman Khandual
2022-06-27 14:34           ` Matthew Wilcox
2022-06-28  5:47             ` Anshuman Khandual
2022-06-27  7:44       ` Muchun Song
2022-06-27  8:27         ` Anshuman Khandual
2022-06-27 14:41 ` Matthew Wilcox
2022-06-28  3:34   ` Qi Zheng
2023-09-21 16:25 ` Ryan Roberts
