linux-kernel.vger.kernel.org archive mirror
From: Sidhartha Kumar <sidhartha.kumar@oracle.com>
To: Mike Kravetz <mike.kravetz@oracle.com>,
	Muchun Song <muchun.song@linux.dev>
Cc: linux-kernel@vger.kernel.org,
	Linux Memory Management List <linux-mm@kvack.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Muchun Song <songmuchun@bytedance.com>,
	Matthew Wilcox <willy@infradead.org>,
	Mina Almasry <almasrymina@google.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	hughd@google.com, tsahu@linux.ibm.com, jhubbard@nvidia.com,
	David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH mm-unstable v5 01/10] mm: add folio dtor and order setter functions
Date: Wed, 7 Dec 2022 10:49:21 -0800	[thread overview]
Message-ID: <b995eccf-5818-84ee-560e-20c00f9936b4@oracle.com> (raw)
In-Reply-To: <Y5DXivFOA+bO0IYQ@monkey>

On 12/7/22 10:12 AM, Mike Kravetz wrote:
> On 12/07/22 12:11, Muchun Song wrote:
>>
>>
>>> On Dec 7, 2022, at 11:42, Mike Kravetz <mike.kravetz@oracle.com> wrote:
>>>
>>> On 12/07/22 11:34, Muchun Song wrote:
>>>>
>>>>
>>>>> On Nov 30, 2022, at 06:50, Sidhartha Kumar <sidhartha.kumar@oracle.com> wrote:
>>>>>
>>>>> Add folio equivalents for set_compound_order() and set_compound_page_dtor().
>>>>>
>>>>> Also remove extra new-lines introduced by mm/hugetlb: convert
>>>>> move_hugetlb_state() to folios and mm/hugetlb_cgroup: convert
>>>>> hugetlb_cgroup_uncharge_page() to folios.
>>>>>
>>>>> Suggested-by: Mike Kravetz <mike.kravetz@oracle.com>
>>>>> Suggested-by: Muchun Song <songmuchun@bytedance.com>
>>>>> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
>>>>> ---
>>>>> include/linux/mm.h | 16 ++++++++++++++++
>>>>> mm/hugetlb.c       |  4 +---
>>>>> 2 files changed, 17 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>>>> index a48c5ad16a5e..2bdef8a5298a 100644
>>>>> --- a/include/linux/mm.h
>>>>> +++ b/include/linux/mm.h
>>>>> @@ -972,6 +972,13 @@ static inline void set_compound_page_dtor(struct page *page,
>>>>> page[1].compound_dtor = compound_dtor;
>>>>> }
>>>>>
>>>>> +static inline void folio_set_compound_dtor(struct folio *folio,
>>>>> + enum compound_dtor_id compound_dtor)
>>>>> +{
>>>>> + VM_BUG_ON_FOLIO(compound_dtor >= NR_COMPOUND_DTORS, folio);
>>>>> + folio->_folio_dtor = compound_dtor;
>>>>> +}
>>>>> +
>>>>> void destroy_large_folio(struct folio *folio);
>>>>>
>>>>> static inline int head_compound_pincount(struct page *head)
>>>>> @@ -987,6 +994,15 @@ static inline void set_compound_order(struct page *page, unsigned int order)
>>>>> #endif
>>>>> }
>>>>>
>>>>> +static inline void folio_set_compound_order(struct folio *folio,
>>>>> + unsigned int order)
>>>>> +{
>>>>> + folio->_folio_order = order;
>>>>> +#ifdef CONFIG_64BIT
>>>>> + folio->_folio_nr_pages = order ? 1U << order : 0;
>>>>
>>>> It seems that you think the user could pass 0 to order. However,
>>>> ->_folio_nr_pages and ->_folio_order fields are invalid for order-0 pages.
>>>> You should not touch it. So this should be:
>>>>
>>>> static inline void folio_set_compound_order(struct folio *folio,
>>>>      unsigned int order)
>>>> {
>>>> 	if (!folio_test_large(folio))
>>>> 		return;
>>>>
>>>> 	folio->_folio_order = order;
>>>> #ifdef CONFIG_64BIT
>>>> 	folio->_folio_nr_pages = 1U << order;
>>>> #endif
>>>> }
>>>
>>> I believe this was changed to accommodate the code in
>>> __destroy_compound_gigantic_page().  It is used in a subsequent patch.
>>> Here is the v6.0 version of the routine.
>>
>> Thanks for your clarification.
>>
>>>
>>> static void __destroy_compound_gigantic_page(struct page *page,
>>> unsigned int order, bool demote)
>>> {
>>> 	int i;
>>> 	int nr_pages = 1 << order;
>>> 	struct page *p = page + 1;
>>>
>>> 	atomic_set(compound_mapcount_ptr(page), 0);
>>> 	atomic_set(compound_pincount_ptr(page), 0);
>>>
>>> 	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
>>> 		p->mapping = NULL;
>>> 		clear_compound_head(p);
>>> 		if (!demote)
>>> 			set_page_refcounted(p);
>>> 	}
>>>
>>> 	set_compound_order(page, 0);
>>> #ifdef CONFIG_64BIT
>>> 	page[1].compound_nr = 0;
>>> #endif
>>> 	__ClearPageHead(page);
>>> }
>>>
>>>
>>> Might have been better to change this set_compound_order call to
>>> folio_set_compound_order in this patch.
>>>
>>
>> Agree. It has confused me a lot. I suggest changing the code to the
>> followings. The folio_test_large() check is still to avoid unexpected
>> users for OOB.
>>
>> static inline void folio_set_compound_order(struct folio *folio,
>> 					    unsigned int order)
>> {
>> 	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>> 	// or
>> 	// if (!folio_test_large(folio))
>> 	// 	return;
>>
>> 	folio->_folio_order = order;
>> #ifdef CONFIG_64BIT
>> 	folio->_folio_nr_pages = order ? 1U << order : 0;
>> #endif
>> }
> 
> I think the VM_BUG_ON_FOLIO is appropriate as it would at least flag
> data corruption.
> 
As Mike pointed out, my intention in supporting the 0 case was to clean up 
the __destroy_compound_gigantic_page() code by moving the #ifdef 
CONFIG_64BIT lines into folio_set_compound_order(). I'll add the 
VM_BUG_ON_FOLIO() line as well as a comment making it clear that order 0 
is not normally supported.

> Thinking about this some more, it seems that hugetlb is the only caller
> that abuses folio_set_compound_order (and previously set_compound_order)
> by passing in a zero order.  Since it is unlikely that anyone knows of
> this abuse, it might be good to add a comment to the routine to note
> why it handles the zero case.  This might help prevent changes which
> would potentially break hugetlb.

+/*
+ * _folio_nr_pages and _folio_order are invalid for
+ * order-zero pages. An exception is hugetlb, which passes
+ * in a zero order in __destroy_compound_gigantic_page().
+ */
  static inline void folio_set_compound_order(struct folio *folio,
                 unsigned int order)
  {
+       VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
+
         folio->_folio_order = order;
  #ifdef CONFIG_64BIT
         folio->_folio_nr_pages = order ? 1U << order : 0;
  #endif
  }

Does this comment work?



Thread overview: 33+ messages
2022-11-29 22:50 [PATCH mm-unstable v5 00/10] convert core hugetlb functions to folios Sidhartha Kumar
2022-11-29 22:50 ` [PATCH mm-unstable v5 01/10] mm: add folio dtor and order setter functions Sidhartha Kumar
2022-12-07  0:18   ` Mike Kravetz
2022-12-07  3:34   ` Muchun Song
2022-12-07  3:42     ` Mike Kravetz
2022-12-07  4:11       ` Muchun Song
2022-12-07 18:12         ` Mike Kravetz
2022-12-07 18:49           ` Sidhartha Kumar [this message]
2022-12-07 19:05             ` Sidhartha Kumar
2022-12-07 19:25               ` Mike Kravetz
2022-12-08  2:19                 ` Muchun Song
2022-12-08  2:31                   ` John Hubbard
2022-12-08  4:44                     ` Muchun Song
2022-12-12 18:34         ` David Hildenbrand
2022-12-12 18:50           ` Sidhartha Kumar
2022-11-29 22:50 ` [PATCH mm-unstable v5 02/10] mm/hugetlb: convert destroy_compound_gigantic_page() to folios Sidhartha Kumar
2022-12-07  0:32   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 03/10] mm/hugetlb: convert dissolve_free_huge_page() " Sidhartha Kumar
2022-12-07  0:52   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 04/10] mm/hugetlb: convert remove_hugetlb_page() " Sidhartha Kumar
2022-12-07  1:43   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 05/10] mm/hugetlb: convert update_and_free_page() " Sidhartha Kumar
2022-12-07  2:02   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 06/10] mm/hugetlb: convert add_hugetlb_page() to folios and add hugetlb_cma_folio() Sidhartha Kumar
2022-12-07 18:38   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 07/10] mm/hugetlb: convert enqueue_huge_page() to folios Sidhartha Kumar
2022-12-07 18:46   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 08/10] mm/hugetlb: convert free_gigantic_page() " Sidhartha Kumar
2022-12-07 19:04   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 09/10] mm/hugetlb: convert hugetlb prep functions " Sidhartha Kumar
2022-12-07 19:35   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 10/10] mm/hugetlb: change hugetlb allocation functions to return a folio Sidhartha Kumar
2022-12-07 22:01   ` Mike Kravetz
