* [PATCH] secretmem: optimize page_is_secretmem()
@ 2021-04-19  8:42 Mike Rapoport
  2021-04-19  9:15 ` David Hildenbrand
  2021-04-19 11:23 ` Matthew Wilcox
  0 siblings, 2 replies; 12+ messages in thread
From: Mike Rapoport @ 2021-04-19  8:42 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, Yury Norov, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	kernel test robot

From: Mike Rapoport <rppt@linux.ibm.com>

Kernel test robot reported -4.2% regression of will-it-scale.per_thread_ops
due to commit "mm: introduce memfd_secret system call to create "secret"
memory areas".

The perf profile of the test indicated that the regression is caused by
page_is_secretmem() called from gup_pte_range() (inlined by gup_pgd_range):

 27.76  +2.5  30.23       perf-profile.children.cycles-pp.gup_pgd_range
  0.00  +3.2   3.19 ± 2%  perf-profile.children.cycles-pp.page_mapping
  0.00  +3.7   3.66 ± 2%  perf-profile.children.cycles-pp.page_is_secretmem

Further analysis showed that the slowdown happens because neither
page_is_secretmem() nor page_mapping() is inline and, moreover,
multiple page flags checks in page_mapping() involve calling
compound_head() several times for the same page.

Make page_is_secretmem() inline and replace page_mapping() with page flag
checks that do not imply page-to-head conversion.

Reported-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---

@Andrew,
The patch is vs v5.12-rc7-mmots-2021-04-15-16-28, I'd appreciate if it would
be added as a fixup to the memfd_secret series.

 include/linux/secretmem.h | 26 +++++++++++++++++++++++++-
 mm/secretmem.c            | 12 +-----------
 2 files changed, 26 insertions(+), 12 deletions(-)

diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index 907a6734059c..b842b38cbeb1 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -4,8 +4,32 @@
 
 #ifdef CONFIG_SECRETMEM
 
+extern const struct address_space_operations secretmem_aops;
+
+static inline bool page_is_secretmem(struct page *page)
+{
+	struct address_space *mapping;
+
+	/*
+	 * Using page_mapping() is quite slow because of the actual call
+	 * instruction and repeated compound_head(page) inside the
+	 * page_mapping() function.
+	 * We know that secretmem pages are not compound and are on the
+	 * LRU, so we can save a couple of cycles here.
+	 */
+	if (PageCompound(page) || !PageLRU(page))
+		return false;
+
+	mapping = (struct address_space *)
+		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
+
+	if (mapping != page->mapping)
+		return false;
+
+	return page->mapping->a_ops == &secretmem_aops;
+}
+
 bool vma_is_secretmem(struct vm_area_struct *vma);
-bool page_is_secretmem(struct page *page);
 bool secretmem_active(void);
 
 #else
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 3b1ba3991964..0bcd15e1b549 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -151,22 +151,12 @@ static void secretmem_freepage(struct page *page)
 	clear_highpage(page);
 }
 
-static const struct address_space_operations secretmem_aops = {
+const struct address_space_operations secretmem_aops = {
 	.freepage	= secretmem_freepage,
 	.migratepage	= secretmem_migratepage,
 	.isolate_page	= secretmem_isolate_page,
 };
 
-bool page_is_secretmem(struct page *page)
-{
-	struct address_space *mapping = page_mapping(page);
-
-	if (!mapping)
-		return false;
-
-	return mapping->a_ops == &secretmem_aops;
-}
-
 static struct vfsmount *secretmem_mnt;
 
 static struct file *secretmem_file_create(unsigned long flags)
-- 
2.28.0



* Re: [PATCH] secretmem: optimize page_is_secretmem()
  2021-04-19  8:42 [PATCH] secretmem: optimize page_is_secretmem() Mike Rapoport
@ 2021-04-19  9:15 ` David Hildenbrand
  2021-04-19  9:36   ` Mike Rapoport
  2021-04-19 11:23 ` Matthew Wilcox
  1 sibling, 1 reply; 12+ messages in thread
From: David Hildenbrand @ 2021-04-19  9:15 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Ingo Molnar, James Bottomley,
	Kirill A. Shutemov, Matthew Wilcox, Matthew Garrett,
	Mark Rutland, Michal Hocko, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rafael J. Wysocki,
	Rick Edgecombe, Roman Gushchin, Shakeel Butt, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, Yury Norov,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86,
	kernel test robot

On 19.04.21 10:42, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> Kernel test robot reported -4.2% regression of will-it-scale.per_thread_ops
> due to commit "mm: introduce memfd_secret system call to create "secret"
> memory areas".
> 
> The perf profile of the test indicated that the regression is caused by
> page_is_secretmem() called from gup_pte_range() (inlined by gup_pgd_range):
> 
>   27.76  +2.5  30.23       perf-profile.children.cycles-pp.gup_pgd_range
>    0.00  +3.2   3.19 ± 2%  perf-profile.children.cycles-pp.page_mapping
>    0.00  +3.7   3.66 ± 2%  perf-profile.children.cycles-pp.page_is_secretmem
> 
> Further analysis showed that the slowdown happens because neither
> page_is_secretmem() nor page_mapping() is inline and, moreover,
> multiple page flags checks in page_mapping() involve calling
> compound_head() several times for the same page.
> 
> Make page_is_secretmem() inline and replace page_mapping() with page flag
> checks that do not imply page-to-head conversion.
> 
> Reported-by: kernel test robot <oliver.sang@intel.com>
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
> 
> @Andrew,
> The patch is vs v5.12-rc7-mmots-2021-04-15-16-28, I'd appreciate if it would
> be added as a fixup to the memfd_secret series.
> 
>   include/linux/secretmem.h | 26 +++++++++++++++++++++++++-
>   mm/secretmem.c            | 12 +-----------
>   2 files changed, 26 insertions(+), 12 deletions(-)
> 
> diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
> index 907a6734059c..b842b38cbeb1 100644
> --- a/include/linux/secretmem.h
> +++ b/include/linux/secretmem.h
> @@ -4,8 +4,32 @@
>   
>   #ifdef CONFIG_SECRETMEM
>   
> +extern const struct address_space_operations secretmem_aops;
> +
> +static inline bool page_is_secretmem(struct page *page)
> +{
> +	struct address_space *mapping;
> +
> +	/*
> +	 * Using page_mapping() is quite slow because of the actual call
> +	 * instruction and repeated compound_head(page) inside the
> +	 * page_mapping() function.
> +	 * We know that secretmem pages are not compound and are on the
> +	 * LRU, so we can save a couple of cycles here.
> +	 */
> +	if (PageCompound(page) || !PageLRU(page))
> +		return false;

I'd assume secretmem pages are rare in basically every setup out there. 
So maybe throwing in a couple of likely()/unlikely() might make sense.

> +
> +	mapping = (struct address_space *)
> +		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
> +

Not sure if open-coding page_mapping is really a good idea here -- or 
even necessary after the fast path above is in place. Anyhow, just my 2 
cents.

The idea of the patch makes sense to me.

-- 
Thanks,

David / dhildenb



* Re: [PATCH] secretmem: optimize page_is_secretmem()
  2021-04-19  9:15 ` David Hildenbrand
@ 2021-04-19  9:36   ` Mike Rapoport
  2021-04-19  9:38     ` David Hildenbrand
  2021-04-19 11:43     ` Matthew Wilcox
  0 siblings, 2 replies; 12+ messages in thread
From: Mike Rapoport @ 2021-04-19  9:36 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Yury Norov, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
	linux-nvdimm, linux-riscv, x86, kernel test robot

On Mon, Apr 19, 2021 at 11:15:02AM +0200, David Hildenbrand wrote:
> On 19.04.21 10:42, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
> > 
> > Kernel test robot reported -4.2% regression of will-it-scale.per_thread_ops
> > due to commit "mm: introduce memfd_secret system call to create "secret"
> > memory areas".
> > 
> > The perf profile of the test indicated that the regression is caused by
> > page_is_secretmem() called from gup_pte_range() (inlined by gup_pgd_range):
> > 
> >   27.76  +2.5  30.23       perf-profile.children.cycles-pp.gup_pgd_range
> >    0.00  +3.2   3.19 ± 2%  perf-profile.children.cycles-pp.page_mapping
> >    0.00  +3.7   3.66 ± 2%  perf-profile.children.cycles-pp.page_is_secretmem
> > 
> > Further analysis showed that the slowdown happens because neither
> > page_is_secretmem() nor page_mapping() is inline and, moreover,
> > multiple page flags checks in page_mapping() involve calling
> > compound_head() several times for the same page.
> > 
> > Make page_is_secretmem() inline and replace page_mapping() with page flag
> > checks that do not imply page-to-head conversion.
> > 
> > Reported-by: kernel test robot <oliver.sang@intel.com>
> > Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> > ---
> > 
> > @Andrew,
> > The patch is vs v5.12-rc7-mmots-2021-04-15-16-28, I'd appreciate if it would
> > be added as a fixup to the memfd_secret series.
> > 
> >   include/linux/secretmem.h | 26 +++++++++++++++++++++++++-
> >   mm/secretmem.c            | 12 +-----------
> >   2 files changed, 26 insertions(+), 12 deletions(-)
> > 
> > diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
> > index 907a6734059c..b842b38cbeb1 100644
> > --- a/include/linux/secretmem.h
> > +++ b/include/linux/secretmem.h
> > @@ -4,8 +4,32 @@
> >   #ifdef CONFIG_SECRETMEM
> > +extern const struct address_space_operations secretmem_aops;
> > +
> > +static inline bool page_is_secretmem(struct page *page)
> > +{
> > +	struct address_space *mapping;
> > +
> > +	/*
> > +	 * Using page_mapping() is quite slow because of the actual call
> > +	 * instruction and repeated compound_head(page) inside the
> > +	 * page_mapping() function.
> > +	 * We know that secretmem pages are not compound and are on the
> > +	 * LRU, so we can save a couple of cycles here.
> > +	 */
> > +	if (PageCompound(page) || !PageLRU(page))
> > +		return false;
> 
> I'd assume secretmem pages are rare in basically every setup out there. So
> maybe throwing in a couple of likely()/unlikely() might make sense.

I'd say we could do unlikely(page_is_secretmem()) at call sites. Here I can
hardly estimate which pages are going to be checked.
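
For instance, a minimal sketch of what that could look like at the
gup_pte_range() call site (illustrative only, not part of this patch):

		/* secretmem pages should be rare; let the compiler know */
		if (unlikely(page_is_secretmem(page)))
			goto pte_unmap;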
 
> > +
> > +	mapping = (struct address_space *)
> > +		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
> > +
> 
> Not sure if open-coding page_mapping is really a good idea here -- or even
> necessary after the fast path above is in place. Anyhow, just my 2 cents.

Well, most of the -4.2% performance regression kbuild reported was
due to repeated compound_head(page) in page_mapping(). So the whole point
of this patch is to avoid calling page_mapping().

> The idea of the patch makes sense to me.

-- 
Sincerely yours,
Mike.


* Re: [PATCH] secretmem: optimize page_is_secretmem()
  2021-04-19  9:36   ` Mike Rapoport
@ 2021-04-19  9:38     ` David Hildenbrand
  2021-04-19  9:40       ` David Hildenbrand
  2021-04-19 11:43     ` Matthew Wilcox
  1 sibling, 1 reply; 12+ messages in thread
From: David Hildenbrand @ 2021-04-19  9:38 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Yury Norov, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
	linux-nvdimm, linux-riscv, x86, kernel test robot

On 19.04.21 11:36, Mike Rapoport wrote:
> On Mon, Apr 19, 2021 at 11:15:02AM +0200, David Hildenbrand wrote:
>> On 19.04.21 10:42, Mike Rapoport wrote:
>>> From: Mike Rapoport <rppt@linux.ibm.com>
>>>
>>> Kernel test robot reported -4.2% regression of will-it-scale.per_thread_ops
>>> due to commit "mm: introduce memfd_secret system call to create "secret"
>>> memory areas".
>>>
>>> The perf profile of the test indicated that the regression is caused by
>>> page_is_secretmem() called from gup_pte_range() (inlined by gup_pgd_range):
>>>
>>>    27.76  +2.5  30.23       perf-profile.children.cycles-pp.gup_pgd_range
>>>     0.00  +3.2   3.19 ± 2%  perf-profile.children.cycles-pp.page_mapping
>>>     0.00  +3.7   3.66 ± 2%  perf-profile.children.cycles-pp.page_is_secretmem
>>>
>>> Further analysis showed that the slowdown happens because neither
>>> page_is_secretmem() nor page_mapping() is inline and, moreover,
>>> multiple page flags checks in page_mapping() involve calling
>>> compound_head() several times for the same page.
>>>
>>> Make page_is_secretmem() inline and replace page_mapping() with page flag
>>> checks that do not imply page-to-head conversion.
>>>
>>> Reported-by: kernel test robot <oliver.sang@intel.com>
>>> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
>>> ---
>>>
>>> @Andrew,
>>> The patch is vs v5.12-rc7-mmots-2021-04-15-16-28, I'd appreciate if it would
>>> be added as a fixup to the memfd_secret series.
>>>
>>>    include/linux/secretmem.h | 26 +++++++++++++++++++++++++-
>>>    mm/secretmem.c            | 12 +-----------
>>>    2 files changed, 26 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
>>> index 907a6734059c..b842b38cbeb1 100644
>>> --- a/include/linux/secretmem.h
>>> +++ b/include/linux/secretmem.h
>>> @@ -4,8 +4,32 @@
>>>    #ifdef CONFIG_SECRETMEM
>>> +extern const struct address_space_operations secretmem_aops;
>>> +
>>> +static inline bool page_is_secretmem(struct page *page)
>>> +{
>>> +	struct address_space *mapping;
>>> +
>>> +	/*
>>> +	 * Using page_mapping() is quite slow because of the actual call
>>> +	 * instruction and repeated compound_head(page) inside the
>>> +	 * page_mapping() function.
>>> +	 * We know that secretmem pages are not compound and are on the
>>> +	 * LRU, so we can save a couple of cycles here.
>>> +	 */
>>> +	if (PageCompound(page) || !PageLRU(page))
>>> +		return false;
>>
>> I'd assume secretmem pages are rare in basically every setup out there. So
>> maybe throwing in a couple of likely()/unlikely() might make sense.
> 
> I'd say we could do unlikely(page_is_secretmem()) at call sites. Here I can
> hardly estimate which pages are going to be checked.
>   
>>> +
>>> +	mapping = (struct address_space *)
>>> +		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
>>> +
>>
>> Not sure if open-coding page_mapping is really a good idea here -- or even
>> necessary after the fast path above is in place. Anyhow, just my 2 cents.
> 
> Well, most of the -4.2% performance regression kbuild reported was
> due to repeated compound_head(page) in page_mapping(). So the whole point
> of this patch is to avoid calling page_mapping().

I would have thought the fast path "(PageCompound(page) || 
!PageLRU(page))" would already avoid calling page_mapping() in many cases.
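
In code, that alternative might look like the following sketch
(hypothetical; it keeps the fast path and falls back to the regular
helper, guarding against the NULL that page_mapping() returns for anon
pages):

static inline bool page_is_secretmem(struct page *page)
{
	struct address_space *mapping;

	if (PageCompound(page) || !PageLRU(page))
		return false;

	/* slower path, but only reached for LRU base pages */
	mapping = page_mapping(page);

	return mapping && mapping->a_ops == &secretmem_aops;
}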


-- 
Thanks,

David / dhildenb



* Re: [PATCH] secretmem: optimize page_is_secretmem()
  2021-04-19  9:38     ` David Hildenbrand
@ 2021-04-19  9:40       ` David Hildenbrand
  2021-04-19 10:14         ` Mike Rapoport
  0 siblings, 1 reply; 12+ messages in thread
From: David Hildenbrand @ 2021-04-19  9:40 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Yury Norov, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
	linux-nvdimm, linux-riscv, x86, kernel test robot

On 19.04.21 11:38, David Hildenbrand wrote:
> On 19.04.21 11:36, Mike Rapoport wrote:
>> On Mon, Apr 19, 2021 at 11:15:02AM +0200, David Hildenbrand wrote:
>>> On 19.04.21 10:42, Mike Rapoport wrote:
>>>> From: Mike Rapoport <rppt@linux.ibm.com>
>>>>
>>>> Kernel test robot reported -4.2% regression of will-it-scale.per_thread_ops
>>>> due to commit "mm: introduce memfd_secret system call to create "secret"
>>>> memory areas".
>>>>
>>>> The perf profile of the test indicated that the regression is caused by
>>>> page_is_secretmem() called from gup_pte_range() (inlined by gup_pgd_range):
>>>>
>>>>     27.76  +2.5  30.23       perf-profile.children.cycles-pp.gup_pgd_range
>>>>      0.00  +3.2   3.19 ± 2%  perf-profile.children.cycles-pp.page_mapping
>>>>      0.00  +3.7   3.66 ± 2%  perf-profile.children.cycles-pp.page_is_secretmem
>>>>
>>>> Further analysis showed that the slowdown happens because neither
>>>> page_is_secretmem() nor page_mapping() is inline and, moreover,
>>>> multiple page flags checks in page_mapping() involve calling
>>>> compound_head() several times for the same page.
>>>>
>>>> Make page_is_secretmem() inline and replace page_mapping() with page flag
>>>> checks that do not imply page-to-head conversion.
>>>>
>>>> Reported-by: kernel test robot <oliver.sang@intel.com>
>>>> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
>>>> ---
>>>>
>>>> @Andrew,
>>>> The patch is vs v5.12-rc7-mmots-2021-04-15-16-28, I'd appreciate if it would
>>>> be added as a fixup to the memfd_secret series.
>>>>
>>>>     include/linux/secretmem.h | 26 +++++++++++++++++++++++++-
>>>>     mm/secretmem.c            | 12 +-----------
>>>>     2 files changed, 26 insertions(+), 12 deletions(-)
>>>>
>>>> diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
>>>> index 907a6734059c..b842b38cbeb1 100644
>>>> --- a/include/linux/secretmem.h
>>>> +++ b/include/linux/secretmem.h
>>>> @@ -4,8 +4,32 @@
>>>>     #ifdef CONFIG_SECRETMEM
>>>> +extern const struct address_space_operations secretmem_aops;
>>>> +
>>>> +static inline bool page_is_secretmem(struct page *page)
>>>> +{
>>>> +	struct address_space *mapping;
>>>> +
>>>> +	/*
>>>> +	 * Using page_mapping() is quite slow because of the actual call
>>>> +	 * instruction and repeated compound_head(page) inside the
>>>> +	 * page_mapping() function.
>>>> +	 * We know that secretmem pages are not compound and are on the
>>>> +	 * LRU, so we can save a couple of cycles here.
>>>> +	 */
>>>> +	if (PageCompound(page) || !PageLRU(page))
>>>> +		return false;
>>>
>>> I'd assume secretmem pages are rare in basically every setup out there. So
>>> maybe throwing in a couple of likely()/unlikely() might make sense.
>>
>> I'd say we could do unlikely(page_is_secretmem()) at call sites. Here I can
>> hardly estimate which pages are going to be checked.
>>    
>>>> +
>>>> +	mapping = (struct address_space *)
>>>> +		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
>>>> +
>>>
>>> Not sure if open-coding page_mapping is really a good idea here -- or even
>>> necessary after the fast path above is in place. Anyhow, just my 2 cents.
>>
>> Well, most of the -4.2% performance regression kbuild reported was
>> due to repeated compound_head(page) in page_mapping(). So the whole point
>> of this patch is to avoid calling page_mapping().
> 
> I would have thought the fast path "(PageCompound(page) ||
> !PageLRU(page))" would already avoid calling page_mapping() in many cases.

(and I do wonder if a generic page_mapping() optimization would make 
sense instead)

Willy can most probably give the best advice here :)

-- 
Thanks,

David / dhildenb



* Re: [PATCH] secretmem: optimize page_is_secretmem()
  2021-04-19  9:40       ` David Hildenbrand
@ 2021-04-19 10:14         ` Mike Rapoport
  2021-04-19 10:21           ` David Hildenbrand
  0 siblings, 1 reply; 12+ messages in thread
From: Mike Rapoport @ 2021-04-19 10:14 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Yury Norov, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
	linux-nvdimm, linux-riscv, x86, kernel test robot

On Mon, Apr 19, 2021 at 11:40:56AM +0200, David Hildenbrand wrote:
> On 19.04.21 11:38, David Hildenbrand wrote:
> > On 19.04.21 11:36, Mike Rapoport wrote:
> > > On Mon, Apr 19, 2021 at 11:15:02AM +0200, David Hildenbrand wrote:
> > > > On 19.04.21 10:42, Mike Rapoport wrote:
> > > > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > > > 
> > > > > Kernel test robot reported -4.2% regression of will-it-scale.per_thread_ops
> > > > > due to commit "mm: introduce memfd_secret system call to create "secret"
> > > > > memory areas".
> > > > > 
> > > > > The perf profile of the test indicated that the regression is caused by
> > > > > page_is_secretmem() called from gup_pte_range() (inlined by gup_pgd_range):
> > > > > 
> > > > >     27.76  +2.5  30.23       perf-profile.children.cycles-pp.gup_pgd_range
> > > > >      0.00  +3.2   3.19 ± 2%  perf-profile.children.cycles-pp.page_mapping
> > > > >      0.00  +3.7   3.66 ± 2%  perf-profile.children.cycles-pp.page_is_secretmem
> > > > > 
> > > > > Further analysis showed that the slowdown happens because neither
> > > > > page_is_secretmem() nor page_mapping() is inline and, moreover,
> > > > > multiple page flags checks in page_mapping() involve calling
> > > > > compound_head() several times for the same page.
> > > > > 
> > > > > Make page_is_secretmem() inline and replace page_mapping() with page flag
> > > > > checks that do not imply page-to-head conversion.
> > > > > 
> > > > > Reported-by: kernel test robot <oliver.sang@intel.com>
> > > > > Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> > > > > ---
> > > > > 
> > > > > @Andrew,
> > > > > The patch is vs v5.12-rc7-mmots-2021-04-15-16-28, I'd appreciate if it would
> > > > > be added as a fixup to the memfd_secret series.
> > > > > 
> > > > >     include/linux/secretmem.h | 26 +++++++++++++++++++++++++-
> > > > >     mm/secretmem.c            | 12 +-----------
> > > > >     2 files changed, 26 insertions(+), 12 deletions(-)
> > > > > 
> > > > > diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
> > > > > index 907a6734059c..b842b38cbeb1 100644
> > > > > --- a/include/linux/secretmem.h
> > > > > +++ b/include/linux/secretmem.h
> > > > > @@ -4,8 +4,32 @@
> > > > >     #ifdef CONFIG_SECRETMEM
> > > > > +extern const struct address_space_operations secretmem_aops;
> > > > > +
> > > > > +static inline bool page_is_secretmem(struct page *page)
> > > > > +{
> > > > > +	struct address_space *mapping;
> > > > > +
> > > > > +	/*
> > > > > +	 * Using page_mapping() is quite slow because of the actual call
> > > > > +	 * instruction and repeated compound_head(page) inside the
> > > > > +	 * page_mapping() function.
> > > > > +	 * We know that secretmem pages are not compound and are on the
> > > > > +	 * LRU, so we can save a couple of cycles here.
> > > > > +	 */
> > > > > +	if (PageCompound(page) || !PageLRU(page))
> > > > > +		return false;
> > > > 
> > > > I'd assume secretmem pages are rare in basically every setup out there. So
> > > > maybe throwing in a couple of likely()/unlikely() might make sense.
> > > 
> > > I'd say we could do unlikely(page_is_secretmem()) at call sites. Here I can
> > > hardly estimate which pages are going to be checked.
> > > > > +
> > > > > +	mapping = (struct address_space *)
> > > > > +		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
> > > > > +
> > > > 
> > > > Not sure if open-coding page_mapping is really a good idea here -- or even
> > > > necessary after the fast path above is in place. Anyhow, just my 2 cents.
> > > 
> > > Well, most of the -4.2% performance regression kbuild reported was
> > > due to repeated compound_head(page) in page_mapping(). So the whole point
> > > of this patch is to avoid calling page_mapping().
> > 
> > I would have thought the fast path "(PageCompound(page) ||
> > !PageLRU(page))" would already avoid calling page_mapping() in many cases.
> 
> (and I do wonder if a generic page_mapping() optimization would make sense
> instead)

Not sure. Replacing page_mapping() with page_file_mapping() at the
call sites in fs/ and mm/ increased the defconfig image by nearly 2k,
and page_file_mapping() is way simpler than page_mapping():

add/remove: 1/0 grow/shrink: 35/0 up/down: 1960/0 (1960)
Function                                     old     new   delta
shrink_page_list                            3414    3670    +256
__set_page_dirty_nobuffers                   242     349    +107
check_move_unevictable_pages                 904     987     +83
move_to_new_page                             591     671     +80
shrink_active_list                           912     970     +58
move_pages_to_lru                            911     965     +54
migrate_pages                               2500    2554     +54
shmem_swapin_page                           1145    1197     +52
shmem_undo_range                            1669    1719     +50
__test_set_page_writeback                    620     670     +50
__set_page_dirty_buffers                     187     237     +50
__pagevec_lru_add                            757     807     +50
__munlock_pagevec                           1155    1205     +50
__dump_page                                 1101    1151     +50
__cancel_dirty_page                          182     232     +50
__remove_mapping                             461     510     +49
rmap_walk_file                               402     449     +47
isolate_movable_page                         240     287     +47
test_clear_page_writeback                    668     714     +46
page_cache_pipe_buf_try_steal                171     217     +46
page_endio                                   246     290     +44
page_file_mapping                              -      43     +43
__isolate_lru_page_prepare                   254     297     +43
hugetlb_page_mapping_lock_write               39      81     +42
iomap_set_page_dirty                         110     151     +41
clear_page_dirty_for_io                      324     364     +40
wait_on_page_writeback_killable              118     157     +39
wait_on_page_writeback                       105     144     +39
set_page_dirty                               159     198     +39
putback_movable_page                          32      71     +39
page_mkclean                                 172     211     +39
mark_buffer_dirty                            176     215     +39
invalidate_inode_page                        122     161     +39
delete_from_page_cache                       139     178     +39
PageMovable                                   49      86     +37
isolate_migratepages_block                  2843    2872     +29
Total: Before=17068648, After=17070608, chg +0.01%
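(Numbers in this format are typically produced by the kernel's
scripts/bloat-o-meter, comparing the old and new vmlinux.)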
 
> Willy can most probably give the best advice here :)

I think that's what folios are for :)

-- 
Sincerely yours,
Mike.


* Re: [PATCH] secretmem: optimize page_is_secretmem()
  2021-04-19 10:14         ` Mike Rapoport
@ 2021-04-19 10:21           ` David Hildenbrand
  0 siblings, 0 replies; 12+ messages in thread
From: David Hildenbrand @ 2021-04-19 10:21 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Yury Norov, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
	linux-nvdimm, linux-riscv, x86, kernel test robot

On 19.04.21 12:14, Mike Rapoport wrote:
> On Mon, Apr 19, 2021 at 11:40:56AM +0200, David Hildenbrand wrote:
>> On 19.04.21 11:38, David Hildenbrand wrote:
>>> On 19.04.21 11:36, Mike Rapoport wrote:
>>>> On Mon, Apr 19, 2021 at 11:15:02AM +0200, David Hildenbrand wrote:
>>>>> On 19.04.21 10:42, Mike Rapoport wrote:
>>>>>> From: Mike Rapoport <rppt@linux.ibm.com>
>>>>>>
>>>>>> Kernel test robot reported -4.2% regression of will-it-scale.per_thread_ops
>>>>>> due to commit "mm: introduce memfd_secret system call to create "secret"
>>>>>> memory areas".
>>>>>>
>>>>>> The perf profile of the test indicated that the regression is caused by
>>>>>> page_is_secretmem() called from gup_pte_range() (inlined by gup_pgd_range):
>>>>>>
>>>>>>      27.76  +2.5  30.23       perf-profile.children.cycles-pp.gup_pgd_range
>>>>>>       0.00  +3.2   3.19 ± 2%  perf-profile.children.cycles-pp.page_mapping
>>>>>>       0.00  +3.7   3.66 ± 2%  perf-profile.children.cycles-pp.page_is_secretmem
>>>>>>
>>>>>> Further analysis showed that the slowdown happens because neither
>>>>>> page_is_secretmem() nor page_mapping() is inline and, moreover,
>>>>>> multiple page flags checks in page_mapping() involve calling
>>>>>> compound_head() several times for the same page.
>>>>>>
>>>>>> Make page_is_secretmem() inline and replace page_mapping() with page flag
>>>>>> checks that do not imply page-to-head conversion.
>>>>>>
>>>>>> Reported-by: kernel test robot <oliver.sang@intel.com>
>>>>>> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
>>>>>> ---
>>>>>>
>>>>>> @Andrew,
>>>>>> The patch is vs v5.12-rc7-mmots-2021-04-15-16-28, I'd appreciate if it would
>>>>>> be added as a fixup to the memfd_secret series.
>>>>>>
>>>>>>      include/linux/secretmem.h | 26 +++++++++++++++++++++++++-
>>>>>>      mm/secretmem.c            | 12 +-----------
>>>>>>      2 files changed, 26 insertions(+), 12 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
>>>>>> index 907a6734059c..b842b38cbeb1 100644
>>>>>> --- a/include/linux/secretmem.h
>>>>>> +++ b/include/linux/secretmem.h
>>>>>> @@ -4,8 +4,32 @@
>>>>>>      #ifdef CONFIG_SECRETMEM
>>>>>> +extern const struct address_space_operations secretmem_aops;
>>>>>> +
>>>>>> +static inline bool page_is_secretmem(struct page *page)
>>>>>> +{
>>>>>> +	struct address_space *mapping;
>>>>>> +
>>>>>> +	/*
>>>>>> +	 * Using page_mapping() is quite slow because of the actual call
>>>>>> +	 * instruction and repeated compound_head(page) inside the
>>>>>> +	 * page_mapping() function.
>>>>>> +	 * We know that secretmem pages are not compound and are on the
>>>>>> +	 * LRU, so we can save a couple of cycles here.
>>>>>> +	 */
>>>>>> +	if (PageCompound(page) || !PageLRU(page))
>>>>>> +		return false;
>>>>>
>>>>> I'd assume secretmem pages are rare in basically every setup out there. So
>>>>> maybe throwing in a couple of likely()/unlikely() might make sense.
>>>>
>>>> I'd say we could do unlikely(page_is_secretmem()) at call sites. Here I can
>>>> hardly estimate which pages are going to be checked.
>>>>>> +
>>>>>> +	mapping = (struct address_space *)
>>>>>> +		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
>>>>>> +
>>>>>
>>>>> Not sure if open-coding page_mapping is really a good idea here -- or even
>>>>> necessary after the fast path above is in place. Anyhow, just my 2 cents.
>>>>
>>>> Well, most of the -4.2% performance regression kbuild reported was
>>>> due to repeated compound_head(page) in page_mapping(). So the whole point
>>>> of this patch is to avoid calling page_mapping().
>>>
>>> I would have thought the fast path "(PageCompound(page) ||
>>> !PageLRU(page))" would already avoid calling page_mapping() in many cases.
>>
>> (and I do wonder if a generic page_mapping() optimization would make sense
>> instead)
> 
> Not sure. Replacing page_mapping() with page_file_mapping() at the
> call sites in fs/ and mm/ increased the defconfig image by nearly 2k,
> and page_file_mapping() is way simpler than page_mapping():
> 
> add/remove: 1/0 grow/shrink: 35/0 up/down: 1960/0 (1960)
> Function                                     old     new   delta
> shrink_page_list                            3414    3670    +256
> __set_page_dirty_nobuffers                   242     349    +107
> check_move_unevictable_pages                 904     987     +83
> move_to_new_page                             591     671     +80
> shrink_active_list                           912     970     +58
> move_pages_to_lru                            911     965     +54
> migrate_pages                               2500    2554     +54
> shmem_swapin_page                           1145    1197     +52
> shmem_undo_range                            1669    1719     +50
> __test_set_page_writeback                    620     670     +50
> __set_page_dirty_buffers                     187     237     +50
> __pagevec_lru_add                            757     807     +50
> __munlock_pagevec                           1155    1205     +50
> __dump_page                                 1101    1151     +50
> __cancel_dirty_page                          182     232     +50
> __remove_mapping                             461     510     +49
> rmap_walk_file                               402     449     +47
> isolate_movable_page                         240     287     +47
> test_clear_page_writeback                    668     714     +46
> page_cache_pipe_buf_try_steal                171     217     +46
> page_endio                                   246     290     +44
> page_file_mapping                              -      43     +43
> __isolate_lru_page_prepare                   254     297     +43
> hugetlb_page_mapping_lock_write               39      81     +42
> iomap_set_page_dirty                         110     151     +41
> clear_page_dirty_for_io                      324     364     +40
> wait_on_page_writeback_killable              118     157     +39
> wait_on_page_writeback                       105     144     +39
> set_page_dirty                               159     198     +39
> putback_movable_page                          32      71     +39
> page_mkclean                                 172     211     +39
> mark_buffer_dirty                            176     215     +39
> invalidate_inode_page                        122     161     +39
> delete_from_page_cache                       139     178     +39
> PageMovable                                   49      86     +37
> isolate_migratepages_block                  2843    2872     +29
> Total: Before=17068648, After=17070608, chg +0.01%
>   
>> Willy can most probably give the best advice here :)
> 
> I think that's what folios are for :)

Exactly my thought. :)


-- 
Thanks,

David / dhildenb



* Re: [PATCH] secretmem: optimize page_is_secretmem()
  2021-04-19  8:42 [PATCH] secretmem: optimize page_is_secretmem() Mike Rapoport
  2021-04-19  9:15 ` David Hildenbrand
@ 2021-04-19 11:23 ` Matthew Wilcox
  2021-04-19 11:56   ` Mike Rapoport
  1 sibling, 1 reply; 12+ messages in thread
From: Matthew Wilcox @ 2021-04-19 11:23 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, David Hildenbrand, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Yury Norov, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
	linux-nvdimm, linux-riscv, x86, kernel test robot

On Mon, Apr 19, 2021 at 11:42:18AM +0300, Mike Rapoport wrote:
> The perf profile of the test indicated that the regression is caused by
> page_is_secretmem() called from gup_pte_range() (inlined by gup_pgd_range):

Uhh ... you're calling it in the wrong place!

                VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
                page = pte_page(pte);

                if (page_is_secretmem(page))
                        goto pte_unmap;

                head = try_grab_compound_head(page, 1, flags);
                if (!head)
                        goto pte_unmap;

So you're calling page_is_secretmem() on a struct page without having
a refcount on it.  That is definitely not allowed.  secretmem seems to
be full of these kinds of races; I know this isn't the first one I've
seen in it.  I don't think this patchset is ready for this merge window.

With that fixed, you'll have a head page that you can use for testing,
which means you don't need to test PageCompound() (because you know the
page isn't PageTail), you can just test PageHead().
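
A rough sketch of that reordering (illustrative only; it assumes gup.c's
internal put_compound_head() helper for the bail-out path):

		head = try_grab_compound_head(page, 1, flags);
		if (!head)
			goto pte_unmap;

		/* the check is only stable once we hold a reference */
		if (unlikely(page_is_secretmem(page))) {
			put_compound_head(head, 1, flags);
			goto pte_unmap;
		}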


* Re: [PATCH] secretmem: optimize page_is_secretmem()
  2021-04-19  9:36   ` Mike Rapoport
  2021-04-19  9:38     ` David Hildenbrand
@ 2021-04-19 11:43     ` Matthew Wilcox
  1 sibling, 0 replies; 12+ messages in thread
From: Matthew Wilcox @ 2021-04-19 11:43 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: David Hildenbrand, Andrew Morton, Alexander Viro,
	Andy Lutomirski, Arnd Bergmann, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Yury Norov, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
	linux-nvdimm, linux-riscv, x86, kernel test robot

On Mon, Apr 19, 2021 at 12:36:19PM +0300, Mike Rapoport wrote:
> Well, most of the -4.2% performance regression kbuild reported was
> due to repeated compound_head(page) in page_mapping(). So the whole point
> of this patch is to avoid calling page_mapping().

It's quite ludicrous how many times we call compound_head() in
page_mapping() today:

 page = compound_head(page);
 if (__builtin_expect(!!(PageSlab(page)), 0))
 if (__builtin_expect(!!(PageSwapCache(page)), 0)) {

TESTPAGEFLAG(Slab, slab, PF_NO_TAIL) expands to:

static __always_inline int PageSlab(struct page *page)
{
	PF_POISONED_CHECK(compound_head(page));
	return test_bit(PG_slab, &compound_head(page)->flags);
}

static __always_inline int PageSwapCache(struct page *page)
{
        page = compound_head(page);
        return PageSwapBacked(page) && test_bit(PG_swapcache, &page->flags);
}

but then!

TESTPAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL) also expands like Slab does.

So that's six calls to compound_head(), depending what Kconfig options
you have enabled.

And folio_mapping() is one of the functions I add in the first batch of
patches, so review, etc will be helpful.
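
For comparison, a hand-rolled variant that resolves the head once might
look like this rough sketch (hypothetical name and code; the folio work
addresses the problem more generally):

struct address_space *page_mapping_once(struct page *page)
{
	struct page *head = compound_head(page);	/* resolved exactly once */
	struct address_space *mapping;

	/* slab pages have no mapping; flush_dcache_page() may pass them in */
	if (unlikely(test_bit(PG_slab, &head->flags)))
		return NULL;

	/* swap cache pages keep a swp_entry_t in page_private() */
	if (unlikely(test_bit(PG_swapbacked, &head->flags) &&
		     test_bit(PG_swapcache, &head->flags))) {
		swp_entry_t entry = { .val = page_private(head) };

		return swap_address_space(entry);
	}

	mapping = head->mapping;
	if ((unsigned long)mapping & PAGE_MAPPING_ANON)
		return NULL;

	return (void *)((unsigned long)mapping & ~PAGE_MAPPING_FLAGS);
}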


* Re: [PATCH] secretmem: optimize page_is_secretmem()
  2021-04-19 11:23 ` Matthew Wilcox
@ 2021-04-19 11:56   ` Mike Rapoport
  2021-04-19 12:21     ` Matthew Wilcox
  0 siblings, 1 reply; 12+ messages in thread
From: Mike Rapoport @ 2021-04-19 11:56 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, David Hildenbrand, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Yury Norov, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
	linux-nvdimm, linux-riscv, x86, kernel test robot

On Mon, Apr 19, 2021 at 12:23:02PM +0100, Matthew Wilcox wrote:
> On Mon, Apr 19, 2021 at 11:42:18AM +0300, Mike Rapoport wrote:
> > The perf profile of the test indicated that the regression is caused by
> > page_is_secretmem() called from gup_pte_range() (inlined by gup_pgd_range):
> 
> Uhh ... you're calling it in the wrong place!
> 
>                 VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
>                 page = pte_page(pte);
> 
>                 if (page_is_secretmem(page))
>                         goto pte_unmap;
> 
>                 head = try_grab_compound_head(page, 1, flags);
>                 if (!head)
>                         goto pte_unmap;
> 
> So you're calling page_is_secretmem() on a struct page without having
> a refcount on it.  That is definitely not allowed.  secretmem seems to
> be full of these kinds of races; I know this isn't the first one I've
> seen in it.  I don't think this patchset is ready for this merge window.

There were races in the older version that did caching of large pages and
those were fixed then, but that is irrelevant anyway because all that code
was dropped in the latest respins.

I don't think the fix for the race in gup_pte_range is significant enough
to justify waiting 3 more months.
 
> With that fixed, you'll have a head page that you can use for testing,
> which means you don't need to test PageCompound() (because you know the
> page isn't PageTail), you can just test PageHead().

I can't say I follow you here. page_is_secretmem() is intended to be a
generic test, so I don't see how it can assume a head page when it is
called from other places.

-- 
Sincerely yours,
Mike.


* Re: [PATCH] secretmem: optimize page_is_secretmem()
  2021-04-19 11:56   ` Mike Rapoport
@ 2021-04-19 12:21     ` Matthew Wilcox
  2021-04-19 18:24       ` Mike Rapoport
  0 siblings, 1 reply; 12+ messages in thread
From: Matthew Wilcox @ 2021-04-19 12:21 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, David Hildenbrand, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Yury Norov, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
	linux-nvdimm, linux-riscv, x86, kernel test robot

On Mon, Apr 19, 2021 at 02:56:17PM +0300, Mike Rapoport wrote:
> On Mon, Apr 19, 2021 at 12:23:02PM +0100, Matthew Wilcox wrote:
> > So you're calling page_is_secretmem() on a struct page without having
> > a refcount on it.  That is definitely not allowed.  secretmem seems to
> > be full of these kinds of races; I know this isn't the first one I've
> > seen in it.  I don't think this patchset is ready for this merge window.
> 
> There were races in the older version that did caching of large pages and
> those were fixed then, but that is irrelevant anyway because all that code
> was dropped in the latest respins.
> 
> I don't think the fix for the race in gup_pte_range is significant enough
> to justify waiting 3 more months.

I have no particular interest in secretmem, but it seems that every time
I come across it while looking at something else, I see these kinds of
major mistakes in it.  That says to me it's not ready and hasn't seen
enough review.

> > With that fixed, you'll have a head page that you can use for testing,
> > which means you don't need to test PageCompound() (because you know the
> > page isn't PageTail), you can just test PageHead().
> 
> > I can't say I follow you here. page_is_secretmem() is intended to be a
> > generic test, so I don't see how it can assume a head page when it is
> > called from other places.

static inline bool head_is_secretmem(struct page *head)
{
	if (PageHead(head))
		return false;
	...
}

static inline bool page_is_secretmem(struct page *page)
{
	if (PageTail(page))
		return false;
	return head_is_secretmem(page);
}

(yes, calling it head is a misnomer, because it's not necessarily a head,
it might be a base page, but until we have a name for pages which might
be a head page or a base page, it'll have to do ...)
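
Under that scheme, a caller that already holds a reference on a head page
-- like GUP after try_grab_compound_head() -- could skip the tail check
entirely. A sketch under that assumption:

		/* 'head' is known not to be a tail page here */
		if (unlikely(head_is_secretmem(head))) {
			put_compound_head(head, 1, flags);
			goto pte_unmap;
		}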


* Re: [PATCH] secretmem: optimize page_is_secretmem()
  2021-04-19 12:21     ` Matthew Wilcox
@ 2021-04-19 18:24       ` Mike Rapoport
  0 siblings, 0 replies; 12+ messages in thread
From: Mike Rapoport @ 2021-04-19 18:24 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, David Hildenbrand, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Yury Norov, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
	linux-nvdimm, linux-riscv, x86, kernel test robot

On Mon, Apr 19, 2021 at 01:21:56PM +0100, Matthew Wilcox wrote:
> On Mon, Apr 19, 2021 at 02:56:17PM +0300, Mike Rapoport wrote:
> 
> > > With that fixed, you'll have a head page that you can use for testing,
> > > which means you don't need to test PageCompound() (because you know the
> > > page isn't PageTail), you can just test PageHead().
> > 
> > I can't say I follow you here. page_is_secretmem() is intended to be a
> > generic test, so I don't see how it will get PageHead() if it is called
> > from other places.
> 
> static inline bool head_is_secretmem(struct page *head)
> {
> 	if (PageHead(head))
> 		return false;
> 	...
> }
> 
> static inline bool page_is_secretmem(struct page *page)
> {
> 	if (PageTail(page))
> 		return false;
> 	return head_is_secretmem(page);
> }
> 
> (yes, calling it head is a misnomer, because it's not necessarily a head,
> it might be a base page, but until we have a name for pages which might
> be a head page or a base page, it'll have to do ...)

To me this looks less clean and readable, and in the end we have an extra
check for PG_head if that doesn't get optimized away.

It does not seem to me that there would be a measurable difference, but if
this is important for the future conversion to folios, I don't mind using
{head,page}_is_secretmem.

-- 
Sincerely yours,
Mike.


end of thread, other threads:[~2021-04-19 18:24 UTC | newest]

Thread overview: 12+ messages
2021-04-19  8:42 [PATCH] secretmem: optimize page_is_secretmem() Mike Rapoport
2021-04-19  9:15 ` David Hildenbrand
2021-04-19  9:36   ` Mike Rapoport
2021-04-19  9:38     ` David Hildenbrand
2021-04-19  9:40       ` David Hildenbrand
2021-04-19 10:14         ` Mike Rapoport
2021-04-19 10:21           ` David Hildenbrand
2021-04-19 11:43     ` Matthew Wilcox
2021-04-19 11:23 ` Matthew Wilcox
2021-04-19 11:56   ` Mike Rapoport
2021-04-19 12:21     ` Matthew Wilcox
2021-04-19 18:24       ` Mike Rapoport
