* [PATCH] mm: Optimise nth_page for contiguous memmap
@ 2021-04-13 19:46 Matthew Wilcox (Oracle)
  2021-04-14 15:24 ` David Hildenbrand
  2021-04-14 15:27 ` Zi Yan
  0 siblings, 2 replies; 5+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-04-13 19:46 UTC (permalink / raw)
  To: Andrew Morton, linux-mm, linux-kernel, Tejun Heo,
	FUJITA Tomonori, Douglas Gilbert, Chris Wilson
  Cc: Matthew Wilcox (Oracle), Christoph Hellwig

If the memmap is virtually contiguous (either because we're using
a virtually mapped memmap or because we don't support a discontig
memmap at all), then we can implement nth_page() by simple addition.
Contrary to popular belief, the compiler is not able to optimise this
itself for a vmemmap configuration.  This reduces one example user (sg.c)
by four instructions:

        struct page *page = nth_page(rsv_schp->pages[k], offset >> PAGE_SHIFT);

before:
   49 8b 45 70             mov    0x70(%r13),%rax
   48 63 c9                movslq %ecx,%rcx
   48 c1 eb 0c             shr    $0xc,%rbx
   48 8b 04 c8             mov    (%rax,%rcx,8),%rax
   48 2b 05 00 00 00 00    sub    0x0(%rip),%rax
           R_X86_64_PC32      vmemmap_base-0x4
   48 c1 f8 06             sar    $0x6,%rax
   48 01 d8                add    %rbx,%rax
   48 c1 e0 06             shl    $0x6,%rax
   48 03 05 00 00 00 00    add    0x0(%rip),%rax
           R_X86_64_PC32      vmemmap_base-0x4

after:
   49 8b 45 70             mov    0x70(%r13),%rax
   48 63 c9                movslq %ecx,%rcx
   48 c1 eb 0c             shr    $0xc,%rbx
   48 c1 e3 06             shl    $0x6,%rbx
   48 03 1c c8             add    (%rax,%rcx,8),%rbx

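To make the "before" sequence concrete: with a vmemmap, the memmap is a
virtually contiguous array of struct pages based at vmemmap_base, so
page_to_pfn()/pfn_to_page() round-trip through that base.  A minimal C
sketch of the two code paths, assuming sizeof(struct page) == 64 (which
is what the sar/shl by 6 above implies); the struct and variable below
are self-contained stand-ins, not the real kernel definitions:

	struct page { char pad[64]; };	/* stand-in: 64-byte struct page */
	unsigned long vmemmap_base;	/* stand-in for the x86-64 base */

	/* What the "before" assembly computes: round-trip via the pfn. */
	struct page *nth_page_via_pfn(struct page *page, unsigned long n)
	{
		unsigned long pfn = ((unsigned long)page - vmemmap_base) / 64;

		return (struct page *)(vmemmap_base + (pfn + n) * 64);
	}

	/* What the patched nth_page() compiles to: plain addition. */
	struct page *nth_page_direct(struct page *page, unsigned long n)
	{
		return page + n;
	}

The two are algebraically identical, but the compiler evidently does not
fold the subtract/divide/multiply/add chain back into a plain add.
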
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/mm.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 25b9041f9925..2327f99b121f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -234,7 +234,11 @@ int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
 int __add_to_page_cache_locked(struct page *page, struct address_space *mapping,
 		pgoff_t index, gfp_t gfp, void **shadowp);
 
+#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
+#else
+#define nth_page(page,n) ((page) + (n))
+#endif
 
 /* to align the pointer to the (next) page boundary */
 #define PAGE_ALIGN(addr) ALIGN(addr, PAGE_SIZE)
-- 
2.30.2

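For context, a hypothetical caller pattern (mirroring the sg.c line in
the commit message above; this helper is illustrative, not code from the
patch).  nth_page() is the portable way to step through physically
contiguous pages, because with classic SPARSEMEM the memmap is only
virtually contiguous within a section:

	#include <linux/highmem.h>	/* clear_highpage() */
	#include <linux/mm.h>		/* nth_page() */

	/* Hypothetical example: clear every page of a physically
	 * contiguous buffer.  Plain 'first + i' is only correct when
	 * the memmap is virtually contiguous; nth_page() is correct in
	 * every configuration and, after this patch, just as cheap
	 * wherever plain addition is safe. */
	static void zero_buffer(struct page *first, unsigned int nr_pages)
	{
		unsigned int i;

		for (i = 0; i < nr_pages; i++)
			clear_highpage(nth_page(first, i));
	}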


* Re: [PATCH] mm: Optimise nth_page for contiguous memmap
  2021-04-13 19:46 [PATCH] mm: Optimise nth_page for contiguous memmap Matthew Wilcox (Oracle)
@ 2021-04-14 15:24 ` David Hildenbrand
  2021-04-14 18:51   ` Matthew Wilcox
  2021-04-14 15:27 ` Zi Yan
  1 sibling, 1 reply; 5+ messages in thread
From: David Hildenbrand @ 2021-04-14 15:24 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle),
	Andrew Morton, linux-mm, linux-kernel, Tejun Heo,
	FUJITA Tomonori, Douglas Gilbert, Chris Wilson
  Cc: Christoph Hellwig

On 13.04.21 21:46, Matthew Wilcox (Oracle) wrote:
> [...]
> 
> +#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
>   #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
> +#else
> +#define nth_page(page,n) ((page) + (n))
> +#endif

For sparsemem we could optimize within a single memory section. But not 
sure if it's worth the trouble.

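A purely hypothetical, untested sketch of such a fast path, relying on
each SPARSEMEM section having a contiguous memmap so that only a
section crossing needs the pfn round-trip:

	/* Hypothetical: stay on the addition fast path while page and
	 * page + n fall within the same memory section. */
	#define nth_page(page, n) ({					\
		struct page *__pg = (page);				\
		unsigned long __n = (n);				\
		unsigned long __pfn = page_to_pfn(__pg);		\
		(__pfn & (PAGES_PER_SECTION - 1)) + __n <		\
			PAGES_PER_SECTION ?				\
			__pg + __n : pfn_to_page(__pfn + __n);		\
	})
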
Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



* Re: [PATCH] mm: Optimise nth_page for contiguous memmap
  2021-04-13 19:46 [PATCH] mm: Optimise nth_page for contiguous memmap Matthew Wilcox (Oracle)
  2021-04-14 15:24 ` David Hildenbrand
@ 2021-04-14 15:27 ` Zi Yan
  1 sibling, 0 replies; 5+ messages in thread
From: Zi Yan @ 2021-04-14 15:27 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Andrew Morton, linux-mm, linux-kernel, Tejun Heo,
	FUJITA Tomonori, Douglas Gilbert, Chris Wilson,
	Christoph Hellwig

On 13 Apr 2021, at 15:46, Matthew Wilcox (Oracle) wrote:

> If the memmap is virtually contiguous (either because we're using
> a virtually mapped memmap or because we don't support a discontig
> memmap at all), then we can implement nth_page() by simple addition.
> [...]

LGTM. Thanks.

Reviewed-by: Zi Yan <ziy@nvidia.com>

—
Best Regards,
Yan Zi


* Re: [PATCH] mm: Optimise nth_page for contiguous memmap
  2021-04-14 15:24 ` David Hildenbrand
@ 2021-04-14 18:51   ` Matthew Wilcox
  2021-04-14 18:56     ` David Hildenbrand
  0 siblings, 1 reply; 5+ messages in thread
From: Matthew Wilcox @ 2021-04-14 18:51 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, linux-mm, linux-kernel, Tejun Heo,
	FUJITA Tomonori, Douglas Gilbert, Chris Wilson,
	Christoph Hellwig

On Wed, Apr 14, 2021 at 05:24:42PM +0200, David Hildenbrand wrote:
> On 13.04.21 21:46, Matthew Wilcox (Oracle) wrote:
> > +#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
> >   #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
> > +#else
> > +#define nth_page(page,n) ((page) + (n))
> > +#endif
> 
> For sparsemem we could optimize within a single memory section. But not sure
> if it's worth the trouble.

Not only is it not worth the trouble, I suspect it's more expensive to
test-and-branch than to just unconditionally call pfn_to_page() and
page_to_pfn().  That said, I haven't measured.

SPARSEMEM_VMEMMAP is default Y, and enabled by arm64, ia64, powerpc,
riscv, s390, sparc and x86.  I mean ... do we care any more?

> Reviewed-by: David Hildenbrand <david@redhat.com>


* Re: [PATCH] mm: Optimise nth_page for contiguous memmap
  2021-04-14 18:51   ` Matthew Wilcox
@ 2021-04-14 18:56     ` David Hildenbrand
  0 siblings, 0 replies; 5+ messages in thread
From: David Hildenbrand @ 2021-04-14 18:56 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, Chris Wilson, Christoph Hellwig, Douglas Gilbert,
	FUJITA Tomonori, Tejun Heo, linux-kernel, linux-mm

Matthew Wilcox <willy@infradead.org> wrote on Wed, 14 Apr 2021 at 20:52:

> On Wed, Apr 14, 2021 at 05:24:42PM +0200, David Hildenbrand wrote:
> > On 13.04.21 21:46, Matthew Wilcox (Oracle) wrote:
> > > +#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
> > >   #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
> > > +#else
> > > +#define nth_page(page,n) ((page) + (n))
> > > +#endif
> >
> > For sparsemem we could optimize within a single memory section. But not
> > sure if it's worth the trouble.
>
> Not only is it not worth the trouble, I suspect it's more expensive to
> test-and-branch than to just unconditionally call pfn_to_page() and
> page_to_pfn().  That said, I haven't measured.

My thinking was that in most cases we'd stay within the section, so there
would barely be any actual branches.

>
> SPARSEMEM_VMEMMAP is default Y, and enabled by arm64, ia64, powerpc,
> riscv, s390, sparc and x86.  I mean ... do we care any more?

Also true.
-- 
Thanks,

David / dhildenb

