From: John Hubbard <jhubbard@nvidia.com>
To: Joao Martins <joao.m.martins@oracle.com>, <linux-mm@kvack.org>
Cc: Dan Williams <dan.j.williams@intel.com>, Ira Weiny <ira.weiny@intel.com>,
	<linux-nvdimm@lists.01.org>, Matthew Wilcox <willy@infradead.org>,
	Jason Gunthorpe <jgg@ziepe.ca>, Jane Chu <jane.chu@oracle.com>,
	Muchun Song <songmuchun@bytedance.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH RFC 6/9] mm/gup: Grab head page refcount once for group of subpages
Date: Tue, 8 Dec 2020 20:40:19 -0800	[thread overview]
Message-ID: <6f729802-1e93-3036-3dba-be35e06af579@nvidia.com> (raw)
In-Reply-To: <20201208172901.17384-8-joao.m.martins@oracle.com>

On 12/8/20 9:28 AM, Joao Martins wrote:
> Much like hugetlbfs or THPs, we treat device pagemaps with
> compound pages like the rest of GUP handling of compound pages.
>
> Rather than incrementing the refcount every 4K, we record
> all sub pages and increment by @refs amount *once*.
>
> Performance measured by gup_benchmark improves considerably
> get_user_pages_fast() and pin_user_pages_fast():
>
>  $ gup_benchmark -f /dev/dax0.2 -m 16384 -r 10 -S [-u,-a] -n 512 -w

"gup_test", now that you're in linux-next, actually.

(Maybe I'll retrofit that test with getopt_long(), those options are
getting more elaborate.)

>
> (get_user_pages_fast 2M pages) ~75k us -> ~3.6k us
> (pin_user_pages_fast 2M pages) ~125k us -> ~3.8k us

That is a beautiful result! I'm very motivated to see if this patchset
can make it in, in some form.

>
> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> ---
>  mm/gup.c | 67 ++++++++++++++++++++++++++++++++++++++++++--------------
>  1 file changed, 51 insertions(+), 16 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 98eb8e6d2609..194e6981eb03 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2250,22 +2250,68 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  }
>  #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */
>
> +
> +static int record_subpages(struct page *page, unsigned long addr,
> +			   unsigned long end, struct page **pages)
> +{
> +	int nr;
> +
> +	for (nr = 0; addr != end; addr += PAGE_SIZE)
> +		pages[nr++] = page++;
> +
> +	return nr;
> +}
> +
>  #if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
> -static int __gup_device_huge(unsigned long pfn, unsigned long addr,
> -			     unsigned long end, unsigned int flags,
> -			     struct page **pages, int *nr)
> +static int __gup_device_compound_huge(struct dev_pagemap *pgmap,
> +				      struct page *head, unsigned long sz,

If this variable survives (I see Jason requested a reorg of this math
stuff, and I also like that idea), then I'd like a slightly better name
for "sz".

I was going to suggest one, but then realized that I can't understand
how this works. See below...

> +				      unsigned long addr, unsigned long end,
> +				      unsigned int flags, struct page **pages)
> +{
> +	struct page *page;
> +	int refs;
> +
> +	if (!(pgmap->flags & PGMAP_COMPOUND))
> +		return -1;

btw, I'm unhappy with returning -1 here and assigning it later to a
refs variable. (And that will show up even more clearly as an issue if
you attempt to make refs unsigned everywhere!)

I'm not going to suggest anything because there are a lot of ways to
structure these routines, and I don't want to overly constrain you.
Just please don't assign negative values to any refs variables.

> +
> +	page = head + ((addr & (sz-1)) >> PAGE_SHIFT);

If you pass in PMD_SHIFT or PUD_SHIFT for sz, that's a number of bits,
isn't it? Not a size. And if it's not a size, then sz - 1 doesn't work,
does it?

If it does work, then better naming might help. I'm probably missing a
really obvious math trick here.

thanks,
--
John Hubbard
NVIDIA

> +	refs = record_subpages(page, addr, end, pages);
> +
> +	SetPageReferenced(page);
> +	head = try_grab_compound_head(head, refs, flags);
> +	if (!head) {
> +		ClearPageReferenced(page);
> +		return 0;
> +	}
> +
> +	return refs;
> +}
> +
> +static int __gup_device_huge(unsigned long pfn, unsigned long sz,
> +			     unsigned long addr, unsigned long end,
> +			     unsigned int flags, struct page **pages, int *nr)
>  {
>  	int nr_start = *nr;
>  	struct dev_pagemap *pgmap = NULL;
>
>  	do {
>  		struct page *page = pfn_to_page(pfn);
> +		int refs;
>
>  		pgmap = get_dev_pagemap(pfn, pgmap);
>  		if (unlikely(!pgmap)) {
>  			undo_dev_pagemap(nr, nr_start, flags, pages);
>  			return 0;
>  		}
> +
> +		refs = __gup_device_compound_huge(pgmap, page, sz, addr, end,
> +						  flags, pages + *nr);
> +		if (refs >= 0) {
> +			*nr += refs;
> +			put_dev_pagemap(pgmap);
> +			return refs ? 1 : 0;
> +		}
> +
>  		SetPageReferenced(page);
>  		pages[*nr] = page;
>  		if (unlikely(!try_grab_page(page, flags))) {
> @@ -2289,7 +2335,7 @@ static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
>  	int nr_start = *nr;
>
>  	fault_pfn = pmd_pfn(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
> -	if (!__gup_device_huge(fault_pfn, addr, end, flags, pages, nr))
> +	if (!__gup_device_huge(fault_pfn, PMD_SHIFT, addr, end, flags, pages, nr))
>  		return 0;
>
>  	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
> @@ -2307,7 +2353,7 @@ static int __gup_device_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
>  	int nr_start = *nr;
>
>  	fault_pfn = pud_pfn(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
> -	if (!__gup_device_huge(fault_pfn, addr, end, flags, pages, nr))
> +	if (!__gup_device_huge(fault_pfn, PUD_SHIFT, addr, end, flags, pages, nr))
>  		return 0;
>
>  	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
> @@ -2334,17 +2380,6 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
>  }
>  #endif
>
> -static int record_subpages(struct page *page, unsigned long addr,
> -			   unsigned long end, struct page **pages)
> -{
> -	int nr;
> -
> -	for (nr = 0; addr != end; addr += PAGE_SIZE)
> -		pages[nr++] = page++;
> -
> -	return nr;
> -}
> -
>  #ifdef CONFIG_ARCH_HAS_HUGEPD
>  static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
>  				      unsigned long sz)