* [PATCH v1 0/3] mm/gup: consistently call it GUP-fast
@ 2024-04-02 12:55 David Hildenbrand
  2024-04-02 12:55 ` [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions David Hildenbrand
  ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: David Hildenbrand @ 2024-04-02 12:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Mike Rapoport,
	Jason Gunthorpe, John Hubbard, Peter Xu, linux-arm-kernel,
	loongarch, linux-mips, linuxppc-dev, linux-s390, linux-sh,
	linux-perf-users, linux-fsdevel, linux-riscv, x86

Some cleanups around function names, comments and the config option of
"GUP-fast" -- GUP without "lock" safety belts on.

With this cleanup it's easy to judge which functions are GUP-fast
specific. We now consistently call it "GUP-fast", avoiding mixing it with
"fast GUP", "lockless", or simply "gup" (which I always considered
confusing in the code).

So the magic now happens in functions that contain "gup_fast", whereby
gup_fast() is the entry point into that magic. Comments consistently
reference either "GUP-fast" or "gup_fast()".

Based on mm-unstable from today. I won't CC arch maintainers, but only
arch mailing lists, to reduce noise.

Tested on x86_64, cross compiled on a bunch of archs.
RFC -> v1:
* Rebased on latest mm/mm-unstable
* "mm/gup: consistently name GUP-fast functions"
 -> "internal_get_user_pages_fast()" -> "gup_fast_fallback()"
 -> "undo_dev_pagemap()" -> "gup_fast_undo_dev_pagemap()"
 -> Fixup a bunch more comments
* "mm/treewide: rename CONFIG_HAVE_FAST_GUP to CONFIG_HAVE_GUP_FAST"
 -> Take care of RISCV

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: loongarch@lists.linux.dev
Cc: linux-mips@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-perf-users@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-riscv@lists.infradead.org
Cc: x86@kernel.org

David Hildenbrand (3):
  mm/gup: consistently name GUP-fast functions
  mm/treewide: rename CONFIG_HAVE_FAST_GUP to CONFIG_HAVE_GUP_FAST
  mm: use "GUP-fast" instead "fast GUP" in remaining comments

 arch/arm/Kconfig       |   2 +-
 arch/arm64/Kconfig     |   2 +-
 arch/loongarch/Kconfig |   2 +-
 arch/mips/Kconfig      |   2 +-
 arch/powerpc/Kconfig   |   2 +-
 arch/riscv/Kconfig     |   2 +-
 arch/s390/Kconfig      |   2 +-
 arch/sh/Kconfig        |   2 +-
 arch/x86/Kconfig       |   2 +-
 include/linux/rmap.h   |   8 +-
 kernel/events/core.c   |   4 +-
 mm/Kconfig             |   2 +-
 mm/filemap.c           |   2 +-
 mm/gup.c               | 215 +++++++++++++++++++++--------------------
 mm/internal.h          |   2 +-
 mm/khugepaged.c        |   2 +-
 16 files changed, 127 insertions(+), 126 deletions(-)

-- 
2.44.0

^ permalink raw reply	[flat|nested] 17+ messages in thread
* [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions
  2024-04-02 12:55 [PATCH v1 0/3] mm/gup: consistently call it GUP-fast David Hildenbrand
@ 2024-04-02 12:55 ` David Hildenbrand
  2024-04-13 20:07   ` John Hubbard
  2024-04-26  7:17   ` David Hildenbrand
  2024-04-02 12:55 ` [PATCH v1 2/3] mm/treewide: rename CONFIG_HAVE_FAST_GUP to CONFIG_HAVE_GUP_FAST David Hildenbrand
  2024-04-02 12:55 ` [PATCH v1 3/3] mm: use "GUP-fast" instead "fast GUP" in remaining comments David Hildenbrand
  2 siblings, 2 replies; 17+ messages in thread
From: David Hildenbrand @ 2024-04-02 12:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Mike Rapoport,
	Jason Gunthorpe, John Hubbard, Peter Xu, linux-arm-kernel,
	loongarch, linux-mips, linuxppc-dev, linux-s390, linux-sh,
	linux-perf-users, linux-fsdevel, linux-riscv, x86

Let's consistently call the "fast-only" part of GUP "GUP-fast" and rename
all relevant internal functions to start with "gup_fast", to make it
clearer that this is not ordinary GUP. The current mixture of
"lockless", "gup" and "gup_fast" is confusing.

Further, avoid the term "huge" when talking about a "leaf" -- for
example, we nowadays check pmd_leaf() because pmd_huge() is gone. For the
"hugepd"/"hugepte" stuff, it's part of the name ("is_hugepd"), so that
stays.
What remains is the "external" interface:
* get_user_pages_fast_only()
* get_user_pages_fast()
* pin_user_pages_fast()

The high-level internal functions for GUP-fast (+slow fallback) are now:
* internal_get_user_pages_fast() -> gup_fast_fallback()
* lockless_pages_from_mm() -> gup_fast()

The basic GUP-fast walker functions:
* gup_pgd_range() -> gup_fast_pgd_range()
* gup_p4d_range() -> gup_fast_p4d_range()
* gup_pud_range() -> gup_fast_pud_range()
* gup_pmd_range() -> gup_fast_pmd_range()
* gup_pte_range() -> gup_fast_pte_range()
* gup_huge_pgd() -> gup_fast_pgd_leaf()
* gup_huge_pud() -> gup_fast_pud_leaf()
* gup_huge_pmd() -> gup_fast_pmd_leaf()

The weird hugepd stuff:
* gup_huge_pd() -> gup_fast_hugepd()
* gup_hugepte() -> gup_fast_hugepte()

The weird devmap stuff:
* __gup_device_huge_pud() -> gup_fast_devmap_pud_leaf()
* __gup_device_huge_pmd() -> gup_fast_devmap_pmd_leaf()
* __gup_device_huge() -> gup_fast_devmap_leaf()
* undo_dev_pagemap() -> gup_fast_undo_dev_pagemap()

Helper functions:
* unpin_user_pages_lockless() -> gup_fast_unpin_user_pages()
* gup_fast_folio_allowed() is already properly named
* gup_fast_permitted() is already properly named

With "gup_fast()", we now even have a function that is referred to in a
comment in mm/mmu_gather.c.
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/gup.c | 205 ++++++++++++++++++++++++++++---------------------------
 1 file changed, 103 insertions(+), 102 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 95bd9d4b7cfb..f1ac2c5a7f6d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -440,7 +440,7 @@ void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
 }
 EXPORT_SYMBOL(unpin_user_page_range_dirty_lock);
 
-static void unpin_user_pages_lockless(struct page **pages, unsigned long npages)
+static void gup_fast_unpin_user_pages(struct page **pages, unsigned long npages)
 {
 	unsigned long i;
 	struct folio *folio;
@@ -525,9 +525,9 @@ static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
 	return (__boundary - 1 < end - 1) ? __boundary : end;
 }
 
-static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
-		       unsigned long end, unsigned int flags,
-		       struct page **pages, int *nr)
+static int gup_fast_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages,
+		int *nr)
 {
 	unsigned long pte_end;
 	struct page *page;
@@ -577,7 +577,7 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	 * of the other folios. See writable_file_mapping_allowed() and
 	 * gup_fast_folio_allowed() for more information.
 	 */
-static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
+static int gup_fast_hugepd(hugepd_t hugepd, unsigned long addr,
 		unsigned int pdshift, unsigned long end, unsigned int flags,
 		struct page **pages, int *nr)
 {
@@ -588,7 +588,7 @@ static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 	ptep = hugepte_offset(hugepd, addr, pdshift);
 	do {
 		next = hugepte_addr_end(addr, end, sz);
-		if (!gup_hugepte(ptep, sz, addr, end, flags, pages, nr))
+		if (!gup_fast_hugepte(ptep, sz, addr, end, flags, pages, nr))
 			return 0;
 	} while (ptep++, addr = next, addr != end);
 
@@ -613,8 +613,8 @@ static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
 	h = hstate_vma(vma);
 	ptep = hugepte_offset(hugepd, addr, pdshift);
 	ptl = huge_pte_lock(h, vma->vm_mm, ptep);
-	ret = gup_huge_pd(hugepd, addr, pdshift, addr + PAGE_SIZE,
-			  flags, &page, &nr);
+	ret = gup_fast_hugepd(hugepd, addr, pdshift, addr + PAGE_SIZE,
+			      flags, &page, &nr);
 	spin_unlock(ptl);
 
 	if (ret) {
@@ -626,7 +626,7 @@ static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
 	return NULL;
 }
 #else /* CONFIG_ARCH_HAS_HUGEPD */
-static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
+static inline int gup_fast_hugepd(hugepd_t hugepd, unsigned long addr,
 		unsigned int pdshift, unsigned long end, unsigned int flags,
 		struct page **pages, int *nr)
 {
@@ -2753,7 +2753,7 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 EXPORT_SYMBOL(get_user_pages_unlocked);
 
 /*
- * Fast GUP
+ * GUP-fast
  *
  * get_user_pages_fast attempts to pin user pages by walking the page
  * tables directly and avoids taking locks. Thus the walker needs to be
@@ -2767,7 +2767,7 @@ EXPORT_SYMBOL(get_user_pages_unlocked);
  *
  * Another way to achieve this is to batch up page table containing pages
  * belonging to more than one mm_user, then rcu_sched a callback to free those
- * pages. Disabling interrupts will allow the fast_gup walker to both block
+ * pages. Disabling interrupts will allow the gup_fast() walker to both block
  * the rcu_sched callback, and an IPI that we broadcast for splitting THPs
  * (which is a relatively rare event). The code below adopts this strategy.
 *
@@ -2876,9 +2876,8 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
 	return !reject_file_backed || shmem_mapping(mapping);
 }
 
-static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
-					    unsigned int flags,
-					    struct page **pages)
+static void __maybe_unused gup_fast_undo_dev_pagemap(int *nr, int nr_start,
+		unsigned int flags, struct page **pages)
 {
 	while ((*nr) - nr_start) {
 		struct page *page = pages[--(*nr)];
@@ -2893,27 +2892,27 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
 
 #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
 /*
- * Fast-gup relies on pte change detection to avoid concurrent pgtable
+ * GUP-fast relies on pte change detection to avoid concurrent pgtable
  * operations.
  *
- * To pin the page, fast-gup needs to do below in order:
+ * To pin the page, GUP-fast needs to do below in order:
  * (1) pin the page (by prefetching pte), then (2) check pte not changed.
  *
  * For the rest of pgtable operations where pgtable updates can be racy
- * with fast-gup, we need to do (1) clear pte, then (2) check whether page
+ * with GUP-fast, we need to do (1) clear pte, then (2) check whether page
  * is pinned.
  *
  * Above will work for all pte-level operations, including THP split.
  *
- * For THP collapse, it's a bit more complicated because fast-gup may be
+ * For THP collapse, it's a bit more complicated because GUP-fast may be
  * walking a pgtable page that is being freed (pte is still valid but pmd
  * can be cleared already). To avoid race in such condition, we need to
  * also check pmd here to make sure pmd doesn't change (corresponds to
  * pmdp_collapse_flush() in the THP collapse code path).
  */
-static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
-			 unsigned long end, unsigned int flags,
-			 struct page **pages, int *nr)
+static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages,
+		int *nr)
 {
 	struct dev_pagemap *pgmap = NULL;
 	int nr_start = *nr, ret = 0;
@@ -2946,7 +2945,7 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 
 			pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
 			if (unlikely(!pgmap)) {
-				undo_dev_pagemap(nr, nr_start, flags, pages);
+				gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
 				goto pte_unmap;
 			}
 		} else if (pte_special(pte))
@@ -3010,20 +3009,19 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
  *
  * For a futex to be placed on a THP tail page, get_futex_key requires a
  * get_user_pages_fast_only implementation that can pin pages. Thus it's still
- * useful to have gup_huge_pmd even if we can't operate on ptes.
+ * useful to have gup_fast_pmd_leaf even if we can't operate on ptes.
  */
-static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
-			 unsigned long end, unsigned int flags,
-			 struct page **pages, int *nr)
+static int gup_fast_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages,
+		int *nr)
 {
 	return 0;
 }
 #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */
 
 #if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
-static int __gup_device_huge(unsigned long pfn, unsigned long addr,
-			     unsigned long end, unsigned int flags,
-			     struct page **pages, int *nr)
+static int gup_fast_devmap_leaf(unsigned long pfn, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages, int *nr)
 {
 	int nr_start = *nr;
 	struct dev_pagemap *pgmap = NULL;
@@ -3033,19 +3031,19 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
 
 		pgmap = get_dev_pagemap(pfn, pgmap);
 		if (unlikely(!pgmap)) {
-			undo_dev_pagemap(nr, nr_start, flags, pages);
+			gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
 			break;
 		}
 
 		if (!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)) {
-			undo_dev_pagemap(nr, nr_start, flags, pages);
+			gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
 			break;
 		}
 		SetPageReferenced(page);
 		pages[*nr] = page;
 		if (unlikely(try_grab_page(page, flags))) {
-			undo_dev_pagemap(nr, nr_start, flags, pages);
+			gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
 			break;
 		}
 		(*nr)++;
@@ -3056,62 +3054,62 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
 	return addr == end;
 }
 
-static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
-				 unsigned long end, unsigned int flags,
-				 struct page **pages, int *nr)
+static int gup_fast_devmap_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages,
+		int *nr)
 {
 	unsigned long fault_pfn;
 	int nr_start = *nr;
 
 	fault_pfn = pmd_pfn(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	if (!__gup_device_huge(fault_pfn, addr, end, flags, pages, nr))
+	if (!gup_fast_devmap_leaf(fault_pfn, addr, end, flags, pages, nr))
 		return 0;
 
 	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
-		undo_dev_pagemap(nr, nr_start, flags, pages);
+		gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
 		return 0;
 	}
 	return 1;
 }
 
-static int __gup_device_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
-				 unsigned long end, unsigned int flags,
-				 struct page **pages, int *nr)
+static int gup_fast_devmap_pud_leaf(pud_t orig, pud_t *pudp, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages,
+		int *nr)
 {
 	unsigned long fault_pfn;
 	int nr_start = *nr;
 
 	fault_pfn = pud_pfn(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-	if (!__gup_device_huge(fault_pfn, addr, end, flags, pages, nr))
+	if (!gup_fast_devmap_leaf(fault_pfn, addr, end, flags, pages, nr))
 		return 0;
 
 	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
-		undo_dev_pagemap(nr, nr_start, flags, pages);
+		gup_fast_undo_dev_pagemap(nr, nr_start, flags, pages);
 		return 0;
 	}
 	return 1;
 }
 #else
-static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
-				 unsigned long end, unsigned int flags,
-				 struct page **pages, int *nr)
+static int gup_fast_devmap_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages,
+		int *nr)
 {
 	BUILD_BUG();
 	return 0;
 }
 
-static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
-				 unsigned long end, unsigned int flags,
-				 struct page **pages, int *nr)
+static int gup_fast_devmap_pud_leaf(pud_t pud, pud_t *pudp, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages,
+		int *nr)
 {
 	BUILD_BUG();
 	return 0;
 }
 #endif
 
-static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
-			unsigned long end, unsigned int flags,
-			struct page **pages, int *nr)
+static int gup_fast_pmd_leaf(pmd_t orig, pmd_t *pmdp, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages,
+		int *nr)
 {
 	struct page *page;
 	struct folio *folio;
@@ -3123,8 +3121,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 	if (pmd_devmap(orig)) {
 		if (unlikely(flags & FOLL_LONGTERM))
 			return 0;
-		return __gup_device_huge_pmd(orig, pmdp, addr, end, flags,
-					     pages, nr);
+		return gup_fast_devmap_pmd_leaf(orig, pmdp, addr, end, flags,
+						pages, nr);
 	}
 
 	page = pmd_page(orig);
@@ -3153,9 +3151,9 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 	return 1;
 }
 
-static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
-			unsigned long end, unsigned int flags,
-			struct page **pages, int *nr)
+static int gup_fast_pud_leaf(pud_t orig, pud_t *pudp, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages,
+		int *nr)
 {
 	struct page *page;
 	struct folio *folio;
@@ -3167,8 +3165,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 	if (pud_devmap(orig)) {
 		if (unlikely(flags & FOLL_LONGTERM))
 			return 0;
-		return __gup_device_huge_pud(orig, pudp, addr, end, flags,
-					     pages, nr);
+		return gup_fast_devmap_pud_leaf(orig, pudp, addr, end, flags,
+						pages, nr);
 	}
 
 	page = pud_page(orig);
@@ -3198,9 +3196,9 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 	return 1;
 }
 
-static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
-			unsigned long end, unsigned int flags,
-			struct page **pages, int *nr)
+static int gup_fast_pgd_leaf(pgd_t orig, pgd_t *pgdp, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages,
+		int *nr)
 {
 	int refs;
 	struct page *page;
@@ -3238,8 +3236,9 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 	return 1;
 }
 
-static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned long end,
-			 unsigned int flags, struct page **pages, int *nr)
+static int gup_fast_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages,
+		int *nr)
 {
 	unsigned long next;
 	pmd_t *pmdp;
@@ -3253,11 +3252,11 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
 			return 0;
 
 		if (unlikely(pmd_leaf(pmd))) {
-			/* See gup_pte_range() */
+			/* See gup_fast_pte_range() */
 			if (pmd_protnone(pmd))
 				return 0;
 
-			if (!gup_huge_pmd(pmd, pmdp, addr, next, flags,
+			if (!gup_fast_pmd_leaf(pmd, pmdp, addr, next, flags,
 					  pages, nr))
 				return 0;
 
@@ -3266,18 +3265,20 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
 			 * architecture have different format for hugetlbfs
 			 * pmd format and THP pmd format
 			 */
-			if (!gup_huge_pd(__hugepd(pmd_val(pmd)), addr,
-					 PMD_SHIFT, next, flags, pages, nr))
+			if (!gup_fast_hugepd(__hugepd(pmd_val(pmd)), addr,
+					     PMD_SHIFT, next, flags, pages, nr))
 				return 0;
-		} else if (!gup_pte_range(pmd, pmdp, addr, next, flags, pages, nr))
+		} else if (!gup_fast_pte_range(pmd, pmdp, addr, next, flags,
+					       pages, nr))
 			return 0;
 	} while (pmdp++, addr = next, addr != end);
 
 	return 1;
 }
 
-static int gup_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr, unsigned long end,
-			 unsigned int flags, struct page **pages, int *nr)
+static int gup_fast_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages,
+		int *nr)
 {
 	unsigned long next;
 	pud_t *pudp;
@@ -3290,22 +3291,24 @@ static int gup_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr, unsigned lo
 		if (unlikely(!pud_present(pud)))
 			return 0;
 		if (unlikely(pud_leaf(pud))) {
-			if (!gup_huge_pud(pud, pudp, addr, next, flags,
-					  pages, nr))
+			if (!gup_fast_pud_leaf(pud, pudp, addr, next, flags,
+					       pages, nr))
 				return 0;
 		} else if (unlikely(is_hugepd(__hugepd(pud_val(pud))))) {
-			if (!gup_huge_pd(__hugepd(pud_val(pud)), addr,
-					 PUD_SHIFT, next, flags, pages, nr))
+			if (!gup_fast_hugepd(__hugepd(pud_val(pud)), addr,
+					     PUD_SHIFT, next, flags, pages, nr))
 				return 0;
-		} else if (!gup_pmd_range(pudp, pud, addr, next, flags, pages, nr))
+		} else if (!gup_fast_pmd_range(pudp, pud, addr, next, flags,
+					       pages, nr))
 			return 0;
 	} while (pudp++, addr = next, addr != end);
 
 	return 1;
 }
 
-static int gup_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr, unsigned long end,
-			 unsigned int flags, struct page **pages, int *nr)
+static int gup_fast_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr,
+		unsigned long end, unsigned int flags, struct page **pages,
+		int *nr)
 {
 	unsigned long next;
 	p4d_t *p4dp;
@@ -3319,17 +3322,18 @@ static int gup_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr, unsigned lo
 			return 0;
 		BUILD_BUG_ON(p4d_leaf(p4d));
 		if (unlikely(is_hugepd(__hugepd(p4d_val(p4d))))) {
-			if (!gup_huge_pd(__hugepd(p4d_val(p4d)), addr,
-					 P4D_SHIFT, next, flags, pages, nr))
+			if (!gup_fast_hugepd(__hugepd(p4d_val(p4d)), addr,
+					     P4D_SHIFT, next, flags, pages, nr))
 				return 0;
-		} else if (!gup_pud_range(p4dp, p4d, addr, next, flags, pages, nr))
+		} else if (!gup_fast_pud_range(p4dp, p4d, addr, next, flags,
+					       pages, nr))
 			return 0;
 	} while (p4dp++, addr = next, addr != end);
 
 	return 1;
 }
 
-static void gup_pgd_range(unsigned long addr, unsigned long end,
+static void gup_fast_pgd_range(unsigned long addr, unsigned long end,
 		unsigned int flags, struct page **pages, int *nr)
 {
 	unsigned long next;
@@ -3343,19 +3347,20 @@ static void gup_pgd_range(unsigned long addr, unsigned long end,
 		if (pgd_none(pgd))
 			return;
 		if (unlikely(pgd_leaf(pgd))) {
-			if (!gup_huge_pgd(pgd, pgdp, addr, next, flags,
-					  pages, nr))
+			if (!gup_fast_pgd_leaf(pgd, pgdp, addr, next, flags,
+					       pages, nr))
 				return;
 		} else if (unlikely(is_hugepd(__hugepd(pgd_val(pgd))))) {
-			if (!gup_huge_pd(__hugepd(pgd_val(pgd)), addr,
-					 PGDIR_SHIFT, next, flags, pages, nr))
+			if (!gup_fast_hugepd(__hugepd(pgd_val(pgd)), addr,
+					     PGDIR_SHIFT, next, flags, pages, nr))
 				return;
-		} else if (!gup_p4d_range(pgdp, pgd, addr, next, flags, pages, nr))
+		} else if (!gup_fast_p4d_range(pgdp, pgd, addr, next, flags,
+					       pages, nr))
 			return;
 	} while (pgdp++, addr = next, addr != end);
 }
 #else
-static inline void gup_pgd_range(unsigned long addr, unsigned long end,
+static inline void gup_fast_pgd_range(unsigned long addr, unsigned long end,
 		unsigned int flags, struct page **pages, int *nr)
 {
 }
@@ -3372,10 +3377,8 @@ static bool gup_fast_permitted(unsigned long start, unsigned long end)
 }
 #endif
 
-static unsigned long lockless_pages_from_mm(unsigned long start,
-					    unsigned long end,
-					    unsigned int gup_flags,
-					    struct page **pages)
+static unsigned long gup_fast(unsigned long start, unsigned long end,
+		unsigned int gup_flags, struct page **pages)
 {
 	unsigned long flags;
 	int nr_pinned = 0;
@@ -3403,16 +3406,16 @@ static unsigned long lockless_pages_from_mm(unsigned long start,
 	 * that come from THPs splitting.
 	 */
 	local_irq_save(flags);
-	gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
+	gup_fast_pgd_range(start, end, gup_flags, pages, &nr_pinned);
 	local_irq_restore(flags);
 
 	/*
 	 * When pinning pages for DMA there could be a concurrent write protect
-	 * from fork() via copy_page_range(), in this case always fail fast GUP.
+	 * from fork() via copy_page_range(), in this case always fail GUP-fast.
 	 */
 	if (gup_flags & FOLL_PIN) {
 		if (read_seqcount_retry(&current->mm->write_protect_seq, seq)) {
-			unpin_user_pages_lockless(pages, nr_pinned);
+			gup_fast_unpin_user_pages(pages, nr_pinned);
 			return 0;
 		} else {
 			sanity_check_pinned_pages(pages, nr_pinned);
@@ -3421,10 +3424,8 @@ static unsigned long lockless_pages_from_mm(unsigned long start,
 	return nr_pinned;
 }
 
-static int internal_get_user_pages_fast(unsigned long start,
-					unsigned long nr_pages,
-					unsigned int gup_flags,
-					struct page **pages)
+static int gup_fast_fallback(unsigned long start, unsigned long nr_pages,
+		unsigned int gup_flags, struct page **pages)
 {
 	unsigned long len, end;
 	unsigned long nr_pinned;
@@ -3452,7 +3453,7 @@ static int internal_get_user_pages_fast(unsigned long start,
 	if (unlikely(!access_ok((void __user *)start, len)))
 		return -EFAULT;
 
-	nr_pinned = lockless_pages_from_mm(start, end, gup_flags, pages);
+	nr_pinned = gup_fast(start, end, gup_flags, pages);
 	if (nr_pinned == nr_pages || gup_flags & FOLL_FAST_ONLY)
 		return nr_pinned;
 
@@ -3506,7 +3507,7 @@ int get_user_pages_fast_only(unsigned long start, int nr_pages,
 			       FOLL_GET | FOLL_FAST_ONLY))
 		return -EINVAL;
 
-	return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages);
+	return gup_fast_fallback(start, nr_pages, gup_flags, pages);
 }
 EXPORT_SYMBOL_GPL(get_user_pages_fast_only);
 
@@ -3537,7 +3538,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 	 */
 	if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_GET))
 		return -EINVAL;
-	return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages);
+	return gup_fast_fallback(start, nr_pages, gup_flags, pages);
 }
 EXPORT_SYMBOL_GPL(get_user_pages_fast);
 
@@ -3565,7 +3566,7 @@ int pin_user_pages_fast(unsigned long start, int nr_pages,
 {
 	if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_PIN))
 		return -EINVAL;
-	return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages);
+	return gup_fast_fallback(start, nr_pages, gup_flags, pages);
 }
 EXPORT_SYMBOL_GPL(pin_user_pages_fast);
-- 
2.44.0

^ permalink raw reply related	[flat|nested] 17+ messages in thread
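[Editorial note: the "(1) pin the page, then (2) check pte not changed" ordering that the comments in this patch describe can be sketched in plain userspace C. All names here (fake_pte_t, gup_fast_try_pin(), ...) are hypothetical stand-ins for illustration, not kernel API:]

```c
/*
 * Userspace sketch of the GUP-fast ordering described in the mm/gup.c
 * comments above: (1) snapshot the pte, (2) pin the page, (3) re-read
 * the pte and undo the pin if it changed. Hypothetical names only.
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

typedef _Atomic unsigned long fake_pte_t;	/* stand-in for a PTE */

struct fake_page {
	_Atomic int refcount;			/* stand-in for a page pin */
};

/* Re-check step: did the pte change under us since the snapshot? */
static bool fake_pte_unchanged(fake_pte_t *ptep, unsigned long snapshot)
{
	return atomic_load(ptep) == snapshot;
}

/*
 * Pin "page" the GUP-fast way: snapshot the pte, take the reference,
 * then verify the pte still matches the snapshot. On a race, drop the
 * reference again and report failure (the caller would then fall back
 * to slow GUP).
 */
static bool gup_fast_try_pin(fake_pte_t *ptep, struct fake_page *page)
{
	unsigned long snapshot = atomic_load(ptep);	/* (1) */

	atomic_fetch_add(&page->refcount, 1);		/* (2) pin */
	if (!fake_pte_unchanged(ptep, snapshot)) {	/* (3) recheck */
		atomic_fetch_sub(&page->refcount, 1);	/* raced: undo */
		return false;
	}
	return true;
}
```

A concurrent pte change (e.g. a THP split or an unmap) between steps (1) and (3) makes the recheck fail, which is the property the real walker relies on in gup_fast_pte_range().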
* Re: [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions
  2024-04-02 12:55 ` [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions David Hildenbrand
@ 2024-04-13 20:07   ` John Hubbard
  2024-04-26  7:17   ` David Hildenbrand
  1 sibling, 0 replies; 17+ messages in thread
From: John Hubbard @ 2024-04-13 20:07 UTC (permalink / raw)
  To: David Hildenbrand, linux-kernel
  Cc: linux-mm, Andrew Morton, Mike Rapoport, Jason Gunthorpe, Peter Xu,
	linux-arm-kernel, loongarch, linux-mips, linuxppc-dev, linux-s390,
	linux-sh, linux-perf-users, linux-fsdevel, linux-riscv, x86

On 4/2/24 5:55 AM, David Hildenbrand wrote:
> Let's consistently call the "fast-only" part of GUP "GUP-fast" and rename
> all relevant internal functions to start with "gup_fast", to make it
> clearer that this is not ordinary GUP. The current mixture of
> "lockless", "gup" and "gup_fast" is confusing.
>
> Further, avoid the term "huge" when talking about a "leaf" -- for
> example, we nowadays check pmd_leaf() because pmd_huge() is gone. For the
> "hugepd"/"hugepte" stuff, it's part of the name ("is_hugepd"), so that
> stays.
>
> What remains is the "external" interface:
> * get_user_pages_fast_only()
> * get_user_pages_fast()
> * pin_user_pages_fast()
>
> The high-level internal functions for GUP-fast (+slow fallback) are now:
> * internal_get_user_pages_fast() -> gup_fast_fallback()
> * lockless_pages_from_mm() -> gup_fast()
>
> The basic GUP-fast walker functions:
> * gup_pgd_range() -> gup_fast_pgd_range()
> * gup_p4d_range() -> gup_fast_p4d_range()
> * gup_pud_range() -> gup_fast_pud_range()
> * gup_pmd_range() -> gup_fast_pmd_range()
> * gup_pte_range() -> gup_fast_pte_range()
> * gup_huge_pgd() -> gup_fast_pgd_leaf()
> * gup_huge_pud() -> gup_fast_pud_leaf()
> * gup_huge_pmd() -> gup_fast_pmd_leaf()

This is my favorite cleanup of 2024 so far. The above mix was confusing
even if one worked on this file all day long -- you constantly have to
translate from function name to "is this fast or slow?". whew.

>
> The weird hugepd stuff:
> * gup_huge_pd() -> gup_fast_hugepd()
> * gup_hugepte() -> gup_fast_hugepte()
>
> The weird devmap stuff:
> * __gup_device_huge_pud() -> gup_fast_devmap_pud_leaf()
> * __gup_device_huge_pmd -> gup_fast_devmap_pmd_leaf()
> * __gup_device_huge() -> gup_fast_devmap_leaf()
> * undo_dev_pagemap() -> gup_fast_undo_dev_pagemap()
>
> Helper functions:
> * unpin_user_pages_lockless() -> gup_fast_unpin_user_pages()
> * gup_fast_folio_allowed() is already properly named
> * gup_fast_permitted() is already properly named
>
> With "gup_fast()", we now even have a function that is referred to in
> comment in mm/mmu_gather.c.
>
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
> Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>   mm/gup.c | 205 ++++++++++++++++++++++++++++---------------------------
>   1 file changed, 103 insertions(+), 102 deletions(-)
>

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,
-- 
John Hubbard
NVIDIA

^ permalink raw reply	[flat|nested] 17+ messages in thread
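[Editorial note: the other race the patch's comments mention -- a concurrent write protect from fork() via copy_page_range(), caught by the read_seqcount_retry() on write_protect_seq -- has a shape that can be modeled with a minimal sequence counter in userspace C. The names below are illustrative stand-ins, not the kernel's seqlock API:]

```c
/*
 * Minimal userspace model of the write_protect_seq check in gup_fast():
 * the reader snapshots a sequence counter before the lockless walk and
 * retries (unpinning everything) if a writer -- fork()'s
 * copy_page_range() in the kernel -- ran meanwhile. Hypothetical names.
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

typedef _Atomic unsigned int fake_seqcount_t;

/* Reader: snapshot before the page-table walk. */
static unsigned int fake_read_seqcount_begin(fake_seqcount_t *s)
{
	return atomic_load(s);
}

/* Writer side: the counter is odd while a write is in progress. */
static void fake_write_seqcount_begin(fake_seqcount_t *s)
{
	atomic_fetch_add(s, 1);
}

static void fake_write_seqcount_end(fake_seqcount_t *s)
{
	atomic_fetch_add(s, 1);
}

/*
 * Reader: true means "a writer ran (or is still running); throw away
 * the pins and fall back", mirroring what gup_fast() does for FOLL_PIN.
 */
static bool fake_read_seqcount_retry(fake_seqcount_t *s, unsigned int snap)
{
	return (snap & 1) || atomic_load(s) != snap;
}
```

This is why gup_fast() can unpin everything via gup_fast_unpin_user_pages() and return 0: any write-protect that raced with the walk leaves the counter different from the snapshot.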
* Re: [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions
  2024-04-02 12:55 ` [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions David Hildenbrand
  2024-04-13 20:07   ` John Hubbard
@ 2024-04-26  7:17   ` David Hildenbrand
  2024-04-26 13:44     ` Peter Xu
  1 sibling, 1 reply; 17+ messages in thread
From: David Hildenbrand @ 2024-04-26 7:17 UTC (permalink / raw)
  To: linux-kernel, Peter Xu
  Cc: linux-mm, Andrew Morton, Mike Rapoport, Jason Gunthorpe,
	John Hubbard, linux-arm-kernel, loongarch, linux-mips,
	linuxppc-dev, linux-s390, linux-sh, linux-perf-users,
	linux-fsdevel, linux-riscv, x86

On 02.04.24 14:55, David Hildenbrand wrote:
> Let's consistently call the "fast-only" part of GUP "GUP-fast" and rename
> all relevant internal functions to start with "gup_fast", to make it
> clearer that this is not ordinary GUP. The current mixture of
> "lockless", "gup" and "gup_fast" is confusing.
>
> Further, avoid the term "huge" when talking about a "leaf" -- for
> example, we nowadays check pmd_leaf() because pmd_huge() is gone. For the
> "hugepd"/"hugepte" stuff, it's part of the name ("is_hugepd"), so that
> stays.
>
> What remains is the "external" interface:
> * get_user_pages_fast_only()
> * get_user_pages_fast()
> * pin_user_pages_fast()
>
> The high-level internal functions for GUP-fast (+slow fallback) are now:
> * internal_get_user_pages_fast() -> gup_fast_fallback()
> * lockless_pages_from_mm() -> gup_fast()
>
> The basic GUP-fast walker functions:
> * gup_pgd_range() -> gup_fast_pgd_range()
> * gup_p4d_range() -> gup_fast_p4d_range()
> * gup_pud_range() -> gup_fast_pud_range()
> * gup_pmd_range() -> gup_fast_pmd_range()
> * gup_pte_range() -> gup_fast_pte_range()
> * gup_huge_pgd() -> gup_fast_pgd_leaf()
> * gup_huge_pud() -> gup_fast_pud_leaf()
> * gup_huge_pmd() -> gup_fast_pmd_leaf()
>
> The weird hugepd stuff:
> * gup_huge_pd() -> gup_fast_hugepd()
> * gup_hugepte() -> gup_fast_hugepte()

I just realized that we end up calling these from follow_hugepd() as
well. And something seems to be off, because gup_fast_hugepd() won't
have the VMA even in the slow-GUP case to pass it to gup_must_unshare().

So these are GUP-fast functions and the terminology seems correct. But
the usage from follow_hugepd() is questionable,

commit a12083d721d703f985f4403d6b333cc449f838f6
Author: Peter Xu <peterx@redhat.com>
Date:   Wed Mar 27 11:23:31 2024 -0400

    mm/gup: handle hugepd for follow_page()

states "With previous refactors on fast-gup gup_huge_pd(), most of the
code can be leveraged", which doesn't look quite true just staring at
the gup_must_unshare() call where we don't pass the VMA. Also,
"unlikely(pte_val(pte) != pte_val(ptep_get(ptep)" doesn't make any sense
for slow GUP ...

@Peter, any insights?

-- 
Cheers,

David / dhildenb

^ permalink raw reply	[flat|nested] 17+ messages in thread
* Re: [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions 2024-04-26 7:17 ` David Hildenbrand @ 2024-04-26 13:44 ` Peter Xu 2024-04-26 16:12 ` Peter Xu 0 siblings, 1 reply; 17+ messages in thread From: Peter Xu @ 2024-04-26 13:44 UTC (permalink / raw) To: David Hildenbrand Cc: linux-kernel, linux-mm, Andrew Morton, Mike Rapoport, Jason Gunthorpe, John Hubbard, linux-arm-kernel, loongarch, linux-mips, linuxppc-dev, linux-s390, linux-sh, linux-perf-users, linux-fsdevel, linux-riscv, x86 On Fri, Apr 26, 2024 at 09:17:47AM +0200, David Hildenbrand wrote: > On 02.04.24 14:55, David Hildenbrand wrote: > > Let's consistently call the "fast-only" part of GUP "GUP-fast" and rename > > all relevant internal functions to start with "gup_fast", to make it > > clearer that this is not ordinary GUP. The current mixture of > > "lockless", "gup" and "gup_fast" is confusing. > > > > Further, avoid the term "huge" when talking about a "leaf" -- for > > example, we nowadays check pmd_leaf() because pmd_huge() is gone. For the > > "hugepd"/"hugepte" stuff, it's part of the name ("is_hugepd"), so that > > stays. 
> > > > What remains is the "external" interface: > > * get_user_pages_fast_only() > > * get_user_pages_fast() > > * pin_user_pages_fast() > > > > The high-level internal functions for GUP-fast (+slow fallback) are now: > > * internal_get_user_pages_fast() -> gup_fast_fallback() > > * lockless_pages_from_mm() -> gup_fast() > > > > The basic GUP-fast walker functions: > > * gup_pgd_range() -> gup_fast_pgd_range() > > * gup_p4d_range() -> gup_fast_p4d_range() > > * gup_pud_range() -> gup_fast_pud_range() > > * gup_pmd_range() -> gup_fast_pmd_range() > > * gup_pte_range() -> gup_fast_pte_range() > > * gup_huge_pgd() -> gup_fast_pgd_leaf() > > * gup_huge_pud() -> gup_fast_pud_leaf() > > * gup_huge_pmd() -> gup_fast_pmd_leaf() > > > > The weird hugepd stuff: > > * gup_huge_pd() -> gup_fast_hugepd() > > * gup_hugepte() -> gup_fast_hugepte() > > I just realized that we end up calling these from follow_hugepd() as well. > And something seems to be off, because gup_fast_hugepd() won't have the VMA > even in the slow-GUP case to pass it to gup_must_unshare(). > > So these are GUP-fast functions and the terminology seem correct. But the > usage from follow_hugepd() is questionable, > > commit a12083d721d703f985f4403d6b333cc449f838f6 > Author: Peter Xu <peterx@redhat.com> > Date: Wed Mar 27 11:23:31 2024 -0400 > > mm/gup: handle hugepd for follow_page() > > > states "With previous refactors on fast-gup gup_huge_pd(), most of the code > can be leveraged", which doesn't look quite true just staring the the > gup_must_unshare() call where we don't pass the VMA. Also, > "unlikely(pte_val(pte) != pte_val(ptep_get(ptep)" doesn't make any sense for > slow GUP ... Yes it's not needed, just doesn't look worthwhile to put another helper on top just for this. 
I mentioned this in the commit message here: There's something not needed for follow page, for example, gup_hugepte() tries to detect pgtable entry change which will never happen with slow gup (which has the pgtable lock held), but that's not a problem to check. > > @Peter, any insights? However I think we should pass vma in for sure, I guess I overlooked that, and it didn't show up in my tests either, as I probably missed ./cow. I'll prepare a separate patch on top of this series and the gup-fast rename patches (I saw this one just reached mm-stable), and I'll see whether I can test it too if I can find a Power system fast enough. I'll probably drop the "fast" in the hugepd function names too. Thanks, -- Peter Xu ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions 2024-04-26 13:44 ` Peter Xu @ 2024-04-26 16:12 ` Peter Xu 2024-04-26 17:28 ` David Hildenbrand 0 siblings, 1 reply; 17+ messages in thread From: Peter Xu @ 2024-04-26 16:12 UTC (permalink / raw) To: David Hildenbrand Cc: linux-kernel, linux-mm, Andrew Morton, Mike Rapoport, Jason Gunthorpe, John Hubbard, linux-arm-kernel, loongarch, linux-mips, linuxppc-dev, linux-s390, linux-sh, linux-perf-users, linux-fsdevel, linux-riscv, x86 On Fri, Apr 26, 2024 at 09:44:58AM -0400, Peter Xu wrote: > On Fri, Apr 26, 2024 at 09:17:47AM +0200, David Hildenbrand wrote: > > On 02.04.24 14:55, David Hildenbrand wrote: > > > Let's consistently call the "fast-only" part of GUP "GUP-fast" and rename > > > all relevant internal functions to start with "gup_fast", to make it > > > clearer that this is not ordinary GUP. The current mixture of > > > "lockless", "gup" and "gup_fast" is confusing. > > > > > > Further, avoid the term "huge" when talking about a "leaf" -- for > > > example, we nowadays check pmd_leaf() because pmd_huge() is gone. For the > > > "hugepd"/"hugepte" stuff, it's part of the name ("is_hugepd"), so that > > > stays. 
> > > > > > What remains is the "external" interface: > > > * get_user_pages_fast_only() > > > * get_user_pages_fast() > > > * pin_user_pages_fast() > > > > > > The high-level internal functions for GUP-fast (+slow fallback) are now: > > > * internal_get_user_pages_fast() -> gup_fast_fallback() > > > * lockless_pages_from_mm() -> gup_fast() > > > > > > The basic GUP-fast walker functions: > > > * gup_pgd_range() -> gup_fast_pgd_range() > > > * gup_p4d_range() -> gup_fast_p4d_range() > > > * gup_pud_range() -> gup_fast_pud_range() > > > * gup_pmd_range() -> gup_fast_pmd_range() > > > * gup_pte_range() -> gup_fast_pte_range() > > > * gup_huge_pgd() -> gup_fast_pgd_leaf() > > > * gup_huge_pud() -> gup_fast_pud_leaf() > > > * gup_huge_pmd() -> gup_fast_pmd_leaf() > > > > > > The weird hugepd stuff: > > > * gup_huge_pd() -> gup_fast_hugepd() > > > * gup_hugepte() -> gup_fast_hugepte() > > > > I just realized that we end up calling these from follow_hugepd() as well. > > And something seems to be off, because gup_fast_hugepd() won't have the VMA > > even in the slow-GUP case to pass it to gup_must_unshare(). > > > > So these are GUP-fast functions and the terminology seem correct. But the > > usage from follow_hugepd() is questionable, > > > > commit a12083d721d703f985f4403d6b333cc449f838f6 > > Author: Peter Xu <peterx@redhat.com> > > Date: Wed Mar 27 11:23:31 2024 -0400 > > > > mm/gup: handle hugepd for follow_page() > > > > > > states "With previous refactors on fast-gup gup_huge_pd(), most of the code > > can be leveraged", which doesn't look quite true just staring the the > > gup_must_unshare() call where we don't pass the VMA. Also, > > "unlikely(pte_val(pte) != pte_val(ptep_get(ptep)" doesn't make any sense for > > slow GUP ... > > Yes it's not needed, just doesn't look worthwhile to put another helper on > top just for this. 
I mentioned this in the commit message here: > > There's something not needed for follow page, for example, gup_hugepte() > tries to detect pgtable entry change which will never happen with slow > gup (which has the pgtable lock held), but that's not a problem to check. > > > > > > @Peter, any insights? > > However I think we should pass vma in for sure, I guess I overlooked that, > and it didn't expose in my tests too as I probably missed ./cow. > > I'll prepare a separate patch on top of this series and the gup-fast rename > patches (I saw this one just reached mm-stable), and I'll see whether I can > test it too if I can find a Power system fast enough. I'll probably drop > the "fast" in the hugepd function names too. Hmm, so when I enable 2M hugetlb I found ./cow is even failing on x86. # ./cow | grep -B1 "not ok" # [RUN] vmsplice() + unmap in child ... with hugetlb (2048 kB) not ok 161 No leak from parent into child -- # [RUN] vmsplice() + unmap in child with mprotect() optimization ... with hugetlb (2048 kB) not ok 215 No leak from parent into child -- # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (2048 kB) not ok 269 No leak from child into parent -- # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (2048 kB) not ok 323 No leak from child into parent And it looks like it was always failing... perhaps since the start? We never treated hugetlb the same as normal anon in that regard in the vmsplice() fix. I drafted a patch to allow the same refcount>1 detection, and then all tests pass for me, as below. David, I'd like to double check with you before I post anything: is that your intention to do so when working on the R/O pinning, or not?
Thanks, ========= From 7300c249738dadda1457c755b597c1551dfe8dc6 Mon Sep 17 00:00:00 2001 From: Peter Xu <peterx@redhat.com> Date: Fri, 26 Apr 2024 11:41:12 -0400 Subject: [PATCH] mm/hugetlb: Fix vmsplice case on memory leak once more Signed-off-by: Peter Xu <peterx@redhat.com> --- mm/hugetlb.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 417fc5cdb6ee..1ca102013561 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -5961,10 +5961,13 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio, retry_avoidcopy: /* - * If no-one else is actually using this page, we're the exclusive - * owner and can reuse this page. + * If the page is marked exclusively owned (e.g. longterm pinned), + * we can reuse it. Otherwise if no-one else is using this page, + * we can safely set the exclusive bit and reuse it. */ - if (folio_mapcount(old_folio) == 1 && folio_test_anon(old_folio)) { + if (folio_test_anon(old_folio) && + (PageAnonExclusive(&old_folio->page) || + folio_ref_count(old_folio) == 1)) { if (!PageAnonExclusive(&old_folio->page)) { folio_move_anon_rmap(old_folio, vma); SetPageAnonExclusive(&old_folio->page); -- 2.44.0 -- Peter Xu ^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions 2024-04-26 16:12 ` Peter Xu @ 2024-04-26 17:28 ` David Hildenbrand 2024-04-26 21:20 ` Peter Xu 0 siblings, 1 reply; 17+ messages in thread From: David Hildenbrand @ 2024-04-26 17:28 UTC (permalink / raw) To: Peter Xu Cc: linux-kernel, linux-mm, Andrew Morton, Mike Rapoport, Jason Gunthorpe, John Hubbard, linux-arm-kernel, loongarch, linux-mips, linuxppc-dev, linux-s390, linux-sh, linux-perf-users, linux-fsdevel, linux-riscv, x86 On 26.04.24 18:12, Peter Xu wrote: > On Fri, Apr 26, 2024 at 09:44:58AM -0400, Peter Xu wrote: >> On Fri, Apr 26, 2024 at 09:17:47AM +0200, David Hildenbrand wrote: >>> On 02.04.24 14:55, David Hildenbrand wrote: >>>> Let's consistently call the "fast-only" part of GUP "GUP-fast" and rename >>>> all relevant internal functions to start with "gup_fast", to make it >>>> clearer that this is not ordinary GUP. The current mixture of >>>> "lockless", "gup" and "gup_fast" is confusing. >>>> >>>> Further, avoid the term "huge" when talking about a "leaf" -- for >>>> example, we nowadays check pmd_leaf() because pmd_huge() is gone. For the >>>> "hugepd"/"hugepte" stuff, it's part of the name ("is_hugepd"), so that >>>> stays. 
>>>> >>>> What remains is the "external" interface: >>>> * get_user_pages_fast_only() >>>> * get_user_pages_fast() >>>> * pin_user_pages_fast() >>>> >>>> The high-level internal functions for GUP-fast (+slow fallback) are now: >>>> * internal_get_user_pages_fast() -> gup_fast_fallback() >>>> * lockless_pages_from_mm() -> gup_fast() >>>> >>>> The basic GUP-fast walker functions: >>>> * gup_pgd_range() -> gup_fast_pgd_range() >>>> * gup_p4d_range() -> gup_fast_p4d_range() >>>> * gup_pud_range() -> gup_fast_pud_range() >>>> * gup_pmd_range() -> gup_fast_pmd_range() >>>> * gup_pte_range() -> gup_fast_pte_range() >>>> * gup_huge_pgd() -> gup_fast_pgd_leaf() >>>> * gup_huge_pud() -> gup_fast_pud_leaf() >>>> * gup_huge_pmd() -> gup_fast_pmd_leaf() >>>> >>>> The weird hugepd stuff: >>>> * gup_huge_pd() -> gup_fast_hugepd() >>>> * gup_hugepte() -> gup_fast_hugepte() >>> >>> I just realized that we end up calling these from follow_hugepd() as well. >>> And something seems to be off, because gup_fast_hugepd() won't have the VMA >>> even in the slow-GUP case to pass it to gup_must_unshare(). >>> >>> So these are GUP-fast functions and the terminology seem correct. But the >>> usage from follow_hugepd() is questionable, >>> >>> commit a12083d721d703f985f4403d6b333cc449f838f6 >>> Author: Peter Xu <peterx@redhat.com> >>> Date: Wed Mar 27 11:23:31 2024 -0400 >>> >>> mm/gup: handle hugepd for follow_page() >>> >>> >>> states "With previous refactors on fast-gup gup_huge_pd(), most of the code >>> can be leveraged", which doesn't look quite true just staring the the >>> gup_must_unshare() call where we don't pass the VMA. Also, >>> "unlikely(pte_val(pte) != pte_val(ptep_get(ptep)" doesn't make any sense for >>> slow GUP ... >> >> Yes it's not needed, just doesn't look worthwhile to put another helper on >> top just for this. 
I mentioned this in the commit message here: >> >> There's something not needed for follow page, for example, gup_hugepte() >> tries to detect pgtable entry change which will never happen with slow >> gup (which has the pgtable lock held), but that's not a problem to check. >> >>> >>> @Peter, any insights? >> >> However I think we should pass vma in for sure, I guess I overlooked that, >> and it didn't expose in my tests too as I probably missed ./cow. >> >> I'll prepare a separate patch on top of this series and the gup-fast rename >> patches (I saw this one just reached mm-stable), and I'll see whether I can >> test it too if I can find a Power system fast enough. I'll probably drop >> the "fast" in the hugepd function names too. > For the missing VMA parameter, the cow.c test might not trigger it. We never need the VMA to make a pinning decision for anonymous memory. We'll trigger an unsharing fault, get an exclusive anonymous page and can continue. We need the VMA in gup_must_unshare(), when long-term pinning a file hugetlb page. I *think* the gup_longterm.c selftest should trigger that, especially: # [RUN] R/O longterm GUP-fast pin in MAP_SHARED file mapping ... with memfd hugetlb (2048 kB) ... # [RUN] R/O longterm GUP-fast pin in MAP_SHARED file mapping ... with memfd hugetlb (1048576 kB) We need a MAP_SHARED page where the PTE is R/O that we want to long-term pin R/O. I don't remember off the top of my head if the test here might have a R/W-mapped folio. If so, we could extend it to cover that. > Hmm, so when I enable 2M hugetlb I found ./cow is even failing on x86. > > # ./cow | grep -B1 "not ok" > # [RUN] vmsplice() + unmap in child ... with hugetlb (2048 kB) > not ok 161 No leak from parent into child > -- > # [RUN] vmsplice() + unmap in child with mprotect() optimization ... with hugetlb (2048 kB) > not ok 215 No leak from parent into child > -- > # [RUN] vmsplice() before fork(), unmap in parent after fork() ...
with hugetlb (2048 kB) > not ok 269 No leak from child into parent > -- > # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (2048 kB) > not ok 323 No leak from child into parent > > And it looks like it was always failing.. perhaps since the start? We Yes! commit 7dad331be7816103eba8c12caeb88fbd3599c0b9 Author: David Hildenbrand <david@redhat.com> Date: Tue Sep 27 13:01:17 2022 +0200 selftests/vm: anon_cow: hugetlb tests Let's run all existing test cases with all hugetlb sizes we're able to detect. Note that some tests cases still fail. This will, for example, be fixed once vmsplice properly uses FOLL_PIN instead of FOLL_GET for pinning. With 2 MiB and 1 GiB hugetlb on x86_64, the expected failures are: # [RUN] vmsplice() + unmap in child ... with hugetlb (2048 kB) not ok 23 No leak from parent into child # [RUN] vmsplice() + unmap in child ... with hugetlb (1048576 kB) not ok 24 No leak from parent into child # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (2048 kB) not ok 35 No leak from child into parent # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (1048576 kB) not ok 36 No leak from child into parent # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (2048 kB) not ok 47 No leak from child into parent # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (1048576 kB) not ok 48 No leak from child into parent As it keeps confusing people (until somebody cares enough to fix vmsplice), I already thought about just disabling the test and adding a comment why it happens and why nobody cares. > didn't do the same on hugetlb v.s. normal anon from that regard on the > vmsplice() fix. > > I drafted a patch to allow refcount>1 detection as the same, then all tests > pass for me, as below. > > David, I'd like to double check with you before I post anything: is that > your intention to do so when working on the R/O pinning or not? 
Here certainly the "if it were easy, it would already have been done" principle applies. :) The issue is the following: hugetlb pages are scarce resources that cannot usually be overcommitted. For ordinary memory, we don't care if we COW in some corner case because there is an unexpected reference. You temporarily consume an additional page that gets freed as soon as the unexpected reference is dropped. For hugetlb, it is problematic. Assume you have reserved a single 1 GiB hugetlb page and your process uses that in a MAP_PRIVATE mapping. Then it calls fork() and the child quits immediately. If you decide to COW, you would need a second hugetlb page, which we don't have, so you have to crash the program. And in hugetlb it's extremely easy to not get folio_ref_count() == 1: hugetlb_fault() will do a folio_get(folio) before calling hugetlb_wp()! ... so you essentially always copy. At that point I walked away from that, letting vmsplice() be fixed at some point. Dave Howells was close at some point IIRC ... I had some ideas about retrying until the other reference is gone (which cannot be a longterm GUP pin), but as vmsplice essentially does without FOLL_PIN|FOLL_LONGTERM, it's quite hopeless to resolve that as long as vmsplice holds longterm references the wrong way. --- One could argue that fork() with hugetlb and MAP_PRIVATE is stupid and fragile: assume your child MM is torn down deferred, and will unmap the hugetlb page deferred. Or assume you access the page concurrently with fork(). You'd have to COW and crash the program. BUT, there is a horribly ugly hack in hugetlb COW code where you *steal* the page from the child program and crash your child. I'm not making that up, it's horrible. -- Cheers, David / dhildenb ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions 2024-04-26 17:28 ` David Hildenbrand @ 2024-04-26 21:20 ` Peter Xu 2024-04-26 21:33 ` David Hildenbrand 0 siblings, 1 reply; 17+ messages in thread From: Peter Xu @ 2024-04-26 21:20 UTC (permalink / raw) To: David Hildenbrand Cc: linux-kernel, linux-mm, Andrew Morton, Mike Rapoport, Jason Gunthorpe, John Hubbard, linux-arm-kernel, loongarch, linux-mips, linuxppc-dev, linux-s390, linux-sh, linux-perf-users, linux-fsdevel, linux-riscv, x86 On Fri, Apr 26, 2024 at 07:28:31PM +0200, David Hildenbrand wrote: > On 26.04.24 18:12, Peter Xu wrote: > > On Fri, Apr 26, 2024 at 09:44:58AM -0400, Peter Xu wrote: > > > On Fri, Apr 26, 2024 at 09:17:47AM +0200, David Hildenbrand wrote: > > > > On 02.04.24 14:55, David Hildenbrand wrote: > > > > > Let's consistently call the "fast-only" part of GUP "GUP-fast" and rename > > > > > all relevant internal functions to start with "gup_fast", to make it > > > > > clearer that this is not ordinary GUP. The current mixture of > > > > > "lockless", "gup" and "gup_fast" is confusing. > > > > > > > > > > Further, avoid the term "huge" when talking about a "leaf" -- for > > > > > example, we nowadays check pmd_leaf() because pmd_huge() is gone. For the > > > > > "hugepd"/"hugepte" stuff, it's part of the name ("is_hugepd"), so that > > > > > stays. 
> > > > > > > > > > What remains is the "external" interface: > > > > > * get_user_pages_fast_only() > > > > > * get_user_pages_fast() > > > > > * pin_user_pages_fast() > > > > > > > > > > The high-level internal functions for GUP-fast (+slow fallback) are now: > > > > > * internal_get_user_pages_fast() -> gup_fast_fallback() > > > > > * lockless_pages_from_mm() -> gup_fast() > > > > > > > > > > The basic GUP-fast walker functions: > > > > > * gup_pgd_range() -> gup_fast_pgd_range() > > > > > * gup_p4d_range() -> gup_fast_p4d_range() > > > > > * gup_pud_range() -> gup_fast_pud_range() > > > > > * gup_pmd_range() -> gup_fast_pmd_range() > > > > > * gup_pte_range() -> gup_fast_pte_range() > > > > > * gup_huge_pgd() -> gup_fast_pgd_leaf() > > > > > * gup_huge_pud() -> gup_fast_pud_leaf() > > > > > * gup_huge_pmd() -> gup_fast_pmd_leaf() > > > > > > > > > > The weird hugepd stuff: > > > > > * gup_huge_pd() -> gup_fast_hugepd() > > > > > * gup_hugepte() -> gup_fast_hugepte() > > > > > > > > I just realized that we end up calling these from follow_hugepd() as well. > > > > And something seems to be off, because gup_fast_hugepd() won't have the VMA > > > > even in the slow-GUP case to pass it to gup_must_unshare(). > > > > > > > > So these are GUP-fast functions and the terminology seem correct. But the > > > > usage from follow_hugepd() is questionable, > > > > > > > > commit a12083d721d703f985f4403d6b333cc449f838f6 > > > > Author: Peter Xu <peterx@redhat.com> > > > > Date: Wed Mar 27 11:23:31 2024 -0400 > > > > > > > > mm/gup: handle hugepd for follow_page() > > > > > > > > > > > > states "With previous refactors on fast-gup gup_huge_pd(), most of the code > > > > can be leveraged", which doesn't look quite true just staring the the > > > > gup_must_unshare() call where we don't pass the VMA. Also, > > > > "unlikely(pte_val(pte) != pte_val(ptep_get(ptep)" doesn't make any sense for > > > > slow GUP ... 
> > > > > > Yes it's not needed, just doesn't look worthwhile to put another helper on > > > top just for this. I mentioned this in the commit message here: > > > > > > There's something not needed for follow page, for example, gup_hugepte() > > > tries to detect pgtable entry change which will never happen with slow > > > gup (which has the pgtable lock held), but that's not a problem to check. > > > > > > > > > > > @Peter, any insights? > > > > > > However I think we should pass vma in for sure, I guess I overlooked that, > > > and it didn't expose in my tests too as I probably missed ./cow. > > > > > > I'll prepare a separate patch on top of this series and the gup-fast rename > > > patches (I saw this one just reached mm-stable), and I'll see whether I can > > > test it too if I can find a Power system fast enough. I'll probably drop > > > the "fast" in the hugepd function names too. > > > > For the missing VMA parameter, the cow.c test might not trigger it. We never need the VMA to make > a pinning decision for anonymous memory. We'll trigger an unsharing fault, get an exclusive anonymous page > and can continue. > > We need the VMA in gup_must_unshare(), when long-term pinning a file hugetlb page. I *think* > the gup_longterm.c selftest should trigger that, especially: > > # [RUN] R/O longterm GUP-fast pin in MAP_SHARED file mapping ... with memfd hugetlb (2048 kB) > ... > # [RUN] R/O longterm GUP-fast pin in MAP_SHARED file mapping ... with memfd hugetlb (1048576 kB) > > > We need a MAP_SHARED page where the PTE is R/O that we want to long-term pin R/O. > I don't remember from the top of my head if the test here might have a R/W-mapped > folio. If so, we could extend it to cover that. Let me try both then. > > > Hmm, so when I enable 2M hugetlb I found ./cow is even failing on x86. > > > > # ./cow | grep -B1 "not ok" > > # [RUN] vmsplice() + unmap in child ... 
with hugetlb (2048 kB) > > not ok 161 No leak from parent into child > > -- > > # [RUN] vmsplice() + unmap in child with mprotect() optimization ... with hugetlb (2048 kB) > > not ok 215 No leak from parent into child > > -- > > # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (2048 kB) > > not ok 269 No leak from child into parent > > -- > > # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (2048 kB) > > not ok 323 No leak from child into parent > > > > And it looks like it was always failing.. perhaps since the start? We > > Yes! > > commit 7dad331be7816103eba8c12caeb88fbd3599c0b9 > Author: David Hildenbrand <david@redhat.com> > Date: Tue Sep 27 13:01:17 2022 +0200 > > selftests/vm: anon_cow: hugetlb tests > Let's run all existing test cases with all hugetlb sizes we're able to > detect. > Note that some tests cases still fail. This will, for example, be fixed > once vmsplice properly uses FOLL_PIN instead of FOLL_GET for pinning. > With 2 MiB and 1 GiB hugetlb on x86_64, the expected failures are: > # [RUN] vmsplice() + unmap in child ... with hugetlb (2048 kB) > not ok 23 No leak from parent into child > # [RUN] vmsplice() + unmap in child ... with hugetlb (1048576 kB) > not ok 24 No leak from parent into child > # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (2048 kB) > not ok 35 No leak from child into parent > # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (1048576 kB) > not ok 36 No leak from child into parent > # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (2048 kB) > not ok 47 No leak from child into parent > # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (1048576 kB) > not ok 48 No leak from child into parent > > As it keeps confusing people (until somebody cares enough to fix vmsplice), I already > thought about just disabling the test and adding a comment why it happens and > why nobody cares. 
I think we should, and when doing so maybe add a rich comment in hugetlb_wp() too explaining everything? > > > didn't do the same on hugetlb v.s. normal anon from that regard on the > > vmsplice() fix. > > > > I drafted a patch to allow refcount>1 detection as the same, then all tests > > pass for me, as below. > > > > David, I'd like to double check with you before I post anything: is that > > your intention to do so when working on the R/O pinning or not? > > Here certainly the "if it's easy it would already have done" principle applies. :) > > The issue is the following: hugetlb pages are scarce resources that cannot usually > be overcommitted. For ordinary memory, we don't care if we COW in some corner case > because there is an unexpected reference. You temporarily consume an additional page > that gets freed as soon as the unexpected reference is dropped. > > For hugetlb, it is problematic. Assume you have reserved a single 1 GiB hugetlb page > and your process uses that in a MAP_PRIVATE mapping. Then it calls fork() and the > child quits immediately. > > If you decide to COW, you would need a second hugetlb page, which we don't have, so > you have to crash the program. > > And in hugetlb it's extremely easy to not get folio_ref_count() == 1: > > hugetlb_fault() will do a folio_get(folio) before calling hugetlb_wp()! > > ... so you essentially always copy. Hmm yes there's one extra refcount. I think this is all fine, we can simply take all of them into account when making a CoW decision. However crashing a userspace can be a problem for sure. > > > At that point I walked away from that, letting vmsplice() be fixed at some point. Dave > Howells was close at some point IIRC ... > > I had some ideas about retrying until the other reference is gone (which cannot be a > longterm GUP pin), but as vmsplice essentially does without FOLL_PIN|FOLL_LONGTERM, > it's quit hopeless to resolve that as long as vmsplice holds longterm references the wrong > way.
> > --- > > One could argue that fork() with hugetlb and MAP_PRIVATE is stupid and fragile: assume > your child MM is torn down deferred, and will unmap the hugetlb page deferred. Or assume > you access the page concurrently with fork(). You'd have to COW and crash the program. > BUT, there is a horribly ugly hack in hugetlb COW code where you *steal* the page form > the child program and crash your child. I'm not making that up, it's horrible. I didn't notice that code before; doesn't sound like a very responsible parent... Looks like either a hugetlb guru comes along who can make the call to break the hugetlb ABI at some point, knowing that nobody will really be affected by it, or this remains uncharted territory for whoever needs to introduce hugetlb v2. Thanks, -- Peter Xu ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions 2024-04-26 21:20 ` Peter Xu @ 2024-04-26 21:33 ` David Hildenbrand 2024-04-26 21:58 ` Peter Xu 0 siblings, 1 reply; 17+ messages in thread From: David Hildenbrand @ 2024-04-26 21:33 UTC (permalink / raw) To: Peter Xu Cc: linux-kernel, linux-mm, Andrew Morton, Mike Rapoport, Jason Gunthorpe, John Hubbard, linux-arm-kernel, loongarch, linux-mips, linuxppc-dev, linux-s390, linux-sh, linux-perf-users, linux-fsdevel, linux-riscv, x86 >> >>> Hmm, so when I enable 2M hugetlb I found ./cow is even failing on x86. >>> >>> # ./cow | grep -B1 "not ok" >>> # [RUN] vmsplice() + unmap in child ... with hugetlb (2048 kB) >>> not ok 161 No leak from parent into child >>> -- >>> # [RUN] vmsplice() + unmap in child with mprotect() optimization ... with hugetlb (2048 kB) >>> not ok 215 No leak from parent into child >>> -- >>> # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (2048 kB) >>> not ok 269 No leak from child into parent >>> -- >>> # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (2048 kB) >>> not ok 323 No leak from child into parent >>> >>> And it looks like it was always failing.. perhaps since the start? We >> >> Yes! >> >> commit 7dad331be7816103eba8c12caeb88fbd3599c0b9 >> Author: David Hildenbrand <david@redhat.com> >> Date: Tue Sep 27 13:01:17 2022 +0200 >> >> selftests/vm: anon_cow: hugetlb tests >> Let's run all existing test cases with all hugetlb sizes we're able to >> detect. >> Note that some tests cases still fail. This will, for example, be fixed >> once vmsplice properly uses FOLL_PIN instead of FOLL_GET for pinning. >> With 2 MiB and 1 GiB hugetlb on x86_64, the expected failures are: >> # [RUN] vmsplice() + unmap in child ... with hugetlb (2048 kB) >> not ok 23 No leak from parent into child >> # [RUN] vmsplice() + unmap in child ... 
with hugetlb (1048576 kB) >> not ok 24 No leak from parent into child >> # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (2048 kB) >> not ok 35 No leak from child into parent >> # [RUN] vmsplice() before fork(), unmap in parent after fork() ... with hugetlb (1048576 kB) >> not ok 36 No leak from child into parent >> # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (2048 kB) >> not ok 47 No leak from child into parent >> # [RUN] vmsplice() + unmap in parent after fork() ... with hugetlb (1048576 kB) >> not ok 48 No leak from child into parent >> >> As it keeps confusing people (until somebody cares enough to fix vmsplice), I already >> thought about just disabling the test and adding a comment why it happens and >> why nobody cares. > > I think we should, and when doing so maybe add a rich comment in > hugetlb_wp() too explaining everything? Likely yes. Let me think of something. > >> >>> didn't do the same on hugetlb v.s. normal anon from that regard on the >>> vmsplice() fix. >>> >>> I drafted a patch to allow refcount>1 detection as the same, then all tests >>> pass for me, as below. >>> >>> David, I'd like to double check with you before I post anything: is that >>> your intention to do so when working on the R/O pinning or not? >> >> Here certainly the "if it's easy it would already have done" principle applies. :) >> >> The issue is the following: hugetlb pages are scarce resources that cannot usually >> be overcommitted. For ordinary memory, we don't care if we COW in some corner case >> because there is an unexpected reference. You temporarily consume an additional page >> that gets freed as soon as the unexpected reference is dropped. >> >> For hugetlb, it is problematic. Assume you have reserved a single 1 GiB hugetlb page >> and your process uses that in a MAP_PRIVATE mapping. Then it calls fork() and the >> child quits immediately. 
>> >> If you decide to COW, you would need a second hugetlb page, which we don't have, so >> you have to crash the program. >> >> And in hugetlb it's extremely easy to not get folio_ref_count() == 1: >> >> hugetlb_fault() will do a folio_get(folio) before calling hugetlb_wp()! >> >> ... so you essentially always copy. > > Hmm yes there's one extra refcount. I think this is all fine, we can simply > take all of them into account when making a CoW decision. However crashing > an userspace can be a problem for sure. Right, and a simple reference from page migration or some other PFN walker would be sufficient for that. I did not dare being responsible for that, even though races are rare :) The vmsplice leak is not worth that: hugetlb with MAP_PRIVATE to COW-share data between processes with different privilege levels is not really common. > >> >> >> At that point I walked away from that, letting vmsplice() be fixed at some point. Dave >> Howells was close at some point IIRC ... >> >> I had some ideas about retrying until the other reference is gone (which cannot be a >> longterm GUP pin), but as vmsplice essentially does without FOLL_PIN|FOLL_LONGTERM, >> it's quit hopeless to resolve that as long as vmsplice holds longterm references the wrong >> way. >> >> --- >> >> One could argue that fork() with hugetlb and MAP_PRIVATE is stupid and fragile: assume >> your child MM is torn down deferred, and will unmap the hugetlb page deferred. Or assume >> you access the page concurrently with fork(). You'd have to COW and crash the program. >> BUT, there is a horribly ugly hack in hugetlb COW code where you *steal* the page form >> the child program and crash your child. I'm not making that up, it's horrible. > > I didn't notice that code before; doesn't sound like a very responsible > parent.. 
>
> Looks like either there comes a hugetlb guru who can make a decision to
> break hugetlb ABI at some point, knowing that nobody will really get
> affected by it, or that's the uncharted area for whoever needs to introduce
> hugetlb v2.

I raised this topic in the past, and IMHO we either (a) never should have
added COW support; or (b) should have added COW support by using ordinary
anonymous memory (hey, partial mappings of hugetlb pages! ;) ).

After all, COW is an optimization to speed up fork and defer copying. It
relies on memory overcommit, but that doesn't really apply to hugetlb, so we
fake it ...

One easy ABI break I had in mind was to simply *not* allow COW-sharing of
anon hugetlb folios; for example, simply don't copy the page into the child.
Chances are there are not really a lot of child processes that would fail
... but likely we would break *something*. So there is no easy way out :(

-- 
Cheers,

David / dhildenb

^ permalink raw reply	[flat|nested] 17+ messages in thread
* Re: [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions
  2024-04-26 21:33           ` David Hildenbrand
@ 2024-04-26 21:58             ` Peter Xu
  2024-04-27  6:58               ` David Hildenbrand
  0 siblings, 1 reply; 17+ messages in thread
From: Peter Xu @ 2024-04-26 21:58 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, Andrew Morton, Mike Rapoport,
	Jason Gunthorpe, John Hubbard, linux-arm-kernel, loongarch,
	linux-mips, linuxppc-dev, linux-s390, linux-sh,
	linux-perf-users, linux-fsdevel, linux-riscv, x86

On Fri, Apr 26, 2024 at 11:33:08PM +0200, David Hildenbrand wrote:
> I raised this topic in the past, and IMHO we either (a) never should have
> added COW support; or (b) added COW support by using ordinary anonymous
> memory (hey, partial mappings of hugetlb pages! ;) ).
>
> After all, COW is an optimization to speed up fork and defer copying. It
> relies on memory overcommit, but that doesn't really apply to hugetlb, so we
> fake it ...

Good summary.

>
> One easy ABI break I had in mind was to simply *not* allow COW-sharing of
> anon hugetlb folios; for example, simply don't copy the page into the child.
> Chances are there are not really a lot of child processes that would fail
> ... but likely we would break *something*. So there is no easy way out :(

Right, not easy. The thing is, this is one spot out of many such
specialties, and it may or may not be worthwhile to dedicate time to it
while nobody yet has a problem with it. It might be easier to start with
v2, even though that's also hard to nail properly - the challenge can come
from different angles.

Thanks for the sharing, helpful. I'll go ahead with the Power fix on
hugepd, putting this aside.

I hope that before the end of this year, whatever I'll fix can go away, by
removing hugepd completely from Linux. For now that may or may not be as
smooth, so we'd better still fix it.

-- 
Peter Xu
* Re: [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions
  2024-04-26 21:58             ` Peter Xu
@ 2024-04-27  6:58               ` David Hildenbrand
  0 siblings, 0 replies; 17+ messages in thread
From: David Hildenbrand @ 2024-04-27 6:58 UTC (permalink / raw)
  To: Peter Xu
  Cc: linux-kernel, linux-mm, Andrew Morton, Mike Rapoport,
	Jason Gunthorpe, John Hubbard, linux-arm-kernel, loongarch,
	linux-mips, linuxppc-dev, linux-s390, linux-sh,
	linux-perf-users, linux-fsdevel, linux-riscv, x86

On 26.04.24 23:58, Peter Xu wrote:
> On Fri, Apr 26, 2024 at 11:33:08PM +0200, David Hildenbrand wrote:
>> I raised this topic in the past, and IMHO we either (a) never should have
>> added COW support; or (b) added COW support by using ordinary anonymous
>> memory (hey, partial mappings of hugetlb pages! ;) ).
>>
>> After all, COW is an optimization to speed up fork and defer copying. It
>> relies on memory overcommit, but that doesn't really apply to hugetlb, so we
>> fake it ...
>
> Good summary.
>
>>
>> One easy ABI break I had in mind was to simply *not* allow COW-sharing of
>> anon hugetlb folios; for example, simply don't copy the page into the child.
>> Chances are there are not really a lot of child processes that would fail
>> ... but likely we would break *something*. So there is no easy way out :(
>
> Right, not easy. The thing is, this is one spot out of many such
> specialties, and it may or may not be worthwhile to dedicate time to it
> while nobody yet has a problem with it. It might be easier to start with
> v2, even though that's also hard to nail properly - the challenge can come
> from different angles.
>
> Thanks for the sharing, helpful. I'll go ahead with the Power fix on
> hugepd, putting this aside.

Yes, hopefully we already do have a test case for that. When writing
gup_longterm.c I was focusing more on memfd vs. ordinary file systems
("filesystem type") than on how it's mapped into the page tables.
>
> I hope that before the end of this year, whatever I'll fix can go away, by
> removing hugepd completely from Linux. For now that may or may not be as
> smooth, so we'd better still fix it.

Crossing fingers, I'm annoyed whenever I stumble over it :)

-- 
Cheers,

David / dhildenb
* [PATCH v1 2/3] mm/treewide: rename CONFIG_HAVE_FAST_GUP to CONFIG_HAVE_GUP_FAST
  2024-04-02 12:55 [PATCH v1 0/3] mm/gup: consistently call it GUP-fast David Hildenbrand
  2024-04-02 12:55 ` [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions David Hildenbrand
@ 2024-04-02 12:55 ` David Hildenbrand
  2024-04-02 22:32   ` Jason Gunthorpe
  2024-04-13 20:11   ` John Hubbard
  2024-04-02 12:55 ` [PATCH v1 3/3] mm: use "GUP-fast" instead "fast GUP" in remaining comments David Hildenbrand
  2 siblings, 2 replies; 17+ messages in thread
From: David Hildenbrand @ 2024-04-02 12:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Mike Rapoport,
	Jason Gunthorpe, John Hubbard, Peter Xu, linux-arm-kernel,
	loongarch, linux-mips, linuxppc-dev, linux-s390, linux-sh,
	linux-perf-users, linux-fsdevel, linux-riscv, x86

Nowadays, we call it "GUP-fast", the external interface includes
functions like "get_user_pages_fast()", and we renamed all internal
functions to reflect that as well.

Let's make the config option reflect that.
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/arm/Kconfig       |  2 +-
 arch/arm64/Kconfig     |  2 +-
 arch/loongarch/Kconfig |  2 +-
 arch/mips/Kconfig      |  2 +-
 arch/powerpc/Kconfig   |  2 +-
 arch/riscv/Kconfig     |  2 +-
 arch/s390/Kconfig      |  2 +-
 arch/sh/Kconfig        |  2 +-
 arch/x86/Kconfig       |  2 +-
 include/linux/rmap.h   |  8 ++++----
 kernel/events/core.c   |  4 ++--
 mm/Kconfig             |  2 +-
 mm/gup.c               | 10 +++++-----
 mm/internal.h          |  2 +-
 14 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index b14aed3a17ab..817918f6635a 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -99,7 +99,7 @@ config ARM
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU
 	select HAVE_EXIT_THREAD
-	select HAVE_FAST_GUP if ARM_LPAE
+	select HAVE_GUP_FAST if ARM_LPAE
 	select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
 	select HAVE_FUNCTION_ERROR_INJECTION
 	select HAVE_FUNCTION_GRAPH_TRACER
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7b11c98b3e84..de076a191e9f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -205,7 +205,7 @@ config ARM64
 	select HAVE_SAMPLE_FTRACE_DIRECT
 	select HAVE_SAMPLE_FTRACE_DIRECT_MULTI
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
-	select HAVE_FAST_GUP
+	select HAVE_GUP_FAST
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_ERROR_INJECTION
diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index a5f300ec6f28..cd642eefd9e5 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -119,7 +119,7 @@ config LOONGARCH
 	select HAVE_EBPF_JIT
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS if !ARCH_STRICT_ALIGN
 	select HAVE_EXIT_THREAD
-	select HAVE_FAST_GUP
+	select HAVE_GUP_FAST
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_ARG_ACCESS_API
 	select HAVE_FUNCTION_ERROR_INJECTION
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 516dc7022bd7..f1aa1bf11166 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -68,7 +68,7 @@ config MIPS
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_EBPF_JIT if !CPU_MICROMIPS
 	select HAVE_EXIT_THREAD
-	select HAVE_FAST_GUP
+	select HAVE_GUP_FAST
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FUNCTION_TRACER
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1c4be3373686..e42cc8cd415f 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -236,7 +236,7 @@ config PPC
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS if ARCH_USING_PATCHABLE_FUNCTION_ENTRY || MPROFILE_KERNEL || PPC32
 	select HAVE_EBPF_JIT
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
-	select HAVE_FAST_GUP
+	select HAVE_GUP_FAST
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_ARG_ACCESS_API
 	select HAVE_FUNCTION_DESCRIPTORS if PPC64_ELF_ABI_V1
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index be09c8836d56..3ee60ddef93e 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -132,7 +132,7 @@ config RISCV
 	select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FUNCTION_TRACER if !XIP_KERNEL && !PREEMPTION
 	select HAVE_EBPF_JIT if MMU
-	select HAVE_FAST_GUP if MMU
+	select HAVE_GUP_FAST if MMU
 	select HAVE_FUNCTION_ARG_ACCESS_API
 	select HAVE_FUNCTION_ERROR_INJECTION
 	select HAVE_GCC_PLUGINS
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 8f01ada6845e..d9aed4c93ee6 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -174,7 +174,7 @@ config S390
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS
 	select HAVE_EBPF_JIT if HAVE_MARCH_Z196_FEATURES
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
-	select HAVE_FAST_GUP
+	select HAVE_GUP_FAST
 	select HAVE_FENTRY
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_ARG_ACCESS_API
diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index 2ad3e29f0ebe..7292542f75e8 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -38,7 +38,7 @@ config SUPERH
 	select HAVE_DEBUG_BUGVERBOSE
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DYNAMIC_FTRACE
-	select HAVE_FAST_GUP if MMU
+	select HAVE_GUP_FAST if MMU
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FTRACE_MCOUNT_RECORD
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4fff6ed46e90..222b42941cf3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -221,7 +221,7 @@ config X86
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
 	select HAVE_EISA
 	select HAVE_EXIT_THREAD
-	select HAVE_FAST_GUP
+	select HAVE_GUP_FAST
 	select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b7944a833668..9bf9324214fc 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -284,7 +284,7 @@ static inline int hugetlb_try_share_anon_rmap(struct folio *folio)
 	VM_WARN_ON_FOLIO(!PageAnonExclusive(&folio->page), folio);
 
 	/* Paired with the memory barrier in try_grab_folio(). */
-	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
+	if (IS_ENABLED(CONFIG_HAVE_GUP_FAST))
 		smp_mb();
 
 	if (unlikely(folio_maybe_dma_pinned(folio)))
@@ -295,7 +295,7 @@ static inline int hugetlb_try_share_anon_rmap(struct folio *folio)
 	 * This is conceptually a smp_wmb() paired with the smp_rmb() in
 	 * gup_must_unshare().
 	 */
-	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
+	if (IS_ENABLED(CONFIG_HAVE_GUP_FAST))
 		smp_mb__after_atomic();
 	return 0;
 }
@@ -541,7 +541,7 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
 	 */
 
 	/* Paired with the memory barrier in try_grab_folio(). */
-	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
+	if (IS_ENABLED(CONFIG_HAVE_GUP_FAST))
 		smp_mb();
 
 	if (unlikely(folio_maybe_dma_pinned(folio)))
@@ -552,7 +552,7 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
 	 * This is conceptually a smp_wmb() paired with the smp_rmb() in
 	 * gup_must_unshare().
 	 */
-	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
+	if (IS_ENABLED(CONFIG_HAVE_GUP_FAST))
 		smp_mb__after_atomic();
 	return 0;
 }
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 724e6d7e128f..c5a0dc1f135f 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7539,7 +7539,7 @@ static u64 perf_get_pgtable_size(struct mm_struct *mm, unsigned long addr)
 {
 	u64 size = 0;
 
-#ifdef CONFIG_HAVE_FAST_GUP
+#ifdef CONFIG_HAVE_GUP_FAST
 	pgd_t *pgdp, pgd;
 	p4d_t *p4dp, p4d;
 	pud_t *pudp, pud;
@@ -7587,7 +7587,7 @@ static u64 perf_get_pgtable_size(struct mm_struct *mm, unsigned long addr)
 	if (pte_present(pte))
 		size = pte_leaf_size(pte);
 	pte_unmap(ptep);
-#endif /* CONFIG_HAVE_FAST_GUP */
+#endif /* CONFIG_HAVE_GUP_FAST */
 
 	return size;
 }
diff --git a/mm/Kconfig b/mm/Kconfig
index f0ed3168db00..50df323eaece 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -473,7 +473,7 @@ config ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
 config HAVE_MEMBLOCK_PHYS_MAP
 	bool
 
-config HAVE_FAST_GUP
+config HAVE_GUP_FAST
 	depends on MMU
 	bool
 
diff --git a/mm/gup.c b/mm/gup.c
index f1ac2c5a7f6d..929eb89c2e04 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,7 +501,7 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 
 #ifdef CONFIG_MMU
 
-#if defined(CONFIG_ARCH_HAS_HUGEPD) || defined(CONFIG_HAVE_FAST_GUP)
+#if defined(CONFIG_ARCH_HAS_HUGEPD) || defined(CONFIG_HAVE_GUP_FAST)
 static int record_subpages(struct page *page, unsigned long sz,
 			   unsigned long addr, unsigned long end,
 			   struct page **pages)
@@ -515,7 +515,7 @@ static int record_subpages(struct page *page, unsigned long sz,
 
 	return nr;
 }
-#endif	/* CONFIG_ARCH_HAS_HUGEPD || CONFIG_HAVE_FAST_GUP */
+#endif	/* CONFIG_ARCH_HAS_HUGEPD || CONFIG_HAVE_GUP_FAST */
 
 #ifdef CONFIG_ARCH_HAS_HUGEPD
 static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
@@ -2785,7 +2785,7 @@ EXPORT_SYMBOL(get_user_pages_unlocked);
  *
  * This code is based heavily on the PowerPC implementation by Nick Piggin.
  */
-#ifdef CONFIG_HAVE_FAST_GUP
+#ifdef CONFIG_HAVE_GUP_FAST
 
 /*
  * Used in the GUP-fast path to determine whether GUP is permitted to work on
@@ -3364,7 +3364,7 @@ static inline void gup_fast_pgd_range(unsigned long addr, unsigned long end,
 		unsigned int flags, struct page **pages, int *nr)
 {
 }
-#endif /* CONFIG_HAVE_FAST_GUP */
+#endif /* CONFIG_HAVE_GUP_FAST */
 
 #ifndef gup_fast_permitted
 /*
@@ -3384,7 +3384,7 @@ static unsigned long gup_fast(unsigned long start, unsigned long end,
 	int nr_pinned = 0;
 	unsigned seq;
 
-	if (!IS_ENABLED(CONFIG_HAVE_FAST_GUP) ||
+	if (!IS_ENABLED(CONFIG_HAVE_GUP_FAST) ||
 	    !gup_fast_permitted(start, end))
 		return 0;
 
diff --git a/mm/internal.h b/mm/internal.h
index 3df06a152ff0..be432314af3e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1249,7 +1249,7 @@ static inline bool gup_must_unshare(struct vm_area_struct *vma,
 	}
 
 	/* Paired with a memory barrier in folio_try_share_anon_rmap_*(). */
-	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
+	if (IS_ENABLED(CONFIG_HAVE_GUP_FAST))
 		smp_rmb();
 
 	/*
-- 
2.44.0
* Re: [PATCH v1 2/3] mm/treewide: rename CONFIG_HAVE_FAST_GUP to CONFIG_HAVE_GUP_FAST
  2024-04-02 12:55 ` [PATCH v1 2/3] mm/treewide: rename CONFIG_HAVE_FAST_GUP to CONFIG_HAVE_GUP_FAST David Hildenbrand
@ 2024-04-02 22:32   ` Jason Gunthorpe
  0 siblings, 0 replies; 17+ messages in thread
From: Jason Gunthorpe @ 2024-04-02 22:32 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, Andrew Morton, Mike Rapoport,
	John Hubbard, Peter Xu, linux-arm-kernel, loongarch, linux-mips,
	linuxppc-dev, linux-s390, linux-sh, linux-perf-users,
	linux-fsdevel, linux-riscv, x86

On Tue, Apr 02, 2024 at 02:55:15PM +0200, David Hildenbrand wrote:
> Nowadays, we call it "GUP-fast", the external interface includes
> functions like "get_user_pages_fast()", and we renamed all internal
> functions to reflect that as well.
>
> Let's make the config option reflect that.
>
> Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  arch/arm/Kconfig       |  2 +-
>  arch/arm64/Kconfig     |  2 +-
>  arch/loongarch/Kconfig |  2 +-
>  arch/mips/Kconfig      |  2 +-
>  arch/powerpc/Kconfig   |  2 +-
>  arch/riscv/Kconfig     |  2 +-
>  arch/s390/Kconfig      |  2 +-
>  arch/sh/Kconfig        |  2 +-
>  arch/x86/Kconfig       |  2 +-
>  include/linux/rmap.h   |  8 ++++----
>  kernel/events/core.c   |  4 ++--
>  mm/Kconfig             |  2 +-
>  mm/gup.c               | 10 +++++-----
>  mm/internal.h          |  2 +-
>  14 files changed, 22 insertions(+), 22 deletions(-)

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason
* Re: [PATCH v1 2/3] mm/treewide: rename CONFIG_HAVE_FAST_GUP to CONFIG_HAVE_GUP_FAST
  2024-04-02 12:55 ` [PATCH v1 2/3] mm/treewide: rename CONFIG_HAVE_FAST_GUP to CONFIG_HAVE_GUP_FAST David Hildenbrand
  2024-04-02 22:32   ` Jason Gunthorpe
@ 2024-04-13 20:11   ` John Hubbard
  1 sibling, 0 replies; 17+ messages in thread
From: John Hubbard @ 2024-04-13 20:11 UTC (permalink / raw)
  To: David Hildenbrand, linux-kernel
  Cc: linux-mm, Andrew Morton, Mike Rapoport, Jason Gunthorpe,
	Peter Xu, linux-arm-kernel, loongarch, linux-mips, linuxppc-dev,
	linux-s390, linux-sh, linux-perf-users, linux-fsdevel,
	linux-riscv, x86

On 4/2/24 5:55 AM, David Hildenbrand wrote:
> Nowadays, we call it "GUP-fast", the external interface includes
> functions like "get_user_pages_fast()", and we renamed all internal
> functions to reflect that as well.
>
> Let's make the config option reflect that.
>
> Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  arch/arm/Kconfig       |  2 +-
>  arch/arm64/Kconfig     |  2 +-
>  arch/loongarch/Kconfig |  2 +-
>  arch/mips/Kconfig      |  2 +-
>  arch/powerpc/Kconfig   |  2 +-
>  arch/riscv/Kconfig     |  2 +-
>  arch/s390/Kconfig      |  2 +-
>  arch/sh/Kconfig        |  2 +-
>  arch/x86/Kconfig       |  2 +-
>  include/linux/rmap.h   |  8 ++++----
>  kernel/events/core.c   |  4 ++--
>  mm/Kconfig             |  2 +-
>  mm/gup.c               | 10 +++++-----
>  mm/internal.h          |  2 +-
>  14 files changed, 22 insertions(+), 22 deletions(-)

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,
-- 
John Hubbard
NVIDIA
* [PATCH v1 3/3] mm: use "GUP-fast" instead "fast GUP" in remaining comments
  2024-04-02 12:55 [PATCH v1 0/3] mm/gup: consistently call it GUP-fast David Hildenbrand
  2024-04-02 12:55 ` [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions David Hildenbrand
  2024-04-02 12:55 ` [PATCH v1 2/3] mm/treewide: rename CONFIG_HAVE_FAST_GUP to CONFIG_HAVE_GUP_FAST David Hildenbrand
@ 2024-04-02 12:55 ` David Hildenbrand
  2024-04-02 22:33   ` Jason Gunthorpe
  2024-04-13 20:12   ` John Hubbard
  2 siblings, 2 replies; 17+ messages in thread
From: David Hildenbrand @ 2024-04-02 12:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Mike Rapoport,
	Jason Gunthorpe, John Hubbard, Peter Xu, linux-arm-kernel,
	loongarch, linux-mips, linuxppc-dev, linux-s390, linux-sh,
	linux-perf-users, linux-fsdevel, linux-riscv, x86

Let's fixup the remaining comments to consistently call that thing
"GUP-fast". With this change, we consistently call it "GUP-fast".

Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/filemap.c    | 2 +-
 mm/khugepaged.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 387b394754fa..c668e11cd6ef 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1810,7 +1810,7 @@ EXPORT_SYMBOL(page_cache_prev_miss);
  * C. Return the page to the page allocator
  *
  * This means that any page may have its reference count temporarily
- * increased by a speculative page cache (or fast GUP) lookup as it can
+ * increased by a speculative page cache (or GUP-fast) lookup as it can
  * be allocated by another user before the RCU grace period expires.
  * Because the refcount temporarily acquired here may end up being the
  * last refcount on the page, any page allocation must be freeable by
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 38830174608f..6972fa05132e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1169,7 +1169,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 * huge and small TLB entries for the same virtual address to
 	 * avoid the risk of CPU bugs in that area.
 	 *
-	 * Parallel fast GUP is fine since fast GUP will back off when
+	 * Parallel GUP-fast is fine since GUP-fast will back off when
 	 * it detects PMD is changed.
 	 */
 	_pmd = pmdp_collapse_flush(vma, address, pmd);
-- 
2.44.0
* Re: [PATCH v1 3/3] mm: use "GUP-fast" instead "fast GUP" in remaining comments
  2024-04-02 12:55 ` [PATCH v1 3/3] mm: use "GUP-fast" instead "fast GUP" in remaining comments David Hildenbrand
@ 2024-04-02 22:33   ` Jason Gunthorpe
  0 siblings, 0 replies; 17+ messages in thread
From: Jason Gunthorpe @ 2024-04-02 22:33 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, Andrew Morton, Mike Rapoport,
	John Hubbard, Peter Xu, linux-arm-kernel, loongarch, linux-mips,
	linuxppc-dev, linux-s390, linux-sh, linux-perf-users,
	linux-fsdevel, linux-riscv, x86

On Tue, Apr 02, 2024 at 02:55:16PM +0200, David Hildenbrand wrote:
> Let's fixup the remaining comments to consistently call that thing
> "GUP-fast". With this change, we consistently call it "GUP-fast".
>
> Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/filemap.c    | 2 +-
>  mm/khugepaged.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason
* Re: [PATCH v1 3/3] mm: use "GUP-fast" instead "fast GUP" in remaining comments
  2024-04-02 12:55 ` [PATCH v1 3/3] mm: use "GUP-fast" instead "fast GUP" in remaining comments David Hildenbrand
  2024-04-02 22:33   ` Jason Gunthorpe
@ 2024-04-13 20:12   ` John Hubbard
  1 sibling, 0 replies; 17+ messages in thread
From: John Hubbard @ 2024-04-13 20:12 UTC (permalink / raw)
  To: David Hildenbrand, linux-kernel
  Cc: linux-mm, Andrew Morton, Mike Rapoport, Jason Gunthorpe,
	Peter Xu, linux-arm-kernel, loongarch, linux-mips, linuxppc-dev,
	linux-s390, linux-sh, linux-perf-users, linux-fsdevel,
	linux-riscv, x86

On 4/2/24 5:55 AM, David Hildenbrand wrote:
> Let's fixup the remaining comments to consistently call that thing
> "GUP-fast". With this change, we consistently call it "GUP-fast".
>
> Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/filemap.c    | 2 +-
>  mm/khugepaged.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)

Yes, everything is changed over now, confirmed.

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,
-- 
John Hubbard
NVIDIA
end of thread, other threads:[~2024-04-27  6:58 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-04-02 12:55 [PATCH v1 0/3] mm/gup: consistently call it GUP-fast David Hildenbrand
2024-04-02 12:55 ` [PATCH v1 1/3] mm/gup: consistently name GUP-fast functions David Hildenbrand
2024-04-13 20:07   ` John Hubbard
2024-04-26  7:17     ` David Hildenbrand
2024-04-26 13:44       ` Peter Xu
2024-04-26 16:12         ` Peter Xu
2024-04-26 17:28           ` David Hildenbrand
2024-04-26 21:20             ` Peter Xu
2024-04-26 21:33               ` David Hildenbrand
2024-04-26 21:58                 ` Peter Xu
2024-04-27  6:58                   ` David Hildenbrand
2024-04-02 12:55 ` [PATCH v1 2/3] mm/treewide: rename CONFIG_HAVE_FAST_GUP to CONFIG_HAVE_GUP_FAST David Hildenbrand
2024-04-02 22:32   ` Jason Gunthorpe
2024-04-13 20:11   ` John Hubbard
2024-04-02 12:55 ` [PATCH v1 3/3] mm: use "GUP-fast" instead "fast GUP" in remaining comments David Hildenbrand
2024-04-02 22:33   ` Jason Gunthorpe
2024-04-13 20:12   ` John Hubbard