* [PATCH v2 0/3] Fix CONT-PTE/PMD size hugetlb issue when unmapping or migrating
From: Baolin Wang @ 2022-05-08 9:36 UTC
To: akpm, mike.kravetz, catalin.marinas, will
Cc: tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, baolin.wang, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch, linux-mm

Hi,

When migrating a hugetlb page or unmapping a poisoned hugetlb page, we currently use ptep_clear_flush() and set_pte_at() to nuke the page table entry and remap it. This is incorrect for CONT-PTE or CONT-PMD size hugetlb pages and can cause data consistency issues. This patch set changes those paths to use the hugetlb-related APIs instead; please find the details in each patch. Thanks.

Note: Mike pointed out that huge_ptep_get() only returns the value of one specific PTE, and does not take the dirty or young bits of CONT-PTEs/PMDs into account the way huge_ptep_get_and_clear() does [1]. That inconsistency is not introduced by this patch set and will be addressed in another thread [2]. Meanwhile, the uffd-for-hugetlb case [3] pointed out by Gerald also needs a separate patch.

[1] https://lore.kernel.org/linux-mm/85bd80b4-b4fd-0d3f-a2e5-149559f2f387@oracle.com/
[2] https://lore.kernel.org/all/cover.1651998586.git.baolin.wang@linux.alibaba.com/
[3] https://lore.kernel.org/linux-mm/20220503120343.6264e126@thinkpad/

Changes from v1:
- Add acked tag from Mike.
- Update some commit messages.
- Add VM_BUG_ON in try_to_unmap() for the hugetlb case.
- Add an explicit void cast for huge_ptep_clear_flush() in hugetlb.c.
Baolin Wang (3):
  mm: change huge_ptep_clear_flush() to return the original pte
  mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration
  mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when unmapping

 arch/arm64/include/asm/hugetlb.h   |  4 +--
 arch/arm64/mm/hugetlbpage.c        | 12 +++----
 arch/ia64/include/asm/hugetlb.h    |  4 +--
 arch/mips/include/asm/hugetlb.h    |  9 ++++--
 arch/parisc/include/asm/hugetlb.h  |  4 +--
 arch/powerpc/include/asm/hugetlb.h |  9 ++++--
 arch/s390/include/asm/hugetlb.h    |  6 ++--
 arch/sh/include/asm/hugetlb.h      |  4 +--
 arch/sparc/include/asm/hugetlb.h   |  4 +--
 include/asm-generic/hugetlb.h      |  4 +--
 mm/hugetlb.c                       |  2 +-
 mm/rmap.c                          | 63 ++++++++++++++++++++++++--------------
 12 files changed, 73 insertions(+), 52 deletions(-)

-- 
1.8.3.1
* [PATCH v2 1/3] mm: change huge_ptep_clear_flush() to return the original pte
From: Baolin Wang @ 2022-05-08 9:36 UTC
To: akpm, mike.kravetz, catalin.marinas, will
Cc: tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, baolin.wang, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch, linux-mm

It is incorrect to use ptep_clear_flush() to nuke a hugetlb page table entry when unmapping or migrating a hugetlb page; the following patches will change those paths to use huge_ptep_clear_flush() instead. So this is a preparation patch, which changes huge_ptep_clear_flush() to return the original pte to help nuke a hugetlb page table entry.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 arch/arm64/include/asm/hugetlb.h   |  4 ++--
 arch/arm64/mm/hugetlbpage.c        | 12 +++++-------
 arch/ia64/include/asm/hugetlb.h    |  4 ++--
 arch/mips/include/asm/hugetlb.h    |  9 ++++++---
 arch/parisc/include/asm/hugetlb.h  |  4 ++--
 arch/powerpc/include/asm/hugetlb.h |  9 ++++++---
 arch/s390/include/asm/hugetlb.h    |  6 +++---
 arch/sh/include/asm/hugetlb.h      |  4 ++--
 arch/sparc/include/asm/hugetlb.h   |  4 ++--
 include/asm-generic/hugetlb.h      |  4 ++--
 mm/hugetlb.c                       |  2 +-
 11 files changed, 33 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 1242f71..616b2ca 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -39,8 +39,8 @@ extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 extern void huge_ptep_set_wrprotect(struct mm_struct *mm,
 				    unsigned long addr, pte_t *ptep);
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-extern void huge_ptep_clear_flush(struct vm_area_struct *vma,
-				  unsigned long addr, pte_t *ptep);
+extern pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+				   unsigned long addr, pte_t *ptep);
 #define __HAVE_ARCH_HUGE_PTE_CLEAR
 extern void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
 			   pte_t *ptep, unsigned long sz);
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index cbace1c..ca8e65c 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -486,19 +486,17 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 	set_pte_at(mm, addr, ptep, pfn_pte(pfn, hugeprot));
 }
 
-void huge_ptep_clear_flush(struct vm_area_struct *vma,
-			   unsigned long addr, pte_t *ptep)
+pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+			    unsigned long addr, pte_t *ptep)
 {
 	size_t pgsize;
 	int ncontig;
 
-	if (!pte_cont(READ_ONCE(*ptep))) {
-		ptep_clear_flush(vma, addr, ptep);
-		return;
-	}
+	if (!pte_cont(READ_ONCE(*ptep)))
+		return ptep_clear_flush(vma, addr, ptep);
 
 	ncontig = find_num_contig(vma->vm_mm, addr, ptep, &pgsize);
-	clear_flush(vma->vm_mm, addr, ptep, pgsize, ncontig);
+	return get_clear_flush(vma->vm_mm, addr, ptep, pgsize, ncontig);
 }
 
 static int __init hugetlbpage_init(void)
diff --git a/arch/ia64/include/asm/hugetlb.h b/arch/ia64/include/asm/hugetlb.h
index 7e46ebd..65d3811 100644
--- a/arch/ia64/include/asm/hugetlb.h
+++ b/arch/ia64/include/asm/hugetlb.h
@@ -23,8 +23,8 @@ static inline int is_hugepage_only_range(struct mm_struct *mm,
 #define is_hugepage_only_range is_hugepage_only_range
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
-					 unsigned long addr, pte_t *ptep)
+static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+					  unsigned long addr, pte_t *ptep)
 {
 }
 
diff --git a/arch/mips/include/asm/hugetlb.h b/arch/mips/include/asm/hugetlb.h
index c214440..fd69c88 100644
--- a/arch/mips/include/asm/hugetlb.h
+++ b/arch/mips/include/asm/hugetlb.h
@@ -43,16 +43,19 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
-					 unsigned long addr, pte_t *ptep)
+static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+					  unsigned long addr, pte_t *ptep)
 {
+	pte_t pte;
+
 	/*
 	 * clear the huge pte entry firstly, so that the other smp threads will
 	 * not get old pte entry after finishing flush_tlb_page and before
 	 * setting new huge pte entry
 	 */
-	huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
 	flush_tlb_page(vma, addr);
+	return pte;
 }
 
 #define __HAVE_ARCH_HUGE_PTE_NONE
diff --git a/arch/parisc/include/asm/hugetlb.h b/arch/parisc/include/asm/hugetlb.h
index a69cf9e..25bc560 100644
--- a/arch/parisc/include/asm/hugetlb.h
+++ b/arch/parisc/include/asm/hugetlb.h
@@ -28,8 +28,8 @@ static inline int prepare_hugepage_range(struct file *file,
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
-					 unsigned long addr, pte_t *ptep)
+static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+					  unsigned long addr, pte_t *ptep)
 {
 }
 
diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
index 6a1a1ac..8a5674f 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ -43,11 +43,14 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
-					 unsigned long addr, pte_t *ptep)
+static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+					  unsigned long addr, pte_t *ptep)
 {
-	huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+	pte_t pte;
+
+	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
 	flush_hugetlb_page(vma, addr);
+	return pte;
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
index 32c3fd6..f22beda 100644
--- a/arch/s390/include/asm/hugetlb.h
+++ b/arch/s390/include/asm/hugetlb.h
@@ -50,10 +50,10 @@ static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
 		set_pte(ptep, __pte(_SEGMENT_ENTRY_EMPTY));
 }
 
-static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
-					 unsigned long address, pte_t *ptep)
+static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+					  unsigned long address, pte_t *ptep)
 {
-	huge_ptep_get_and_clear(vma->vm_mm, address, ptep);
+	return huge_ptep_get_and_clear(vma->vm_mm, address, ptep);
 }
 
 static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
diff --git a/arch/sh/include/asm/hugetlb.h b/arch/sh/include/asm/hugetlb.h
index ae4de7b..e727cc9 100644
--- a/arch/sh/include/asm/hugetlb.h
+++ b/arch/sh/include/asm/hugetlb.h
@@ -21,8 +21,8 @@ static inline int prepare_hugepage_range(struct file *file,
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
-					 unsigned long addr, pte_t *ptep)
+static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+					  unsigned long addr, pte_t *ptep)
 {
 }
 
diff --git a/arch/sparc/include/asm/hugetlb.h b/arch/sparc/include/asm/hugetlb.h
index 53838a1..b50aa6f 100644
--- a/arch/sparc/include/asm/hugetlb.h
+++ b/arch/sparc/include/asm/hugetlb.h
@@ -21,8 +21,8 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 			      unsigned long addr, pte_t *ptep);
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
-					 unsigned long addr, pte_t *ptep)
+static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+					  unsigned long addr, pte_t *ptep)
 {
 }
 
diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
index 896f341..a57d667 100644
--- a/include/asm-generic/hugetlb.h
+++ b/include/asm-generic/hugetlb.h
@@ -84,10 +84,10 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 #endif
 
 #ifndef __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
+static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 		unsigned long addr, pte_t *ptep)
 {
-	ptep_clear_flush(vma, addr, ptep);
+	return ptep_clear_flush(vma, addr, ptep);
 }
 #endif
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8605d7e..61a21af 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5342,7 +5342,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		ClearHPageRestoreReserve(new_page);
 
 		/* Break COW or unshare */
-		huge_ptep_clear_flush(vma, haddr, ptep);
+		(void)huge_ptep_clear_flush(vma, haddr, ptep);
 		mmu_notifier_invalidate_range(mm, range.start, range.end);
 		page_remove_rmap(old_page, vma, true);
 		hugepage_add_new_anon_rmap(new_page, vma, haddr);
-- 
1.8.3.1
__HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH -static inline void huge_ptep_clear_flush(struct vm_area_struct *vma, +static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) { - ptep_clear_flush(vma, addr, ptep); + return ptep_clear_flush(vma, addr, ptep); } #endif diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 8605d7e..61a21af 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -5342,7 +5342,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma, ClearHPageRestoreReserve(new_page); /* Break COW or unshare */ - huge_ptep_clear_flush(vma, haddr, ptep); + (void)huge_ptep_clear_flush(vma, haddr, ptep); mmu_notifier_invalidate_range(mm, range.start, range.end); page_remove_rmap(old_page, vma, true); hugepage_add_new_anon_rmap(new_page, vma, haddr); -- 1.8.3.1 ^ permalink raw reply related [flat|nested] 73+ messages in thread
* Re: [PATCH v2 1/3] mm: change huge_ptep_clear_flush() to return the original pte 2022-05-08 9:36 ` Baolin Wang (?) (?) @ 2022-05-08 11:09 ` Muchun Song -1 siblings, 0 replies; 73+ messages in thread From: Muchun Song @ 2022-05-08 11:09 UTC (permalink / raw) To: Baolin Wang Cc: akpm, mike.kravetz, catalin.marinas, will, tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch, linux-mm On Sun, May 08, 2022 at 05:36:39PM +0800, Baolin Wang wrote: > It is incorrect to use ptep_clear_flush() to nuke a hugetlb page > table when unmapping or migrating a hugetlb page, and will change > to use huge_ptep_clear_flush() instead in the following patches. > > So this is a preparation patch, which changes the huge_ptep_clear_flush() > to return the original pte to help to nuke a hugetlb page table. > > Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> > Acked-by: Mike Kravetz <mike.kravetz@oracle.com> Reviewed-by: Muchun Song <songmuchun@bytedance.com> But one nit below: [...] > diff --git a/mm/hugetlb.c b/mm/hugetlb.c > index 8605d7e..61a21af 100644 > --- a/mm/hugetlb.c > +++ b/mm/hugetlb.c > @@ -5342,7 +5342,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma, > ClearHPageRestoreReserve(new_page); > > /* Break COW or unshare */ > - huge_ptep_clear_flush(vma, haddr, ptep); > + (void)huge_ptep_clear_flush(vma, haddr, ptep); Why add a "(void)" here? Is there any warning if no "(void)"? IIUC, I think we can remove this, right? > mmu_notifier_invalidate_range(mm, range.start, range.end); > page_remove_rmap(old_page, vma, true); > hugepage_add_new_anon_rmap(new_page, vma, haddr); > -- > 1.8.3.1 > > ^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [PATCH v2 1/3] mm: change huge_ptep_clear_flush() to return the original pte 2022-05-08 11:09 ` Muchun Song (?) (?) @ 2022-05-08 13:09 ` Baolin Wang -1 siblings, 0 replies; 73+ messages in thread From: Baolin Wang @ 2022-05-08 13:09 UTC (permalink / raw) To: Muchun Song Cc: akpm, mike.kravetz, catalin.marinas, will, tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch, linux-mm On 5/8/2022 7:09 PM, Muchun Song wrote: > On Sun, May 08, 2022 at 05:36:39PM +0800, Baolin Wang wrote: >> It is incorrect to use ptep_clear_flush() to nuke a hugetlb page >> table when unmapping or migrating a hugetlb page, and will change >> to use huge_ptep_clear_flush() instead in the following patches. >> >> So this is a preparation patch, which changes the huge_ptep_clear_flush() >> to return the original pte to help to nuke a hugetlb page table. >> >> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> >> Acked-by: Mike Kravetz <mike.kravetz@oracle.com> > > Reviewed-by: Muchun Song <songmuchun@bytedance.com> Thanks for reviewing. > > But one nit below: > > [...] >> diff --git a/mm/hugetlb.c b/mm/hugetlb.c >> index 8605d7e..61a21af 100644 >> --- a/mm/hugetlb.c >> +++ b/mm/hugetlb.c >> @@ -5342,7 +5342,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma, >> ClearHPageRestoreReserve(new_page); >> >> /* Break COW or unshare */ >> - huge_ptep_clear_flush(vma, haddr, ptep); >> + (void)huge_ptep_clear_flush(vma, haddr, ptep); > > Why add a "(void)" here? Is there any warning if no "(void)"? > IIUC, I think we can remove this, right? I did not meet any warning without the casting, but this is per Mike's comment[1] to make the code consistent with other functions casting to void type explicitly in hugetlb.c file. 
[1] https://lore.kernel.org/all/495c4ebe-a5b4-afb6-4cb0-956c1b18d0cc@oracle.com/ ^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [PATCH v2 1/3] mm: change huge_ptep_clear_flush() to return the original pte 2022-05-08 13:09 ` Baolin Wang (?) (?) @ 2022-05-09 4:06 ` Muchun Song -1 siblings, 0 replies; 73+ messages in thread From: Muchun Song @ 2022-05-09 4:06 UTC (permalink / raw) To: Baolin Wang Cc: akpm, mike.kravetz, catalin.marinas, will, tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch, linux-mm On Sun, May 08, 2022 at 09:09:55PM +0800, Baolin Wang wrote: > > > On 5/8/2022 7:09 PM, Muchun Song wrote: > > On Sun, May 08, 2022 at 05:36:39PM +0800, Baolin Wang wrote: > > > It is incorrect to use ptep_clear_flush() to nuke a hugetlb page > > > table when unmapping or migrating a hugetlb page, and will change > > > to use huge_ptep_clear_flush() instead in the following patches. > > > > > > So this is a preparation patch, which changes the huge_ptep_clear_flush() > > > to return the original pte to help to nuke a hugetlb page table. > > > > > > Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> > > > Acked-by: Mike Kravetz <mike.kravetz@oracle.com> > > > > Reviewed-by: Muchun Song <songmuchun@bytedance.com> > > Thanks for reviewing. > > > > > But one nit below: > > > > [...] > > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c > > > index 8605d7e..61a21af 100644 > > > --- a/mm/hugetlb.c > > > +++ b/mm/hugetlb.c > > > @@ -5342,7 +5342,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma, > > > ClearHPageRestoreReserve(new_page); > > > /* Break COW or unshare */ > > > - huge_ptep_clear_flush(vma, haddr, ptep); > > > + (void)huge_ptep_clear_flush(vma, haddr, ptep); > > > > Why add a "(void)" here? Is there any warning if no "(void)"? > > IIUC, I think we can remove this, right? 
> > I did not meet any warning without the casting, but this is per Mike's > comment[1] to make the code consistent with other functions casting to void > type explicitly in hugetlb.c file. > Got it. I see hugetlb.c per this rule, while others do not. > [1] > https://lore.kernel.org/all/495c4ebe-a5b4-afb6-4cb0-956c1b18d0cc@oracle.com/ > ^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [PATCH v2 1/3] mm: change huge_ptep_clear_flush() to return the original pte 2022-05-08 13:09 ` Baolin Wang (?) (?) @ 2022-05-09 5:46 ` Christophe Leroy -1 siblings, 0 replies; 73+ messages in thread From: Christophe Leroy @ 2022-05-09 5:46 UTC (permalink / raw) To: Baolin Wang, Muchun Song Cc: dalias, linux-ia64, linux-sh, linux-mips, James.Bottomley, linux-mm, paulus, sparclinux, agordeev, will, linux-arch, linux-s390, arnd, ysato, deller, catalin.marinas, borntraeger, gor, hca, linux-arm-kernel, tsbogend, linux-parisc, linux-kernel, svens, akpm, linuxppc-dev, davem, mike.kravetz Le 08/05/2022 à 15:09, Baolin Wang a écrit : > > > On 5/8/2022 7:09 PM, Muchun Song wrote: >> On Sun, May 08, 2022 at 05:36:39PM +0800, Baolin Wang wrote: >>> It is incorrect to use ptep_clear_flush() to nuke a hugetlb page >>> table when unmapping or migrating a hugetlb page, and will change >>> to use huge_ptep_clear_flush() instead in the following patches. >>> >>> So this is a preparation patch, which changes the >>> huge_ptep_clear_flush() >>> to return the original pte to help to nuke a hugetlb page table. >>> >>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> >>> Acked-by: Mike Kravetz <mike.kravetz@oracle.com> >> >> Reviewed-by: Muchun Song <songmuchun@bytedance.com> > > Thanks for reviewing. > >> >> But one nit below: >> >> [...] >>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c >>> index 8605d7e..61a21af 100644 >>> --- a/mm/hugetlb.c >>> +++ b/mm/hugetlb.c >>> @@ -5342,7 +5342,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct >>> *mm, struct vm_area_struct *vma, >>> ClearHPageRestoreReserve(new_page); >>> /* Break COW or unshare */ >>> - huge_ptep_clear_flush(vma, haddr, ptep); >>> + (void)huge_ptep_clear_flush(vma, haddr, ptep); >> >> Why add a "(void)" here? Is there any warning if no "(void)"? >> IIUC, I think we can remove this, right? 
> > I did not see any warning without the casting, but this is per Mike's > comment[1] to make the code consistent with other functions casting to > void type explicitly in hugetlb.c file. > > [1] > https://lore.kernel.org/all/495c4ebe-a5b4-afb6-4cb0-956c1b18d0cc@oracle.com/ > As far as I understand, Mike said that it should be accompanied by a big fat comment explaining why we ignore the value returned from huge_ptep_clear_flush(). By the way, huge_ptep_clear_flush() is not declared 'must_check', so this cast is just visual pollution and should be removed. In the meantime the comment suggested by Mike should be added instead. Christophe ^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [PATCH v2 1/3] mm: change huge_ptep_clear_flush() to return the original pte 2022-05-09 5:46 ` Christophe Leroy (?) (?) @ 2022-05-09 8:46 ` Baolin Wang -1 siblings, 0 replies; 73+ messages in thread From: Baolin Wang @ 2022-05-09 8:46 UTC (permalink / raw) To: Christophe Leroy, Muchun Song Cc: dalias, linux-ia64, linux-sh, linux-mips, James.Bottomley, linux-mm, paulus, sparclinux, agordeev, will, linux-arch, linux-s390, arnd, ysato, deller, catalin.marinas, borntraeger, gor, hca, linux-arm-kernel, tsbogend, linux-parisc, linux-kernel, svens, akpm, linuxppc-dev, davem, mike.kravetz On 5/9/2022 1:46 PM, Christophe Leroy wrote: > > > Le 08/05/2022 à 15:09, Baolin Wang a écrit : >> >> >> On 5/8/2022 7:09 PM, Muchun Song wrote: >>> On Sun, May 08, 2022 at 05:36:39PM +0800, Baolin Wang wrote: >>>> It is incorrect to use ptep_clear_flush() to nuke a hugetlb page >>>> table when unmapping or migrating a hugetlb page, and will change >>>> to use huge_ptep_clear_flush() instead in the following patches. >>>> >>>> So this is a preparation patch, which changes the >>>> huge_ptep_clear_flush() >>>> to return the original pte to help to nuke a hugetlb page table. >>>> >>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> >>>> Acked-by: Mike Kravetz <mike.kravetz@oracle.com> >>> >>> Reviewed-by: Muchun Song <songmuchun@bytedance.com> >> >> Thanks for reviewing. >> >>> >>> But one nit below: >>> >>> [...] >>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c >>>> index 8605d7e..61a21af 100644 >>>> --- a/mm/hugetlb.c >>>> +++ b/mm/hugetlb.c >>>> @@ -5342,7 +5342,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct >>>> *mm, struct vm_area_struct *vma, >>>> ClearHPageRestoreReserve(new_page); >>>> /* Break COW or unshare */ >>>> - huge_ptep_clear_flush(vma, haddr, ptep); >>>> + (void)huge_ptep_clear_flush(vma, haddr, ptep); >>> >>> Why add a "(void)" here? Is there any warning if no "(void)"? >>> IIUC, I think we can remove this, right? 
>> >> I did not see any warning without the casting, but this is per Mike's >> comment[1] to make the code consistent with other functions casting to >> void type explicitly in hugetlb.c file. >> >> [1] >> https://lore.kernel.org/all/495c4ebe-a5b4-afb6-4cb0-956c1b18d0cc@oracle.com/ >> > > As far as I understand, Mike said that you should be accompagnied with a > big fat comment explaining why we ignore the returned value from > huge_ptep_clear_flush(). > > By the way huge_ptep_clear_flush() is not declared 'must_check' so this > cast is just visual polution and should be removed. > > In the meantime the comment suggested by Mike should be added instead. Sorry for my misunderstanding. I just followed the explicit void casting used in other places in the hugetlb.c file. And I am not sure it is useful to add a comment like the one below, since we do not need the original pte value in the COW case when mapping a new page, and I think the code is already readable. Mike, could you help to clarify what comment you would like, and whether to remove the explicit void casting? Thanks. /* * Just ignore the return value with new page mapped. */ ^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [PATCH v2 1/3] mm: change huge_ptep_clear_flush() to return the original pte 2022-05-09 8:46 ` Baolin Wang (?) (?) @ 2022-05-09 20:02 ` Mike Kravetz -1 siblings, 0 replies; 73+ messages in thread From: Mike Kravetz @ 2022-05-09 20:02 UTC (permalink / raw) To: Baolin Wang, Christophe Leroy, Muchun Song Cc: dalias, linux-ia64, linux-sh, linux-mips, James.Bottomley, linux-mm, paulus, sparclinux, agordeev, will, linux-arch, linux-s390, arnd, ysato, deller, catalin.marinas, borntraeger, gor, hca, linux-arm-kernel, tsbogend, linux-parisc, linux-kernel, svens, akpm, linuxppc-dev, davem On 5/9/22 01:46, Baolin Wang wrote: > > > On 5/9/2022 1:46 PM, Christophe Leroy wrote: >> >> >> Le 08/05/2022 à 15:09, Baolin Wang a écrit : >>> >>> >>> On 5/8/2022 7:09 PM, Muchun Song wrote: >>>> On Sun, May 08, 2022 at 05:36:39PM +0800, Baolin Wang wrote: >>>>> It is incorrect to use ptep_clear_flush() to nuke a hugetlb page >>>>> table when unmapping or migrating a hugetlb page, and will change >>>>> to use huge_ptep_clear_flush() instead in the following patches. >>>>> >>>>> So this is a preparation patch, which changes the >>>>> huge_ptep_clear_flush() >>>>> to return the original pte to help to nuke a hugetlb page table. >>>>> >>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> >>>>> Acked-by: Mike Kravetz <mike.kravetz@oracle.com> >>>> >>>> Reviewed-by: Muchun Song <songmuchun@bytedance.com> >>> >>> Thanks for reviewing. >>> >>>> >>>> But one nit below: >>>> >>>> [...] >>>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c >>>>> index 8605d7e..61a21af 100644 >>>>> --- a/mm/hugetlb.c >>>>> +++ b/mm/hugetlb.c >>>>> @@ -5342,7 +5342,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct >>>>> *mm, struct vm_area_struct *vma, >>>>> ClearHPageRestoreReserve(new_page); >>>>> /* Break COW or unshare */ >>>>> - huge_ptep_clear_flush(vma, haddr, ptep); >>>>> + (void)huge_ptep_clear_flush(vma, haddr, ptep); >>>> >>>> Why add a "(void)" here? Is there any warning if no "(void)"? 
>>>> IIUC, I think we can remove this, right? >>> >>> I did not see any warning without the casting, but this is per Mike's >>> comment[1] to make the code consistent with other functions casting to >>> void type explicitly in hugetlb.c file. >>> >>> [1] >>> https://lore.kernel.org/all/495c4ebe-a5b4-afb6-4cb0-956c1b18d0cc@oracle.com/ >>> >> >> As far as I understand, Mike said that it should be accompanied by a >> big fat comment explaining why we ignore the value returned from >> huge_ptep_clear_flush(). > >> By the way, huge_ptep_clear_flush() is not declared 'must_check', so this >> cast is just visual pollution and should be removed. >> >> In the meantime, the comment suggested by Mike should be added instead. > Sorry for my misunderstanding. I just followed the explicit void casting used elsewhere in the hugetlb.c file. I am also not sure that adding a comment like the one below is useful, since we do not need the original pte value in the COW case, where a new page is mapped, and I think the code is already readable. > > Mike, could you clarify what comment you would like, and should the explicit void cast be removed? Thanks. > Sorry for the confusion. In the original commit, it seemed odd to me that the signature of the function was changing and there was not an associated change to the only caller of the function. I did suggest casting to void or adding a comment. As Christophe mentions, the cast to void is not necessary. In addition, there really isn't a need for a comment, as the calling code is not changed. The original version of the commit without either is actually preferable. The commit message does say this is a preparation patch and the return value will be used in later patches. Again, sorry for the confusion. -- Mike Kravetz ^ permalink raw reply [flat|nested] 73+ messages in thread
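[Editor's note] For readers following the discussion, the signature change in patch 1/3 can be modeled with a minimal C sketch. The `pte_t` stand-in and function bodies below are simplified illustrations of the before/after behavior, not the real architecture-specific kernel code:

```c
#include <assert.h>

/* Simplified stand-in for the kernel's pte_t; real page table entries
 * are architecture-specific and manipulated through helper macros. */
typedef unsigned long pte_t;

/* Before the patch: huge_ptep_clear_flush() nuked the entry and
 * flushed the TLB, but discarded the original pte value. */
static void huge_ptep_clear_flush_old(pte_t *ptep)
{
    *ptep = 0;  /* nuke the entry; a TLB flush would follow here */
}

/* After the patch: the original pte is handed back, so the later
 * migration/unmap patches can inspect it (dirty bit, pfn, etc.)
 * before remapping the page. */
static pte_t huge_ptep_clear_flush_new(pte_t *ptep)
{
    pte_t orig = *ptep;

    *ptep = 0;  /* nuke the entry; a TLB flush would follow here */
    return orig;
}
```

A caller that does not need the old value, such as the COW path discussed above, can simply ignore the return value, which is why neither a `(void)` cast nor a comment turned out to be necessary.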
* Re: [PATCH v2 1/3] mm: change huge_ptep_clear_flush() to return the original pte 2022-05-09 20:02 ` Mike Kravetz (?) (?) @ 2022-05-10 1:35 ` Baolin Wang -1 siblings, 0 replies; 73+ messages in thread From: Baolin Wang @ 2022-05-10 1:35 UTC (permalink / raw) To: Mike Kravetz, Christophe Leroy, Muchun Song Cc: dalias, linux-ia64, linux-sh, linux-mips, James.Bottomley, linux-mm, paulus, sparclinux, agordeev, will, linux-arch, linux-s390, arnd, ysato, deller, catalin.marinas, borntraeger, gor, hca, linux-arm-kernel, tsbogend, linux-parisc, linux-kernel, svens, akpm, linuxppc-dev, davem On 5/10/2022 4:02 AM, Mike Kravetz wrote: > On 5/9/22 01:46, Baolin Wang wrote: >> >> >> On 5/9/2022 1:46 PM, Christophe Leroy wrote: >>> >>> >>> Le 08/05/2022 à 15:09, Baolin Wang a écrit : >>>> >>>> >>>> On 5/8/2022 7:09 PM, Muchun Song wrote: >>>>> On Sun, May 08, 2022 at 05:36:39PM +0800, Baolin Wang wrote: >>>>>> It is incorrect to use ptep_clear_flush() to nuke a hugetlb page >>>>>> table when unmapping or migrating a hugetlb page, and will change >>>>>> to use huge_ptep_clear_flush() instead in the following patches. >>>>>> >>>>>> So this is a preparation patch, which changes the >>>>>> huge_ptep_clear_flush() >>>>>> to return the original pte to help to nuke a hugetlb page table. >>>>>> >>>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> >>>>>> Acked-by: Mike Kravetz <mike.kravetz@oracle.com> >>>>> >>>>> Reviewed-by: Muchun Song <songmuchun@bytedance.com> >>>> >>>> Thanks for reviewing. >>>> >>>>> >>>>> But one nit below: >>>>> >>>>> [...] 
>>>>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c >>>>>> index 8605d7e..61a21af 100644 >>>>>> --- a/mm/hugetlb.c >>>>>> +++ b/mm/hugetlb.c >>>>>> @@ -5342,7 +5342,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct >>>>>> *mm, struct vm_area_struct *vma, >>>>>> ClearHPageRestoreReserve(new_page); >>>>>> /* Break COW or unshare */ >>>>>> - huge_ptep_clear_flush(vma, haddr, ptep); >>>>>> + (void)huge_ptep_clear_flush(vma, haddr, ptep); >>>>> >>>>> Why add a "(void)" here? Is there any warning if no "(void)"? >>>>> IIUC, I think we can remove this, right? >>>> >>>> I did not meet any warning without the casting, but this is per Mike's >>>> comment[1] to make the code consistent with other functions casting to >>>> void type explicitly in hugetlb.c file. >>>> >>>> [1] >>>> https://lore.kernel.org/all/495c4ebe-a5b4-afb6-4cb0-956c1b18d0cc@oracle.com/ >>>> >>> >>> As far as I understand, Mike said that you should be accompagnied with a >>> big fat comment explaining why we ignore the returned value from >>> huge_ptep_clear_flush(). > >>> By the way huge_ptep_clear_flush() is not declared 'must_check' so this >>> cast is just visual polution and should be removed. >>> >>> In the meantime the comment suggested by Mike should be added instead. >> Sorry for my misunderstanding. I just follow the explicit void casting like other places in hugetlb.c file. And I am not sure if it is useful adding some comments like below, since we did not need the original pte value in the COW case mapping with a new page, and the code is more readable already I think. >> >> Mike, could you help to clarify what useful comments would you like? and remove the explicit void casting? Thanks. >> > > Sorry for the confusion. > > In the original commit, it seemed odd to me that the signature of the > function was changing and there was not an associated change to the only > caller of the function. I did suggest casting to void or adding a comment. 
> As Christophe mentions, the cast to void is not necessary. In addition, > there really isn't a need for a comment as the calling code is not changed. OK. I will drop the casting in the next version. > > The original version of the commit without either is actually preferable. > The commit message does say this is a preparation patch and the return > value will be used in later patches. OK. Thanks, Mike, for clarifying. Also thanks to Muchun and Christophe. ^ permalink raw reply [flat|nested] 73+ messages in thread
* [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration 2022-05-08 9:36 ` Baolin Wang (?) (?) @ 2022-05-08 9:36 ` Baolin Wang -1 siblings, 0 replies; 73+ messages in thread From: Baolin Wang @ 2022-05-08 9:36 UTC (permalink / raw) To: akpm, mike.kravetz, catalin.marinas, will Cc: tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, baolin.wang, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch, linux-mm Some architectures (like ARM64) support CONT-PTE/PMD size hugetlb, which means they support not only PMD/PUD size hugetlb (2M and 1G), but also CONT-PTE/PMD sizes (64K and 32M) when a 4K base page size is used. When migrating a hugetlb page, we get the relevant page table entry by huge_pte_offset() only once, to nuke it and remap it with a migration pte entry. This is correct for PMD or PUD size hugetlb, since they always contain only one pmd entry or pud entry in the page table. However, it is incorrect for CONT-PTE and CONT-PMD size hugetlb, since they can contain several contiguous pte or pmd entries with the same page table attributes. So we will nuke or remap only one pte or pmd entry for such a CONT-PTE/PMD size hugetlb page, which is not what hugetlb migration expects. The problem is that the subpages' data of a hugetlb page can still be modified while migrating it, which can cause a serious data consistency issue, since we did not nuke the page table entries and set migration ptes for the subpages of the hugetlb page. To fix this issue, we should change to use huge_ptep_clear_flush() to nuke a hugetlb page table, and remap it with set_huge_pte_at() and set_huge_swap_pte_at() when migrating a hugetlb page, which already handle the CONT-PTE and CONT-PMD size hugetlb cases. 
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> --- mm/rmap.c | 24 ++++++++++++++++++------ 1 file changed, 18 insertions(+), 6 deletions(-) diff --git a/mm/rmap.c b/mm/rmap.c index 6fdd198..7cf2408 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1924,13 +1924,15 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, break; } } + + /* Nuke the hugetlb page table entry */ + pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); } else { flush_cache_page(vma, address, pte_pfn(*pvmw.pte)); + /* Nuke the page table entry. */ + pteval = ptep_clear_flush(vma, address, pvmw.pte); } - /* Nuke the page table entry. */ - pteval = ptep_clear_flush(vma, address, pvmw.pte); - /* Set the dirty flag on the folio now the pte is gone. */ if (pte_dirty(pteval)) folio_mark_dirty(folio); @@ -2015,7 +2017,10 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, pte_t swp_pte; if (arch_unmap_one(mm, vma, address, pteval) < 0) { - set_pte_at(mm, address, pvmw.pte, pteval); + if (folio_test_hugetlb(folio)) + set_huge_pte_at(mm, address, pvmw.pte, pteval); + else + set_pte_at(mm, address, pvmw.pte, pteval); ret = false; page_vma_mapped_walk_done(&pvmw); break; @@ -2024,7 +2029,10 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, !anon_exclusive, subpage); if (anon_exclusive && page_try_share_anon_rmap(subpage)) { - set_pte_at(mm, address, pvmw.pte, pteval); + if (folio_test_hugetlb(folio)) + set_huge_pte_at(mm, address, pvmw.pte, pteval); + else + set_pte_at(mm, address, pvmw.pte, pteval); ret = false; page_vma_mapped_walk_done(&pvmw); break; @@ -2050,7 +2058,11 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma, swp_pte = pte_swp_mksoft_dirty(swp_pte); if (pte_uffd_wp(pteval)) swp_pte = pte_swp_mkuffd_wp(swp_pte); - set_pte_at(mm, address, pvmw.pte, swp_pte); + if (folio_test_hugetlb(folio)) + set_huge_swap_pte_at(mm, address, pvmw.pte, + swp_pte, 
vma_mmu_pagesize(vma)); + else + set_pte_at(mm, address, pvmw.pte, swp_pte); trace_set_migration_pte(address, pte_val(swp_pte), compound_order(&folio->page)); /* -- 1.8.3.1 ^ permalink raw reply related [flat|nested] 73+ messages in thread
* Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration 2022-05-08 9:36 ` Baolin Wang (?) (?) @ 2022-05-08 12:01 ` kernel test robot -1 siblings, 0 replies; 73+ messages in thread From: kernel test robot @ 2022-05-08 12:01 UTC (permalink / raw) To: Baolin Wang, akpm, mike.kravetz, catalin.marinas, will Cc: kbuild-all, tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, baolin.wang, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch Hi Baolin, I love your patch! Yet something to improve: [auto build test ERROR on akpm-mm/mm-everything] [also build test ERROR on next-20220506] [cannot apply to hnaz-mm/master arm64/for-next/core linus/master v5.18-rc5] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/intel-lab-lkp/linux/commits/Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036 base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything config: x86_64-randconfig-a013 (https://download.01.org/0day-ci/archive/20220508/202205081910.mStoC5rj-lkp@intel.com/config) compiler: gcc-11 (Debian 11.2.0-20) 11.2.0 reproduce (this is a W=1 build): # https://github.com/intel-lab-lkp/linux/commit/907981b27213707fdb2f8a24c107d6752a09a773 git remote add linux-review https://github.com/intel-lab-lkp/linux git fetch --no-tags linux-review Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036 git checkout 907981b27213707fdb2f8a24c107d6752a09a773 # save the config file mkdir build_dir && cp config build_dir/.config make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot 
<lkp@intel.com> All errors (new ones prefixed by >>): mm/rmap.c: In function 'try_to_migrate_one': >> mm/rmap.c:1931:34: error: implicit declaration of function 'huge_ptep_clear_flush'; did you mean 'ptep_clear_flush'? [-Werror=implicit-function-declaration] 1931 | pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); | ^~~~~~~~~~~~~~~~~~~~~ | ptep_clear_flush >> mm/rmap.c:1931:34: error: incompatible types when assigning to type 'pte_t' from type 'int' >> mm/rmap.c:2023:41: error: implicit declaration of function 'set_huge_pte_at'; did you mean 'set_huge_swap_pte_at'? [-Werror=implicit-function-declaration] 2023 | set_huge_pte_at(mm, address, pvmw.pte, pteval); | ^~~~~~~~~~~~~~~ | set_huge_swap_pte_at cc1: some warnings being treated as errors vim +1931 mm/rmap.c 1883 1884 /* Unexpected PMD-mapped THP? */ 1885 VM_BUG_ON_FOLIO(!pvmw.pte, folio); 1886 1887 subpage = folio_page(folio, 1888 pte_pfn(*pvmw.pte) - folio_pfn(folio)); 1889 address = pvmw.address; 1890 anon_exclusive = folio_test_anon(folio) && 1891 PageAnonExclusive(subpage); 1892 1893 if (folio_test_hugetlb(folio)) { 1894 /* 1895 * huge_pmd_unshare may unmap an entire PMD page. 1896 * There is no way of knowing exactly which PMDs may 1897 * be cached for this mm, so we must flush them all. 1898 * start/end were already adjusted above to cover this 1899 * range. 1900 */ 1901 flush_cache_range(vma, range.start, range.end); 1902 1903 if (!folio_test_anon(folio)) { 1904 /* 1905 * To call huge_pmd_unshare, i_mmap_rwsem must be 1906 * held in write mode. Caller needs to explicitly 1907 * do this outside rmap routines. 1908 */ 1909 VM_BUG_ON(!(flags & TTU_RMAP_LOCKED)); 1910 1911 if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) { 1912 flush_tlb_range(vma, range.start, range.end); 1913 mmu_notifier_invalidate_range(mm, range.start, 1914 range.end); 1915 1916 /* 1917 * The ref count of the PMD page was dropped 1918 * which is part of the way map counting 1919 * is done for shared PMDs. 
Return 'true' 1920 * here. When there is no other sharing, 1921 * huge_pmd_unshare returns false and we will 1922 * unmap the actual page and drop map count 1923 * to zero. 1924 */ 1925 page_vma_mapped_walk_done(&pvmw); 1926 break; 1927 } 1928 } 1929 1930 /* Nuke the hugetlb page table entry */ > 1931 pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); 1932 } else { 1933 flush_cache_page(vma, address, pte_pfn(*pvmw.pte)); 1934 /* Nuke the page table entry. */ 1935 pteval = ptep_clear_flush(vma, address, pvmw.pte); 1936 } 1937 1938 /* Set the dirty flag on the folio now the pte is gone. */ 1939 if (pte_dirty(pteval)) 1940 folio_mark_dirty(folio); 1941 1942 /* Update high watermark before we lower rss */ 1943 update_hiwater_rss(mm); 1944 1945 if (folio_is_zone_device(folio)) { 1946 unsigned long pfn = folio_pfn(folio); 1947 swp_entry_t entry; 1948 pte_t swp_pte; 1949 1950 if (anon_exclusive) 1951 BUG_ON(page_try_share_anon_rmap(subpage)); 1952 1953 /* 1954 * Store the pfn of the page in a special migration 1955 * pte. do_swap_page() will wait until the migration 1956 * pte is removed and then restart fault handling. 1957 */ 1958 entry = pte_to_swp_entry(pteval); 1959 if (is_writable_device_private_entry(entry)) 1960 entry = make_writable_migration_entry(pfn); 1961 else if (anon_exclusive) 1962 entry = make_readable_exclusive_migration_entry(pfn); 1963 else 1964 entry = make_readable_migration_entry(pfn); 1965 swp_pte = swp_entry_to_pte(entry); 1966 1967 /* 1968 * pteval maps a zone device page and is therefore 1969 * a swap pte. 1970 */ 1971 if (pte_swp_soft_dirty(pteval)) 1972 swp_pte = pte_swp_mksoft_dirty(swp_pte); 1973 if (pte_swp_uffd_wp(pteval)) 1974 swp_pte = pte_swp_mkuffd_wp(swp_pte); 1975 set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte); 1976 trace_set_migration_pte(pvmw.address, pte_val(swp_pte), 1977 compound_order(&folio->page)); 1978 /* 1979 * No need to invalidate here it will synchronize on 1980 * against the special swap migration pte. 
1981 * 1982 * The assignment to subpage above was computed from a 1983 * swap PTE which results in an invalid pointer. 1984 * Since only PAGE_SIZE pages can currently be 1985 * migrated, just set it to page. This will need to be 1986 * changed when hugepage migrations to device private 1987 * memory are supported. 1988 */ 1989 subpage = &folio->page; 1990 } else if (PageHWPoison(subpage)) { 1991 pteval = swp_entry_to_pte(make_hwpoison_entry(subpage)); 1992 if (folio_test_hugetlb(folio)) { 1993 hugetlb_count_sub(folio_nr_pages(folio), mm); 1994 set_huge_swap_pte_at(mm, address, 1995 pvmw.pte, pteval, 1996 vma_mmu_pagesize(vma)); 1997 } else { 1998 dec_mm_counter(mm, mm_counter(&folio->page)); 1999 set_pte_at(mm, address, pvmw.pte, pteval); 2000 } 2001 2002 } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) { 2003 /* 2004 * The guest indicated that the page content is of no 2005 * interest anymore. Simply discard the pte, vmscan 2006 * will take care of the rest. 2007 * A future reference will then fault in a new zero 2008 * page. When userfaultfd is active, we must not drop 2009 * this page though, as its main user (postcopy 2010 * migration) will not expect userfaults on already 2011 * copied pages. 
2012 */ 2013 dec_mm_counter(mm, mm_counter(&folio->page)); 2014 /* We have to invalidate as we cleared the pte */ 2015 mmu_notifier_invalidate_range(mm, address, 2016 address + PAGE_SIZE); 2017 } else { 2018 swp_entry_t entry; 2019 pte_t swp_pte; 2020 2021 if (arch_unmap_one(mm, vma, address, pteval) < 0) { 2022 if (folio_test_hugetlb(folio)) > 2023 set_huge_pte_at(mm, address, pvmw.pte, pteval); 2024 else 2025 set_pte_at(mm, address, pvmw.pte, pteval); 2026 ret = false; 2027 page_vma_mapped_walk_done(&pvmw); 2028 break; 2029 } 2030 VM_BUG_ON_PAGE(pte_write(pteval) && folio_test_anon(folio) && 2031 !anon_exclusive, subpage); 2032 if (anon_exclusive && 2033 page_try_share_anon_rmap(subpage)) { 2034 if (folio_test_hugetlb(folio)) 2035 set_huge_pte_at(mm, address, pvmw.pte, pteval); 2036 else 2037 set_pte_at(mm, address, pvmw.pte, pteval); 2038 ret = false; 2039 page_vma_mapped_walk_done(&pvmw); 2040 break; 2041 } 2042 2043 /* 2044 * Store the pfn of the page in a special migration 2045 * pte. do_swap_page() will wait until the migration 2046 * pte is removed and then restart fault handling. 2047 */ 2048 if (pte_write(pteval)) 2049 entry = make_writable_migration_entry( 2050 page_to_pfn(subpage)); 2051 else if (anon_exclusive) 2052 entry = make_readable_exclusive_migration_entry( 2053 page_to_pfn(subpage)); 2054 else 2055 entry = make_readable_migration_entry( 2056 page_to_pfn(subpage)); 2057 2058 swp_pte = swp_entry_to_pte(entry); 2059 if (pte_soft_dirty(pteval)) 2060 swp_pte = pte_swp_mksoft_dirty(swp_pte); 2061 if (pte_uffd_wp(pteval)) 2062 swp_pte = pte_swp_mkuffd_wp(swp_pte); 2063 if (folio_test_hugetlb(folio)) 2064 set_huge_swap_pte_at(mm, address, pvmw.pte, 2065 swp_pte, vma_mmu_pagesize(vma)); 2066 else 2067 set_pte_at(mm, address, pvmw.pte, swp_pte); 2068 trace_set_migration_pte(address, pte_val(swp_pte), 2069 compound_order(&folio->page)); 2070 /* 2071 * No need to invalidate here it will synchronize on 2072 * against the special swap migration pte. 
2073 */ 2074 } 2075 2076 /* 2077 * No need to call mmu_notifier_invalidate_range() it has be 2078 * done above for all cases requiring it to happen under page 2079 * table lock before mmu_notifier_invalidate_range_end() 2080 * 2081 * See Documentation/vm/mmu_notifier.rst 2082 */ 2083 page_remove_rmap(subpage, vma, folio_test_hugetlb(folio)); 2084 if (vma->vm_flags & VM_LOCKED) 2085 mlock_page_drain_local(); 2086 folio_put(folio); 2087 } 2088 2089 mmu_notifier_invalidate_range_end(&range); 2090 2091 return ret; 2092 } 2093 -- 0-DAY CI Kernel Test Service https://01.org/lkp ^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration @ 2022-05-08 12:01 ` kernel test robot 0 siblings, 0 replies; 73+ messages in thread From: kernel test robot @ 2022-05-08 12:01 UTC (permalink / raw) To: Baolin Wang, akpm, mike.kravetz, catalin.marinas, will Cc: kbuild-all, tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, baolin.wang, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch Hi Baolin, I love your patch! Yet something to improve: [auto build test ERROR on akpm-mm/mm-everything] [also build test ERROR on next-20220506] [cannot apply to hnaz-mm/master arm64/for-next/core linus/master v5.18-rc5] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/intel-lab-lkp/linux/commits/Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036 base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything config: x86_64-randconfig-a013 (https://download.01.org/0day-ci/archive/20220508/202205081910.mStoC5rj-lkp@intel.com/config) compiler: gcc-11 (Debian 11.2.0-20) 11.2.0 reproduce (this is a W=1 build): # https://github.com/intel-lab-lkp/linux/commit/907981b27213707fdb2f8a24c107d6752a09a773 git remote add linux-review https://github.com/intel-lab-lkp/linux git fetch --no-tags linux-review Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036 git checkout 907981b27213707fdb2f8a24c107d6752a09a773 # save the config file mkdir build_dir && cp config build_dir/.config make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot <lkp@intel.com> All errors (new ones prefixed by 
>>): mm/rmap.c: In function 'try_to_migrate_one': >> mm/rmap.c:1931:34: error: implicit declaration of function 'huge_ptep_clear_flush'; did you mean 'ptep_clear_flush'? [-Werror=implicit-function-declaration] 1931 | pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); | ^~~~~~~~~~~~~~~~~~~~~ | ptep_clear_flush >> mm/rmap.c:1931:34: error: incompatible types when assigning to type 'pte_t' from type 'int' >> mm/rmap.c:2023:41: error: implicit declaration of function 'set_huge_pte_at'; did you mean 'set_huge_swap_pte_at'? [-Werror=implicit-function-declaration] 2023 | set_huge_pte_at(mm, address, pvmw.pte, pteval); | ^~~~~~~~~~~~~~~ | set_huge_swap_pte_at cc1: some warnings being treated as errors vim +1931 mm/rmap.c 1883 1884 /* Unexpected PMD-mapped THP? */ 1885 VM_BUG_ON_FOLIO(!pvmw.pte, folio); 1886 1887 subpage = folio_page(folio, 1888 pte_pfn(*pvmw.pte) - folio_pfn(folio)); 1889 address = pvmw.address; 1890 anon_exclusive = folio_test_anon(folio) && 1891 PageAnonExclusive(subpage); 1892 1893 if (folio_test_hugetlb(folio)) { 1894 /* 1895 * huge_pmd_unshare may unmap an entire PMD page. 1896 * There is no way of knowing exactly which PMDs may 1897 * be cached for this mm, so we must flush them all. 1898 * start/end were already adjusted above to cover this 1899 * range. 1900 */ 1901 flush_cache_range(vma, range.start, range.end); 1902 1903 if (!folio_test_anon(folio)) { 1904 /* 1905 * To call huge_pmd_unshare, i_mmap_rwsem must be 1906 * held in write mode. Caller needs to explicitly 1907 * do this outside rmap routines. 1908 */ 1909 VM_BUG_ON(!(flags & TTU_RMAP_LOCKED)); 1910 1911 if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) { 1912 flush_tlb_range(vma, range.start, range.end); 1913 mmu_notifier_invalidate_range(mm, range.start, 1914 range.end); 1915 1916 /* 1917 * The ref count of the PMD page was dropped 1918 * which is part of the way map counting 1919 * is done for shared PMDs. Return 'true' 1920 * here. 
When there is no other sharing, 1921 * huge_pmd_unshare returns false and we will 1922 * unmap the actual page and drop map count 1923 * to zero. 1924 */ 1925 page_vma_mapped_walk_done(&pvmw); 1926 break; 1927 } 1928 } 1929 1930 /* Nuke the hugetlb page table entry */ > 1931 pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); 1932 } else { 1933 flush_cache_page(vma, address, pte_pfn(*pvmw.pte)); 1934 /* Nuke the page table entry. */ 1935 pteval = ptep_clear_flush(vma, address, pvmw.pte); 1936 } 1937 1938 /* Set the dirty flag on the folio now the pte is gone. */ 1939 if (pte_dirty(pteval)) 1940 folio_mark_dirty(folio); 1941 1942 /* Update high watermark before we lower rss */ 1943 update_hiwater_rss(mm); 1944 1945 if (folio_is_zone_device(folio)) { 1946 unsigned long pfn = folio_pfn(folio); 1947 swp_entry_t entry; 1948 pte_t swp_pte; 1949 1950 if (anon_exclusive) 1951 BUG_ON(page_try_share_anon_rmap(subpage)); 1952 1953 /* 1954 * Store the pfn of the page in a special migration 1955 * pte. do_swap_page() will wait until the migration 1956 * pte is removed and then restart fault handling. 1957 */ 1958 entry = pte_to_swp_entry(pteval); 1959 if (is_writable_device_private_entry(entry)) 1960 entry = make_writable_migration_entry(pfn); 1961 else if (anon_exclusive) 1962 entry = make_readable_exclusive_migration_entry(pfn); 1963 else 1964 entry = make_readable_migration_entry(pfn); 1965 swp_pte = swp_entry_to_pte(entry); 1966 1967 /* 1968 * pteval maps a zone device page and is therefore 1969 * a swap pte. 1970 */ 1971 if (pte_swp_soft_dirty(pteval)) 1972 swp_pte = pte_swp_mksoft_dirty(swp_pte); 1973 if (pte_swp_uffd_wp(pteval)) 1974 swp_pte = pte_swp_mkuffd_wp(swp_pte); 1975 set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte); 1976 trace_set_migration_pte(pvmw.address, pte_val(swp_pte), 1977 compound_order(&folio->page)); 1978 /* 1979 * No need to invalidate here it will synchronize on 1980 * against the special swap migration pte. 
1981 * 1982 * The assignment to subpage above was computed from a 1983 * swap PTE which results in an invalid pointer. 1984 * Since only PAGE_SIZE pages can currently be 1985 * migrated, just set it to page. This will need to be 1986 * changed when hugepage migrations to device private 1987 * memory are supported. 1988 */ 1989 subpage = &folio->page; 1990 } else if (PageHWPoison(subpage)) { 1991 pteval = swp_entry_to_pte(make_hwpoison_entry(subpage)); 1992 if (folio_test_hugetlb(folio)) { 1993 hugetlb_count_sub(folio_nr_pages(folio), mm); 1994 set_huge_swap_pte_at(mm, address, 1995 pvmw.pte, pteval, 1996 vma_mmu_pagesize(vma)); 1997 } else { 1998 dec_mm_counter(mm, mm_counter(&folio->page)); 1999 set_pte_at(mm, address, pvmw.pte, pteval); 2000 } 2001 2002 } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) { 2003 /* 2004 * The guest indicated that the page content is of no 2005 * interest anymore. Simply discard the pte, vmscan 2006 * will take care of the rest. 2007 * A future reference will then fault in a new zero 2008 * page. When userfaultfd is active, we must not drop 2009 * this page though, as its main user (postcopy 2010 * migration) will not expect userfaults on already 2011 * copied pages. 
2012 */ 2013 dec_mm_counter(mm, mm_counter(&folio->page)); 2014 /* We have to invalidate as we cleared the pte */ 2015 mmu_notifier_invalidate_range(mm, address, 2016 address + PAGE_SIZE); 2017 } else { 2018 swp_entry_t entry; 2019 pte_t swp_pte; 2020 2021 if (arch_unmap_one(mm, vma, address, pteval) < 0) { 2022 if (folio_test_hugetlb(folio)) > 2023 set_huge_pte_at(mm, address, pvmw.pte, pteval); 2024 else 2025 set_pte_at(mm, address, pvmw.pte, pteval); 2026 ret = false; 2027 page_vma_mapped_walk_done(&pvmw); 2028 break; 2029 } 2030 VM_BUG_ON_PAGE(pte_write(pteval) && folio_test_anon(folio) && 2031 !anon_exclusive, subpage); 2032 if (anon_exclusive && 2033 page_try_share_anon_rmap(subpage)) { 2034 if (folio_test_hugetlb(folio)) 2035 set_huge_pte_at(mm, address, pvmw.pte, pteval); 2036 else 2037 set_pte_at(mm, address, pvmw.pte, pteval); 2038 ret = false; 2039 page_vma_mapped_walk_done(&pvmw); 2040 break; 2041 } 2042 2043 /* 2044 * Store the pfn of the page in a special migration 2045 * pte. do_swap_page() will wait until the migration 2046 * pte is removed and then restart fault handling. 2047 */ 2048 if (pte_write(pteval)) 2049 entry = make_writable_migration_entry( 2050 page_to_pfn(subpage)); 2051 else if (anon_exclusive) 2052 entry = make_readable_exclusive_migration_entry( 2053 page_to_pfn(subpage)); 2054 else 2055 entry = make_readable_migration_entry( 2056 page_to_pfn(subpage)); 2057 2058 swp_pte = swp_entry_to_pte(entry); 2059 if (pte_soft_dirty(pteval)) 2060 swp_pte = pte_swp_mksoft_dirty(swp_pte); 2061 if (pte_uffd_wp(pteval)) 2062 swp_pte = pte_swp_mkuffd_wp(swp_pte); 2063 if (folio_test_hugetlb(folio)) 2064 set_huge_swap_pte_at(mm, address, pvmw.pte, 2065 swp_pte, vma_mmu_pagesize(vma)); 2066 else 2067 set_pte_at(mm, address, pvmw.pte, swp_pte); 2068 trace_set_migration_pte(address, pte_val(swp_pte), 2069 compound_order(&folio->page)); 2070 /* 2071 * No need to invalidate here it will synchronize on 2072 * against the special swap migration pte. 
2073 */ 2074 } 2075 2076 /* 2077 * No need to call mmu_notifier_invalidate_range() it has be 2078 * done above for all cases requiring it to happen under page 2079 * table lock before mmu_notifier_invalidate_range_end() 2080 * 2081 * See Documentation/vm/mmu_notifier.rst 2082 */ 2083 page_remove_rmap(subpage, vma, folio_test_hugetlb(folio)); 2084 if (vma->vm_flags & VM_LOCKED) 2085 mlock_page_drain_local(); 2086 folio_put(folio); 2087 } 2088 2089 mmu_notifier_invalidate_range_end(&range); 2090 2091 return ret; 2092 } 2093 -- 0-DAY CI Kernel Test Service https://01.org/lkp ^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration 2022-05-08 12:01 ` kernel test robot ` (2 preceding siblings ...) (?) @ 2022-05-08 13:13 ` Baolin Wang -1 siblings, 0 replies; 73+ messages in thread From: Baolin Wang @ 2022-05-08 13:13 UTC (permalink / raw) To: kernel test robot, akpm, mike.kravetz, catalin.marinas, will Cc: kbuild-all, tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch Hi, On 5/8/2022 8:01 PM, kernel test robot wrote: > Hi Baolin, > > I love your patch! Yet something to improve: > > [auto build test ERROR on akpm-mm/mm-everything] > [also build test ERROR on next-20220506] > [cannot apply to hnaz-mm/master arm64/for-next/core linus/master v5.18-rc5] > [If your patch is applied to the wrong git tree, kindly drop us a note. > And when submitting patch, we suggest to use '--base' as documented in > https://git-scm.com/docs/git-format-patch] > > url: https://github.com/intel-lab-lkp/linux/commits/Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036 > base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything > config: x86_64-randconfig-a013 (https://download.01.org/0day-ci/archive/20220508/202205081910.mStoC5rj-lkp@intel.com/config) > compiler: gcc-11 (Debian 11.2.0-20) 11.2.0 > reproduce (this is a W=1 build): > # https://github.com/intel-lab-lkp/linux/commit/907981b27213707fdb2f8a24c107d6752a09a773 > git remote add linux-review https://github.com/intel-lab-lkp/linux > git fetch --no-tags linux-review Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036 > git checkout 907981b27213707fdb2f8a24c107d6752a09a773 > # save the config file > mkdir build_dir && cp config build_dir/.config > make W=1 O=build_dir ARCH=x86_64 
SHELL=/bin/bash > > If you fix the issue, kindly add following tag as appropriate > Reported-by: kernel test robot <lkp@intel.com> > > All errors (new ones prefixed by >>): > > mm/rmap.c: In function 'try_to_migrate_one': >>> mm/rmap.c:1931:34: error: implicit declaration of function 'huge_ptep_clear_flush'; did you mean 'ptep_clear_flush'? [-Werror=implicit-function-declaration] > 1931 | pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); > | ^~~~~~~~~~~~~~~~~~~~~ > | ptep_clear_flush >>> mm/rmap.c:1931:34: error: incompatible types when assigning to type 'pte_t' from type 'int' >>> mm/rmap.c:2023:41: error: implicit declaration of function 'set_huge_pte_at'; did you mean 'set_huge_swap_pte_at'? [-Werror=implicit-function-declaration] > 2023 | set_huge_pte_at(mm, address, pvmw.pte, pteval); > | ^~~~~~~~~~~~~~~ > | set_huge_swap_pte_at > cc1: some warnings being treated as errors Thanks for reporting. I think I should add some dummy functions to hugetlb.h for the case where CONFIG_HUGETLB_PAGE is not selected. With the changes below and your config file, the build passes. diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 306d6ef..9f71043 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -1093,6 +1093,17 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr pte_t *ptep, pte_t pte, unsigned long sz) { } + +static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma, + unsigned long addr, pte_t *ptep) +{ + return ptep_get(ptep); +} + +static inline void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte) +{ +} #endif /* CONFIG_HUGETLB_PAGE */ ^ permalink raw reply related [flat|nested] 73+ messages in thread
* Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration 2022-05-08 9:36 ` Baolin Wang (?) (?) @ 2022-05-08 12:11 ` kernel test robot -1 siblings, 0 replies; 73+ messages in thread From: kernel test robot @ 2022-05-08 12:11 UTC (permalink / raw) To: Baolin Wang, akpm, mike.kravetz, catalin.marinas, will Cc: llvm, kbuild-all, tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, baolin.wang, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch Hi Baolin, I love your patch! Yet something to improve: [auto build test ERROR on akpm-mm/mm-everything] [also build test ERROR on next-20220506] [cannot apply to hnaz-mm/master arm64/for-next/core linus/master v5.18-rc5] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/intel-lab-lkp/linux/commits/Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036 base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything config: x86_64-randconfig-a014 (https://download.01.org/0day-ci/archive/20220508/202205081950.IpKFNYip-lkp@intel.com/config) compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project a385645b470e2d3a1534aae618ea56b31177639f) reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/intel-lab-lkp/linux/commit/907981b27213707fdb2f8a24c107d6752a09a773 git remote add linux-review https://github.com/intel-lab-lkp/linux git fetch --no-tags linux-review Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036 git checkout 907981b27213707fdb2f8a24c107d6752a09a773 # save the config file 
mkdir build_dir && cp config build_dir/.config COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot <lkp@intel.com> All errors (new ones prefixed by >>): >> mm/rmap.c:1931:13: error: call to undeclared function 'huge_ptep_clear_flush'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); ^ mm/rmap.c:1931:13: note: did you mean 'ptep_clear_flush'? include/linux/pgtable.h:431:14: note: 'ptep_clear_flush' declared here extern pte_t ptep_clear_flush(struct vm_area_struct *vma, ^ >> mm/rmap.c:1931:11: error: assigning to 'pte_t' from incompatible type 'int' pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> mm/rmap.c:2023:6: error: call to undeclared function 'set_huge_pte_at'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] set_huge_pte_at(mm, address, pvmw.pte, pteval); ^ mm/rmap.c:2035:6: error: call to undeclared function 'set_huge_pte_at'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] set_huge_pte_at(mm, address, pvmw.pte, pteval); ^ 4 errors generated. vim +/huge_ptep_clear_flush +1931 mm/rmap.c 1883 1884 /* Unexpected PMD-mapped THP? */ 1885 VM_BUG_ON_FOLIO(!pvmw.pte, folio); 1886 1887 subpage = folio_page(folio, 1888 pte_pfn(*pvmw.pte) - folio_pfn(folio)); 1889 address = pvmw.address; 1890 anon_exclusive = folio_test_anon(folio) && 1891 PageAnonExclusive(subpage); 1892 1893 if (folio_test_hugetlb(folio)) { 1894 /* 1895 * huge_pmd_unshare may unmap an entire PMD page. 1896 * There is no way of knowing exactly which PMDs may 1897 * be cached for this mm, so we must flush them all. 1898 * start/end were already adjusted above to cover this 1899 * range. 
1900 */ 1901 flush_cache_range(vma, range.start, range.end); 1902 1903 if (!folio_test_anon(folio)) { 1904 /* 1905 * To call huge_pmd_unshare, i_mmap_rwsem must be 1906 * held in write mode. Caller needs to explicitly 1907 * do this outside rmap routines. 1908 */ 1909 VM_BUG_ON(!(flags & TTU_RMAP_LOCKED)); 1910 1911 if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) { 1912 flush_tlb_range(vma, range.start, range.end); 1913 mmu_notifier_invalidate_range(mm, range.start, 1914 range.end); 1915 1916 /* 1917 * The ref count of the PMD page was dropped 1918 * which is part of the way map counting 1919 * is done for shared PMDs. Return 'true' 1920 * here. When there is no other sharing, 1921 * huge_pmd_unshare returns false and we will 1922 * unmap the actual page and drop map count 1923 * to zero. 1924 */ 1925 page_vma_mapped_walk_done(&pvmw); 1926 break; 1927 } 1928 } 1929 1930 /* Nuke the hugetlb page table entry */ > 1931 pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); 1932 } else { 1933 flush_cache_page(vma, address, pte_pfn(*pvmw.pte)); 1934 /* Nuke the page table entry. */ 1935 pteval = ptep_clear_flush(vma, address, pvmw.pte); 1936 } 1937 1938 /* Set the dirty flag on the folio now the pte is gone. */ 1939 if (pte_dirty(pteval)) 1940 folio_mark_dirty(folio); 1941 1942 /* Update high watermark before we lower rss */ 1943 update_hiwater_rss(mm); 1944 1945 if (folio_is_zone_device(folio)) { 1946 unsigned long pfn = folio_pfn(folio); 1947 swp_entry_t entry; 1948 pte_t swp_pte; 1949 1950 if (anon_exclusive) 1951 BUG_ON(page_try_share_anon_rmap(subpage)); 1952 1953 /* 1954 * Store the pfn of the page in a special migration 1955 * pte. do_swap_page() will wait until the migration 1956 * pte is removed and then restart fault handling. 
1957 */ 1958 entry = pte_to_swp_entry(pteval); 1959 if (is_writable_device_private_entry(entry)) 1960 entry = make_writable_migration_entry(pfn); 1961 else if (anon_exclusive) 1962 entry = make_readable_exclusive_migration_entry(pfn); 1963 else 1964 entry = make_readable_migration_entry(pfn); 1965 swp_pte = swp_entry_to_pte(entry); 1966 1967 /* 1968 * pteval maps a zone device page and is therefore 1969 * a swap pte. 1970 */ 1971 if (pte_swp_soft_dirty(pteval)) 1972 swp_pte = pte_swp_mksoft_dirty(swp_pte); 1973 if (pte_swp_uffd_wp(pteval)) 1974 swp_pte = pte_swp_mkuffd_wp(swp_pte); 1975 set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte); 1976 trace_set_migration_pte(pvmw.address, pte_val(swp_pte), 1977 compound_order(&folio->page)); 1978 /* 1979 * No need to invalidate here it will synchronize on 1980 * against the special swap migration pte. 1981 * 1982 * The assignment to subpage above was computed from a 1983 * swap PTE which results in an invalid pointer. 1984 * Since only PAGE_SIZE pages can currently be 1985 * migrated, just set it to page. This will need to be 1986 * changed when hugepage migrations to device private 1987 * memory are supported. 1988 */ 1989 subpage = &folio->page; 1990 } else if (PageHWPoison(subpage)) { 1991 pteval = swp_entry_to_pte(make_hwpoison_entry(subpage)); 1992 if (folio_test_hugetlb(folio)) { 1993 hugetlb_count_sub(folio_nr_pages(folio), mm); 1994 set_huge_swap_pte_at(mm, address, 1995 pvmw.pte, pteval, 1996 vma_mmu_pagesize(vma)); 1997 } else { 1998 dec_mm_counter(mm, mm_counter(&folio->page)); 1999 set_pte_at(mm, address, pvmw.pte, pteval); 2000 } 2001 2002 } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) { 2003 /* 2004 * The guest indicated that the page content is of no 2005 * interest anymore. Simply discard the pte, vmscan 2006 * will take care of the rest. 2007 * A future reference will then fault in a new zero 2008 * page. 
When userfaultfd is active, we must not drop 2009 * this page though, as its main user (postcopy 2010 * migration) will not expect userfaults on already 2011 * copied pages. 2012 */ 2013 dec_mm_counter(mm, mm_counter(&folio->page)); 2014 /* We have to invalidate as we cleared the pte */ 2015 mmu_notifier_invalidate_range(mm, address, 2016 address + PAGE_SIZE); 2017 } else { 2018 swp_entry_t entry; 2019 pte_t swp_pte; 2020 2021 if (arch_unmap_one(mm, vma, address, pteval) < 0) { 2022 if (folio_test_hugetlb(folio)) > 2023 set_huge_pte_at(mm, address, pvmw.pte, pteval); 2024 else 2025 set_pte_at(mm, address, pvmw.pte, pteval); 2026 ret = false; 2027 page_vma_mapped_walk_done(&pvmw); 2028 break; 2029 } 2030 VM_BUG_ON_PAGE(pte_write(pteval) && folio_test_anon(folio) && 2031 !anon_exclusive, subpage); 2032 if (anon_exclusive && 2033 page_try_share_anon_rmap(subpage)) { 2034 if (folio_test_hugetlb(folio)) 2035 set_huge_pte_at(mm, address, pvmw.pte, pteval); 2036 else 2037 set_pte_at(mm, address, pvmw.pte, pteval); 2038 ret = false; 2039 page_vma_mapped_walk_done(&pvmw); 2040 break; 2041 } 2042 2043 /* 2044 * Store the pfn of the page in a special migration 2045 * pte. do_swap_page() will wait until the migration 2046 * pte is removed and then restart fault handling. 
2047 */ 2048 if (pte_write(pteval)) 2049 entry = make_writable_migration_entry( 2050 page_to_pfn(subpage)); 2051 else if (anon_exclusive) 2052 entry = make_readable_exclusive_migration_entry( 2053 page_to_pfn(subpage)); 2054 else 2055 entry = make_readable_migration_entry( 2056 page_to_pfn(subpage)); 2057 2058 swp_pte = swp_entry_to_pte(entry); 2059 if (pte_soft_dirty(pteval)) 2060 swp_pte = pte_swp_mksoft_dirty(swp_pte); 2061 if (pte_uffd_wp(pteval)) 2062 swp_pte = pte_swp_mkuffd_wp(swp_pte); 2063 if (folio_test_hugetlb(folio)) 2064 set_huge_swap_pte_at(mm, address, pvmw.pte, 2065 swp_pte, vma_mmu_pagesize(vma)); 2066 else 2067 set_pte_at(mm, address, pvmw.pte, swp_pte); 2068 trace_set_migration_pte(address, pte_val(swp_pte), 2069 compound_order(&folio->page)); 2070 /* 2071 * No need to invalidate here it will synchronize on 2072 * against the special swap migration pte. 2073 */ 2074 } 2075 2076 /* 2077 * No need to call mmu_notifier_invalidate_range() it has be 2078 * done above for all cases requiring it to happen under page 2079 * table lock before mmu_notifier_invalidate_range_end() 2080 * 2081 * See Documentation/vm/mmu_notifier.rst 2082 */ 2083 page_remove_rmap(subpage, vma, folio_test_hugetlb(folio)); 2084 if (vma->vm_flags & VM_LOCKED) 2085 mlock_page_drain_local(); 2086 folio_put(folio); 2087 } 2088 2089 mmu_notifier_invalidate_range_end(&range); 2090 2091 return ret; 2092 } 2093 -- 0-DAY CI Kernel Test Service https://01.org/lkp ^ permalink raw reply [flat|nested] 73+ messages in thread
2047 */ 2048 if (pte_write(pteval)) 2049 entry = make_writable_migration_entry( 2050 page_to_pfn(subpage)); 2051 else if (anon_exclusive) 2052 entry = make_readable_exclusive_migration_entry( 2053 page_to_pfn(subpage)); 2054 else 2055 entry = make_readable_migration_entry( 2056 page_to_pfn(subpage)); 2057 2058 swp_pte = swp_entry_to_pte(entry); 2059 if (pte_soft_dirty(pteval)) 2060 swp_pte = pte_swp_mksoft_dirty(swp_pte); 2061 if (pte_uffd_wp(pteval)) 2062 swp_pte = pte_swp_mkuffd_wp(swp_pte); 2063 if (folio_test_hugetlb(folio)) 2064 set_huge_swap_pte_at(mm, address, pvmw.pte, 2065 swp_pte, vma_mmu_pagesize(vma)); 2066 else 2067 set_pte_at(mm, address, pvmw.pte, swp_pte); 2068 trace_set_migration_pte(address, pte_val(swp_pte), 2069 compound_order(&folio->page)); 2070 /* 2071 * No need to invalidate here it will synchronize on 2072 * against the special swap migration pte. 2073 */ 2074 } 2075 2076 /* 2077 * No need to call mmu_notifier_invalidate_range() it has be 2078 * done above for all cases requiring it to happen under page 2079 * table lock before mmu_notifier_invalidate_range_end() 2080 * 2081 * See Documentation/vm/mmu_notifier.rst 2082 */ 2083 page_remove_rmap(subpage, vma, folio_test_hugetlb(folio)); 2084 if (vma->vm_flags & VM_LOCKED) 2085 mlock_page_drain_local(); 2086 folio_put(folio); 2087 } 2088 2089 mmu_notifier_invalidate_range_end(&range); 2090 2091 return ret; 2092 } 2093 -- 0-DAY CI Kernel Test Service https://01.org/lkp _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration @ 2022-05-08 12:11 ` kernel test robot 0 siblings, 0 replies; 73+ messages in thread From: kernel test robot @ 2022-05-08 12:11 UTC (permalink / raw) To: Baolin Wang, akpm, mike.kravetz, catalin.marinas, will Cc: dalias, linux-ia64, linux-sh, llvm, linux-kernel, James.Bottomley, paulus, sparclinux, agordeev, linux-arch, linux-s390, arnd, ysato, deller, borntraeger, gor, hca, baolin.wang, linux-arm-kernel, tsbogend, kbuild-all, linux-parisc, linux-mips, svens, linuxppc-dev, davem Hi Baolin, I love your patch! Yet something to improve: [auto build test ERROR on akpm-mm/mm-everything] [also build test ERROR on next-20220506] [cannot apply to hnaz-mm/master arm64/for-next/core linus/master v5.18-rc5] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/intel-lab-lkp/linux/commits/Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036 base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything config: x86_64-randconfig-a014 (https://download.01.org/0day-ci/archive/20220508/202205081950.IpKFNYip-lkp@intel.com/config) compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project a385645b470e2d3a1534aae618ea56b31177639f) reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/intel-lab-lkp/linux/commit/907981b27213707fdb2f8a24c107d6752a09a773 git remote add linux-review https://github.com/intel-lab-lkp/linux git fetch --no-tags linux-review Baolin-Wang/Fix-CONT-PTE-PMD-size-hugetlb-issue-when-unmapping-or-migrating/20220508-174036 git checkout 907981b27213707fdb2f8a24c107d6752a09a773 # save the config file mkdir build_dir && cp config build_dir/.config 
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot <lkp@intel.com> All errors (new ones prefixed by >>): >> mm/rmap.c:1931:13: error: call to undeclared function 'huge_ptep_clear_flush'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); ^ mm/rmap.c:1931:13: note: did you mean 'ptep_clear_flush'? include/linux/pgtable.h:431:14: note: 'ptep_clear_flush' declared here extern pte_t ptep_clear_flush(struct vm_area_struct *vma, ^ >> mm/rmap.c:1931:11: error: assigning to 'pte_t' from incompatible type 'int' pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> mm/rmap.c:2023:6: error: call to undeclared function 'set_huge_pte_at'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] set_huge_pte_at(mm, address, pvmw.pte, pteval); ^ mm/rmap.c:2035:6: error: call to undeclared function 'set_huge_pte_at'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] set_huge_pte_at(mm, address, pvmw.pte, pteval); ^ 4 errors generated. vim +/huge_ptep_clear_flush +1931 mm/rmap.c 1883 1884 /* Unexpected PMD-mapped THP? */ 1885 VM_BUG_ON_FOLIO(!pvmw.pte, folio); 1886 1887 subpage = folio_page(folio, 1888 pte_pfn(*pvmw.pte) - folio_pfn(folio)); 1889 address = pvmw.address; 1890 anon_exclusive = folio_test_anon(folio) && 1891 PageAnonExclusive(subpage); 1892 1893 if (folio_test_hugetlb(folio)) { 1894 /* 1895 * huge_pmd_unshare may unmap an entire PMD page. 1896 * There is no way of knowing exactly which PMDs may 1897 * be cached for this mm, so we must flush them all. 1898 * start/end were already adjusted above to cover this 1899 * range. 
1900 */ 1901 flush_cache_range(vma, range.start, range.end); 1902 1903 if (!folio_test_anon(folio)) { 1904 /* 1905 * To call huge_pmd_unshare, i_mmap_rwsem must be 1906 * held in write mode. Caller needs to explicitly 1907 * do this outside rmap routines. 1908 */ 1909 VM_BUG_ON(!(flags & TTU_RMAP_LOCKED)); 1910 1911 if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) { 1912 flush_tlb_range(vma, range.start, range.end); 1913 mmu_notifier_invalidate_range(mm, range.start, 1914 range.end); 1915 1916 /* 1917 * The ref count of the PMD page was dropped 1918 * which is part of the way map counting 1919 * is done for shared PMDs. Return 'true' 1920 * here. When there is no other sharing, 1921 * huge_pmd_unshare returns false and we will 1922 * unmap the actual page and drop map count 1923 * to zero. 1924 */ 1925 page_vma_mapped_walk_done(&pvmw); 1926 break; 1927 } 1928 } 1929 1930 /* Nuke the hugetlb page table entry */ > 1931 pteval = huge_ptep_clear_flush(vma, address, pvmw.pte); 1932 } else { 1933 flush_cache_page(vma, address, pte_pfn(*pvmw.pte)); 1934 /* Nuke the page table entry. */ 1935 pteval = ptep_clear_flush(vma, address, pvmw.pte); 1936 } 1937 1938 /* Set the dirty flag on the folio now the pte is gone. */ 1939 if (pte_dirty(pteval)) 1940 folio_mark_dirty(folio); 1941 1942 /* Update high watermark before we lower rss */ 1943 update_hiwater_rss(mm); 1944 1945 if (folio_is_zone_device(folio)) { 1946 unsigned long pfn = folio_pfn(folio); 1947 swp_entry_t entry; 1948 pte_t swp_pte; 1949 1950 if (anon_exclusive) 1951 BUG_ON(page_try_share_anon_rmap(subpage)); 1952 1953 /* 1954 * Store the pfn of the page in a special migration 1955 * pte. do_swap_page() will wait until the migration 1956 * pte is removed and then restart fault handling. 
1957 */ 1958 entry = pte_to_swp_entry(pteval); 1959 if (is_writable_device_private_entry(entry)) 1960 entry = make_writable_migration_entry(pfn); 1961 else if (anon_exclusive) 1962 entry = make_readable_exclusive_migration_entry(pfn); 1963 else 1964 entry = make_readable_migration_entry(pfn); 1965 swp_pte = swp_entry_to_pte(entry); 1966 1967 /* 1968 * pteval maps a zone device page and is therefore 1969 * a swap pte. 1970 */ 1971 if (pte_swp_soft_dirty(pteval)) 1972 swp_pte = pte_swp_mksoft_dirty(swp_pte); 1973 if (pte_swp_uffd_wp(pteval)) 1974 swp_pte = pte_swp_mkuffd_wp(swp_pte); 1975 set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte); 1976 trace_set_migration_pte(pvmw.address, pte_val(swp_pte), 1977 compound_order(&folio->page)); 1978 /* 1979 * No need to invalidate here it will synchronize on 1980 * against the special swap migration pte. 1981 * 1982 * The assignment to subpage above was computed from a 1983 * swap PTE which results in an invalid pointer. 1984 * Since only PAGE_SIZE pages can currently be 1985 * migrated, just set it to page. This will need to be 1986 * changed when hugepage migrations to device private 1987 * memory are supported. 1988 */ 1989 subpage = &folio->page; 1990 } else if (PageHWPoison(subpage)) { 1991 pteval = swp_entry_to_pte(make_hwpoison_entry(subpage)); 1992 if (folio_test_hugetlb(folio)) { 1993 hugetlb_count_sub(folio_nr_pages(folio), mm); 1994 set_huge_swap_pte_at(mm, address, 1995 pvmw.pte, pteval, 1996 vma_mmu_pagesize(vma)); 1997 } else { 1998 dec_mm_counter(mm, mm_counter(&folio->page)); 1999 set_pte_at(mm, address, pvmw.pte, pteval); 2000 } 2001 2002 } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) { 2003 /* 2004 * The guest indicated that the page content is of no 2005 * interest anymore. Simply discard the pte, vmscan 2006 * will take care of the rest. 2007 * A future reference will then fault in a new zero 2008 * page. 
When userfaultfd is active, we must not drop 2009 * this page though, as its main user (postcopy 2010 * migration) will not expect userfaults on already 2011 * copied pages. 2012 */ 2013 dec_mm_counter(mm, mm_counter(&folio->page)); 2014 /* We have to invalidate as we cleared the pte */ 2015 mmu_notifier_invalidate_range(mm, address, 2016 address + PAGE_SIZE); 2017 } else { 2018 swp_entry_t entry; 2019 pte_t swp_pte; 2020 2021 if (arch_unmap_one(mm, vma, address, pteval) < 0) { 2022 if (folio_test_hugetlb(folio)) > 2023 set_huge_pte_at(mm, address, pvmw.pte, pteval); 2024 else 2025 set_pte_at(mm, address, pvmw.pte, pteval); 2026 ret = false; 2027 page_vma_mapped_walk_done(&pvmw); 2028 break; 2029 } 2030 VM_BUG_ON_PAGE(pte_write(pteval) && folio_test_anon(folio) && 2031 !anon_exclusive, subpage); 2032 if (anon_exclusive && 2033 page_try_share_anon_rmap(subpage)) { 2034 if (folio_test_hugetlb(folio)) 2035 set_huge_pte_at(mm, address, pvmw.pte, pteval); 2036 else 2037 set_pte_at(mm, address, pvmw.pte, pteval); 2038 ret = false; 2039 page_vma_mapped_walk_done(&pvmw); 2040 break; 2041 } 2042 2043 /* 2044 * Store the pfn of the page in a special migration 2045 * pte. do_swap_page() will wait until the migration 2046 * pte is removed and then restart fault handling. 
2047 */ 2048 if (pte_write(pteval)) 2049 entry = make_writable_migration_entry( 2050 page_to_pfn(subpage)); 2051 else if (anon_exclusive) 2052 entry = make_readable_exclusive_migration_entry( 2053 page_to_pfn(subpage)); 2054 else 2055 entry = make_readable_migration_entry( 2056 page_to_pfn(subpage)); 2057 2058 swp_pte = swp_entry_to_pte(entry); 2059 if (pte_soft_dirty(pteval)) 2060 swp_pte = pte_swp_mksoft_dirty(swp_pte); 2061 if (pte_uffd_wp(pteval)) 2062 swp_pte = pte_swp_mkuffd_wp(swp_pte); 2063 if (folio_test_hugetlb(folio)) 2064 set_huge_swap_pte_at(mm, address, pvmw.pte, 2065 swp_pte, vma_mmu_pagesize(vma)); 2066 else 2067 set_pte_at(mm, address, pvmw.pte, swp_pte); 2068 trace_set_migration_pte(address, pte_val(swp_pte), 2069 compound_order(&folio->page)); 2070 /* 2071 * No need to invalidate here it will synchronize on 2072 * against the special swap migration pte. 2073 */ 2074 } 2075 2076 /* 2077 * No need to call mmu_notifier_invalidate_range() it has be 2078 * done above for all cases requiring it to happen under page 2079 * table lock before mmu_notifier_invalidate_range_end() 2080 * 2081 * See Documentation/vm/mmu_notifier.rst 2082 */ 2083 page_remove_rmap(subpage, vma, folio_test_hugetlb(folio)); 2084 if (vma->vm_flags & VM_LOCKED) 2085 mlock_page_drain_local(); 2086 folio_put(folio); 2087 } 2088 2089 mmu_notifier_invalidate_range_end(&range); 2090 2091 return ret; 2092 } 2093 -- 0-DAY CI Kernel Test Service https://01.org/lkp ^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration 2022-05-08 9:36 ` Baolin Wang (?) (?) @ 2022-05-08 13:31 ` Muchun Song -1 siblings, 0 replies; 73+ messages in thread From: Muchun Song @ 2022-05-08 13:31 UTC (permalink / raw) To: Baolin Wang Cc: akpm, mike.kravetz, catalin.marinas, will, tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch, linux-mm On Sun, May 08, 2022 at 05:36:40PM +0800, Baolin Wang wrote: > On some architectures (like ARM64), it can support CONT-PTE/PMD size > hugetlb, which means it can support not only PMD/PUD size hugetlb: > 2M and 1G, but also CONT-PTE/PMD size: 64K and 32M if a 4K page > size specified. > > When migrating a hugetlb page, we will get the relevant page table > entry by huge_pte_offset() only once to nuke it and remap it with > a migration pte entry. This is correct for PMD or PUD size hugetlb, > since they always contain only one pmd entry or pud entry in the > page table. > > However this is incorrect for CONT-PTE and CONT-PMD size hugetlb, > since they can contain several continuous pte or pmd entry with > same page table attributes. So we will nuke or remap only one pte > or pmd entry for this CONT-PTE/PMD size hugetlb page, which is > not expected for hugetlb migration. The problem is we can still > continue to modify the subpages' data of a hugetlb page during > migrating a hugetlb page, which can cause a serious data consistent > issue, since we did not nuke the page table entry and set a > migration pte for the subpages of a hugetlb page. 
> > To fix this issue, we should change to use huge_ptep_clear_flush() > to nuke a hugetlb page table, and remap it with set_huge_pte_at() > and set_huge_swap_pte_at() when migrating a hugetlb page, which > already considered the CONT-PTE or CONT-PMD size hugetlb. > > Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> This looks fine to me. Reviewed-by: Muchun Song <songmuchun@bytedance.com> Thanks. ^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration 2022-05-08 9:36 ` Baolin Wang (?) (?) @ 2022-05-09 21:05 ` Mike Kravetz -1 siblings, 0 replies; 73+ messages in thread From: Mike Kravetz @ 2022-05-09 21:05 UTC (permalink / raw) To: Baolin Wang, akpm, catalin.marinas, will Cc: tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch, linux-mm On 5/8/22 02:36, Baolin Wang wrote: > On some architectures (like ARM64), it can support CONT-PTE/PMD size > hugetlb, which means it can support not only PMD/PUD size hugetlb: > 2M and 1G, but also CONT-PTE/PMD size: 64K and 32M if a 4K page > size specified. > > When migrating a hugetlb page, we will get the relevant page table > entry by huge_pte_offset() only once to nuke it and remap it with > a migration pte entry. This is correct for PMD or PUD size hugetlb, > since they always contain only one pmd entry or pud entry in the > page table. > > However this is incorrect for CONT-PTE and CONT-PMD size hugetlb, > since they can contain several continuous pte or pmd entry with > same page table attributes. So we will nuke or remap only one pte > or pmd entry for this CONT-PTE/PMD size hugetlb page, which is > not expected for hugetlb migration. The problem is we can still > continue to modify the subpages' data of a hugetlb page during > migrating a hugetlb page, which can cause a serious data consistent > issue, since we did not nuke the page table entry and set a > migration pte for the subpages of a hugetlb page. > > To fix this issue, we should change to use huge_ptep_clear_flush() > to nuke a hugetlb page table, and remap it with set_huge_pte_at() > and set_huge_swap_pte_at() when migrating a hugetlb page, which > already considered the CONT-PTE or CONT-PMD size hugetlb. 
> > Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> > --- > mm/rmap.c | 24 ++++++++++++++++++------ > 1 file changed, 18 insertions(+), 6 deletions(-) With the addition of !CONFIG_HUGETLB_PAGE stubs, Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com> -- Mike Kravetz ^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration @ 2022-05-09 21:05 ` Mike Kravetz 0 siblings, 0 replies; 73+ messages in thread From: Mike Kravetz @ 2022-05-09 21:05 UTC (permalink / raw) To: Baolin Wang, akpm, catalin.marinas, will Cc: tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch, linux-mm On 5/8/22 02:36, Baolin Wang wrote: > On some architectures (like ARM64), it can support CONT-PTE/PMD size > hugetlb, which means it can support not only PMD/PUD size hugetlb: > 2M and 1G, but also CONT-PTE/PMD size: 64K and 32M if a 4K page > size specified. > > When migrating a hugetlb page, we will get the relevant page table > entry by huge_pte_offset() only once to nuke it and remap it with > a migration pte entry. This is correct for PMD or PUD size hugetlb, > since they always contain only one pmd entry or pud entry in the > page table. > > However this is incorrect for CONT-PTE and CONT-PMD size hugetlb, > since they can contain several continuous pte or pmd entry with > same page table attributes. So we will nuke or remap only one pte > or pmd entry for this CONT-PTE/PMD size hugetlb page, which is > not expected for hugetlb migration. The problem is we can still > continue to modify the subpages' data of a hugetlb page during > migrating a hugetlb page, which can cause a serious data consistent > issue, since we did not nuke the page table entry and set a > migration pte for the subpages of a hugetlb page. > > To fix this issue, we should change to use huge_ptep_clear_flush() > to nuke a hugetlb page table, and remap it with set_huge_pte_at() > and set_huge_swap_pte_at() when migrating a hugetlb page, which > already considered the CONT-PTE or CONT-PMD size hugetlb. 
> > Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> > --- > mm/rmap.c | 24 ++++++++++++++++++------ > 1 file changed, 18 insertions(+), 6 deletions(-) With the addition of !CONFIG_HUGETLB_PAGE stubs, Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com> -- Mike Kravetz ^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration @ 2022-05-09 21:05 ` Mike Kravetz 0 siblings, 0 replies; 73+ messages in thread From: Mike Kravetz @ 2022-05-09 21:05 UTC (permalink / raw) To: Baolin Wang, akpm, catalin.marinas, will Cc: tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch, linux-mm On 5/8/22 02:36, Baolin Wang wrote: > On some architectures (like ARM64), it can support CONT-PTE/PMD size > hugetlb, which means it can support not only PMD/PUD size hugetlb: > 2M and 1G, but also CONT-PTE/PMD size: 64K and 32M if a 4K page > size specified. > > When migrating a hugetlb page, we will get the relevant page table > entry by huge_pte_offset() only once to nuke it and remap it with > a migration pte entry. This is correct for PMD or PUD size hugetlb, > since they always contain only one pmd entry or pud entry in the > page table. > > However this is incorrect for CONT-PTE and CONT-PMD size hugetlb, > since they can contain several continuous pte or pmd entry with > same page table attributes. So we will nuke or remap only one pte > or pmd entry for this CONT-PTE/PMD size hugetlb page, which is > not expected for hugetlb migration. The problem is we can still > continue to modify the subpages' data of a hugetlb page during > migrating a hugetlb page, which can cause a serious data consistent > issue, since we did not nuke the page table entry and set a > migration pte for the subpages of a hugetlb page. > > To fix this issue, we should change to use huge_ptep_clear_flush() > to nuke a hugetlb page table, and remap it with set_huge_pte_at() > and set_huge_swap_pte_at() when migrating a hugetlb page, which > already considered the CONT-PTE or CONT-PMD size hugetlb. 
> > Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com> > --- > mm/rmap.c | 24 ++++++++++++++++++------ > 1 file changed, 18 insertions(+), 6 deletions(-) With the addition of !CONFIG_HUGETLB_PAGE stubs, Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com> -- Mike Kravetz _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 73+ messages in thread
* Re: [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration @ 2022-05-09 21:05 ` Mike Kravetz 0 siblings, 0 replies; 73+ messages in thread From: Mike Kravetz @ 2022-05-09 21:05 UTC (permalink / raw) To: Baolin Wang, akpm, catalin.marinas, will Cc: dalias, linux-ia64, linux-sh, linux-kernel, James.Bottomley, linux-mm, paulus, sparclinux, agordeev, linux-arch, linux-s390, arnd, ysato, deller, borntraeger, gor, hca, linux-arm-kernel, tsbogend, linux-parisc, linux-mips, svens, linuxppc-dev, davem On 5/8/22 02:36, Baolin Wang wrote: > On some architectures (like ARM64), it can support CONT-PTE/PMD size > hugetlb, which means it can support not only PMD/PUD size hugetlb: > 2M and 1G, but also CONT-PTE/PMD size: 64K and 32M if a 4K page > size specified. > > When migrating a hugetlb page, we will get the relevant page table > entry by huge_pte_offset() only once to nuke it and remap it with > a migration pte entry. This is correct for PMD or PUD size hugetlb, > since they always contain only one pmd entry or pud entry in the > page table. > > However this is incorrect for CONT-PTE and CONT-PMD size hugetlb, > since they can contain several continuous pte or pmd entry with > same page table attributes. So we will nuke or remap only one pte > or pmd entry for this CONT-PTE/PMD size hugetlb page, which is > not expected for hugetlb migration. The problem is we can still > continue to modify the subpages' data of a hugetlb page during > migrating a hugetlb page, which can cause a serious data consistent > issue, since we did not nuke the page table entry and set a > migration pte for the subpages of a hugetlb page. > > To fix this issue, we should change to use huge_ptep_clear_flush() > to nuke a hugetlb page table, and remap it with set_huge_pte_at() > and set_huge_swap_pte_at() when migrating a hugetlb page, which > already considered the CONT-PTE or CONT-PMD size hugetlb. 
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/rmap.c | 24 ++++++++++++++++++------
>  1 file changed, 18 insertions(+), 6 deletions(-)

With the addition of !CONFIG_HUGETLB_PAGE stubs,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz

^ permalink raw reply [flat|nested] 73+ messages in thread
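The contiguous-entry hazard being reviewed above can be modeled in plain user-space C. This is only a sketch: the `toy_pte` type, the helper names, and the 16-entry span (modeled on arm64's 64K CONT-PTE hugetlb page built from 16 x 4K PTEs) are illustrative assumptions, not kernel API. It shows why nuking a single entry, as ptep_clear_flush() would, leaves most of the hugetlb page still mapped, while a huge_ptep_clear_flush()-style loop clears the whole span:

```c
#include <stdbool.h>
#include <stddef.h>

#define CONT_PTES 16 /* assumption: 16 x 4K PTEs back one 64K hugetlb page */

/* Toy PTE: just a "valid" bit for illustration. */
struct toy_pte { bool valid; };

/* Buggy path: nuke a single entry, like ptep_clear_flush() would. */
static void clear_one(struct toy_pte *ptes)
{
	ptes[0].valid = false;
}

/* Fixed path: nuke the whole contiguous span, like huge_ptep_clear_flush(). */
static void clear_cont(struct toy_pte *ptes)
{
	for (size_t i = 0; i < CONT_PTES; i++)
		ptes[i].valid = false;
}

/* How many subpages are still reachable after the "unmap"? */
static size_t still_mapped(const struct toy_pte *ptes)
{
	size_t n = 0;

	for (size_t i = 0; i < CONT_PTES; i++)
		n += ptes[i].valid;
	return n;
}
```

With `clear_one()`, 15 of the 16 subpages remain mapped and writable while migration proceeds; with `clear_cont()`, none do, which is the invariant the patch restores.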
* [PATCH v2 3/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when unmapping
@ 2022-05-08  9:36 ` Baolin Wang
0 siblings, 0 replies; 73+ messages in thread
From: Baolin Wang @ 2022-05-08  9:36 UTC (permalink / raw)
To: akpm, mike.kravetz, catalin.marinas, will
Cc: tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, baolin.wang, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch, linux-mm

Some architectures (like ARM64) support CONT-PTE/PMD size hugetlb
pages, which means they can support not only PMD/PUD size hugetlb
(2M and 1G), but also CONT-PTE/PMD sizes (64K and 32M) when a 4K
base page size is specified.

When unmapping a hugetlb page, we will get the relevant page table
entry by huge_pte_offset() only once to nuke it. This is correct
for PMD or PUD size hugetlb, since they always contain only one
pmd entry or pud entry in the page table.

However, this is incorrect for CONT-PTE and CONT-PMD size hugetlb,
since they can contain several contiguous pte or pmd entries with
the same page table attributes, so we will nuke only one pte or pmd
entry for this CONT-PTE/PMD size hugetlb page.

Currently try_to_unmap() is only passed a hugetlb page when that
page is poisoned, which means we will unmap only one pte entry of
a CONT-PTE or CONT-PMD size poisoned hugetlb page. The other
subpages of the poisoned hugetlb page remain accessible, which may
cause serious issues.

So we should change to use huge_ptep_clear_flush() to nuke the
hugetlb page table to fix this issue, which already considers
CONT-PTE and CONT-PMD size hugetlb. We already use
set_huge_swap_pte_at() to set a poisoned swap entry for a poisoned
hugetlb page. Meanwhile, add a VM_BUG_ON() in try_to_unmap() to
make sure the passed hugetlb page is poisoned.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/rmap.c | 39 ++++++++++++++++++++++-----------------
 1 file changed, 22 insertions(+), 17 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 7cf2408..37c8fd2 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1530,6 +1530,11 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 
 		if (folio_test_hugetlb(folio)) {
 			/*
+			 * The try_to_unmap() is only passed a hugetlb page
+			 * in the case where the hugetlb page is poisoned.
+			 */
+			VM_BUG_ON_PAGE(!PageHWPoison(subpage), subpage);
+			/*
 			 * huge_pmd_unshare may unmap an entire PMD page.
 			 * There is no way of knowing exactly which PMDs may
 			 * be cached for this mm, so we must flush them all.
@@ -1564,28 +1569,28 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 					break;
 				}
 			}
+			pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
 		} else {
 			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
-		}
-
-		/*
-		 * Nuke the page table entry. When having to clear
-		 * PageAnonExclusive(), we always have to flush.
-		 */
-		if (should_defer_flush(mm, flags) && !anon_exclusive) {
 			/*
-			 * We clear the PTE but do not flush so potentially
-			 * a remote CPU could still be writing to the folio.
-			 * If the entry was previously clean then the
-			 * architecture must guarantee that a clear->dirty
-			 * transition on a cached TLB entry is written through
-			 * and traps if the PTE is unmapped.
+			 * Nuke the page table entry. When having to clear
+			 * PageAnonExclusive(), we always have to flush.
 			 */
-			pteval = ptep_get_and_clear(mm, address, pvmw.pte);
+			if (should_defer_flush(mm, flags) && !anon_exclusive) {
+				/*
+				 * We clear the PTE but do not flush so potentially
+				 * a remote CPU could still be writing to the folio.
+				 * If the entry was previously clean then the
+				 * architecture must guarantee that a clear->dirty
+				 * transition on a cached TLB entry is written through
+				 * and traps if the PTE is unmapped.
+				 */
+				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-			set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
-		} else {
-			pteval = ptep_clear_flush(vma, address, pvmw.pte);
+				set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+			} else {
+				pteval = ptep_clear_flush(vma, address, pvmw.pte);
+			}
 		}
 
 		/*
-- 
1.8.3.1
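The control-flow change the diff above makes can be summarized as a small decision function. This is a sketch of the post-patch branch structure only: the names are hypothetical, `defer_ok` stands in for should_defer_flush(), and the real function does far more (cache flushing, dirty-bit handling, TLB batching). The point it captures is that after the patch, hugetlb pages always take the full huge_ptep_clear_flush() path and never the deferred-flush path:

```c
#include <stdbool.h>

enum nuke_action {
	NUKE_HUGE_CLEAR_FLUSH,  /* huge_ptep_clear_flush(): whole contiguous span */
	NUKE_CLEAR_DEFER_FLUSH, /* ptep_get_and_clear() + set_tlb_ubc_flush_pending() */
	NUKE_CLEAR_FLUSH,       /* ptep_clear_flush(): single entry, immediate flush */
};

/*
 * Mirrors the restructured try_to_unmap_one() flow: hugetlb pages never
 * defer the flush, and clearing PageAnonExclusive() forces a flush too.
 */
static enum nuke_action pick_nuke(bool hugetlb, bool defer_ok, bool anon_exclusive)
{
	if (hugetlb)
		return NUKE_HUGE_CLEAR_FLUSH;
	if (defer_ok && !anon_exclusive)
		return NUKE_CLEAR_DEFER_FLUSH;
	return NUKE_CLEAR_FLUSH;
}
```

Before the patch, the first branch did not exist and a poisoned CONT-PTE/PMD hugetlb page fell through to the single-entry paths below, which is exactly the bug being fixed.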
* Re: [PATCH v2 3/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when unmapping
@ 2022-05-09  6:42 ` Muchun Song
0 siblings, 0 replies; 73+ messages in thread
From: Muchun Song @ 2022-05-09 6:42 UTC (permalink / raw)
To: Baolin Wang
Cc: akpm, mike.kravetz, catalin.marinas, will, tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch, linux-mm

On Sun, May 08, 2022 at 05:36:41PM +0800, Baolin Wang wrote:
> On some architectures (like ARM64), it can support CONT-PTE/PMD size
> hugetlb, which means it can support not only PMD/PUD size hugetlb:
> 2M and 1G, but also CONT-PTE/PMD size: 64K and 32M if a 4K page
> size specified.
>
> When unmapping a hugetlb page, we will get the relevant page table
> entry by huge_pte_offset() only once to nuke it. This is correct
> for PMD or PUD size hugetlb, since they always contain only one
> pmd entry or pud entry in the page table.
>
> However this is incorrect for CONT-PTE and CONT-PMD size hugetlb,
> since they can contain several continuous pte or pmd entry with
> same page table attributes, so we will nuke only one pte or pmd
> entry for this CONT-PTE/PMD size hugetlb page.
>
> And now try_to_unmap() is only passed a hugetlb page in the case
> where the hugetlb page is poisoned. Which means now we will unmap
> only one pte entry for a CONT-PTE or CONT-PMD size poisoned hugetlb
> page, and we can still access other subpages of a CONT-PTE or
> CONT-PMD size poisoned hugetlb page, which will cause serious
> issues possibly.
>
> So we should change to use huge_ptep_clear_flush() to nuke the
> hugetlb page table to fix this issue, which already considered
> CONT-PTE and CONT-PMD size hugetlb.
>
> We've already used set_huge_swap_pte_at() to set a poisoned
> swap entry for a poisoned hugetlb page. Meanwhile adding a VM_BUG_ON()
> to make sure the passed hugetlb page is poisoned in try_to_unmap().
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>

Reviewed-by: Muchun Song <songmuchun@bytedance.com>

Thanks.
* Re: [PATCH v2 3/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when unmapping
@ 2022-05-09 22:25 ` Mike Kravetz
0 siblings, 0 replies; 73+ messages in thread
From: Mike Kravetz @ 2022-05-09 22:25 UTC (permalink / raw)
To: Baolin Wang, akpm, catalin.marinas, will
Cc: tsbogend, James.Bottomley, deller, mpe, benh, paulus, hca, gor, agordeev, borntraeger, svens, ysato, dalias, davem, arnd, linux-arm-kernel, linux-kernel, linux-ia64, linux-mips, linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux, linux-arch, linux-mm

On 5/8/22 02:36, Baolin Wang wrote:
> On some architectures (like ARM64), it can support CONT-PTE/PMD size
> hugetlb, which means it can support not only PMD/PUD size hugetlb:
> 2M and 1G, but also CONT-PTE/PMD size: 64K and 32M if a 4K page
> size specified.
>
> When unmapping a hugetlb page, we will get the relevant page table
> entry by huge_pte_offset() only once to nuke it. This is correct
> for PMD or PUD size hugetlb, since they always contain only one
> pmd entry or pud entry in the page table.
>
> However this is incorrect for CONT-PTE and CONT-PMD size hugetlb,
> since they can contain several continuous pte or pmd entry with
> same page table attributes, so we will nuke only one pte or pmd
> entry for this CONT-PTE/PMD size hugetlb page.
>
> And now try_to_unmap() is only passed a hugetlb page in the case
> where the hugetlb page is poisoned. Which means now we will unmap
> only one pte entry for a CONT-PTE or CONT-PMD size poisoned hugetlb
> page, and we can still access other subpages of a CONT-PTE or
> CONT-PMD size poisoned hugetlb page, which will cause serious
> issues possibly.
>
> So we should change to use huge_ptep_clear_flush() to nuke the
> hugetlb page table to fix this issue, which already considered
> CONT-PTE and CONT-PMD size hugetlb.
>
> We've already used set_huge_swap_pte_at() to set a poisoned
> swap entry for a poisoned hugetlb page. Meanwhile adding a VM_BUG_ON()
> to make sure the passed hugetlb page is poisoned in try_to_unmap().
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/rmap.c | 39 ++++++++++++++++++++++-----------------
>  1 file changed, 22 insertions(+), 17 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 7cf2408..37c8fd2 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1530,6 +1530,11 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>
>  		if (folio_test_hugetlb(folio)) {
>  			/*
> +			 * The try_to_unmap() is only passed a hugetlb page
> +			 * in the case where the hugetlb page is poisoned.
> +			 */
> +			VM_BUG_ON_PAGE(!PageHWPoison(subpage), subpage);
> +			/*

It is unfortunate that this could not easily be added to the first
if (folio_test_hugetlb(folio)) block in this routine. However, it is
fine to add here.

Looks good. Thanks for all these changes,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-- 
Mike Kravetz
end of thread, other threads:[~2022-05-10  1:36 UTC | newest]

Thread overview: 73+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-08  9:36 [PATCH v2 0/3] Fix CONT-PTE/PMD size hugetlb issue when unmapping or migrating Baolin Wang
2022-05-08  9:36 ` [PATCH v2 1/3] mm: change huge_ptep_clear_flush() to return the original pte Baolin Wang
2022-05-08 11:09   ` Muchun Song
2022-05-08 13:09     ` Baolin Wang
2022-05-09  4:06       ` Muchun Song
2022-05-09  5:46   ` Christophe Leroy
2022-05-09  8:46     ` Baolin Wang
2022-05-09 20:02   ` Mike Kravetz
2022-05-10  1:35     ` Baolin Wang
2022-05-08  9:36 ` [PATCH v2 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration Baolin Wang
2022-05-08 12:01   ` kernel test robot
2022-05-08 13:13     ` Baolin Wang
2022-05-08 12:11   ` kernel test robot
2022-05-08 13:31   ` Muchun Song
2022-05-09 21:05   ` Mike Kravetz
2022-05-08  9:36 ` [PATCH v2 3/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when unmapping Baolin Wang
2022-05-09  6:42   ` Muchun Song
2022-05-09 22:25   ` Mike Kravetz