* [PATCH v7 0/3] fix double page fault on arm64
@ 2019-09-20 13:54 Jia He
  2019-09-20 13:54 ` [PATCH v7 1/3] arm64: cpufeature: introduce helper cpu_has_hw_af() Jia He
  ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Jia He @ 2019-09-20 13:54 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose
  Cc: Ralph Campbell, Jia He, Anshuman Khandual, Alex Van Brunt,
	Kaly Xin, Jérôme Glisse, Punit Agrawal, hejianet, Andrew Morton,
	nd, Robin Murphy, Thomas Gleixner

When we tested the pmdk unit test vmmalloc_fork TEST1 in an arm64 guest,
a double page fault occurred in __copy_from_user_inatomic of
cow_user_page.

As told by Catalin: "On arm64 without hardware Access Flag, copying from
user will fail because the pte is old and cannot be marked young. So we
always end up with zeroed page after fork() + CoW for pfn mappings. We
don't always have a hardware-managed access flag on arm64."

Changes
v7: s/pte_spinlock/pte_offset_map_lock (Kirill)
v6: fix error case of returning with spinlock taken (Catalin)
    move kmap_atomic to avoid handling kunmap_atomic
v5: handle the case correctly when !pte_same
    fix kbuild test failure
v4: introduce cpu_has_hw_af (Suzuki)
    bail out if !pte_same (Kirill)
v3: add vmf->ptl lock/unlock (Kirill A. Shutemov)
    add arch_faults_on_old_pte (Matthew, Catalin)
v2: remove FAULT_FLAG_WRITE when setting pte access flag (Catalin)

Jia He (3):
  arm64: cpufeature: introduce helper cpu_has_hw_af()
  arm64: mm: implement arch_faults_on_old_pte() on arm64
  mm: fix double page fault on arm64 if PTE_AF is cleared

 arch/arm64/include/asm/cpufeature.h | 10 +++++
 arch/arm64/include/asm/pgtable.h    | 12 ++++++
 mm/memory.c                         | 67 ++++++++++++++++++++++++++---
 3 files changed, 83 insertions(+), 6 deletions(-)

--
2.17.1

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* [PATCH v7 1/3] arm64: cpufeature: introduce helper cpu_has_hw_af()
  2019-09-20 13:54 [PATCH v7 0/3] fix double page fault on arm64 Jia He
@ 2019-09-20 13:54 ` Jia He
  2019-09-20 13:54 ` [PATCH v7 2/3] arm64: mm: implement arch_faults_on_old_pte() on arm64 Jia He
  2019-09-20 13:54 ` [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared Jia He
  2 siblings, 0 replies; 9+ messages in thread
From: Jia He @ 2019-09-20 13:54 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose
  Cc: Ralph Campbell, Jia He, Anshuman Khandual, Alex Van Brunt,
	Kaly Xin, Jérôme Glisse, Punit Agrawal, hejianet, Andrew Morton,
	nd, Robin Murphy, Thomas Gleixner

We unconditionally set the HW_AFDBM capability and only enable it on
CPUs which really have the feature. But sometimes we need to know
whether this CPU has the hardware Access Flag (AF) capability. So
decouple AF from DBM with a new helper, cpu_has_hw_af().

Reported-by: kbuild test robot <lkp@intel.com>
Suggested-by: Suzuki Poulose <Suzuki.Poulose@arm.com>
Signed-off-by: Jia He <justin.he@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c96ffa4722d3..46caf934ba4e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -667,6 +667,16 @@ static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
 	default: return CONFIG_ARM64_PA_BITS;
 	}
 }
+
+/* Decouple AF from AFDBM. */
+static inline bool cpu_has_hw_af(void)
+{
+	if (IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
+		return read_cpuid(ID_AA64MMFR1_EL1) & 0xf;
+
+	return false;
+}
+
 #endif /* __ASSEMBLY__ */

 #endif

--
2.17.1
* [PATCH v7 2/3] arm64: mm: implement arch_faults_on_old_pte() on arm64
  2019-09-20 13:54 [PATCH v7 0/3] fix double page fault on arm64 Jia He
  2019-09-20 13:54 ` [PATCH v7 1/3] arm64: cpufeature: introduce helper cpu_has_hw_af() Jia He
@ 2019-09-20 13:54 ` Jia He
  2019-09-20 13:54 ` [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared Jia He
  2 siblings, 0 replies; 9+ messages in thread
From: Jia He @ 2019-09-20 13:54 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose
  Cc: Ralph Campbell, Jia He, Anshuman Khandual, Alex Van Brunt,
	Kaly Xin, Jérôme Glisse, Punit Agrawal, hejianet, Andrew Morton,
	nd, Robin Murphy, Thomas Gleixner

On arm64 without hardware Access Flag, copying from user will fail
because the pte is old and cannot be marked young. So we always end up
with a zeroed page after fork() + CoW for pfn mappings. We don't always
have a hardware-managed access flag on arm64.

Hence implement arch_faults_on_old_pte() on arm64, indicating that
accessing an old pte may cause a page fault.

Signed-off-by: Jia He <justin.he@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index e09760ece844..4a9939615e41 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -868,6 +868,18 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 #define phys_to_ttbr(addr)	(addr)
 #endif

+/*
+ * On arm64 without hardware Access Flag, copying from user will fail
+ * because the pte is old and cannot be marked young. So we always end
+ * up with a zeroed page after fork() + CoW for pfn mappings. We don't
+ * always have a hardware-managed access flag on arm64.
+ */
+static inline bool arch_faults_on_old_pte(void)
+{
+	return !cpu_has_hw_af();
+}
+#define arch_faults_on_old_pte		arch_faults_on_old_pte
+
 #endif /* !__ASSEMBLY__ */

 #endif /* __ASM_PGTABLE_H */

--
2.17.1
* [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
  2019-09-20 13:54 [PATCH v7 0/3] fix double page fault on arm64 Jia He
  2019-09-20 13:54 ` [PATCH v7 1/3] arm64: cpufeature: introduce helper cpu_has_hw_af() Jia He
  2019-09-20 13:54 ` [PATCH v7 2/3] arm64: mm: implement arch_faults_on_old_pte() on arm64 Jia He
@ 2019-09-20 13:54 ` Jia He
  2019-09-20 14:21   ` Kirill A. Shutemov
  2019-09-20 15:53   ` Matthew Wilcox
  2 siblings, 2 replies; 9+ messages in thread
From: Jia He @ 2019-09-20 13:54 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose
  Cc: Ralph Campbell, Jia He, Anshuman Khandual, Alex Van Brunt,
	Kaly Xin, Jérôme Glisse, Punit Agrawal, hejianet, Andrew Morton,
	nd, Robin Murphy, Thomas Gleixner

When we tested the pmdk unit test [1] vmmalloc_fork TEST1 in an arm64
guest, a double page fault occurred in __copy_from_user_inatomic of
cow_user_page.

The call trace below is from arm64 do_page_fault, for debugging purposes:
[  110.016195] Call trace:
[  110.016826]  do_page_fault+0x5a4/0x690
[  110.017812]  do_mem_abort+0x50/0xb0
[  110.018726]  el1_da+0x20/0xc4
[  110.019492]  __arch_copy_from_user+0x180/0x280
[  110.020646]  do_wp_page+0xb0/0x860
[  110.021517]  __handle_mm_fault+0x994/0x1338
[  110.022606]  handle_mm_fault+0xe8/0x180
[  110.023584]  do_page_fault+0x240/0x690
[  110.024535]  do_mem_abort+0x50/0xb0
[  110.025423]  el0_da+0x20/0x24

The pte info before __copy_from_user_inatomic is (PTE_AF is cleared):
[ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003,
pmd=000000023d4b3003, pte=360000298607bd3

As told by Catalin: "On arm64 without hardware Access Flag, copying from
user will fail because the pte is old and cannot be marked young. So we
always end up with zeroed page after fork() + CoW for pfn mappings. We
don't always have a hardware-managed access flag on arm64."

This patch fixes it by calling pte_mkyoung. Also, the parameters are
changed because vmf should be passed to cow_user_page().

Add a WARN_ON_ONCE when __copy_from_user_inatomic() returns an error,
in case there is some obscure use-case. (by Kirill)

[1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork

Reported-by: Yibo Cai <Yibo.Cai@arm.com>
Signed-off-by: Jia He <justin.he@arm.com>
---
 mm/memory.c | 67 +++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 61 insertions(+), 6 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e2bb51b6242e..3e39e40fee87 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -118,6 +118,13 @@ int randomize_va_space __read_mostly = 2;
 #endif

+#ifndef arch_faults_on_old_pte
+static inline bool arch_faults_on_old_pte(void)
+{
+	return false;
+}
+#endif
+
 static int __init disable_randmaps(char *s)
 {
 	randomize_va_space = 0;
@@ -2140,8 +2147,13 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
 	return same;
 }

-static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
+static inline int cow_user_page(struct page *dst, struct page *src,
+				struct vm_fault *vmf)
 {
+	struct vm_area_struct *vma = vmf->vma;
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long addr = vmf->address;
+
 	debug_dma_assert_idle(src);

 	/*
@@ -2151,21 +2163,53 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
 	 * fails, we just zero-fill it. Live with it.
 	 */
 	if (unlikely(!src)) {
-		void *kaddr = kmap_atomic(dst);
-		void __user *uaddr = (void __user *)(va & PAGE_MASK);
+		void *kaddr;
+		pte_t entry;
+		void __user *uaddr = (void __user *)(addr & PAGE_MASK);

+		/* On architectures with software "accessed" bits, we would
+		 * take a double page fault, so mark it accessed here.
+		 */
+		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
+			vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr,
+						       &vmf->ptl);
+			if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
+				entry = pte_mkyoung(vmf->orig_pte);
+				if (ptep_set_access_flags(vma, addr,
+							  vmf->pte, entry, 0))
+					update_mmu_cache(vma, addr, vmf->pte);
+			} else {
+				/* Other thread has already handled the fault
+				 * and we don't need to do anything. If it's
+				 * not the case, the fault will be triggered
+				 * again on the same address.
+				 */
+				pte_unmap_unlock(vmf->pte, vmf->ptl);
+				return -1;
+			}
+			pte_unmap_unlock(vmf->pte, vmf->ptl);
+		}
+
+		kaddr = kmap_atomic(dst);
+
 		/*
 		 * This really shouldn't fail, because the page is there
 		 * in the page tables. But it might just be unreadable,
 		 * in which case we just give up and fill the result with
 		 * zeroes.
 		 */
-		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
+		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
+			/* Give a warn in case there can be some obscure
+			 * use-case
+			 */
+			WARN_ON_ONCE(1);
 			clear_page(kaddr);
+		}
 		kunmap_atomic(kaddr);
 		flush_dcache_page(dst);
 	} else
-		copy_user_highpage(dst, src, va, vma);
+		copy_user_highpage(dst, src, addr, vma);
+
+	return 0;
 }

 static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
@@ -2318,7 +2362,18 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 							vmf->address);
 		if (!new_page)
 			goto oom;
-		cow_user_page(new_page, old_page, vmf->address, vma);
+
+		if (cow_user_page(new_page, old_page, vmf)) {
+			/* COW failed, if the fault was solved by other,
+			 * it's fine. If not, userspace would re-fault on
+			 * the same address and we will handle the fault
+			 * from the second attempt.
+			 */
+			put_page(new_page);
+			if (old_page)
+				put_page(old_page);
+			return 0;
+		}
 	}

 	if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false))

--
2.17.1
* Re: [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
  2019-09-20 13:54 ` [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared Jia He
@ 2019-09-20 14:21   ` Kirill A. Shutemov
  2019-09-20 14:24     ` Justin He (Arm Technology China)
  1 sibling, 1 reply; 9+ messages in thread
From: Kirill A. Shutemov @ 2019-09-20 14:21 UTC (permalink / raw)
  To: Jia He
  Cc: Mark Rutland, Catalin Marinas, linux-mm, Punit Agrawal, Will Deacon,
	Alex Van Brunt, Marc Zyngier, Anshuman Khandual, Matthew Wilcox,
	Kaly Xin, hejianet, Ralph Campbell, Suzuki Poulose, Jérôme Glisse,
	Thomas Gleixner, nd, linux-arm-kernel, linux-kernel, James Morse,
	Andrew Morton, Robin Murphy, Kirill A. Shutemov

On Fri, Sep 20, 2019 at 09:54:37PM +0800, Jia He wrote:
> When we tested pmdk unit test [1] vmmalloc_fork TEST1 in arm64 guest, there
> will be a double page fault in __copy_from_user_inatomic of cow_user_page.
> [...]
> [1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork
>
> Reported-by: Yibo Cai <Yibo.Cai@arm.com>
> Signed-off-by: Jia He <justin.he@arm.com>

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

--
 Kirill A. Shutemov
* RE: [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
  2019-09-20 14:21 ` Kirill A. Shutemov
@ 2019-09-20 14:24   ` Justin He (Arm Technology China)
  0 siblings, 0 replies; 9+ messages in thread
From: Justin He (Arm Technology China) @ 2019-09-20 14:24 UTC (permalink / raw)
  To: Kirill A. Shutemov; +Cc: linux-mm, nd, linux-kernel, linux-arm-kernel, hejianet

Thanks for your patient review 😊

--
Cheers,
Justin (Jia He)

> -----Original Message-----
> From: Kirill A. Shutemov <kirill@shutemov.name>
> Sent: September 20, 2019 22:21
> To: Justin He (Arm Technology China) <Justin.He@arm.com>
> Subject: Re: [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF
> is cleared
>
> On Fri, Sep 20, 2019 at 09:54:37PM +0800, Jia He wrote:
> > [...]
>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>
> --
> Kirill A. Shutemov
* Re: [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
  2019-09-20 13:54 ` [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared Jia He
  2019-09-20 14:21 ` Kirill A. Shutemov
@ 2019-09-20 15:53 ` Matthew Wilcox
  2019-09-20 17:00   ` Kirill A. Shutemov
  2019-09-21 13:19   ` Jia He
  1 sibling, 2 replies; 9+ messages in thread
From: Matthew Wilcox @ 2019-09-20 15:53 UTC (permalink / raw)
  To: Jia He
  Cc: Mark Rutland, Kaly Xin, Ralph Campbell, Andrew Morton, Suzuki Poulose,
	Catalin Marinas, Anshuman Khandual, linux-kernel, linux-mm,
	Jérôme Glisse, James Morse, linux-arm-kernel, Punit Agrawal,
	Marc Zyngier, hejianet, Thomas Gleixner, nd, Will Deacon,
	Alex Van Brunt, Kirill A. Shutemov, Robin Murphy

On Fri, Sep 20, 2019 at 09:54:37PM +0800, Jia He wrote:
> -static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
> +static inline int cow_user_page(struct page *dst, struct page *src,
> +				struct vm_fault *vmf)
>  {

Can we talk about the return type here?

> +			} else {
> +				/* Other thread has already handled the fault
> +				 * and we don't need to do anything. If it's
> +				 * not the case, the fault will be triggered
> +				 * again on the same address.
> +				 */
> +				pte_unmap_unlock(vmf->pte, vmf->ptl);
> +				return -1;
...
> +	return 0;
>  }

So -1 for "try again" and 0 for "succeeded".

> +		if (cow_user_page(new_page, old_page, vmf)) {

Then we use it like a bool.  But it's kind of backwards from a bool
because false is success.

> +			/* COW failed, if the fault was solved by other,
> +			 * it's fine. If not, userspace would re-fault on
> +			 * the same address and we will handle the fault
> +			 * from the second attempt.
> +			 */
> +			put_page(new_page);
> +			if (old_page)
> +				put_page(old_page);
> +			return 0;

And we don't use the return value; in fact we invert it.

Would this make more sense:

static inline bool cow_user_page(struct page *dst, struct page *src,
				struct vm_fault *vmf)
...
		pte_unmap_unlock(vmf->pte, vmf->ptl);
		return false;
...
	return true;
...
	if (!cow_user_page(new_page, old_page, vmf)) {

That reads more sensibly for me.  We could also go with returning a
vm_fault_t, but that would be more complex than needed today, I think.
* Re: [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
  2019-09-20 15:53 ` Matthew Wilcox
@ 2019-09-20 17:00   ` Kirill A. Shutemov
  2019-09-21 13:19   ` Jia He
  1 sibling, 0 replies; 9+ messages in thread
From: Kirill A. Shutemov @ 2019-09-20 17:00 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Mark Rutland, Catalin Marinas, linux-mm, Punit Agrawal, Will Deacon,
	Alex Van Brunt, Jia He, Marc Zyngier, Anshuman Khandual, Kaly Xin,
	hejianet, Ralph Campbell, Suzuki Poulose, Jérôme Glisse,
	Thomas Gleixner, nd, linux-arm-kernel, linux-kernel, James Morse,
	Andrew Morton, Robin Murphy, Kirill A. Shutemov

On Fri, Sep 20, 2019 at 08:53:00AM -0700, Matthew Wilcox wrote:
> [...]
> Would this make more sense:
>
> static inline bool cow_user_page(struct page *dst, struct page *src,
> 				struct vm_fault *vmf)
> ...
> 		pte_unmap_unlock(vmf->pte, vmf->ptl);
> 		return false;
> ...
> 	return true;
> ...
> 	if (!cow_user_page(new_page, old_page, vmf)) {
>
> That reads more sensibly for me.

I like this idea too.

--
 Kirill A. Shutemov
* Re: [PATCH v7 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
  2019-09-20 15:53 ` Matthew Wilcox
  2019-09-20 17:00 ` Kirill A. Shutemov
@ 2019-09-21 13:19   ` Jia He
  1 sibling, 0 replies; 9+ messages in thread
From: Jia He @ 2019-09-21 13:19 UTC (permalink / raw)
  To: Matthew Wilcox, Jia He
  Cc: Mark Rutland, Kaly Xin, Ralph Campbell, Andrew Morton, Suzuki Poulose,
	Catalin Marinas, Anshuman Khandual, linux-kernel, linux-mm,
	Jérôme Glisse, James Morse, linux-arm-kernel, Punit Agrawal,
	Marc Zyngier, Thomas Gleixner, nd, Will Deacon, Alex Van Brunt,
	Kirill A. Shutemov, Robin Murphy

[On behalf of justin.he@arm.com]

Hi Matthew

On 2019/9/20 23:53, Matthew Wilcox wrote:
> [...]
> Would this make more sense:
>
> static inline bool cow_user_page(struct page *dst, struct page *src,
> 				struct vm_fault *vmf)
> ...
> 		pte_unmap_unlock(vmf->pte, vmf->ptl);
> 		return false;
> ...
> 	return true;
> ...
> 	if (!cow_user_page(new_page, old_page, vmf)) {
>
> That reads more sensibly for me. We could also go with returning a
> vm_fault_t, but that would be more complex than needed today, I think.

Ok, will change the return type to bool as you suggested. Thanks

---
Cheers,
Justin (Jia He)