linux-kernel.vger.kernel.org archive mirror
* [PATCH v8 0/3] fix double page fault on arm64
From: Jia He @ 2019-09-21 13:50 UTC
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose
  Cc: Punit Agrawal, Anshuman Khandual, Alex Van Brunt, Robin Murphy,
	Thomas Gleixner, Andrew Morton, Jérôme Glisse,
	Ralph Campbell, hejianet, Kaly Xin, nd, Jia He

When we tested the pmdk unit test vmmalloc_fork TEST1 in an arm64 guest, we
hit a double page fault in __copy_from_user_inatomic of cow_user_page.

As Catalin explained: "On arm64 without hardware Access Flag, copying from
user will fail because the pte is old and cannot be marked young. So we
always end up with zeroed page after fork() + CoW for pfn mappings. we
don't always have a hardware-managed access flag on arm64."
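
A rough sketch of the failing sequence (simplified, for illustration):

	fork();				/* the pte is write-protected and old */
	/* child writes to the CoW pfn mapping: 1st fault (el0_da) */
	do_wp_page() -> wp_page_copy() -> cow_user_page()
	  __copy_from_user_inatomic()	/* 2nd fault (el1_da): the pte is
					   still old and, with page faults
					   disabled under kmap_atomic(), it
					   cannot be fixed up, so the copy
					   returns an error */
	  clear_page()			/* fallback: the new page is zeroed */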

Changes
v8: change cow_user_page's return type (Matthew)
v7: s/pte_spinlock/pte_offset_map_lock (Kirill)
v6: fix error case of returning with spinlock taken (Catalin)
    move kmap_atomic to avoid handling kunmap_atomic
v5: handle the case correctly when !pte_same
    fix kbuild test failure
v4: introduce cpu_has_hw_af (Suzuki)
    bail out if !pte_same (Kirill)
v3: add vmf->ptl lock/unlock (Kirill A. Shutemov)
    add arch_faults_on_old_pte (Matthew, Catalin)
v2: remove FAULT_FLAG_WRITE when setting pte access flag (Catalin)

Jia He (3):
  arm64: cpufeature: introduce helper cpu_has_hw_af()
  arm64: mm: implement arch_faults_on_old_pte() on arm64
  mm: fix double page fault on arm64 if PTE_AF is cleared

 arch/arm64/include/asm/cpufeature.h | 10 +++++
 arch/arm64/include/asm/pgtable.h    | 12 ++++++
 mm/memory.c                         | 67 ++++++++++++++++++++++++++---
 3 files changed, 83 insertions(+), 6 deletions(-)

-- 
2.17.1



* [PATCH v8 1/3] arm64: cpufeature: introduce helper cpu_has_hw_af()
From: Jia He @ 2019-09-21 13:50 UTC
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose
  Cc: Punit Agrawal, Anshuman Khandual, Alex Van Brunt, Robin Murphy,
	Thomas Gleixner, Andrew Morton, Jérôme Glisse,
	Ralph Campbell, hejianet, Kaly Xin, nd, Jia He

We unconditionally set the HW_AFDBM capability and only enable it on
CPUs which really have the feature. But sometimes we need to know
whether this CPU has the capability of hardware AF. So decouple AF from
DBM with a new helper, cpu_has_hw_af().
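
For reference, the 0xf mask in the helper below extracts ID_AA64MMFR1_EL1
bits [3:0], the HAFDBS field: 0b0001 means hardware update of the Access
flag is supported, 0b0010 adds hardware update of the dirty state. An
equivalent, more explicit form of the check (an untested sketch) would be:

	u64 mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);

	return cpuid_feature_extract_unsigned_field(mmfr1,
						    ID_AA64MMFR1_HADBS_SHIFT);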

Reported-by: kbuild test robot <lkp@intel.com>
Suggested-by: Suzuki Poulose <Suzuki.Poulose@arm.com>
Signed-off-by: Jia He <justin.he@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c96ffa4722d3..46caf934ba4e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -667,6 +667,16 @@ static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
 	default: return CONFIG_ARM64_PA_BITS;
 	}
 }
+
+/* Decouple AF from AFDBM. */
+static inline bool cpu_has_hw_af(void)
+{
+	if (IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
+		return read_cpuid(ID_AA64MMFR1_EL1) & 0xf;
+
+	return false;
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif
-- 
2.17.1



* [PATCH v8 2/3] arm64: mm: implement arch_faults_on_old_pte() on arm64
From: Jia He @ 2019-09-21 13:50 UTC
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose
  Cc: Punit Agrawal, Anshuman Khandual, Alex Van Brunt, Robin Murphy,
	Thomas Gleixner, Andrew Morton, Jérôme Glisse,
	Ralph Campbell, hejianet, Kaly Xin, nd, Jia He

On arm64 without hardware Access Flag, copying fromuser will fail because
the pte is old and cannot be marked young. So we always end up with zeroed
page after fork() + CoW for pfn mappings. we don't always have a
hardware-managed access flag on arm64.

Hence implement arch_faults_on_old_pte() on arm64 to indicate that
accessing an old pte may cause a page fault.
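
For context, the generic fallback added by patch 3 of this series returns
false, so only architectures that define this hook change behaviour:

	#ifndef arch_faults_on_old_pte
	static inline bool arch_faults_on_old_pte(void)
	{
		return false;
	}
	#endif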

Signed-off-by: Jia He <justin.he@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index e09760ece844..4a9939615e41 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -868,6 +868,18 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 #define phys_to_ttbr(addr)	(addr)
 #endif
 
+/*
+ * On arm64 without hardware Access Flag, copying fromuser will fail because
+ * the pte is old and cannot be marked young. So we always end up with zeroed
+ * page after fork() + CoW for pfn mappings. we don't always have a
+ * hardware-managed access flag on arm64.
+ */
+static inline bool arch_faults_on_old_pte(void)
+{
+	return !cpu_has_hw_af();
+}
+#define arch_faults_on_old_pte arch_faults_on_old_pte
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */
-- 
2.17.1



* [PATCH v8 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
From: Jia He @ 2019-09-21 13:50 UTC
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose
  Cc: Punit Agrawal, Anshuman Khandual, Alex Van Brunt, Robin Murphy,
	Thomas Gleixner, Andrew Morton, Jérôme Glisse,
	Ralph Campbell, hejianet, Kaly Xin, nd, Jia He

When we tested the pmdk unit test [1] vmmalloc_fork TEST1 in an arm64 guest,
we hit a double page fault in __copy_from_user_inatomic of cow_user_page.

The call trace below is from arm64 do_page_fault, captured for debugging:
[  110.016195] Call trace:
[  110.016826]  do_page_fault+0x5a4/0x690
[  110.017812]  do_mem_abort+0x50/0xb0
[  110.018726]  el1_da+0x20/0xc4
[  110.019492]  __arch_copy_from_user+0x180/0x280
[  110.020646]  do_wp_page+0xb0/0x860
[  110.021517]  __handle_mm_fault+0x994/0x1338
[  110.022606]  handle_mm_fault+0xe8/0x180
[  110.023584]  do_page_fault+0x240/0x690
[  110.024535]  do_mem_abort+0x50/0xb0
[  110.025423]  el0_da+0x20/0x24

The pte info before __copy_from_user_inatomic is (PTE_AF is cleared):
[ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003, pmd=000000023d4b3003, pte=360000298607bd3
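
(For reference: PTE_AF is bit 10 on arm64, and the low 16 bits of the pte
above are 0x7bd3 = 0b0111_1011_1101_0011, with bit 10 clear, so the Access
flag is indeed not set.)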

As Catalin explained: "On arm64 without hardware Access Flag, copying from
user will fail because the pte is old and cannot be marked young. So we
always end up with zeroed page after fork() + CoW for pfn mappings. we
don't always have a hardware-managed access flag on arm64."

This patch fixes it by calling pte_mkyoung. Also, the parameters are
changed because vmf should be passed to cow_user_page().

Add a WARN_ON_ONCE when __copy_from_user_inatomic() returns an error,
in case there is some obscure use-case (suggested by Kirill).

[1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork

Reported-by: Yibo Cai <Yibo.Cai@arm.com>
Signed-off-by: Jia He <justin.he@arm.com>
---
 mm/memory.c | 67 ++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 61 insertions(+), 6 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e2bb51b6242e..ae09b070b04d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -118,6 +118,13 @@ int randomize_va_space __read_mostly =
 					2;
 #endif
 
+#ifndef arch_faults_on_old_pte
+static inline bool arch_faults_on_old_pte(void)
+{
+	return false;
+}
+#endif
+
 static int __init disable_randmaps(char *s)
 {
 	randomize_va_space = 0;
@@ -2140,8 +2147,13 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
 	return same;
 }
 
-static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
+static inline bool cow_user_page(struct page *dst, struct page *src,
+				 struct vm_fault *vmf)
 {
+	struct vm_area_struct *vma = vmf->vma;
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long addr = vmf->address;
+
 	debug_dma_assert_idle(src);
 
 	/*
@@ -2151,21 +2163,53 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
 	 * fails, we just zero-fill it. Live with it.
 	 */
 	if (unlikely(!src)) {
-		void *kaddr = kmap_atomic(dst);
-		void __user *uaddr = (void __user *)(va & PAGE_MASK);
+		void *kaddr;
+		pte_t entry;
+		void __user *uaddr = (void __user *)(addr & PAGE_MASK);
 
+		/* On architectures with software "accessed" bits, we would
+		 * take a double page fault, so mark it accessed here.
+		 */
+		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
+			vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr,
+						       &vmf->ptl);
+			if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
+				entry = pte_mkyoung(vmf->orig_pte);
+				if (ptep_set_access_flags(vma, addr,
+							  vmf->pte, entry, 0))
+					update_mmu_cache(vma, addr, vmf->pte);
+			} else {
+				/* Other thread has already handled the fault
+				 * and we don't need to do anything. If it's
+				 * not the case, the fault will be triggered
+				 * again on the same address.
+				 */
+				pte_unmap_unlock(vmf->pte, vmf->ptl);
+				return false;
+			}
+			pte_unmap_unlock(vmf->pte, vmf->ptl);
+		}
+
+		kaddr = kmap_atomic(dst);
 		/*
 		 * This really shouldn't fail, because the page is there
 		 * in the page tables. But it might just be unreadable,
 		 * in which case we just give up and fill the result with
 		 * zeroes.
 		 */
-		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
+		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
+			/* Give a warn in case there can be some obscure
+			 * use-case
+			 */
+			WARN_ON_ONCE(1);
 			clear_page(kaddr);
+		}
 		kunmap_atomic(kaddr);
 		flush_dcache_page(dst);
 	} else
-		copy_user_highpage(dst, src, va, vma);
+		copy_user_highpage(dst, src, addr, vma);
+
+	return true;
 }
 
 static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
@@ -2318,7 +2362,18 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 				vmf->address);
 		if (!new_page)
 			goto oom;
-		cow_user_page(new_page, old_page, vmf->address, vma);
+
+		if (!cow_user_page(new_page, old_page, vmf)) {
+			/* COW failed, if the fault was solved by other,
+			 * it's fine. If not, userspace would re-fault on
+			 * the same address and we will handle the fault
+			 * from the second attempt.
+			 */
+			put_page(new_page);
+			if (old_page)
+				put_page(old_page);
+			return 0;
+		}
 	}
 
 	if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false))
-- 
2.17.1



* Re: [PATCH v8 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
From: Matthew Wilcox @ 2019-09-21 15:31 UTC
  To: Jia He
  Cc: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Kirill A. Shutemov, linux-arm-kernel, linux-kernel,
	linux-mm, Suzuki Poulose, Punit Agrawal, Anshuman Khandual,
	Alex Van Brunt, Robin Murphy, Thomas Gleixner, Andrew Morton,
	Jérôme Glisse, Ralph Campbell, hejianet, Kaly Xin, nd

On Sat, Sep 21, 2019 at 09:50:54PM +0800, Jia He wrote:
> When we tested pmdk unit test [1] vmmalloc_fork TEST1 in arm64 guest, there
> will be a double page fault in __copy_from_user_inatomic of cow_user_page.
[...]
> Reported-by: Yibo Cai <Yibo.Cai@arm.com>
> Signed-off-by: Jia He <justin.he@arm.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>


* Re: [PATCH v8 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
From: Kirill A. Shutemov @ 2019-09-23  8:28 UTC
  To: Jia He
  Cc: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose,
	Punit Agrawal, Anshuman Khandual, Alex Van Brunt, Robin Murphy,
	Thomas Gleixner, Andrew Morton, Jérôme Glisse,
	Ralph Campbell, hejianet, Kaly Xin, nd

On Sat, Sep 21, 2019 at 09:50:54PM +0800, Jia He wrote:
> When we tested pmdk unit test [1] vmmalloc_fork TEST1 in arm64 guest, there
> will be a double page fault in __copy_from_user_inatomic of cow_user_page.
[...]
> Reported-by: Yibo Cai <Yibo.Cai@arm.com>
> Signed-off-by: Jia He <justin.he@arm.com>

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

-- 
 Kirill A. Shutemov


* Re: [PATCH v8 1/3] arm64: cpufeature: introduce helper cpu_has_hw_af()
From: Catalin Marinas @ 2019-09-23 16:07 UTC
  To: Jia He
  Cc: Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Punit Agrawal,
	Anshuman Khandual, Alex Van Brunt, Robin Murphy, Thomas Gleixner,
	Andrew Morton, Jérôme Glisse, Ralph Campbell, hejianet,
	Kaly Xin, nd

On Sat, Sep 21, 2019 at 09:50:52PM +0800, Jia He wrote:
> We unconditionally set the HW_AFDBM capability and only enable it on
> CPUs which really have the feature. But sometimes we need to know
> whether this cpu has the capability of HW AF. So decouple AF from
> DBM by new helper cpu_has_hw_af().
> 
> Reported-by: kbuild test robot <lkp@intel.com>
> Suggested-by: Suzuki Poulose <Suzuki.Poulose@arm.com>
> Signed-off-by: Jia He <justin.he@arm.com>
> ---
>  arch/arm64/include/asm/cpufeature.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index c96ffa4722d3..46caf934ba4e 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -667,6 +667,16 @@ static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
>  	default: return CONFIG_ARM64_PA_BITS;
>  	}
>  }
> +
> +/* Decouple AF from AFDBM. */

We could do with a better comment here, or just remove it altogether. The
aim of the patch was to decouple the AF check from AF+DBM, but the comment
here should describe what the function does. Maybe something like: "Check
whether hardware update of the Access flag is supported".

> +static inline bool cpu_has_hw_af(void)
> +{
> +	if (IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
> +		return read_cpuid(ID_AA64MMFR1_EL1) & 0xf;
> +
> +	return false;
> +}

Other than the comment above,

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH v8 2/3] arm64: mm: implement arch_faults_on_old_pte() on arm64
From: Catalin Marinas @ 2019-09-23 16:18 UTC
  To: Jia He
  Cc: Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Punit Agrawal,
	Anshuman Khandual, Alex Van Brunt, Robin Murphy, Thomas Gleixner,
	Andrew Morton, Jérôme Glisse, Ralph Campbell, hejianet,
	Kaly Xin, nd

On Sat, Sep 21, 2019 at 09:50:53PM +0800, Jia He wrote:
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index e09760ece844..4a9939615e41 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -868,6 +868,18 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
>  #define phys_to_ttbr(addr)	(addr)
>  #endif
>  
> +/*
> + * On arm64 without hardware Access Flag, copying fromuser will fail because
                                                     ^^^^^^^^
						     from user

> + * the pte is old and cannot be marked young. So we always end up with zeroed
> + * page after fork() + CoW for pfn mappings. we don't always have a
                                                ^^
						We

> + * hardware-managed access flag on arm64.
> + */
> +static inline bool arch_faults_on_old_pte(void)
> +{
> +	return !cpu_has_hw_af();

I saw an early incarnation of your patch having a
WARN_ON(preemptible()). I think we need this back just in case this
function will be used elsewhere in the future.
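
Something like (untested):

	static inline bool arch_faults_on_old_pte(void)
	{
		WARN_ON(preemptible());

		return !cpu_has_hw_af();
	}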

> +}
> +#define arch_faults_on_old_pte arch_faults_on_old_pte

Otherwise,

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH v8 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
From: Catalin Marinas @ 2019-09-23 17:04 UTC
  To: Jia He
  Cc: Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Punit Agrawal,
	Anshuman Khandual, Alex Van Brunt, Robin Murphy, Thomas Gleixner,
	Andrew Morton, Jérôme Glisse, Ralph Campbell, hejianet,
	Kaly Xin, nd

On Sat, Sep 21, 2019 at 09:50:54PM +0800, Jia He wrote:
> @@ -2151,21 +2163,53 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
>  	 * fails, we just zero-fill it. Live with it.
>  	 */
>  	if (unlikely(!src)) {
> -		void *kaddr = kmap_atomic(dst);
> -		void __user *uaddr = (void __user *)(va & PAGE_MASK);
> +		void *kaddr;
> +		pte_t entry;
> +		void __user *uaddr = (void __user *)(addr & PAGE_MASK);
>  
> +		/* On architectures with software "accessed" bits, we would
> +		 * take a double page fault, so mark it accessed here.
> +		 */

Nitpick: please follow the kernel coding style for multi-line comments
(above and for the rest of the patch):

		/*
		 * Your multi-line comment.
		 */

> +		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> +			vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr,
> +						       &vmf->ptl);
> +			if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> +				entry = pte_mkyoung(vmf->orig_pte);
> +				if (ptep_set_access_flags(vma, addr,
> +							  vmf->pte, entry, 0))
> +					update_mmu_cache(vma, addr, vmf->pte);
> +			} else {
> +				/* Other thread has already handled the fault
> +				 * and we don't need to do anything. If it's
> +				 * not the case, the fault will be triggered
> +				 * again on the same address.
> +				 */
> +				pte_unmap_unlock(vmf->pte, vmf->ptl);
> +				return false;
> +			}
> +			pte_unmap_unlock(vmf->pte, vmf->ptl);
> +		}

Another nit, you could rewrite this block slightly to avoid too much
indentation. Something like (untested):

		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
			vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr,
						       &vmf->ptl);
			if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
				/*
				 * Other thread has already handled the fault
				 * and we don't need to do anything. If it's
				 * not the case, the fault will be triggered
				 * again on the same address.
				 */
				pte_unmap_unlock(vmf->pte, vmf->ptl);
				return false;
			}
			entry = pte_mkyoung(vmf->orig_pte);
			if (ptep_set_access_flags(vma, addr,
						  vmf->pte, entry, 0))
				update_mmu_cache(vma, addr, vmf->pte);
			pte_unmap_unlock(vmf->pte, vmf->ptl);
		}

> +
> +		kaddr = kmap_atomic(dst);

Since you moved the kmap_atomic() here, could the above
arch_faults_on_old_pte() run in a preemptible context? I suggested to
add a WARN_ON in patch 2 to be sure.

>  		/*
>  		 * This really shouldn't fail, because the page is there
>  		 * in the page tables. But it might just be unreadable,
>  		 * in which case we just give up and fill the result with
>  		 * zeroes.
>  		 */
> -		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
> +		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
> +			/* Give a warn in case there can be some obscure
> +			 * use-case
> +			 */
> +			WARN_ON_ONCE(1);

That's more of a question for the mm guys: at this point we do the
copying with the ptl released; is there anything else that could have
made the pte old in the meantime? I think unuse_pte() is only called on
anonymous vmas, so it shouldn't be the case here.

>  			clear_page(kaddr);
> +		}
>  		kunmap_atomic(kaddr);
>  		flush_dcache_page(dst);
>  	} else
> -		copy_user_highpage(dst, src, va, vma);
> +		copy_user_highpage(dst, src, addr, vma);
> +
> +	return true;
>  }

-- 
Catalin


* RE: [PATCH v8 1/3] arm64: cpufeature: introduce helper cpu_has_hw_af()
From: Justin He (Arm Technology China) @ 2019-09-24  1:50 UTC
  To: Catalin Marinas
  Cc: Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Punit Agrawal,
	Anshuman Khandual, Alex Van Brunt, Robin Murphy, Thomas Gleixner,
	Andrew Morton, Jérôme Glisse, Ralph Campbell, hejianet,
	Kaly Xin (Arm Technology China),
	nd

Hi Catalin

> -----Original Message-----
> From: Catalin Marinas <catalin.marinas@arm.com>
> Sent: 24 September 2019 00:07
> Subject: Re: [PATCH v8 1/3] arm64: cpufeature: introduce helper
> cpu_has_hw_af()
> 
> On Sat, Sep 21, 2019 at 09:50:52PM +0800, Jia He wrote:
> > We unconditionally set the HW_AFDBM capability and only enable it on
> > CPUs which really have the feature. But sometimes we need to know
> > whether this cpu has the capability of HW AF. So decouple AF from
> > DBM by new helper cpu_has_hw_af().
> >
> > Reported-by: kbuild test robot <lkp@intel.com>
> > Suggested-by: Suzuki Poulose <Suzuki.Poulose@arm.com>
> > Signed-off-by: Jia He <justin.he@arm.com>
> > ---
> >  arch/arm64/include/asm/cpufeature.h | 10 ++++++++++
> >  1 file changed, 10 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/cpufeature.h
> b/arch/arm64/include/asm/cpufeature.h
> > index c96ffa4722d3..46caf934ba4e 100644
> > --- a/arch/arm64/include/asm/cpufeature.h
> > +++ b/arch/arm64/include/asm/cpufeature.h
> > @@ -667,6 +667,16 @@ static inline u32
> id_aa64mmfr0_parange_to_phys_shift(int parange)
> >  	default: return CONFIG_ARM64_PA_BITS;
> >  	}
> >  }
> > +
> > +/* Decouple AF from AFDBM. */
> 
> We could do with a better comment here or just remove it altogether. The
> aim of the patch was to decouple AF check from the AF+DBM but the
> comment here should describe what the function does. Maybe something
> like: "Check whether hardware update of the Access flag is supported".
> 

Okay, I will update it

--
Cheers,
Justin (Jia He)


> > +static inline bool cpu_has_hw_af(void)
> > +{
> > +	if (IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
> > +		return read_cpuid(ID_AA64MMFR1_EL1) & 0xf;
> > +
> > +	return false;
> > +}
> 
> Other than the comment above,
> 
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* RE: [PATCH v8 2/3] arm64: mm: implement arch_faults_on_old_pte() on arm64
From: Justin He (Arm Technology China) @ 2019-09-24  2:17 UTC
  To: Catalin Marinas
  Cc: Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Punit Agrawal,
	Anshuman Khandual, Alex Van Brunt, Robin Murphy, Thomas Gleixner,
	Andrew Morton, Jérôme Glisse, Ralph Campbell, hejianet,
	Kaly Xin (Arm Technology China),
	nd



> -----Original Message-----
> From: Catalin Marinas <catalin.marinas@arm.com>
> Sent: 24 September 2019 00:18
> Subject: Re: [PATCH v8 2/3] arm64: mm: implement
> arch_faults_on_old_pte() on arm64
> 
> On Sat, Sep 21, 2019 at 09:50:53PM +0800, Jia He wrote:
> > diff --git a/arch/arm64/include/asm/pgtable.h
> b/arch/arm64/include/asm/pgtable.h
> > index e09760ece844..4a9939615e41 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -868,6 +868,18 @@ static inline void update_mmu_cache(struct
> vm_area_struct *vma,
> >  #define phys_to_ttbr(addr)	(addr)
> >  #endif
> >
> > +/*
> > + * On arm64 without hardware Access Flag, copying fromuser will fail
> because
>                                                      ^^^^^^^^
> 						     from user
> 

Ok
> > + * the pte is old and cannot be marked young. So we always end up with
> zeroed
> > + * page after fork() + CoW for pfn mappings. we don't always have a
>                                                 ^^
> 						We
> 

Ok
> > + * hardware-managed access flag on arm64.
> > + */
> > +static inline bool arch_faults_on_old_pte(void)
> > +{
> > +	return !cpu_has_hw_af();
> 
> I saw an early incarnation of your patch having a
> WARN_ON(preemptible()). I think we need this back just in case this
> function will be used elsewhere in the future.

Okay

--
Cheers,
Justin (Jia He)


> 
> > +}
> > +#define arch_faults_on_old_pte arch_faults_on_old_pte
> 
> Otherwise,
> 
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* RE: [PATCH v8 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
From: Justin He (Arm Technology China) @ 2019-09-24  6:43 UTC
  To: Catalin Marinas
  Cc: Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Punit Agrawal,
	Anshuman Khandual, Alex Van Brunt, Robin Murphy, Thomas Gleixner,
	Andrew Morton, Jérôme Glisse, Ralph Campbell, hejianet,
	Kaly Xin (Arm Technology China),
	nd

Hi Catalin
Please see an important comment inline, thanks

> -----Original Message-----
> From: Catalin Marinas <catalin.marinas@arm.com>
> Sent: 24 September 2019 01:05
> Subject: Re: [PATCH v8 3/3] mm: fix double page fault on arm64 if PTE_AF
> is cleared
> 
> On Sat, Sep 21, 2019 at 09:50:54PM +0800, Jia He wrote:
> > @@ -2151,21 +2163,53 @@ static inline void cow_user_page(struct page
> *dst, struct page *src, unsigned lo
> >  	 * fails, we just zero-fill it. Live with it.
> >  	 */
> >  	if (unlikely(!src)) {
> > -		void *kaddr = kmap_atomic(dst);
> > -		void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > +		void *kaddr;
> > +		pte_t entry;
> > +		void __user *uaddr = (void __user *)(addr & PAGE_MASK);
> >
> > +		/* On architectures with software "accessed" bits, we would
> > +		 * take a double page fault, so mark it accessed here.
> > +		 */
> 
> Nitpick: please follow the kernel coding style for multi-line comments
> (above and the for the rest of the patch):
> 
> 		/*
> 		 * Your multi-line comment.
> 		 */
> 
> > +		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte))
> {
> > +			vmf->pte = pte_offset_map_lock(mm, vmf->pmd,
> addr,
> > +						       &vmf->ptl);
> > +			if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> > +				entry = pte_mkyoung(vmf->orig_pte);
> > +				if (ptep_set_access_flags(vma, addr,
> > +							  vmf->pte, entry, 0))
> > +					update_mmu_cache(vma, addr, vmf-
> >pte);
> > +			} else {
> > +				/* Other thread has already handled the
> fault
> > +				 * and we don't need to do anything. If it's
> > +				 * not the case, the fault will be triggered
> > +				 * again on the same address.
> > +				 */
> > +				pte_unmap_unlock(vmf->pte, vmf->ptl);
> > +				return false;
> > +			}
> > +			pte_unmap_unlock(vmf->pte, vmf->ptl);
> > +		}
> 
> Another nit, you could rewrite this block slightly to avoid too much
> indentation. Something like (untested):
> 
> 		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte))
> {
> 			vmf->pte = pte_offset_map_lock(mm, vmf->pmd,
> addr,
> 						       &vmf->ptl);
> 			if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
> 				/*
> 				 * Other thread has already handled the fault
> 				 * and we don't need to do anything. If it's
> 				 * not the case, the fault will be triggered
> 				 * again on the same address.
> 				 */
> 				pte_unmap_unlock(vmf->pte, vmf->ptl);
> 				return false;
> 			}
> 			entry = pte_mkyoung(vmf->orig_pte);
> 			if (ptep_set_access_flags(vma, addr,
> 						  vmf->pte, entry, 0))
> 				update_mmu_cache(vma, addr, vmf->pte);
> 			pte_unmap_unlock(vmf->pte, vmf->ptl);
> 		}
> 
> > +
> > +		kaddr = kmap_atomic(dst);
> 
> Since you moved the kmap_atomic() here, could the above
> arch_faults_on_old_pte() run in a preemptible context? I suggested to
> add a WARN_ON in patch 2 to be sure.

Should I move kmap_atomic() back to its original line, so that we can be
sure arch_faults_on_old_pte() runs with preemption disabled? Otherwise,
arch_faults_on_old_pte() may trigger plenty of warnings if I add a WARN_ON
there. I tested this with PREEMPT=y on a ThunderX2 qemu guest.


--
Cheers,
Justin (Jia He)


> 
> >  		/*
> >  		 * This really shouldn't fail, because the page is there
> >  		 * in the page tables. But it might just be unreadable,
> >  		 * in which case we just give up and fill the result with
> >  		 * zeroes.
> >  		 */
> > -		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
> > +		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
> > +			/* Give a warn in case there can be some obscure
> > +			 * use-case
> > +			 */
> > +			WARN_ON_ONCE(1);
> 
> That's more of a question for the mm guys: at this point we do the
> copying with the ptl released; is there anything else that could have
> made the pte old in the meantime? I think unuse_pte() is only called on
> anonymous vmas, so it shouldn't be the case here.
> 
> >  			clear_page(kaddr);
> > +		}
> >  		kunmap_atomic(kaddr);
> >  		flush_dcache_page(dst);
> >  	} else
> > -		copy_user_highpage(dst, src, va, vma);
> > +		copy_user_highpage(dst, src, addr, vma);
> > +
> > +	return true;
> >  }
> 
> --
> Catalin


* Re: [PATCH v8 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
From: Catalin Marinas @ 2019-09-24 10:33 UTC
  To: Justin He (Arm Technology China)
  Cc: Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Punit Agrawal,
	Anshuman Khandual, Alex Van Brunt, Robin Murphy, Thomas Gleixner,
	Andrew Morton, Jérôme Glisse, Ralph Campbell, hejianet,
	Kaly Xin (Arm Technology China),
	nd

On Tue, Sep 24, 2019 at 06:43:06AM +0000, Justin He (Arm Technology China) wrote:
> Catalin Marinas wrote:
> > On Sat, Sep 21, 2019 at 09:50:54PM +0800, Jia He wrote:
> > > @@ -2151,21 +2163,53 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> > >  	 * fails, we just zero-fill it. Live with it.
> > >  	 */
> > >  	if (unlikely(!src)) {
> > > -		void *kaddr = kmap_atomic(dst);
> > > -		void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > > +		void *kaddr;
> > > +		pte_t entry;
> > > +		void __user *uaddr = (void __user *)(addr & PAGE_MASK);
> > >
> > > +		/* On architectures with software "accessed" bits, we would
> > > +		 * take a double page fault, so mark it accessed here.
> > > +		 */
[...]
> > > +		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> > > +			vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr,
> > > +						       &vmf->ptl);
> > > +			if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> > > +				entry = pte_mkyoung(vmf->orig_pte);
> > > +				if (ptep_set_access_flags(vma, addr,
> > > +							  vmf->pte, entry, 0))
> > > +					update_mmu_cache(vma, addr, vmf->pte);
> > > +			} else {
> > > +				/* Other thread has already handled the fault
> > > +				 * and we don't need to do anything. If it's
> > > +				 * not the case, the fault will be triggered
> > > +				 * again on the same address.
> > > +				 */
> > > +				pte_unmap_unlock(vmf->pte, vmf->ptl);
> > > +				return false;
> > > +			}
> > > +			pte_unmap_unlock(vmf->pte, vmf->ptl);
> > > +		}
[...]
> > > +
> > > +		kaddr = kmap_atomic(dst);
> > 
> > Since you moved the kmap_atomic() here, could the above
> > arch_faults_on_old_pte() run in a preemptible context? I suggested to
> > add a WARN_ON in patch 2 to be sure.
> 
> Should I move kmap_atomic back to the original line? Thus, we can make sure
> that arch_faults_on_old_pte() is in the context of preempt_disabled?
> Otherwise, arch_faults_on_old_pte() may cause plenty of warning if I add
> a WARN_ON in arch_faults_on_old_pte.  I tested it when I enable the PREEMPT=y
> on a ThunderX2 qemu guest.

So we have two options here:

1. Change arch_faults_on_old_pte() scope to the whole system rather than
   just the current CPU. You'd have to wire up a new arm64 capability
   for the access flag but this way we don't care whether it's
   preemptible or not.

2. Keep the arch_faults_on_old_pte() per-CPU but make sure we are not
   preempted here. The kmap_atomic() move would do but you'd have to
   kunmap_atomic() before the return.
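
For option 1, the helper would look roughly like this (untested, with
ARM64_HW_AF as a made-up capability name):

	static inline bool cpu_has_hw_af(void)
	{
		return IS_ENABLED(CONFIG_ARM64_HW_AFDBM) &&
		       cpus_have_const_cap(ARM64_HW_AF);
	}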

I think the answer to my question below also has some implication on
which option to pick:

> > >  		/*
> > >  		 * This really shouldn't fail, because the page is there
> > >  		 * in the page tables. But it might just be unreadable,
> > >  		 * in which case we just give up and fill the result with
> > >  		 * zeroes.
> > >  		 */
> > > -		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
> > > +		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
> > > +			/* Give a warn in case there can be some obscure
> > > +			 * use-case
> > > +			 */
> > > +			WARN_ON_ONCE(1);
> > 
> > That's more of a question for the mm guys: at this point we do the
> > copying with the ptl released; is there anything else that could have
> > made the pte old in the meantime? I think unuse_pte() is only called on
> > anonymous vmas, so it shouldn't be the case here.

If we need to hold the ptl here, you could as well have an enclosing
kmap/kunmap_atomic (option 2) with some goto instead of "return false".

-- 
Catalin


* Re: [PATCH v8 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
From: Kirill A. Shutemov @ 2019-09-24 11:59 UTC
  To: Catalin Marinas
  Cc: Justin He (Arm Technology China),
	Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Punit Agrawal,
	Anshuman Khandual, Alex Van Brunt, Robin Murphy, Thomas Gleixner,
	Andrew Morton, Jérôme Glisse, Ralph Campbell, hejianet,
	Kaly Xin (Arm Technology China),
	nd

On Tue, Sep 24, 2019 at 11:33:25AM +0100, Catalin Marinas wrote:
[...]
> > > >  		/*
> > > >  		 * This really shouldn't fail, because the page is there
> > > >  		 * in the page tables. But it might just be unreadable,
> > > >  		 * in which case we just give up and fill the result with
> > > >  		 * zeroes.
> > > >  		 */
> > > > -		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
> > > > +		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
> > > > +			/* Give a warn in case there can be some obscure
> > > > +			 * use-case
> > > > +			 */
> > > > +			WARN_ON_ONCE(1);
> > > 
> > > That's more of a question for the mm guys: at this point we do the
> > > copying with the ptl released; is there anything else that could have
> > > made the pte old in the meantime? I think unuse_pte() is only called on
> > > anonymous vmas, so it shouldn't be the case here.
> 
> If we need to hold the ptl here, you could as well have an enclosing
> kmap/kunmap_atomic (option 2) with some goto instead of "return false".

Yeah, looks like we need to hold the ptl for longer. Otherwise, I see
nothing that would prevent the young bit from being cleared under us.

-- 
 Kirill A. Shutemov


* Re: [PATCH v8 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
From: Jia He @ 2019-09-24 15:29 UTC
  To: Catalin Marinas, Justin He (Arm Technology China)
  Cc: Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Punit Agrawal,
	Anshuman Khandual, Alex Van Brunt, Robin Murphy, Thomas Gleixner,
	Andrew Morton, Jérôme Glisse, Ralph Campbell,
	Kaly Xin (Arm Technology China),
	nd

Hi Catalin

On 2019/9/24 18:33, Catalin Marinas wrote:
[...]
>>>> +
>>>> +		kaddr = kmap_atomic(dst);
>>> Since you moved the kmap_atomic() here, could the above
>>> arch_faults_on_old_pte() run in a preemptible context? I suggested to
>>> add a WARN_ON in patch 2 to be sure.
>> Should I move kmap_atomic back to the original line? Thus, we can make sure
>> that arch_faults_on_old_pte() is in the context of preempt_disabled?
>> Otherwise, arch_faults_on_old_pte() may cause plenty of warning if I add
>> a WARN_ON in arch_faults_on_old_pte.  I tested it when I enable the PREEMPT=y
>> on a ThunderX2 qemu guest.
> So we have two options here:
>
> 1. Change arch_faults_on_old_pte() scope to the whole system rather than
>     just the current CPU. You'd have to wire up a new arm64 capability
>     for the access flag but this way we don't care whether it's
>     preemptible or not.
>
> 2. Keep the arch_faults_on_old_pte() per-CPU but make sure we are not
>     preempted here. The kmap_atomic() move would do but you'd have to
>     kunmap_atomic() before the return.
>
> I think the answer to my question below also has some implication on
> which option to pick:
>
>>>>   		/*
>>>>   		 * This really shouldn't fail, because the page is there
>>>>   		 * in the page tables. But it might just be unreadable,
>>>>   		 * in which case we just give up and fill the result with
>>>>   		 * zeroes.
>>>>   		 */
>>>> -		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
>>>> +		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
>>>> +			/* Give a warn in case there can be some obscure
>>>> +			 * use-case
>>>> +			 */
>>>> +			WARN_ON_ONCE(1);
>>> That's more of a question for the mm guys: at this point we do the
>>> copying with the ptl released; is there anything else that could have
>>> made the pte old in the meantime? I think unuse_pte() is only called on
>>> anonymous vmas, so it shouldn't be the case here.
> If we need to hold the ptl here, you could as well have an enclosing
> kmap/kunmap_atomic (option 2) with some goto instead of "return false".

I am not 100% sure that I understand your suggestion well, so I drafted the
patch here:

Changes: optimize the indentation
         hold the ptl longer


-static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
+static inline bool cow_user_page(struct page *dst, struct page *src,
+                 struct vm_fault *vmf)
  {
+    struct vm_area_struct *vma = vmf->vma;
+    struct mm_struct *mm = vma->vm_mm;
+    unsigned long addr = vmf->address;
+    bool ret;
+    pte_t entry;
+    void *kaddr;
+    void __user *uaddr;
+
      debug_dma_assert_idle(src);

+    if (likely(src)) {
+        copy_user_highpage(dst, src, addr, vma);
+        return true;
+    }
+
      /*
       * If the source page was a PFN mapping, we don't have
       * a "struct page" for it. We do a best-effort copy by
       * just copying from the original user address. If that
       * fails, we just zero-fill it. Live with it.
       */
-    if (unlikely(!src)) {
-        void *kaddr = kmap_atomic(dst);
-        void __user *uaddr = (void __user *)(va & PAGE_MASK);
+    kaddr = kmap_atomic(dst);
+    uaddr = (void __user *)(addr & PAGE_MASK);
+
+    /*
+     * On architectures with software "accessed" bits, we would
+     * take a double page fault, so mark it accessed here.
+     */
+    vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
+    if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
+        if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
+            /*
+             * Other thread has already handled the fault
+             * and we don't need to do anything. If it's
+             * not the case, the fault will be triggered
+             * again on the same address.
+             */
+            ret = false;
+            goto pte_unlock;
+        }
+
+        entry = pte_mkyoung(vmf->orig_pte);
+        if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
+            update_mmu_cache(vma, addr, vmf->pte);
+    }

+    /*
+     * This really shouldn't fail, because the page is there
+     * in the page tables. But it might just be unreadable,
+     * in which case we just give up and fill the result with
+     * zeroes.
+     */
+    if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
          /*
-         * This really shouldn't fail, because the page is there
-         * in the page tables. But it might just be unreadable,
-         * in which case we just give up and fill the result with
-         * zeroes.
+         * Give a warn in case there can be some obscure
+         * use-case
           */
-        if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
-            clear_page(kaddr);
-        kunmap_atomic(kaddr);
-        flush_dcache_page(dst);
-    } else
-        copy_user_highpage(dst, src, va, vma);
+        WARN_ON_ONCE(1);
+        clear_page(kaddr);
+    }
+
+    ret = true;
+
+pte_unlock:
+    pte_unmap_unlock(vmf->pte, vmf->ptl);
+    kunmap_atomic(kaddr);
+    flush_dcache_page(dst);
+
+    return ret;
  }


---
Cheers,
Justin (Jia He)



* Re: [PATCH v8 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
From: Catalin Marinas @ 2019-09-24 16:35 UTC
  To: Jia He
  Cc: Justin He (Arm Technology China),
	Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Punit Agrawal,
	Anshuman Khandual, Alex Van Brunt, Robin Murphy, Thomas Gleixner,
	Andrew Morton, Jérôme Glisse, Ralph Campbell,
	Kaly Xin (Arm Technology China),
	nd

On Tue, Sep 24, 2019 at 11:29:07PM +0800, Jia He wrote:
[...]
> I am not 100% sure that I understand your suggestion well, so I
> drafted the patch

Well, whichever way you think makes the code cleaner, really.

The copy/paste didn't work well, tabs disappeared (or rather the Exchange
server corrupted the outgoing email), but I'll try to comment below:

> -static inline void cow_user_page(struct page *dst, struct page *src,
>   unsigned long va, struct vm_area_struct *vma)
> +static inline bool cow_user_page(struct page *dst, struct page *src,
> +                 struct vm_fault *vmf)
>  {
> +    struct vm_area_struct *vma = vmf->vma;
> +    struct mm_struct *mm = vma->vm_mm;
> +    unsigned long addr = vmf->address;
> +    bool ret;
> +    pte_t entry;
> +    void *kaddr;
> +    void __user *uaddr;
> +
>      debug_dma_assert_idle(src);
> 
> +    if (likely(src)) {
> +        copy_user_highpage(dst, src, addr, vma);
> +        return true;
> +    }
> +
>      /*
>       * If the source page was a PFN mapping, we don't have
>       * a "struct page" for it. We do a best-effort copy by
>       * just copying from the original user address. If that
>       * fails, we just zero-fill it. Live with it.
>       */
> -    if (unlikely(!src)) {
> -        void *kaddr = kmap_atomic(dst);
> -        void __user *uaddr = (void __user *)(va & PAGE_MASK);
> +    kaddr = kmap_atomic(dst);
> +    uaddr = (void __user *)(addr & PAGE_MASK);
> +
> +    /*
> +     * On architectures with software "accessed" bits, we would
> +     * take a double page fault, so mark it accessed here.
> +     */
> +    vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
> +    if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {

I'd move the pte_offset_map_lock() inside the 'if' block as we don't
want to affect architectures that handle old ptes automatically.

> +        if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> +            /*
> +             * Other thread has already handled the fault
> +             * and we don't need to do anything. If it's
> +             * not the case, the fault will be triggered
> +             * again on the same address.
> +             */
> +            ret = false;
> +            goto pte_unlock;
> +        }
> +
> +        entry = pte_mkyoung(vmf->orig_pte);
> +        if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
> +            update_mmu_cache(vma, addr, vmf->pte);
> +    }
> 
> +    /*
> +     * This really shouldn't fail, because the page is there
> +     * in the page tables. But it might just be unreadable,
> +     * in which case we just give up and fill the result with
> +     * zeroes.
> +     */
> +    if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
>          /*
> -         * This really shouldn't fail, because the page is there
> -         * in the page tables. But it might just be unreadable,
> -         * in which case we just give up and fill the result with
> -         * zeroes.
> +         * Give a warn in case there can be some obscure
> +         * use-case
>           */
> -        if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
> -            clear_page(kaddr);
> -        kunmap_atomic(kaddr);
> -        flush_dcache_page(dst);
> -    } else
> -        copy_user_highpage(dst, src, va, vma);
> +        WARN_ON_ONCE(1);
> +        clear_page(kaddr);
> +    }
> +
> +    ret = true;
> +
> +pte_unlock:
> +    pte_unmap_unlock(vmf->pte, vmf->ptl);

Since the locking would be moved into the 'if' block above, we need
another check here before unlocking:

	if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte))
		pte_unmap_unlock(vmf->pte, vmf->ptl);

You could probably replace the two calls to arch_faults_on_old_pte()
with a single bool variable initialisation, something like:

	force_mkyoung = arch_faults_on_old_pte() &&
		!pte_young(vmf->orig_pte)

and only check for "if (force_mkyoung)" in both cases.
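
Putting the two together, the resulting structure would be roughly
(untested):

	bool force_mkyoung = arch_faults_on_old_pte() &&
			     !pte_young(vmf->orig_pte);

	if (force_mkyoung) {
		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
		if (!pte_same(*vmf->pte, vmf->orig_pte)) {
			ret = false;
			goto pte_unlock;
		}

		entry = pte_mkyoung(vmf->orig_pte);
		if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
			update_mmu_cache(vma, addr, vmf->pte);
	}

	/* ... __copy_from_user_inatomic() and the zero-fill fallback ... */

pte_unlock:
	if (force_mkyoung)
		pte_unmap_unlock(vmf->pte, vmf->ptl);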

> +    kunmap_atomic(kaddr);
> +    flush_dcache_page(dst);
> +
> +    return ret;
>  }

-- 
Catalin
