linux-mm.kvack.org archive mirror
* [PATCH v11 0/4] fix double page fault in cow_user_page for pfn mapping
@ 2019-10-09  8:42 Jia He
  2019-10-09  8:42 ` [PATCH v11 1/4] arm64: cpufeature: introduce helper cpu_has_hw_af() Jia He
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Jia He @ 2019-10-09  8:42 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose,
	Borislav Petkov, H. Peter Anvin, x86
  Cc: Thomas Gleixner, Andrew Morton, hejianet, Kaly Xin, nd, Jia He

When we tested the pmdk unit test vmmalloc_fork TEST1 in an arm64 guest,
we hit a double page fault in __copy_from_user_inatomic of cow_user_page.

As told by Catalin: "On arm64 without hardware Access Flag, copying from
user will fail because the pte is old and cannot be marked young. So we
always end up with a zeroed page after fork() + CoW for pfn mappings. We
don't always have a hardware-managed access flag on arm64."
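
For background, the failing pattern has roughly the following shape (a
minimal hypothetical sketch, not the pmdk test itself; the file path is
an assumption, error handling is omitted, and a pfn-backed file such as
one on a DAX mount is required):

	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		/* MAP_PRIVATE mapping of pfn-backed (e.g. DAX) memory */
		int fd = open("/mnt/pmem/file", O_RDWR);
		char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE, fd, 0);

		p[0] = 1;		/* populate the pte */
		if (fork() == 0)
			p[0] = 2;	/* CoW in the child: cow_user_page()
					 * copies from the user address; with
					 * an old pte and no hardware AF, the
					 * copy itself faults */
		return 0;
	}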

Changes
v11:
    refine cpu_has_hw_af in PATCH 01 (Will Deacon, Suzuki)
    change the default return value to true in arch_faults_on_old_pte
    add PATCH 03 for overriding arch_faults_on_old_pte(false) on x86
v10:
    add r-b from Catalin and a-b from Kirill in PATCH 03
    remove Reported-by in PATCH 01
v9: refactor cow_user_page for indentation optimization (Catalin)
    hold the ptl longer (Catalin)
v8: change cow_user_page's return type (Matthew)
v7: s/pte_spinlock/pte_offset_map_lock (Kirill)
v6: fix error case of returning with spinlock taken (Catalin)
    move kmap_atomic to avoid handling kunmap_atomic
v5: handle the case correctly when !pte_same
    fix kbuild test failure
v4: introduce cpu_has_hw_af (Suzuki)
    bail out if !pte_same (Kirill)
v3: add vmf->ptl lock/unlock (Kirill A. Shutemov)
    add arch_faults_on_old_pte (Matthew, Catalin)
v2: remove FAULT_FLAG_WRITE when setting pte access flag (Catalin)

Jia He (4):
  arm64: cpufeature: introduce helper cpu_has_hw_af()
  arm64: mm: implement arch_faults_on_old_pte() on arm64
  x86/mm: implement arch_faults_on_old_pte() stub on x86
  mm: fix double page fault on arm64 if PTE_AF is cleared

 arch/arm64/include/asm/cpufeature.h |  14 ++++
 arch/arm64/include/asm/pgtable.h    |  14 ++++
 arch/x86/include/asm/pgtable.h      |   6 ++
 mm/memory.c                         | 104 ++++++++++++++++++++++++----
 4 files changed, 123 insertions(+), 15 deletions(-)

-- 
2.17.1




* [PATCH v11 1/4] arm64: cpufeature: introduce helper cpu_has_hw_af()
  2019-10-09  8:42 [PATCH v11 0/4] fix double page fault in cow_user_page for pfn mapping Jia He
@ 2019-10-09  8:42 ` Jia He
  2019-10-10 16:43   ` Catalin Marinas
  2019-10-09  8:42 ` [PATCH v11 2/4] arm64: mm: implement arch_faults_on_old_pte() on arm64 Jia He
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Jia He @ 2019-10-09  8:42 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose,
	Borislav Petkov, H. Peter Anvin, x86
  Cc: Thomas Gleixner, Andrew Morton, hejianet, Kaly Xin, nd, Jia He

We unconditionally set the HW_AFDBM capability and only enable it on
CPUs which really have the feature. But sometimes we need to know
whether this CPU has hardware AF support. So decouple AF from DBM with
a new helper, cpu_has_hw_af().
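
The intended user is patch 2 of this series, which builds on the helper:

	static inline bool arch_faults_on_old_pte(void)
	{
		WARN_ON(preemptible());

		return !cpu_has_hw_af();
	}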

Signed-off-by: Jia He <justin.he@arm.com>
Suggested-by: Suzuki Poulose <Suzuki.Poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 9cde5d2e768f..1a95396ea5c8 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -659,6 +659,20 @@ static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
 	default: return CONFIG_ARM64_PA_BITS;
 	}
 }
+
+/* Check whether hardware update of the Access flag is supported */
+static inline bool cpu_has_hw_af(void)
+{
+	if (IS_ENABLED(CONFIG_ARM64_HW_AFDBM)) {
+		u64 mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
+
+		return !!cpuid_feature_extract_unsigned_field(mmfr1,
+						ID_AA64MMFR1_HADBS_SHIFT);
+	}
+
+	return false;
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif
-- 
2.17.1




* [PATCH v11 2/4] arm64: mm: implement arch_faults_on_old_pte() on arm64
  2019-10-09  8:42 [PATCH v11 0/4] fix double page fault in cow_user_page for pfn mapping Jia He
  2019-10-09  8:42 ` [PATCH v11 1/4] arm64: cpufeature: introduce helper cpu_has_hw_af() Jia He
@ 2019-10-09  8:42 ` Jia He
  2019-10-09  8:42 ` [PATCH v11 3/4] x86/mm: implement arch_faults_on_old_pte() stub on x86 Jia He
  2019-10-09  8:42 ` [PATCH v11 4/4] mm: fix double page fault on arm64 if PTE_AF is cleared Jia He
  3 siblings, 0 replies; 10+ messages in thread
From: Jia He @ 2019-10-09  8:42 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose,
	Borislav Petkov, H. Peter Anvin, x86
  Cc: Thomas Gleixner, Andrew Morton, hejianet, Kaly Xin, nd, Jia He

On arm64 without hardware Access Flag, copying from user will fail because
the pte is old and cannot be marked young. So we always end up with a
zeroed page after fork() + CoW for pfn mappings. We don't always have a
hardware-managed Access Flag on arm64.

Hence implement arch_faults_on_old_pte() on arm64 to indicate that
accessing an old pte may cause a page fault.
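
Generic mm code can then key off the helper; patch 4 below uses it in
cow_user_page() as:

	force_mkyoung = arch_faults_on_old_pte() && !pte_young(vmf->orig_pte);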

Signed-off-by: Jia He <justin.he@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 7576df00eb50..e96fb82f62de 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -885,6 +885,20 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 #define phys_to_ttbr(addr)	(addr)
 #endif
 
+/*
+ * On arm64 without hardware Access Flag, copying from user will fail because
+ * the pte is old and cannot be marked young. So we always end up with zeroed
+ * page after fork() + CoW for pfn mappings. We don't always have a
+ * hardware-managed access flag on arm64.
+ */
+static inline bool arch_faults_on_old_pte(void)
+{
+	WARN_ON(preemptible());
+
+	return !cpu_has_hw_af();
+}
+#define arch_faults_on_old_pte arch_faults_on_old_pte
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */
-- 
2.17.1




* [PATCH v11 3/4] x86/mm: implement arch_faults_on_old_pte() stub on x86
  2019-10-09  8:42 [PATCH v11 0/4] fix double page fault in cow_user_page for pfn mapping Jia He
  2019-10-09  8:42 ` [PATCH v11 1/4] arm64: cpufeature: introduce helper cpu_has_hw_af() Jia He
  2019-10-09  8:42 ` [PATCH v11 2/4] arm64: mm: implement arch_faults_on_old_pte() on arm64 Jia He
@ 2019-10-09  8:42 ` Jia He
  2019-10-09  8:42 ` [PATCH v11 4/4] mm: fix double page fault on arm64 if PTE_AF is cleared Jia He
  3 siblings, 0 replies; 10+ messages in thread
From: Jia He @ 2019-10-09  8:42 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose,
	Borislav Petkov, H. Peter Anvin, x86
  Cc: Thomas Gleixner, Andrew Morton, hejianet, Kaly Xin, nd, Jia He

arch_faults_on_old_pte() is a helper to indicate that accessing an old
pte might cause a page fault. But on x86, the hardware itself sets the
pte access flag. Hence implement an overriding stub which always returns
false.

Signed-off-by: Jia He <justin.he@arm.com>
Suggested-by: Will Deacon <will@kernel.org>
---
 arch/x86/include/asm/pgtable.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 0bc530c4eb13..ad97dc155195 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1463,6 +1463,12 @@ static inline bool arch_has_pfn_modify_check(void)
 	return boot_cpu_has_bug(X86_BUG_L1TF);
 }
 
+#define arch_faults_on_old_pte arch_faults_on_old_pte
+static inline bool arch_faults_on_old_pte(void)
+{
+	return false;
+}
+
 #include <asm-generic/pgtable.h>
 #endif	/* __ASSEMBLY__ */
 
-- 
2.17.1




* [PATCH v11 4/4] mm: fix double page fault on arm64 if PTE_AF is cleared
  2019-10-09  8:42 [PATCH v11 0/4] fix double page fault in cow_user_page for pfn mapping Jia He
                   ` (2 preceding siblings ...)
  2019-10-09  8:42 ` [PATCH v11 3/4] x86/mm: implement arch_faults_on_old_pte() stub on x86 Jia He
@ 2019-10-09  8:42 ` Jia He
  2019-10-10 16:45   ` Catalin Marinas
  3 siblings, 1 reply; 10+ messages in thread
From: Jia He @ 2019-10-09  8:42 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Mark Rutland, James Morse,
	Marc Zyngier, Matthew Wilcox, Kirill A. Shutemov,
	linux-arm-kernel, linux-kernel, linux-mm, Suzuki Poulose,
	Borislav Petkov, H. Peter Anvin, x86
  Cc: Thomas Gleixner, Andrew Morton, hejianet, Kaly Xin, nd, Jia He

When we tested the pmdk unit test [1] vmmalloc_fork TEST3 in an arm64
guest, we hit a double page fault in __copy_from_user_inatomic of
cow_user_page.

To reproduce the bug, run the following command after deploying everything:
make -C src/test/vmmalloc_fork/ TEST_TIME=60m check

The call trace below is from arm64 do_page_fault, for debugging purposes:
[  110.016195] Call trace:
[  110.016826]  do_page_fault+0x5a4/0x690
[  110.017812]  do_mem_abort+0x50/0xb0
[  110.018726]  el1_da+0x20/0xc4
[  110.019492]  __arch_copy_from_user+0x180/0x280
[  110.020646]  do_wp_page+0xb0/0x860
[  110.021517]  __handle_mm_fault+0x994/0x1338
[  110.022606]  handle_mm_fault+0xe8/0x180
[  110.023584]  do_page_fault+0x240/0x690
[  110.024535]  do_mem_abort+0x50/0xb0
[  110.025423]  el0_da+0x20/0x24

The pte info before __copy_from_user_inatomic is as follows (PTE_AF,
bit 10, is cleared):
[ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003,
               pmd=000000023d4b3003, pte=360000298607bd3

As told by Catalin: "On arm64 without hardware Access Flag, copying from
user will fail because the pte is old and cannot be marked young. So we
always end up with a zeroed page after fork() + CoW for pfn mappings. We
don't always have a hardware-managed access flag on arm64."

This patch fixes it by calling pte_mkyoung. Also, the parameters are
changed so that vmf can be passed to cow_user_page().

Add a WARN_ON_ONCE when __copy_from_user_inatomic() returns an error,
in case there is some obscure use-case (suggested by Kirill).

[1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork
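
Condensed from the diff below, the crux of the fix is to mark the pte
young under the page-table lock before the copy, so that
__copy_from_user_inatomic() cannot fault on an old pte:

	vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
	if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
		entry = pte_mkyoung(vmf->orig_pte);
		if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
			update_mmu_cache(vma, addr, vmf->pte);
	}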

Signed-off-by: Jia He <justin.he@arm.com>
Reported-by: Yibo Cai <Yibo.Cai@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/memory.c | 104 ++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 89 insertions(+), 15 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index b1ca51a079f2..b6a5d6a08438 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -118,6 +118,18 @@ int randomize_va_space __read_mostly =
 					2;
 #endif
 
+#ifndef arch_faults_on_old_pte
+static inline bool arch_faults_on_old_pte(void)
+{
+	/*
+	 * Those arches which don't have hw access flag feature need to
+	 * implement their own helper. By default, "true" means pagefault
+	 * will be hit on old pte.
+	 */
+	return true;
+}
+#endif
+
 static int __init disable_randmaps(char *s)
 {
 	randomize_va_space = 0;
@@ -2145,32 +2157,82 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
 	return same;
 }
 
-static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
+static inline bool cow_user_page(struct page *dst, struct page *src,
+				 struct vm_fault *vmf)
 {
+	bool ret;
+	void *kaddr;
+	void __user *uaddr;
+	bool force_mkyoung;
+	struct vm_area_struct *vma = vmf->vma;
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long addr = vmf->address;
+
 	debug_dma_assert_idle(src);
 
+	if (likely(src)) {
+		copy_user_highpage(dst, src, addr, vma);
+		return true;
+	}
+
 	/*
 	 * If the source page was a PFN mapping, we don't have
 	 * a "struct page" for it. We do a best-effort copy by
 	 * just copying from the original user address. If that
 	 * fails, we just zero-fill it. Live with it.
 	 */
-	if (unlikely(!src)) {
-		void *kaddr = kmap_atomic(dst);
-		void __user *uaddr = (void __user *)(va & PAGE_MASK);
+	kaddr = kmap_atomic(dst);
+	uaddr = (void __user *)(addr & PAGE_MASK);
+
+	/*
+	 * On architectures with software "accessed" bits, we would
+	 * take a double page fault, so mark it accessed here.
+	 */
+	force_mkyoung = arch_faults_on_old_pte() && !pte_young(vmf->orig_pte);
+	if (force_mkyoung) {
+		pte_t entry;
+
+		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
+		if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
+			/*
+			 * Other thread has already handled the fault
+			 * and we don't need to do anything. If it's
+			 * not the case, the fault will be triggered
+			 * again on the same address.
+			 */
+			ret = false;
+			goto pte_unlock;
+		}
 
+		entry = pte_mkyoung(vmf->orig_pte);
+		if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
+			update_mmu_cache(vma, addr, vmf->pte);
+	}
+
+	/*
+	 * This really shouldn't fail, because the page is there
+	 * in the page tables. But it might just be unreadable,
+	 * in which case we just give up and fill the result with
+	 * zeroes.
+	 */
+	if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
 		/*
-		 * This really shouldn't fail, because the page is there
-		 * in the page tables. But it might just be unreadable,
-		 * in which case we just give up and fill the result with
-		 * zeroes.
+		 * Give a warn in case there can be some obscure
+		 * use-case
 		 */
-		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
-			clear_page(kaddr);
-		kunmap_atomic(kaddr);
-		flush_dcache_page(dst);
-	} else
-		copy_user_highpage(dst, src, va, vma);
+		WARN_ON_ONCE(1);
+		clear_page(kaddr);
+	}
+
+	ret = true;
+
+pte_unlock:
+	if (force_mkyoung)
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
+	kunmap_atomic(kaddr);
+	flush_dcache_page(dst);
+
+	return ret;
 }
 
 static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
@@ -2327,7 +2389,19 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 				vmf->address);
 		if (!new_page)
 			goto oom;
-		cow_user_page(new_page, old_page, vmf->address, vma);
+
+		if (!cow_user_page(new_page, old_page, vmf)) {
+			/*
+			 * COW failed, if the fault was solved by other,
+			 * it's fine. If not, userspace would re-fault on
+			 * the same address and we will handle the fault
+			 * from the second attempt.
+			 */
+			put_page(new_page);
+			if (old_page)
+				put_page(old_page);
+			return 0;
+		}
 	}
 
 	if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false))
-- 
2.17.1




* Re: [PATCH v11 1/4] arm64: cpufeature: introduce helper cpu_has_hw_af()
  2019-10-09  8:42 ` [PATCH v11 1/4] arm64: cpufeature: introduce helper cpu_has_hw_af() Jia He
@ 2019-10-10 16:43   ` Catalin Marinas
  2019-10-11  1:16     ` Justin He (Arm Technology China)
  0 siblings, 1 reply; 10+ messages in thread
From: Catalin Marinas @ 2019-10-10 16:43 UTC (permalink / raw)
  To: Jia He
  Cc: Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Borislav Petkov,
	H. Peter Anvin, x86, Thomas Gleixner, Andrew Morton, hejianet,
	Kaly Xin, nd

On Wed, Oct 09, 2019 at 04:42:43PM +0800, Jia He wrote:
> We unconditionally set the HW_AFDBM capability and only enable it on
> CPUs which really have the feature. But sometimes we need to know
> whether this CPU has hardware AF support. So decouple AF from DBM with
> a new helper, cpu_has_hw_af().
> 
> Signed-off-by: Jia He <justin.he@arm.com>
> Suggested-by: Suzuki Poulose <Suzuki.Poulose@arm.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

I don't think I reviewed this version of the patch.

> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 9cde5d2e768f..1a95396ea5c8 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -659,6 +659,20 @@ static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
>  	default: return CONFIG_ARM64_PA_BITS;
>  	}
>  }
> +
> +/* Check whether hardware update of the Access flag is supported */
> +static inline bool cpu_has_hw_af(void)
> +{
> +	if (IS_ENABLED(CONFIG_ARM64_HW_AFDBM)) {

Please just return early here to avoid unnecessary indentation:

	if (!IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
		return false;

> +		u64 mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
> +
> +		return !!cpuid_feature_extract_unsigned_field(mmfr1,
> +						ID_AA64MMFR1_HADBS_SHIFT);

No need for !!, the return type is a bool already.

Anyway, apart from these nitpicks, the patch is fine and you can keep my
reviewed-by.

If later we notice a potential performance issue on this path, we can
turn it into a static label, as with other CPU features.

-- 
Catalin
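
Folding both nitpicks in would give something like the following sketch
(this exact code is not posted in this thread):

	/* Check whether hardware update of the Access flag is supported */
	static inline bool cpu_has_hw_af(void)
	{
		u64 mmfr1;

		if (!IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
			return false;

		mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
		return cpuid_feature_extract_unsigned_field(mmfr1,
						ID_AA64MMFR1_HADBS_SHIFT);
	}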



* Re: [PATCH v11 4/4] mm: fix double page fault on arm64 if PTE_AF is cleared
  2019-10-09  8:42 ` [PATCH v11 4/4] mm: fix double page fault on arm64 if PTE_AF is cleared Jia He
@ 2019-10-10 16:45   ` Catalin Marinas
  0 siblings, 0 replies; 10+ messages in thread
From: Catalin Marinas @ 2019-10-10 16:45 UTC (permalink / raw)
  To: Jia He
  Cc: Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Borislav Petkov,
	H. Peter Anvin, x86, Thomas Gleixner, Andrew Morton, hejianet,
	Kaly Xin, nd

On Wed, Oct 09, 2019 at 04:42:46PM +0800, Jia He wrote:
> When we tested the pmdk unit test [1] vmmalloc_fork TEST3 in an arm64
> guest, we hit a double page fault in __copy_from_user_inatomic of
> cow_user_page.
> 
> To reproduce the bug, run the following command after deploying everything:
> make -C src/test/vmmalloc_fork/ TEST_TIME=60m check
> 
> The call trace below is from arm64 do_page_fault, for debugging purposes:
> [  110.016195] Call trace:
> [  110.016826]  do_page_fault+0x5a4/0x690
> [  110.017812]  do_mem_abort+0x50/0xb0
> [  110.018726]  el1_da+0x20/0xc4
> [  110.019492]  __arch_copy_from_user+0x180/0x280
> [  110.020646]  do_wp_page+0xb0/0x860
> [  110.021517]  __handle_mm_fault+0x994/0x1338
> [  110.022606]  handle_mm_fault+0xe8/0x180
> [  110.023584]  do_page_fault+0x240/0x690
> [  110.024535]  do_mem_abort+0x50/0xb0
> [  110.025423]  el0_da+0x20/0x24
> 
> The pte info before __copy_from_user_inatomic is as follows (PTE_AF,
> bit 10, is cleared):
> [ffff9b007000] pgd=000000023d4f8003, pud=000000023da9b003,
>                pmd=000000023d4b3003, pte=360000298607bd3
> 
> As told by Catalin: "On arm64 without hardware Access Flag, copying from
> user will fail because the pte is old and cannot be marked young. So we
> always end up with a zeroed page after fork() + CoW for pfn mappings. We
> don't always have a hardware-managed access flag on arm64."
> 
> This patch fixes it by calling pte_mkyoung. Also, the parameters are
> changed so that vmf can be passed to cow_user_page().
> 
> Add a WARN_ON_ONCE when __copy_from_user_inatomic() returns an error,
> in case there is some obscure use-case (suggested by Kirill).
> 
> [1] https://github.com/pmem/pmdk/tree/master/src/test/vmmalloc_fork
> 
> Signed-off-by: Jia He <justin.he@arm.com>
> Reported-by: Yibo Cai <Yibo.Cai@arm.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

My reviewed-by still stands. Thanks.

-- 
Catalin



* RE: [PATCH v11 1/4] arm64: cpufeature: introduce helper cpu_has_hw_af()
  2019-10-10 16:43   ` Catalin Marinas
@ 2019-10-11  1:16     ` Justin He (Arm Technology China)
  2019-10-11 10:38       ` Catalin Marinas
  0 siblings, 1 reply; 10+ messages in thread
From: Justin He (Arm Technology China) @ 2019-10-11  1:16 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Borislav Petkov,
	H. Peter Anvin, x86, Thomas Gleixner, Andrew Morton, hejianet,
	Kaly Xin (Arm Technology China),
	nd

Hi Catalin

> On Wed, Oct 09, 2019 at 04:42:43PM +0800, Jia He wrote:
> > We unconditionally set the HW_AFDBM capability and only enable it on
> > CPUs which really have the feature. But sometimes we need to know
> > whether this CPU has hardware AF support. So decouple AF from DBM with
> > a new helper, cpu_has_hw_af().
> >
> > Signed-off-by: Jia He <justin.he@arm.com>
> > Suggested-by: Suzuki Poulose <Suzuki.Poulose@arm.com>
> > Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> 
> I don't think I reviewed this version of the patch.

Sorry about that.
> 
> > diff --git a/arch/arm64/include/asm/cpufeature.h
> b/arch/arm64/include/asm/cpufeature.h
> > index 9cde5d2e768f..1a95396ea5c8 100644
> > --- a/arch/arm64/include/asm/cpufeature.h
> > +++ b/arch/arm64/include/asm/cpufeature.h
> > @@ -659,6 +659,20 @@ static inline u32
> id_aa64mmfr0_parange_to_phys_shift(int parange)
> >  	default: return CONFIG_ARM64_PA_BITS;
> >  	}
> >  }
> > +
> > +/* Check whether hardware update of the Access flag is supported */
> > +static inline bool cpu_has_hw_af(void)
> > +{
> > +	if (IS_ENABLED(CONFIG_ARM64_HW_AFDBM)) {
> 
> Please just return early here to avoid unnecessary indentation:

Okay
> 
> 	if (!IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
> 		return false;
> 
> > +		u64 mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
> > +
> > +		return !!cpuid_feature_extract_unsigned_field(mmfr1,
> > +						ID_AA64MMFR1_HADBS_SHIFT);
> 
> No need for !!, the return type is a bool already.

But cpuid_feature_extract_unsigned_field has the return type "unsigned int" [1]

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/include/asm/cpufeature.h#n444

> 
> Anyway, apart from these nitpicks, the patch is fine and you can keep
> my reviewed-by.

Thanks 😉
> 
> If later we notice a potential performance issue on this path, we can
> turn it into a static label, as with other CPU features.

Okay

--
Cheers,
Justin (Jia He)



* Re: [PATCH v11 1/4] arm64: cpufeature: introduce helper cpu_has_hw_af()
  2019-10-11  1:16     ` Justin He (Arm Technology China)
@ 2019-10-11 10:38       ` Catalin Marinas
  2019-10-11 13:51         ` Justin He (Arm Technology China)
  0 siblings, 1 reply; 10+ messages in thread
From: Catalin Marinas @ 2019-10-11 10:38 UTC (permalink / raw)
  To: Justin He (Arm Technology China)
  Cc: Will Deacon, Mark Rutland, James Morse, Marc Zyngier,
	Matthew Wilcox, Kirill A. Shutemov, linux-arm-kernel,
	linux-kernel, linux-mm, Suzuki Poulose, Borislav Petkov,
	H. Peter Anvin, x86, Thomas Gleixner, Andrew Morton, hejianet,
	Kaly Xin (Arm Technology China),
	nd

On Fri, Oct 11, 2019 at 01:16:36AM +0000, Justin He (Arm Technology China) wrote:
> From: Catalin Marinas <catalin.marinas@arm.com>
> > On Wed, Oct 09, 2019 at 04:42:43PM +0800, Jia He wrote:
> > > +		u64 mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
> > > +
> > > +		return !!cpuid_feature_extract_unsigned_field(mmfr1,
> > > +						ID_AA64MMFR1_HADBS_SHIFT);
> > 
> > No need for !!, the return type is a bool already.
> 
> But cpuid_feature_extract_unsigned_field has the return type "unsigned int" [1]
> 
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/include/asm/cpufeature.h#n444

And the C language gives you the automatic conversion from unsigned int
to bool without the need for !!. The reason we use !! in some places is
when converting long to int (not bool), to avoid losing the top 32 bits.
See commit 84fe6826c28f ("arm64: mm: Add double logical invert to pte
accessors") for an explanation.
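
A small sketch of the distinction (an illustrative example, not from
the thread):

	u64 v = 1UL << 40;
	bool b = v;	/* implicit conversion: b == true */
	int  i = v;	/* truncated: i == 0 with a 32-bit int, which is
			 * why !! is needed when returning int, not bool */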

-- 
Catalin



* RE: [PATCH v11 1/4] arm64: cpufeature: introduce helper cpu_has_hw_af()
  2019-10-11 10:38       ` Catalin Marinas
@ 2019-10-11 13:51         ` Justin He (Arm Technology China)
  0 siblings, 0 replies; 10+ messages in thread
From: Justin He (Arm Technology China) @ 2019-10-11 13:51 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-arm-kernel, linux-kernel, linux-mm, x86, hejianet,
	Kaly Xin (Arm Technology China),
	nd

Hi Catalin,
Thanks for the detailed explanation.
Will send out v12 soon after testing.

--
Cheers,
Justin (Jia He)

 



