All of lore.kernel.org
* [PATCH -next v4 0/4] mm: page_table_check: add support on arm64 and riscv
@ 2022-04-18  3:44 ` Tong Tiangen
  0 siblings, 0 replies; 66+ messages in thread
From: Tong Tiangen @ 2022-04-18  3:44 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Andrew Morton, Catalin Marinas,
	Will Deacon, Paul Walmsley, Palmer Dabbelt, Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Tong Tiangen, Kefeng Wang, Guohanjun

Page table check performs extra verification when pages become
accessible from userspace, i.e. when their page table entries
(PTEs, PMDs, etc.) are added to the page table. It is currently
supported on x86_64 [1].

This patchset refactors the common code to make it easier to support
new architectures, then enables the feature on arm64 and riscv.

[1]https://lore.kernel.org/lkml/20211123214814.3756047-1-pasha.tatashin@soleen.com/
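The invariant this feature builds on can be illustrated with a few lines of standalone C. This is a simplified sketch, not kernel code: `pteval_t` is reduced to a plain 64-bit integer, and the bit positions are the x86 ones (`_PAGE_PRESENT` is bit 0, `_PAGE_USER` is bit 2), matching the predicate that patch 1/4 moves into x86's pgtable.h.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified model of the two x86 PTE bits the checker cares about:
 * on x86 the present bit is bit 0 and the user/supervisor bit is
 * bit 2 of a page table entry. */
#define _PAGE_PRESENT (1ULL << 0)
#define _PAGE_USER    (1ULL << 2)

typedef uint64_t pteval_t;

/* Mirrors the pte_user_accessible_page() predicate from this series:
 * an entry exposes a page to userspace only if it is both present
 * and user-mapped. Entries failing this test are ignored by the
 * checker, since they cannot be reached from userspace. */
static bool pte_user_accessible_page(pteval_t pte)
{
	return (pte & _PAGE_PRESENT) && (pte & _PAGE_USER);
}
```

A present-but-supervisor entry (e.g. a kernel mapping) is not user accessible, and neither is a user entry that has been cleared of its present bit.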

v3 -> v4:
 1. Adapt to next-20220414

v2 -> v3:
  1. Modify ptep_clear() in include/linux/pgtable.h to use IS_ENABLED(),
     as suggested by Pasha.

v1 -> v2:
  1. Fix arm64's pte/pmd/pud_user_accessible_page() as suggested
     by Catalin.
  2. Also fix riscv's pte_pmd_pud_user_accessible_page().

Kefeng Wang (2):
  mm: page_table_check: move pxx_user_accessible_page into x86
  arm64: mm: add support for page table check

Tong Tiangen (2):
  mm: page_table_check: add hooks to public helpers
  riscv: mm: add support for page table check

 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/pgtable.h | 65 ++++++++++++++++++++++++---
 arch/riscv/Kconfig               |  1 +
 arch/riscv/include/asm/pgtable.h | 77 +++++++++++++++++++++++++++++---
 arch/x86/include/asm/pgtable.h   | 29 +++++++-----
 include/linux/pgtable.h          | 26 +++++++----
 mm/page_table_check.c            | 25 ++++-------
 7 files changed, 178 insertions(+), 46 deletions(-)

-- 
2.25.1


* [PATCH -next v4 1/4] mm: page_table_check: move pxx_user_accessible_page into x86
  2022-04-18  3:44 ` Tong Tiangen
@ 2022-04-18  3:44   ` Tong Tiangen
  -1 siblings, 0 replies; 66+ messages in thread
From: Tong Tiangen @ 2022-04-18  3:44 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Andrew Morton, Catalin Marinas,
	Will Deacon, Paul Walmsley, Palmer Dabbelt, Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Tong Tiangen, Kefeng Wang, Guohanjun

From: Kefeng Wang <wangkefeng.wang@huawei.com>

The pxx_user_accessible_page() helpers check architecture-specific
PTE bits, so move them into x86's pgtable.h. Also add default
PMD_PAGE_SIZE/PUD_PAGE_SIZE definitions, in preparation for
supporting the page table check feature on new architectures.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 arch/x86/include/asm/pgtable.h | 19 +++++++++++++++++++
 mm/page_table_check.c          | 25 ++++++++-----------------
 2 files changed, 27 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index b7464f13e416..564abe42b0f7 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1447,6 +1447,25 @@ static inline bool arch_has_hw_pte_young(void)
 	return true;
 }
 
+#ifdef CONFIG_PAGE_TABLE_CHECK
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
+}
+
+static inline bool pmd_user_accessible_page(pmd_t pmd)
+{
+	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) &&
+		(pmd_val(pmd) & _PAGE_USER);
+}
+
+static inline bool pud_user_accessible_page(pud_t pud)
+{
+	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) &&
+		(pud_val(pud) & _PAGE_USER);
+}
+#endif
+
 #endif	/* __ASSEMBLY__ */
 
 #endif /* _ASM_X86_PGTABLE_H */
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 2458281bff89..145f059d1c4d 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -10,6 +10,14 @@
 #undef pr_fmt
 #define pr_fmt(fmt)	"page_table_check: " fmt
 
+#ifndef PMD_PAGE_SIZE
+#define PMD_PAGE_SIZE	PMD_SIZE
+#endif
+
+#ifndef PUD_PAGE_SIZE
+#define PUD_PAGE_SIZE	PUD_SIZE
+#endif
+
 struct page_table_check {
 	atomic_t anon_map_count;
 	atomic_t file_map_count;
@@ -52,23 +60,6 @@ static struct page_table_check *get_page_table_check(struct page_ext *page_ext)
 	return (void *)(page_ext) + page_table_check_ops.offset;
 }
 
-static inline bool pte_user_accessible_page(pte_t pte)
-{
-	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
-}
-
-static inline bool pmd_user_accessible_page(pmd_t pmd)
-{
-	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) &&
-		(pmd_val(pmd) & _PAGE_USER);
-}
-
-static inline bool pud_user_accessible_page(pud_t pud)
-{
-	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) &&
-		(pud_val(pud) & _PAGE_USER);
-}
-
 /*
  * An enty is removed from the page table, decrement the counters for that page
  * verify that it is of correct type and counters do not become negative.
-- 
2.25.1
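The counters that the mm/page_table_check.c hunk above maintains per page can be sketched as a toy model. This is a deliberately simplified illustration, not the kernel code (the real checker uses atomics, page_ext storage, and a stricter rule for writable anonymous mappings): the core idea is that every mapping bumps either the anon or the file counter, a page must not be mapped both ways at once, and a counter must never go negative on unmap.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the per-page counters kept in struct page_table_check:
 * a page is tracked as either anonymous or file-backed, never both. */
struct page_table_check {
	int anon_map_count;
	int file_map_count;
};

static void check_map(struct page_table_check *ptc, bool anon)
{
	if (anon) {
		assert(ptc->file_map_count == 0); /* no mixed mappings */
		ptc->anon_map_count++;
	} else {
		assert(ptc->anon_map_count == 0);
		ptc->file_map_count++;
	}
}

static void check_unmap(struct page_table_check *ptc, bool anon)
{
	int *count = anon ? &ptc->anon_map_count : &ptc->file_map_count;

	(*count)--;
	assert(*count >= 0); /* more unmaps than maps is a bug */
}

/* Map a file page twice, unmap it twice; a balanced sequence must
 * leave both counters back at zero. */
static int demo_balanced_file_mappings(void)
{
	struct page_table_check ptc = { 0, 0 };

	check_map(&ptc, false);
	check_map(&ptc, false);
	check_unmap(&ptc, false);
	check_unmap(&ptc, false);
	return ptc.anon_map_count + ptc.file_map_count;
}
```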


* [PATCH -next v4 2/4] mm: page_table_check: add hooks to public helpers
  2022-04-18  3:44 ` Tong Tiangen
@ 2022-04-18  3:44   ` Tong Tiangen
  -1 siblings, 0 replies; 66+ messages in thread
From: Tong Tiangen @ 2022-04-18  3:44 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Andrew Morton, Catalin Marinas,
	Will Deacon, Paul Walmsley, Palmer Dabbelt, Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Tong Tiangen, Kefeng Wang, Guohanjun

Move ptep_clear() to include/linux/pgtable.h and add page table check
hooks to the generic helpers, in preparation for supporting the page
table check feature on new architectures.
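The key move in this patch is the IS_ENABLED() dispatch inside the generic ptep_clear(): when the checker is built in, clearing must go through ptep_get_and_clear() so the old entry value reaches page_table_check_pte_clear(); otherwise a plain clear suffices. A simplified, userspace-runnable model of that dispatch (names mirror the kernel's, but the bodies are stand-ins):

```c
#include <stdint.h>

/* Stand-ins for the Kconfig option and the kernel's IS_ENABLED(). */
#define CONFIG_PAGE_TABLE_CHECK 1
#define IS_ENABLED(opt) (opt)

typedef uint64_t pte_t;

static int cleared_entries_seen; /* entries the checker was told about */

static void page_table_check_pte_clear(pte_t old)
{
	if (old) /* only previously mapped entries are of interest */
		cleared_entries_seen++;
}

/* Returns the old value, so the checker can inspect what was mapped. */
static pte_t ptep_get_and_clear(pte_t *ptep)
{
	pte_t old = *ptep;

	*ptep = 0;
	page_table_check_pte_clear(old);
	return old;
}

/* Plain clear: the old value is discarded, nothing to check. */
static void pte_clear(pte_t *ptep)
{
	*ptep = 0;
}

static void ptep_clear(pte_t *ptep)
{
	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
		ptep_get_and_clear(ptep);
	else
		pte_clear(ptep);
}

static pte_t demo_clear_entry(void)
{
	pte_t entry = 0x5; /* some nonzero "mapped" entry */

	ptep_clear(&entry);
	return entry;
}
```

Because IS_ENABLED() resolves at compile time, the untaken branch is dead code and the !CONFIG_PAGE_TABLE_CHECK build keeps the cheap pte_clear() path with no runtime cost.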

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 arch/x86/include/asm/pgtable.h | 10 ----------
 include/linux/pgtable.h        | 26 ++++++++++++++++++--------
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 564abe42b0f7..51cd39858f81 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1073,16 +1073,6 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 	return pte;
 }
 
-#define __HAVE_ARCH_PTEP_CLEAR
-static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep)
-{
-	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
-		ptep_get_and_clear(mm, addr, ptep);
-	else
-		pte_clear(mm, addr, ptep);
-}
-
 #define __HAVE_ARCH_PTEP_SET_WRPROTECT
 static inline void ptep_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pte_t *ptep)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 49ab8ee2d6d7..10d2d91edf20 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -12,6 +12,7 @@
 #include <linux/bug.h>
 #include <linux/errno.h>
 #include <asm-generic/pgtable_uffd.h>
+#include <linux/page_table_check.h>
 
 #if 5 - defined(__PAGETABLE_P4D_FOLDED) - defined(__PAGETABLE_PUD_FOLDED) - \
 	defined(__PAGETABLE_PMD_FOLDED) != CONFIG_PGTABLE_LEVELS
@@ -272,14 +273,6 @@ static inline bool arch_has_hw_pte_young(void)
 }
 #endif
 
-#ifndef __HAVE_ARCH_PTEP_CLEAR
-static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep)
-{
-	pte_clear(mm, addr, ptep);
-}
-#endif
-
 #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address,
@@ -287,10 +280,22 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 {
 	pte_t pte = *ptep;
 	pte_clear(mm, address, ptep);
+	page_table_check_pte_clear(mm, address, pte);
 	return pte;
 }
 #endif
 
+#ifndef __HAVE_ARCH_PTEP_CLEAR
+static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
+			      pte_t *ptep)
+{
+	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
+		ptep_get_and_clear(mm, addr, ptep);
+	else
+		pte_clear(mm, addr, ptep);
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_GET
 static inline pte_t ptep_get(pte_t *ptep)
 {
@@ -360,7 +365,10 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					    pmd_t *pmdp)
 {
 	pmd_t pmd = *pmdp;
+
 	pmd_clear(pmdp);
+	page_table_check_pmd_clear(mm, address, pmd);
+
 	return pmd;
 }
 #endif /* __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR */
@@ -372,6 +380,8 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 	pud_t pud = *pudp;
 
 	pud_clear(pudp);
+	page_table_check_pud_clear(mm, address, pud);
+
 	return pud;
 }
 #endif /* __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR */
-- 
2.25.1



* [PATCH -next v4 3/4] arm64: mm: add support for page table check
  2022-04-18  3:44 ` Tong Tiangen
@ 2022-04-18  3:44   ` Tong Tiangen
  -1 siblings, 0 replies; 66+ messages in thread
From: Tong Tiangen @ 2022-04-18  3:44 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Andrew Morton, Catalin Marinas,
	Will Deacon, Paul Walmsley, Palmer Dabbelt, Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Tong Tiangen, Kefeng Wang, Guohanjun

From: Kefeng Wang <wangkefeng.wang@huawei.com>

As done for x86_64 in commit d283d422c6c4 ("x86: mm: add x86_64
support for page table check"), add the necessary page table check
hooks to the routines that modify user page tables.
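The arm64 pte_user_accessible_page() added below differs from x86 in one interesting way: it also treats execute-only user mappings as accessible. A simplified standalone model follows; the bit positions are my reading of arch/arm64/include/asm/pgtable-hwdef.h (PTE_VALID bit 0, PTE_USER/AP[1] bit 6, PTE_UXN bit 54), and pte_present() is reduced to a plain valid-bit test, ignoring PROT_NONE handling.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed arm64 descriptor bits (simplified). Note that PTE_UXN is a
 * user *no-execute* bit, so "user executable" means the bit is clear. */
#define PTE_VALID (1ULL << 0)
#define PTE_USER  (1ULL << 6)   /* AP[1]: EL0 access permitted */
#define PTE_UXN   (1ULL << 54)

typedef uint64_t pteval_t;

static bool pte_present(pteval_t pte)   { return pte & PTE_VALID; }
static bool pte_user(pteval_t pte)      { return pte & PTE_USER; }
static bool pte_user_exec(pteval_t pte) { return !(pte & PTE_UXN); }

/* An entry is user accessible if it is present and either readable or
 * executable from EL0. Unlike x86, arm64 supports execute-only user
 * mappings (PTE_USER clear, PTE_UXN clear), which must be caught too;
 * kernel-only mappings carry PTE_UXN set and fail both tests. */
static bool pte_user_accessible_page(pteval_t pte)
{
	return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
}
```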

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/pgtable.h | 65 +++++++++++++++++++++++++++++---
 2 files changed, 61 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e80fd2372f02..7114d2d5155e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -92,6 +92,7 @@ config ARM64
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
 	select ARCH_SUPPORTS_NUMA_BALANCING
+	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
 	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT
 	select ARCH_WANT_DEFAULT_BPF_JIT
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 930077f7b572..9f8f97a7cc7c 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -33,6 +33,7 @@
 #include <linux/mmdebug.h>
 #include <linux/mm_types.h>
 #include <linux/sched.h>
+#include <linux/page_table_check.h>
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
@@ -96,6 +97,7 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
 #define pte_young(pte)		(!!(pte_val(pte) & PTE_AF))
 #define pte_special(pte)	(!!(pte_val(pte) & PTE_SPECIAL))
 #define pte_write(pte)		(!!(pte_val(pte) & PTE_WRITE))
+#define pte_user(pte)		(!!(pte_val(pte) & PTE_USER))
 #define pte_user_exec(pte)	(!(pte_val(pte) & PTE_UXN))
 #define pte_cont(pte)		(!!(pte_val(pte) & PTE_CONT))
 #define pte_devmap(pte)		(!!(pte_val(pte) & PTE_DEVMAP))
@@ -312,7 +314,7 @@ static inline void __check_racy_pte_update(struct mm_struct *mm, pte_t *ptep,
 		     __func__, pte_val(old_pte), pte_val(pte));
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
+static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
 	if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte))
@@ -343,6 +345,13 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 	set_pte(ptep, pte);
 }
 
+static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
+			      pte_t *ptep, pte_t pte)
+{
+	page_table_check_pte_set(mm, addr, ptep, pte);
+	return __set_pte_at(mm, addr, ptep, pte);
+}
+
 /*
  * Huge pte definitions.
  */
@@ -454,6 +463,8 @@ static inline int pmd_trans_huge(pmd_t pmd)
 #define pmd_dirty(pmd)		pte_dirty(pmd_pte(pmd))
 #define pmd_young(pmd)		pte_young(pmd_pte(pmd))
 #define pmd_valid(pmd)		pte_valid(pmd_pte(pmd))
+#define pmd_user(pmd)		pte_user(pmd_pte(pmd))
+#define pmd_user_exec(pmd)	pte_user_exec(pmd_pte(pmd))
 #define pmd_cont(pmd)		pte_cont(pmd_pte(pmd))
 #define pmd_wrprotect(pmd)	pte_pmd(pte_wrprotect(pmd_pte(pmd)))
 #define pmd_mkold(pmd)		pte_pmd(pte_mkold(pmd_pte(pmd)))
@@ -501,8 +512,19 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
 #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
 #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
-#define set_pmd_at(mm, addr, pmdp, pmd)	set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
-#define set_pud_at(mm, addr, pudp, pud)	set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
+static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
+			      pmd_t *pmdp, pmd_t pmd)
+{
+	page_table_check_pmd_set(mm, addr, pmdp, pmd);
+	return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
+}
+
+static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
+			      pud_t *pudp, pud_t pud)
+{
+	page_table_check_pud_set(mm, addr, pudp, pud);
+	return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
+}
 
 #define __p4d_to_phys(p4d)	__pte_to_phys(p4d_pte(p4d))
 #define __phys_to_p4d_val(phys)	__phys_to_pte_val(phys)
@@ -643,6 +665,24 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 #define pud_present(pud)	pte_present(pud_pte(pud))
 #define pud_leaf(pud)		pud_sect(pud)
 #define pud_valid(pud)		pte_valid(pud_pte(pud))
+#define pud_user(pud)		pte_user(pud_pte(pud))
+
+#ifdef CONFIG_PAGE_TABLE_CHECK
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+	return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
+}
+
+static inline bool pmd_user_accessible_page(pmd_t pmd)
+{
+	return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
+}
+
+static inline bool pud_user_accessible_page(pud_t pud)
+{
+	return pud_present(pud) && pud_user(pud);
+}
+#endif
 
 static inline void set_pud(pud_t *pudp, pud_t pud)
 {
@@ -872,11 +912,21 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
+				       unsigned long address, pte_t *ptep)
+{
+	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
+}
+
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address, pte_t *ptep)
 {
-	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
+	pte_t pte = __ptep_get_and_clear(mm, address, ptep);
+
+	page_table_check_pte_clear(mm, address, pte);
+
+	return pte;
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -884,7 +934,11 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					    unsigned long address, pmd_t *pmdp)
 {
-	return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
+	pmd_t pmd = pte_pmd(__ptep_get_and_clear(mm, address, (pte_t *)pmdp));
+
+	page_table_check_pmd_clear(mm, address, pmd);
+
+	return pmd;
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
@@ -918,6 +972,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 		unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
+	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
 	return __pmd(xchg_relaxed(&pmd_val(*pmdp), pmd_val(pmd)));
 }
 #endif
-- 
2.25.1



* [PATCH -next v4 4/4] riscv: mm: add support for page table check
@ 2022-04-18  3:44   ` Tong Tiangen
  -1 siblings, 0 replies; 66+ messages in thread
From: Tong Tiangen @ 2022-04-18  3:44 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Andrew Morton, Catalin Marinas,
	Will Deacon, Paul Walmsley, Palmer Dabbelt, Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Tong Tiangen, Kefeng Wang, Guohanjun

As done in commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
check"), add the necessary page table check hooks into the routines that
modify user page tables.

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 arch/riscv/Kconfig               |  1 +
 arch/riscv/include/asm/pgtable.h | 77 +++++++++++++++++++++++++++++---
 2 files changed, 72 insertions(+), 6 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 63f7258984f3..66d241cee52c 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -38,6 +38,7 @@ config RISCV
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
 	select ARCH_SUPPORTS_HUGETLBFS if MMU
+	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
 	select ARCH_USE_MEMTEST
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
 	select ARCH_WANT_FRAME_POINTERS
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 046b44225623..6f22d9580658 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -114,6 +114,8 @@
 #include <asm/pgtable-32.h>
 #endif /* CONFIG_64BIT */
 
+#include <linux/page_table_check.h>
+
 #ifdef CONFIG_XIP_KERNEL
 #define XIP_FIXUP(addr) ({							\
 	uintptr_t __a = (uintptr_t)(addr);					\
@@ -315,6 +317,11 @@ static inline int pte_exec(pte_t pte)
 	return pte_val(pte) & _PAGE_EXEC;
 }
 
+static inline int pte_user(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_USER;
+}
+
 static inline int pte_huge(pte_t pte)
 {
 	return pte_present(pte) && (pte_val(pte) & _PAGE_LEAF);
@@ -446,7 +453,7 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 
 void flush_icache_pte(pte_t pte);
 
-static inline void set_pte_at(struct mm_struct *mm,
+static inline void __set_pte_at(struct mm_struct *mm,
 	unsigned long addr, pte_t *ptep, pte_t pteval)
 {
 	if (pte_present(pteval) && pte_exec(pteval))
@@ -455,10 +462,17 @@ static inline void set_pte_at(struct mm_struct *mm,
 	set_pte(ptep, pteval);
 }
 
+static inline void set_pte_at(struct mm_struct *mm,
+	unsigned long addr, pte_t *ptep, pte_t pteval)
+{
+	page_table_check_pte_set(mm, addr, ptep, pteval);
+	__set_pte_at(mm, addr, ptep, pteval);
+}
+
 static inline void pte_clear(struct mm_struct *mm,
 	unsigned long addr, pte_t *ptep)
 {
-	set_pte_at(mm, addr, ptep, __pte(0));
+	__set_pte_at(mm, addr, ptep, __pte(0));
 }
 
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
@@ -475,11 +489,21 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 	return true;
 }
 
+static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
+				       unsigned long address, pte_t *ptep)
+{
+	return __pte(atomic_long_xchg((atomic_long_t *)ptep, 0));
+}
+
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address, pte_t *ptep)
 {
-	return __pte(atomic_long_xchg((atomic_long_t *)ptep, 0));
+	pte_t pte = __ptep_get_and_clear(mm, address, ptep);
+
+	page_table_check_pte_clear(mm, address, pte);
+
+	return pte;
 }
 
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
@@ -546,6 +570,13 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
 	return ((__pmd_to_phys(pmd) & PMD_MASK) >> PAGE_SHIFT);
 }
 
+#define __pud_to_phys(pud)  (pud_val(pud) >> _PAGE_PFN_SHIFT << PAGE_SHIFT)
+
+static inline unsigned long pud_pfn(pud_t pud)
+{
+	return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT);
+}
+
 static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 {
 	return pte_pmd(pte_modify(pmd_pte(pmd), newprot));
@@ -567,6 +598,11 @@ static inline int pmd_young(pmd_t pmd)
 	return pte_young(pmd_pte(pmd));
 }
 
+static inline int pmd_user(pmd_t pmd)
+{
+	return pte_user(pmd_pte(pmd));
+}
+
 static inline pmd_t pmd_mkold(pmd_t pmd)
 {
 	return pte_pmd(pte_mkold(pmd_pte(pmd)));
@@ -600,15 +636,39 @@ static inline pmd_t pmd_mkdirty(pmd_t pmd)
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd)
 {
-	return set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
+	page_table_check_pmd_set(mm, addr, pmdp, pmd);
+	return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
+}
+
+static inline int pud_user(pud_t pud)
+{
+	return pte_user(pud_pte(pud));
 }
 
 static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
 				pud_t *pudp, pud_t pud)
 {
-	return set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
+	page_table_check_pud_set(mm, addr, pudp, pud);
+	return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
+}
+
+#ifdef CONFIG_PAGE_TABLE_CHECK
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+	return pte_present(pte) && pte_user(pte);
 }
 
+static inline bool pmd_user_accessible_page(pmd_t pmd)
+{
+	return pmd_leaf(pmd) && pmd_user(pmd);
+}
+
+static inline bool pud_user_accessible_page(pud_t pud)
+{
+	return pud_leaf(pud) && pud_user(pud);
+}
+#endif
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static inline int pmd_trans_huge(pmd_t pmd)
 {
@@ -634,7 +694,11 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					unsigned long address, pmd_t *pmdp)
 {
-	return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
+	pmd_t pmd = pte_pmd(__ptep_get_and_clear(mm, address, (pte_t *)pmdp));
+
+	page_table_check_pmd_clear(mm, address, pmd);
+
+	return pmd;
 }
 
 #define __HAVE_ARCH_PMDP_SET_WRPROTECT
@@ -648,6 +712,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 				unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
+	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
 	return __pmd(atomic_long_xchg((atomic_long_t *)pmdp, pmd_val(pmd)));
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-- 
2.25.1


+{
+	return pte_user(pmd_pte(pmd));
+}
+
 static inline pmd_t pmd_mkold(pmd_t pmd)
 {
 	return pte_pmd(pte_mkold(pmd_pte(pmd)));
@@ -600,15 +636,39 @@ static inline pmd_t pmd_mkdirty(pmd_t pmd)
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd)
 {
-	return set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
+	page_table_check_pmd_set(mm, addr, pmdp, pmd);
+	return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
+}
+
+static inline int pud_user(pud_t pud)
+{
+	return pte_user(pud_pte(pud));
 }
 
 static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
 				pud_t *pudp, pud_t pud)
 {
-	return set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
+	page_table_check_pud_set(mm, addr, pudp, pud);
+	return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
+}
+
+#ifdef CONFIG_PAGE_TABLE_CHECK
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+	return pte_present(pte) && pte_user(pte);
 }
 
+static inline bool pmd_user_accessible_page(pmd_t pmd)
+{
+	return pmd_leaf(pmd) && pmd_user(pmd);
+}
+
+static inline bool pud_user_accessible_page(pud_t pud)
+{
+	return pud_leaf(pud) && pud_user(pud);
+}
+#endif
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static inline int pmd_trans_huge(pmd_t pmd)
 {
@@ -634,7 +694,11 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					unsigned long address, pmd_t *pmdp)
 {
-	return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
+	pmd_t pmd = pte_pmd(__ptep_get_and_clear(mm, address, (pte_t *)pmdp));
+
+	page_table_check_pmd_clear(mm, address, pmd);
+
+	return pmd;
 }
 
 #define __HAVE_ARCH_PMDP_SET_WRPROTECT
@@ -648,6 +712,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 				unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
+	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
 	return __pmd(atomic_long_xchg((atomic_long_t *)pmdp, pmd_val(pmd)));
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-- 
2.25.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 0/4]mm: page_table_check: add support on arm64 and riscv
  2022-04-18  3:44 ` Tong Tiangen
  (?)
@ 2022-04-18  6:12   ` Tong Tiangen
  -1 siblings, 0 replies; 66+ messages in thread
From: Tong Tiangen @ 2022-04-18  6:12 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Andrew Morton, Catalin Marinas,
	Will Deacon, Paul Walmsley, Palmer Dabbelt, Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun

Hi Andrew, Catalin, Palmer:

This patchset modifies code in mm, x86, arm64, and riscv. Who could help
merge it if there are no objections? Perhaps Andrew would be the most
appropriate?

Thanks,
Tong.

On 2022/4/18 11:44, Tong Tiangen wrote:
> Page table check performs extra verifications at the time when new
> pages become accessible from the userspace by getting their page
> table entries (PTEs PMDs etc.) added into the table. It is supported
> on X86[1].
> 
> This patchset made some simple changes and make it easier to support
> new architecture, then we support this feature on ARM64 and RISCV.
> 
> [1]https://lore.kernel.org/lkml/20211123214814.3756047-1-pasha.tatashin@soleen.com/
> 
> v3 -> v4:
>   1. Adapt to next-20220414
> 
> v2 -> v3:
>    1. Modify ptep_clear() in include/linux/pgtable.h, using IS_ENABLED
>       according to the suggestions of Pasha.
> 
> v1 -> v2:
>    1. Fix arm64's pte/pmd/pud_user_accessible_page() according to the
>       suggestions of Catalin.
>    2. Also fix riscv's pte_pmd_pud_user_accessible_page().
> 
> Kefeng Wang (2):
>    mm: page_table_check: move pxx_user_accessible_page into x86
>    arm64: mm: add support for page table check
> 
> Tong Tiangen (2):
>    mm: page_table_check: add hooks to public helpers
>    riscv: mm: add support for page table check
> 
>   arch/arm64/Kconfig               |  1 +
>   arch/arm64/include/asm/pgtable.h | 65 ++++++++++++++++++++++++---
>   arch/riscv/Kconfig               |  1 +
>   arch/riscv/include/asm/pgtable.h | 77 +++++++++++++++++++++++++++++---
>   arch/x86/include/asm/pgtable.h   | 29 +++++++-----
>   include/linux/pgtable.h          | 26 +++++++----
>   mm/page_table_check.c            | 25 ++++-------
>   7 files changed, 178 insertions(+), 46 deletions(-)
> 

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
  2022-04-18  3:44   ` Tong Tiangen
  (?)
@ 2022-04-18  9:28     ` Anshuman Khandual
  -1 siblings, 0 replies; 66+ messages in thread
From: Anshuman Khandual @ 2022-04-18  9:28 UTC (permalink / raw)
  To: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun

On 4/18/22 09:14, Tong Tiangen wrote:
> From: Kefeng Wang <wangkefeng.wang@huawei.com>
> 
> As commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
> check"), add some necessary page table check hooks into routines that
> modify user page tables.
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> ---
>  arch/arm64/Kconfig               |  1 +
>  arch/arm64/include/asm/pgtable.h | 65 +++++++++++++++++++++++++++++---
>  2 files changed, 61 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index e80fd2372f02..7114d2d5155e 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -92,6 +92,7 @@ config ARM64
>  	select ARCH_SUPPORTS_ATOMIC_RMW
>  	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
>  	select ARCH_SUPPORTS_NUMA_BALANCING
> +	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
>  	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT
>  	select ARCH_WANT_DEFAULT_BPF_JIT
>  	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 930077f7b572..9f8f97a7cc7c 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -33,6 +33,7 @@
>  #include <linux/mmdebug.h>
>  #include <linux/mm_types.h>
>  #include <linux/sched.h>
> +#include <linux/page_table_check.h>
>  
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
> @@ -96,6 +97,7 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
>  #define pte_young(pte)		(!!(pte_val(pte) & PTE_AF))
>  #define pte_special(pte)	(!!(pte_val(pte) & PTE_SPECIAL))
>  #define pte_write(pte)		(!!(pte_val(pte) & PTE_WRITE))
> +#define pte_user(pte)		(!!(pte_val(pte) & PTE_USER))
>  #define pte_user_exec(pte)	(!(pte_val(pte) & PTE_UXN))
>  #define pte_cont(pte)		(!!(pte_val(pte) & PTE_CONT))
>  #define pte_devmap(pte)		(!!(pte_val(pte) & PTE_DEVMAP))
> @@ -312,7 +314,7 @@ static inline void __check_racy_pte_update(struct mm_struct *mm, pte_t *ptep,
>  		     __func__, pte_val(old_pte), pte_val(pte));
>  }
>  
> -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
> +static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
>  			      pte_t *ptep, pte_t pte)
>  {
>  	if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte))
> @@ -343,6 +345,13 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
>  	set_pte(ptep, pte);
>  }
>  
> +static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
> +			      pte_t *ptep, pte_t pte)
> +{
> +	page_table_check_pte_set(mm, addr, ptep, pte);
> +	return __set_pte_at(mm, addr, ptep, pte);
> +}
> +
>  /*
>   * Huge pte definitions.
>   */
> @@ -454,6 +463,8 @@ static inline int pmd_trans_huge(pmd_t pmd)
>  #define pmd_dirty(pmd)		pte_dirty(pmd_pte(pmd))
>  #define pmd_young(pmd)		pte_young(pmd_pte(pmd))
>  #define pmd_valid(pmd)		pte_valid(pmd_pte(pmd))
> +#define pmd_user(pmd)		pte_user(pmd_pte(pmd))
> +#define pmd_user_exec(pmd)	pte_user_exec(pmd_pte(pmd))
>  #define pmd_cont(pmd)		pte_cont(pmd_pte(pmd))
>  #define pmd_wrprotect(pmd)	pte_pmd(pte_wrprotect(pmd_pte(pmd)))
>  #define pmd_mkold(pmd)		pte_pmd(pte_mkold(pmd_pte(pmd)))
> @@ -501,8 +512,19 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
>  #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
>  #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
>  
> -#define set_pmd_at(mm, addr, pmdp, pmd)	set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
> -#define set_pud_at(mm, addr, pudp, pud)	set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
> +static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
> +			      pmd_t *pmdp, pmd_t pmd)
> +{
> +	page_table_check_pmd_set(mm, addr, pmdp, pmd);
> +	return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
> +}
> +
> +static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
> +			      pud_t *pudp, pud_t pud)
> +{
> +	page_table_check_pud_set(mm, addr, pudp, pud);
> +	return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
> +}
>  
>  #define __p4d_to_phys(p4d)	__pte_to_phys(p4d_pte(p4d))
>  #define __phys_to_p4d_val(phys)	__phys_to_pte_val(phys)
> @@ -643,6 +665,24 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
>  #define pud_present(pud)	pte_present(pud_pte(pud))
>  #define pud_leaf(pud)		pud_sect(pud)
>  #define pud_valid(pud)		pte_valid(pud_pte(pud))
> +#define pud_user(pud)		pte_user(pud_pte(pud))
> +
> +#ifdef CONFIG_PAGE_TABLE_CHECK
> +static inline bool pte_user_accessible_page(pte_t pte)
> +{
> +	return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
> +}
> +
> +static inline bool pmd_user_accessible_page(pmd_t pmd)
> +{
> +	return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
> +}
> +
> +static inline bool pud_user_accessible_page(pud_t pud)
> +{
> +	return pud_present(pud) && pud_user(pud);
> +}
> +#endif
>  
>  static inline void set_pud(pud_t *pudp, pud_t pud)
>  {
> @@ -872,11 +912,21 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>  
> +static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
> +				       unsigned long address, pte_t *ptep)
> +{
> +	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
> +}
> +
>  #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
>  static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  				       unsigned long address, pte_t *ptep)
>  {
> -	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
> +	pte_t pte = __ptep_get_and_clear(mm, address, ptep);
> +
> +	page_table_check_pte_clear(mm, address, pte);
> +
> +	return pte;
>  }
>  
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> @@ -884,7 +934,11 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>  					    unsigned long address, pmd_t *pmdp)
>  {
> -	return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
> +	pmd_t pmd = pte_pmd(__ptep_get_and_clear(mm, address, (pte_t *)pmdp));
> +
> +	page_table_check_pmd_clear(mm, address, pmd);
> +
> +	return pmd;
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>  
> @@ -918,6 +972,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>  static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>  		unsigned long address, pmd_t *pmdp, pmd_t pmd)
>  {
> +	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
>  	return __pmd(xchg_relaxed(&pmd_val(*pmdp), pmd_val(pmd)));
>  }
>  #endif

I ran this series on an arm64 platform after enabling:

- CONFIG_PAGE_TABLE_CHECK
- CONFIG_PAGE_TABLE_CHECK_ENFORCED (avoiding kernel command line option)

After some time, the following error came up:

[   23.266013] ------------[ cut here ]------------
[   23.266807] kernel BUG at mm/page_table_check.c:90!
[   23.267609] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[   23.268503] Modules linked in:                                                                    
[   23.269012] CPU: 1 PID: 30 Comm: khugepaged Not tainted 5.18.0-rc3-00004-g60aa8e363a91 #2
[   23.270383] Hardware name: linux,dummy-virt (DT)
[   23.271210] pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[   23.272445] pc : page_table_check_clear.isra.6+0x114/0x148
[   23.273429] lr : page_table_check_clear.isra.6+0x64/0x148
[   23.274395] sp : ffff80000afb3ca0
[   23.274994] x29: ffff80000afb3ca0 x28: fffffc00022558e8 x27: ffff80000a27f628
[   23.276260] x26: ffff800009f9f2b0 x25: ffff00008a8d5000 x24: ffff800009f09fa0                     
[   23.277527] x23: 0000ffff89e00000 x22: ffff800009f09fb8 x21: ffff000089414cc0
[   23.278798] x20: 0000000000000200 x19: fffffc00022a0000 x18: 0000000000000001
[   23.280066] x17: 0000000000000001 x16: 0000000000000000 x15: 0000000000000003
[   23.281331] x14: 0000000000000068 x13: 00000000000000c0 x12: 0000000000000010
[   23.282602] x11: fffffc0002320008 x10: fffffc0002320000 x9 : ffff800009fa1000
[   23.283868] x8 : 00000000ffffffff x7 : 0000000000000001 x6 : ffff800009fa1f08
[   23.285135] x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000000
[   23.286406] x2 : 00000000ffffffff x1 : ffff000080f2800c x0 : ffff000080f28000
[   23.287673] Call trace:
[   23.288123]  page_table_check_clear.isra.6+0x114/0x148
[   23.289043]  __page_table_check_pmd_clear+0x3c/0x50
[   23.289918]  pmdp_collapse_flush+0x114/0x370
[   23.290692]  khugepaged+0x1170/0x19e0
[   23.291356]  kthread+0x110/0x120
[   23.291945]  ret_from_fork+0x10/0x20
[   23.292596] Code: 91001041 b8e80024 51000482 36fffd62 (d4210000) 
[   23.293678] ---[ end trace 0000000000000000 ]---
[   23.294511] note: khugepaged[30] exited with preempt_count 2

Looking into mm/page_table_check.c, where this problem occurred:

/*
 * An entry is removed from the page table; decrement the counters for that
 * page, verify that it is of the correct type, and that the counters do not
 * become negative.
 */
static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
                                   unsigned long pfn, unsigned long pgcnt)
{
        struct page_ext *page_ext;
        struct page *page;
        unsigned long i;
        bool anon;

        if (!pfn_valid(pfn))
                return;

        page = pfn_to_page(pfn);
        page_ext = lookup_page_ext(page);
        anon = PageAnon(page);

        for (i = 0; i < pgcnt; i++) {
                struct page_table_check *ptc = get_page_table_check(page_ext);

                if (anon) {
                        BUG_ON(atomic_read(&ptc->file_map_count));
                        BUG_ON(atomic_dec_return(&ptc->anon_map_count) < 0);
                } else {
                        BUG_ON(atomic_read(&ptc->anon_map_count));
 Triggered here ====>>  BUG_ON(atomic_dec_return(&ptc->file_map_count) < 0);
                }
                page_ext = page_ext_next(page_ext);
        }
}

Could you explain what was expected during pmdp_collapse_flush() that, when it
failed, triggered this BUG_ON()? This counter seems to be specific to the page
table check; could it simply go wrong? I have not looked into the details of
the page table check mechanism.

- Anshuman

^ permalink raw reply	[flat|nested] 66+ messages in thread

	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun

On 4/18/22 09:14, Tong Tiangen wrote:
> From: Kefeng Wang <wangkefeng.wang@huawei.com>
> 
> As commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
> check"), add some necessary page table check hooks into routines that
> modify user page tables.
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> ---
>  arch/arm64/Kconfig               |  1 +
>  arch/arm64/include/asm/pgtable.h | 65 +++++++++++++++++++++++++++++---
>  2 files changed, 61 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index e80fd2372f02..7114d2d5155e 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -92,6 +92,7 @@ config ARM64
>  	select ARCH_SUPPORTS_ATOMIC_RMW
>  	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
>  	select ARCH_SUPPORTS_NUMA_BALANCING
> +	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
>  	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT
>  	select ARCH_WANT_DEFAULT_BPF_JIT
>  	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 930077f7b572..9f8f97a7cc7c 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -33,6 +33,7 @@
>  #include <linux/mmdebug.h>
>  #include <linux/mm_types.h>
>  #include <linux/sched.h>
> +#include <linux/page_table_check.h>
>  
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
> @@ -96,6 +97,7 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
>  #define pte_young(pte)		(!!(pte_val(pte) & PTE_AF))
>  #define pte_special(pte)	(!!(pte_val(pte) & PTE_SPECIAL))
>  #define pte_write(pte)		(!!(pte_val(pte) & PTE_WRITE))
> +#define pte_user(pte)		(!!(pte_val(pte) & PTE_USER))
>  #define pte_user_exec(pte)	(!(pte_val(pte) & PTE_UXN))
>  #define pte_cont(pte)		(!!(pte_val(pte) & PTE_CONT))
>  #define pte_devmap(pte)		(!!(pte_val(pte) & PTE_DEVMAP))
> @@ -312,7 +314,7 @@ static inline void __check_racy_pte_update(struct mm_struct *mm, pte_t *ptep,
>  		     __func__, pte_val(old_pte), pte_val(pte));
>  }
>  
> -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
> +static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
>  			      pte_t *ptep, pte_t pte)
>  {
>  	if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte))
> @@ -343,6 +345,13 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
>  	set_pte(ptep, pte);
>  }
>  
> +static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
> +			      pte_t *ptep, pte_t pte)
> +{
> +	page_table_check_pte_set(mm, addr, ptep, pte);
> +	return __set_pte_at(mm, addr, ptep, pte);
> +}
> +
>  /*
>   * Huge pte definitions.
>   */
> @@ -454,6 +463,8 @@ static inline int pmd_trans_huge(pmd_t pmd)
>  #define pmd_dirty(pmd)		pte_dirty(pmd_pte(pmd))
>  #define pmd_young(pmd)		pte_young(pmd_pte(pmd))
>  #define pmd_valid(pmd)		pte_valid(pmd_pte(pmd))
> +#define pmd_user(pmd)		pte_user(pmd_pte(pmd))
> +#define pmd_user_exec(pmd)	pte_user_exec(pmd_pte(pmd))
>  #define pmd_cont(pmd)		pte_cont(pmd_pte(pmd))
>  #define pmd_wrprotect(pmd)	pte_pmd(pte_wrprotect(pmd_pte(pmd)))
>  #define pmd_mkold(pmd)		pte_pmd(pte_mkold(pmd_pte(pmd)))
> @@ -501,8 +512,19 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
>  #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
>  #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
>  
> -#define set_pmd_at(mm, addr, pmdp, pmd)	set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
> -#define set_pud_at(mm, addr, pudp, pud)	set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
> +static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
> +			      pmd_t *pmdp, pmd_t pmd)
> +{
> +	page_table_check_pmd_set(mm, addr, pmdp, pmd);
> +	return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
> +}
> +
> +static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
> +			      pud_t *pudp, pud_t pud)
> +{
> +	page_table_check_pud_set(mm, addr, pudp, pud);
> +	return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
> +}
>  
>  #define __p4d_to_phys(p4d)	__pte_to_phys(p4d_pte(p4d))
>  #define __phys_to_p4d_val(phys)	__phys_to_pte_val(phys)
> @@ -643,6 +665,24 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
>  #define pud_present(pud)	pte_present(pud_pte(pud))
>  #define pud_leaf(pud)		pud_sect(pud)
>  #define pud_valid(pud)		pte_valid(pud_pte(pud))
> +#define pud_user(pud)		pte_user(pud_pte(pud))
> +
> +#ifdef CONFIG_PAGE_TABLE_CHECK
> +static inline bool pte_user_accessible_page(pte_t pte)
> +{
> +	return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
> +}
> +
> +static inline bool pmd_user_accessible_page(pmd_t pmd)
> +{
> +	return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
> +}
> +
> +static inline bool pud_user_accessible_page(pud_t pud)
> +{
> +	return pud_present(pud) && pud_user(pud);
> +}
> +#endif
>  
>  static inline void set_pud(pud_t *pudp, pud_t pud)
>  {
> @@ -872,11 +912,21 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>  
> +static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
> +				       unsigned long address, pte_t *ptep)
> +{
> +	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
> +}
> +
>  #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
>  static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  				       unsigned long address, pte_t *ptep)
>  {
> -	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
> +	pte_t pte = __ptep_get_and_clear(mm, address, ptep);
> +
> +	page_table_check_pte_clear(mm, address, pte);
> +
> +	return pte;
>  }
>  
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> @@ -884,7 +934,11 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>  					    unsigned long address, pmd_t *pmdp)
>  {
> -	return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
> +	pmd_t pmd = pte_pmd(__ptep_get_and_clear(mm, address, (pte_t *)pmdp));
> +
> +	page_table_check_pmd_clear(mm, address, pmd);
> +
> +	return pmd;
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>  
> @@ -918,6 +972,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>  static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>  		unsigned long address, pmd_t *pmdp, pmd_t pmd)
>  {
> +	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
>  	return __pmd(xchg_relaxed(&pmd_val(*pmdp), pmd_val(pmd)));
>  }
>  #endif

Ran this series on an arm64 platform after enabling

- CONFIG_PAGE_TABLE_CHECK
- CONFIG_PAGE_TABLE_CHECK_ENFORCED (avoiding kernel command line option)

After some time, the following error came up

[   23.266013] ------------[ cut here ]------------
[   23.266807] kernel BUG at mm/page_table_check.c:90!
[   23.267609] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[   23.268503] Modules linked in:                                                                    
[   23.269012] CPU: 1 PID: 30 Comm: khugepaged Not tainted 5.18.0-rc3-00004-g60aa8e363a91 #2
[   23.270383] Hardware name: linux,dummy-virt (DT)
[   23.271210] pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[   23.272445] pc : page_table_check_clear.isra.6+0x114/0x148
[   23.273429] lr : page_table_check_clear.isra.6+0x64/0x148
[   23.274395] sp : ffff80000afb3ca0
[   23.274994] x29: ffff80000afb3ca0 x28: fffffc00022558e8 x27: ffff80000a27f628
[   23.276260] x26: ffff800009f9f2b0 x25: ffff00008a8d5000 x24: ffff800009f09fa0                     
[   23.277527] x23: 0000ffff89e00000 x22: ffff800009f09fb8 x21: ffff000089414cc0
[   23.278798] x20: 0000000000000200 x19: fffffc00022a0000 x18: 0000000000000001
[   23.280066] x17: 0000000000000001 x16: 0000000000000000 x15: 0000000000000003
[   23.281331] x14: 0000000000000068 x13: 00000000000000c0 x12: 0000000000000010
[   23.282602] x11: fffffc0002320008 x10: fffffc0002320000 x9 : ffff800009fa1000
[   23.283868] x8 : 00000000ffffffff x7 : 0000000000000001 x6 : ffff800009fa1f08
[   23.285135] x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000000
[   23.286406] x2 : 00000000ffffffff x1 : ffff000080f2800c x0 : ffff000080f28000
[   23.287673] Call trace:
[   23.288123]  page_table_check_clear.isra.6+0x114/0x148
[   23.289043]  __page_table_check_pmd_clear+0x3c/0x50
[   23.289918]  pmdp_collapse_flush+0x114/0x370
[   23.290692]  khugepaged+0x1170/0x19e0
[   23.291356]  kthread+0x110/0x120
[   23.291945]  ret_from_fork+0x10/0x20
[   23.292596] Code: 91001041 b8e80024 51000482 36fffd62 (d4210000) 
[   23.293678] ---[ end trace 0000000000000000 ]---
[   23.294511] note: khugepaged[30] exited with preempt_count 2

Looking into file mm/page_table_check.c where this problem occurred.

/*
 * An entry is removed from the page table, decrement the counters for that page
 * verify that it is of correct type and counters do not become negative.
 */
static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
                                   unsigned long pfn, unsigned long pgcnt)
{
        struct page_ext *page_ext;
        struct page *page;
        unsigned long i;
        bool anon;

        if (!pfn_valid(pfn))
                return;

        page = pfn_to_page(pfn);
        page_ext = lookup_page_ext(page);
        anon = PageAnon(page);

        for (i = 0; i < pgcnt; i++) {
                struct page_table_check *ptc = get_page_table_check(page_ext);

                if (anon) {
                        BUG_ON(atomic_read(&ptc->file_map_count));
                        BUG_ON(atomic_dec_return(&ptc->anon_map_count) < 0);
                } else {
                        BUG_ON(atomic_read(&ptc->anon_map_count));
 Triggered here ====>>  BUG_ON(atomic_dec_return(&ptc->file_map_count) < 0);
                }
                page_ext = page_ext_next(page_ext);
        }
}

Could you explain what was expected during pmdp_collapse_flush() which, when
it failed, triggered this BUG_ON()? This counter seems to be page table check
specific; could it simply go wrong? I have not looked into the details of the
page table check mechanism.

- Anshuman

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
  2022-04-18  9:28     ` Anshuman Khandual
  (?)
@ 2022-04-18 15:47       ` Tong Tiangen
  -1 siblings, 0 replies; 66+ messages in thread
From: Tong Tiangen @ 2022-04-18 15:47 UTC (permalink / raw)
  To: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



On 2022/4/18 17:28, Anshuman Khandual wrote:
> On 4/18/22 09:14, Tong Tiangen wrote:
>> From: Kefeng Wang <wangkefeng.wang@huawei.com>
>>
[...]
>>   #endif
> 
> Ran this series on arm64 platform after enabling
> 
> - CONFIG_PAGE_TABLE_CHECK
> - CONFIG_PAGE_TABLE_CHECK_ENFORCED (avoiding kernel command line option)
> 
> After some time, the following error came up
> 
> [   23.266013] ------------[ cut here ]------------
> [   23.266807] kernel BUG at mm/page_table_check.c:90!
> [   23.267609] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
> [   23.268503] Modules linked in:
> [   23.269012] CPU: 1 PID: 30 Comm: khugepaged Not tainted 5.18.0-rc3-00004-g60aa8e363a91 #2
> [   23.270383] Hardware name: linux,dummy-virt (DT)
> [   23.271210] pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> [   23.272445] pc : page_table_check_clear.isra.6+0x114/0x148
> [   23.273429] lr : page_table_check_clear.isra.6+0x64/0x148
> [   23.274395] sp : ffff80000afb3ca0
> [   23.274994] x29: ffff80000afb3ca0 x28: fffffc00022558e8 x27: ffff80000a27f628
> [   23.276260] x26: ffff800009f9f2b0 x25: ffff00008a8d5000 x24: ffff800009f09fa0
> [   23.277527] x23: 0000ffff89e00000 x22: ffff800009f09fb8 x21: ffff000089414cc0
> [   23.278798] x20: 0000000000000200 x19: fffffc00022a0000 x18: 0000000000000001
> [   23.280066] x17: 0000000000000001 x16: 0000000000000000 x15: 0000000000000003
> [   23.281331] x14: 0000000000000068 x13: 00000000000000c0 x12: 0000000000000010
> [   23.282602] x11: fffffc0002320008 x10: fffffc0002320000 x9 : ffff800009fa1000
> [   23.283868] x8 : 00000000ffffffff x7 : 0000000000000001 x6 : ffff800009fa1f08
> [   23.285135] x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000000
> [   23.286406] x2 : 00000000ffffffff x1 : ffff000080f2800c x0 : ffff000080f28000
> [   23.287673] Call trace:
> [   23.288123]  page_table_check_clear.isra.6+0x114/0x148
> [   23.289043]  __page_table_check_pmd_clear+0x3c/0x50
> [   23.289918]  pmdp_collapse_flush+0x114/0x370
> [   23.290692]  khugepaged+0x1170/0x19e0
> [   23.291356]  kthread+0x110/0x120
> [   23.291945]  ret_from_fork+0x10/0x20
> [   23.292596] Code: 91001041 b8e80024 51000482 36fffd62 (d4210000)
> [   23.293678] ---[ end trace 0000000000000000 ]---
> [   23.294511] note: khugepaged[30] exited with preempt_count 2
> 
> Looking into file mm/page_table_check.c where this problem occurred.
> 
> /*
>   * An entry is removed from the page table, decrement the counters for that page
>   * verify that it is of correct type and counters do not become negative.
>   */
> static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
>                                     unsigned long pfn, unsigned long pgcnt)
> {
>          struct page_ext *page_ext;
>          struct page *page;
>          unsigned long i;
>          bool anon;
> 
>          if (!pfn_valid(pfn))
>                  return;
> 
>          page = pfn_to_page(pfn);
>          page_ext = lookup_page_ext(page);
>          anon = PageAnon(page);
> 
>          for (i = 0; i < pgcnt; i++) {
>                  struct page_table_check *ptc = get_page_table_check(page_ext);
> 
>                  if (anon) {
>                          BUG_ON(atomic_read(&ptc->file_map_count));
>                          BUG_ON(atomic_dec_return(&ptc->anon_map_count) < 0);
>                  } else {
>                          BUG_ON(atomic_read(&ptc->anon_map_count));
>   Triggered here ====>>  BUG_ON(atomic_dec_return(&ptc->file_map_count) < 0);
>                  }
>                  page_ext = page_ext_next(page_ext);
>          }
> }
> 
> Could you explain what was expected during pmdp_collapse_flush() which, when
> it failed, triggered this BUG_ON()? This counter seems to be page table check
> specific; could it simply go wrong? I have not looked into the details of the
> page table check mechanism.
> 
> - Anshuman
> .

Hi Anshuman,

Thanks for your work.

Let me briefly explain the principle of page table check (PTC).

PTC introduces the following struct for counting page mapping types:
struct page_table_check {
         atomic_t anon_map_count;
         atomic_t file_map_count;
};
This structure can be obtained via lookup_page_ext(page).

When page table entries (pud/pmd/pte) are set, page_table_check_set() is
called to increase the page mapping count and to check for errors (e.g. if
a page is used for an anonymous mapping, that page cannot be used for a
file mapping at the same time).

When page table entries (pud/pmd/pte) are cleared, page_table_check_clear()
is called to decrease the page mapping count and to perform the same error
checks.

The error check rules are described in
Documentation/vm/page_table_check.rst.

The setting and clearing of page table entries are symmetrical.

Here __page_table_check_pmd_clear() triggered the BUG_ON(), which indicates
that the file mapping count of the pmd entry has become negative.

I wonder: if PTC had not detected this exception, would there have been any
problems?

Thanks,
Tong.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
  2022-04-18 15:47       ` Tong Tiangen
  (?)
@ 2022-04-18 16:20         ` Pasha Tatashin
  -1 siblings, 0 replies; 66+ messages in thread
From: Pasha Tatashin @ 2022-04-18 16:20 UTC (permalink / raw)
  To: Tong Tiangen
  Cc: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, LKML, linux-mm,
	Linux ARM, linux-riscv, Kefeng Wang, Guohanjun

On Mon, Apr 18, 2022 at 11:47 AM Tong Tiangen <tongtiangen@huawei.com> wrote:
>
>
>
> On 2022/4/18 17:28, Anshuman Khandual wrote:
> > On 4/18/22 09:14, Tong Tiangen wrote:
> >> From: Kefeng Wang <wangkefeng.wang@huawei.com>
> >>
> [...]
> >>   #endif
> >
> > Ran this series on arm64 platform after enabling
> >
> > - CONFIG_PAGE_TABLE_CHECK
> > - CONFIG_PAGE_TABLE_CHECK_ENFORCED (avoiding kernel command line option)
> >
> > After some time, the following error came up
> >
> > [   23.266013] ------------[ cut here ]------------
> > [   23.266807] kernel BUG at mm/page_table_check.c:90!
> > [   23.267609] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
> > [   23.268503] Modules linked in:
> > [   23.269012] CPU: 1 PID: 30 Comm: khugepaged Not tainted 5.18.0-rc3-00004-g60aa8e363a91 #2
> > [   23.270383] Hardware name: linux,dummy-virt (DT)
> > [   23.271210] pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> > [   23.272445] pc : page_table_check_clear.isra.6+0x114/0x148
> > [   23.273429] lr : page_table_check_clear.isra.6+0x64/0x148
> > [   23.274395] sp : ffff80000afb3ca0
> > [   23.274994] x29: ffff80000afb3ca0 x28: fffffc00022558e8 x27: ffff80000a27f628
> > [   23.276260] x26: ffff800009f9f2b0 x25: ffff00008a8d5000 x24: ffff800009f09fa0
> > [   23.277527] x23: 0000ffff89e00000 x22: ffff800009f09fb8 x21: ffff000089414cc0
> > [   23.278798] x20: 0000000000000200 x19: fffffc00022a0000 x18: 0000000000000001
> > [   23.280066] x17: 0000000000000001 x16: 0000000000000000 x15: 0000000000000003
> > [   23.281331] x14: 0000000000000068 x13: 00000000000000c0 x12: 0000000000000010
> > [   23.282602] x11: fffffc0002320008 x10: fffffc0002320000 x9 : ffff800009fa1000
> > [   23.283868] x8 : 00000000ffffffff x7 : 0000000000000001 x6 : ffff800009fa1f08
> > [   23.285135] x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000000
> > [   23.286406] x2 : 00000000ffffffff x1 : ffff000080f2800c x0 : ffff000080f28000
> > [   23.287673] Call trace:
> > [   23.288123]  page_table_check_clear.isra.6+0x114/0x148
> > [   23.289043]  __page_table_check_pmd_clear+0x3c/0x50
> > [   23.289918]  pmdp_collapse_flush+0x114/0x370
> > [   23.290692]  khugepaged+0x1170/0x19e0
> > [   23.291356]  kthread+0x110/0x120
> > [   23.291945]  ret_from_fork+0x10/0x20
> > [   23.292596] Code: 91001041 b8e80024 51000482 36fffd62 (d4210000)
> > [   23.293678] ---[ end trace 0000000000000000 ]---
> > [   23.294511] note: khugepaged[30] exited with preempt_count 2
> >
> > Looking into file mm/page_table_check.c where this problem occurred.
> >
> > /*
> >   * An entry is removed from the page table, decrement the counters for that page
> >   * verify that it is of correct type and counters do not become negative.
> >   */
> > static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
> >                                     unsigned long pfn, unsigned long pgcnt)
> > {
> >          struct page_ext *page_ext;
> >          struct page *page;
> >          unsigned long i;
> >          bool anon;
> >
> >          if (!pfn_valid(pfn))
> >                  return;
> >
> >          page = pfn_to_page(pfn);
> >          page_ext = lookup_page_ext(page);
> >          anon = PageAnon(page);
> >
> >          for (i = 0; i < pgcnt; i++) {
> >                  struct page_table_check *ptc = get_page_table_check(page_ext);
> >
> >                  if (anon) {
> >                          BUG_ON(atomic_read(&ptc->file_map_count));
> >                          BUG_ON(atomic_dec_return(&ptc->anon_map_count) < 0);
> >                  } else {
> >                          BUG_ON(atomic_read(&ptc->anon_map_count));
> >   Triggered here ====>>  BUG_ON(atomic_dec_return(&ptc->file_map_count) < 0);
> >                  }
> >                  page_ext = page_ext_next(page_ext);
> >          }
> > }
> >
> > Could you explain what was expected during pmdp_collapse_flush() which when
> > failed, triggered this BUG_ON() ? This counter seems to be page table check
> > specific, could it just go wrong ? I have not looked into the details about
> > page table check mechanism.
> >
> > - Anshuman
> > .
>
> Hi Anshuman:
>
> Thanks for your work.
>
> Let me briefly explain the principle of the page table check (PTC).
>
> PTC introduces the following struct for page mapping type count:
> struct page_table_check {
>          atomic_t anon_map_count;
>          atomic_t file_map_count;
> };
> This structure can be obtained via lookup_page_ext(page).
>
> When page table entries (pte/pmd/pud) are set, page_table_check_set() is
> called to increase the page mapping count and to check for errors (e.g.
> if a page is used for anonymous mapping, then the page cannot be used
> for file mapping at the same time).
>
> When page table entries (pte/pmd/pud) are cleared, page_table_check_clear()
> is called to decrease the page mapping count and to check for errors.
>
> The error check rules are described in the following document:
> Documentation/vm/page_table_check.rst
>
> The setting and clearing of page table entries are symmetrical.
>
> Here __page_table_check_pmd_clear() triggers a BUG_ON(), which indicates
> that the pmd entry's file mapping count has become negative.
>
> If PTC had not detected this exception, would there have been any
> problems?

It is hard to tell what sort of problem has been detected. More
debugging is needed in order to understand it. A huge file entry is
being removed from the page table. However, at least one sub page of
that entry does not have a record that it was added as a file entry to
the page table. At Google we found a few internal security bugs using
PTC. However, since this is new on ARM64, it is possible that the bug
is in PTC or khugepaged itself.

Anshuman, is it possible to reproduce your scenario in QEMU?

Thank you,
Pasha

>
> Thanks,
> Tong.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
  2022-04-18 15:47       ` Tong Tiangen
  (?)
@ 2022-04-19  7:10         ` Anshuman Khandual
  -1 siblings, 0 replies; 66+ messages in thread
From: Anshuman Khandual @ 2022-04-19  7:10 UTC (permalink / raw)
  To: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



On 4/18/22 21:17, Tong Tiangen wrote:
> 
> 
> 在 2022/4/18 17:28, Anshuman Khandual 写道:
>> On 4/18/22 09:14, Tong Tiangen wrote:
>>> From: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>
> [...]
>>>   #endif
>>
>> Ran this series on arm64 platform after enabling
>>
>> - CONFIG_PAGE_TABLE_CHECK
>> - CONFIG_PAGE_TABLE_CHECK_ENFORCED (avoiding kernel command line option)
>>
>> After some time, the following error came up
>>
>> [   23.266013] ------------[ cut here ]------------
>> [   23.266807] kernel BUG at mm/page_table_check.c:90!
>> [   23.267609] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
>> [   23.268503] Modules linked in:
>> [   23.269012] CPU: 1 PID: 30 Comm: khugepaged Not tainted 5.18.0-rc3-00004-g60aa8e363a91 #2
>> [   23.270383] Hardware name: linux,dummy-virt (DT)
>> [   23.271210] pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>> [   23.272445] pc : page_table_check_clear.isra.6+0x114/0x148
>> [   23.273429] lr : page_table_check_clear.isra.6+0x64/0x148
>> [   23.274395] sp : ffff80000afb3ca0
>> [   23.274994] x29: ffff80000afb3ca0 x28: fffffc00022558e8 x27: ffff80000a27f628
>> [   23.276260] x26: ffff800009f9f2b0 x25: ffff00008a8d5000 x24: ffff800009f09fa0
>> [   23.277527] x23: 0000ffff89e00000 x22: ffff800009f09fb8 x21: ffff000089414cc0
>> [   23.278798] x20: 0000000000000200 x19: fffffc00022a0000 x18: 0000000000000001
>> [   23.280066] x17: 0000000000000001 x16: 0000000000000000 x15: 0000000000000003
>> [   23.281331] x14: 0000000000000068 x13: 00000000000000c0 x12: 0000000000000010
>> [   23.282602] x11: fffffc0002320008 x10: fffffc0002320000 x9 : ffff800009fa1000
>> [   23.283868] x8 : 00000000ffffffff x7 : 0000000000000001 x6 : ffff800009fa1f08
>> [   23.285135] x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000000
>> [   23.286406] x2 : 00000000ffffffff x1 : ffff000080f2800c x0 : ffff000080f28000
>> [   23.287673] Call trace:
>> [   23.288123]  page_table_check_clear.isra.6+0x114/0x148
>> [   23.289043]  __page_table_check_pmd_clear+0x3c/0x50
>> [   23.289918]  pmdp_collapse_flush+0x114/0x370
>> [   23.290692]  khugepaged+0x1170/0x19e0
>> [   23.291356]  kthread+0x110/0x120
>> [   23.291945]  ret_from_fork+0x10/0x20
>> [   23.292596] Code: 91001041 b8e80024 51000482 36fffd62 (d4210000)
>> [   23.293678] ---[ end trace 0000000000000000 ]---
>> [   23.294511] note: khugepaged[30] exited with preempt_count 2
>>
>> Looking into file mm/page_table_check.c where this problem occurred.
>>
>> /*
>>   * An entry is removed from the page table, decrement the counters for that page
>>   * verify that it is of correct type and counters do not become negative.
>>   */
>> static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
>>                                     unsigned long pfn, unsigned long pgcnt)
>> {
>>          struct page_ext *page_ext;
>>          struct page *page;
>>          unsigned long i;
>>          bool anon;
>>
>>          if (!pfn_valid(pfn))
>>                  return;
>>
>>          page = pfn_to_page(pfn);
>>          page_ext = lookup_page_ext(page);
>>          anon = PageAnon(page);
>>
>>          for (i = 0; i < pgcnt; i++) {
>>                  struct page_table_check *ptc = get_page_table_check(page_ext);
>>
>>                  if (anon) {
>>                          BUG_ON(atomic_read(&ptc->file_map_count));
>>                          BUG_ON(atomic_dec_return(&ptc->anon_map_count) < 0);
>>                  } else {
>>                          BUG_ON(atomic_read(&ptc->anon_map_count));
>>   Triggered here ====>>  BUG_ON(atomic_dec_return(&ptc->file_map_count) < 0);
>>                  }
>>                  page_ext = page_ext_next(page_ext);
>>          }
>> }
>>
>> Could you explain what was expected during pmdp_collapse_flush() which when
>> failed, triggered this BUG_ON() ? This counter seems to be page table check
>> specific, could it just go wrong ? I have not looked into the details about
>> page table check mechanism.
>>
>> - Anshuman
>> .
> 
> Hi Anshuman:
> 
> Thanks for your work.
> 
> Let me briefly explain the principle of the page table check (PTC).
> 
> PTC introduces the following struct for page mapping type count:
> struct page_table_check {
>         atomic_t anon_map_count;
>         atomic_t file_map_count;
> };
> This structure can be obtained via lookup_page_ext(page).


Right.

> 
> When page table entries (pte/pmd/pud) are set, page_table_check_set() is called to increase the page mapping count and to check for errors (e.g. if a page is used for anonymous mapping, then the page cannot be used for file mapping at the same time).
> 
> When page table entries (pte/pmd/pud) are cleared, page_table_check_clear() is called to decrease the page mapping count and to check for errors.
> 
> The error check rules are described in the following document: Documentation/vm/page_table_check.rst

Snippet from that document.

+-------------------+-------------------+-------------------+------------------+
| Current Mapping   | New mapping       | Permissions       | Rule             |
+===================+===================+===================+==================+
| Anonymous         | Anonymous         | Read              | Allow            |
+-------------------+-------------------+-------------------+------------------+
| Anonymous         | Anonymous         | Read / Write      | Prohibit         |
+-------------------+-------------------+-------------------+------------------+
| Anonymous         | Named             | Any               | Prohibit         |
+-------------------+-------------------+-------------------+------------------+
| Named             | Anonymous         | Any               | Prohibit         |
+-------------------+-------------------+-------------------+------------------+
| Named             | Named             | Any               | Allow            |
+-------------------+-------------------+-------------------+------------------+

Does 'Named' refer to a file mapping? Also, what does 'Prohibit' imply here?
Will the check trigger a BUG_ON() in such cases?

page_table_check_clear()
{

                if (anon) {
                        BUG_ON(atomic_read(&ptc->file_map_count));
                        BUG_ON(atomic_dec_return(&ptc->anon_map_count) < 0);
                } else {
                        BUG_ON(atomic_read(&ptc->anon_map_count));
                        BUG_ON(atomic_dec_return(&ptc->file_map_count) < 0);
                }
}

So in the clear path, there are two checks

- If the current mapping is anon, file_map_count cannot be positive, and vice versa
- Decrement the applicable counter, ensuring that it does not turn negative

page_table_check_set()
{
                if (anon) {
                        BUG_ON(atomic_read(&ptc->file_map_count));
                        BUG_ON(atomic_inc_return(&ptc->anon_map_count) > 1 && rw);
                } else {
                        BUG_ON(atomic_read(&ptc->anon_map_count));
                        BUG_ON(atomic_inc_return(&ptc->file_map_count) < 0);
                }
}

So in the set path, there are two checks

- If the current mapping is anon, file_map_count cannot be positive, and vice versa
- An anon mapping cannot be RW if the page has been mapped more than once
- But then why check for a negative value of file_map_count after the increment?

Are there any other checks this test ensures that I might be missing?

> 
> The setting and clearing of page table entries are symmetrical.

This assumption should hold for any user-accessible mapping for this test to work?

Also, why are PUD_PAGE_SIZE/PMD_PAGE_SIZE being used here instead of the
generic macros PUD_SIZE/PMD_SIZE? Is there a specific reason?

> 
> Here __page_table_check_pmd_clear() triggers a BUG_ON(), which indicates that the pmd entry's file mapping count has become negative.
> 
> If PTC had not detected this exception, would there have been any problems?

I am looking into this, not sure for now.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
@ 2022-04-19  7:10         ` Anshuman Khandual
  0 siblings, 0 replies; 66+ messages in thread
From: Anshuman Khandual @ 2022-04-19  7:10 UTC (permalink / raw)
  To: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



On 4/18/22 21:17, Tong Tiangen wrote:
> 
> 
> 在 2022/4/18 17:28, Anshuman Khandual 写道:
>> On 4/18/22 09:14, Tong Tiangen wrote:
>>> From: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>
> [...]
>>>   #endif
>>
>> Ran this series on arm64 platform after enabling
>>
>> - CONFIG_PAGE_TABLE_CHECK
>> - CONFIG_PAGE_TABLE_CHECK_ENFORCED (avoiding kernel command line option)
>>
>> After some time, the following error came up
>>
>> [   23.266013] ------------[ cut here ]------------
>> [   23.266807] kernel BUG at mm/page_table_check.c:90!
>> [   23.267609] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
>> [   23.268503] Modules linked in:
>> [   23.269012] CPU: 1 PID: 30 Comm: khugepaged Not tainted 5.18.0-rc3-00004-g60aa8e363a91 #2
>> [   23.270383] Hardware name: linux,dummy-virt (DT)
>> [   23.271210] pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>> [   23.272445] pc : page_table_check_clear.isra.6+0x114/0x148
>> [   23.273429] lr : page_table_check_clear.isra.6+0x64/0x148
>> [   23.274395] sp : ffff80000afb3ca0
>> [   23.274994] x29: ffff80000afb3ca0 x28: fffffc00022558e8 x27: ffff80000a27f628
>> [   23.276260] x26: ffff800009f9f2b0 x25: ffff00008a8d5000 x24: ffff800009f09fa0
>> [   23.277527] x23: 0000ffff89e00000 x22: ffff800009f09fb8 x21: ffff000089414cc0
>> [   23.278798] x20: 0000000000000200 x19: fffffc00022a0000 x18: 0000000000000001
>> [   23.280066] x17: 0000000000000001 x16: 0000000000000000 x15: 0000000000000003
>> [   23.281331] x14: 0000000000000068 x13: 00000000000000c0 x12: 0000000000000010
>> [   23.282602] x11: fffffc0002320008 x10: fffffc0002320000 x9 : ffff800009fa1000
>> [   23.283868] x8 : 00000000ffffffff x7 : 0000000000000001 x6 : ffff800009fa1f08
>> [   23.285135] x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000000
>> [   23.286406] x2 : 00000000ffffffff x1 : ffff000080f2800c x0 : ffff000080f28000
>> [   23.287673] Call trace:
>> [   23.288123]  page_table_check_clear.isra.6+0x114/0x148
>> [   23.289043]  __page_table_check_pmd_clear+0x3c/0x50
>> [   23.289918]  pmdp_collapse_flush+0x114/0x370
>> [   23.290692]  khugepaged+0x1170/0x19e0
>> [   23.291356]  kthread+0x110/0x120
>> [   23.291945]  ret_from_fork+0x10/0x20
>> [   23.292596] Code: 91001041 b8e80024 51000482 36fffd62 (d4210000)
>> [   23.293678] ---[ end trace 0000000000000000 ]---
>> [   23.294511] note: khugepaged[30] exited with preempt_count 2
>>
>> Looking into file mm/page_table_check.c where this problem occured.
>>
>> /*
>>   * An enty is removed from the page table, decrement the counters for that page
>>   * verify that it is of correct type and counters do not become negative.
>>   */
>> static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
>>                                     unsigned long pfn, unsigned long pgcnt)
>> {
>>          struct page_ext *page_ext;
>>          struct page *page;
>>          unsigned long i;
>>          bool anon;
>>
>>          if (!pfn_valid(pfn))
>>                  return;
>>
>>          page = pfn_to_page(pfn);
>>          page_ext = lookup_page_ext(page);
>>          anon = PageAnon(page);
>>
>>          for (i = 0; i < pgcnt; i++) {
>>                  struct page_table_check *ptc = get_page_table_check(page_ext);
>>
>>                  if (anon) {
>>                          BUG_ON(atomic_read(&ptc->file_map_count));
>>                          BUG_ON(atomic_dec_return(&ptc->anon_map_count) < 0);
>>                  } else {
>>                          BUG_ON(atomic_read(&ptc->anon_map_count));
>>   Triggered here ====>>  BUG_ON(atomic_dec_return(&ptc->file_map_count) < 0);
>>                  }
>>                  page_ext = page_ext_next(page_ext);
>>          }
>> }
>>
>> Could you explain what was expected during pmdp_collapse_flush() which when
>> failed, triggered this BUG_ON() ? This counter seems to be page table check
>> specific, could it just go wrong ? I have not looked into the details about
>> page table check mechanism.
>>
>> - Anshuman
>> .
> 
> Hi Anshuman:
> 
> Thanks for your job.
> 
> Let me briefly explain the principle of page table check(PTC).
> 
> PTC introduces the following struct for page mapping type count:
> struct page_table_check {
>         atomic_t anon_map_count;
>         atomic_t file_map_count;
> };
> This structure can be obtained by "lookup_page_ext(page)"


Right.

> 
> When page table entries are set(pud/pmd/pte), page_table_check_set()  is called to increase the page mapping count, Also check for errors (eg:if a page is used for anonymous mapping, then the page cannot be used for file mapping at the same time).
> 
> When page table entries are clear(pud/pmd/pte), page_table_check_clear()  is called to decrease the page mapping count, Also check for errors.
> 
> The error check rules are described in the following documents: Documentation/vm/page_table_check.rst

Snippet from that document.

+-------------------+-------------------+-------------------+------------------+
| Current Mapping   | New mapping       | Permissions       | Rule             |
+===================+===================+===================+==================+
| Anonymous         | Anonymous         | Read              | Allow            |
+-------------------+-------------------+-------------------+------------------+
| Anonymous         | Anonymous         | Read / Write      | Prohibit         |
+-------------------+-------------------+-------------------+------------------+
| Anonymous         | Named             | Any               | Prohibit         |
+-------------------+-------------------+-------------------+------------------+
| Named             | Anonymous         | Any               | Prohibit         |
+-------------------+-------------------+-------------------+------------------+
| Named             | Named             | Any               | Allow            |
+-------------------+-------------------+-------------------+------------------+

Does 'Named' refer to a file mapping? Also, what does 'Prohibit' imply here? Will
the check trigger a BUG_ON() in such cases?

page_table_check_clear()
{

                if (anon) {
                        BUG_ON(atomic_read(&ptc->file_map_count));
                        BUG_ON(atomic_dec_return(&ptc->anon_map_count) < 0);
                } else {
                        BUG_ON(atomic_read(&ptc->anon_map_count));
                        BUG_ON(atomic_dec_return(&ptc->file_map_count) < 0);
                }
}

So in the clear path, there are two checks

- If the current mapping is Anon, file_map_count cannot be positive, and the other way around
- Decrement the applicable counter, ensuring that it does not turn negative

page_table_check_set()
{
                if (anon) {
                        BUG_ON(atomic_read(&ptc->file_map_count));
                        BUG_ON(atomic_inc_return(&ptc->anon_map_count) > 1 && rw);
                } else {
                        BUG_ON(atomic_read(&ptc->anon_map_count));
                        BUG_ON(atomic_inc_return(&ptc->file_map_count) < 0);
                }
}

So in the set path, there are two checks

- If the current mapping is anon, file_map_count cannot be positive, and the other way around
- An anon mapping cannot be RW if the page has been mapped more than once
- But then why check for negative values of file_map_count after the increment?

Are there any other checks that this test ensures which I might be missing?

> 
> The setting and clearing of page table entries are symmetrical.

This assumption should hold for any user-accessible mapping, for this test to work?

Also, why are PUD_PAGE_SIZE/PMD_PAGE_SIZE being used here instead of the generic
macros PUD_SIZE/PMD_SIZE? Is there a specific reason?

> 
> Here __page_table_check_pmd_clear() triggers BUG_ON(), which indicates that the pmd entry's file mapping count has become negative.
> 
> Supposing PTC hadn't detected this exception, would there have been any problems?

I am looking into this, not sure for now.

_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
  2022-04-18 16:20         ` Pasha Tatashin
  (?)
@ 2022-04-19  7:25           ` Anshuman Khandual
  -1 siblings, 0 replies; 66+ messages in thread
From: Anshuman Khandual @ 2022-04-19  7:25 UTC (permalink / raw)
  To: Pasha Tatashin, Tong Tiangen
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, LKML, linux-mm,
	Linux ARM, linux-riscv, Kefeng Wang, Guohanjun



On 4/18/22 21:50, Pasha Tatashin wrote:
> On Mon, Apr 18, 2022 at 11:47 AM Tong Tiangen <tongtiangen@huawei.com> wrote:
>>
>>
>>
>> 在 2022/4/18 17:28, Anshuman Khandual 写道:
>>> On 4/18/22 09:14, Tong Tiangen wrote:
>>>> From: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>>
>> [...]
>>>>   #endif
>>>
>>> Ran this series on arm64 platform after enabling
>>>
>>> - CONFIG_PAGE_TABLE_CHECK
>>> - CONFIG_PAGE_TABLE_CHECK_ENFORCED (avoiding kernel command line option)
>>>
>>> After some time, the following error came up
>>>
>>> [   23.266013] ------------[ cut here ]------------
>>> [   23.266807] kernel BUG at mm/page_table_check.c:90!
>>> [   23.267609] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
>>> [   23.268503] Modules linked in:
>>> [   23.269012] CPU: 1 PID: 30 Comm: khugepaged Not tainted 5.18.0-rc3-00004-g60aa8e363a91 #2
>>> [   23.270383] Hardware name: linux,dummy-virt (DT)
>>> [   23.271210] pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>>> [   23.272445] pc : page_table_check_clear.isra.6+0x114/0x148
>>> [   23.273429] lr : page_table_check_clear.isra.6+0x64/0x148
>>> [   23.274395] sp : ffff80000afb3ca0
>>> [   23.274994] x29: ffff80000afb3ca0 x28: fffffc00022558e8 x27: ffff80000a27f628
>>> [   23.276260] x26: ffff800009f9f2b0 x25: ffff00008a8d5000 x24: ffff800009f09fa0
>>> [   23.277527] x23: 0000ffff89e00000 x22: ffff800009f09fb8 x21: ffff000089414cc0
>>> [   23.278798] x20: 0000000000000200 x19: fffffc00022a0000 x18: 0000000000000001
>>> [   23.280066] x17: 0000000000000001 x16: 0000000000000000 x15: 0000000000000003
>>> [   23.281331] x14: 0000000000000068 x13: 00000000000000c0 x12: 0000000000000010
>>> [   23.282602] x11: fffffc0002320008 x10: fffffc0002320000 x9 : ffff800009fa1000
>>> [   23.283868] x8 : 00000000ffffffff x7 : 0000000000000001 x6 : ffff800009fa1f08
>>> [   23.285135] x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000000
>>> [   23.286406] x2 : 00000000ffffffff x1 : ffff000080f2800c x0 : ffff000080f28000
>>> [   23.287673] Call trace:
>>> [   23.288123]  page_table_check_clear.isra.6+0x114/0x148
>>> [   23.289043]  __page_table_check_pmd_clear+0x3c/0x50
>>> [   23.289918]  pmdp_collapse_flush+0x114/0x370
>>> [   23.290692]  khugepaged+0x1170/0x19e0
>>> [   23.291356]  kthread+0x110/0x120
>>> [   23.291945]  ret_from_fork+0x10/0x20
>>> [   23.292596] Code: 91001041 b8e80024 51000482 36fffd62 (d4210000)
>>> [   23.293678] ---[ end trace 0000000000000000 ]---
>>> [   23.294511] note: khugepaged[30] exited with preempt_count 2
>>>
>>> Looking into the file mm/page_table_check.c where this problem occurred.
>>>
>>> /*
>>>   * An entry is removed from the page table; decrement the counters for that page,
>>>   * verify that it is of the correct type and that the counters do not become negative.
>>>   */
>>> static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
>>>                                     unsigned long pfn, unsigned long pgcnt)
>>> {
>>>          struct page_ext *page_ext;
>>>          struct page *page;
>>>          unsigned long i;
>>>          bool anon;
>>>
>>>          if (!pfn_valid(pfn))
>>>                  return;
>>>
>>>          page = pfn_to_page(pfn);
>>>          page_ext = lookup_page_ext(page);
>>>          anon = PageAnon(page);
>>>
>>>          for (i = 0; i < pgcnt; i++) {
>>>                  struct page_table_check *ptc = get_page_table_check(page_ext);
>>>
>>>                  if (anon) {
>>>                          BUG_ON(atomic_read(&ptc->file_map_count));
>>>                          BUG_ON(atomic_dec_return(&ptc->anon_map_count) < 0);
>>>                  } else {
>>>                          BUG_ON(atomic_read(&ptc->anon_map_count));
>>>   Triggered here ====>>  BUG_ON(atomic_dec_return(&ptc->file_map_count) < 0);
>>>                  }
>>>                  page_ext = page_ext_next(page_ext);
>>>          }
>>> }
>>>
>>> Could you explain what was expected during pmdp_collapse_flush() which, when
>>> it failed, triggered this BUG_ON()? This counter seems to be page table check
>>> specific; could it just go wrong? I have not looked into the details of the
>>> page table check mechanism.
>>>
>>> - Anshuman
>>> .
>>
>> Hi Anshuman:
>>
>> Thanks for your work.
>>
>> Let me briefly explain the principle of page table check (PTC).
>>
>> PTC introduces the following struct for page mapping type count:
>> struct page_table_check {
>>          atomic_t anon_map_count;
>>          atomic_t file_map_count;
>> };
>> This structure can be obtained by "lookup_page_ext(page)"
>>
>> When page table entries are set (pud/pmd/pte), page_table_check_set() is
>> called to increase the page mapping count and to check for errors (e.g. if
>> a page is used for an anonymous mapping, the page cannot be used for a
>> file mapping at the same time).
>>
>> When page table entries are cleared (pud/pmd/pte), page_table_check_clear()
>> is called to decrease the page mapping count and to check for errors.
>>
>> The error check rules are described in the following document:
>> Documentation/vm/page_table_check.rst
>>
>> The setting and clearing of page table entries are symmetrical.
>>
>> Here __page_table_check_pmd_clear() triggers BUG_ON(), which indicates that
>> the pmd entry's file mapping count has become negative.
>>
>> Supposing PTC hadn't detected this exception, would there have been any
>> problems?
> 
> It is hard to tell what sort of problem has been detected. More
> debugging is needed in order to understand it. A huge file entry is
> being removed from the page table. However, at least one sub page of
> that entry does not have a record that it was added as a file entry to

I guess PMD splitting scenarios should also be taken care of, as the sub pages
will also go via the appropriate XXX_set_at() helpers?

> the page table. At Google we found a few internal security bugs using
> PTCs. However, this being new on ARM64, it is possible that the bug is
> in PTC/khugepaged itself.
> 
> Anshuman, is it possible to repro your scenario in QEMU?

I have been unable to reproduce this reported problem. Last time it just
happened after a fresh boot without anything in particular running. Will
continue experimenting.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
  2022-04-19  7:10         ` Anshuman Khandual
  (?)
@ 2022-04-19  8:52           ` Tong Tiangen
  -1 siblings, 0 replies; 66+ messages in thread
From: Tong Tiangen @ 2022-04-19  8:52 UTC (permalink / raw)
  To: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



在 2022/4/19 15:10, Anshuman Khandual 写道:
> 
> 
> On 4/18/22 21:17, Tong Tiangen wrote:
>>
>>
>> 在 2022/4/18 17:28, Anshuman Khandual 写道:
>>> On 4/18/22 09:14, Tong Tiangen wrote:
>>>> From: Kefeng Wang <wangkefeng.wang@huawei.com>
[...]
>>>
>>> Could you explain what was expected during pmdp_collapse_flush() which when
>>> failed, triggered this BUG_ON() ? This counter seems to be page table check
>>> specific, could it just go wrong ? I have not looked into the details about
>>> page table check mechanism.
>>>
>>> - Anshuman
>>> .
>>
>> Hi Anshuman:
>>
>> Thanks for your work.
>>
>> Let me briefly explain the principle of the page table check (PTC).
>>
>> PTC introduces the following struct for page mapping type count:
>> struct page_table_check {
>>          atomic_t anon_map_count;
>>          atomic_t file_map_count;
>> };
>> This structure can be obtained via lookup_page_ext(page).
> 
> 
> Right.
> 
>>
>> When page table entries are set (pud/pmd/pte), page_table_check_set() is called to increase the page mapping count and to check for errors (e.g. if a page is used for an anonymous mapping, it cannot be used for a file mapping at the same time).
>>
>> When page table entries are cleared (pud/pmd/pte), page_table_check_clear() is called to decrease the page mapping count and to check for errors.
>>
>> The error check rules are described in the following document: Documentation/vm/page_table_check.rst
> 
> Snippet from that document.
> 
> +-------------------+-------------------+-------------------+------------------+
> | Current Mapping   | New mapping       | Permissions       | Rule             |
> +===================+===================+===================+==================+
> | Anonymous         | Anonymous         | Read              | Allow            |
> +-------------------+-------------------+-------------------+------------------+
> | Anonymous         | Anonymous         | Read / Write      | Prohibit         |
> +-------------------+-------------------+-------------------+------------------+
> | Anonymous         | Named             | Any               | Prohibit         |
> +-------------------+-------------------+-------------------+------------------+
> | Named             | Anonymous         | Any               | Prohibit         |
> +-------------------+-------------------+-------------------+------------------+
> | Named             | Named             | Any               | Allow            |
> +-------------------+-------------------+-------------------+------------------+
> 
> Does 'Named' refer to file mapping ? Also what does 'Prohibit' imply here ? The
> check will call out a BUG_ON() in such cases ?

Right, 'Named' means file mapping, and 'Prohibit' here triggers a BUG_ON().

> 
> page_table_check_clear()
> {
> 
>                  if (anon) {
>                          BUG_ON(atomic_read(&ptc->file_map_count));
>                          BUG_ON(atomic_dec_return(&ptc->anon_map_count) < 0);
>                  } else {
>                          BUG_ON(atomic_read(&ptc->anon_map_count));
>                          BUG_ON(atomic_dec_return(&ptc->file_map_count) < 0);
>                  }
> }
> 
> So in the clear path, there are two checks
> 
> - If the current mapping is Anon, file_map_count cannot be positive and other way
> - Decrement the applicable counter ensuring that it does not turn negative
> 
> page_table_check_set()
> {
>                  if (anon) {
>                          BUG_ON(atomic_read(&ptc->file_map_count));
>                          BUG_ON(atomic_inc_return(&ptc->anon_map_count) > 1 && rw);
>                  } else {
>                          BUG_ON(atomic_read(&ptc->anon_map_count));
>                          BUG_ON(atomic_inc_return(&ptc->file_map_count) < 0);
>                  }
> }
> 
> So in the set path, there are two checks
> 
> - If the current mapping is anon, file_map_count cannot be positive and other way
> - Anon mapping cannot be RW if the page has been mapped more than once
> - But then why check for negative values for file_map_count after increment ?

Checking for a negative value after the increment is logically OK, though
checking for <= 0 would be more appropriate.

> 
> Is there any other checks, which this test ensures, that I might be missing ?

The following checks are performed when pages are allocated/freed:
__page_table_check_zero()
{
	BUG_ON(atomic_read(&ptc->anon_map_count));
	BUG_ON(atomic_read(&ptc->file_map_count));
}

> 
>>
>> The setting and clearing of page table entries are symmetrical.
> 
> This assumption should be true for any user accessible mapping, for this test to work ?

Right; if the assumption does not hold, a BUG_ON() is triggered here.

However, as Pasha said:
"this being new on ARM64, it is possible that the bug is in 
PTC/khugepaged itself."

> 
> Also why PUD_PAGE_SIZE/PMD_PAGE_SIZE are being used here instead of directly using
> generic macros such as PUD_SIZE/PMD_SIZE ? Is there a specific reason ?

I handled this in patch 1/4 of this patchset:

+#ifndef PMD_PAGE_SIZE
+#define PMD_PAGE_SIZE	PMD_SIZE
+#endif
+
+#ifndef PUD_PAGE_SIZE
+#define PUD_PAGE_SIZE	PUD_SIZE
+#endif


Thank you.
Tong.

> 
>>
>> Here __page_table_check_pmd_clear() triggered a BUG_ON(), which indicates that the pmd entry's file mapping count has become negative.
>>
>> I wonder: if PTC had not detected this exception, would there have been any problems?
> 
> I am looking into this, not sure for now.
> .

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 1/4] mm: page_table_check: move pxx_user_accessible_page into x86
  2022-04-18  3:44   ` Tong Tiangen
  (?)
@ 2022-04-19  9:29     ` Anshuman Khandual
  -1 siblings, 0 replies; 66+ messages in thread
From: Anshuman Khandual @ 2022-04-19  9:29 UTC (permalink / raw)
  To: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



On 4/18/22 09:14, Tong Tiangen wrote:
> --- a/mm/page_table_check.c
> +++ b/mm/page_table_check.c
> @@ -10,6 +10,14 @@
>  #undef pr_fmt
>  #define pr_fmt(fmt)	"page_table_check: " fmt
>  
> +#ifndef PMD_PAGE_SIZE
> +#define PMD_PAGE_SIZE	PMD_SIZE
> +#endif
> +
> +#ifndef PUD_PAGE_SIZE
> +#define PUD_PAGE_SIZE	PUD_SIZE
> +#endif

Why cannot PMD_SIZE/PUD_SIZE be used on every platform instead ? What is the
need for using PUD_PAGE_SIZE/PMD_PAGE_SIZE ? Are they different on x86 ?

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
  2022-04-18  3:44   ` Tong Tiangen
  (?)
@ 2022-04-19 10:22     ` Anshuman Khandual
  -1 siblings, 0 replies; 66+ messages in thread
From: Anshuman Khandual @ 2022-04-19 10:22 UTC (permalink / raw)
  To: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun


On 4/18/22 09:14, Tong Tiangen wrote:
> +#ifdef CONFIG_PAGE_TABLE_CHECK
> +static inline bool pte_user_accessible_page(pte_t pte)
> +{
> +	return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
> +}
> +
> +static inline bool pmd_user_accessible_page(pmd_t pmd)
> +{
> +	return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
> +}
> +
> +static inline bool pud_user_accessible_page(pud_t pud)
> +{
> +	return pud_present(pud) && pud_user(pud);
> +}
> +#endif
Wondering why we check for these page table entry states when init_mm
has already been excluded ? Should not user page tables be checked
in their entirety for all updates ? What is the rationale for filtering
out only pxx_user_accessible_page() entries ?

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
  2022-04-19 10:22     ` Anshuman Khandual
  (?)
@ 2022-04-19 13:19       ` Pasha Tatashin
  -1 siblings, 0 replies; 66+ messages in thread
From: Pasha Tatashin @ 2022-04-19 13:19 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, LKML, linux-mm,
	Linux ARM, linux-riscv, Kefeng Wang, Guohanjun

On Tue, Apr 19, 2022 at 6:22 AM Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
>
> On 4/18/22 09:14, Tong Tiangen wrote:
> > +#ifdef CONFIG_PAGE_TABLE_CHECK
> > +static inline bool pte_user_accessible_page(pte_t pte)
> > +{
> > +     return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
> > +}
> > +
> > +static inline bool pmd_user_accessible_page(pmd_t pmd)
> > +{
> > +     return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
> > +}
> > +
> > +static inline bool pud_user_accessible_page(pud_t pud)
> > +{
> > +     return pud_present(pud) && pud_user(pud);
> > +}
> > +#endif
> Wondering why we check for these page table entry states when init_mm
> has already been excluded ? Should not user page tables be checked
> in their entirety for all updates ? What is the rationale for filtering
> out only pxx_user_accessible_page() entries ?

The point is to prevent false sharing and memory corruption issues.
The idea of PTC is to be simple and relatively independent of the MM
state machine while catching invalid page sharing. I.e. if an R/W anon
page is accessible by userland, that page can never be mapped into
another process (internally shared anon pages are treated as named
mappings).

Therefore, we try not to rely on MM states, and ensure that when a
page-table entry is accessible by user it meets the required
assumptions: no false sharing, etc.

For example, one bug caught with PTC was a driver that, on unload, put
memory on a freelist while that memory was still mapped in a user page
table.

Pasha

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
@ 2022-04-19 13:19       ` Pasha Tatashin
  0 siblings, 0 replies; 66+ messages in thread
From: Pasha Tatashin @ 2022-04-19 13:19 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, LKML, linux-mm,
	Linux ARM, linux-riscv, Kefeng Wang, Guohanjun

On Tue, Apr 19, 2022 at 6:22 AM Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
>
> On 4/18/22 09:14, Tong Tiangen wrote:
> > +#ifdef CONFIG_PAGE_TABLE_CHECK
> > +static inline bool pte_user_accessible_page(pte_t pte)
> > +{
> > +     return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
> > +}
> > +
> > +static inline bool pmd_user_accessible_page(pmd_t pmd)
> > +{
> > +     return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
> > +}
> > +
> > +static inline bool pud_user_accessible_page(pud_t pud)
> > +{
> > +     return pud_present(pud) && pud_user(pud);
> > +}
> > +#endif
> Wondering why check for these page table entry states when init_mm
> has already being excluded ? Should not user page tables be checked
> for in entirety for all updates ? what is the rationale for filtering
> out only pxx_user_access_page entries ?

The point is to prevent false sharing and memory corruption issues.
The idea of PTC to be simple and relatively independent  from the MM
state machine that catches invalid page sharing. I.e. if an R/W anon
page is accessible by user land, that page can never be mapped into
another process (internally shared anons are treated as named
mappings).

Therefore, we try not to rely on MM states, and ensure that when a
page-table entry is accessible by user it meets the required
assumptions: no false sharing, etc.

For example, one bug that was caught with PTC was where a driver on an
unload would put memory on a freelist but memory is still mapped in
user page table.

Pasha

_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
@ 2022-04-19 13:19       ` Pasha Tatashin
  0 siblings, 0 replies; 66+ messages in thread
From: Pasha Tatashin @ 2022-04-19 13:19 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, LKML, linux-mm,
	Linux ARM, linux-riscv, Kefeng Wang, Guohanjun

On Tue, Apr 19, 2022 at 6:22 AM Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
>
> On 4/18/22 09:14, Tong Tiangen wrote:
> > +#ifdef CONFIG_PAGE_TABLE_CHECK
> > +static inline bool pte_user_accessible_page(pte_t pte)
> > +{
> > +     return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
> > +}
> > +
> > +static inline bool pmd_user_accessible_page(pmd_t pmd)
> > +{
> > +     return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
> > +}
> > +
> > +static inline bool pud_user_accessible_page(pud_t pud)
> > +{
> > +     return pud_present(pud) && pud_user(pud);
> > +}
> > +#endif
> Wondering why check for these page table entry states when init_mm
> has already being excluded ? Should not user page tables be checked
> for in entirety for all updates ? what is the rationale for filtering
> out only pxx_user_access_page entries ?

The point is to prevent false sharing and memory corruption issues.
The idea is for PTC to be simple and relatively independent of the MM
state machine while still catching invalid page sharing. I.e., if an
R/W anon page is accessible by userland, that page can never be mapped
into another process (internally shared anon pages are treated as
named mappings).

Therefore, we try not to rely on MM states, and ensure that when a
page-table entry is accessible by user it meets the required
assumptions: no false sharing, etc.

For example, one bug caught with PTC was a driver that, on unload,
put memory on a freelist while that memory was still mapped in a user
page table.

Pasha

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
  2022-04-19 13:19       ` Pasha Tatashin
  (?)
@ 2022-04-20  5:05         ` Anshuman Khandual
  -1 siblings, 0 replies; 66+ messages in thread
From: Anshuman Khandual @ 2022-04-20  5:05 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, LKML, linux-mm,
	Linux ARM, linux-riscv, Kefeng Wang, Guohanjun



On 4/19/22 18:49, Pasha Tatashin wrote:
> On Tue, Apr 19, 2022 at 6:22 AM Anshuman Khandual
> <anshuman.khandual@arm.com> wrote:
>>
>>
>> On 4/18/22 09:14, Tong Tiangen wrote:
>>> +#ifdef CONFIG_PAGE_TABLE_CHECK
>>> +static inline bool pte_user_accessible_page(pte_t pte)
>>> +{
>>> +     return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
>>> +}
>>> +
>>> +static inline bool pmd_user_accessible_page(pmd_t pmd)
>>> +{
>>> +     return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
>>> +}
>>> +
>>> +static inline bool pud_user_accessible_page(pud_t pud)
>>> +{
>>> +     return pud_present(pud) && pud_user(pud);
>>> +}
>>> +#endif
>> Wondering why check for these page table entry states when init_mm
>> has already being excluded ? Should not user page tables be checked
>> for in entirety for all updates ? what is the rationale for filtering
>> out only pxx_user_access_page entries ?
> 
> The point is to prevent false sharing and memory corruption issues.
> The idea of PTC to be simple and relatively independent  from the MM
> state machine that catches invalid page sharing. I.e. if an R/W anon

Right, this mechanism is a truly independent validation, orthogonal to
other MM states. Although I was curious: if the mm_struct is not
'init_mm', what percentage of its total page-table-mapped entries will
be user accessible? Do these new helpers only filter out entries that
could potentially create false sharing leading up to memory
corruption?

I am wondering whether there is any other way such filtering could
have been applied without adding all these new page table helpers just
for page table check purposes.

> page is accessible by user land, that page can never be mapped into
> another process (internally shared anons are treated as named
> mappings).

Right.

> 
> Therefore, we try not to rely on MM states, and ensure that when a
> page-table entry is accessible by user it meets the required
> assumptions: no false sharing, etc.

Right, filtering reduces the page table entries that need interception
during updates (set/clear), but I was just curious whether there is
another way of doing it without adding page table check specific
helpers on platforms subscribing to PAGE_TABLE_CHECK.

> 
> For example, one bug that was caught with PTC was where a driver on an
> unload would put memory on a freelist but memory is still mapped in
> user page table.

Should not the page's refcount (indicating that it is being used
elsewhere) have prevented its release onto the free list? But the page
table check here might just detect such scenarios even before the page
gets released.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 1/4] mm: page_table_check: move pxx_user_accessible_page into x86
  2022-04-19  9:29     ` Anshuman Khandual
  (?)
@ 2022-04-20  6:44       ` Tong Tiangen
  -1 siblings, 0 replies; 66+ messages in thread
From: Tong Tiangen @ 2022-04-20  6:44 UTC (permalink / raw)
  To: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



在 2022/4/19 17:29, Anshuman Khandual 写道:
> 
> 
> On 4/18/22 09:14, Tong Tiangen wrote:
>> --- a/mm/page_table_check.c
>> +++ b/mm/page_table_check.c
>> @@ -10,6 +10,14 @@
>>   #undef pr_fmt
>>   #define pr_fmt(fmt)	"page_table_check: " fmt
>>   
>> +#ifndef PMD_PAGE_SIZE
>> +#define PMD_PAGE_SIZE	PMD_SIZE
>> +#endif
>> +
>> +#ifndef PUD_PAGE_SIZE
>> +#define PUD_PAGE_SIZE	PUD_SIZE
>> +#endif
> 
> Why cannot PMD_SIZE/PUD_SIZE be used on every platform instead ? What is the
> need for using PUD_PAGE_SIZE/PMD_PAGE_SIZE ? Are they different on x86 ?
> .

Hi Pasha,
I checked the definitions of PMD_SIZE/PUD_SIZE and
PUD_PAGE_SIZE/PMD_PAGE_SIZE in the x86 architecture, and their use
outside the architecture (e.g. in mm/, everything uses
PMD_SIZE/PUD_SIZE). Would it be better to use the unified
PMD_SIZE/PUD_SIZE here?

Thanks,
Tong.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 1/4] mm: page_table_check: move pxx_user_accessible_page into x86
  2022-04-20  6:44       ` Tong Tiangen
  (?)
@ 2022-04-20 16:44         ` Pasha Tatashin
  -1 siblings, 0 replies; 66+ messages in thread
From: Pasha Tatashin @ 2022-04-20 16:44 UTC (permalink / raw)
  To: Tong Tiangen
  Cc: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, LKML, linux-mm,
	Linux ARM, linux-riscv, Kefeng Wang, Guohanjun

On Wed, Apr 20, 2022 at 2:45 AM Tong Tiangen <tongtiangen@huawei.com> wrote:
>
>
>
> 在 2022/4/19 17:29, Anshuman Khandual 写道:
> >
> >
> > On 4/18/22 09:14, Tong Tiangen wrote:
> >> --- a/mm/page_table_check.c
> >> +++ b/mm/page_table_check.c
> >> @@ -10,6 +10,14 @@
> >>   #undef pr_fmt
> >>   #define pr_fmt(fmt)        "page_table_check: " fmt
> >>
> >> +#ifndef PMD_PAGE_SIZE
> >> +#define PMD_PAGE_SIZE       PMD_SIZE
> >> +#endif
> >> +
> >> +#ifndef PUD_PAGE_SIZE
> >> +#define PUD_PAGE_SIZE       PUD_SIZE
> >> +#endif
> >
> > Why cannot PMD_SIZE/PUD_SIZE be used on every platform instead ? What is the
> > need for using PUD_PAGE_SIZE/PMD_PAGE_SIZE ? Are they different on x86 ?
> > .
>
> Hi, Pasha:
> I checked the definitions of PMD_SIZE/PUD_SIZE and
> PUD_PAGE_SIZE/PMD_PAGE_SIZE in x86 architecture and their use outside
> the architecture(eg: in mm/, all used PMD_SIZE/PUD_SIZE), Would it be
> better to use a unified PMD_SIZE/PUD_SIZE here?

Hi Tong,

Yes, it makes sense to use PMD_SIZE/PUD_SIZE instead of
PUD_PAGE_SIZE/PMD_PAGE_SIZE in page_table_check, to be in line with
the rest of mm/.

Pasha

>
> Thanks,
> Tong.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
  2022-04-20  5:05         ` Anshuman Khandual
  (?)
@ 2022-04-20 17:08           ` Pasha Tatashin
  -1 siblings, 0 replies; 66+ messages in thread
From: Pasha Tatashin @ 2022-04-20 17:08 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, LKML, linux-mm,
	Linux ARM, linux-riscv, Kefeng Wang, Guohanjun

On Wed, Apr 20, 2022 at 1:05 AM Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
>
>
> On 4/19/22 18:49, Pasha Tatashin wrote:
> > On Tue, Apr 19, 2022 at 6:22 AM Anshuman Khandual
> > <anshuman.khandual@arm.com> wrote:
> >>
> >>
> >> On 4/18/22 09:14, Tong Tiangen wrote:
> >>> +#ifdef CONFIG_PAGE_TABLE_CHECK
> >>> +static inline bool pte_user_accessible_page(pte_t pte)
> >>> +{
> >>> +     return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
> >>> +}
> >>> +
> >>> +static inline bool pmd_user_accessible_page(pmd_t pmd)
> >>> +{
> >>> +     return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
> >>> +}
> >>> +
> >>> +static inline bool pud_user_accessible_page(pud_t pud)
> >>> +{
> >>> +     return pud_present(pud) && pud_user(pud);
> >>> +}
> >>> +#endif
> >> Wondering why check for these page table entry states when init_mm
> >> has already being excluded ? Should not user page tables be checked
> >> for in entirety for all updates ? what is the rationale for filtering
> >> out only pxx_user_access_page entries ?
> >
> > The point is to prevent false sharing and memory corruption issues.
> > The idea of PTC to be simple and relatively independent  from the MM
> > state machine that catches invalid page sharing. I.e. if an R/W anon
>
> Right, this mechanism here is truly interdependent validation, which is
> orthogonal to other MM states. Although I was curious, if mm_struct is
> not 'init_mm', what percentage of its total page table mapped entries
> will be user accessible ? These new helpers only filter out entries that
> could potentially create false sharing leading upto memory corruption ?

Yes, the intention is to filter out the false sharing scenarios. This
allows crashing the system before memory corruption or a memory leak
occurs.

>
> I am wondering if there is any other way such filtering could have been
> applied without adding all these new page table helpers just for page
> table check purpose.
>
> > page is accessible by user land, that page can never be mapped into
> > another process (internally shared anons are treated as named
> > mappings).
>
> Right.
>
> >
> > Therefore, we try not to rely on MM states, and ensure that when a
> > page-table entry is accessible by user it meets the required
> > assumptions: no false sharing, etc.
>
> Right, filtering reduces the page table entries that needs interception
> during update (set/clear), but was just curious is there another way of
> doing it, without adding page table check specific helpers on platforms
> subscribing PAGE_TABLE_CHECK ?
>

It makes sense to limit the scope of PTC to user-accessible pages
only, and not try to catch other bugs. This keeps it reasonably small
and also lowers the runtime overhead, so it can be used in production
as well. IMO the extra helpers are not very intrusive, and they are
generic enough that they might be used elsewhere in the future as
well.


> >
> > For example, one bug that was caught with PTC was where a driver on an
> > unload would put memory on a freelist but memory is still mapped in
> > user page table.
>
> Should not page's refcount (that it is being used else where) prevented
> releases into free list ? But page table check here might just detect
> such scenarios even before page gets released.

Usually yes. However, there are a number of recent bugs related to
refcount [1][2][3]. This is why we need a stronger checker.

The particular bug, however, did not rely on refcount. The driver
allocated a kernel page for a ringbuffer, upon request shared it with
a userspace by mapping it into the user address space, and later when
the driver was unloaded, it never removed the mapping from the user
address space. Thus, even though the page was freed when the driver
was unloaded, the mapping stayed in the user page table.

[1] https://lore.kernel.org/all/xr9335nxwc5y.fsf@gthelen2.svl.corp.google.com
[2] https://lore.kernel.org/all/1582661774-30925-2-git-send-email-akaher@vmware.com
[3] https://lore.kernel.org/all/20210622021423.154662-3-mike.kravetz@oracle.com

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
@ 2022-04-20 17:08           ` Pasha Tatashin
  0 siblings, 0 replies; 66+ messages in thread
From: Pasha Tatashin @ 2022-04-20 17:08 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, LKML, linux-mm,
	Linux ARM, linux-riscv, Kefeng Wang, Guohanjun

On Wed, Apr 20, 2022 at 1:05 AM Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
>
>
> On 4/19/22 18:49, Pasha Tatashin wrote:
> > On Tue, Apr 19, 2022 at 6:22 AM Anshuman Khandual
> > <anshuman.khandual@arm.com> wrote:
> >>
> >>
> >> On 4/18/22 09:14, Tong Tiangen wrote:
> >>> +#ifdef CONFIG_PAGE_TABLE_CHECK
> >>> +static inline bool pte_user_accessible_page(pte_t pte)
> >>> +{
> >>> +     return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
> >>> +}
> >>> +
> >>> +static inline bool pmd_user_accessible_page(pmd_t pmd)
> >>> +{
> >>> +     return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
> >>> +}
> >>> +
> >>> +static inline bool pud_user_accessible_page(pud_t pud)
> >>> +{
> >>> +     return pud_present(pud) && pud_user(pud);
> >>> +}
> >>> +#endif
> >> Wondering why check for these page table entry states when init_mm
> >> has already being excluded ? Should not user page tables be checked
> >> for in entirety for all updates ? what is the rationale for filtering
> >> out only pxx_user_access_page entries ?
> >
> > The point is to prevent false sharing and memory corruption issues.
> > The idea of PTC to be simple and relatively independent  from the MM
> > state machine that catches invalid page sharing. I.e. if an R/W anon
>
> Right, this mechanism here is truly interdependent validation, which is
> orthogonal to other MM states. Although I was curious, if mm_struct is
> not 'init_mm', what percentage of its total page table mapped entries
> will be user accessible ? These new helpers only filter out entries that
> could potentially create false sharing leading upto memory corruption ?

Yes, the intention is to filter out the false sharing scenarios.
Allows crashing the system prior to memory corruption or memory
leaking.

>
> I am wondering if there is any other way such filtering could have been
> applied without adding all these new page table helpers just for page
> table check purpose.
>
> > page is accessible by user land, that page can never be mapped into
> > another process (internally shared anons are treated as named
> > mappings).
>
> Right.
>
> >
> > Therefore, we try not to rely on MM states, and ensure that when a
> > page-table entry is accessible by user it meets the required
> > assumptions: no false sharing, etc.
>
> Right, filtering reduces the page table entries that needs interception
> during update (set/clear), but was just curious is there another way of
> doing it, without adding page table check specific helpers on platforms
> subscribing PAGE_TABLE_CHECK ?
>

It makes sense to limit the scope of PTC only to user accessible
pages, and not try to catch other bugs. This keeps it reasonably
small, and also lowers runtime overhead so it can be used in
production as well. IMO the extra helpers are not very intrusive, and
generic enough that in the future might be used elsewhere as well.


> >
> > For example, one bug that was caught with PTC was where a driver on an
> > unload would put memory on a freelist but memory is still mapped in
> > user page table.
>
> Should not page's refcount (that it is being used else where) prevented
> releases into free list ? But page table check here might just detect
> such scenarios even before page gets released.

Usually yes. However, there are a number of recent bugs related to
refcount [1][2][3]. This is why we need a stronger checker.

The particular bug, however, did not rely on refcount. The driver
allocated a kernel page for a ringbuffer, upon request shared it with
a userspace by mapping it into the user address space, and later when
the driver was unloaded, it never removed the mapping from the user
address space. Thus, even though the page was freed when the driver
was unloaded, the mapping stayed in the user page table.

[1] https://lore.kernel.org/all/xr9335nxwc5y.fsf@gthelen2.svl.corp.google.com
[2] https://lore.kernel.org/all/1582661774-30925-2-git-send-email-akaher@vmware.com
[3] https://lore.kernel.org/all/20210622021423.154662-3-mike.kravetz@oracle.com

_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 3/4] arm64: mm: add support for page table check
@ 2022-04-20 17:08           ` Pasha Tatashin
  0 siblings, 0 replies; 66+ messages in thread
From: Pasha Tatashin @ 2022-04-20 17:08 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, LKML, linux-mm,
	Linux ARM, linux-riscv, Kefeng Wang, Guohanjun

On Wed, Apr 20, 2022 at 1:05 AM Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
>
>
> On 4/19/22 18:49, Pasha Tatashin wrote:
> > On Tue, Apr 19, 2022 at 6:22 AM Anshuman Khandual
> > <anshuman.khandual@arm.com> wrote:
> >>
> >>
> >> On 4/18/22 09:14, Tong Tiangen wrote:
> >>> +#ifdef CONFIG_PAGE_TABLE_CHECK
> >>> +static inline bool pte_user_accessible_page(pte_t pte)
> >>> +{
> >>> +     return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
> >>> +}
> >>> +
> >>> +static inline bool pmd_user_accessible_page(pmd_t pmd)
> >>> +{
> >>> +     return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
> >>> +}
> >>> +
> >>> +static inline bool pud_user_accessible_page(pud_t pud)
> >>> +{
> >>> +     return pud_present(pud) && pud_user(pud);
> >>> +}
> >>> +#endif
> >> Wondering why check for these page table entry states when init_mm
> >> has already been excluded ? Should not user page tables be checked
> >> in their entirety for all updates ? What is the rationale for filtering
> >> out only pxx_user_accessible_page entries ?
> >
> > The point is to prevent false sharing and memory corruption issues.
> > The idea of PTC is to be simple and relatively independent from the MM
> > state machine that catches invalid page sharing. I.e. if an R/W anon
>
> Right, this mechanism here is truly independent validation, which is
> orthogonal to other MM states. Although I was curious, if mm_struct is
> not 'init_mm', what percentage of its total page table mapped entries
> will be user accessible ? These new helpers only filter out entries that
> could potentially create false sharing leading up to memory corruption ?

Yes, the intention is to filter out the false sharing scenarios. This
allows crashing the system before memory corruption or a memory leak
occurs.

>
> I am wondering if there is any other way such filtering could have been
> applied without adding all these new page table helpers just for page
> table check purposes.
>
> > page is accessible by user land, that page can never be mapped into
> > another process (internally shared anons are treated as named
> > mappings).
>
> Right.
>
> >
> > Therefore, we try not to rely on MM states, and ensure that when a
> > page-table entry is accessible by user it meets the required
> > assumptions: no false sharing, etc.
>
> Right, filtering reduces the page table entries that need interception
> during update (set/clear), but I was just curious if there is another
> way of doing it, without adding page table check specific helpers on
> platforms subscribing to PAGE_TABLE_CHECK ?
>

It makes sense to limit the scope of PTC only to user-accessible
pages, and not try to catch other bugs. This keeps it reasonably
small, and also lowers runtime overhead so it can be used in
production as well. IMO the extra helpers are not very intrusive, and
generic enough that they might be used elsewhere in the future as well.


> >
> > For example, one bug that was caught with PTC was where a driver on an
> > unload would put memory on a freelist but memory is still mapped in
> > user page table.
>
> Should not page's refcount (that it is being used else where) prevented
> releases into free list ? But page table check here might just detect
> such scenarios even before page gets released.

Usually yes. However, there are a number of recent bugs related to
refcount [1][2][3]. This is why we need a stronger checker.

The particular bug, however, did not rely on refcounting. The driver
allocated a kernel page for a ringbuffer, shared it with userspace on
request by mapping it into the user address space, and never removed
that mapping when the driver was later unloaded. Thus, even though the
page was freed on unload, the mapping stayed in the user page table.

[1] https://lore.kernel.org/all/xr9335nxwc5y.fsf@gthelen2.svl.corp.google.com
[2] https://lore.kernel.org/all/1582661774-30925-2-git-send-email-akaher@vmware.com
[3] https://lore.kernel.org/all/20210622021423.154662-3-mike.kravetz@oracle.com

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 1/4] mm: page_table_check: move pxx_user_accessible_page into x86
  2022-04-20 16:44         ` Pasha Tatashin
  (?)
@ 2022-04-21  3:05           ` Tong Tiangen
  -1 siblings, 0 replies; 66+ messages in thread
From: Tong Tiangen @ 2022-04-21  3:05 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, LKML, linux-mm,
	Linux ARM, linux-riscv, Kefeng Wang, Guohanjun



On 2022/4/21 0:44, Pasha Tatashin wrote:
> On Wed, Apr 20, 2022 at 2:45 AM Tong Tiangen <tongtiangen@huawei.com> wrote:
>>
>>
>>
>> On 2022/4/19 17:29, Anshuman Khandual wrote:
>>>
>>>
>>> On 4/18/22 09:14, Tong Tiangen wrote:
>>>> --- a/mm/page_table_check.c
>>>> +++ b/mm/page_table_check.c
>>>> @@ -10,6 +10,14 @@
>>>>    #undef pr_fmt
>>>>    #define pr_fmt(fmt)        "page_table_check: " fmt
>>>>
>>>> +#ifndef PMD_PAGE_SIZE
>>>> +#define PMD_PAGE_SIZE       PMD_SIZE
>>>> +#endif
>>>> +
>>>> +#ifndef PUD_PAGE_SIZE
>>>> +#define PUD_PAGE_SIZE       PUD_SIZE
>>>> +#endif
>>>
>>> Why cannot PMD_SIZE/PUD_SIZE be used on every platform instead ? What is the
>>> need for using PUD_PAGE_SIZE/PMD_PAGE_SIZE ? Are they different on x86 ?
>>> .
>>
>> Hi Pasha,
>> I checked the definitions of PMD_SIZE/PUD_SIZE and
>> PUD_PAGE_SIZE/PMD_PAGE_SIZE in the x86 architecture and their use outside
>> the architecture (e.g. in mm/, only PMD_SIZE/PUD_SIZE are used). Would it
>> be better to use the unified PMD_SIZE/PUD_SIZE here?
> 
> Hi Tong,
> 
> Yes, it makes sense to use PMD_SIZE/PUD_SIZE instead of
> PUD_PAGE_SIZE/PMD_PAGE_SIZE in page_table_check to be inline with the
> rest of the mm/
> 
> Pasha
> 
Hi Pasha and Anshuman,

OK. Functional correctness is not affected here; I plan to optimize this
point after this patchset is merged.

Tong.

>>
>> Thanks,
>> Tong.
> .

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 1/4] mm: page_table_check: move pxx_user_accessible_page into x86
  2022-04-21  3:05           ` Tong Tiangen
  (?)
@ 2022-04-21  3:44             ` Anshuman Khandual
  -1 siblings, 0 replies; 66+ messages in thread
From: Anshuman Khandual @ 2022-04-21  3:44 UTC (permalink / raw)
  To: Tong Tiangen, Pasha Tatashin
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, LKML, linux-mm,
	Linux ARM, linux-riscv, Kefeng Wang, Guohanjun



On 4/21/22 08:35, Tong Tiangen wrote:
> 
> 
> On 2022/4/21 0:44, Pasha Tatashin wrote:
>> On Wed, Apr 20, 2022 at 2:45 AM Tong Tiangen <tongtiangen@huawei.com> wrote:
>>>
>>>
>>>
>>> On 2022/4/19 17:29, Anshuman Khandual wrote:
>>>>
>>>>
>>>> On 4/18/22 09:14, Tong Tiangen wrote:
>>>>> --- a/mm/page_table_check.c
>>>>> +++ b/mm/page_table_check.c
>>>>> @@ -10,6 +10,14 @@
>>>>>    #undef pr_fmt
>>>>>    #define pr_fmt(fmt)        "page_table_check: " fmt
>>>>>
>>>>> +#ifndef PMD_PAGE_SIZE
>>>>> +#define PMD_PAGE_SIZE       PMD_SIZE
>>>>> +#endif
>>>>> +
>>>>> +#ifndef PUD_PAGE_SIZE
>>>>> +#define PUD_PAGE_SIZE       PUD_SIZE
>>>>> +#endif
>>>>
>>>> Why cannot PMD_SIZE/PUD_SIZE be used on every platform instead ? What is the
>>>> need for using PUD_PAGE_SIZE/PMD_PAGE_SIZE ? Are they different on x86 ?
>>>> .
>>>
>>> Hi, Pasha:
>>> I checked the definitions of PMD_SIZE/PUD_SIZE and
>>> PUD_PAGE_SIZE/PMD_PAGE_SIZE in x86 architecture and their use outside
>>> the architecture(eg: in mm/, all used PMD_SIZE/PUD_SIZE), Would it be
>>> better to use a unified PMD_SIZE/PUD_SIZE here?
>>
>> Hi Tong,
>>
>> Yes, it makes sense to use PMD_SIZE/PUD_SIZE instead of
>> PUD_PAGE_SIZE/PMD_PAGE_SIZE in page_table_check to be inline with the
>> rest of the mm/
>>
>> Pasha
>>
> Hi Pasha and Anshuman:
> 
> OK, Functional correctness is not affected here, i plan to optimize this point after this patchset is merged.

As page table check is now being proposed for multiple platforms, i.e. arm64 and
riscv besides just x86, it should not have any architecture-specific macros
or functions. Hence please do generalize these PMD/PUD sizes in this series itself.

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH -next v4 1/4] mm: page_table_check: move pxx_user_accessible_page into x86
  2022-04-21  3:44             ` Anshuman Khandual
  (?)
@ 2022-04-21  6:27               ` Tong Tiangen
  -1 siblings, 0 replies; 66+ messages in thread
From: Tong Tiangen @ 2022-04-21  6:27 UTC (permalink / raw)
  To: Anshuman Khandual, Pasha Tatashin
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, LKML, linux-mm,
	Linux ARM, linux-riscv, Kefeng Wang, Guohanjun



On 2022/4/21 11:44, Anshuman Khandual wrote:
> 
> 
> On 4/21/22 08:35, Tong Tiangen wrote:
>>
>>
>> On 2022/4/21 0:44, Pasha Tatashin wrote:
>>> On Wed, Apr 20, 2022 at 2:45 AM Tong Tiangen <tongtiangen@huawei.com> wrote:
>>>>
>>>>
>>>>
>>>> On 2022/4/19 17:29, Anshuman Khandual wrote:
>>>>>
>>>>>
>>>>> On 4/18/22 09:14, Tong Tiangen wrote:
>>>>>> --- a/mm/page_table_check.c
>>>>>> +++ b/mm/page_table_check.c
>>>>>> @@ -10,6 +10,14 @@
>>>>>>     #undef pr_fmt
>>>>>>     #define pr_fmt(fmt)        "page_table_check: " fmt
>>>>>>
>>>>>> +#ifndef PMD_PAGE_SIZE
>>>>>> +#define PMD_PAGE_SIZE       PMD_SIZE
>>>>>> +#endif
>>>>>> +
>>>>>> +#ifndef PUD_PAGE_SIZE
>>>>>> +#define PUD_PAGE_SIZE       PUD_SIZE
>>>>>> +#endif
>>>>>
>>>>> Why cannot PMD_SIZE/PUD_SIZE be used on every platform instead ? What is the
>>>>> need for using PUD_PAGE_SIZE/PMD_PAGE_SIZE ? Are they different on x86 ?
>>>>> .
>>>>
>>>> Hi, Pasha:
>>>> I checked the definitions of PMD_SIZE/PUD_SIZE and
>>>> PUD_PAGE_SIZE/PMD_PAGE_SIZE in x86 architecture and their use outside
>>>> the architecture(eg: in mm/, all used PMD_SIZE/PUD_SIZE), Would it be
>>>> better to use a unified PMD_SIZE/PUD_SIZE here?
>>>
>>> Hi Tong,
>>>
>>> Yes, it makes sense to use PMD_SIZE/PUD_SIZE instead of
>>> PUD_PAGE_SIZE/PMD_PAGE_SIZE in page_table_check to be inline with the
>>> rest of the mm/
>>>
>>> Pasha
>>>
>> Hi Pasha and Anshuman:
>>
>> OK, Functional correctness is not affected here, i plan to optimize this point after this patchset is merged.
> 
> As page table check is now being proposed to be supported on multiple platforms i.e
> arm64, riscv besides just x86, it should not have any architecture specific macros
> or functions. Hence please do generalize these PMD/PUD sizes in this series itself.
> .

OK, will resend.

Thank you.
Tong.

^ permalink raw reply	[flat|nested] 66+ messages in thread

end of thread, other threads:[~2022-04-21  6:28 UTC | newest]

Thread overview: 66+ messages
2022-04-18  3:44 [PATCH -next v4 0/4]mm: page_table_check: add support on arm64 and riscv Tong Tiangen
2022-04-18  3:44 ` [PATCH -next v4 1/4] mm: page_table_check: move pxx_user_accessible_page into x86 Tong Tiangen
2022-04-19  9:29   ` Anshuman Khandual
2022-04-20  6:44     ` Tong Tiangen
2022-04-20 16:44       ` Pasha Tatashin
2022-04-21  3:05         ` Tong Tiangen
2022-04-21  3:44           ` Anshuman Khandual
2022-04-21  6:27             ` Tong Tiangen
2022-04-18  3:44 ` [PATCH -next v4 2/4] mm: page_table_check: add hooks to public helpers Tong Tiangen
2022-04-18  3:44 ` [PATCH -next v4 3/4] arm64: mm: add support for page table check Tong Tiangen
2022-04-18  9:28   ` Anshuman Khandual
2022-04-18 15:47     ` Tong Tiangen
2022-04-18 16:20       ` Pasha Tatashin
2022-04-19  7:25         ` Anshuman Khandual
2022-04-19  7:10       ` Anshuman Khandual
2022-04-19  8:52         ` Tong Tiangen
2022-04-19 10:22   ` Anshuman Khandual
2022-04-19 13:19     ` Pasha Tatashin
2022-04-20  5:05       ` Anshuman Khandual
2022-04-20 17:08         ` Pasha Tatashin
2022-04-18  3:44 ` [PATCH -next v4 4/4] riscv: " Tong Tiangen
2022-04-18  6:12 ` [PATCH -next v4 0/4]mm: page_table_check: add support on arm64 and riscv Tong Tiangen
