All of lore.kernel.org
* [PATCH -next v5 0/5] mm: page_table_check: add support on arm64 and riscv
@ 2022-04-21  8:20 ` Tong Tiangen
  0 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-21  8:20 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Anshuman Khandual, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Tong Tiangen, Kefeng Wang, Guohanjun

Page table check performs extra verifications when new pages become
accessible from userspace, i.e. when their page table entries (PTEs,
PMDs, etc.) are added to the page table. It is currently supported
on x86[1].

This patchset makes some small generalizations so that new architectures
can support the feature more easily, and then enables it on arm64 and
RISC-V.

[1]https://lore.kernel.org/lkml/20211123214814.3756047-1-pasha.tatashin@soleen.com/

v4 -> v5:
 Per Anshuman's suggestion, use PxD_SIZE instead of PxD_PAGE_SIZE in
 mm/page_table_check.c; the change was reviewed by Pasha.

v3 -> v4:
 Adapt to next-20220414

v2 -> v3:
 Modify ptep_clear() in include/linux/pgtable.h to use IS_ENABLED(), per
 Pasha's suggestion.

v1 -> v2:
 1. Fix arm64's pte/pmd/pud_user_accessible_page() according to the
    suggestions of Catalin.
 2. Also fix riscv's pte_pmd_pud_user_accessible_page().

Kefeng Wang (2):
  mm: page_table_check: move pxx_user_accessible_page into x86
  arm64: mm: add support for page table check

Tong Tiangen (3):
  mm: page_table_check: using PxD_SIZE instead of PxD_PAGE_SIZE
  mm: page_table_check: add hooks to public helpers
  riscv: mm: add support for page table check

 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/pgtable.h | 65 ++++++++++++++++++++++++---
 arch/riscv/Kconfig               |  1 +
 arch/riscv/include/asm/pgtable.h | 77 +++++++++++++++++++++++++++++---
 arch/x86/include/asm/pgtable.h   | 29 +++++++-----
 include/linux/pgtable.h          | 26 +++++++----
 mm/page_table_check.c            | 25 ++---------
 7 files changed, 174 insertions(+), 50 deletions(-)

-- 
2.25.1



* [PATCH -next v5 1/5] mm: page_table_check: using PxD_SIZE instead of PxD_PAGE_SIZE
  2022-04-21  8:20 ` Tong Tiangen
  (?)
@ 2022-04-21  8:20   ` Tong Tiangen
  -1 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-21  8:20 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Anshuman Khandual, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Tong Tiangen, Kefeng Wang, Guohanjun

The PMD_SIZE/PUD_SIZE macros are defined on all architectures, whereas
PMD_PAGE_SIZE/PUD_PAGE_SIZE are x86-specific. Using PMD_SIZE/PUD_SIZE in
mm/page_table_check.c makes it easier to support page table check on
architectures other than x86, and has no functional impact on x86.

Suggested-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 mm/page_table_check.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 2458281bff89..eb0d0b71cdf6 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -177,7 +177,7 @@ void __page_table_check_pmd_clear(struct mm_struct *mm, unsigned long addr,
 
 	if (pmd_user_accessible_page(pmd)) {
 		page_table_check_clear(mm, addr, pmd_pfn(pmd),
-				       PMD_PAGE_SIZE >> PAGE_SHIFT);
+				       PMD_SIZE >> PAGE_SHIFT);
 	}
 }
 EXPORT_SYMBOL(__page_table_check_pmd_clear);
@@ -190,7 +190,7 @@ void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
 
 	if (pud_user_accessible_page(pud)) {
 		page_table_check_clear(mm, addr, pud_pfn(pud),
-				       PUD_PAGE_SIZE >> PAGE_SHIFT);
+				       PUD_SIZE >> PAGE_SHIFT);
 	}
 }
 EXPORT_SYMBOL(__page_table_check_pud_clear);
@@ -219,7 +219,7 @@ void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
 	__page_table_check_pmd_clear(mm, addr, *pmdp);
 	if (pmd_user_accessible_page(pmd)) {
 		page_table_check_set(mm, addr, pmd_pfn(pmd),
-				     PMD_PAGE_SIZE >> PAGE_SHIFT,
+				     PMD_SIZE >> PAGE_SHIFT,
 				     pmd_write(pmd));
 	}
 }
@@ -234,7 +234,7 @@ void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr,
 	__page_table_check_pud_clear(mm, addr, *pudp);
 	if (pud_user_accessible_page(pud)) {
 		page_table_check_set(mm, addr, pud_pfn(pud),
-				     PUD_PAGE_SIZE >> PAGE_SHIFT,
+				     PUD_SIZE >> PAGE_SHIFT,
 				     pud_write(pud));
 	}
 }
-- 
2.25.1



* [PATCH -next v5 2/5] mm: page_table_check: move pxx_user_accessible_page into x86
  2022-04-21  8:20 ` Tong Tiangen
  (?)
@ 2022-04-21  8:20   ` Tong Tiangen
  -1 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-21  8:20 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Anshuman Khandual, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Tong Tiangen, Kefeng Wang, Guohanjun

From: Kefeng Wang <wangkefeng.wang@huawei.com>

The pxx_user_accessible_page() helpers test architecture-specific page
table entry bits, so move them into x86's pgtable.h.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 arch/x86/include/asm/pgtable.h | 19 +++++++++++++++++++
 mm/page_table_check.c          | 17 -----------------
 2 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index b7464f13e416..564abe42b0f7 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1447,6 +1447,25 @@ static inline bool arch_has_hw_pte_young(void)
 	return true;
 }
 
+#ifdef CONFIG_PAGE_TABLE_CHECK
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
+}
+
+static inline bool pmd_user_accessible_page(pmd_t pmd)
+{
+	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) &&
+		(pmd_val(pmd) & _PAGE_USER);
+}
+
+static inline bool pud_user_accessible_page(pud_t pud)
+{
+	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) &&
+		(pud_val(pud) & _PAGE_USER);
+}
+#endif
+
 #endif	/* __ASSEMBLY__ */
 
 #endif /* _ASM_X86_PGTABLE_H */
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index eb0d0b71cdf6..3692bea2ea2c 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -52,23 +52,6 @@ static struct page_table_check *get_page_table_check(struct page_ext *page_ext)
 	return (void *)(page_ext) + page_table_check_ops.offset;
 }
 
-static inline bool pte_user_accessible_page(pte_t pte)
-{
-	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
-}
-
-static inline bool pmd_user_accessible_page(pmd_t pmd)
-{
-	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) &&
-		(pmd_val(pmd) & _PAGE_USER);
-}
-
-static inline bool pud_user_accessible_page(pud_t pud)
-{
-	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) &&
-		(pud_val(pud) & _PAGE_USER);
-}
-
 /*
  * An enty is removed from the page table, decrement the counters for that page
  * verify that it is of correct type and counters do not become negative.
-- 
2.25.1



* [PATCH -next v5 3/5] mm: page_table_check: add hooks to public helpers
  2022-04-21  8:20 ` Tong Tiangen
  (?)
@ 2022-04-21  8:20   ` Tong Tiangen
  -1 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-21  8:20 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Anshuman Khandual, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Tong Tiangen, Kefeng Wang, Guohanjun

Move ptep_clear() to include/linux/pgtable.h and add page table check
hooks to some generic helpers, in preparation for supporting the page
table check feature on new architectures.

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 arch/x86/include/asm/pgtable.h | 10 ----------
 include/linux/pgtable.h        | 26 ++++++++++++++++++--------
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 564abe42b0f7..51cd39858f81 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1073,16 +1073,6 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 	return pte;
 }
 
-#define __HAVE_ARCH_PTEP_CLEAR
-static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep)
-{
-	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
-		ptep_get_and_clear(mm, addr, ptep);
-	else
-		pte_clear(mm, addr, ptep);
-}
-
 #define __HAVE_ARCH_PTEP_SET_WRPROTECT
 static inline void ptep_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pte_t *ptep)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 49ab8ee2d6d7..10d2d91edf20 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -12,6 +12,7 @@
 #include <linux/bug.h>
 #include <linux/errno.h>
 #include <asm-generic/pgtable_uffd.h>
+#include <linux/page_table_check.h>
 
 #if 5 - defined(__PAGETABLE_P4D_FOLDED) - defined(__PAGETABLE_PUD_FOLDED) - \
 	defined(__PAGETABLE_PMD_FOLDED) != CONFIG_PGTABLE_LEVELS
@@ -272,14 +273,6 @@ static inline bool arch_has_hw_pte_young(void)
 }
 #endif
 
-#ifndef __HAVE_ARCH_PTEP_CLEAR
-static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep)
-{
-	pte_clear(mm, addr, ptep);
-}
-#endif
-
 #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address,
@@ -287,10 +280,22 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 {
 	pte_t pte = *ptep;
 	pte_clear(mm, address, ptep);
+	page_table_check_pte_clear(mm, address, pte);
 	return pte;
 }
 #endif
 
+#ifndef __HAVE_ARCH_PTEP_CLEAR
+static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
+			      pte_t *ptep)
+{
+	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
+		ptep_get_and_clear(mm, addr, ptep);
+	else
+		pte_clear(mm, addr, ptep);
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_GET
 static inline pte_t ptep_get(pte_t *ptep)
 {
@@ -360,7 +365,10 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					    pmd_t *pmdp)
 {
 	pmd_t pmd = *pmdp;
+
 	pmd_clear(pmdp);
+	page_table_check_pmd_clear(mm, address, pmd);
+
 	return pmd;
 }
 #endif /* __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR */
@@ -372,6 +380,8 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 	pud_t pud = *pudp;
 
 	pud_clear(pudp);
+	page_table_check_pud_clear(mm, address, pud);
+
 	return pud;
 }
 #endif /* __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH -next v5 3/5] mm: page_table_check: add hooks to public helpers
@ 2022-04-21  8:20   ` Tong Tiangen
  0 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-21  8:20 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Anshuman Khandual, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Tong Tiangen, Kefeng Wang, Guohanjun

Move ptep_clear() to the include/linux/pgtable.h and add page table check
relate hooks to some helpers, it's prepare for support page table check
feature on new architecture.

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 arch/x86/include/asm/pgtable.h | 10 ----------
 include/linux/pgtable.h        | 26 ++++++++++++++++++--------
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 564abe42b0f7..51cd39858f81 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1073,16 +1073,6 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 	return pte;
 }
 
-#define __HAVE_ARCH_PTEP_CLEAR
-static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep)
-{
-	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
-		ptep_get_and_clear(mm, addr, ptep);
-	else
-		pte_clear(mm, addr, ptep);
-}
-
 #define __HAVE_ARCH_PTEP_SET_WRPROTECT
 static inline void ptep_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pte_t *ptep)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 49ab8ee2d6d7..10d2d91edf20 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -12,6 +12,7 @@
 #include <linux/bug.h>
 #include <linux/errno.h>
 #include <asm-generic/pgtable_uffd.h>
+#include <linux/page_table_check.h>
 
 #if 5 - defined(__PAGETABLE_P4D_FOLDED) - defined(__PAGETABLE_PUD_FOLDED) - \
 	defined(__PAGETABLE_PMD_FOLDED) != CONFIG_PGTABLE_LEVELS
@@ -272,14 +273,6 @@ static inline bool arch_has_hw_pte_young(void)
 }
 #endif
 
-#ifndef __HAVE_ARCH_PTEP_CLEAR
-static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep)
-{
-	pte_clear(mm, addr, ptep);
-}
-#endif
-
 #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address,
@@ -287,10 +280,22 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 {
 	pte_t pte = *ptep;
 	pte_clear(mm, address, ptep);
+	page_table_check_pte_clear(mm, address, pte);
 	return pte;
 }
 #endif
 
+#ifndef __HAVE_ARCH_PTEP_CLEAR
+static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
+			      pte_t *ptep)
+{
+	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
+		ptep_get_and_clear(mm, addr, ptep);
+	else
+		pte_clear(mm, addr, ptep);
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_GET
 static inline pte_t ptep_get(pte_t *ptep)
 {
@@ -360,7 +365,10 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					    pmd_t *pmdp)
 {
 	pmd_t pmd = *pmdp;
+
 	pmd_clear(pmdp);
+	page_table_check_pmd_clear(mm, address, pmd);
+
 	return pmd;
 }
 #endif /* __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR */
@@ -372,6 +380,8 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 	pud_t pud = *pudp;
 
 	pud_clear(pudp);
+	page_table_check_pud_clear(mm, address, pud);
+
 	return pud;
 }
 #endif /* __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR */
-- 
2.25.1


_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH -next v5 3/5] mm: page_table_check: add hooks to public helpers
@ 2022-04-21  8:20   ` Tong Tiangen
  0 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-21  8:20 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Anshuman Khandual, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Tong Tiangen, Kefeng Wang, Guohanjun

Move ptep_clear() to include/linux/pgtable.h and add page table check
hooks to some helpers, in preparation for supporting the page table
check feature on new architectures.
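The dispatch this patch introduces in the generic header can be sketched
as a small userspace model (illustrative only: pte_t, the helpers, the
config macro and the call counter below are simplified stand-ins, not
the kernel definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's pte_t. */
typedef struct { uint64_t val; } pte_t;

#define CONFIG_PAGE_TABLE_CHECK 1
#define IS_ENABLED(x) (x)

static int check_hook_calls;	/* stands in for page_table_check_pte_clear() */

static void pte_clear(pte_t *ptep)
{
	ptep->val = 0;
}

/* Model of the hooked helper: it captures the old entry before
 * clearing, so the check hook can inspect what was unmapped. */
static pte_t ptep_get_and_clear(pte_t *ptep)
{
	pte_t pte = *ptep;

	pte_clear(ptep);
	check_hook_calls++;	/* page_table_check_pte_clear(mm, addr, pte) */
	return pte;
}

/* With the check enabled, ptep_clear() routes through the clearing
 * helper so the old entry value reaches the check hook; otherwise it
 * clears directly. */
static void ptep_clear(pte_t *ptep)
{
	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
		ptep_get_and_clear(ptep);
	else
		pte_clear(ptep);
}
```

The point of the indirection is that pte_clear() alone discards the old
entry, while the check needs it to update its per-page accounting.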

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 arch/x86/include/asm/pgtable.h | 10 ----------
 include/linux/pgtable.h        | 26 ++++++++++++++++++--------
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 564abe42b0f7..51cd39858f81 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1073,16 +1073,6 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 	return pte;
 }
 
-#define __HAVE_ARCH_PTEP_CLEAR
-static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep)
-{
-	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
-		ptep_get_and_clear(mm, addr, ptep);
-	else
-		pte_clear(mm, addr, ptep);
-}
-
 #define __HAVE_ARCH_PTEP_SET_WRPROTECT
 static inline void ptep_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pte_t *ptep)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 49ab8ee2d6d7..10d2d91edf20 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -12,6 +12,7 @@
 #include <linux/bug.h>
 #include <linux/errno.h>
 #include <asm-generic/pgtable_uffd.h>
+#include <linux/page_table_check.h>
 
 #if 5 - defined(__PAGETABLE_P4D_FOLDED) - defined(__PAGETABLE_PUD_FOLDED) - \
 	defined(__PAGETABLE_PMD_FOLDED) != CONFIG_PGTABLE_LEVELS
@@ -272,14 +273,6 @@ static inline bool arch_has_hw_pte_young(void)
 }
 #endif
 
-#ifndef __HAVE_ARCH_PTEP_CLEAR
-static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep)
-{
-	pte_clear(mm, addr, ptep);
-}
-#endif
-
 #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address,
@@ -287,10 +280,22 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 {
 	pte_t pte = *ptep;
 	pte_clear(mm, address, ptep);
+	page_table_check_pte_clear(mm, address, pte);
 	return pte;
 }
 #endif
 
+#ifndef __HAVE_ARCH_PTEP_CLEAR
+static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
+			      pte_t *ptep)
+{
+	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
+		ptep_get_and_clear(mm, addr, ptep);
+	else
+		pte_clear(mm, addr, ptep);
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_GET
 static inline pte_t ptep_get(pte_t *ptep)
 {
@@ -360,7 +365,10 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					    pmd_t *pmdp)
 {
 	pmd_t pmd = *pmdp;
+
 	pmd_clear(pmdp);
+	page_table_check_pmd_clear(mm, address, pmd);
+
 	return pmd;
 }
 #endif /* __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR */
@@ -372,6 +380,8 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 	pud_t pud = *pudp;
 
 	pud_clear(pudp);
+	page_table_check_pud_clear(mm, address, pud);
+
 	return pud;
 }
 #endif /* __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR */
-- 
2.25.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* [PATCH -next v5 4/5] arm64: mm: add support for page table check
  2022-04-21  8:20 ` Tong Tiangen
  (?)
@ 2022-04-21  8:20   ` Tong Tiangen
  -1 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-21  8:20 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Anshuman Khandual, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Tong Tiangen, Kefeng Wang, Guohanjun

From: Kefeng Wang <wangkefeng.wang@huawei.com>

As with commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
check"), add the necessary page table check hooks into routines that
modify user page tables.
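The user-accessibility test this patch adds for arm64 can be modeled in
userspace as follows (illustrative only: the bit positions match the
arm64 descriptor format, but pte_present() and the other helpers are
simplified stand-ins for the kernel ones, which also handle
PTE_PROT_NONE and similar cases):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PTE_VALID	(1ULL << 0)
#define PTE_USER	(1ULL << 6)	/* AP[1]: user read/write access */
#define PTE_UXN		(1ULL << 54)	/* user execute-never */

typedef uint64_t pteval_t;

static bool pte_present(pteval_t pte)   { return pte & PTE_VALID; }
static bool pte_user(pteval_t pte)      { return pte & PTE_USER; }
static bool pte_user_exec(pteval_t pte) { return !(pte & PTE_UXN); }

/* A PTE maps a user-accessible page when it is present and either
 * user readable/writable (PTE_USER) or user executable (UXN clear). */
static bool pte_user_accessible_page(pteval_t pte)
{
	return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
}
```

This is why the patch adds pte_user(): on arm64, user accessibility is
not a single bit but a combination of the access-permission and
execute-never attributes.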

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/pgtable.h | 65 +++++++++++++++++++++++++++++---
 2 files changed, 61 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 18a18a0e855d..c1509525ab8e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -92,6 +92,7 @@ config ARM64
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
 	select ARCH_SUPPORTS_NUMA_BALANCING
+	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
 	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT
 	select ARCH_WANT_DEFAULT_BPF_JIT
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 930077f7b572..9f8f97a7cc7c 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -33,6 +33,7 @@
 #include <linux/mmdebug.h>
 #include <linux/mm_types.h>
 #include <linux/sched.h>
+#include <linux/page_table_check.h>
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
@@ -96,6 +97,7 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
 #define pte_young(pte)		(!!(pte_val(pte) & PTE_AF))
 #define pte_special(pte)	(!!(pte_val(pte) & PTE_SPECIAL))
 #define pte_write(pte)		(!!(pte_val(pte) & PTE_WRITE))
+#define pte_user(pte)		(!!(pte_val(pte) & PTE_USER))
 #define pte_user_exec(pte)	(!(pte_val(pte) & PTE_UXN))
 #define pte_cont(pte)		(!!(pte_val(pte) & PTE_CONT))
 #define pte_devmap(pte)		(!!(pte_val(pte) & PTE_DEVMAP))
@@ -312,7 +314,7 @@ static inline void __check_racy_pte_update(struct mm_struct *mm, pte_t *ptep,
 		     __func__, pte_val(old_pte), pte_val(pte));
 }
 
-static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
+static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
 	if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte))
@@ -343,6 +345,13 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 	set_pte(ptep, pte);
 }
 
+static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
+			      pte_t *ptep, pte_t pte)
+{
+	page_table_check_pte_set(mm, addr, ptep, pte);
+	return __set_pte_at(mm, addr, ptep, pte);
+}
+
 /*
  * Huge pte definitions.
  */
@@ -454,6 +463,8 @@ static inline int pmd_trans_huge(pmd_t pmd)
 #define pmd_dirty(pmd)		pte_dirty(pmd_pte(pmd))
 #define pmd_young(pmd)		pte_young(pmd_pte(pmd))
 #define pmd_valid(pmd)		pte_valid(pmd_pte(pmd))
+#define pmd_user(pmd)		pte_user(pmd_pte(pmd))
+#define pmd_user_exec(pmd)	pte_user_exec(pmd_pte(pmd))
 #define pmd_cont(pmd)		pte_cont(pmd_pte(pmd))
 #define pmd_wrprotect(pmd)	pte_pmd(pte_wrprotect(pmd_pte(pmd)))
 #define pmd_mkold(pmd)		pte_pmd(pte_mkold(pmd_pte(pmd)))
@@ -501,8 +512,19 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
 #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
 #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
-#define set_pmd_at(mm, addr, pmdp, pmd)	set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
-#define set_pud_at(mm, addr, pudp, pud)	set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
+static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
+			      pmd_t *pmdp, pmd_t pmd)
+{
+	page_table_check_pmd_set(mm, addr, pmdp, pmd);
+	return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
+}
+
+static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
+			      pud_t *pudp, pud_t pud)
+{
+	page_table_check_pud_set(mm, addr, pudp, pud);
+	return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
+}
 
 #define __p4d_to_phys(p4d)	__pte_to_phys(p4d_pte(p4d))
 #define __phys_to_p4d_val(phys)	__phys_to_pte_val(phys)
@@ -643,6 +665,24 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 #define pud_present(pud)	pte_present(pud_pte(pud))
 #define pud_leaf(pud)		pud_sect(pud)
 #define pud_valid(pud)		pte_valid(pud_pte(pud))
+#define pud_user(pud)		pte_user(pud_pte(pud))
+
+#ifdef CONFIG_PAGE_TABLE_CHECK
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+	return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
+}
+
+static inline bool pmd_user_accessible_page(pmd_t pmd)
+{
+	return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
+}
+
+static inline bool pud_user_accessible_page(pud_t pud)
+{
+	return pud_present(pud) && pud_user(pud);
+}
+#endif
 
 static inline void set_pud(pud_t *pudp, pud_t pud)
 {
@@ -872,11 +912,21 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
+				       unsigned long address, pte_t *ptep)
+{
+	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
+}
+
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address, pte_t *ptep)
 {
-	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
+	pte_t pte = __ptep_get_and_clear(mm, address, ptep);
+
+	page_table_check_pte_clear(mm, address, pte);
+
+	return pte;
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -884,7 +934,11 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					    unsigned long address, pmd_t *pmdp)
 {
-	return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
+	pmd_t pmd = pte_pmd(__ptep_get_and_clear(mm, address, (pte_t *)pmdp));
+
+	page_table_check_pmd_clear(mm, address, pmd);
+
+	return pmd;
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
@@ -918,6 +972,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 		unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
+	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
 	return __pmd(xchg_relaxed(&pmd_val(*pmdp), pmd_val(pmd)));
 }
 #endif
-- 
2.25.1



* [PATCH -next v5 5/5] riscv: mm: add support for page table check
  2022-04-21  8:20 ` Tong Tiangen
  (?)
@ 2022-04-21  8:20   ` Tong Tiangen
  -1 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-21  8:20 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Pasha Tatashin, Anshuman Khandual, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Tong Tiangen, Kefeng Wang, Guohanjun

As with commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
check"), add the necessary page table check hooks into routines that
modify user page tables.
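The riscv accessibility checks this patch adds can be modeled in
userspace as follows (illustrative only: the bit positions match the
RISC-V Sv39/Sv48 PTE format, but the helpers are simplified stand-ins
for the kernel ones):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define _PAGE_PRESENT	(1UL << 0)
#define _PAGE_READ	(1UL << 1)
#define _PAGE_WRITE	(1UL << 2)
#define _PAGE_EXEC	(1UL << 3)
#define _PAGE_USER	(1UL << 4)
#define _PAGE_LEAF	(_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)

typedef uint64_t pteval_t;

static bool pte_present(pteval_t v) { return v & _PAGE_PRESENT; }
static bool pte_user(pteval_t v)    { return v & _PAGE_USER; }

/* A leaf entry maps memory directly: present with any of R/W/X set. */
static bool pmd_leaf(pteval_t v)
{
	return pte_present(v) && (v & _PAGE_LEAF);
}

/* PTEs need only be present and user-accessible; PMD/PUD entries must
 * additionally be leaves, since a non-leaf entry merely points at the
 * next table level rather than mapping a page. */
static bool pte_user_accessible_page(pteval_t v)
{
	return pte_present(v) && pte_user(v);
}

static bool pmd_user_accessible_page(pteval_t v)
{
	return pmd_leaf(v) && pte_user(v);
}
```

The leaf check is the key difference from the PTE case: at PMD/PUD
level, only huge-page (leaf) mappings expose memory to userspace.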

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 arch/riscv/Kconfig               |  1 +
 arch/riscv/include/asm/pgtable.h | 77 +++++++++++++++++++++++++++++---
 2 files changed, 72 insertions(+), 6 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 63f7258984f3..66d241cee52c 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -38,6 +38,7 @@ config RISCV
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
 	select ARCH_SUPPORTS_HUGETLBFS if MMU
+	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
 	select ARCH_USE_MEMTEST
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
 	select ARCH_WANT_FRAME_POINTERS
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 046b44225623..6f22d9580658 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -114,6 +114,8 @@
 #include <asm/pgtable-32.h>
 #endif /* CONFIG_64BIT */
 
+#include <linux/page_table_check.h>
+
 #ifdef CONFIG_XIP_KERNEL
 #define XIP_FIXUP(addr) ({							\
 	uintptr_t __a = (uintptr_t)(addr);					\
@@ -315,6 +317,11 @@ static inline int pte_exec(pte_t pte)
 	return pte_val(pte) & _PAGE_EXEC;
 }
 
+static inline int pte_user(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_USER;
+}
+
 static inline int pte_huge(pte_t pte)
 {
 	return pte_present(pte) && (pte_val(pte) & _PAGE_LEAF);
@@ -446,7 +453,7 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
 
 void flush_icache_pte(pte_t pte);
 
-static inline void set_pte_at(struct mm_struct *mm,
+static inline void __set_pte_at(struct mm_struct *mm,
 	unsigned long addr, pte_t *ptep, pte_t pteval)
 {
 	if (pte_present(pteval) && pte_exec(pteval))
@@ -455,10 +462,17 @@ static inline void set_pte_at(struct mm_struct *mm,
 	set_pte(ptep, pteval);
 }
 
+static inline void set_pte_at(struct mm_struct *mm,
+	unsigned long addr, pte_t *ptep, pte_t pteval)
+{
+	page_table_check_pte_set(mm, addr, ptep, pteval);
+	__set_pte_at(mm, addr, ptep, pteval);
+}
+
 static inline void pte_clear(struct mm_struct *mm,
 	unsigned long addr, pte_t *ptep)
 {
-	set_pte_at(mm, addr, ptep, __pte(0));
+	__set_pte_at(mm, addr, ptep, __pte(0));
 }
 
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
@@ -475,11 +489,21 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 	return true;
 }
 
+static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
+				       unsigned long address, pte_t *ptep)
+{
+	return __pte(atomic_long_xchg((atomic_long_t *)ptep, 0));
+}
+
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address, pte_t *ptep)
 {
-	return __pte(atomic_long_xchg((atomic_long_t *)ptep, 0));
+	pte_t pte = __ptep_get_and_clear(mm, address, ptep);
+
+	page_table_check_pte_clear(mm, address, pte);
+
+	return pte;
 }
 
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
@@ -546,6 +570,13 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
 	return ((__pmd_to_phys(pmd) & PMD_MASK) >> PAGE_SHIFT);
 }
 
+#define __pud_to_phys(pud)  (pud_val(pud) >> _PAGE_PFN_SHIFT << PAGE_SHIFT)
+
+static inline unsigned long pud_pfn(pud_t pud)
+{
+	return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT);
+}
+
 static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 {
 	return pte_pmd(pte_modify(pmd_pte(pmd), newprot));
@@ -567,6 +598,11 @@ static inline int pmd_young(pmd_t pmd)
 	return pte_young(pmd_pte(pmd));
 }
 
+static inline int pmd_user(pmd_t pmd)
+{
+	return pte_user(pmd_pte(pmd));
+}
+
 static inline pmd_t pmd_mkold(pmd_t pmd)
 {
 	return pte_pmd(pte_mkold(pmd_pte(pmd)));
@@ -600,15 +636,39 @@ static inline pmd_t pmd_mkdirty(pmd_t pmd)
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 				pmd_t *pmdp, pmd_t pmd)
 {
-	return set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
+	page_table_check_pmd_set(mm, addr, pmdp, pmd);
+	return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
+}
+
+static inline int pud_user(pud_t pud)
+{
+	return pte_user(pud_pte(pud));
 }
 
 static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
 				pud_t *pudp, pud_t pud)
 {
-	return set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
+	page_table_check_pud_set(mm, addr, pudp, pud);
+	return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
+}
+
+#ifdef CONFIG_PAGE_TABLE_CHECK
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+	return pte_present(pte) && pte_user(pte);
 }
 
+static inline bool pmd_user_accessible_page(pmd_t pmd)
+{
+	return pmd_leaf(pmd) && pmd_user(pmd);
+}
+
+static inline bool pud_user_accessible_page(pud_t pud)
+{
+	return pud_leaf(pud) && pud_user(pud);
+}
+#endif
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static inline int pmd_trans_huge(pmd_t pmd)
 {
@@ -634,7 +694,11 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					unsigned long address, pmd_t *pmdp)
 {
-	return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
+	pmd_t pmd = pte_pmd(__ptep_get_and_clear(mm, address, (pte_t *)pmdp));
+
+	page_table_check_pmd_clear(mm, address, pmd);
+
+	return pmd;
 }
 
 #define __HAVE_ARCH_PMDP_SET_WRPROTECT
@@ -648,6 +712,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 				unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
+	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
 	return __pmd(atomic_long_xchg((atomic_long_t *)pmdp, pmd_val(pmd)));
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-- 
2.25.1



* Re: [PATCH -next v5 1/5] mm: page_table_check: using PxD_SIZE instead of PxD_PAGE_SIZE
  2022-04-21  8:20   ` Tong Tiangen
  (?)
@ 2022-04-21 15:28     ` Pasha Tatashin
  -1 siblings, 0 replies; 60+ messages in thread
From: Pasha Tatashin @ 2022-04-21 15:28 UTC (permalink / raw)
  To: Tong Tiangen
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Anshuman Khandual, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou, LKML, linux-mm, Linux ARM, linux-riscv, Kefeng Wang,
	Guohanjun

On Thu, Apr 21, 2022 at 4:02 AM Tong Tiangen <tongtiangen@huawei.com> wrote:
>
> The macros PUD_SIZE/PMD_SIZE are defined across all architectures. Using
> PUD_SIZE/PMD_SIZE instead of PUD_PAGE_SIZE/PMD_PAGE_SIZE makes it easier to
> support page table check on architectures other than x86, and has no
> functional impact on x86.
>
> Suggested-by: Anshuman Khandual <anshuman.khandual@arm.com>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>

Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH -next v5 1/5] mm: page_table_check: using PxD_SIZE instead of PxD_PAGE_SIZE
  2022-04-21 15:28     ` Pasha Tatashin
  (?)
@ 2022-04-21 18:40       ` Pasha Tatashin
  -1 siblings, 0 replies; 60+ messages in thread
From: Pasha Tatashin @ 2022-04-21 18:40 UTC (permalink / raw)
  To: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, LKML,
	Anshuman Khandual, linux-mm, Paul Walmsley, Will Deacon,
	Albert Ou, Palmer Dabbelt, Linux ARM, Kefeng Wang, linux-riscv,
	Guohanjun

On 4/21/22 11:28, Pasha Tatashin wrote:
> On Thu, Apr 21, 2022 at 4:02 AM Tong Tiangen <tongtiangen@huawei.com> wrote:
>>
>> The macros PUD_SIZE/PMD_SIZE are defined across all architectures. Using
>> PUD_SIZE/PMD_SIZE instead of PUD_PAGE_SIZE/PMD_PAGE_SIZE makes it easier to
>> support page table check on architectures other than x86, and has no
>> functional impact on x86.
>>
>> Suggested-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> 
> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>


To avoid similar problems in the future, please also include the following patch after the current series:

----------------8<-------------[ cut here ]------------------
From cccef7ba2433f8e97d1948f85e3bfb2ef5d32a0a Mon Sep 17 00:00:00 2001
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Date: Thu, 21 Apr 2022 18:04:43 +0000
Subject: [PATCH] x86: remove P*D_PAGE_MASK and P*D_PAGE_SIZE

Other architectures and the common mm/ code use P*D_MASK and P*D_SIZE.
Remove the duplicated P*D_PAGE_MASK and P*D_PAGE_SIZE macros, which are
used only in x86/*.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 arch/x86/include/asm/page_types.h  | 12 +++---------
 arch/x86/kernel/amd_gart_64.c      |  2 +-
 arch/x86/kernel/head64.c           |  2 +-
 arch/x86/mm/mem_encrypt_boot.S     |  4 ++--
 arch/x86/mm/mem_encrypt_identity.c | 18 +++++++++---------
 arch/x86/mm/pat/set_memory.c       |  6 +++---
 arch/x86/mm/pti.c                  |  2 +-
 7 files changed, 20 insertions(+), 26 deletions(-)

diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
index a506a411474d..86bd4311daf8 100644
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -11,20 +11,14 @@
 #define PAGE_SIZE		(_AC(1,UL) << PAGE_SHIFT)
 #define PAGE_MASK		(~(PAGE_SIZE-1))
 
-#define PMD_PAGE_SIZE		(_AC(1, UL) << PMD_SHIFT)
-#define PMD_PAGE_MASK		(~(PMD_PAGE_SIZE-1))
-
-#define PUD_PAGE_SIZE		(_AC(1, UL) << PUD_SHIFT)
-#define PUD_PAGE_MASK		(~(PUD_PAGE_SIZE-1))
-
 #define __VIRTUAL_MASK		((1UL << __VIRTUAL_MASK_SHIFT) - 1)
 
-/* Cast *PAGE_MASK to a signed type so that it is sign-extended if
+/* Cast P*D_MASK to a signed type so that it is sign-extended if
    virtual addresses are 32-bits but physical addresses are larger
    (ie, 32-bit PAE). */
 #define PHYSICAL_PAGE_MASK	(((signed long)PAGE_MASK) & __PHYSICAL_MASK)
-#define PHYSICAL_PMD_PAGE_MASK	(((signed long)PMD_PAGE_MASK) & __PHYSICAL_MASK)
-#define PHYSICAL_PUD_PAGE_MASK	(((signed long)PUD_PAGE_MASK) & __PHYSICAL_MASK)
+#define PHYSICAL_PMD_PAGE_MASK	(((signed long)PMD_MASK) & __PHYSICAL_MASK)
+#define PHYSICAL_PUD_PAGE_MASK	(((signed long)PUD_MASK) & __PHYSICAL_MASK)
 
 #define HPAGE_SHIFT		PMD_SHIFT
 #define HPAGE_SIZE		(_AC(1,UL) << HPAGE_SHIFT)
diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index ed837383de5c..02579ea02351 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -506,7 +506,7 @@ static __init unsigned long check_iommu_size(unsigned long aper, u64 aper_size)
 	}
 
 	a = aper + iommu_size;
-	iommu_size -= round_up(a, PMD_PAGE_SIZE) - a;
+	iommu_size -= round_up(a, PMD_SIZE) - a;
 
 	if (iommu_size < 64*1024*1024) {
 		pr_warn("PCI-DMA: Warning: Small IOMMU %luMB."
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 4f5ecbbaae77..f11ca415e97c 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -189,7 +189,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
 	load_delta = physaddr - (unsigned long)(_text - __START_KERNEL_map);
 
 	/* Is the address not 2M aligned? */
-	if (load_delta & ~PMD_PAGE_MASK)
+	if (load_delta & ~PMD_MASK)
 		for (;;);
 
 	/* Activate Secure Memory Encryption (SME) if supported and enabled */
diff --git a/arch/x86/mm/mem_encrypt_boot.S b/arch/x86/mm/mem_encrypt_boot.S
index 3d1dba05fce4..640131736a19 100644
--- a/arch/x86/mm/mem_encrypt_boot.S
+++ b/arch/x86/mm/mem_encrypt_boot.S
@@ -26,7 +26,7 @@ SYM_FUNC_START(sme_encrypt_execute)
 	 *   RCX - virtual address of the encryption workarea, including:
 	 *     - stack page (PAGE_SIZE)
 	 *     - encryption routine page (PAGE_SIZE)
-	 *     - intermediate copy buffer (PMD_PAGE_SIZE)
+	 *     - intermediate copy buffer (PMD_SIZE)
 	 *    R8 - physical address of the pagetables to use for encryption
 	 */
 
@@ -120,7 +120,7 @@ SYM_FUNC_START(__enc_copy)
 	wbinvd				/* Invalidate any cache entries */
 
 	/* Copy/encrypt up to 2MB at a time */
-	movq	$PMD_PAGE_SIZE, %r12
+	movq	$PMD_SIZE, %r12
 1:
 	cmpq	%r12, %r9
 	jnb	2f
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index b43bc24d2bb6..357039a38547 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -92,7 +92,7 @@ struct sme_populate_pgd_data {
  * section is 2MB aligned to allow for simple pagetable setup using only
  * PMD entries (see vmlinux.lds.S).
  */
-static char sme_workarea[2 * PMD_PAGE_SIZE] __section(".init.scratch");
+static char sme_workarea[2 * PMD_SIZE] __section(".init.scratch");
 
 static char sme_cmdline_arg[] __initdata = "mem_encrypt";
 static char sme_cmdline_on[]  __initdata = "on";
@@ -197,8 +197,8 @@ static void __init __sme_map_range_pmd(struct sme_populate_pgd_data *ppd)
 	while (ppd->vaddr < ppd->vaddr_end) {
 		sme_populate_pgd_large(ppd);
 
-		ppd->vaddr += PMD_PAGE_SIZE;
-		ppd->paddr += PMD_PAGE_SIZE;
+		ppd->vaddr += PMD_SIZE;
+		ppd->paddr += PMD_SIZE;
 	}
 }
 
@@ -224,11 +224,11 @@ static void __init __sme_map_range(struct sme_populate_pgd_data *ppd,
 	vaddr_end = ppd->vaddr_end;
 
 	/* If start is not 2MB aligned, create PTE entries */
-	ppd->vaddr_end = ALIGN(ppd->vaddr, PMD_PAGE_SIZE);
+	ppd->vaddr_end = ALIGN(ppd->vaddr, PMD_SIZE);
 	__sme_map_range_pte(ppd);
 
 	/* Create PMD entries */
-	ppd->vaddr_end = vaddr_end & PMD_PAGE_MASK;
+	ppd->vaddr_end = vaddr_end & PMD_MASK;
 	__sme_map_range_pmd(ppd);
 
 	/* If end is not 2MB aligned, create PTE entries */
@@ -324,7 +324,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
 
 	/* Physical addresses gives us the identity mapped virtual addresses */
 	kernel_start = __pa_symbol(_text);
-	kernel_end = ALIGN(__pa_symbol(_end), PMD_PAGE_SIZE);
+	kernel_end = ALIGN(__pa_symbol(_end), PMD_SIZE);
 	kernel_len = kernel_end - kernel_start;
 
 	initrd_start = 0;
@@ -354,12 +354,12 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
 	 *   executable encryption area size:
 	 *     stack page (PAGE_SIZE)
 	 *     encryption routine page (PAGE_SIZE)
-	 *     intermediate copy buffer (PMD_PAGE_SIZE)
+	 *     intermediate copy buffer (PMD_SIZE)
 	 *   pagetable structures for the encryption of the kernel
 	 *   pagetable structures for workarea (in case not currently mapped)
 	 */
 	execute_start = workarea_start;
-	execute_end = execute_start + (PAGE_SIZE * 2) + PMD_PAGE_SIZE;
+	execute_end = execute_start + (PAGE_SIZE * 2) + PMD_SIZE;
 	execute_len = execute_end - execute_start;
 
 	/*
@@ -382,7 +382,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
 	 * before it is mapped.
 	 */
 	workarea_len = execute_len + pgtable_area_len;
-	workarea_end = ALIGN(workarea_start + workarea_len, PMD_PAGE_SIZE);
+	workarea_end = ALIGN(workarea_start + workarea_len, PMD_SIZE);
 
 	/*
 	 * Set the address to the start of where newly created pagetable
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index abf5ed76e4b7..8016d93c1288 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -714,11 +714,11 @@ phys_addr_t slow_virt_to_phys(void *__virt_addr)
 	switch (level) {
 	case PG_LEVEL_1G:
 		phys_addr = (phys_addr_t)pud_pfn(*(pud_t *)pte) << PAGE_SHIFT;
-		offset = virt_addr & ~PUD_PAGE_MASK;
+		offset = virt_addr & ~PUD_MASK;
 		break;
 	case PG_LEVEL_2M:
 		phys_addr = (phys_addr_t)pmd_pfn(*(pmd_t *)pte) << PAGE_SHIFT;
-		offset = virt_addr & ~PMD_PAGE_MASK;
+		offset = virt_addr & ~PMD_MASK;
 		break;
 	default:
 		phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
@@ -1006,7 +1006,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 	case PG_LEVEL_1G:
 		ref_prot = pud_pgprot(*(pud_t *)kpte);
 		ref_pfn = pud_pfn(*(pud_t *)kpte);
-		pfninc = PMD_PAGE_SIZE >> PAGE_SHIFT;
+		pfninc = PMD_SIZE >> PAGE_SHIFT;
 		lpaddr = address & PUD_MASK;
 		lpinc = PMD_SIZE;
 		/*
diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 5d5c7bb50ce9..a28c8d57273a 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -592,7 +592,7 @@ static void pti_set_kernel_image_nonglobal(void)
 	 * of the image.
 	 */
 	unsigned long start = PFN_ALIGN(_text);
-	unsigned long end = ALIGN((unsigned long)_end, PMD_PAGE_SIZE);
+	unsigned long end = ALIGN((unsigned long)_end, PMD_SIZE);
 
 	/*
 	 * This clears _PAGE_GLOBAL from the entire kernel image.
-- 
2.36.0.rc2.479.g8af0fa9b8e-goog

^ permalink raw reply related	[flat|nested] 60+ messages in thread

-		offset = virt_addr & ~PMD_PAGE_MASK;
+		offset = virt_addr & ~PMD_MASK;
 		break;
 	default:
 		phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
@@ -1006,7 +1006,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 	case PG_LEVEL_1G:
 		ref_prot = pud_pgprot(*(pud_t *)kpte);
 		ref_pfn = pud_pfn(*(pud_t *)kpte);
-		pfninc = PMD_PAGE_SIZE >> PAGE_SHIFT;
+		pfninc = PMD_SIZE >> PAGE_SHIFT;
 		lpaddr = address & PUD_MASK;
 		lpinc = PMD_SIZE;
 		/*
diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 5d5c7bb50ce9..a28c8d57273a 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -592,7 +592,7 @@ static void pti_set_kernel_image_nonglobal(void)
 	 * of the image.
 	 */
 	unsigned long start = PFN_ALIGN(_text);
-	unsigned long end = ALIGN((unsigned long)_end, PMD_PAGE_SIZE);
+	unsigned long end = ALIGN((unsigned long)_end, PMD_SIZE);
 
 	/*
 	 * This clears _PAGE_GLOBAL from the entire kernel image.
-- 
2.36.0.rc2.479.g8af0fa9b8e-goog

_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* Re: [PATCH -next v5 1/5] mm: page_table_check: using PxD_SIZE instead of PxD_PAGE_SIZE
@ 2022-04-21 18:40       ` Pasha Tatashin
  0 siblings, 0 replies; 60+ messages in thread
From: Pasha Tatashin @ 2022-04-21 18:40 UTC (permalink / raw)
  To: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, LKML,
	Anshuman Khandual, linux-mm, Paul Walmsley, Will Deacon,
	Albert Ou, Palmer Dabbelt, Linux ARM, Kefeng Wang, linux-riscv,
	Guohanjun

On 4/21/22 11:28, Pasha Tatashin wrote:
> On Thu, Apr 21, 2022 at 4:02 AM Tong Tiangen <tongtiangen@huawei.com> wrote:
>>
>> Macro PUD_SIZE/PMD_SIZE is more general in various architectures. Using
>> PUD_SIZE/PMD_SIZE instead of PUD_PAGE_SIZE/PMD_PAGE_SIZE can better
>> support page table check in architectures other than x86 and it is no
>> functional impact on x86.
>>
>> Suggested-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> 
> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>


To avoid similar problems in the future, please also include the following patch after the current series:

----------------8<-------------[ cut here ]------------------
From cccef7ba2433f8e97d1948f85e3bfb2ef5d32a0a Mon Sep 17 00:00:00 2001
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Date: Thu, 21 Apr 2022 18:04:43 +0000
Subject: [PATCH] x86: removed P*D_PAGE_MASK and P*D_PAGE_SIZE

Other architectures and the common mm/ use P*D_MASK and P*D_SIZE.
Remove the duplicated P*D_PAGE_MASK and P*D_PAGE_SIZE, which are only
used in x86/*.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 arch/x86/include/asm/page_types.h  | 12 +++---------
 arch/x86/kernel/amd_gart_64.c      |  2 +-
 arch/x86/kernel/head64.c           |  2 +-
 arch/x86/mm/mem_encrypt_boot.S     |  4 ++--
 arch/x86/mm/mem_encrypt_identity.c | 18 +++++++++---------
 arch/x86/mm/pat/set_memory.c       |  6 +++---
 arch/x86/mm/pti.c                  |  2 +-
 7 files changed, 20 insertions(+), 26 deletions(-)

diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
index a506a411474d..86bd4311daf8 100644
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -11,20 +11,14 @@
 #define PAGE_SIZE		(_AC(1,UL) << PAGE_SHIFT)
 #define PAGE_MASK		(~(PAGE_SIZE-1))
 
-#define PMD_PAGE_SIZE		(_AC(1, UL) << PMD_SHIFT)
-#define PMD_PAGE_MASK		(~(PMD_PAGE_SIZE-1))
-
-#define PUD_PAGE_SIZE		(_AC(1, UL) << PUD_SHIFT)
-#define PUD_PAGE_MASK		(~(PUD_PAGE_SIZE-1))
-
 #define __VIRTUAL_MASK		((1UL << __VIRTUAL_MASK_SHIFT) - 1)
 
-/* Cast *PAGE_MASK to a signed type so that it is sign-extended if
+/* Cast P*D_MASK to a signed type so that it is sign-extended if
    virtual addresses are 32-bits but physical addresses are larger
    (ie, 32-bit PAE). */
 #define PHYSICAL_PAGE_MASK	(((signed long)PAGE_MASK) & __PHYSICAL_MASK)
-#define PHYSICAL_PMD_PAGE_MASK	(((signed long)PMD_PAGE_MASK) & __PHYSICAL_MASK)
-#define PHYSICAL_PUD_PAGE_MASK	(((signed long)PUD_PAGE_MASK) & __PHYSICAL_MASK)
+#define PHYSICAL_PMD_PAGE_MASK	(((signed long)PMD_MASK) & __PHYSICAL_MASK)
+#define PHYSICAL_PUD_PAGE_MASK	(((signed long)PUD_MASK) & __PHYSICAL_MASK)
 
 #define HPAGE_SHIFT		PMD_SHIFT
 #define HPAGE_SIZE		(_AC(1,UL) << HPAGE_SHIFT)
diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index ed837383de5c..02579ea02351 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -506,7 +506,7 @@ static __init unsigned long check_iommu_size(unsigned long aper, u64 aper_size)
 	}
 
 	a = aper + iommu_size;
-	iommu_size -= round_up(a, PMD_PAGE_SIZE) - a;
+	iommu_size -= round_up(a, PMD_SIZE) - a;
 
 	if (iommu_size < 64*1024*1024) {
 		pr_warn("PCI-DMA: Warning: Small IOMMU %luMB."
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 4f5ecbbaae77..f11ca415e97c 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -189,7 +189,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
 	load_delta = physaddr - (unsigned long)(_text - __START_KERNEL_map);
 
 	/* Is the address not 2M aligned? */
-	if (load_delta & ~PMD_PAGE_MASK)
+	if (load_delta & ~PMD_MASK)
 		for (;;);
 
 	/* Activate Secure Memory Encryption (SME) if supported and enabled */
diff --git a/arch/x86/mm/mem_encrypt_boot.S b/arch/x86/mm/mem_encrypt_boot.S
index 3d1dba05fce4..640131736a19 100644
--- a/arch/x86/mm/mem_encrypt_boot.S
+++ b/arch/x86/mm/mem_encrypt_boot.S
@@ -26,7 +26,7 @@ SYM_FUNC_START(sme_encrypt_execute)
 	 *   RCX - virtual address of the encryption workarea, including:
 	 *     - stack page (PAGE_SIZE)
 	 *     - encryption routine page (PAGE_SIZE)
-	 *     - intermediate copy buffer (PMD_PAGE_SIZE)
+	 *     - intermediate copy buffer (PMD_SIZE)
 	 *    R8 - physical address of the pagetables to use for encryption
 	 */
 
@@ -120,7 +120,7 @@ SYM_FUNC_START(__enc_copy)
 	wbinvd				/* Invalidate any cache entries */
 
 	/* Copy/encrypt up to 2MB at a time */
-	movq	$PMD_PAGE_SIZE, %r12
+	movq	$PMD_SIZE, %r12
 1:
 	cmpq	%r12, %r9
 	jnb	2f
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index b43bc24d2bb6..357039a38547 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -92,7 +92,7 @@ struct sme_populate_pgd_data {
  * section is 2MB aligned to allow for simple pagetable setup using only
  * PMD entries (see vmlinux.lds.S).
  */
-static char sme_workarea[2 * PMD_PAGE_SIZE] __section(".init.scratch");
+static char sme_workarea[2 * PMD_SIZE] __section(".init.scratch");
 
 static char sme_cmdline_arg[] __initdata = "mem_encrypt";
 static char sme_cmdline_on[]  __initdata = "on";
@@ -197,8 +197,8 @@ static void __init __sme_map_range_pmd(struct sme_populate_pgd_data *ppd)
 	while (ppd->vaddr < ppd->vaddr_end) {
 		sme_populate_pgd_large(ppd);
 
-		ppd->vaddr += PMD_PAGE_SIZE;
-		ppd->paddr += PMD_PAGE_SIZE;
+		ppd->vaddr += PMD_SIZE;
+		ppd->paddr += PMD_SIZE;
 	}
 }
 
@@ -224,11 +224,11 @@ static void __init __sme_map_range(struct sme_populate_pgd_data *ppd,
 	vaddr_end = ppd->vaddr_end;
 
 	/* If start is not 2MB aligned, create PTE entries */
-	ppd->vaddr_end = ALIGN(ppd->vaddr, PMD_PAGE_SIZE);
+	ppd->vaddr_end = ALIGN(ppd->vaddr, PMD_SIZE);
 	__sme_map_range_pte(ppd);
 
 	/* Create PMD entries */
-	ppd->vaddr_end = vaddr_end & PMD_PAGE_MASK;
+	ppd->vaddr_end = vaddr_end & PMD_MASK;
 	__sme_map_range_pmd(ppd);
 
 	/* If end is not 2MB aligned, create PTE entries */
@@ -324,7 +324,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
 
 	/* Physical addresses gives us the identity mapped virtual addresses */
 	kernel_start = __pa_symbol(_text);
-	kernel_end = ALIGN(__pa_symbol(_end), PMD_PAGE_SIZE);
+	kernel_end = ALIGN(__pa_symbol(_end), PMD_SIZE);
 	kernel_len = kernel_end - kernel_start;
 
 	initrd_start = 0;
@@ -354,12 +354,12 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
 	 *   executable encryption area size:
 	 *     stack page (PAGE_SIZE)
 	 *     encryption routine page (PAGE_SIZE)
-	 *     intermediate copy buffer (PMD_PAGE_SIZE)
+	 *     intermediate copy buffer (PMD_SIZE)
 	 *   pagetable structures for the encryption of the kernel
 	 *   pagetable structures for workarea (in case not currently mapped)
 	 */
 	execute_start = workarea_start;
-	execute_end = execute_start + (PAGE_SIZE * 2) + PMD_PAGE_SIZE;
+	execute_end = execute_start + (PAGE_SIZE * 2) + PMD_SIZE;
 	execute_len = execute_end - execute_start;
 
 	/*
@@ -382,7 +382,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
 	 * before it is mapped.
 	 */
 	workarea_len = execute_len + pgtable_area_len;
-	workarea_end = ALIGN(workarea_start + workarea_len, PMD_PAGE_SIZE);
+	workarea_end = ALIGN(workarea_start + workarea_len, PMD_SIZE);
 
 	/*
 	 * Set the address to the start of where newly created pagetable
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index abf5ed76e4b7..8016d93c1288 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -714,11 +714,11 @@ phys_addr_t slow_virt_to_phys(void *__virt_addr)
 	switch (level) {
 	case PG_LEVEL_1G:
 		phys_addr = (phys_addr_t)pud_pfn(*(pud_t *)pte) << PAGE_SHIFT;
-		offset = virt_addr & ~PUD_PAGE_MASK;
+		offset = virt_addr & ~PUD_MASK;
 		break;
 	case PG_LEVEL_2M:
 		phys_addr = (phys_addr_t)pmd_pfn(*(pmd_t *)pte) << PAGE_SHIFT;
-		offset = virt_addr & ~PMD_PAGE_MASK;
+		offset = virt_addr & ~PMD_MASK;
 		break;
 	default:
 		phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
@@ -1006,7 +1006,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 	case PG_LEVEL_1G:
 		ref_prot = pud_pgprot(*(pud_t *)kpte);
 		ref_pfn = pud_pfn(*(pud_t *)kpte);
-		pfninc = PMD_PAGE_SIZE >> PAGE_SHIFT;
+		pfninc = PMD_SIZE >> PAGE_SHIFT;
 		lpaddr = address & PUD_MASK;
 		lpinc = PMD_SIZE;
 		/*
diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 5d5c7bb50ce9..a28c8d57273a 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -592,7 +592,7 @@ static void pti_set_kernel_image_nonglobal(void)
 	 * of the image.
 	 */
 	unsigned long start = PFN_ALIGN(_text);
-	unsigned long end = ALIGN((unsigned long)_end, PMD_PAGE_SIZE);
+	unsigned long end = ALIGN((unsigned long)_end, PMD_SIZE);
 
 	/*
 	 * This clears _PAGE_GLOBAL from the entire kernel image.
-- 
2.36.0.rc2.479.g8af0fa9b8e-goog

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* Re: [PATCH -next v5 1/5] mm: page_table_check: using PxD_SIZE instead of PxD_PAGE_SIZE
  2022-04-21  8:20   ` Tong Tiangen
  (?)
@ 2022-04-22  4:41     ` Anshuman Khandual
  -1 siblings, 0 replies; 60+ messages in thread
From: Anshuman Khandual @ 2022-04-22  4:41 UTC (permalink / raw)
  To: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



On 4/21/22 13:50, Tong Tiangen wrote:
> Macro PUD_SIZE/PMD_SIZE is more general in various architectures. Using
> PUD_SIZE/PMD_SIZE instead of PUD_PAGE_SIZE/PMD_PAGE_SIZE can better
> support page table check in architectures other than x86 and it is no
> functional impact on x86.
> 
> Suggested-by: Anshuman Khandual <anshuman.khandual@arm.com>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>

There are multiple structural problems in the commit message wording,
but I will leave them up to Andrew, if he could fix them while merging.

Otherwise LGTM

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

> ---
>  mm/page_table_check.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/page_table_check.c b/mm/page_table_check.c
> index 2458281bff89..eb0d0b71cdf6 100644
> --- a/mm/page_table_check.c
> +++ b/mm/page_table_check.c
> @@ -177,7 +177,7 @@ void __page_table_check_pmd_clear(struct mm_struct *mm, unsigned long addr,
>  
>  	if (pmd_user_accessible_page(pmd)) {
>  		page_table_check_clear(mm, addr, pmd_pfn(pmd),
> -				       PMD_PAGE_SIZE >> PAGE_SHIFT);
> +				       PMD_SIZE >> PAGE_SHIFT);
>  	}
>  }
>  EXPORT_SYMBOL(__page_table_check_pmd_clear);
> @@ -190,7 +190,7 @@ void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
>  
>  	if (pud_user_accessible_page(pud)) {
>  		page_table_check_clear(mm, addr, pud_pfn(pud),
> -				       PUD_PAGE_SIZE >> PAGE_SHIFT);
> +				       PUD_SIZE >> PAGE_SHIFT);
>  	}
>  }
>  EXPORT_SYMBOL(__page_table_check_pud_clear);
> @@ -219,7 +219,7 @@ void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
>  	__page_table_check_pmd_clear(mm, addr, *pmdp);
>  	if (pmd_user_accessible_page(pmd)) {
>  		page_table_check_set(mm, addr, pmd_pfn(pmd),
> -				     PMD_PAGE_SIZE >> PAGE_SHIFT,
> +				     PMD_SIZE >> PAGE_SHIFT,
>  				     pmd_write(pmd));
>  	}
>  }
> @@ -234,7 +234,7 @@ void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr,
>  	__page_table_check_pud_clear(mm, addr, *pudp);
>  	if (pud_user_accessible_page(pud)) {
>  		page_table_check_set(mm, addr, pud_pfn(pud),
> -				     PUD_PAGE_SIZE >> PAGE_SHIFT,
> +				     PUD_SIZE >> PAGE_SHIFT,
>  				     pud_write(pud));
>  	}
>  }

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH -next v5 1/5] mm: page_table_check: using PxD_SIZE instead of PxD_PAGE_SIZE
  2022-04-21 18:40       ` Pasha Tatashin
  (?)
@ 2022-04-22  4:46         ` Anshuman Khandual
  -1 siblings, 0 replies; 60+ messages in thread
From: Anshuman Khandual @ 2022-04-22  4:46 UTC (permalink / raw)
  To: Pasha Tatashin, Tong Tiangen, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Andrew Morton, Catalin Marinas, LKML, linux-mm,
	Paul Walmsley, Will Deacon, Albert Ou, Palmer Dabbelt, Linux ARM,
	Kefeng Wang, linux-riscv, Guohanjun



On 4/22/22 00:10, Pasha Tatashin wrote:
> On 4/21/22 11:28, Pasha Tatashin wrote:
>> On Thu, Apr 21, 2022 at 4:02 AM Tong Tiangen <tongtiangen@huawei.com> wrote:
>>> Macro PUD_SIZE/PMD_SIZE is more general in various architectures. Using
>>> PUD_SIZE/PMD_SIZE instead of PUD_PAGE_SIZE/PMD_PAGE_SIZE can better
>>> support page table check in architectures other than x86 and it is no
>>> functional impact on x86.
>>>
>>> Suggested-by: Anshuman Khandual <anshuman.khandual@arm.com>
>>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> 
> To avoid similar problems in the future, please also include the following patch after the current series:
> 
> ----------------8<-------------[ cut here ]------------------
> From cccef7ba2433f8e97d1948f85e3bfb2ef5d32a0a Mon Sep 17 00:00:00 2001
> From: Pasha Tatashin <pasha.tatashin@soleen.com>
> Date: Thu, 21 Apr 2022 18:04:43 +0000
> Subject: [PATCH] x86: removed P*D_PAGE_MASK and P*D_PAGE_SIZE
> 
> Other architectures and the common mm/ use P*D_MASK, and P*D_SIZE.
> Remove the duplicated P*D_PAGE_MASK and P*D_PAGE_SIZE which are only
> used in x86/*.
> 
> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>

Absolutely, this helps in minimizing arch-specific stuff wrt page table mapping.

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH -next v5 2/5] mm: page_table_check: move pxx_user_accessible_page into x86
  2022-04-21  8:20   ` Tong Tiangen
  (?)
@ 2022-04-22  5:11     ` Anshuman Khandual
  -1 siblings, 0 replies; 60+ messages in thread
From: Anshuman Khandual @ 2022-04-22  5:11 UTC (permalink / raw)
  To: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun

Similar to previous commits on the same file, the following subject
line format would have been preferred.

mm/page_table_check: <description>

On 4/21/22 13:50, Tong Tiangen wrote:
> From: Kefeng Wang <wangkefeng.wang@huawei.com>
> 
> The pxx_user_accessible_page() check the PTE bit, it's

s/check/checks			 ^^^^

> architecture-specific code, move them into x86's pgtable.h

The commit message should have been clearer, at least complete in
sentences. I don't want to be bike-shedding here, but this is definitely
incomplete. These helpers are being moved out to make the page table
check framework platform-independent. Hence the commit message should
mention this.

> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> ---
>  arch/x86/include/asm/pgtable.h | 19 +++++++++++++++++++
>  mm/page_table_check.c          | 17 -----------------
>  2 files changed, 19 insertions(+), 17 deletions(-)
> 
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index b7464f13e416..564abe42b0f7 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -1447,6 +1447,25 @@ static inline bool arch_has_hw_pte_young(void)
>  	return true;
>  }
>  
> +#ifdef CONFIG_PAGE_TABLE_CHECK
> +static inline bool pte_user_accessible_page(pte_t pte)
> +{
> +	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
> +}
> +
> +static inline bool pmd_user_accessible_page(pmd_t pmd)
> +{
> +	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) &&
> +		(pmd_val(pmd) & _PAGE_USER);
> +}
> +
> +static inline bool pud_user_accessible_page(pud_t pud)
> +{
> +	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) &&
> +		(pud_val(pud) & _PAGE_USER);

A line break is not really required here (and above as well). A single
complete line would still be within 100 characters.

> +}
> +#endif
> +
>  #endif	/* __ASSEMBLY__ */
>  
>  #endif /* _ASM_X86_PGTABLE_H */
> diff --git a/mm/page_table_check.c b/mm/page_table_check.c
> index eb0d0b71cdf6..3692bea2ea2c 100644
> --- a/mm/page_table_check.c
> +++ b/mm/page_table_check.c
> @@ -52,23 +52,6 @@ static struct page_table_check *get_page_table_check(struct page_ext *page_ext)
>  	return (void *)(page_ext) + page_table_check_ops.offset;
>  }
>  
> -static inline bool pte_user_accessible_page(pte_t pte)
> -{
> -	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
> -}
> -
> -static inline bool pmd_user_accessible_page(pmd_t pmd)
> -{
> -	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) &&
> -		(pmd_val(pmd) & _PAGE_USER);
> -}
> -
> -static inline bool pud_user_accessible_page(pud_t pud)
> -{
> -	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) &&
> -		(pud_val(pud) & _PAGE_USER);
> -}
> -
>  /*
>   * An enty is removed from the page table, decrement the counters for that page
>   * verify that it is of correct type and counters do not become negative.

With above mentioned code cleanup and commit message changes in place.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>


* Re: [PATCH -next v5 3/5] mm: page_table_check: add hooks to public helpers
  2022-04-21  8:20   ` Tong Tiangen
@ 2022-04-22  6:05     ` Anshuman Khandual
  -1 siblings, 0 replies; 60+ messages in thread
From: Anshuman Khandual @ 2022-04-22  6:05 UTC (permalink / raw)
  To: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



On 4/21/22 13:50, Tong Tiangen wrote:
> Move ptep_clear() to the include/linux/pgtable.h and add page table check
> relate hooks to some helpers, it's prepare for support page table check
> feature on new architecture.

Could instrumenting the generic page table helpers (the fallback instances
used when the corresponding __HAVE_ARCH_XXX is not defined on a platform)
add the page table check hooks into paths on platforms which have not
subscribed to ARCH_SUPPORTS_PAGE_TABLE_CHECK in the first place? These
hooks do have !CONFIG_PAGE_TABLE_CHECK fallback stubs in the header,
though, so a build problem is avoided.

> 
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> ---
>  arch/x86/include/asm/pgtable.h | 10 ----------
>  include/linux/pgtable.h        | 26 ++++++++++++++++++--------
>  2 files changed, 18 insertions(+), 18 deletions(-)
> 
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index 564abe42b0f7..51cd39858f81 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -1073,16 +1073,6 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
>  	return pte;
>  }
>  
> -#define __HAVE_ARCH_PTEP_CLEAR

AFAICS x86 is the only platform subscribing to __HAVE_ARCH_PTEP_CLEAR.
Hence if this is getting dropped in favour of a generic ptep_clear(), there
is no need to add back the #ifndef __HAVE_ARCH_PTEP_CLEAR construct. Is the
generic ptep_clear() then the only definition for all platforms?

Also, if this patch is trying to drop __HAVE_ARCH_PTEP_CLEAR along with
other page table check related changes, that needs to be done via a
separate patch instead.

> -static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
> -			      pte_t *ptep)
> -{
> -	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
> -		ptep_get_and_clear(mm, addr, ptep);
> -	else
> -		pte_clear(mm, addr, ptep);
> -}
> -
>  #define __HAVE_ARCH_PTEP_SET_WRPROTECT
>  static inline void ptep_set_wrprotect(struct mm_struct *mm,
>  				      unsigned long addr, pte_t *ptep)
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 49ab8ee2d6d7..10d2d91edf20 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -12,6 +12,7 @@
>  #include <linux/bug.h>
>  #include <linux/errno.h>
>  #include <asm-generic/pgtable_uffd.h>
> +#include <linux/page_table_check.h>
>  
>  #if 5 - defined(__PAGETABLE_P4D_FOLDED) - defined(__PAGETABLE_PUD_FOLDED) - \
>  	defined(__PAGETABLE_PMD_FOLDED) != CONFIG_PGTABLE_LEVELS
> @@ -272,14 +273,6 @@ static inline bool arch_has_hw_pte_young(void)
>  }
>  #endif
>  
> -#ifndef __HAVE_ARCH_PTEP_CLEAR
> -static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
> -			      pte_t *ptep)
> -{
> -	pte_clear(mm, addr, ptep);
> -}
> -#endif
> -
>  #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
>  static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  				       unsigned long address,
> @@ -287,10 +280,22 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  {
>  	pte_t pte = *ptep;
>  	pte_clear(mm, address, ptep);
> +	page_table_check_pte_clear(mm, address, pte);
>  	return pte;
>  }
>  #endif
>  
> +#ifndef __HAVE_ARCH_PTEP_CLEAR
> +static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
> +			      pte_t *ptep)
> +{
> +	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
> +		ptep_get_and_clear(mm, addr, ptep);
> +	else
> +		pte_clear(mm, addr, ptep);

Could this not be reworked to avoid IS_ENABLED()? This is confusing. If the
page table check hooks can be added to all potential page table paths via
generic helpers, irrespective of the CONFIG_PAGE_TABLE_CHECK option, there
is no rationale for doing an IS_ENABLED() check here.

> +}
> +#endif
> +
>  #ifndef __HAVE_ARCH_PTEP_GET
>  static inline pte_t ptep_get(pte_t *ptep)
>  {
> @@ -360,7 +365,10 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>  					    pmd_t *pmdp)
>  {
>  	pmd_t pmd = *pmdp;
> +
>  	pmd_clear(pmdp);
> +	page_table_check_pmd_clear(mm, address, pmd);
> +
>  	return pmd;
>  }
>  #endif /* __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR */
> @@ -372,6 +380,8 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
>  	pud_t pud = *pudp;
>  
>  	pud_clear(pudp);
> +	page_table_check_pud_clear(mm, address, pud);
> +
>  	return pud;
>  }
>  #endif /* __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR */


* Re: [PATCH -next v5 2/5] mm: page_table_check: move pxx_user_accessible_page into x86
  2022-04-22  5:11     ` Anshuman Khandual
@ 2022-04-22  6:30       ` Tong Tiangen
  -1 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-22  6:30 UTC (permalink / raw)
  To: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



On 2022/4/22 13:11, Anshuman Khandual wrote:
> Similar to previous commits on the same file, the following subject
> line format would have been preferred.
> 
> mm/page_table_check: <description>
> 
> On 4/21/22 13:50, Tong Tiangen wrote:
>> From: Kefeng Wang <wangkefeng.wang@huawei.com>
>>
>> The pxx_user_accessible_page() check the PTE bit, it's
> 
> s/check/checks			 ^^^^
> 
>> architecture-specific code, move them into x86's pgtable.h
> The commit message should have been clearer, at least complete in
> sentences. I don't want to be bike-shedding here but this is definitely
> incomplete. These helpers are being moved out to make the page table
> check framework platform independent. Hence the commit message should
> mention this.

Agreed, the commit message is not clear enough and is too brief.

Thanks.

> 
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
>> ---
>>   arch/x86/include/asm/pgtable.h | 19 +++++++++++++++++++
>>   mm/page_table_check.c          | 17 -----------------
>>   2 files changed, 19 insertions(+), 17 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
>> index b7464f13e416..564abe42b0f7 100644
>> --- a/arch/x86/include/asm/pgtable.h
>> +++ b/arch/x86/include/asm/pgtable.h
>> @@ -1447,6 +1447,25 @@ static inline bool arch_has_hw_pte_young(void)
>>   	return true;
>>   }
>>   
>> +#ifdef CONFIG_PAGE_TABLE_CHECK
>> +static inline bool pte_user_accessible_page(pte_t pte)
>> +{
>> +	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
>> +}
>> +
>> +static inline bool pmd_user_accessible_page(pmd_t pmd)
>> +{
>> +	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) &&
>> +		(pmd_val(pmd) & _PAGE_USER);
>> +}
>> +
>> +static inline bool pud_user_accessible_page(pud_t pud)
>> +{
>> +	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) &&
>> +		(pud_val(pud) & _PAGE_USER);
> 
> A line break is not really required here (and above as well). A single
> complete line would still be within 100 characters.
> 
Right, a single line can now be up to 100 characters.

Thanks.

>> +}
>> +#endif
>> +
>>   #endif	/* __ASSEMBLY__ */
>>   
>>   #endif /* _ASM_X86_PGTABLE_H */
>> diff --git a/mm/page_table_check.c b/mm/page_table_check.c
>> index eb0d0b71cdf6..3692bea2ea2c 100644
>> --- a/mm/page_table_check.c
>> +++ b/mm/page_table_check.c
>> @@ -52,23 +52,6 @@ static struct page_table_check *get_page_table_check(struct page_ext *page_ext)
>>   	return (void *)(page_ext) + page_table_check_ops.offset;
>>   }
>>   
>> -static inline bool pte_user_accessible_page(pte_t pte)
>> -{
>> -	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
>> -}
>> -
>> -static inline bool pmd_user_accessible_page(pmd_t pmd)
>> -{
>> -	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) &&
>> -		(pmd_val(pmd) & _PAGE_USER);
>> -}
>> -
>> -static inline bool pud_user_accessible_page(pud_t pud)
>> -{
>> -	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) &&
>> -		(pud_val(pud) & _PAGE_USER);
>> -}
>> -
>>   /*
>>    * An enty is removed from the page table, decrement the counters for that page
>>    * verify that it is of correct type and counters do not become negative.
> 
> With above mentioned code cleanup and commit message changes in place.
> 
> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
> .


* Re: [PATCH -next v5 2/5] mm: page_table_check: move pxx_user_accessible_page into x86
@ 2022-04-22  6:30       ` Tong Tiangen
  0 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-22  6:30 UTC (permalink / raw)
  To: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



On 2022/4/22 13:11, Anshuman Khandual wrote:
> Similar to previous commits on the same file, the following subject
> line format would have been preferred.
> 
> mm/page_table_check: <description>
> 
> On 4/21/22 13:50, Tong Tiangen wrote:
>> From: Kefeng Wang <wangkefeng.wang@huawei.com>
>>
>> The pxx_user_accessible_page() check the PTE bit, it's
> 
> s/check/checks			 ^^^^
> 
>> architecture-specific code, move them into x86's pgtable.h
> The commit message should have been clearer, or at least complete in
> sentences. I don't want to be bike shedding here but this is definitely
> incomplete. These helpers are being moved out to make the page table
> check framework platform independent. Hence the commit message should
> mention this.

Agreed, the commit message is not clear enough and too brief. I will
improve it.

Thanks.

> 
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
>> ---
>>   arch/x86/include/asm/pgtable.h | 19 +++++++++++++++++++
>>   mm/page_table_check.c          | 17 -----------------
>>   2 files changed, 19 insertions(+), 17 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
>> index b7464f13e416..564abe42b0f7 100644
>> --- a/arch/x86/include/asm/pgtable.h
>> +++ b/arch/x86/include/asm/pgtable.h
>> @@ -1447,6 +1447,25 @@ static inline bool arch_has_hw_pte_young(void)
>>   	return true;
>>   }
>>   
>> +#ifdef CONFIG_PAGE_TABLE_CHECK
>> +static inline bool pte_user_accessible_page(pte_t pte)
>> +{
>> +	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
>> +}
>> +
>> +static inline bool pmd_user_accessible_page(pmd_t pmd)
>> +{
>> +	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) &&
>> +		(pmd_val(pmd) & _PAGE_USER);
>> +}
>> +
>> +static inline bool pud_user_accessible_page(pud_t pud)
>> +{
>> +	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) &&
>> +		(pud_val(pud) & _PAGE_USER);
> 
> A line break is not really required here (and above as well). A single
> complete line would still be within 100 characters.
> 
Right, a single line can now be up to 100 characters. I will merge them.

Thanks.

>> +}
>> +#endif
>> +
>>   #endif	/* __ASSEMBLY__ */
>>   
>>   #endif /* _ASM_X86_PGTABLE_H */
>> diff --git a/mm/page_table_check.c b/mm/page_table_check.c
>> index eb0d0b71cdf6..3692bea2ea2c 100644
>> --- a/mm/page_table_check.c
>> +++ b/mm/page_table_check.c
>> @@ -52,23 +52,6 @@ static struct page_table_check *get_page_table_check(struct page_ext *page_ext)
>>   	return (void *)(page_ext) + page_table_check_ops.offset;
>>   }
>>   
>> -static inline bool pte_user_accessible_page(pte_t pte)
>> -{
>> -	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
>> -}
>> -
>> -static inline bool pmd_user_accessible_page(pmd_t pmd)
>> -{
>> -	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) &&
>> -		(pmd_val(pmd) & _PAGE_USER);
>> -}
>> -
>> -static inline bool pud_user_accessible_page(pud_t pud)
>> -{
>> -	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) &&
>> -		(pud_val(pud) & _PAGE_USER);
>> -}
>> -
>>   /*
>>    * An entry is removed from the page table, decrement the counters for that page
>>    * verify that it is of correct type and counters do not become negative.
> 
> With above mentioned code cleanup and commit message changes in place.
> 
> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
> .

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH -next v5 4/5] arm64: mm: add support for page table check
  2022-04-21  8:20   ` Tong Tiangen
  (?)
@ 2022-04-22  6:45     ` Anshuman Khandual
  -1 siblings, 0 replies; 60+ messages in thread
From: Anshuman Khandual @ 2022-04-22  6:45 UTC (permalink / raw)
  To: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun

Please change the subject line as

arm64/mm: Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK

OR

arm64/mm: Subscribe ARCH_SUPPORTS_PAGE_TABLE_CHECK

On 4/21/22 13:50, Tong Tiangen wrote:
> From: Kefeng Wang <wangkefeng.wang@huawei.com>
> 
> As commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
> check"), add some necessary page table check hooks into routines that
> modify user page tables.

Please make the commit message comprehensive, which should include

- Enabling ARCH_SUPPORTS_PAGE_TABLE_CHECK on arm64
- Adding all additional page table helpers required for PAGE_TABLE_CHECK
- Instrumenting existing page table helpers with page table check hooks

> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> ---
>  arch/arm64/Kconfig               |  1 +
>  arch/arm64/include/asm/pgtable.h | 65 +++++++++++++++++++++++++++++---
>  2 files changed, 61 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 18a18a0e855d..c1509525ab8e 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -92,6 +92,7 @@ config ARM64
>  	select ARCH_SUPPORTS_ATOMIC_RMW
>  	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
>  	select ARCH_SUPPORTS_NUMA_BALANCING
> +	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
>  	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT
>  	select ARCH_WANT_DEFAULT_BPF_JIT
>  	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 930077f7b572..9f8f97a7cc7c 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -33,6 +33,7 @@
>  #include <linux/mmdebug.h>
>  #include <linux/mm_types.h>
>  #include <linux/sched.h>
> +#include <linux/page_table_check.h>
>  
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
> @@ -96,6 +97,7 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
>  #define pte_young(pte)		(!!(pte_val(pte) & PTE_AF))
>  #define pte_special(pte)	(!!(pte_val(pte) & PTE_SPECIAL))
>  #define pte_write(pte)		(!!(pte_val(pte) & PTE_WRITE))
> +#define pte_user(pte)		(!!(pte_val(pte) & PTE_USER))
>  #define pte_user_exec(pte)	(!(pte_val(pte) & PTE_UXN))
>  #define pte_cont(pte)		(!!(pte_val(pte) & PTE_CONT))
>  #define pte_devmap(pte)		(!!(pte_val(pte) & PTE_DEVMAP))
> @@ -312,7 +314,7 @@ static inline void __check_racy_pte_update(struct mm_struct *mm, pte_t *ptep,
>  		     __func__, pte_val(old_pte), pte_val(pte));
>  }
>  
> -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
> +static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
>  			      pte_t *ptep, pte_t pte)
>  {
>  	if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte))
> @@ -343,6 +345,13 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
>  	set_pte(ptep, pte);
>  }
>  
> +static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
> +			      pte_t *ptep, pte_t pte)
> +{
> +	page_table_check_pte_set(mm, addr, ptep, pte);
> +	return __set_pte_at(mm, addr, ptep, pte);
> +}
> +
>  /*
>   * Huge pte definitions.
>   */
> @@ -454,6 +463,8 @@ static inline int pmd_trans_huge(pmd_t pmd)
>  #define pmd_dirty(pmd)		pte_dirty(pmd_pte(pmd))
>  #define pmd_young(pmd)		pte_young(pmd_pte(pmd))
>  #define pmd_valid(pmd)		pte_valid(pmd_pte(pmd))
> +#define pmd_user(pmd)		pte_user(pmd_pte(pmd))
> +#define pmd_user_exec(pmd)	pte_user_exec(pmd_pte(pmd))
>  #define pmd_cont(pmd)		pte_cont(pmd_pte(pmd))
>  #define pmd_wrprotect(pmd)	pte_pmd(pte_wrprotect(pmd_pte(pmd)))
>  #define pmd_mkold(pmd)		pte_pmd(pte_mkold(pmd_pte(pmd)))
> @@ -501,8 +512,19 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
>  #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
>  #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
>  
> -#define set_pmd_at(mm, addr, pmdp, pmd)	set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
> -#define set_pud_at(mm, addr, pudp, pud)	set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
> +static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
> +			      pmd_t *pmdp, pmd_t pmd)
> +{
> +	page_table_check_pmd_set(mm, addr, pmdp, pmd);
> +	return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
> +}
> +
> +static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
> +			      pud_t *pudp, pud_t pud)
> +{
> +	page_table_check_pud_set(mm, addr, pudp, pud);
> +	return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
> +}
>  
>  #define __p4d_to_phys(p4d)	__pte_to_phys(p4d_pte(p4d))
>  #define __phys_to_p4d_val(phys)	__phys_to_pte_val(phys)
> @@ -643,6 +665,24 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
>  #define pud_present(pud)	pte_present(pud_pte(pud))
>  #define pud_leaf(pud)		pud_sect(pud)
>  #define pud_valid(pud)		pte_valid(pud_pte(pud))
> +#define pud_user(pud)		pte_user(pud_pte(pud))
> +
> +#ifdef CONFIG_PAGE_TABLE_CHECK
> +static inline bool pte_user_accessible_page(pte_t pte)
> +{
> +	return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
> +}
> +
> +static inline bool pmd_user_accessible_page(pmd_t pmd)
> +{
> +	return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
> +}
> +
> +static inline bool pud_user_accessible_page(pud_t pud)
> +{
> +	return pud_present(pud) && pud_user(pud);
> +}
> +#endif
>  
>  static inline void set_pud(pud_t *pudp, pud_t pud)
>  {
> @@ -872,11 +912,21 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>  
> +static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
> +				       unsigned long address, pte_t *ptep)
> +{
> +	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
> +}
> +
>  #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
>  static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  				       unsigned long address, pte_t *ptep)
>  {
> -	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
> +	pte_t pte = __ptep_get_and_clear(mm, address, ptep);
>

__ptep_get_and_clear() is not required. Please keep the __pte(xchg_relaxed(..),..)
unchanged as in the case with pmdp_huge_get_and_clear() helper below.

> +
> +	page_table_check_pte_clear(mm, address, pte);
> +
> +	return pte;
>  }
>  
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> @@ -884,7 +934,11 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>  					    unsigned long address, pmd_t *pmdp)
>  {
> -	return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
> +	pmd_t pmd = pte_pmd(__ptep_get_and_clear(mm, address, (pte_t *)pmdp));
> +
> +	page_table_check_pmd_clear(mm, address, pmd);
> +
> +	return pmd;
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>  
> @@ -918,6 +972,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>  static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>  		unsigned long address, pmd_t *pmdp, pmd_t pmd)
>  {
> +	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
>  	return __pmd(xchg_relaxed(&pmd_val(*pmdp), pmd_val(pmd)));
>  }
>  #endif

^ permalink raw reply	[flat|nested] 60+ messages in thread


* Re: [PATCH -next v5 3/5] mm: page_table_check: add hooks to public helpers
  2022-04-22  6:05     ` Anshuman Khandual
  (?)
@ 2022-04-24  4:10       ` Tong Tiangen
  -1 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-24  4:10 UTC (permalink / raw)
  To: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



在 2022/4/22 14:05, Anshuman Khandual 写道:
> 
> 
> On 4/21/22 13:50, Tong Tiangen wrote:
>> Move ptep_clear() to include/linux/pgtable.h and add page table check
>> related hooks to some helpers, in preparation for supporting the page
>> table check feature on new architectures.
> 
> Could instrumenting the generic page table helpers (the fallback instances
> used when the corresponding __HAVE_ARCH_XXX is not defined on a platform)
> add page table check hooks into paths on platforms which have not subscribed
> ARCH_SUPPORTS_PAGE_TABLE_CHECK in the first place ? Although these hooks have
> !CONFIG_PAGE_TABLE_CHECK fallback stubs in the header, so a build problem
> is avoided.

Right, build problems are avoided by fallback stubs in the header file.

> 
>>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
>> ---
>>   arch/x86/include/asm/pgtable.h | 10 ----------
>>   include/linux/pgtable.h        | 26 ++++++++++++++++++--------
>>   2 files changed, 18 insertions(+), 18 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
>> index 564abe42b0f7..51cd39858f81 100644
>> --- a/arch/x86/include/asm/pgtable.h
>> +++ b/arch/x86/include/asm/pgtable.h
>> @@ -1073,16 +1073,6 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
>>   	return pte;
>>   }
>>   
>> -#define __HAVE_ARCH_PTEP_CLEAR
> 
> AFAICS X86 is the only platform subscribing __HAVE_ARCH_PTEP_CLEAR. Hence if
> this is getting dropped for the generic ptep_clear(), there is no need to add
> back the #ifndef __HAVE_ARCH_PTEP_CLEAR construct. Is the generic ptep_clear()
> then the only definition for all platforms ?
> 
> Also if this patch is trying to drop __HAVE_ARCH_PTEP_CLEAR along with
> other page table check related changes, it needs to be done via a separate
> patch instead.

Agreed.
IMO, this cleanup can be done in a follow-up patch.

> 
>> -static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>> -			      pte_t *ptep)
>> -{
>> -	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
>> -		ptep_get_and_clear(mm, addr, ptep);
>> -	else
>> -		pte_clear(mm, addr, ptep);
>> -}
>> -
>>   #define __HAVE_ARCH_PTEP_SET_WRPROTECT
>>   static inline void ptep_set_wrprotect(struct mm_struct *mm,
>>   				      unsigned long addr, pte_t *ptep)
>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>> index 49ab8ee2d6d7..10d2d91edf20 100644
>> --- a/include/linux/pgtable.h
>> +++ b/include/linux/pgtable.h
>> @@ -12,6 +12,7 @@
>>   #include <linux/bug.h>
>>   #include <linux/errno.h>
>>   #include <asm-generic/pgtable_uffd.h>
>> +#include <linux/page_table_check.h>
>>   
>>   #if 5 - defined(__PAGETABLE_P4D_FOLDED) - defined(__PAGETABLE_PUD_FOLDED) - \
>>   	defined(__PAGETABLE_PMD_FOLDED) != CONFIG_PGTABLE_LEVELS
>> @@ -272,14 +273,6 @@ static inline bool arch_has_hw_pte_young(void)
>>   }
>>   #endif
>>   
>> -#ifndef __HAVE_ARCH_PTEP_CLEAR
>> -static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>> -			      pte_t *ptep)
>> -{
>> -	pte_clear(mm, addr, ptep);
>> -}
>> -#endif
>> -
>>   #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
>>   static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>   				       unsigned long address,
>> @@ -287,10 +280,22 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>   {
>>   	pte_t pte = *ptep;
>>   	pte_clear(mm, address, ptep);
>> +	page_table_check_pte_clear(mm, address, pte);
>>   	return pte;
>>   }
>>   #endif
>>   
>> +#ifndef __HAVE_ARCH_PTEP_CLEAR
>> +static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>> +			      pte_t *ptep)
>> +{
>> +	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
>> +		ptep_get_and_clear(mm, addr, ptep);
>> +	else
>> +		pte_clear(mm, addr, ptep);
> 
> Could this not be reworked to avoid IS_ENABLED() ? This is confusing. If the
> page table check hooks can be added to all potential page table paths via the
> generic helpers, irrespective of the CONFIG_PAGE_TABLE_CHECK option, there is
> no rationale for doing an IS_ENABLED() check here.
> 

 From the perspective of code logic, we need to check the pte before 
being cleared. Whether pte check is required depends on IS_ENABLED().

Are there any suggestions for better implementation?

Thank you,
Tong.

>> +}
>> +#endif
>> +
>>   #ifndef __HAVE_ARCH_PTEP_GET
>>   static inline pte_t ptep_get(pte_t *ptep)
>>   {
>> @@ -360,7 +365,10 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>>   					    pmd_t *pmdp)
>>   {
>>   	pmd_t pmd = *pmdp;
>> +
>>   	pmd_clear(pmdp);
>> +	page_table_check_pmd_clear(mm, address, pmd);
>> +
>>   	return pmd;
>>   }
>>   #endif /* __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR */
>> @@ -372,6 +380,8 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
>>   	pud_t pud = *pudp;
>>   
>>   	pud_clear(pudp);
>> +	page_table_check_pud_clear(mm, address, pud);
>> +
>>   	return pud;
>>   }
>>   #endif /* __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR */
> .

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH -next v5 3/5] mm: page_table_check: add hooks to public helpers
@ 2022-04-24  4:10       ` Tong Tiangen
  0 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-24  4:10 UTC (permalink / raw)
  To: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



On 2022/4/22 14:05, Anshuman Khandual wrote:
> 
> 
> On 4/21/22 13:50, Tong Tiangen wrote:
>> Move ptep_clear() to include/linux/pgtable.h and add page table check
>> related hooks to some helpers, in preparation for supporting the page
>> table check feature on new architectures.
> 
> Could instrumenting the generic page table helpers (the fallback instances
> used when the corresponding __HAVE_ARCH_XXX is not defined on the platform)
> add the page table check hooks into paths on platforms which have not
> subscribed to ARCH_SUPPORTS_PAGE_TABLE_CHECK in the first place? Although
> these hooks have !CONFIG_PAGE_TABLE_CHECK fallback stubs in the header,
> hence a build problem is avoided.

Right, build problems are avoided by fallback stubs in the header file.

> 
>>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
>> ---
>>   arch/x86/include/asm/pgtable.h | 10 ----------
>>   include/linux/pgtable.h        | 26 ++++++++++++++++++--------
>>   2 files changed, 18 insertions(+), 18 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
>> index 564abe42b0f7..51cd39858f81 100644
>> --- a/arch/x86/include/asm/pgtable.h
>> +++ b/arch/x86/include/asm/pgtable.h
>> @@ -1073,16 +1073,6 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
>>   	return pte;
>>   }
>>   
>> -#define __HAVE_ARCH_PTEP_CLEAR
> 
> AFAICS X86 is the only platform subscribing to __HAVE_ARCH_PTEP_CLEAR. Hence
> if this is getting dropped for the generic ptep_clear(), there is no need to
> add back the #ifndef __HAVE_ARCH_PTEP_CLEAR construct. Is the generic
> ptep_clear() then the only definition for all platforms?
> 
> Also if this patch is trying to drop off __HAVE_ARCH_PTEP_CLEAR along with
> other page table check related changes, it needs to be done via a separate
> patch instead.

Agreed.
IMO, this cleanup can be done in a separate patch later.

> 
>> -static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>> -			      pte_t *ptep)
>> -{
>> -	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
>> -		ptep_get_and_clear(mm, addr, ptep);
>> -	else
>> -		pte_clear(mm, addr, ptep);
>> -}
>> -
>>   #define __HAVE_ARCH_PTEP_SET_WRPROTECT
>>   static inline void ptep_set_wrprotect(struct mm_struct *mm,
>>   				      unsigned long addr, pte_t *ptep)
>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>> index 49ab8ee2d6d7..10d2d91edf20 100644
>> --- a/include/linux/pgtable.h
>> +++ b/include/linux/pgtable.h
>> @@ -12,6 +12,7 @@
>>   #include <linux/bug.h>
>>   #include <linux/errno.h>
>>   #include <asm-generic/pgtable_uffd.h>
>> +#include <linux/page_table_check.h>
>>   
>>   #if 5 - defined(__PAGETABLE_P4D_FOLDED) - defined(__PAGETABLE_PUD_FOLDED) - \
>>   	defined(__PAGETABLE_PMD_FOLDED) != CONFIG_PGTABLE_LEVELS
>> @@ -272,14 +273,6 @@ static inline bool arch_has_hw_pte_young(void)
>>   }
>>   #endif
>>   
>> -#ifndef __HAVE_ARCH_PTEP_CLEAR
>> -static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>> -			      pte_t *ptep)
>> -{
>> -	pte_clear(mm, addr, ptep);
>> -}
>> -#endif
>> -
>>   #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
>>   static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>   				       unsigned long address,
>> @@ -287,10 +280,22 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>   {
>>   	pte_t pte = *ptep;
>>   	pte_clear(mm, address, ptep);
>> +	page_table_check_pte_clear(mm, address, pte);
>>   	return pte;
>>   }
>>   #endif
>>   
>> +#ifndef __HAVE_ARCH_PTEP_CLEAR
>> +static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>> +			      pte_t *ptep)
>> +{
>> +	if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
>> +		ptep_get_and_clear(mm, addr, ptep);
>> +	else
>> +		pte_clear(mm, addr, ptep);
> 
> Could this not be reworked to avoid IS_ENABLED()? This is confusing. If the
> page table check hooks can be added to all potential page table paths via
> generic helpers, irrespective of the CONFIG_PAGE_TABLE_CHECK option, there
> is no rationale for doing an IS_ENABLED() check here.
> 

From the perspective of the code logic, we need to check the PTE before it
is cleared; whether that PTE check is required depends on IS_ENABLED().

Do you have any suggestions for a better implementation?

Thank you,
Tong.

>> +}
>> +#endif
>> +
>>   #ifndef __HAVE_ARCH_PTEP_GET
>>   static inline pte_t ptep_get(pte_t *ptep)
>>   {
>> @@ -360,7 +365,10 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>>   					    pmd_t *pmdp)
>>   {
>>   	pmd_t pmd = *pmdp;
>> +
>>   	pmd_clear(pmdp);
>> +	page_table_check_pmd_clear(mm, address, pmd);
>> +
>>   	return pmd;
>>   }
>>   #endif /* __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR */
>> @@ -372,6 +380,8 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
>>   	pud_t pud = *pudp;
>>   
>>   	pud_clear(pudp);
>> +	page_table_check_pud_clear(mm, address, pud);
>> +
>>   	return pud;
>>   }
>>   #endif /* __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR */
> .

_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply	[flat|nested] 60+ messages in thread


* Re: [PATCH -next v5 4/5] arm64: mm: add support for page table check
  2022-04-22  6:45     ` Anshuman Khandual
  (?)
@ 2022-04-24  4:14       ` Tong Tiangen
  -1 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-24  4:14 UTC (permalink / raw)
  To: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



On 2022/4/22 14:45, Anshuman Khandual wrote:
> Please change the subject line as
> 
> arm64/mm: Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK
> 
> OR
> 
> arm64/mm: Subscribe ARCH_SUPPORTS_PAGE_TABLE_CHECK
> 
> On 4/21/22 13:50, Tong Tiangen wrote:
>> From: Kefeng Wang <wangkefeng.wang@huawei.com>
>>
>> As commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
>> check"), add some necessary page table check hooks into routines that
>> modify user page tables.
> 
> Please make the commit message comprehensive; it should include
> 
> - Enabling ARCH_SUPPORTS_PAGE_TABLE_CHECK on arm64
> - Adding all additional page table helpers required for PAGE_TABLE_CHECK
> - Instrumenting existing page table helpers with page table check hooks
> 

Good suggestion. If I need to do a new version for some other reason, I
will describe it more comprehensively.

Thanks,
Tong.

>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>> Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
>> ---
>>   arch/arm64/Kconfig               |  1 +
>>   arch/arm64/include/asm/pgtable.h | 65 +++++++++++++++++++++++++++++---
>>   2 files changed, 61 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 18a18a0e855d..c1509525ab8e 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -92,6 +92,7 @@ config ARM64
>>   	select ARCH_SUPPORTS_ATOMIC_RMW
>>   	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
>>   	select ARCH_SUPPORTS_NUMA_BALANCING
>> +	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
>>   	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT
>>   	select ARCH_WANT_DEFAULT_BPF_JIT
>>   	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index 930077f7b572..9f8f97a7cc7c 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -33,6 +33,7 @@
>>   #include <linux/mmdebug.h>
>>   #include <linux/mm_types.h>
>>   #include <linux/sched.h>
>> +#include <linux/page_table_check.h>
>>   
>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>   #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
>> @@ -96,6 +97,7 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
>>   #define pte_young(pte)		(!!(pte_val(pte) & PTE_AF))
>>   #define pte_special(pte)	(!!(pte_val(pte) & PTE_SPECIAL))
>>   #define pte_write(pte)		(!!(pte_val(pte) & PTE_WRITE))
>> +#define pte_user(pte)		(!!(pte_val(pte) & PTE_USER))
>>   #define pte_user_exec(pte)	(!(pte_val(pte) & PTE_UXN))
>>   #define pte_cont(pte)		(!!(pte_val(pte) & PTE_CONT))
>>   #define pte_devmap(pte)		(!!(pte_val(pte) & PTE_DEVMAP))
>> @@ -312,7 +314,7 @@ static inline void __check_racy_pte_update(struct mm_struct *mm, pte_t *ptep,
>>   		     __func__, pte_val(old_pte), pte_val(pte));
>>   }
>>   
>> -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
>> +static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
>>   			      pte_t *ptep, pte_t pte)
>>   {
>>   	if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte))
>> @@ -343,6 +345,13 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
>>   	set_pte(ptep, pte);
>>   }
>>   
>> +static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
>> +			      pte_t *ptep, pte_t pte)
>> +{
>> +	page_table_check_pte_set(mm, addr, ptep, pte);
>> +	return __set_pte_at(mm, addr, ptep, pte);
>> +}
>> +
>>   /*
>>    * Huge pte definitions.
>>    */
>> @@ -454,6 +463,8 @@ static inline int pmd_trans_huge(pmd_t pmd)
>>   #define pmd_dirty(pmd)		pte_dirty(pmd_pte(pmd))
>>   #define pmd_young(pmd)		pte_young(pmd_pte(pmd))
>>   #define pmd_valid(pmd)		pte_valid(pmd_pte(pmd))
>> +#define pmd_user(pmd)		pte_user(pmd_pte(pmd))
>> +#define pmd_user_exec(pmd)	pte_user_exec(pmd_pte(pmd))
>>   #define pmd_cont(pmd)		pte_cont(pmd_pte(pmd))
>>   #define pmd_wrprotect(pmd)	pte_pmd(pte_wrprotect(pmd_pte(pmd)))
>>   #define pmd_mkold(pmd)		pte_pmd(pte_mkold(pmd_pte(pmd)))
>> @@ -501,8 +512,19 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
>>   #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
>>   #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
>>   
>> -#define set_pmd_at(mm, addr, pmdp, pmd)	set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
>> -#define set_pud_at(mm, addr, pudp, pud)	set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
>> +static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
>> +			      pmd_t *pmdp, pmd_t pmd)
>> +{
>> +	page_table_check_pmd_set(mm, addr, pmdp, pmd);
>> +	return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
>> +}
>> +
>> +static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
>> +			      pud_t *pudp, pud_t pud)
>> +{
>> +	page_table_check_pud_set(mm, addr, pudp, pud);
>> +	return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
>> +}
>>   
>>   #define __p4d_to_phys(p4d)	__pte_to_phys(p4d_pte(p4d))
>>   #define __phys_to_p4d_val(phys)	__phys_to_pte_val(phys)
>> @@ -643,6 +665,24 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
>>   #define pud_present(pud)	pte_present(pud_pte(pud))
>>   #define pud_leaf(pud)		pud_sect(pud)
>>   #define pud_valid(pud)		pte_valid(pud_pte(pud))
>> +#define pud_user(pud)		pte_user(pud_pte(pud))
>> +
>> +#ifdef CONFIG_PAGE_TABLE_CHECK
>> +static inline bool pte_user_accessible_page(pte_t pte)
>> +{
>> +	return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
>> +}
>> +
>> +static inline bool pmd_user_accessible_page(pmd_t pmd)
>> +{
>> +	return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
>> +}
>> +
>> +static inline bool pud_user_accessible_page(pud_t pud)
>> +{
>> +	return pud_present(pud) && pud_user(pud);
>> +}
>> +#endif
>>   
>>   static inline void set_pud(pud_t *pudp, pud_t pud)
>>   {
>> @@ -872,11 +912,21 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
>>   }
>>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>   
>> +static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
>> +				       unsigned long address, pte_t *ptep)
>> +{
>> +	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
>> +}
>> +
>>   #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
>>   static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>   				       unsigned long address, pte_t *ptep)
>>   {
>> -	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
>> +	pte_t pte = __ptep_get_and_clear(mm, address, ptep);
>>
> 
> __ptep_get_and_clear() is not required. Please keep the __pte(xchg_relaxed(..),..)
> unchanged, as is the case with the pmdp_huge_get_and_clear() helper below.
> 
>   +
>> +	page_table_check_pte_clear(mm, address, pte);
>> +
>> +	return pte;
>>   }
>>   
>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> @@ -884,7 +934,11 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>   static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>>   					    unsigned long address, pmd_t *pmdp)
>>   {
>> -	return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
>> +	pmd_t pmd = pte_pmd(__ptep_get_and_clear(mm, address, (pte_t *)pmdp));
>> +
>> +	page_table_check_pmd_clear(mm, address, pmd);
>> +
>> +	return pmd;
>>   }
>>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>   
>> @@ -918,6 +972,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>>   static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>>   		unsigned long address, pmd_t *pmdp, pmd_t pmd)
>>   {
>> +	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
>>   	return __pmd(xchg_relaxed(&pmd_val(*pmdp), pmd_val(pmd)));
>>   }
>>   #endif
> .

^ permalink raw reply	[flat|nested] 60+ messages in thread


>>   #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
>>   
>> -#define set_pmd_at(mm, addr, pmdp, pmd)	set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
>> -#define set_pud_at(mm, addr, pudp, pud)	set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
>> +static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
>> +			      pmd_t *pmdp, pmd_t pmd)
>> +{
>> +	page_table_check_pmd_set(mm, addr, pmdp, pmd);
>> +	return __set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
>> +}
>> +
>> +static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
>> +			      pud_t *pudp, pud_t pud)
>> +{
>> +	page_table_check_pud_set(mm, addr, pudp, pud);
>> +	return __set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud));
>> +}
>>   
>>   #define __p4d_to_phys(p4d)	__pte_to_phys(p4d_pte(p4d))
>>   #define __phys_to_p4d_val(phys)	__phys_to_pte_val(phys)
>> @@ -643,6 +665,24 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
>>   #define pud_present(pud)	pte_present(pud_pte(pud))
>>   #define pud_leaf(pud)		pud_sect(pud)
>>   #define pud_valid(pud)		pte_valid(pud_pte(pud))
>> +#define pud_user(pud)		pte_user(pud_pte(pud))
>> +
>> +#ifdef CONFIG_PAGE_TABLE_CHECK
>> +static inline bool pte_user_accessible_page(pte_t pte)
>> +{
>> +	return pte_present(pte) && (pte_user(pte) || pte_user_exec(pte));
>> +}
>> +
>> +static inline bool pmd_user_accessible_page(pmd_t pmd)
>> +{
>> +	return pmd_present(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
>> +}
>> +
>> +static inline bool pud_user_accessible_page(pud_t pud)
>> +{
>> +	return pud_present(pud) && pud_user(pud);
>> +}
>> +#endif
>>   
>>   static inline void set_pud(pud_t *pudp, pud_t pud)
>>   {
>> @@ -872,11 +912,21 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
>>   }
>>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>   
>> +static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
>> +				       unsigned long address, pte_t *ptep)
>> +{
>> +	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
>> +}
>> +
>>   #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
>>   static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>   				       unsigned long address, pte_t *ptep)
>>   {
>> -	return __pte(xchg_relaxed(&pte_val(*ptep), 0));
>> +	pte_t pte = __ptep_get_and_clear(mm, address, ptep);
>>
> 
> __ptep_get_and_clear() is not required. Please keep the __pte(xchg_relaxed(..),..)
> unchanged, as is the case with the pmdp_huge_get_and_clear() helper below.
> 
>   +
>> +	page_table_check_pte_clear(mm, address, pte);
>> +
>> +	return pte;
>>   }
>>   
>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> @@ -884,7 +934,11 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>   static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>>   					    unsigned long address, pmd_t *pmdp)
>>   {
>> -	return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
>> +	pmd_t pmd = pte_pmd(__ptep_get_and_clear(mm, address, (pte_t *)pmdp));
>> +
>> +	page_table_check_pmd_clear(mm, address, pmd);
>> +
>> +	return pmd;
>>   }
>>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>   
>> @@ -918,6 +972,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>>   static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>>   		unsigned long address, pmd_t *pmdp, pmd_t pmd)
>>   {
>> +	page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd);
>>   	return __pmd(xchg_relaxed(&pmd_val(*pmdp), pmd_val(pmd)));
>>   }
>>   #endif
> .

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH -next v5 4/5] arm64: mm: add support for page table check
  2022-04-24  4:14       ` Tong Tiangen
@ 2022-04-25  5:41         ` Anshuman Khandual
  -1 siblings, 0 replies; 60+ messages in thread
From: Anshuman Khandual @ 2022-04-25  5:41 UTC (permalink / raw)
  To: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



On 4/24/22 09:44, Tong Tiangen wrote:
> 
> 
> On 2022/4/22 14:45, Anshuman Khandual wrote:
>> Please change the subject line as
>>
>> arm64/mm: Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK
>>
>> OR
>>
>> arm64/mm: Subscribe ARCH_SUPPORTS_PAGE_TABLE_CHECK
>>
>> On 4/21/22 13:50, Tong Tiangen wrote:
>>> From: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>
>>> As commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
>>> check"), add some necessary page table check hooks into routines that
>>> modify user page tables.
>>
>> Please make the commit message comprehensive, which should include
>>
>> - Enabling ARCH_SUPPORTS_PAGE_TABLE_CHECK on arm64
>> - Adding all additional page table helpers required for PAGE_TABLE_CHECK
>> - Instrumenting existing page table helpers with page table check hooks
>>
> 
> Good suggestion. If I need to do a new version for some other reason, I think it should be described more comprehensively.


This series needs revision to accommodate earlier comments.



* Re: [PATCH -next v5 3/5] mm: page_table_check: add hooks to public helpers
  2022-04-24  4:10       ` Tong Tiangen
@ 2022-04-25  5:52         ` Anshuman Khandual
  -1 siblings, 0 replies; 60+ messages in thread
From: Anshuman Khandual @ 2022-04-25  5:52 UTC (permalink / raw)
  To: Tong Tiangen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



On 4/24/22 09:40, Tong Tiangen wrote:
> 
> 
> On 2022/4/22 14:05, Anshuman Khandual wrote:
>>
>>
>> On 4/21/22 13:50, Tong Tiangen wrote:
>>> Move ptep_clear() to include/linux/pgtable.h and add page table check
>>> related hooks to some helpers, as preparation for supporting the page
>>> table check feature on new architectures.
>>
>> Could instrumenting the generic page table helpers (the fallback instances
>> used when the corresponding __HAVE_ARCH_XXX is not defined on a platform)
>> end up adding the page table check hooks into paths on platforms which have
>> not subscribed to ARCH_SUPPORTS_PAGE_TABLE_CHECK in the first place?
>> Although these all have !CONFIG_PAGE_TABLE_CHECK fallback stubs in the
>> header, so a build problem gets avoided.
> 
> Right, build problems are avoided by fallback stubs in the header file.

Although there might not be a build problem as such, should non-subscribing
platforms get their page table helpers instrumented with page table check hooks
in the first place? The commit message should address these questions.

> 
>>
>>>
>>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>>> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
>>> ---
>>>   arch/x86/include/asm/pgtable.h | 10 ----------
>>>   include/linux/pgtable.h        | 26 ++++++++++++++++++--------
>>>   2 files changed, 18 insertions(+), 18 deletions(-)
>>>
>>> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
>>> index 564abe42b0f7..51cd39858f81 100644
>>> --- a/arch/x86/include/asm/pgtable.h
>>> +++ b/arch/x86/include/asm/pgtable.h
>>> @@ -1073,16 +1073,6 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
>>>       return pte;
>>>   }
>>>   -#define __HAVE_ARCH_PTEP_CLEAR
>>
>> AFAICS X86 is the only platform subscribing to __HAVE_ARCH_PTEP_CLEAR. Hence
>> if this is getting dropped for the generic ptep_clear(), there is no need to
>> add back the #ifndef __HAVE_ARCH_PTEP_CLEAR construct. Generic ptep_clear()
>> becomes the only definition for all platforms?
>>
>> Also if this patch is trying to drop off __HAVE_ARCH_PTEP_CLEAR along with
>> other page table check related changes, it needs to be done via a separate
>> patch instead.
> 
> Agreed.
> IMO, this can be fixed in a later patch.
> 
>>
>>> -static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>>> -                  pte_t *ptep)
>>> -{
>>> -    if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
>>> -        ptep_get_and_clear(mm, addr, ptep);
>>> -    else
>>> -        pte_clear(mm, addr, ptep);
>>> -}
>>> -
>>>   #define __HAVE_ARCH_PTEP_SET_WRPROTECT
>>>   static inline void ptep_set_wrprotect(struct mm_struct *mm,
>>>                         unsigned long addr, pte_t *ptep)
>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>> index 49ab8ee2d6d7..10d2d91edf20 100644
>>> --- a/include/linux/pgtable.h
>>> +++ b/include/linux/pgtable.h
>>> @@ -12,6 +12,7 @@
>>>   #include <linux/bug.h>
>>>   #include <linux/errno.h>
>>>   #include <asm-generic/pgtable_uffd.h>
>>> +#include <linux/page_table_check.h>
>>>     #if 5 - defined(__PAGETABLE_P4D_FOLDED) - defined(__PAGETABLE_PUD_FOLDED) - \
>>>       defined(__PAGETABLE_PMD_FOLDED) != CONFIG_PGTABLE_LEVELS
>>> @@ -272,14 +273,6 @@ static inline bool arch_has_hw_pte_young(void)
>>>   }
>>>   #endif
>>>   -#ifndef __HAVE_ARCH_PTEP_CLEAR
>>> -static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>>> -                  pte_t *ptep)
>>> -{
>>> -    pte_clear(mm, addr, ptep);
>>> -}
>>> -#endif
>>> -
>>>   #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
>>>   static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>>                          unsigned long address,
>>> @@ -287,10 +280,22 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>>   {
>>>       pte_t pte = *ptep;
>>>       pte_clear(mm, address, ptep);
>>> +    page_table_check_pte_clear(mm, address, pte);
>>>       return pte;
>>>   }
>>>   #endif
>>>   +#ifndef __HAVE_ARCH_PTEP_CLEAR
>>> +static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>>> +                  pte_t *ptep)
>>> +{
>>> +    if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
>>> +        ptep_get_and_clear(mm, addr, ptep);
>>> +    else
>>> +        pte_clear(mm, addr, ptep);
>>
>> Could not this be reworked to avoid IS_ENABLED() ? This is confusing. If the page
>> table hooks can be added to all potential page table paths via generic helpers,
>> irrespective of CONFIG_PAGE_TABLE_CHECK option, there is no rationale for doing
>> a IS_ENABLED() check here.
>>
> 
> From the perspective of the code logic, we need to check the pte before it is cleared. Whether the pte check is required depends on IS_ENABLED().
> 
> Are there any suggestions for a better implementation?

But other generic page table helpers already have page table check hooks
instrumented without IS_ENABLED() checks, so why is this one any different?

> 
> Thank you,
> Tong.
> 
>>> +}
>>> +#endif
>>> +
>>>   #ifndef __HAVE_ARCH_PTEP_GET
>>>   static inline pte_t ptep_get(pte_t *ptep)
>>>   {
>>> @@ -360,7 +365,10 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>>>                           pmd_t *pmdp)
>>>   {
>>>       pmd_t pmd = *pmdp;
>>> +
>>>       pmd_clear(pmdp);
>>> +    page_table_check_pmd_clear(mm, address, pmd);
>>> +
>>>       return pmd;
>>>   }
>>>   #endif /* __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR */
>>> @@ -372,6 +380,8 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
>>>       pud_t pud = *pudp;
>>>         pud_clear(pudp);
>>> +    page_table_check_pud_clear(mm, address, pud);
>>> +
>>>       return pud;
>>>   }
>>>   #endif /* __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR */
>> .



* Re: [PATCH -next v5 4/5] arm64: mm: add support for page table check
  2022-04-25  5:41         ` Anshuman Khandual
@ 2022-04-25  7:34           ` Tong Tiangen
  -1 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-25  7:34 UTC (permalink / raw)
  To: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



On 2022/4/25 13:41, Anshuman Khandual wrote:
> 
> 
> On 4/24/22 09:44, Tong Tiangen wrote:
>>
>>
>> On 2022/4/22 14:45, Anshuman Khandual wrote:
>>> Please change the subject line as
>>>
>>> arm64/mm: Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK
>>>
>>> OR
>>>
>>> arm64/mm: Subscribe ARCH_SUPPORTS_PAGE_TABLE_CHECK
>>>
>>> On 4/21/22 13:50, Tong Tiangen wrote:
>>>> From: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>>
>>>> As commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
>>>> check"), add some necessary page table check hooks into routines that
>>>> modify user page tables.
>>>
>>> Please make the commit message comprehensive, which should include
>>>
>>> - Enabling ARCH_SUPPORTS_PAGE_TABLE_CHECK on arm64
>>> - Adding all additional page table helpers required for PAGE_TABLE_CHECK
>>> - Instrumenting existing page table helpers with page table check hooks
>>>
>>
>> Good suggestion. If I need to do a new version for some other reason, I think it should be described more comprehensively.
> 
> 
> This series needs revision to accommodate earlier comments.
> .

OK, Thanks.



* Re: [PATCH -next v5 4/5] arm64: mm: add support for page table check
@ 2022-04-25  7:34           ` Tong Tiangen
  0 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-25  7:34 UTC (permalink / raw)
  To: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



在 2022/4/25 13:41, Anshuman Khandual 写道:
> 
> 
> On 4/24/22 09:44, Tong Tiangen wrote:
>>
>>
>> 在 2022/4/22 14:45, Anshuman Khandual 写道:
>>> Please change the subject line as
>>>
>>> arm64/mm: Enable ARCH_SUPPORTS_PAGE_TABLE_CHECK
>>>
>>> OR
>>>
>>> arm64/mm: Subscribe ARCH_SUPPORTS_PAGE_TABLE_CHECK
>>>
>>> On 4/21/22 13:50, Tong Tiangen wrote:
>>>> From: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>>
>>>> As commit d283d422c6c4 ("x86: mm: add x86_64 support for page table
>>>> check"), add some necessary page table check hooks into routines that
>>>> modify user page tables.
>>>
>>> Please make the commit message comprehensive, which should include
>>>
>>> - Enabling ARCH_SUPPORTS_PAGE_TABLE_CHECK on arm64
>>> - Adding all additional page table helpers required for PAGE_TABLE_CHECK
>>> - Instrumenting existing page table helpers with page table check hooks
>>>
>>
>> Good suggestion, if i need to do a new version for some other reason i think it should be described more comprehensivel
> 
> 
> This series needs revision to accommodate earlier comments.
> .

OK, Thanks.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH -next v5 3/5] mm: page_table_check: add hooks to public helpers
  2022-04-25  5:52         ` Anshuman Khandual
  (?)
@ 2022-04-25 11:34           ` Tong Tiangen
  -1 siblings, 0 replies; 60+ messages in thread
From: Tong Tiangen @ 2022-04-25 11:34 UTC (permalink / raw)
  To: Anshuman Khandual, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Pasha Tatashin, Andrew Morton,
	Catalin Marinas, Will Deacon, Paul Walmsley, Palmer Dabbelt,
	Albert Ou
  Cc: linux-kernel, linux-mm, linux-arm-kernel, linux-riscv,
	Kefeng Wang, Guohanjun



在 2022/4/25 13:52, Anshuman Khandual 写道:
> 
> 
> On 4/24/22 09:40, Tong Tiangen wrote:
>>
>>
>> 在 2022/4/22 14:05, Anshuman Khandual 写道:
>>>
>>>
>>> On 4/21/22 13:50, Tong Tiangen wrote:
>>>> Move ptep_clear() to the include/linux/pgtable.h and add page table check
>>>> relate hooks to some helpers, it's prepare for support page table check
>>>> feature on new architecture.
>>>
>>> Could instrumenting generic page table helpers (fallback instances when its
>>> corresponding __HAVE_ARCH_XXX is not defined on the platform), might add all
>>> the page table check hooks into paths on platforms which have not subscribed
>>> ARCH_SUPPORTS_PAGE_TABLE_CHECK in the first place ? Although these looks have
>>> !CONFIG_PAGE_TABLE_CHECK fallback stubs in the header, hence a build problem
>>> gets avoided.
>>
>> Right, build problems are avoided by fallback stubs in the header file.
> 
> Although there might not be a build problem as such, but should non subscribing
> platforms get their page table helpers instrumented with page table check hooks
> in the first place ? The commit message should address these questions.
> 
I will add a description to the commit message to explain that:
non-subscribing platforms will call fallback page table check stubs when 
using their page table helpers[1] in include/linux/pgtable.h.

[1] 
ptep_clear/ptep_get_and_clear/pmdp_huge_get_and_clear/pudp_huge_get_and_clear

Am I right? :)

Thanks,
Tong.
>>
>>>
>>>>
>>>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>>>> Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
>>>> ---
>>>>    arch/x86/include/asm/pgtable.h | 10 ----------
>>>>    include/linux/pgtable.h        | 26 ++++++++++++++++++--------
>>>>    2 files changed, 18 insertions(+), 18 deletions(-)
>>>>
>>>> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
>>>> index 564abe42b0f7..51cd39858f81 100644
>>>> --- a/arch/x86/include/asm/pgtable.h
>>>> +++ b/arch/x86/include/asm/pgtable.h
>>>> @@ -1073,16 +1073,6 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
>>>>        return pte;
>>>>    }
>>>>    -#define __HAVE_ARCH_PTEP_CLEAR
>>>
>>> AFAICS X86 is the only platform subscribing __HAVE_ARCH_PTEP_CLEAR. Hence if
>>> this is getting dropped for generic ptep_clear(), then no need to add back
>>> #ifndef __HAVE_ARCH_PTEP_CLEAR construct. Generic ptep_clear() is the only
>>> definition for all platforms ?
>>>
>>> Also if this patch is trying to drop off __HAVE_ARCH_PTEP_CLEAR along with
>>> other page table check related changes, it needs to be done via a separate
>>> patch instead.
>>
>> Agreed.
>> IMO, this fix can be patched later.
>>
>>>
>>>> -static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>>>> -                  pte_t *ptep)
>>>> -{
>>>> -    if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
>>>> -        ptep_get_and_clear(mm, addr, ptep);
>>>> -    else
>>>> -        pte_clear(mm, addr, ptep);
>>>> -}
>>>> -
>>>>    #define __HAVE_ARCH_PTEP_SET_WRPROTECT
>>>>    static inline void ptep_set_wrprotect(struct mm_struct *mm,
>>>>                          unsigned long addr, pte_t *ptep)
>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>> index 49ab8ee2d6d7..10d2d91edf20 100644
>>>> --- a/include/linux/pgtable.h
>>>> +++ b/include/linux/pgtable.h
>>>> @@ -12,6 +12,7 @@
>>>>    #include <linux/bug.h>
>>>>    #include <linux/errno.h>
>>>>    #include <asm-generic/pgtable_uffd.h>
>>>> +#include <linux/page_table_check.h>
>>>>      #if 5 - defined(__PAGETABLE_P4D_FOLDED) - defined(__PAGETABLE_PUD_FOLDED) - \
>>>>        defined(__PAGETABLE_PMD_FOLDED) != CONFIG_PGTABLE_LEVELS
>>>> @@ -272,14 +273,6 @@ static inline bool arch_has_hw_pte_young(void)
>>>>    }
>>>>    #endif
>>>>    -#ifndef __HAVE_ARCH_PTEP_CLEAR
>>>> -static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>>>> -                  pte_t *ptep)
>>>> -{
>>>> -    pte_clear(mm, addr, ptep);
>>>> -}
>>>> -#endif
>>>> -
>>>>    #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
>>>>    static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>>>                           unsigned long address,
>>>> @@ -287,10 +280,22 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>>>    {
>>>>        pte_t pte = *ptep;
>>>>        pte_clear(mm, address, ptep);
>>>> +    page_table_check_pte_clear(mm, address, pte);
>>>>        return pte;
>>>>    }
>>>>    #endif
>>>>    +#ifndef __HAVE_ARCH_PTEP_CLEAR
>>>> +static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
>>>> +                  pte_t *ptep)
>>>> +{
>>>> +    if (IS_ENABLED(CONFIG_PAGE_TABLE_CHECK))
>>>> +        ptep_get_and_clear(mm, addr, ptep);
>>>> +    else
>>>> +        pte_clear(mm, addr, ptep);
>>>
>>> Could not this be reworked to avoid IS_ENABLED() ? This is confusing. If the page
>>> table hooks can be added to all potential page table paths via generic helpers,
>>> irrespective of CONFIG_PAGE_TABLE_CHECK option, there is no rationale for doing
>>> a IS_ENABLED() check here.
>>>
>>
>>  From the perspective of code logic, we need to check the pte before it is cleared. Whether the pte check is required depends on IS_ENABLED().
>>
>> Are there any suggestions for better implementation?
> 
> But other generic page table helpers already have page table check hooks
> instrumented without IS_ENABLED() checks, then why this is any different.
> 
Maybe I understand what you said; the more reasonable implementation is:

static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
				       unsigned long address, pte_t *ptep)
{
	pte_t pte = *ptep;

	pte_clear(mm, address, ptep);
	page_table_check_pte_clear(mm, address, pte);
	return pte;
}

static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
			      pte_t *ptep)
{
	ptep_get_and_clear(mm, addr, ptep);
}

>>
>> Thank you,
>> Tong.
>>
>>>> +}
>>>> +#endif
>>>> +
>>>>    #ifndef __HAVE_ARCH_PTEP_GET
>>>>    static inline pte_t ptep_get(pte_t *ptep)
>>>>    {
>>>> @@ -360,7 +365,10 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>>>>                            pmd_t *pmdp)
>>>>    {
>>>>        pmd_t pmd = *pmdp;
>>>> +
>>>>        pmd_clear(pmdp);
>>>> +    page_table_check_pmd_clear(mm, address, pmd);
>>>> +
>>>>        return pmd;
>>>>    }
>>>>    #endif /* __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR */
>>>> @@ -372,6 +380,8 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
>>>>        pud_t pud = *pudp;
>>>>          pud_clear(pudp);
>>>> +    page_table_check_pud_clear(mm, address, pud);
>>>> +
>>>>        return pud;
>>>>    }
>>>>    #endif /* __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR */
>>> .
> .

^ permalink raw reply	[flat|nested] 60+ messages in thread
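The refactor discussed in the message above (dropping the IS_ENABLED() branch by making ptep_clear() a thin wrapper over ptep_get_and_clear(), which itself carries the check hook) can be sketched as a small userspace simulation. This is not kernel code: pte_t is stubbed as an integer, and the check hook merely records the value it saw (names such as demo_ptep_clear and last_checked_pte are illustrative, not from the kernel).

```c
#include <assert.h>

typedef unsigned long pte_t;
struct mm_struct { int dummy; };

/* records the last PTE value the check hook observed */
static pte_t last_checked_pte;

static void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
	(void)mm; (void)addr;
	*ptep = 0;
}

/* stand-in for page_table_check_pte_clear(); the real hook would
 * validate ownership/refcounts of the page being unmapped */
static void page_table_check_pte_clear(struct mm_struct *mm,
				       unsigned long addr, pte_t pte)
{
	(void)mm; (void)addr;
	last_checked_pte = pte;
}

static pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
				pte_t *ptep)
{
	pte_t pte = *ptep;

	pte_clear(mm, addr, ptep);
	page_table_check_pte_clear(mm, addr, pte);
	return pte;
}

/* ptep_clear() becomes a thin wrapper: the hook always runs via
 * ptep_get_and_clear(), so no IS_ENABLED() branch is needed here */
static void ptep_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
	ptep_get_and_clear(mm, addr, ptep);
}

/* hypothetical driver: clears one entry, returns what the hook saw */
static pte_t demo_ptep_clear(pte_t initial)
{
	struct mm_struct mm = { 0 };
	pte_t entry = initial;

	ptep_clear(&mm, 0x1000, &entry);
	assert(entry == 0);	/* the entry itself is cleared */
	return last_checked_pte;
}
```

Under this shape, the check hook sees the pre-clear PTE value unconditionally, which matches how the other generic helpers in include/linux/pgtable.h are instrumented.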


end of thread, other threads:[~2022-04-25 11:35 UTC | newest]

Thread overview: 60+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-21  8:20 [PATCH -next v5 0/5]mm: page_table_check: add support on arm64 and riscv Tong Tiangen
2022-04-21  8:20 ` Tong Tiangen
2022-04-21  8:20 ` Tong Tiangen
2022-04-21  8:20 ` [PATCH -next v5 1/5] mm: page_table_check: using PxD_SIZE instead of PxD_PAGE_SIZE Tong Tiangen
2022-04-21  8:20   ` Tong Tiangen
2022-04-21  8:20   ` Tong Tiangen
2022-04-21 15:28   ` Pasha Tatashin
2022-04-21 15:28     ` Pasha Tatashin
2022-04-21 15:28     ` Pasha Tatashin
2022-04-21 18:40     ` Pasha Tatashin
2022-04-21 18:40       ` Pasha Tatashin
2022-04-21 18:40       ` Pasha Tatashin
2022-04-22  4:46       ` Anshuman Khandual
2022-04-22  4:46         ` Anshuman Khandual
2022-04-22  4:46         ` Anshuman Khandual
2022-04-22  4:41   ` Anshuman Khandual
2022-04-22  4:41     ` Anshuman Khandual
2022-04-22  4:41     ` Anshuman Khandual
2022-04-21  8:20 ` [PATCH -next v5 2/5] mm: page_table_check: move pxx_user_accessible_page into x86 Tong Tiangen
2022-04-21  8:20   ` Tong Tiangen
2022-04-21  8:20   ` Tong Tiangen
2022-04-22  5:11   ` Anshuman Khandual
2022-04-22  5:11     ` Anshuman Khandual
2022-04-22  5:11     ` Anshuman Khandual
2022-04-22  6:30     ` Tong Tiangen
2022-04-22  6:30       ` Tong Tiangen
2022-04-22  6:30       ` Tong Tiangen
2022-04-21  8:20 ` [PATCH -next v5 3/5] mm: page_table_check: add hooks to public helpers Tong Tiangen
2022-04-21  8:20   ` Tong Tiangen
2022-04-21  8:20   ` Tong Tiangen
2022-04-22  6:05   ` Anshuman Khandual
2022-04-22  6:05     ` Anshuman Khandual
2022-04-22  6:05     ` Anshuman Khandual
2022-04-24  4:10     ` Tong Tiangen
2022-04-24  4:10       ` Tong Tiangen
2022-04-24  4:10       ` Tong Tiangen
2022-04-25  5:52       ` Anshuman Khandual
2022-04-25  5:52         ` Anshuman Khandual
2022-04-25  5:52         ` Anshuman Khandual
2022-04-25 11:34         ` Tong Tiangen
2022-04-25 11:34           ` Tong Tiangen
2022-04-25 11:34           ` Tong Tiangen
2022-04-21  8:20 ` [PATCH -next v5 4/5] arm64: mm: add support for page table check Tong Tiangen
2022-04-21  8:20   ` Tong Tiangen
2022-04-21  8:20   ` Tong Tiangen
2022-04-22  6:45   ` Anshuman Khandual
2022-04-22  6:45     ` Anshuman Khandual
2022-04-22  6:45     ` Anshuman Khandual
2022-04-24  4:14     ` Tong Tiangen
2022-04-24  4:14       ` Tong Tiangen
2022-04-24  4:14       ` Tong Tiangen
2022-04-25  5:41       ` Anshuman Khandual
2022-04-25  5:41         ` Anshuman Khandual
2022-04-25  5:41         ` Anshuman Khandual
2022-04-25  7:34         ` Tong Tiangen
2022-04-25  7:34           ` Tong Tiangen
2022-04-25  7:34           ` Tong Tiangen
2022-04-21  8:20 ` [PATCH -next v5 5/5] riscv: " Tong Tiangen
2022-04-21  8:20   ` Tong Tiangen
2022-04-21  8:20   ` Tong Tiangen

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.