* [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests
From: Anshuman Khandual @ 2020-06-15  3:37 UTC
  To: linux-mm
  Cc: linux-doc, Benjamin Herrenschmidt, Heiko Carstens,
	Paul Mackerras, H. Peter Anvin, linux-riscv, Will Deacon,
	linux-arch, linux-s390, Jonathan Corbet, Michael Ellerman, x86,
	Mike Rapoport, Christian Borntraeger, Ingo Molnar,
	linux-arm-kernel, ziy, Catalin Marinas, linux-snps-arc,
	Vasily Gorbik, Anshuman Khandual, Borislav Petkov, Paul Walmsley,
	Kirill A . Shutemov, Thomas Gleixner, gerald.schaefer,
	christophe.leroy, Vineet Gupta, linux-kernel, Palmer Dabbelt,
	Andrew Morton, linuxppc-dev

This series adds some more arch page table helper validation tests which
are related to core and advanced memory functions. It also adds a document
listing the expected semantics of all page table helpers, as suggested by
Mike Rapoport earlier (https://lkml.org/lkml/2020/1/30/40).

There are many TRANSPARENT_HUGEPAGE and HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
ifdefs scattered across the test. Consolidating all the fallback stubs is
not very straightforward though, because HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
does not explicitly depend on TRANSPARENT_HUGEPAGE.
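
For reference, a simplified sketch of the resulting ifdef nesting (mirroring
the structure used in the patches below, with the real test bodies elided)
looks like this:

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/* real pmd_xxx_tests(), with pud_xxx_tests() nested below */
#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
/* real pud_xxx_tests() */
#else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
#else  /* !CONFIG_TRANSPARENT_HUGEPAGE */
static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot) { }
static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

The PUD stubs are needed both when HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD is off
and when TRANSPARENT_HUGEPAGE itself is off, which is why the fallbacks do
not collapse into a single #else block.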

Tested on arm64 and x86 platforms, but only build tested on the other
platforms enabling ARCH_HAS_DEBUG_VM_PGTABLE, i.e. powerpc, arc and s390.
The following warning on arm64, mentioned previously, still exists and will
be fixed by the upcoming THP migration enablement series for arm64.

WARNING .... mm/debug_vm_pgtable.c:860 debug_vm_pgtable+0x940/0xa54
WARN_ON(!pmd_present(pmd_mkinvalid(pmd_mkhuge(pmd))))

This series is based on v5.8-rc1.

Changes in V3:

- Replaced HAVE_ARCH_SOFT_DIRTY with MEM_SOFT_DIRTY
- Added HAVE_ARCH_HUGE_VMAP checks in pxx_huge_tests() per Gerald
- Updated documentation for pmd_thp_tests() per Zi Yan
- Replaced READ_ONCE() with huge_ptep_get() per Gerald
- Added pte_mkhuge() and masking with PMD_MASK per Gerald
- Replaced pte_same() with holding pfn check in pxx_swap_tests()
- Added documentation for all (#ifdef #else #endif) per Gerald
- Updated pmd_protnone_tests() per Gerald
- Updated HugeTLB PTE creation in hugetlb_advanced_tests() per Gerald
- Replaced [pmd|pud]_mknotpresent() with [pmd|pud]_mkinvalid()
- Added has_transparent_hugepage() check for PMD and PUD tests
- Added a patch which debug prints all individual tests being executed
- Updated documentation for renamed [pmd|pud]_mkinvalid() helpers

Changes in V2: (https://patchwork.kernel.org/project/linux-mm/list/?series=260573)

- Dropped CONFIG_ARCH_HAS_PTE_SPECIAL per Christophe
- Dropped CONFIG_NUMA_BALANCING per Christophe
- Dropped CONFIG_HAVE_ARCH_SOFT_DIRTY per Christophe
- Dropped CONFIG_MIGRATION per Christophe
- Replaced CONFIG_S390 with __HAVE_ARCH_PMDP_INVALIDATE
- Moved page allocation & free inside swap_migration_tests() per Christophe
- Added CONFIG_TRANSPARENT_HUGEPAGE to protect pfn_pmd()
- Added CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD to protect pfn_pud()
- Added a patch for other arch advanced page table helper tests
- Added a patch creating a documentation for page table helper semantics

Changes in V1: (https://patchwork.kernel.org/patch/11408253/)

Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-riscv@lists.infradead.org
Cc: x86@kernel.org
Cc: linux-mm@kvack.org
Cc: linux-doc@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

Anshuman Khandual (4):
  mm/debug_vm_pgtable: Add tests validating arch helpers for core MM features
  mm/debug_vm_pgtable: Add tests validating advanced arch page table helpers
  mm/debug_vm_pgtable: Add debug prints for individual tests
  Documentation/mm: Add descriptions for arch page table helpers

 Documentation/vm/arch_pgtable_helpers.rst | 258 +++++++++
 mm/debug_vm_pgtable.c                     | 660 +++++++++++++++++++++-
 2 files changed, 916 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/vm/arch_pgtable_helpers.rst

-- 
2.20.1



* [PATCH V3 1/4] mm/debug_vm_pgtable: Add tests validating arch helpers for core MM features
From: Anshuman Khandual @ 2020-06-15  3:37 UTC
  To: linux-mm
  Cc: Benjamin Herrenschmidt, Heiko Carstens, Paul Mackerras,
	H. Peter Anvin, linux-riscv, Will Deacon, linux-arch, linux-s390,
	Michael Ellerman, x86, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, linux-arm-kernel, ziy, Catalin Marinas,
	linux-snps-arc, Vasily Gorbik, Anshuman Khandual,
	Borislav Petkov, Paul Walmsley, Kirill A . Shutemov,
	Thomas Gleixner, gerald.schaefer, christophe.leroy, Vineet Gupta,
	linux-kernel, Palmer Dabbelt, Andrew Morton, linuxppc-dev

This adds new tests validating arch page table helpers for the following
core memory features. These tests create and check specific mapping types
at various page table levels. A minimal sketch of one such test follows
the list below.

1. SPECIAL mapping
2. PROTNONE mapping
3. DEVMAP mapping
4. SOFTDIRTY mapping
5. SWAP mapping
6. MIGRATION mapping
7. HUGETLB mapping
8. THP mapping
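
Each test boils down to constructing an entry of the required type at the
given level and asserting the corresponding helper, guarded by the relevant
config option. For example, the SPECIAL mapping case (taken from the hunk
below) is simply:

static void __init pte_special_tests(unsigned long pfn, pgprot_t prot)
{
	pte_t pte = pfn_pte(pfn, prot);

	if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL))
		return;

	/* An entry marked special must be reported as special */
	WARN_ON(!pte_special(pte_mkspecial(pte)));
}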

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-riscv@lists.infradead.org
Cc: x86@kernel.org
Cc: linux-mm@kvack.org
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Zi Yan <ziy@nvidia.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 mm/debug_vm_pgtable.c | 302 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 301 insertions(+), 1 deletion(-)

diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index e45623016aea..ffa163d4c63c 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -282,6 +282,278 @@ static void __init pmd_populate_tests(struct mm_struct *mm, pmd_t *pmdp,
 	WARN_ON(pmd_bad(pmd));
 }
 
+static void __init pte_special_tests(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL))
+		return;
+
+	WARN_ON(!pte_special(pte_mkspecial(pte)));
+}
+
+static void __init pte_protnone_tests(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
+		return;
+
+	WARN_ON(!pte_protnone(pte));
+	WARN_ON(!pte_present(pte));
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static void __init pmd_protnone_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
+
+	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
+		return;
+
+	WARN_ON(!pmd_protnone(pmd));
+	WARN_ON(!pmd_present(pmd));
+}
+#else  /* !CONFIG_TRANSPARENT_HUGEPAGE */
+static void __init pmd_protnone_tests(unsigned long pfn, pgprot_t prot) { }
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
+static void __init pte_devmap_tests(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	WARN_ON(!pte_devmap(pte_mkdevmap(pte)));
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static void __init pmd_devmap_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd = pfn_pmd(pfn, prot);
+
+	WARN_ON(!pmd_devmap(pmd_mkdevmap(pmd)));
+}
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot)
+{
+	pud_t pud = pfn_pud(pfn, prot);
+
+	WARN_ON(!pud_devmap(pud_mkdevmap(pud)));
+}
+#else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot) { }
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+#else  /* CONFIG_TRANSPARENT_HUGEPAGE */
+static void __init pmd_devmap_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot) { }
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+#else  /* !CONFIG_ARCH_HAS_PTE_DEVMAP */
+static void __init pte_devmap_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pmd_devmap_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot) { }
+#endif /* CONFIG_ARCH_HAS_PTE_DEVMAP */
+
+static void __init pte_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY))
+		return;
+
+	WARN_ON(!pte_soft_dirty(pte_mksoft_dirty(pte)));
+	WARN_ON(pte_soft_dirty(pte_clear_soft_dirty(pte)));
+}
+
+static void __init pte_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY))
+		return;
+
+	WARN_ON(!pte_swp_soft_dirty(pte_swp_mksoft_dirty(pte)));
+	WARN_ON(pte_swp_soft_dirty(pte_swp_clear_soft_dirty(pte)));
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static void __init pmd_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd = pfn_pmd(pfn, prot);
+
+	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY))
+		return;
+
+	WARN_ON(!pmd_soft_dirty(pmd_mksoft_dirty(pmd)));
+	WARN_ON(pmd_soft_dirty(pmd_clear_soft_dirty(pmd)));
+}
+
+static void __init pmd_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd = pfn_pmd(pfn, prot);
+
+	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) ||
+		!IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION))
+		return;
+
+	WARN_ON(!pmd_swp_soft_dirty(pmd_swp_mksoft_dirty(pmd)));
+	WARN_ON(pmd_swp_soft_dirty(pmd_swp_clear_soft_dirty(pmd)));
+}
+#else  /* !CONFIG_TRANSPARENT_HUGEPAGE */
+static void __init pmd_soft_dirty_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pmd_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
+{
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+static void __init pte_swap_tests(unsigned long pfn, pgprot_t prot)
+{
+	swp_entry_t swp;
+	pte_t pte;
+
+	pte = pfn_pte(pfn, prot);
+	swp = __pte_to_swp_entry(pte);
+	pte = __swp_entry_to_pte(swp);
+	WARN_ON(pfn != pte_pfn(pte));
+}
+
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+static void __init pmd_swap_tests(unsigned long pfn, pgprot_t prot)
+{
+	swp_entry_t swp;
+	pmd_t pmd;
+
+	pmd = pfn_pmd(pfn, prot);
+	swp = __pmd_to_swp_entry(pmd);
+	pmd = __swp_entry_to_pmd(swp);
+	WARN_ON(pfn != pmd_pfn(pmd));
+}
+#else  /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
+static void __init pmd_swap_tests(unsigned long pfn, pgprot_t prot) { }
+#endif /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
+
+static void __init swap_migration_tests(void)
+{
+	struct page *page;
+	swp_entry_t swp;
+
+	if (!IS_ENABLED(CONFIG_MIGRATION))
+		return;
+	/*
+	 * swap_migration_tests() requires a dedicated page as it needs to
+	 * be locked before creating a migration entry from it. Locking the
+	 * page that actually maps kernel text ('start_kernel') can be really
+	 * problematic. Let's allocate a dedicated page explicitly for this
+	 * purpose that will be freed subsequently.
+	 */
+	page = alloc_page(GFP_KERNEL);
+	if (!page) {
+		pr_err("page allocation failed\n");
+		return;
+	}
+
+	/*
+	 * make_migration_entry() expects given page to be
+	 * locked, otherwise it stumbles upon a BUG_ON().
+	 */
+	__SetPageLocked(page);
+	swp = make_migration_entry(page, 1);
+	WARN_ON(!is_migration_entry(swp));
+	WARN_ON(!is_write_migration_entry(swp));
+
+	make_migration_entry_read(&swp);
+	WARN_ON(!is_migration_entry(swp));
+	WARN_ON(is_write_migration_entry(swp));
+
+	swp = make_migration_entry(page, 0);
+	WARN_ON(!is_migration_entry(swp));
+	WARN_ON(is_write_migration_entry(swp));
+	__ClearPageLocked(page);
+	__free_page(page);
+}
+
+#ifdef CONFIG_HUGETLB_PAGE
+static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot)
+{
+	struct page *page;
+	pte_t pte;
+
+	/*
+	 * Accessing the page associated with the pfn is safe here,
+	 * as it was previously derived from a real kernel symbol.
+	 */
+	page = pfn_to_page(pfn);
+	pte = mk_huge_pte(page, prot);
+
+	WARN_ON(!huge_pte_dirty(huge_pte_mkdirty(pte)));
+	WARN_ON(!huge_pte_write(huge_pte_mkwrite(huge_pte_wrprotect(pte))));
+	WARN_ON(huge_pte_write(huge_pte_wrprotect(huge_pte_mkwrite(pte))));
+
+#ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB
+	pte = pfn_pte(pfn, prot);
+
+	WARN_ON(!pte_huge(pte_mkhuge(pte)));
+#endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */
+}
+#else  /* !CONFIG_HUGETLB_PAGE */
+static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot) { }
+#endif /* CONFIG_HUGETLB_PAGE */
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static void __init pmd_thp_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd;
+
+	if (!has_transparent_hugepage())
+		return;
+
+	/*
+	 * pmd_trans_huge() and pmd_present() must return positive after
+	 * MMU invalidation with pmd_mkinvalid(). This behavior is an
+	 * optimization for transparent huge page. pmd_trans_huge() must
+	 * be true if pmd_page() returns a valid THP to avoid taking the
+	 * pmd_lock when others walk over non transhuge pmds (i.e. there
+	 * are no THP allocated). Especially when splitting a THP and
+	 * removing the present bit from the pmd, pmd_trans_huge() still
+	 * needs to return true. pmd_present() should be true whenever
+	 * pmd_trans_huge() returns true.
+	 */
+	pmd = pfn_pmd(pfn, prot);
+	WARN_ON(!pmd_trans_huge(pmd_mkhuge(pmd)));
+
+#ifndef __HAVE_ARCH_PMDP_INVALIDATE
+	WARN_ON(!pmd_trans_huge(pmd_mkinvalid(pmd_mkhuge(pmd))));
+	WARN_ON(!pmd_present(pmd_mkinvalid(pmd_mkhuge(pmd))));
+#endif /* __HAVE_ARCH_PMDP_INVALIDATE */
+}
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+static void __init pud_thp_tests(unsigned long pfn, pgprot_t prot)
+{
+	pud_t pud;
+
+	if (!has_transparent_hugepage())
+		return;
+
+	pud = pfn_pud(pfn, prot);
+	WARN_ON(!pud_trans_huge(pud_mkhuge(pud)));
+
+	/*
+	 * pud_mkinvalid() has been dropped for now. Enable back
+	 * these tests when it comes back with a modified pud_present().
+	 *
+	 * WARN_ON(!pud_trans_huge(pud_mkinvalid(pud_mkhuge(pud))));
+	 * WARN_ON(!pud_present(pud_mkinvalid(pud_mkhuge(pud))));
+	 */
+}
+#else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+static void __init pud_thp_tests(unsigned long pfn, pgprot_t prot) { }
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+#else  /* !CONFIG_TRANSPARENT_HUGEPAGE */
+static void __init pmd_thp_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pud_thp_tests(unsigned long pfn, pgprot_t prot) { }
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
 static unsigned long __init get_random_vaddr(void)
 {
 	unsigned long random_vaddr, random_pages, total_user_pages;
@@ -303,7 +575,7 @@ static int __init debug_vm_pgtable(void)
 	pmd_t *pmdp, *saved_pmdp, pmd;
 	pte_t *ptep;
 	pgtable_t saved_ptep;
-	pgprot_t prot;
+	pgprot_t prot, protnone;
 	phys_addr_t paddr;
 	unsigned long vaddr, pte_aligned, pmd_aligned;
 	unsigned long pud_aligned, p4d_aligned, pgd_aligned;
@@ -318,6 +590,12 @@ static int __init debug_vm_pgtable(void)
 		return 1;
 	}
 
+	/*
+	 * __P000 (or even __S000) will help create page table entries with
+	 * PROT_NONE permission as required for pxx_protnone_tests().
+	 */
+	protnone = __P000;
+
 	/*
 	 * PFN for mapping at PTE level is determined from a standard kernel
 	 * text symbol. But pfns for higher page table levels are derived by
@@ -373,6 +651,28 @@ static int __init debug_vm_pgtable(void)
 	p4d_populate_tests(mm, p4dp, saved_pudp);
 	pgd_populate_tests(mm, pgdp, saved_p4dp);
 
+	pte_special_tests(pte_aligned, prot);
+	pte_protnone_tests(pte_aligned, protnone);
+	pmd_protnone_tests(pmd_aligned, protnone);
+
+	pte_devmap_tests(pte_aligned, prot);
+	pmd_devmap_tests(pmd_aligned, prot);
+	pud_devmap_tests(pud_aligned, prot);
+
+	pte_soft_dirty_tests(pte_aligned, prot);
+	pmd_soft_dirty_tests(pmd_aligned, prot);
+	pte_swap_soft_dirty_tests(pte_aligned, prot);
+	pmd_swap_soft_dirty_tests(pmd_aligned, prot);
+
+	pte_swap_tests(pte_aligned, prot);
+	pmd_swap_tests(pmd_aligned, prot);
+
+	swap_migration_tests();
+	hugetlb_basic_tests(pte_aligned, prot);
+
+	pmd_thp_tests(pmd_aligned, prot);
+	pud_thp_tests(pud_aligned, prot);
+
 	p4d_free(mm, saved_p4dp);
 	pud_free(mm, saved_pudp);
 	pmd_free(mm, saved_pmdp);
-- 
2.20.1



* [PATCH V3 2/4] mm/debug_vm_pgtable: Add tests validating advanced arch page table helpers
From: Anshuman Khandual @ 2020-06-15  3:37 UTC
  To: linux-mm
  Cc: Benjamin Herrenschmidt, Heiko Carstens, Paul Mackerras,
	H. Peter Anvin, linux-riscv, Will Deacon, linux-arch, linux-s390,
	Michael Ellerman, x86, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, linux-arm-kernel, ziy, Catalin Marinas,
	linux-snps-arc, Vasily Gorbik, Anshuman Khandual,
	Borislav Petkov, Paul Walmsley, Kirill A . Shutemov,
	Thomas Gleixner, gerald.schaefer, christophe.leroy, Vineet Gupta,
	linux-kernel, Palmer Dabbelt, Andrew Morton, linuxppc-dev

This adds new tests validating the following advanced arch page table
helpers. These tests create and check specific mapping types at various
page table levels. The common set-and-check pattern they share is sketched
after the list.

1. pxxp_set_wrprotect()
2. pxxp_get_and_clear()
3. pxxp_set_access_flags()
4. pxxp_get_and_clear_full()
5. pxxp_test_and_clear_young()
6. pxx_leaf()
7. pxx_set_huge()
8. pxx_(clear|mk)_savedwrite()
9. huge_pxxp_xxx()
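
Unlike the basic tests, these operate on entries installed in a temporary
page table slot: set the entry, apply the helper under test, read the entry
back and assert the expected state. For example, the ptep_set_wrprotect()
case in pte_advanced_tests() below boils down to:

	pte = pfn_pte(pfn, prot);
	set_pte_at(mm, vaddr, ptep, pte);
	ptep_set_wrprotect(mm, vaddr, ptep);
	pte = READ_ONCE(*ptep);
	WARN_ON(pte_write(pte));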

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-riscv@lists.infradead.org
Cc: x86@kernel.org
Cc: linux-mm@kvack.org
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 mm/debug_vm_pgtable.c | 306 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 306 insertions(+)

diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index ffa163d4c63c..e3f9f8317a98 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -21,6 +21,7 @@
 #include <linux/module.h>
 #include <linux/pfn_t.h>
 #include <linux/printk.h>
+#include <linux/pgtable.h>
 #include <linux/random.h>
 #include <linux/spinlock.h>
 #include <linux/swap.h>
@@ -28,6 +29,7 @@
 #include <linux/start_kernel.h>
 #include <linux/sched/mm.h>
 #include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
 
 #define VMFLAGS	(VM_READ|VM_WRITE|VM_EXEC)
 
@@ -55,6 +57,54 @@ static void __init pte_basic_tests(unsigned long pfn, pgprot_t prot)
 	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte))));
 }
 
+static void __init pte_advanced_tests(struct mm_struct *mm,
+			struct vm_area_struct *vma, pte_t *ptep,
+			unsigned long pfn, unsigned long vaddr, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	pte = pfn_pte(pfn, prot);
+	set_pte_at(mm, vaddr, ptep, pte);
+	ptep_set_wrprotect(mm, vaddr, ptep);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(pte_write(pte));
+
+	pte = pfn_pte(pfn, prot);
+	set_pte_at(mm, vaddr, ptep, pte);
+	ptep_get_and_clear(mm, vaddr, ptep);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(!pte_none(pte));
+
+	pte = pfn_pte(pfn, prot);
+	pte = pte_wrprotect(pte);
+	pte = pte_mkclean(pte);
+	set_pte_at(mm, vaddr, ptep, pte);
+	pte = pte_mkwrite(pte);
+	pte = pte_mkdirty(pte);
+	ptep_set_access_flags(vma, vaddr, ptep, pte, 1);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(!(pte_write(pte) && pte_dirty(pte)));
+
+	pte = pfn_pte(pfn, prot);
+	set_pte_at(mm, vaddr, ptep, pte);
+	ptep_get_and_clear_full(mm, vaddr, ptep, 1);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(!pte_none(pte));
+
+	pte = pte_mkyoung(pte);
+	set_pte_at(mm, vaddr, ptep, pte);
+	ptep_test_and_clear_young(vma, vaddr, ptep);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(pte_young(pte));
+}
+
+static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(pfn, prot);
+
+	WARN_ON(!pte_savedwrite(pte_mk_savedwrite(pte_clear_savedwrite(pte))));
+	WARN_ON(pte_savedwrite(pte_clear_savedwrite(pte_mk_savedwrite(pte))));
+}
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
 {
@@ -77,6 +127,89 @@ static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
 	WARN_ON(!pmd_bad(pmd_mkhuge(pmd)));
 }
 
+static void __init pmd_advanced_tests(struct mm_struct *mm,
+		struct vm_area_struct *vma, pmd_t *pmdp,
+		unsigned long pfn, unsigned long vaddr, pgprot_t prot)
+{
+	pmd_t pmd = pfn_pmd(pfn, prot);
+
+	if (!has_transparent_hugepage())
+		return;
+
+	/* Align the address wrt HPAGE_PMD_SIZE */
+	vaddr = (vaddr & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
+
+	pmd = pfn_pmd(pfn, prot);
+	set_pmd_at(mm, vaddr, pmdp, pmd);
+	pmdp_set_wrprotect(mm, vaddr, pmdp);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(pmd_write(pmd));
+
+	pmd = pfn_pmd(pfn, prot);
+	set_pmd_at(mm, vaddr, pmdp, pmd);
+	pmdp_huge_get_and_clear(mm, vaddr, pmdp);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(!pmd_none(pmd));
+
+	pmd = pfn_pmd(pfn, prot);
+	pmd = pmd_wrprotect(pmd);
+	pmd = pmd_mkclean(pmd);
+	set_pmd_at(mm, vaddr, pmdp, pmd);
+	pmd = pmd_mkwrite(pmd);
+	pmd = pmd_mkdirty(pmd);
+	pmdp_set_access_flags(vma, vaddr, pmdp, pmd, 1);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(!(pmd_write(pmd) && pmd_dirty(pmd)));
+
+	pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
+	set_pmd_at(mm, vaddr, pmdp, pmd);
+	pmdp_huge_get_and_clear_full(vma, vaddr, pmdp, 1);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(!pmd_none(pmd));
+
+	pmd = pmd_mkyoung(pmd);
+	set_pmd_at(mm, vaddr, pmdp, pmd);
+	pmdp_test_and_clear_young(vma, vaddr, pmdp);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(pmd_young(pmd));
+}
+
+static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd = pfn_pmd(pfn, prot);
+
+	/*
+	 * PMD based THP is a leaf entry.
+	 */
+	pmd = pmd_mkhuge(pmd);
+	WARN_ON(!pmd_leaf(pmd));
+}
+
+static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd;
+
+	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
+		return;
+	/*
+	 * X86 defined pmd_set_huge() verifies that the given
+	 * PMD is not a populated non-leaf entry.
+	 */
+	WRITE_ONCE(*pmdp, __pmd(0));
+	WARN_ON(!pmd_set_huge(pmdp, __pfn_to_phys(pfn), prot));
+	WARN_ON(!pmd_clear_huge(pmdp));
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(!pmd_none(pmd));
+}
+
+static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
+{
+	pmd_t pmd = pfn_pmd(pfn, prot);
+
+	WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
+	WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
+}
+
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
 {
@@ -100,12 +233,115 @@ static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
 	 */
 	WARN_ON(!pud_bad(pud_mkhuge(pud)));
 }
+
+static void pud_advanced_tests(struct mm_struct *mm,
+		struct vm_area_struct *vma, pud_t *pudp,
+		unsigned long pfn, unsigned long vaddr, pgprot_t prot)
+{
+	pud_t pud = pfn_pud(pfn, prot);
+
+	if (!has_transparent_hugepage())
+		return;
+
+	/* Align the address wrt HPAGE_PUD_SIZE */
+	vaddr = (vaddr & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE;
+
+	set_pud_at(mm, vaddr, pudp, pud);
+	pudp_set_wrprotect(mm, vaddr, pudp);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(pud_write(pud));
+
+#ifndef __PAGETABLE_PMD_FOLDED
+	pud = pfn_pud(pfn, prot);
+	set_pud_at(mm, vaddr, pudp, pud);
+	pudp_huge_get_and_clear(mm, vaddr, pudp);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(!pud_none(pud));
+
+	pud = pfn_pud(pfn, prot);
+	set_pud_at(mm, vaddr, pudp, pud);
+	pudp_huge_get_and_clear_full(mm, vaddr, pudp, 1);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(!pud_none(pud));
+#endif /* __PAGETABLE_PMD_FOLDED */
+	pud = pfn_pud(pfn, prot);
+	pud = pud_wrprotect(pud);
+	pud = pud_mkclean(pud);
+	set_pud_at(mm, vaddr, pudp, pud);
+	pud = pud_mkwrite(pud);
+	pud = pud_mkdirty(pud);
+	pudp_set_access_flags(vma, vaddr, pudp, pud, 1);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(!(pud_write(pud) && pud_dirty(pud)));
+
+	pud = pud_mkyoung(pud);
+	set_pud_at(mm, vaddr, pudp, pud);
+	pudp_test_and_clear_young(vma, vaddr, pudp);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(pud_young(pud));
+}
+
+static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
+{
+	pud_t pud = pfn_pud(pfn, prot);
+
+	/*
+	 * PUD based THP is a leaf entry.
+	 */
+	pud = pud_mkhuge(pud);
+	WARN_ON(!pud_leaf(pud));
+}
+
+static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
+{
+	pud_t pud;
+
+	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
+		return;
+	/*
+	 * X86 defined pud_set_huge() verifies that the given
+	 * PUD is not a populated non-leaf entry.
+	 */
+	WRITE_ONCE(*pudp, __pud(0));
+	WARN_ON(!pud_set_huge(pudp, __pfn_to_phys(pfn), prot));
+	WARN_ON(!pud_clear_huge(pudp));
+	pud = READ_ONCE(*pudp);
+	WARN_ON(!pud_none(pud));
+}
 #else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
+static void pud_advanced_tests(struct mm_struct *mm,
+		struct vm_area_struct *vma, pud_t *pudp,
+		unsigned long pfn, unsigned long vaddr, pgprot_t prot)
+{
+}
+static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
+{
+}
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 #else  /* !CONFIG_TRANSPARENT_HUGEPAGE */
 static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot) { }
 static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pmd_advanced_tests(struct mm_struct *mm,
+		struct vm_area_struct *vma, pmd_t *pmdp,
+		unsigned long pfn, unsigned long vaddr, pgprot_t prot)
+{
+}
+static void __init pud_advanced_tests(struct mm_struct *mm,
+		struct vm_area_struct *vma, pud_t *pudp,
+		unsigned long pfn, unsigned long vaddr, pgprot_t prot)
+{
+}
+static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
+{
+}
+static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
+{
+}
+static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot) { }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static void __init p4d_basic_tests(unsigned long pfn, pgprot_t prot)
@@ -495,8 +731,56 @@ static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot)
 	WARN_ON(!pte_huge(pte_mkhuge(pte)));
 #endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */
 }
+
+static void __init hugetlb_advanced_tests(struct mm_struct *mm,
+					  struct vm_area_struct *vma,
+					  pte_t *ptep, unsigned long pfn,
+					  unsigned long vaddr, pgprot_t prot)
+{
+	struct page *page = pfn_to_page(pfn);
+	pte_t pte = READ_ONCE(*ptep);
+	unsigned long paddr = (__pfn_to_phys(pfn) | RANDOM_ORVALUE) & PMD_MASK;
+
+	pte = pte_mkhuge(mk_pte(pfn_to_page(PHYS_PFN(paddr)), prot));
+	set_huge_pte_at(mm, vaddr, ptep, pte);
+	barrier();
+	WARN_ON(!pte_same(pte, huge_ptep_get(ptep)));
+	huge_pte_clear(mm, vaddr, ptep, PMD_SIZE);
+	pte = huge_ptep_get(ptep);
+	WARN_ON(!huge_pte_none(pte));
+
+	pte = mk_huge_pte(page, prot);
+	set_huge_pte_at(mm, vaddr, ptep, pte);
+	barrier();
+	huge_ptep_set_wrprotect(mm, vaddr, ptep);
+	pte = huge_ptep_get(ptep);
+	WARN_ON(huge_pte_write(pte));
+
+	pte = mk_huge_pte(page, prot);
+	set_huge_pte_at(mm, vaddr, ptep, pte);
+	barrier();
+	huge_ptep_get_and_clear(mm, vaddr, ptep);
+	pte = huge_ptep_get(ptep);
+	WARN_ON(!huge_pte_none(pte));
+
+	pte = mk_huge_pte(page, prot);
+	pte = huge_pte_wrprotect(pte);
+	set_huge_pte_at(mm, vaddr, ptep, pte);
+	barrier();
+	pte = huge_pte_mkwrite(pte);
+	pte = huge_pte_mkdirty(pte);
+	huge_ptep_set_access_flags(vma, vaddr, ptep, pte, 1);
+	pte = huge_ptep_get(ptep);
+	WARN_ON(!(huge_pte_write(pte) && huge_pte_dirty(pte)));
+}
 #else  /* !CONFIG_HUGETLB_PAGE */
 static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot) { }
+static void __init hugetlb_advanced_tests(struct mm_struct *mm,
+					  struct vm_area_struct *vma,
+					  pte_t *ptep, unsigned long pfn,
+					  unsigned long vaddr, pgprot_t prot)
+{
+}
 #endif /* CONFIG_HUGETLB_PAGE */
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -568,6 +852,7 @@ static unsigned long __init get_random_vaddr(void)
 
 static int __init debug_vm_pgtable(void)
 {
+	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 	pgd_t *pgdp;
 	p4d_t *p4dp, *saved_p4dp;
@@ -596,6 +881,12 @@ static int __init debug_vm_pgtable(void)
 	 */
 	protnone = __P000;
 
+	vma = vm_area_alloc(mm);
+	if (!vma) {
+		pr_err("vma allocation failed\n");
+		return 1;
+	}
+
 	/*
 	 * PFN for mapping at PTE level is determined from a standard kernel
 	 * text symbol. But pfns for higher page table levels are derived by
@@ -644,6 +935,20 @@ static int __init debug_vm_pgtable(void)
 	p4d_clear_tests(mm, p4dp);
 	pgd_clear_tests(mm, pgdp);
 
+	pte_advanced_tests(mm, vma, ptep, pte_aligned, vaddr, prot);
+	pmd_advanced_tests(mm, vma, pmdp, pmd_aligned, vaddr, prot);
+	pud_advanced_tests(mm, vma, pudp, pud_aligned, vaddr, prot);
+	hugetlb_advanced_tests(mm, vma, ptep, pte_aligned, vaddr, prot);
+
+	pmd_leaf_tests(pmd_aligned, prot);
+	pud_leaf_tests(pud_aligned, prot);
+
+	pmd_huge_tests(pmdp, pmd_aligned, prot);
+	pud_huge_tests(pudp, pud_aligned, prot);
+
+	pte_savedwrite_tests(pte_aligned, prot);
+	pmd_savedwrite_tests(pmd_aligned, prot);
+
 	pte_unmap_unlock(ptep, ptl);
 
 	pmd_populate_tests(mm, pmdp, saved_ptep);
@@ -678,6 +983,7 @@ static int __init debug_vm_pgtable(void)
 	pmd_free(mm, saved_pmdp);
 	pte_free(mm, saved_ptep);
 
+	vm_area_free(vma);
 	mm_dec_nr_puds(mm);
 	mm_dec_nr_pmds(mm);
 	mm_dec_nr_ptes(mm);
-- 
2.20.1



* [PATCH V3 3/4] mm/debug_vm_pgtable: Add debug prints for individual tests
From: Anshuman Khandual @ 2020-06-15  3:37 UTC
  To: linux-mm
  Cc: Benjamin Herrenschmidt, Heiko Carstens, Paul Mackerras,
	H. Peter Anvin, linux-riscv, Will Deacon, linux-arch, linux-s390,
	Michael Ellerman, x86, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, linux-arm-kernel, ziy, Catalin Marinas,
	linux-snps-arc, Vasily Gorbik, Anshuman Khandual,
	Borislav Petkov, Paul Walmsley, Kirill A . Shutemov,
	Thomas Gleixner, gerald.schaefer, christophe.leroy, Vineet Gupta,
	linux-kernel, Palmer Dabbelt, Andrew Morton, linuxppc-dev

This adds debug prints listing all the tests being executed on a given
platform. With dynamic debug enabled, the following information will be
printed during boot. For compactness, both the time stamp and the prefix
(i.e. debug_vm_pgtable) have been dropped from this sample output.

[debug_vm_pgtable      ]: Validating architecture page table helpers
[pte_basic_tests       ]: Validating PTE basic
[pmd_basic_tests       ]: Validating PMD basic
[p4d_basic_tests       ]: Validating P4D basic
[pgd_basic_tests       ]: Validating PGD basic
[pte_clear_tests       ]: Validating PTE clear
[pmd_clear_tests       ]: Validating PMD clear
[pte_advanced_tests    ]: Validating PTE advanced
[pmd_advanced_tests    ]: Validating PMD advanced
[hugetlb_advanced_tests]: Validating HugeTLB advanced
[pmd_leaf_tests        ]: Validating PMD leaf
[pmd_huge_tests        ]: Validating PMD huge
[pte_savedwrite_tests  ]: Validating PTE saved write
[pmd_savedwrite_tests  ]: Validating PMD saved write
[pmd_populate_tests    ]: Validating PMD populate
[pte_special_tests     ]: Validating PTE special
[pte_protnone_tests    ]: Validating PTE protnone
[pmd_protnone_tests    ]: Validating PMD protnone
[pte_devmap_tests      ]: Validating PTE devmap
[pmd_devmap_tests      ]: Validating PMD devmap
[pte_swap_tests        ]: Validating PTE swap
[swap_migration_tests  ]: Validating swap migration
[hugetlb_basic_tests   ]: Validating HugeTLB basic
[pmd_thp_tests         ]: Validating PMD based THP
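
For reference, assuming CONFIG_DYNAMIC_DEBUG is enabled, one way to turn
these prints on for the boot-time run is the dyndbg kernel command line
parameter (shown here as a sketch; adjust the file match if needed):

	dyndbg="file debug_vm_pgtable.c +p"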

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-riscv@lists.infradead.org
Cc: x86@kernel.org
Cc: linux-mm@kvack.org
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 mm/debug_vm_pgtable.c | 46 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 45 insertions(+), 1 deletion(-)

diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index e3f9f8317a98..536f3b1b3ad6 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -8,7 +8,7 @@
  *
  * Author: Anshuman Khandual <anshuman.khandual@arm.com>
  */
-#define pr_fmt(fmt) "debug_vm_pgtable: %s: " fmt, __func__
+#define pr_fmt(fmt) "debug_vm_pgtable: [%-25s]: " fmt, __func__
 
 #include <linux/gfp.h>
 #include <linux/highmem.h>
@@ -48,6 +48,7 @@ static void __init pte_basic_tests(unsigned long pfn, pgprot_t prot)
 {
 	pte_t pte = pfn_pte(pfn, prot);
 
+	pr_debug("Validating PTE basic\n");
 	WARN_ON(!pte_same(pte, pte));
 	WARN_ON(!pte_young(pte_mkyoung(pte_mkold(pte))));
 	WARN_ON(!pte_dirty(pte_mkdirty(pte_mkclean(pte))));
@@ -63,6 +64,7 @@ static void __init pte_advanced_tests(struct mm_struct *mm,
 {
 	pte_t pte = pfn_pte(pfn, prot);
 
+	pr_debug("Validating PTE advanced\n");
 	pte = pfn_pte(pfn, prot);
 	set_pte_at(mm, vaddr, ptep, pte);
 	ptep_set_wrprotect(mm, vaddr, ptep);
@@ -102,6 +104,7 @@ static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
 {
 	pte_t pte = pfn_pte(pfn, prot);
 
+	pr_debug("Validating PTE saved write\n");
 	WARN_ON(!pte_savedwrite(pte_mk_savedwrite(pte_clear_savedwrite(pte))));
 	WARN_ON(pte_savedwrite(pte_clear_savedwrite(pte_mk_savedwrite(pte))));
 }
@@ -113,6 +116,7 @@ static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
 	if (!has_transparent_hugepage())
 		return;
 
+	pr_debug("Validating PMD basic\n");
 	WARN_ON(!pmd_same(pmd, pmd));
 	WARN_ON(!pmd_young(pmd_mkyoung(pmd_mkold(pmd))));
 	WARN_ON(!pmd_dirty(pmd_mkdirty(pmd_mkclean(pmd))));
@@ -136,6 +140,7 @@ static void __init pmd_advanced_tests(struct mm_struct *mm,
 	if (!has_transparent_hugepage())
 		return;
 
+	pr_debug("Validating PMD advanced\n");
 	/* Align the address wrt HPAGE_PMD_SIZE */
 	vaddr = (vaddr & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
 
@@ -178,6 +183,7 @@ static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
 {
 	pmd_t pmd = pfn_pmd(pfn, prot);
 
+	pr_debug("Validating PMD leaf\n");
 	/*
 	 * PMD based THP is a leaf entry.
 	 */
@@ -191,6 +197,8 @@ static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
 
 	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
 		return;
+
+	pr_debug("Validating PMD huge\n");
 	/*
 	 * X86 defined pmd_set_huge() verifies that the given
 	 * PMD is not a populated non-leaf entry.
@@ -206,6 +214,7 @@ static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
 {
 	pmd_t pmd = pfn_pmd(pfn, prot);
 
+	pr_debug("Validating PMD saved write\n");
 	WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
 	WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
 }
@@ -218,6 +227,7 @@ static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
 	if (!has_transparent_hugepage())
 		return;
 
+	pr_debug("Validating PUD basic\n");
 	WARN_ON(!pud_same(pud, pud));
 	WARN_ON(!pud_young(pud_mkyoung(pud_mkold(pud))));
 	WARN_ON(!pud_write(pud_mkwrite(pud_wrprotect(pud))));
@@ -243,6 +253,7 @@ static void pud_advanced_tests(struct mm_struct *mm,
 	if (!has_transparent_hugepage())
 		return;
 
+	pr_debug("Validating PUD advanced\n");
 	/* Align the address wrt HPAGE_PUD_SIZE */
 	vaddr = (vaddr & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE;
 
@@ -285,6 +296,7 @@ static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
 {
 	pud_t pud = pfn_pud(pfn, prot);
 
+	pr_debug("Validating PUD leaf\n");
 	/*
 	 * PUD based THP is a leaf entry.
 	 */
@@ -298,6 +310,8 @@ static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
 
 	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
 		return;
+
+	pr_debug("Validating PUD huge\n");
 	/*
 	 * X86 defined pud_set_huge() verifies that the given
 	 * PUD is not a populated non-leaf entry.
@@ -348,6 +362,7 @@ static void __init p4d_basic_tests(unsigned long pfn, pgprot_t prot)
 {
 	p4d_t p4d;
 
+	pr_debug("Validating P4D basic\n");
 	memset(&p4d, RANDOM_NZVALUE, sizeof(p4d_t));
 	WARN_ON(!p4d_same(p4d, p4d));
 }
@@ -356,6 +371,7 @@ static void __init pgd_basic_tests(unsigned long pfn, pgprot_t prot)
 {
 	pgd_t pgd;
 
+	pr_debug("Validating PGD basic\n");
 	memset(&pgd, RANDOM_NZVALUE, sizeof(pgd_t));
 	WARN_ON(!pgd_same(pgd, pgd));
 }
@@ -368,6 +384,7 @@ static void __init pud_clear_tests(struct mm_struct *mm, pud_t *pudp)
 	if (mm_pmd_folded(mm))
 		return;
 
+	pr_debug("Validating PUD clear\n");
 	pud = __pud(pud_val(pud) | RANDOM_ORVALUE);
 	WRITE_ONCE(*pudp, pud);
 	pud_clear(pudp);
@@ -382,6 +399,8 @@ static void __init pud_populate_tests(struct mm_struct *mm, pud_t *pudp,
 
 	if (mm_pmd_folded(mm))
 		return;
+
+	pr_debug("Validating PUD populate\n");
 	/*
 	 * This entry points to next level page table page.
 	 * Hence this must not qualify as pud_bad().
@@ -408,6 +427,7 @@ static void __init p4d_clear_tests(struct mm_struct *mm, p4d_t *p4dp)
 	if (mm_pud_folded(mm))
 		return;
 
+	pr_debug("Validating P4D clear\n");
 	p4d = __p4d(p4d_val(p4d) | RANDOM_ORVALUE);
 	WRITE_ONCE(*p4dp, p4d);
 	p4d_clear(p4dp);
@@ -423,6 +443,7 @@ static void __init p4d_populate_tests(struct mm_struct *mm, p4d_t *p4dp,
 	if (mm_pud_folded(mm))
 		return;
 
+	pr_debug("Validating P4D populate\n");
 	/*
 	 * This entry points to next level page table page.
 	 * Hence this must not qualify as p4d_bad().
@@ -441,6 +462,7 @@ static void __init pgd_clear_tests(struct mm_struct *mm, pgd_t *pgdp)
 	if (mm_p4d_folded(mm))
 		return;
 
+	pr_debug("Validating PGD clear\n");
 	pgd = __pgd(pgd_val(pgd) | RANDOM_ORVALUE);
 	WRITE_ONCE(*pgdp, pgd);
 	pgd_clear(pgdp);
@@ -456,6 +478,7 @@ static void __init pgd_populate_tests(struct mm_struct *mm, pgd_t *pgdp,
 	if (mm_p4d_folded(mm))
 		return;
 
+	pr_debug("Validating PGD populate\n");
 	/*
 	 * This entry points to next level page table page.
 	 * Hence this must not qualify as pgd_bad().
@@ -484,6 +507,7 @@ static void __init pte_clear_tests(struct mm_struct *mm, pte_t *ptep,
 {
 	pte_t pte = READ_ONCE(*ptep);
 
+	pr_debug("Validating PTE clear\n");
 	pte = __pte(pte_val(pte) | RANDOM_ORVALUE);
 	set_pte_at(mm, vaddr, ptep, pte);
 	barrier();
@@ -496,6 +520,7 @@ static void __init pmd_clear_tests(struct mm_struct *mm, pmd_t *pmdp)
 {
 	pmd_t pmd = READ_ONCE(*pmdp);
 
+	pr_debug("Validating PMD clear\n");
 	pmd = __pmd(pmd_val(pmd) | RANDOM_ORVALUE);
 	WRITE_ONCE(*pmdp, pmd);
 	pmd_clear(pmdp);
@@ -508,6 +533,7 @@ static void __init pmd_populate_tests(struct mm_struct *mm, pmd_t *pmdp,
 {
 	pmd_t pmd;
 
+	pr_debug("Validating PMD populate\n");
 	/*
 	 * This entry points to next level page table page.
 	 * Hence this must not qualify as pmd_bad().
@@ -525,6 +551,7 @@ static void __init pte_special_tests(unsigned long pfn, pgprot_t prot)
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL))
 		return;
 
+	pr_debug("Validating PTE special\n");
 	WARN_ON(!pte_special(pte_mkspecial(pte)));
 }
 
@@ -535,6 +562,7 @@ static void __init pte_protnone_tests(unsigned long pfn, pgprot_t prot)
 	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
 		return;
 
+	pr_debug("Validating PTE protnone\n");
 	WARN_ON(!pte_protnone(pte));
 	WARN_ON(!pte_present(pte));
 }
@@ -547,6 +575,7 @@ static void __init pmd_protnone_tests(unsigned long pfn, pgprot_t prot)
 	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
 		return;
 
+	pr_debug("Validating PMD protnone\n");
 	WARN_ON(!pmd_protnone(pmd));
 	WARN_ON(!pmd_present(pmd));
 }
@@ -559,6 +588,7 @@ static void __init pte_devmap_tests(unsigned long pfn, pgprot_t prot)
 {
 	pte_t pte = pfn_pte(pfn, prot);
 
+	pr_debug("Validating PTE devmap\n");
 	WARN_ON(!pte_devmap(pte_mkdevmap(pte)));
 }
 
@@ -567,6 +597,7 @@ static void __init pmd_devmap_tests(unsigned long pfn, pgprot_t prot)
 {
 	pmd_t pmd = pfn_pmd(pfn, prot);
 
+	pr_debug("Validating PMD devmap\n");
 	WARN_ON(!pmd_devmap(pmd_mkdevmap(pmd)));
 }
 
@@ -575,6 +606,7 @@ static void __init pud_devmap_tests(unsigned long pfn, pgprot_t prot)
 {
 	pud_t pud = pfn_pud(pfn, prot);
 
+	pr_debug("Validating PUD devmap\n");
 	WARN_ON(!pud_devmap(pud_mkdevmap(pud)));
 }
 #else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
@@ -597,6 +629,7 @@ static void __init pte_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
 	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY))
 		return;
 
+	pr_debug("Validating PTE soft dirty\n");
 	WARN_ON(!pte_soft_dirty(pte_mksoft_dirty(pte)));
 	WARN_ON(pte_soft_dirty(pte_clear_soft_dirty(pte)));
 }
@@ -608,6 +641,7 @@ static void __init pte_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
 	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY))
 		return;
 
+	pr_debug("Validating PTE swap soft dirty\n");
 	WARN_ON(!pte_swp_soft_dirty(pte_swp_mksoft_dirty(pte)));
 	WARN_ON(pte_swp_soft_dirty(pte_swp_clear_soft_dirty(pte)));
 }
@@ -620,6 +654,7 @@ static void __init pmd_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
 	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY))
 		return;
 
+	pr_debug("Validating PMD soft dirty\n");
 	WARN_ON(!pmd_soft_dirty(pmd_mksoft_dirty(pmd)));
 	WARN_ON(pmd_soft_dirty(pmd_clear_soft_dirty(pmd)));
 }
@@ -632,6 +667,7 @@ static void __init pmd_swap_soft_dirty_tests(unsigned long pfn, pgprot_t prot)
 		!IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION))
 		return;
 
+	pr_debug("Validating PMD swap soft dirty\n");
 	WARN_ON(!pmd_swp_soft_dirty(pmd_swp_mksoft_dirty(pmd)));
 	WARN_ON(pmd_swp_soft_dirty(pmd_swp_clear_soft_dirty(pmd)));
 }
@@ -647,6 +683,7 @@ static void __init pte_swap_tests(unsigned long pfn, pgprot_t prot)
 	swp_entry_t swp;
 	pte_t pte;
 
+	pr_debug("Validating PTE swap\n");
 	pte = pfn_pte(pfn, prot);
 	swp = __pte_to_swp_entry(pte);
 	pte = __swp_entry_to_pte(swp);
@@ -659,6 +696,7 @@ static void __init pmd_swap_tests(unsigned long pfn, pgprot_t prot)
 	swp_entry_t swp;
 	pmd_t pmd;
 
+	pr_debug("Validating PMD swap\n");
 	pmd = pfn_pmd(pfn, prot);
 	swp = __pmd_to_swp_entry(pmd);
 	pmd = __swp_entry_to_pmd(swp);
@@ -675,6 +713,8 @@ static void __init swap_migration_tests(void)
 
 	if (!IS_ENABLED(CONFIG_MIGRATION))
 		return;
+
+	pr_debug("Validating swap migration\n");
 	/*
 	 * swap_migration_tests() requires a dedicated page as it needs to
 	 * be locked before creating a migration entry from it. Locking the
@@ -714,6 +754,7 @@ static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot)
 	struct page *page;
 	pte_t pte;
 
+	pr_debug("Validating HugeTLB basic\n");
 	/*
 	 * Accessing the page associated with the pfn is safe here,
 	 * as it was previously derived from a real kernel symbol.
@@ -741,6 +782,7 @@ static void __init hugetlb_advanced_tests(struct mm_struct *mm,
 	pte_t pte = READ_ONCE(*ptep);
 	unsigned long paddr = (__pfn_to_phys(pfn) | RANDOM_ORVALUE) & PMD_MASK;
 
+	pr_debug("Validating HugeTLB advanced\n");
 	pte = pte_mkhuge(mk_pte(pfn_to_page(PHYS_PFN(paddr)), prot));
 	set_huge_pte_at(mm, vaddr, ptep, pte);
 	barrier();
@@ -791,6 +833,7 @@ static void __init pmd_thp_tests(unsigned long pfn, pgprot_t prot)
 	if (!has_transparent_hugepage())
 		return;
 
+	pr_debug("Validating PMD based THP\n");
 	/*
 	 * pmd_trans_huge() and pmd_present() must return positive after
 	 * MMU invalidation with pmd_mkinvalid(). This behavior is an
@@ -819,6 +862,7 @@ static void __init pud_thp_tests(unsigned long pfn, pgprot_t prot)
 	if (!has_transparent_hugepage())
 		return;
 
+	pr_debug("Validating PUD based THP\n");
 	pud = pfn_pud(pfn, prot);
 	WARN_ON(!pud_trans_huge(pud_mkhuge(pud)));
 
-- 
2.20.1



* [PATCH V3 4/4] Documentation/mm: Add descriptions for arch page table helpers
From: Anshuman Khandual @ 2020-06-15  3:37 UTC
  To: linux-mm
  Cc: linux-doc, Benjamin Herrenschmidt, Heiko Carstens,
	Paul Mackerras, H. Peter Anvin, linux-riscv, Will Deacon,
	linux-arch, linux-s390, Jonathan Corbet, Michael Ellerman, x86,
	Mike Rapoport, Christian Borntraeger, Ingo Molnar,
	linux-arm-kernel, ziy, Catalin Marinas, linux-snps-arc,
	Vasily Gorbik, Anshuman Khandual, Borislav Petkov, Paul Walmsley,
	Kirill A . Shutemov, Thomas Gleixner, gerald.schaefer,
	christophe.leroy, Vineet Gupta, linux-kernel, Palmer Dabbelt,
	Andrew Morton, linuxppc-dev

This adds a description file for all arch page table helpers which is in
sync with the semantics being tested via CONFIG_DEBUG_VM_PGTABLE. Any
future changes, either to these descriptions or to the debug test, should
keep the two in sync.

Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-riscv@lists.infradead.org
Cc: x86@kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Mike Rapoport <rppt@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 Documentation/vm/arch_pgtable_helpers.rst | 258 ++++++++++++++++++++++
 mm/debug_vm_pgtable.c                     |   6 +
 2 files changed, 264 insertions(+)
 create mode 100644 Documentation/vm/arch_pgtable_helpers.rst

diff --git a/Documentation/vm/arch_pgtable_helpers.rst b/Documentation/vm/arch_pgtable_helpers.rst
new file mode 100644
index 000000000000..cd7609b05446
--- /dev/null
+++ b/Documentation/vm/arch_pgtable_helpers.rst
@@ -0,0 +1,258 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+.. _arch_page_table_helpers:
+
+===============================
+Architecture Page Table Helpers
+===============================
+
+Generic MM expects architectures (with MMU) to provide helpers to create, access
+and modify page table entries at various levels for different memory functions.
+These page table helpers need to conform to common semantics across platforms.
+The following tables describe the expected semantics, which can also be tested
+during boot via the CONFIG_DEBUG_VM_PGTABLE option. Any future changes here or
+in the debug test need to keep both in sync.
+
+======================
+PTE Page Table Helpers
+======================
+
+--------------------------------------------------------------------------------
+| pte_same                  | Tests whether both PTE entries are the same      |
+--------------------------------------------------------------------------------
+| pte_bad                   | Tests a non-table mapped PTE                     |
+--------------------------------------------------------------------------------
+| pte_present               | Tests a valid mapped PTE                         |
+--------------------------------------------------------------------------------
+| pte_young                 | Tests a young PTE                                |
+--------------------------------------------------------------------------------
+| pte_dirty                 | Tests a dirty PTE                                |
+--------------------------------------------------------------------------------
+| pte_write                 | Tests a writable PTE                             |
+--------------------------------------------------------------------------------
+| pte_special               | Tests a special PTE                              |
+--------------------------------------------------------------------------------
+| pte_protnone              | Tests a PROT_NONE PTE                            |
+--------------------------------------------------------------------------------
+| pte_devmap                | Tests a ZONE_DEVICE mapped PTE                   |
+--------------------------------------------------------------------------------
+| pte_soft_dirty            | Tests a soft dirty PTE                           |
+--------------------------------------------------------------------------------
+| pte_swp_soft_dirty        | Tests a soft dirty swapped PTE                   |
+--------------------------------------------------------------------------------
+| pte_mkyoung               | Creates a young PTE                              |
+--------------------------------------------------------------------------------
+| pte_mkold                 | Creates an old PTE                               |
+--------------------------------------------------------------------------------
+| pte_mkdirty               | Creates a dirty PTE                              |
+--------------------------------------------------------------------------------
+| pte_mkclean               | Creates a clean PTE                              |
+--------------------------------------------------------------------------------
+| pte_mkwrite               | Creates a writable PTE                           |
+--------------------------------------------------------------------------------
+| pte_wrprotect             | Creates a write protected PTE                    |
+--------------------------------------------------------------------------------
+| pte_mkspecial             | Creates a special PTE                            |
+--------------------------------------------------------------------------------
+| pte_mkdevmap              | Creates a ZONE_DEVICE mapped PTE                 |
+--------------------------------------------------------------------------------
+| pte_mksoft_dirty          | Creates a soft dirty PTE                         |
+--------------------------------------------------------------------------------
+| pte_clear_soft_dirty      | Clears a soft dirty PTE                          |
+--------------------------------------------------------------------------------
+| pte_swp_mksoft_dirty      | Creates a soft dirty swapped PTE                 |
+--------------------------------------------------------------------------------
+| pte_swp_clear_soft_dirty  | Clears a soft dirty swapped PTE                  |
+--------------------------------------------------------------------------------
+| pte_mknotpresent          | Invalidates a mapped PTE                         |
+--------------------------------------------------------------------------------
+| ptep_get_and_clear        | Clears a PTE                                     |
+--------------------------------------------------------------------------------
+| ptep_get_and_clear_full   | Clears a PTE                                     |
+--------------------------------------------------------------------------------
+| ptep_test_and_clear_young | Clears young from a PTE                          |
+--------------------------------------------------------------------------------
+| ptep_set_wrprotect        | Converts into a write protected PTE              |
+--------------------------------------------------------------------------------
+| ptep_set_access_flags     | Converts into a more permissive PTE              |
+--------------------------------------------------------------------------------
+
+======================
+PMD Page Table Helpers
+======================
+
+--------------------------------------------------------------------------------
+| pmd_same                  | Tests whether both PMD entries are the same      |
+--------------------------------------------------------------------------------
+| pmd_bad                   | Tests a non-table mapped PMD                     |
+--------------------------------------------------------------------------------
+| pmd_leaf                  | Tests a leaf mapped PMD                          |
+--------------------------------------------------------------------------------
+| pmd_huge                  | Tests a HugeTLB mapped PMD                       |
+--------------------------------------------------------------------------------
+| pmd_trans_huge            | Tests a Transparent Huge Page (THP) at PMD       |
+--------------------------------------------------------------------------------
+| pmd_present               | Tests a valid mapped PMD                         |
+--------------------------------------------------------------------------------
+| pmd_young                 | Tests a young PMD                                |
+--------------------------------------------------------------------------------
+| pmd_dirty                 | Tests a dirty PMD                                |
+--------------------------------------------------------------------------------
+| pmd_write                 | Tests a writable PMD                             |
+--------------------------------------------------------------------------------
+| pmd_special               | Tests a special PMD                              |
+--------------------------------------------------------------------------------
+| pmd_protnone              | Tests a PROT_NONE PMD                            |
+--------------------------------------------------------------------------------
+| pmd_devmap                | Tests a ZONE_DEVICE mapped PMD                   |
+--------------------------------------------------------------------------------
+| pmd_soft_dirty            | Tests a soft dirty PMD                           |
+--------------------------------------------------------------------------------
+| pmd_swp_soft_dirty        | Tests a soft dirty swapped PMD                   |
+--------------------------------------------------------------------------------
+| pmd_mkyoung               | Creates a young PMD                              |
+--------------------------------------------------------------------------------
+| pmd_mkold                 | Creates an old PMD                               |
+--------------------------------------------------------------------------------
+| pmd_mkdirty               | Creates a dirty PMD                              |
+--------------------------------------------------------------------------------
+| pmd_mkclean               | Creates a clean PMD                              |
+--------------------------------------------------------------------------------
+| pmd_mkwrite               | Creates a writable PMD                           |
+--------------------------------------------------------------------------------
+| pmd_wrprotect             | Creates a write protected PMD                    |
+--------------------------------------------------------------------------------
+| pmd_mkspecial             | Creates a special PMD                            |
+--------------------------------------------------------------------------------
+| pmd_mkdevmap              | Creates a ZONE_DEVICE mapped PMD                 |
+--------------------------------------------------------------------------------
+| pmd_mksoft_dirty          | Creates a soft dirty PMD                         |
+--------------------------------------------------------------------------------
+| pmd_clear_soft_dirty      | Clears a soft dirty PMD                          |
+--------------------------------------------------------------------------------
+| pmd_swp_mksoft_dirty      | Creates a soft dirty swapped PMD                 |
+--------------------------------------------------------------------------------
+| pmd_swp_clear_soft_dirty  | Clears a soft dirty swapped PMD                  |
+--------------------------------------------------------------------------------
+| pmd_mkinvalid             | Invalidates a mapped PMD [1]                     |
+--------------------------------------------------------------------------------
+| pmd_set_huge              | Creates a PMD huge mapping                       |
+--------------------------------------------------------------------------------
+| pmd_clear_huge            | Clears a PMD huge mapping                        |
+--------------------------------------------------------------------------------
+| pmdp_get_and_clear        | Clears a PMD                                     |
+--------------------------------------------------------------------------------
+| pmdp_get_and_clear_full   | Clears a PMD                                     |
+--------------------------------------------------------------------------------
+| pmdp_test_and_clear_young | Clears young from a PMD                          |
+--------------------------------------------------------------------------------
+| pmdp_set_wrprotect        | Converts into a write protected PMD              |
+--------------------------------------------------------------------------------
+| pmdp_set_access_flags     | Converts into a more permissive PMD              |
+--------------------------------------------------------------------------------
+
+======================
+PUD Page Table Helpers
+======================
+
+--------------------------------------------------------------------------------
+| pud_same                  | Tests whether both PUD entries are the same      |
+--------------------------------------------------------------------------------
+| pud_bad                   | Tests a non-table mapped PUD                     |
+--------------------------------------------------------------------------------
+| pud_leaf                  | Tests a leaf mapped PUD                          |
+--------------------------------------------------------------------------------
+| pud_huge                  | Tests a HugeTLB mapped PUD                       |
+--------------------------------------------------------------------------------
+| pud_trans_huge            | Tests a Transparent Huge Page (THP) at PUD       |
+--------------------------------------------------------------------------------
+| pud_present               | Tests a valid mapped PUD                         |
+--------------------------------------------------------------------------------
+| pud_young                 | Tests a young PUD                                |
+--------------------------------------------------------------------------------
+| pud_dirty                 | Tests a dirty PUD                                |
+--------------------------------------------------------------------------------
+| pud_write                 | Tests a writable PUD                             |
+--------------------------------------------------------------------------------
+| pud_devmap                | Tests a ZONE_DEVICE mapped PUD                   |
+--------------------------------------------------------------------------------
+| pud_mkyoung               | Creates a young PUD                              |
+--------------------------------------------------------------------------------
+| pud_mkold                 | Creates an old PUD                               |
+--------------------------------------------------------------------------------
+| pud_mkdirty               | Creates a dirty PUD                              |
+--------------------------------------------------------------------------------
+| pud_mkclean               | Creates a clean PUD                              |
+--------------------------------------------------------------------------------
+| pud_mkwrite               | Creates a writable PUD                           |
+--------------------------------------------------------------------------------
+| pud_wrprotect             | Creates a write protected PUD                    |
+--------------------------------------------------------------------------------
+| pud_mkdevmap              | Creates a ZONE_DEVICE mapped PUD                 |
+--------------------------------------------------------------------------------
+| pud_mkinvalid             | Invalidates a mapped PUD [1]                     |
+--------------------------------------------------------------------------------
+| pud_set_huge              | Creates a PUD huge mapping                       |
+--------------------------------------------------------------------------------
+| pud_clear_huge            | Clears a PUD huge mapping                        |
+--------------------------------------------------------------------------------
+| pudp_get_and_clear        | Clears a PUD                                     |
+--------------------------------------------------------------------------------
+| pudp_get_and_clear_full   | Clears a PUD                                     |
+--------------------------------------------------------------------------------
+| pudp_test_and_clear_young | Clears young from a PUD                          |
+--------------------------------------------------------------------------------
+| pudp_set_wrprotect        | Converts into a write protected PUD              |
+--------------------------------------------------------------------------------
+| pudp_set_access_flags     | Converts into a more permissive PUD              |
+--------------------------------------------------------------------------------
+
+==========================
+HugeTLB Page Table Helpers
+==========================
+
+--------------------------------------------------------------------------------
+| pte_huge                  | Tests a HugeTLB                                  |
+--------------------------------------------------------------------------------
+| pte_mkhuge                | Creates a HugeTLB                                |
+--------------------------------------------------------------------------------
+| huge_pte_dirty            | Tests a dirty HugeTLB                            |
+--------------------------------------------------------------------------------
+| huge_pte_write            | Tests a writable HugeTLB                         |
+--------------------------------------------------------------------------------
+| huge_pte_mkdirty          | Creates a dirty HugeTLB                          |
+--------------------------------------------------------------------------------
+| huge_pte_mkwrite          | Creates a writable HugeTLB                       |
+--------------------------------------------------------------------------------
+| huge_pte_wrprotect        | Creates a write protected HugeTLB                |
+--------------------------------------------------------------------------------
+| huge_ptep_get_and_clear   | Clears a HugeTLB                                 |
+--------------------------------------------------------------------------------
+| huge_ptep_set_wrprotect   | Converts into a write protected HugeTLB          |
+--------------------------------------------------------------------------------
+| huge_ptep_set_access_flags| Converts into a more permissive HugeTLB          |
+--------------------------------------------------------------------------------
+
+========================
+SWAP Page Table Helpers
+========================
+
+--------------------------------------------------------------------------------
+| __pte_to_swp_entry        | Creates a swapped entry (arch) from a mapped PTE |
+--------------------------------------------------------------------------------
+| __swp_entry_to_pte        | Creates a mapped PTE from a swapped entry (arch) |
+--------------------------------------------------------------------------------
+| __pmd_to_swp_entry        | Creates a swapped entry (arch) from a mapped PMD |
+--------------------------------------------------------------------------------
+| __swp_entry_to_pmd        | Creates a mapped PMD from a swapped entry (arch) |
+--------------------------------------------------------------------------------
+| is_migration_entry        | Tests a migration (read or write) swapped entry  |
+--------------------------------------------------------------------------------
+| is_write_migration_entry  | Tests a write migration swapped entry            |
+--------------------------------------------------------------------------------
+| make_migration_entry_read | Converts into read migration swapped entry       |
+--------------------------------------------------------------------------------
+| make_migration_entry      | Creates a migration swapped entry (read or write)|
+--------------------------------------------------------------------------------
+
+[1] https://lore.kernel.org/linux-mm/20181017020930.GN30832@redhat.com/
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 536f3b1b3ad6..a2936938ed78 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -31,6 +31,12 @@
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
 
+/*
+ * Please refer to Documentation/vm/arch_pgtable_helpers.rst for the semantics
+ * expectations that are being validated here. All future changes here or in
+ * the documentation need to be kept in sync.
+ */
+
 #define VMFLAGS	(VM_READ|VM_WRITE|VM_EXEC)
 
 /*
-- 
2.20.1
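
To make the semantics tabulated above more concrete, here is a minimal,
hypothetical sketch in the style of the pxx_basic_tests() helpers from
mm/debug_vm_pgtable.c (the function name is made up and the usual
debug_vm_pgtable.c includes are assumed; this is not the actual test code):

/* Hypothetical example exercising a few of the documented PTE helpers */
static void __init pte_doc_example(unsigned long pfn, pgprot_t prot)
{
	pte_t pte = pfn_pte(pfn, prot);
	swp_entry_t swp;

	/* Dirty/clean and young/old round trips */
	WARN_ON(!pte_dirty(pte_mkdirty(pte)));
	WARN_ON(pte_dirty(pte_mkclean(pte_mkdirty(pte))));
	WARN_ON(!pte_young(pte_mkyoung(pte)));
	WARN_ON(pte_young(pte_mkold(pte_mkyoung(pte))));

	/* A write protected PTE must not test as writable */
	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte))));

	/* Swap round trip through the arch conversion macros keeps the pfn */
	swp = __pte_to_swp_entry(pte);
	pte = __swp_entry_to_pte(swp);
	WARN_ON(pfn != pte_pfn(pte));
}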



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH V3 4/4] Documentation/mm: Add descriptions for arch page table helpers
  2020-06-15  3:37 ` [PATCH V3 4/4] Documentation/mm: Add descriptions for arch page table helpers Anshuman Khandual
@ 2020-06-17 10:01   ` Mike Rapoport
  0 siblings, 0 replies; 19+ messages in thread
From: Mike Rapoport @ 2020-06-17 10:01 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: linux-doc, Benjamin Herrenschmidt, Heiko Carstens, linux-mm,
	Paul Mackerras, H. Peter Anvin, linux-riscv, Will Deacon,
	linux-arch, linux-s390, Jonathan Corbet, Michael Ellerman, x86,
	Christian Borntraeger, Ingo Molnar, linux-arm-kernel, ziy,
	Catalin Marinas, linux-snps-arc, Vasily Gorbik, Borislav Petkov,
	Paul Walmsley, Kirill A . Shutemov, Thomas Gleixner,
	gerald.schaefer, christophe.leroy, Vineet Gupta, linux-kernel,
	Palmer Dabbelt, Andrew Morton, linuxppc-dev

On Mon, Jun 15, 2020 at 09:07:57AM +0530, Anshuman Khandual wrote:
> This adds a specific description file for all arch page table helpers which
> is in sync with the semantics being tested via CONFIG_DEBUG_VM_PGTABLE. All
> future changes either to these descriptions here or the debug test should
> always remain in sync.
> 
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Mike Rapoport <rppt@linux.ibm.com>
> Cc: Vineet Gupta <vgupta@synopsys.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
> Cc: Vasily Gorbik <gor@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Kirill A. Shutemov <kirill@shutemov.name>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: linux-snps-arc@lists.infradead.org
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux-s390@vger.kernel.org
> Cc: linux-riscv@lists.infradead.org
> Cc: x86@kernel.org
> Cc: linux-arch@vger.kernel.org
> Cc: linux-mm@kvack.org
> Cc: linux-doc@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Suggested-by: Mike Rapoport <rppt@kernel.org>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
>  Documentation/vm/arch_pgtable_helpers.rst | 258 ++++++++++++++++++++++
>  mm/debug_vm_pgtable.c                     |   6 +
>  2 files changed, 264 insertions(+)
>  create mode 100644 Documentation/vm/arch_pgtable_helpers.rst

Acked-by: Mike Rapoport <rppt@linux.ibm.com>



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests
  2020-06-15  3:37 [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests Anshuman Khandual
                   ` (3 preceding siblings ...)
  2020-06-15  3:37 ` [PATCH V3 4/4] Documentation/mm: Add descriptions for arch page table helpers Anshuman Khandual
@ 2020-06-24  3:13 ` Anshuman Khandual
  2020-06-24 11:05   ` Alexander Gordeev
                     ` (2 more replies)
  4 siblings, 3 replies; 19+ messages in thread
From: Anshuman Khandual @ 2020-06-24  3:13 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-doc, Benjamin Herrenschmidt, Heiko Carstens,
	Paul Mackerras, H. Peter Anvin, linux-riscv, Will Deacon,
	linux-arch, linux-s390, Jonathan Corbet, Michael Ellerman, x86,
	Christophe Leroy, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, linux-arm-kernel, ziy, Catalin Marinas,
	linux-snps-arc, Vasily Gorbik, Borislav Petkov, Paul Walmsley,
	Kirill A . Shutemov, Thomas Gleixner, gerald.schaefer,
	christophe.leroy, Vineet Gupta, linux-kernel, Palmer Dabbelt,
	Andrew Morton, linuxppc-dev



On 06/15/2020 09:07 AM, Anshuman Khandual wrote:
> This series adds some more arch page table helper validation tests which
> are related to core and advanced memory functions. This also creates a
> documentation, enlisting expected semantics for all page table helpers as
> suggested by Mike Rapoport previously (https://lkml.org/lkml/2020/1/30/40).
> 
> There are many TRANSPARENT_HUGEPAGE and ARCH_HAS_TRANSPARENT_HUGEPAGE_PUD
> ifdefs scattered across the test. But consolidating all the fallback stubs
> is not very straight forward because ARCH_HAS_TRANSPARENT_HUGEPAGE_PUD is
> not explicitly dependent on ARCH_HAS_TRANSPARENT_HUGEPAGE.
> 
> Tested on arm64, x86 platforms but only build tested on all other enabled
> platforms through ARCH_HAS_DEBUG_VM_PGTABLE i.e powerpc, arc, s390. The
> following failure on arm64 still exists which was mentioned previously. It
> will be fixed with the upcoming THP migration on arm64 enablement series.
> 
> WARNING .... mm/debug_vm_pgtable.c:860 debug_vm_pgtable+0x940/0xa54
> WARN_ON(!pmd_present(pmd_mkinvalid(pmd_mkhuge(pmd))))
> 
> This series is based on v5.8-rc1.
> 
> Changes in V3:
> 
> - Replaced HAVE_ARCH_SOFT_DIRTY with MEM_SOFT_DIRTY
> - Added HAVE_ARCH_HUGE_VMAP checks in pxx_huge_tests() per Gerald
> - Updated documentation for pmd_thp_tests() per Zi Yan
> - Replaced READ_ONCE() with huge_ptep_get() per Gerald
> - Added pte_mkhuge() and masking with PMD_MASK per Gerald
> - Replaced pte_same() with holding pfn check in pxx_swap_tests()
> - Added documentation for all (#ifdef #else #endif) per Gerald
> - Updated pmd_protnone_tests() per Gerald
> - Updated HugeTLB PTE creation in hugetlb_advanced_tests() per Gerald
> - Replaced [pmd|pud]_mknotpresent() with [pmd|pud]_mkinvalid()
> - Added has_transparent_hugepage() check for PMD and PUD tests
> - Added a patch which debug prints all individual tests being executed
> - Updated documentation for renamed [pmd|pud]_mkinvalid() helpers

Hello Gerald/Christophe/Vineet,

It would be really great if you could give this series a quick test
on s390/ppc/arc platforms respectively. Thank you.

- Anshuman


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests
  2020-06-24  3:13 ` [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests Anshuman Khandual
@ 2020-06-24 11:05   ` Alexander Gordeev
  2020-06-24 11:48     ` Gerald Schaefer
  2020-06-27  8:17   ` Christophe Leroy
  2020-06-30  3:53   ` Anshuman Khandual
  2 siblings, 1 reply; 19+ messages in thread
From: Alexander Gordeev @ 2020-06-24 11:05 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: linux-doc, Benjamin Herrenschmidt, Heiko Carstens, linux-mm,
	Paul Mackerras, H. Peter Anvin, linux-riscv, Will Deacon,
	linux-arch, linux-s390, Jonathan Corbet, Michael Ellerman, x86,
	Christophe Leroy, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, linux-arm-kernel, ziy, Catalin Marinas,
	linux-snps-arc, Vasily Gorbik, Borislav Petkov, Paul Walmsley,
	Kirill A . Shutemov, Thomas Gleixner, gerald.schaefer,
	christophe.leroy, Vineet Gupta, linux-kernel, Palmer Dabbelt,
	Andrew Morton, linuxppc-dev

On Wed, Jun 24, 2020 at 08:43:10AM +0530, Anshuman Khandual wrote:

[...]

> Hello Gerald/Christophe/Vineet,
> 
> It would be really great if you could give this series a quick test
> on s390/ppc/arc platforms respectively. Thank you.

That worked for me with the default and debug s390 configurations.
Would you like to try with some particular options or combinations
of the options?

> - Anshuman


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests
  2020-06-24 11:05   ` Alexander Gordeev
@ 2020-06-24 11:48     ` Gerald Schaefer
  2020-06-24 14:40       ` Alexander Gordeev
  0 siblings, 1 reply; 19+ messages in thread
From: Gerald Schaefer @ 2020-06-24 11:48 UTC (permalink / raw)
  To: Alexander Gordeev, Anshuman Khandual
  Cc: linux-doc, Benjamin Herrenschmidt, Heiko Carstens, linux-mm,
	Paul Mackerras, H. Peter Anvin, linux-riscv, Will Deacon,
	linux-arch, linux-s390, Jonathan Corbet, Michael Ellerman, x86,
	Christophe Leroy, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, ziy, Catalin Marinas, linux-snps-arc, Vasily Gorbik,
	Borislav Petkov, Paul Walmsley, Kirill A . Shutemov,
	Thomas Gleixner, linux-arm-kernel, christophe.leroy,
	Vineet Gupta, linux-kernel, Palmer Dabbelt, Andrew Morton,
	linuxppc-dev

On Wed, 24 Jun 2020 13:05:39 +0200
Alexander Gordeev <agordeev@linux.ibm.com> wrote:

> On Wed, Jun 24, 2020 at 08:43:10AM +0530, Anshuman Khandual wrote:
> 
> [...]
> 
> > Hello Gerald/Christophe/Vineet,
> > 
> > It would be really great if you could give this series a quick test
> > on s390/ppc/arc platforms respectively. Thank you.
> 
> That worked for me with the default and debug s390 configurations.
> Would you like to try with some particular options or combinations
> of the options?

It will be enabled automatically on all archs that set
ARCH_HAS_DEBUG_VM_PGTABLE, which we do for s390 unconditionally.
Also, DEBUG_VM has to be set, which we have only in the debug config.
So only the s390 debug config will have it enabled. You can check
dmesg for "debug_vm_pgtable" to see when / where it was run, and
whether it triggered any warnings.
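
For reference, the test itself is a boot-time initcall in
mm/debug_vm_pgtable.c, wired up roughly along these lines (sketch only;
the exact pr_info() message is an assumption):

/* Sketch of how the self test hooks into boot, not the verbatim code */
static int __init debug_vm_pgtable(void)
{
	pr_info("Validating architecture page table helpers\n");
	/* pxx_basic_tests() / pxx_advanced_tests() run here */
	return 0;
}
late_initcall(debug_vm_pgtable);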

I also checked with the v3 series, and it works fine for s390.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests
  2020-06-24 11:48     ` Gerald Schaefer
@ 2020-06-24 14:40       ` Alexander Gordeev
  2020-06-29  8:32         ` Anshuman Khandual
  0 siblings, 1 reply; 19+ messages in thread
From: Alexander Gordeev @ 2020-06-24 14:40 UTC (permalink / raw)
  To: Gerald Schaefer
  Cc: linux-doc, Benjamin Herrenschmidt, Heiko Carstens, linux-mm,
	Paul Mackerras, H. Peter Anvin, linux-riscv, Will Deacon,
	linux-arch, linux-s390, Jonathan Corbet, Michael Ellerman, x86,
	Christophe Leroy, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, ziy, Catalin Marinas, linux-snps-arc, Vasily Gorbik,
	Anshuman Khandual, Borislav Petkov, Paul Walmsley,
	Kirill A . Shutemov, Thomas Gleixner, linux-arm-kernel,
	christophe.leroy, Vineet Gupta, linux-kernel, Palmer Dabbelt,
	Andrew Morton, linuxppc-dev

On Wed, Jun 24, 2020 at 01:48:08PM +0200, Gerald Schaefer wrote:
> On Wed, 24 Jun 2020 13:05:39 +0200
> Alexander Gordeev <agordeev@linux.ibm.com> wrote:
> 
> > On Wed, Jun 24, 2020 at 08:43:10AM +0530, Anshuman Khandual wrote:
> > 
> > [...]
> > 
> > > Hello Gerald/Christophe/Vineet,
> > > 
> > > It would be really great if you could give this series a quick test
> > > on s390/ppc/arc platforms respectively. Thank you.
> > 
> > That worked for me with the default and debug s390 configurations.
> > Would you like to try with some particular options or combinations
> > of the options?
> 
> It will be enabled automatically on all archs that set
> ARCH_HAS_DEBUG_VM_PGTABLE, which we do for s390 unconditionally.
> Also, DEBUG_VM has to be set, which we have only in the debug config.
> So only the s390 debug config will have it enabled. You can check
> dmesg for "debug_vm_pgtable" to see when / where it was run, and
> whether it triggered any warnings.

Yes, that is what I did ;)

I should have been clearer. I wonder whether Anshuman has in
mind other options that might make sense to set or unset, to
check how it goes with non-standard configurations.

> I also checked with the v3 series, and it works fine for s390.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V3 2/4] mm/debug_vm_pgtable: Add tests validating advanced arch page table helpers
  2020-06-15  3:37 ` [PATCH V3 2/4] mm/debug_vm_pgtable: Add tests validating advanced arch page table helpers Anshuman Khandual
@ 2020-06-27  7:18   ` Christophe Leroy
  2020-06-29  8:09     ` Anshuman Khandual
  2020-06-27  7:26   ` Christophe Leroy
  1 sibling, 1 reply; 19+ messages in thread
From: Christophe Leroy @ 2020-06-27  7:18 UTC (permalink / raw)
  To: Anshuman Khandual, linux-mm
  Cc: Benjamin Herrenschmidt, Heiko Carstens, Paul Mackerras,
	H. Peter Anvin, linux-riscv, Will Deacon, linux-arch, linux-s390,
	Michael Ellerman, x86, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, linux-arm-kernel, ziy, Catalin Marinas,
	linux-snps-arc, Vasily Gorbik, Borislav Petkov, Paul Walmsley,
	Kirill A . Shutemov, Thomas Gleixner, gerald.schaefer,
	christophe.leroy, Vineet Gupta, linux-kernel, Palmer Dabbelt,
	Andrew Morton, linuxppc-dev



On 15/06/2020 at 05:37, Anshuman Khandual wrote:
> This adds new tests validating for these following arch advanced page table
> helpers. These tests create and test specific mapping types at various page
> table levels.
> 
> 1. pxxp_set_wrprotect()
> 2. pxxp_get_and_clear()
> 3. pxxp_set_access_flags()
> 4. pxxp_get_and_clear_full()
> 5. pxxp_test_and_clear_young()
> 6. pxx_leaf()
> 7. pxx_set_huge()
> 8. pxx_(clear|mk)_savedwrite()
> 9. huge_pxxp_xxx()
> 
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Mike Rapoport <rppt@linux.ibm.com>
> Cc: Vineet Gupta <vgupta@synopsys.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
> Cc: Vasily Gorbik <gor@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Kirill A. Shutemov <kirill@shutemov.name>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: linux-snps-arc@lists.infradead.org
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux-s390@vger.kernel.org
> Cc: linux-riscv@lists.infradead.org
> Cc: x86@kernel.org
> Cc: linux-mm@kvack.org
> Cc: linux-arch@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
>   mm/debug_vm_pgtable.c | 306 ++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 306 insertions(+)
> 
> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
> index ffa163d4c63c..e3f9f8317a98 100644
> --- a/mm/debug_vm_pgtable.c
> +++ b/mm/debug_vm_pgtable.c
> @@ -21,6 +21,7 @@
>   #include <linux/module.h>
>   #include <linux/pfn_t.h>
>   #include <linux/printk.h>
> +#include <linux/pgtable.h>
>   #include <linux/random.h>
>   #include <linux/spinlock.h>
>   #include <linux/swap.h>
> @@ -28,6 +29,7 @@
>   #include <linux/start_kernel.h>
>   #include <linux/sched/mm.h>
>   #include <asm/pgalloc.h>
> +#include <asm/tlbflush.h>
>   
>   #define VMFLAGS	(VM_READ|VM_WRITE|VM_EXEC)
>   
> @@ -55,6 +57,54 @@ static void __init pte_basic_tests(unsigned long pfn, pgprot_t prot)
>   	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte))));
>   }
>   
> +static void __init pte_advanced_tests(struct mm_struct *mm,
> +			struct vm_area_struct *vma, pte_t *ptep,
> +			unsigned long pfn, unsigned long vaddr, pgprot_t prot)
> +{
> +	pte_t pte = pfn_pte(pfn, prot);
> +
> +	pte = pfn_pte(pfn, prot);
> +	set_pte_at(mm, vaddr, ptep, pte);
> +	ptep_set_wrprotect(mm, vaddr, ptep);
> +	pte = READ_ONCE(*ptep);

same

> +	WARN_ON(pte_write(pte));
> +
> +	pte = pfn_pte(pfn, prot);
> +	set_pte_at(mm, vaddr, ptep, pte);
> +	ptep_get_and_clear(mm, vaddr, ptep);
> +	pte = READ_ONCE(*ptep);

same

> +	WARN_ON(!pte_none(pte));
> +
> +	pte = pfn_pte(pfn, prot);
> +	pte = pte_wrprotect(pte);
> +	pte = pte_mkclean(pte);
> +	set_pte_at(mm, vaddr, ptep, pte);
> +	pte = pte_mkwrite(pte);
> +	pte = pte_mkdirty(pte);
> +	ptep_set_access_flags(vma, vaddr, ptep, pte, 1);
> +	pte = READ_ONCE(*ptep);

same

> +	WARN_ON(!(pte_write(pte) && pte_dirty(pte)));
> +
> +	pte = pfn_pte(pfn, prot);
> +	set_pte_at(mm, vaddr, ptep, pte);
> +	ptep_get_and_clear_full(mm, vaddr, ptep, 1);
> +	pte = READ_ONCE(*ptep);

same

> +	WARN_ON(!pte_none(pte));
> +
> +	pte = pte_mkyoung(pte);
> +	set_pte_at(mm, vaddr, ptep, pte);
> +	ptep_test_and_clear_young(vma, vaddr, ptep);
> +	pte = READ_ONCE(*ptep);

same

> +	WARN_ON(pte_young(pte));
> +}
> +
> +static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
> +{
> +	pte_t pte = pfn_pte(pfn, prot);
> +
> +	WARN_ON(!pte_savedwrite(pte_mk_savedwrite(pte_clear_savedwrite(pte))));
> +	WARN_ON(pte_savedwrite(pte_clear_savedwrite(pte_mk_savedwrite(pte))));
> +}
>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>   static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
>   {
> @@ -77,6 +127,89 @@ static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
>   	WARN_ON(!pmd_bad(pmd_mkhuge(pmd)));
>   }
>   
> +static void __init pmd_advanced_tests(struct mm_struct *mm,
> +		struct vm_area_struct *vma, pmd_t *pmdp,
> +		unsigned long pfn, unsigned long vaddr, pgprot_t prot)
> +{
> +	pmd_t pmd = pfn_pmd(pfn, prot);
> +
> +	if (!has_transparent_hugepage())
> +		return;
> +
> +	/* Align the address wrt HPAGE_PMD_SIZE */
> +	vaddr = (vaddr & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
> +
> +	pmd = pfn_pmd(pfn, prot);
> +	set_pmd_at(mm, vaddr, pmdp, pmd);
> +	pmdp_set_wrprotect(mm, vaddr, pmdp);
> +	pmd = READ_ONCE(*pmdp);
> +	WARN_ON(pmd_write(pmd));
> +
> +	pmd = pfn_pmd(pfn, prot);
> +	set_pmd_at(mm, vaddr, pmdp, pmd);
> +	pmdp_huge_get_and_clear(mm, vaddr, pmdp);
> +	pmd = READ_ONCE(*pmdp);
> +	WARN_ON(!pmd_none(pmd));
> +
> +	pmd = pfn_pmd(pfn, prot);
> +	pmd = pmd_wrprotect(pmd);
> +	pmd = pmd_mkclean(pmd);
> +	set_pmd_at(mm, vaddr, pmdp, pmd);
> +	pmd = pmd_mkwrite(pmd);
> +	pmd = pmd_mkdirty(pmd);
> +	pmdp_set_access_flags(vma, vaddr, pmdp, pmd, 1);
> +	pmd = READ_ONCE(*pmdp);
> +	WARN_ON(!(pmd_write(pmd) && pmd_dirty(pmd)));
> +
> +	pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
> +	set_pmd_at(mm, vaddr, pmdp, pmd);
> +	pmdp_huge_get_and_clear_full(vma, vaddr, pmdp, 1);
> +	pmd = READ_ONCE(*pmdp);
> +	WARN_ON(!pmd_none(pmd));
> +
> +	pmd = pmd_mkyoung(pmd);
> +	set_pmd_at(mm, vaddr, pmdp, pmd);
> +	pmdp_test_and_clear_young(vma, vaddr, pmdp);
> +	pmd = READ_ONCE(*pmdp);
> +	WARN_ON(pmd_young(pmd));
> +}
> +
> +static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
> +{
> +	pmd_t pmd = pfn_pmd(pfn, prot);
> +
> +	/*
> +	 * PMD based THP is a leaf entry.
> +	 */
> +	pmd = pmd_mkhuge(pmd);
> +	WARN_ON(!pmd_leaf(pmd));
> +}
> +
> +static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
> +{
> +	pmd_t pmd;
> +
> +	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
> +		return;
> +	/*
> +	 * X86 defined pmd_set_huge() verifies that the given
> +	 * PMD is not a populated non-leaf entry.
> +	 */
> +	WRITE_ONCE(*pmdp, __pmd(0));
> +	WARN_ON(!pmd_set_huge(pmdp, __pfn_to_phys(pfn), prot));
> +	WARN_ON(!pmd_clear_huge(pmdp));
> +	pmd = READ_ONCE(*pmdp);
> +	WARN_ON(!pmd_none(pmd));
> +}
> +
> +static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
> +{
> +	pmd_t pmd = pfn_pmd(pfn, prot);
> +
> +	WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
> +	WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
> +}
> +
>   #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
>   {
> @@ -100,12 +233,115 @@ static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
>   	 */
>   	WARN_ON(!pud_bad(pud_mkhuge(pud)));
>   }
> +
> +static void pud_advanced_tests(struct mm_struct *mm,
> +		struct vm_area_struct *vma, pud_t *pudp,
> +		unsigned long pfn, unsigned long vaddr, pgprot_t prot)
> +{
> +	pud_t pud = pfn_pud(pfn, prot);
> +
> +	if (!has_transparent_hugepage())
> +		return;
> +
> +	/* Align the address wrt HPAGE_PUD_SIZE */
> +	vaddr = (vaddr & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE;
> +
> +	set_pud_at(mm, vaddr, pudp, pud);
> +	pudp_set_wrprotect(mm, vaddr, pudp);
> +	pud = READ_ONCE(*pudp);
> +	WARN_ON(pud_write(pud));
> +
> +#ifndef __PAGETABLE_PMD_FOLDED
> +	pud = pfn_pud(pfn, prot);
> +	set_pud_at(mm, vaddr, pudp, pud);
> +	pudp_huge_get_and_clear(mm, vaddr, pudp);
> +	pud = READ_ONCE(*pudp);
> +	WARN_ON(!pud_none(pud));
> +
> +	pud = pfn_pud(pfn, prot);
> +	set_pud_at(mm, vaddr, pudp, pud);
> +	pudp_huge_get_and_clear_full(mm, vaddr, pudp, 1);
> +	pud = READ_ONCE(*pudp);
> +	WARN_ON(!pud_none(pud));
> +#endif /* __PAGETABLE_PMD_FOLDED */
> +	pud = pfn_pud(pfn, prot);
> +	pud = pud_wrprotect(pud);
> +	pud = pud_mkclean(pud);
> +	set_pud_at(mm, vaddr, pudp, pud);
> +	pud = pud_mkwrite(pud);
> +	pud = pud_mkdirty(pud);
> +	pudp_set_access_flags(vma, vaddr, pudp, pud, 1);
> +	pud = READ_ONCE(*pudp);
> +	WARN_ON(!(pud_write(pud) && pud_dirty(pud)));
> +
> +	pud = pud_mkyoung(pud);
> +	set_pud_at(mm, vaddr, pudp, pud);
> +	pudp_test_and_clear_young(vma, vaddr, pudp);
> +	pud = READ_ONCE(*pudp);
> +	WARN_ON(pud_young(pud));
> +}
> +
> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
> +{
> +	pud_t pud = pfn_pud(pfn, prot);
> +
> +	/*
> +	 * PUD based THP is a leaf entry.
> +	 */
> +	pud = pud_mkhuge(pud);
> +	WARN_ON(!pud_leaf(pud));
> +}
> +
> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
> +{
> +	pud_t pud;
> +
> +	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
> +		return;
> +	/*
> +	 * X86 defined pud_set_huge() verifies that the given
> +	 * PUD is not a populated non-leaf entry.
> +	 */
> +	WRITE_ONCE(*pudp, __pud(0));
> +	WARN_ON(!pud_set_huge(pudp, __pfn_to_phys(pfn), prot));
> +	WARN_ON(!pud_clear_huge(pudp));
> +	pud = READ_ONCE(*pudp);
> +	WARN_ON(!pud_none(pud));
> +}
>   #else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
> +static void pud_advanced_tests(struct mm_struct *mm,
> +		struct vm_area_struct *vma, pud_t *pudp,
> +		unsigned long pfn, unsigned long vaddr, pgprot_t prot)
> +{
> +}
> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
> +{
> +}
>   #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>   #else  /* !CONFIG_TRANSPARENT_HUGEPAGE */
>   static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot) { }
>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
> +static void __init pmd_advanced_tests(struct mm_struct *mm,
> +		struct vm_area_struct *vma, pmd_t *pmdp,
> +		unsigned long pfn, unsigned long vaddr, pgprot_t prot)
> +{
> +}
> +static void __init pud_advanced_tests(struct mm_struct *mm,
> +		struct vm_area_struct *vma, pud_t *pudp,
> +		unsigned long pfn, unsigned long vaddr, pgprot_t prot)
> +{
> +}
> +static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot) { }
> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
> +static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
> +{
> +}
> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
> +{
> +}
> +static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot) { }
>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>   
>   static void __init p4d_basic_tests(unsigned long pfn, pgprot_t prot)
> @@ -495,8 +731,56 @@ static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot)
>   	WARN_ON(!pte_huge(pte_mkhuge(pte)));
>   #endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */
>   }
> +
> +static void __init hugetlb_advanced_tests(struct mm_struct *mm,
> +					  struct vm_area_struct *vma,
> +					  pte_t *ptep, unsigned long pfn,
> +					  unsigned long vaddr, pgprot_t prot)
> +{
> +	struct page *page = pfn_to_page(pfn);
> +	pte_t pte = READ_ONCE(*ptep);

Replace with ptep_get() to avoid a build failure on powerpc 8xx.
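
For example (sketch of the suggested change):

	/* generic accessor, avoids the direct pte_t dereference */
	pte_t pte = ptep_get(ptep);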

> +	unsigned long paddr = (__pfn_to_phys(pfn) | RANDOM_ORVALUE) & PMD_MASK;
> +
> +	pte = pte_mkhuge(mk_pte(pfn_to_page(PHYS_PFN(paddr)), prot));
> +	set_huge_pte_at(mm, vaddr, ptep, pte);
> +	barrier();
> +	WARN_ON(!pte_same(pte, huge_ptep_get(ptep)));
> +	huge_pte_clear(mm, vaddr, ptep, PMD_SIZE);
> +	pte = huge_ptep_get(ptep);
> +	WARN_ON(!huge_pte_none(pte));
> +
> +	pte = mk_huge_pte(page, prot);
> +	set_huge_pte_at(mm, vaddr, ptep, pte);
> +	barrier();
> +	huge_ptep_set_wrprotect(mm, vaddr, ptep);
> +	pte = huge_ptep_get(ptep);
> +	WARN_ON(huge_pte_write(pte));
> +
> +	pte = mk_huge_pte(page, prot);
> +	set_huge_pte_at(mm, vaddr, ptep, pte);
> +	barrier();
> +	huge_ptep_get_and_clear(mm, vaddr, ptep);
> +	pte = huge_ptep_get(ptep);
> +	WARN_ON(!huge_pte_none(pte));
> +
> +	pte = mk_huge_pte(page, prot);
> +	pte = huge_pte_wrprotect(pte);
> +	set_huge_pte_at(mm, vaddr, ptep, pte);
> +	barrier();
> +	pte = huge_pte_mkwrite(pte);
> +	pte = huge_pte_mkdirty(pte);
> +	huge_ptep_set_access_flags(vma, vaddr, ptep, pte, 1);
> +	pte = huge_ptep_get(ptep);
> +	WARN_ON(!(huge_pte_write(pte) && huge_pte_dirty(pte)));
> +}
>   #else  /* !CONFIG_HUGETLB_PAGE */
>   static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot) { }
> +static void __init hugetlb_advanced_tests(struct mm_struct *mm,
> +					  struct vm_area_struct *vma,
> +					  pte_t *ptep, unsigned long pfn,
> +					  unsigned long vaddr, pgprot_t prot)
> +{
> +}
>   #endif /* CONFIG_HUGETLB_PAGE */
>   
>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> @@ -568,6 +852,7 @@ static unsigned long __init get_random_vaddr(void)
>   
>   static int __init debug_vm_pgtable(void)
>   {
> +	struct vm_area_struct *vma;
>   	struct mm_struct *mm;
>   	pgd_t *pgdp;
>   	p4d_t *p4dp, *saved_p4dp;
> @@ -596,6 +881,12 @@ static int __init debug_vm_pgtable(void)
>   	 */
>   	protnone = __P000;
>   
> +	vma = vm_area_alloc(mm);
> +	if (!vma) {
> +		pr_err("vma allocation failed\n");
> +		return 1;
> +	}
> +
>   	/*
>   	 * PFN for mapping at PTE level is determined from a standard kernel
>   	 * text symbol. But pfns for higher page table levels are derived by
> @@ -644,6 +935,20 @@ static int __init debug_vm_pgtable(void)
>   	p4d_clear_tests(mm, p4dp);
>   	pgd_clear_tests(mm, pgdp);
>   
> +	pte_advanced_tests(mm, vma, ptep, pte_aligned, vaddr, prot);
> +	pmd_advanced_tests(mm, vma, pmdp, pmd_aligned, vaddr, prot);
> +	pud_advanced_tests(mm, vma, pudp, pud_aligned, vaddr, prot);
> +	hugetlb_advanced_tests(mm, vma, ptep, pte_aligned, vaddr, prot);
> +
> +	pmd_leaf_tests(pmd_aligned, prot);
> +	pud_leaf_tests(pud_aligned, prot);
> +
> +	pmd_huge_tests(pmdp, pmd_aligned, prot);
> +	pud_huge_tests(pudp, pud_aligned, prot);
> +
> +	pte_savedwrite_tests(pte_aligned, prot);
> +	pmd_savedwrite_tests(pmd_aligned, prot);
> +
>   	pte_unmap_unlock(ptep, ptl);
>   
>   	pmd_populate_tests(mm, pmdp, saved_ptep);
> @@ -678,6 +983,7 @@ static int __init debug_vm_pgtable(void)
>   	pmd_free(mm, saved_pmdp);
>   	pte_free(mm, saved_ptep);
>   
> +	vm_area_free(vma);
>   	mm_dec_nr_puds(mm);
>   	mm_dec_nr_pmds(mm);
>   	mm_dec_nr_ptes(mm);
> 

Christophe


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH V3 2/4] mm/debug_vm_pgtable: Add tests validating advanced arch page table helpers
  2020-06-15  3:37 ` [PATCH V3 2/4] mm/debug_vm_pgtable: Add tests validating advanced arch page table helpers Anshuman Khandual
  2020-06-27  7:18   ` Christophe Leroy
@ 2020-06-27  7:26   ` Christophe Leroy
  2020-06-29  8:15     ` Anshuman Khandual
  1 sibling, 1 reply; 19+ messages in thread
From: Christophe Leroy @ 2020-06-27  7:26 UTC (permalink / raw)
  To: Anshuman Khandual, linux-mm
  Cc: Benjamin Herrenschmidt, Heiko Carstens, Paul Mackerras,
	H. Peter Anvin, linux-riscv, Will Deacon, linux-arch, linux-s390,
	Michael Ellerman, x86, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, gerald.schaefer, ziy, Catalin Marinas,
	linux-snps-arc, Vasily Gorbik, Borislav Petkov, Paul Walmsley,
	Kirill A . Shutemov, Thomas Gleixner, linux-arm-kernel,
	Vineet Gupta, linux-kernel, Palmer Dabbelt, Andrew Morton,
	linuxppc-dev



On 15/06/2020 at 05:37, Anshuman Khandual wrote:
> This adds new tests validating for these following arch advanced page table
> helpers. These tests create and test specific mapping types at various page
> table levels.
> 
> 1. pxxp_set_wrprotect()
> 2. pxxp_get_and_clear()
> 3. pxxp_set_access_flags()
> 4. pxxp_get_and_clear_full()
> 5. pxxp_test_and_clear_young()
> 6. pxx_leaf()
> 7. pxx_set_huge()
> 8. pxx_(clear|mk)_savedwrite()
> 9. huge_pxxp_xxx()
> 
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Mike Rapoport <rppt@linux.ibm.com>
> Cc: Vineet Gupta <vgupta@synopsys.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
> Cc: Vasily Gorbik <gor@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Kirill A. Shutemov <kirill@shutemov.name>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: linux-snps-arc@lists.infradead.org
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux-s390@vger.kernel.org
> Cc: linux-riscv@lists.infradead.org
> Cc: x86@kernel.org
> Cc: linux-mm@kvack.org
> Cc: linux-arch@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
>   mm/debug_vm_pgtable.c | 306 ++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 306 insertions(+)
> 
> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
> index ffa163d4c63c..e3f9f8317a98 100644
> --- a/mm/debug_vm_pgtable.c
> +++ b/mm/debug_vm_pgtable.c
> @@ -21,6 +21,7 @@
>   #include <linux/module.h>
>   #include <linux/pfn_t.h>
>   #include <linux/printk.h>
> +#include <linux/pgtable.h>
>   #include <linux/random.h>
>   #include <linux/spinlock.h>
>   #include <linux/swap.h>
> @@ -28,6 +29,7 @@
>   #include <linux/start_kernel.h>
>   #include <linux/sched/mm.h>
>   #include <asm/pgalloc.h>
> +#include <asm/tlbflush.h>
>   
>   #define VMFLAGS	(VM_READ|VM_WRITE|VM_EXEC)
>   
> @@ -55,6 +57,54 @@ static void __init pte_basic_tests(unsigned long pfn, pgprot_t prot)
>   	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte))));
>   }
>   
> +static void __init pte_advanced_tests(struct mm_struct *mm,
> +			struct vm_area_struct *vma, pte_t *ptep,
> +			unsigned long pfn, unsigned long vaddr, pgprot_t prot)

Align args properly.
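
For example (sketch, with the continuation lines aligned to the opening
parenthesis per kernel coding style):

static void __init pte_advanced_tests(struct mm_struct *mm,
                                      struct vm_area_struct *vma, pte_t *ptep,
                                      unsigned long pfn, unsigned long vaddr,
                                      pgprot_t prot)
{
	/* ... */
}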

> +{
> +	pte_t pte = pfn_pte(pfn, prot);
> +
> +	pte = pfn_pte(pfn, prot);
> +	set_pte_at(mm, vaddr, ptep, pte);
> +	ptep_set_wrprotect(mm, vaddr, ptep);
> +	pte = READ_ONCE(*ptep);
> +	WARN_ON(pte_write(pte));
> +
> +	pte = pfn_pte(pfn, prot);
> +	set_pte_at(mm, vaddr, ptep, pte);
> +	ptep_get_and_clear(mm, vaddr, ptep);
> +	pte = READ_ONCE(*ptep);
> +	WARN_ON(!pte_none(pte));
> +
> +	pte = pfn_pte(pfn, prot);
> +	pte = pte_wrprotect(pte);
> +	pte = pte_mkclean(pte);
> +	set_pte_at(mm, vaddr, ptep, pte);
> +	pte = pte_mkwrite(pte);
> +	pte = pte_mkdirty(pte);
> +	ptep_set_access_flags(vma, vaddr, ptep, pte, 1);
> +	pte = READ_ONCE(*ptep);
> +	WARN_ON(!(pte_write(pte) && pte_dirty(pte)));
> +
> +	pte = pfn_pte(pfn, prot);
> +	set_pte_at(mm, vaddr, ptep, pte);
> +	ptep_get_and_clear_full(mm, vaddr, ptep, 1);
> +	pte = READ_ONCE(*ptep);
> +	WARN_ON(!pte_none(pte));
> +
> +	pte = pte_mkyoung(pte);
> +	set_pte_at(mm, vaddr, ptep, pte);
> +	ptep_test_and_clear_young(vma, vaddr, ptep);
> +	pte = READ_ONCE(*ptep);
> +	WARN_ON(pte_young(pte));
> +}
> +
> +static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
> +{
> +	pte_t pte = pfn_pte(pfn, prot);
> +
> +	WARN_ON(!pte_savedwrite(pte_mk_savedwrite(pte_clear_savedwrite(pte))));
> +	WARN_ON(pte_savedwrite(pte_clear_savedwrite(pte_mk_savedwrite(pte))));
> +}
>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>   static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
>   {
> @@ -77,6 +127,89 @@ static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
>   	WARN_ON(!pmd_bad(pmd_mkhuge(pmd)));
>   }
>   
> +static void __init pmd_advanced_tests(struct mm_struct *mm,
> +		struct vm_area_struct *vma, pmd_t *pmdp,
> +		unsigned long pfn, unsigned long vaddr, pgprot_t prot)

Align args properly

> +{
> +	pmd_t pmd = pfn_pmd(pfn, prot);
> +
> +	if (!has_transparent_hugepage())
> +		return;
> +
> +	/* Align the address wrt HPAGE_PMD_SIZE */
> +	vaddr = (vaddr & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
> +
> +	pmd = pfn_pmd(pfn, prot);
> +	set_pmd_at(mm, vaddr, pmdp, pmd);
> +	pmdp_set_wrprotect(mm, vaddr, pmdp);
> +	pmd = READ_ONCE(*pmdp);
> +	WARN_ON(pmd_write(pmd));
> +
> +	pmd = pfn_pmd(pfn, prot);
> +	set_pmd_at(mm, vaddr, pmdp, pmd);
> +	pmdp_huge_get_and_clear(mm, vaddr, pmdp);
> +	pmd = READ_ONCE(*pmdp);
> +	WARN_ON(!pmd_none(pmd));
> +
> +	pmd = pfn_pmd(pfn, prot);
> +	pmd = pmd_wrprotect(pmd);
> +	pmd = pmd_mkclean(pmd);
> +	set_pmd_at(mm, vaddr, pmdp, pmd);
> +	pmd = pmd_mkwrite(pmd);
> +	pmd = pmd_mkdirty(pmd);
> +	pmdp_set_access_flags(vma, vaddr, pmdp, pmd, 1);
> +	pmd = READ_ONCE(*pmdp);
> +	WARN_ON(!(pmd_write(pmd) && pmd_dirty(pmd)));
> +
> +	pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
> +	set_pmd_at(mm, vaddr, pmdp, pmd);
> +	pmdp_huge_get_and_clear_full(vma, vaddr, pmdp, 1);
> +	pmd = READ_ONCE(*pmdp);
> +	WARN_ON(!pmd_none(pmd));
> +
> +	pmd = pmd_mkyoung(pmd);
> +	set_pmd_at(mm, vaddr, pmdp, pmd);
> +	pmdp_test_and_clear_young(vma, vaddr, pmdp);
> +	pmd = READ_ONCE(*pmdp);
> +	WARN_ON(pmd_young(pmd));
> +}
> +
> +static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
> +{
> +	pmd_t pmd = pfn_pmd(pfn, prot);
> +
> +	/*
> +	 * PMD based THP is a leaf entry.
> +	 */
> +	pmd = pmd_mkhuge(pmd);
> +	WARN_ON(!pmd_leaf(pmd));
> +}
> +
> +static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
> +{
> +	pmd_t pmd;
> +
> +	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
> +		return;
> +	/*
> +	 * X86 defined pmd_set_huge() verifies that the given
> +	 * PMD is not a populated non-leaf entry.
> +	 */
> +	WRITE_ONCE(*pmdp, __pmd(0));
> +	WARN_ON(!pmd_set_huge(pmdp, __pfn_to_phys(pfn), prot));
> +	WARN_ON(!pmd_clear_huge(pmdp));
> +	pmd = READ_ONCE(*pmdp);
> +	WARN_ON(!pmd_none(pmd));
> +}
> +
> +static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
> +{
> +	pmd_t pmd = pfn_pmd(pfn, prot);
> +
> +	WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
> +	WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
> +}
> +
>   #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
>   {
> @@ -100,12 +233,115 @@ static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
>   	 */
>   	WARN_ON(!pud_bad(pud_mkhuge(pud)));
>   }
> +
> +static void pud_advanced_tests(struct mm_struct *mm,
> +		struct vm_area_struct *vma, pud_t *pudp,
> +		unsigned long pfn, unsigned long vaddr, pgprot_t prot)

Align args properly

> +{
> +	pud_t pud = pfn_pud(pfn, prot);
> +
> +	if (!has_transparent_hugepage())
> +		return;
> +
> +	/* Align the address wrt HPAGE_PUD_SIZE */
> +	vaddr = (vaddr & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE;
> +
> +	set_pud_at(mm, vaddr, pudp, pud);
> +	pudp_set_wrprotect(mm, vaddr, pudp);
> +	pud = READ_ONCE(*pudp);
> +	WARN_ON(pud_write(pud));
> +
> +#ifndef __PAGETABLE_PMD_FOLDED
> +	pud = pfn_pud(pfn, prot);
> +	set_pud_at(mm, vaddr, pudp, pud);
> +	pudp_huge_get_and_clear(mm, vaddr, pudp);
> +	pud = READ_ONCE(*pudp);
> +	WARN_ON(!pud_none(pud));
> +
> +	pud = pfn_pud(pfn, prot);
> +	set_pud_at(mm, vaddr, pudp, pud);
> +	pudp_huge_get_and_clear_full(mm, vaddr, pudp, 1);
> +	pud = READ_ONCE(*pudp);
> +	WARN_ON(!pud_none(pud));
> +#endif /* __PAGETABLE_PMD_FOLDED */
> +	pud = pfn_pud(pfn, prot);
> +	pud = pud_wrprotect(pud);
> +	pud = pud_mkclean(pud);
> +	set_pud_at(mm, vaddr, pudp, pud);
> +	pud = pud_mkwrite(pud);
> +	pud = pud_mkdirty(pud);
> +	pudp_set_access_flags(vma, vaddr, pudp, pud, 1);
> +	pud = READ_ONCE(*pudp);
> +	WARN_ON(!(pud_write(pud) && pud_dirty(pud)));
> +
> +	pud = pud_mkyoung(pud);
> +	set_pud_at(mm, vaddr, pudp, pud);
> +	pudp_test_and_clear_young(vma, vaddr, pudp);
> +	pud = READ_ONCE(*pudp);
> +	WARN_ON(pud_young(pud));
> +}
> +
> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
> +{
> +	pud_t pud = pfn_pud(pfn, prot);
> +
> +	/*
> +	 * PUD based THP is a leaf entry.
> +	 */
> +	pud = pud_mkhuge(pud);
> +	WARN_ON(!pud_leaf(pud));
> +}
> +
> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
> +{
> +	pud_t pud;
> +
> +	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
> +		return;
> +	/*
> +	 * X86 defined pud_set_huge() verifies that the given
> +	 * PUD is not a populated non-leaf entry.
> +	 */
> +	WRITE_ONCE(*pudp, __pud(0));
> +	WARN_ON(!pud_set_huge(pudp, __pfn_to_phys(pfn), prot));
> +	WARN_ON(!pud_clear_huge(pudp));
> +	pud = READ_ONCE(*pudp);
> +	WARN_ON(!pud_none(pud));
> +}
>   #else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
> +static void pud_advanced_tests(struct mm_struct *mm,
> +		struct vm_area_struct *vma, pud_t *pudp,
> +		unsigned long pfn, unsigned long vaddr, pgprot_t prot)

Align args properly

> +{
> +}
> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
> +{
> +}
>   #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>   #else  /* !CONFIG_TRANSPARENT_HUGEPAGE */
>   static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot) { }
>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
> +static void __init pmd_advanced_tests(struct mm_struct *mm,
> +		struct vm_area_struct *vma, pmd_t *pmdp,
> +		unsigned long pfn, unsigned long vaddr, pgprot_t prot)

Align args properly

> +{
> +}
> +static void __init pud_advanced_tests(struct mm_struct *mm,
> +		struct vm_area_struct *vma, pud_t *pudp,
> +		unsigned long pfn, unsigned long vaddr, pgprot_t prot)

Align args properly

> +{
> +}
> +static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot) { }
> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
> +static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
> +{
> +}
> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
> +{
> +}
> +static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot) { }
>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>   
>   static void __init p4d_basic_tests(unsigned long pfn, pgprot_t prot)
> @@ -495,8 +731,56 @@ static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot)
>   	WARN_ON(!pte_huge(pte_mkhuge(pte)));
>   #endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */
>   }
> +
> +static void __init hugetlb_advanced_tests(struct mm_struct *mm,
> +					  struct vm_area_struct *vma,
> +					  pte_t *ptep, unsigned long pfn,
> +					  unsigned long vaddr, pgprot_t prot)
> +{
> +	struct page *page = pfn_to_page(pfn);
> +	pte_t pte = READ_ONCE(*ptep);
> +	unsigned long paddr = (__pfn_to_phys(pfn) | RANDOM_ORVALUE) & PMD_MASK;
> +
> +	pte = pte_mkhuge(mk_pte(pfn_to_page(PHYS_PFN(paddr)), prot));
> +	set_huge_pte_at(mm, vaddr, ptep, pte);
> +	barrier();
> +	WARN_ON(!pte_same(pte, huge_ptep_get(ptep)));
> +	huge_pte_clear(mm, vaddr, ptep, PMD_SIZE);
> +	pte = huge_ptep_get(ptep);
> +	WARN_ON(!huge_pte_none(pte));
> +
> +	pte = mk_huge_pte(page, prot);
> +	set_huge_pte_at(mm, vaddr, ptep, pte);
> +	barrier();
> +	huge_ptep_set_wrprotect(mm, vaddr, ptep);
> +	pte = huge_ptep_get(ptep);
> +	WARN_ON(huge_pte_write(pte));
> +
> +	pte = mk_huge_pte(page, prot);
> +	set_huge_pte_at(mm, vaddr, ptep, pte);
> +	barrier();
> +	huge_ptep_get_and_clear(mm, vaddr, ptep);
> +	pte = huge_ptep_get(ptep);
> +	WARN_ON(!huge_pte_none(pte));
> +
> +	pte = mk_huge_pte(page, prot);
> +	pte = huge_pte_wrprotect(pte);
> +	set_huge_pte_at(mm, vaddr, ptep, pte);
> +	barrier();
> +	pte = huge_pte_mkwrite(pte);
> +	pte = huge_pte_mkdirty(pte);
> +	huge_ptep_set_access_flags(vma, vaddr, ptep, pte, 1);
> +	pte = huge_ptep_get(ptep);
> +	WARN_ON(!(huge_pte_write(pte) && huge_pte_dirty(pte)));
> +}
>   #else  /* !CONFIG_HUGETLB_PAGE */
>   static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot) { }
> +static void __init hugetlb_advanced_tests(struct mm_struct *mm,
> +					  struct vm_area_struct *vma,
> +					  pte_t *ptep, unsigned long pfn,
> +					  unsigned long vaddr, pgprot_t prot)
> +{
> +}
>   #endif /* CONFIG_HUGETLB_PAGE */
>   
>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> @@ -568,6 +852,7 @@ static unsigned long __init get_random_vaddr(void)
>   
>   static int __init debug_vm_pgtable(void)
>   {
> +	struct vm_area_struct *vma;
>   	struct mm_struct *mm;
>   	pgd_t *pgdp;
>   	p4d_t *p4dp, *saved_p4dp;
> @@ -596,6 +881,12 @@ static int __init debug_vm_pgtable(void)
>   	 */
>   	protnone = __P000;
>   
> +	vma = vm_area_alloc(mm);
> +	if (!vma) {
> +		pr_err("vma allocation failed\n");
> +		return 1;
> +	}
> +
>   	/*
>   	 * PFN for mapping at PTE level is determined from a standard kernel
>   	 * text symbol. But pfns for higher page table levels are derived by
> @@ -644,6 +935,20 @@ static int __init debug_vm_pgtable(void)
>   	p4d_clear_tests(mm, p4dp);
>   	pgd_clear_tests(mm, pgdp);
>   
> +	pte_advanced_tests(mm, vma, ptep, pte_aligned, vaddr, prot);
> +	pmd_advanced_tests(mm, vma, pmdp, pmd_aligned, vaddr, prot);
> +	pud_advanced_tests(mm, vma, pudp, pud_aligned, vaddr, prot);
> +	hugetlb_advanced_tests(mm, vma, ptep, pte_aligned, vaddr, prot);
> +
> +	pmd_leaf_tests(pmd_aligned, prot);
> +	pud_leaf_tests(pud_aligned, prot);
> +
> +	pmd_huge_tests(pmdp, pmd_aligned, prot);
> +	pud_huge_tests(pudp, pud_aligned, prot);
> +
> +	pte_savedwrite_tests(pte_aligned, prot);
> +	pmd_savedwrite_tests(pmd_aligned, prot);
> +
>   	pte_unmap_unlock(ptep, ptl);
>   
>   	pmd_populate_tests(mm, pmdp, saved_ptep);
> @@ -678,6 +983,7 @@ static int __init debug_vm_pgtable(void)
>   	pmd_free(mm, saved_pmdp);
>   	pte_free(mm, saved_ptep);
>   
> +	vm_area_free(vma);
>   	mm_dec_nr_puds(mm);
>   	mm_dec_nr_pmds(mm);
>   	mm_dec_nr_ptes(mm);
> 

Christophe


* Re: [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests
  2020-06-24  3:13 ` [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests Anshuman Khandual
  2020-06-24 11:05   ` Alexander Gordeev
@ 2020-06-27  8:17   ` Christophe Leroy
  2020-06-30  3:53   ` Anshuman Khandual
  2 siblings, 0 replies; 19+ messages in thread
From: Christophe Leroy @ 2020-06-27  8:17 UTC (permalink / raw)
  To: Anshuman Khandual, linux-mm
  Cc: linux-doc, Benjamin Herrenschmidt, Heiko Carstens,
	Paul Mackerras, H. Peter Anvin, linux-riscv, Will Deacon,
	linux-arch, linux-s390, Jonathan Corbet, Michael Ellerman, x86,
	Mike Rapoport, Christian Borntraeger, Ingo Molnar,
	linux-arm-kernel, ziy, Catalin Marinas, linux-snps-arc,
	Vasily Gorbik, Borislav Petkov, Paul Walmsley,
	Kirill A . Shutemov, Thomas Gleixner, gerald.schaefer,
	christophe.leroy, Vineet Gupta, linux-kernel, Palmer Dabbelt,
	Andrew Morton, linuxppc-dev



On 24/06/2020 at 05:13, Anshuman Khandual wrote:
> 
> 
> On 06/15/2020 09:07 AM, Anshuman Khandual wrote:
>> This series adds some more arch page table helper validation tests which
>> are related to core and advanced memory functions. This also creates a
>> documentation, enlisting expected semantics for all page table helpers as
>> suggested by Mike Rapoport previously (https://lkml.org/lkml/2020/1/30/40).
>>
>> There are many TRANSPARENT_HUGEPAGE and ARCH_HAS_TRANSPARENT_HUGEPAGE_PUD
>> ifdefs scattered across the test. But consolidating all the fallback stubs
>> is not very straight forward because ARCH_HAS_TRANSPARENT_HUGEPAGE_PUD is
>> not explicitly dependent on ARCH_HAS_TRANSPARENT_HUGEPAGE.
>>
>> Tested on arm64, x86 platforms but only build tested on all other enabled
>> platforms through ARCH_HAS_DEBUG_VM_PGTABLE i.e powerpc, arc, s390. The
>> following failure on arm64 still exists which was mentioned previously. It
>> will be fixed with the upcoming THP migration on arm64 enablement series.
>>
>> WARNING .... mm/debug_vm_pgtable.c:860 debug_vm_pgtable+0x940/0xa54
>> WARN_ON(!pmd_present(pmd_mkinvalid(pmd_mkhuge(pmd))))
>>
>> This series is based on v5.8-rc1.
>>
>> Changes in V3:
>>
>> - Replaced HAVE_ARCH_SOFT_DIRTY with MEM_SOFT_DIRTY
>> - Added HAVE_ARCH_HUGE_VMAP checks in pxx_huge_tests() per Gerald
>> - Updated documentation for pmd_thp_tests() per Zi Yan
>> - Replaced READ_ONCE() with huge_ptep_get() per Gerald
>> - Added pte_mkhuge() and masking with PMD_MASK per Gerald
>> - Replaced pte_same() with holding pfn check in pxx_swap_tests()
>> - Added documentation for all (#ifdef #else #endif) per Gerald
>> - Updated pmd_protnone_tests() per Gerald
>> - Updated HugeTLB PTE creation in hugetlb_advanced_tests() per Gerald
>> - Replaced [pmd|pud]_mknotpresent() with [pmd|pud]_mkinvalid()
>> - Added has_transparent_hugepage() check for PMD and PUD tests
>> - Added a patch which debug prints all individual tests being executed
>> - Updated documentation for renamed [pmd|pud]_mkinvalid() helpers
> 
> Hello Gerald/Christophe/Vineet,
> 
> It would be really great if you could give this series a quick test
> on s390/ppc/arc platforms respectively. Thank you.
> 

Running ok on powerpc 8xx after fixing build failures.

Christophe


* Re: [PATCH V3 2/4] mm/debug_vm_pgtable: Add tests validating advanced arch page table helpers
  2020-06-27  7:18   ` Christophe Leroy
@ 2020-06-29  8:09     ` Anshuman Khandual
  0 siblings, 0 replies; 19+ messages in thread
From: Anshuman Khandual @ 2020-06-29  8:09 UTC (permalink / raw)
  To: Christophe Leroy, linux-mm
  Cc: Benjamin Herrenschmidt, Heiko Carstens, Paul Mackerras,
	H. Peter Anvin, linux-riscv, Will Deacon, linux-arch, linux-s390,
	Michael Ellerman, x86, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, linux-arm-kernel, ziy, Catalin Marinas,
	linux-snps-arc, Vasily Gorbik, Borislav Petkov, Paul Walmsley,
	Kirill A . Shutemov, Thomas Gleixner, gerald.schaefer,
	christophe.leroy, Vineet Gupta, linux-kernel, Palmer Dabbelt,
	Andrew Morton, linuxppc-dev


On 06/27/2020 12:48 PM, Christophe Leroy wrote:
> On 15/06/2020 at 05:37, Anshuman Khandual wrote:
>> This adds new tests validating for these following arch advanced page table
>> helpers. These tests create and test specific mapping types at various page
>> table levels.
>>
>> 1. pxxp_set_wrprotect()
>> 2. pxxp_get_and_clear()
>> 3. pxxp_set_access_flags()
>> 4. pxxp_get_and_clear_full()
>> 5. pxxp_test_and_clear_young()
>> 6. pxx_leaf()
>> 7. pxx_set_huge()
>> 8. pxx_(clear|mk)_savedwrite()
>> 9. huge_pxxp_xxx()
>>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Mike Rapoport <rppt@linux.ibm.com>
>> Cc: Vineet Gupta <vgupta@synopsys.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
>> Cc: Vasily Gorbik <gor@linux.ibm.com>
>> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Borislav Petkov <bp@alien8.de>
>> Cc: "H. Peter Anvin" <hpa@zytor.com>
>> Cc: Kirill A. Shutemov <kirill@shutemov.name>
>> Cc: Paul Walmsley <paul.walmsley@sifive.com>
>> Cc: Palmer Dabbelt <palmer@dabbelt.com>
>> Cc: linux-snps-arc@lists.infradead.org
>> Cc: linux-arm-kernel@lists.infradead.org
>> Cc: linuxppc-dev@lists.ozlabs.org
>> Cc: linux-s390@vger.kernel.org
>> Cc: linux-riscv@lists.infradead.org
>> Cc: x86@kernel.org
>> Cc: linux-mm@kvack.org
>> Cc: linux-arch@vger.kernel.org
>> Cc: linux-kernel@vger.kernel.org
>> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>>   mm/debug_vm_pgtable.c | 306 ++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 306 insertions(+)
>>
>> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
>> index ffa163d4c63c..e3f9f8317a98 100644
>> --- a/mm/debug_vm_pgtable.c
>> +++ b/mm/debug_vm_pgtable.c
>> @@ -21,6 +21,7 @@
>>   #include <linux/module.h>
>>   #include <linux/pfn_t.h>
>>   #include <linux/printk.h>
>> +#include <linux/pgtable.h>
>>   #include <linux/random.h>
>>   #include <linux/spinlock.h>
>>   #include <linux/swap.h>
>> @@ -28,6 +29,7 @@
>>   #include <linux/start_kernel.h>
>>   #include <linux/sched/mm.h>
>>   #include <asm/pgalloc.h>
>> +#include <asm/tlbflush.h>
>>     #define VMFLAGS    (VM_READ|VM_WRITE|VM_EXEC)
>>   @@ -55,6 +57,54 @@ static void __init pte_basic_tests(unsigned long pfn, pgprot_t prot)
>>       WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte))));
>>   }
>>   +static void __init pte_advanced_tests(struct mm_struct *mm,
>> +            struct vm_area_struct *vma, pte_t *ptep,
>> +            unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>> +{
>> +    pte_t pte = pfn_pte(pfn, prot);
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_set_wrprotect(mm, vaddr, ptep);
>> +    pte = READ_ONCE(*ptep);
> 
> same
> 
>> +    WARN_ON(pte_write(pte));
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_get_and_clear(mm, vaddr, ptep);
>> +    pte = READ_ONCE(*ptep);
> 
> same
> 
>> +    WARN_ON(!pte_none(pte));
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    pte = pte_wrprotect(pte);
>> +    pte = pte_mkclean(pte);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    pte = pte_mkwrite(pte);
>> +    pte = pte_mkdirty(pte);
>> +    ptep_set_access_flags(vma, vaddr, ptep, pte, 1);
>> +    pte = READ_ONCE(*ptep);
> 
> same
> 
>> +    WARN_ON(!(pte_write(pte) && pte_dirty(pte)));
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_get_and_clear_full(mm, vaddr, ptep, 1);
>> +    pte = READ_ONCE(*ptep);
> 
> same
> 
>> +    WARN_ON(!pte_none(pte));
>> +
>> +    pte = pte_mkyoung(pte);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_test_and_clear_young(vma, vaddr, ptep);
>> +    pte = READ_ONCE(*ptep);
> 
> same
> 
>> +    WARN_ON(pte_young(pte));
>> +}
>> +
>> +static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pte_t pte = pfn_pte(pfn, prot);
>> +
>> +    WARN_ON(!pte_savedwrite(pte_mk_savedwrite(pte_clear_savedwrite(pte))));
>> +    WARN_ON(pte_savedwrite(pte_clear_savedwrite(pte_mk_savedwrite(pte))));
>> +}
>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>   static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
>>   {
>> @@ -77,6 +127,89 @@ static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
>>       WARN_ON(!pmd_bad(pmd_mkhuge(pmd)));
>>   }
>>   +static void __init pmd_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pmd_t *pmdp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>> +{
>> +    pmd_t pmd = pfn_pmd(pfn, prot);
>> +
>> +    if (!has_transparent_hugepage())
>> +        return;
>> +
>> +    /* Align the address wrt HPAGE_PMD_SIZE */
>> +    vaddr = (vaddr & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
>> +
>> +    pmd = pfn_pmd(pfn, prot);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_set_wrprotect(mm, vaddr, pmdp);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(pmd_write(pmd));
>> +
>> +    pmd = pfn_pmd(pfn, prot);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_huge_get_and_clear(mm, vaddr, pmdp);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!pmd_none(pmd));
>> +
>> +    pmd = pfn_pmd(pfn, prot);
>> +    pmd = pmd_wrprotect(pmd);
>> +    pmd = pmd_mkclean(pmd);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmd = pmd_mkwrite(pmd);
>> +    pmd = pmd_mkdirty(pmd);
>> +    pmdp_set_access_flags(vma, vaddr, pmdp, pmd, 1);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!(pmd_write(pmd) && pmd_dirty(pmd)));
>> +
>> +    pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_huge_get_and_clear_full(vma, vaddr, pmdp, 1);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!pmd_none(pmd));
>> +
>> +    pmd = pmd_mkyoung(pmd);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_test_and_clear_young(vma, vaddr, pmdp);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(pmd_young(pmd));
>> +}
>> +
>> +static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pmd_t pmd = pfn_pmd(pfn, prot);
>> +
>> +    /*
>> +     * PMD based THP is a leaf entry.
>> +     */
>> +    pmd = pmd_mkhuge(pmd);
>> +    WARN_ON(!pmd_leaf(pmd));
>> +}
>> +
>> +static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
>> +{
>> +    pmd_t pmd;
>> +
>> +    if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
>> +        return;
>> +    /*
>> +     * X86 defined pmd_set_huge() verifies that the given
>> +     * PMD is not a populated non-leaf entry.
>> +     */
>> +    WRITE_ONCE(*pmdp, __pmd(0));
>> +    WARN_ON(!pmd_set_huge(pmdp, __pfn_to_phys(pfn), prot));
>> +    WARN_ON(!pmd_clear_huge(pmdp));
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!pmd_none(pmd));
>> +}
>> +
>> +static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pmd_t pmd = pfn_pmd(pfn, prot);
>> +
>> +    WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
>> +    WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
>> +}
>> +
>>   #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
>>   {
>> @@ -100,12 +233,115 @@ static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
>>        */
>>       WARN_ON(!pud_bad(pud_mkhuge(pud)));
>>   }
>> +
>> +static void pud_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pud_t *pudp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>> +{
>> +    pud_t pud = pfn_pud(pfn, prot);
>> +
>> +    if (!has_transparent_hugepage())
>> +        return;
>> +
>> +    /* Align the address wrt HPAGE_PUD_SIZE */
>> +    vaddr = (vaddr & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE;
>> +
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_set_wrprotect(mm, vaddr, pudp);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(pud_write(pud));
>> +
>> +#ifndef __PAGETABLE_PMD_FOLDED
>> +    pud = pfn_pud(pfn, prot);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_huge_get_and_clear(mm, vaddr, pudp);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!pud_none(pud));
>> +
>> +    pud = pfn_pud(pfn, prot);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_huge_get_and_clear_full(mm, vaddr, pudp, 1);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!pud_none(pud));
>> +#endif /* __PAGETABLE_PMD_FOLDED */
>> +    pud = pfn_pud(pfn, prot);
>> +    pud = pud_wrprotect(pud);
>> +    pud = pud_mkclean(pud);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pud = pud_mkwrite(pud);
>> +    pud = pud_mkdirty(pud);
>> +    pudp_set_access_flags(vma, vaddr, pudp, pud, 1);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!(pud_write(pud) && pud_dirty(pud)));
>> +
>> +    pud = pud_mkyoung(pud);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_test_and_clear_young(vma, vaddr, pudp);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(pud_young(pud));
>> +}
>> +
>> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pud_t pud = pfn_pud(pfn, prot);
>> +
>> +    /*
>> +     * PUD based THP is a leaf entry.
>> +     */
>> +    pud = pud_mkhuge(pud);
>> +    WARN_ON(!pud_leaf(pud));
>> +}
>> +
>> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
>> +{
>> +    pud_t pud;
>> +
>> +    if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
>> +        return;
>> +    /*
>> +     * X86 defined pud_set_huge() verifies that the given
>> +     * PUD is not a populated non-leaf entry.
>> +     */
>> +    WRITE_ONCE(*pudp, __pud(0));
>> +    WARN_ON(!pud_set_huge(pudp, __pfn_to_phys(pfn), prot));
>> +    WARN_ON(!pud_clear_huge(pudp));
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!pud_none(pud));
>> +}
>>   #else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void pud_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pud_t *pudp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>> +{
>> +}
>> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
>> +{
>> +}
>>   #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>>   #else  /* !CONFIG_TRANSPARENT_HUGEPAGE */
>>   static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot) { }
>>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void __init pmd_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pmd_t *pmdp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>> +{
>> +}
>> +static void __init pud_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pud_t *pudp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>> +{
>> +}
>> +static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
>> +{
>> +}
>> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
>> +{
>> +}
>> +static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot) { }
>>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>     static void __init p4d_basic_tests(unsigned long pfn, pgprot_t prot)
>> @@ -495,8 +731,56 @@ static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot)
>>       WARN_ON(!pte_huge(pte_mkhuge(pte)));
>>   #endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */
>>   }
>> +
>> +static void __init hugetlb_advanced_tests(struct mm_struct *mm,
>> +                      struct vm_area_struct *vma,
>> +                      pte_t *ptep, unsigned long pfn,
>> +                      unsigned long vaddr, pgprot_t prot)
>> +{
>> +    struct page *page = pfn_to_page(pfn);
>> +    pte_t pte = READ_ONCE(*ptep);
> 
> Replace with ptep_get() to avoid the build failure on powerpc 8xx.

Sure, will replace all open-coded PTE pointer accesses with ptep_get().
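
For clarity, a quick sketch of what that replacement would look like in the
first pte_advanced_tests() hunk flagged above (illustrative only, the actual
change will be part of the next respin):

	pte = pfn_pte(pfn, prot);
	set_pte_at(mm, vaddr, ptep, pte);
	ptep_set_wrprotect(mm, vaddr, ptep);
	pte = ptep_get(ptep);	/* instead of READ_ONCE(*ptep) */
	WARN_ON(pte_write(pte));

The same substitution applies to the other READ_ONCE(*ptep) spots flagged in
this review.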


* Re: [PATCH V3 2/4] mm/debug_vm_pgtable: Add tests validating advanced arch page table helpers
  2020-06-27  7:26   ` Christophe Leroy
@ 2020-06-29  8:15     ` Anshuman Khandual
  0 siblings, 0 replies; 19+ messages in thread
From: Anshuman Khandual @ 2020-06-29  8:15 UTC (permalink / raw)
  To: Christophe Leroy, linux-mm
  Cc: Benjamin Herrenschmidt, Heiko Carstens, Paul Mackerras,
	H. Peter Anvin, linux-riscv, Will Deacon, linux-arch, linux-s390,
	Michael Ellerman, x86, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, gerald.schaefer, ziy, Catalin Marinas,
	linux-snps-arc, Vasily Gorbik, Borislav Petkov, Paul Walmsley,
	Kirill A . Shutemov, Thomas Gleixner, linux-arm-kernel,
	Vineet Gupta, linux-kernel, Palmer Dabbelt, Andrew Morton,
	linuxppc-dev



On 06/27/2020 12:56 PM, Christophe Leroy wrote:
> 
> 
> On 15/06/2020 at 05:37, Anshuman Khandual wrote:
>> This adds new tests validating for these following arch advanced page table
>> helpers. These tests create and test specific mapping types at various page
>> table levels.
>>
>> 1. pxxp_set_wrprotect()
>> 2. pxxp_get_and_clear()
>> 3. pxxp_set_access_flags()
>> 4. pxxp_get_and_clear_full()
>> 5. pxxp_test_and_clear_young()
>> 6. pxx_leaf()
>> 7. pxx_set_huge()
>> 8. pxx_(clear|mk)_savedwrite()
>> 9. huge_pxxp_xxx()
>>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Mike Rapoport <rppt@linux.ibm.com>
>> Cc: Vineet Gupta <vgupta@synopsys.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
>> Cc: Vasily Gorbik <gor@linux.ibm.com>
>> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Borislav Petkov <bp@alien8.de>
>> Cc: "H. Peter Anvin" <hpa@zytor.com>
>> Cc: Kirill A. Shutemov <kirill@shutemov.name>
>> Cc: Paul Walmsley <paul.walmsley@sifive.com>
>> Cc: Palmer Dabbelt <palmer@dabbelt.com>
>> Cc: linux-snps-arc@lists.infradead.org
>> Cc: linux-arm-kernel@lists.infradead.org
>> Cc: linuxppc-dev@lists.ozlabs.org
>> Cc: linux-s390@vger.kernel.org
>> Cc: linux-riscv@lists.infradead.org
>> Cc: x86@kernel.org
>> Cc: linux-mm@kvack.org
>> Cc: linux-arch@vger.kernel.org
>> Cc: linux-kernel@vger.kernel.org
>> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>>   mm/debug_vm_pgtable.c | 306 ++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 306 insertions(+)
>>
>> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
>> index ffa163d4c63c..e3f9f8317a98 100644
>> --- a/mm/debug_vm_pgtable.c
>> +++ b/mm/debug_vm_pgtable.c
>> @@ -21,6 +21,7 @@
>>   #include <linux/module.h>
>>   #include <linux/pfn_t.h>
>>   #include <linux/printk.h>
>> +#include <linux/pgtable.h>
>>   #include <linux/random.h>
>>   #include <linux/spinlock.h>
>>   #include <linux/swap.h>
>> @@ -28,6 +29,7 @@
>>   #include <linux/start_kernel.h>
>>   #include <linux/sched/mm.h>
>>   #include <asm/pgalloc.h>
>> +#include <asm/tlbflush.h>
>>     #define VMFLAGS    (VM_READ|VM_WRITE|VM_EXEC)
>>   @@ -55,6 +57,54 @@ static void __init pte_basic_tests(unsigned long pfn, pgprot_t prot)
>>       WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte))));
>>   }
>>   +static void __init pte_advanced_tests(struct mm_struct *mm,
>> +            struct vm_area_struct *vma, pte_t *ptep,
>> +            unsigned long pfn, unsigned long vaddr, pgprot_t prot)
> 
> Align args properly.
> 
>> +{
>> +    pte_t pte = pfn_pte(pfn, prot);
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_set_wrprotect(mm, vaddr, ptep);
>> +    pte = READ_ONCE(*ptep);
>> +    WARN_ON(pte_write(pte));
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_get_and_clear(mm, vaddr, ptep);
>> +    pte = READ_ONCE(*ptep);
>> +    WARN_ON(!pte_none(pte));
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    pte = pte_wrprotect(pte);
>> +    pte = pte_mkclean(pte);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    pte = pte_mkwrite(pte);
>> +    pte = pte_mkdirty(pte);
>> +    ptep_set_access_flags(vma, vaddr, ptep, pte, 1);
>> +    pte = READ_ONCE(*ptep);
>> +    WARN_ON(!(pte_write(pte) && pte_dirty(pte)));
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_get_and_clear_full(mm, vaddr, ptep, 1);
>> +    pte = READ_ONCE(*ptep);
>> +    WARN_ON(!pte_none(pte));
>> +
>> +    pte = pte_mkyoung(pte);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_test_and_clear_young(vma, vaddr, ptep);
>> +    pte = READ_ONCE(*ptep);
>> +    WARN_ON(pte_young(pte));
>> +}
>> +
>> +static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pte_t pte = pfn_pte(pfn, prot);
>> +
>> +    WARN_ON(!pte_savedwrite(pte_mk_savedwrite(pte_clear_savedwrite(pte))));
>> +    WARN_ON(pte_savedwrite(pte_clear_savedwrite(pte_mk_savedwrite(pte))));
>> +}
>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>   static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
>>   {
>> @@ -77,6 +127,89 @@ static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
>>       WARN_ON(!pmd_bad(pmd_mkhuge(pmd)));
>>   }
>>   +static void __init pmd_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pmd_t *pmdp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
> 
> Align args properly
> 
>> +{
>> +    pmd_t pmd = pfn_pmd(pfn, prot);
>> +
>> +    if (!has_transparent_hugepage())
>> +        return;
>> +
>> +    /* Align the address wrt HPAGE_PMD_SIZE */
>> +    vaddr = (vaddr & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
>> +
>> +    pmd = pfn_pmd(pfn, prot);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_set_wrprotect(mm, vaddr, pmdp);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(pmd_write(pmd));
>> +
>> +    pmd = pfn_pmd(pfn, prot);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_huge_get_and_clear(mm, vaddr, pmdp);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!pmd_none(pmd));
>> +
>> +    pmd = pfn_pmd(pfn, prot);
>> +    pmd = pmd_wrprotect(pmd);
>> +    pmd = pmd_mkclean(pmd);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmd = pmd_mkwrite(pmd);
>> +    pmd = pmd_mkdirty(pmd);
>> +    pmdp_set_access_flags(vma, vaddr, pmdp, pmd, 1);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!(pmd_write(pmd) && pmd_dirty(pmd)));
>> +
>> +    pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_huge_get_and_clear_full(vma, vaddr, pmdp, 1);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!pmd_none(pmd));
>> +
>> +    pmd = pmd_mkyoung(pmd);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_test_and_clear_young(vma, vaddr, pmdp);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(pmd_young(pmd));
>> +}
>> +
>> +static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pmd_t pmd = pfn_pmd(pfn, prot);
>> +
>> +    /*
>> +     * PMD based THP is a leaf entry.
>> +     */
>> +    pmd = pmd_mkhuge(pmd);
>> +    WARN_ON(!pmd_leaf(pmd));
>> +}
>> +
>> +static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
>> +{
>> +    pmd_t pmd;
>> +
>> +    if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
>> +        return;
>> +    /*
>> +     * X86 defined pmd_set_huge() verifies that the given
>> +     * PMD is not a populated non-leaf entry.
>> +     */
>> +    WRITE_ONCE(*pmdp, __pmd(0));
>> +    WARN_ON(!pmd_set_huge(pmdp, __pfn_to_phys(pfn), prot));
>> +    WARN_ON(!pmd_clear_huge(pmdp));
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!pmd_none(pmd));
>> +}
>> +
>> +static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pmd_t pmd = pfn_pmd(pfn, prot);
>> +
>> +    WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
>> +    WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
>> +}
>> +
>>   #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
>>   {
>> @@ -100,12 +233,115 @@ static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
>>        */
>>       WARN_ON(!pud_bad(pud_mkhuge(pud)));
>>   }
>> +
>> +static void pud_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pud_t *pudp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
> 
> Align args properly
> 
>> +{
>> +    pud_t pud = pfn_pud(pfn, prot);
>> +
>> +    if (!has_transparent_hugepage())
>> +        return;
>> +
>> +    /* Align the address wrt HPAGE_PUD_SIZE */
>> +    vaddr = (vaddr & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE;
>> +
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_set_wrprotect(mm, vaddr, pudp);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(pud_write(pud));
>> +
>> +#ifndef __PAGETABLE_PMD_FOLDED
>> +    pud = pfn_pud(pfn, prot);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_huge_get_and_clear(mm, vaddr, pudp);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!pud_none(pud));
>> +
>> +    pud = pfn_pud(pfn, prot);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_huge_get_and_clear_full(mm, vaddr, pudp, 1);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!pud_none(pud));
>> +#endif /* __PAGETABLE_PMD_FOLDED */
>> +    pud = pfn_pud(pfn, prot);
>> +    pud = pud_wrprotect(pud);
>> +    pud = pud_mkclean(pud);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pud = pud_mkwrite(pud);
>> +    pud = pud_mkdirty(pud);
>> +    pudp_set_access_flags(vma, vaddr, pudp, pud, 1);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!(pud_write(pud) && pud_dirty(pud)));
>> +
>> +    pud = pud_mkyoung(pud);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_test_and_clear_young(vma, vaddr, pudp);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(pud_young(pud));
>> +}
>> +
>> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pud_t pud = pfn_pud(pfn, prot);
>> +
>> +    /*
>> +     * PUD based THP is a leaf entry.
>> +     */
>> +    pud = pud_mkhuge(pud);
>> +    WARN_ON(!pud_leaf(pud));
>> +}
>> +
>> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
>> +{
>> +    pud_t pud;
>> +
>> +    if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
>> +        return;
>> +    /*
>> +     * X86 defined pud_set_huge() verifies that the given
>> +     * PUD is not a populated non-leaf entry.
>> +     */
>> +    WRITE_ONCE(*pudp, __pud(0));
>> +    WARN_ON(!pud_set_huge(pudp, __pfn_to_phys(pfn), prot));
>> +    WARN_ON(!pud_clear_huge(pudp));
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!pud_none(pud));
>> +}
>>   #else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void pud_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pud_t *pudp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
> 
> Align args properly
> 
>> +{
>> +}
>> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
>> +{
>> +}
>>   #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>>   #else  /* !CONFIG_TRANSPARENT_HUGEPAGE */
>>   static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot) { }
>>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void __init pmd_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pmd_t *pmdp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
> 
> Align args properly
> 
>> +{
>> +}
>> +static void __init pud_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pud_t *pudp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
> 
> Align args properly
> 

Sure, will fix the argument alignment in the places mentioned above.
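
For reference, a sketch of the alignment being asked for, following the usual
kernel style of lining continuation lines up with the opening parenthesis
(illustrative whitespace only, the final patch may differ):

static void __init pmd_advanced_tests(struct mm_struct *mm,
				      struct vm_area_struct *vma, pmd_t *pmdp,
				      unsigned long pfn, unsigned long vaddr,
				      pgprot_t prot)
{
}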


* Re: [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests
  2020-06-24 14:40       ` Alexander Gordeev
@ 2020-06-29  8:32         ` Anshuman Khandual
  0 siblings, 0 replies; 19+ messages in thread
From: Anshuman Khandual @ 2020-06-29  8:32 UTC (permalink / raw)
  To: Alexander Gordeev, Gerald Schaefer
  Cc: linux-doc, Benjamin Herrenschmidt, Heiko Carstens, linux-mm,
	Paul Mackerras, H. Peter Anvin, linux-riscv, Will Deacon,
	linux-arch, linux-s390, Jonathan Corbet, Michael Ellerman, x86,
	Christophe Leroy, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, ziy, Catalin Marinas, linux-snps-arc, Vasily Gorbik,
	Borislav Petkov, Paul Walmsley, Kirill A . Shutemov,
	Thomas Gleixner, linux-arm-kernel, christophe.leroy,
	Vineet Gupta, linux-kernel, Palmer Dabbelt, Andrew Morton,
	linuxppc-dev



On 06/24/2020 08:10 PM, Alexander Gordeev wrote:
> On Wed, Jun 24, 2020 at 01:48:08PM +0200, Gerald Schaefer wrote:
>> On Wed, 24 Jun 2020 13:05:39 +0200
>> Alexander Gordeev <agordeev@linux.ibm.com> wrote:
>>
>>> On Wed, Jun 24, 2020 at 08:43:10AM +0530, Anshuman Khandual wrote:
>>>
>>> [...]
>>>
>>>> Hello Gerald/Christophe/Vineet,
>>>>
>>>> It would be really great if you could give this series a quick test
>>>> on s390/ppc/arc platforms respectively. Thank you.
>>>
>>> That worked for me with the default and debug s390 configurations.
>>> Would you like to try with some particular options or combinations
>>> of the options?
>>
>> It will be enabled automatically on all archs that set
>> ARCH_HAS_DEBUG_VM_PGTABLE, which we do for s390 unconditionally.
>> Also, DEBUG_VM has to be set, which we have only in the debug config.
>> So only the s390 debug config will have it enabled, you can check
>> dmesg for "debug_vm_pgtable" to see when / where it was run, and if it
>> triggered any warnings.
> 
> Yes, that is what I did ;)
> 
> I should have been clearer. I wonder whether Anshuman has in
> mind other options which possibly make sense to set or unset,
> to check how it goes with non-standard configurations.

After enabling CONFIG_DEBUG_VM_PGTABLE (either explicitly or via DEBUG_VM),
ideally any memory config combination on s390 which can change the platform
page table helpers (validated with CONFIG_DEBUG_VM) should also get tested.
Recently, there was a kernel crash on ppc64 [1] and a build failure on ppc32
[2] with some particular configs. Hence it would be great if you could run
this test on multiple s390 configurations.

[1] 787d563b8642f35c5 ("mm/debug_vm_pgtable: fix kernel crash by checking for THP support")
[2] 9449c9cb420b249eb ("mm/debug_vm_pgtable: fix build failure with powerpc 8xx")
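
As a rough illustration, a config fragment for one such run could look like
this (assuming s390, where ARCH_HAS_DEBUG_VM_PGTABLE is selected
unconditionally; the exact set of memory options to vary is up to the tester):

CONFIG_DEBUG_VM=y
CONFIG_DEBUG_VM_PGTABLE=y
# vary the memory related options between runs, for example:
CONFIG_TRANSPARENT_HUGEPAGE=y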

- Anshuman


* Re: [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests
  2020-06-24  3:13 ` [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests Anshuman Khandual
  2020-06-24 11:05   ` Alexander Gordeev
  2020-06-27  8:17   ` Christophe Leroy
@ 2020-06-30  3:53   ` Anshuman Khandual
  2020-06-30 21:32     ` Vineet Gupta
  2 siblings, 1 reply; 19+ messages in thread
From: Anshuman Khandual @ 2020-06-30  3:53 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-doc, Benjamin Herrenschmidt, Heiko Carstens,
	Paul Mackerras, H. Peter Anvin, linux-riscv, Will Deacon,
	linux-arch, linux-s390, Jonathan Corbet, Michael Ellerman, x86,
	Christophe Leroy, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, linux-arm-kernel, ziy, Catalin Marinas,
	linux-snps-arc, Vasily Gorbik, Vineet Gupta, Borislav Petkov,
	Paul Walmsley, Kirill A . Shutemov, Thomas Gleixner,
	gerald.schaefer, christophe.leroy, Vineet Gupta, linux-kernel,
	Palmer Dabbelt, Qian Cai, Andrew Morton, linuxppc-dev



On 06/24/2020 08:43 AM, Anshuman Khandual wrote:
> 
> 
> On 06/15/2020 09:07 AM, Anshuman Khandual wrote:
>> This series adds some more arch page table helper validation tests which
>> are related to core and advanced memory functions. This also creates a
>> documentation, enlisting expected semantics for all page table helpers as
>> suggested by Mike Rapoport previously (https://lkml.org/lkml/2020/1/30/40).
>>
>> There are many TRANSPARENT_HUGEPAGE and ARCH_HAS_TRANSPARENT_HUGEPAGE_PUD
>> ifdefs scattered across the test. But consolidating all the fallback stubs
>> is not very straight forward because ARCH_HAS_TRANSPARENT_HUGEPAGE_PUD is
>> not explicitly dependent on ARCH_HAS_TRANSPARENT_HUGEPAGE.
>>
>> Tested on arm64, x86 platforms but only build tested on all other enabled
>> platforms through ARCH_HAS_DEBUG_VM_PGTABLE i.e powerpc, arc, s390. The
>> following failure on arm64 still exists which was mentioned previously. It
>> will be fixed with the upcoming THP migration on arm64 enablement series.
>>
>> WARNING .... mm/debug_vm_pgtable.c:860 debug_vm_pgtable+0x940/0xa54
>> WARN_ON(!pmd_present(pmd_mkinvalid(pmd_mkhuge(pmd))))
>>
>> This series is based on v5.8-rc1.
>>
>> Changes in V3:
>>
>> - Replaced HAVE_ARCH_SOFT_DIRTY with MEM_SOFT_DIRTY
>> - Added HAVE_ARCH_HUGE_VMAP checks in pxx_huge_tests() per Gerald
>> - Updated documentation for pmd_thp_tests() per Zi Yan
>> - Replaced READ_ONCE() with huge_ptep_get() per Gerald
>> - Added pte_mkhuge() and masking with PMD_MASK per Gerald
>> - Replaced pte_same() with holding pfn check in pxx_swap_tests()
>> - Added documentation for all (#ifdef #else #endif) per Gerald
>> - Updated pmd_protnone_tests() per Gerald
>> - Updated HugeTLB PTE creation in hugetlb_advanced_tests() per Gerald
>> - Replaced [pmd|pud]_mknotpresent() with [pmd|pud]_mkinvalid()
>> - Added has_transparent_hugepage() check for PMD and PUD tests
>> - Added a patch which debug prints all individual tests being executed
>> - Updated documentation for renamed [pmd|pud]_mkinvalid() helpers
> 
> Hello Gerald/Christophe/Vineet,
> 
> It would be really great if you could give this series a quick test
> on s390/ppc/arc platforms respectively. Thank you.

Thanks Alexander, Gerald and Christophe for testing this out on s390
and ppc32 platforms. Probably Vineet and Qian (any other volunteers)
could help us with arc and ppc64 platforms, which I would appreciate.


* Re: [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests
  2020-06-30  3:53   ` Anshuman Khandual
@ 2020-06-30 21:32     ` Vineet Gupta
  2020-07-03  3:59       ` Anshuman Khandual
  0 siblings, 1 reply; 19+ messages in thread
From: Vineet Gupta @ 2020-06-30 21:32 UTC (permalink / raw)
  To: Anshuman Khandual, linux-mm
  Cc: christophe.leroy, arcml, Vasily Gorbik, Jonathan Corbet,
	Catalin Marinas, Kirill A . Shutemov, H. Peter  Anvin,
	Heiko Carstens, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, Paul Mackerras, Michael Ellerman, ziy,
	Benjamin Herrenschmidt, Borislav Petkov, Andrew Morton,
	Paul Walmsley, Will Deacon, Thomas Gleixner, gerald.schaefer

On 6/29/20 8:53 PM, Anshuman Khandual wrote:
> 
> 
> On 06/24/2020 08:43 AM, Anshuman Khandual wrote:
>>
>>
>> On 06/15/2020 09:07 AM, Anshuman Khandual wrote:
>>> This series adds some more arch page table helper validation tests which
>>> are related to core and advanced memory functions. This also creates a
>>> documentation, enlisting expected semantics for all page table helpers as
>>> suggested by Mike Rapoport previously (https://lkml.org/lkml/2020/1/30/40).
>>>
>>> There are many TRANSPARENT_HUGEPAGE and ARCH_HAS_TRANSPARENT_HUGEPAGE_PUD
>>> ifdefs scattered across the test. But consolidating all the fallback stubs
>>> is not very straight forward because ARCH_HAS_TRANSPARENT_HUGEPAGE_PUD is
>>> not explicitly dependent on ARCH_HAS_TRANSPARENT_HUGEPAGE.
>>>
>>> Tested on arm64, x86 platforms but only build tested on all other enabled
>>> platforms through ARCH_HAS_DEBUG_VM_PGTABLE i.e powerpc, arc, s390. The
>>> following failure on arm64 still exists which was mentioned previously. It
>>> will be fixed with the upcoming THP migration on arm64 enablement series.
>>>
>>> WARNING .... mm/debug_vm_pgtable.c:860 debug_vm_pgtable+0x940/0xa54
>>> WARN_ON(!pmd_present(pmd_mkinvalid(pmd_mkhuge(pmd))))
>>>
>>> This series is based on v5.8-rc1.
>>>
>>> Changes in V3:
>>>
>>> - Replaced HAVE_ARCH_SOFT_DIRTY with MEM_SOFT_DIRTY
>>> - Added HAVE_ARCH_HUGE_VMAP checks in pxx_huge_tests() per Gerald
>>> - Updated documentation for pmd_thp_tests() per Zi Yan
>>> - Replaced READ_ONCE() with huge_ptep_get() per Gerald
>>> - Added pte_mkhuge() and masking with PMD_MASK per Gerald
>>> - Replaced pte_same() with holding pfn check in pxx_swap_tests()
>>> - Added documentation for all (#ifdef #else #endif) per Gerald
>>> - Updated pmd_protnone_tests() per Gerald
>>> - Updated HugeTLB PTE creation in hugetlb_advanced_tests() per Gerald
>>> - Replaced [pmd|pud]_mknotpresent() with [pmd|pud]_mkinvalid()
>>> - Added has_transparent_hugepage() check for PMD and PUD tests
>>> - Added a patch which debug prints all individual tests being executed
>>> - Updated documentation for renamed [pmd|pud]_mkinvalid() helpers
>>
>> Hello Gerald/Christophe/Vineet,
>>
>> It would be really great if you could give this series a quick test
>> on s390/ppc/arc platforms respectively. Thank you.
> 
> Thanks Alexander, Gerald and Christophe for testing this out on s390
> and ppc32 platforms. Probably Vineet and Qian (any other volunteers)
> could help us with arc and ppc64 platforms, which I would appreciate.

Tested-by: Vineet Gupta <vgupta@synopsys.com>

Apologies for the delay in getting to this. Works fine on ARC

I have the following enabled:

# CONFIG_DEBUG_VM_RB is not set
# CONFIG_DEBUG_VM_PGFLAGS is not set
CONFIG_DEBUG_VM_PGTABLE=y

And this boots fine

NET: Registered protocol family 17
NET: Registered protocol family 15
debug_vm_pgtable: [debug_vm_pgtable         ]: Validating architecture page table
helpers
Warning: unable to open an initial console.
Freeing unused kernel memory: 18840K
This architecture does not have kernel memory protection.
Run /init as init process
  with arguments:
    /init
  with environment:
    HOME=/
    TERM=linux
...
***********************************************************************
			Welcome to ARCLinux
***********************************************************************
[ARCLinux]#


* Re: [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests
  2020-06-30 21:32     ` Vineet Gupta
@ 2020-07-03  3:59       ` Anshuman Khandual
  0 siblings, 0 replies; 19+ messages in thread
From: Anshuman Khandual @ 2020-07-03  3:59 UTC (permalink / raw)
  To: Vineet Gupta, linux-mm
  Cc: christophe.leroy, arcml, Vasily Gorbik, Jonathan Corbet,
	Catalin Marinas, Kirill A . Shutemov, H. Peter Anvin,
	Heiko Carstens, Mike Rapoport, Christian Borntraeger,
	Ingo Molnar, Paul Mackerras, Michael Ellerman, ziy,
	Benjamin Herrenschmidt, Borislav Petkov, Andrew Morton,
	Paul Walmsley, Will Deacon, Thomas Gleixner, gerald.schaefer



On 07/01/2020 03:02 AM, Vineet Gupta wrote:
> On 6/29/20 8:53 PM, Anshuman Khandual wrote:
>>
>>
>> On 06/24/2020 08:43 AM, Anshuman Khandual wrote:
>>>
>>>
>>> On 06/15/2020 09:07 AM, Anshuman Khandual wrote:
>>>> This series adds some more arch page table helper validation tests which
>>>> are related to core and advanced memory functions. This also creates a
>>>> documentation, enlisting expected semantics for all page table helpers as
>>>> suggested by Mike Rapoport previously (https://lkml.org/lkml/2020/1/30/40).
>>>>
>>>> There are many TRANSPARENT_HUGEPAGE and ARCH_HAS_TRANSPARENT_HUGEPAGE_PUD
>>>> ifdefs scattered across the test. But consolidating all the fallback stubs
>>>> is not very straight forward because ARCH_HAS_TRANSPARENT_HUGEPAGE_PUD is
>>>> not explicitly dependent on ARCH_HAS_TRANSPARENT_HUGEPAGE.
>>>>
>>>> Tested on arm64, x86 platforms but only build tested on all other enabled
>>>> platforms through ARCH_HAS_DEBUG_VM_PGTABLE i.e powerpc, arc, s390. The
>>>> following failure on arm64 still exists which was mentioned previously. It
>>>> will be fixed with the upcoming THP migration on arm64 enablement series.
>>>>
>>>> WARNING .... mm/debug_vm_pgtable.c:860 debug_vm_pgtable+0x940/0xa54
>>>> WARN_ON(!pmd_present(pmd_mkinvalid(pmd_mkhuge(pmd))))
>>>>
>>>> This series is based on v5.8-rc1.
>>>>
>>>> Changes in V3:
>>>>
>>>> - Replaced HAVE_ARCH_SOFT_DIRTY with MEM_SOFT_DIRTY
>>>> - Added HAVE_ARCH_HUGE_VMAP checks in pxx_huge_tests() per Gerald
>>>> - Updated documentation for pmd_thp_tests() per Zi Yan
>>>> - Replaced READ_ONCE() with huge_ptep_get() per Gerald
>>>> - Added pte_mkhuge() and masking with PMD_MASK per Gerald
>>>> - Replaced pte_same() with holding pfn check in pxx_swap_tests()
>>>> - Added documentation for all (#ifdef #else #endif) per Gerald
>>>> - Updated pmd_protnone_tests() per Gerald
>>>> - Updated HugeTLB PTE creation in hugetlb_advanced_tests() per Gerald
>>>> - Replaced [pmd|pud]_mknotpresent() with [pmd|pud]_mkinvalid()
>>>> - Added has_transparent_hugepage() check for PMD and PUD tests
>>>> - Added a patch which debug prints all individual tests being executed
>>>> - Updated documentation for renamed [pmd|pud]_mkinvalid() helpers
>>>
>>> Hello Gerald/Christophe/Vineet,
>>>
>>> It would be really great if you could give this series a quick test
>>> on s390/ppc/arc platforms respectively. Thank you.
>>
>> Thanks Alexander, Gerald and Christophe for testing this out on s390
>> and ppc32 platforms. Probably Vineet and Qian (any other volunteers)
>> could help us with arc and ppc64 platforms, which I would appreciate.
> 
> Tested-by: Vineet Gupta <vgupta@synopsys.com>
> 
> Apologies for the delay in getting to this. Works fine on ARC
> 
> I have the following enabled:
> 
> # CONFIG_DEBUG_VM_RB is not set
> # CONFIG_DEBUG_VM_PGFLAGS is not set
> CONFIG_DEBUG_VM_PGTABLE=y
> 
> And this boots fine
> 
> NET: Registered protocol family 17
> NET: Registered protocol family 15
> debug_vm_pgtable: [debug_vm_pgtable         ]: Validating architecture page table
> helpers
> Warning: unable to open an initial console.
> Freeing unused kernel memory: 18840K
> This architecture does not have kernel memory protection.
> Run /init as init process
>   with arguments:
>     /init
>   with environment:
>     HOME=/
>     TERM=linux
> ...
> ***********************************************************************
> 			Welcome to ARCLinux
> ***********************************************************************
> [ARCLinux]#
> 

Thanks for testing this Vineet, really appreciate it.



Thread overview: 19+ messages
2020-06-15  3:37 [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests Anshuman Khandual
2020-06-15  3:37 ` [PATCH V3 1/4] mm/debug_vm_pgtable: Add tests validating arch helpers for core MM features Anshuman Khandual
2020-06-15  3:37 ` [PATCH V3 2/4] mm/debug_vm_pgtable: Add tests validating advanced arch page table helpers Anshuman Khandual
2020-06-27  7:18   ` Christophe Leroy
2020-06-29  8:09     ` Anshuman Khandual
2020-06-27  7:26   ` Christophe Leroy
2020-06-29  8:15     ` Anshuman Khandual
2020-06-15  3:37 ` [PATCH V3 3/4] mm/debug_vm_pgtable: Add debug prints for individual tests Anshuman Khandual
2020-06-15  3:37 ` [PATCH V3 4/4] Documentation/mm: Add descriptions for arch page table helpers Anshuman Khandual
2020-06-17 10:01   ` Mike Rapoport
2020-06-24  3:13 ` [PATCH V3 0/4] mm/debug_vm_pgtable: Add some more tests Anshuman Khandual
2020-06-24 11:05   ` Alexander Gordeev
2020-06-24 11:48     ` Gerald Schaefer
2020-06-24 14:40       ` Alexander Gordeev
2020-06-29  8:32         ` Anshuman Khandual
2020-06-27  8:17   ` Christophe Leroy
2020-06-30  3:53   ` Anshuman Khandual
2020-06-30 21:32     ` Vineet Gupta
2020-07-03  3:59       ` Anshuman Khandual
