* [RFC V2 0/1] mm/debug: Add tests for architecture exported page table helpers
@ 2019-08-09  7:33 Anshuman Khandual
  2019-08-09  7:33 ` [RFC V2 1/1] mm/pgtable/debug: Add test validating architecture " Anshuman Khandual
  2019-08-09 10:16 ` [RFC V2 0/1] mm/debug: Add tests for architecture exported " Matthew Wilcox
  0 siblings, 2 replies; 10+ messages in thread
From: Anshuman Khandual @ 2019-08-09  7:33 UTC (permalink / raw)
  To: linux-mm
  Cc: Anshuman Khandual, Andrew Morton, Vlastimil Babka,
	Greg Kroah-Hartman, Thomas Gleixner, Mike Rapoport,
	Jason Gunthorpe, Dan Williams, Peter Zijlstra, Michal Hocko,
	Mark Rutland, Mark Brown, Steven Price, Ard Biesheuvel,
	Masahiro Yamada, Kees Cook, Tetsuo Handa, Matthew Wilcox,
	Sri Krishna chowdary, Dave Hansen, Russell King - ARM Linux,
	Michael Ellerman, Paul Mackerras, Martin Schwidefsky,
	Heiko Carstens, David S. Miller, Vineet Gupta, James Hogan,
	Paul Burton, Ralf Baechle, linux-snps-arc, linux-mips,
	linux-arm-kernel, linux-ia64, linuxppc-dev, linux-s390, linux-sh,
	sparclinux, x86, linux-kernel

This series adds a test for validating architecture exported page table
helpers. The patch in the series adds basic transformation tests at various
levels of the page table.

This test was originally suggested by Catalin during the earlier arm64 THP
migration RFC discussion. Going forward it can include more specific tests
for various generic MM features like THP, HugeTLB etc. as well as platform
specific tests.

https://lore.kernel.org/linux-mm/20190628102003.GA56463@arrakis.emea.arm.com/

Questions:

Should alloc_gigantic_page() be made available as an interface for general
use in the kernel? The test module here uses a very similar implementation
from HugeTLB to allocate a PUD aligned memory block. Similarly for mm_alloc(),
which needs to be exported through a header.
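
For reference, the prototypes the test would like to see exported look like
this (signatures taken from the patch below; which headers they would land in
is left open):

	/* mm_alloc() is currently picked up via a local extern in the test */
	struct mm_struct *mm_alloc(void);

	/* Same signature as the test's local copy of the HugeTLB-style helper */
	struct page *alloc_gigantic_page(nodemask_t *nodemask, int nid,
					 gfp_t gfp_mask, int order);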

Testing:

Build and boot tested on arm64 and x86 platforms. While arm64 passes all of
these tests, the following errors were reported on x86:

1. WARN_ON(pud_bad(pud)) in pud_populate_tests()
2. WARN_ON(p4d_bad(p4d)) in p4d_populate_tests()
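
For anyone trying this out, the test is built in (not a loadable module) once
the new config option from the patch is enabled, for example:

	CONFIG_DEBUG_KERNEL=y
	CONFIG_DEBUG_ARCH_PGTABLE_TEST=y

The tests then run once during boot from the init call and report any failure
through WARN_ON() splats in the kernel log, as in the two cases above.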

I would really appreciate it if folks could help validate this test on other
platforms and report back any problems. Suggestions, comments and inputs are
welcome. Thank you.

Changes in V2:

- Moved the test module and its config from lib/ to mm/
- Renamed config TEST_ARCH_PGTABLE as DEBUG_ARCH_PGTABLE_TEST
- Renamed file from test_arch_pgtable.c to arch_pgtable_test.c
- Added relevant MODULE_DESCRIPTION() and MODULE_AUTHOR() details
- Dropped loadable module config option
- Basic tests now use memory blocks with required size and alignment
- PUD aligned memory block gets allocated with alloc_contig_range()
- If PUD aligned memory could not be allocated, it falls back on a PMD aligned
  memory block from the page allocator and the pud_* tests are skipped
- Clear and populate tests now operate on real in memory page table entries
- Dummy mm_struct gets allocated with mm_alloc()
- Dummy page table entries get allocated with [pud|pmd|pte]_alloc_[map]()
- Simplified [p4d|pgd]_basic_tests(), now has random values in the entries

RFC V1:

https://lore.kernel.org/linux-mm/1564037723-26676-1-git-send-email-anshuman.khandual@arm.com/

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Steven Price <Steven.Price@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Sri Krishna chowdary <schowdary@nvidia.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Russell King - ARM Linux <linux@armlinux.org.uk>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-mips@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-ia64@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org

Anshuman Khandual (1):
  mm/pgtable/debug: Add test validating architecture page table helpers

 mm/Kconfig.debug       |  14 ++
 mm/Makefile            |   1 +
 mm/arch_pgtable_test.c | 400 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 415 insertions(+)
 create mode 100644 mm/arch_pgtable_test.c

-- 
2.20.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [RFC V2 1/1] mm/pgtable/debug: Add test validating architecture page table helpers
  2019-08-09  7:33 [RFC V2 0/1] mm/debug: Add tests for architecture exported page table helpers Anshuman Khandual
@ 2019-08-09  7:33 ` Anshuman Khandual
  2019-08-09 10:16 ` [RFC V2 0/1] mm/debug: Add tests for architecture exported " Matthew Wilcox
  1 sibling, 0 replies; 10+ messages in thread
From: Anshuman Khandual @ 2019-08-09  7:33 UTC (permalink / raw)
  To: linux-mm
  Cc: Anshuman Khandual, Andrew Morton, Vlastimil Babka,
	Greg Kroah-Hartman, Thomas Gleixner, Mike Rapoport,
	Jason Gunthorpe, Dan Williams, Peter Zijlstra, Michal Hocko,
	Mark Rutland, Mark Brown, Steven Price, Ard Biesheuvel,
	Masahiro Yamada, Kees Cook, Tetsuo Handa, Matthew Wilcox,
	Sri Krishna chowdary, Dave Hansen, Russell King - ARM Linux,
	Michael Ellerman, Paul Mackerras, Martin Schwidefsky,
	Heiko Carstens, David S. Miller, Vineet Gupta, James Hogan,
	Paul Burton, Ralf Baechle, linux-snps-arc, linux-mips,
	linux-arm-kernel, linux-ia64, linuxppc-dev, linux-s390, linux-sh,
	sparclinux, x86, linux-kernel

This adds a test module which validates architecture page table helpers and
accessors for compliance with expected generic MM semantics. This will help
various architectures in validating changes to existing page table helpers
or the addition of new ones.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Steven Price <Steven.Price@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Sri Krishna chowdary <schowdary@nvidia.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Russell King - ARM Linux <linux@armlinux.org.uk>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-mips@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-ia64@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 mm/Kconfig.debug       |  14 ++
 mm/Makefile            |   1 +
 mm/arch_pgtable_test.c | 400 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 415 insertions(+)
 create mode 100644 mm/arch_pgtable_test.c

diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 82b6a20898bd..d3dfbe984d41 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -115,3 +115,17 @@ config DEBUG_RODATA_TEST
     depends on STRICT_KERNEL_RWX
     ---help---
       This option enables a testcase for the setting rodata read-only.
+
+config DEBUG_ARCH_PGTABLE_TEST
+	bool "Test arch page table helpers for semantics compliance"
+	depends on MMU
+	depends on DEBUG_KERNEL
+	help
+	  This option provides a kernel module which can be used to test
+	  architecture page table helper functions on various platforms,
+	  verifying whether they comply with expected generic MM semantics.
+	  This will help architecture code in making sure that any changes
+	  to these helpers, or new additions, still conform to the expected
+	  generic MM semantics.
+
+	  If unsure, say N.
diff --git a/mm/Makefile b/mm/Makefile
index 338e528ad436..0e6ac3789ca8 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -84,6 +84,7 @@ obj-$(CONFIG_HWPOISON_INJECT) += hwpoison-inject.o
 obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
 obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
 obj-$(CONFIG_DEBUG_RODATA_TEST) += rodata_test.o
+obj-$(CONFIG_DEBUG_ARCH_PGTABLE_TEST) += arch_pgtable_test.o
 obj-$(CONFIG_PAGE_OWNER) += page_owner.o
 obj-$(CONFIG_CLEANCACHE) += cleancache.o
 obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
diff --git a/mm/arch_pgtable_test.c b/mm/arch_pgtable_test.c
new file mode 100644
index 000000000000..41d6fa78a620
--- /dev/null
+++ b/mm/arch_pgtable_test.c
@@ -0,0 +1,400 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * This kernel module validates architecture page table helpers &
+ * accessors and helps in verifying their continued compliance with
+ * generic MM semantics.
+ *
+ * Copyright (C) 2019 ARM Ltd.
+ *
+ * Author: Anshuman Khandual <anshuman.khandual@arm.com>
+ */
+#define pr_fmt(fmt) "arch_pgtable_test: %s " fmt, __func__
+
+#include <linux/kernel.h>
+#include <linux/hugetlb.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+#include <linux/mm_types.h>
+#include <linux/module.h>
+#include <linux/printk.h>
+#include <linux/swap.h>
+#include <linux/swapops.h>
+#include <linux/pfn_t.h>
+#include <linux/gfp.h>
+#include <linux/spinlock.h>
+#include <linux/sched/mm.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+
+/*
+ * Basic operations
+ *
+ * mkold(entry)			= An old and not a young entry
+ * mkyoung(entry)		= A young and not an old entry
+ * mkdirty(entry)		= A dirty and not a clean entry
+ * mkclean(entry)		= A clean and not a dirty entry
+ * mkwrite(entry)		= A write and not a write protected entry
+ * wrprotect(entry)		= A write protected and not a write entry
+ * pxx_bad(entry)		= A mapped and non-table entry
+ * pxx_same(entry1, entry2)	= Both entries hold the exact same value
+ */
+#define VADDR_TEST	(PGDIR_SIZE + PUD_SIZE + PMD_SIZE + PAGE_SIZE)
+#define VMA_TEST_FLAGS	(VM_READ|VM_WRITE|VM_EXEC)
+#define RANDOM_NZVALUE	(0xbe)
+
+static bool pud_aligned;
+
+extern struct mm_struct *mm_alloc(void);
+
+static void pte_basic_tests(struct page *page, pgprot_t prot)
+{
+	pte_t pte = mk_pte(page, prot);
+
+	WARN_ON(!pte_same(pte, pte));
+	WARN_ON(!pte_young(pte_mkyoung(pte)));
+	WARN_ON(!pte_dirty(pte_mkdirty(pte)));
+	WARN_ON(!pte_write(pte_mkwrite(pte)));
+	WARN_ON(pte_young(pte_mkold(pte)));
+	WARN_ON(pte_dirty(pte_mkclean(pte)));
+	WARN_ON(pte_write(pte_wrprotect(pte)));
+}
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE
+static void pmd_basic_tests(struct page *page, pgprot_t prot)
+{
+	pmd_t pmd = mk_pmd(page, prot);
+
+	WARN_ON(!pmd_same(pmd, pmd));
+	WARN_ON(!pmd_young(pmd_mkyoung(pmd)));
+	WARN_ON(!pmd_dirty(pmd_mkdirty(pmd)));
+	WARN_ON(!pmd_write(pmd_mkwrite(pmd)));
+	WARN_ON(pmd_young(pmd_mkold(pmd)));
+	WARN_ON(pmd_dirty(pmd_mkclean(pmd)));
+	WARN_ON(pmd_write(pmd_wrprotect(pmd)));
+	/*
+	 * A huge page does not point to next level page table
+	 * entry. Hence this must qualify as pmd_bad().
+	 */
+	WARN_ON(!pmd_bad(pmd_mkhuge(pmd)));
+}
+#else
+static void pmd_basic_tests(struct page *page, pgprot_t prot) { }
+#endif
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+static void pud_basic_tests(struct page *page, pgprot_t prot)
+{
+	pud_t pud;
+
+	/*
+	 * Memory block here must be PUD_SIZE aligned. Abort this
+	 * test in case we could not allocate such a memory block.
+	 */
+	if (!pud_aligned) {
+		pr_warn("Could not proceed with PUD tests\n");
+		return;
+	}
+	pud = pfn_pud(page_to_pfn(page), prot);
+
+	WARN_ON(!pud_same(pud, pud));
+	WARN_ON(!pud_young(pud_mkyoung(pud)));
+	WARN_ON(!pud_write(pud_mkwrite(pud)));
+	WARN_ON(pud_write(pud_wrprotect(pud)));
+	WARN_ON(pud_young(pud_mkold(pud)));
+
+#if !defined(__PAGETABLE_PMD_FOLDED) && !defined(__ARCH_HAS_4LEVEL_HACK)
+	/*
+	 * A huge page does not point to next level page table
+	 * entry. Hence this must qualify as pud_bad().
+	 */
+	WARN_ON(!pud_bad(pud_mkhuge(pud)));
+#endif
+}
+#else
+static void pud_basic_tests(struct page *page, pgprot_t prot) { }
+#endif
+
+static void p4d_basic_tests(struct page *page, pgprot_t prot)
+{
+	p4d_t p4d;
+
+	memset(&p4d, RANDOM_NZVALUE, sizeof(p4d_t));
+	WARN_ON(!p4d_same(p4d, p4d));
+}
+
+static void pgd_basic_tests(struct page *page, pgprot_t prot)
+{
+	pgd_t pgd;
+
+	memset(&pgd, RANDOM_NZVALUE, sizeof(pgd_t));
+	WARN_ON(!pgd_same(pgd, pgd));
+}
+
+#if !defined(__PAGETABLE_PMD_FOLDED) && !defined(__ARCH_HAS_4LEVEL_HACK)
+static void pud_clear_tests(pud_t *pudp)
+{
+	memset(pudp, RANDOM_NZVALUE, sizeof(pud_t));
+	pud_clear(pudp);
+	WARN_ON(!pud_none(READ_ONCE(*pudp)));
+}
+
+static void pud_populate_tests(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
+{
+	/*
+	 * This entry points to next level page table page.
+	 * Hence this must not qualify as pud_bad().
+	 */
+	pmd_clear(pmdp);
+	pud_clear(pudp);
+	pud_populate(mm, pudp, pmdp);
+	WARN_ON(pud_bad(READ_ONCE(*pudp)));
+}
+#else
+static void pud_clear_tests(pud_t *pudp) { }
+static void pud_populate_tests(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
+{
+}
+#endif
+
+#if !defined(__PAGETABLE_PUD_FOLDED) && !defined(__ARCH_HAS_5LEVEL_HACK)
+static void p4d_clear_tests(p4d_t *p4dp)
+{
+	memset(p4dp, RANDOM_NZVALUE, sizeof(p4d_t));
+	p4d_clear(p4dp);
+	WARN_ON(!p4d_none(READ_ONCE(*p4dp)));
+}
+
+static void p4d_populate_tests(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp)
+{
+	/*
+	 * This entry points to next level page table page.
+	 * Hence this must not qualify as p4d_bad().
+	 */
+	pud_clear(pudp);
+	p4d_clear(p4dp);
+	p4d_populate(mm, p4dp, pudp);
+	WARN_ON(p4d_bad(READ_ONCE(*p4dp)));
+}
+#else
+static void p4d_clear_tests(p4d_t *p4dp) { }
+static void p4d_populate_tests(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp)
+{
+}
+#endif
+
+#ifndef __PAGETABLE_P4D_FOLDED
+static void pgd_clear_tests(pgd_t *pgdp)
+{
+	memset(pgdp, RANDOM_NZVALUE, sizeof(pgd_t));
+	pgd_clear(pgdp);
+	WARN_ON(!pgd_none(READ_ONCE(*pgdp)));
+}
+
+static void pgd_populate_tests(struct mm_struct *mm, pgd_t *pgdp, p4d_t *p4dp)
+{
+	/*
+	 * This entry points to next level page table page.
+	 * Hence this must not qualify as pgd_bad().
+	 */
+	p4d_clear(p4dp);
+	pgd_clear(pgdp);
+	pgd_populate(mm, pgdp, p4dp);
+	WARN_ON(pgd_bad(READ_ONCE(*pgdp)));
+}
+#else
+static void pgd_clear_tests(pgd_t *pgdp) { }
+static void pgd_populate_tests(struct mm_struct *mm, pgd_t *pgdp, p4d_t *p4dp)
+{
+}
+#endif
+
+static void pte_clear_tests(pte_t *ptep)
+{
+	memset(ptep, RANDOM_NZVALUE, sizeof(pte_t));
+	pte_clear(NULL, 0, ptep);
+	WARN_ON(!pte_none(READ_ONCE(*ptep)));
+}
+
+static void pmd_clear_tests(pmd_t *pmdp)
+{
+	memset(pmdp, RANDOM_NZVALUE, sizeof(pmd_t));
+	pmd_clear(pmdp);
+	WARN_ON(!pmd_none(READ_ONCE(*pmdp)));
+}
+
+static void pmd_populate_tests(struct mm_struct *mm, pmd_t *pmdp,
+			       pgtable_t pgtable)
+{
+	/*
+	 * This entry points to next level page table page.
+	 * Hence this must not qualify as pmd_bad().
+	 */
+	pmd_clear(pmdp);
+	pmd_populate(mm, pmdp, pgtable);
+	WARN_ON(pmd_bad(READ_ONCE(*pmdp)));
+}
+
+static bool pfn_range_valid(struct zone *z, unsigned long start_pfn,
+			    unsigned long nr_pages)
+{
+	unsigned long i, end_pfn = start_pfn + nr_pages;
+	struct page *page;
+
+	for (i = start_pfn; i < end_pfn; i++) {
+		if (!pfn_valid(i))
+			return false;
+
+		page = pfn_to_page(i);
+
+		if (page_zone(page) != z)
+			return false;
+
+		if (PageReserved(page))
+			return false;
+
+		if (page_count(page) > 0)
+			return false;
+
+		if (PageHuge(page))
+			return false;
+	}
+	return true;
+}
+
+static struct page *alloc_gigantic_page(nodemask_t *nodemask,
+					int nid, gfp_t gfp_mask, int order)
+{
+	struct zonelist *zonelist;
+	struct zone *zone;
+	struct zoneref *z;
+	enum zone_type zonesel;
+	unsigned long ret, pfn, flags, nr_pages;
+
+	nr_pages = 1UL << order;
+	zonesel = gfp_zone(gfp_mask);
+	zonelist = node_zonelist(nid, gfp_mask);
+	for_each_zone_zonelist_nodemask(zone, z, zonelist, zonesel, nodemask) {
+		spin_lock_irqsave(&zone->lock, flags);
+		pfn = ALIGN(zone->zone_start_pfn, nr_pages);
+		while (zone_spans_pfn(zone, pfn + nr_pages - 1)) {
+			if (pfn_range_valid(zone, pfn, nr_pages)) {
+				spin_unlock_irqrestore(&zone->lock, flags);
+				ret = alloc_contig_range(pfn, pfn + nr_pages,
+							 MIGRATE_MOVABLE,
+							 gfp_mask);
+				if (!ret)
+					return pfn_to_page(pfn);
+				spin_lock_irqsave(&zone->lock, flags);
+			}
+			pfn += nr_pages;
+		}
+		spin_unlock_irqrestore(&zone->lock, flags);
+	}
+	return NULL;
+}
+
+static struct page *alloc_mapped_page(void)
+{
+	gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO;
+	struct page *page = NULL;
+
+	page = alloc_gigantic_page(&node_states[N_MEMORY], first_memory_node,
+				   gfp_mask, get_order(PUD_SIZE));
+	if (page) {
+		pud_aligned = true;
+		return page;
+	}
+	return alloc_pages(gfp_mask, get_order(PMD_SIZE));
+}
+
+static void free_mapped_page(struct page *page)
+{
+	if (pud_aligned) {
+		unsigned long pfn = page_to_pfn(page);
+
+		free_contig_range(pfn, 1ULL << get_order(PUD_SIZE));
+		return;
+	}
+	free_pages((unsigned long)page_address(page), get_order(PMD_SIZE));
+}
+
+static int __init arch_pgtable_tests_init(void)
+{
+	struct mm_struct *mm;
+	struct page *page;
+	pgd_t *pgdp;
+	p4d_t *p4dp, *saved_p4dp;
+	pud_t *pudp, *saved_pudp;
+	pmd_t *pmdp, *saved_pmdp;
+	pte_t *ptep, *saved_ptep;
+	pgprot_t prot = vm_get_page_prot(VMA_TEST_FLAGS);
+	unsigned long vaddr = VADDR_TEST;
+
+	mm = mm_alloc();
+	if (!mm) {
+		pr_err("mm_struct allocation failed\n");
+		return 1;
+	}
+
+	page = alloc_mapped_page();
+	if (!page) {
+		pr_err("memory allocation failed\n");
+		return 1;
+	}
+
+	pgdp = pgd_offset(mm, vaddr);
+	p4dp = p4d_alloc(mm, pgdp, vaddr);
+	pudp = pud_alloc(mm, p4dp, vaddr);
+	pmdp = pmd_alloc(mm, pudp, vaddr);
+	ptep = pte_alloc_map(mm, pmdp, vaddr);
+
+	/*
+	 * Save all the page table page addresses as the page table
+	 * entries will be used for testing with random or garbage
+	 * values. These saved addresses will be used for freeing
+	 * page table pages.
+	 */
+	saved_p4dp = p4d_offset(pgdp, 0UL);
+	saved_pudp = pud_offset(p4dp, 0UL);
+	saved_pmdp = pmd_offset(pudp, 0UL);
+	saved_ptep = pte_offset_map(pmdp, 0UL);
+
+	pte_basic_tests(page, prot);
+	pmd_basic_tests(page, prot);
+	pud_basic_tests(page, prot);
+	p4d_basic_tests(page, prot);
+	pgd_basic_tests(page, prot);
+
+	pte_clear_tests(ptep);
+	pmd_clear_tests(pmdp);
+	pud_clear_tests(pudp);
+	p4d_clear_tests(p4dp);
+	pgd_clear_tests(pgdp);
+
+	pmd_populate_tests(mm, pmdp, (pgtable_t) page);
+	pud_populate_tests(mm, pudp, pmdp);
+	p4d_populate_tests(mm, p4dp, pudp);
+	pgd_populate_tests(mm, pgdp, p4dp);
+
+	p4d_free(mm, saved_p4dp);
+	pud_free(mm, saved_pudp);
+	pmd_free(mm, saved_pmdp);
+	pte_free(mm, (pgtable_t) virt_to_page(saved_ptep));
+
+	mm_dec_nr_puds(mm);
+	mm_dec_nr_pmds(mm);
+	mm_dec_nr_ptes(mm);
+	__mmdrop(mm);
+
+	free_mapped_page(page);
+	return 0;
+}
+
+static void __exit arch_pgtable_tests_exit(void) { }
+
+module_init(arch_pgtable_tests_init);
+module_exit(arch_pgtable_tests_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Anshuman Khandual <anshuman.khandual@arm.com>");
+MODULE_DESCRIPTION("Test archicture page table helpers");
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [RFC V2 0/1] mm/debug: Add tests for architecture exported page table helpers
  2019-08-09  7:33 [RFC V2 0/1] mm/debug: Add tests for architecture exported page table helpers Anshuman Khandual
  2019-08-09  7:33 ` [RFC V2 1/1] mm/pgtable/debug: Add test validating architecture " Anshuman Khandual
@ 2019-08-09 10:16 ` Matthew Wilcox
  2019-08-09 10:35   ` Anshuman Khandual
  2019-08-09 11:44   ` Mark Rutland
  1 sibling, 2 replies; 10+ messages in thread
From: Matthew Wilcox @ 2019-08-09 10:16 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: linux-mm, Andrew Morton, Vlastimil Babka, Greg Kroah-Hartman,
	Thomas Gleixner, Mike Rapoport, Jason Gunthorpe, Dan Williams,
	Peter Zijlstra, Michal Hocko, Mark Rutland, Mark Brown,
	Steven Price, Ard Biesheuvel, Masahiro Yamada, Kees Cook,
	Tetsuo Handa, Sri Krishna chowdary, Dave Hansen,
	Russell King - ARM Linux, Michael Ellerman, Paul Mackerras,
	Martin Schwidefsky, Heiko Carstens, David S. Miller,
	Vineet Gupta, James Hogan, Paul Burton, Ralf Baechle,
	linux-snps-arc, linux-mips, linux-arm-kernel, linux-ia64,
	linuxppc-dev, linux-s390, linux-sh, sparclinux, x86,
	linux-kernel

On Fri, Aug 09, 2019 at 01:03:17PM +0530, Anshuman Khandual wrote:
> Should alloc_gigantic_page() be made available as an interface for general
> use in the kernel. The test module here uses very similar implementation from
> HugeTLB to allocate a PUD aligned memory block. Similar for mm_alloc() which
> needs to be exported through a header.

Why are you allocating memory at all instead of just using some
known-to-exist PFNs like I suggested?

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RFC V2 0/1] mm/debug: Add tests for architecture exported page table helpers
  2019-08-09 10:16 ` [RFC V2 0/1] mm/debug: Add tests for architecture exported " Matthew Wilcox
@ 2019-08-09 10:35   ` Anshuman Khandual
  2019-08-09 13:52     ` Matthew Wilcox
  2019-08-09 11:44   ` Mark Rutland
  1 sibling, 1 reply; 10+ messages in thread
From: Anshuman Khandual @ 2019-08-09 10:35 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-mm, Andrew Morton, Vlastimil Babka, Greg Kroah-Hartman,
	Thomas Gleixner, Mike Rapoport, Jason Gunthorpe, Dan Williams,
	Peter Zijlstra, Michal Hocko, Mark Rutland, Mark Brown,
	Steven Price, Ard Biesheuvel, Masahiro Yamada, Kees Cook,
	Tetsuo Handa, Sri Krishna chowdary, Dave Hansen,
	Russell King - ARM Linux, Michael Ellerman, Paul Mackerras,
	Martin Schwidefsky, Heiko Carstens, David S. Miller,
	Vineet Gupta, James Hogan, Paul Burton, Ralf Baechle,
	linux-snps-arc, linux-mips, linux-arm-kernel, linux-ia64,
	linuxppc-dev, linux-s390, linux-sh, sparclinux, x86,
	linux-kernel



On 08/09/2019 03:46 PM, Matthew Wilcox wrote:
> On Fri, Aug 09, 2019 at 01:03:17PM +0530, Anshuman Khandual wrote:
>> Should alloc_gigantic_page() be made available as an interface for general
>> use in the kernel. The test module here uses very similar implementation from
>> HugeTLB to allocate a PUD aligned memory block. Similar for mm_alloc() which
>> needs to be exported through a header.
> 
> Why are you allocating memory at all instead of just using some
> known-to-exist PFNs like I suggested?

We needed the PFN to be PUD aligned for pfn_pud() and PMD aligned for mk_pmd().
Now, walking the kernel page table for a known symbol like kernel_init() as
you had suggested earlier, we might encounter page table page entries at PMD
and PUD which might not be PMD or PUD aligned respectively. It seemed to me
that the alignment requirement is applicable only for mk_pmd() and pfn_pud(),
which create large mappings at those levels, but that requirement does not
exist for page table pages pointing to the next level. Is that not correct?
Or am I missing something here?

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RFC V2 0/1] mm/debug: Add tests for architecture exported page table helpers
  2019-08-09 10:16 ` [RFC V2 0/1] mm/debug: Add tests for architecture exported " Matthew Wilcox
  2019-08-09 10:35   ` Anshuman Khandual
@ 2019-08-09 11:44   ` Mark Rutland
  2019-08-26  2:29     ` Anshuman Khandual
  1 sibling, 1 reply; 10+ messages in thread
From: Mark Rutland @ 2019-08-09 11:44 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Anshuman Khandual, linux-mm, Andrew Morton, Vlastimil Babka,
	Greg Kroah-Hartman, Thomas Gleixner, Mike Rapoport,
	Jason Gunthorpe, Dan Williams, Peter Zijlstra, Michal Hocko,
	Mark Brown, Steven Price, Ard Biesheuvel, Masahiro Yamada,
	Kees Cook, Tetsuo Handa, Sri Krishna chowdary, Dave Hansen,
	Russell King - ARM Linux, Michael Ellerman, Paul Mackerras,
	Martin Schwidefsky, Heiko Carstens, David S. Miller,
	Vineet Gupta, James Hogan, Paul Burton, Ralf Baechle,
	linux-snps-arc, linux-mips, linux-arm-kernel, linux-ia64,
	linuxppc-dev, linux-s390, linux-sh, sparclinux, x86,
	linux-kernel

On Fri, Aug 09, 2019 at 03:16:33AM -0700, Matthew Wilcox wrote:
> On Fri, Aug 09, 2019 at 01:03:17PM +0530, Anshuman Khandual wrote:
> > Should alloc_gigantic_page() be made available as an interface for general
> > use in the kernel. The test module here uses very similar implementation from
> > HugeTLB to allocate a PUD aligned memory block. Similar for mm_alloc() which
> > needs to be exported through a header.
> 
> Why are you allocating memory at all instead of just using some
> known-to-exist PFNs like I suggested?

IIUC the issue is that there aren't necessarily known-to-exist PFNs that
are sufficiently aligned -- they may not even exist.

For example, with 64K pages, a PMD covers 512M. The kernel image is
(generally) smaller than 512M, and will be mapped at page granularity.
In that case, any PMD entry for a kernel symbol address will point to
the PTE level table, and that will only necessarily be page-aligned, as
any P?D level table is only necessarily page-aligned.
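
(For context, on arm64 with 64K pages each table entry is 8 bytes, so a table
holds 64K / 8 = 8192 entries and PMD_SIZE = 8192 * 64K = 512M; a PMD aligned
block therefore means 512M of naturally aligned physical memory.)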

In the same configuration, you could have less than 512M of total
memory, and none of this memory is necessarily aligned to 512M. So
beyond the PTE level, I don't think you can guarantee a known-to-exist
valid PFN.

I also believe that synthetic PFNs could fail pfn_valid(), so that might
cause us pain too...

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RFC V2 0/1] mm/debug: Add tests for architecture exported page table helpers
  2019-08-09 10:35   ` Anshuman Khandual
@ 2019-08-09 13:52     ` Matthew Wilcox
  2019-08-26  2:37       ` Anshuman Khandual
  0 siblings, 1 reply; 10+ messages in thread
From: Matthew Wilcox @ 2019-08-09 13:52 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: linux-mm, Andrew Morton, Vlastimil Babka, Greg Kroah-Hartman,
	Thomas Gleixner, Mike Rapoport, Jason Gunthorpe, Dan Williams,
	Peter Zijlstra, Michal Hocko, Mark Rutland, Mark Brown,
	Steven Price, Ard Biesheuvel, Masahiro Yamada, Kees Cook,
	Tetsuo Handa, Sri Krishna chowdary, Dave Hansen,
	Russell King - ARM Linux, Michael Ellerman, Paul Mackerras,
	Martin Schwidefsky, Heiko Carstens, David S. Miller,
	Vineet Gupta, James Hogan, Paul Burton, Ralf Baechle,
	linux-snps-arc, linux-mips, linux-arm-kernel, linux-ia64,
	linuxppc-dev, linux-s390, linux-sh, sparclinux, x86,
	linux-kernel

On Fri, Aug 09, 2019 at 04:05:07PM +0530, Anshuman Khandual wrote:
> On 08/09/2019 03:46 PM, Matthew Wilcox wrote:
> > On Fri, Aug 09, 2019 at 01:03:17PM +0530, Anshuman Khandual wrote:
> >> Should alloc_gigantic_page() be made available as an interface for general
> >> use in the kernel. The test module here uses very similar implementation from
> >> HugeTLB to allocate a PUD aligned memory block. Similar for mm_alloc() which
> >> needs to be exported through a header.
> > 
> > Why are you allocating memory at all instead of just using some
> > known-to-exist PFNs like I suggested?
> 
> We needed PFN to be PUD aligned for pfn_pud() and PMD aligned for mk_pmd().
> Now walking the kernel page table for a known symbol like kernel_init()

I didn't say to walk the kernel page table.  I said to call virt_to_pfn()
for a known symbol like kernel_init().

> as you had suggested earlier we might encounter page table page entries at PMD
> and PUD which might not be PMD or PUD aligned respectively. It seemed to me
> that alignment requirement is applicable only for mk_pmd() and pfn_pud()
> which create large mappings at those levels but that requirement does not
> exist for page table pages pointing to next level. Is not that correct ? Or
> I am missing something here ?

Just clear the bottom bits off the PFN until you get a PMD or PUD aligned
PFN.  It's really not hard.
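
A minimal sketch of that suggestion (illustrative only; it assumes virt_to_pfn()
is available on the architecture, and it leaves open whether the masked PFNs
are actually backed by valid memory):

	pgprot_t prot = vm_get_page_prot(VM_READ | VM_WRITE | VM_EXEC);
	unsigned long pfn = virt_to_pfn(kernel_init);
	/* Mask off the low bits to get PMD/PUD aligned PFNs */
	unsigned long pmd_pfn = pfn & ~((PMD_SIZE >> PAGE_SHIFT) - 1UL);
	unsigned long pud_pfn = pfn & ~((PUD_SIZE >> PAGE_SHIFT) - 1UL);

	pud_t pud = pfn_pud(pud_pfn, prot);
	pmd_t pmd = mk_pmd(pfn_to_page(pmd_pfn), prot);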


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RFC V2 0/1] mm/debug: Add tests for architecture exported page table helpers
  2019-08-09 11:44   ` Mark Rutland
@ 2019-08-26  2:29     ` Anshuman Khandual
  0 siblings, 0 replies; 10+ messages in thread
From: Anshuman Khandual @ 2019-08-26  2:29 UTC (permalink / raw)
  To: Mark Rutland, Matthew Wilcox
  Cc: linux-mm, Andrew Morton, Vlastimil Babka, Greg Kroah-Hartman,
	Thomas Gleixner, Mike Rapoport, Jason Gunthorpe, Dan Williams,
	Peter Zijlstra, Michal Hocko, Mark Brown, Steven Price,
	Ard Biesheuvel, Masahiro Yamada, Kees Cook, Tetsuo Handa,
	Sri Krishna chowdary, Dave Hansen, Russell King - ARM Linux,
	Michael Ellerman, Paul Mackerras, Martin Schwidefsky,
	Heiko Carstens, David S. Miller, Vineet Gupta, James Hogan,
	Paul Burton, Ralf Baechle, linux-snps-arc, linux-mips,
	linux-arm-kernel, linux-ia64, linuxppc-dev, linux-s390, linux-sh,
	sparclinux, x86, linux-kernel



On 08/09/2019 05:14 PM, Mark Rutland wrote:
> On Fri, Aug 09, 2019 at 03:16:33AM -0700, Matthew Wilcox wrote:
>> On Fri, Aug 09, 2019 at 01:03:17PM +0530, Anshuman Khandual wrote:
>>> Should alloc_gigantic_page() be made available as an interface for general
>>> use in the kernel. The test module here uses very similar implementation from
>>> HugeTLB to allocate a PUD aligned memory block. Similar for mm_alloc() which
>>> needs to be exported through a header.
>>
>> Why are you allocating memory at all instead of just using some
>> known-to-exist PFNs like I suggested?
> 
> IIUC the issue is that there aren't necessarily known-to-exist PFNs that
> are sufficiently aligned -- they may not even exist.
> 
> For example, with 64K pages, a PMD covers 512M. The kernel image is
> (generally) smaller than 512M, and will be mapped at page granularity.
> In that case, any PMD entry for a kernel symbol address will point to
> the PTE level table, and that will only necessarily be page-aligned, as
> any P?D level table is only necessarily page-aligned.

Right.

> 
> In the same configuration, you could have less than 512M of total
> memory, and none of this memory is necessarily aligned to 512M. So
> beyond the PTE level, I don't think you can guarantee a known-to-exist
> valid PFN.

Right, a PMD aligned valid PFN might not even exist. This proposed patch,
which attempts to allocate a memory chunk with the required alignment, will
simply fail, indicating that such a valid PFN does not exist, and the
relevant tests will be skipped. At present this is done for a PUD aligned
allocation failure, but we can similarly skip the PMD relevant tests if a
PMD aligned memory chunk cannot be allocated.
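
A rough sketch of how that could look (the pmd_aligned flag and the final
single page fallback are assumptions on top of the posted patch, not part
of it):

	static bool pud_aligned;
	static bool pmd_aligned;

	static struct page *alloc_mapped_page(void)
	{
		gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO;
		struct page *page;

		/* Try a PUD_SIZE aligned block first, as in the posted patch */
		page = alloc_gigantic_page(&node_states[N_MEMORY], first_memory_node,
					   gfp_mask, get_order(PUD_SIZE));
		if (page) {
			pud_aligned = true;
			pmd_aligned = true;
			return page;
		}

		/* Buddy allocations are naturally aligned to their order */
		page = alloc_pages(gfp_mask, get_order(PMD_SIZE));
		if (page) {
			pmd_aligned = true;
			return page;
		}

		/* Neither alignment was possible; pmd and pud tests get skipped */
		return alloc_pages(gfp_mask, 0);
	}

free_mapped_page() would of course need the matching change to pick the right
release path.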

> 
> I also believe that synthetic PFNs could fail pfn_valid(), so that might
> cause us pain too...

Agreed. So do we have an agreement that it is better to use allocated memory
with the required alignment for the tests rather than known-to-exist PFNs?

- Anshuman

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RFC V2 0/1] mm/debug: Add tests for architecture exported page table helpers
  2019-08-09 13:52     ` Matthew Wilcox
@ 2019-08-26  2:37       ` Anshuman Khandual
  2019-08-26 13:13         ` Matthew Wilcox
  0 siblings, 1 reply; 10+ messages in thread
From: Anshuman Khandual @ 2019-08-26  2:37 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-mm, Andrew Morton, Vlastimil Babka, Greg Kroah-Hartman,
	Thomas Gleixner, Mike Rapoport, Jason Gunthorpe, Dan Williams,
	Peter Zijlstra, Michal Hocko, Mark Rutland, Mark Brown,
	Steven Price, Ard Biesheuvel, Masahiro Yamada, Kees Cook,
	Tetsuo Handa, Sri Krishna chowdary, Dave Hansen,
	Russell King - ARM Linux, Michael Ellerman, Paul Mackerras,
	Martin Schwidefsky, Heiko Carstens, David S. Miller,
	Vineet Gupta, James Hogan, Paul Burton, Ralf Baechle,
	linux-snps-arc, linux-mips, linux-arm-kernel, linux-ia64,
	linuxppc-dev, linux-s390, linux-sh, sparclinux, x86,
	linux-kernel



On 08/09/2019 07:22 PM, Matthew Wilcox wrote:
> On Fri, Aug 09, 2019 at 04:05:07PM +0530, Anshuman Khandual wrote:
>> On 08/09/2019 03:46 PM, Matthew Wilcox wrote:
>>> On Fri, Aug 09, 2019 at 01:03:17PM +0530, Anshuman Khandual wrote:
>>>> Should alloc_gigantic_page() be made available as an interface for general
>>>> use in the kernel. The test module here uses very similar implementation from
>>>> HugeTLB to allocate a PUD aligned memory block. Similar for mm_alloc() which
>>>> needs to be exported through a header.
>>>
>>> Why are you allocating memory at all instead of just using some
>>> known-to-exist PFNs like I suggested?
>>
>> We needed PFN to be PUD aligned for pfn_pud() and PMD aligned for mk_pmd().
>> Now walking the kernel page table for a known symbol like kernel_init()
> 
> I didn't say to walk the kernel page table.  I said to call virt_to_pfn()
> for a known symbol like kernel_init().
> 
>> as you had suggested earlier we might encounter page table page entries at PMD
>> and PUD which might not be PMD or PUD aligned respectively. It seemed to me
>> that alignment requirement is applicable only for mk_pmd() and pfn_pud()
>> which create large mappings at those levels but that requirement does not
>> exist for page table pages pointing to next level. Is not that correct ? Or
>> I am missing something here ?
> 
> Just clear the bottom bits off the PFN until you get a PMD or PUD aligned
> PFN.  It's really not hard.

As Mark pointed out earlier that might end up being just a synthetic PFN
which might not even exist on a given system.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RFC V2 0/1] mm/debug: Add tests for architecture exported page table helpers
  2019-08-26  2:37       ` Anshuman Khandual
@ 2019-08-26 13:13         ` Matthew Wilcox
  2019-08-28  9:22           ` Anshuman Khandual
  0 siblings, 1 reply; 10+ messages in thread
From: Matthew Wilcox @ 2019-08-26 13:13 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: linux-mm, Andrew Morton, Vlastimil Babka, Greg Kroah-Hartman,
	Thomas Gleixner, Mike Rapoport, Jason Gunthorpe, Dan Williams,
	Peter Zijlstra, Michal Hocko, Mark Rutland, Mark Brown,
	Steven Price, Ard Biesheuvel, Masahiro Yamada, Kees Cook,
	Tetsuo Handa, Sri Krishna chowdary, Dave Hansen,
	Russell King - ARM Linux, Michael Ellerman, Paul Mackerras,
	Martin Schwidefsky, Heiko Carstens, David S. Miller,
	Vineet Gupta, James Hogan, Paul Burton, Ralf Baechle,
	linux-snps-arc, linux-mips, linux-arm-kernel, linux-ia64,
	linuxppc-dev, linux-s390, linux-sh, sparclinux, x86,
	linux-kernel

On Mon, Aug 26, 2019 at 08:07:13AM +0530, Anshuman Khandual wrote:
> On 08/09/2019 07:22 PM, Matthew Wilcox wrote:
> > On Fri, Aug 09, 2019 at 04:05:07PM +0530, Anshuman Khandual wrote:
> >> On 08/09/2019 03:46 PM, Matthew Wilcox wrote:
> >>> On Fri, Aug 09, 2019 at 01:03:17PM +0530, Anshuman Khandual wrote:
> >>>> Should alloc_gigantic_page() be made available as an interface for general
> >>>> use in the kernel. The test module here uses very similar implementation from
> >>>> HugeTLB to allocate a PUD aligned memory block. Similar for mm_alloc() which
> >>>> needs to be exported through a header.
> >>>
> >>> Why are you allocating memory at all instead of just using some
> >>> known-to-exist PFNs like I suggested?
> >>
> >> We needed PFN to be PUD aligned for pfn_pud() and PMD aligned for mk_pmd().
> >> Now walking the kernel page table for a known symbol like kernel_init()
> > 
> > I didn't say to walk the kernel page table.  I said to call virt_to_pfn()
> > for a known symbol like kernel_init().
> > 
> >> as you had suggested earlier we might encounter page table page entries at PMD
> >> and PUD which might not be PMD or PUD aligned respectively. It seemed to me
> >> that alignment requirement is applicable only for mk_pmd() and pfn_pud()
> >> which create large mappings at those levels but that requirement does not
> >> exist for page table pages pointing to next level. Is not that correct ? Or
> >> I am missing something here ?
> > 
> > Just clear the bottom bits off the PFN until you get a PMD or PUD aligned
> > PFN.  It's really not hard.
> 
> As Mark pointed out earlier that might end up being just a synthetic PFN
> which might not even exist on a given system.

And why would that matter?

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RFC V2 0/1] mm/debug: Add tests for architecture exported page table helpers
  2019-08-26 13:13         ` Matthew Wilcox
@ 2019-08-28  9:22           ` Anshuman Khandual
  0 siblings, 0 replies; 10+ messages in thread
From: Anshuman Khandual @ 2019-08-28  9:22 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-mm, Andrew Morton, Vlastimil Babka, Greg Kroah-Hartman,
	Thomas Gleixner, Mike Rapoport, Jason Gunthorpe, Dan Williams,
	Peter Zijlstra, Michal Hocko, Mark Rutland, Mark Brown,
	Steven Price, Ard Biesheuvel, Masahiro Yamada, Kees Cook,
	Tetsuo Handa, Sri Krishna chowdary, Dave Hansen,
	Russell King - ARM Linux, Michael Ellerman, Paul Mackerras,
	Martin Schwidefsky, Heiko Carstens, David S. Miller,
	Vineet Gupta, James Hogan, Paul Burton, Ralf Baechle,
	linux-snps-arc, linux-mips, linux-arm-kernel, linux-ia64,
	linuxppc-dev, linux-s390, linux-sh, sparclinux, x86,
	linux-kernel



On 08/26/2019 06:43 PM, Matthew Wilcox wrote:
> On Mon, Aug 26, 2019 at 08:07:13AM +0530, Anshuman Khandual wrote:
>> On 08/09/2019 07:22 PM, Matthew Wilcox wrote:
>>> On Fri, Aug 09, 2019 at 04:05:07PM +0530, Anshuman Khandual wrote:
>>>> On 08/09/2019 03:46 PM, Matthew Wilcox wrote:
>>>>> On Fri, Aug 09, 2019 at 01:03:17PM +0530, Anshuman Khandual wrote:
>>>>>> Should alloc_gigantic_page() be made available as an interface for general
>>>>>> use in the kernel. The test module here uses very similar implementation from
>>>>>> HugeTLB to allocate a PUD aligned memory block. Similar for mm_alloc() which
>>>>>> needs to be exported through a header.
>>>>>
>>>>> Why are you allocating memory at all instead of just using some
>>>>> known-to-exist PFNs like I suggested?
>>>>
>>>> We needed PFN to be PUD aligned for pfn_pud() and PMD aligned for mk_pmd().
>>>> Now walking the kernel page table for a known symbol like kernel_init()
>>>
>>> I didn't say to walk the kernel page table.  I said to call virt_to_pfn()
>>> for a known symbol like kernel_init().
>>>
>>>> as you had suggested earlier we might encounter page table page entries at PMD
>>>> and PUD which might not be PMD or PUD aligned respectively. It seemed to me
>>>> that alignment requirement is applicable only for mk_pmd() and pfn_pud()
>>>> which create large mappings at those levels but that requirement does not
>>>> exist for page table pages pointing to next level. Is not that correct ? Or
>>>> I am missing something here ?
>>>
>>> Just clear the bottom bits off the PFN until you get a PMD or PUD aligned
>>> PFN.  It's really not hard.
>>
>> As Mark pointed out earlier that might end up being just a synthetic PFN
>> which might not even exist on a given system.
> 
> And why would that matter?
> 

To start with, the test uses struct page with mk_pte() and mk_pmd(), while a
pfn gets used in pfn_pud() during the pXX_basic_tests(). So we will not be
able to derive a valid struct page from a synthetic pfn. Also, if a synthetic
pfn is going to be used anyway, then why derive it from a real kernel symbol
like kernel_init()? Could one not just be made up with the right alignment?

Currently the test allocates an 'mm_struct' and the page table pages from real
memory, so why should it use a synthetic pfn while creating actual page table
entries? A couple of benefits of going with a synthetic pfn would be:

- It simplifies the test a bit by removing the PUD_SIZE allocation helpers
- It might enable the test to be run on systems without adequate memory

In the current proposal the allocation happens during boot, making it much more
likely to succeed than not, and when it fails the respective tests are skipped.

I am just wondering whether being able to run the complete set of tests on
smaller systems with less memory weighs a lot more in favor of going with a
synthetic pfn instead.

^ permalink raw reply	[flat|nested] 10+ messages in thread
