From: Anshuman Khandual <anshuman.khandual@arm.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, hch@infradead.org,
	akpm@linux-foundation.org,
	Anshuman Khandual <anshuman.khandual@arm.com>
Subject: [RFC V1 01/31] mm/debug_vm_pgtable: Directly use vm_get_page_prot()
Date: Mon, 24 Jan 2022 18:26:38 +0530	[thread overview]
Message-ID: <1643029028-12710-2-git-send-email-anshuman.khandual@arm.com> (raw)
In-Reply-To: <1643029028-12710-1-git-send-email-anshuman.khandual@arm.com>

Although protection_map[] contains the platform defined page protection map,
vm_get_page_prot() is the right interface to call for the page protection of
a given vm_flags. Hence let's use it directly instead. This will also reduce
the dependency on protection_map[].
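
For context, the generic vm_get_page_prot() in mm/mmap.c is itself a thin
wrapper around protection_map[], masking vm_flags down to the access bits,
so vm_get_page_prot(idx) and protection_map[idx] agree for the indices used
here (modulo the arch hooks). Roughly, as a sketch of the current generic
helper rather than a verbatim quote of it:

	pgprot_t vm_get_page_prot(unsigned long vm_flags)
	{
		/* Only the VM_READ|VM_WRITE|VM_EXEC|VM_SHARED bits pick the entry */
		pgprot_t ret = __pgprot(pgprot_val(protection_map[vm_flags &
					(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)]) |
				pgprot_val(arch_vm_get_page_prot(vm_flags)));

		return arch_filter_pgprot(ret);
	}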

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 mm/debug_vm_pgtable.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index a7ac97c76762..07593eb79338 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -93,7 +93,7 @@ struct pgtable_debug_args {
 
 static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
 {
-	pgprot_t prot = protection_map[idx];
+	pgprot_t prot = vm_get_page_prot(idx);
 	pte_t pte = pfn_pte(args->fixed_pte_pfn, prot);
 	unsigned long val = idx, *ptr = &val;
 
@@ -101,7 +101,7 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
 
 	/*
 	 * This test needs to be executed after the given page table entry
-	 * is created with pfn_pte() to make sure that protection_map[idx]
+	 * is created with pfn_pte() to make sure that vm_get_page_prot(idx)
 	 * does not have the dirty bit enabled from the beginning. This is
 	 * important for platforms like arm64 where (!PTE_RDONLY) indicate
 	 * dirty bit being set.
@@ -188,7 +188,7 @@ static void __init pte_savedwrite_tests(struct pgtable_debug_args *args)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx)
 {
-	pgprot_t prot = protection_map[idx];
+	pgprot_t prot = vm_get_page_prot(idx);
 	unsigned long val = idx, *ptr = &val;
 	pmd_t pmd;
 
@@ -200,7 +200,7 @@ static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx)
 
 	/*
 	 * This test needs to be executed after the given page table entry
-	 * is created with pfn_pmd() to make sure that protection_map[idx]
+	 * is created with pfn_pmd() to make sure that vm_get_page_prot(idx)
 	 * does not have the dirty bit enabled from the beginning. This is
 	 * important for platforms like arm64 where (!PTE_RDONLY) indicate
 	 * dirty bit being set.
@@ -323,7 +323,7 @@ static void __init pmd_savedwrite_tests(struct pgtable_debug_args *args)
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static void __init pud_basic_tests(struct pgtable_debug_args *args, int idx)
 {
-	pgprot_t prot = protection_map[idx];
+	pgprot_t prot = vm_get_page_prot(idx);
 	unsigned long val = idx, *ptr = &val;
 	pud_t pud;
 
@@ -335,7 +335,7 @@ static void __init pud_basic_tests(struct pgtable_debug_args *args, int idx)
 
 	/*
 	 * This test needs to be executed after the given page table entry
-	 * is created with pfn_pud() to make sure that protection_map[idx]
+	 * is created with pfn_pud() to make sure that vm_get_page_prot(idx)
 	 * does not have the dirty bit enabled from the beginning. This is
 	 * important for platforms like arm64 where (!PTE_RDONLY) indicate
 	 * dirty bit being set.
@@ -1104,14 +1104,14 @@ static int __init init_args(struct pgtable_debug_args *args)
 	/*
 	 * Initialize the debugging data.
 	 *
-	 * protection_map[0] (or even protection_map[8]) will help create
-	 * page table entries with PROT_NONE permission as required for
-	 * pxx_protnone_tests().
+	 * vm_get_page_prot(VM_NONE) or vm_get_page_prot(VM_SHARED|VM_NONE)
+	 * will help create page table entries with PROT_NONE permission as
+	 * required for pxx_protnone_tests().
 	 */
 	memset(args, 0, sizeof(*args));
 	args->vaddr              = get_random_vaddr();
 	args->page_prot          = vm_get_page_prot(VMFLAGS);
-	args->page_prot_none     = protection_map[0];
+	args->page_prot_none     = vm_get_page_prot(VM_NONE);
 	args->is_contiguous_page = false;
 	args->pud_pfn            = ULONG_MAX;
 	args->pmd_pfn            = ULONG_MAX;
@@ -1246,12 +1246,15 @@ static int __init debug_vm_pgtable(void)
 		return ret;
 
 	/*
-	 * Iterate over the protection_map[] to make sure that all
+	 * Iterate over each possible vm_flags to make sure that all
 	 * the basic page table transformation validations just hold
 	 * true irrespective of the starting protection value for a
 	 * given page table entry.
+	 *
+	 * Protection based vm_flags combinations are always linear
+	 * and increasing i.e. VM_NONE .. [VM_SHARED|READ|WRITE|EXEC].
 	 */
-	for (idx = 0; idx < ARRAY_SIZE(protection_map); idx++) {
+	for (idx = VM_NONE; idx <= (VM_SHARED | VM_READ | VM_WRITE | VM_EXEC); idx++) {
 		pte_basic_tests(&args, idx);
 		pmd_basic_tests(&args, idx);
 		pud_basic_tests(&args, idx);
-- 
2.25.1


Thread overview: 70+ messages
2022-01-24 12:56 [RFC V1 00/31] mm/mmap: Drop protection_map[] and platform's __SXXX/__PXXX requirements Anshuman Khandual
2022-01-24 12:56 ` Anshuman Khandual [this message]
2022-01-26  7:15   ` [RFC V1 01/31] mm/debug_vm_pgtable: Directly use vm_get_page_prot() Christoph Hellwig
2022-01-27  4:16     ` Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 02/31] mm/mmap: Clarify protection_map[] indices Anshuman Khandual
2022-01-26  7:16   ` Christoph Hellwig
2022-01-27  4:07     ` Anshuman Khandual
2022-01-27 12:39   ` Mike Rapoport
2022-01-31  3:25     ` Anshuman Khandual
2022-02-05  9:10   ` Firo Yang
2022-02-09  3:23     ` Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 03/31] mm/mmap: Add new config ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 04/31] powerpc/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
2022-02-03 18:15   ` Mike Rapoport
2022-02-04  2:57     ` Anshuman Khandual
2022-02-04  4:44       ` Mike Rapoport
2022-01-24 12:56 ` [RFC V1 05/31] arm64/mm: " Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 06/31] sparc/mm: " Anshuman Khandual
2022-01-24 12:58   ` David Miller
2022-01-24 18:21   ` Khalid Aziz
2022-01-24 12:56 ` [RFC V1 07/31] mips/mm: " Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 08/31] m68k/mm: " Anshuman Khandual
2022-01-24 14:13   ` Andreas Schwab
2022-01-25  3:42     ` Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 09/31] arm/mm: " Anshuman Khandual
2022-01-24 17:06   ` Russell King (Oracle)
2022-01-25  3:36     ` Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 10/31] x86/mm: " Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 11/31] mm/mmap: Drop protection_map[] Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 12/31] mm/mmap: Drop arch_filter_pgprot() Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 13/31] mm/mmap: Drop arch_vm_get_page_pgprot() Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 14/31] s390/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 15/31] riscv/mm: " Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 16/31] alpha/mm: " Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 17/31] sh/mm: " Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 18/31] arc/mm: " Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 19/31] csky/mm: " Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 20/31] extensa/mm: " Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 21/31] parisc/mm: " Anshuman Khandual
2022-01-25 16:53   ` Rolf Eike Beer
2022-01-27  4:06     ` Anshuman Khandual
2022-01-24 12:56 ` [RFC V1 22/31] openrisc/mm: " Anshuman Khandual
2022-02-05  6:58   ` Stafford Horne
2022-01-24 12:57 ` [RFC V1 23/31] um/mm: " Anshuman Khandual
2022-01-24 12:57 ` [RFC V1 24/31] microblaze/mm: " Anshuman Khandual
2022-01-24 12:57 ` [RFC V1 25/31] nios2/mm: " Anshuman Khandual
2022-01-26 16:38   ` Dinh Nguyen
2022-01-24 12:57 ` [RFC V1 26/31] hexagon/mm: " Anshuman Khandual
2022-01-24 12:57 ` [RFC V1 27/31] nds32/mm: " Anshuman Khandual
2022-01-24 12:57 ` [RFC V1 28/31] ia64/mm: " Anshuman Khandual
2022-01-24 12:57 ` [RFC V1 29/31] mm/mmap: Drop generic vm_get_page_prot() Anshuman Khandual
2022-01-24 12:57 ` [RFC V1 30/31] mm/mmap: Drop ARCH_HAS_VM_GET_PAGE_PROT Anshuman Khandual
2022-01-24 12:57 ` [RFC V1 31/31] mm/mmap: Define macros for vm_flags access permission combinations Anshuman Khandual
2022-02-03  5:24   ` Anshuman Khandual
2022-01-27 12:38 ` [RFC V1 00/31] mm/mmap: Drop protection_map[] and platform's __SXXX/__PXXX requirements Mike Rapoport
2022-01-31  3:35   ` Anshuman Khandual
