From: Anshuman Khandual <anshuman.khandual@arm.com>
To: Christophe Leroy <christophe.leroy@csgroup.eu>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Cc: "hch@infradead.org" <hch@infradead.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH V3 1/2] mm/mmap: Restrict generic protection_map[] array visibility
Date: Tue, 21 Jun 2022 15:14:13 +0530
Message-ID: <8c143d11-2371-23db-0476-0df569faaff9@arm.com>
In-Reply-To: <2ade2595-d03b-6027-f93a-fa8f7c39654e@csgroup.eu>



On 6/20/22 12:11, Christophe Leroy wrote:
> 
> 
> On 20/06/2022 at 07:16, Anshuman Khandual wrote:
>>
>>
>> On 6/16/22 11:05, Christophe Leroy wrote:
>>>
>>> On 16/06/2022 at 06:09, Anshuman Khandual wrote:
>>>> Restrict the generic protection_map[] array's visibility to platforms that
>>>> do not enable ARCH_HAS_VM_GET_PAGE_PROT. Platforms that define their own
>>>> vm_get_page_prot() and enable ARCH_HAS_VM_GET_PAGE_PROT can instead keep a
>>>> private static protection_map[] that still implements an array lookup.
>>>> These private protection_map[] arrays can be built without the
>>>> __PXXX/__SXXX macros, making those macros redundant there, so they are
>>>> dropped as well.
>>>>
>>>> But platforms that do not define a custom vm_get_page_prot() and do not
>>>> enable ARCH_HAS_VM_GET_PAGE_PROT will still have to provide the
>>>> __PXXX/__SXXX macros.
>>>>
>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>> Cc: linux-mm@kvack.org
>>>> Cc: linux-kernel@vger.kernel.org
>>>> Acked-by: Christoph Hellwig <hch@lst.de>
>>>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>>>> ---
>>>>    arch/arm64/include/asm/pgtable-prot.h | 18 ------------------
>>>>    arch/arm64/mm/mmap.c                  | 21 +++++++++++++++++++++
>>>>    arch/powerpc/include/asm/pgtable.h    |  2 ++
>>>>    arch/powerpc/mm/book3s64/pgtable.c    | 20 ++++++++++++++++++++
>>>>    arch/sparc/include/asm/pgtable_64.h   | 19 -------------------
>>>>    arch/sparc/mm/init_64.c               |  3 +++
>>>>    arch/x86/include/asm/pgtable_types.h  | 19 -------------------
>>>>    arch/x86/mm/pgprot.c                  | 19 +++++++++++++++++++
>>>>    include/linux/mm.h                    |  2 ++
>>>>    mm/mmap.c                             |  2 +-
>>>>    10 files changed, 68 insertions(+), 57 deletions(-)
>>>>
>>>> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
>>>> index d564d0ecd4cd..8ed2a80c896e 100644
>>>> --- a/arch/powerpc/include/asm/pgtable.h
>>>> +++ b/arch/powerpc/include/asm/pgtable.h
>>>> @@ -21,6 +21,7 @@ struct mm_struct;
>>>>    #endif /* !CONFIG_PPC_BOOK3S */
>>>>    
>>>>    /* Note due to the way vm flags are laid out, the bits are XWR */
>>>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>>> This ifdef is not necessary for now; it doesn't matter if __P000 etc.
>>> still exist even though they are unused.
>>>
>>>>    #define __P000	PAGE_NONE
>>>>    #define __P001	PAGE_READONLY
>>>>    #define __P010	PAGE_COPY
>>>> @@ -38,6 +39,7 @@ struct mm_struct;
>>>>    #define __S101	PAGE_READONLY_X
>>>>    #define __S110	PAGE_SHARED_X
>>>>    #define __S111	PAGE_SHARED_X
>>>> +#endif
>>>>    
>>>>    #ifndef __ASSEMBLY__
>>>>    
>>>> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
>>>> index 7b9966402b25..d3b019b95c1d 100644
>>>> --- a/arch/powerpc/mm/book3s64/pgtable.c
>>>> +++ b/arch/powerpc/mm/book3s64/pgtable.c
>>>> @@ -551,6 +551,26 @@ unsigned long memremap_compat_align(void)
>>>>    EXPORT_SYMBOL_GPL(memremap_compat_align);
>>>>    #endif
>>>>    
>>>> +/* Note due to the way vm flags are laid out, the bits are XWR */
>>>> +static const pgprot_t protection_map[16] = {
>>>> +	[VM_NONE]					= PAGE_NONE,
>>>> +	[VM_READ]					= PAGE_READONLY,
>>>> +	[VM_WRITE]					= PAGE_COPY,
>>>> +	[VM_WRITE | VM_READ]				= PAGE_COPY,
>>>> +	[VM_EXEC]					= PAGE_READONLY_X,
>>>> +	[VM_EXEC | VM_READ]				= PAGE_READONLY_X,
>>>> +	[VM_EXEC | VM_WRITE]				= PAGE_COPY_X,
>>>> +	[VM_EXEC | VM_WRITE | VM_READ]			= PAGE_COPY_X,
>>>> +	[VM_SHARED]					= PAGE_NONE,
>>>> +	[VM_SHARED | VM_READ]				= PAGE_READONLY,
>>>> +	[VM_SHARED | VM_WRITE]				= PAGE_SHARED,
>>>> +	[VM_SHARED | VM_WRITE | VM_READ]		= PAGE_SHARED,
>>>> +	[VM_SHARED | VM_EXEC]				= PAGE_READONLY_X,
>>>> +	[VM_SHARED | VM_EXEC | VM_READ]			= PAGE_READONLY_X,
>>>> +	[VM_SHARED | VM_EXEC | VM_WRITE]		= PAGE_SHARED_X,
>>>> +	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= PAGE_SHARED_X
>>>> +};
>>>> +
>>> There is not much point in first adding that here and then moving it
>>> elsewhere in the second patch.
>>>
>>> I think with my suggestion to use #ifdef __P000 as a guard, the powerpc
>>> changes could go in a single patch.
>>>
>>>>    pgprot_t vm_get_page_prot(unsigned long vm_flags)
>>>>    {
>>>>    	unsigned long prot = pgprot_val(protection_map[vm_flags &
>>>> diff --git a/mm/mmap.c b/mm/mmap.c
>>>> index 61e6135c54ef..e66920414945 100644
>>>> --- a/mm/mmap.c
>>>> +++ b/mm/mmap.c
>>>> @@ -101,6 +101,7 @@ static void unmap_region(struct mm_struct *mm,
>>>>     *								w: (no) no
>>>>     *								x: (yes) yes
>>>>     */
>>>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>>> You should use #ifdef __P000 instead; that way you could migrate
>>> architectures one by one.
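>>>
>>> As a minimal sketch of that guard in mm/mmap.c, assuming the table
>>> keeps its current __ro_after_init definition:
>>>
>>> 	/*
>>> 	 * Generic table, compiled only while the architecture still
>>> 	 * provides the legacy __PXXX/__SXXX macros; once an arch is
>>> 	 * converted it stops defining them and drops out of this block.
>>> 	 */
>>> 	#ifdef __P000
>>> 	pgprot_t protection_map[16] __ro_after_init = {
>>> 		__P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
>>> 		__S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
>>> 	};
>>> 	#endif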
>>
>> If vm_get_page_prot() gets moved into all platforms, I wonder what would
>> be the preferred way to organize this patch series:
>>
>> 1. Move protection_map[] inside platforms with ARCH_HAS_VM_GET_PAGE_PROT (current patch 1)
>> 2. Convert the remaining platforms to ARCH_HAS_VM_GET_PAGE_PROT one after the other
>> 3. Drop ARCH_HAS_VM_GET_PAGE_PROT completely
>>
>> Using "#ifdef __P000" to wrap protection_map[] will leave two different
>> #ifdefs in flight, i.e. __P000 and ARCH_HAS_VM_GET_PAGE_PROT, in the
>> generic mmap code until both eventually get dropped. But using
>> "#ifdef __P000" does enable splitting the first patch into multiple
>> changes, one per platform.
> 
> From previous discussions and based on Christoph's suggestion, I guess
> we now aim at getting vm_get_page_prot() moved into all platforms
> together with protection_map[]. Therefore the use of #ifdef __P000 could
> be very temporary, at the beginning of the series:
> 1. Guard the generic protection_map[] with #ifdef __P000
> 2. Move protection_map[] into the architecture and drop __Pxxx/__Sxxx for arm64
> 3. Same for sparc
> 4. Same for x86
> 5. Convert all of powerpc to ARCH_HAS_VM_GET_PAGE_PROT, move
> protection_map[] into the architecture and drop __Pxxx/__Sxxx
> 6. Replace #ifdef __P000 with #ifdef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
> 7. Convert all remaining platforms to CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
> one by one (but keep a protection_map[] table, don't use switch/case;
> see the sketch below)
> 8. Drop ARCH_HAS_VM_GET_PAGE_PROT completely
> 
> Eventually you can squash step 6 into step 8.
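> 
> As a sketch of what a converted platform keeps after step 7, the table
> being the same kind of private static array as the powerpc one quoted
> above, filled with each architecture's own page protections:
> 
> 	pgprot_t vm_get_page_prot(unsigned long vm_flags)
> 	{
> 		/* plain table lookup on the low rwx/shared vm_flags bits */
> 		return protection_map[vm_flags &
> 				(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];
> 	}
> 	EXPORT_SYMBOL(vm_get_page_prot);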

Keeping the individual platform changes in separate patches will make
the series cleaner and also much easier to review. The flow explained
above sounds good to me; I will work on these changes.
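
For each remaining platform the conversion mechanics would be the same
(a rough sketch; <arch> below is a placeholder for whichever platform is
being converted): select the option from the arch Kconfig, carry the
private protection_map[] plus the trivial vm_get_page_prot() lookup, and
delete the __PXXX/__SXXX definitions:

	# arch/<arch>/Kconfig (sketch)
	config <ARCH>
		select ARCH_HAS_VM_GET_PAGE_PROT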
