From: Catalin Marinas <catalin.marinas@arm.com>
To: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: linux-mm@kvack.org, akpm@linux-foundation.org,
	linux-kernel@vger.kernel.org, geert@linux-m68k.org,
	Christoph Hellwig <hch@infradead.org>,
	linuxppc-dev@lists.ozlabs.org,
	linux-arm-kernel@lists.infradead.org, sparclinux@vger.kernel.org,
	linux-mips@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-s390@vger.kernel.org, linux-riscv@lists.infradead.org,
	linux-alpha@vger.kernel.org, linux-sh@vger.kernel.org,
	linux-snps-arc@lists.infradead.org, linux-csky@vger.kernel.org,
	linux-xtensa@linux-xtensa.org, linux-parisc@vger.kernel.org,
	openrisc@lists.librecores.org, linux-um@lists.infradead.org,
	linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
	linux-arch@vger.kernel.org, Will Deacon <will@kernel.org>
Subject: Re: [PATCH V3 05/30] arm64/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
Date: Thu, 3 Mar 2022 15:28:50 +0000
Message-ID: <YiDessYDSt060Euc@arm.com>
In-Reply-To: <1646045273-9343-6-git-send-email-anshuman.khandual@arm.com>

Hi Anshuman,

On Mon, Feb 28, 2022 at 04:17:28PM +0530, Anshuman Khandual wrote:
> +static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
> +{
> +	switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
> +	case VM_NONE:
> +		return PAGE_NONE;
> +	case VM_READ:
> +	case VM_WRITE:
> +	case VM_WRITE | VM_READ:
> +		return PAGE_READONLY;
> +	case VM_EXEC:
> +		return PAGE_EXECONLY;
> +	case VM_EXEC | VM_READ:
> +	case VM_EXEC | VM_WRITE:
> +	case VM_EXEC | VM_WRITE | VM_READ:
> +		return PAGE_READONLY_EXEC;
> +	case VM_SHARED:
> +		return PAGE_NONE;
> +	case VM_SHARED | VM_READ:
> +		return PAGE_READONLY;
> +	case VM_SHARED | VM_WRITE:
> +	case VM_SHARED | VM_WRITE | VM_READ:
> +		return PAGE_SHARED;
> +	case VM_SHARED | VM_EXEC:
> +		return PAGE_EXECONLY;
> +	case VM_SHARED | VM_EXEC | VM_READ:
> +		return PAGE_READONLY_EXEC;
> +	case VM_SHARED | VM_EXEC | VM_WRITE:
> +	case VM_SHARED | VM_EXEC | VM_WRITE | VM_READ:
> +		return PAGE_SHARED_EXEC;
> +	default:
> +		BUILD_BUG();
> +	}
> +}

I'd say ack for trying to get rid of the extra arch_vm_get_page_prot() and
arch_filter_pgprot() but, TBH, I'm not so keen on the outcome. I haven't
built the code to see what's generated but I suspect it's no significant
improvement. As for the code readability, the arm64 parts don't look
much better either. The only advantage with this patch is that all
functions have been moved under arch/arm64.

I'd keep most architectures that don't have their own arch_vm_get_page_prot()
or arch_filter_pgprot() unchanged, using the generic protection_map[]
array. For architectures that need fancier stuff, add a
CONFIG_ARCH_HAS_VM_GET_PAGE_PROT (as you do) and allow them to define
vm_get_page_prot() while getting rid of arch_vm_get_page_prot() and
arch_filter_pgprot(). I think you could also duplicate protection_map[]
for architectures with their own vm_get_page_prot() (make it static) and
#ifdef it out in mm/mmap.c.
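
To make that concrete, the generic side in mm/mmap.c could end up looking
roughly like this once the arch hooks are gone (untested sketch, just to
illustrate the #ifdef arrangement; the __P/__S initialiser is the usual
one):

#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
/* Architectures with their own vm_get_page_prot() keep a private copy. */
pgprot_t protection_map[16] = {
	__P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
	__S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
};

pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
	return protection_map[vm_flags &
			(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];
}
EXPORT_SYMBOL(vm_get_page_prot);
#endif	/* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */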

If later you have more complex needs or a switch statement generates
better code, go for it, but for this series I'd keep things simple and only
focus on getting rid of arch_vm_get_page_prot() and
arch_filter_pgprot().

If I grepped correctly, there are only 4 architectures that have their own
arch_vm_get_page_prot() (arm64, powerpc, sparc, x86) and 2 that have their
own arch_filter_pgprot() (arm64, x86). Try to only change these for the time
being, together with the other generic mm cleanups you have in this
series. I think there are a couple more that touch protection_map[]
(arm, m68k). You can leave the generic protection_map[] global if the
arch does not select ARCH_HAS_VM_GET_PAGE_PROT.

> +static pgprot_t arm64_arch_filter_pgprot(pgprot_t prot)
> +{
> +	if (cpus_have_const_cap(ARM64_HAS_EPAN))
> +		return prot;
> +
> +	if (pgprot_val(prot) != pgprot_val(PAGE_EXECONLY))
> +		return prot;
> +
> +	return PAGE_READONLY_EXEC;
> +}
> +
> +static pgprot_t arm64_arch_vm_get_page_prot(unsigned long vm_flags)
> +{
> +	pteval_t prot = 0;
> +
> +	if (vm_flags & VM_ARM64_BTI)
> +		prot |= PTE_GP;
> +
> +	/*
> +	 * There are two conditions required for returning a Normal Tagged
> +	 * memory type: (1) the user requested it via PROT_MTE passed to
> +	 * mmap() or mprotect() and (2) the corresponding vma supports MTE. We
> +	 * register (1) as VM_MTE in the vma->vm_flags and (2) as
> +	 * VM_MTE_ALLOWED. Note that the latter can only be set during the
> +	 * mmap() call since mprotect() does not accept MAP_* flags.
> +	 * Checking for VM_MTE only is sufficient since arch_validate_flags()
> +	 * does not permit (VM_MTE & !VM_MTE_ALLOWED).
> +	 */
> +	if (vm_flags & VM_MTE)
> +		prot |= PTE_ATTRINDX(MT_NORMAL_TAGGED);
> +
> +	return __pgprot(prot);
> +}
> +
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> +{
> +	pgprot_t ret = __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
> +			pgprot_val(arm64_arch_vm_get_page_prot(vm_flags)));
> +
> +	return arm64_arch_filter_pgprot(ret);
> +}

If we kept the array, we can have everything in a single function
(untested and with my own comments for future changes):

pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
	pteval_t prot = pgprot_val(protection_map[vm_flags &
				(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);

	/*
	 * We could get rid of this test if we updated protection_map[]
	 * to turn exec-only into read-exec during boot.
	 */
	if (!cpus_have_const_cap(ARM64_HAS_EPAN) &&
	    prot == pgprot_val(PAGE_EXECONLY))
		prot = pgprot_val(PAGE_READONLY_EXEC);

	if (vm_flags & VM_ARM64_BTI)
		prot |= PTE_GP;

	/*
	 * We can get rid of the requirement for PROT_NORMAL to be 0
	 * since here we can mask out PTE_ATTRINDX_MASK.
	 */
	if (vm_flags & VM_MTE) {
		prot &= ~PTE_ATTRINDX_MASK;
		prot |= PTE_ATTRINDX(MT_NORMAL_TAGGED);
	}

	return __pgprot(prot);
}
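
For completeness, the boot-time alternative mentioned in the first comment
above could be something along these lines (again untested, and assuming
arm64 keeps its own writable copy of protection_map[] as suggested
earlier):

static int __init adjust_protection_map(void)
{
	/*
	 * Without EPAN there is no true exec-only, so downgrade the
	 * exec-only entries to read-exec once during boot. The EPAN
	 * check in vm_get_page_prot() can then go away.
	 */
	if (!cpus_have_const_cap(ARM64_HAS_EPAN)) {
		protection_map[VM_EXEC] = PAGE_READONLY_EXEC;
		protection_map[VM_EXEC | VM_SHARED] = PAGE_READONLY_EXEC;
	}

	return 0;
}
arch_initcall(adjust_protection_map);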

-- 
Catalin
