From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anshuman Khandual <anshuman.khandual@arm.com>
Date: Wed, 9 Mar 2022 17:01:02 +0530
Subject: Re: [PATCH V3 05/30] arm64/mm: Enable ARCH_HAS_VM_GET_PAGE_PROT
To: Catalin Marinas
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
 geert@linux-m68k.org, Christoph Hellwig, linuxppc-dev@lists.ozlabs.org,
 linux-arm-kernel@lists.infradead.org, sparclinux@vger.kernel.org,
 linux-mips@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
 linux-s390@vger.kernel.org, linux-riscv@lists.infradead.org,
 linux-alpha@vger.kernel.org, linux-sh@vger.kernel.org,
 linux-snps-arc@lists.infradead.org, linux-csky@vger.kernel.org,
 linux-xtensa@linux-xtensa.org, linux-parisc@vger.kernel.org,
 openrisc@lists.librecores.org, linux-um@lists.infradead.org,
 linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
 linux-arch@vger.kernel.org, Will Deacon
References: <1646045273-9343-1-git-send-email-anshuman.khandual@arm.com>
 <1646045273-9343-6-git-send-email-anshuman.khandual@arm.com>

On 3/3/22 20:58, Catalin Marinas wrote:
> Hi Anshuman,
>
> On Mon, Feb 28, 2022 at 04:17:28PM +0530, Anshuman Khandual wrote:
>> +static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
>> +{
>> +        switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
>> +        case VM_NONE:
>> +                return PAGE_NONE;
>> +        case VM_READ:
>> +        case VM_WRITE:
>> +        case VM_WRITE | VM_READ:
>> +                return PAGE_READONLY;
>> +        case VM_EXEC:
>> +                return PAGE_EXECONLY;
>> +        case VM_EXEC | VM_READ:
>> +        case VM_EXEC | VM_WRITE:
>> +        case VM_EXEC | VM_WRITE | VM_READ:
>> +                return PAGE_READONLY_EXEC;
>> +        case VM_SHARED:
>> +                return PAGE_NONE;
>> +        case VM_SHARED | VM_READ:
>> +                return PAGE_READONLY;
>> +        case VM_SHARED | VM_WRITE:
>> +        case VM_SHARED | VM_WRITE | VM_READ:
>> +                return PAGE_SHARED;
>> +        case VM_SHARED | VM_EXEC:
>> +                return PAGE_EXECONLY;
>> +        case VM_SHARED | VM_EXEC | VM_READ:
>> +                return PAGE_READONLY_EXEC;
>> +        case VM_SHARED | VM_EXEC | VM_WRITE:
>> +        case VM_SHARED | VM_EXEC | VM_WRITE | VM_READ:
>> +                return PAGE_SHARED_EXEC;
>> +        default:
>> +                BUILD_BUG();
>> +        }
>> +}
>
> I'd say ack for trying to get rid of the extra arch_vm_get_page_prot() and
> arch_filter_pgprot() but, TBH, I'm not so keen on the outcome. I haven't
> built the code to see what's generated but I suspect it's no significant
> improvement. As for the code readability, the arm64 parts don't look
> much better either. The only advantage with this patch is that all
> functions have been moved under arch/arm64.

Got it.

> I'd keep most architectures that don't have their own arch_vm_get_page_prot()
> or arch_filter_pgprot() unchanged and with a generic protection_map[]
> array. For architectures that need fancier stuff, add a
> CONFIG_ARCH_HAS_VM_GET_PAGE_PROT (as you do) and allow them to define
> vm_get_page_prot() while getting rid of arch_vm_get_page_prot() and
> arch_filter_pgprot(). I think you could also duplicate protection_map[]
> for architectures with their own vm_get_page_prot() (make it static) and
> #ifdef it out in mm/mmap.c.
>
> If later you have more complex needs or a switch statement generates
> better code, go for it, but for this series I'd keep things simple, only
> focus on getting rid of arch_vm_get_page_prot() and
> arch_filter_pgprot().

Got it.
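Something like the below in mm/mmap.c then, I suppose. This is only a rough
sketch of the suggested arrangement, not code from this series, and it
assumes all four platforms with the existing hooks get converted so the
generic path no longer needs arch_vm_get_page_prot()/arch_filter_pgprot():

#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
/* Generic mapping, for architectures without their own vm_get_page_prot() */
pgprot_t protection_map[16] __ro_after_init = {
        __P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
        __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
};

pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
        return protection_map[vm_flags &
                        (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];
}
EXPORT_SYMBOL(vm_get_page_prot);
#endif /* CONFIG_ARCH_HAS_VM_GET_PAGE_PROT */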
> If I grep'ed correctly, there are only 4 architectures that have their own
> arch_vm_get_page_prot() (arm64, powerpc, sparc, x86) and 2 that have their
> own arch_filter_pgprot() (arm64, x86). Try to only change these for the
> time being, together with the other generic mm cleanups you have in this
> series. I think there are a couple more that touch protection_map[]
> (arm, m68k). You can leave the generic protection_map[] global if the
> arch does not select ARCH_HAS_VM_GET_PAGE_PROT.

Okay, I will probably split the series into two parts.

- Drop arch_vm_get_page_prot() and arch_filter_pgprot() on the relevant
  platforms, i.e. arm64, powerpc, sparc and x86, via the new config
  ARCH_HAS_VM_GET_PAGE_PROT, keeping the generic protection_map[] since
  the platform __SXXX/__PXXX macros would still be around.

- Drop __SXXX/__PXXX across all platforms, either by just initializing
  protection_map[] early during boot in the platform (sketched below), or
  by moving both vm_get_page_prot() (via ARCH_HAS_VM_GET_PAGE_PROT) and
  the generic protection_map[] inside the platform.

There were some objections to the switch-case code in comparison with the
array-based table lookup.
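For the boot-time option in the second part, the idea would be roughly as
follows. This is a hypothetical sketch, not from the series: the helper
name and the chosen PAGE_* values are placeholders, with private write
mapped to read-only to preserve the usual COW semantics:

static pgprot_t protection_map[16] __ro_after_init;

static void __init setup_protection_map(void)
{
        unsigned long i;

        /* Default every combination to no access. */
        for (i = 0; i < ARRAY_SIZE(protection_map); i++)
                protection_map[i] = PAGE_NONE;

        /* Private mappings: write access becomes COW, i.e. read-only. */
        protection_map[VM_READ] = PAGE_READONLY;
        protection_map[VM_WRITE] = PAGE_READONLY;
        protection_map[VM_WRITE | VM_READ] = PAGE_READONLY;

        /* Shared mappings get real write access. */
        protection_map[VM_SHARED | VM_READ] = PAGE_READONLY;
        protection_map[VM_SHARED | VM_WRITE] = PAGE_SHARED;
        protection_map[VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED;

        /* The VM_EXEC combinations would be filled in the same way. */
}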
>> +static pgprot_t arm64_arch_filter_pgprot(pgprot_t prot)
>> +{
>> +        if (cpus_have_const_cap(ARM64_HAS_EPAN))
>> +                return prot;
>> +
>> +        if (pgprot_val(prot) != pgprot_val(PAGE_EXECONLY))
>> +                return prot;
>> +
>> +        return PAGE_READONLY_EXEC;
>> +}
>> +
>> +static pgprot_t arm64_arch_vm_get_page_prot(unsigned long vm_flags)
>> +{
>> +        pteval_t prot = 0;
>> +
>> +        if (vm_flags & VM_ARM64_BTI)
>> +                prot |= PTE_GP;
>> +
>> +        /*
>> +         * There are two conditions required for returning a Normal Tagged
>> +         * memory type: (1) the user requested it via PROT_MTE passed to
>> +         * mmap() or mprotect() and (2) the corresponding vma supports MTE. We
>> +         * register (1) as VM_MTE in the vma->vm_flags and (2) as
>> +         * VM_MTE_ALLOWED. Note that the latter can only be set during the
>> +         * mmap() call since mprotect() does not accept MAP_* flags.
>> +         * Checking for VM_MTE only is sufficient since arch_validate_flags()
>> +         * does not permit (VM_MTE & !VM_MTE_ALLOWED).
>> +         */
>> +        if (vm_flags & VM_MTE)
>> +                prot |= PTE_ATTRINDX(MT_NORMAL_TAGGED);
>> +
>> +        return __pgprot(prot);
>> +}
>> +
>> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
>> +{
>> +        pgprot_t ret = __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
>> +                        pgprot_val(arm64_arch_vm_get_page_prot(vm_flags)));
>> +
>> +        return arm64_arch_filter_pgprot(ret);
>> +}
>
> If we kept the array, we can have everything in a single function
> (untested and with my own comments for future changes):

Got it.

> pgprot_t vm_get_page_prot(unsigned long vm_flags)
> {
>         pgprot_t prot = __pgprot(pgprot_val(protection_map[vm_flags &
>                                 (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]));
>
>         /*
>          * We could get rid of this test if we updated protection_map[]
>          * to turn exec-only into read-exec during boot.
>          */
>         if (!cpus_have_const_cap(ARM64_HAS_EPAN) &&
>             pgprot_val(prot) == pgprot_val(PAGE_EXECONLY))
>                 prot = PAGE_READONLY_EXEC;
>
>         if (vm_flags & VM_ARM64_BTI)
>                 prot |= PTE_GP;
>
>         /*
>          * We can get rid of the requirement for PROT_NORMAL to be 0
>          * since here we can mask out PTE_ATTRINDX_MASK.
>          */
>         if (vm_flags & VM_MTE) {
>                 prot &= ~PTE_ATTRINDX_MASK;
>                 prot |= PTE_ATTRINDX(MT_NORMAL_TAGGED);
>         }
>
>         return prot;
> }
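One nit on the sketch: on arm64, pgprot_t is a struct wrapper around
pteval_t, so the in-place &=/|= updates would need pgprot_val()/__pgprot()
conversions. An equally untested variant that keeps everything in pteval_t
until the end:

pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
        pteval_t prot = pgprot_val(protection_map[vm_flags &
                        (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)]);

        /* Without EPAN, fall back from exec-only to read-exec. */
        if (!cpus_have_const_cap(ARM64_HAS_EPAN) &&
            prot == pgprot_val(PAGE_EXECONLY))
                prot = pgprot_val(PAGE_READONLY_EXEC);

        if (vm_flags & VM_ARM64_BTI)
                prot |= PTE_GP;

        /* Mask out the memory type before selecting Normal Tagged. */
        if (vm_flags & VM_MTE) {
                prot &= ~PTE_ATTRINDX_MASK;
                prot |= PTE_ATTRINDX(MT_NORMAL_TAGGED);
        }

        return __pgprot(prot);
}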