Date: Thu, 14 Apr 2022 14:10:06 -0700
To: mm-commits@vger.kernel.org, will@kernel.org, tglx@linutronix.de,
 paulus@samba.org, mpe@ellerman.id.au, mingo@redhat.com,
 khalid.aziz@oracle.com, hch@infradead.org, davem@davemloft.net,
 christophe.leroy@csgroup.eu, catalin.marinas@arm.com,
 anshuman.khandual@arm.com, akpm@linux-foundation.org
From: Andrew Morton <akpm@linux-foundation.org>
Subject: + powerpc-mm-enable-arch_has_vm_get_page_prot.patch added to -mm tree
Message-Id: <20220414211007.0B715C385A1@smtp.kernel.org>
Reply-To: linux-kernel@vger.kernel.org
List-ID: <mm-commits.vger.kernel.org>
X-Mailing-List: mm-commits@vger.kernel.org


The patch titled
     Subject: powerpc/mm: enable ARCH_HAS_VM_GET_PAGE_PROT
has been added to the -mm tree.  Its filename is
     powerpc-mm-enable-arch_has_vm_get_page_prot.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/powerpc-mm-enable-arch_has_vm_get_page_prot.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/powerpc-mm-enable-arch_has_vm_get_page_prot.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Anshuman Khandual <anshuman.khandual@arm.com>
Subject: powerpc/mm: enable ARCH_HAS_VM_GET_PAGE_PROT

This defines and exports a platform specific custom vm_get_page_prot() via
subscribing ARCH_HAS_VM_GET_PAGE_PROT.  While here, this also localizes
arch_vm_get_page_prot() as __vm_get_page_prot() and moves it near
vm_get_page_prot().
Link: https://lkml.kernel.org/r/20220414062125.609297-3-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

--- a/arch/powerpc/include/asm/mman.h~powerpc-mm-enable-arch_has_vm_get_page_prot
+++ a/arch/powerpc/include/asm/mman.h
@@ -24,18 +24,6 @@ static inline unsigned long arch_calc_vm
 }
 #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
 
-static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
-{
-#ifdef CONFIG_PPC_MEM_KEYS
-	return (vm_flags & VM_SAO) ?
-		__pgprot(_PAGE_SAO | vmflag_to_pte_pkey_bits(vm_flags)) :
-		__pgprot(0 | vmflag_to_pte_pkey_bits(vm_flags));
-#else
-	return (vm_flags & VM_SAO) ? __pgprot(_PAGE_SAO) : __pgprot(0);
-#endif
-}
-#define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
-
 static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
 {
 	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO))
--- a/arch/powerpc/Kconfig~powerpc-mm-enable-arch_has_vm_get_page_prot
+++ a/arch/powerpc/Kconfig
@@ -140,6 +140,7 @@ config PPC
 	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAS_UACCESS_FLUSHCACHE
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
+	select ARCH_HAS_VM_GET_PAGE_PROT	if PPC_BOOK3S_64
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_KEEP_MEMBLOCK
 	select ARCH_MIGHT_HAVE_PC_PARPORT
--- a/arch/powerpc/mm/book3s64/pgtable.c~powerpc-mm-enable-arch_has_vm_get_page_prot
+++ a/arch/powerpc/mm/book3s64/pgtable.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -549,3 +550,19 @@ unsigned long memremap_compat_align(void
 }
 EXPORT_SYMBOL_GPL(memremap_compat_align);
 #endif
+
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+	unsigned long prot = pgprot_val(protection_map[vm_flags &
+				(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);
+
+	if (vm_flags & VM_SAO)
+		prot |= _PAGE_SAO;
+
+#ifdef CONFIG_PPC_MEM_KEYS
+	prot |= vmflag_to_pte_pkey_bits(vm_flags);
+#endif
+
+	return __pgprot(prot);
+}
+EXPORT_SYMBOL(vm_get_page_prot);
_

Patches currently in -mm which might be from anshuman.khandual@arm.com are

mm-debug_vm_pgtable-drop-protection_map-usage.patch
mm-mmap-clarify-protection_map-indices.patch
mm-mmap-add-new-config-arch_has_vm_get_page_prot.patch
powerpc-mm-enable-arch_has_vm_get_page_prot.patch
arm64-mm-enable-arch_has_vm_get_page_prot.patch
sparc-mm-enable-arch_has_vm_get_page_prot.patch
mm-mmap-drop-arch_filter_pgprot.patch
mm-mmap-drop-arch_vm_get_page_pgprot.patch
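
For readers outside the kernel tree, the following is a minimal, stand-alone
sketch of the lookup the new powerpc vm_get_page_prot() in the pgtable.c hunk
performs: index a 16-entry protection table by the VM_READ|VM_WRITE|VM_EXEC|
VM_SHARED bits, then fold in the arch-specific VM_SAO bit.  Every constant and
the table contents below are mock values invented for illustration (the real
ones are the kernel's protection_map[], _PAGE_SAO and the pkey helpers); only
the control flow mirrors the patch, and the pkey step is omitted.

	/* Build with: cc -o sketch sketch.c */
	#include <stdio.h>

	#define VM_READ   0x1UL
	#define VM_WRITE  0x2UL
	#define VM_EXEC   0x4UL
	#define VM_SHARED 0x8UL
	#define VM_SAO    0x10UL	/* mock stand-in for the powerpc VM_SAO flag */

	#define MOCK_PAGE_SAO 0x100UL	/* mock stand-in for _PAGE_SAO */

	typedef unsigned long pgprot_t;

	/* Mock protection_map: one pgprot per r/w/x/shared combination. */
	static const pgprot_t protection_map[16] = {
		0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
		0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
	};

	static pgprot_t mock_vm_get_page_prot(unsigned long vm_flags)
	{
		/* Generic part: table lookup on the access-mode bits. */
		pgprot_t prot = protection_map[vm_flags &
				(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];

		/* Arch-specific part: fold strong-access-ordering into the pgprot. */
		if (vm_flags & VM_SAO)
			prot |= MOCK_PAGE_SAO;

		return prot;
	}

	int main(void)
	{
		unsigned long flags = VM_READ | VM_WRITE | VM_SAO;

		printf("pgprot for vm_flags 0x%lx: 0x%lx\n",
		       flags, mock_vm_get_page_prot(flags));
		return 0;
	}

Judging by the other patch filenames listed above
(mm-mmap-drop-arch_vm_get_page_pgprot.patch, mm-mmap-drop-arch_filter_pgprot.patch),
the point of giving each subscribing architecture its own vm_get_page_prot()
appears to be that the generic arch_vm_get_page_prot()/arch_filter_pgprot()
hooks can then be removed later in the series.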