From: "Aneesh Kumar K.V"
To: Christophe Leroy, Michael Ellerman, Bartlomiej Zolnierkiewicz,
	Benjamin Herrenschmidt, Dominik Brodowski, Geoff Levand, Jens Axboe,
	Kumar Gala, Li Yang, Nicholas Piggin, Paul Mackerras, Scott Wood,
	aneesh.kumar@linux.vnet.ibm.com
Cc: linux-fbdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linux-block@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org
Subject: Re: Crash on FSL Book3E due to pte_pgprot()? (was Re: [PATCH v3 12/24] powerpc/mm: use pte helpers in generic code)
In-Reply-To: <236d72cd-e6d7-61e5-2c80-e4311e41b4f6@c-s.fr>
References: <343c844bbc5081d13ee4c9aa27ff3118f607e1cc.1539092112.git.christophe.leroy@c-s.fr>
	<87va61jsma.fsf@concordia.ellerman.id.au>
	<236d72cd-e6d7-61e5-2c80-e4311e41b4f6@c-s.fr>
Date: Wed, 17 Oct 2018 15:09:21 +0530
Message-Id: <87ftx4x67q.fsf@linux.ibm.com>

Christophe Leroy writes:

> On 10/17/2018 12:59 AM, Michael Ellerman wrote:
>> Christophe Leroy writes:
>>
>>> Get rid of platform specific _PAGE_XXXX in powerpc common code and
>>> use helpers instead.
>>>
>>> mm/dump_linuxpagetables.c will be handled separately
>>>
>>> Reviewed-by: Aneesh Kumar K.V
>>> Signed-off-by: Christophe Leroy
>>> ---
>>>  arch/powerpc/include/asm/book3s/32/pgtable.h |  9 +++------
>>>  arch/powerpc/include/asm/nohash/32/pgtable.h | 12 ++++++++----
>>>  arch/powerpc/include/asm/nohash/pgtable.h    |  3 +--
>>>  arch/powerpc/mm/pgtable.c                    | 21 +++++++--------------
>>>  arch/powerpc/mm/pgtable_32.c                 | 15 ++++++++-------
>>>  arch/powerpc/mm/pgtable_64.c                 | 14 +++++++-------
>>>  arch/powerpc/xmon/xmon.c                     | 12 +++++++-----
>>>  7 files changed, 41 insertions(+), 45 deletions(-)
>>
>> So turns out this patch *also* breaks my p5020ds :)
>>
>> Even with patch 4 merged, see next.
>>
>> It's the same crash:
>>
>>   pcieport 2000:00:00.0: AER enabled with IRQ 480
>>   Unable to handle kernel paging request for data at address 0x8000080080080000
>>   Faulting instruction address: 0xc0000000000192cc
>>   Oops: Kernel access of bad area, sig: 11 [#1]
>>   BE SMP NR_CPUS=24 CoreNet Generic
>>   Modules linked in:
>>   CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.19.0-rc3-gcc7x-g98c847323b3a #1
>>   NIP:  c0000000000192cc LR: c0000000005d0f9c CTR: 0000000000100000
>>   REGS: c0000000f31bb400 TRAP: 0300   Not tainted  (4.19.0-rc3-gcc7x-g98c847323b3a)
>>   MSR:  0000000080029000  CR: 24000224  XER: 00000000
>>   DEAR: 8000080080080000 ESR: 0000000000800000 IRQMASK: 0
>>   GPR00: c0000000005d0f84 c0000000f31bb688 c00000000117dc00 8000080080080000
>>   GPR04: 0000000000000000 0000000000400000 00000ffbff241010 c0000000f31b8000
>>   GPR08: 0000000000000000 0000000000100000 0000000000000000 c0000000012d4710
>>   GPR12: 0000000084000422 c0000000012ff000 c000000000002774 0000000000000000
>>   GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>>   GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>>   GPR24: 0000000000000000 0000000000000000 8000080080080000 c0000000ffff89a8
>>   GPR28: c0000000f3576400 c0000000f3576410 0000000000400000 c0000000012ecc98
>>   NIP [c0000000000192cc] ._memset_io+0x6c/0x9c
>>   LR [c0000000005d0f9c] .fsl_qman_probe+0x198/0x928
>>   Call Trace:
>>   [c0000000f31bb688] [c0000000005d0f84] .fsl_qman_probe+0x180/0x928 (unreliable)
>>   [c0000000f31bb728] [c0000000006432ec] .platform_drv_probe+0x60/0xb4
>>   [c0000000f31bb7a8] [c00000000064083c] .really_probe+0x294/0x35c
>>   [c0000000f31bb848] [c000000000640d2c] .__driver_attach+0x148/0x14c
>>   [c0000000f31bb8d8] [c00000000063d7dc] .bus_for_each_dev+0xb0/0x118
>>   [c0000000f31bb988] [c00000000063ff28] .driver_attach+0x34/0x4c
>>   [c0000000f31bba08] [c00000000063f648] .bus_add_driver+0x174/0x2bc
>>   [c0000000f31bbaa8] [c0000000006418bc] .driver_register+0x90/0x180
>>   [c0000000f31bbb28] [c000000000643270] .__platform_driver_register+0x60/0x7c
>>   [c0000000f31bbba8] [c000000000ee2a70] .fsl_qman_driver_init+0x24/0x38
>>   [c0000000f31bbc18] [c0000000000023fc] .do_one_initcall+0x64/0x2b8
>>   [c0000000f31bbcf8] [c000000000e9f480] .kernel_init_freeable+0x3a8/0x494
>>   [c0000000f31bbda8] [c000000000002798] .kernel_init+0x24/0x148
>>   [c0000000f31bbe28] [c0000000000009e8] .ret_from_kernel_thread+0x58/0x70
>>   Instruction dump:
>>   4e800020 2ba50003 40dd003c 3925fffc 5488402e 7929f082 7d082378 39290001
>>   550a801e 7d2903a6 7d4a4378 794a0020 <91430000> 38630004 4200fff8 70a50003
>>
>>
>> Comparing a working vs broken kernel, it seems to boil down to the fact
>> that we're filtering out more PTE bits now that we use pte_pgprot() in
>> ioremap_prot().
>>
>> With the old code we get:
>>
>>   ioremap_prot: addr 0xff800000 flags 0x241215
>>   ioremap_prot: addr 0xff800000 flags 0x241215
>>   map_kernel_page: ea 0x8000080080080000 pa 0xff800000 pte 0xff800241215
>>
>> And now we get:
>>
>>   ioremap_prot: addr 0xff800000 flags 0x241215 pte 0x241215
>>   ioremap_prot: addr 0xff800000 pte 0x241215
>>   ioremap_prot: addr 0xff800000 prot 0x241014
>>   map_kernel_page: ea 0x8000080080080000 pa 0xff800000 pte 0xff800241014
>>
>> So we're losing 0x201, which for nohash book3e is:
>>
>>   #define _PAGE_PRESENT	0x000001 /* software: pte contains a translation */
>>   #define _PAGE_PSIZE_4K	0x000200
>>
>> I haven't worked out if it's one or both of those that matter.
>
> At least missing _PAGE_PRESENT is an issue I believe.
>
>> The question is what's the right way to fix it? Should pte_pgprot() not
>> be filtering those bits out on book3e?
>
> I think we should not use pte_pgprot() for that then. What about the
> below fix?
>
> Christophe
>
> From: Christophe Leroy
> Date: Wed, 17 Oct 2018 05:56:25 +0000
> Subject: [PATCH] powerpc/mm: don't use pte_pgprot() in ioremap_prot()
>
> pte_pgprot() filters out some required flags like _PAGE_PRESENT.
>
> This patch replaces pte_pgprot() by __pgprot(pte_val())
> in ioremap_prot()
>
> Fixes: 26973fa5ac0e ("powerpc/mm: use pte helpers in generic code")
> Signed-off-by: Christophe Leroy
> ---
>  arch/powerpc/mm/pgtable_32.c | 3 ++-
>  arch/powerpc/mm/pgtable_64.c | 4 ++--
>  2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
> index 5877f5aa8f5d..a606e2f4937b 100644
> --- a/arch/powerpc/mm/pgtable_32.c
> +++ b/arch/powerpc/mm/pgtable_32.c
> @@ -122,7 +122,8 @@ ioremap_prot(phys_addr_t addr, unsigned long size, unsigned long flags)
>  	pte = pte_exprotect(pte);
>  	pte = pte_mkprivileged(pte);
>
> -	return __ioremap_caller(addr, size, pte_pgprot(pte), __builtin_return_address(0));
> +	return __ioremap_caller(addr, size, __pgprot(pte_val(pte)),
> +				__builtin_return_address(0));

That means we pass the pfn bits also to __ioremap_caller, right? How about

From b4d5e0f24f8482375b2dd86afaced26ebf716600 Mon Sep 17 00:00:00 2001
From: "Aneesh Kumar K.V"
Date: Wed, 17 Oct 2018 14:07:50 +0530
Subject: [PATCH] powerpc/mm: Make pte_pgprot return all pte bits

Other archs do the same and instead of adding required pte bits (which
got masked out) in __ioremap_at, make sure we filter only pfn bits out.
Fixes: 26973fa5ac0e ("powerpc/mm: use pte helpers in generic code")
Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/include/asm/book3s/32/pgtable.h       |  6 ------
 arch/powerpc/include/asm/book3s/64/pgtable.h       |  8 --------
 arch/powerpc/include/asm/nohash/32/pte-40x.h       |  5 -----
 arch/powerpc/include/asm/nohash/32/pte-44x.h       |  5 -----
 arch/powerpc/include/asm/nohash/32/pte-8xx.h       |  5 -----
 arch/powerpc/include/asm/nohash/32/pte-fsl-booke.h |  5 -----
 arch/powerpc/include/asm/nohash/pgtable.h          |  1 -
 arch/powerpc/include/asm/nohash/pte-book3e.h       |  5 -----
 arch/powerpc/include/asm/pgtable.h                 | 10 ++++++++++
 9 files changed, 10 insertions(+), 40 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index 0fbd4c642b51..e61dd3ae5bc0 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -48,11 +48,6 @@ static inline bool pte_user(pte_t pte)
 #define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HASHPTE | _PAGE_DIRTY | \
 			 _PAGE_ACCESSED | _PAGE_SPECIAL)
 
-/* Mask of bits returned by pte_pgprot() */
-#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
-			 _PAGE_WRITETHRU | _PAGE_USER | _PAGE_ACCESSED | \
-			 _PAGE_RW | _PAGE_DIRTY)
-
 /*
  * We define 2 sets of base prot bits, one for basic pages (ie,
  * cacheable kernel and user pages) and one for non cacheable
@@ -396,7 +391,6 @@ static inline int pte_young(pte_t pte)		{ return !!(pte_val(pte) & _PAGE_ACCESSE
 static inline int pte_special(pte_t pte)	{ return !!(pte_val(pte) & _PAGE_SPECIAL); }
 static inline int pte_none(pte_t pte)		{ return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
 static inline bool pte_exec(pte_t pte)		{ return true; }
-static inline pgprot_t pte_pgprot(pte_t pte)	{ return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }
 
 static inline int pte_present(pte_t pte)
 {
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index c34a161dc651..cb5dd4078d42 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -128,13 +128,6 @@
 #define H_PTE_PKEY  (H_PTE_PKEY_BIT0 | H_PTE_PKEY_BIT1 | H_PTE_PKEY_BIT2 | \
 		     H_PTE_PKEY_BIT3 | H_PTE_PKEY_BIT4)
 
-/*
- * Mask of bits returned by pte_pgprot()
- */
-#define PAGE_PROT_BITS  (_PAGE_SAO | _PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT | \
-			 H_PAGE_4K_PFN | _PAGE_PRIVILEGED | _PAGE_ACCESSED | \
-			 _PAGE_READ | _PAGE_WRITE | _PAGE_DIRTY | _PAGE_EXEC | \
-			 _PAGE_SOFT_DIRTY | H_PTE_PKEY)
 /*
  * We define 2 sets of base prot bits, one for basic pages (ie,
  * cacheable kernel and user pages) and one for non cacheable
@@ -496,7 +489,6 @@ static inline bool pte_exec(pte_t pte)
 	return !!(pte_raw(pte) & cpu_to_be64(_PAGE_EXEC));
 }
 
-static inline pgprot_t pte_pgprot(pte_t pte)	{ return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }
 
 #ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
 static inline bool pte_soft_dirty(pte_t pte)
diff --git a/arch/powerpc/include/asm/nohash/32/pte-40x.h b/arch/powerpc/include/asm/nohash/32/pte-40x.h
index 7a8b3c94592f..661f4599f2fc 100644
--- a/arch/powerpc/include/asm/nohash/32/pte-40x.h
+++ b/arch/powerpc/include/asm/nohash/32/pte-40x.h
@@ -73,11 +73,6 @@
 /* Until my rework is finished, 40x still needs atomic PTE updates */
 #define PTE_ATOMIC_UPDATES	1
 
-/* Mask of bits returned by pte_pgprot() */
-#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_NO_CACHE | \
-			 _PAGE_WRITETHRU | _PAGE_USER | _PAGE_ACCESSED | \
-			 _PAGE_RW | _PAGE_HWWRITE | _PAGE_DIRTY | _PAGE_EXEC)
-
 #define _PAGE_BASE_NC	(_PAGE_PRESENT | _PAGE_ACCESSED)
 #define _PAGE_BASE	(_PAGE_BASE_NC)
 
diff --git a/arch/powerpc/include/asm/nohash/32/pte-44x.h b/arch/powerpc/include/asm/nohash/32/pte-44x.h
index 8d6b268a986f..78bc304f750e 100644
--- a/arch/powerpc/include/asm/nohash/32/pte-44x.h
+++ b/arch/powerpc/include/asm/nohash/32/pte-44x.h
@@ -93,11 +93,6 @@
 #define _PAGE_KERNEL_RW		(_PAGE_DIRTY | _PAGE_RW)
 #define _PAGE_KERNEL_RWX	(_PAGE_DIRTY | _PAGE_RW | _PAGE_EXEC)
 
-/* Mask of bits returned by pte_pgprot() */
-#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
-			 _PAGE_WRITETHRU | _PAGE_USER | _PAGE_ACCESSED | \
-			 _PAGE_RW | _PAGE_DIRTY | _PAGE_EXEC)
-
 /* TODO: Add large page lowmem mapping support */
 #define _PMD_PRESENT	0
 #define _PMD_PRESENT_MASK (PAGE_MASK)
diff --git a/arch/powerpc/include/asm/nohash/32/pte-8xx.h b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
index 1c57efac089d..6bfe041ef59d 100644
--- a/arch/powerpc/include/asm/nohash/32/pte-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
@@ -55,11 +55,6 @@
 #define _PAGE_KERNEL_RW		(_PAGE_SH | _PAGE_DIRTY)
 #define _PAGE_KERNEL_RWX	(_PAGE_SH | _PAGE_DIRTY | _PAGE_EXEC)
 
-/* Mask of bits returned by pte_pgprot() */
-#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_NO_CACHE | \
-			 _PAGE_ACCESSED | _PAGE_RO | _PAGE_NA | \
-			 _PAGE_SH | _PAGE_DIRTY | _PAGE_EXEC)
-
 #define _PMD_PRESENT	0x0001
 #define _PMD_PRESENT_MASK	_PMD_PRESENT
 #define _PMD_BAD	0x0fd0
diff --git a/arch/powerpc/include/asm/nohash/32/pte-fsl-booke.h b/arch/powerpc/include/asm/nohash/32/pte-fsl-booke.h
index 1ecf60fe0909..0fc1bd42bb3e 100644
--- a/arch/powerpc/include/asm/nohash/32/pte-fsl-booke.h
+++ b/arch/powerpc/include/asm/nohash/32/pte-fsl-booke.h
@@ -39,11 +39,6 @@
 /* No page size encoding in the linux PTE */
 #define	_PAGE_PSIZE		0
 
-/* Mask of bits returned by pte_pgprot() */
-#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
-			 _PAGE_WRITETHRU | _PAGE_USER | _PAGE_ACCESSED | \
-			 _PAGE_RW | _PAGE_DIRTY | _PAGE_EXEC)
-
 #define _PMD_PRESENT	0
 #define _PMD_PRESENT_MASK (PAGE_MASK)
 #define _PMD_BAD	(~PAGE_MASK)
diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
index 04e9f0922ad4..70ff23974b59 100644
--- a/arch/powerpc/include/asm/nohash/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/pgtable.h
@@ -52,7 +52,6 @@ static inline int pte_none(pte_t pte)		{ return (pte_val(pte) & ~_PTE_NONE_MASK)
 static inline bool pte_hashpte(pte_t pte)	{ return false; }
 static inline bool pte_ci(pte_t pte)		{ return pte_val(pte) & _PAGE_NO_CACHE; }
 static inline bool pte_exec(pte_t pte)		{ return pte_val(pte) & _PAGE_EXEC; }
-static inline pgprot_t pte_pgprot(pte_t pte)	{ return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }
 
 #ifdef CONFIG_NUMA_BALANCING
 /*
diff --git a/arch/powerpc/include/asm/nohash/pte-book3e.h b/arch/powerpc/include/asm/nohash/pte-book3e.h
index 58eef8cb569d..f95ab6eaf441 100644
--- a/arch/powerpc/include/asm/nohash/pte-book3e.h
+++ b/arch/powerpc/include/asm/nohash/pte-book3e.h
@@ -82,11 +82,6 @@
 #define _PTE_NONE_MASK	0
 #endif
 
-/* Mask of bits returned by pte_pgprot() */
-#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
-			 _PAGE_WRITETHRU | _PAGE_USER | _PAGE_ACCESSED | \
-			 _PAGE_PRIVILEGED | _PAGE_RW | _PAGE_DIRTY | _PAGE_EXEC)
-
 /*
  * We define 2 sets of base prot bits, one for basic pages (ie,
  * cacheable kernel and user pages) and one for non cacheable
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index fb4b85bba110..9679b7519a35 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -46,6 +46,16 @@ struct mm_struct;
 /* Keep these as a macros to avoid include dependency mess */
 #define pte_page(x)		pfn_to_page(pte_pfn(x))
 #define mk_pte(page, pgprot)	pfn_pte(page_to_pfn(page), (pgprot))
+/*
+ * Select all bits except the pfn
+ */
+static inline pgprot_t pte_pgprot(pte_t pte)
+{
+	unsigned long pte_flags;
+
+	pte_flags = pte_val(pte) & ~PTE_RPN_MASK;
+	return __pgprot(pte_flags);
+}
 
 /*
  * ZERO_PAGE is a global shared page that is always zero: used
-- 
2.17.2
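
For reference, a minimal userspace sketch of the arithmetic discussed above.
The flag values are taken from Michael's debug output; OLD_PROT_MASK and
RPN_MASK are illustrative stand-ins, not the kernel's real PAGE_PROT_BITS or
PTE_RPN_MASK definitions:

#include <stdio.h>

/* Flag bits quoted above for nohash book3e. */
#define _PAGE_PRESENT	0x000001UL
#define _PAGE_PSIZE_4K	0x000200UL

/* Stand-in for the old book3e PAGE_PROT_BITS mask: it contained neither
 * _PAGE_PRESENT nor _PAGE_PSIZE_4K, so masking with it dropped them. */
#define OLD_PROT_MASK	0x241014UL

/* Stand-in for PTE_RPN_MASK: assume the pfn lives above bit 24 here, so
 * ~RPN_MASK keeps every flag bit. */
#define RPN_MASK	(~0xFFFFFFUL)

int main(void)
{
	unsigned long flags = 0x241215UL;	/* what ioremap_prot() starts with */
	unsigned long old = flags & OLD_PROT_MASK;	/* old pte_pgprot() behaviour */
	unsigned long new = flags & ~RPN_MASK;		/* new pte_pgprot() behaviour */

	printf("old: %#lx, lost %#lx (_PAGE_PRESENT | _PAGE_PSIZE_4K = %#lx)\n",
	       old, flags & ~OLD_PROT_MASK, _PAGE_PRESENT | _PAGE_PSIZE_4K);
	printf("new: %#lx, nothing lost\n", new);
	return 0;
}

This reproduces the 0x241215 -> 0x241014 transition seen in the log: the old
mask silently drops _PAGE_PRESENT, so the resulting mapping is invalid and the
first access in fsl_qman_probe() faults, while keeping everything except the
pfn bits preserves the flags intact.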