Date: Tue, 8 Sep 2020 16:15:55 +0200
From: Alexander Gordeev
To: Christophe Leroy
Cc: Michael Ellerman, Gerald Schaefer, Jason Gunthorpe, John Hubbard,
	Peter Zijlstra, Dave Hansen, linux-mm, Paul Mackerras, linux-sparc,
	Claudio Imbrenda, Will Deacon, linux-arch, linux-s390, Vasily Gorbik,
	Richard Weinberger, linux-x86, Russell King, Christian Borntraeger,
	Ingo Molnar, Catalin Marinas, Andrey Ryabinin, Heiko Carstens,
	Arnd Bergmann, Jeff Dike, linux-um, Borislav Petkov, Andy Lutomirski,
	Thomas Gleixner, linux-arm, linux-power, LKML, Andrew Morton,
	Linus Torvalds, Mike Rapoport
Subject: Re: [RFC PATCH v2 2/3] mm: make pXd_addr_end() functions page-table entry aware
Message-ID: <20200908141554.GA20558@oc3871087118.ibm.com>
References: <20200907180058.64880-1-gerald.schaefer@linux.ibm.com>
 <20200907180058.64880-3-gerald.schaefer@linux.ibm.com>
 <31dfb3ed-a0cc-3024-d389-ab9bd19e881f@csgroup.eu>
 <20200908074638.GA19099@oc3871087118.ibm.com>
 <5d4f5546-afd0-0b8f-664d-700ae346b9ec@csgroup.eu>
In-Reply-To: <5d4f5546-afd0-0b8f-664d-700ae346b9ec@csgroup.eu>

On Tue, Sep 08, 2020 at 10:16:49AM +0200, Christophe Leroy wrote:
> >Yes, and also two more sources :/
> >	arch/powerpc/mm/kasan/8xx.c
> >	arch/powerpc/mm/kasan/kasan_init_32.c
> >
> >But these two are not quite obvious wrt pgd_addr_end() used
> >while traversing pmds. Could you please clarify a bit?
> >
> >
> >diff --git a/arch/powerpc/mm/kasan/8xx.c b/arch/powerpc/mm/kasan/8xx.c
> >index 2784224..89c5053 100644
> >--- a/arch/powerpc/mm/kasan/8xx.c
> >+++ b/arch/powerpc/mm/kasan/8xx.c
> >@@ -15,8 +15,8 @@
> > 	for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd += 2, block += SZ_8M) {
> > 		pte_basic_t *new;
> >-		k_next = pgd_addr_end(k_cur, k_end);
> >-		k_next = pgd_addr_end(k_next, k_end);
> >+		k_next = pmd_addr_end(k_cur, k_end);
> >+		k_next = pmd_addr_end(k_next, k_end);
>
> No, I don't think so.
> On powerpc32 we have only two levels, so pgd and pmd are more or
> less the same.
> But pmd_addr_end() as defined in include/asm-generic/pgtable-nopmd.h
> is a no-op, so I don't think it will work.
>
> It is likely that this function should iterate on pgd, then you get
> pmd = pmd_offset(pud_offset(p4d_offset(pgd)));

It looks like the code iterates over a single pmd table while using
pgd_addr_end() only to skip all the middle levels and bail out of
the loop.

I would be wary of switching from pmds to pgds, since we are trying
to minimize the impact (especially functional) and the rework does
not seem that obvious.
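(In case it helps the discussion: if I read patch 2/3 of this series
right, the generic fallback only grows an extra, ignored entry
argument -- roughly the sketch below, with s390, as far as I can tell,
being the only architecture that actually inspects the passed pgd. So
on powerpc32 the value we feed it should not change the computed
boundary.)

#ifndef pgd_addr_end
/* sketch of the entry-aware generic fallback: the pgd argument is unused */
#define pgd_addr_end(pgd, addr, end)					\
({	unsigned long __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK;	\
	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
})
#endif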
Assuming pmd and pgd are the same, would such an approach actually
work for now?

diff --git a/arch/powerpc/mm/kasan/8xx.c b/arch/powerpc/mm/kasan/8xx.c
index 2784224..94466cc 100644
--- a/arch/powerpc/mm/kasan/8xx.c
+++ b/arch/powerpc/mm/kasan/8xx.c
@@ -15,8 +15,8 @@
 	for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd += 2, block += SZ_8M) {
 		pte_basic_t *new;
 
-		k_next = pgd_addr_end(k_cur, k_end);
-		k_next = pgd_addr_end(k_next, k_end);
+		k_next = pgd_addr_end(__pgd(pmd_val(*pmd)), k_cur, k_end);
+		k_next = pgd_addr_end(__pgd(pmd_val(*(pmd + 1))), k_next, k_end);
 		if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte)
 			continue;
diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
index fb29404..c0bcd64 100644
--- a/arch/powerpc/mm/kasan/kasan_init_32.c
+++ b/arch/powerpc/mm/kasan/kasan_init_32.c
@@ -38,7 +38,7 @@ int __init kasan_init_shadow_page_tables(unsigned long k_start, unsigned long k_
 	for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
 		pte_t *new;
 
-		k_next = pgd_addr_end(k_cur, k_end);
+		k_next = pgd_addr_end(__pgd(pmd_val(*pmd)), k_cur, k_end);
 		if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte)
 			continue;
@@ -196,7 +196,7 @@ void __init kasan_early_init(void)
 	kasan_populate_pte(kasan_early_shadow_pte, PAGE_KERNEL);
 
 	do {
-		next = pgd_addr_end(addr, end);
+		next = pgd_addr_end(__pgd(pmd_val(*pmd)), addr, end);
 		pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
 	} while (pmd++, addr = next, addr != end);

Alternatively, we could pass an invalid pgd to keep the code structure
intact, but that of course is less nice.

Thanks!

> Christophe
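P.S. By "pass an invalid pgd" I mean keeping the loops exactly as they
are and handing the helper a dummy entry, something like the sketch
below for the kasan_early_init() case (__pgd(0) picked arbitrarily
here, on the assumption that the generic helper ignores the entry
anyway):

	do {
		/* dummy entry: only the address arithmetic decides the boundary */
		next = pgd_addr_end(__pgd(0), addr, end);
		pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
	} while (pmd++, addr = next, addr != end);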