From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v2 1/4] mm: pagewalk: Fix walk for hugepage tables
To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, akpm@linux-foundation.org, dja@axtens.net
Cc: Oliver O'Halloran, linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
From: Steven Price
Message-ID: <1fdb0abe-b4b5-937c-0d9b-859a5cbb5726@arm.com>
Date: Mon, 19 Apr 2021 15:07:47 +0100
In-Reply-To: 
Content-Type: text/plain; charset=utf-8; format=flowed

On 19/04/2021 11:47, Christophe Leroy wrote:
> Pagewalk ignores hugepd entries and walks down the tables
> as if they were traditional entries, leading to incorrect results.
>
> Add walk_hugepd_range() and use it to walk hugepage tables.
>
> Signed-off-by: Christophe Leroy

Looks correct to me; sadly I don't have a suitable system to test it.
Reviewed-by: Steven Price

> ---
> v2:
> - Add a guard for NULL ops->pte_entry
> - Take mm->page_table_lock when walking hugepage table, as suggested by follow_huge_pd()
> ---
>  mm/pagewalk.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 53 insertions(+), 5 deletions(-)
>
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index e81640d9f177..9b3db11a4d1d 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -58,6 +58,45 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>  	return err;
>  }
>
> +#ifdef CONFIG_ARCH_HAS_HUGEPD
> +static int walk_hugepd_range(hugepd_t *phpd, unsigned long addr,
> +			     unsigned long end, struct mm_walk *walk, int pdshift)
> +{
> +	int err = 0;
> +	const struct mm_walk_ops *ops = walk->ops;
> +	int shift = hugepd_shift(*phpd);
> +	int page_size = 1 << shift;
> +
> +	if (!ops->pte_entry)
> +		return 0;
> +
> +	if (addr & (page_size - 1))
> +		return 0;
> +
> +	for (;;) {
> +		pte_t *pte;
> +
> +		spin_lock(&walk->mm->page_table_lock);
> +		pte = hugepte_offset(*phpd, addr, pdshift);
> +		err = ops->pte_entry(pte, addr, addr + page_size, walk);
> +		spin_unlock(&walk->mm->page_table_lock);
> +
> +		if (err)
> +			break;
> +		if (addr >= end - page_size)
> +			break;
> +		addr += page_size;
> +	}
> +	return err;
> +}
> +#else
> +static int walk_hugepd_range(hugepd_t *phpd, unsigned long addr,
> +			     unsigned long end, struct mm_walk *walk, int pdshift)
> +{
> +	return 0;
> +}
> +#endif
> +
>  static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
>  			  struct mm_walk *walk)
>  {
> @@ -108,7 +147,10 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
>  			goto again;
>  		}
>
> -		err = walk_pte_range(pmd, addr, next, walk);
> +		if (is_hugepd(__hugepd(pmd_val(*pmd))))
> +			err = walk_hugepd_range((hugepd_t *)pmd, addr, next, walk, PMD_SHIFT);
> +		else
> +			err = walk_pte_range(pmd, addr, next, walk);
>  		if (err)
>  			break;
>  	} while (pmd++, addr = next, addr != end);
> @@ -157,7 +199,10 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>  		if (pud_none(*pud))
>  			goto again;
>
> -		err = walk_pmd_range(pud, addr, next, walk);
> +		if (is_hugepd(__hugepd(pud_val(*pud))))
> +			err = walk_hugepd_range((hugepd_t *)pud, addr, next, walk, PUD_SHIFT);
> +		else
> +			err = walk_pmd_range(pud, addr, next, walk);
>  		if (err)
>  			break;
>  	} while (pud++, addr = next, addr != end);
> @@ -189,7 +234,9 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
>  			if (err)
>  				break;
>  		}
> -		if (ops->pud_entry || ops->pmd_entry || ops->pte_entry)
> +		if (is_hugepd(__hugepd(p4d_val(*p4d))))
> +			err = walk_hugepd_range((hugepd_t *)p4d, addr, next, walk, P4D_SHIFT);
> +		else if (ops->pud_entry || ops->pmd_entry || ops->pte_entry)
>  			err = walk_pud_range(p4d, addr, next, walk);
>  		if (err)
>  			break;
> @@ -224,8 +271,9 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
>  			if (err)
>  				break;
>  		}
> -		if (ops->p4d_entry || ops->pud_entry || ops->pmd_entry ||
> -		    ops->pte_entry)
> +		if (is_hugepd(__hugepd(pgd_val(*pgd))))
> +			err = walk_hugepd_range((hugepd_t *)pgd, addr, next, walk, PGDIR_SHIFT);
> +		else if (ops->p4d_entry || ops->pud_entry || ops->pmd_entry || ops->pte_entry)
>  			err = walk_p4d_range(pgd, addr, next, walk);
>  		if (err)
>  			break;
>