Date: Tue, 15 Sep 2020 20:18:23 +0300
From: Mike Rapoport
To: Vasily Gorbik
Cc: Jason Gunthorpe, John Hubbard, Linus Torvalds, Gerald Schaefer,
    Alexander Gordeev, Peter Zijlstra, Dave Hansen, LKML, linux-mm,
    linux-arch, Andrew Morton, Russell King, Catalin Marinas, Will Deacon,
    Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, Jeff Dike,
    Richard Weinberger, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, Arnd Bergmann, Andrey Ryabinin, linux-x86, linux-arm,
    linux-power, linux-sparc, linux-um, linux-s390, Heiko Carstens,
    Christian Borntraeger, Claudio Imbrenda
Subject: Re: [PATCH v2] mm/gup: fix gup_fast with dynamic page table folding
Message-ID: <20200915171823.GJ2142832@kernel.org>
References: <20200911200511.GC1221970@ziepe.ca>

On Fri, Sep 11, 2020 at 10:36:43PM +0200, Vasily Gorbik wrote:
> Currently, to make sure that every page table entry is read just once,
> the gup_fast walks perform READ_ONCE and pass the pXd value down to the
> next gup_pXd_range function by value, e.g.:
> 
> static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
>                          unsigned int flags, struct page **pages, int *nr)
> ...
>         pudp = pud_offset(&p4d, addr);
> 
> This function passes a reference to that local value copy to pXd_offset,
> and might get the very same pointer in return. This happens when the
> level is folded (on most arches), and that pointer should not be iterated.
> 
> On s390, each task might have a different 5-, 4- or 3-level address
> translation, and hence different levels folded, so the logic is more
> complex, and a non-iterable pointer to a local copy leads to severe
> problems.
> 
> Here is an example of what happens with gup_fast on s390, for a task
> with 3-level paging, crossing a 2 GB pud boundary:
> 
> // addr = 0x1007ffff000, end = 0x10080001000
> static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
>                          unsigned int flags, struct page **pages, int *nr)
> {
>         unsigned long next;
>         pud_t *pudp;
> 
>         // pud_offset returns &p4d itself (a pointer to a value on stack)
>         pudp = pud_offset(&p4d, addr);
>         do {
>                 // on the second iteration this reads a "random" stack value
>                 pud_t pud = READ_ONCE(*pudp);
> 
>                 // next = 0x10080000000, due to PUD_SIZE/MASK != PGDIR_SIZE/MASK on s390
>                 next = pud_addr_end(addr, end);
>                 ...
>         } while (pudp++, addr = next, addr != end); // pudp++ iterating over stack
> 
>         return 1;
> }
> 
> This happens since s390 moved to the common gup code with
> commit d1874a0c2805 ("s390/mm: make the pxd_offset functions more robust")
> and commit 1a42010cdc26 ("s390/mm: convert to the generic
> get_user_pages_fast code"). s390 tried to mimic static level folding by
> changing the pXd_offset primitives to always calculate the top-level page
> table offset in pgd_offset and to just return the value passed in when
> pXd_offset has to act as folded.
> 
> What is crucial for gup_fast, and what has been overlooked, is that
> PxD_SIZE/MASK, and thus pXd_addr_end, should also change
> correspondingly. And the latter is not possible with dynamic folding.
> 
> To fix the issue, in addition to the pXd values, pass the original
> pXdp pointers down to the gup_pXd_range functions, and introduce
> pXd_offset_lockless helpers, which take an additional pXd
> entry value parameter.
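For readers who do not have the s390 folding rules in their head, the
failure mode described above can be reproduced as a tiny user-space
model. This is only a sketch with made-up names (pud_offset_folded, a
single pud_t typedef), not kernel code; it mimics the pattern of taking
the address of a local READ_ONCE copy and then incrementing the pointer
that a folded offset helper hands back:

/*
 * Minimal user-space model of the failure mode (hypothetical names,
 * not kernel code).  A "folded" offset helper just returns the pointer
 * it was given, so when the caller hands it the address of a local
 * copy, the subsequent pudp++ walks the caller's stack.
 */
#include <stdio.h>

typedef unsigned long pud_t;

/* Folded level: return the incoming pointer, as pXd_offset does when
 * the level is not present. */
static pud_t *pud_offset_folded(pud_t *p4dp, unsigned long addr)
{
	(void)addr;
	return p4dp;
}

int main(void)
{
	pud_t p4d_local = 0xAAAA;	/* local copy on the stack */
	pud_t *pudp = pud_offset_folded(&p4d_local, 0);

	/* The first read sees the copy; after pudp++ the second read is
	 * whatever happens to sit next to it on the stack. */
	for (int i = 0; i < 2; i++, pudp++)
		printf("iteration %d reads %#lx\n", i, *pudp);

	return 0;
}

The second iteration reads whatever sits next to p4d_local on the
stack, i.e. exactly the "random" stack value the commit message
describes.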
> This has already been discussed in
> https://lkml.kernel.org/r/20190418100218.0a4afd51@mschwideX1
> 
> Cc: # 5.2+
> Fixes: 1a42010cdc26 ("s390/mm: convert to the generic get_user_pages_fast code")
> Reviewed-by: Gerald Schaefer
> Reviewed-by: Alexander Gordeev
> Signed-off-by: Vasily Gorbik

Reviewed-by: Mike Rapoport

> ---
> v2: added brackets &pgd -> &(pgd)
> 
>  arch/s390/include/asm/pgtable.h | 42 +++++++++++++++++++++++----------
>  include/linux/pgtable.h         | 10 ++++++++
>  mm/gup.c                        | 18 +++++++-------
>  3 files changed, 49 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index 7eb01a5459cd..b55561cc8786 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -1260,26 +1260,44 @@ static inline pgd_t *pgd_offset_raw(pgd_t *pgd, unsigned long address)
> 
>  #define pgd_offset(mm, address) pgd_offset_raw(READ_ONCE((mm)->pgd), address)
> 
> -static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
> +static inline p4d_t *p4d_offset_lockless(pgd_t *pgdp, pgd_t pgd, unsigned long address)
>  {
> -	if ((pgd_val(*pgd) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R1)
> -		return (p4d_t *) pgd_deref(*pgd) + p4d_index(address);
> -	return (p4d_t *) pgd;
> +	if ((pgd_val(pgd) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R1)
> +		return (p4d_t *) pgd_deref(pgd) + p4d_index(address);
> +	return (p4d_t *) pgdp;
>  }
> +#define p4d_offset_lockless p4d_offset_lockless
> 
> -static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
> +static inline p4d_t *p4d_offset(pgd_t *pgdp, unsigned long address)
>  {
> -	if ((p4d_val(*p4d) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R2)
> -		return (pud_t *) p4d_deref(*p4d) + pud_index(address);
> -	return (pud_t *) p4d;
> +	return p4d_offset_lockless(pgdp, *pgdp, address);
> +}
> +
> +static inline pud_t *pud_offset_lockless(p4d_t *p4dp, p4d_t p4d, unsigned long address)
> +{
> +	if ((p4d_val(p4d) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R2)
> +		return (pud_t *) p4d_deref(p4d) + pud_index(address);
> +	return (pud_t *) p4dp;
> +}
> +#define pud_offset_lockless pud_offset_lockless
> +
> +static inline pud_t *pud_offset(p4d_t *p4dp, unsigned long address)
> +{
> +	return pud_offset_lockless(p4dp, *p4dp, address);
>  }
>  #define pud_offset pud_offset
> 
> -static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
> +static inline pmd_t *pmd_offset_lockless(pud_t *pudp, pud_t pud, unsigned long address)
> +{
> +	if ((pud_val(pud) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R3)
> +		return (pmd_t *) pud_deref(pud) + pmd_index(address);
> +	return (pmd_t *) pudp;
> +}
> +#define pmd_offset_lockless pmd_offset_lockless
> +
> +static inline pmd_t *pmd_offset(pud_t *pudp, unsigned long address)
>  {
> -	if ((pud_val(*pud) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R3)
> -		return (pmd_t *) pud_deref(*pud) + pmd_index(address);
> -	return (pmd_t *) pud;
> +	return pmd_offset_lockless(pudp, *pudp, address);
>  }
>  #define pmd_offset pmd_offset
> 
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index e8cbc2e795d5..90654cb63e9e 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1427,6 +1427,16 @@ typedef unsigned int pgtbl_mod_mask;
>  #define mm_pmd_folded(mm)	__is_defined(__PAGETABLE_PMD_FOLDED)
>  #endif
> 
> +#ifndef p4d_offset_lockless
> +#define p4d_offset_lockless(pgdp, pgd, address) p4d_offset(&(pgd), address)
> +#endif
> +#ifndef pud_offset_lockless
> +#define pud_offset_lockless(p4dp, p4d, address) pud_offset(&(p4d), address)
> +#endif
> +#ifndef pmd_offset_lockless
> +#define pmd_offset_lockless(pudp, pud, address) pmd_offset(&(pud), address)
> +#endif
> +
>  /*
>   * p?d_leaf() - true if this entry is a final mapping to a physical address.
>   * This differs from p?d_huge() by the fact that they are always available (if
> diff --git a/mm/gup.c b/mm/gup.c
> index e5739a1974d5..578bf5bd8bf8 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2485,13 +2485,13 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
>  	return 1;
>  }
> 
> -static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
> +static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned long end,
>  			 unsigned int flags, struct page **pages, int *nr)
>  {
>  	unsigned long next;
>  	pmd_t *pmdp;
> 
> -	pmdp = pmd_offset(&pud, addr);
> +	pmdp = pmd_offset_lockless(pudp, pud, addr);
>  	do {
>  		pmd_t pmd = READ_ONCE(*pmdp);
> 
> @@ -2528,13 +2528,13 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
>  	return 1;
>  }
> 
> -static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
> +static int gup_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr, unsigned long end,
>  			 unsigned int flags, struct page **pages, int *nr)
>  {
>  	unsigned long next;
>  	pud_t *pudp;
> 
> -	pudp = pud_offset(&p4d, addr);
> +	pudp = pud_offset_lockless(p4dp, p4d, addr);
>  	do {
>  		pud_t pud = READ_ONCE(*pudp);
> 
> @@ -2549,20 +2549,20 @@ static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
>  			if (!gup_huge_pd(__hugepd(pud_val(pud)), addr,
>  					 PUD_SHIFT, next, flags, pages, nr))
>  				return 0;
> -		} else if (!gup_pmd_range(pud, addr, next, flags, pages, nr))
> +		} else if (!gup_pmd_range(pudp, pud, addr, next, flags, pages, nr))
>  			return 0;
>  	} while (pudp++, addr = next, addr != end);
> 
>  	return 1;
>  }
> 
> -static int gup_p4d_range(pgd_t pgd, unsigned long addr, unsigned long end,
> +static int gup_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr, unsigned long end,
>  			 unsigned int flags, struct page **pages, int *nr)
>  {
>  	unsigned long next;
>  	p4d_t *p4dp;
> 
> -	p4dp = p4d_offset(&pgd, addr);
> +	p4dp = p4d_offset_lockless(pgdp, pgd, addr);
>  	do {
>  		p4d_t p4d = READ_ONCE(*p4dp);
> 
> @@ -2574,7 +2574,7 @@ static int gup_p4d_range(pgd_t pgd, unsigned long addr, unsigned long end,
>  			if (!gup_huge_pd(__hugepd(p4d_val(p4d)), addr,
>  					 P4D_SHIFT, next, flags, pages, nr))
>  				return 0;
> -		} else if (!gup_pud_range(p4d, addr, next, flags, pages, nr))
> +		} else if (!gup_pud_range(p4dp, p4d, addr, next, flags, pages, nr))
>  			return 0;
>  	} while (p4dp++, addr = next, addr != end);
> 
> @@ -2602,7 +2602,7 @@ static void gup_pgd_range(unsigned long addr, unsigned long end,
>  			if (!gup_huge_pd(__hugepd(pgd_val(pgd)), addr,
>  					 PGDIR_SHIFT, next, flags, pages, nr))
>  				return;
> -		} else if (!gup_p4d_range(pgd, addr, next, flags, pages, nr))
> +		} else if (!gup_p4d_range(pgdp, pgd, addr, next, flags, pages, nr))
>  			return;
>  	} while (pgdp++, addr = next, addr != end);
>  }
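For symmetry, here is the same toy model with the fix applied (again
hypothetical user-space names, not the kernel code): the lockless
helper receives both the real entry pointer and the value snapshot,
and on the folded path returns the real pointer, so the walker's ++
advances through the actual table. As I read it, this is also why the
generic fallbacks above can simply degrade to pXd_offset(&(pXd),
address): with static folding, pXd_addr_end() for a folded level spans
the whole upper-level entry, so the loop body runs exactly once and
the pointer to the local copy is never incremented past it.

/*
 * Fixed variant of the earlier model (same hypothetical names): the
 * lockless helper takes both the real entry pointer and the value
 * snapshot, and on the folded path returns the *real* pointer, so
 * pudp++ walks the actual (simulated) table, not the stack.
 */
#include <stdio.h>

typedef unsigned long pud_t;

static pud_t *pud_offset_lockless_folded(pud_t *p4dp, pud_t p4d,
					 unsigned long addr)
{
	(void)p4d;	/* the value copy stays available for checks */
	(void)addr;
	return p4dp;	/* folded: real table entry, not &local_copy */
}

int main(void)
{
	pud_t table[2] = { 0xAAAA, 0xBBBB };	/* simulated page table */
	pud_t p4d = table[0];			/* READ_ONCE-style snapshot */
	pud_t *pudp = pud_offset_lockless_folded(&table[0], p4d, 0);

	/* Both iterations now read real table entries. */
	for (int i = 0; i < 2; i++, pudp++)
		printf("iteration %d reads %#lx\n", i, *pudp);

	return 0;
}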

-- 
Sincerely yours,
Mike.