From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [RFC PATCH v2 1/3] mm/gup: fix gup_fast with dynamic page table folding
To: Gerald Schaefer, Jason Gunthorpe, John Hubbard
Cc: Peter Zijlstra, Dave Hansen, linux-mm, Paul Mackerras, linux-sparc,
 Alexander Gordeev, Claudio Imbrenda, Will Deacon, linux-arch, linux-s390,
 Vasily Gorbik, Richard Weinberger, linux-x86, Russell King,
 Christian Borntraeger, Ingo Molnar, Catalin Marinas, Andrey Ryabinin,
 Heiko Carstens, Arnd Bergmann, Jeff Dike, linux-um, Borislav Petkov,
 Andy Lutomirski, Thomas Gleixner, linux-arm, linux-power, LKML,
 Andrew Morton, Linus Torvalds, Mike Rapoport
References: <20200907180058.64880-1-gerald.schaefer@linux.ibm.com>
 <20200907180058.64880-2-gerald.schaefer@linux.ibm.com>
From: Christophe Leroy <christophe.leroy@csgroup.eu>
Message-ID: <82fbe8f9-f199-5fc2-4168-eb43ad0b0346@csgroup.eu>
Date: Tue, 8 Sep 2020 07:06:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20200907180058.64880-2-gerald.schaefer@linux.ibm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

On 07/09/2020 at 20:00, Gerald Schaefer wrote:
> From: Alexander Gordeev <agordeev@linux.ibm.com>
>
> Commit 1a42010cdc26 ("s390/mm: convert to the generic get_user_pages_fast
> code") introduced a subtle but severe bug on s390 with gup_fast, due to
> dynamic page table folding.
>
> The question "What would it require for the generic code to work for s390"
> has already been discussed here
> https://lkml.kernel.org/r/20190418100218.0a4afd51@mschwideX1
> and ended with a promising approach here
> https://lkml.kernel.org/r/20190419153307.4f2911b5@mschwideX1
> which in the end unfortunately didn't quite work completely.
>
> We tried to mimic static level folding by changing pgd_offset to always
> calculate the top level page table offset, and do nothing in folded
> pXd_offset. What has been overlooked is that PxD_SIZE/MASK and thus
> pXd_addr_end do not reflect this dynamic behaviour, and still act like
> static 5-level page tables.
>

[...]

>
> Fix this by introducing new pXd_addr_end_folded helpers, which take an
> additional pXd entry value parameter, that can be used on s390
> to determine the correct page table level and return the corresponding
> end / boundary. With that, the pointer iteration will always
> happen in gup_pgd_range for s390. No change for other architectures
> introduced.

Not sure pXd_addr_end_folded() is the most understandable name, although
I don't have any alternative suggestion at the moment.
Maybe it could be something like pXd_addr_end_fixup(), as it will
disappear in the next patch, or pXd_addr_end_gup()?

Also, if it happens to be acceptable to get patch 2 into stable, I think
you should swap patch 1 and patch 2 to avoid the step through
pXd_addr_end_folded().

>
> Fixes: 1a42010cdc26 ("s390/mm: convert to the generic get_user_pages_fast code")
> Cc: <stable@vger.kernel.org> # 5.2+
> Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
> Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
> ---
>   arch/s390/include/asm/pgtable.h | 42 +++++++++++++++++++++++++++++++++
>   include/linux/pgtable.h         | 16 +++++++++++++
>   mm/gup.c                        |  8 +++----
>   3 files changed, 62 insertions(+), 4 deletions(-)
>
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index 7eb01a5459cd..027206e4959d 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -512,6 +512,48 @@ static inline bool mm_pmd_folded(struct mm_struct *mm)
>   }
>   #define mm_pmd_folded(mm)	mm_pmd_folded(mm)
>
> +/*
> + * With dynamic page table levels on s390, the static pXd_addr_end() functions
> + * will not return corresponding dynamic boundaries. This is no problem as long
> + * as only pXd pointers are passed down during page table walk, because
> + * pXd_offset() will simply return the given pointer for folded levels, and the
> + * pointer iteration over a range simply happens at the correct page table
> + * level.
> + * It is however a problem with gup_fast, or other places walking the page
> + * tables w/o locks using READ_ONCE(), and passing down the pXd values instead
> + * of pointers. In this case, the pointer given to pXd_offset() is a pointer to
> + * a stack variable, which cannot be used for pointer iteration at the correct
> + * level. Instead, the iteration then has to happen by going up to pgd level
> + * again. To allow this, provide pXd_addr_end_folded() functions with an
> + * additional pXd value parameter, which can be used on s390 to determine the
> + * folding level and return the corresponding boundary.
> + */
> +static inline unsigned long rste_addr_end_folded(unsigned long rste, unsigned long addr, unsigned long end)

What does 'rste' stand for?

Isn't this line a bit long?
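
If I read the s390 layout correctly ('rste' presumably meaning
region/segment table entry), the arithmetic below resolves the level from
the entry value itself. A quick worked example, assuming _SEGMENT_SHIFT
is 20 (1 MB segments) and 11 index bits per page table level, matching
the "type * 11" shift in the code:

	type = (rste & _REGION_ENTRY_TYPE_MASK) >> 2;	/* 1, 2 or 3 */
	size = 1UL << (20 + type * 11);
	/* type 1 (region third)  -> size = 2^31 = 2 GB (pud level) */
	/* type 2 (region second) -> size = 2^42 = 4 TB (p4d level) */
	/* type 3 (region first)  -> size = 2^53 = 8 PB (pgd level) */
	boundary = (addr + size) & ~(size - 1);	/* next size-aligned boundary */

so the returned boundary matches the level the entry really lives at,
instead of the static PGDIR/P4D sizes.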
> +{
> +	unsigned long type = (rste & _REGION_ENTRY_TYPE_MASK) >> 2;
> +	unsigned long size = 1UL << (_SEGMENT_SHIFT + type * 11);
> +	unsigned long boundary = (addr + size) & ~(size - 1);
> +
> +	/*
> +	 * FIXME The below check is for internal testing only, to be removed
> +	 */
> +	VM_BUG_ON(type < (_REGION_ENTRY_TYPE_R3 >> 2));
> +
> +	return (boundary - 1) < (end - 1) ? boundary : end;
> +}
> +
> +#define pgd_addr_end_folded pgd_addr_end_folded
> +static inline unsigned long pgd_addr_end_folded(pgd_t pgd, unsigned long addr, unsigned long end)
> +{
> +	return rste_addr_end_folded(pgd_val(pgd), addr, end);
> +}
> +
> +#define p4d_addr_end_folded p4d_addr_end_folded
> +static inline unsigned long p4d_addr_end_folded(p4d_t p4d, unsigned long addr, unsigned long end)
> +{
> +	return rste_addr_end_folded(p4d_val(p4d), addr, end);
> +}
> +
>   static inline int mm_has_pgste(struct mm_struct *mm)
>   {
>   #ifdef CONFIG_PGSTE
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index e8cbc2e795d5..981c4c2a31fe 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -681,6 +681,22 @@ static inline int arch_unmap_one(struct mm_struct *mm,
>   })
>   #endif
>
> +#ifndef pgd_addr_end_folded
> +#define pgd_addr_end_folded(pgd, addr, end)	pgd_addr_end(addr, end)
> +#endif
> +
> +#ifndef p4d_addr_end_folded
> +#define p4d_addr_end_folded(p4d, addr, end)	p4d_addr_end(addr, end)
> +#endif
> +
> +#ifndef pud_addr_end_folded
> +#define pud_addr_end_folded(pud, addr, end)	pud_addr_end(addr, end)
> +#endif
> +
> +#ifndef pmd_addr_end_folded
> +#define pmd_addr_end_folded(pmd, addr, end)	pmd_addr_end(addr, end)
> +#endif
> +
>   /*
>    * When walking page tables, we usually want to skip any p?d_none entries;
>    * and any p?d_bad entries - reporting the error before resetting to none.
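
The generic side looks straightforward: on architectures that don't
override them, the new macros collapse to the existing helpers and the
extra entry argument is simply ignored. From memory, the generic
pmd_addr_end() in include/linux/pgtable.h is:

	#define pmd_addr_end(addr, end)						\
	({	unsigned long __boundary = ((addr) + PMD_SIZE) & PMD_MASK;	\
		(__boundary - 1 < (end) - 1)? __boundary: (end);		\
	})

so pmd_addr_end_folded(pmd, addr, end) keeps exactly the old behaviour
everywhere but s390.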
> diff --git a/mm/gup.c b/mm/gup.c
> index bd883a112724..ba4aace5d0f4 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2521,7 +2521,7 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
>   	do {
>   		pmd_t pmd = READ_ONCE(*pmdp);
>
> -		next = pmd_addr_end(addr, end);
> +		next = pmd_addr_end_folded(pmd, addr, end);
>   		if (!pmd_present(pmd))
>   			return 0;
>
> @@ -2564,7 +2564,7 @@ static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
>   	do {
>   		pud_t pud = READ_ONCE(*pudp);
>
> -		next = pud_addr_end(addr, end);
> +		next = pud_addr_end_folded(pud, addr, end);
>   		if (unlikely(!pud_present(pud)))
>   			return 0;
>   		if (unlikely(pud_huge(pud))) {
> @@ -2592,7 +2592,7 @@ static int gup_p4d_range(pgd_t pgd, unsigned long addr, unsigned long end,
>   	do {
>   		p4d_t p4d = READ_ONCE(*p4dp);
>
> -		next = p4d_addr_end(addr, end);
> +		next = p4d_addr_end_folded(p4d, addr, end);
>   		if (p4d_none(p4d))
>   			return 0;
>   		BUILD_BUG_ON(p4d_huge(p4d));
> @@ -2617,7 +2617,7 @@ static void gup_pgd_range(unsigned long addr, unsigned long end,
>   	do {
>   		pgd_t pgd = READ_ONCE(*pgdp);
>
> -		next = pgd_addr_end(addr, end);
> +		next = pgd_addr_end_folded(pgd, addr, end);
>   		if (pgd_none(pgd))
>   			return;
>   		if (unlikely(pgd_huge(pgd))) {
>

Christophe
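
PS: for anyone trying to follow why the entry value (and not the pointer)
has to carry the level information, the lockless walk the patch adjusts
boils down to roughly this (a simplified sketch of the quoted
gup_pgd_range(), with the huge/hugepd special cases left out):

	static void gup_pgd_range(unsigned long addr, unsigned long end,
				  unsigned int flags, struct page **pages, int *nr)
	{
		unsigned long next;
		pgd_t *pgdp = pgd_offset(current->mm, addr);

		do {
			/* snapshot the entry value; no locks are held */
			pgd_t pgd = READ_ONCE(*pgdp);

			/*
			 * Derive the boundary from the snapshotted value, so a
			 * folded s390 entry can report its real region size
			 * instead of the static PGDIR_SIZE.
			 */
			next = pgd_addr_end_folded(pgd, addr, end);
			if (pgd_none(pgd))
				return;
			if (!gup_p4d_range(pgd, addr, next, flags, pages, nr))
				return;
		} while (pgdp++, addr = next, addr != end);
	}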