From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 1 Jul 2020 19:54:41 +0800
From: Wei Yang <richard.weiyang@linux.alibaba.com>
To: David Hildenbrand
Cc: dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	akpm@linux-foundation.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
	linux-mm@kvack.org
Subject: Re: [PATCH] mm: define pte_addr_end for consistency
Message-ID: <20200701115441.GA4979@L-31X9LVDL-1304.local>
References: <20200630031852.45383-1-richard.weiyang@linux.alibaba.com>
 <40362e99-a354-c44f-8645-e2326a6df680@redhat.com>
 <20200701021113.GA51306@L-31X9LVDL-1304.local>
On Wed, Jul 01, 2020 at 10:29:08AM +0200, David Hildenbrand wrote:
>On 01.07.20 04:11, Wei Yang wrote:
>> On Tue, Jun 30, 2020 at 02:44:00PM +0200, David Hildenbrand wrote:
>>> On 30.06.20 05:18, Wei Yang wrote:
>>>> When walking page tables, we define several helpers to get the address
>>>> of the next boundary. But we don't have one for the pte level.
>>>>
>>>> Let's define it and consolidate the code in several places.
>>>>
>>>> Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
>>>> ---
>>>>  arch/x86/mm/init_64.c   | 6 ++----
>>>>  include/linux/pgtable.h | 7 +++++++
>>>>  mm/kasan/init.c         | 4 +---
>>>>  3 files changed, 10 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
>>>> index dbae185511cd..f902fbd17f27 100644
>>>> --- a/arch/x86/mm/init_64.c
>>>> +++ b/arch/x86/mm/init_64.c
>>>> @@ -973,9 +973,7 @@ remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
>>>>
>>>>  	pte = pte_start + pte_index(addr);
>>>>  	for (; addr < end; addr = next, pte++) {
>>>> -		next = (addr + PAGE_SIZE) & PAGE_MASK;
>>>> -		if (next > end)
>>>> -			next = end;
>>>> +		next = pte_addr_end(addr, end);
>>>>
>>>>  		if (!pte_present(*pte))
>>>>  			continue;
>>>> @@ -1558,7 +1556,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
>>>>  		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
>>>>
>>>>  		if (!boot_cpu_has(X86_FEATURE_PSE)) {
>>>> -			next = (addr + PAGE_SIZE) & PAGE_MASK;
>>>> +			next = pte_addr_end(addr, end);
>>>>  			pmd = pmd_offset(pud, addr);
>>>>  			if (pmd_none(*pmd))
>>>>  				continue;
>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>> index 32b6c52d41b9..0de09c6c89d2 100644
>>>> --- a/include/linux/pgtable.h
>>>> +++ b/include/linux/pgtable.h
>>>> @@ -706,6 +706,13 @@ static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
>>>>  })
>>>>  #endif
>>>>
>>>> +#ifndef pte_addr_end
>>>> +#define pte_addr_end(addr, end)						\
>>>> +({	unsigned long __boundary = ((addr) + PAGE_SIZE) & PAGE_MASK;	\
>>>> +	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
>>>> +})
>>>> +#endif
>>>> +
>>>>  /*
>>>>   * When walking page tables, we usually want to skip any p?d_none entries;
>>>>   * and any p?d_bad entries - reporting the error before resetting to none.
>>>> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
>>>> index fe6be0be1f76..89f748601f74 100644
>>>> --- a/mm/kasan/init.c
>>>> +++ b/mm/kasan/init.c
>>>> @@ -349,9 +349,7 @@ static void kasan_remove_pte_table(pte_t *pte, unsigned long addr,
>>>>  	unsigned long next;
>>>>
>>>>  	for (; addr < end; addr = next, pte++) {
>>>> -		next = (addr + PAGE_SIZE) & PAGE_MASK;
>>>> -		if (next > end)
>>>> -			next = end;
>>>> +		next = pte_addr_end(addr, end);
>>>>
>>>>  		if (!pte_present(*pte))
>>>>  			continue;
>>>>
>>>
>>> I'm not really a friend of this, I have to say. We're simply iterating
>>> over single pages, not much magic ....
>>
>> Hmm... yes, we are iterating on page boundaries, while we may have the
>> case where addr or end is not page aligned.
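For illustration only, here is a small stand-alone sketch of the proposed
macro's clamping behaviour. The PAGE_* constants and test addresses below
are assumptions (4 KiB pages), not taken from the kernel headers; only the
macro body itself is from the patch. It needs GNU C (the ({ ... })
statement expression), e.g. gcc.

    /*
     * Sketch of the proposed pte_addr_end(); assumes 4 KiB pages.
     */
    #include <assert.h>

    #define PAGE_SHIFT	12
    #define PAGE_SIZE	(1UL << PAGE_SHIFT)
    #define PAGE_MASK	(~(PAGE_SIZE - 1))

    #define pte_addr_end(addr, end)					\
    ({	unsigned long __boundary = ((addr) + PAGE_SIZE) & PAGE_MASK;	\
    	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
    })

    int main(void)
    {
    	/* Unaligned addr: step to the next page boundary. */
    	assert(pte_addr_end(0x1008UL, 0x3000UL) == 0x2000UL);

    	/* Boundary past end: clamp to end. */
    	assert(pte_addr_end(0x1008UL, 0x1800UL) == 0x1800UL);

    	/*
    	 * end == 0, i.e. a range reaching the top of the address
    	 * space: the "- 1" comparison still returns the boundary,
    	 * whereas a naive "if (next > end) next = end;" would clamp
    	 * every step to 0.
    	 */
    	assert(pte_addr_end(0x1000UL, 0UL) == 0x2000UL);

    	return 0;
    }

The "- 1" trick is the same end-wrap-around guard used by the existing
p?d_addr_end()-style helpers.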
>I really do wonder if not having page aligned addresses actually happens
>in real life. Page tables operate on page granularity, and
>adding/removing unaligned parts feels wrong ... and that's also why I
>dislike such a helper.
>
>1. kasan_add_zero_shadow()/kasan_remove_zero_shadow(). If I understand
>the logic (WARN_ON()) correctly, we bail out in case we would ever end
>up in such a scenario, where we would want to add/remove things not
>aligned to PAGE_SIZE.
>
>2. remove_pagetable()...->remove_pte_table()
>
>vmemmap_free() should never try to de-populate sub-pages. Even with
>sub-section hot-add/remove (2MB / 512 pages), with valid struct page
>sizes (56, 64, 72, 80), we always end up with full pages.
>
>kernel_physical_mapping_remove() is only called via
>arch_remove_memory(). That will never remove unaligned parts.
>

My mind isn't very clear right now, but when you look into
remove_pte_table(), it has two cases based on the alignment of addr and
next. If we always remove whole pages, the second case can't happen?

>3. register_page_bootmem_memmap()
>
>It operates on full pages only.
>
>
>This needs in-depth analysis, but my gut feeling is that this alignment
>is unnecessary.
>
>>
>>>
>>> What would definitely make sense is replacing (addr + PAGE_SIZE) &
>>> PAGE_MASK; by PAGE_ALIGN() ...
>>>
>>
>> No, PAGE_ALIGN() is expanded to be
>>
>>     (addr + PAGE_SIZE - 1) & PAGE_MASK;
>>
>> If we change the code to PAGE_ALIGN(), we would end up with an infinite
>> loop, since PAGE_ALIGN() of an already page-aligned address returns the
>> address itself.
>
>Very right, it would have to be PAGE_ALIGN(addr + 1).
>
>--
>Thanks,
>
>David / dhildenb

-- 
Wei Yang
Help you, Help me
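As a footnote to the PAGE_ALIGN() exchange above, a minimal stand-alone
sketch (the page constants are assumptions, 4 KiB pages; not from the
thread) of why PAGE_ALIGN(addr) stalls on an aligned address while
PAGE_ALIGN(addr + 1) advances:

    /* Why PAGE_ALIGN(addr) would loop forever; assumes 4 KiB pages. */
    #include <assert.h>

    #define PAGE_SIZE	4096UL
    #define PAGE_MASK	(~(PAGE_SIZE - 1))
    #define PAGE_ALIGN(addr)	(((addr) + PAGE_SIZE - 1) & PAGE_MASK)

    int main(void)
    {
    	unsigned long addr = 0x2000UL;	/* already page aligned */

    	/*
    	 * PAGE_ALIGN() rounds up, so an aligned address maps to
    	 * itself: next == addr, and a walker doing "addr = next"
    	 * never advances.
    	 */
    	assert(PAGE_ALIGN(addr) == addr);

    	/*
    	 * Asking for the alignment of addr + 1 always moves forward,
    	 * matching (addr + PAGE_SIZE) & PAGE_MASK for aligned addr.
    	 */
    	assert(PAGE_ALIGN(addr + 1) == addr + PAGE_SIZE);
    	assert(((addr + PAGE_SIZE) & PAGE_MASK) == addr + PAGE_SIZE);

    	return 0;
    }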