Date: Tue, 7 Jul 2020 09:38:56 +0800
From: Wei Yang <richard.weiyang@linux.alibaba.com>
To: "Kirill A. Shutemov"
Shutemov" Cc: Wei Yang , akpm@linux-foundation.org, kirill.shutemov@linux.intel.com, yang.shi@linux.alibaba.com, vbabka@suse.cz, willy@infradead.org, thomas_os@shipmail.org, thellstrom@vmware.com, anshuman.khandual@arm.com, sean.j.christopherson@intel.com, aneesh.kumar@linux.ibm.com, peterx@redhat.com, walken@google.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, digetx@gmail.com Subject: Re: [RESEND Patch v2 3/4] mm/mremap: calculate extent in one place Message-ID: <20200707013856.GA27805@L-31X9LVDL-1304.local> Reply-To: Wei Yang References: <20200626135216.24314-1-richard.weiyang@linux.alibaba.com> <20200626135216.24314-4-richard.weiyang@linux.alibaba.com> <20200706100729.y2wbkpc4tyvjojzg@box> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20200706100729.y2wbkpc4tyvjojzg@box> X-Rspamd-Queue-Id: 1311C180559ED X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam05 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Mon, Jul 06, 2020 at 01:07:29PM +0300, Kirill A. Shutemov wrote: >On Fri, Jun 26, 2020 at 09:52:15PM +0800, Wei Yang wrote: >> Page tables is moved on the base of PMD. This requires both source >> and destination range should meet the requirement. >> >> Current code works well since move_huge_pmd() and move_normal_pmd() >> would check old_addr and new_addr again. And then return to move_ptes() >> if the either of them is not aligned. >> >> In stead of calculating the extent separately, it is better to calculate >> in one place, so we know it is not necessary to try move pmd. By doing >> so, the logic seems a little clear. >> >> Signed-off-by: Wei Yang >> Tested-by: Dmitry Osipenko >> --- >> mm/mremap.c | 6 +++--- >> 1 file changed, 3 insertions(+), 3 deletions(-) >> >> diff --git a/mm/mremap.c b/mm/mremap.c >> index de27b12c8a5a..a30b3e86cc99 100644 >> --- a/mm/mremap.c >> +++ b/mm/mremap.c >> @@ -258,6 +258,9 @@ unsigned long move_page_tables(struct vm_area_struct *vma, >> extent = next - old_addr; >> if (extent > old_end - old_addr) >> extent = old_end - old_addr; >> + next = (new_addr + PMD_SIZE) & PMD_MASK; > >Please use round_up() for both 'next' calculations. > I took another close look into this, seems this is not a good suggestion. round_up(new_addr, PMD_SIZE) would be new_addr when new_addr is PMD_SIZE aligned, which is not what we expect. >> + if (extent > next - new_addr) >> + extent = next - new_addr; >> old_pmd = get_old_pmd(vma->vm_mm, old_addr); >> if (!old_pmd) >> continue; >> @@ -301,9 +304,6 @@ unsigned long move_page_tables(struct vm_area_struct *vma, >> >> if (pte_alloc(new_vma->vm_mm, new_pmd)) >> break; >> - next = (new_addr + PMD_SIZE) & PMD_MASK; >> - if (extent > next - new_addr) >> - extent = next - new_addr; >> move_ptes(vma, old_pmd, old_addr, old_addr + extent, new_vma, >> new_pmd, new_addr, need_rmap_locks); >> } >> -- >> 2.20.1 (Apple Git-117) >> > >-- > Kirill A. Shutemov -- Wei Yang Help you, Help me