Date: Tue, 23 Mar 2021 12:54:00 +0000
From: Matthew Wilcox
To: Liu Shixin
Cc: Andrew Morton, Stephen Rothwell, Michal Hocko, Vlastimil Babka,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Kefeng Wang
Subject: Re: [PATCH -next] mm, page_alloc: avoid page_to_pfn() in move_freepages()
Message-ID: <20210323125400.GE1719932@casper.infradead.org>
References: <20210323131215.934472-1-liushixin2@huawei.com>
In-Reply-To: <20210323131215.934472-1-liushixin2@huawei.com>
envelope-from=""; helo=casper.infradead.org; client-ip=90.155.50.34 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1616504104-586761 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Tue, Mar 23, 2021 at 09:12:15PM +0800, Liu Shixin wrote: > From: Kefeng Wang > > The start_pfn and end_pfn are already available in move_freepages_block(), > there is no need to go back and forth between page and pfn in move_freepages > and move_freepages_block, and pfn_valid_within() should validate pfn first > before touching the page. This looks good to me: Reviewed-by: Matthew Wilcox (Oracle) > static int move_freepages(struct zone *zone, > - struct page *start_page, struct page *end_page, > + unsigned long start_pfn, unsigned long end_pfn, > int migratetype, int *num_movable) > { > struct page *page; > + unsigned long pfn; > unsigned int order; > int pages_moved = 0; > > - for (page = start_page; page <= end_page;) { > - if (!pfn_valid_within(page_to_pfn(page))) { > - page++; > + for (pfn = start_pfn; pfn <= end_pfn;) { > + if (!pfn_valid_within(pfn)) { > + pfn++; > continue; > } > > + page = pfn_to_page(pfn); I wonder if this wouldn't be even better if we did: struct page *start_page = pfn_to_page(start_pfn); for (pfn = start_pfn; pfn <= end_pfn; pfn++) { struct page *page = start_page + pfn - start_pfn; if (!pfn_valid_within(pfn)) continue; > - > - page++; > + pfn++; > continue; ... then we can drop the increment of pfn here > } > > @@ -2458,7 +2459,7 @@ static int move_freepages(struct zone *zone, > > order = buddy_order(page); > move_to_free_list(page, zone, order, migratetype); > - page += 1 << order; > + pfn += 1 << order; ... and change this to pfn += (1 << order) - 1; Do you have any numbers to quantify the benefit of this change?