Subject: Re: [RFC PATCH] mm, page_alloc: avoid page_to_pfn() in move_freepages()
From: Kefeng Wang
To: David Hildenbrand
CC: Andrew Morton, Michal Hocko, Vlastimil Babka
Date: Wed, 27 Nov 2019 19:21:50 +0800
Message-ID: <54064878-ea85-247a-3382-b96ddf97c667@huawei.com>
In-Reply-To: <0042aeb9-8886-904b-295f-aec4f1d5bb8e@redhat.com>
References: <20191127102800.51526-1-wangkefeng.wang@huawei.com> <0042aeb9-8886-904b-295f-aec4f1d5bb8e@redhat.com>

On 2019/11/27 18:47, David Hildenbrand wrote:
> [...]
> 
>>  static int move_freepages(struct zone *zone,
>> -			  struct page *start_page, struct page *end_page,
>> +			  unsigned long start_pfn, unsigned long end_pfn,
>>  			  int migratetype, int *num_movable)
>>  {
>>  	struct page *page;
>> +	unsigned long pfn;
>>  	unsigned int order;
>>  	int pages_moved = 0;
>> 
>> -	for (page = start_page; page <= end_page;) {
>> -		if (!pfn_valid_within(page_to_pfn(page))) {
>> -			page++;
>> +	for (pfn = start_pfn; pfn <= end_pfn;) {
>> +		if (!pfn_valid_within(pfn)) {
>> +			pfn++;
>>  			continue;
>>  		}
>> 
>> +		page = pfn_to_page(pfn);
>>  		if (!PageBuddy(page)) {
>>  			/*
>>  			 * We assume that pages that could be isolated for
>> @@ -2268,8 +2270,7 @@ static int move_freepages(struct zone *zone,
>>  			if (num_movable &&
>>  					(PageLRU(page) || __PageMovable(page)))
>>  				(*num_movable)++;
>> -
>> -			page++;
>> +			pfn++;
>>  			continue;
>>  		}
>> 
>> @@ -2280,6 +2281,7 @@ static int move_freepages(struct zone *zone,
>>  		order = page_order(page);
>>  		move_to_free_area(page, &zone->free_area[order], migratetype);
>>  		page += 1 << order;
> 
> You can drop this now as well, no?

should do it

> 
>> +		pfn += 1 << order;
>>  		pages_moved += 1 << order;
>>  	}
>> 
>> @@ -2289,25 +2291,22 @@ static int move_freepages(struct zone *zone,
>>  int move_freepages_block(struct zone *zone, struct page *page,
>>  				int migratetype, int *num_movable)
>>  {
>> -	unsigned long start_pfn, end_pfn;
>> -	struct page *start_page, *end_page;
>> +	unsigned long start_pfn, end_pfn, pfn;
>> 
>>  	if (num_movable)
>>  		*num_movable = 0;
>> 
>> -	start_pfn = page_to_pfn(page);
>> +	pfn = start_pfn = page_to_pfn(page);
> 
> pfn = page_to_pfn(page);
> 
> and ...
> 
>>  	start_pfn = start_pfn & ~(pageblock_nr_pages-1);
> 
> ...
> 
> start_pfn = pfn & ~(pageblock_nr_pages - 1);
> 
> instead?

will change, thanks for your comments.
> 
>> -	start_page = pfn_to_page(start_pfn);
>> -	end_page = start_page + pageblock_nr_pages - 1;
>>  	end_pfn = start_pfn + pageblock_nr_pages - 1;
>> 
>>  	/* Do not cross zone boundaries */
>>  	if (!zone_spans_pfn(zone, start_pfn))
>> -		start_page = page;
>> +		start_pfn = pfn;
>>  	if (!zone_spans_pfn(zone, end_pfn))
>>  		return 0;
>> 
>> -	return move_freepages(zone, start_page, end_page, migratetype,
>> +	return move_freepages(zone, start_pfn, end_pfn, migratetype,
>>  				  num_movable);
>>  }
>> 
> 
> 