From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 27 Nov 2019 12:47:50 +0100
From: Michal Hocko
To: Kefeng Wang
Cc: linux-mm@kvack.org, Andrew Morton, Vlastimil Babka
Subject: Re: [RFC PATCH] mm, page_alloc: avoid page_to_pfn() in move_freepages()
Message-ID: <20191127114750.GP20912@dhcp22.suse.cz>
In-Reply-To: <20191127102800.51526-1-wangkefeng.wang@huawei.com>

On Wed 27-11-19 18:28:00, Kefeng Wang wrote:
> The start_pfn and end_pfn are already available in move_freepages_block(),
> so pfn_valid_within() should validate the pfn before the page is touched,
> or we might access an uninitialized page with CONFIG_HOLES_IN_ZONE configs.
> 
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Vlastimil Babka
> Signed-off-by: Kefeng Wang
> ---
> 
> Here is an oops in 4.4 (arm64 with CONFIG_HOLES_IN_ZONE enabled):

Is this reproducible with the current upstream kernel? There were large
changes in this area since 4.4. Btw. the oops below should be part of the
changelog.
> Unable to handle kernel NULL pointer dereference at virtual address 00000000
> pgd = ffffff8008f7e000
> [00000000] *pgd=0000000017ffe003, *pud=0000000017ffe003, *pmd=0000000000000000
> Internal error: Oops: 96000007 [#1] SMP
> CPU: 0 PID: 0 Comm: swapper/0 Tainted: G        W  O    4.4.185 #1
> 
> PC is at move_freepages+0x80/0x10c
> LR is at move_freepages_block+0xd4/0xf4
> pc : [] lr : [] pstate: 80000085
> [...]
> [] move_freepages+0x80/0x10c
> [] move_freepages_block+0xd4/0xf4
> [] __rmqueue+0x2bc/0x44c
> [] get_page_from_freelist+0x268/0x600
> [] __alloc_pages_nodemask+0x184/0x88c
> [] new_slab+0xd0/0x494
> [] ___slab_alloc.constprop.29+0x1c8/0x2e8
> [] __slab_alloc.constprop.28+0x54/0x84
> [] kmem_cache_alloc+0x64/0x198
> [] __build_skb+0x44/0xa4
> [] __netdev_alloc_skb+0xe4/0x134
> 
>  mm/page_alloc.c | 25 ++++++++++++-------------
>  1 file changed, 12 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index f391c0c4ed1d..59f2c2b860fe 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2246,19 +2246,21 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
>   * boundary. If alignment is required, use move_freepages_block()
>   */
>  static int move_freepages(struct zone *zone,
> -			  struct page *start_page, struct page *end_page,
> +			  unsigned long start_pfn, unsigned long end_pfn,
>  			  int migratetype, int *num_movable)
>  {
>  	struct page *page;
> +	unsigned long pfn;
>  	unsigned int order;
>  	int pages_moved = 0;
> 
> -	for (page = start_page; page <= end_page;) {
> -		if (!pfn_valid_within(page_to_pfn(page))) {
> -			page++;
> +	for (pfn = start_pfn; pfn <= end_pfn;) {
> +		if (!pfn_valid_within(pfn)) {
> +			pfn++;
>  			continue;
>  		}
> 
> +		page = pfn_to_page(pfn);
>  		if (!PageBuddy(page)) {
>  			/*
>  			 * We assume that pages that could be isolated for
> @@ -2268,8 +2270,7 @@ static int move_freepages(struct zone *zone,
>  			if (num_movable &&
>  					(PageLRU(page) || __PageMovable(page)))
>  				(*num_movable)++;
> -
> -			page++;
> +			pfn++;
>  			continue;
>  		}
> 
> @@ -2280,6 +2281,7 @@ static int move_freepages(struct zone *zone,
>  		order = page_order(page);
>  		move_to_free_area(page, &zone->free_area[order], migratetype);
>  		page += 1 << order;
> +		pfn += 1 << order;
>  		pages_moved += 1 << order;
>  	}
> 
> @@ -2289,25 +2291,22 @@ static int move_freepages(struct zone *zone,
>  int move_freepages_block(struct zone *zone, struct page *page,
>  				int migratetype, int *num_movable)
>  {
> -	unsigned long start_pfn, end_pfn;
> -	struct page *start_page, *end_page;
> +	unsigned long start_pfn, end_pfn, pfn;
> 
>  	if (num_movable)
>  		*num_movable = 0;
> 
> -	start_pfn = page_to_pfn(page);
> +	pfn = start_pfn = page_to_pfn(page);
>  	start_pfn = start_pfn & ~(pageblock_nr_pages-1);
> -	start_page = pfn_to_page(start_pfn);
> -	end_page = start_page + pageblock_nr_pages - 1;
>  	end_pfn = start_pfn + pageblock_nr_pages - 1;
> 
>  	/* Do not cross zone boundaries */
>  	if (!zone_spans_pfn(zone, start_pfn))
> -		start_page = page;
> +		start_pfn = pfn;
>  	if (!zone_spans_pfn(zone, end_pfn))
>  		return 0;
> 
> -	return move_freepages(zone, start_page, end_page, migratetype,
> +	return move_freepages(zone, start_pfn, end_pfn, migratetype,
>  			      num_movable);
>  }
> 
> -- 
> 2.20.1
> 

-- 
Michal Hocko
SUSE Labs
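
For readers without the kernel tree at hand, here is a minimal userspace
sketch (an illustration, not kernel code) of the iteration pattern the patch
adopts: validate the PFN before converting it into a page pointer, so holes
in the memmap are skipped by PFN arithmetic alone and an uninitialized
struct page is never dereferenced. The names struct fake_page, pfn_is_valid(),
lookup_page() and walk_pfn_range() are hypothetical stand-ins for the
kernel's struct page, pfn_valid_within(), pfn_to_page() and move_freepages().

/*
 * Userspace sketch of the PFN-first iteration in the patch above.
 * Build with any C99 compiler, e.g.: cc -std=c99 -o sketch sketch.c
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	unsigned int order;	/* buddy order; 0 here for simplicity */
};

#define NR_PFNS 8UL

static struct fake_page memmap[NR_PFNS];
/* PFNs 2 and 3 model a hole in the zone (the CONFIG_HOLES_IN_ZONE case) */
static const bool hole[NR_PFNS] = { false, false, true, true };

/* Stand-in for pfn_valid_within(): is there a real page behind this PFN? */
static bool pfn_is_valid(unsigned long pfn)
{
	return pfn < NR_PFNS && !hole[pfn];
}

/* Stand-in for pfn_to_page(): only safe to call on a validated PFN */
static struct fake_page *lookup_page(unsigned long pfn)
{
	return &memmap[pfn];
}

/* Walk [start_pfn, end_pfn], skipping holes without touching their pages */
static int walk_pfn_range(unsigned long start_pfn, unsigned long end_pfn)
{
	unsigned long pfn;
	int pages_moved = 0;

	for (pfn = start_pfn; pfn <= end_pfn;) {
		/*
		 * The crucial ordering: check the PFN first. The buggy
		 * code called page_to_pfn(page) on a page pointer that
		 * could point into an uninitialized part of the memmap.
		 */
		if (!pfn_is_valid(pfn)) {
			pfn++;
			continue;
		}

		struct fake_page *page = lookup_page(pfn);

		/* Advance by the buddy order, as move_freepages() does */
		pfn += 1UL << page->order;
		pages_moved += 1 << page->order;
	}
	return pages_moved;
}

int main(void)
{
	/* 8 PFNs with 2 holes => 6 pages visited */
	printf("pages moved: %d\n", walk_pfn_range(0, NR_PFNS - 1));
	return 0;
}

Run as written, this prints "pages moved: 6": the two hole PFNs are stepped
over by comparing plain integers, which is what removes the unsafe
page_to_pfn(page) dereference seen in the oops above.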