Date: Thu, 25 Feb 2021 16:08:51 -0800
From: Andrew Morton
To: Mike Rapoport
Cc: Andrea Arcangeli, Baoquan He, Borislav Petkov, Chris Wilson,
 David Hildenbrand,
Peter Anvin" , Ingo Molnar , Linus Torvalds , =?UTF-8?Q?=C5=81ukasz?= Majczak , Mel Gorman , Michal Hocko , Mike Rapoport , Qian Cai , "Sarvela, Tomi P" , Thomas Gleixner , Vlastimil Babka , linux-kernel@vger.kernel.org, linux-mm@kvack.org, stable@vger.kernel.org, x86@kernel.org Subject: Re: [PATCH v8 1/1] mm/page_alloc.c: refactor initialization of struct page for holes in memory layout Message-Id: <20210225160851.43b50f0d02f8da958a2b7887@linux-foundation.org> In-Reply-To: <20210225224351.7356-2-rppt@kernel.org> References: <20210225224351.7356-1-rppt@kernel.org> <20210225224351.7356-2-rppt@kernel.org> X-Mailer: Sylpheed 3.5.1 (GTK+ 2.24.31; x86_64-pc-linux-gnu) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: BB348DE X-Stat-Signature: 48tawq1domekf6kj3t3ypdkiiakmkg95 Received-SPF: none (linux-foundation.org>: No applicable sender policy available) receiver=imf04; identity=mailfrom; envelope-from=""; helo=mail.kernel.org; client-ip=198.145.29.99 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1614298132-141048 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Fri, 26 Feb 2021 00:43:51 +0200 Mike Rapoport wrote: > From: Mike Rapoport > > There could be struct pages that are not backed by actual physical memory. > This can happen when the actual memory bank is not a multiple of > SECTION_SIZE or when an architecture does not register memory holes > reserved by the firmware as memblock.memory. > > Such pages are currently initialized using init_unavailable_mem() function > that iterates through PFNs in holes in memblock.memory and if there is a > struct page corresponding to a PFN, the fields of this page are set to > default values and it is marked as Reserved. > > init_unavailable_mem() does not take into account zone and node the page > belongs to and sets both zone and node links in struct page to zero. > > Before commit 73a6e474cb37 ("mm: memmap_init: iterate over memblock regions > rather that check each PFN") the holes inside a zone were re-initialized > during memmap_init() and got their zone/node links right. However, after > that commit nothing updates the struct pages representing such holes. > > On a system that has firmware reserved holes in a zone above ZONE_DMA, for > instance in a configuration below: > > # grep -A1 E820 /proc/iomem > 7a17b000-7a216fff : Unknown E820 type > 7a217000-7bffffff : System RAM > > unset zone link in struct page will trigger > > VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page); > > in set_pfnblock_flags_mask() when called with a struct page from a range > other than E820_TYPE_RAM because there are pages in the range of ZONE_DMA32 > but the unset zone link in struct page makes them appear as a part of > ZONE_DMA. > > Interleave initialization of the unavailable pages with the normal > initialization of memory map, so that zone and node information will be > properly set on struct pages that are not backed by the actual memory. > > With this change the pages for holes inside a zone will get proper > zone/node links and the pages that are not spanned by any node will get > links to the adjacent zone/node. The holes between nodes will be prepended > to the zone/node above the hole and the trailing pages in the last section > that will be appended to the zone/node below. > > ... 
> +#if !defined(CONFIG_FLAT_NODE_MEM_MAP)
> +/*
> + * Only struct pages that correspond to ranges defined by memblock.memory
> + * are zeroed and initialized by going through __init_single_page() during
> + * memmap_init_zone().
> + *
> + * But, there could be struct pages that correspond to holes in
> + * memblock.memory. This can happen because of the following reasons:
> + *  - physical memory bank size is not necessarily the exact multiple of the
> + *    arbitrary section size
> + *  - early reserved memory may not be listed in memblock.memory
> + *  - memory layouts defined with memmap= kernel parameter may not align
> + *    nicely with memmap sections
> + *
> + * Explicitly initialize those struct pages so that:
> + *  - PG_Reserved is set
> + *  - zone and node links point to zone and node that span the page if the
> + *    hole is in the middle of a zone
> + *  - zone and node links point to adjacent zone/node if the hole falls on
> + *    the zone boundary; the pages in such holes will be prepended to the
> + *    zone/node above the hole except for the trailing pages in the last
> + *    section that will be appended to the zone/node below.
> + */

The comment helps a lot.

> void __meminit __weak memmap_init_zone(struct zone *zone)
> {
> 	unsigned long zone_start_pfn = zone->zone_start_pfn;
> 	unsigned long zone_end_pfn = zone_start_pfn + zone->spanned_pages;
> 	int i, nid = zone_to_nid(zone), zone_id = zone_idx(zone);
> +	static unsigned long hole_pfn = 0;

static implies that pgdat->node_zones[] is always sorted in ascending pfn
order.  Always true?

> 	unsigned long start_pfn, end_pfn;
> +	u64 pgcnt = 0;
>
> 	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
> 		start_pfn = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
> @@ -6295,7 +6348,29 @@ void __meminit __weak memmap_init_zone(struct zone *zone)
> 		memmap_init_range(end_pfn - start_pfn, nid,
> 				zone_id, start_pfn, zone_end_pfn,
> 				MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
> +
> +		if (hole_pfn < start_pfn)
> +			pgcnt += init_unavailable_range(hole_pfn, start_pfn,
> +							zone_id, nid);
> +		hole_pfn = end_pfn;
> 	}
> +
> +#ifdef CONFIG_SPARSEMEM
> +	/*
> +	 * Initialize the hole in the range [zone_end_pfn, section_end].
> +	 * If zone boundary falls in the middle of a section, this hole
> +	 * will be re-initialized during the call to this function for the
> +	 * higher zone.
> +	 */
> +	end_pfn = round_up(zone_end_pfn, PAGES_PER_SECTION);
> +	if (hole_pfn < end_pfn)
> +		pgcnt += init_unavailable_range(hole_pfn, end_pfn,
> +						zone_id, nid);
> +#endif
> +
> +	if (pgcnt)
> +		pr_info("  %s zone: %lld pages in unavailable ranges\n",
> +			zone->name, pgcnt);

I'll make that %llu.
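For anyone reading the archive without the rest of the patch in front of
them: init_unavailable_range() itself isn't quoted above.  Going by the
changelog I'd expect it to look roughly like the sketch below -- walk the
hole's pfns, skip pageblocks that have no memmap at all, and give every
present struct page the caller's zone/node links plus PG_reserved.  The
parameter names and the pageblock-skip detail are my guesses, not
necessarily the exact code:

static u64 __meminit init_unavailable_range(unsigned long spfn,
					    unsigned long epfn,
					    int zone, int node)
{
	unsigned long pfn;
	u64 pgcnt = 0;

	for (pfn = spfn; pfn < epfn; pfn++) {
		/* skip whole pageblocks that have no memmap */
		if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
			pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
				+ pageblock_nr_pages - 1;
			continue;
		}
		/* hand the hole page the caller's zone/node links */
		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
		__SetPageReserved(pfn_to_page(pfn));
		pgcnt++;
	}

	return pgcnt;
}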