From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
X-Spam-Level: 
X-Spam-Status: No, score=-14.1 required=3.0 tests=BAYES_00,DKIM_SIGNED,
	DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,
	INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLACK
	autolearn=unavailable autolearn_force=no version=3.4.0
Received: from mail.kernel.org (mail.kernel.org [198.145.29.99])
	by smtp.lore.kernel.org (Postfix) with ESMTP id E3147C4167B
	for ; Sun, 6 Dec 2020 00:54:51 +0000 (UTC)
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by mail.kernel.org (Postfix) with ESMTP id AD7A32313A
	for ; Sun, 6 Dec 2020 00:54:51 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1725995AbgLFAyn (ORCPT );
	Sat, 5 Dec 2020 19:54:43 -0500
Received: from mail.kernel.org ([198.145.29.99]:43788 "EHLO mail.kernel.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1725922AbgLFAyn (ORCPT );
	Sat, 5 Dec 2020 19:54:43 -0500
Date: Sat, 05 Dec 2020 16:54:01 -0800
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=linux-foundation.org; s=korg; t=1607216042;
	bh=D01izbM81DESjHAbmKsAs8ZrWLqmboTck7C/BOVR4XE=;
	h=From:To:Subject:From;
	b=Rq6yKUuvnbpEokXBFTUrjWR6umAMD2tPI9yXD/TSF+rtLgMG67yWhG11R9SPZ1JQN
	 EqaNP8flCp2VvoZahxQ9ohzhQ8DMOhYztUIyFRiX5femgr6jo7g6R+qrCnFYLHXBEp
	 WmkxShFeu05yhTNwBj2PLLb5/y2HWeQ42GVMFbj4=
From: akpm@linux-foundation.org
To: aarcange@redhat.com, bhe@redhat.com, cai@lca.pw, david@redhat.com,
	mgorman@suse.de, mhocko@kernel.org, mm-commits@vger.kernel.org,
	rppt@linux.ibm.com, stable@vger.kernel.org, vbabka@suse.cz
Subject: + mm-initialize-struct-pages-in-reserved-regions-outside-of-the-zone-ranges.patch added to -mm tree
Message-ID: <20201206005401.qKuAVgOXr%akpm@linux-foundation.org>
User-Agent: s-nail v14.8.16
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Precedence: bulk
List-ID: 
X-Mailing-List: stable@vger.kernel.org

The patch titled
     Subject: mm: initialize struct pages in reserved regions outside of the zone ranges
has been added to the -mm tree.  Its filename is
     mm-initialize-struct-pages-in-reserved-regions-outside-of-the-zone-ranges.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-initialize-struct-pages-in-reserved-regions-outside-of-the-zone-ranges.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-initialize-struct-pages-in-reserved-regions-outside-of-the-zone-ranges.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Andrea Arcangeli
Subject: mm: initialize struct pages in reserved regions outside of the zone ranges

Without this change, pfn 0 isn't in any zone spanned range, and it's also
not in any memblock.memory range, so the struct page of pfn 0 wasn't
initialized and PagePoison remained set when reserve_bootmem_region
called __SetPageReserved, inducing a silent boot failure with DEBUG_VM
(and correctly so, because the crash signaled that the nodeid/nid of
pfn 0 would again be wrong).
There's no enforcement that all memblock.reserved ranges must overlap
memblock.memory ranges, so the memblock.reserved ranges also require
explicit initialization, and the zone ranges need to be extended to
include all memblock.reserved ranges with struct pages too, or those
pages will be left uninitialized with PagePoison, as happened to pfn 0.

Link: https://lkml.kernel.org/r/20201205013238.21663-2-aarcange@redhat.com
Fixes: 73a6e474cb37 ("mm: memmap_init: iterate over memblock regions rather that check each PFN")
Signed-off-by: Andrea Arcangeli
Cc: Mike Rapoport
Cc: Baoquan He
Cc: David Hildenbrand
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Qian Cai
Cc: Vlastimil Babka
Cc: 
Signed-off-by: Andrew Morton
---

 include/linux/memblock.h |   17 ++++++---
 mm/debug.c               |    3 +
 mm/memblock.c            |    4 +-
 mm/page_alloc.c          |   57 +++++++++++++++++++++++++++++--------
 4 files changed, 63 insertions(+), 18 deletions(-)

--- a/include/linux/memblock.h~mm-initialize-struct-pages-in-reserved-regions-outside-of-the-zone-ranges
+++ a/include/linux/memblock.h
@@ -251,7 +251,8 @@ static inline bool memblock_is_nomap(str
 int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
			    unsigned long *end_pfn);
 void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
-			  unsigned long *out_end_pfn, int *out_nid);
+			  unsigned long *out_end_pfn, int *out_nid,
+			  struct memblock_type *type);
 
 /**
  * for_each_mem_pfn_range - early memory pfn range iterator
@@ -263,9 +264,17 @@ void __next_mem_pfn_range(int *idx, int
  *
  * Walks over configured memory ranges.
  */
-#define for_each_mem_pfn_range(i, nid, p_start, p_end, p_nid)		\
-	for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid); \
-	     i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))
+#define for_each_mem_pfn_range(i, nid, p_start, p_end, p_nid)		\
+	for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid, \
+					  &memblock.memory);		\
+	     i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid, \
+					  &memblock.memory))
+
+#define for_each_res_pfn_range(i, nid, p_start, p_end, p_nid)		\
+	for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid, \
+					  &memblock.reserved);		\
+	     i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid, \
+					  &memblock.reserved))
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
--- a/mm/debug.c~mm-initialize-struct-pages-in-reserved-regions-outside-of-the-zone-ranges
+++ a/mm/debug.c
@@ -64,7 +64,8 @@ void __dump_page(struct page *page, cons
	 * dump_page() when detected.
	 */
	if (page_poisoned) {
-		pr_warn("page:%px is uninitialized and poisoned", page);
+		pr_warn("page:%px pfn:%ld is uninitialized and poisoned",
+			page, page_to_pfn(page));
		goto hex_only;
	}
 
--- a/mm/memblock.c~mm-initialize-struct-pages-in-reserved-regions-outside-of-the-zone-ranges
+++ a/mm/memblock.c
@@ -1198,9 +1198,9 @@ void __init_memblock __next_mem_range_re
  */
 void __init_memblock __next_mem_pfn_range(int *idx, int nid,
				unsigned long *out_start_pfn,
-				unsigned long *out_end_pfn, int *out_nid)
+				unsigned long *out_end_pfn, int *out_nid,
+				struct memblock_type *type)
 {
-	struct memblock_type *type = &memblock.memory;
	struct memblock_region *r;
	int r_nid;
 
--- a/mm/page_alloc.c~mm-initialize-struct-pages-in-reserved-regions-outside-of-the-zone-ranges
+++ a/mm/page_alloc.c
@@ -1458,6 +1458,7 @@ static void __meminit init_reserved_page
 {
	pg_data_t *pgdat;
	int nid, zid;
+	bool found = false;
 
	if (!early_page_uninitialised(pfn))
		return;
@@ -1468,10 +1469,15 @@ static void __meminit init_reserved_page
	for (zid = 0; zid < MAX_NR_ZONES; zid++) {
		struct zone *zone = &pgdat->node_zones[zid];
 
-		if (pfn >= zone->zone_start_pfn && pfn < zone_end_pfn(zone))
+		if (pfn >= zone->zone_start_pfn && pfn < zone_end_pfn(zone)) {
+			found = true;
			break;
+		}
	}
-	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
+	if (likely(found))
+		__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
+	else
+		WARN_ON_ONCE(1);
 }
 #else
 static inline void init_reserved_page(unsigned long pfn)
@@ -6227,7 +6233,7 @@ void __init __weak memmap_init(unsigned
					  unsigned long zone,
					  unsigned long range_start_pfn)
 {
-	unsigned long start_pfn, end_pfn, next_pfn = 0;
+	unsigned long start_pfn, end_pfn, prev_pfn = 0;
	unsigned long range_end_pfn = range_start_pfn + size;
	u64 pgcnt = 0;
	int i;
@@ -6235,7 +6241,7 @@ void __init __weak memmap_init(unsigned
	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
		start_pfn = clamp(start_pfn, range_start_pfn, range_end_pfn);
		end_pfn = clamp(end_pfn, range_start_pfn, range_end_pfn);
-		next_pfn = clamp(next_pfn, range_start_pfn, range_end_pfn);
+		prev_pfn = clamp(prev_pfn, range_start_pfn, range_end_pfn);
 
		if (end_pfn > start_pfn) {
			size = end_pfn - start_pfn;
@@ -6243,10 +6249,10 @@ void __init __weak memmap_init(unsigned
					 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
		}
 
-		if (next_pfn < start_pfn)
-			pgcnt += init_unavailable_range(next_pfn, start_pfn,
+		if (prev_pfn < start_pfn)
+			pgcnt += init_unavailable_range(prev_pfn, start_pfn,
							zone, nid);
-		next_pfn = end_pfn;
+		prev_pfn = end_pfn;
	}
 
	/*
@@ -6256,12 +6262,31 @@ void __init __weak memmap_init(unsigned
	 * considered initialized. Make sure that memmap has a well defined
	 * state.
	 */
-	if (next_pfn < range_end_pfn)
-		pgcnt += init_unavailable_range(next_pfn, range_end_pfn,
+	if (prev_pfn < range_end_pfn)
+		pgcnt += init_unavailable_range(prev_pfn, range_end_pfn,
						zone, nid);
 
+	/*
+	 * memblock.reserved isn't enforced to overlap with
+	 * memblock.memory so initialize the struct pages for
+	 * memblock.reserved too in case it wasn't overlapping.
+	 *
+	 * If any struct page associated with a memblock.reserved
+	 * range isn't overlapping with a zone range, it'll be left
+	 * uninitialized, ideally with PagePoison, and it'll be a more
+	 * easily detectable error.
+	 */
+	for_each_res_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
+		start_pfn = clamp(start_pfn, range_start_pfn, range_end_pfn);
+		end_pfn = clamp(end_pfn, range_start_pfn, range_end_pfn);
+
+		if (end_pfn > start_pfn)
+			pgcnt += init_unavailable_range(start_pfn, end_pfn,
+							zone, nid);
+	}
+
	if (pgcnt)
-		pr_info("%s: Zeroed struct page in unavailable ranges: %lld\n",
+		pr_info("%s: pages in unavailable ranges: %lld\n",
			zone_names[zone], pgcnt);
 }
 
@@ -6499,6 +6524,10 @@ void __init get_pfn_range_for_nid(unsign
		*start_pfn = min(*start_pfn, this_start_pfn);
		*end_pfn = max(*end_pfn, this_end_pfn);
	}
+	for_each_res_pfn_range(i, nid, &this_start_pfn, &this_end_pfn, NULL) {
+		*start_pfn = min(*start_pfn, this_start_pfn);
+		*end_pfn = max(*end_pfn, this_end_pfn);
+	}
 
	if (*start_pfn == -1UL)
		*start_pfn = 0;
@@ -7126,7 +7155,13 @@ unsigned long __init node_map_pfn_alignm
  */
 unsigned long __init find_min_pfn_with_active_regions(void)
 {
-	return PHYS_PFN(memblock_start_of_DRAM());
+	/*
+	 * reserved regions must be included so that their page
+	 * structure can be part of a zone and obtain a valid zoneid
+	 * before __SetPageReserved().
+	 */
+	return min(PHYS_PFN(memblock_start_of_DRAM()),
+		   PHYS_PFN(memblock.reserved.regions[0].base));
 }
 
 /*
_

Patches currently in -mm which might be from aarcange@redhat.com are

mm-initialize-struct-pages-in-reserved-regions-outside-of-the-zone-ranges.patch
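
As an aside, the effect of the extra reserved-range walk in memmap_init()
can be modelled in a few lines of userspace C.  This is only an
illustrative sketch, not kernel code: the pfn ranges below are made up,
"memory" stands in for memblock.memory, "reserved" stands in for
memblock.reserved, and mark_initialized() stands in for the struct page
initialization done by memmap_init_zone()/init_unavailable_range().  It
shows that a reserved pfn such as pfn 0, lying outside every memory
range, is only covered once the second, reserved-range pass runs.

#include <stdio.h>

struct range { unsigned long start, end; };	/* [start, end) in pfns */

/* hypothetical layout: pfn 0 is reserved but not covered by "memory" */
static const struct range memory[]   = { { 1, 96 }, { 128, 256 } };
static const struct range reserved[] = { { 0, 1 }, { 200, 208 } };

#define NR(a) (sizeof(a) / sizeof((a)[0]))

static void mark_initialized(unsigned long start, unsigned long end,
			     unsigned char *initialized)
{
	for (unsigned long pfn = start; pfn < end; pfn++)
		initialized[pfn] = 1;
}

int main(void)
{
	unsigned char initialized[256] = { 0 };
	unsigned long i;

	/* pass 1: models the for_each_mem_pfn_range() walk */
	for (i = 0; i < NR(memory); i++)
		mark_initialized(memory[i].start, memory[i].end, initialized);

	printf("pfn 0 after memory walk:   %d\n", initialized[0]);

	/* pass 2: models the added for_each_res_pfn_range() walk */
	for (i = 0; i < NR(reserved); i++)
		mark_initialized(reserved[i].start, reserved[i].end,
				 initialized);

	printf("pfn 0 after reserved walk: %d\n", initialized[0]);
	return 0;
}

Built with a plain cc invocation, this prints 0 for pfn 0 after the first
pass and 1 after the second, mirroring why the patch adds the second walk.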