From: Jia He <hejianet@gmail.com>
To: Andrew Morton, Michal Hocko, Catalin Marinas, Mel Gorman,
	Will Deacon, Mark Rutland, Thomas Gleixner, Ingo Molnar,
	"H. Peter Anvin"
Cc: Pavel Tatashin, Daniel Jordan, AKASHI Takahiro, Gioh Kim,
	Steven Sistare, Daniel Vacek, Eugeniu Rosca, Vlastimil Babka,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, James Morse,
	Ard Biesheuvel, Steve Capper, x86@kernel.org, Greg Kroah-Hartman,
	Kate Stewart, Philippe Ombredanne, Johannes Weiner, Kemi Wang,
	Petr Tesarik, YASUAKI ISHIMATSU, Andrey Ryabinin, Nikolay Borisov,
	Jia He, Jia He
Subject: [PATCH v2 1/5] mm: page_alloc: retain memblock_next_valid_pfn() when CONFIG_HAVE_ARCH_PFN_VALID is enabled
Date: Sat, 24 Mar 2018 05:24:38 -0700
Message-Id: <1521894282-6454-2-git-send-email-hejianet@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1521894282-6454-1-git-send-email-hejianet@gmail.com>
References: <1521894282-6454-1-git-send-email-hejianet@gmail.com>

Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
where possible") optimized the loop in memmap_init_zone(), but it
introduced a possible panic, so Daniel Vacek later reverted it.

However, memblock_next_valid_pfn() is still valid when
CONFIG_HAVE_ARCH_PFN_VALID is enabled, and as verified by Eugeniu Rosca,
arm can benefit from this optimization. So retain
memblock_next_valid_pfn() for that configuration.
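For reviewers, a minimal userspace sketch of the lookup is included
below. It mirrors the binary search added to mm/memblock.c: regions[]
stands in for the sorted, non-overlapping memblock.memory region array,
and the bank layout, PAGE_SHIFT value and test pfn are made-up values
for illustration only, not kernel code.

/*
 * Standalone sketch (assumed 4K pages, two invented memory banks with
 * a 1GB hole between them); mirrors memblock_next_valid_pfn().
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PFN_PHYS(pfn)	((unsigned long long)(pfn) << PAGE_SHIFT)
#define PHYS_PFN(addr)	((unsigned long)((addr) >> PAGE_SHIFT))

struct region {
	unsigned long long base, size;
};

/* Sorted and non-overlapping, like memblock.memory.regions[] */
static const struct region regions[] = {
	{ 0x00000000ULL, 0x40000000ULL },	/* bank 0: 0..1GB */
	{ 0x80000000ULL, 0x40000000ULL },	/* bank 1: 2GB..3GB */
};
static const unsigned int cnt = 2;

/*
 * Return the pfn after @pfn if it is backed by a region, else the
 * first pfn of the next region, or -1UL when no region follows.
 */
static unsigned long next_valid_pfn(unsigned long pfn)
{
	unsigned int right = cnt, left = 0, mid;
	unsigned long long addr = PFN_PHYS(++pfn);

	do {
		mid = (right + left) / 2;
		if (addr < regions[mid].base)
			right = mid;		/* search lower half */
		else if (addr >= regions[mid].base + regions[mid].size)
			left = mid + 1;		/* search upper half */
		else
			return pfn;		/* inside a region: valid */
	} while (left < right);

	if (right == cnt)
		return -1UL;			/* past the last region */
	return PHYS_PFN(regions[right].base);	/* start of next region */
}

int main(void)
{
	/* pfn 0x3ffff is the last page of bank 0; expect 0x80000 */
	printf("next valid pfn: 0x%lx\n", next_valid_pfn(0x3ffff));
	return 0;
}

With these banks, pfn 0x40000 falls in the hole, so the search exits
with right pointing at bank 1 and returns its first pfn, 0x80000.
memmap_init_zone() then sets pfn to that value minus one so the
for-loop increment lands exactly on the next valid pfn.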
Signed-off-by: Jia He
---
 include/linux/memblock.h |  4 ++++
 mm/memblock.c            | 29 +++++++++++++++++++++++++++++
 mm/page_alloc.c          | 11 ++++++++++-
 3 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 0257aee..efbbe4b 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -203,6 +203,10 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
 	     i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))
 #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 
+#ifdef CONFIG_HAVE_ARCH_PFN_VALID
+unsigned long memblock_next_valid_pfn(unsigned long pfn);
+#endif
+
 /**
  * for_each_free_mem_range - iterate through free memblock areas
  * @i: u64 used as loop variable
diff --git a/mm/memblock.c b/mm/memblock.c
index ba7c878..bea5a9c 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1102,6 +1102,35 @@ void __init_memblock __next_mem_pfn_range(int *idx, int nid,
 	*out_nid = r->nid;
 }
 
+#ifdef CONFIG_HAVE_ARCH_PFN_VALID
+unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn)
+{
+	struct memblock_type *type = &memblock.memory;
+	unsigned int right = type->cnt;
+	unsigned int mid, left = 0;
+	phys_addr_t addr = PFN_PHYS(++pfn);
+
+	do {
+		mid = (right + left) / 2;
+
+		if (addr < type->regions[mid].base)
+			right = mid;
+		else if (addr >= (type->regions[mid].base +
+				  type->regions[mid].size))
+			left = mid + 1;
+		else {
+			/* addr is within the region, so pfn is valid */
+			return pfn;
+		}
+	} while (left < right);
+
+	if (right == type->cnt)
+		return -1UL;
+	else
+		return PHYS_PFN(type->regions[right].base);
+}
+#endif /*CONFIG_HAVE_ARCH_PFN_VALID*/
+
 /**
  * memblock_set_node - set node ID on memblock regions
  * @base: base of area to set node ID for
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c19f5ac..2a967f7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5483,8 +5483,17 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		if (context != MEMMAP_EARLY)
 			goto not_early;
 
-		if (!early_pfn_valid(pfn))
+		if (!early_pfn_valid(pfn)) {
+#if (defined CONFIG_HAVE_MEMBLOCK) && (defined CONFIG_HAVE_ARCH_PFN_VALID)
+			/*
+			 * Skip to the pfn preceding the next valid one (or
+			 * end_pfn), such that we hit a valid pfn (or end_pfn)
+			 * on our next iteration of the loop.
+			 */
+			pfn = memblock_next_valid_pfn(pfn) - 1;
+#endif
 			continue;
+		}
 		if (!early_pfn_in_nid(pfn, nid))
 			continue;
 		if (!update_defer_init(pgdat, pfn, end_pfn, &nr_initialised))
-- 
2.7.4