From mboxrd@z Thu Jan  1 00:00:00 1970
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v11 4/5] arm64: kdump: add memory for devices by DT property linux,usable-memory-range
Date: Sat, 1 Aug 2020 21:08:55 +0800
Message-ID: <20200801130856.86625-5-chenzhou10@huawei.com>
In-Reply-To: <20200801130856.86625-1-chenzhou10@huawei.com>
References: <20200801130856.86625-1-chenzhou10@huawei.com>

When reserving crashkernel in high memory, some low memory is reserved
for crash dump kernel devices and is never mapped by the first kernel.
This memory range is advertised to the crash dump kernel via the DT
property under /chosen:

	linux,usable-memory-range = <BASE1 SIZE1 [BASE2 SIZE2]>

We reuse the DT property linux,usable-memory-range and make the low
memory region the second range "BASE2 SIZE2", which keeps compatibility
with existing user-space and older kdump kernels.

The crash dump kernel reads this property at boot time and calls
memblock_add() to add the low memory region after
memblock_cap_memory_range() has been called.

Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 arch/arm64/mm/init.c | 44 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 34 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 53c8916fd32f..f385a8281d1b 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -69,6 +69,16 @@ EXPORT_SYMBOL(vmemmap);
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 phys_addr_t arm64_dma32_phys_limit __ro_after_init;
 
+/*
+ * The main usage of linux,usable-memory-range is for the crash dump kernel.
+ * Originally there was only one usable-memory region. Now there may be two
+ * regions, a low region and a high region.
+ * To keep compatibility with existing user-space and older kdump kernels,
+ * the low region, if it exists, is always the last range.
+ */
+#define MAX_USABLE_RANGES	2
+
+
 #ifdef CONFIG_KEXEC_CORE
 
 /*
@@ -286,9 +296,9 @@ early_param("mem", early_mem);
 static int __init early_init_dt_scan_usablemem(unsigned long node,
 		const char *uname, int depth, void *data)
 {
-	struct memblock_region *usablemem = data;
-	const __be32 *reg;
-	int len;
+	struct memblock_region *usable_rgns = data;
+	const __be32 *reg, *endp;
+	int len, nr = 0;
 
 	if (depth != 1 || strcmp(uname, "chosen") != 0)
 		return 0;
@@ -297,22 +307,36 @@ static int __init early_init_dt_scan_usablemem(unsigned long node,
 	if (!reg || (len < (dt_root_addr_cells + dt_root_size_cells)))
 		return 1;
 
-	usablemem->base = dt_mem_next_cell(dt_root_addr_cells, &reg);
-	usablemem->size = dt_mem_next_cell(dt_root_size_cells, &reg);
+	endp = reg + (len / sizeof(__be32));
+	while ((endp - reg) >= (dt_root_addr_cells + dt_root_size_cells)) {
+		usable_rgns[nr].base = dt_mem_next_cell(dt_root_addr_cells, &reg);
+		usable_rgns[nr].size = dt_mem_next_cell(dt_root_size_cells, &reg);
+
+		if (++nr >= MAX_USABLE_RANGES)
+			break;
+	}
 
 	return 1;
 }
 
 static void __init fdt_enforce_memory_region(void)
 {
-	struct memblock_region reg = {
-		.size = 0,
+	struct memblock_region usable_rgns[MAX_USABLE_RANGES] = {
+		{ .size = 0 },
+		{ .size = 0 }
 	};
 
-	of_scan_flat_dt(early_init_dt_scan_usablemem, &reg);
+	of_scan_flat_dt(early_init_dt_scan_usablemem, &usable_rgns);
 
-	if (reg.size)
-		memblock_cap_memory_range(reg.base, reg.size);
+	/*
+	 * With one region, the first range is all of the crash dump
+	 * kernel's usable memory; with two regions, it is the high region
+	 * and the second range is dedicated to the low region, if it exists.
+	 */
+	if (usable_rgns[0].size)
+		memblock_cap_memory_range(usable_rgns[0].base, usable_rgns[0].size);
+	if (usable_rgns[1].size)
+		memblock_add(usable_rgns[1].base, usable_rgns[1].size);
 }
 
 void __init arm64_memblock_init(void)
-- 
2.20.1
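
As a rough, self-contained illustration of the two-range parsing that
early_init_dt_scan_usablemem() performs above, the userspace sketch
below decodes a flattened "BASE1 SIZE1 BASE2 SIZE2" cell array into up
to MAX_USABLE_RANGES base/size pairs. It is only a sketch: the 2/2
address/size cell counts, the example addresses, and the next_cell()
helper (a stand-in for the kernel's dt_mem_next_cell()) are assumptions
for illustration, not part of the patch.

/*
 * Standalone userspace sketch of the two-range parse. The cell counts
 * and example addresses below are assumptions chosen only to show the
 * mechanism; next_cell() stands in for the kernel's dt_mem_next_cell().
 */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_USABLE_RANGES	2

/* Decode 'cells' big-endian 32-bit cells into one value, advancing the cursor. */
static uint64_t next_cell(int cells, const uint32_t **cellp)
{
	uint64_t v = 0;

	while (cells--)
		v = (v << 32) | ntohl(*(*cellp)++);
	return v;
}

int main(void)
{
	/* "BASE1 SIZE1 BASE2 SIZE2" as the cells would appear in the FDT. */
	const uint32_t prop[] = {
		/* hypothetical high region: base 0x20_8000_0000, size 0x1000_0000 */
		htonl(0x20), htonl(0x80000000), htonl(0x0), htonl(0x10000000),
		/* hypothetical low region: base 0x6000_0000, size 0x1000_0000 */
		htonl(0x0), htonl(0x60000000), htonl(0x0), htonl(0x10000000),
	};
	const int addr_cells = 2, size_cells = 2;	/* assumed #address/#size-cells */
	const uint32_t *reg = prop;
	const uint32_t *endp = prop + sizeof(prop) / sizeof(prop[0]);
	struct { uint64_t base, size; } rgn[MAX_USABLE_RANGES];
	int nr = 0;

	/* Same shape as the loop added to early_init_dt_scan_usablemem(). */
	while (endp - reg >= addr_cells + size_cells) {
		rgn[nr].base = next_cell(addr_cells, &reg);
		rgn[nr].size = next_cell(size_cells, &reg);

		if (++nr >= MAX_USABLE_RANGES)
			break;
	}

	/* rgn[0] would cap the usable memory; rgn[1], if present, is re-added. */
	for (int i = 0; i < nr; i++)
		printf("range %d: base=%#llx size=%#llx\n", i,
		       (unsigned long long)rgn[i].base,
		       (unsigned long long)rgn[i].size);
	return 0;
}

Built with a plain C compiler, this prints the high range first and the
low range second, which is the ordering fdt_enforce_memory_region()
relies on: the first range caps the usable memory and the second, if
present, is added back with memblock_add().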