From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org,
	"H. Peter Anvin", linux-kernel@vger.kernel.org, Dave Young, Baoquan He,
	Vivek Goyal, Eric Biederman, kexec@lists.infradead.org, Catalin Marinas,
	Will Deacon, linux-arm-kernel@lists.infradead.org, Rob Herring,
	Frank Rowand, devicetree@vger.kernel.org, Jonathan Corbet,
	linux-doc@vger.kernel.org
Cc: Zhen Lei, Randy Dunlap, Feng Zhou, Kefeng Wang, Chen Zhou, John Donnelly
Subject: [PATCH v19 07/13] kdump: Add helper reserve_crashkernel_mem[_low]()
Date: Tue, 28 Dec 2021 21:26:06 +0800
Message-ID: <20211228132612.1860-8-thunder.leizhen@huawei.com>
In-Reply-To: <20211228132612.1860-1-thunder.leizhen@huawei.com>
References: <20211228132612.1860-1-thunder.leizhen@huawei.com>

Add helper reserve_crashkernel_mem[_low]() to reserve high and low memory
for the crash kernel. The implementation of these two functions is based
on reserve_crashkernel[_low]() in arch/x86/kernel/setup.c, with the
following adaptations:

1. To avoid compilation problems on other architectures, provide default
   values for the macros CRASH[_BASE]_ALIGN and CRASH_ADDR_{LOW|HIGH}_MAX,
   and add the new macro CRASH_LOW_SIZE_MIN.
2. Only the code that reserves crash memory is extracted from
   reserve_crashkernel(); calls such as parse_crashkernel() and
   insert_resource() are excluded.
3. Change "return;" in reserve_crashkernel() to "return -ENOMEM;".
4. Call reserve_crashkernel_mem_low() instead of reserve_crashkernel_low().
5. Change CONFIG_X86_64 to CONFIG_64BIT.
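For illustration, here is a minimal sketch of how an architecture's setup
code might drive the new helpers. It is not part of this patch:
arch_reserve_crashkernel() and the exact fallback policy shown are
assumptions; only parse_crashkernel(), parse_crashkernel_high_low(),
reserve_crashkernel_mem() and the crashk[_low]_res resources come from the
kernel and this series.

/*
 * Illustrative sketch only (not part of this patch): a possible
 * arch-side caller of the new helpers.
 */
static void __init arch_reserve_crashkernel(void)
{
	unsigned long long crash_base = 0, crash_size = 0;
	unsigned long long high_size = 0, low_size = -1ULL;
	unsigned long long system_ram = memblock_phys_mem_size();
	bool high = false;
	int ret;

	/* "crashkernel=X[@offset]" takes precedence ... */
	ret = parse_crashkernel(boot_command_line, system_ram,
				&crash_size, &crash_base);
	if (ret || !crash_size) {
		/* ... otherwise try "crashkernel=X,high" [+ "crashkernel=Y,low"] */
		if (parse_crashkernel_high_low(boot_command_line,
					       &high_size, &low_size))
			return;
		crash_size = high_size;
		crash_base = 0;
		high = true;
	}

	if (reserve_crashkernel_mem(system_ram, crash_size, crash_base,
				    low_size, high))
		return;

	/* per adaptation 2 above, resource registration stays in arch code */
	insert_resource(&iomem_resource, &crashk_res);
	if (crashk_low_res.end)
		insert_resource(&iomem_resource, &crashk_low_res);
}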
Signed-off-by: Zhen Lei
---
 include/linux/crash_core.h |   6 ++
 kernel/crash_core.c        | 154 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 159 insertions(+), 1 deletion(-)

diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
index f5437c9c9411fce..2e19632f8d45a60 100644
--- a/include/linux/crash_core.h
+++ b/include/linux/crash_core.h
@@ -86,5 +86,11 @@ int __init parse_crashkernel(char *cmdline, unsigned long long system_ram,
 int __init parse_crashkernel_high_low(char *cmdline,
 				      unsigned long long *high_size,
 				      unsigned long long *low_size);
+int __init reserve_crashkernel_mem_low(unsigned long long low_size);
+int __init reserve_crashkernel_mem(unsigned long long system_ram,
+				   unsigned long long crash_size,
+				   unsigned long long crash_base,
+				   unsigned long long low_size,
+				   bool high);
 
 #endif /* LINUX_CRASH_CORE_H */
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 686d8a65e12a337..4bd30098534a184 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -5,7 +5,9 @@
  */
 
 #include
-#include
+#include
+#include
+#include
 #include
 #include
 
@@ -345,6 +347,156 @@ int __init parse_crashkernel_high_low(char *cmdline,
 	return 0;
 }
 
+/* alignment for crash kernel dynamic regions */
+#ifndef CRASH_ALIGN
+#define CRASH_ALIGN		SZ_2M
+#endif
+
+/* alignment for crash kernel fixed region */
+#ifndef CRASH_BASE_ALIGN
+#define CRASH_BASE_ALIGN	SZ_2M
+#endif
+
+/* upper bound for crash low memory */
+#ifndef CRASH_ADDR_LOW_MAX
+#ifdef CONFIG_PHYS_ADDR_T_64BIT
+#define CRASH_ADDR_LOW_MAX	SZ_4G
+#else
+#define CRASH_ADDR_LOW_MAX	MEMBLOCK_ALLOC_ACCESSIBLE
+#endif
+#endif
+
+/* upper bound for crash high memory */
+#ifndef CRASH_ADDR_HIGH_MAX
+#define CRASH_ADDR_HIGH_MAX	MEMBLOCK_ALLOC_ACCESSIBLE
+#endif
+
+#ifdef CONFIG_SWIOTLB
+/*
+ * two parts from kernel/dma/swiotlb.c:
+ * -swiotlb size: user-specified with swiotlb= or default.
+ *
+ * -swiotlb overflow buffer: now hardcoded to 32k. We round it
+ *  to 8M for other buffers that may need to stay low too. Also
+ *  make sure we allocate enough extra low memory so that we
+ *  don't run out of DMA buffers for 32-bit devices.
+ */
+#define CRASH_LOW_SIZE_MIN	max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20)
+#else
+#define CRASH_LOW_SIZE_MIN	(256UL << 20)
+#endif
+
+/**
+ * reserve_crashkernel_mem_low - Reserve crash kernel low memory.
+ *
+ * @low_size:	The memory size specified by "crashkernel=Y,low" or "-1"
+ *		if it's not specified.
+ *
+ * Returns 0 on success, else a negative status code.
+ */
+int __init reserve_crashkernel_mem_low(unsigned long long low_size)
+{
+#ifdef CONFIG_64BIT
+	unsigned long long low_base = 0;
+	unsigned long low_mem_limit;
+
+	low_mem_limit = min(memblock_phys_mem_size(), CRASH_ADDR_LOW_MAX);
+
+	/* crashkernel=Y,low is not specified */
+	if ((long)low_size < 0) {
+		low_size = CRASH_LOW_SIZE_MIN;
+	} else {
+		/* passed with crashkernel=0,low ? */
+		if (!low_size)
+			return 0;
+	}
+
+	low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, 0, CRASH_ADDR_LOW_MAX);
+	if (!low_base) {
+		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
+		       (unsigned long)(low_size >> 20));
+		return -ENOMEM;
+	}
+
+	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (low RAM limit: %ldMB)\n",
+		(unsigned long)(low_size >> 20),
+		(unsigned long)(low_base >> 20),
+		(unsigned long)(low_mem_limit >> 20));
+
+	crashk_low_res.start = low_base;
+	crashk_low_res.end = low_base + low_size - 1;
+#endif
+	return 0;
+}
+
+/**
+ * reserve_crashkernel_mem - Reserve crash kernel memory.
+ *
+ * @system_ram:	Total system memory size.
+ * @crash_size:	The memory size specified by "crashkernel=X[@offset]" or
+ *		"crashkernel=X,high".
+ * @crash_base:	The base address specified by "crashkernel=X@offset"
+ * @low_size:	The memory size specified by "crashkernel=Y,low" or "-1"
+ *		if it's not specified.
+ * @high:	Whether "crashkernel=X,high" is specified.
+ *
+ * Returns 0 on success, else a negative status code.
+ */
+int __init reserve_crashkernel_mem(unsigned long long system_ram,
+				   unsigned long long crash_size,
+				   unsigned long long crash_base,
+				   unsigned long long low_size,
+				   bool high)
+{
+	/* 0 means: find the address automatically */
+	if (!crash_base) {
+		/*
+		 * Set CRASH_ADDR_LOW_MAX upper bound for crash memory,
+		 * crashkernel=x,high reserves memory over 4G, also allocates
+		 * 256M extra low memory for DMA buffers and swiotlb.
+		 * But the extra memory is not required for all machines.
+		 * So try low memory first and fall back to high memory
+		 * unless "crashkernel=size[KMG],high" is specified.
+		 */
+		if (!high)
+			crash_base = memblock_phys_alloc_range(crash_size,
						CRASH_ALIGN, CRASH_ALIGN,
						CRASH_ADDR_LOW_MAX);
+		if (!crash_base)
+			crash_base = memblock_phys_alloc_range(crash_size,
						CRASH_ALIGN, CRASH_ALIGN,
						CRASH_ADDR_HIGH_MAX);
+		if (!crash_base) {
+			pr_info("crashkernel reservation failed - No suitable area found.\n");
+			return -ENOMEM;
+		}
+	} else {
+		unsigned long long start;
+
+		start = memblock_phys_alloc_range(crash_size, CRASH_BASE_ALIGN, crash_base,
+						  crash_base + crash_size);
+		if (start != crash_base) {
+			pr_info("crashkernel reservation failed - memory is in use.\n");
+			return -ENOMEM;
+		}
+	}
+
+	if (crash_base >= (1ULL << 32) && reserve_crashkernel_mem_low(low_size)) {
+		memblock_phys_free(crash_base, crash_size);
+		return -ENOMEM;
+	}
+
+	pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
+		(unsigned long)(crash_size >> 20),
+		(unsigned long)(crash_base >> 20),
+		(unsigned long)(system_ram >> 20));
+
+	crashk_res.start = crash_base;
+	crashk_res.end = crash_base + crash_size - 1;
+
+	return 0;
+}
+
 Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
 			  void *data, size_t data_len)
 {
-- 
2.25.1
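A side note on the default low-memory sizing added above: CRASH_LOW_SIZE_MIN
takes the larger of the swiotlb requirement plus 8M of slack and a 256M
floor. A rough worked example, assuming the usual 64MB swiotlb default (the
real value depends on the platform and any swiotlb= override):

	swiotlb_size_or_default()        ~  64 MiB   (assumed default)
	64 MiB + 8 MiB low-buffer slack  =  72 MiB
	max(72 MiB, 256 MiB)             = 256 MiB

So unless the swiotlb is configured larger than 248 MiB, a
"crashkernel=X,high" reservation without an explicit "crashkernel=Y,low"
still ends up with the familiar 256 MiB low-memory region.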