From: Mike Rapoport
To: x86@kernel.org
Cc: "H. Peter Anvin", Andy Lutomirski, Andy Shevchenko, Ard Biesheuvel,
	Baoquan He, Borislav Petkov, Darren Hart, Dave Young, Hugh Dickins,
	Ingo Molnar, Jonathan Corbet, Lianbo Jiang, Mike Rapoport,
	Mike Rapoport, Randy Dunlap, Thomas Gleixner,
	linux-doc@vger.kernel.org, linux-efi@vger.kernel.org,
	linux-kernel@vger.kernel.org, platform-driver-x86@vger.kernel.org
Subject: [PATCH 1/3] x86/setup: always reserve the first 1M of RAM
Date: Tue, 1 Jun 2021 10:53:52 +0300
Message-Id: <20210601075354.5149-2-rppt@kernel.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20210601075354.5149-1-rppt@kernel.org>
References: <20210601075354.5149-1-rppt@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Mike Rapoport

There are BIOSes that are known to corrupt the memory under 1M, or more
precisely under 640K, because the memory above 640K is anyway reserved for
the EGA/VGA frame buffer and BIOS.

To prevent the kernel from using memory that may be clobbered by BIOS, the
beginning of the memory is always reserved. The exact size of the reserved
area is determined by the CONFIG_X86_RESERVE_LOW build time option and the
"reservelow" command line option. The reserved range may be from 4K to
640K, with a default of 64K. There are also configurations that reserve
the entire 1M range, like machines with SandyBridge graphics devices or
systems that enable crash kernel.

In addition to the potentially clobbered memory, the EBDA, whose size is
unknown, may start as low as 128K, and the memory above the EBDA start is
also reserved early.

It would have been possible to reserve the entire range under 1M if not
for the real mode trampoline that must reside in that area.
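[ For reference, a minimal sketch of the pre-patch mechanism the two
  paragraphs above describe, paraphrased from arch/x86/kernel/setup.c;
  the identifiers below come from the existing tree, not from this
  patch, and are shown only for illustration: ]

	/* Default reservation size, in bytes, from the Kconfig option. */
	static unsigned long reserve_low = CONFIG_X86_RESERVE_LOW << 10;

	static int __init parse_reservelow(char *p)
	{
		unsigned long long size;

		if (!p)
			return -EINVAL;

		size = memparse(p, &p);

		/* Clamp the "reservelow=" value to the 4K..640K range. */
		if (size < 4096)
			size = 4096;
		if (size > 640 * 1024)
			size = 640 * 1024;

		reserve_low = size;
		return 0;
	}
	early_param("reservelow", parse_reservelow);

	/* Later, during early boot, the clamped size is reserved: */
	memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));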
To accommodate placement of the real mode trampoline and keep the memory
safe from being clobbered by BIOS, reserve the first 64K of RAM before
memory allocations are possible, and then, after the real mode trampoline
is allocated, reserve the entire range from 0 to 1M.

Update trim_snb_memory() and reserve_real_mode() to avoid redundant
reservations of the same memory range.

Also make sure the memory under 1M is not getting freed by
efi_free_boot_services().

Fixes: a799c2bd29d1 ("x86/setup: Consolidate early memory reservations")
Signed-off-by: Mike Rapoport
---
 arch/x86/kernel/setup.c        | 35 ++++++++++++++++++++--------------
 arch/x86/platform/efi/quirks.c | 12 ++++++++++++
 arch/x86/realmode/init.c       | 14 ++++++++------
 3 files changed, 41 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 72920af0b3c0..22e9a17d6ac3 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -637,11 +637,11 @@ static void __init trim_snb_memory(void)
 	 * them from accessing certain memory ranges, namely anything below
 	 * 1M and in the pages listed in bad_pages[] above.
 	 *
-	 * To avoid these pages being ever accessed by SNB gfx devices
-	 * reserve all memory below the 1 MB mark and bad_pages that have
-	 * not already been reserved at boot time.
+	 * To avoid these pages being ever accessed by SNB gfx devices reserve
+	 * bad_pages that have not already been reserved at boot time.
+	 * All memory below the 1 MB mark is anyway reserved later during
+	 * setup_arch(), so there is no need to reserve it here.
 	 */
-	memblock_reserve(0, 1<<20);
 
 	for (i = 0; i < ARRAY_SIZE(bad_pages); i++) {
 		if (memblock_reserve(bad_pages[i], PAGE_SIZE))
@@ -733,14 +733,14 @@ static void __init early_reserve_memory(void)
 	 * The first 4Kb of memory is a BIOS owned area, but generally it is
 	 * not listed as such in the E820 table.
 	 *
-	 * Reserve the first memory page and typically some additional
-	 * memory (64KiB by default) since some BIOSes are known to corrupt
-	 * low memory. See the Kconfig help text for X86_RESERVE_LOW.
+	 * Reserve the first 64K of memory since some BIOSes are known to
+	 * corrupt low memory. After the real mode trampoline is allocated the
+	 * rest of the memory below 640k is reserved.
 	 *
 	 * In addition, make sure page 0 is always reserved because on
 	 * systems with L1TF its contents can be leaked to user processes.
 	 */
-	memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
+	memblock_reserve(0, SZ_64K);
 
 	early_reserve_initrd();
 
@@ -751,6 +751,7 @@ static void __init early_reserve_memory(void)
 	reserve_ibft_region();
 
 	reserve_bios_regions();
+	trim_snb_memory();
 }
 
 /*
@@ -1081,14 +1082,20 @@ void __init setup_arch(char **cmdline_p)
 			(max_pfn_mapped<
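[ The quoted diff is cut off before the reserve_real_mode() and
  efi_free_boot_services() hunks, so here is a minimal sketch of the
  resulting two-stage reservation, assembled from the commit message;
  the exact statements below are an assumption, not the literal hunks: ]

	static void __init early_reserve_memory(void)
	{
		/* Stage 1: protect the first 64K before any memblock allocation. */
		memblock_reserve(0, SZ_64K);

		/* ... the other early reservations follow here ... */
	}

	void __init reserve_real_mode(void)
	{
		phys_addr_t mem;
		size_t size = real_mode_size_needed();

		/* The trampoline runs in real mode, so it must sit below 1M ... */
		mem = memblock_phys_alloc_range(size, PAGE_SIZE, 0, 1 << 20);
		if (!mem)
			pr_info("No sub-1M memory is available for the trampoline\n");
		else
			set_real_mode_mem(mem);

		/* Stage 2: ... and only then is the whole first 1M reserved. */
		memblock_reserve(0, SZ_1M);
	}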