From: Mike Rapoport
To: Andrew Morton
Cc: Alexey Dobriyan, Catalin Marinas, Geert Uytterhoeven, Greg Ungerer,
    John Paul Adrian Glaubitz, Jonathan Corbet, Matt Turner, Meelis Roos,
    Michael Schmitz, Mike Rapoport, Mike Rapoport, Russell King, Tony Luck,
    Vineet Gupta, Will Deacon, linux-alpha@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-ia64@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
    linux-mm@kvack.org,
    linux-snps-arc@lists.infradead.org
Subject: [PATCH 06/13] ia64: forbid using VIRTUAL_MEM_MAP with FLATMEM
Date: Tue, 27 Oct 2020 13:29:48 +0200
Message-Id: <20201027112955.14157-7-rppt@kernel.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201027112955.14157-1-rppt@kernel.org>
References: <20201027112955.14157-1-rppt@kernel.org>

From: Mike Rapoport

The virtual memory map was intended to avoid wasting memory on the memory
map on systems with large holes in the physical memory layout. Long ago it
was superseded, first by DISCONTIGMEM and then by SPARSEMEM. Moreover,
SPARSEMEM_VMEMMAP provides the same functionality in a much more portable
way.

As a first step towards removing VIRTUAL_MEM_MAP, forbid its use with
FLATMEM and panic on systems with large holes in the physical memory
layout that try to run FLATMEM kernels.

Signed-off-by: Mike Rapoport
---
 arch/ia64/Kconfig               |  2 +-
 arch/ia64/include/asm/meminit.h |  2 --
 arch/ia64/mm/contig.c           | 48 +++++++++++++++------------------
 arch/ia64/mm/init.c             | 14 ----------
 4 files changed, 22 insertions(+), 44 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 12aae706cb27..83de0273d474 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -329,7 +329,7 @@ config NODES_SHIFT
 # VIRTUAL_MEM_MAP has been retained for historical reasons.
 config VIRTUAL_MEM_MAP
 	bool "Virtual mem map"
-	depends on !SPARSEMEM
+	depends on !SPARSEMEM && !FLATMEM
 	default y
 	help
 	  Say Y to compile the kernel with support for a virtual mem map.
diff --git a/arch/ia64/include/asm/meminit.h b/arch/ia64/include/asm/meminit.h
index 092f1c91b36c..e789c0818edb 100644
--- a/arch/ia64/include/asm/meminit.h
+++ b/arch/ia64/include/asm/meminit.h
@@ -59,10 +59,8 @@ extern int reserve_elfcorehdr(u64 *start, u64 *end);
 extern int register_active_ranges(u64 start, u64 len, int nid);
 
 #ifdef CONFIG_VIRTUAL_MEM_MAP
-# define LARGE_GAP	0x40000000 /* Use virtual mem map if hole is > than this */
   extern unsigned long VMALLOC_END;
   extern struct page *vmem_map;
-  extern int find_largest_hole(u64 start, u64 end, void *arg);
   extern int create_mem_map_page_table(u64 start, u64 end, void *arg);
   extern int vmemmap_find_next_valid_pfn(int, int);
 #else
diff --git a/arch/ia64/mm/contig.c b/arch/ia64/mm/contig.c
index ba81d8cb0059..bfc4ecd0a2ab 100644
--- a/arch/ia64/mm/contig.c
+++ b/arch/ia64/mm/contig.c
@@ -19,15 +19,12 @@
 #include <linux/mm.h>
 #include <linux/nmi.h>
 #include <linux/swap.h>
+#include <linux/sizes.h>
 
 #include <asm/meminit.h>
 #include <asm/sections.h>
 #include <asm/mca.h>
 
-#ifdef CONFIG_VIRTUAL_MEM_MAP
-static unsigned long max_gap;
-#endif
-
 /* physical address where the bootmem map is located */
 unsigned long bootmap_start;
 
@@ -166,33 +163,30 @@ find_memory (void)
 	alloc_per_cpu_data();
 }
 
-static void __init virtual_map_init(void)
+static int __init find_largest_hole(u64 start, u64 end, void *arg)
 {
-#ifdef CONFIG_VIRTUAL_MEM_MAP
-	efi_memmap_walk(find_largest_hole, (u64 *)&max_gap);
-	if (max_gap < LARGE_GAP) {
-		vmem_map = (struct page *) 0;
-	} else {
-		unsigned long map_size;
+	u64 *max_gap = arg;
 
-		/* allocate virtual_mem_map */
+	static u64 last_end = PAGE_OFFSET;
 
-		map_size = PAGE_ALIGN(ALIGN(max_low_pfn, MAX_ORDER_NR_PAGES) *
-			sizeof(struct page));
-		VMALLOC_END -= map_size;
-		vmem_map = (struct page *) VMALLOC_END;
-		efi_memmap_walk(create_mem_map_page_table, NULL);
+	/* NOTE: this algorithm assumes efi memmap table is ordered */
 
-		/*
-		 * alloc_node_mem_map makes an adjustment for mem_map
-		 * which isn't compatible with vmem_map.
-		 */
-		NODE_DATA(0)->node_mem_map = vmem_map +
-			find_min_pfn_with_active_regions();
+	if (*max_gap < (start - last_end))
+		*max_gap = start - last_end;
+	last_end = end;
+	return 0;
+}
 
-		printk("Virtual mem_map starts at 0x%p\n", mem_map);
-	}
-#endif /* !CONFIG_VIRTUAL_MEM_MAP */
+static void __init verify_gap_absence(void)
+{
+	unsigned long max_gap;
+
+	/* Forbid FLATMEM if hole is > than 1G */
+	efi_memmap_walk(find_largest_hole, (u64 *)&max_gap);
+	if (max_gap >= SZ_1G)
+		panic("Cannot use FLATMEM with %ldMB hole\n"
+		      "Please switch over to SPARSEMEM\n",
+		      (max_gap >> 20));
 }
 
 /*
@@ -210,7 +204,7 @@ paging_init (void)
 	max_zone_pfns[ZONE_DMA32] = max_dma;
 	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
 
-	virtual_map_init();
+	verify_gap_absence();
 
 	free_area_init(max_zone_pfns);
 	zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index ef12e097f318..9b5acf8fb092 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -574,20 +574,6 @@ ia64_pfn_valid (unsigned long pfn)
 }
 EXPORT_SYMBOL(ia64_pfn_valid);
 
-int __init find_largest_hole(u64 start, u64 end, void *arg)
-{
-	u64 *max_gap = arg;
-
-	static u64 last_end = PAGE_OFFSET;
-
-	/* NOTE: this algorithm assumes efi memmap table is ordered */
-
-	if (*max_gap < (start - last_end))
-		*max_gap = start - last_end;
-	last_end = end;
-	return 0;
-}
-
 #endif /* CONFIG_VIRTUAL_MEM_MAP */
 
 int __init register_active_ranges(u64 start, u64 len, int nid)
-- 
2.28.0
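
For readers without an ia64 machine at hand, the check added above reduces
to one linear pass over an ordered memory map. Below is a minimal standalone
sketch of that pass; the memory ranges and the main() harness are invented
for illustration, whereas the kernel walks the real EFI memory map through
efi_memmap_walk() and panics instead of returning an error.

/*
 * Standalone model of the largest-hole scan introduced in contig.c.
 * The ranges below are made up; the kernel gets them from the EFI
 * memory map, which is assumed to be sorted by address.
 */
#include <stdio.h>
#include <stdint.h>

struct range { uint64_t start, end; };

/* hypothetical, already sorted physical memory ranges (in bytes) */
static const struct range memmap[] = {
	{ 0x000000000ULL, 0x040000000ULL },	/* 0 .. 1G           */
	{ 0x100000000ULL, 0x140000000ULL },	/* 4G .. 5G, 3G hole */
};

int main(void)
{
	uint64_t max_gap = 0;
	uint64_t last_end = memmap[0].start;
	unsigned int i;

	/* same idea as find_largest_hole(): remember the widest gap seen */
	for (i = 0; i < sizeof(memmap) / sizeof(memmap[0]); i++) {
		if (memmap[i].start - last_end > max_gap)
			max_gap = memmap[i].start - last_end;
		last_end = memmap[i].end;
	}

	if (max_gap >= (1ULL << 30)) {		/* SZ_1G */
		fprintf(stderr, "Cannot use FLATMEM with %lluMB hole\n",
			(unsigned long long)(max_gap >> 20));
		return 1;
	}
	printf("largest hole %lluMB, FLATMEM is fine\n",
	       (unsigned long long)(max_gap >> 20));
	return 0;
}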