From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Baoquan He, Brian Cain, Catalin Marinas,
	"David S. Miller", Geert Uytterhoeven, Greentime Hu, Greg Ungerer,
	Guan Xuetao, Guo Ren, Heiko Carstens, Helge Deller, Hoan Tran,
	"James E.J. Bottomley",
Bottomley" , Jonathan Corbet , Ley Foon Tan , Mark Salter , Matt Turner , Max Filippov , Michael Ellerman , Michal Hocko , Michal Simek , Nick Hu , Paul Walmsley , Qian Cai , Richard Weinberger , Rich Felker , Russell King , Stafford Horne , Thomas Bogendoerfer , Tony Luck , Vineet Gupta , x86@kernel.org, Yoshinori Sato , linux-alpha@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-c6x-dev@linux-c6x.org, linux-csky@vger.kernel.org, linux-doc@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-um@lists.infradead.org, linux-xtensa@linux-xtensa.org, openrisc@lists.librecores.org, sparclinux@vger.kernel.org, uclinux-h8-devel@lists.sourceforge.jp, Mike Rapoport Subject: [PATCH v2 13/20] unicore32: simplify detection of memory zone boundaries Date: Wed, 29 Apr 2020 15:11:19 +0300 Message-Id: <20200429121126.17989-14-rppt@kernel.org> X-Mailer: git-send-email 2.26.1 In-Reply-To: <20200429121126.17989-1-rppt@kernel.org> References: <20200429121126.17989-1-rppt@kernel.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Mike Rapoport The free_area_init() function only requires the definition of maximal PFN for each of the supported zone rater than calculation of actual zone size= s and the sizes of the holes between the zones. After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP the free_area_init() is available to all architectures. Using this function instead of free_area_init_node() simplifies the zone detection. 
Signed-off-by: Mike Rapoport
---
 arch/unicore32/include/asm/memory.h  |  2 +-
 arch/unicore32/include/mach/memory.h |  6 ++--
 arch/unicore32/kernel/pci.c          | 14 ++-------
 arch/unicore32/mm/init.c             | 43 ++++++----------------------
 4 files changed, 15 insertions(+), 50 deletions(-)

diff --git a/arch/unicore32/include/asm/memory.h b/arch/unicore32/include/asm/memory.h
index 23c93105f98f..66285178dd9b 100644
--- a/arch/unicore32/include/asm/memory.h
+++ b/arch/unicore32/include/asm/memory.h
@@ -60,7 +60,7 @@
 #ifndef __ASSEMBLY__
 
 #ifndef arch_adjust_zones
-#define arch_adjust_zones(size, holes) do { } while (0)
+#define arch_adjust_zones(max_zone_pfn) do { } while (0)
 #endif
 
 /*
diff --git a/arch/unicore32/include/mach/memory.h b/arch/unicore32/include/mach/memory.h
index 2b527cedd03d..b4e6035cb9a3 100644
--- a/arch/unicore32/include/mach/memory.h
+++ b/arch/unicore32/include/mach/memory.h
@@ -25,10 +25,10 @@
 
 #if !defined(__ASSEMBLY__) && defined(CONFIG_PCI)
 
-void puv3_pci_adjust_zones(unsigned long *size, unsigned long *holes);
+void puv3_pci_adjust_zones(unsigned long *max_zone_pfn);
 
-#define arch_adjust_zones(size, holes) \
-	puv3_pci_adjust_zones(size, holes)
+#define arch_adjust_zones(max_zone_pfn) \
+	puv3_pci_adjust_zones(max_zone_pfn)
 
 #endif
 
diff --git a/arch/unicore32/kernel/pci.c b/arch/unicore32/kernel/pci.c
index efa04a94dcdb..0d098aa05b47 100644
--- a/arch/unicore32/kernel/pci.c
+++ b/arch/unicore32/kernel/pci.c
@@ -133,21 +133,11 @@ static int pci_puv3_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
  * This is really ugly and we need a better way of specifying
  * DMA-capable regions of memory.
  */
-void __init puv3_pci_adjust_zones(unsigned long *zone_size,
-	unsigned long *zhole_size)
+void __init puv3_pci_adjust_zones(unsigned long *max_zone_pfn)
 {
 	unsigned int sz = SZ_128M >> PAGE_SHIFT;
 
-	/*
-	 * Only adjust if > 128M on current system
-	 */
-	if (zone_size[0] <= sz)
-		return;
-
-	zone_size[1] = zone_size[0] - sz;
-	zone_size[0] = sz;
-	zhole_size[1] = zhole_size[0];
-	zhole_size[0] = 0;
+	max_zone_pfn[ZONE_DMA] = sz;
 }
 
 /*
diff --git a/arch/unicore32/mm/init.c b/arch/unicore32/mm/init.c
index 6cf010fadc7a..52425d383cea 100644
--- a/arch/unicore32/mm/init.c
+++ b/arch/unicore32/mm/init.c
@@ -61,46 +61,21 @@ static void __init find_limits(unsigned long *min, unsigned long *max_low,
 	}
 }
 
-static void __init uc32_bootmem_free(unsigned long min, unsigned long max_low,
-	unsigned long max_high)
+static void __init uc32_bootmem_free(unsigned long max_low)
 {
-	unsigned long zone_size[MAX_NR_ZONES], zhole_size[MAX_NR_ZONES];
-	struct memblock_region *reg;
+	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
 
-	/*
-	 * initialise the zones.
-	 */
-	memset(zone_size, 0, sizeof(zone_size));
-
-	/*
-	 * The memory size has already been determined.  If we need
-	 * to do anything fancy with the allocation of this memory
-	 * to the zones, now is the time to do it.
-	 */
-	zone_size[0] = max_low - min;
-
-	/*
-	 * Calculate the size of the holes.
-	 * holes = node_size - sum(bank_sizes)
-	 */
-	memcpy(zhole_size, zone_size, sizeof(zhole_size));
-	for_each_memblock(memory, reg) {
-		unsigned long start = memblock_region_memory_base_pfn(reg);
-		unsigned long end = memblock_region_memory_end_pfn(reg);
-
-		if (start < max_low) {
-			unsigned long low_end = min(end, max_low);
-			zhole_size[0] -= low_end - start;
-		}
-	}
+	max_zone_pfn[ZONE_DMA] = max_low;
+	max_zone_pfn[ZONE_NORMAL] = max_low;
 
 	/*
 	 * Adjust the sizes according to any special requirements for
 	 * this machine type.
+	 * This might lower ZONE_DMA limit.
 	 */
-	arch_adjust_zones(zone_size, zhole_size);
+	arch_adjust_zones(max_zone_pfn);
 
-	free_area_init_node(0, zone_size, min, zhole_size);
+	free_area_init(max_zone_pfn);
 }
 
 int pfn_valid(unsigned long pfn)
@@ -176,11 +151,11 @@ void __init bootmem_init(void)
 	sparse_init();
 
 	/*
-	 * Now free the memory - free_area_init_node needs
+	 * Now free the memory - free_area_init needs
 	 * the sparse mem_map arrays initialized by sparse_init()
 	 * for memmap_init_zone(), otherwise all PFNs are invalid.
 	 */
-	uc32_bootmem_free(min, max_low, max_high);
+	uc32_bootmem_free(max_low);
 
 	high_memory = __va((max_low << PAGE_SHIFT) - 1) + 1;
 
-- 
2.26.1