Date: Wed, 19 Aug 2020 12:27:46 -0700
From: Andrew Morton
To: benh@kernel.crashing.org, bhe@redhat.com, bp@alien8.de,
 catalin.marinas@arm.com, dave.hansen@linux.intel.com, dja@axtens.net,
 hbathini@linux.ibm.com, hch@lst.de, jcmvbkbc@gmail.com,
 Jonathan.Cameron@huawei.com, kernel@esmil.dk, linux@armlinux.org.uk,
 luto@kernel.org, m.szyprowski@samsung.com, miguel.ojeda.sandonis@gmail.com,
 mingo@kernel.org, mingo@redhat.com, mm-commits@vger.kernel.org,
 monstr@monstr.eu, mpe@ellerman.id.au, palmer@dabbelt.com,
 paul.walmsley@sifive.com, paulus@samba.org, peterz@infradead.org,
 rppt@linux.ibm.com, shorne@gmail.com, tglx@linutronix.de,
 tsbogend@alpha.franken.de, will@kernel.org, ysato@users.sourceforge.jp
Subject: + memblock-reduce-number-of-parameters-in-for_each_mem_range.patch added to -mm tree
Message-ID: <20200819192746.MlQkswjjq%akpm@linux-foundation.org>
In-Reply-To: <20200814172939.55d6d80b6e21e4241f1ee1f3@linux-foundation.org>
User-Agent: s-nail v14.8.16
Sender: mm-commits-owner@vger.kernel.org
Precedence: bulk
Reply-To: linux-kernel@vger.kernel.org
X-Mailing-List: mm-commits@vger.kernel.org

The patch titled
     Subject: memblock: reduce number of parameters in for_each_mem_range()
has been added to the -mm tree.
Its filename is
     memblock-reduce-number-of-parameters-in-for_each_mem_range.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/memblock-reduce-number-of-parameters-in-for_each_mem_range.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/memblock-reduce-number-of-parameters-in-for_each_mem_range.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Mike Rapoport
Subject: memblock: reduce number of parameters in for_each_mem_range()

Currently, the for_each_mem_range() and for_each_mem_range_rev() iterators
are the most generic way to traverse memblock regions.  As such, they take
8 parameters and are hardly convenient for users.  Most users choose to
utilize one of their wrappers, and the only user that actually needs most
of the parameters is memblock itself.

To avoid yet another naming for memblock iterators, rename the existing
for_each_mem_range[_rev]() to __for_each_mem_range[_rev]() and add new
for_each_mem_range[_rev]() wrappers with only index, start and end
parameters.

The new wrapper nicely fits into init_unavailable_mem() and will be used
in upcoming changes to simplify memblock traversals.

Link: https://lkml.kernel.org/r/20200818151634.14343-11-rppt@kernel.org
Signed-off-by: Mike Rapoport
Acked-by: Thomas Bogendoerfer	[MIPS]
Cc: Andy Lutomirski
Cc: Baoquan He
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christoph Hellwig
Cc: Daniel Axtens
Cc: Dave Hansen
Cc: Emil Renner Berthing
Cc: Hari Bathini
Cc: Ingo Molnar
Cc: Ingo Molnar
Cc: Jonathan Cameron
Cc: Marek Szyprowski
Cc: Max Filippov
Cc: Michael Ellerman
Cc: Michal Simek
Cc: Miguel Ojeda
Cc: Palmer Dabbelt
Cc: Paul Mackerras
Cc: Paul Walmsley
Cc: Peter Zijlstra
Cc: Russell King
Cc: Stafford Horne
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yoshinori Sato
Signed-off-by: Andrew Morton
---

 .clang-format                          |    2 +
 arch/arm64/kernel/machine_kexec_file.c |    6 +--
 arch/powerpc/kexec/file_load_64.c      |    6 +--
 include/linux/memblock.h               |   41 +++++++++++++++++------
 mm/page_alloc.c                        |    3 -
 5 files changed, 38 insertions(+), 20 deletions(-)
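As a rough caller-side illustration (this sketch is not part of the patch;
the total_memory_old()/total_memory_new() helpers are invented for the
example), summing up all of memblock.memory looks like this before and
after the conversion.  The arm64 and powerpc kexec hunks below follow
exactly this pattern.

#include <linux/memblock.h>

/*
 * Before this patch: the generic iterator takes all 8 parameters, even
 * though most callers only want each region's start and end.  This
 * version only compiles against a tree without this patch applied.
 */
static phys_addr_t total_memory_old(void)
{
	phys_addr_t start, end, total = 0;
	u64 i;

	for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
			   MEMBLOCK_NONE, &start, &end, NULL)
		total += end - start;

	return total;
}

/*
 * After this patch: the three-parameter wrapper supplies
 * &memblock.memory, NUMA_NO_NODE and MEMBLOCK_NONE internally.
 */
static phys_addr_t total_memory_new(void)
{
	phys_addr_t start, end, total = 0;
	u64 i;

	for_each_mem_range(i, &start, &end)
		total += end - start;

	return total;
}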
--- a/arch/arm64/kernel/machine_kexec_file.c~memblock-reduce-number-of-parameters-in-for_each_mem_range
+++ a/arch/arm64/kernel/machine_kexec_file.c
@@ -215,8 +215,7 @@ static int prepare_elf_headers(void **ad
 	phys_addr_t start, end;
 
 	nr_ranges = 1; /* for exclusion of crashkernel region */
-	for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
-			   MEMBLOCK_NONE, &start, &end, NULL)
+	for_each_mem_range(i, &start, &end)
 		nr_ranges++;
 
 	cmem = kmalloc(struct_size(cmem, ranges, nr_ranges), GFP_KERNEL);
@@ -225,8 +224,7 @@ static int prepare_elf_headers(void **ad
 	cmem->max_nr_ranges = nr_ranges;
 	cmem->nr_ranges = 0;
 
-	for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
-			   MEMBLOCK_NONE, &start, &end, NULL) {
+	for_each_mem_range(i, &start, &end) {
 		cmem->ranges[cmem->nr_ranges].start = start;
 		cmem->ranges[cmem->nr_ranges].end = end - 1;
 		cmem->nr_ranges++;
--- a/arch/powerpc/kexec/file_load_64.c~memblock-reduce-number-of-parameters-in-for_each_mem_range
+++ a/arch/powerpc/kexec/file_load_64.c
@@ -250,8 +250,7 @@ static int __locate_mem_hole_top_down(st
 	phys_addr_t start, end;
 	u64 i;
 
-	for_each_mem_range_rev(i, &memblock.memory, NULL, NUMA_NO_NODE,
-			       MEMBLOCK_NONE, &start, &end, NULL) {
+	for_each_mem_range_rev(i, &start, &end) {
 		/*
 		 * memblock uses [start, end) convention while it is
 		 * [start, end] here. Fix the off-by-one to have the
@@ -350,8 +349,7 @@ static int __locate_mem_hole_bottom_up(s
 	phys_addr_t start, end;
 	u64 i;
 
-	for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
-			   MEMBLOCK_NONE, &start, &end, NULL) {
+	for_each_mem_range(i, &start, &end) {
 		/*
 		 * memblock uses [start, end) convention while it is
 		 * [start, end] here. Fix the off-by-one to have the
--- a/.clang-format~memblock-reduce-number-of-parameters-in-for_each_mem_range
+++ a/.clang-format
@@ -205,7 +205,9 @@ ForEachMacros:
   - 'for_each_memblock_type'
   - 'for_each_memcg_cache_index'
   - 'for_each_mem_pfn_range'
+  - '__for_each_mem_range'
   - 'for_each_mem_range'
+  - '__for_each_mem_range_rev'
   - 'for_each_mem_range_rev'
   - 'for_each_migratetype_order'
   - 'for_each_msi_entry'
--- a/include/linux/memblock.h~memblock-reduce-number-of-parameters-in-for_each_mem_range
+++ a/include/linux/memblock.h
@@ -162,7 +162,7 @@ static inline void __next_physmem_range(
 #endif /* CONFIG_HAVE_MEMBLOCK_PHYS_MAP */
 
 /**
- * for_each_mem_range - iterate through memblock areas from type_a and not
+ * __for_each_mem_range - iterate through memblock areas from type_a and not
  * included in type_b. Or just type_a if type_b is NULL.
 * @i: u64 used as loop variable
 * @type_a: ptr to memblock_type to iterate
@@ -173,7 +173,7 @@ static inline void __next_physmem_range(
 * @p_end: ptr to phys_addr_t for end address of the range, can be %NULL
 * @p_nid: ptr to int for nid of the range, can be %NULL
 */
-#define for_each_mem_range(i, type_a, type_b, nid, flags,		\
+#define __for_each_mem_range(i, type_a, type_b, nid, flags,		\
 			   p_start, p_end, p_nid)			\
 	for (i = 0, __next_mem_range(&i, nid, flags, type_a, type_b,	\
 				     p_start, p_end, p_nid);		\
@@ -182,7 +182,7 @@ static inline void __next_physmem_range(
 			      p_start, p_end, p_nid))
 
 /**
- * for_each_mem_range_rev - reverse iterate through memblock areas from
+ * __for_each_mem_range_rev - reverse iterate through memblock areas from
 * type_a and not included in type_b. Or just type_a if type_b is NULL.
 * @i: u64 used as loop variable
 * @type_a: ptr to memblock_type to iterate
@@ -193,16 +193,37 @@ static inline void __next_physmem_range(
 * @p_end: ptr to phys_addr_t for end address of the range, can be %NULL
 * @p_nid: ptr to int for nid of the range, can be %NULL
 */
-#define for_each_mem_range_rev(i, type_a, type_b, nid, flags,	\
-			       p_start, p_end, p_nid)		\
+#define __for_each_mem_range_rev(i, type_a, type_b, nid, flags,	\
+				 p_start, p_end, p_nid)		\
 	for (i = (u64)ULLONG_MAX,				\
-	     __next_mem_range_rev(&i, nid, flags, type_a, type_b,\
+	     __next_mem_range_rev(&i, nid, flags, type_a, type_b,	\
 				  p_start, p_end, p_nid);	\
 	     i != (u64)ULLONG_MAX;				\
 	     __next_mem_range_rev(&i, nid, flags, type_a, type_b,	\
 				  p_start, p_end, p_nid))
 
 /**
+ * for_each_mem_range - iterate through memory areas.
+ * @i: u64 used as loop variable
+ * @p_start: ptr to phys_addr_t for start address of the range, can be %NULL
+ * @p_end: ptr to phys_addr_t for end address of the range, can be %NULL
+ */
+#define for_each_mem_range(i, p_start, p_end) \
+	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,	\
+			     MEMBLOCK_NONE, p_start, p_end, NULL)
+
+/**
+ * for_each_mem_range_rev - reverse iterate through memblock areas from
+ * type_a and not included in type_b. Or just type_a if type_b is NULL.
+ * @i: u64 used as loop variable
+ * @p_start: ptr to phys_addr_t for start address of the range, can be %NULL
+ * @p_end: ptr to phys_addr_t for end address of the range, can be %NULL
+ */
+#define for_each_mem_range_rev(i, p_start, p_end)			\
+	__for_each_mem_range_rev(i, &memblock.memory, NULL, NUMA_NO_NODE, \
+				 MEMBLOCK_NONE, p_start, p_end, NULL)
+
+/**
 * for_each_reserved_mem_region - iterate over all reserved memblock areas
 * @i: u64 used as loop variable
 * @p_start: ptr to phys_addr_t for start address of the range, can be %NULL
@@ -307,8 +328,8 @@ int __init deferred_page_init_max_thread
 * soon as memblock is initialized.
 */
 #define for_each_free_mem_range(i, nid, flags, p_start, p_end, p_nid)	\
-	for_each_mem_range(i, &memblock.memory, &memblock.reserved,	\
-			   nid, flags, p_start, p_end, p_nid)
+	__for_each_mem_range(i, &memblock.memory, &memblock.reserved,	\
+			     nid, flags, p_start, p_end, p_nid)
 
 /**
 * for_each_free_mem_range_reverse - rev-iterate through free memblock areas
@@ -324,8 +345,8 @@ int __init deferred_page_init_max_thread
 */
 #define for_each_free_mem_range_reverse(i, nid, flags, p_start, p_end,	\
 					p_nid)				\
-	for_each_mem_range_rev(i, &memblock.memory, &memblock.reserved,	\
-			       nid, flags, p_start, p_end, p_nid)
+	__for_each_mem_range_rev(i, &memblock.memory, &memblock.reserved, \
+				 nid, flags, p_start, p_end, p_nid)
 
 int memblock_set_node(phys_addr_t base, phys_addr_t size,
 		      struct memblock_type *type, int nid);
--- a/mm/page_alloc.c~memblock-reduce-number-of-parameters-in-for_each_mem_range
+++ a/mm/page_alloc.c
@@ -6984,8 +6984,7 @@ static void __init init_unavailable_mem(
 	 * Loop through unavailable ranges not covered by memblock.memory.
 	 */
 	pgcnt = 0;
-	for_each_mem_range(i, &memblock.memory, NULL,
-			NUMA_NO_NODE, MEMBLOCK_NONE, &start, &end, NULL) {
+	for_each_mem_range(i, &start, &end) {
 		if (next < start)
 			pgcnt += init_unavailable_range(PFN_DOWN(next),
 							PFN_UP(start));
_

Patches currently in -mm which might be from rppt@linux.ibm.com are

kvm-ppc-book3s-hv-simplify-kvm_cma_reserve.patch
dma-contiguous-simplify-cma_early_percent_memory.patch
arm-xtensa-simplify-initialization-of-high-memory-pages.patch
arm64-numa-simplify-dummy_numa_init.patch
h8300-nds32-openrisc-simplify-detection-of-memory-extents.patch
riscv-drop-unneeded-node-initialization.patch
mircoblaze-drop-unneeded-numa-and-sparsemem-initializations.patch
memblock-make-for_each_memblock_type-iterator-private.patch
memblock-make-memblock_debug-and-related-functionality-private.patch
memblock-reduce-number-of-parameters-in-for_each_mem_range.patch
arch-mm-replace-for_each_memblock-with-for_each_mem_pfn_range.patch
arch-drivers-replace-for_each_membock-with-for_each_mem_range.patch
x86-setup-simplify-initrd-relocation-and-reservation.patch
x86-setup-simplify-reserve_crashkernel.patch
memblock-remove-unused-memblock_mem_size.patch
memblock-implement-for_each_reserved_mem_region-using-__next_mem_region.patch
memblock-use-separate-iterators-for-memory-and-reserved-regions.patch
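As a further hypothetical sketch (the node_free_memory() helper is
invented for illustration and is not part of the patch), a caller that
still needs per-node or per-flag filtering would now use the renamed
full-parameter iterator directly, mirroring the for_each_free_mem_range()
definition in the memblock.h hunk above:

#include <linux/memblock.h>

/*
 * Sum the free (not reserved) memory on one node.  Callers that need the
 * nid/flags arguments keep the full-parameter iterator, now spelled
 * __for_each_mem_range().
 */
static phys_addr_t node_free_memory(int nid)
{
	phys_addr_t start, end, free = 0;
	u64 i;

	__for_each_mem_range(i, &memblock.memory, &memblock.reserved,
			     nid, MEMBLOCK_NONE, &start, &end, NULL)
		free += end - start;

	return free;
}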