From: Mike Rapoport
To: Andrew Morton
Cc: Arnd Bergmann, David Hildenbrand, Mike Rapoport, Mike Rapoport,
	Nick Desaulniers, clang-built-linux@googlegroups.com,
	kbuild-all@lists.01.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, kernel test robot
Subject: [PATCH] memblock: fix section mismatch warning again
Date: Tue, 16 Mar 2021 19:13:47 +0200
Message-Id: <20210316171347.14084-1-rppt@kernel.org>
X-Mailer: git-send-email 2.28.0

From: Mike Rapoport

Commit 34dc2efb39a2 ("memblock: fix section mismatch warning") marked
memblock_bottom_up() and memblock_set_bottom_up() as __init, but they
could be referenced from non-init functions like
memblock_find_in_range_node() on architectures that enable
CONFIG_ARCH_KEEP_MEMBLOCK. For such builds the kernel test robot
reports:

All warnings (new ones prefixed by >>, old ones prefixed by <<):

>> WARNING: modpost: vmlinux.o(.text+0x74fea4): Section mismatch in reference from the function memblock_find_in_range_node() to the function .init.text:memblock_bottom_up()
   The function memblock_find_in_range_node() references
   the function __init memblock_bottom_up().
   This is often because memblock_find_in_range_node lacks a __init
   annotation or the annotation of memblock_bottom_up is wrong.

Replace the __init annotations with __init_memblock so that the
appropriate section is selected depending on CONFIG_ARCH_KEEP_MEMBLOCK.

Link: https://lore.kernel.org/lkml/202103160133.UzhgY0wt-lkp@intel.com
Fixes: 34dc2efb39a2 ("memblock: fix section mismatch warning")
Signed-off-by: Mike Rapoport
Reported-by: kernel test robot
Reviewed-by: Arnd Bergmann
---
@Andrew, please let me know if you'd prefer this merged via the memblock tree.
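Not part of the patch, just context for reviewers: __init_memblock is already
defined in include/linux/memblock.h to expand differently depending on
CONFIG_ARCH_KEEP_MEMBLOCK, which is why switching the annotation fixes the
mismatch for both kinds of builds. The sketch below only illustrates that
pattern and may not match the exact upstream definition:

/*
 * Illustrative sketch only -- see include/linux/memblock.h for the
 * authoritative definition.
 */
#ifdef CONFIG_ARCH_KEEP_MEMBLOCK
/* memblock code/data stays resident after boot: no init section used. */
#define __init_memblock
#define __initdata_memblock
#else
/* memblock may be discarded after boot: place it in (mem)init sections. */
#define __init_memblock	__meminit
#define __initdata_memblock	__meminitdata
#endif

With CONFIG_ARCH_KEEP_MEMBLOCK=y the accessors stay in .text alongside their
non-init callers such as memblock_find_in_range_node(), and without it they
are discarded together with the rest of the memblock init code.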
 include/linux/memblock.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index d13e3cd938b4..5984fff3f175 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -460,7 +460,7 @@ static inline void memblock_free_late(phys_addr_t base, phys_addr_t size)
 /*
  * Set the allocation direction to bottom-up or top-down.
  */
-static inline __init void memblock_set_bottom_up(bool enable)
+static inline __init_memblock void memblock_set_bottom_up(bool enable)
 {
 	memblock.bottom_up = enable;
 }
@@ -470,7 +470,7 @@ static inline __init void memblock_set_bottom_up(bool enable)
  * if this is true, that said, memblock will allocate memory
  * in bottom-up direction.
  */
-static inline __init bool memblock_bottom_up(void)
+static inline __init_memblock bool memblock_bottom_up(void)
 {
 	return memblock.bottom_up;
 }
-- 
2.28.0