From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 08 May 2021 20:20:48 -0700
From: akpm@linux-foundation.org
To: ak@linux.intel.com, andi.kleen@intel.com, andriy.shevchenko@intel.com,
 feng.tang@intel.com, masahiroy@kernel.org, michal.lkml@markovi.net,
 mm-commits@vger.kernel.org, tglx@linutronix.de, ying.huang@intel.com
Subject: + makefile-extend-32b-aligned-debug-option-to-64b-aligned.patch added to -mm tree
Message-ID: <20210509032048.6MEV45l9N%akpm@linux-foundation.org>
User-Agent: s-nail v14.8.16
Precedence: bulk
Reply-To: linux-kernel@vger.kernel.org
X-Mailing-List: mm-commits@vger.kernel.org

The patch titled
     Subject: Makefile: extend 32B aligned debug option to 64B aligned
has been added to the -mm tree.  Its filename is
     makefile-extend-32b-aligned-debug-option-to-64b-aligned.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/makefile-extend-32b-aligned-debug-option-to-64b-aligned.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/makefile-extend-32b-aligned-debug-option-to-64b-aligned.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Feng Tang
Subject: Makefile: extend 32B aligned debug option to 64B aligned

commit 09c60546f04f ("./Makefile: add debug option to enable function
aligned on 32 bytes") was introduced to help debug strange kernel
performance changes caused by code alignment changes.

Recently we found two similar cases [1][2] caused by code-alignment
changes, which could only be identified by forcing 64-byte alignment
for all functions.
Originally, 32 bytes was used mainly to avoid wasting too much text
space, but this option is only for debugging anyway, where text space
is not a big concern.  So extend the alignment to 64 bytes to cover
more similar cases.

[1] https://lore.kernel.org/lkml/20210427090013.GG32408@xsang-OptiPlex-9020/
[2] https://lore.kernel.org/lkml/20210420030837.GB31773@xsang-OptiPlex-9020/

Link: https://lkml.kernel.org/r/1620286499-40999-1-git-send-email-feng.tang@intel.com
Signed-off-by: Feng Tang
Cc: Thomas Gleixner
Cc: Andi Kleen
Cc: Masahiro Yamada
Cc: Michal Marek
Cc: Andi Kleen
Cc: Huang Ying
Cc: Andy Shevchenko
Signed-off-by: Andrew Morton
---

 Makefile          |    4 ++--
 lib/Kconfig.debug |    4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

--- a/lib/Kconfig.debug~makefile-extend-32b-aligned-debug-option-to-64b-aligned
+++ a/lib/Kconfig.debug
@@ -400,8 +400,8 @@ config SECTION_MISMATCH_WARN_ONLY
 
 	  If unsure, say Y.
 
-config DEBUG_FORCE_FUNCTION_ALIGN_32B
-	bool "Force all function address 32B aligned" if EXPERT
+config DEBUG_FORCE_FUNCTION_ALIGN_64B
+	bool "Force all function address 64B aligned" if EXPERT
 	help
 	  There are cases that a commit from one domain changes the function
 	  address alignment of other domains, and cause magic performance
--- a/Makefile~makefile-extend-32b-aligned-debug-option-to-64b-aligned
+++ a/Makefile
@@ -953,8 +953,8 @@ KBUILD_CFLAGS += $(CC_FLAGS_CFI)
 export CC_FLAGS_CFI
 endif
 
-ifdef CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_32B
-KBUILD_CFLAGS += -falign-functions=32
+ifdef CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B
+KBUILD_CFLAGS += -falign-functions=64
 endif
 
 # arch Makefile may override CC so keep this after arch Makefile is included
_

Patches currently in -mm which might be from feng.tang@intel.com are

makefile-extend-32b-aligned-debug-option-to-64b-aligned.patch
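
As a quick illustration of what -falign-functions=64 does to function
entry points, below is a minimal user-space sketch (not part of the
patch; the file name and function names are made up for the example).
Building it with and without the flag and comparing the printed
addresses shows the same effect the debug option has on kernel
functions when CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B is set.

/*
 * align_check.c - illustrative sketch, not part of the patch.
 * Build with and without the alignment flag and compare, e.g.:
 *
 *     gcc -O2 -falign-functions=64 -o align_check align_check.c
 */
#include <stdio.h>
#include <stdint.h>

static void func_a(void) { }
static void func_b(void) { }

/* Print whether a function's entry address is 64-byte aligned. */
static void report(const char *name, void (*fn)(void))
{
	uintptr_t addr = (uintptr_t)fn;

	printf("%-8s at %p -> %s\n", name, (void *)addr,
	       (addr % 64) ? "not 64B aligned" : "64B aligned");
}

int main(void)
{
	report("func_a", func_a);
	report("func_b", func_b);
	return 0;
}

On a kernel built with the option enabled, a similar spot check can be
done by looking at function symbol addresses in System.map or
/proc/kallsyms.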