From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 20 Jun 2022 13:03:07 +0300
From: Andy Shevchenko
To: Alexander Lobakin
Cc: Arnd Bergmann, Yury Norov, Mark Rutland, Matt Turner, Brian Cain,
	Geert Uytterhoeven, Yoshinori Sato, Rich Felker, "David S. Miller",
	Kees Cook, "Peter Zijlstra (Intel)", Marco Elver, Borislav Petkov,
	Tony Luck, Maciej Fijalkowski, Jesse Brandeburg, Greg Kroah-Hartman,
	linux-alpha@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 4/7] bitops: define const_*() versions of the non-atomics
References: <20220617144031.2549432-1-alexandr.lobakin@intel.com>
	<20220617144031.2549432-5-alexandr.lobakin@intel.com>
In-Reply-To: <20220617144031.2549432-5-alexandr.lobakin@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo
X-Mailing-List: linux-sh@vger.kernel.org

On Fri, Jun 17, 2022 at 04:40:28PM +0200, Alexander Lobakin wrote:
> Define const_*() variants of the non-atomic bitops to be used when
> the input arguments are compile-time constants, so that the compiler
> will be always able to resolve those to compile-time constants as
> well. Those are mostly direct aliases for generic_*() with one
> exception for const_test_bit(): the original one is declared
> atomic-safe and thus doesn't discard the `volatile` qualifier, so
> in order to let optimize code, define it separately disregarding
> the qualifier.
>
> Add them to the compile-time type checks as well just in case.
Reviewed-by: Andy Shevchenko

> Suggested-by: Marco Elver
> Signed-off-by: Alexander Lobakin
> ---
>  .../asm-generic/bitops/generic-non-atomic.h | 31 +++++++++++++++++++
>  include/linux/bitops.h                      |  1 +
>  2 files changed, 32 insertions(+)
>
> diff --git a/include/asm-generic/bitops/generic-non-atomic.h b/include/asm-generic/bitops/generic-non-atomic.h
> index b85b8a2ac239..3d5ebd24652b 100644
> --- a/include/asm-generic/bitops/generic-non-atomic.h
> +++ b/include/asm-generic/bitops/generic-non-atomic.h
> @@ -127,4 +127,35 @@ generic_test_bit(unsigned long nr, const volatile unsigned long *addr)
>  	return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
>  }
>
> +/*
> + * const_*() definitions provide good compile-time optimizations when
> + * the passed arguments can be resolved at compile time.
> + */
> +#define const___set_bit			generic___set_bit
> +#define const___clear_bit		generic___clear_bit
> +#define const___change_bit		generic___change_bit
> +#define const___test_and_set_bit	generic___test_and_set_bit
> +#define const___test_and_clear_bit	generic___test_and_clear_bit
> +#define const___test_and_change_bit	generic___test_and_change_bit
> +
> +/**
> + * const_test_bit - Determine whether a bit is set
> + * @nr: bit number to test
> + * @addr: Address to start counting from
> + *
> + * A version of generic_test_bit() which discards the `volatile` qualifier to
> + * allow a compiler to optimize code harder. Non-atomic and to be called only
> + * for testing compile-time constants, e.g. by the corresponding macros, not
> + * directly from "regular" code.
> + */
> +static __always_inline bool
> +const_test_bit(unsigned long nr, const volatile unsigned long *addr)
> +{
> +	const unsigned long *p = (const unsigned long *)addr + BIT_WORD(nr);
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long val = *p;
> +
> +	return !!(val & mask);
> +}
> +
>  #endif /* __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H */
> diff --git a/include/linux/bitops.h b/include/linux/bitops.h
> index 87087454a288..d393297287d5 100644
> --- a/include/linux/bitops.h
> +++ b/include/linux/bitops.h
> @@ -37,6 +37,7 @@ extern unsigned long __sw_hweight64(__u64 w);
>  /* Check that the bitops prototypes are sane */
>  #define __check_bitop_pr(name)						\
>  	static_assert(__same_type(arch_##name, generic_##name) &&	\
> +		      __same_type(const_##name, generic_##name) &&	\
>  		      __same_type(name, generic_##name))
>
>  __check_bitop_pr(__set_bit);
> --
> 2.36.1

--
With Best Regards,
Andy Shevchenko