From: Alexander Lobakin <alexandr.lobakin@intel.com>
To: Yury Norov <yury.norov@gmail.com>
Cc: Alexander Lobakin <alexandr.lobakin@intel.com>,
	Arnd Bergmann <arnd@arndb.de>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Matt Turner <mattst88@gmail.com>, Brian Cain <bcain@quicinc.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Rich Felker <dalias@libc.org>,
	"David S. Miller" <davem@davemloft.net>,
	Kees Cook <keescook@chromium.org>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>,
	Marco Elver <elver@google.com>, Borislav Petkov <bp@suse.de>,
	Tony Luck <tony.luck@intel.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	linux-alpha@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 6/6] bitops: let optimize out non-atomic bitops on compile-time constants
Date: Wed, 15 Jun 2022 16:00:30 +0200	[thread overview]
Message-ID: <20220615140030.1265068-1-alexandr.lobakin@intel.com> (raw)
In-Reply-To: <YqlRfoB5+VBIw8gJ@yury-laptop>

From: Yury Norov <yury.norov@gmail.com>
Date: Tue, 14 Jun 2022 20:26:54 -0700

> Hi Alexander,
> 
> On Fri, Jun 10, 2022 at 01:34:27PM +0200, Alexander Lobakin wrote:
> > Currently, many architecture-specific non-atomic bitop
> > implementations use inline asm or other hacks which are faster or
> > more robust when working with "real" variables (i.e. structure
> > fields and the like), but the compilers have no clue how to optimize
> > them out when called on compile-time constants. As a result, the
> > following code:
> > 
> > 	DECLARE_BITMAP(foo, BITS_PER_LONG) = { }; // -> unsigned long foo[1];
> > 	unsigned long bar = BIT(BAR_BIT);
> > 	unsigned long baz = 0;
> > 
> > 	__set_bit(FOO_BIT, foo);
> > 	baz |= BIT(BAZ_BIT);
> > 
> > 	BUILD_BUG_ON(!__builtin_constant_p(test_bit(FOO_BIT, foo)));
> > 	BUILD_BUG_ON(!__builtin_constant_p(bar & BAR_BIT));
> > 	BUILD_BUG_ON(!__builtin_constant_p(baz & BAZ_BIT));
> 
> Can you put this snippet into lib/test_bitops.c?

Great idea, sure!
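
Something along these lines, perhaps? Only a rough sketch for now: the
bit numbers and the function name below are made up, and the call site
would need to follow whatever the module init in lib/test_bitops.c
already does:

	/* Purely compile-time checks -- BUILD_BUG_ON() does all the work */
	#define BITOPS_CONST_FOO_BIT	1
	#define BITOPS_CONST_BAR_BIT	2
	#define BITOPS_CONST_BAZ_BIT	3

	static void __init test_bitops_const_eval(void)
	{
		/* A local, fully initialized bitmap: its contents are known
		 * at compile time, so bitop() should route to the const_*()
		 * variants and fold everything below into constants.
		 */
		DECLARE_BITMAP(foo, BITS_PER_LONG) = { };
		unsigned long bar = BIT(BITOPS_CONST_BAR_BIT);
		unsigned long baz = 0;

		__set_bit(BITOPS_CONST_FOO_BIT, foo);
		baz |= BIT(BITOPS_CONST_BAZ_BIT);

		/* Compilation fails if any of these is not constant-folded */
		BUILD_BUG_ON(!__builtin_constant_p(test_bit(BITOPS_CONST_FOO_BIT, foo)));
		BUILD_BUG_ON(!__builtin_constant_p(bar & BIT(BITOPS_CONST_BAR_BIT)));
		BUILD_BUG_ON(!__builtin_constant_p(baz & BIT(BITOPS_CONST_BAZ_BIT)));
	}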

> 
> Thanks,
> Yury
> 
> > triggers the first assertion on x86_64, which means that the
> > compiler is unable to evaluate it to a compile-time initializer
> > when the architecture-specific bitop is used, even though the result
> > is obvious. In order to let the compiler optimize out such cases,
> > expand the bitop() macro to use the "constant" C non-atomic bitop
> > implementations when all of the arguments passed are compile-time
> > constants, which means that the result will be a compile-time
> > constant as well, producing more efficient and simpler code in
> > 100% of cases compared to the architecture-specific counterparts.
> > 
> > The savings depend on the architecture, compiler and compiler flags;
> > for example, on x86_64 with -O2:
> > 
> > GCC 12: add/remove: 78/29 grow/shrink: 332/525 up/down: 31325/-61560 (-30235)
> > LLVM 13: add/remove: 79/76 grow/shrink: 184/537 up/down: 55076/-141892 (-86816)
> > LLVM 14: add/remove: 10/3 grow/shrink: 93/138 up/down: 3705/-6992 (-3287)
> > 
> > and ARM64 (courtesy of Mark):
> > 
> > GCC 11: add/remove: 92/29 grow/shrink: 933/2766 up/down: 39340/-82580 (-43240)
> > LLVM 14: add/remove: 21/11 grow/shrink: 620/651 up/down: 12060/-15824 (-3764)
> > 
> > Cc: Mark Rutland <mark.rutland@arm.com>
> > Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
> > ---
> >  include/linux/bitops.h | 18 +++++++++++++++++-
> >  1 file changed, 17 insertions(+), 1 deletion(-)
> > 
> > diff --git a/include/linux/bitops.h b/include/linux/bitops.h
> > index 753f98e0dcf5..364bdc3606b4 100644
> > --- a/include/linux/bitops.h
> > +++ b/include/linux/bitops.h
> > @@ -33,8 +33,24 @@ extern unsigned long __sw_hweight64(__u64 w);
> >  
> >  #include <asm-generic/bitops/generic-non-atomic.h>
> >  
> > +/*
> > + * Many architecture-specific non-atomic bitops contain inline asm code and
> > + * because of that the compiler can't optimize them to compile-time expressions
> > + * or constants. By contrast, the gen_*() helpers are defined in pure C and
> > + * compilers optimize them just fine.
> > + * Therefore, to make `unsigned long foo = 0; __set_bit(BAR, &foo)` effectively
> > + * equal to `unsigned long foo = BIT(BAR)`, pick the generic C alternative when
> > + * the arguments can be resolved at compile time. That expression itself is a
> > + * constant and doesn't bring any functional changes in the remaining cases.
> > + * The casts to `uintptr_t` are needed to mitigate `-Waddress` warnings when
> > + * passing a bitmap from .bss or .data (-> `!!addr` is always true).
> > + */
> >  #define bitop(op, nr, addr)						\
> > -	op(nr, addr)
> > +	((__builtin_constant_p(nr) &&					\
> > +	  __builtin_constant_p((uintptr_t)(addr) != (uintptr_t)NULL) &&	\
> > +	  (uintptr_t)(addr) != (uintptr_t)NULL &&			\
> > +	  __builtin_constant_p(*(const unsigned long *)(addr))) ?	\
> > +	 const##op(nr, addr) : op(nr, addr))
> >  
> >  #define __set_bit(nr, addr)		bitop(___set_bit, nr, addr)
> >  #define __clear_bit(nr, addr)		bitop(___clear_bit, nr, addr)
> > -- 
> 2.36.1
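
FWIW, here's a made-up pair of functions (not from the series) just to
illustrate both branches of the new ternary in bitop() from the
caller's side:

	/* `*flags` isn't known at compile time here, so bitop() falls back
	 * to ___set_bit(), i.e. the usual (possibly arch-optimized) runtime
	 * implementation -- no functional change for this kind of caller.
	 */
	static void init_flags(unsigned long *flags)
	{
		__set_bit(0, flags);
	}

	/* Everything is a compile-time constant here, so bitop() picks
	 * const___set_bit() and the function should fold down to
	 * `return BIT(0) | BIT(2);`
	 */
	static unsigned long default_flags(void)
	{
		unsigned long flags = 0;

		__set_bit(0, &flags);
		__set_bit(2, &flags);

		return flags;
	}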

Thanks,
Olek

Thread overview: 85+ messages

2022-06-10 11:34 [PATCH v2 0/6] bitops: let optimize out non-atomic bitops on compile-time constants Alexander Lobakin
2022-06-10 11:34 ` [PATCH v2 1/6] ia64, processor: fix -Wincompatible-pointer-types in ia64_get_irr() Alexander Lobakin
2022-06-10 12:18   ` David Laight
2022-06-10 13:46   ` Andy Shevchenko
2022-06-15  2:59   ` Yury Norov
2022-06-10 11:34 ` [PATCH v2 2/6] bitops: always define asm-generic non-atomic bitops Alexander Lobakin
2022-06-10 13:50   ` Andy Shevchenko
2022-06-10 16:02     ` Luck, Tony
2022-06-10 16:32       ` Marco Elver
2022-06-13 14:19         ` Alexander Lobakin
2022-06-13 14:33           ` Marco Elver
2022-06-15  2:47             ` Yury Norov
2022-06-15  7:46               ` Marco Elver
2022-06-13 16:26           ` Luck, Tony
2022-06-13 21:29             ` David Laight
2022-06-10 11:34 ` [PATCH v2 3/6] bitops: unify non-atomic bitops prototypes across architectures Alexander Lobakin
2022-06-10 11:34 ` [PATCH v2 4/6] bitops: define const_*() versions of the non-atomics Alexander Lobakin
2022-06-10 13:56   ` Andy Shevchenko
2022-06-13 14:30     ` Alexander Lobakin
2022-06-15  2:57   ` Yury Norov
2022-06-15 13:55     ` Alexander Lobakin
2022-06-15 15:52       ` David Laight
2022-06-10 11:34 ` [PATCH v2 5/6] bitops: wrap non-atomic bitops with a transparent macro Alexander Lobakin
2022-06-10 11:34 ` [PATCH v2 6/6] bitops: let optimize out non-atomic bitops on compile-time constants Alexander Lobakin
2022-06-15  3:26   ` Yury Norov
2022-06-15 14:00     ` Alexander Lobakin [this message]
2022-06-13  7:35 ` [PATCH v2 0/6] " Geert Uytterhoeven
2022-06-13 14:26   ` Alexander Lobakin
2022-06-13 15:22     ` Geert Uytterhoeven
2022-06-15 14:17       ` Alexander Lobakin
