linux-kernel.vger.kernel.org archive mirror
* [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)
@ 2019-12-06 12:46 Michael Ellerman
  2019-12-06 13:16 ` Peter Zijlstra
  2019-12-06 22:15 ` [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops) pr-tracker-bot
  0 siblings, 2 replies; 43+ messages in thread
From: Michael Ellerman @ 2019-12-06 12:46 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: dja, elver, linux-kernel, linuxppc-dev, christophe.leroy,
	linux-s390, linux-arch, x86, kasan-dev

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hi Linus,

Please pull another powerpc update for 5.5.

As you'll see from the diffstat this is mostly not powerpc code. In order to do
KASAN instrumentation of bitops we needed to juggle some of the generic bitops
headers.

Because those changes potentially affect several architectures I wasn't
confident putting them directly into my tree, so I've had them sitting in a
topic branch. That branch (topic/kasan-bitops) has been in linux-next for a
month, and I've not had any feedback that it's caused any problems.

So I think this is good to merge, but it's a standalone pull so if anyone does
object it's not a problem.

cheers


The following changes since commit da0c9ea146cbe92b832f1b0f694840ea8eb33cce:

  Linux 5.4-rc2 (2019-10-06 14:27:30 -0700)

are available in the git repository at:

  https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git tags/powerpc-5.5-2

for you to fetch changes up to 4f4afc2c9599520300b3f2b3666d2034fca03df3:

  docs/core-api: Remove possibly confusing sub-headings from Bit Operations (2019-12-04 21:20:28 +1100)

- ------------------------------------------------------------------
powerpc updates for 5.5 #2

A few commits splitting the KASAN-instrumented bitops header into
three, to match the split of the asm-generic bitops headers.

This is needed on powerpc because we use asm-generic/bitops/non-atomic.h
for the non-atomic bitops, whereas the existing KASAN-instrumented
bitops assume all the underlying operations are provided by the arch
as arch_foo() versions.

Thanks to:
  Daniel Axtens & Christophe Leroy.

- ------------------------------------------------------------------
Daniel Axtens (2):
      kasan: support instrumented bitops combined with generic bitops
      powerpc: support KASAN instrumentation of bitops

Michael Ellerman (1):
      docs/core-api: Remove possibly confusing sub-headings from Bit Operations


 Documentation/core-api/kernel-api.rst                |   8 +-
 arch/powerpc/include/asm/bitops.h                    |  51 ++--
 arch/s390/include/asm/bitops.h                       |   4 +-
 arch/x86/include/asm/bitops.h                        |   4 +-
 include/asm-generic/bitops-instrumented.h            | 263 --------------------
 include/asm-generic/bitops/instrumented-atomic.h     | 100 ++++++++
 include/asm-generic/bitops/instrumented-lock.h       |  81 ++++++
 include/asm-generic/bitops/instrumented-non-atomic.h | 114 +++++++++
 8 files changed, 337 insertions(+), 288 deletions(-)
 delete mode 100644 include/asm-generic/bitops-instrumented.h
 create mode 100644 include/asm-generic/bitops/instrumented-atomic.h
 create mode 100644 include/asm-generic/bitops/instrumented-lock.h
 create mode 100644 include/asm-generic/bitops/instrumented-non-atomic.h
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEJFGtCPCthwEv2Y/bUevqPMjhpYAFAl3qSS4ACgkQUevqPMjh
pYCp1Q//TrG2tPMDPHpWqCzNdWoh96zpIo2UsauDcc8l+XT7shkwHcGnpoECgCfK
NjhP77qqXI61E+5qUCfO16/j5g6PbvvG/E/xlQEdgX7lIxBeGs4IkoRU8QjkJ9w5
wAjG/XwaMJ21CQY2F51dn9NPQUvFxKV0o6QJ+/pIFBnv0eeYCtRWno7+tZGIiMhk
ExfJhR0rnBdBc6oonNOTAfWn5u51FRRqUeICeo4iFoICu5v4cTbPiU3/8bZYzhSb
wM9WdG+/IUs02PffIQF4GDyMmzi/Qm3Ujl3tUIEaFHlfN9pF6X7Yog7Co26CShJj
No4wJK5rS3ECXmwo7Yd69sV9FZrMZZvGY9x7p7bEE7mqk1fHMaM3DMXvR8Gx6UGM
NCXX2QIIigz3RUTbj3CW2iZa9R/FTSFXs3Ih4YDDJdPNanYpcX3/wE6mpwsco8do
lxWcN1AMGXLiaNdQ8IkRZ6hOLH/Po34RvDo1P1mS06NzfyyTZW7JNiUtU2HSqPRs
vjIkHDM7585ika6jeDHU4cJaLy7bsCNV2fLsHWDE3Xno43g7qcKGOx+PtO25XubZ
iP1vojR4Qml+e3ySf6dDiOIDltSWZwjCGtbi2gmdErHiLdLeJX2XGjC36Qnep6u6
15HIWzX41tg8y4QRJDmPyeDm3Ccbabz+m4LaccbdObgGWVwxwgA=
=06Wr
-----END PGP SIGNATURE-----


* Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)
  2019-12-06 12:46 [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops) Michael Ellerman
@ 2019-12-06 13:16 ` Peter Zijlstra
  2019-12-10  5:38   ` Michael Ellerman
  2019-12-12  5:42   ` READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)) Michael Ellerman
  2019-12-06 22:15 ` [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops) pr-tracker-bot
  1 sibling, 2 replies; 43+ messages in thread
From: Peter Zijlstra @ 2019-12-06 13:16 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Linus Torvalds, dja, elver, linux-kernel, linuxppc-dev,
	christophe.leroy, linux-s390, linux-arch, x86, kasan-dev,
	Will Deacon, Mark Rutland

On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
> 
> Hi Linus,
> 
> Please pull another powerpc update for 5.5.
> 
> As you'll see from the diffstat this is mostly not powerpc code. In order to do
> KASAN instrumentation of bitops we needed to juggle some of the generic bitops
> headers.
> 
> Because those changes potentially affect several architectures I wasn't
> confident putting them directly into my tree, so I've had them sitting in a
> topic branch. That branch (topic/kasan-bitops) has been in linux-next for a
> month, and I've not had any feedback that it's caused any problems.
> 
> So I think this is good to merge, but it's a standalone pull so if anyone does
> object it's not a problem.

No objections, but here:

  https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?h=topic/kasan-bitops&id=81d2c6f81996e01fbcd2b5aeefbb519e21c806e9

you write:

  "Currently bitops-instrumented.h assumes that the architecture provides
atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
This is true on x86 and s390, but is not always true: there is a
generic bitops/non-atomic.h header that provides generic non-atomic
operations, and also a generic bitops/lock.h for locking operations."

Is there any actual benefit for PPC in using its own atomic bitops
over bitops/lock.h? I'm thinking that the generic code is fairly
optimal for most LL/SC architectures.

I've been meaning to audit the various architectures and move them over,
but alas, it's something I've not yet had time for...


* Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)
  2019-12-06 12:46 [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops) Michael Ellerman
  2019-12-06 13:16 ` Peter Zijlstra
@ 2019-12-06 22:15 ` pr-tracker-bot
  1 sibling, 0 replies; 43+ messages in thread
From: pr-tracker-bot @ 2019-12-06 22:15 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Linus Torvalds, dja, elver, linux-kernel, linuxppc-dev,
	christophe.leroy, linux-s390, linux-arch, x86, kasan-dev

The pull request you sent on Fri, 06 Dec 2019 23:46:11 +1100:

> https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git tags/powerpc-5.5-2

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/43a2898631a8beee66c1d64c1e860f43d96b2e91

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


* Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)
  2019-12-06 13:16 ` Peter Zijlstra
@ 2019-12-10  5:38   ` Michael Ellerman
  2019-12-10 10:15     ` Peter Zijlstra
  2019-12-12  5:42   ` READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)) Michael Ellerman
  1 sibling, 1 reply; 43+ messages in thread
From: Michael Ellerman @ 2019-12-10  5:38 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linus Torvalds, dja, elver, linux-kernel, linuxppc-dev,
	christophe.leroy, linux-s390, linux-arch, x86, kasan-dev,
	Will Deacon, Mark Rutland

Peter Zijlstra <peterz@infradead.org> writes:
> On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA256
>> 
>> Hi Linus,
>> 
>> Please pull another powerpc update for 5.5.
>> 
>> As you'll see from the diffstat this is mostly not powerpc code. In order to do
>> KASAN instrumentation of bitops we needed to juggle some of the generic bitops
>> headers.
>> 
>> Because those changes potentially affect several architectures I wasn't
>> confident putting them directly into my tree, so I've had them sitting in a
>> topic branch. That branch (topic/kasan-bitops) has been in linux-next for a
>> month, and I've not had any feedback that it's caused any problems.
>> 
>> So I think this is good to merge, but it's a standalone pull so if anyone does
>> object it's not a problem.
>
> No objections, but here:
>
>   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?h=topic/kasan-bitops&id=81d2c6f81996e01fbcd2b5aeefbb519e21c806e9
>
> you write:
>
>   "Currently bitops-instrumented.h assumes that the architecture provides
> atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
> This is true on x86 and s390, but is not always true: there is a
> generic bitops/non-atomic.h header that provides generic non-atomic
> operations, and also a generic bitops/lock.h for locking operations."
>
> Is there any actual benefit for PPC in using its own atomic bitops
> over bitops/lock.h? I'm thinking that the generic code is fairly
> optimal for most LL/SC architectures.

Good question, I'll have a look.

There seems to be confusion about what the type of the bit number is,
which is leading to sign extension in some cases and not others.

eg, comparing the generic clear_bit_unlock() vs ours:

 1 c000000000031890 <generic_clear_bit_unlock>:             1 c0000000000319a0 <ppc_clear_bit_unlock>:
                                                            2         extsw   r3,r3
                                                            3         li      r10,1
                                                            4         srawi   r9,r3,6
                                                            5         addze   r9,r9
                                                            6         rlwinm  r8,r9,6,0,25
                                                            7         extsw   r9,r9
                                                            8         subf    r3,r8,r3
 2         rlwinm  r9,r3,29,3,28                            9         rldicr  r9,r9,3,60
                                                           10         sld     r3,r10,r3
 3         add     r4,r4,r9                                11         add     r4,r4,r9
 4         lwsync                                          12         lwsync
 5         li      r9,-2
 6         clrlwi  r3,r3,26
 7         rotld   r3,r9,r3
 8         ldarx   r9,0,r4                                 13         ldarx   r9,0,r4
 9         and     r10,r3,r9                               14         andc    r9,r9,r3
10         stdcx.  r10,0,r4                                15         stdcx.  r9,0,r4
11         bne-    <generic_clear_bit_unlock+0x18>         16         bne-    <ppc_clear_bit_unlock+0x2c>
12         blr                                             17         blr

It looks like in actual usage it often doesn't matter, ie. when we pass
a constant bit number it all gets inlined and the compiler works it out.

It looks like the type should be unsigned long?

  Documentation/core-api/atomic_ops.rst:  void __clear_bit_unlock(unsigned long nr, unsigned long *addr);
  arch/mips/include/asm/bitops.h:static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
  arch/powerpc/include/asm/bitops.h:static inline void arch___clear_bit_unlock(int nr, volatile unsigned long *addr)
  arch/riscv/include/asm/bitops.h:static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
  arch/s390/include/asm/bitops.h:static inline void arch___clear_bit_unlock(unsigned long nr,
  include/asm-generic/bitops/instrumented-lock.h:static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
  include/asm-generic/bitops/lock.h:static inline void __clear_bit_unlock(unsigned int nr,

So I guess step one is to convert our versions to use unsigned long, so
we're at least not tripping over that difference when comparing the
assembly.

cheers


* Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)
  2019-12-10  5:38   ` Michael Ellerman
@ 2019-12-10 10:15     ` Peter Zijlstra
  2019-12-11  0:29       ` Michael Ellerman
  0 siblings, 1 reply; 43+ messages in thread
From: Peter Zijlstra @ 2019-12-10 10:15 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Linus Torvalds, dja, elver, linux-kernel, linuxppc-dev,
	christophe.leroy, linux-s390, linux-arch, x86, kasan-dev,
	Will Deacon, Mark Rutland

On Tue, Dec 10, 2019 at 04:38:54PM +1100, Michael Ellerman wrote:

> Good question, I'll have a look.
> 
> There seems to be confusion about what the type of the bit number is,
> which is leading to sign extension in some cases and not others.

Shiny.

> It looks like the type should be unsigned long?

I'm thinking unsigned makes most sense, I mean, negative bit offsets
should 'work' but that's almost always guaranteed to be an out-of-bound
operation.

As to 'long' vs 'int', I'm not sure, 4G bits is a long bitmap. But I
suppose since the bitmap itself is 'unsigned long', we might as well use
'unsigned long' for the bitnr too.

>   Documentation/core-api/atomic_ops.rst:  void __clear_bit_unlock(unsigned long nr, unsigned long *addr);
>   arch/mips/include/asm/bitops.h:static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
>   arch/powerpc/include/asm/bitops.h:static inline void arch___clear_bit_unlock(int nr, volatile unsigned long *addr)
>   arch/riscv/include/asm/bitops.h:static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
>   arch/s390/include/asm/bitops.h:static inline void arch___clear_bit_unlock(unsigned long nr,
>   include/asm-generic/bitops/instrumented-lock.h:static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
>   include/asm-generic/bitops/lock.h:static inline void __clear_bit_unlock(unsigned int nr,
> 
> So I guess step one is to convert our versions to use unsigned long, so
> we're at least not tripping over that difference when comparing the
> assembly.

Yeah, I'll look at fixing the generic code, bitops/atomic.h and
bitops/non-atomic.h don't even agree on the type of bitnr.


* Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)
  2019-12-10 10:15     ` Peter Zijlstra
@ 2019-12-11  0:29       ` Michael Ellerman
  0 siblings, 0 replies; 43+ messages in thread
From: Michael Ellerman @ 2019-12-11  0:29 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linus Torvalds, dja, elver, linux-kernel, linuxppc-dev,
	christophe.leroy, linux-s390, linux-arch, x86, kasan-dev,
	Will Deacon, Mark Rutland

Peter Zijlstra <peterz@infradead.org> writes:
> On Tue, Dec 10, 2019 at 04:38:54PM +1100, Michael Ellerman wrote:
>
>> Good question, I'll have a look.
>> 
>> There seems to be confusion about what the type of the bit number is,
>> which is leading to sign extension in some cases and not others.
>
> Shiny.
>
>> It looks like the type should be unsigned long?
>
> I'm thinking unsigned makes most sense, I mean, negative bit offsets
> should 'work' but that's almost always guaranteed to be an out-of-bound
> operation.

Yeah I agree.

> As to 'long' vs 'int', I'm not sure, 4G bits is a long bitmap. But I
> suppose since the bitmap itself is 'unsigned long', we might as well use
> 'unsigned long' for the bitnr too.

4G is a lot of bits, but it's not *that* many.

eg. If we had a bit per 4K page on a 32T machine that would be 8G bits.

So unsigned long seems best.

>>   Documentation/core-api/atomic_ops.rst:  void __clear_bit_unlock(unsigned long nr, unsigned long *addr);
>>   arch/mips/include/asm/bitops.h:static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
>>   arch/powerpc/include/asm/bitops.h:static inline void arch___clear_bit_unlock(int nr, volatile unsigned long *addr)
>>   arch/riscv/include/asm/bitops.h:static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
>>   arch/s390/include/asm/bitops.h:static inline void arch___clear_bit_unlock(unsigned long nr,
>>   include/asm-generic/bitops/instrumented-lock.h:static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
>>   include/asm-generic/bitops/lock.h:static inline void __clear_bit_unlock(unsigned int nr,
>> 
>> So I guess step one is to convert our versions to use unsigned long, so
>> we're at least not tripping over that difference when comparing the
>> assembly.
>
> Yeah, I'll look at fixing the generic code, bitops/atomic.h and
> bitops/non-atomic.h don't even agree on the type of bitnr.

Thanks.

cheers


* READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-06 13:16 ` Peter Zijlstra
  2019-12-10  5:38   ` Michael Ellerman
@ 2019-12-12  5:42   ` Michael Ellerman
  2019-12-12  8:01     ` Peter Zijlstra
  2019-12-12 15:10     ` Segher Boessenkool
  1 sibling, 2 replies; 43+ messages in thread
From: Michael Ellerman @ 2019-12-12  5:42 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Linus Torvalds, dja, linux-kernel, linuxppc-dev,
	christophe.leroy, linux-arch, Will Deacon, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

[ trimmed CC a bit ]

Peter Zijlstra <peterz@infradead.org> writes:
> On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote:
...
> you write:
>
>   "Currently bitops-instrumented.h assumes that the architecture provides
> atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
> This is true on x86 and s390, but is not always true: there is a
> generic bitops/non-atomic.h header that provides generic non-atomic
> operations, and also a generic bitops/lock.h for locking operations."
>
> Is there any actual benefit for PPC in using its own atomic bitops
> over bitops/lock.h? I'm thinking that the generic code is fairly
> optimal for most LL/SC architectures.

Yes and no :)

Some of the generic versions don't generate good code compared to our
versions, but that's because READ_ONCE() triggers the stack protector
to be enabled.

For example, comparing an out-of-line copy of the generic and ppc
versions of test_and_set_bit_lock():

   1 <generic_test_and_set_bit_lock>:           1 <ppc_test_and_set_bit_lock>:
   2         addis   r2,r12,361
   3         addi    r2,r2,-4240
   4         stdu    r1,-48(r1)
   5         rlwinm  r8,r3,29,3,28
   6         clrlwi  r10,r3,26                   2         rldicl  r10,r3,58,6
   7         ld      r9,3320(r13)
   8         std     r9,40(r1)
   9         li      r9,0
  10         li      r9,1                        3         li      r9,1
                                                 4         clrlwi  r3,r3,26
                                                 5         rldicr  r10,r10,3,60
  11         sld     r9,r9,r10                   6         sld     r3,r9,r3
  12         add     r10,r4,r8                   7         add     r4,r4,r10
  13         ldx     r8,r4,r8
  14         and.    r8,r9,r8
  15         bne     34f
  16         ldarx   r7,0,r10                    8         ldarx   r9,0,r4,1
  17         or      r8,r9,r7                    9         or      r10,r9,r3
  18         stdcx.  r8,0,r10                   10         stdcx.  r10,0,r4
  19         bne-    16b                        11         bne-    8b
  20         isync                              12         isync
  21         and     r9,r7,r9                   13         and     r3,r3,r9
  22         addic   r7,r9,-1                   14         addic   r9,r3,-1
  23         subfe   r7,r7,r9                   15         subfe   r3,r9,r3
  24         ld      r9,40(r1)
  25         ld      r10,3320(r13)
  26         xor.    r9,r9,r10
  27         li      r10,0
  28         mr      r3,r7
  29         bne     36f
  30         addi    r1,r1,48
  31         blr                                16         blr
  32         nop
  33         nop
  34         li      r7,1
  35         b       24b
  36         mflr    r0
  37         std     r0,64(r1)
  38         bl      <__stack_chk_fail+0x8>


If you squint, the generated code for the actual logic is pretty similar, but
the stack protector gunk makes a big mess. It's particularly bad here
because the ppc version doesn't even need a stack frame.

I've also confirmed that even when test_and_set_bit_lock() is inlined
into an actual call site the stack protector logic still triggers.

eg, if I make two versions of ext4_resize_begin() which call the generic or ppc
version of test_and_set_bit_lock(), the generic version gets a bunch of extra
stack protector code.

   1 c0000000005336e0 <ext4_resize_begin_generic>:           1 c0000000005335b0 <ext4_resize_begin_ppc>:
   2         addis   r2,r12,281                              2         addis   r2,r12,281
   3         addi    r2,r2,-12256                            3         addi    r2,r2,-11952
   4         mflr    r0                                      4         mflr    r0
   5         bl      <_mcount>                               5         bl      <_mcount>
   6         mflr    r0                                      6         mflr    r0
   7         std     r31,-8(r1)                              7         std     r31,-8(r1)
   8         std     r30,-16(r1)                             8         std     r30,-16(r1)
   9         mr      r31,r3                                  9         mr      r31,r3
  10         li      r3,24                                  10         li      r3,24
  11         std     r0,16(r1)                              11         std     r0,16(r1)
  12         stdu    r1,-128(r1)                            12         stdu    r1,-112(r1)
  13         ld      r9,3320(r13)
  14         std     r9,104(r1)
  15         li      r9,0
  16         ld      r30,920(r31)                           13         ld      r30,920(r31)
  17         bl      <capable+0x8>                          14         bl      <capable+0x8>
  18         nop                                            15         nop
  19         cmpdi   cr7,r3,0                               16         cmpdi   cr7,r3,0
  20         beq     cr7,<ext4_resize_begin_generic+0xf0>   17         beq     cr7,<ext4_resize_begin_ppc+0xc0>
  21         ld      r9,920(r31)                            18         ld      r9,920(r31)
  22         ld      r10,96(r30)                            19         ld      r10,96(r30)
  23         lwz     r7,84(r30)                             20         lwz     r7,84(r30)
  24         ld      r8,104(r9)                             21         ld      r8,104(r9)
  25         ld      r10,24(r10)                            22         ld      r10,24(r10)
  26         lwz     r8,20(r8)                              23         lwz     r8,20(r8)
  27         srd     r10,r10,r7                             24         srd     r10,r10,r7
  28         cmpd    cr7,r10,r8                             25         cmpd    cr7,r10,r8
  29         bne     cr7,<ext4_resize_begin_generic+0x128>  26         bne     cr7,<ext4_resize_begin_ppc+0xf8>
  30         lhz     r10,160(r9)                            27         lhz     r10,160(r9)
  31         andi.   r10,r10,2                              28         andi.   r10,r10,2
  32         bne     <ext4_resize_begin_generic+0x100>
  33         ld      r10,560(r9)
  34         andi.   r10,r10,1
  35         bne     <ext4_resize_begin_generic+0xe0>       29         bne     <ext4_resize_begin_ppc+0xd0>
  36         addi    r7,r9,560                              30         addi    r9,r9,560
  37         li      r8,1                                   31         li      r10,1
  38         ldarx   r10,0,r7                               32         ldarx   r3,0,r9,1
  39         or      r6,r8,r10                              33         or      r8,r3,r10
  40         stdcx.  r6,0,r7                                34         stdcx.  r8,0,r9
  41         bne-    <ext4_resize_begin_generic+0x90>       35         bne-    <ext4_resize_begin_ppc+0x78>
  42         isync                                          36         isync
                                                            37         clrldi  r3,r3,63
  43         andi.   r9,r10,1                               38         addi    r3,r3,-1
  44         li      r3,0                                   39         rlwinm  r3,r3,0,27,27
  45         bne     <ext4_resize_begin_generic+0xe0>       40         addi    r3,r3,-16
  46         ld      r9,104(r1)
  47         ld      r10,3320(r13)
  48         xor.    r9,r9,r10
  49         li      r10,0
  50         bne     <ext4_resize_begin_generic+0x158>
  51         addi    r1,r1,128                              41         addi    r1,r1,112
  52         ld      r0,16(r1)                              42         ld      r0,16(r1)
  53         ld      r30,-16(r1)                            43         ld      r30,-16(r1)
  54         ld      r31,-8(r1)                             44         ld      r31,-8(r1)
  55         mtlr    r0                                     45         mtlr    r0
  56         blr                                            46         blr
  57         nop                                            47         nop
  58         li      r3,-16
  59         b       <ext4_resize_begin_generic+0xb0>
  60         nop                                            48         nop
  61         nop                                            49         nop
  62         li      r3,-1                                  50         li      r3,-1
  63         b       <ext4_resize_begin_generic+0xb0>       51         b       <ext4_resize_begin_ppc+0x9c>
  64         nop                                            52         nop
  65         nop                                            53         nop
  66         addis   r6,r2,-118                             54         addis   r6,r2,-118
  67         addis   r4,r2,-140                             55         addis   r4,r2,-140
  68         mr      r3,r31                                 56         mr      r3,r31
  69         li      r5,97                                  57         li      r5,46
  70         addi    r6,r6,30288                            58         addi    r6,r6,30288
  71         addi    r4,r4,3064                             59         addi    r4,r4,3040
  72         bl      <__ext4_warning+0x8>                   60         bl      <__ext4_warning+0x8>
  73         nop                                            61         nop
  74         li      r3,-1                                  62         li      r3,-1
  75         b       <ext4_resize_begin_generic+0xb0>       63         b       <ext4_resize_begin_ppc+0x9c>
  76         ld      r9,96(r9)                              64         ld      r9,96(r9)
  77         addis   r6,r2,-118                             65         addis   r6,r2,-118
  78         addis   r4,r2,-140                             66         addis   r4,r2,-140
  79         mr      r3,r31                                 67         mr      r3,r31
  80         li      r5,87                                  68         li      r5,36
  81         addi    r6,r6,30240                            69         addi    r6,r6,30240
  82         addi    r4,r4,3064                             70         addi    r4,r4,3040
  83         ld      r7,24(r9)                              71         ld      r7,24(r9)
  84         bl      <__ext4_warning+0x8>                   72         bl      <__ext4_warning+0x8>
  85         nop                                            73         nop
  86         li      r3,-1                                  74         li      r3,-1
  87         b       <ext4_resize_begin_generic+0xb0>       75         b       <ext4_resize_begin_ppc+0x9c>
  88         bl      <__stack_chk_fail+0x8>


If I change the READ_ONCE() in test_and_set_bit_lock():

	if (READ_ONCE(*p) & mask)
		return 1;

to a regular pointer access:

	if (*p & mask)
		return 1;

Then the generated code looks more or less the same, except for the extra early
return in the generic version of test_and_set_bit_lock(), and different handling
of the return code by the compiler.

   1 <ext4_resize_begin_generic>:                            1 <ext4_resize_begin_ppc>:
   2         addis   r2,r12,281                              2         addis   r2,r12,281
   3         addi    r2,r2,-12256                            3         addi    r2,r2,-11952
   4         mflr    r0                                      4         mflr    r0
   5         bl      <_mcount>                               5         bl      <_mcount>
   6         mflr    r0                                      6         mflr    r0
   7         std     r31,-8(r1)                              7         std     r31,-8(r1)
   8         std     r30,-16(r1)                             8         std     r30,-16(r1)
   9         mr      r31,r3                                  9         mr      r31,r3
  10         li      r3,24                                  10         li      r3,24
  11         std     r0,16(r1)                              11         std     r0,16(r1)
  12         stdu    r1,-112(r1)                            12         stdu    r1,-112(r1)
  13         ld      r30,920(r31)                           13         ld      r30,920(r31)
  14         bl      <capable+0x8>                          14         bl      <capable+0x8>
  15         nop                                            15         nop
  16         cmpdi   cr7,r3,0                               16         cmpdi   cr7,r3,0
  17         beq     cr7,<ext4_resize_begin_generic+0xe0>   17         beq     cr7,<ext4_resize_begin_ppc+0xc0>
  18         ld      r9,920(r31)                            18         ld      r9,920(r31)
  19         ld      r10,96(r30)                            19         ld      r10,96(r30)
  20         lwz     r7,84(r30)                             20         lwz     r7,84(r30)
  21         ld      r8,104(r9)                             21         ld      r8,104(r9)
  22         ld      r10,24(r10)                            22         ld      r10,24(r10)
  23         lwz     r8,20(r8)                              23         lwz     r8,20(r8)
  24         srd     r10,r10,r7                             24         srd     r10,r10,r7
  25         cmpd    cr7,r10,r8                             25         cmpd    cr7,r10,r8
  26         bne     cr7,<ext4_resize_begin_generic+0x118>  26         bne     cr7,<ext4_resize_begin_ppc+0xf8>
  27         lhz     r10,160(r9)                            27         lhz     r10,160(r9)
  28         andi.   r10,r10,2                              28         andi.   r10,r10,2
  29         bne     <ext4_resize_begin_generic+0xf0>       29         bne     <ext4_resize_begin_ppc+0xd0>
  30         ld      r10,560(r9)
  31         andi.   r10,r10,1
  32         bne     <ext4_resize_begin_generic+0xc0>
  33         addi    r7,r9,560                              30         addi    r9,r9,560
  34         li      r8,1                                   31         li      r10,1
  35         ldarx   r10,0,r7                               32         ldarx   r3,0,r9,1
  36         or      r6,r8,r10                              33         or      r8,r3,r10
  37         stdcx.  r6,0,r7                                34         stdcx.  r8,0,r9
  38         bne-    <ext4_resize_begin_generic+0x84>       35         bne-    <ext4_resize_begin_ppc+0x78>
  39         isync                                          36         isync
                                                            37         clrldi  r3,r3,63
  40         andi.   r9,r10,1                               38         addi    r3,r3,-1
  41         li      r3,0                                   39         rlwinm  r3,r3,0,27,27
  42         bne     <ext4_resize_begin_generic+0xc0>       40         addi    r3,r3,-16
  43         addi    r1,r1,112                              41         addi    r1,r1,112
  44         ld      r0,16(r1)                              42         ld      r0,16(r1)
  45         ld      r30,-16(r1)                            43         ld      r30,-16(r1)
  46         ld      r31,-8(r1)                             44         ld      r31,-8(r1)
  47         mtlr    r0                                     45         mtlr    r0
  48         blr                                            46         blr
  49         nop                                            47         nop
  50         addi    r1,r1,112                              48         nop
  51         li      r3,-16
  52         ld      r0,16(r1)
  53         ld      r30,-16(r1)
  54         ld      r31,-8(r1)
  55         mtlr    r0
  56         blr
  57         nop                                            49         nop
  58         li      r3,-1                                  50         li      r3,-1
  59         b       <ext4_resize_begin_generic+0xa4>       51         b       <ext4_resize_begin_ppc+0x9c>
  60         nop                                            52         nop
  61         nop                                            53         nop
  62         addis   r6,r2,-118                             54         addis   r6,r2,-118
  63         addis   r4,r2,-140                             55         addis   r4,r2,-140
  64         mr      r3,r31                                 56         mr      r3,r31
  65         li      r5,97                                  57         li      r5,46
  66         addi    r6,r6,30288                            58         addi    r6,r6,30288
  67         addi    r4,r4,3064                             59         addi    r4,r4,3040
  68         bl      <__ext4_warning+0x8>                   60         bl      <__ext4_warning+0x8>
  69         nop                                            61         nop
  70         li      r3,-1                                  62         li      r3,-1
  71         b       <ext4_resize_begin_generic+0xa4>       63         b       <ext4_resize_begin_ppc+0x9c>
  72         ld      r9,96(r9)                              64         ld      r9,96(r9)
  73         addis   r6,r2,-118                             65         addis   r6,r2,-118
  74         addis   r4,r2,-140                             66         addis   r4,r2,-140
  75         mr      r3,r31                                 67         mr      r3,r31
  76         li      r5,87                                  68         li      r5,36
  77         addi    r6,r6,30240                            69         addi    r6,r6,30240
  78         addi    r4,r4,3064                             70         addi    r4,r4,3040
  79         ld      r7,24(r9)                              71         ld      r7,24(r9)
  80         bl      <__ext4_warning+0x8>                   72         bl      <__ext4_warning+0x8>
  81         nop                                            73         nop
  82         li      r3,-1                                  74         li      r3,-1
  83         b       <ext4_resize_begin_generic+0xa4>       75         b       <ext4_resize_begin_ppc+0x9c>


So READ_ONCE() + STACKPROTECTOR_STRONG is problematic. The root cause is
presumably that READ_ONCE() does an access to an on-stack variable,
which triggers the heuristics in the compiler that the stack needs
protecting.

It seems like a compiler "mis-feature" that a constant-sized access to the stack
triggers the stack protector logic, especially when the access is eventually
optimised away. But I guess that's probably what we get for doing tricks like
READ_ONCE() in the first place :/

I tried going back to the version of READ_ONCE() that doesn't use a
union, ie. effectively reverting dd36929720f4 ("kernel: make READ_ONCE()
valid on const arguments") to get:

#define READ_ONCE(x)							\
	({ typeof(x) __val; __read_once_size(&x, &__val, sizeof(__val)); __val; })

But it makes no difference, the stack protector stuff still triggers. So
I guess it's simply taking the address of a stack variable that triggers
it.
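
For illustration, here is a minimal standalone sketch (my construction, not
code from the kernel tree) that reproduces the heuristic: a local union
containing a char array, as in the union-based READ_ONCE(), is enough to make
GCC emit a canary under -fstack-protector-strong, even though the union itself
is optimised away:

```c
/*
 * Build with: gcc -O2 -fstack-protector-strong -S sketch.c
 * The generated assembly contains the canary load/check even though
 * the union never survives optimisation.
 */
unsigned long read_once_sketch(const unsigned long *p)
{
	union { unsigned long val; char c[1]; } u;

	/* volatile load, mimicking __read_once_size() */
	u.val = *(const volatile unsigned long *)p;
	return u.val;
}
```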

There seems to be a function attribute to enable stack protector for a
function, but not one to disable it:
  https://gcc.gnu.org/onlinedocs/gcc-9.2.0/gcc/Common-Function-Attributes.html#index-stack_005fprotect-function-attribute

That may not be a good solution even if it did exist, because it would
potentially disable stack protector in places where we do want it
enabled.

cheers

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12  5:42   ` READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)) Michael Ellerman
@ 2019-12-12  8:01     ` Peter Zijlstra
  2019-12-12 10:07       ` Will Deacon
  2019-12-12 15:10     ` Segher Boessenkool
  1 sibling, 1 reply; 43+ messages in thread
From: Peter Zijlstra @ 2019-12-12  8:01 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Linus Torvalds, dja, linux-kernel, linuxppc-dev,
	christophe.leroy, linux-arch, Will Deacon, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

On Thu, Dec 12, 2019 at 04:42:13PM +1100, Michael Ellerman wrote:
> [ trimmed CC a bit ]
> 
> Peter Zijlstra <peterz@infradead.org> writes:
> > On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote:
> ...
> > you write:
> >
> >   "Currently bitops-instrumented.h assumes that the architecture provides
> > atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
> > This is true on x86 and s390, but is not always true: there is a
> > generic bitops/non-atomic.h header that provides generic non-atomic
> > operations, and also a generic bitops/lock.h for locking operations."
> >
> > Is there any actual benefit for PPC to using their own atomic bitops
> > over bitops/lock.h ? I'm thinking that the generic code is fairly
> > optimal for most LL/SC architectures.
> 
> Yes and no :)
> 
> Some of the generic versions don't generate good code compared to our
> versions, but that's because READ_ONCE() is triggering stack protector
> to be enabled.

Bah, there's never anything simple, is there :/

> For example, comparing an out-of-line copy of the generic and ppc
> versions of test_and_set_bit_lock():
> 
>    1 <generic_test_and_set_bit_lock>:           1 <ppc_test_and_set_bit_lock>:
>    2         addis   r2,r12,361
>    3         addi    r2,r2,-4240
>    4         stdu    r1,-48(r1)
>    5         rlwinm  r8,r3,29,3,28
>    6         clrlwi  r10,r3,26                   2         rldicl  r10,r3,58,6
>    7         ld      r9,3320(r13)
>    8         std     r9,40(r1)
>    9         li      r9,0
>   10         li      r9,1                        3         li      r9,1
>                                                  4         clrlwi  r3,r3,26
>                                                  5         rldicr  r10,r10,3,60
>   11         sld     r9,r9,r10                   6         sld     r3,r9,r3
>   12         add     r10,r4,r8                   7         add     r4,r4,r10
>   13         ldx     r8,r4,r8
>   14         and.    r8,r9,r8
>   15         bne     34f
>   16         ldarx   r7,0,r10                    8         ldarx   r9,0,r4,1
>   17         or      r8,r9,r7                    9         or      r10,r9,r3
>   18         stdcx.  r8,0,r10                   10         stdcx.  r10,0,r4
>   19         bne-    16b                        11         bne-    8b
>   20         isync                              12         isync
>   21         and     r9,r7,r9                   13         and     r3,r3,r9
>   22         addic   r7,r9,-1                   14         addic   r9,r3,-1
>   23         subfe   r7,r7,r9                   15         subfe   r3,r9,r3
>   24         ld      r9,40(r1)
>   25         ld      r10,3320(r13)
>   26         xor.    r9,r9,r10
>   27         li      r10,0
>   28         mr      r3,r7
>   29         bne     36f
>   30         addi    r1,r1,48
>   31         blr                                16         blr
>   32         nop
>   33         nop
>   34         li      r7,1
>   35         b       24b
>   36         mflr    r0
>   37         std     r0,64(r1)
>   38         bl      <__stack_chk_fail+0x8>
> 
> 
> If you squint, the generated code for the actual logic is pretty similar, but
> the stack protector gunk makes a big mess. It's particularly bad here
> because the ppc version doesn't even need a stack frame.
> 
> I've also confirmed that even when test_and_set_bit_lock() is inlined
> into an actual call site the stack protector logic still triggers.

> If I change the READ_ONCE() in test_and_set_bit_lock():
> 
> 	if (READ_ONCE(*p) & mask)
> 		return 1;
> 
> to a regular pointer access:
> 
> 	if (*p & mask)
> 		return 1;
> 
> Then the generated code looks more or less the same, except for the extra early
> return in the generic version of test_and_set_bit_lock(), and different handling
> of the return code by the compiler.

So given that the function signature is:

static inline int test_and_set_bit_lock(unsigned int nr,
					volatile unsigned long *p)

@p already carries the required volatile qualifier, so READ_ONCE() does
not add anything here (except for easier to read code and poor code
generation).

So your proposed change _should_ be fine. Will, I'm assuming you never
saw this on your ARGH64 builds when you did this code ?

---
diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
index dd90c9792909..10264e8808f8 100644
--- a/include/asm-generic/bitops/atomic.h
+++ b/include/asm-generic/bitops/atomic.h
@@ -35,7 +35,7 @@ static inline int test_and_set_bit(unsigned int nr, volatile unsigned long *p)
 	unsigned long mask = BIT_MASK(nr);
 
 	p += BIT_WORD(nr);
-	if (READ_ONCE(*p) & mask)
+	if (*p & mask)
 		return 1;
 
 	old = atomic_long_fetch_or(mask, (atomic_long_t *)p);
@@ -48,7 +48,7 @@ static inline int test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
 	unsigned long mask = BIT_MASK(nr);
 
 	p += BIT_WORD(nr);
-	if (!(READ_ONCE(*p) & mask))
+	if (!(*p & mask))
 		return 0;
 
 	old = atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h
index 3ae021368f48..9baf0a0055b8 100644
--- a/include/asm-generic/bitops/lock.h
+++ b/include/asm-generic/bitops/lock.h
@@ -22,7 +22,7 @@ static inline int test_and_set_bit_lock(unsigned int nr,
 	unsigned long mask = BIT_MASK(nr);
 
 	p += BIT_WORD(nr);
-	if (READ_ONCE(*p) & mask)
+	if (*p & mask)
 		return 1;
 
 	old = atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p);
@@ -60,7 +60,7 @@ static inline void __clear_bit_unlock(unsigned int nr,
 	unsigned long old;
 
 	p += BIT_WORD(nr);
-	old = READ_ONCE(*p);
+	old = *p;
 	old &= ~BIT_MASK(nr);
 	atomic_long_set_release((atomic_long_t *)p, old);
 }


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12  8:01     ` Peter Zijlstra
@ 2019-12-12 10:07       ` Will Deacon
  2019-12-12 10:46         ` Peter Zijlstra
  0 siblings, 1 reply; 43+ messages in thread
From: Will Deacon @ 2019-12-12 10:07 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Michael Ellerman, Linus Torvalds, dja, linux-kernel,
	linuxppc-dev, christophe.leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

On Thu, Dec 12, 2019 at 09:01:05AM +0100, Peter Zijlstra wrote:
> On Thu, Dec 12, 2019 at 04:42:13PM +1100, Michael Ellerman wrote:
> > Peter Zijlstra <peterz@infradead.org> writes:
> > > On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote:
> > Some of the generic versions don't generate good code compared to our
> > versions, but that's because READ_ONCE() is triggering stack protector
> > to be enabled.
> 
> Bah, there's never anything simple, is there :/
> 
> > For example, comparing an out-of-line copy of the generic and ppc
> > versions of test_and_set_bit_lock():
> > 
> >    1 <generic_test_and_set_bit_lock>:           1 <ppc_test_and_set_bit_lock>:
> >    2         addis   r2,r12,361
> >    3         addi    r2,r2,-4240
> >    4         stdu    r1,-48(r1)
> >    5         rlwinm  r8,r3,29,3,28
> >    6         clrlwi  r10,r3,26                   2         rldicl  r10,r3,58,6
> >    7         ld      r9,3320(r13)
> >    8         std     r9,40(r1)
> >    9         li      r9,0
> >   10         li      r9,1                        3         li      r9,1
> >                                                  4         clrlwi  r3,r3,26
> >                                                  5         rldicr  r10,r10,3,60
> >   11         sld     r9,r9,r10                   6         sld     r3,r9,r3
> >   12         add     r10,r4,r8                   7         add     r4,r4,r10
> >   13         ldx     r8,r4,r8
> >   14         and.    r8,r9,r8
> >   15         bne     34f
> >   16         ldarx   r7,0,r10                    8         ldarx   r9,0,r4,1
> >   17         or      r8,r9,r7                    9         or      r10,r9,r3
> >   18         stdcx.  r8,0,r10                   10         stdcx.  r10,0,r4
> >   19         bne-    16b                        11         bne-    8b
> >   20         isync                              12         isync
> >   21         and     r9,r7,r9                   13         and     r3,r3,r9
> >   22         addic   r7,r9,-1                   14         addic   r9,r3,-1
> >   23         subfe   r7,r7,r9                   15         subfe   r3,r9,r3
> >   24         ld      r9,40(r1)
> >   25         ld      r10,3320(r13)
> >   26         xor.    r9,r9,r10
> >   27         li      r10,0
> >   28         mr      r3,r7
> >   29         bne     36f
> >   30         addi    r1,r1,48
> >   31         blr                                16         blr
> >   32         nop
> >   33         nop
> >   34         li      r7,1
> >   35         b       24b
> >   36         mflr    r0
> >   37         std     r0,64(r1)
> >   38         bl      <__stack_chk_fail+0x8>
> > 
> > 
> > If you squint, the generated code for the actual logic is pretty similar, but
> > the stack protector gunk makes a big mess. It's particularly bad here
> > because the ppc version doesn't even need a stack frame.
> > 
> > I've also confirmed that even when test_and_set_bit_lock() is inlined
> > into an actual call site the stack protector logic still triggers.
> 
> > If I change the READ_ONCE() in test_and_set_bit_lock():
> > 
> > 	if (READ_ONCE(*p) & mask)
> > 		return 1;
> > 
> > to a regular pointer access:
> > 
> > 	if (*p & mask)
> > 		return 1;
> > 
> > Then the generated code looks more or less the same, except for the extra early
> > return in the generic version of test_and_set_bit_lock(), and different handling
> > of the return code by the compiler.
> 
> So given that the function signature is:
> 
> static inline int test_and_set_bit_lock(unsigned int nr,
> 					volatile unsigned long *p)
> 
> @p already carries the required volatile qualifier, so READ_ONCE() does
> not add anything here (except for easier to read code and poor code
> generation).
> 
> So your proposed change _should_ be fine. Will, I'm assuming you never
> saw this on your ARGH64 builds when you did this code ?

I did see it, but (a) looking at the code out-of-line makes it look a lot
worse than it actually is (so the ext4 example is really helpful -- thanks
Michael!) and (b) I chalked it up to a crappy compiler.

However, see this comment from Arnd on my READ_ONCE series from the other
day:

https://lore.kernel.org/lkml/CAK8P3a0f=WvSQSBQ4t0FmEkcFE_mC3oARxaeTviTSkSa-D2qhg@mail.gmail.com

In which case, I'm thinking that we should be doing better in READ_ONCE()
for non-buggy compilers which would also keep the KCSAN folks happy for this
code (and would help with [1] too).

Will

[1] https://lkml.org/lkml/2019/11/12/898


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 10:07       ` Will Deacon
@ 2019-12-12 10:46         ` Peter Zijlstra
  2019-12-12 17:04           ` Will Deacon
                             ` (2 more replies)
  0 siblings, 3 replies; 43+ messages in thread
From: Peter Zijlstra @ 2019-12-12 10:46 UTC (permalink / raw)
  To: Will Deacon
  Cc: Michael Ellerman, Linus Torvalds, dja, linux-kernel,
	linuxppc-dev, christophe.leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

On Thu, Dec 12, 2019 at 10:07:56AM +0000, Will Deacon wrote:

> > So your proposed change _should_ be fine. Will, I'm assuming you never
> > saw this on your ARGH64 builds when you did this code ?
> 
> I did see it, but (a) looking at the code out-of-line makes it look a lot
> worse than it actually is (so the ext4 example is really helpful -- thanks
> Michael!) and (b) I chalked it up to a crappy compiler.
> 
> However, see this comment from Arnd on my READ_ONCE series from the other
> day:
> 
> https://lore.kernel.org/lkml/CAK8P3a0f=WvSQSBQ4t0FmEkcFE_mC3oARxaeTviTSkSa-D2qhg@mail.gmail.com
> 
> In which case, I'm thinking that we should be doing better in READ_ONCE()
> for non-buggy compilers which would also keep the KCSAN folks happy for this
> code (and would help with [1] too).

So something like this then? Although I suppose that should be moved
into compiler-gcc.h and then guarded by #ifndef READ_ONCE or so.

---

diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index ad8c76144a3c..8326e2cf28b4 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -179,20 +179,8 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 
 #include <uapi/linux/types.h>
 #include <linux/kcsan-checks.h>
-
-#define __READ_ONCE_SIZE						\
-({									\
-	switch (size) {							\
-	case 1: *(__u8 *)res = *(volatile __u8 *)p; break;		\
-	case 2: *(__u16 *)res = *(volatile __u16 *)p; break;		\
-	case 4: *(__u32 *)res = *(volatile __u32 *)p; break;		\
-	case 8: *(__u64 *)res = *(volatile __u64 *)p; break;		\
-	default:							\
-		barrier();						\
-		__builtin_memcpy((void *)res, (const void *)p, size);	\
-		barrier();						\
-	}								\
-})
+#include <asm/barrier.h>
+#include <linux/kasan-checks.h>
 
 #ifdef CONFIG_KASAN
 /*
@@ -222,6 +210,22 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 #define __no_sanitize_or_inline __always_inline
 #endif
 
+#if GCC_VERSION < 40800
+
+#define __READ_ONCE_SIZE						\
+({									\
+	switch (size) {							\
+	case 1: *(__u8 *)res = *(volatile __u8 *)p; break;		\
+	case 2: *(__u16 *)res = *(volatile __u16 *)p; break;		\
+	case 4: *(__u32 *)res = *(volatile __u32 *)p; break;		\
+	case 8: *(__u64 *)res = *(volatile __u64 *)p; break;		\
+	default:							\
+		barrier();						\
+		__builtin_memcpy((void *)res, (const void *)p, size);	\
+		barrier();						\
+	}								\
+})
+
 static __no_kcsan_or_inline
 void __read_once_size(const volatile void *p, void *res, int size)
 {
@@ -274,9 +278,6 @@ void __write_once_size(volatile void *p, void *res, int size)
  * with an explicit memory barrier or atomic instruction that provides the
  * required ordering.
  */
-#include <asm/barrier.h>
-#include <linux/kasan-checks.h>
-
 #define __READ_ONCE(x, check)						\
 ({									\
 	union { typeof(x) __val; char __c[1]; } __u;			\
@@ -295,6 +296,23 @@ void __write_once_size(volatile void *p, void *res, int size)
  */
 #define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0)
 
+#else /* GCC_VERSION < 40800 */
+
+#define READ_ONCE_NOCHECK(x)						\
+({									\
+	typeof(x) __x = *(volatile typeof(x) *)&(x);			\
+	smp_read_barrier_depends();					\
+	__x;								\
+})
+
+#define READ_ONCE(x)							\
+({									\
+	kcsan_check_atomic_read(&(x), sizeof(x));			\
+	READ_ONCE_NOCHECK(x);						\
+})
+
+#endif /* GCC_VERSION < 40800 */
+
 static __no_kasan_or_inline
 unsigned long read_word_at_a_time(const void *addr)
 {


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12  5:42   ` READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)) Michael Ellerman
  2019-12-12  8:01     ` Peter Zijlstra
@ 2019-12-12 15:10     ` Segher Boessenkool
  1 sibling, 0 replies; 43+ messages in thread
From: Segher Boessenkool @ 2019-12-12 15:10 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Peter Zijlstra, Linus Torvalds, dja, linux-kernel, linuxppc-dev,
	christophe.leroy, linux-arch, Will Deacon, Mark Rutland,
	Arnd Bergmann, Christian Borntraeger

Hi,

On Thu, Dec 12, 2019 at 04:42:13PM +1100, Michael Ellerman wrote:
> Some of the generic versions don't generate good code compared to our
> versions, but that's because READ_ONCE() is triggering stack protector
> to be enabled.

The *big* difference is the generic code has a special path that does not
do an atomic access at all.  Either that is a good idea or not, but we
probably should not change the behaviour here, not without benchmarking
anyway.

> For example, comparing an out-of-line copy of the generic and ppc
> versions of test_and_set_bit_lock():

(With what GCC version, and what exact flags?)

(A stand-alone testcase would be nice too, btw).

(Michael gave me one, thanks!)

> If you squint, the generated code for the actual logic is pretty similar, but
> the stack protector gunk makes a big mess.

And with stack protector it cannot shrink-wrap the exit, one of the bigger
performance costs of the stack protector.  The extra branch in the generic
code isn't fun either (but maybe it is good for performance?)

> It's particularly bad here
> because the ppc version doesn't even need a stack frame.

You are hit by this:

  if (... || (RECORD_OR_UNION_TYPE_P (var_type)
              && record_or_union_type_has_array_p (var_type)) ...)

(in the GCC code, stack_protect_decl_p (), cfgexpand.c)

for the variable __u from

#define __READ_ONCE(x, check)                                           \
({                                                                      \
        union { typeof(x) __val; char __c[1]; } __u;                    \
        __read_once_size(&(x), __u.__c, sizeof(x));                     \
        smp_read_barrier_depends(); /* Enforce dependency ordering from x */ \
        __u.__val;                                                      \
})

This is all optimised away later, but at the point this decision is made
GCC does not know that.

> So READ_ONCE() + STACKPROTECTOR_STRONG is problematic. The root cause is
> presumably that READ_ONCE() does an access to an on-stack variable,
> which triggers the heuristics in the compiler that the stack needs
> protecting.

Not exactly, but the problem is READ_ONCE alright.

> It seems like a compiler "mis-feature" that a constant-sized access to the stack
> triggers the stack protector logic, especially when the access is eventually
> optimised away. But I guess that's probably what we get for doing tricks like
> READ_ONCE() in the first place :/

__c is an array.  That is all that matters.  I don't think it is very
reasonable to fault GCC for this.

> I tried going back to the version of READ_ONCE() that doesn't use a
> union, ie. effectively reverting dd36929720f4 ("kernel: make READ_ONCE()
> valid on const arguments") to get:
> 
> #define READ_ONCE(x)							\
> 	({ typeof(x) __val; __read_once_size(&x, &__val, sizeof(__val)); __val; })

With that, it is that the address of __val is taken:

  ...
  || TREE_ADDRESSABLE (var)
  ...

> But it makes no difference, the stack protector stuff still triggers. So
> I guess it's simply taking the address of a stack variable that triggers
> it.

Not in the earlier testcase.  Btw, there is no such thing as a "stack
variable" at that point in the compiler: it just is a local var.

> There seems to be a function attribute to enable stack protector for a
> function, but not one to disable it:
>   https://gcc.gnu.org/onlinedocs/gcc-9.2.0/gcc/Common-Function-Attributes.html#index-stack_005fprotect-function-attribute

Yes.

> That may not be a good solution even if it did exist, because it would
> potentially disable stack protector in places where we do want it
> enabled.

Right, I don't think we want that, such an attribute invites people to
write dangerous code.  (You already can just put the functions that you
want to be unsafe in a separate source file...  It sounds even sillier
that way, heh).


Segher


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 10:46         ` Peter Zijlstra
@ 2019-12-12 17:04           ` Will Deacon
  2019-12-12 17:16             ` Will Deacon
  2019-12-12 17:41           ` Linus Torvalds
  2019-12-13 12:07           ` Michael Ellerman
  2 siblings, 1 reply; 43+ messages in thread
From: Will Deacon @ 2019-12-12 17:04 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Michael Ellerman, Linus Torvalds, dja, linux-kernel,
	linuxppc-dev, christophe.leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

On Thu, Dec 12, 2019 at 11:46:10AM +0100, Peter Zijlstra wrote:
> On Thu, Dec 12, 2019 at 10:07:56AM +0000, Will Deacon wrote:
> 
> > > So your proposed change _should_ be fine. Will, I'm assuming you never
> > > saw this on your ARGH64 builds when you did this code ?
> > 
> > I did see it, but (a) looking at the code out-of-line makes it look a lot
> > worse than it actually is (so the ext4 example is really helpful -- thanks
> > Michael!) and (b) I chalked it up to a crappy compiler.
> > 
> > However, see this comment from Arnd on my READ_ONCE series from the other
> > day:
> > 
> > https://lore.kernel.org/lkml/CAK8P3a0f=WvSQSBQ4t0FmEkcFE_mC3oARxaeTviTSkSa-D2qhg@mail.gmail.com
> > 
> > In which case, I'm thinking that we should be doing better in READ_ONCE()
> > for non-buggy compilers which would also keep the KCSAN folks happy for this
> > code (and would help with [1] too).
> 
> So something like this then? Although I suppose that should be moved
> into compiler-gcc.h and then guarded by #ifndef READ_ONCE or so.

Ah wait, I think we've been looking at this wrong. The volatile pointer
argument is actually the problem here, not READ_ONCE()! The use of typeof()
means that the temporary variable to which __READ_ONCE_SIZE writes ends up
being a volatile store, so it can't be optimised away. This is why we get
a stack access and why stack protector then wrecks the codegen for us.

I'll cook a patch getting rid of those volatiles.

Will


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 17:04           ` Will Deacon
@ 2019-12-12 17:16             ` Will Deacon
  0 siblings, 0 replies; 43+ messages in thread
From: Will Deacon @ 2019-12-12 17:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Michael Ellerman, Linus Torvalds, dja, linux-kernel,
	linuxppc-dev, christophe.leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

On Thu, Dec 12, 2019 at 05:04:27PM +0000, Will Deacon wrote:
> On Thu, Dec 12, 2019 at 11:46:10AM +0100, Peter Zijlstra wrote:
> > On Thu, Dec 12, 2019 at 10:07:56AM +0000, Will Deacon wrote:
> > 
> > > > So your proposed change _should_ be fine. Will, I'm assuming you never
> > > > saw this on your ARGH64 builds when you did this code ?
> > > 
> > > I did see it, but (a) looking at the code out-of-line makes it look a lot
> > > worse than it actually is (so the ext4 example is really helpful -- thanks
> > > Michael!) and (b) I chalked it up to a crappy compiler.
> > > 
> > > However, see this comment from Arnd on my READ_ONCE series from the other
> > > day:
> > > 
> > > https://lore.kernel.org/lkml/CAK8P3a0f=WvSQSBQ4t0FmEkcFE_mC3oARxaeTviTSkSa-D2qhg@mail.gmail.com
> > > 
> > > In which case, I'm thinking that we should be doing better in READ_ONCE()
> > > for non-buggy compilers which would also keep the KCSAN folks happy for this
> > > code (and would help with [1] too).
> > 
> > So something like this then? Although I suppose that should be moved
> > into compiler-gcc.h and then guarded by #ifndef READ_ONCE or so.
> 
> Ah wait, I think we've been looking at this wrong. The volatile pointer
> argument is actually the problem here, not READ_ONCE()! The use of typeof()
> means that the temporary variable to which __READ_ONCE_SIZE writes ends up
> being a volatile store, so it can't be optimised away. This is why we get
> a stack access and why stack protector then wrecks the codegen for us.

Hmm, it's actually probably the volatile read which is causing the problem,
since __READ_ONCE_SIZE has cast that away and just uses "void *", but you
get the idea.

Will


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 10:46         ` Peter Zijlstra
  2019-12-12 17:04           ` Will Deacon
@ 2019-12-12 17:41           ` Linus Torvalds
  2019-12-12 17:50             ` Segher Boessenkool
  2019-12-12 18:06             ` Will Deacon
  2019-12-13 12:07           ` Michael Ellerman
  2 siblings, 2 replies; 43+ messages in thread
From: Linus Torvalds @ 2019-12-12 17:41 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Michael Ellerman, dja, Linux Kernel Mailing List,
	linuxppc-dev, Christophe Leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

On Thu, Dec 12, 2019 at 2:46 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> +#if GCC_VERSION < 40800

Where does that 4.8 version check come from, and why?

Yeah, I know, but this really wants a comment. Sadly it looks like gcc
bugzilla is down, so

   https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145

currently gives an "Internal Server Error" for me.

[ Delete the horrid code we have because of gcc bugs ]

> +#else /* GCC_VERSION < 40800 */
> +
> +#define READ_ONCE_NOCHECK(x)                                           \
> +({                                                                     \
> +       typeof(x) __x = *(volatile typeof(x) *)&(x);                  \

I think we can/should just do this unconditionally if it helps the issue.

Maybe add a warning about how gcc < 4.8 might mis-compile the kernel -
those versions are getting close to being unacceptable for kernel
builds anyway.

We could also look at being stricter for the normal READ/WRITE_ONCE(),
and require that they are

 (a) regular integer types

 (b) fit in an atomic word

We actually did (b) for a while, until we noticed that we do it on
loff_t's etc and relaxed the rules. But maybe we could have a
"non-atomic" version of READ/WRITE_ONCE() that is used for the
questionable cases?

              Linus

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 17:41           ` Linus Torvalds
@ 2019-12-12 17:50             ` Segher Boessenkool
  2019-12-12 18:06             ` Will Deacon
  1 sibling, 0 replies; 43+ messages in thread
From: Segher Boessenkool @ 2019-12-12 17:50 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Peter Zijlstra, Will Deacon, Michael Ellerman, dja,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Arnd Bergmann, Christian Borntraeger

On Thu, Dec 12, 2019 at 09:41:32AM -0800, Linus Torvalds wrote:
> Yeah, I know, but this really wants a comment. Sadly it looks like gcc
> bugzilla is down, so
> 
>    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145
> 
> currently gives an "Internal Server Error" for me.

We're being DoSsed again.  Reload, it will work after a while :-/


Segher

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 17:41           ` Linus Torvalds
  2019-12-12 17:50             ` Segher Boessenkool
@ 2019-12-12 18:06             ` Will Deacon
  2019-12-12 18:29               ` Christian Borntraeger
  2019-12-12 18:43               ` Linus Torvalds
  1 sibling, 2 replies; 43+ messages in thread
From: Will Deacon @ 2019-12-12 18:06 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Peter Zijlstra, Michael Ellerman, dja, Linux Kernel Mailing List,
	linuxppc-dev, Christophe Leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

On Thu, Dec 12, 2019 at 09:41:32AM -0800, Linus Torvalds wrote:
> On Thu, Dec 12, 2019 at 2:46 AM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > +#ifdef GCC_VERSION < 40800
> 
> Where does that 4.8 version check come from, and why?
> 
> Yeah, I know, but this really wants a comment. Sadly it looks like gcc
> bugzilla is down, so
> 
>    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145
> 
> currently gives an "Internal Server Error" for me.
> 
> [ Delete the horrid code we have because of gcc bugs ]
> 
> > +#else /* GCC_VERSION < 40800 */
> > +
> > +#define READ_ONCE_NOCHECK(x)                                           \
> > +({                                                                     \
> > +       typeof(x) __x = *(volatile typeof(x))&(x);                      \
> 
> I think we can/should just do this unconditionally if it helps the issue.

I'm currently trying to solve the issue by removing volatile from the bitop
function signatures, but it's grotty because there are quite a few callers
to fix up. I'm still trying to do it, because removing volatile fields from
structures is generally a "good thing", but I'd be keen to simplify
READ_ONCE() as you suggest regardless.

> Maybe add a warning about how gcc < 4.8 might mis-compile the kernel -
> those versions are getting close to being unacceptable for kernel
> builds anyway.
> 
> We could also look at being stricter for the normal READ/WRITE_ONCE(),
> and require that they are
> 
>  (a) regular integer types
> 
>  (b) fit in an atomic word
> 
> We actually did (b) for a while, until we noticed that we do it on
> loff_t's etc and relaxed the rules. But maybe we could have a
> "non-atomic" version of READ/WRITE_ONCE() that is used for the
> questionable cases?

That makes a lot of sense to me, and it would allow us to use
compiletime_assert_atomic_type() as we do for the acquire/release accessors.

Will


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 18:06             ` Will Deacon
@ 2019-12-12 18:29               ` Christian Borntraeger
  2019-12-12 18:43               ` Linus Torvalds
  1 sibling, 0 replies; 43+ messages in thread
From: Christian Borntraeger @ 2019-12-12 18:29 UTC (permalink / raw)
  To: Will Deacon, Linus Torvalds
  Cc: Peter Zijlstra, Michael Ellerman, dja, Linux Kernel Mailing List,
	linuxppc-dev, Christophe Leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann



On 12.12.19 19:06, Will Deacon wrote:
> On Thu, Dec 12, 2019 at 09:41:32AM -0800, Linus Torvalds wrote:
>> On Thu, Dec 12, 2019 at 2:46 AM Peter Zijlstra <peterz@infradead.org> wrote:
>>>
>>> +#ifdef GCC_VERSION < 40800
>>
>> Where does that 4.8 version check come from, and why?
>>
>> Yeah, I know, but this really wants a comment. Sadly it looks like gcc
>> bugzilla is down, so
>>
>>    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145
>>
>> currently gives an "Internal Server Error" for me.
>>
>> [ Delete the horrid code we have because of gcc bugs ]
>>
>>> +#else /* GCC_VERSION < 40800 */
>>> +
>>> +#define READ_ONCE_NOCHECK(x)                                           \
>>> +({                                                                     \
>>> +       typeof(x) __x = *(volatile typeof(x))&(x);                      \
>>
>> I think we can/should just do this unconditionally if it helps the issue.
> 
> I'm currently trying to solve the issue by removing volatile from the bitop
> function signatures, but it's grotty because there are quite a few callers
> to fix up. I'm still trying to do it, because removing volatile fields from
> structures is generally a "good thing", but I'd be keen to simplify
> READ_ONCE() as you suggest regardless.

As I am the one who added the foundation of READ_ONCE's ugliness, I am now in
favour of re-simplifying it again. At first I was a bit scared about re-introducing
bugs, but the gcc testsuite has this particular case covered, so hopefully we
should not see the issue with volatile and aggregate types again.

Christian



* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 18:06             ` Will Deacon
  2019-12-12 18:29               ` Christian Borntraeger
@ 2019-12-12 18:43               ` Linus Torvalds
  2019-12-12 19:34                 ` Will Deacon
  1 sibling, 1 reply; 43+ messages in thread
From: Linus Torvalds @ 2019-12-12 18:43 UTC (permalink / raw)
  To: Will Deacon
  Cc: Peter Zijlstra, Michael Ellerman, dja, Linux Kernel Mailing List,
	linuxppc-dev, Christophe Leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

On Thu, Dec 12, 2019 at 10:06 AM Will Deacon <will@kernel.org> wrote:
>
> I'm currently trying to solve the issue by removing volatile from the bitop
> function signatures

I really think that's the wrong thing to do.

The bitop signature really should be "volatile" (and it should be
"const volatile" for test_bit, but I'm not sure anybody cares).

Exactly because it's simply valid to say "hey, my data is volatile,
but do an atomic test of this bit". So it might be volatile in the
caller.

Now, I generally frown on actual volatile data structures - because
the data structure volatility often depends on _context_. The same
data might be volatile in one context (when you do some optimistic
test on it without locking), but 100% stable in another (when you do
have a lock).

So I don't want to see "volatile" on data definitions ("jiffies" being
the one traditional exception), but marking things volatile in code
(because you know you're working with unlocked data) and then passing
them down to various helper functions - including the bitops ones - is
quite traditional and accepted.

In other words, 'volatile" should be treated the same way "const" is
largely treated in C.

A pointer to "const" data doesn't mean that the data is read-only, or
that it cannot be modified _elsewhere_, it means that within this
particular context and this copy of the pointer we promise not to
write to it.

Similarly, a pointer to "volatile" data doesn't mean that the data
might not be stable once you take a lock, for example. So it's ok to
have volatile pointers even if the data declaration itself isn't
volatile - you're stating something about the context, not something
fundamental about the data.

And in the context of the bit operations, "volatile" is the correct thing to do.

                     Linus


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 18:43               ` Linus Torvalds
@ 2019-12-12 19:34                 ` Will Deacon
  2019-12-12 20:21                   ` Peter Zijlstra
  2019-12-12 20:49                   ` Linus Torvalds
  0 siblings, 2 replies; 43+ messages in thread
From: Will Deacon @ 2019-12-12 19:34 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Peter Zijlstra, Michael Ellerman, dja, Linux Kernel Mailing List,
	linuxppc-dev, Christophe Leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

Hi Linus,

On Thu, Dec 12, 2019 at 10:43:05AM -0800, Linus Torvalds wrote:
> On Thu, Dec 12, 2019 at 10:06 AM Will Deacon <will@kernel.org> wrote:
> >
> > I'm currently trying to solve the issue by removing volatile from the bitop
> > function signatures
> 
> I really think that's the wrong thing to do.
> 
> The bitop signature really should be "volatile" (and it should be
> "const volatile" for test_bit, but I'm not sure anybody cares).

Agreed on the "const" part, although I do think the "volatile" aspect has
nasty side-effects despite being a visual indicator that we're eliding
locks. More below.

> Exactly because it's simply valid to say "hey, my data is volatile,
> but do an atomic test of this bit". So it might be volatile in the
> caller.

That's fair, although the cases I've run into so far for the bitops are
usually just that the functions have been wrapped, and volatile could easily
be dropped from the caller as well (e.g. assign_bit(), __node_clear(),
linkmode_test_bit()).

> Now, I generally frown on actual volatile data structures - because
> the data structure volatility often depends on _context_. The same
> data might be volatile in one context (when you do some optimistic
> test on it without locking), but 100% stable in another (when you do
> have a lock).

There are cases in driver code where it looks as though data members are
being declared volatile specifically because of the bitops type signatures
(e.g. 'wrapped' in 'struct mdp5_mdss', 'context_flag' in 'struct
drm_device', 'state' in 'struct s2io_nic'). Yeah, it's bogus, but I think
that having the modifier in the function signature is still leading people
astray.

> So I don't want to see "volatile" on data definitions ("jiffies" being
> the one traditional exception), but marking things volatile in code
> (because you know you're working with unlocked data) and then passing
> them down to various helper functions - including the bitops ones - is
> quite traditional and accepted.
> 
> In other words, 'volatile" should be treated the same way "const" is
> largely treated in C.
> 
> A pointer to "const" data doesn't mean that the data is read-only, or
> that it cannot be modified _elsewhere_, it means that within this
> particular context and this copy of the pointer we promise not to
> write to it.
> 
> Similarly, a pointer to "volatile" data doesn't mean that the data
> might not be stable once you take a lock, for example. So it's ok to
> have volatile pointers even if the data declaration itself isn't
> volatile - you're stating something about the context, not something
> fundamental about the data.
> 
> And in the context of the bit operations, "volatile" is the correct thing
> to do.

The root of my concern in all of this, and what started me looking at it in
the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
for a pointer means that local variables in macros declared using typeof()
suddenly start generating *hideous* code, particularly when pointless stack
spills get stackprotector all excited. Even if we simplify READ_ONCE() back
to its old incantation, the acquire/release accessors will have the exact
same issues on architectures that implement them.

For example, consider this code on arm64:

void ool_store_release(unsigned long *ptr, unsigned long val)
{
	smp_store_release(ptr, val);
}

This compiles to a single instruction plus return, which is what we want:

0000000000000000 <ool_store_release>:
   0:   c89ffc01        stlr    x1, [x0]
   4:   d65f03c0        ret

Now, see what happens if we make the 'ptr' argument volatile:

void ool_store_release(volatile unsigned long *ptr, unsigned long val)
{
	smp_store_release(ptr, val);
}

0000000000000000 <ool_store_release>:
   0:   a9be7bfd        stp     x29, x30, [sp, #-32]!
   4:   90000002        adrp    x2, 0 <__stack_chk_guard>
   8:   91000042        add     x2, x2, #0x0
   c:   910003fd        mov     x29, sp
  10:   f9400043        ldr     x3, [x2]
  14:   f9000fa3        str     x3, [x29, #24]
  18:   d2800003        mov     x3, #0x0                        // #0
  1c:   c89ffc01        stlr    x1, [x0]
  20:   f9400fa1        ldr     x1, [x29, #24]
  24:   f9400040        ldr     x0, [x2]
  28:   ca000020        eor     x0, x1, x0
  2c:   b5000060        cbnz    x0, 38 <ool_store_release+0x38>
  30:   a8c27bfd        ldp     x29, x30, [sp], #32
  34:   d65f03c0        ret
  38:   94000000        bl      0 <__stack_chk_fail>

It's a mess, and fixing READ_ONCE() doesn't help this case, which is why
I was looking at getting rid of volatile where it's not strictly needed.
I'm certainly open to other suggestions, I just haven't managed to think
of anything else.

Will


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 19:34                 ` Will Deacon
@ 2019-12-12 20:21                   ` Peter Zijlstra
  2019-12-12 20:53                     ` Peter Zijlstra
  2019-12-12 20:49                   ` Linus Torvalds
  1 sibling, 1 reply; 43+ messages in thread
From: Peter Zijlstra @ 2019-12-12 20:21 UTC (permalink / raw)
  To: Will Deacon
  Cc: Linus Torvalds, Michael Ellerman, dja, Linux Kernel Mailing List,
	linuxppc-dev, Christophe Leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

On Thu, Dec 12, 2019 at 07:34:01PM +0000, Will Deacon wrote:
> void ool_store_release(volatile unsigned long *ptr, unsigned long val)
> {
> 	smp_store_release(ptr, val);
> }
> 
> 0000000000000000 <ool_store_release>:
>    0:   a9be7bfd        stp     x29, x30, [sp, #-32]!
>    4:   90000002        adrp    x2, 0 <__stack_chk_guard>
>    8:   91000042        add     x2, x2, #0x0
>    c:   910003fd        mov     x29, sp
>   10:   f9400043        ldr     x3, [x2]
>   14:   f9000fa3        str     x3, [x29, #24]
>   18:   d2800003        mov     x3, #0x0                        // #0
>   1c:   c89ffc01        stlr    x1, [x0]
>   20:   f9400fa1        ldr     x1, [x29, #24]
>   24:   f9400040        ldr     x0, [x2]
>   28:   ca000020        eor     x0, x1, x0
>   2c:   b5000060        cbnz    x0, 38 <ool_store_release+0x38>
>   30:   a8c27bfd        ldp     x29, x30, [sp], #32
>   34:   d65f03c0        ret
>   38:   94000000        bl      0 <__stack_chk_fail>
> 
> It's a mess, and fixing READ_ONCE() doesn't help this case, which is why
> I was looking at getting rid of volatile where it's not strictly needed.
> I'm certainly open to other suggestions, I just haven't managed to think
> of anything else.

We could move the kernel to C++ and write:

	std::remove_volatile<typeof(p)>::type __p = (p);

/me runs like hell...


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 19:34                 ` Will Deacon
  2019-12-12 20:21                   ` Peter Zijlstra
@ 2019-12-12 20:49                   ` Linus Torvalds
  2019-12-13 13:17                     ` Arnd Bergmann
                                       ` (2 more replies)
  1 sibling, 3 replies; 43+ messages in thread
From: Linus Torvalds @ 2019-12-12 20:49 UTC (permalink / raw)
  To: Will Deacon
  Cc: Peter Zijlstra, Michael Ellerman, dja, Linux Kernel Mailing List,
	linuxppc-dev, Christophe Leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

On Thu, Dec 12, 2019 at 11:34 AM Will Deacon <will@kernel.org> wrote:
>
> The root of my concern in all of this, and what started me looking at it in
> the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
> for a pointer means that local variables in macros declared using typeof()
> suddenly start generating *hideous* code, particularly when pointless stack
> spills get stackprotector all excited.

Yeah, removing volatile can be a bit annoying.

For the particular case of the bitops, though, it's not an issue.
Since you know the type there, you can just cast it.

And if we had the rule that READ_ONCE() was an arithmetic type, you could do

    typeof(0+(*p)) __var;

since you might as well get the integer promotion anyway (on the
non-volatile result).

But that doesn't work with structures or unions, of course.

I'm not entirely sure we have READ_ONCE() with a struct. I do know we
have it with 64-bit entities on 32-bit machines, but that's ok with
the "0+" trick.

               Linus


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 20:21                   ` Peter Zijlstra
@ 2019-12-12 20:53                     ` Peter Zijlstra
  2019-12-13 10:47                       ` Luc Van Oostenryck
  0 siblings, 1 reply; 43+ messages in thread
From: Peter Zijlstra @ 2019-12-12 20:53 UTC (permalink / raw)
  To: Will Deacon
  Cc: Linus Torvalds, Michael Ellerman, dja, Linux Kernel Mailing List,
	linuxppc-dev, Christophe Leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

On Thu, Dec 12, 2019 at 09:21:57PM +0100, Peter Zijlstra wrote:
> On Thu, Dec 12, 2019 at 07:34:01PM +0000, Will Deacon wrote:
> > void ool_store_release(volatile unsigned long *ptr, unsigned long val)
> > {
> > 	smp_store_release(ptr, val);
> > }
> > 
> > 0000000000000000 <ool_store_release>:
> >    0:   a9be7bfd        stp     x29, x30, [sp, #-32]!
> >    4:   90000002        adrp    x2, 0 <__stack_chk_guard>
> >    8:   91000042        add     x2, x2, #0x0
> >    c:   910003fd        mov     x29, sp
> >   10:   f9400043        ldr     x3, [x2]
> >   14:   f9000fa3        str     x3, [x29, #24]
> >   18:   d2800003        mov     x3, #0x0                        // #0
> >   1c:   c89ffc01        stlr    x1, [x0]
> >   20:   f9400fa1        ldr     x1, [x29, #24]
> >   24:   f9400040        ldr     x0, [x2]
> >   28:   ca000020        eor     x0, x1, x0
> >   2c:   b5000060        cbnz    x0, 38 <ool_store_release+0x38>
> >   30:   a8c27bfd        ldp     x29, x30, [sp], #32
> >   34:   d65f03c0        ret
> >   38:   94000000        bl      0 <__stack_chk_fail>
> > 
> > It's a mess, and fixing READ_ONCE() doesn't help this case, which is why
> > I was looking at getting rid of volatile where it's not strictly needed.
> > I'm certainly open to other suggestions, I just haven't managed to think
> > of anything else.
> 
> We could move the kernel to C++ and write:
> 
> 	std::remove_volatile<typeof(p)>::type __p = (p);
> 
> /me runs like hell...

Also, the GCC __auto_type thing strips _Atomic and const qualifiers but
for some obscure raisin forgets to strip volatile :/

  https://gcc.gnu.org/ml/gcc-patches/2013-11/msg01378.html

Now, looking at the current GCC source:

  https://github.com/gcc-mirror/gcc/blob/97d7270f894395e513667a031a0c309d1819d05e/gcc/c/c-parser.c#L3707

it seems that __typeof__() is supposed to strip all qualifiers from
_Atomic types. That led me to try:

	typeof(_Atomic typeof(p)) __p = (p);

But alas, I still get the same junk you got for ool_store_release() :/


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 20:53                     ` Peter Zijlstra
@ 2019-12-13 10:47                       ` Luc Van Oostenryck
  2019-12-13 12:56                         ` Peter Zijlstra
  0 siblings, 1 reply; 43+ messages in thread
From: Luc Van Oostenryck @ 2019-12-13 10:47 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Linus Torvalds, Michael Ellerman, dja,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool, Arnd Bergmann,
	Christian Borntraeger

On Thu, Dec 12, 2019 at 09:53:38PM +0100, Peter Zijlstra wrote:
> Now, looking at the current GCC source:
> 
>   https://github.com/gcc-mirror/gcc/blob/97d7270f894395e513667a031a0c309d1819d05e/gcc/c/c-parser.c#L3707
> 
> it seems that __typeof__() is supposed to strip all qualifiers from
> _Atomic types. That led me to try:
> 
> 	typeof(_Atomic typeof(p)) __p = (p);
> 
> But alas, I still get the same junk you got for ool_store_release() :/

I was checking this to see if Sparse was ready to support this.
I was a bit surprised because at first sigth GCC was doing as
it claims (typeof striping const & volatile on _Atomic types)
but your exampe wasn't working. But it's working if an
intermediate var is used:
	_Atomic typeof(p) tmp;
	typeof(tmp) __p = (p);
or, uglier but probably more practical:
	typeof(({_Atomic typeof(p) tmp; })) __p = (p);

Go figure!

OTOH, at least on GCC 8.3, it seems to always do the same with
volatiles as it does with consts.

-- Luc Van Oostenryck


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 10:46         ` Peter Zijlstra
  2019-12-12 17:04           ` Will Deacon
  2019-12-12 17:41           ` Linus Torvalds
@ 2019-12-13 12:07           ` Michael Ellerman
  2019-12-13 13:53             ` Segher Boessenkool
  2 siblings, 1 reply; 43+ messages in thread
From: Michael Ellerman @ 2019-12-13 12:07 UTC (permalink / raw)
  To: Peter Zijlstra, Will Deacon
  Cc: Linus Torvalds, dja, linux-kernel, linuxppc-dev,
	christophe.leroy, linux-arch, Mark Rutland, Segher Boessenkool,
	Arnd Bergmann, Christian Borntraeger

Peter Zijlstra <peterz@infradead.org> writes:
> On Thu, Dec 12, 2019 at 10:07:56AM +0000, Will Deacon wrote:
>
>> > So your proposed change _should_ be fine. Will, I'm assuming you never
>> > saw this on your ARGH64 builds when you did this code ?
>> 
>> I did see it, but (a) looking at the code out-of-line makes it look a lot
>> worse than it actually is (so the ext4 example is really helpful -- thanks
>> Michael!) and (b) I chalked it up to a crappy compiler.
>> 
>> However, see this comment from Arnd on my READ_ONCE series from the other
>> day:
>> 
>> https://lore.kernel.org/lkml/CAK8P3a0f=WvSQSBQ4t0FmEkcFE_mC3oARxaeTviTSkSa-D2qhg@mail.gmail.com
>> 
>> In which case, I'm thinking that we should be doing better in READ_ONCE()
>> for non-buggy compilers which would also keep the KCSAN folks happy for this
>> code (and would help with [1] too).
>
> So something like this then? Although I suppose that should be moved
> into compiler-gcc.h and then guarded by #ifndef READ_ONCE or so.

I tried this:

> @@ -295,6 +296,23 @@ void __write_once_size(volatile void *p, void *res, int size)
>   */
>  #define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0)
>  
> +#else /* GCC_VERSION < 40800 */
> +
> +#define READ_ONCE_NOCHECK(x)						\
> +({									\
> +	typeof(x) __x = *(volatile typeof(x))&(x);			\

Didn't compile, needed:

	typeof(x) __x = *(volatile typeof(&x))&(x);			\


> +	smp_read_barrier_depends();					\
> +	__x;								\
> +})


And that works for me. No extra stack check stuff.

I guess the question is whether that version of READ_ONCE() implements the
read-once semantics. Do we have a good way to test that?

The only differences are because of the early return in the generic
test_and_set_bit_lock():

   1 <ext4_resize_begin_generic>:                            1 <ext4_resize_begin_ppc>:
   2         addis   r2,r12,281                              2         addis   r2,r12,281
   3         addi    r2,r2,-22368                            3         addi    r2,r2,-22064
   4         mflr    r0                                      4         mflr    r0
   5         bl      <_mcount>                               5         bl      <_mcount>
   6         mflr    r0                                      6         mflr    r0
   7         std     r31,-8(r1)                              7         std     r31,-8(r1)
   8         std     r30,-16(r1)                             8         std     r30,-16(r1)
   9         mr      r31,r3                                  9         mr      r31,r3
  10         li      r3,24                                  10         li      r3,24
  11         std     r0,16(r1)                              11         std     r0,16(r1)
  12         stdu    r1,-128(r1)                            12         stdu    r1,-112(r1)
  13         ld      r30,920(r31)                           13         ld      r30,920(r31)
  14         bl      <capable+0x8>                          14         bl      <capable+0x8>
  15         nop                                            15         nop
  16         cmpdi   cr7,r3,0                               16         cmpdi   cr7,r3,0
  17         beq     cr7,<ext4_resize_begin_generic+0xf0>   17         beq     cr7,<ext4_resize_begin_ppc+0xc0>
  18         ld      r9,920(r31)                            18         ld      r9,920(r31)
  19         ld      r10,96(r30)                            19         ld      r10,96(r30)
  20         lwz     r7,84(r30)                             20         lwz     r7,84(r30)
  21         ld      r8,104(r9)                             21         ld      r8,104(r9)
  22         ld      r10,24(r10)                            22         ld      r10,24(r10)
  23         lwz     r8,20(r8)                              23         lwz     r8,20(r8)
  24         srd     r10,r10,r7                             24         srd     r10,r10,r7
  25         cmpd    cr7,r10,r8                             25         cmpd    cr7,r10,r8
  26         bne     cr7,<ext4_resize_begin_generic+0x128>  26         bne     cr7,<ext4_resize_begin_ppc+0xf8>
  27         lhz     r10,160(r9)                            27         lhz     r10,160(r9)
  28         andi.   r10,r10,2                              28         andi.   r10,r10,2
  29         bne     <ext4_resize_begin_generic+0x100>
  30         ld      r10,560(r9)
  31         std     r10,104(r1)
  32         ld      r10,104(r1)
  33         andi.   r10,r10,1
  34         bne     <ext4_resize_begin_generic+0xd0>       29         bne     <ext4_resize_begin_ppc+0xd0>
  35         addi    r7,r9,560                              30         addi    r9,r9,560
  36         li      r8,1                                   31         li      r10,1
  37         ldarx   r10,0,r7                               32         ldarx   r3,0,r9,1
  38         or      r6,r8,r10                              33         or      r8,r3,r10
  39         stdcx.  r6,0,r7                                34         stdcx.  r8,0,r9
  40         bne-    <ext4_resize_begin_generic+0x8c>       35         bne-    <ext4_resize_begin_ppc+0x78>
  41         isync                                          36         isync
                                                            37         clrldi  r3,r3,63
  42         andi.   r9,r10,1                               38         addi    r3,r3,-1
  43         li      r3,0                                   39         rlwinm  r3,r3,0,27,27
  44         bne     <ext4_resize_begin_generic+0xd0>       40         addi    r3,r3,-16
  45         addi    r1,r1,128                              41         addi    r1,r1,112
  46         ld      r0,16(r1)                              42         ld      r0,16(r1)
  47         ld      r30,-16(r1)                            43         ld      r30,-16(r1)
  48         ld      r31,-8(r1)                             44         ld      r31,-8(r1)
  49         mtlr    r0                                     45         mtlr    r0
  50         blr                                            46         blr
  51         nop                                            47         nop
  52         nop                                            48         nop
  53         nop                                            49         nop
  54         addi    r1,r1,128
  55         li      r3,-16
  56         ld      r0,16(r1)
  57         ld      r30,-16(r1)
  58         ld      r31,-8(r1)
  59         mtlr    r0
  60         blr
  61         nop
  62         li      r3,-1                                  50         li      r3,-1
  63         b       <ext4_resize_begin_generic+0xac>       51         b       <ext4_resize_begin_ppc+0x9c>
  64         nop                                            52         nop
  65         nop                                            53         nop
  66         addis   r6,r2,-117                             54         addis   r6,r2,-117
  67         addis   r4,r2,-140                             55         addis   r4,r2,-140
  68         mr      r3,r31                                 56         mr      r3,r31
  69         li      r5,146                                 57         li      r5,83
  70         addi    r6,r6,-32736                           58         addi    r6,r6,-32736
  71         addi    r4,r4,3088                             59         addi    r4,r4,3064
  72         bl      <__ext4_warning+0x8>                   60         bl      <__ext4_warning+0x8>
  73         nop                                            61         nop
  74         li      r3,-1                                  62         li      r3,-1
  75         b       <ext4_resize_begin_generic+0xac>       63         b       <ext4_resize_begin_ppc+0x9c>
  76         ld      r9,96(r9)                              64         ld      r9,96(r9)
  77         addis   r6,r2,-118                             65         addis   r6,r2,-118
  78         addis   r4,r2,-140                             66         addis   r4,r2,-140
  79         mr      r3,r31                                 67         mr      r3,r31
  80         li      r5,136                                 68         li      r5,73
  81         addi    r6,r6,32752                            69         addi    r6,r6,32752
  82         addi    r4,r4,3088                             70         addi    r4,r4,3064
  83         ld      r7,24(r9)                              71         ld      r7,24(r9)
  84         bl      <__ext4_warning+0x8>                   72         bl      <__ext4_warning+0x8>
  85         nop                                            73         nop
  86         li      r3,-1                                  74         li      r3,-1
  87         b       <ext4_resize_begin_generic+0xac>       75         b       <ext4_resize_begin_ppc+0x9c>


cheers

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-13 10:47                       ` Luc Van Oostenryck
@ 2019-12-13 12:56                         ` Peter Zijlstra
  2019-12-13 14:28                           ` Luc Van Oostenryck
  0 siblings, 1 reply; 43+ messages in thread
From: Peter Zijlstra @ 2019-12-13 12:56 UTC (permalink / raw)
  To: Luc Van Oostenryck
  Cc: Will Deacon, Linus Torvalds, Michael Ellerman, dja,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool, Arnd Bergmann,
	Christian Borntraeger

On Fri, Dec 13, 2019 at 11:47:06AM +0100, Luc Van Oostenryck wrote:
> On Thu, Dec 12, 2019 at 09:53:38PM +0100, Peter Zijlstra wrote:
> > Now, looking at the current GCC source:
> > 
> >   https://github.com/gcc-mirror/gcc/blob/97d7270f894395e513667a031a0c309d1819d05e/gcc/c/c-parser.c#L3707
> > 
> > it seems that __typeof__() is supposed to strip all qualifiers from
> > _Atomic types. That lead me to try:
> > 
> > 	typeof(_Atomic typeof(p)) __p = (p);
> > 
> > But alas, I still get the same junk you got for ool_store_release() :/
> 
> I was checking this to see if Sparse was ready to support this.
> I was a bit surprised because at first sight GCC was doing as
> it claims (typeof stripping const & volatile on _Atomic types)
> but your example wasn't working. But it's working if an
> intermediate var is used:
> 	_Atomic typeof(p) tmp;
> 	typeof(tmp) __p = (p);
> or, uglier but probably more practical:
> 	typeof(({_Atomic typeof(p) tmp; })) __p = (p);
> 
> Go figure!

Excellent! I had to change it to something like:

#define unqual_typeof(x)    typeof(({_Atomic typeof(x) ___x __maybe_unused; ___x; }))

but that does indeed work!

Now I suppose we should wrap that in a symbol that indicates our
compiler does indeed support _Atomic, otherwise things will come apart.

That is, my gcc-4.6 doesn't seem to have it, while gcc-4.8 does, which
is exactly the range that needs the daft READ_ONCE() construct, how
convenient :/

Something a little like this perhaps?

---

diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 7d9cc5ec4971..c389af602da8 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -75,9 +75,9 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx,
 
 #define __smp_store_release(p, v)					\
 do {									\
-	typeof(p) __p = (p);						\
-	union { typeof(*p) __val; char __c[1]; } __u =			\
-		{ .__val = (__force typeof(*p)) (v) };			\
+	unqual_typeof(p) __p = (p);					\
+	union { unqual_typeof(*p) __val; char __c[1]; } __u =	\
+		{ .__val = (__force unqual_typeof(*p)) (v) };	\
 	compiletime_assert_atomic_type(*p);				\
 	kasan_check_write(__p, sizeof(*p));				\
 	switch (sizeof(*p)) {						\
@@ -110,8 +110,8 @@ do {									\
 
 #define __smp_load_acquire(p)						\
 ({									\
-	union { typeof(*p) __val; char __c[1]; } __u;			\
-	typeof(p) __p = (p);						\
+	union { unqual_typeof(*p) __val; char __c[1]; } __u;		\
+	unqual_typeof(p) __p = (p);					\
 	compiletime_assert_atomic_type(*p);				\
 	kasan_check_read(__p, sizeof(*p));				\
 	switch (sizeof(*p)) {						\
@@ -141,8 +141,8 @@ do {									\
 
 #define smp_cond_load_relaxed(ptr, cond_expr)				\
 ({									\
-	typeof(ptr) __PTR = (ptr);					\
-	typeof(*ptr) VAL;						\
+	unqual_typeof(ptr) __PTR = (ptr);				\
+	unqual_typeof(*ptr) VAL;					\
 	for (;;) {							\
 		VAL = READ_ONCE(*__PTR);				\
 		if (cond_expr)						\
@@ -154,8 +154,8 @@ do {									\
 
 #define smp_cond_load_acquire(ptr, cond_expr)				\
 ({									\
-	typeof(ptr) __PTR = (ptr);					\
-	typeof(*ptr) VAL;						\
+	unqual_typeof(ptr) __PTR = (ptr);				\
+	unqual_typeof(*ptr) VAL;					\
 	for (;;) {							\
 		VAL = smp_load_acquire(__PTR);				\
 		if (cond_expr)						\
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 85b28eb80b11..dd5bb055f5ab 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -228,8 +228,8 @@ do {									\
  */
 #ifndef smp_cond_load_relaxed
 #define smp_cond_load_relaxed(ptr, cond_expr) ({		\
-	typeof(ptr) __PTR = (ptr);				\
-	typeof(*ptr) VAL;					\
+	unqual_typeof(ptr) __PTR = (ptr);			\
+	unqual_typeof(*ptr) VAL;				\
 	for (;;) {						\
 		VAL = READ_ONCE(*__PTR);			\
 		if (cond_expr)					\
@@ -250,7 +250,7 @@ do {									\
  */
 #ifndef smp_cond_load_acquire
 #define smp_cond_load_acquire(ptr, cond_expr) ({		\
-	typeof(*ptr) _val;					\
+	unqual_typeof(*ptr) _val;				\
 	_val = smp_cond_load_relaxed(ptr, cond_expr);		\
 	smp_acquire__after_ctrl_dep();				\
 	_val;							\
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index 0eb2a1cc411d..15fd7ea3882a 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -179,3 +179,10 @@
 #endif
 
 #define __no_fgcse __attribute__((optimize("-fno-gcse")))
+
+#if GCC_VERSION < 40800
+/*
+ * GCC-4.6 doesn't support _Atomic, which is required to strip qualifiers.
+ */
+#define unqual_typeof(x)	typeof(x)
+#endif
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index ad8c76144a3c..9736993f2ba1 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -279,7 +279,7 @@ void __write_once_size(volatile void *p, void *res, int size)
 
 #define __READ_ONCE(x, check)						\
 ({									\
-	union { typeof(x) __val; char __c[1]; } __u;			\
+	union { unqual_typeof(x) __val; char __c[1]; } __u;		\
 	if (check)							\
 		__read_once_size(&(x), __u.__c, sizeof(x));		\
 	else								\
@@ -302,12 +302,12 @@ unsigned long read_word_at_a_time(const void *addr)
 	return *(unsigned long *)addr;
 }
 
-#define WRITE_ONCE(x, val) \
-({							\
-	union { typeof(x) __val; char __c[1]; } __u =	\
-		{ .__val = (__force typeof(x)) (val) }; \
-	__write_once_size(&(x), __u.__c, sizeof(x));	\
-	__u.__val;					\
+#define WRITE_ONCE(x, val)					\
+({								\
+	union { unqual_typeof(x) __val; char __c[1]; } __u =	\
+		{ .__val = (__force unqual_typeof(x)) (val) };	\
+	__write_once_size(&(x), __u.__c, sizeof(x));		\
+	__u.__val;						\
 })
 
 #include <linux/kcsan.h>
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 72393a8c1a6c..fe8012c54251 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -243,4 +243,11 @@ struct ftrace_likely_data {
 #define __diag_error(compiler, version, option, comment) \
 	__diag_ ## compiler(version, error, option)
 
+#ifndef unqual_typeof
+/*
+ * GCC __typeof__() strips all qualifiers from _Atomic types.
+ */
+#define unqual_typeof(x)	typeof(({_Atomic typeof(x) ___x __maybe_unused; ___x; }))
+#endif
+
 #endif /* __LINUX_COMPILER_TYPES_H */


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 20:49                   ` Linus Torvalds
@ 2019-12-13 13:17                     ` Arnd Bergmann
  2019-12-13 21:32                       ` Arnd Bergmann
  2019-12-16 10:28                       ` Will Deacon
  2019-12-17 17:07                     ` Will Deacon
  2019-12-18 10:22                     ` Christian Borntraeger
  2 siblings, 2 replies; 43+ messages in thread
From: Arnd Bergmann @ 2019-12-13 13:17 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Will Deacon, Peter Zijlstra, Michael Ellerman, Daniel Axtens,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool,
	Christian Borntraeger

On Thu, Dec 12, 2019 at 9:50 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Thu, Dec 12, 2019 at 11:34 AM Will Deacon <will@kernel.org> wrote:
> > The root of my concern in all of this, and what started me looking at it in
> > the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
> > for a pointer means that local variables in macros declared using typeof()
> > suddenly start generating *hideous* code, particularly when pointless stack
> > spills get stackprotector all excited.
>
> Yeah, removing volatile can be a bit annoying.
>
> For the particular case of the bitops, though, it's not an issue.
> Since you know the type there, you can just cast it.
>
> And if we had the rule that READ_ONCE() was an arithmetic type, you could do
>
>     typeof(0+(*p)) __var;
>
> since you might as well get the integer promotion anyway (on the
> non-volatile result).
>
> But that doesn't work with structures or unions, of course.
>
> I'm not entirely sure we have READ_ONCE() with a struct. I do know we
> have it with 64-bit entities on 32-bit machines, but that's ok with
> the "0+" trick.

I'll have my randconfig builder look for instances, so far I found one,
see below. My feeling is that it would be better to enforce at least
the size being a 1/2/4/8, to avoid cases where someone thinks
the access is atomic, but it falls back on a memcpy.

      Arnd

diff --git a/drivers/xen/time.c b/drivers/xen/time.c
index 0968859c29d0..adb492c0aa34 100644
--- a/drivers/xen/time.c
+++ b/drivers/xen/time.c
@@ -64,7 +64,7 @@ static void xen_get_runstate_snapshot_cpu_delta(
        do {
                state_time = get64(&state->state_entry_time);
                rmb();  /* Hypervisor might update data. */
-               *res = READ_ONCE(*state);
+               memcpy(res, state, sizeof(*res));
                rmb();  /* Hypervisor might update data. */
        } while (get64(&state->state_entry_time) != state_time ||
                 (state_time & XEN_RUNSTATE_UPDATE));
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 5e88e7e33abe..f4ae360efdba 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -179,6 +179,8 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,

 #include <uapi/linux/types.h>

+extern void __broken_access_once(void *, const void *, unsigned long);
+
 #define __READ_ONCE_SIZE                                               \
 ({                                                                     \
        switch (size) {                                                 \
@@ -187,9 +189,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
        case 4: *(__u32 *)res = *(volatile __u32 *)p; break;            \
        case 8: *(__u64 *)res = *(volatile __u64 *)p; break;            \
        default:                                                        \
-               barrier();                                              \
-               __builtin_memcpy((void *)res, (const void *)p, size);   \
-               barrier();                                              \
+               __broken_access_once((void *)res, (const void *)p, size); \
        }                                                               \
 })

@@ -225,9 +225,7 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
        case 4: *(volatile __u32 *)p = *(__u32 *)res; break;
        case 8: *(volatile __u64 *)p = *(__u64 *)res; break;
        default:
-               barrier();
-               __builtin_memcpy((void *)p, (const void *)res, size);
-               barrier();
+               __broken_access_once((void *)p, (const void *)res, size);
        }
 }


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-13 12:07           ` Michael Ellerman
@ 2019-12-13 13:53             ` Segher Boessenkool
  2019-12-13 21:06               ` Michael Ellerman
  0 siblings, 1 reply; 43+ messages in thread
From: Segher Boessenkool @ 2019-12-13 13:53 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Peter Zijlstra, Will Deacon, Linus Torvalds, dja, linux-kernel,
	linuxppc-dev, christophe.leroy, linux-arch, Mark Rutland,
	Arnd Bergmann, Christian Borntraeger

Hi!

On Fri, Dec 13, 2019 at 11:07:55PM +1100, Michael Ellerman wrote:
> I tried this:
> 
> > @@ -295,6 +296,23 @@ void __write_once_size(volatile void *p, void *res, int size)
> >   */
> >  #define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0)
> >  
> > +#else /* GCC_VERSION < 40800 */
> > +
> > +#define READ_ONCE_NOCHECK(x)						\
> > +({									\
> > +	typeof(x) __x = *(volatile typeof(x))&(x);			\
> 
> Didn't compile, needed:
> 
> 	typeof(x) __x = *(volatile typeof(&x))&(x);			\
> 
> 
> > +	smp_read_barrier_depends();					\
> > +	__x;
> > +})
> 
> 
> And that works for me. No extra stack check stuff.
> 
> I guess the question is does that version of READ_ONCE() implement the
> read once semantics. Do we have a good way to test that?
> 
> The only differences are because of the early return in the generic
> test_and_set_bit_lock():

No, there is another difference:

>   30         ld      r10,560(r9)
>   31         std     r10,104(r1)
>   32         ld      r10,104(r1)
>   33         andi.   r10,r10,1
>   34         bne     <ext4_resize_begin_generic+0xd0>       29         bne     <ext4_resize_begin_ppc+0xd0>

The stack var is volatile, so it is read back immediately after writing
it, here.  This is a bad idea for performance, in general.


Segher


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-13 12:56                         ` Peter Zijlstra
@ 2019-12-13 14:28                           ` Luc Van Oostenryck
  0 siblings, 0 replies; 43+ messages in thread
From: Luc Van Oostenryck @ 2019-12-13 14:28 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Linus Torvalds, Michael Ellerman, dja,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool, Arnd Bergmann,
	Christian Borntraeger

On Fri, Dec 13, 2019 at 01:56:18PM +0100, Peter Zijlstra wrote:
> 
> Excellent! I had to change it to something like:
> 
> #define unqual_typeof(x)    typeof(({_Atomic typeof(x) ___x __maybe_unused; ___x; }))
> 
> but that does indeed work!
> 
> Now I suppose we should wrap that in a symbol that indicates our
> compiler does indeed support _Atomic, otherwise things will come apart.
> 
> That is, my gcc-4.6 doesn't seem to have it, while gcc-4.8 does, which
> is exactly the range that needs the daft READ_ONCE() construct, how
> convenient :/
> 
> Something a little like this perhaps?

Yes, this looks good to me.
Just a small nit here below.

> ---
> 
> diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
> index 7d9cc5ec4971..c389af602da8 100644
> --- a/arch/arm64/include/asm/barrier.h
> +++ b/arch/arm64/include/asm/barrier.h
> @@ -75,9 +75,9 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx,
>  
>  #define __smp_store_release(p, v)					\
>  do {									\
> -	typeof(p) __p = (p);						\
> -	union { typeof(*p) __val; char __c[1]; } __u =			\
> -		{ .__val = (__force typeof(*p)) (v) };			\
> +	unqual_typeof(p) __p = (p);					\
> +	union { unqual_typeof(*p) __val; char __c[1]; } __u =	\
> +		{ .__val = (__force unqual_typeof(*p)) (v) };	\

The two trailing backslashes are now off by one tab.

-- Luc 


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-13 13:53             ` Segher Boessenkool
@ 2019-12-13 21:06               ` Michael Ellerman
  0 siblings, 0 replies; 43+ messages in thread
From: Michael Ellerman @ 2019-12-13 21:06 UTC (permalink / raw)
  To: Segher Boessenkool
  Cc: Peter Zijlstra, Will Deacon, Linus Torvalds, dja, linux-kernel,
	linuxppc-dev, christophe.leroy, linux-arch, Mark Rutland,
	Arnd Bergmann, Christian Borntraeger

Segher Boessenkool <segher@kernel.crashing.org> writes:
> Hi!
>
> On Fri, Dec 13, 2019 at 11:07:55PM +1100, Michael Ellerman wrote:
>> I tried this:
>> 
>> > @@ -295,6 +296,23 @@ void __write_once_size(volatile void *p, void *res, int size)
>> >   */
>> >  #define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0)
>> >  
>> > +#else /* GCC_VERSION < 40800 */
>> > +
>> > +#define READ_ONCE_NOCHECK(x)						\
>> > +({									\
>> > +	typeof(x) __x = *(volatile typeof(x))&(x);			\
>> 
>> Didn't compile, needed:
>> 
>> 	typeof(x) __x = *(volatile typeof(&x))&(x);			\
>> 
>> 
>> > +	smp_read_barrier_depends();					\
>> > +	__x;
>> > +})
>> 
>> 
>> And that works for me. No extra stack check stuff.
>> 
>> I guess the question is does that version of READ_ONCE() implement the
>> read once semantics. Do we have a good way to test that?
>> 
>> The only differences are because of the early return in the generic
>> test_and_set_bit_lock():
>
> No, there is another difference:
>
>>   30         ld      r10,560(r9)
>>   31         std     r10,104(r1)
>>   32         ld      r10,104(r1)
>>   33         andi.   r10,r10,1
>>   34         bne     <ext4_resize_begin_generic+0xd0>       29         bne     <ext4_resize_begin_ppc+0xd0>
>
> The stack var is volatile, so it is read back immediately after writing
> it, here.  This is a bad idea for performance, in general.

Argh, yuck. Thanks, I shouldn't try to read asm listings at 11pm.

So that just confirms what Will was saying further up the thread about
the volatile pointer, rather than READ_ONCE() per se.

cheers


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-13 13:17                     ` Arnd Bergmann
@ 2019-12-13 21:32                       ` Arnd Bergmann
  2019-12-13 22:01                         ` Linus Torvalds
  2019-12-16 10:28                       ` Will Deacon
  1 sibling, 1 reply; 43+ messages in thread
From: Arnd Bergmann @ 2019-12-13 21:32 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Will Deacon, Peter Zijlstra, Michael Ellerman, Daniel Axtens,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool,
	Christian Borntraeger

On Fri, Dec 13, 2019 at 2:17 PM Arnd Bergmann <arnd@arndb.de> wrote:
>
> On Thu, Dec 12, 2019 at 9:50 PM Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
> I'll have my randconfig builder look for instances, so far I found one,
> see below. My feeling is that it would be better to enforce at least
> the size being a 1/2/4/8, to avoid cases where someone thinks
> the access is atomic, but it falls back on a memcpy.
>
>       Arnd
>
> diff --git a/drivers/xen/time.c b/drivers/xen/time.c
> index 0968859c29d0..adb492c0aa34 100644
> --- a/drivers/xen/time.c
> +++ b/drivers/xen/time.c
> @@ -64,7 +64,7 @@ static void xen_get_runstate_snapshot_cpu_delta(
>         do {
>                 state_time = get64(&state->state_entry_time);
>                 rmb();  /* Hypervisor might update data. */
> -               *res = READ_ONCE(*state);
> +               memcpy(res, state, sizeof(*res));
>                 rmb();  /* Hypervisor might update data. */
>         } while (get64(&state->state_entry_time) != state_time ||
>                  (state_time & XEN_RUNSTATE_UPDATE));


A few hundred randconfig (x86, arm32 and arm64) builds later I
still only found one other instance:

diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index eddae4688862..1c1f33447e96 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -304,7 +304,9 @@ static inline struct xdp_desc *xskq_validate_desc(struct xsk_queue *q,
                struct xdp_rxtx_ring *ring = (struct xdp_rxtx_ring *)q->ring;
                unsigned int idx = q->cons_tail & q->ring_mask;

-               *desc = READ_ONCE(ring->desc[idx]);
+               barrier();
+               memcpy(desc, &ring->desc[idx], sizeof(*desc));
+               barrier();
                if (xskq_is_valid_desc(q, desc, umem))
                        return desc;

       Arnd


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-13 21:32                       ` Arnd Bergmann
@ 2019-12-13 22:01                         ` Linus Torvalds
  0 siblings, 0 replies; 43+ messages in thread
From: Linus Torvalds @ 2019-12-13 22:01 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Will Deacon, Peter Zijlstra, Michael Ellerman, Daniel Axtens,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool,
	Christian Borntraeger

On Fri, Dec 13, 2019 at 1:33 PM Arnd Bergmann <arnd@arndb.de> wrote:
>
> A few hundred randconfig (x86, arm32 and arm64) builds later I
> still only found one other instance:

Just send me the pull request to READ_ONCE() and WRITE_ONCE() be
arithmetic types, and your two trivial fixes, and let's get this over
with.

With that, you can remove the 'volatile' with my simple
'typeof(0+*(p))' trick, and we're all good, and we don't need to worry
about compiler versions either.

I'm willing to take that after the merge window as a "sanity fix".

             Linus


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-13 13:17                     ` Arnd Bergmann
  2019-12-13 21:32                       ` Arnd Bergmann
@ 2019-12-16 10:28                       ` Will Deacon
  2019-12-16 11:47                         ` Peter Zijlstra
  2019-12-16 12:06                         ` Arnd Bergmann
  1 sibling, 2 replies; 43+ messages in thread
From: Will Deacon @ 2019-12-16 10:28 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Linus Torvalds, Peter Zijlstra, Michael Ellerman, Daniel Axtens,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool,
	Christian Borntraeger

On Fri, Dec 13, 2019 at 02:17:08PM +0100, Arnd Bergmann wrote:
> On Thu, Dec 12, 2019 at 9:50 PM Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
> > On Thu, Dec 12, 2019 at 11:34 AM Will Deacon <will@kernel.org> wrote:
> > > The root of my concern in all of this, and what started me looking at it in
> > > the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
> > > for a pointer means that local variables in macros declared using typeof()
> > > suddenly start generating *hideous* code, particularly when pointless stack
> > > spills get stackprotector all excited.
> >
> > Yeah, removing volatile can be a bit annoying.
> >
> > For the particular case of the bitops, though, it's not an issue.
> > Since you know the type there, you can just cast it.
> >
> > And if we had the rule that READ_ONCE() was an arithmetic type, you could do
> >
> >     typeof(0+(*p)) __var;
> >
> > since you might as well get the integer promotion anyway (on the
> > non-volatile result).
> >
> > But that doesn't work with structures or unions, of course.
> >
> > I'm not entirely sure we have READ_ONCE() with a struct. I do know we
> > have it with 64-bit entities on 32-bit machines, but that's ok with
> > the "0+" trick.
> 
> I'll have my randconfig builder look for instances, so far I found one,
> see below. My feeling is that it would be better to enforce at least
> the size being a 1/2/4/8, to avoid cases where someone thinks
> the access is atomic, but it falls back on a memcpy.

I've been using something similar built on compiletime_assert_atomic_type()
and I spotted another instance in the xdp code (xskq_validate_desc()) which
tries to READ_ONCE() on a 128-bit descriptor, although a /very/ quick read
of the code suggests that this probably can't be concurrently modified if
the ring indexes are synchronised properly.

However, enabling this for 32-bit ARM is total carnage; as Linus mentioned,
a whole bunch of code appears to be relying on atomic 64-bit access of
READ_ONCE(); the perf ring buffer, io_uring, the scheduler, pm_runtime,
cpuidle, ... :(

Unfortunately, at least some of these *do* look like bugs, but I can't see
how we can fix them, not least because the first two are user ABI afaict. It
may also be that in practice we get 2x32-bit stores, and that works out fine
when storing a 32-bit virtual address. I'm not sure what (if anything) the
compiler guarantees in these cases.

Will


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-16 10:28                       ` Will Deacon
@ 2019-12-16 11:47                         ` Peter Zijlstra
  2019-12-16 12:06                         ` Arnd Bergmann
  1 sibling, 0 replies; 43+ messages in thread
From: Peter Zijlstra @ 2019-12-16 11:47 UTC (permalink / raw)
  To: Will Deacon
  Cc: Arnd Bergmann, Linus Torvalds, Michael Ellerman, Daniel Axtens,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool,
	Christian Borntraeger

On Mon, Dec 16, 2019 at 10:28:06AM +0000, Will Deacon wrote:
> However, enabling this for 32-bit ARM is total carnage; as Linus mentioned,
> a whole bunch of code appears to be relying on atomic 64-bit access of
> READ_ONCE(); the perf ring buffer, io_uring, the scheduler, pm_runtime,
> cpuidle, ... :(
> 
> Unfortunately, at least some of these *do* look like bugs, but I can't see
> how we can fix them, not least because the first two are user ABI afaict. It
> may also be that in practice we get 2x32-bit stores, and that works out fine
> when storing a 32-bit virtual address. I'm not sure what (if anything) the
> compiler guarantees in these cases.

Perf does indeed have a (known) problem here for the head/tail values.
Last time we looked at that nobody could really come up with a sane
solution that wouldn't break something.

I'll try and dig out that thread. Perhaps casting the value to 'unsigned
long' internally might work, I forgot the details.


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-16 10:28                       ` Will Deacon
  2019-12-16 11:47                         ` Peter Zijlstra
@ 2019-12-16 12:06                         ` Arnd Bergmann
  1 sibling, 0 replies; 43+ messages in thread
From: Arnd Bergmann @ 2019-12-16 12:06 UTC (permalink / raw)
  To: Will Deacon
  Cc: Linus Torvalds, Peter Zijlstra, Michael Ellerman, Daniel Axtens,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool,
	Christian Borntraeger

On Mon, Dec 16, 2019 at 11:28 AM Will Deacon <will@kernel.org> wrote:
> On Fri, Dec 13, 2019 at 02:17:08PM +0100, Arnd Bergmann wrote:
> > On Thu, Dec 12, 2019 at 9:50 PM Linus Torvalds
> > <torvalds@linux-foundation.org> wrote:
> > > On Thu, Dec 12, 2019 at 11:34 AM Will Deacon <will@kernel.org> wrote:
> > > > The root of my concern in all of this, and what started me looking at it in
> > > > the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
> > > > for a pointer means that local variables in macros declared using typeof()
> > > > suddenly start generating *hideous* code, particularly when pointless stack
> > > > spills get stackprotector all excited.
> > >
> > > Yeah, removing volatile can be a bit annoying.
> > >
> > > For the particular case of the bitops, though, it's not an issue.
> > > Since you know the type there, you can just cast it.
> > >
> > > And if we had the rule that READ_ONCE() was an arithmetic type, you could do
> > >
> > >     typeof(0+(*p)) __var;
> > >
> > > since you might as well get the integer promotion anyway (on the
> > > non-volatile result).
> > >
> > > But that doesn't work with structures or unions, of course.
> > >
> > > I'm not entirely sure we have READ_ONCE() with a struct. I do know we
> > > have it with 64-bit entities on 32-bit machines, but that's ok with
> > > the "0+" trick.
> >
> > I'll have my randconfig builder look for instances, so far I found one,
> > see below. My feeling is that it would be better to enforce at least
> > the size being a 1/2/4/8, to avoid cases where someone thinks
> > the access is atomic, but it falls back on a memcpy.
>
> I've been using something similar built on compiletime_assert_atomic_type()
> and I spotted another instance in the xdp code (xskq_validate_desc()) which
> tries to READ_ONCE() on a 128-bit descriptor, although a /very/ quick read
> of the code suggests that this probably can't be concurrently modified if
> the ring indexes are synchronised properly.

That's the only other one I found. I have not checked how many are structs
that are the size of a normal u8/u16/u32/u64, or if there are types that
have a lower alignment than their size, such as a __u16[2] that might
span a page boundary.

> However, enabling this for 32-bit ARM is total carnage; as Linus mentioned,
> a whole bunch of code appears to be relying on atomic 64-bit access of
> READ_ONCE(); the perf ring buffer, io_uring, the scheduler, pm_runtime,
> cpuidle, ... :(
>
> Unfortunately, at least some of these *do* look like bugs, but I can't see
> how we can fix them, not least because the first two are user ABI afaict. It
> may also be that in practice we get 2x32-bit stores, and that works out fine
> when storing a 32-bit virtual address. I'm not sure what (if anything) the
> compiler guarantees in these cases.

Would it help if 32-bit architectures use atomic64_read() and atomic64_set()
to implement a 64-bit READ_ONCE()/WRITE_ONCE(), or would that make it
worse in other ways?

On mips32, riscv32 and some minor 32-bit architectures with SMP support
(xtensa,  csky, hexagon, openrisc, parisc32, sparc32 and ppc32 AFAICT) this
ends up using a spinlock for GENERIC_ATOMIC64, but at least ARMv6+,
i586+ and most ARC should be fine.

(Side note: the ARMv7 implementation is suboptimal for ARMv7VE+
if LPAE is disabled; I think we really need to add Kconfig options for
ARMv7VE and 32-bit ARMv8 to improve this and things like integer divide).

      Arnd


* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 20:49                   ` Linus Torvalds
  2019-12-13 13:17                     ` Arnd Bergmann
@ 2019-12-17 17:07                     ` Will Deacon
  2019-12-17 18:04                       ` Linus Torvalds
  2019-12-18 10:22                     ` Christian Borntraeger
  2 siblings, 1 reply; 43+ messages in thread
From: Will Deacon @ 2019-12-17 17:07 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Peter Zijlstra, Michael Ellerman, dja, Linux Kernel Mailing List,
	linuxppc-dev, Christophe Leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

On Thu, Dec 12, 2019 at 12:49:52PM -0800, Linus Torvalds wrote:
> On Thu, Dec 12, 2019 at 11:34 AM Will Deacon <will@kernel.org> wrote:
> >
> > The root of my concern in all of this, and what started me looking at it in
> > the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
> > for a pointer means that local variables in macros declared using typeof()
> > suddenly start generating *hideous* code, particularly when pointless stack
> > spills get stackprotector all excited.
> 
> Yeah, removing volatile can be a bit annoying.
> 
> For the particular case of the bitops, though, it's not an issue.
> Since you know the type there, you can just cast it.
> 
> And if we had the rule that READ_ONCE() was an arithmetic type, you could do
> 
>     typeof(0+(*p)) __var;
> 
> since you might as well get the integer promotion anyway (on the
> non-volatile result).
> 
> But that doesn't work with structures or unions, of course.
> 
> I'm not entirely sure we have READ_ONCE() with a struct. I do know we
> have it with 64-bit entities on 32-bit machines, but that's ok with
> the "0+" trick.

Other than the two trivial examples Arnd and I spotted, it looks like
we're in for some fun with the page-table types such as pud_t but that
/should/ be fixable with enough effort.

However, I'm really banging my head against the compiler trying to get
your trick above to work for pointer types when the pointed-to-type is
not defined. As a very cut down example (I pulled this back out of the
preprocessor and cleaned it up a bit):


struct dentry {
	struct inode *d_inode;
};

static inline struct inode *d_inode_rcu(struct dentry *dentry)
{
	return ({
		typeof(0 + dentry->d_inode) __x = (*(volatile typeof(dentry->d_inode) *)&(dentry->d_inode));
		(typeof(dentry->d_inode))__x;
	});
}


Trying to compile this results in:

  | In function 'd_inode_rcu':
  | error: invalid use of undefined type 'struct inode'

whereas it compiles fine if you remove the '0 +' from the first typeof.

What am I missing? Perhaps the compiler wants the size information of
'struct inode' before it will contemplate the arithmetic, but if so then
I don't think we can use this trick after all. Hmm.

Will

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-17 17:07                     ` Will Deacon
@ 2019-12-17 18:04                       ` Linus Torvalds
  2019-12-17 18:05                         ` Linus Torvalds
  2019-12-17 18:32                         ` Linus Torvalds
  0 siblings, 2 replies; 43+ messages in thread
From: Linus Torvalds @ 2019-12-17 18:04 UTC (permalink / raw)
  To: Will Deacon
  Cc: Peter Zijlstra, Michael Ellerman, Daniel Axtens,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool, Arnd Bergmann,
	Christian Borntraeger

On Tue, Dec 17, 2019 at 9:07 AM Will Deacon <will@kernel.org> wrote:
>
> However, I'm really banging my head against the compiler trying to get
> your trick above to work for pointer types when the pointed-to-type is
> not defined.

You are right, of course. The trick works fine with arithmetic types,
but since it does use arithmetic, it requires that pointer types be
not only declared, but defined. The addition wants the size of the
underlying type (even though with an addition of zero it wouldn't be
required - but that's not how C works).

Let me think about it.

             Linus

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-17 18:04                       ` Linus Torvalds
@ 2019-12-17 18:05                         ` Linus Torvalds
  2019-12-17 18:31                           ` Will Deacon
  2019-12-17 18:32                         ` Linus Torvalds
  1 sibling, 1 reply; 43+ messages in thread
From: Linus Torvalds @ 2019-12-17 18:05 UTC (permalink / raw)
  To: Will Deacon
  Cc: Peter Zijlstra, Michael Ellerman, Daniel Axtens,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool, Arnd Bergmann,
	Christian Borntraeger

On Tue, Dec 17, 2019 at 10:04 AM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> Let me think about it.

.. and in the short term, maybe for code generation, the right thing
is to just do the cast in the bitops, where we can just cast to
"unsigned long *" and remove the volatile that way.

I'm still hoping there's a trick, but..

           Linus

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-17 18:05                         ` Linus Torvalds
@ 2019-12-17 18:31                           ` Will Deacon
  0 siblings, 0 replies; 43+ messages in thread
From: Will Deacon @ 2019-12-17 18:31 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Peter Zijlstra, Michael Ellerman, Daniel Axtens,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool, Arnd Bergmann,
	Christian Borntraeger

On Tue, Dec 17, 2019 at 10:05:53AM -0800, Linus Torvalds wrote:
> On Tue, Dec 17, 2019 at 10:04 AM Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
> >
> > Let me think about it.
> 
> .. and in the short term, maybe for code generation, the right thing
> is to just do the cast in the bitops, where we can just cast to
> "unsigned long *" and remove the volatile that way.

Yeah, I think I'll spin that patch series tomorrow anyway, since I don't
think we need to hold it up.

> I'm still hoping there's a trick, but..

Well, there's always Peter's awful hack [1] but it's really gross. FWIW,
I've pushed the handful of patches I have to [2], which drop the GCC 4.8
workaround and introduce a non-atomic version instead of the
'__builtin_memcpy()'.

Will

[1] https://lore.kernel.org/lkml/20191213125618.GD2844@hirez.programming.kicks-ass.net
[2] https://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git/log/?h=rwonce/cleanup

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-17 18:04                       ` Linus Torvalds
  2019-12-17 18:05                         ` Linus Torvalds
@ 2019-12-17 18:32                         ` Linus Torvalds
  2019-12-18 12:17                           ` Michael Ellerman
  2019-12-19 12:11                           ` Will Deacon
  1 sibling, 2 replies; 43+ messages in thread
From: Linus Torvalds @ 2019-12-17 18:32 UTC (permalink / raw)
  To: Will Deacon
  Cc: Peter Zijlstra, Michael Ellerman, Daniel Axtens,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool, Arnd Bergmann,
	Christian Borntraeger

[-- Attachment #1: Type: text/plain, Size: 755 bytes --]

On Tue, Dec 17, 2019 at 10:04 AM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> Let me think about it.

How about we just get rid of the union entirely, and just use
'unsigned long' or 'unsigned long long' depending on the size.

Something like the attached patch - it still requires that it be an
arithmetic type, but now because of the final cast.

But it might still be a cast to a volatile type, of course. Then the
result will be volatile, but at least now READ_ONCE() won't be taking
the address of a volatile variable on the stack - does that at least
fix some of the horrible code generation? Hmm.

This is untested, because I obviously still have the cases of
structures (page table entries) being accessed once..

              Linus

[-- Attachment #2: patch.diff --]
[-- Type: text/x-patch, Size: 2149 bytes --]

 include/linux/compiler.h | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 5e88e7e33abe..8b4282194f16 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -179,18 +179,18 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 
 #include <uapi/linux/types.h>
 
-#define __READ_ONCE_SIZE						\
-({									\
-	switch (size) {							\
-	case 1: *(__u8 *)res = *(volatile __u8 *)p; break;		\
-	case 2: *(__u16 *)res = *(volatile __u16 *)p; break;		\
-	case 4: *(__u32 *)res = *(volatile __u32 *)p; break;		\
-	case 8: *(__u64 *)res = *(volatile __u64 *)p; break;		\
-	default:							\
-		barrier();						\
-		__builtin_memcpy((void *)res, (const void *)p, size);	\
-		barrier();						\
-	}								\
+/* "unsigned long" or "unsigned long long" - make it fit in a register if possible */
+#define __READ_ONCE_TYPE(size) \
+	__typeof__(__builtin_choose_expr(size > sizeof(0UL), 0ULL, 0UL))
+
+#define __READ_ONCE_SIZE							\
+({										\
+	switch (size) {								\
+	case 1: *(unsigned long *)res = *(volatile __u8 *)p; break;		\
+	case 2: *(unsigned long *)res = *(volatile __u16 *)p; break;		\
+	case 4: *(unsigned long *)res = *(volatile __u32 *)p; break;		\
+	case 8: *(unsigned long long *)res = *(volatile __u64 *)p; break;	\
+	}									\
 })
 
 static __always_inline
@@ -258,13 +258,14 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
 
 #define __READ_ONCE(x, check)						\
 ({									\
-	union { typeof(x) __val; char __c[1]; } __u;			\
+	__READ_ONCE_TYPE(sizeof(x)) __u;				\
+	compiletime_assert(sizeof(x) <= sizeof(__u), "READ_ONCE type");	\
 	if (check)							\
-		__read_once_size(&(x), __u.__c, sizeof(x));		\
+		__read_once_size(&(x), &__u, sizeof(x));		\
 	else								\
-		__read_once_size_nocheck(&(x), __u.__c, sizeof(x));	\
+		__read_once_size_nocheck(&(x), &__u, sizeof(x));	\
 	smp_read_barrier_depends(); /* Enforce dependency ordering from x */ \
-	__u.__val;							\
+	(__typeof__(x))__u;						\
 })
 #define READ_ONCE(x) __READ_ONCE(x, 1)
 

^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-12 20:49                   ` Linus Torvalds
  2019-12-13 13:17                     ` Arnd Bergmann
  2019-12-17 17:07                     ` Will Deacon
@ 2019-12-18 10:22                     ` Christian Borntraeger
  2019-12-18 10:35                       ` Will Deacon
  2 siblings, 1 reply; 43+ messages in thread
From: Christian Borntraeger @ 2019-12-18 10:22 UTC (permalink / raw)
  To: Linus Torvalds, Will Deacon
  Cc: Peter Zijlstra, Michael Ellerman, dja, Linux Kernel Mailing List,
	linuxppc-dev, Christophe Leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann

On 12.12.19 21:49, Linus Torvalds wrote:
> On Thu, Dec 12, 2019 at 11:34 AM Will Deacon <will@kernel.org> wrote:
>>
>> The root of my concern in all of this, and what started me looking at it in
>> the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
>> for a pointer means that local variables in macros declared using typeof()
>> suddenly start generating *hideous* code, particularly when pointless stack
>> spills get stackprotector all excited.
> 
> Yeah, removing volatile can be a bit annoying.
> 
> For the particular case of the bitops, though, it's not an issue.
> Since you know the type there, you can just cast it.
> 
> And if we had the rule that READ_ONCE() was an arithmetic type, you could do
> 
>     typeof(0+(*p)) __var;
> 
> since you might as well get the integer promotion anyway (on the
> non-volatile result).
> 
> But that doesn't work with structures or unions, of course.

We do have a READ_ONCE on the following union in s390 code.

union ipte_control {
        unsigned long val;
        struct {
                unsigned long k  : 1;
                unsigned long kh : 31;
                unsigned long kg : 32;
        }; 
};


In fact this one was the original failure case that made us change ACCESS_ONCE().

see arch/s390/kvm/gaccess.c


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-18 10:22                     ` Christian Borntraeger
@ 2019-12-18 10:35                       ` Will Deacon
  0 siblings, 0 replies; 43+ messages in thread
From: Will Deacon @ 2019-12-18 10:35 UTC (permalink / raw)
  To: Christian Borntraeger
  Cc: Linus Torvalds, Peter Zijlstra, Michael Ellerman, dja,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool, Arnd Bergmann

On Wed, Dec 18, 2019 at 11:22:05AM +0100, Christian Borntraeger wrote:
> On 12.12.19 21:49, Linus Torvalds wrote:
> > On Thu, Dec 12, 2019 at 11:34 AM Will Deacon <will@kernel.org> wrote:
> >>
> >> The root of my concern in all of this, and what started me looking at it in
> >> the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
> >> for a pointer means that local variables in macros declared using typeof()
> >> suddenly start generating *hideous* code, particularly when pointless stack
> >> spills get stackprotector all excited.
> > 
> > Yeah, removing volatile can be a bit annoying.
> > 
> > For the particular case of the bitops, though, it's not an issue.
> > Since you know the type there, you can just cast it.
> > 
> > And if we had the rule that READ_ONCE() was an arithmetic type, you could do
> > 
> >     typeof(0+(*p)) __var;
> > 
> > since you might as well get the integer promotion anyway (on the
> > non-volatile result).
> > 
> > But that doesn't work with structures or unions, of course.
> 
> We do have a READ_ONCE on the following union in s390 code.
> 
> union ipte_control {
>         unsigned long val;
>         struct {
>                 unsigned long k  : 1;
>                 unsigned long kh : 31;
>                 unsigned long kg : 32;
>         }; 
> };
> 
> 
> In fact this one was the original failure case that made us change ACCESS_ONCE().
> 
> see arch/s390/kvm/gaccess.c

Thanks. I think we should be ok just using the 'val' field instead of the
whole union but, then again, when bitfields are involved who knows what the
compiler might do. I thought we usually shied away from using them to mirror
hardware structures like this?

Will

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-17 18:32                         ` Linus Torvalds
@ 2019-12-18 12:17                           ` Michael Ellerman
  2019-12-19 12:11                           ` Will Deacon
  1 sibling, 0 replies; 43+ messages in thread
From: Michael Ellerman @ 2019-12-18 12:17 UTC (permalink / raw)
  To: Linus Torvalds, Will Deacon
  Cc: Peter Zijlstra, Daniel Axtens, Linux Kernel Mailing List,
	linuxppc-dev, Christophe Leroy, linux-arch, Mark Rutland,
	Segher Boessenkool, Arnd Bergmann, Christian Borntraeger

Linus Torvalds <torvalds@linux-foundation.org> writes:
> On Tue, Dec 17, 2019 at 10:04 AM Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
>>
>> Let me think about it.
>
> How about we just get rid of the union entirely, and just use
> 'unsigned long' or 'unsigned long long' depending on the size.
>
> Something like the attached patch - it still requires that it be an
> arithmetic type, but now because of the final cast.
>
> But it might still be a cast to a volatile type, of course. Then the
> result will be volatile, but at least now READ_ONCE() won't be taking
> the address of a volatile variable on the stack - does that at least
> fix some of the horrible code generation. Hmm?

Yes it seems to fix it for me.

There's no unnecessary stack protector gunk, and no store/load to the
stack variable.

This is my previous example of ext4_resize_begin(), hacked to use a copy
of the generic version of test_and_set_bit_lock(), which in turn was
hacked to use a local version of your READ_ONCE().

  c000000000534390 <ext4_resize_begin_generic>:
  c000000000534390:       19 01 4c 3c     addis   r2,r12,281
  c000000000534394:       70 c3 42 38     addi    r2,r2,-15504
  c000000000534398:       a6 02 08 7c     mflr    r0
  c00000000053439c:       4d 98 b3 4b     bl      c00000000006dbe8 <_mcount>
  c0000000005343a0:       a6 02 08 7c     mflr    r0
  c0000000005343a4:       f8 ff e1 fb     std     r31,-8(r1)
  c0000000005343a8:       f0 ff c1 fb     std     r30,-16(r1)
  c0000000005343ac:       78 1b 7f 7c     mr      r31,r3
  c0000000005343b0:       18 00 60 38     li      r3,24
  c0000000005343b4:       10 00 01 f8     std     r0,16(r1)
  c0000000005343b8:       91 ff 21 f8     stdu    r1,-112(r1)
  c0000000005343bc:       98 03 df eb     ld      r30,920(r31)
  c0000000005343c0:       d9 d3 c0 4b     bl      c000000000141798 <capable+0x8>
  c0000000005343c4:       00 00 00 60     nop
  c0000000005343c8:       00 00 a3 2f     cmpdi   cr7,r3,0
  c0000000005343cc:       a4 00 9e 41     beq     cr7,c000000000534470 <ext4_resize_begin_generic+0xe0>
  c0000000005343d0:       98 03 3f e9     ld      r9,920(r31)
  c0000000005343d4:       60 00 5e e9     ld      r10,96(r30)
  c0000000005343d8:       54 00 fe 80     lwz     r7,84(r30)
  c0000000005343dc:       68 00 09 e9     ld      r8,104(r9)
  c0000000005343e0:       18 00 4a e9     ld      r10,24(r10)
  c0000000005343e4:       14 00 08 81     lwz     r8,20(r8)
  c0000000005343e8:       36 3c 4a 7d     srd     r10,r10,r7
  c0000000005343ec:       00 40 aa 7f     cmpd    cr7,r10,r8
  c0000000005343f0:       b8 00 9e 40     bne     cr7,c0000000005344a8 <ext4_resize_begin_generic+0x118>
  c0000000005343f4:       a0 00 49 a1     lhz     r10,160(r9)
  c0000000005343f8:       02 00 4a 71     andi.   r10,r10,2
  c0000000005343fc:       84 00 82 40     bne     c000000000534480 <ext4_resize_begin_generic+0xf0>
  c000000000534400:       30 02 49 e9     ld      r10,560(r9)						# simple load of EXT4_SB(sb)->s_ext4_flags
  c000000000534404:       01 00 4a 71     andi.   r10,r10,1
  c000000000534408:       48 00 82 40     bne     c000000000534450 <ext4_resize_begin_generic+0xc0>
  c00000000053440c:       30 02 e9 38     addi    r7,r9,560
  c000000000534410:       01 00 00 39     li      r8,1
  c000000000534414:       a8 38 40 7d     ldarx   r10,0,r7
  c000000000534418:       78 53 06 7d     or      r6,r8,r10
  c00000000053441c:       ad 39 c0 7c     stdcx.  r6,0,r7
  c000000000534420:       f4 ff c2 40     bne-    c000000000534414 <ext4_resize_begin_generic+0x84>
  c000000000534424:       2c 01 00 4c     isync
  c000000000534428:       01 00 49 71     andi.   r9,r10,1
  c00000000053442c:       00 00 60 38     li      r3,0
  c000000000534430:       20 00 82 40     bne     c000000000534450 <ext4_resize_begin_generic+0xc0>
  c000000000534434:       70 00 21 38     addi    r1,r1,112
  c000000000534438:       10 00 01 e8     ld      r0,16(r1)
  c00000000053443c:       f0 ff c1 eb     ld      r30,-16(r1)
  c000000000534440:       f8 ff e1 eb     ld      r31,-8(r1)
  c000000000534444:       a6 03 08 7c     mtlr    r0
  c000000000534448:       20 00 80 4e     blr
  c00000000053444c:       00 00 00 60     nop
  c000000000534450:       70 00 21 38     addi    r1,r1,112
  c000000000534454:       f0 ff 60 38     li      r3,-16
  c000000000534458:       10 00 01 e8     ld      r0,16(r1)
  c00000000053445c:       f0 ff c1 eb     ld      r30,-16(r1)
  c000000000534460:       f8 ff e1 eb     ld      r31,-8(r1)
  c000000000534464:       a6 03 08 7c     mtlr    r0
  c000000000534468:       20 00 80 4e     blr
  c00000000053446c:       00 00 00 60     nop
  c000000000534470:       ff ff 60 38     li      r3,-1
  c000000000534474:       c0 ff ff 4b     b       c000000000534434 <ext4_resize_begin_generic+0xa4>
  c000000000534478:       00 00 00 60     nop
  c00000000053447c:       00 00 00 60     nop
  c000000000534480:       8a ff c2 3c     addis   r6,r2,-118
  c000000000534484:       74 ff 82 3c     addis   r4,r2,-140
  c000000000534488:       78 fb e3 7f     mr      r3,r31
  c00000000053448c:       7c 00 a0 38     li      r5,124
  c000000000534490:       a8 75 c6 38     addi    r6,r6,30120
  c000000000534494:       f8 0b 84 38     addi    r4,r4,3064
  c000000000534498:       d1 a4 01 48     bl      c00000000054e968 <__ext4_warning+0x8>
  c00000000053449c:       00 00 00 60     nop
  c0000000005344a0:       ff ff 60 38     li      r3,-1
  c0000000005344a4:       90 ff ff 4b     b       c000000000534434 <ext4_resize_begin_generic+0xa4>
  c0000000005344a8:       60 00 29 e9     ld      r9,96(r9)
  c0000000005344ac:       8a ff c2 3c     addis   r6,r2,-118
  c0000000005344b0:       74 ff 82 3c     addis   r4,r2,-140
  c0000000005344b4:       78 fb e3 7f     mr      r3,r31
  c0000000005344b8:       72 00 a0 38     li      r5,114
  c0000000005344bc:       78 75 c6 38     addi    r6,r6,30072
  c0000000005344c0:       f8 0b 84 38     addi    r4,r4,3064
  c0000000005344c4:       18 00 e9 e8     ld      r7,24(r9)
  c0000000005344c8:       a1 a4 01 48     bl      c00000000054e968 <__ext4_warning+0x8>
  c0000000005344cc:       00 00 00 60     nop
  c0000000005344d0:       ff ff 60 38     li      r3,-1
  c0000000005344d4:       60 ff ff 4b     b       c000000000534434 <ext4_resize_begin_generic+0xa4>
  c0000000005344d8:       00 00 00 60     nop
  c0000000005344dc:       00 00 00 60     nop


cheers

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))
  2019-12-17 18:32                         ` Linus Torvalds
  2019-12-18 12:17                           ` Michael Ellerman
@ 2019-12-19 12:11                           ` Will Deacon
  1 sibling, 0 replies; 43+ messages in thread
From: Will Deacon @ 2019-12-19 12:11 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Peter Zijlstra, Michael Ellerman, Daniel Axtens,
	Linux Kernel Mailing List, linuxppc-dev, Christophe Leroy,
	linux-arch, Mark Rutland, Segher Boessenkool, Arnd Bergmann,
	Christian Borntraeger

On Tue, Dec 17, 2019 at 10:32:35AM -0800, Linus Torvalds wrote:
> On Tue, Dec 17, 2019 at 10:04 AM Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
> >
> > Let me think about it.
> 
> How about we just get rid of the union entirely, and just use
> 'unsigned long' or 'unsigned long long' depending on the size.
> 
> Something like the attached patch - it still requires that it be an
> arithmetic type, but now because of the final cast.
> 
> But it might still be a cast to a volatile type, of course. Then the
> result will be volatile, but at least now READ_ONCE() won't be taking
> the address of a volatile variable on the stack - does that at least
> fix some of the horrible code generation. Hmm?

Sounds like it according to mpe, but I'll confirm too for arm64.

> This is untested, because I obviously still have the cases of
> structures (page table entries) being accessed once..
> 
>               Linus

>  include/linux/compiler.h | 33 +++++++++++++++++----------------
>  1 file changed, 17 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> index 5e88e7e33abe..8b4282194f16 100644
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -179,18 +179,18 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
>  
>  #include <uapi/linux/types.h>
>  
> -#define __READ_ONCE_SIZE						\
> -({									\
> -	switch (size) {							\
> -	case 1: *(__u8 *)res = *(volatile __u8 *)p; break;		\
> -	case 2: *(__u16 *)res = *(volatile __u16 *)p; break;		\
> -	case 4: *(__u32 *)res = *(volatile __u32 *)p; break;		\
> -	case 8: *(__u64 *)res = *(volatile __u64 *)p; break;		\
> -	default:							\
> -		barrier();						\
> -		__builtin_memcpy((void *)res, (const void *)p, size);	\
> -		barrier();						\
> -	}								\
> +/* "unsigned long" or "unsigned long long" - make it fit in a register if possible */
> +#define __READ_ONCE_TYPE(size) \
> +	__typeof__(__builtin_choose_expr(size > sizeof(0UL), 0ULL, 0UL))

Ha, I wondered when '__builtin_choose_expr()' would make an appearance in
this thread! Nice trick. I'll try integrating this with what I have and see
what I run into next.

Back down the rabbit hole...

Will

^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2019-12-19 12:11 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-12-06 12:46 [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops) Michael Ellerman
2019-12-06 13:16 ` Peter Zijlstra
2019-12-10  5:38   ` Michael Ellerman
2019-12-10 10:15     ` Peter Zijlstra
2019-12-11  0:29       ` Michael Ellerman
2019-12-12  5:42   ` READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)) Michael Ellerman
2019-12-12  8:01     ` Peter Zijlstra
2019-12-12 10:07       ` Will Deacon
2019-12-12 10:46         ` Peter Zijlstra
2019-12-12 17:04           ` Will Deacon
2019-12-12 17:16             ` Will Deacon
2019-12-12 17:41           ` Linus Torvalds
2019-12-12 17:50             ` Segher Boessenkool
2019-12-12 18:06             ` Will Deacon
2019-12-12 18:29               ` Christian Borntraeger
2019-12-12 18:43               ` Linus Torvalds
2019-12-12 19:34                 ` Will Deacon
2019-12-12 20:21                   ` Peter Zijlstra
2019-12-12 20:53                     ` Peter Zijlstra
2019-12-13 10:47                       ` Luc Van Oostenryck
2019-12-13 12:56                         ` Peter Zijlstra
2019-12-13 14:28                           ` Luc Van Oostenryck
2019-12-12 20:49                   ` Linus Torvalds
2019-12-13 13:17                     ` Arnd Bergmann
2019-12-13 21:32                       ` Arnd Bergmann
2019-12-13 22:01                         ` Linus Torvalds
2019-12-16 10:28                       ` Will Deacon
2019-12-16 11:47                         ` Peter Zijlstra
2019-12-16 12:06                         ` Arnd Bergmann
2019-12-17 17:07                     ` Will Deacon
2019-12-17 18:04                       ` Linus Torvalds
2019-12-17 18:05                         ` Linus Torvalds
2019-12-17 18:31                           ` Will Deacon
2019-12-17 18:32                         ` Linus Torvalds
2019-12-18 12:17                           ` Michael Ellerman
2019-12-19 12:11                           ` Will Deacon
2019-12-18 10:22                     ` Christian Borntraeger
2019-12-18 10:35                       ` Will Deacon
2019-12-13 12:07           ` Michael Ellerman
2019-12-13 13:53             ` Segher Boessenkool
2019-12-13 21:06               ` Michael Ellerman
2019-12-12 15:10     ` Segher Boessenkool
2019-12-06 22:15 ` [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops) pr-tracker-bot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).