* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-29 18:33 ` Eugeniy Paltsev 0 siblings, 0 replies; 89+ messages in thread From: Eugeniy Paltsev @ 2018-08-29 18:33 UTC (permalink / raw) To: linux-kernel, will.deacon Cc: mingo, peterz, tglx, linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel, Vineet Gupta, linux-arch Hi Guys, Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or even on boot. I don't know exactly what breaks but bisect clearly assign the blame to this commit: 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 Reverting the commit solves this problem. I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same generic bitops implementation and it works fine. Do you have any ideas what went wrong? -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-29 18:33 ` Eugeniy Paltsev 0 siblings, 0 replies; 89+ messages in thread From: Eugeniy Paltsev @ 2018-08-29 18:33 UTC (permalink / raw) To: linux-arm-kernel Hi Guys, Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or even on boot. I don't know exactly what breaks but bisect clearly assign the blame to this commit: 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 Reverting the commit solves this problem. I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same generic bitops implementation and it works fine. Do you have any ideas what went wrong? -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-29 18:33 ` Eugeniy Paltsev 0 siblings, 0 replies; 89+ messages in thread From: Eugeniy Paltsev @ 2018-08-29 18:33 UTC (permalink / raw) To: linux-snps-arc Hi Guys, Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or even on boot. I don't know exactly what breaks but bisect clearly assign the blame to this commit: 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 Reverting the commit solves this problem. I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same generic bitops implementation and it works fine. Do you have any ideas what went wrong? -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-29 18:33 ` Eugeniy Paltsev 0 siblings, 0 replies; 89+ messages in thread From: Eugeniy Paltsev @ 2018-08-29 18:33 UTC (permalink / raw) To: linux-kernel, will.deacon Cc: mingo, peterz, tglx, linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel, Vineet Gupta, linux-arch Hi Guys, Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or even on boot. I don't know exactly what breaks but bisect clearly assign the blame to this commit: 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 Reverting the commit solves this problem. I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same generic bitops implementation and it works fine. Do you have any ideas what went wrong? -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-29 18:33 ` Eugeniy Paltsev (?) (?) @ 2018-08-29 21:16 ` Vineet Gupta -1 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-29 21:16 UTC (permalink / raw) To: Eugeniy Paltsev, linux-kernel, will.deacon Cc: mingo, peterz, tglx, linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel, linux-arch On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > Hi Guys, > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > even on boot. > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > Reverting the commit solves this problem. > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > generic bitops implementation and it works fine. > > Do you have any ideas what went wrong? Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See commit f75d48644c56a ("bitops: Do not default to __clear_bit() for __clear_bit_unlock()") That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. This patch undoes that which could explain the issues you see. @Peter, @Will ? -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-29 21:16 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-29 21:16 UTC (permalink / raw) To: linux-arm-kernel On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > Hi Guys, > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > even on boot. > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > Reverting the commit solves this problem. > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > generic bitops implementation and it works fine. > > Do you have any ideas what went wrong? Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See commit f75d48644c56a ("bitops: Do not default to __clear_bit() for __clear_bit_unlock()") That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. This patch undoes that which could explain the issues you see. @Peter, @Will ? -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-29 21:16 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-29 21:16 UTC (permalink / raw) To: linux-snps-arc On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > Hi Guys, > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > even on boot. > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > Reverting the commit solves this problem. > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > generic bitops implementation and it works fine. > > Do you have any ideas what went wrong? Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See commit f75d48644c56a ("bitops: Do not default to __clear_bit() for __clear_bit_unlock()") That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. This patch undoes that which could explain the issues you see. @Peter, @Will ? -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-29 21:16 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-29 21:16 UTC (permalink / raw) To: Eugeniy Paltsev, linux-kernel, will.deacon Cc: mingo, peterz, tglx, linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel, linux-arch On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > Hi Guys, > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > even on boot. > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > Reverting the commit solves this problem. > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > generic bitops implementation and it works fine. > > Do you have any ideas what went wrong? Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See commit f75d48644c56a ("bitops: Do not default to __clear_bit() for __clear_bit_unlock()") That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. This patch undoes that which could explain the issues you see. @Peter, @Will ? -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-29 21:16 ` Vineet Gupta (?) (?) @ 2018-08-30 9:35 ` Will Deacon -1 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 9:35 UTC (permalink / raw) To: Vineet Gupta Cc: Eugeniy Paltsev, linux-kernel, mingo, peterz, tglx, linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel, linux-arch On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > Hi Guys, > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > even on boot. > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > > > Reverting the commit solves this problem. > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > generic bitops implementation and it works fine. > > > > Do you have any ideas what went wrong? > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > __clear_bit_unlock()") > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > This patch undoes that which could explain the issues you see. @Peter, @Will ? /me grabs arc toolchain (incidentally, make.cross fuzzy matches "arc" to "sparc", so that was fun for a few minutes). I'll take a look today, thanks for the report. Will ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 9:35 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 9:35 UTC (permalink / raw) To: linux-arm-kernel On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > Hi Guys, > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > even on boot. > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > > > Reverting the commit solves this problem. > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > generic bitops implementation and it works fine. > > > > Do you have any ideas what went wrong? > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > __clear_bit_unlock()") > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > This patch undoes that which could explain the issues you see. @Peter, @Will ? /me grabs arc toolchain (incidentally, make.cross fuzzy matches "arc" to "sparc", so that was fun for a few minutes). I'll take a look today, thanks for the report. Will ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 9:35 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 9:35 UTC (permalink / raw) To: linux-snps-arc On Wed, Aug 29, 2018@09:16:43PM +0000, Vineet Gupta wrote: > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > Hi Guys, > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > even on boot. > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > > > Reverting the commit solves this problem. > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > generic bitops implementation and it works fine. > > > > Do you have any ideas what went wrong? > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > __clear_bit_unlock()") > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > This patch undoes that which could explain the issues you see. @Peter, @Will ? /me grabs arc toolchain (incidentally, make.cross fuzzy matches "arc" to "sparc", so that was fun for a few minutes). I'll take a look today, thanks for the report. Will ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 9:35 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 9:35 UTC (permalink / raw) To: Vineet Gupta Cc: Eugeniy Paltsev, linux-kernel, mingo, peterz, tglx, linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel, linux-arch On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > Hi Guys, > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > even on boot. > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > > > Reverting the commit solves this problem. > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > generic bitops implementation and it works fine. > > > > Do you have any ideas what went wrong? > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > __clear_bit_unlock()") > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > This patch undoes that which could explain the issues you see. @Peter, @Will ? /me grabs arc toolchain (incidentally, make.cross fuzzy matches "arc" to "sparc", so that was fun for a few minutes). I'll take a look today, thanks for the report. Will ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-29 21:16 ` Vineet Gupta ` (2 preceding siblings ...) (?) @ 2018-08-30 9:44 ` Peter Zijlstra -1 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 9:44 UTC (permalink / raw) To: Vineet Gupta Cc: Eugeniy Paltsev, linux-kernel, will.deacon, mingo, tglx, linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel, linux-arch On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > Hi Guys, > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > even on boot. > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > > > Reverting the commit solves this problem. > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > generic bitops implementation and it works fine. > > > > Do you have any ideas what went wrong? > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > __clear_bit_unlock()") > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > This patch undoes that which could explain the issues you see. @Peter, @Will ? Right, so the thinking is that on platforms that suffer that issue, atomic_set*() should DTRT. And if you look at your spinlock based atomic implementation, you'll note that atomic_set() does indeed do the right thing. arch/arc/include/asm/atomic.h:108 ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 9:44 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 9:44 UTC (permalink / raw) To: linux-arm-kernel On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > Hi Guys, > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > even on boot. > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > > > Reverting the commit solves this problem. > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > generic bitops implementation and it works fine. > > > > Do you have any ideas what went wrong? > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > __clear_bit_unlock()") > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > This patch undoes that which could explain the issues you see. @Peter, @Will ? Right, so the thinking is that on platforms that suffer that issue, atomic_set*() should DTRT. And if you look at your spinlock based atomic implementation, you'll note that atomic_set() does indeed do the right thing. arch/arc/include/asm/atomic.h:108 ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 9:44 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 9:44 UTC (permalink / raw) To: linux-snps-arc On Wed, Aug 29, 2018@09:16:43PM +0000, Vineet Gupta wrote: > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > Hi Guys, > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > even on boot. > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > > > Reverting the commit solves this problem. > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > generic bitops implementation and it works fine. > > > > Do you have any ideas what went wrong? > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > __clear_bit_unlock()") > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > This patch undoes that which could explain the issues you see. @Peter, @Will ? Right, so the thinking is that on platforms that suffer that issue, atomic_set*() should DTRT. And if you look at your spinlock based atomic implementation, you'll note that atomic_set() does indeed do the right thing. arch/arc/include/asm/atomic.h:108 ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 9:44 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 9:44 UTC (permalink / raw) To: Vineet Gupta Cc: Eugeniy Paltsev, linux-kernel, will.deacon, mingo, tglx, linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel, linux-arch On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > Hi Guys, > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > even on boot. > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > > > Reverting the commit solves this problem. > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > generic bitops implementation and it works fine. > > > > Do you have any ideas what went wrong? > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > __clear_bit_unlock()") > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > This patch undoes that which could explain the issues you see. @Peter, @Will ? Right, so the thinking is that on platforms that suffer that issue, atomic_set*() should DTRT. And if you look at your spinlock based atomic implementation, you'll note that atomic_set() does indeed do the right thing. arch/arc/include/asm/atomic.h:108 ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 9:44 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 9:44 UTC (permalink / raw) To: Vineet Gupta Cc: linux-arch, Alexey Brodkin, will.deacon, linux-kernel, yamada.masahiro, tglx, linux-snps-arc, Eugeniy Paltsev, mingo, linux-arm-kernel On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > Hi Guys, > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > even on boot. > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > > > Reverting the commit solves this problem. > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > generic bitops implementation and it works fine. > > > > Do you have any ideas what went wrong? > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > __clear_bit_unlock()") > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > This patch undoes that which could explain the issues you see. @Peter, @Will ? Right, so the thinking is that on platforms that suffer that issue, atomic_set*() should DTRT. And if you look at your spinlock based atomic implementation, you'll note that atomic_set() does indeed do the right thing. arch/arc/include/asm/atomic.h:108 ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 9:44 ` Peter Zijlstra (?) (?) @ 2018-08-30 9:51 ` Will Deacon -1 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 9:51 UTC (permalink / raw) To: Peter Zijlstra Cc: Vineet Gupta, Eugeniy Paltsev, linux-kernel, mingo, tglx, linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 11:44:11AM +0200, Peter Zijlstra wrote: > On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > > Hi Guys, > > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > > even on boot. > > > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > > > > > Reverting the commit solves this problem. > > > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > > generic bitops implementation and it works fine. > > > > > > Do you have any ideas what went wrong? > > > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > > __clear_bit_unlock()") > > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > > > This patch undoes that which could explain the issues you see. @Peter, @Will ? > > Right, so the thinking is that on platforms that suffer that issue, > atomic_set*() should DTRT. And if you look at your spinlock based atomic > implementation, you'll note that atomic_set() does indeed do the right > thing. > > arch/arc/include/asm/atomic.h:108 Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a boils down to concurrent atomic_long_set_release() vs atomic_long_fetch_or_acquire(), which really needs to work. I'll keep digging. In the meantime, Vineet, do you have any useful crash logs and do you only see the crashes in certain configurations (e.g. SMP but !CONFIG_ARC_HAS_LLSC)? Will ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 9:51 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 9:51 UTC (permalink / raw) To: linux-arm-kernel On Thu, Aug 30, 2018 at 11:44:11AM +0200, Peter Zijlstra wrote: > On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > > Hi Guys, > > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > > even on boot. > > > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > > > > > Reverting the commit solves this problem. > > > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > > generic bitops implementation and it works fine. > > > > > > Do you have any ideas what went wrong? > > > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > > __clear_bit_unlock()") > > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > > > This patch undoes that which could explain the issues you see. @Peter, @Will ? > > Right, so the thinking is that on platforms that suffer that issue, > atomic_set*() should DTRT. And if you look at your spinlock based atomic > implementation, you'll note that atomic_set() does indeed do the right > thing. > > arch/arc/include/asm/atomic.h:108 Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a boils down to concurrent atomic_long_set_release() vs atomic_long_fetch_or_acquire(), which really needs to work. I'll keep digging. In the meantime, Vineet, do you have any useful crash logs and do you only see the crashes in certain configurations (e.g. SMP but !CONFIG_ARC_HAS_LLSC)? Will ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 9:51 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 9:51 UTC (permalink / raw) To: linux-snps-arc On Thu, Aug 30, 2018@11:44:11AM +0200, Peter Zijlstra wrote: > On Wed, Aug 29, 2018@09:16:43PM +0000, Vineet Gupta wrote: > > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > > Hi Guys, > > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > > even on boot. > > > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > > > > > Reverting the commit solves this problem. > > > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > > generic bitops implementation and it works fine. > > > > > > Do you have any ideas what went wrong? > > > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > > __clear_bit_unlock()") > > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > > > This patch undoes that which could explain the issues you see. @Peter, @Will ? > > Right, so the thinking is that on platforms that suffer that issue, > atomic_set*() should DTRT. And if you look at your spinlock based atomic > implementation, you'll note that atomic_set() does indeed do the right > thing. > > arch/arc/include/asm/atomic.h:108 Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a boils down to concurrent atomic_long_set_release() vs atomic_long_fetch_or_acquire(), which really needs to work. I'll keep digging. In the meantime, Vineet, do you have any useful crash logs and do you only see the crashes in certain configurations (e.g. SMP but !CONFIG_ARC_HAS_LLSC)? Will ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 9:51 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 9:51 UTC (permalink / raw) To: Peter Zijlstra Cc: Vineet Gupta, Eugeniy Paltsev, linux-kernel, mingo, tglx, linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 11:44:11AM +0200, Peter Zijlstra wrote: > On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > > Hi Guys, > > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > > even on boot. > > > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > > https://github.com/torvalds/linux/commit/84c6591103dbeaf393a092a3fc7b09510825f6b9 > > > > > > Reverting the commit solves this problem. > > > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > > generic bitops implementation and it works fine. > > > > > > Do you have any ideas what went wrong? > > > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > > __clear_bit_unlock()") > > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > > > This patch undoes that which could explain the issues you see. @Peter, @Will ? > > Right, so the thinking is that on platforms that suffer that issue, > atomic_set*() should DTRT. And if you look at your spinlock based atomic > implementation, you'll note that atomic_set() does indeed do the right > thing. > > arch/arc/include/asm/atomic.h:108 Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a boils down to concurrent atomic_long_set_release() vs atomic_long_fetch_or_acquire(), which really needs to work. I'll keep digging. In the meantime, Vineet, do you have any useful crash logs and do you only see the crashes in certain configurations (e.g. SMP but !CONFIG_ARC_HAS_LLSC)? Will ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 9:51 ` Will Deacon (?) (?) @ 2018-08-30 11:53 ` Eugeniy Paltsev -1 siblings, 0 replies; 89+ messages in thread From: Eugeniy Paltsev @ 2018-08-30 11:53 UTC (permalink / raw) To: peterz, will.deacon Cc: mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, Eugeniy.Paltsev, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch Hi Will, On Thu, 2018-08-30 at 10:51 +0100, Will Deacon wrote: > On Thu, Aug 30, 2018 at 11:44:11AM +0200, Peter Zijlstra wrote: > > On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > > > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > > > Hi Guys, > > > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > > > even on boot. > > > > > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_torvalds_linux_commit_84c6591103dbeaf393a092a3fc7b09510825f6b9&d=DwIBAg&c=DPL6 > > > > _X_6JkXFx7AXWqB0tg&r=ZlJN1MriPUTkBKCrPSx67GmaplEUGcAEk9yPtCLdUXI&m=6y0FFvkGdIQ6kX2lZ31V99lMfMV- > > > > RyWyYhiUGzh0Bi0&s=GNwmhSynIcWqgZhiOwFEEH_AtbZAH443_L6QH4nw_ls&e= > > > > > > > > Reverting the commit solves this problem. > > > > > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > > > generic bitops implementation and it works fine. > > > > > > > > Do you have any ideas what went wrong? > > > > > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > > > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > > > __clear_bit_unlock()") > > > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > > > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > > > > > This patch undoes that which could explain the issues you see. @Peter, @Will ? > > > > Right, so the thinking is that on platforms that suffer that issue, > > atomic_set*() should DTRT. And if you look at your spinlock based atomic > > implementation, you'll note that atomic_set() does indeed do the right > > thing. > > > > arch/arc/include/asm/atomic.h:108 > > Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a > boils down to concurrent atomic_long_set_release() vs > atomic_long_fetch_or_acquire(), which really needs to work. > > I'll keep digging. In the meantime, Vineet, do you have any useful crash > logs and do you only see the crashes in certain configurations (e.g. SMP but > !CONFIG_ARC_HAS_LLSC)? We don't have such configuration (SMP with !CONFIG_ARC_HAS_LLSC). I can see crashes with LLSC enabled in both SMP running on 4 cores and SMP running on 1 core. 
Here are some crash logs (not sure how useful they are). Crashes are quite
spontaneous and mostly happen in IO-related code.

Crash on boot:

------------------------>8----------------------------
usb 1-1: new high-speed USB device number 2 using ehci-platform
hub 1-1:1.0: USB hub found
hub 1-1:1.0: 2 ports detected
usb 1-1.1: new high-speed USB device number 3 using ehci-platform
usb-storage 1-1.1:1.0: USB Mass Storage device detected
scsi host0: usb-storage 1-1.1:1.0
scsi 0:0:0:0: Direct-Access Generic STORAGE DEVICE 0272 PQ: 0 ANSI: 0
sd 0:0:0:0: [sda] 15759360 512-byte logical blocks: (8.07 GB/7.51 GiB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 0b 00 00 08
sd 0:0:0:0: [sda] No Caching mode page found
sd 0:0:0:0: [sda] Assuming drive cache: write through
 sda: sda1 sda2 sda3 sda4
sd 0:0:0:0: [sda] Attached SCSI removable disk
INFO: task start-stop-daem:85 blocked for more than 10 seconds.
      Not tainted 4.19.0-rc1 #2
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
start-stop-daem D 0 85 81 0x00000000

Stack Trace:
  __switch_to+0x0/0xac
  __schedule+0x1b2/0x730
  io_schedule+0x5c/0xc0
  __lock_page+0x98/0xdc
  find_lock_entry+0x38/0x100
  shmem_getpage_gfp.isra.3+0x82/0xbfc
  shmem_fault+0x46/0x138
  handle_mm_fault+0x5bc/0x924
  do_page_fault+0xfc/0x294
  ret_from_exception+0x0/0x8
INFO: task start-stop-daem:85 blocked for more than 10 seconds.
      Not tainted 4.19.0-rc1 #2
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
start-stop-daem D 0 85 81 0x00000000
------------------------>8----------------------------

------------------------>8----------------------------
BUG: failure at mm/page-writeback.c:2652/clear_page_dirty_for_io()!
gcc generated __builtin_trap
Path: (null)
CPU: 3 PID: 7 Comm: kworker/u8:0 Not tainted 4.18.0-06995-g54dbe75bbf1e #22
Workqueue: writeback wb_workfn (flush-8:0)

[ECR   ]: 0x00090005 => gcc generated __builtin_trap
[EFA   ]: 0x90162b50
[BLINK ]: clear_page_dirty_for_io+0x13a/0x164
[ERET  ]: abort+0x2/0x4
[STAT32]: 0x80080802 : IE K
BTA: 0x9021e356  SP: 0xbf045c64  FP: 0x00000000
LPS: 0x9067fa68 LPE: 0x9067fa7c LPC: 0x00000000
r00: 0x00000043 r01: 0xbfb462d4 r02: 0x00000000
r03: 0x90157788 r04: 0x00000000 r05: 0x9080c040
r06: 0x00000031 r07: 0x00000000 r08: 0xa9ee8400
r09: 0x0000001e r10: 0x00000100 r11: 0x00000000
r12: 0x9021e35a r13: 0xbff33a54 r14: 0xbe87e124
r15: 0xbf045e50 r16: 0xbe87e124 r17: 0x0000470c
r18: 0x0000000f r19: 0x902afee4 r20: 0xbf045cf4
r21: 0xffffffff r22: 0x9080e474 r23: 0x00000000
r24: 0xbff33a54 r25: 0xbf031680

Stack Trace:
  abort+0x2/0x4
  clear_page_dirty_for_io+0x13a/0x164
  write_cache_pages+0x10a/0x328
  mpage_writepages+0x3c/0x94
  do_writepages+0x42/0x398
  __writeback_single_inode+0x2a/0x154
  wb_writeback+0x524/0xac8
  wb_workfn+0x17c/0x34c
  process_one_work+0x1a0/0x354
  worker_thread+0x108/0x4a0
  kthread+0x10c/0x110
  ret_from_fork+0x18/0x1c
------------------------>8----------------------------

------------------------>8----------------------------
# # mkdir -p /mnt/mmc2/ttt && mount /dev/sda1 /mnt/mmc2/ttt && df -h /mnt/mmc2/ttt && bonnie++ -u root -r 256 -s 512 -x 1 -d /mnt/mmc2/ttt && echo 'OK' &
# FAT-fs (sda1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
Filesystem                Size      Used Available Use% Mounted on
/dev/sda1                 2.0G      4.0K      2.0G   0% /mnt/mmc2/ttt
Using uid:0, gid:0.
Writing with putc()...random: crng init done
done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...INFO: task kworker/u8:0:7 blocked for more than 10 seconds.
      Not tainted 4.19.0-rc1 #41
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/u8:0 D 0 7 2 0x00000000
Workqueue: writeback wb_workfn (flush-8:0)

Stack Trace:
  __switch_to+0x0/0xac
  __schedule+0x1b8/0x734
  io_schedule+0x5c/0xc0
  __lock_page+0x9a/0xdc
  generic_writepages+0xc6/0x384
  do_writepages+0x42/0x398
  __writeback_single_inode+0x2a/0x154
  wb_writeback+0x524/0xac8
  wb_workfn+0x21a/0x34c
  process_one_work+0x1a0/0x354
  worker_thread+0x108/0x4a0
  kthread+0x10c/0x110
  ret_from_fork+0x18/0x1c
INFO: task kworker/u8:0:7 blocked for more than 10 seconds.
      Not tainted 4.19.0-rc1 #41
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/u8:0 D 0 7 2 0x00000000
Workqueue: writeback wb_workfn (flush-8:0)
------------------------>8----------------------------

------------------------>8----------------------------
# # hackbench
Running in process mode with 10 groups using 40 file descriptors each (== 400 tasks)
Each sender will pass 100 messages of 100 bytes
INFO: task hackbench:519 blocked for more than 10 seconds.
      Not tainted 4.19.0-rc1 #2
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
hackbench D 0 519 110 0x00000000

Stack Trace:
  __switch_to+0x0/0xac
  __schedule+0x1b2/0x730
  io_schedule+0x5c/0xc0
  __lock_page+0x98/0xdc
  handle_mm_fault+0x594/0xe1c
  do_page_fault+0xfc/0x294
  ret_from_exception+0x0/0x8
INFO: task hackbench:519 blocked for more than 10 seconds.
      Not tainted 4.19.0-rc1 #2
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
hackbench D 0 519 110 0x00000000
------------------------>8----------------------------

--
 Eugeniy Paltsev
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 11:53 ` Eugeniy Paltsev 0 siblings, 0 replies; 89+ messages in thread From: Eugeniy Paltsev @ 2018-08-30 11:53 UTC (permalink / raw) To: linux-arm-kernel Hi Will, On Thu, 2018-08-30 at 10:51 +0100, Will Deacon wrote: > On Thu, Aug 30, 2018 at 11:44:11AM +0200, Peter Zijlstra wrote: > > On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > > > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > > > Hi Guys, > > > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > > > even on boot. > > > > > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_torvalds_linux_commit_84c6591103dbeaf393a092a3fc7b09510825f6b9&d=DwIBAg&c=DPL6 > > > > _X_6JkXFx7AXWqB0tg&r=ZlJN1MriPUTkBKCrPSx67GmaplEUGcAEk9yPtCLdUXI&m=6y0FFvkGdIQ6kX2lZ31V99lMfMV- > > > > RyWyYhiUGzh0Bi0&s=GNwmhSynIcWqgZhiOwFEEH_AtbZAH443_L6QH4nw_ls&e= > > > > > > > > Reverting the commit solves this problem. > > > > > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > > > generic bitops implementation and it works fine. > > > > > > > > Do you have any ideas what went wrong? > > > > > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > > > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > > > __clear_bit_unlock()") > > > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > > > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > > > > > This patch undoes that which could explain the issues you see. @Peter, @Will ? > > > > Right, so the thinking is that on platforms that suffer that issue, > > atomic_set*() should DTRT. And if you look at your spinlock based atomic > > implementation, you'll note that atomic_set() does indeed do the right > > thing. > > > > arch/arc/include/asm/atomic.h:108 > > Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a > boils down to concurrent atomic_long_set_release() vs > atomic_long_fetch_or_acquire(), which really needs to work. > > I'll keep digging. In the meantime, Vineet, do you have any useful crash > logs and do you only see the crashes in certain configurations (e.g. SMP but > !CONFIG_ARC_HAS_LLSC)? We don't have such configuration (SMP with !CONFIG_ARC_HAS_LLSC). I can see crashes with LLSC enabled in both SMP running on 4 cores and SMP running on 1 core. 
There are some crash logs (not sure if they are really useful): Crashes are quite spontaneous and mostly happens in IO-related code: Crash on boot: ------------------------>8---------------------------- usb 1-1: new high-speed USB device number 2 using ehci-platform hub 1-1:1.0: USB hub found hub 1-1:1.0: 2 ports detected usb 1-1.1: new high-speed USB device number 3 using ehci-platform usb-storage 1-1.1:1.0: USB Mass Storage device detected scsi host0: usb-storage 1-1.1:1.0 scsi 0:0:0:0: Direct-Access Generic STORAGE DEVICE 0272 PQ: 0 ANSI: 0 sd 0:0:0:0: [sda] 15759360 512-byte logical blocks: (8.07 GB/7.51 GiB) sd 0:0:0:0: [sda] Write Protect is off sd 0:0:0:0: [sda] Mode Sense: 0b 00 00 08 sd 0:0:0:0: [sda] No Caching mode page found sd 0:0:0:0: [sda] Assuming drive cache: write through sda: sda1 sda2 sda3 sda4 sd 0:0:0:0: [sda] Attached SCSI removable disk INFO: task start-stop-daem:85 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #2 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. start-stop-daem D 0 85 81 0x00000000 Stack Trace: __switch_to+0x0/0xac __schedule+0x1b2/0x730 io_schedule+0x5c/0xc0 __lock_page+0x98/0xdc find_lock_entry+0x38/0x100 shmem_getpage_gfp.isra.3+0x82/0xbfc shmem_fault+0x46/0x138 handle_mm_fault+0x5bc/0x924 do_page_fault+0xfc/0x294 ret_from_exception+0x0/0x8 INFO: task start-stop-daem:85 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #2 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. start-stop-daem D 0 85 81 0x00000000 ------------------------>8---------------------------- ------------------------>8---------------------------- BUG: failure at mm/page-writeback.c:2652/clear_page_dirty_for_io()! gcc generated __builtin_trap Path: (null) CPU: 3 PID: 7 Comm: kworker/u8:0 Not tainted 4.18.0-06995-g54dbe75bbf1e #22 Workqueue: writeback wb_workfn (flush-8:0) [ECR ]: 0x00090005 => gcc generated __builtin_trap [EFA ]: 0x90162b50 [BLINK ]: clear_page_dirty_for_io+0x13a/0x164 [ERET ]: abort+0x2/0x4 [STAT32]: 0x80080802 : IE K BTA: 0x9021e356 SP: 0xbf045c64 FP: 0x00000000 LPS: 0x9067fa68 LPE: 0x9067fa7c LPC: 0x00000000 r00: 0x00000043 r01: 0xbfb462d4 r02: 0x00000000 r03: 0x90157788 r04: 0x00000000 r05: 0x9080c040 r06: 0x00000031 r07: 0x00000000 r08: 0xa9ee8400 r09: 0x0000001e r10: 0x00000100 r11: 0x00000000 r12: 0x9021e35a r13: 0xbff33a54 r14: 0xbe87e124 r15: 0xbf045e50 r16: 0xbe87e124 r17: 0x0000470c r18: 0x0000000f r19: 0x902afee4 r20: 0xbf045cf4 r21: 0xffffffff r22: 0x9080e474 r23: 0x00000000 r24: 0xbff33a54 r25: 0xbf031680 Stack Trace: abort+0x2/0x4 clear_page_dirty_for_io+0x13a/0x164 write_cache_pages+0x10a/0x328 mpage_writepages+0x3c/0x94 do_writepages+0x42/0x398 __writeback_single_inode+0x2a/0x154 wb_writeback+0x524/0xac8 wb_workfn+0x17c/0x34c process_one_work+0x1a0/0x354 worker_thread+0x108/0x4a0 kthread+0x10c/0x110 ret_from_fork+0x18/0x1c ------------------------>8---------------------------- ------------------------>8---------------------------- # # mkdir -p /mnt/mmc2/ttt && mount /dev/sda1 /mnt/mmc2/ttt && df -h /mnt/mmc2/ttt && bonnie++ -u root -r 256 -s 512 -x 1 -d /mnt/mmc2/ttt && echo 'OK' & # FAT-fs (sda1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck. Filesystem Size Used Available Use% Mounted on /dev/sda1 2.0G 4.0K 2.0G 0% /mnt/mmc2/ttt Using uid:0, gid:0. Writing with putc()...random: crng init done don e Writing intelligently...done Rewriting...done Reading with getc()...done Reading intelligently...done start 'em...done...done...done... 
Create files in sequential order...done. Stat files in sequential order...done. Delete files in sequential order...done. Create files in random order...done. Stat files in random order...done. Delete files in random order...INFO: task kworker/u8:0:7 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #41 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. kworker/u8:0 D 0 7 2 0x00000000 Workqueue: writeback wb_workfn (flush-8:0) Stack Trace: __switch_to+0x0/0xac __schedule+0x1b8/0x734 io_schedule+0x5c/0xc0 __lock_page+0x9a/0xdc generic_writepages+0xc6/0x384 do_writepages+0x42/0x398 __writeback_single_inode+0x2a/0x154 wb_writeback+0x524/0xac8 wb_workfn+0x21a/0x34c process_one_work+0x1a0/0x354 worker_thread+0x108/0x4a0 kthread+0x10c/0x110 ret_from_fork+0x18/0x1c INFO: task kworker/u8:0:7 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #41 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. kworker/u8:0 D 0 7 2 0x00000000 Workqueue: writeback wb_workfn (flush-8:0) ------------------------>8---------------------------- ------------------------>8---------------------------- # # hackbench Running in process mode with 10 groups using 40 file descriptors each (== 400 tasks) Each sender will pass 100 messages of 100 bytes INFO: task hackbench:519 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #2 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. hackbench D 0 519 110 0x00000000 Stack Trace: __switch_to+0x0/0xac __schedule+0x1b2/0x730 io_schedule+0x5c/0xc0 __lock_page+0x98/0xdc handle_mm_fault+0x594/0xe1c do_page_fault+0xfc/0x294 ret_from_exception+0x0/0x8 INFO: task hackbench:519 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #2 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. hackbench D 0 519 110 0x00000000 ------------------------>8---------------------------- -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 11:53 ` Eugeniy Paltsev 0 siblings, 0 replies; 89+ messages in thread From: Eugeniy Paltsev @ 2018-08-30 11:53 UTC (permalink / raw) To: linux-snps-arc Hi Will, On Thu, 2018-08-30@10:51 +0100, Will Deacon wrote: > On Thu, Aug 30, 2018@11:44:11AM +0200, Peter Zijlstra wrote: > > On Wed, Aug 29, 2018@09:16:43PM +0000, Vineet Gupta wrote: > > > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > > > Hi Guys, > > > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > > > even on boot. > > > > > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_torvalds_linux_commit_84c6591103dbeaf393a092a3fc7b09510825f6b9&d=DwIBAg&c=DPL6 > > > > _X_6JkXFx7AXWqB0tg&r=ZlJN1MriPUTkBKCrPSx67GmaplEUGcAEk9yPtCLdUXI&m=6y0FFvkGdIQ6kX2lZ31V99lMfMV- > > > > RyWyYhiUGzh0Bi0&s=GNwmhSynIcWqgZhiOwFEEH_AtbZAH443_L6QH4nw_ls&e= > > > > > > > > Reverting the commit solves this problem. > > > > > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > > > generic bitops implementation and it works fine. > > > > > > > > Do you have any ideas what went wrong? > > > > > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > > > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > > > __clear_bit_unlock()") > > > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > > > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > > > > > This patch undoes that which could explain the issues you see. @Peter, @Will ? > > > > Right, so the thinking is that on platforms that suffer that issue, > > atomic_set*() should DTRT. And if you look at your spinlock based atomic > > implementation, you'll note that atomic_set() does indeed do the right > > thing. > > > > arch/arc/include/asm/atomic.h:108 > > Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a > boils down to concurrent atomic_long_set_release() vs > atomic_long_fetch_or_acquire(), which really needs to work. > > I'll keep digging. In the meantime, Vineet, do you have any useful crash > logs and do you only see the crashes in certain configurations (e.g. SMP but > !CONFIG_ARC_HAS_LLSC)? We don't have such configuration (SMP with !CONFIG_ARC_HAS_LLSC). I can see crashes with LLSC enabled in both SMP running on 4 cores and SMP running on 1 core. 
There are some crash logs (not sure if they are really useful): Crashes are quite spontaneous and mostly happens in IO-related code: Crash on boot: ------------------------>8---------------------------- usb 1-1: new high-speed USB device number 2 using ehci-platform hub 1-1:1.0: USB hub found hub 1-1:1.0: 2 ports detected usb 1-1.1: new high-speed USB device number 3 using ehci-platform usb-storage 1-1.1:1.0: USB Mass Storage device detected scsi host0: usb-storage 1-1.1:1.0 scsi 0:0:0:0: Direct-Access Generic STORAGE DEVICE 0272 PQ: 0 ANSI: 0 sd 0:0:0:0: [sda] 15759360 512-byte logical blocks: (8.07 GB/7.51 GiB) sd 0:0:0:0: [sda] Write Protect is off sd 0:0:0:0: [sda] Mode Sense: 0b 00 00 08 sd 0:0:0:0: [sda] No Caching mode page found sd 0:0:0:0: [sda] Assuming drive cache: write through sda: sda1 sda2 sda3 sda4 sd 0:0:0:0: [sda] Attached SCSI removable disk INFO: task start-stop-daem:85 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #2 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. start-stop-daem D 0 85 81 0x00000000 Stack Trace: __switch_to+0x0/0xac __schedule+0x1b2/0x730 io_schedule+0x5c/0xc0 __lock_page+0x98/0xdc find_lock_entry+0x38/0x100 shmem_getpage_gfp.isra.3+0x82/0xbfc shmem_fault+0x46/0x138 handle_mm_fault+0x5bc/0x924 do_page_fault+0xfc/0x294 ret_from_exception+0x0/0x8 INFO: task start-stop-daem:85 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #2 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. start-stop-daem D 0 85 81 0x00000000 ------------------------>8---------------------------- ------------------------>8---------------------------- BUG: failure at mm/page-writeback.c:2652/clear_page_dirty_for_io()! gcc generated __builtin_trap Path: (null) CPU: 3 PID: 7 Comm: kworker/u8:0 Not tainted 4.18.0-06995-g54dbe75bbf1e #22 Workqueue: writeback wb_workfn (flush-8:0) [ECR ]: 0x00090005 => gcc generated __builtin_trap [EFA ]: 0x90162b50 [BLINK ]: clear_page_dirty_for_io+0x13a/0x164 [ERET ]: abort+0x2/0x4 [STAT32]: 0x80080802 : IE K BTA: 0x9021e356 SP: 0xbf045c64 FP: 0x00000000 LPS: 0x9067fa68 LPE: 0x9067fa7c LPC: 0x00000000 r00: 0x00000043 r01: 0xbfb462d4 r02: 0x00000000 r03: 0x90157788 r04: 0x00000000 r05: 0x9080c040 r06: 0x00000031 r07: 0x00000000 r08: 0xa9ee8400 r09: 0x0000001e r10: 0x00000100 r11: 0x00000000 r12: 0x9021e35a r13: 0xbff33a54 r14: 0xbe87e124 r15: 0xbf045e50 r16: 0xbe87e124 r17: 0x0000470c r18: 0x0000000f r19: 0x902afee4 r20: 0xbf045cf4 r21: 0xffffffff r22: 0x9080e474 r23: 0x00000000 r24: 0xbff33a54 r25: 0xbf031680 Stack Trace: abort+0x2/0x4 clear_page_dirty_for_io+0x13a/0x164 write_cache_pages+0x10a/0x328 mpage_writepages+0x3c/0x94 do_writepages+0x42/0x398 __writeback_single_inode+0x2a/0x154 wb_writeback+0x524/0xac8 wb_workfn+0x17c/0x34c process_one_work+0x1a0/0x354 worker_thread+0x108/0x4a0 kthread+0x10c/0x110 ret_from_fork+0x18/0x1c ------------------------>8---------------------------- ------------------------>8---------------------------- # # mkdir -p /mnt/mmc2/ttt && mount /dev/sda1 /mnt/mmc2/ttt && df -h /mnt/mmc2/ttt && bonnie++ -u root -r 256 -s 512 -x 1 -d /mnt/mmc2/ttt && echo 'OK' & # FAT-fs (sda1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck. Filesystem Size Used Available Use% Mounted on /dev/sda1 2.0G 4.0K 2.0G 0% /mnt/mmc2/ttt Using uid:0, gid:0. Writing with putc()...random: crng init done don e Writing intelligently...done Rewriting...done Reading with getc()...done Reading intelligently...done start 'em...done...done...done... 
Create files in sequential order...done. Stat files in sequential order...done. Delete files in sequential order...done. Create files in random order...done. Stat files in random order...done. Delete files in random order...INFO: task kworker/u8:0:7 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #41 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. kworker/u8:0 D 0 7 2 0x00000000 Workqueue: writeback wb_workfn (flush-8:0) Stack Trace: __switch_to+0x0/0xac __schedule+0x1b8/0x734 io_schedule+0x5c/0xc0 __lock_page+0x9a/0xdc generic_writepages+0xc6/0x384 do_writepages+0x42/0x398 __writeback_single_inode+0x2a/0x154 wb_writeback+0x524/0xac8 wb_workfn+0x21a/0x34c process_one_work+0x1a0/0x354 worker_thread+0x108/0x4a0 kthread+0x10c/0x110 ret_from_fork+0x18/0x1c INFO: task kworker/u8:0:7 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #41 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. kworker/u8:0 D 0 7 2 0x00000000 Workqueue: writeback wb_workfn (flush-8:0) ------------------------>8---------------------------- ------------------------>8---------------------------- # # hackbench Running in process mode with 10 groups using 40 file descriptors each (== 400 tasks) Each sender will pass 100 messages of 100 bytes INFO: task hackbench:519 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #2 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. hackbench D 0 519 110 0x00000000 Stack Trace: __switch_to+0x0/0xac __schedule+0x1b2/0x730 io_schedule+0x5c/0xc0 __lock_page+0x98/0xdc handle_mm_fault+0x594/0xe1c do_page_fault+0xfc/0x294 ret_from_exception+0x0/0x8 INFO: task hackbench:519 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #2 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. hackbench D 0 519 110 0x00000000 ------------------------>8---------------------------- -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 11:53 ` Eugeniy Paltsev 0 siblings, 0 replies; 89+ messages in thread From: Eugeniy Paltsev @ 2018-08-30 11:53 UTC (permalink / raw) To: peterz, will.deacon Cc: mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, Eugeniy.Paltsev, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch Hi Will, On Thu, 2018-08-30 at 10:51 +0100, Will Deacon wrote: > On Thu, Aug 30, 2018 at 11:44:11AM +0200, Peter Zijlstra wrote: > > On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > > > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > > > Hi Guys, > > > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > > > even on boot. > > > > > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_torvalds_linux_commit_84c6591103dbeaf393a092a3fc7b09510825f6b9&d=DwIBAg&c=DPL6 > > > > _X_6JkXFx7AXWqB0tg&r=ZlJN1MriPUTkBKCrPSx67GmaplEUGcAEk9yPtCLdUXI&m=6y0FFvkGdIQ6kX2lZ31V99lMfMV- > > > > RyWyYhiUGzh0Bi0&s=GNwmhSynIcWqgZhiOwFEEH_AtbZAH443_L6QH4nw_ls&e= > > > > > > > > Reverting the commit solves this problem. > > > > > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > > > generic bitops implementation and it works fine. > > > > > > > > Do you have any ideas what went wrong? > > > > > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > > > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > > > __clear_bit_unlock()") > > > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > > > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > > > > > This patch undoes that which could explain the issues you see. @Peter, @Will ? > > > > Right, so the thinking is that on platforms that suffer that issue, > > atomic_set*() should DTRT. And if you look at your spinlock based atomic > > implementation, you'll note that atomic_set() does indeed do the right > > thing. > > > > arch/arc/include/asm/atomic.h:108 > > Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a > boils down to concurrent atomic_long_set_release() vs > atomic_long_fetch_or_acquire(), which really needs to work. > > I'll keep digging. In the meantime, Vineet, do you have any useful crash > logs and do you only see the crashes in certain configurations (e.g. SMP but > !CONFIG_ARC_HAS_LLSC)? We don't have such configuration (SMP with !CONFIG_ARC_HAS_LLSC). I can see crashes with LLSC enabled in both SMP running on 4 cores and SMP running on 1 core. 
There are some crash logs (not sure if they are really useful): Crashes are quite spontaneous and mostly happen in IO-related code: Crash on boot: ------------------------>8---------------------------- usb 1-1: new high-speed USB device number 2 using ehci-platform hub 1-1:1.0: USB hub found hub 1-1:1.0: 2 ports detected usb 1-1.1: new high-speed USB device number 3 using ehci-platform usb-storage 1-1.1:1.0: USB Mass Storage device detected scsi host0: usb-storage 1-1.1:1.0 scsi 0:0:0:0: Direct-Access Generic STORAGE DEVICE 0272 PQ: 0 ANSI: 0 sd 0:0:0:0: [sda] 15759360 512-byte logical blocks: (8.07 GB/7.51 GiB) sd 0:0:0:0: [sda] Write Protect is off sd 0:0:0:0: [sda] Mode Sense: 0b 00 00 08 sd 0:0:0:0: [sda] No Caching mode page found sd 0:0:0:0: [sda] Assuming drive cache: write through sda: sda1 sda2 sda3 sda4 sd 0:0:0:0: [sda] Attached SCSI removable disk INFO: task start-stop-daem:85 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #2 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. start-stop-daem D 0 85 81 0x00000000 Stack Trace: __switch_to+0x0/0xac __schedule+0x1b2/0x730 io_schedule+0x5c/0xc0 __lock_page+0x98/0xdc find_lock_entry+0x38/0x100 shmem_getpage_gfp.isra.3+0x82/0xbfc shmem_fault+0x46/0x138 handle_mm_fault+0x5bc/0x924 do_page_fault+0xfc/0x294 ret_from_exception+0x0/0x8 INFO: task start-stop-daem:85 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #2 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. start-stop-daem D 0 85 81 0x00000000 ------------------------>8---------------------------- ------------------------>8---------------------------- BUG: failure at mm/page-writeback.c:2652/clear_page_dirty_for_io()! gcc generated __builtin_trap Path: (null) CPU: 3 PID: 7 Comm: kworker/u8:0 Not tainted 4.18.0-06995-g54dbe75bbf1e #22 Workqueue: writeback wb_workfn (flush-8:0) [ECR ]: 0x00090005 => gcc generated __builtin_trap [EFA ]: 0x90162b50 [BLINK ]: clear_page_dirty_for_io+0x13a/0x164 [ERET ]: abort+0x2/0x4 [STAT32]: 0x80080802 : IE K BTA: 0x9021e356 SP: 0xbf045c64 FP: 0x00000000 LPS: 0x9067fa68 LPE: 0x9067fa7c LPC: 0x00000000 r00: 0x00000043 r01: 0xbfb462d4 r02: 0x00000000 r03: 0x90157788 r04: 0x00000000 r05: 0x9080c040 r06: 0x00000031 r07: 0x00000000 r08: 0xa9ee8400 r09: 0x0000001e r10: 0x00000100 r11: 0x00000000 r12: 0x9021e35a r13: 0xbff33a54 r14: 0xbe87e124 r15: 0xbf045e50 r16: 0xbe87e124 r17: 0x0000470c r18: 0x0000000f r19: 0x902afee4 r20: 0xbf045cf4 r21: 0xffffffff r22: 0x9080e474 r23: 0x00000000 r24: 0xbff33a54 r25: 0xbf031680 Stack Trace: abort+0x2/0x4 clear_page_dirty_for_io+0x13a/0x164 write_cache_pages+0x10a/0x328 mpage_writepages+0x3c/0x94 do_writepages+0x42/0x398 __writeback_single_inode+0x2a/0x154 wb_writeback+0x524/0xac8 wb_workfn+0x17c/0x34c process_one_work+0x1a0/0x354 worker_thread+0x108/0x4a0 kthread+0x10c/0x110 ret_from_fork+0x18/0x1c ------------------------>8---------------------------- ------------------------>8---------------------------- # # mkdir -p /mnt/mmc2/ttt && mount /dev/sda1 /mnt/mmc2/ttt && df -h /mnt/mmc2/ttt && bonnie++ -u root -r 256 -s 512 -x 1 -d /mnt/mmc2/ttt && echo 'OK' & # FAT-fs (sda1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck. Filesystem Size Used Available Use% Mounted on /dev/sda1 2.0G 4.0K 2.0G 0% /mnt/mmc2/ttt Using uid:0, gid:0. Writing with putc()...random: crng init done don e Writing intelligently...done Rewriting...done Reading with getc()...done Reading intelligently...done start 'em...done...done...done... 
Create files in sequential order...done. Stat files in sequential order...done. Delete files in sequential order...done. Create files in random order...done. Stat files in random order...done. Delete files in random order...INFO: task kworker/u8:0:7 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #41 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. kworker/u8:0 D 0 7 2 0x00000000 Workqueue: writeback wb_workfn (flush-8:0) Stack Trace: __switch_to+0x0/0xac __schedule+0x1b8/0x734 io_schedule+0x5c/0xc0 __lock_page+0x9a/0xdc generic_writepages+0xc6/0x384 do_writepages+0x42/0x398 __writeback_single_inode+0x2a/0x154 wb_writeback+0x524/0xac8 wb_workfn+0x21a/0x34c process_one_work+0x1a0/0x354 worker_thread+0x108/0x4a0 kthread+0x10c/0x110 ret_from_fork+0x18/0x1c INFO: task kworker/u8:0:7 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #41 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. kworker/u8:0 D 0 7 2 0x00000000 Workqueue: writeback wb_workfn (flush-8:0) ------------------------>8---------------------------- ------------------------>8---------------------------- # # hackbench Running in process mode with 10 groups using 40 file descriptors each (== 400 tasks) Each sender will pass 100 messages of 100 bytes INFO: task hackbench:519 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #2 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. hackbench D 0 519 110 0x00000000 Stack Trace: __switch_to+0x0/0xac __schedule+0x1b2/0x730 io_schedule+0x5c/0xc0 __lock_page+0x98/0xdc handle_mm_fault+0x594/0xe1c do_page_fault+0xfc/0x294 ret_from_exception+0x0/0x8 INFO: task hackbench:519 blocked for more than 10 seconds. Not tainted 4.19.0-rc1 #2 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. hackbench D 0 519 110 0x00000000 ------------------------>8---------------------------- -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 89+ messages in thread
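[A note for readers not steeped in bitops internals: the suspicion in the thread is that __clear_bit_unlock() has become a plain load/modify/store-release instead of a single atomic read-modify-write. The difference can be sketched in plain C11. This is an illustration only: the function names mirror the kernel API, but this is not the kernel source, and the helper macros are defined here just for the sketch.]

#include <stdatomic.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define BIT_MASK(nr)	(1UL << ((nr) % BITS_PER_LONG))
#define BIT_WORD(nr)	((nr) / BITS_PER_LONG)

/* One atomic RMW: a concurrent atomic update to another bit of the
 * same word can never be lost. */
static void clear_bit_unlock_sketch(unsigned int nr, _Atomic unsigned long *p)
{
	atomic_fetch_and_explicit(p + BIT_WORD(nr), ~BIT_MASK(nr),
				  memory_order_release);
}

/* Load + store-release: another CPU's atomic fetch_or on the same
 * word can land between the load and the store, and its bit is then
 * wiped out by the stale store below. This is the lost-update window
 * behind the bit_spin_lock()/__bit_spin_unlock() race referenced in
 * f75d48644c56a. */
static void __clear_bit_unlock_sketch(unsigned int nr, _Atomic unsigned long *p)
{
	unsigned long old;

	p += BIT_WORD(nr);
	old = atomic_load_explicit(p, memory_order_relaxed);  /* READ_ONCE() */
	old &= ~BIT_MASK(nr);
	atomic_store_explicit(p, old, memory_order_release);  /* set_release */
}

[The second variant is only safe when the unlocker is guaranteed to be the sole writer to the word during the window, which is exactly the assumption the thread is probing.]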
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 11:53 ` Eugeniy Paltsev (?) (?) @ 2018-08-30 13:57 ` Will Deacon -1 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 13:57 UTC (permalink / raw) To: Eugeniy Paltsev Cc: peterz, mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch Hi Eugeniy, On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > On Thu, 2018-08-30 at 10:51 +0100, Will Deacon wrote: > > On Thu, Aug 30, 2018 at 11:44:11AM +0200, Peter Zijlstra wrote: > > > On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > > > > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > > > > Hi Guys, > > > > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > > > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > > > > even on boot. > > > > > > > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > > > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_torvalds_linux_commit_84c6591103dbeaf393a092a3fc7b09510825f6b9&d=DwIBAg&c=DPL6 > > > > > _X_6JkXFx7AXWqB0tg&r=ZlJN1MriPUTkBKCrPSx67GmaplEUGcAEk9yPtCLdUXI&m=6y0FFvkGdIQ6kX2lZ31V99lMfMV- > > > > > RyWyYhiUGzh0Bi0&s=GNwmhSynIcWqgZhiOwFEEH_AtbZAH443_L6QH4nw_ls&e= > > > > > > > > > > Reverting the commit solves this problem. > > > > > > > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > > > > generic bitops implementation and it works fine. > > > > > > > > > > Do you have any ideas what went wrong? > > > > > > > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > > > > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > > > > __clear_bit_unlock()") > > > > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > > > > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > > > > > > > This patch undoes that which could explain the issues you see. @Peter, @Will ? > > > > > > Right, so the thinking is that on platforms that suffer that issue, > > > atomic_set*() should DTRT. And if you look at your spinlock based atomic > > > implementation, you'll note that atomic_set() does indeed do the right > > > thing. > > > > > > arch/arc/include/asm/atomic.h:108 > > > > Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a > > boils down to concurrent atomic_long_set_release() vs > > atomic_long_fetch_or_acquire(), which really needs to work. > > > > I'll keep digging. In the meantime, Vineet, do you have any useful crash > > logs and do you only see the crashes in certain configurations (e.g. SMP but > > !CONFIG_ARC_HAS_LLSC)? > > We don't have such configuration (SMP with !CONFIG_ARC_HAS_LLSC). Ok, thanks for confirming. If you could put your .config somewhere that would be helpful, since the build fails for me with defconfig. > I can see crashes with LLSC enabled in both SMP running on 4 cores > and SMP running on 1 core. > > > There are some crash logs (not sure if they are really useful): > Crashes are quite spontaneous and mostly happens in IO-related code: Aha: arc seems to have separate spinlocks for protecting bitops and atomics. 
This is a seriously bad idea, because you only implement some of the bitops API in the architecture and rely on asm-generic to give you bitops/lock.h, assuming that it's going to be implemented in terms of the other parts of the bitops API (it isn't anymore). Here's a quick hack (totally untested, since I can't even build an arc kernel) which moves arc over to asm-generic for all of the bitops API and ditches the bitops lock. You could probably also drop the ffs() stuff, but I'll leave that up to you. The "downside" is serialisation between bitops and atomics, but I actually think you probably want that for xchg/cmpxchg anyway. Will --->8 diff --git a/arch/arc/include/asm/bitops.h b/arch/arc/include/asm/bitops.h index 8da87feec59a..50b0d5a56e32 100644 --- a/arch/arc/include/asm/bitops.h +++ b/arch/arc/include/asm/bitops.h @@ -17,242 +17,6 @@ #include <linux/types.h> #include <linux/compiler.h> -#include <asm/barrier.h> -#ifndef CONFIG_ARC_HAS_LLSC -#include <asm/smp.h> -#endif - -#ifdef CONFIG_ARC_HAS_LLSC - -/* - * Hardware assisted Atomic-R-M-W - */ - -#define BIT_OP(op, c_op, asm_op) \ -static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned int temp; \ - \ - m += nr >> 5; \ - \ - nr &= 0x1f; \ - \ - __asm__ __volatile__( \ - "1: llock %0, [%1] \n" \ - " " #asm_op " %0, %0, %2 \n" \ - " scond %0, [%1] \n" \ - " bnz 1b \n" \ - : "=&r"(temp) /* Early clobber, to prevent reg reuse */ \ - : "r"(m), /* Not "m": llock only supports reg direct addr mode */ \ - "ir"(nr) \ - : "cc"); \ -} - -/* - * Semantically: - * Test the bit - * if clear - * set it and return 0 (old value) - * else - * return 1 (old value). - * - * Since ARC lacks a equivalent h/w primitive, the bit is set unconditionally - * and the old value of bit is returned - */ -#define TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old, temp; \ - \ - m += nr >> 5; \ - \ - nr &= 0x1f; \ - \ - /* \ - * Explicit full memory barrier needed before/after as \ - * LLOCK/SCOND themselves don't provide any such smenatic \ - */ \ - smp_mb(); \ - \ - __asm__ __volatile__( \ - "1: llock %0, [%2] \n" \ - " " #asm_op " %1, %0, %3 \n" \ - " scond %1, [%2] \n" \ - " bnz 1b \n" \ - : "=&r"(old), "=&r"(temp) \ - : "r"(m), "ir"(nr) \ - : "cc"); \ - \ - smp_mb(); \ - \ - return (old & (1 << nr)) != 0; \ -} - -#elif !defined(CONFIG_ARC_PLAT_EZNPS) - -/* - * Non hardware assisted Atomic-R-M-W - * Locking would change to irq-disabling only (UP) and spinlocks (SMP) - * - * There's "significant" micro-optimization in writing our own variants of - * bitops (over generic variants) - * - * (1) The generic APIs have "signed" @nr while we have it "unsigned" - * This avoids extra code to be generated for pointer arithmatic, since - * is "not sure" that index is NOT -ve - * (2) Utilize the fact that ARCompact bit fidding insn (BSET/BCLR/ASL) etc - * only consider bottom 5 bits of @nr, so NO need to mask them off. 
- * (GCC Quirk: however for constant @nr we still need to do the masking - * at compile time) - */ - -#define BIT_OP(op, c_op, asm_op) \ -static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long temp, flags; \ - m += nr >> 5; \ - \ - /* \ - * spin lock/unlock provide the needed smp_mb() before/after \ - */ \ - bitops_lock(flags); \ - \ - temp = *m; \ - *m = temp c_op (1UL << (nr & 0x1f)); \ - \ - bitops_unlock(flags); \ -} - -#define TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old, flags; \ - m += nr >> 5; \ - \ - bitops_lock(flags); \ - \ - old = *m; \ - *m = old c_op (1UL << (nr & 0x1f)); \ - \ - bitops_unlock(flags); \ - \ - return (old & (1UL << (nr & 0x1f))) != 0; \ -} - -#else /* CONFIG_ARC_PLAT_EZNPS */ - -#define BIT_OP(op, c_op, asm_op) \ -static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - m += nr >> 5; \ - \ - nr = (1UL << (nr & 0x1f)); \ - if (asm_op == CTOP_INST_AAND_DI_R2_R2_R3) \ - nr = ~nr; \ - \ - __asm__ __volatile__( \ - " mov r2, %0\n" \ - " mov r3, %1\n" \ - " .word %2\n" \ - : \ - : "r"(nr), "r"(m), "i"(asm_op) \ - : "r2", "r3", "memory"); \ -} - -#define TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old; \ - \ - m += nr >> 5; \ - \ - nr = old = (1UL << (nr & 0x1f)); \ - if (asm_op == CTOP_INST_AAND_DI_R2_R2_R3) \ - old = ~old; \ - \ - /* Explicit full memory barrier needed before/after */ \ - smp_mb(); \ - \ - __asm__ __volatile__( \ - " mov r2, %0\n" \ - " mov r3, %1\n" \ - " .word %2\n" \ - " mov %0, r2" \ - : "+r"(old) \ - : "r"(m), "i"(asm_op) \ - : "r2", "r3", "memory"); \ - \ - smp_mb(); \ - \ - return (old & nr) != 0; \ -} - -#endif /* CONFIG_ARC_PLAT_EZNPS */ - -/*************************************** - * Non atomic variants - **************************************/ - -#define __BIT_OP(op, c_op, asm_op) \ -static inline void __##op##_bit(unsigned long nr, volatile unsigned long *m) \ -{ \ - unsigned long temp; \ - m += nr >> 5; \ - \ - temp = *m; \ - *m = temp c_op (1UL << (nr & 0x1f)); \ -} - -#define __TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int __test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old; \ - m += nr >> 5; \ - \ - old = *m; \ - *m = old c_op (1UL << (nr & 0x1f)); \ - \ - return (old & (1UL << (nr & 0x1f))) != 0; \ -} - -#define BIT_OPS(op, c_op, asm_op) \ - \ - /* set_bit(), clear_bit(), change_bit() */ \ - BIT_OP(op, c_op, asm_op) \ - \ - /* test_and_set_bit(), test_and_clear_bit(), test_and_change_bit() */\ - TEST_N_BIT_OP(op, c_op, asm_op) \ - \ - /* __set_bit(), __clear_bit(), __change_bit() */ \ - __BIT_OP(op, c_op, asm_op) \ - \ - /* __test_and_set_bit(), __test_and_clear_bit(), __test_and_change_bit() */\ - __TEST_N_BIT_OP(op, c_op, asm_op) - -#ifndef CONFIG_ARC_PLAT_EZNPS -BIT_OPS(set, |, bset) -BIT_OPS(clear, & ~, bclr) -BIT_OPS(change, ^, bxor) -#else -BIT_OPS(set, |, CTOP_INST_AOR_DI_R2_R2_R3) -BIT_OPS(clear, & ~, CTOP_INST_AAND_DI_R2_R2_R3) -BIT_OPS(change, ^, CTOP_INST_AXOR_DI_R2_R2_R3) -#endif - -/* - * This routine doesn't need to be atomic. 
- */ -static inline int -test_bit(unsigned int nr, const volatile unsigned long *addr) -{ - unsigned long mask; - - addr += nr >> 5; - - mask = 1UL << (nr & 0x1f); - - return ((mask & *addr) != 0); -} #ifdef CONFIG_ISA_ARCOMPACT @@ -428,6 +192,9 @@ static inline __attribute__ ((const)) int __ffs(unsigned long x) #include <asm-generic/bitops/sched.h> #include <asm-generic/bitops/lock.h> +#include <asm-generic/bitops/atomic.h> +#include <asm-generic/bitops/non-atomic.h> + #include <asm-generic/bitops/find.h> #include <asm-generic/bitops/le.h> #include <asm-generic/bitops/ext2-atomic-setbit.h> diff --git a/arch/arc/include/asm/smp.h b/arch/arc/include/asm/smp.h index 0861007d9ef3..8df446c2b299 100644 --- a/arch/arc/include/asm/smp.h +++ b/arch/arc/include/asm/smp.h @@ -108,7 +108,6 @@ static inline const char *arc_platform_smp_cpuinfo(void) #include <asm/spinlock.h> extern arch_spinlock_t smp_atomic_ops_lock; -extern arch_spinlock_t smp_bitops_lock; #define atomic_ops_lock(flags) do { \ local_irq_save(flags); \ @@ -120,24 +119,11 @@ extern arch_spinlock_t smp_bitops_lock; local_irq_restore(flags); \ } while (0) -#define bitops_lock(flags) do { \ - local_irq_save(flags); \ - arch_spin_lock(&smp_bitops_lock); \ -} while (0) - -#define bitops_unlock(flags) do { \ - arch_spin_unlock(&smp_bitops_lock); \ - local_irq_restore(flags); \ -} while (0) - #else /* !CONFIG_SMP */ #define atomic_ops_lock(flags) local_irq_save(flags) #define atomic_ops_unlock(flags) local_irq_restore(flags) -#define bitops_lock(flags) local_irq_save(flags) -#define bitops_unlock(flags) local_irq_restore(flags) - #endif /* !CONFIG_SMP */ #endif /* !CONFIG_ARC_HAS_LLSC */ diff --git a/arch/arc/kernel/smp.c b/arch/arc/kernel/smp.c index 21d86c36692b..80c80c540743 100644 --- a/arch/arc/kernel/smp.c +++ b/arch/arc/kernel/smp.c @@ -32,10 +32,7 @@ #ifndef CONFIG_ARC_HAS_LLSC arch_spinlock_t smp_atomic_ops_lock = __ARCH_SPIN_LOCK_UNLOCKED; -arch_spinlock_t smp_bitops_lock = __ARCH_SPIN_LOCK_UNLOCKED; - EXPORT_SYMBOL_GPL(smp_atomic_ops_lock); -EXPORT_SYMBOL_GPL(smp_bitops_lock); #endif struct plat_smp_ops __weak plat_smp_ops; ^ permalink raw reply related [flat|nested] 89+ messages in thread
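[For context on what the hack above changes: once arc pulls in asm-generic/bitops/atomic.h, every atomic bitop is implemented on top of the architecture's atomic_long_*() operations, so on !CONFIG_ARC_HAS_LLSC configurations they all serialise on the single atomic_ops_lock. A rough sketch of what that generic header provides, simplified from the 4.19-era code and assuming the usual kernel headers; not a verbatim copy:]

/* set_bit(): delegate the whole RMW to the arch's atomic_long_or(). */
static inline void set_bit(unsigned int nr, volatile unsigned long *p)
{
	p += BIT_WORD(nr);
	atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p);
}

/* test_and_set_bit(): one atomic fetch-or, old value tells us whether
 * the bit was already set. */
static inline int test_and_set_bit(unsigned int nr, volatile unsigned long *p)
{
	long old;
	unsigned long mask = BIT_MASK(nr);

	p += BIT_WORD(nr);
	old = atomic_long_fetch_or(mask, (atomic_long_t *)p);
	return !!(old & mask);
}

[With every RMW on the word funnelled through the same atomic machinery, the corruption that two independent spinlocks (one for bitops, one for atomics) could produce on the same word is no longer possible.]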
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 13:57 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 13:57 UTC (permalink / raw) To: linux-arm-kernel Hi Eugeniy, On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > On Thu, 2018-08-30 at 10:51 +0100, Will Deacon wrote: > > On Thu, Aug 30, 2018 at 11:44:11AM +0200, Peter Zijlstra wrote: > > > On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > > > > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > > > > Hi Guys, > > > > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > > > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > > > > even on boot. > > > > > > > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > > > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_torvalds_linux_commit_84c6591103dbeaf393a092a3fc7b09510825f6b9&d=DwIBAg&c=DPL6 > > > > > _X_6JkXFx7AXWqB0tg&r=ZlJN1MriPUTkBKCrPSx67GmaplEUGcAEk9yPtCLdUXI&m=6y0FFvkGdIQ6kX2lZ31V99lMfMV- > > > > > RyWyYhiUGzh0Bi0&s=GNwmhSynIcWqgZhiOwFEEH_AtbZAH443_L6QH4nw_ls&e= > > > > > > > > > > Reverting the commit solves this problem. > > > > > > > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > > > > generic bitops implementation and it works fine. > > > > > > > > > > Do you have any ideas what went wrong? > > > > > > > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > > > > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > > > > __clear_bit_unlock()") > > > > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > > > > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > > > > > > > This patch undoes that which could explain the issues you see. @Peter, @Will ? > > > > > > Right, so the thinking is that on platforms that suffer that issue, > > > atomic_set*() should DTRT. And if you look at your spinlock based atomic > > > implementation, you'll note that atomic_set() does indeed do the right > > > thing. > > > > > > arch/arc/include/asm/atomic.h:108 > > > > Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a > > boils down to concurrent atomic_long_set_release() vs > > atomic_long_fetch_or_acquire(), which really needs to work. > > > > I'll keep digging. In the meantime, Vineet, do you have any useful crash > > logs and do you only see the crashes in certain configurations (e.g. SMP but > > !CONFIG_ARC_HAS_LLSC)? > > We don't have such configuration (SMP with !CONFIG_ARC_HAS_LLSC). Ok, thanks for confirming. If you could put your .config somewhere that would be helpful, since the build fails for me with defconfig. > I can see crashes with LLSC enabled in both SMP running on 4 cores > and SMP running on 1 core. > > > There are some crash logs (not sure if they are really useful): > Crashes are quite spontaneous and mostly happens in IO-related code: Aha: arc seems to have separate spinlocks for protecting bitops and atomics. 
This is a seriously bad idea, because you only implement some of the bitops API in the architecture and rely on asm-generic to give you bitops/lock.h, assuming that it's going to be implemented in terms of the other parts of the bitops API (it isn't anymore). Here's a quick hack (totally untested, since I can't even build an arc kernel) which moves arc over to asm-generic for all of the bitops API and ditches the bitops lock. You could probably also drop the ffs() stuff, but I'll leave that up to you. The "downside" is serialisation between bitops and atomics, but I actually think you probably want that for xchg/cmpxchg anyway. Will --->8 diff --git a/arch/arc/include/asm/bitops.h b/arch/arc/include/asm/bitops.h index 8da87feec59a..50b0d5a56e32 100644 --- a/arch/arc/include/asm/bitops.h +++ b/arch/arc/include/asm/bitops.h @@ -17,242 +17,6 @@ #include <linux/types.h> #include <linux/compiler.h> -#include <asm/barrier.h> -#ifndef CONFIG_ARC_HAS_LLSC -#include <asm/smp.h> -#endif - -#ifdef CONFIG_ARC_HAS_LLSC - -/* - * Hardware assisted Atomic-R-M-W - */ - -#define BIT_OP(op, c_op, asm_op) \ -static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned int temp; \ - \ - m += nr >> 5; \ - \ - nr &= 0x1f; \ - \ - __asm__ __volatile__( \ - "1: llock %0, [%1] \n" \ - " " #asm_op " %0, %0, %2 \n" \ - " scond %0, [%1] \n" \ - " bnz 1b \n" \ - : "=&r"(temp) /* Early clobber, to prevent reg reuse */ \ - : "r"(m), /* Not "m": llock only supports reg direct addr mode */ \ - "ir"(nr) \ - : "cc"); \ -} - -/* - * Semantically: - * Test the bit - * if clear - * set it and return 0 (old value) - * else - * return 1 (old value). - * - * Since ARC lacks a equivalent h/w primitive, the bit is set unconditionally - * and the old value of bit is returned - */ -#define TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old, temp; \ - \ - m += nr >> 5; \ - \ - nr &= 0x1f; \ - \ - /* \ - * Explicit full memory barrier needed before/after as \ - * LLOCK/SCOND themselves don't provide any such smenatic \ - */ \ - smp_mb(); \ - \ - __asm__ __volatile__( \ - "1: llock %0, [%2] \n" \ - " " #asm_op " %1, %0, %3 \n" \ - " scond %1, [%2] \n" \ - " bnz 1b \n" \ - : "=&r"(old), "=&r"(temp) \ - : "r"(m), "ir"(nr) \ - : "cc"); \ - \ - smp_mb(); \ - \ - return (old & (1 << nr)) != 0; \ -} - -#elif !defined(CONFIG_ARC_PLAT_EZNPS) - -/* - * Non hardware assisted Atomic-R-M-W - * Locking would change to irq-disabling only (UP) and spinlocks (SMP) - * - * There's "significant" micro-optimization in writing our own variants of - * bitops (over generic variants) - * - * (1) The generic APIs have "signed" @nr while we have it "unsigned" - * This avoids extra code to be generated for pointer arithmatic, since - * is "not sure" that index is NOT -ve - * (2) Utilize the fact that ARCompact bit fidding insn (BSET/BCLR/ASL) etc - * only consider bottom 5 bits of @nr, so NO need to mask them off. 
- * (GCC Quirk: however for constant @nr we still need to do the masking - * @compile time) - */ - -#define BIT_OP(op, c_op, asm_op) \ -static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long temp, flags; \ - m += nr >> 5; \ - \ - /* \ - * spin lock/unlock provide the needed smp_mb() before/after \ - */ \ - bitops_lock(flags); \ - \ - temp = *m; \ - *m = temp c_op (1UL << (nr & 0x1f)); \ - \ - bitops_unlock(flags); \ -} - -#define TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old, flags; \ - m += nr >> 5; \ - \ - bitops_lock(flags); \ - \ - old = *m; \ - *m = old c_op (1UL << (nr & 0x1f)); \ - \ - bitops_unlock(flags); \ - \ - return (old & (1UL << (nr & 0x1f))) != 0; \ -} - -#else /* CONFIG_ARC_PLAT_EZNPS */ - -#define BIT_OP(op, c_op, asm_op) \ -static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - m += nr >> 5; \ - \ - nr = (1UL << (nr & 0x1f)); \ - if (asm_op == CTOP_INST_AAND_DI_R2_R2_R3) \ - nr = ~nr; \ - \ - __asm__ __volatile__( \ - " mov r2, %0\n" \ - " mov r3, %1\n" \ - " .word %2\n" \ - : \ - : "r"(nr), "r"(m), "i"(asm_op) \ - : "r2", "r3", "memory"); \ -} - -#define TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old; \ - \ - m += nr >> 5; \ - \ - nr = old = (1UL << (nr & 0x1f)); \ - if (asm_op == CTOP_INST_AAND_DI_R2_R2_R3) \ - old = ~old; \ - \ - /* Explicit full memory barrier needed before/after */ \ - smp_mb(); \ - \ - __asm__ __volatile__( \ - " mov r2, %0\n" \ - " mov r3, %1\n" \ - " .word %2\n" \ - " mov %0, r2" \ - : "+r"(old) \ - : "r"(m), "i"(asm_op) \ - : "r2", "r3", "memory"); \ - \ - smp_mb(); \ - \ - return (old & nr) != 0; \ -} - -#endif /* CONFIG_ARC_PLAT_EZNPS */ - -/*************************************** - * Non atomic variants - **************************************/ - -#define __BIT_OP(op, c_op, asm_op) \ -static inline void __##op##_bit(unsigned long nr, volatile unsigned long *m) \ -{ \ - unsigned long temp; \ - m += nr >> 5; \ - \ - temp = *m; \ - *m = temp c_op (1UL << (nr & 0x1f)); \ -} - -#define __TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int __test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old; \ - m += nr >> 5; \ - \ - old = *m; \ - *m = old c_op (1UL << (nr & 0x1f)); \ - \ - return (old & (1UL << (nr & 0x1f))) != 0; \ -} - -#define BIT_OPS(op, c_op, asm_op) \ - \ - /* set_bit(), clear_bit(), change_bit() */ \ - BIT_OP(op, c_op, asm_op) \ - \ - /* test_and_set_bit(), test_and_clear_bit(), test_and_change_bit() */\ - TEST_N_BIT_OP(op, c_op, asm_op) \ - \ - /* __set_bit(), __clear_bit(), __change_bit() */ \ - __BIT_OP(op, c_op, asm_op) \ - \ - /* __test_and_set_bit(), __test_and_clear_bit(), __test_and_change_bit() */\ - __TEST_N_BIT_OP(op, c_op, asm_op) - -#ifndef CONFIG_ARC_PLAT_EZNPS -BIT_OPS(set, |, bset) -BIT_OPS(clear, & ~, bclr) -BIT_OPS(change, ^, bxor) -#else -BIT_OPS(set, |, CTOP_INST_AOR_DI_R2_R2_R3) -BIT_OPS(clear, & ~, CTOP_INST_AAND_DI_R2_R2_R3) -BIT_OPS(change, ^, CTOP_INST_AXOR_DI_R2_R2_R3) -#endif - -/* - * This routine doesn't need to be atomic. 
- */ -static inline int -test_bit(unsigned int nr, const volatile unsigned long *addr) -{ - unsigned long mask; - - addr += nr >> 5; - - mask = 1UL << (nr & 0x1f); - - return ((mask & *addr) != 0); -} #ifdef CONFIG_ISA_ARCOMPACT @@ -428,6 +192,9 @@ static inline __attribute__ ((const)) int __ffs(unsigned long x) #include <asm-generic/bitops/sched.h> #include <asm-generic/bitops/lock.h> +#include <asm-generic/bitops/atomic.h> +#include <asm-generic/bitops/non-atomic.h> + #include <asm-generic/bitops/find.h> #include <asm-generic/bitops/le.h> #include <asm-generic/bitops/ext2-atomic-setbit.h> diff --git a/arch/arc/include/asm/smp.h b/arch/arc/include/asm/smp.h index 0861007d9ef3..8df446c2b299 100644 --- a/arch/arc/include/asm/smp.h +++ b/arch/arc/include/asm/smp.h @@ -108,7 +108,6 @@ static inline const char *arc_platform_smp_cpuinfo(void) #include <asm/spinlock.h> extern arch_spinlock_t smp_atomic_ops_lock; -extern arch_spinlock_t smp_bitops_lock; #define atomic_ops_lock(flags) do { \ local_irq_save(flags); \ @@ -120,24 +119,11 @@ extern arch_spinlock_t smp_bitops_lock; local_irq_restore(flags); \ } while (0) -#define bitops_lock(flags) do { \ - local_irq_save(flags); \ - arch_spin_lock(&smp_bitops_lock); \ -} while (0) - -#define bitops_unlock(flags) do { \ - arch_spin_unlock(&smp_bitops_lock); \ - local_irq_restore(flags); \ -} while (0) - #else /* !CONFIG_SMP */ #define atomic_ops_lock(flags) local_irq_save(flags) #define atomic_ops_unlock(flags) local_irq_restore(flags) -#define bitops_lock(flags) local_irq_save(flags) -#define bitops_unlock(flags) local_irq_restore(flags) - #endif /* !CONFIG_SMP */ #endif /* !CONFIG_ARC_HAS_LLSC */ diff --git a/arch/arc/kernel/smp.c b/arch/arc/kernel/smp.c index 21d86c36692b..80c80c540743 100644 --- a/arch/arc/kernel/smp.c +++ b/arch/arc/kernel/smp.c @@ -32,10 +32,7 @@ #ifndef CONFIG_ARC_HAS_LLSC arch_spinlock_t smp_atomic_ops_lock = __ARCH_SPIN_LOCK_UNLOCKED; -arch_spinlock_t smp_bitops_lock = __ARCH_SPIN_LOCK_UNLOCKED; - EXPORT_SYMBOL_GPL(smp_atomic_ops_lock); -EXPORT_SYMBOL_GPL(smp_bitops_lock); #endif struct plat_smp_ops __weak plat_smp_ops; ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 13:57 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 13:57 UTC (permalink / raw) To: linux-snps-arc Hi Eugeniy, On Thu, Aug 30, 2018@11:53:17AM +0000, Eugeniy Paltsev wrote: > On Thu, 2018-08-30@10:51 +0100, Will Deacon wrote: > > On Thu, Aug 30, 2018@11:44:11AM +0200, Peter Zijlstra wrote: > > > On Wed, Aug 29, 2018@09:16:43PM +0000, Vineet Gupta wrote: > > > > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > > > > Hi Guys, > > > > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > > > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > > > > even on boot. > > > > > > > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > > > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_torvalds_linux_commit_84c6591103dbeaf393a092a3fc7b09510825f6b9&d=DwIBAg&c=DPL6 > > > > > _X_6JkXFx7AXWqB0tg&r=ZlJN1MriPUTkBKCrPSx67GmaplEUGcAEk9yPtCLdUXI&m=6y0FFvkGdIQ6kX2lZ31V99lMfMV- > > > > > RyWyYhiUGzh0Bi0&s=GNwmhSynIcWqgZhiOwFEEH_AtbZAH443_L6QH4nw_ls&e= > > > > > > > > > > Reverting the commit solves this problem. > > > > > > > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > > > > generic bitops implementation and it works fine. > > > > > > > > > > Do you have any ideas what went wrong? > > > > > > > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > > > > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > > > > __clear_bit_unlock()") > > > > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > > > > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > > > > > > > This patch undoes that which could explain the issues you see. @Peter, @Will ? > > > > > > Right, so the thinking is that on platforms that suffer that issue, > > > atomic_set*() should DTRT. And if you look at your spinlock based atomic > > > implementation, you'll note that atomic_set() does indeed do the right > > > thing. > > > > > > arch/arc/include/asm/atomic.h:108 > > > > Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a > > boils down to concurrent atomic_long_set_release() vs > > atomic_long_fetch_or_acquire(), which really needs to work. > > > > I'll keep digging. In the meantime, Vineet, do you have any useful crash > > logs and do you only see the crashes in certain configurations (e.g. SMP but > > !CONFIG_ARC_HAS_LLSC)? > > We don't have such configuration (SMP with !CONFIG_ARC_HAS_LLSC). Ok, thanks for confirming. If you could put your .config somewhere that would be helpful, since the build fails for me with defconfig. > I can see crashes with LLSC enabled in both SMP running on 4 cores > and SMP running on 1 core. > > > There are some crash logs (not sure if they are really useful): > Crashes are quite spontaneous and mostly happens in IO-related code: Aha: arc seems to have separate spinlocks for protecting bitops and atomics. 
This is a seriously bad idea, because you only implement some of the bitops API in the architecture and rely on asm-generic to give you bitops/lock.h, assuming that it's going to be implemented in terms of the other parts of the bitops API (it isn't anymore). Here's a quick hack (totally untested, since I can't even build an arc kernel) which moves arc over to asm-generic for all of the bitops API and ditches the bitops lock. You could probably also drop the ffs() stuff, but I'll leave that up to you. The "downside" is serialisation between bitops and atomics, but I actually think you probably want that for xchg/cmpxchg anyway. Will --->8 diff --git a/arch/arc/include/asm/bitops.h b/arch/arc/include/asm/bitops.h index 8da87feec59a..50b0d5a56e32 100644 --- a/arch/arc/include/asm/bitops.h +++ b/arch/arc/include/asm/bitops.h @@ -17,242 +17,6 @@ #include <linux/types.h> #include <linux/compiler.h> -#include <asm/barrier.h> -#ifndef CONFIG_ARC_HAS_LLSC -#include <asm/smp.h> -#endif - -#ifdef CONFIG_ARC_HAS_LLSC - -/* - * Hardware assisted Atomic-R-M-W - */ - -#define BIT_OP(op, c_op, asm_op) \ -static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned int temp; \ - \ - m += nr >> 5; \ - \ - nr &= 0x1f; \ - \ - __asm__ __volatile__( \ - "1: llock %0, [%1] \n" \ - " " #asm_op " %0, %0, %2 \n" \ - " scond %0, [%1] \n" \ - " bnz 1b \n" \ - : "=&r"(temp) /* Early clobber, to prevent reg reuse */ \ - : "r"(m), /* Not "m": llock only supports reg direct addr mode */ \ - "ir"(nr) \ - : "cc"); \ -} - -/* - * Semantically: - * Test the bit - * if clear - * set it and return 0 (old value) - * else - * return 1 (old value). - * - * Since ARC lacks a equivalent h/w primitive, the bit is set unconditionally - * and the old value of bit is returned - */ -#define TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old, temp; \ - \ - m += nr >> 5; \ - \ - nr &= 0x1f; \ - \ - /* \ - * Explicit full memory barrier needed before/after as \ - * LLOCK/SCOND themselves don't provide any such smenatic \ - */ \ - smp_mb(); \ - \ - __asm__ __volatile__( \ - "1: llock %0, [%2] \n" \ - " " #asm_op " %1, %0, %3 \n" \ - " scond %1, [%2] \n" \ - " bnz 1b \n" \ - : "=&r"(old), "=&r"(temp) \ - : "r"(m), "ir"(nr) \ - : "cc"); \ - \ - smp_mb(); \ - \ - return (old & (1 << nr)) != 0; \ -} - -#elif !defined(CONFIG_ARC_PLAT_EZNPS) - -/* - * Non hardware assisted Atomic-R-M-W - * Locking would change to irq-disabling only (UP) and spinlocks (SMP) - * - * There's "significant" micro-optimization in writing our own variants of - * bitops (over generic variants) - * - * (1) The generic APIs have "signed" @nr while we have it "unsigned" - * This avoids extra code to be generated for pointer arithmatic, since - * is "not sure" that index is NOT -ve - * (2) Utilize the fact that ARCompact bit fidding insn (BSET/BCLR/ASL) etc - * only consider bottom 5 bits of @nr, so NO need to mask them off. 
- * (GCC Quirk: however for constant @nr we still need to do the masking - * at compile time) - */ - -#define BIT_OP(op, c_op, asm_op) \ -static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long temp, flags; \ - m += nr >> 5; \ - \ - /* \ - * spin lock/unlock provide the needed smp_mb() before/after \ - */ \ - bitops_lock(flags); \ - \ - temp = *m; \ - *m = temp c_op (1UL << (nr & 0x1f)); \ - \ - bitops_unlock(flags); \ -} - -#define TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old, flags; \ - m += nr >> 5; \ - \ - bitops_lock(flags); \ - \ - old = *m; \ - *m = old c_op (1UL << (nr & 0x1f)); \ - \ - bitops_unlock(flags); \ - \ - return (old & (1UL << (nr & 0x1f))) != 0; \ -} - -#else /* CONFIG_ARC_PLAT_EZNPS */ - -#define BIT_OP(op, c_op, asm_op) \ -static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - m += nr >> 5; \ - \ - nr = (1UL << (nr & 0x1f)); \ - if (asm_op == CTOP_INST_AAND_DI_R2_R2_R3) \ - nr = ~nr; \ - \ - __asm__ __volatile__( \ - " mov r2, %0\n" \ - " mov r3, %1\n" \ - " .word %2\n" \ - : \ - : "r"(nr), "r"(m), "i"(asm_op) \ - : "r2", "r3", "memory"); \ -} - -#define TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old; \ - \ - m += nr >> 5; \ - \ - nr = old = (1UL << (nr & 0x1f)); \ - if (asm_op == CTOP_INST_AAND_DI_R2_R2_R3) \ - old = ~old; \ - \ - /* Explicit full memory barrier needed before/after */ \ - smp_mb(); \ - \ - __asm__ __volatile__( \ - " mov r2, %0\n" \ - " mov r3, %1\n" \ - " .word %2\n" \ - " mov %0, r2" \ - : "+r"(old) \ - : "r"(m), "i"(asm_op) \ - : "r2", "r3", "memory"); \ - \ - smp_mb(); \ - \ - return (old & nr) != 0; \ -} - -#endif /* CONFIG_ARC_PLAT_EZNPS */ - -/*************************************** - * Non atomic variants - **************************************/ - -#define __BIT_OP(op, c_op, asm_op) \ -static inline void __##op##_bit(unsigned long nr, volatile unsigned long *m) \ -{ \ - unsigned long temp; \ - m += nr >> 5; \ - \ - temp = *m; \ - *m = temp c_op (1UL << (nr & 0x1f)); \ -} - -#define __TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int __test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old; \ - m += nr >> 5; \ - \ - old = *m; \ - *m = old c_op (1UL << (nr & 0x1f)); \ - \ - return (old & (1UL << (nr & 0x1f))) != 0; \ -} - -#define BIT_OPS(op, c_op, asm_op) \ - \ - /* set_bit(), clear_bit(), change_bit() */ \ - BIT_OP(op, c_op, asm_op) \ - \ - /* test_and_set_bit(), test_and_clear_bit(), test_and_change_bit() */\ - TEST_N_BIT_OP(op, c_op, asm_op) \ - \ - /* __set_bit(), __clear_bit(), __change_bit() */ \ - __BIT_OP(op, c_op, asm_op) \ - \ - /* __test_and_set_bit(), __test_and_clear_bit(), __test_and_change_bit() */\ - __TEST_N_BIT_OP(op, c_op, asm_op) - -#ifndef CONFIG_ARC_PLAT_EZNPS -BIT_OPS(set, |, bset) -BIT_OPS(clear, & ~, bclr) -BIT_OPS(change, ^, bxor) -#else -BIT_OPS(set, |, CTOP_INST_AOR_DI_R2_R2_R3) -BIT_OPS(clear, & ~, CTOP_INST_AAND_DI_R2_R2_R3) -BIT_OPS(change, ^, CTOP_INST_AXOR_DI_R2_R2_R3) -#endif - -/* - * This routine doesn't need to be atomic. 
- */ -static inline int -test_bit(unsigned int nr, const volatile unsigned long *addr) -{ - unsigned long mask; - - addr += nr >> 5; - - mask = 1UL << (nr & 0x1f); - - return ((mask & *addr) != 0); -} #ifdef CONFIG_ISA_ARCOMPACT @@ -428,6 +192,9 @@ static inline __attribute__ ((const)) int __ffs(unsigned long x) #include <asm-generic/bitops/sched.h> #include <asm-generic/bitops/lock.h> +#include <asm-generic/bitops/atomic.h> +#include <asm-generic/bitops/non-atomic.h> + #include <asm-generic/bitops/find.h> #include <asm-generic/bitops/le.h> #include <asm-generic/bitops/ext2-atomic-setbit.h> diff --git a/arch/arc/include/asm/smp.h b/arch/arc/include/asm/smp.h index 0861007d9ef3..8df446c2b299 100644 --- a/arch/arc/include/asm/smp.h +++ b/arch/arc/include/asm/smp.h @@ -108,7 +108,6 @@ static inline const char *arc_platform_smp_cpuinfo(void) #include <asm/spinlock.h> extern arch_spinlock_t smp_atomic_ops_lock; -extern arch_spinlock_t smp_bitops_lock; #define atomic_ops_lock(flags) do { \ local_irq_save(flags); \ @@ -120,24 +119,11 @@ extern arch_spinlock_t smp_bitops_lock; local_irq_restore(flags); \ } while (0) -#define bitops_lock(flags) do { \ - local_irq_save(flags); \ - arch_spin_lock(&smp_bitops_lock); \ -} while (0) - -#define bitops_unlock(flags) do { \ - arch_spin_unlock(&smp_bitops_lock); \ - local_irq_restore(flags); \ -} while (0) - #else /* !CONFIG_SMP */ #define atomic_ops_lock(flags) local_irq_save(flags) #define atomic_ops_unlock(flags) local_irq_restore(flags) -#define bitops_lock(flags) local_irq_save(flags) -#define bitops_unlock(flags) local_irq_restore(flags) - #endif /* !CONFIG_SMP */ #endif /* !CONFIG_ARC_HAS_LLSC */ diff --git a/arch/arc/kernel/smp.c b/arch/arc/kernel/smp.c index 21d86c36692b..80c80c540743 100644 --- a/arch/arc/kernel/smp.c +++ b/arch/arc/kernel/smp.c @@ -32,10 +32,7 @@ #ifndef CONFIG_ARC_HAS_LLSC arch_spinlock_t smp_atomic_ops_lock = __ARCH_SPIN_LOCK_UNLOCKED; -arch_spinlock_t smp_bitops_lock = __ARCH_SPIN_LOCK_UNLOCKED; - EXPORT_SYMBOL_GPL(smp_atomic_ops_lock); -EXPORT_SYMBOL_GPL(smp_bitops_lock); #endif struct plat_smp_ops __weak plat_smp_ops; ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 13:57 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 13:57 UTC (permalink / raw) To: Eugeniy Paltsev Cc: peterz, mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch Hi Eugeniy, On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > On Thu, 2018-08-30 at 10:51 +0100, Will Deacon wrote: > > On Thu, Aug 30, 2018 at 11:44:11AM +0200, Peter Zijlstra wrote: > > > On Wed, Aug 29, 2018 at 09:16:43PM +0000, Vineet Gupta wrote: > > > > On 08/29/2018 11:33 AM, Eugeniy Paltsev wrote: > > > > > Hi Guys, > > > > > Since v4.19-rc1 we are getting a serious regression on platforms with ARC architecture. > > > > > The kernel have become unstable and spontaneously crashes on LTP tests execution / IO tests or > > > > > even on boot. > > > > > > > > > > I don't know exactly what breaks but bisect clearly assign the blame to this commit: > > > > > 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") > > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_torvalds_linux_commit_84c6591103dbeaf393a092a3fc7b09510825f6b9&d=DwIBAg&c=DPL6 > > > > > _X_6JkXFx7AXWqB0tg&r=ZlJN1MriPUTkBKCrPSx67GmaplEUGcAEk9yPtCLdUXI&m=6y0FFvkGdIQ6kX2lZ31V99lMfMV- > > > > > RyWyYhiUGzh0Bi0&s=GNwmhSynIcWqgZhiOwFEEH_AtbZAH443_L6QH4nw_ls&e= > > > > > > > > > > Reverting the commit solves this problem. > > > > > > > > > > I tested v4.19-rc1 on ARM (wandboard, i.mx6, 32bit, quard core, ARMv7) which uses same > > > > > generic bitops implementation and it works fine. > > > > > > > > > > Do you have any ideas what went wrong? > > > > > > > > Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See > > > > commit f75d48644c56a ("bitops: Do not default to __clear_bit() for > > > > __clear_bit_unlock()") > > > > That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic > > > > __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. > > > > > > > > This patch undoes that which could explain the issues you see. @Peter, @Will ? > > > > > > Right, so the thinking is that on platforms that suffer that issue, > > > atomic_set*() should DTRT. And if you look at your spinlock based atomic > > > implementation, you'll note that atomic_set() does indeed do the right > > > thing. > > > > > > arch/arc/include/asm/atomic.h:108 > > > > Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a > > boils down to concurrent atomic_long_set_release() vs > > atomic_long_fetch_or_acquire(), which really needs to work. > > > > I'll keep digging. In the meantime, Vineet, do you have any useful crash > > logs and do you only see the crashes in certain configurations (e.g. SMP but > > !CONFIG_ARC_HAS_LLSC)? > > We don't have such configuration (SMP with !CONFIG_ARC_HAS_LLSC). Ok, thanks for confirming. If you could put your .config somewhere that would be helpful, since the build fails for me with defconfig. > I can see crashes with LLSC enabled in both SMP running on 4 cores > and SMP running on 1 core. > > > There are some crash logs (not sure if they are really useful): > Crashes are quite spontaneous and mostly happens in IO-related code: Aha: arc seems to have separate spinlocks for protecting bitops and atomics. 
This is a seriously bad idea, because you only implement some of the bitops API in the architecture and rely on asm-generic to give you bitops/lock.h, assuming that it's going to be implemented in terms of the other parts of the bitops API (it isn't anymore). Here's a quick hack (totally untested, since I can't even build an arc kernel) which moves arc over to asm-generic for all of the bitops API and ditches the bitops lock. You could probably also drop the ffs() stuff, but I'll leave that up to you. The "downside" is serialisation between bitops and atomics, but I actually think you probably want that for xchg/cmpxchg anyway. Will --->8 diff --git a/arch/arc/include/asm/bitops.h b/arch/arc/include/asm/bitops.h index 8da87feec59a..50b0d5a56e32 100644 --- a/arch/arc/include/asm/bitops.h +++ b/arch/arc/include/asm/bitops.h @@ -17,242 +17,6 @@ #include <linux/types.h> #include <linux/compiler.h> -#include <asm/barrier.h> -#ifndef CONFIG_ARC_HAS_LLSC -#include <asm/smp.h> -#endif - -#ifdef CONFIG_ARC_HAS_LLSC - -/* - * Hardware assisted Atomic-R-M-W - */ - -#define BIT_OP(op, c_op, asm_op) \ -static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned int temp; \ - \ - m += nr >> 5; \ - \ - nr &= 0x1f; \ - \ - __asm__ __volatile__( \ - "1: llock %0, [%1] \n" \ - " " #asm_op " %0, %0, %2 \n" \ - " scond %0, [%1] \n" \ - " bnz 1b \n" \ - : "=&r"(temp) /* Early clobber, to prevent reg reuse */ \ - : "r"(m), /* Not "m": llock only supports reg direct addr mode */ \ - "ir"(nr) \ - : "cc"); \ -} - -/* - * Semantically: - * Test the bit - * if clear - * set it and return 0 (old value) - * else - * return 1 (old value). - * - * Since ARC lacks a equivalent h/w primitive, the bit is set unconditionally - * and the old value of bit is returned - */ -#define TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old, temp; \ - \ - m += nr >> 5; \ - \ - nr &= 0x1f; \ - \ - /* \ - * Explicit full memory barrier needed before/after as \ - * LLOCK/SCOND themselves don't provide any such smenatic \ - */ \ - smp_mb(); \ - \ - __asm__ __volatile__( \ - "1: llock %0, [%2] \n" \ - " " #asm_op " %1, %0, %3 \n" \ - " scond %1, [%2] \n" \ - " bnz 1b \n" \ - : "=&r"(old), "=&r"(temp) \ - : "r"(m), "ir"(nr) \ - : "cc"); \ - \ - smp_mb(); \ - \ - return (old & (1 << nr)) != 0; \ -} - -#elif !defined(CONFIG_ARC_PLAT_EZNPS) - -/* - * Non hardware assisted Atomic-R-M-W - * Locking would change to irq-disabling only (UP) and spinlocks (SMP) - * - * There's "significant" micro-optimization in writing our own variants of - * bitops (over generic variants) - * - * (1) The generic APIs have "signed" @nr while we have it "unsigned" - * This avoids extra code to be generated for pointer arithmatic, since - * is "not sure" that index is NOT -ve - * (2) Utilize the fact that ARCompact bit fidding insn (BSET/BCLR/ASL) etc - * only consider bottom 5 bits of @nr, so NO need to mask them off. 
- * (GCC Quirk: however for constant @nr we still need to do the masking - * at compile time) - */ - -#define BIT_OP(op, c_op, asm_op) \ -static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long temp, flags; \ - m += nr >> 5; \ - \ - /* \ - * spin lock/unlock provide the needed smp_mb() before/after \ - */ \ - bitops_lock(flags); \ - \ - temp = *m; \ - *m = temp c_op (1UL << (nr & 0x1f)); \ - \ - bitops_unlock(flags); \ -} - -#define TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old, flags; \ - m += nr >> 5; \ - \ - bitops_lock(flags); \ - \ - old = *m; \ - *m = old c_op (1UL << (nr & 0x1f)); \ - \ - bitops_unlock(flags); \ - \ - return (old & (1UL << (nr & 0x1f))) != 0; \ -} - -#else /* CONFIG_ARC_PLAT_EZNPS */ - -#define BIT_OP(op, c_op, asm_op) \ -static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - m += nr >> 5; \ - \ - nr = (1UL << (nr & 0x1f)); \ - if (asm_op == CTOP_INST_AAND_DI_R2_R2_R3) \ - nr = ~nr; \ - \ - __asm__ __volatile__( \ - " mov r2, %0\n" \ - " mov r3, %1\n" \ - " .word %2\n" \ - : \ - : "r"(nr), "r"(m), "i"(asm_op) \ - : "r2", "r3", "memory"); \ -} - -#define TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old; \ - \ - m += nr >> 5; \ - \ - nr = old = (1UL << (nr & 0x1f)); \ - if (asm_op == CTOP_INST_AAND_DI_R2_R2_R3) \ - old = ~old; \ - \ - /* Explicit full memory barrier needed before/after */ \ - smp_mb(); \ - \ - __asm__ __volatile__( \ - " mov r2, %0\n" \ - " mov r3, %1\n" \ - " .word %2\n" \ - " mov %0, r2" \ - : "+r"(old) \ - : "r"(m), "i"(asm_op) \ - : "r2", "r3", "memory"); \ - \ - smp_mb(); \ - \ - return (old & nr) != 0; \ -} - -#endif /* CONFIG_ARC_PLAT_EZNPS */ - -/*************************************** - * Non atomic variants - **************************************/ - -#define __BIT_OP(op, c_op, asm_op) \ -static inline void __##op##_bit(unsigned long nr, volatile unsigned long *m) \ -{ \ - unsigned long temp; \ - m += nr >> 5; \ - \ - temp = *m; \ - *m = temp c_op (1UL << (nr & 0x1f)); \ -} - -#define __TEST_N_BIT_OP(op, c_op, asm_op) \ -static inline int __test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\ -{ \ - unsigned long old; \ - m += nr >> 5; \ - \ - old = *m; \ - *m = old c_op (1UL << (nr & 0x1f)); \ - \ - return (old & (1UL << (nr & 0x1f))) != 0; \ -} - -#define BIT_OPS(op, c_op, asm_op) \ - \ - /* set_bit(), clear_bit(), change_bit() */ \ - BIT_OP(op, c_op, asm_op) \ - \ - /* test_and_set_bit(), test_and_clear_bit(), test_and_change_bit() */\ - TEST_N_BIT_OP(op, c_op, asm_op) \ - \ - /* __set_bit(), __clear_bit(), __change_bit() */ \ - __BIT_OP(op, c_op, asm_op) \ - \ - /* __test_and_set_bit(), __test_and_clear_bit(), __test_and_change_bit() */\ - __TEST_N_BIT_OP(op, c_op, asm_op) - -#ifndef CONFIG_ARC_PLAT_EZNPS -BIT_OPS(set, |, bset) -BIT_OPS(clear, & ~, bclr) -BIT_OPS(change, ^, bxor) -#else -BIT_OPS(set, |, CTOP_INST_AOR_DI_R2_R2_R3) -BIT_OPS(clear, & ~, CTOP_INST_AAND_DI_R2_R2_R3) -BIT_OPS(change, ^, CTOP_INST_AXOR_DI_R2_R2_R3) -#endif - -/* - * This routine doesn't need to be atomic. 
- */ -static inline int -test_bit(unsigned int nr, const volatile unsigned long *addr) -{ - unsigned long mask; - - addr += nr >> 5; - - mask = 1UL << (nr & 0x1f); - - return ((mask & *addr) != 0); -} #ifdef CONFIG_ISA_ARCOMPACT @@ -428,6 +192,9 @@ static inline __attribute__ ((const)) int __ffs(unsigned long x) #include <asm-generic/bitops/sched.h> #include <asm-generic/bitops/lock.h> +#include <asm-generic/bitops/atomic.h> +#include <asm-generic/bitops/non-atomic.h> + #include <asm-generic/bitops/find.h> #include <asm-generic/bitops/le.h> #include <asm-generic/bitops/ext2-atomic-setbit.h> diff --git a/arch/arc/include/asm/smp.h b/arch/arc/include/asm/smp.h index 0861007d9ef3..8df446c2b299 100644 --- a/arch/arc/include/asm/smp.h +++ b/arch/arc/include/asm/smp.h @@ -108,7 +108,6 @@ static inline const char *arc_platform_smp_cpuinfo(void) #include <asm/spinlock.h> extern arch_spinlock_t smp_atomic_ops_lock; -extern arch_spinlock_t smp_bitops_lock; #define atomic_ops_lock(flags) do { \ local_irq_save(flags); \ @@ -120,24 +119,11 @@ extern arch_spinlock_t smp_bitops_lock; local_irq_restore(flags); \ } while (0) -#define bitops_lock(flags) do { \ - local_irq_save(flags); \ - arch_spin_lock(&smp_bitops_lock); \ -} while (0) - -#define bitops_unlock(flags) do { \ - arch_spin_unlock(&smp_bitops_lock); \ - local_irq_restore(flags); \ -} while (0) - #else /* !CONFIG_SMP */ #define atomic_ops_lock(flags) local_irq_save(flags) #define atomic_ops_unlock(flags) local_irq_restore(flags) -#define bitops_lock(flags) local_irq_save(flags) -#define bitops_unlock(flags) local_irq_restore(flags) - #endif /* !CONFIG_SMP */ #endif /* !CONFIG_ARC_HAS_LLSC */ diff --git a/arch/arc/kernel/smp.c b/arch/arc/kernel/smp.c index 21d86c36692b..80c80c540743 100644 --- a/arch/arc/kernel/smp.c +++ b/arch/arc/kernel/smp.c @@ -32,10 +32,7 @@ #ifndef CONFIG_ARC_HAS_LLSC arch_spinlock_t smp_atomic_ops_lock = __ARCH_SPIN_LOCK_UNLOCKED; -arch_spinlock_t smp_bitops_lock = __ARCH_SPIN_LOCK_UNLOCKED; - EXPORT_SYMBOL_GPL(smp_atomic_ops_lock); -EXPORT_SYMBOL_GPL(smp_bitops_lock); #endif struct plat_smp_ops __weak plat_smp_ops; ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 11:53 ` Eugeniy Paltsev ` (2 preceding siblings ...) (?) @ 2018-08-30 14:17 ` Peter Zijlstra -1 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:17 UTC (permalink / raw) To: Eugeniy Paltsev Cc: will.deacon, mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > I can see crashes with LLSC enabled in both SMP running on 4 cores > and SMP running on 1 core. So you're running on LL/SC enabled hardware; that would make Will's patch irrelevant (although still a good idea for the hardware that does care about that spinlocked atomic crud). Does something like the below cure things? That would confirm the suggestion that the change to __clear_bit_unlock() is the curprit. If that doesn't cure things, then we've been looking in entirely the wrong place. --- diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h index 3ae021368f48..79c6978152f8 100644 --- a/include/asm-generic/bitops/lock.h +++ b/include/asm-generic/bitops/lock.h @@ -57,12 +57,7 @@ static inline void clear_bit_unlock(unsigned int nr, volatile unsigned long *p) static inline void __clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { - unsigned long old; - - p += BIT_WORD(nr); - old = READ_ONCE(*p); - old &= ~BIT_MASK(nr); - atomic_long_set_release((atomic_long_t *)p, old); + clear_bit_unlock(nr, p); } /** ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:17 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:17 UTC (permalink / raw) To: linux-arm-kernel On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > I can see crashes with LLSC enabled in both SMP running on 4 cores > and SMP running on 1 core. So you're running on LL/SC enabled hardware; that would make Will's patch irrelevant (although still a good idea for the hardware that does care about that spinlocked atomic crud). Does something like the below cure things? That would confirm the suggestion that the change to __clear_bit_unlock() is the curprit. If that doesn't cure things, then we've been looking in entirely the wrong place. --- diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h index 3ae021368f48..79c6978152f8 100644 --- a/include/asm-generic/bitops/lock.h +++ b/include/asm-generic/bitops/lock.h @@ -57,12 +57,7 @@ static inline void clear_bit_unlock(unsigned int nr, volatile unsigned long *p) static inline void __clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { - unsigned long old; - - p += BIT_WORD(nr); - old = READ_ONCE(*p); - old &= ~BIT_MASK(nr); - atomic_long_set_release((atomic_long_t *)p, old); + clear_bit_unlock(nr, p); } /** ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:17 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:17 UTC (permalink / raw) To: linux-snps-arc On Thu, Aug 30, 2018@11:53:17AM +0000, Eugeniy Paltsev wrote: > I can see crashes with LLSC enabled in both SMP running on 4 cores > and SMP running on 1 core. So you're running on LL/SC enabled hardware; that would make Will's patch irrelevant (although still a good idea for the hardware that does care about that spinlocked atomic crud). Does something like the below cure things? That would confirm the suggestion that the change to __clear_bit_unlock() is the curprit. If that doesn't cure things, then we've been looking in entirely the wrong place. --- diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h index 3ae021368f48..79c6978152f8 100644 --- a/include/asm-generic/bitops/lock.h +++ b/include/asm-generic/bitops/lock.h @@ -57,12 +57,7 @@ static inline void clear_bit_unlock(unsigned int nr, volatile unsigned long *p) static inline void __clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { - unsigned long old; - - p += BIT_WORD(nr); - old = READ_ONCE(*p); - old &= ~BIT_MASK(nr); - atomic_long_set_release((atomic_long_t *)p, old); + clear_bit_unlock(nr, p); } /** ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:17 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:17 UTC (permalink / raw) To: Eugeniy Paltsev Cc: will.deacon, mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > I can see crashes with LLSC enabled in both SMP running on 4 cores > and SMP running on 1 core. So you're running on LL/SC enabled hardware; that would make Will's patch irrelevant (although still a good idea for the hardware that does care about that spinlocked atomic crud). Does something like the below cure things? That would confirm the suggestion that the change to __clear_bit_unlock() is the curprit. If that doesn't cure things, then we've been looking in entirely the wrong place. --- diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h index 3ae021368f48..79c6978152f8 100644 --- a/include/asm-generic/bitops/lock.h +++ b/include/asm-generic/bitops/lock.h @@ -57,12 +57,7 @@ static inline void clear_bit_unlock(unsigned int nr, volatile unsigned long *p) static inline void __clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { - unsigned long old; - - p += BIT_WORD(nr); - old = READ_ONCE(*p); - old &= ~BIT_MASK(nr); - atomic_long_set_release((atomic_long_t *)p, old); + clear_bit_unlock(nr, p); } /** ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:17 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:17 UTC (permalink / raw) To: Eugeniy Paltsev Cc: linux-arch, Vineet.Gupta1, Alexey.Brodkin, will.deacon, linux-kernel, yamada.masahiro, tglx, linux-snps-arc, mingo, linux-arm-kernel On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > I can see crashes with LLSC enabled in both SMP running on 4 cores > and SMP running on 1 core. So you're running on LL/SC enabled hardware; that would make Will's patch irrelevant (although still a good idea for the hardware that does care about that spinlocked atomic crud). Does something like the below cure things? That would confirm the suggestion that the change to __clear_bit_unlock() is the curprit. If that doesn't cure things, then we've been looking in entirely the wrong place. --- diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h index 3ae021368f48..79c6978152f8 100644 --- a/include/asm-generic/bitops/lock.h +++ b/include/asm-generic/bitops/lock.h @@ -57,12 +57,7 @@ static inline void clear_bit_unlock(unsigned int nr, volatile unsigned long *p) static inline void __clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { - unsigned long old; - - p += BIT_WORD(nr); - old = READ_ONCE(*p); - old &= ~BIT_MASK(nr); - atomic_long_set_release((atomic_long_t *)p, old); + clear_bit_unlock(nr, p); } /** ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 14:17 ` Peter Zijlstra (?) (?) @ 2018-08-30 14:23 ` Will Deacon -1 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 14:23 UTC (permalink / raw) To: Peter Zijlstra Cc: Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 04:17:13PM +0200, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > > I can see crashes with LLSC enabled in both SMP running on 4 cores > > and SMP running on 1 core. > > So you're running on LL/SC enabled hardware; that would make Will's > patch irrelevant (although still a good idea for the hardware that does > care about that spinlocked atomic crud). Yeah, that's a good point. I think the !LLSC case is broken without my patch, so we're looking at two bugs... > Does something like the below cure things? That would confirm the > suggestion that the change to __clear_bit_unlock() is the curprit. > > If that doesn't cure things, then we've been looking in entirely the > wrong place. Yes, that would be worth trying. However, I also just noticed that the fetch-ops (which are now used to implement test_and_set_bit_lock()) seem to be missing the backwards branch in the LL/SC case. Yet another diff below. Will --->8 diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 4e0072730241..f06c5ed672b3 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ "1: llock %[orig], [%[ctr]] \n" \ " " #asm_op " %[val], %[orig], %[i] \n" \ " scond %[val], [%[ctr]] \n" \ - " \n" \ + " bnz 1b \n" \ : [val] "=&r" (val), \ [orig] "=&r" (orig) \ : [ctr] "r" (&v->counter), \ ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:23 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 14:23 UTC (permalink / raw) To: linux-arm-kernel On Thu, Aug 30, 2018 at 04:17:13PM +0200, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > > I can see crashes with LLSC enabled in both SMP running on 4 cores > > and SMP running on 1 core. > > So you're running on LL/SC enabled hardware; that would make Will's > patch irrelevant (although still a good idea for the hardware that does > care about that spinlocked atomic crud). Yeah, that's a good point. I think the !LLSC case is broken without my patch, so we're looking at two bugs... > Does something like the below cure things? That would confirm the > suggestion that the change to __clear_bit_unlock() is the curprit. > > If that doesn't cure things, then we've been looking in entirely the > wrong place. Yes, that would be worth trying. However, I also just noticed that the fetch-ops (which are now used to implement test_and_set_bit_lock()) seem to be missing the backwards branch in the LL/SC case. Yet another diff below. Will --->8 diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 4e0072730241..f06c5ed672b3 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ "1: llock %[orig], [%[ctr]] \n" \ " " #asm_op " %[val], %[orig], %[i] \n" \ " scond %[val], [%[ctr]] \n" \ - " \n" \ + " bnz 1b \n" \ : [val] "=&r" (val), \ [orig] "=&r" (orig) \ : [ctr] "r" (&v->counter), \ ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:23 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 14:23 UTC (permalink / raw) To: linux-snps-arc On Thu, Aug 30, 2018@04:17:13PM +0200, Peter Zijlstra wrote: > On Thu, Aug 30, 2018@11:53:17AM +0000, Eugeniy Paltsev wrote: > > I can see crashes with LLSC enabled in both SMP running on 4 cores > > and SMP running on 1 core. > > So you're running on LL/SC enabled hardware; that would make Will's > patch irrelevant (although still a good idea for the hardware that does > care about that spinlocked atomic crud). Yeah, that's a good point. I think the !LLSC case is broken without my patch, so we're looking at two bugs... > Does something like the below cure things? That would confirm the > suggestion that the change to __clear_bit_unlock() is the curprit. > > If that doesn't cure things, then we've been looking in entirely the > wrong place. Yes, that would be worth trying. However, I also just noticed that the fetch-ops (which are now used to implement test_and_set_bit_lock()) seem to be missing the backwards branch in the LL/SC case. Yet another diff below. Will --->8 diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 4e0072730241..f06c5ed672b3 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ "1: llock %[orig], [%[ctr]] \n" \ " " #asm_op " %[val], %[orig], %[i] \n" \ " scond %[val], [%[ctr]] \n" \ - " \n" \ + " bnz 1b \n" \ : [val] "=&r" (val), \ [orig] "=&r" (orig) \ : [ctr] "r" (&v->counter), \ ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:23 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-30 14:23 UTC (permalink / raw) To: Peter Zijlstra Cc: Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 04:17:13PM +0200, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > > I can see crashes with LLSC enabled in both SMP running on 4 cores > > and SMP running on 1 core. > > So you're running on LL/SC enabled hardware; that would make Will's > patch irrelevant (although still a good idea for the hardware that does > care about that spinlocked atomic crud). Yeah, that's a good point. I think the !LLSC case is broken without my patch, so we're looking at two bugs... > Does something like the below cure things? That would confirm the > suggestion that the change to __clear_bit_unlock() is the curprit. > > If that doesn't cure things, then we've been looking in entirely the > wrong place. Yes, that would be worth trying. However, I also just noticed that the fetch-ops (which are now used to implement test_and_set_bit_lock()) seem to be missing the backwards branch in the LL/SC case. Yet another diff below. Will --->8 diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 4e0072730241..f06c5ed672b3 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ "1: llock %[orig], [%[ctr]] \n" \ " " #asm_op " %[val], %[orig], %[i] \n" \ " scond %[val], [%[ctr]] \n" \ - " \n" \ + " bnz 1b \n" \ : [val] "=&r" (val), \ [orig] "=&r" (orig) \ : [ctr] "r" (&v->counter), \ ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 14:23 ` Will Deacon (?) (?) @ 2018-08-30 14:29 ` Peter Zijlstra -1 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:29 UTC (permalink / raw) To: Will Deacon Cc: Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 03:23:55PM +0100, Will Deacon wrote: > Yes, that would be worth trying. However, I also just noticed that the > fetch-ops (which are now used to implement test_and_set_bit_lock()) seem > to be missing the backwards branch in the LL/SC case. Yet another diff > below. > > Will > > --->8 > > diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h > index 4e0072730241..f06c5ed672b3 100644 > --- a/arch/arc/include/asm/atomic.h > +++ b/arch/arc/include/asm/atomic.h > @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ > "1: llock %[orig], [%[ctr]] \n" \ > " " #asm_op " %[val], %[orig], %[i] \n" \ > " scond %[val], [%[ctr]] \n" \ > - " \n" \ > + " bnz 1b \n" \ > : [val] "=&r" (val), \ > [orig] "=&r" (orig) \ > : [ctr] "r" (&v->counter), \ ACK!! sorry about that, no idea how I messed that up. Also, once it all works, they should look at switching to _relaxed atomics for LL/SC. ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:29 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:29 UTC (permalink / raw) To: linux-arm-kernel On Thu, Aug 30, 2018 at 03:23:55PM +0100, Will Deacon wrote: > Yes, that would be worth trying. However, I also just noticed that the > fetch-ops (which are now used to implement test_and_set_bit_lock()) seem > to be missing the backwards branch in the LL/SC case. Yet another diff > below. > > Will > > --->8 > > diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h > index 4e0072730241..f06c5ed672b3 100644 > --- a/arch/arc/include/asm/atomic.h > +++ b/arch/arc/include/asm/atomic.h > @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ > "1: llock %[orig], [%[ctr]] \n" \ > " " #asm_op " %[val], %[orig], %[i] \n" \ > " scond %[val], [%[ctr]] \n" \ > - " \n" \ > + " bnz 1b \n" \ > : [val] "=&r" (val), \ > [orig] "=&r" (orig) \ > : [ctr] "r" (&v->counter), \ ACK!! sorry about that, no idea how I messed that up. Also, once it all works, they should look at switching to _relaxed atomics for LL/SC. ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:29 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:29 UTC (permalink / raw) To: linux-snps-arc On Thu, Aug 30, 2018@03:23:55PM +0100, Will Deacon wrote: > Yes, that would be worth trying. However, I also just noticed that the > fetch-ops (which are now used to implement test_and_set_bit_lock()) seem > to be missing the backwards branch in the LL/SC case. Yet another diff > below. > > Will > > --->8 > > diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h > index 4e0072730241..f06c5ed672b3 100644 > --- a/arch/arc/include/asm/atomic.h > +++ b/arch/arc/include/asm/atomic.h > @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ > "1: llock %[orig], [%[ctr]] \n" \ > " " #asm_op " %[val], %[orig], %[i] \n" \ > " scond %[val], [%[ctr]] \n" \ > - " \n" \ > + " bnz 1b \n" \ > : [val] "=&r" (val), \ > [orig] "=&r" (orig) \ > : [ctr] "r" (&v->counter), \ ACK!! sorry about that, no idea how I messed that up. Also, once it all works, they should look at switching to _relaxed atomics for LL/SC. ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:29 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:29 UTC (permalink / raw) To: Will Deacon Cc: Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 03:23:55PM +0100, Will Deacon wrote: > Yes, that would be worth trying. However, I also just noticed that the > fetch-ops (which are now used to implement test_and_set_bit_lock()) seem > to be missing the backwards branch in the LL/SC case. Yet another diff > below. > > Will > > --->8 > > diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h > index 4e0072730241..f06c5ed672b3 100644 > --- a/arch/arc/include/asm/atomic.h > +++ b/arch/arc/include/asm/atomic.h > @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ > "1: llock %[orig], [%[ctr]] \n" \ > " " #asm_op " %[val], %[orig], %[i] \n" \ > " scond %[val], [%[ctr]] \n" \ > - " \n" \ > + " bnz 1b \n" \ > : [val] "=&r" (val), \ > [orig] "=&r" (orig) \ > : [ctr] "r" (&v->counter), \ ACK!! sorry about that, no idea how I messed that up. Also, once it all works, they should look at switching to _relaxed atomics for LL/SC. ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 14:29 ` Peter Zijlstra ` (2 preceding siblings ...) (?) @ 2018-08-30 14:43 ` Peter Zijlstra -1 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:43 UTC (permalink / raw) To: Will Deacon Cc: Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 04:29:20PM +0200, Peter Zijlstra wrote: > Also, once it all works, they should look at switching to _relaxed > atomics for LL/SC. A little something like so.. should save a few smp_mb(). --- diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 4e0072730241..714b54c308b0 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -44,7 +44,7 @@ static inline void atomic_##op(int i, atomic_t *v) \ } \ #define ATOMIC_OP_RETURN(op, c_op, asm_op) \ -static inline int atomic_##op##_return(int i, atomic_t *v) \ +static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \ { \ unsigned int val; \ \ @@ -69,8 +69,11 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \ return val; \ } +#define atomic_add_return_relaxed atomic_add_return_relaxed +#define atomic_sub_return_relaxed atomic_sub_return_relaxed + #define ATOMIC_FETCH_OP(op, c_op, asm_op) \ -static inline int atomic_fetch_##op(int i, atomic_t *v) \ +static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \ { \ unsigned int val, orig; \ \ @@ -96,6 +99,14 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ return orig; \ } +#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed +#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed + +#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed +#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed +#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed + #else /* !CONFIG_ARC_HAS_LLSC */ #ifndef CONFIG_SMP @@ -379,7 +390,7 @@ static inline void atomic64_##op(long long a, atomic64_t *v) \ } \ #define ATOMIC64_OP_RETURN(op, op1, op2) \ -static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ +static inline long long atomic64_##op##_return_relaxed(long long a, atomic64_t *v) \ { \ unsigned long long val; \ \ @@ -401,8 +412,11 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ return val; \ } +#define atomic64_add_return_relaxed atomic64_add_return_relaxed +#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed + #define ATOMIC64_FETCH_OP(op, op1, op2) \ -static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ +static inline long long atomic64_fetch_##op##_relaxed(long long a, atomic64_t *v) \ { \ unsigned long long val, orig; \ \ @@ -424,6 +438,14 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ return orig; \ } +#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed + +#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed +#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed + #define ATOMIC64_OPS(op, op1, op2) \ ATOMIC64_OP(op, op1, op2) \ ATOMIC64_OP_RETURN(op, op1, op2) \ @@ -434,6 +456,12 @@ static inline long long atomic64_fetch_##op(long long 
a, atomic64_t *v) \ ATOMIC64_OPS(add, add.f, adc) ATOMIC64_OPS(sub, sub.f, sbc) + +#undef ATOMIC64_OPS +#define ATOMIC64_OPS(op, op1, op2) \ + ATOMIC64_OP(op, op1, op2) \ + ATOMIC64_FETCH_OP(op, op1, op2) + ATOMIC64_OPS(and, and, and) ATOMIC64_OPS(andnot, bic, bic) ATOMIC64_OPS(or, or, or) ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:43 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:43 UTC (permalink / raw) To: linux-arm-kernel On Thu, Aug 30, 2018 at 04:29:20PM +0200, Peter Zijlstra wrote: > Also, once it all works, they should look at switching to _relaxed > atomics for LL/SC. A little something like so.. should save a few smp_mb(). --- diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 4e0072730241..714b54c308b0 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -44,7 +44,7 @@ static inline void atomic_##op(int i, atomic_t *v) \ } \ #define ATOMIC_OP_RETURN(op, c_op, asm_op) \ -static inline int atomic_##op##_return(int i, atomic_t *v) \ +static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \ { \ unsigned int val; \ \ @@ -69,8 +69,11 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \ return val; \ } +#define atomic_add_return_relaxed atomic_add_return_relaxed +#define atomic_sub_return_relaxed atomic_sub_return_relaxed + #define ATOMIC_FETCH_OP(op, c_op, asm_op) \ -static inline int atomic_fetch_##op(int i, atomic_t *v) \ +static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \ { \ unsigned int val, orig; \ \ @@ -96,6 +99,14 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ return orig; \ } +#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed +#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed + +#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed +#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed +#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed + #else /* !CONFIG_ARC_HAS_LLSC */ #ifndef CONFIG_SMP @@ -379,7 +390,7 @@ static inline void atomic64_##op(long long a, atomic64_t *v) \ } \ #define ATOMIC64_OP_RETURN(op, op1, op2) \ -static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ +static inline long long atomic64_##op##_return_relaxed(long long a, atomic64_t *v) \ { \ unsigned long long val; \ \ @@ -401,8 +412,11 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ return val; \ } +#define atomic64_add_return_relaxed atomic64_add_return_relaxed +#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed + #define ATOMIC64_FETCH_OP(op, op1, op2) \ -static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ +static inline long long atomic64_fetch_##op##_relaxed(long long a, atomic64_t *v) \ { \ unsigned long long val, orig; \ \ @@ -424,6 +438,14 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ return orig; \ } +#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed + +#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed +#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed + #define ATOMIC64_OPS(op, op1, op2) \ ATOMIC64_OP(op, op1, op2) \ ATOMIC64_OP_RETURN(op, op1, op2) \ @@ -434,6 +456,12 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ ATOMIC64_OPS(add, add.f, adc) ATOMIC64_OPS(sub, sub.f, sbc) + +#undef ATOMIC64_OPS +#define ATOMIC64_OPS(op, op1, op2) \ + ATOMIC64_OP(op, op1, op2) \ + ATOMIC64_FETCH_OP(op, op1, op2) + 
ATOMIC64_OPS(and, and, and) ATOMIC64_OPS(andnot, bic, bic) ATOMIC64_OPS(or, or, or) ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:43 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:43 UTC (permalink / raw) To: linux-snps-arc On Thu, Aug 30, 2018@04:29:20PM +0200, Peter Zijlstra wrote: > Also, once it all works, they should look at switching to _relaxed > atomics for LL/SC. A little something like so.. should save a few smp_mb(). --- diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 4e0072730241..714b54c308b0 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -44,7 +44,7 @@ static inline void atomic_##op(int i, atomic_t *v) \ } \ #define ATOMIC_OP_RETURN(op, c_op, asm_op) \ -static inline int atomic_##op##_return(int i, atomic_t *v) \ +static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \ { \ unsigned int val; \ \ @@ -69,8 +69,11 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \ return val; \ } +#define atomic_add_return_relaxed atomic_add_return_relaxed +#define atomic_sub_return_relaxed atomic_sub_return_relaxed + #define ATOMIC_FETCH_OP(op, c_op, asm_op) \ -static inline int atomic_fetch_##op(int i, atomic_t *v) \ +static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \ { \ unsigned int val, orig; \ \ @@ -96,6 +99,14 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ return orig; \ } +#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed +#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed + +#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed +#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed +#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed + #else /* !CONFIG_ARC_HAS_LLSC */ #ifndef CONFIG_SMP @@ -379,7 +390,7 @@ static inline void atomic64_##op(long long a, atomic64_t *v) \ } \ #define ATOMIC64_OP_RETURN(op, op1, op2) \ -static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ +static inline long long atomic64_##op##_return_relaxed(long long a, atomic64_t *v) \ { \ unsigned long long val; \ \ @@ -401,8 +412,11 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ return val; \ } +#define atomic64_add_return_relaxed atomic64_add_return_relaxed +#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed + #define ATOMIC64_FETCH_OP(op, op1, op2) \ -static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ +static inline long long atomic64_fetch_##op##_relaxed(long long a, atomic64_t *v) \ { \ unsigned long long val, orig; \ \ @@ -424,6 +438,14 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ return orig; \ } +#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed + +#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed +#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed + #define ATOMIC64_OPS(op, op1, op2) \ ATOMIC64_OP(op, op1, op2) \ ATOMIC64_OP_RETURN(op, op1, op2) \ @@ -434,6 +456,12 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ ATOMIC64_OPS(add, add.f, adc) ATOMIC64_OPS(sub, sub.f, sbc) + +#undef ATOMIC64_OPS +#define ATOMIC64_OPS(op, op1, op2) \ + ATOMIC64_OP(op, op1, op2) \ + ATOMIC64_FETCH_OP(op, op1, op2) + 
ATOMIC64_OPS(and, and, and) ATOMIC64_OPS(andnot, bic, bic) ATOMIC64_OPS(or, or, or) ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:43 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:43 UTC (permalink / raw) To: Will Deacon Cc: Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 04:29:20PM +0200, Peter Zijlstra wrote: > Also, once it all works, they should look at switching to _relaxed > atomics for LL/SC. A little something like so.. should save a few smp_mb(). --- diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 4e0072730241..714b54c308b0 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -44,7 +44,7 @@ static inline void atomic_##op(int i, atomic_t *v) \ } \ #define ATOMIC_OP_RETURN(op, c_op, asm_op) \ -static inline int atomic_##op##_return(int i, atomic_t *v) \ +static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \ { \ unsigned int val; \ \ @@ -69,8 +69,11 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \ return val; \ } +#define atomic_add_return_relaxed atomic_add_return_relaxed +#define atomic_sub_return_relaxed atomic_sub_return_relaxed + #define ATOMIC_FETCH_OP(op, c_op, asm_op) \ -static inline int atomic_fetch_##op(int i, atomic_t *v) \ +static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \ { \ unsigned int val, orig; \ \ @@ -96,6 +99,14 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ return orig; \ } +#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed +#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed + +#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed +#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed +#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed + #else /* !CONFIG_ARC_HAS_LLSC */ #ifndef CONFIG_SMP @@ -379,7 +390,7 @@ static inline void atomic64_##op(long long a, atomic64_t *v) \ } \ #define ATOMIC64_OP_RETURN(op, op1, op2) \ -static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ +static inline long long atomic64_##op##_return_relaxed(long long a, atomic64_t *v) \ { \ unsigned long long val; \ \ @@ -401,8 +412,11 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ return val; \ } +#define atomic64_add_return_relaxed atomic64_add_return_relaxed +#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed + #define ATOMIC64_FETCH_OP(op, op1, op2) \ -static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ +static inline long long atomic64_fetch_##op##_relaxed(long long a, atomic64_t *v) \ { \ unsigned long long val, orig; \ \ @@ -424,6 +438,14 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ return orig; \ } +#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed + +#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed +#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed + #define ATOMIC64_OPS(op, op1, op2) \ ATOMIC64_OP(op, op1, op2) \ ATOMIC64_OP_RETURN(op, op1, op2) \ @@ -434,6 +456,12 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ ATOMIC64_OPS(add, add.f, adc) ATOMIC64_OPS(sub, 
sub.f, sbc) + +#undef ATOMIC64_OPS +#define ATOMIC64_OPS(op, op1, op2) \ + ATOMIC64_OP(op, op1, op2) \ + ATOMIC64_FETCH_OP(op, op1, op2) + ATOMIC64_OPS(and, and, and) ATOMIC64_OPS(andnot, bic, bic) ATOMIC64_OPS(or, or, or) ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:43 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 14:43 UTC (permalink / raw) To: Will Deacon Cc: linux-arch, Vineet.Gupta1, Alexey.Brodkin, linux-kernel, yamada.masahiro, tglx, linux-snps-arc, Eugeniy Paltsev, mingo, linux-arm-kernel On Thu, Aug 30, 2018 at 04:29:20PM +0200, Peter Zijlstra wrote: > Also, once it all works, they should look at switching to _relaxed > atomics for LL/SC. A little something like so.. should save a few smp_mb(). --- diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h index 4e0072730241..714b54c308b0 100644 --- a/arch/arc/include/asm/atomic.h +++ b/arch/arc/include/asm/atomic.h @@ -44,7 +44,7 @@ static inline void atomic_##op(int i, atomic_t *v) \ } \ #define ATOMIC_OP_RETURN(op, c_op, asm_op) \ -static inline int atomic_##op##_return(int i, atomic_t *v) \ +static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \ { \ unsigned int val; \ \ @@ -69,8 +69,11 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \ return val; \ } +#define atomic_add_return_relaxed atomic_add_return_relaxed +#define atomic_sub_return_relaxed atomic_sub_return_relaxed + #define ATOMIC_FETCH_OP(op, c_op, asm_op) \ -static inline int atomic_fetch_##op(int i, atomic_t *v) \ +static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \ { \ unsigned int val, orig; \ \ @@ -96,6 +99,14 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ return orig; \ } +#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed +#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed + +#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed +#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed +#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed + #else /* !CONFIG_ARC_HAS_LLSC */ #ifndef CONFIG_SMP @@ -379,7 +390,7 @@ static inline void atomic64_##op(long long a, atomic64_t *v) \ } \ #define ATOMIC64_OP_RETURN(op, op1, op2) \ -static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ +static inline long long atomic64_##op##_return_relaxed(long long a, atomic64_t *v) \ { \ unsigned long long val; \ \ @@ -401,8 +412,11 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ return val; \ } +#define atomic64_add_return_relaxed atomic64_add_return_relaxed +#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed + #define ATOMIC64_FETCH_OP(op, op1, op2) \ -static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ +static inline long long atomic64_fetch_##op##_relaxed(long long a, atomic64_t *v) \ { \ unsigned long long val, orig; \ \ @@ -424,6 +438,14 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ return orig; \ } +#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed + +#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed +#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed + #define ATOMIC64_OPS(op, op1, op2) \ ATOMIC64_OP(op, op1, op2) \ ATOMIC64_OP_RETURN(op, op1, op2) \ @@ -434,6 +456,12 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ ATOMIC64_OPS(add, add.f, adc) ATOMIC64_OPS(sub, 
sub.f, sbc) + +#undef ATOMIC64_OPS +#define ATOMIC64_OPS(op, op1, op2) \ + ATOMIC64_OP(op, op1, op2) \ + ATOMIC64_FETCH_OP(op, op1, op2) + ATOMIC64_OPS(and, and, and) ATOMIC64_OPS(andnot, bic, bic) ATOMIC64_OPS(or, or, or) ^ permalink raw reply related [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 14:43 ` Peter Zijlstra (?) (?) @ 2020-04-14 1:19 ` Vineet Gupta -1 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2020-04-14 1:19 UTC (permalink / raw) To: Peter Zijlstra, Will Deacon Cc: Eugeniy Paltsev, mingo, linux-kernel, Alexey Brodkin, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On 8/30/18 7:43 AM, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 04:29:20PM +0200, Peter Zijlstra wrote: > >> Also, once it all works, they should look at switching to _relaxed >> atomics for LL/SC. > A little something like so.. should save a few smp_mb(). Finally got to this - time for some spring cleaning ;-) > --- > > diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h > index 4e0072730241..714b54c308b0 100644 > --- a/arch/arc/include/asm/atomic.h > +++ b/arch/arc/include/asm/atomic.h > @@ -44,7 +44,7 @@ static inline void atomic_##op(int i, atomic_t *v) \ > } \ > > #define ATOMIC_OP_RETURN(op, c_op, asm_op) \ > -static inline int atomic_##op##_return(int i, atomic_t *v) \ > +static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \ > { \ This being relaxed, shoudn't it also remove the smp_mb() before the operation and leave the generic code to add one / more smp_mb() as appropriate for fully ordered, acquire and release variants ? > unsigned int val; \ > \ > @@ -69,8 +69,11 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \ > return val; \ > } > > +#define atomic_add_return_relaxed atomic_add_return_relaxed > +#define atomic_sub_return_relaxed atomic_sub_return_relaxed > + > #define ATOMIC_FETCH_OP(op, c_op, asm_op) \ > -static inline int atomic_fetch_##op(int i, atomic_t *v) \ > +static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \ > { \ > unsigned int val, orig; \ > \ > @@ -96,6 +99,14 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ > return orig; \ > } > > +#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed > +#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed > + > +#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed > +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed > +#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed > +#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed > + > #else /* !CONFIG_ARC_HAS_LLSC */ > > #ifndef CONFIG_SMP > @@ -379,7 +390,7 @@ static inline void atomic64_##op(long long a, atomic64_t *v) \ > } \ > > #define ATOMIC64_OP_RETURN(op, op1, op2) \ > -static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ > +static inline long long atomic64_##op##_return_relaxed(long long a, atomic64_t *v) \ > { \ > unsigned long long val; \ > \ > @@ -401,8 +412,11 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ > return val; \ > } > > +#define atomic64_add_return_relaxed atomic64_add_return_relaxed > +#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed > + > #define ATOMIC64_FETCH_OP(op, op1, op2) \ > -static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ > +static inline long long atomic64_fetch_##op##_relaxed(long long a, atomic64_t *v) \ > { \ > unsigned long long val, orig; \ > \ > @@ -424,6 +438,14 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ > return orig; \ > } > > +#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed > +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed > + > +#define 
atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed > +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed > +#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed > +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed > + > #define ATOMIC64_OPS(op, op1, op2) \ > ATOMIC64_OP(op, op1, op2) \ > ATOMIC64_OP_RETURN(op, op1, op2) \ > @@ -434,6 +456,12 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ > > ATOMIC64_OPS(add, add.f, adc) > ATOMIC64_OPS(sub, sub.f, sbc) > + > +#undef ATOMIC64_OPS > +#define ATOMIC64_OPS(op, op1, op2) \ > + ATOMIC64_OP(op, op1, op2) \ > + ATOMIC64_FETCH_OP(op, op1, op2) > + For clarity I split off this hunk into a seperate patch as it elides generation of unused bitwise ops. > ATOMIC64_OPS(and, and, and) > ATOMIC64_OPS(andnot, bic, bic) > ATOMIC64_OPS(or, or, or) > ^ permalink raw reply [flat|nested] 89+ messages in thread
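Concretely, the fully relaxed version of the hunk Vineet is asking about would indeed drop both smp_mb() calls and keep only the LL/SC loop; a sketch along the lines of the existing ARC code (ordering is then layered on by the generic wrappers; register constraints follow the asm quoted earlier in the thread):

static inline int atomic_add_return_relaxed(int i, atomic_t *v)
{
	unsigned int val;

	__asm__ __volatile__(
	"1:	llock   %[val], [%[ctr]]	\n"
	"	add     %[val], %[val], %[i]	\n"
	"	scond   %[val], [%[ctr]]	\n"
	"	bnz     1b			\n"
	: [val]	"=&r"	(val)
	: [ctr]	"r"	(&v->counter),
	  [i]	"ir"	(i)
	: "cc");

	return val;
}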
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2020-04-14 1:19 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2020-04-14 1:19 UTC (permalink / raw) To: Peter Zijlstra, Will Deacon Cc: linux-arch, Alexey Brodkin, linux-kernel, yamada.masahiro, tglx, linux-snps-arc, Eugeniy Paltsev, mingo, linux-arm-kernel On 8/30/18 7:43 AM, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 04:29:20PM +0200, Peter Zijlstra wrote: > >> Also, once it all works, they should look at switching to _relaxed >> atomics for LL/SC. > A little something like so.. should save a few smp_mb(). Finally got to this - time for some spring cleaning ;-) > --- > > diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h > index 4e0072730241..714b54c308b0 100644 > --- a/arch/arc/include/asm/atomic.h > +++ b/arch/arc/include/asm/atomic.h > @@ -44,7 +44,7 @@ static inline void atomic_##op(int i, atomic_t *v) \ > } \ > > #define ATOMIC_OP_RETURN(op, c_op, asm_op) \ > -static inline int atomic_##op##_return(int i, atomic_t *v) \ > +static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \ > { \ This being relaxed, shoudn't it also remove the smp_mb() before the operation and leave the generic code to add one / more smp_mb() as appropriate for fully ordered, acquire and release variants ? > unsigned int val; \ > \ > @@ -69,8 +69,11 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \ > return val; \ > } > > +#define atomic_add_return_relaxed atomic_add_return_relaxed > +#define atomic_sub_return_relaxed atomic_sub_return_relaxed > + > #define ATOMIC_FETCH_OP(op, c_op, asm_op) \ > -static inline int atomic_fetch_##op(int i, atomic_t *v) \ > +static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \ > { \ > unsigned int val, orig; \ > \ > @@ -96,6 +99,14 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ > return orig; \ > } > > +#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed > +#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed > + > +#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed > +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed > +#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed > +#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed > + > #else /* !CONFIG_ARC_HAS_LLSC */ > > #ifndef CONFIG_SMP > @@ -379,7 +390,7 @@ static inline void atomic64_##op(long long a, atomic64_t *v) \ > } \ > > #define ATOMIC64_OP_RETURN(op, op1, op2) \ > -static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ > +static inline long long atomic64_##op##_return_relaxed(long long a, atomic64_t *v) \ > { \ > unsigned long long val; \ > \ > @@ -401,8 +412,11 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ > return val; \ > } > > +#define atomic64_add_return_relaxed atomic64_add_return_relaxed > +#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed > + > #define ATOMIC64_FETCH_OP(op, op1, op2) \ > -static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ > +static inline long long atomic64_fetch_##op##_relaxed(long long a, atomic64_t *v) \ > { \ > unsigned long long val, orig; \ > \ > @@ -424,6 +438,14 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ > return orig; \ > } > > +#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed > +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed > + > +#define atomic64_fetch_and_relaxed 
atomic64_fetch_and_relaxed > +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed > +#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed > +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed > + > #define ATOMIC64_OPS(op, op1, op2) \ > ATOMIC64_OP(op, op1, op2) \ > ATOMIC64_OP_RETURN(op, op1, op2) \ > @@ -434,6 +456,12 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ > > ATOMIC64_OPS(add, add.f, adc) > ATOMIC64_OPS(sub, sub.f, sbc) > + > +#undef ATOMIC64_OPS > +#define ATOMIC64_OPS(op, op1, op2) \ > + ATOMIC64_OP(op, op1, op2) \ > + ATOMIC64_FETCH_OP(op, op1, op2) > + For clarity I split off this hunk into a seperate patch as it elides generation of unused bitwise ops. > ATOMIC64_OPS(and, and, and) > ATOMIC64_OPS(andnot, bic, bic) > ATOMIC64_OPS(or, or, or) > _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2020-04-14 1:19 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2020-04-14 1:19 UTC (permalink / raw) To: Peter Zijlstra, Will Deacon Cc: linux-arch, Alexey Brodkin, linux-kernel, yamada.masahiro, tglx, linux-snps-arc, Eugeniy Paltsev, mingo, linux-arm-kernel On 8/30/18 7:43 AM, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 04:29:20PM +0200, Peter Zijlstra wrote: > >> Also, once it all works, they should look at switching to _relaxed >> atomics for LL/SC. > A little something like so.. should save a few smp_mb(). Finally got to this - time for some spring cleaning ;-) > --- > > diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h > index 4e0072730241..714b54c308b0 100644 > --- a/arch/arc/include/asm/atomic.h > +++ b/arch/arc/include/asm/atomic.h > @@ -44,7 +44,7 @@ static inline void atomic_##op(int i, atomic_t *v) \ > } \ > > #define ATOMIC_OP_RETURN(op, c_op, asm_op) \ > -static inline int atomic_##op##_return(int i, atomic_t *v) \ > +static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \ > { \ This being relaxed, shoudn't it also remove the smp_mb() before the operation and leave the generic code to add one / more smp_mb() as appropriate for fully ordered, acquire and release variants ? > unsigned int val; \ > \ > @@ -69,8 +69,11 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \ > return val; \ > } > > +#define atomic_add_return_relaxed atomic_add_return_relaxed > +#define atomic_sub_return_relaxed atomic_sub_return_relaxed > + > #define ATOMIC_FETCH_OP(op, c_op, asm_op) \ > -static inline int atomic_fetch_##op(int i, atomic_t *v) \ > +static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \ > { \ > unsigned int val, orig; \ > \ > @@ -96,6 +99,14 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ > return orig; \ > } > > +#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed > +#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed > + > +#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed > +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed > +#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed > +#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed > + > #else /* !CONFIG_ARC_HAS_LLSC */ > > #ifndef CONFIG_SMP > @@ -379,7 +390,7 @@ static inline void atomic64_##op(long long a, atomic64_t *v) \ > } \ > > #define ATOMIC64_OP_RETURN(op, op1, op2) \ > -static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ > +static inline long long atomic64_##op##_return_relaxed(long long a, atomic64_t *v) \ > { \ > unsigned long long val; \ > \ > @@ -401,8 +412,11 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ > return val; \ > } > > +#define atomic64_add_return_relaxed atomic64_add_return_relaxed > +#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed > + > #define ATOMIC64_FETCH_OP(op, op1, op2) \ > -static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ > +static inline long long atomic64_fetch_##op##_relaxed(long long a, atomic64_t *v) \ > { \ > unsigned long long val, orig; \ > \ > @@ -424,6 +438,14 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ > return orig; \ > } > > +#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed > +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed > + > +#define atomic64_fetch_and_relaxed 
atomic64_fetch_and_relaxed > +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed > +#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed > +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed > + > #define ATOMIC64_OPS(op, op1, op2) \ > ATOMIC64_OP(op, op1, op2) \ > ATOMIC64_OP_RETURN(op, op1, op2) \ > @@ -434,6 +456,12 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ > > ATOMIC64_OPS(add, add.f, adc) > ATOMIC64_OPS(sub, sub.f, sbc) > + > +#undef ATOMIC64_OPS > +#define ATOMIC64_OPS(op, op1, op2) \ > + ATOMIC64_OP(op, op1, op2) \ > + ATOMIC64_FETCH_OP(op, op1, op2) > + For clarity I split off this hunk into a seperate patch as it elides generation of unused bitwise ops. > ATOMIC64_OPS(and, and, and) > ATOMIC64_OPS(andnot, bic, bic) > ATOMIC64_OPS(or, or, or) > _______________________________________________ linux-snps-arc mailing list linux-snps-arc@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-snps-arc ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2020-04-14 1:19 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2020-04-14 1:19 UTC (permalink / raw) To: Peter Zijlstra, Will Deacon Cc: Eugeniy Paltsev, mingo, linux-kernel, Alexey Brodkin, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On 8/30/18 7:43 AM, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 04:29:20PM +0200, Peter Zijlstra wrote: > >> Also, once it all works, they should look at switching to _relaxed >> atomics for LL/SC. > A little something like so.. should save a few smp_mb(). Finally got to this - time for some spring cleaning ;-) > --- > > diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h > index 4e0072730241..714b54c308b0 100644 > --- a/arch/arc/include/asm/atomic.h > +++ b/arch/arc/include/asm/atomic.h > @@ -44,7 +44,7 @@ static inline void atomic_##op(int i, atomic_t *v) \ > } \ > > #define ATOMIC_OP_RETURN(op, c_op, asm_op) \ > -static inline int atomic_##op##_return(int i, atomic_t *v) \ > +static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \ > { \ This being relaxed, shoudn't it also remove the smp_mb() before the operation and leave the generic code to add one / more smp_mb() as appropriate for fully ordered, acquire and release variants ? > unsigned int val; \ > \ > @@ -69,8 +69,11 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \ > return val; \ > } > > +#define atomic_add_return_relaxed atomic_add_return_relaxed > +#define atomic_sub_return_relaxed atomic_sub_return_relaxed > + > #define ATOMIC_FETCH_OP(op, c_op, asm_op) \ > -static inline int atomic_fetch_##op(int i, atomic_t *v) \ > +static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \ > { \ > unsigned int val, orig; \ > \ > @@ -96,6 +99,14 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ > return orig; \ > } > > +#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed > +#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed > + > +#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed > +#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed > +#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed > +#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed > + > #else /* !CONFIG_ARC_HAS_LLSC */ > > #ifndef CONFIG_SMP > @@ -379,7 +390,7 @@ static inline void atomic64_##op(long long a, atomic64_t *v) \ > } \ > > #define ATOMIC64_OP_RETURN(op, op1, op2) \ > -static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ > +static inline long long atomic64_##op##_return_relaxed(long long a, atomic64_t *v) \ > { \ > unsigned long long val; \ > \ > @@ -401,8 +412,11 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \ > return val; \ > } > > +#define atomic64_add_return_relaxed atomic64_add_return_relaxed > +#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed > + > #define ATOMIC64_FETCH_OP(op, op1, op2) \ > -static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ > +static inline long long atomic64_fetch_##op##_relaxed(long long a, atomic64_t *v) \ > { \ > unsigned long long val, orig; \ > \ > @@ -424,6 +438,14 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ > return orig; \ > } > > +#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed > +#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed > + > +#define atomic64_fetch_and_relaxed 
atomic64_fetch_and_relaxed > +#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed > +#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed > +#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed > + > #define ATOMIC64_OPS(op, op1, op2) \ > ATOMIC64_OP(op, op1, op2) \ > ATOMIC64_OP_RETURN(op, op1, op2) \ > @@ -434,6 +456,12 @@ static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \ > > ATOMIC64_OPS(add, add.f, adc) > ATOMIC64_OPS(sub, sub.f, sbc) > + > +#undef ATOMIC64_OPS > +#define ATOMIC64_OPS(op, op1, op2) \ > + ATOMIC64_OP(op, op1, op2) \ > + ATOMIC64_FETCH_OP(op, op1, op2) > + For clarity I split off this hunk into a seperate patch as it elides generation of unused bitwise ops. > ATOMIC64_OPS(and, and, and) > ATOMIC64_OPS(andnot, bic, bic) > ATOMIC64_OPS(or, or, or) > ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 14:29 ` Peter Zijlstra (?) (?) @ 2018-08-30 20:31 ` Vineet Gupta -1 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-30 20:31 UTC (permalink / raw) To: Peter Zijlstra, Will Deacon Cc: Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On 08/30/2018 07:29 AM, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 03:23:55PM +0100, Will Deacon wrote: > >> Yes, that would be worth trying. However, I also just noticed that the >> fetch-ops (which are now used to implement test_and_set_bit_lock()) seem >> to be missing the backwards branch in the LL/SC case. Yet another diff >> below. >> >> Will >> >> --->8 >> >> diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h >> index 4e0072730241..f06c5ed672b3 100644 >> --- a/arch/arc/include/asm/atomic.h >> +++ b/arch/arc/include/asm/atomic.h >> @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ >> "1: llock %[orig], [%[ctr]] \n" \ >> " " #asm_op " %[val], %[orig], %[i] \n" \ >> " scond %[val], [%[ctr]] \n" \ >> - " \n" \ >> + " bnz 1b \n" \ >> : [val] "=&r" (val), \ >> [orig] "=&r" (orig) \ >> : [ctr] "r" (&v->counter), \ > ACK!! sorry about that, no idea how I messed that up. > > Also, once it all works, they should look at switching to _relaxed > atomics for LL/SC. Indeed this is the mother of all issues, I tried and system is clearly hosed with and works after. What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) Back then we had a retry branch with backoff stuff which I'd reverted for new cores and the merge conflict somehow missed it. @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, acks etc ? So after this there are 2 other things to be addresses / looked at still while we are still here. 1. After 84c6591103db __clear_bit_lock() implementation will be broken (or atleast not consistent with what we had after), do we need to reinstate it. 2. Will's proposed change to remove the underlying issue, but the issue in #1 remains ? -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 20:31 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-30 20:31 UTC (permalink / raw) To: linux-arm-kernel On 08/30/2018 07:29 AM, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 03:23:55PM +0100, Will Deacon wrote: > >> Yes, that would be worth trying. However, I also just noticed that the >> fetch-ops (which are now used to implement test_and_set_bit_lock()) seem >> to be missing the backwards branch in the LL/SC case. Yet another diff >> below. >> >> Will >> >> --->8 >> >> diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h >> index 4e0072730241..f06c5ed672b3 100644 >> --- a/arch/arc/include/asm/atomic.h >> +++ b/arch/arc/include/asm/atomic.h >> @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ >> "1: llock %[orig], [%[ctr]] \n" \ >> " " #asm_op " %[val], %[orig], %[i] \n" \ >> " scond %[val], [%[ctr]] \n" \ >> - " \n" \ >> + " bnz 1b \n" \ >> : [val] "=&r" (val), \ >> [orig] "=&r" (orig) \ >> : [ctr] "r" (&v->counter), \ > ACK!! sorry about that, no idea how I messed that up. > > Also, once it all works, they should look at switching to _relaxed > atomics for LL/SC. Indeed this is the mother of all issues, I tried and system is clearly hosed with and works after. What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) Back then we had a retry branch with backoff stuff which I'd reverted for new cores and the merge conflict somehow missed it. @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, acks etc ? So after this there are 2 other things to be addresses / looked at still while we are still here. 1. After 84c6591103db __clear_bit_lock() implementation will be broken (or atleast not consistent with what we had after), do we need to reinstate it. 2. Will's proposed change to remove the underlying issue, but the issue in #1 remains ? -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 20:31 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-30 20:31 UTC (permalink / raw) To: linux-snps-arc On 08/30/2018 07:29 AM, Peter Zijlstra wrote: > On Thu, Aug 30, 2018@03:23:55PM +0100, Will Deacon wrote: > >> Yes, that would be worth trying. However, I also just noticed that the >> fetch-ops (which are now used to implement test_and_set_bit_lock()) seem >> to be missing the backwards branch in the LL/SC case. Yet another diff >> below. >> >> Will >> >> --->8 >> >> diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h >> index 4e0072730241..f06c5ed672b3 100644 >> --- a/arch/arc/include/asm/atomic.h >> +++ b/arch/arc/include/asm/atomic.h >> @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ >> "1: llock %[orig], [%[ctr]] \n" \ >> " " #asm_op " %[val], %[orig], %[i] \n" \ >> " scond %[val], [%[ctr]] \n" \ >> - " \n" \ >> + " bnz 1b \n" \ >> : [val] "=&r" (val), \ >> [orig] "=&r" (orig) \ >> : [ctr] "r" (&v->counter), \ > ACK!! sorry about that, no idea how I messed that up. > > Also, once it all works, they should look at switching to _relaxed > atomics for LL/SC. Indeed this is the mother of all issues, I tried and system is clearly hosed with and works after. What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) Back then we had a retry branch with backoff stuff which I'd reverted for new cores and the merge conflict somehow missed it. @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, acks etc ? So after this there are 2 other things to be addresses / looked at still while we are still here. 1. After 84c6591103db __clear_bit_lock() implementation will be broken (or atleast not consistent with what we had after), do we need to reinstate it. 2. Will's proposed change to remove the underlying issue, but the issue in #1 remains ? -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 20:31 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-30 20:31 UTC (permalink / raw) To: Peter Zijlstra, Will Deacon Cc: Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On 08/30/2018 07:29 AM, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 03:23:55PM +0100, Will Deacon wrote: > >> Yes, that would be worth trying. However, I also just noticed that the >> fetch-ops (which are now used to implement test_and_set_bit_lock()) seem >> to be missing the backwards branch in the LL/SC case. Yet another diff >> below. >> >> Will >> >> --->8 >> >> diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h >> index 4e0072730241..f06c5ed672b3 100644 >> --- a/arch/arc/include/asm/atomic.h >> +++ b/arch/arc/include/asm/atomic.h >> @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ >> "1: llock %[orig], [%[ctr]] \n" \ >> " " #asm_op " %[val], %[orig], %[i] \n" \ >> " scond %[val], [%[ctr]] \n" \ >> - " \n" \ >> + " bnz 1b \n" \ >> : [val] "=&r" (val), \ >> [orig] "=&r" (orig) \ >> : [ctr] "r" (&v->counter), \ > ACK!! sorry about that, no idea how I messed that up. > > Also, once it all works, they should look at switching to _relaxed > atomics for LL/SC. Indeed this is the mother of all issues, I tried and system is clearly hosed with and works after. What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) Back then we had a retry branch with backoff stuff which I'd reverted for new cores and the merge conflict somehow missed it. @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, acks etc ? So after this there are 2 other things to be addresses / looked at still while we are still here. 1. After 84c6591103db __clear_bit_lock() implementation will be broken (or atleast not consistent with what we had after), do we need to reinstate it. 2. Will's proposed change to remove the underlying issue, but the issue in #1 remains ? -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 20:31 ` Vineet Gupta ` (2 preceding siblings ...) (?) @ 2018-08-30 20:45 ` Peter Zijlstra -1 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 20:45 UTC (permalink / raw) To: Vineet Gupta Cc: Will Deacon, Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 08:31:59PM +0000, Vineet Gupta wrote: > On 08/30/2018 07:29 AM, Peter Zijlstra wrote: > > On Thu, Aug 30, 2018 at 03:23:55PM +0100, Will Deacon wrote: > > > >> Yes, that would be worth trying. However, I also just noticed that the > >> fetch-ops (which are now used to implement test_and_set_bit_lock()) seem > >> to be missing the backwards branch in the LL/SC case. Yet another diff > >> below. > >> > >> Will > >> > >> --->8 > >> > >> diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h > >> index 4e0072730241..f06c5ed672b3 100644 > >> --- a/arch/arc/include/asm/atomic.h > >> +++ b/arch/arc/include/asm/atomic.h > >> @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ > >> "1: llock %[orig], [%[ctr]] \n" \ > >> " " #asm_op " %[val], %[orig], %[i] \n" \ > >> " scond %[val], [%[ctr]] \n" \ > >> - " \n" \ > >> + " bnz 1b \n" \ > >> : [val] "=&r" (val), \ > >> [orig] "=&r" (orig) \ > >> : [ctr] "r" (&v->counter), \ > > ACK!! sorry about that, no idea how I messed that up. > > > > Also, once it all works, they should look at switching to _relaxed > > atomics for LL/SC. > > Indeed this is the mother of all issues, I tried and system is clearly hosed with > and works after. > What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) > Back then we had a retry branch with backoff stuff which I'd reverted for new > cores and the merge conflict somehow missed it. > > @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, > acks etc ? Well, Will spotted it, give authorship to him, you have my ack per the above. > So after this there are 2 other things to be addresses / looked at still while we > are still here. > > 1. After 84c6591103db __clear_bit_lock() implementation will be broken (or atleast > not consistent with what we had after), do we need to reinstate it. > 2. Will's proposed change to remove the underlying issue, but the issue in #1 > remains ? No, like explained, for spinlock based atomics the issue _should_ not exist, and if you look at your atomic_set() implementation for that variant, you'll see it does the right thing by taking the lock. Basically atomic_set() for spinlock based atomics ends up being (void)atomic_xchg(). FWIW, also ACK on Will's patch to switch you over to asm-generic bitops entirely. ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 20:45 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 20:45 UTC (permalink / raw) To: linux-arm-kernel On Thu, Aug 30, 2018 at 08:31:59PM +0000, Vineet Gupta wrote: > On 08/30/2018 07:29 AM, Peter Zijlstra wrote: > > On Thu, Aug 30, 2018 at 03:23:55PM +0100, Will Deacon wrote: > > > >> Yes, that would be worth trying. However, I also just noticed that the > >> fetch-ops (which are now used to implement test_and_set_bit_lock()) seem > >> to be missing the backwards branch in the LL/SC case. Yet another diff > >> below. > >> > >> Will > >> > >> --->8 > >> > >> diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h > >> index 4e0072730241..f06c5ed672b3 100644 > >> --- a/arch/arc/include/asm/atomic.h > >> +++ b/arch/arc/include/asm/atomic.h > >> @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ > >> "1: llock %[orig], [%[ctr]] \n" \ > >> " " #asm_op " %[val], %[orig], %[i] \n" \ > >> " scond %[val], [%[ctr]] \n" \ > >> - " \n" \ > >> + " bnz 1b \n" \ > >> : [val] "=&r" (val), \ > >> [orig] "=&r" (orig) \ > >> : [ctr] "r" (&v->counter), \ > > ACK!! sorry about that, no idea how I messed that up. > > > > Also, once it all works, they should look at switching to _relaxed > > atomics for LL/SC. > > Indeed this is the mother of all issues, I tried and system is clearly hosed with > and works after. > What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) > Back then we had a retry branch with backoff stuff which I'd reverted for new > cores and the merge conflict somehow missed it. > > @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, > acks etc ? Well, Will spotted it, give authorship to him, you have my ack per the above. > So after this there are 2 other things to be addresses / looked at still while we > are still here. > > 1. After 84c6591103db __clear_bit_lock() implementation will be broken (or atleast > not consistent with what we had after), do we need to reinstate it. > 2. Will's proposed change to remove the underlying issue, but the issue in #1 > remains ? No, like explained, for spinlock based atomics the issue _should_ not exist, and if you look at your atomic_set() implementation for that variant, you'll see it does the right thing by taking the lock. Basically atomic_set() for spinlock based atomics ends up being (void)atomic_xchg(). FWIW, also ACK on Will's patch to switch you over to asm-generic bitops entirely. ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 20:45 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 20:45 UTC (permalink / raw) To: linux-snps-arc On Thu, Aug 30, 2018@08:31:59PM +0000, Vineet Gupta wrote: > On 08/30/2018 07:29 AM, Peter Zijlstra wrote: > > On Thu, Aug 30, 2018@03:23:55PM +0100, Will Deacon wrote: > > > >> Yes, that would be worth trying. However, I also just noticed that the > >> fetch-ops (which are now used to implement test_and_set_bit_lock()) seem > >> to be missing the backwards branch in the LL/SC case. Yet another diff > >> below. > >> > >> Will > >> > >> --->8 > >> > >> diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h > >> index 4e0072730241..f06c5ed672b3 100644 > >> --- a/arch/arc/include/asm/atomic.h > >> +++ b/arch/arc/include/asm/atomic.h > >> @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ > >> "1: llock %[orig], [%[ctr]] \n" \ > >> " " #asm_op " %[val], %[orig], %[i] \n" \ > >> " scond %[val], [%[ctr]] \n" \ > >> - " \n" \ > >> + " bnz 1b \n" \ > >> : [val] "=&r" (val), \ > >> [orig] "=&r" (orig) \ > >> : [ctr] "r" (&v->counter), \ > > ACK!! sorry about that, no idea how I messed that up. > > > > Also, once it all works, they should look at switching to _relaxed > > atomics for LL/SC. > > Indeed this is the mother of all issues, I tried and system is clearly hosed with > and works after. > What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) > Back then we had a retry branch with backoff stuff which I'd reverted for new > cores and the merge conflict somehow missed it. > > @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, > acks etc ? Well, Will spotted it, give authorship to him, you have my ack per the above. > So after this there are 2 other things to be addresses / looked at still while we > are still here. > > 1. After 84c6591103db __clear_bit_lock() implementation will be broken (or atleast > not consistent with what we had after), do we need to reinstate it. > 2. Will's proposed change to remove the underlying issue, but the issue in #1 > remains ? No, like explained, for spinlock based atomics the issue _should_ not exist, and if you look at your atomic_set() implementation for that variant, you'll see it does the right thing by taking the lock. Basically atomic_set() for spinlock based atomics ends up being (void)atomic_xchg(). FWIW, also ACK on Will's patch to switch you over to asm-generic bitops entirely. ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 20:45 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 20:45 UTC (permalink / raw) To: Vineet Gupta Cc: Will Deacon, Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 08:31:59PM +0000, Vineet Gupta wrote: > On 08/30/2018 07:29 AM, Peter Zijlstra wrote: > > On Thu, Aug 30, 2018 at 03:23:55PM +0100, Will Deacon wrote: > > > >> Yes, that would be worth trying. However, I also just noticed that the > >> fetch-ops (which are now used to implement test_and_set_bit_lock()) seem > >> to be missing the backwards branch in the LL/SC case. Yet another diff > >> below. > >> > >> Will > >> > >> --->8 > >> > >> diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h > >> index 4e0072730241..f06c5ed672b3 100644 > >> --- a/arch/arc/include/asm/atomic.h > >> +++ b/arch/arc/include/asm/atomic.h > >> @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ > >> "1: llock %[orig], [%[ctr]] \n" \ > >> " " #asm_op " %[val], %[orig], %[i] \n" \ > >> " scond %[val], [%[ctr]] \n" \ > >> - " \n" \ > >> + " bnz 1b \n" \ > >> : [val] "=&r" (val), \ > >> [orig] "=&r" (orig) \ > >> : [ctr] "r" (&v->counter), \ > > ACK!! sorry about that, no idea how I messed that up. > > > > Also, once it all works, they should look at switching to _relaxed > > atomics for LL/SC. > > Indeed this is the mother of all issues, I tried and system is clearly hosed with > and works after. > What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) > Back then we had a retry branch with backoff stuff which I'd reverted for new > cores and the merge conflict somehow missed it. > > @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, > acks etc ? Well, Will spotted it, give authorship to him, you have my ack per the above. > So after this there are 2 other things to be addresses / looked at still while we > are still here. > > 1. After 84c6591103db __clear_bit_lock() implementation will be broken (or atleast > not consistent with what we had after), do we need to reinstate it. > 2. Will's proposed change to remove the underlying issue, but the issue in #1 > remains ? No, like explained, for spinlock based atomics the issue _should_ not exist, and if you look at your atomic_set() implementation for that variant, you'll see it does the right thing by taking the lock. Basically atomic_set() for spinlock based atomics ends up being (void)atomic_xchg(). FWIW, also ACK on Will's patch to switch you over to asm-generic bitops entirely. ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 20:45 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 20:45 UTC (permalink / raw) To: Vineet Gupta Cc: linux-arch, Alexey.Brodkin, Will Deacon, linux-kernel, yamada.masahiro, tglx, linux-snps-arc, Eugeniy Paltsev, mingo, linux-arm-kernel On Thu, Aug 30, 2018 at 08:31:59PM +0000, Vineet Gupta wrote: > On 08/30/2018 07:29 AM, Peter Zijlstra wrote: > > On Thu, Aug 30, 2018 at 03:23:55PM +0100, Will Deacon wrote: > > > >> Yes, that would be worth trying. However, I also just noticed that the > >> fetch-ops (which are now used to implement test_and_set_bit_lock()) seem > >> to be missing the backwards branch in the LL/SC case. Yet another diff > >> below. > >> > >> Will > >> > >> --->8 > >> > >> diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h > >> index 4e0072730241..f06c5ed672b3 100644 > >> --- a/arch/arc/include/asm/atomic.h > >> +++ b/arch/arc/include/asm/atomic.h > >> @@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ > >> "1: llock %[orig], [%[ctr]] \n" \ > >> " " #asm_op " %[val], %[orig], %[i] \n" \ > >> " scond %[val], [%[ctr]] \n" \ > >> - " \n" \ > >> + " bnz 1b \n" \ > >> : [val] "=&r" (val), \ > >> [orig] "=&r" (orig) \ > >> : [ctr] "r" (&v->counter), \ > > ACK!! sorry about that, no idea how I messed that up. > > > > Also, once it all works, they should look at switching to _relaxed > > atomics for LL/SC. > > Indeed this is the mother of all issues, I tried and system is clearly hosed with > and works after. > What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) > Back then we had a retry branch with backoff stuff which I'd reverted for new > cores and the merge conflict somehow missed it. > > @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, > acks etc ? Well, Will spotted it, give authorship to him, you have my ack per the above. > So after this there are 2 other things to be addresses / looked at still while we > are still here. > > 1. After 84c6591103db __clear_bit_lock() implementation will be broken (or atleast > not consistent with what we had after), do we need to reinstate it. > 2. Will's proposed change to remove the underlying issue, but the issue in #1 > remains ? No, like explained, for spinlock based atomics the issue _should_ not exist, and if you look at your atomic_set() implementation for that variant, you'll see it does the right thing by taking the lock. Basically atomic_set() for spinlock based atomics ends up being (void)atomic_xchg(). FWIW, also ACK on Will's patch to switch you over to asm-generic bitops entirely. ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 20:45 ` Peter Zijlstra (?) (?) @ 2018-08-31 0:30 ` Vineet Gupta -1 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-31 0:30 UTC (permalink / raw) To: Peter Zijlstra Cc: Will Deacon, Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On 08/30/2018 01:45 PM, Peter Zijlstra wrote: >> >> Indeed this is the mother of all issues, I tried and system is clearly hosed with >> and works after. >> What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) >> Back then we had a retry branch with backoff stuff which I'd reverted for new >> cores and the merge conflict somehow missed it. >> >> @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, >> acks etc ? > Well, Will spotted it, give authorship to him, you have my ack per the > above. Oops, sorry for the mixup. I have him as author now and pushed to ARC for-curr (will trickle into linux-next eventually). > FWIW, also ACK on Will's patch to switch you over to asm-generic bitops > entirely. Yeah I'll get it to that and your's after the clear_bit_lock stuff - sorry for hijacking this thread to that topic now -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-31 0:30 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-31 0:30 UTC (permalink / raw) To: linux-arm-kernel On 08/30/2018 01:45 PM, Peter Zijlstra wrote: >> >> Indeed this is the mother of all issues, I tried and system is clearly hosed with >> and works after. >> What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) >> Back then we had a retry branch with backoff stuff which I'd reverted for new >> cores and the merge conflict somehow missed it. >> >> @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, >> acks etc ? > Well, Will spotted it, give authorship to him, you have my ack per the > above. Oops, sorry for the mixup. I have him as author now and pushed to ARC for-curr (will trickle into linux-next eventually). > FWIW, also ACK on Will's patch to switch you over to asm-generic bitops > entirely. Yeah I'll get it to that and your's after the clear_bit_lock stuff - sorry for hijacking this thread to that topic now -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-31 0:30 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-31 0:30 UTC (permalink / raw) To: linux-snps-arc On 08/30/2018 01:45 PM, Peter Zijlstra wrote: >> >> Indeed this is the mother of all issues, I tried and system is clearly hosed with >> and works after. >> What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) >> Back then we had a retry branch with backoff stuff which I'd reverted for new >> cores and the merge conflict somehow missed it. >> >> @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, >> acks etc ? > Well, Will spotted it, give authorship to him, you have my ack per the > above. Oops, sorry for the mixup. I have him as author now and pushed to ARC for-curr (will trickle into linux-next eventually). > FWIW, also ACK on Will's patch to switch you over to asm-generic bitops > entirely. Yeah I'll get it to that and your's after the clear_bit_lock stuff - sorry for hijacking this thread to that topic now -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-31 0:30 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-31 0:30 UTC (permalink / raw) To: Peter Zijlstra Cc: Will Deacon, Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On 08/30/2018 01:45 PM, Peter Zijlstra wrote: >> >> Indeed this is the mother of all issues, I tried and system is clearly hosed with >> and works after. >> What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) >> Back then we had a retry branch with backoff stuff which I'd reverted for new >> cores and the merge conflict somehow missed it. >> >> @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, >> acks etc ? > Well, Will spotted it, give authorship to him, you have my ack per the > above. Oops, sorry for the mixup. I have him as author now and pushed to ARC for-curr (will trickle into linux-next eventually). > FWIW, also ACK on Will's patch to switch you over to asm-generic bitops > entirely. Yeah I'll get it to that and your's after the clear_bit_lock stuff - sorry for hijacking this thread to that topic now -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-31 0:30 ` Vineet Gupta (?) (?) @ 2018-08-31 9:53 ` Will Deacon -1 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-31 9:53 UTC (permalink / raw) To: Vineet Gupta Cc: Peter Zijlstra, Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On Fri, Aug 31, 2018 at 12:30:50AM +0000, Vineet Gupta wrote: > On 08/30/2018 01:45 PM, Peter Zijlstra wrote: > >> > >> Indeed this is the mother of all issues, I tried and system is clearly hosed with > >> and works after. > >> What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) > >> Back then we had a retry branch with backoff stuff which I'd reverted for new > >> cores and the merge conflict somehow missed it. > >> > >> @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, > >> acks etc ? > > Well, Will spotted it, give authorship to him, you have my ack per the > > above. > > Oops, sorry for the mixup. I have him as author now and pushed to ARC for-curr > (will trickle into linux-next eventually). Okey doke: you can have my Signed-off-by for all the diffs I sent. Just stick something like [vineet: wrote commit message] in there so I don't read it in the future and confuse myself. Will ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-31 9:53 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-31 9:53 UTC (permalink / raw) To: linux-arm-kernel On Fri, Aug 31, 2018 at 12:30:50AM +0000, Vineet Gupta wrote: > On 08/30/2018 01:45 PM, Peter Zijlstra wrote: > >> > >> Indeed this is the mother of all issues, I tried and system is clearly hosed with > >> and works after. > >> What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) > >> Back then we had a retry branch with backoff stuff which I'd reverted for new > >> cores and the merge conflict somehow missed it. > >> > >> @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, > >> acks etc ? > > Well, Will spotted it, give authorship to him, you have my ack per the > > above. > > Oops, sorry for the mixup. I have him as author now and pushed to ARC for-curr > (will trickle into linux-next eventually). Okey doke: you can have my Signed-off-by for all the diffs I sent. Just stick something like [vineet: wrote commit message] in there so I don't read it in the future and confuse myself. Will ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-31 9:53 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-31 9:53 UTC (permalink / raw) To: linux-snps-arc On Fri, Aug 31, 2018@12:30:50AM +0000, Vineet Gupta wrote: > On 08/30/2018 01:45 PM, Peter Zijlstra wrote: > >> > >> Indeed this is the mother of all issues, I tried and system is clearly hosed with > >> and works after. > >> What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) > >> Back then we had a retry branch with backoff stuff which I'd reverted for new > >> cores and the merge conflict somehow missed it. > >> > >> @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, > >> acks etc ? > > Well, Will spotted it, give authorship to him, you have my ack per the > > above. > > Oops, sorry for the mixup. I have him as author now and pushed to ARC for-curr > (will trickle into linux-next eventually). Okey doke: you can have my Signed-off-by for all the diffs I sent. Just stick something like [vineet: wrote commit message] in there so I don't read it in the future and confuse myself. Will ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-31 9:53 ` Will Deacon 0 siblings, 0 replies; 89+ messages in thread From: Will Deacon @ 2018-08-31 9:53 UTC (permalink / raw) To: Vineet Gupta Cc: Peter Zijlstra, Eugeniy Paltsev, mingo, linux-kernel, Alexey.Brodkin, tglx, linux-snps-arc, yamada.masahiro, linux-arm-kernel, linux-arch On Fri, Aug 31, 2018 at 12:30:50AM +0000, Vineet Gupta wrote: > On 08/30/2018 01:45 PM, Peter Zijlstra wrote: > >> > >> Indeed this is the mother of all issues, I tried and system is clearly hosed with > >> and works after. > >> What's amazing is the commit 4aef66c8ae9 which introduced it is from 2016 ;-) > >> Back then we had a retry branch with backoff stuff which I'd reverted for new > >> cores and the merge conflict somehow missed it. > >> > >> @PeterZ I'll create a patch with you as author ? do I need any formal sign offs, > >> acks etc ? > > Well, Will spotted it, give authorship to him, you have my ack per the > > above. > > Oops, sorry for the mixup. I have him as author now and pushed to ARC for-curr > (will trickle into linux-next eventually). Okey doke: you can have my Signed-off-by for all the diffs I sent. Just stick something like [vineet: wrote commit message] in there so I don't read it in the future and confuse myself. Will ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 14:17 ` Peter Zijlstra ` (2 preceding siblings ...) (?) @ 2018-08-30 14:46 ` Eugeniy Paltsev -1 siblings, 0 replies; 89+ messages in thread From: Eugeniy Paltsev @ 2018-08-30 14:46 UTC (permalink / raw) To: peterz, Eugeniy.Paltsev Cc: mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, will.deacon, linux-arm-kernel, linux-arch On Thu, 2018-08-30 at 16:17 +0200, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > > I can see crashes with LLSC enabled in both SMP running on 4 cores > > and SMP running on 1 core. > > So you're running on LL/SC enabled hardware; that would make Will's > patch irrelevant (although still a good idea for the hardware that does > care about that spinlocked atomic crud). > > Does something like the below cure things? That would confirm the > suggestion that the change to __clear_bit_unlock() is the curprit. I tested it - this doesn't change anything, the problem still reproduces. I'll test it with last Will fix. > If that doesn't cure things, then we've been looking in entirely the > wrong place. > > --- > diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h > index 3ae021368f48..79c6978152f8 100644 > --- a/include/asm-generic/bitops/lock.h > +++ b/include/asm-generic/bitops/lock.h > @@ -57,12 +57,7 @@ static inline void clear_bit_unlock(unsigned int nr, volatile unsigned long *p) > static inline void __clear_bit_unlock(unsigned int nr, > volatile unsigned long *p) > { > - unsigned long old; > - > - p += BIT_WORD(nr); > - old = READ_ONCE(*p); > - old &= ~BIT_MASK(nr); > - atomic_long_set_release((atomic_long_t *)p, old); > + clear_bit_unlock(nr, p); > } > > /** -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:46 ` Eugeniy Paltsev 0 siblings, 0 replies; 89+ messages in thread From: Eugeniy Paltsev @ 2018-08-30 14:46 UTC (permalink / raw) To: linux-arm-kernel On Thu, 2018-08-30 at 16:17 +0200, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > > I can see crashes with LLSC enabled in both SMP running on 4 cores > > and SMP running on 1 core. > > So you're running on LL/SC enabled hardware; that would make Will's > patch irrelevant (although still a good idea for the hardware that does > care about that spinlocked atomic crud). > > Does something like the below cure things? That would confirm the > suggestion that the change to __clear_bit_unlock() is the curprit. I tested it - this doesn't change anything, the problem still reproduces. I'll test it with last Will fix. > If that doesn't cure things, then we've been looking in entirely the > wrong place. > > --- > diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h > index 3ae021368f48..79c6978152f8 100644 > --- a/include/asm-generic/bitops/lock.h > +++ b/include/asm-generic/bitops/lock.h > @@ -57,12 +57,7 @@ static inline void clear_bit_unlock(unsigned int nr, volatile unsigned long *p) > static inline void __clear_bit_unlock(unsigned int nr, > volatile unsigned long *p) > { > - unsigned long old; > - > - p += BIT_WORD(nr); > - old = READ_ONCE(*p); > - old &= ~BIT_MASK(nr); > - atomic_long_set_release((atomic_long_t *)p, old); > + clear_bit_unlock(nr, p); > } > > /** -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:46 ` Eugeniy Paltsev 0 siblings, 0 replies; 89+ messages in thread From: Eugeniy Paltsev @ 2018-08-30 14:46 UTC (permalink / raw) To: linux-snps-arc On Thu, 2018-08-30@16:17 +0200, Peter Zijlstra wrote: > On Thu, Aug 30, 2018@11:53:17AM +0000, Eugeniy Paltsev wrote: > > I can see crashes with LLSC enabled in both SMP running on 4 cores > > and SMP running on 1 core. > > So you're running on LL/SC enabled hardware; that would make Will's > patch irrelevant (although still a good idea for the hardware that does > care about that spinlocked atomic crud). > > Does something like the below cure things? That would confirm the > suggestion that the change to __clear_bit_unlock() is the curprit. I tested it - this doesn't change anything, the problem still reproduces. I'll test it with last Will fix. > If that doesn't cure things, then we've been looking in entirely the > wrong place. > > --- > diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h > index 3ae021368f48..79c6978152f8 100644 > --- a/include/asm-generic/bitops/lock.h > +++ b/include/asm-generic/bitops/lock.h > @@ -57,12 +57,7 @@ static inline void clear_bit_unlock(unsigned int nr, volatile unsigned long *p) > static inline void __clear_bit_unlock(unsigned int nr, > volatile unsigned long *p) > { > - unsigned long old; > - > - p += BIT_WORD(nr); > - old = READ_ONCE(*p); > - old &= ~BIT_MASK(nr); > - atomic_long_set_release((atomic_long_t *)p, old); > + clear_bit_unlock(nr, p); > } > > /** -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:46 ` Eugeniy Paltsev 0 siblings, 0 replies; 89+ messages in thread From: Eugeniy Paltsev @ 2018-08-30 14:46 UTC (permalink / raw) To: peterz, Eugeniy.Paltsev Cc: mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, will.deacon, linux-arm-kernel, linux-arch On Thu, 2018-08-30 at 16:17 +0200, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > > I can see crashes with LLSC enabled in both SMP running on 4 cores > > and SMP running on 1 core. > > So you're running on LL/SC enabled hardware; that would make Will's > patch irrelevant (although still a good idea for the hardware that does > care about that spinlocked atomic crud). > > Does something like the below cure things? That would confirm the > suggestion that the change to __clear_bit_unlock() is the curprit. I tested it - this doesn't change anything, the problem still reproduces. I'll test it with last Will fix. > If that doesn't cure things, then we've been looking in entirely the > wrong place. > > --- > diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h > index 3ae021368f48..79c6978152f8 100644 > --- a/include/asm-generic/bitops/lock.h > +++ b/include/asm-generic/bitops/lock.h > @@ -57,12 +57,7 @@ static inline void clear_bit_unlock(unsigned int nr, volatile unsigned long *p) > static inline void __clear_bit_unlock(unsigned int nr, > volatile unsigned long *p) > { > - unsigned long old; > - > - p += BIT_WORD(nr); > - old = READ_ONCE(*p); > - old &= ~BIT_MASK(nr); > - atomic_long_set_release((atomic_long_t *)p, old); > + clear_bit_unlock(nr, p); > } > > /** -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 14:46 ` Eugeniy Paltsev 0 siblings, 0 replies; 89+ messages in thread From: Eugeniy Paltsev @ 2018-08-30 14:46 UTC (permalink / raw) To: peterz, Eugeniy.Paltsev Cc: linux-arch, Vineet.Gupta1, Alexey.Brodkin, will.deacon, linux-kernel, yamada.masahiro, tglx, linux-snps-arc, mingo, linux-arm-kernel On Thu, 2018-08-30 at 16:17 +0200, Peter Zijlstra wrote: > On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > > I can see crashes with LLSC enabled in both SMP running on 4 cores > > and SMP running on 1 core. > > So you're running on LL/SC enabled hardware; that would make Will's > patch irrelevant (although still a good idea for the hardware that does > care about that spinlocked atomic crud). > > Does something like the below cure things? That would confirm the > suggestion that the change to __clear_bit_unlock() is the curprit. I tested it - this doesn't change anything, the problem still reproduces. I'll test it with last Will fix. > If that doesn't cure things, then we've been looking in entirely the > wrong place. > > --- > diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h > index 3ae021368f48..79c6978152f8 100644 > --- a/include/asm-generic/bitops/lock.h > +++ b/include/asm-generic/bitops/lock.h > @@ -57,12 +57,7 @@ static inline void clear_bit_unlock(unsigned int nr, volatile unsigned long *p) > static inline void __clear_bit_unlock(unsigned int nr, > volatile unsigned long *p) > { > - unsigned long old; > - > - p += BIT_WORD(nr); > - old = READ_ONCE(*p); > - old &= ~BIT_MASK(nr); > - atomic_long_set_release((atomic_long_t *)p, old); > + clear_bit_unlock(nr, p); > } > > /** -- Eugeniy Paltsev ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 14:46 ` Eugeniy Paltsev (?) (?) @ 2018-08-30 17:15 ` Peter Zijlstra -1 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 17:15 UTC (permalink / raw) To: Eugeniy Paltsev Cc: mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, will.deacon, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 02:46:16PM +0000, Eugeniy Paltsev wrote: > On Thu, 2018-08-30 at 16:17 +0200, Peter Zijlstra wrote: > > On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > > > I can see crashes with LLSC enabled in both SMP running on 4 cores > > > and SMP running on 1 core. > > > > So you're running on LL/SC enabled hardware; that would make Will's > > patch irrelevant (although still a good idea for the hardware that does > > care about that spinlocked atomic crud). > > > > Does something like the below cure things? That would confirm the > > suggestion that the change to __clear_bit_unlock() is the curprit. > > I tested it - this doesn't change anything, the problem still reproduces. OK, 'good' :-) > I'll test it with last Will fix. Yes, that missing "bnz 1b" is very very suspect. ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 17:15 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 17:15 UTC (permalink / raw) To: linux-arm-kernel On Thu, Aug 30, 2018 at 02:46:16PM +0000, Eugeniy Paltsev wrote: > On Thu, 2018-08-30 at 16:17 +0200, Peter Zijlstra wrote: > > On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > > > I can see crashes with LLSC enabled in both SMP running on 4 cores > > > and SMP running on 1 core. > > > > So you're running on LL/SC enabled hardware; that would make Will's > > patch irrelevant (although still a good idea for the hardware that does > > care about that spinlocked atomic crud). > > > > Does something like the below cure things? That would confirm the > > suggestion that the change to __clear_bit_unlock() is the curprit. > > I tested it - this doesn't change anything, the problem still reproduces. OK, 'good' :-) > I'll test it with last Will fix. Yes, that missing "bnz 1b" is very very suspect. ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 17:15 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 17:15 UTC (permalink / raw) To: linux-snps-arc On Thu, Aug 30, 2018@02:46:16PM +0000, Eugeniy Paltsev wrote: > On Thu, 2018-08-30@16:17 +0200, Peter Zijlstra wrote: > > On Thu, Aug 30, 2018@11:53:17AM +0000, Eugeniy Paltsev wrote: > > > I can see crashes with LLSC enabled in both SMP running on 4 cores > > > and SMP running on 1 core. > > > > So you're running on LL/SC enabled hardware; that would make Will's > > patch irrelevant (although still a good idea for the hardware that does > > care about that spinlocked atomic crud). > > > > Does something like the below cure things? That would confirm the > > suggestion that the change to __clear_bit_unlock() is the curprit. > > I tested it - this doesn't change anything, the problem still reproduces. OK, 'good' :-) > I'll test it with last Will fix. Yes, that missing "bnz 1b" is very very suspect. ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-30 17:15 ` Peter Zijlstra 0 siblings, 0 replies; 89+ messages in thread From: Peter Zijlstra @ 2018-08-30 17:15 UTC (permalink / raw) To: Eugeniy Paltsev Cc: mingo, linux-kernel, Alexey.Brodkin, Vineet.Gupta1, tglx, linux-snps-arc, yamada.masahiro, will.deacon, linux-arm-kernel, linux-arch On Thu, Aug 30, 2018 at 02:46:16PM +0000, Eugeniy Paltsev wrote: > On Thu, 2018-08-30 at 16:17 +0200, Peter Zijlstra wrote: > > On Thu, Aug 30, 2018 at 11:53:17AM +0000, Eugeniy Paltsev wrote: > > > I can see crashes with LLSC enabled in both SMP running on 4 cores > > > and SMP running on 1 core. > > > > So you're running on LL/SC enabled hardware; that would make Will's > > patch irrelevant (although still a good idea for the hardware that does > > care about that spinlocked atomic crud). > > > > Does something like the below cure things? That would confirm the > > suggestion that the change to __clear_bit_unlock() is the curprit. > > I tested it - this doesn't change anything, the problem still reproduces. OK, 'good' :-) > I'll test it with last Will fix. Yes, that missing "bnz 1b" is very very suspect. ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash 2018-08-30 9:51 ` Will Deacon (?) (?) @ 2018-08-31 0:42 ` Vineet Gupta -1 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-31 0:42 UTC (permalink / raw) To: Will Deacon, Peter Zijlstra Cc: Eugeniy Paltsev, linux-kernel, mingo, tglx, linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel, linux-arch On 08/30/2018 02:51 AM, Will Deacon wrote: > Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a > boils down to concurrent atomic_long_set_release() vs > atomic_long_fetch_or_acquire(), which really needs to work. I don't see how: __clear_bit_unlock() reads @old, flips a bit and then calls atomic_long_set_release() so the race is not just with set_release. static inline int test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p) { long old; unsigned long mask = (1UL << ((nr) % 32)); p += ((nr) / 32); old = atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p); return !!(old & mask); } static inline void __clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { unsigned long old; p += ((nr) / 32); old = // soem typecheck magic on *p old &= ~(1UL << ((nr) % 32)); atomic_long_set_release((atomic_long_t *)p, old); } ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-31 0:42 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-31 0:42 UTC (permalink / raw) To: linux-arm-kernel On 08/30/2018 02:51 AM, Will Deacon wrote: > Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a > boils down to concurrent atomic_long_set_release() vs > atomic_long_fetch_or_acquire(), which really needs to work. I don't see how: __clear_bit_unlock() reads @old, flips a bit and then calls atomic_long_set_release() so the race is not just with set_release. static inline int test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p) { long old; unsigned long mask = (1UL << ((nr) % 32)); p += ((nr) / 32); old = atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p); return !!(old & mask); } static inline void __clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { unsigned long old; p += ((nr) / 32); old = // soem typecheck magic on *p old &= ~(1UL << ((nr) % 32)); atomic_long_set_release((atomic_long_t *)p, old); } ^ permalink raw reply [flat|nested] 89+ messages in thread
* Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-31 0:42 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-31 0:42 UTC (permalink / raw) To: linux-snps-arc On 08/30/2018 02:51 AM, Will Deacon wrote: > Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a > boils down to concurrent atomic_long_set_release() vs > atomic_long_fetch_or_acquire(), which really needs to work. I don't see how: __clear_bit_unlock() reads @old, flips a bit and then calls atomic_long_set_release() so the race is not just with set_release. static inline int test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p) { long old; unsigned long mask = (1UL << ((nr) % 32)); p += ((nr) / 32); old = atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p); return !!(old & mask); } static inline void __clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { unsigned long old; p += ((nr) / 32); old = // soem typecheck magic on *p old &= ~(1UL << ((nr) % 32)); atomic_long_set_release((atomic_long_t *)p, old); } ^ permalink raw reply [flat|nested] 89+ messages in thread
* Re: Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash @ 2018-08-31 0:42 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-31 0:42 UTC (permalink / raw) To: Will Deacon, Peter Zijlstra Cc: Eugeniy Paltsev, linux-kernel, mingo, tglx, linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel, linux-arch On 08/30/2018 02:51 AM, Will Deacon wrote: > Yeah, the bit_spin_lock()/__bit_spin_unlock() race described in f75d48644c56a > boils down to concurrent atomic_long_set_release() vs > atomic_long_fetch_or_acquire(), which really needs to work. I don't see how: __clear_bit_unlock() reads @old, flips a bit and then calls atomic_long_set_release() so the race is not just with set_release. static inline int test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p) { long old; unsigned long mask = (1UL << ((nr) % 32)); p += ((nr) / 32); old = atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p); return !!(old & mask); } static inline void __clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { unsigned long old; p += ((nr) / 32); old = // soem typecheck magic on *p old &= ~(1UL << ((nr) % 32)); atomic_long_set_release((atomic_long_t *)p, old); } ^ permalink raw reply [flat|nested] 89+ messages in thread
* __clear_bit_lock to use atomic clear_bit (was Re: Patch "asm-generic/bitops/lock.h) 2018-08-30 9:44 ` Peter Zijlstra @ 2018-08-31 0:29 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-31 0:29 UTC (permalink / raw) To: Peter Zijlstra Cc: Eugeniy Paltsev, linux-kernel, will.deacon, mingo, tglx, linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel, linux-arch On 08/30/2018 02:44 AM, Peter Zijlstra wrote: >> Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See >> commit f75d48644c56a ("bitops: Do not default to __clear_bit() for >> __clear_bit_unlock()") >> That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic >> __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() the same. >> >> This patch undoes that, which could explain the issues you see. @Peter, @Will? > Right, so the thinking is that on platforms that suffer that issue, > atomic_set*() should DTRT. And if you look at your spinlock based atomic > implementation, you'll note that atomic_set() does indeed do the right > thing. > > arch/arc/include/asm/atomic.h:108 For !LLSC atomics, ARC has always had atomic_set() DTRT, even in the git revision of 2016. The problem was not in the atomics, but in the asymmetric way the slub bit lock etc. worked (haven't checked if this has changed), i.e. slab_lock() -> bit_spin_lock() -> test_and_set_bit() # atomic slab_unlock() -> __bit_spin_unlock() -> __clear_bit() # non-atomic And with v4.19-rc1, we have essentially reverted f75d48644c56a due to 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") So what we have with 4.19-rc1 is static inline void __clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { unsigned long old; p += ((nr) / 32); old = // some typecheck magic on *p old &= ~(1UL << ((nr) % 32)); atomic_long_set_release((atomic_long_t *)p, old); } So @p is being r-m-w'ed non-atomically. The lock variant uses an atomic op... int test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p) { ... old = atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p); .... } Now I don't know why we don't see the issue with LLSC atomics; perhaps the race window shrinks due to the terser code etc. Am I missing something still? -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
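The asymmetry Vineet describes, reduced to a pair of userspace C11 helpers (a sketch with invented names; the kernel's versions live in include/linux/bit_spinlock.h and asm-generic/bitops/lock.h). The acquire side is a full atomic RMW; the release side above is a non-atomic RMW of the whole word followed by a store-release, which is only sound under the invariant that nothing else touches the word while the lock bit is held:

    #include <stdatomic.h>

    /* bit_spin_lock(0, w): atomic test-and-set of the lock bit */
    static void bit_lock_sketch(atomic_ulong *w)
    {
            while (atomic_fetch_or_explicit(w, 1UL, memory_order_acquire) & 1UL)
                    ;   /* spin until we observed the bit clear */
    }

    /* __bit_spin_unlock(0, w) after 84c6591103db: plain RMW + store-release */
    static void bit_unlock_sketch(atomic_ulong *w)
    {
            unsigned long old = atomic_load_explicit(w, memory_order_relaxed);

            old &= ~1UL;    /* concurrent changes to other bits can be lost */
            atomic_store_explicit(w, old, memory_order_release);
    }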
* __clear_bit_lock to use atomic clear_bit (was Re: Patch "asm-generic/bitops/lock.h) @ 2018-08-31 0:29 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-31 0:29 UTC (permalink / raw) To: linux-arm-kernel On 08/30/2018 02:44 AM, Peter Zijlstra wrote: >> Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See >> commit f75d48644c56a ("bitops: Do not default to __clear_bit() for >> __clear_bit_unlock()") >> That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic >> __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. >> >> This patch undoes that which could explain the issues you see. @Peter, @Will ? > Right, so the thinking is that on platforms that suffer that issue, > atomic_set*() should DTRT. And if you look at your spinlock based atomic > implementation, you'll note that atomic_set() does indeed do the right > thing. > > arch/arc/include/asm/atomic.h:108 For !LLSC atomics, ARC has always had atomic_set() DTRT even in the git revision of 2016. The problem was not in atomics, but the asymmetric way slub bit lock etc worked (haven't checked if this changed), i.e. slab_lock() -> bit_spin_lock() -> test_and_set_bit() # atomic slab_unlock() -> __bit_spin_unlock() -> __clear_bit() # non-atomic And with v4.19-rc1, we have essentially reverted f75d48644c56a due to 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") So what we have with 4.19-rc1 is static inline void __clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { unsigned long old; p += ((nr) / 32); old = // some typecheck magic on *p old &= ~(1UL << ((nr) % 32)); atomic_long_set_release((atomic_long_t *)p, old); } So @p is being r-m-w non atomically. The lock variant uses atomic op... int test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p) { ... old = atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p); .... } Now I don't know why we don't see the issue with LLSC atomics, perhaps race window reduces due to less verbose code itself etc.. Am I missing something still ? -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* __clear_bit_lock to use atomic clear_bit (was Re: Patch "asm-generic/bitops/lock.h) @ 2018-08-31 0:29 ` Vineet Gupta 0 siblings, 0 replies; 89+ messages in thread From: Vineet Gupta @ 2018-08-31 0:29 UTC (permalink / raw) To: linux-snps-arc On 08/30/2018 02:44 AM, Peter Zijlstra wrote: >> Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See >> commit f75d48644c56a ("bitops: Do not default to __clear_bit() for >> __clear_bit_unlock()") >> That made __clear_bit_unlock() use the atomic clear_bit() vs. non-atomic >> __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock() same. >> >> This patch undoes that which could explain the issues you see. @Peter, @Will ? > Right, so the thinking is that on platforms that suffer that issue, > atomic_set*() should DTRT. And if you look at your spinlock based atomic > implementation, you'll note that atomic_set() does indeed do the right > thing. > > arch/arc/include/asm/atomic.h:108 For !LLSC atomics, ARC has always had atomic_set() DTRT even in the git revision of 2016. The problem was not in atomics, but the asymmetric way slub bit lock etc worked (haven't checked if this changed), i.e. slab_lock() -> bit_spin_lock() -> test_and_set_bit() # atomic slab_unlock() -> __bit_spin_unlock() -> __clear_bit() # non-atomic And with v4.19-rc1, we have essentially reverted f75d48644c56a due to 84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()") So what we have with 4.19-rc1 is static inline void __clear_bit_unlock(unsigned int nr, volatile unsigned long *p) { unsigned long old; p += ((nr) / 32); old = // some typecheck magic on *p old &= ~(1UL << ((nr) % 32)); atomic_long_set_release((atomic_long_t *)p, old); } So @p is being r-m-w non atomically. The lock variant uses atomic op... int test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p) { ... old = atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p); .... } Now I don't know why we don't see the issue with LLSC atomics, perhaps race window reduces due to less verbose code itself etc.. Am I missing something still ? -Vineet ^ permalink raw reply [flat|nested] 89+ messages in thread
* __clear_bit_lock to use atomic clear_bit (was Re: Patch "asm-generic/bitops/lock.h)
@ 2018-08-31  0:29 ` Vineet Gupta
  0 siblings, 0 replies; 89+ messages in thread
From: Vineet Gupta @ 2018-08-31  0:29 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Eugeniy Paltsev, linux-kernel, will.deacon, mingo, tglx,
	linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel,
	linux-arch

On 08/30/2018 02:44 AM, Peter Zijlstra wrote:
>> Back in 2016, Peter had fixed this file due to a problem I reported on ARC. See
>> commit f75d48644c56a ("bitops: Do not default to __clear_bit() for
>> __clear_bit_unlock()")
>> That made __clear_bit_unlock() use the atomic clear_bit() vs. the non-atomic
>> __clear_bit(), effectively making clear_bit_unlock() and __clear_bit_unlock()
>> the same.
>>
>> This patch undoes that, which could explain the issues you see. @Peter, @Will ?
> Right, so the thinking is that on platforms that suffer that issue,
> atomic_set*() should DTRT. And if you look at your spinlock based atomic
> implementation, you'll note that atomic_set() does indeed do the right
> thing.
>
> arch/arc/include/asm/atomic.h:108

For !LLSC atomics, ARC has always had atomic_set() DTRT, even in the git
revision from 2016. The problem was not in the atomics, but in the asymmetric
way the slub bit lock etc. worked (I haven't checked whether this has since
changed), i.e.

	slab_lock()   -> bit_spin_lock()     -> test_and_set_bit()	# atomic
	slab_unlock() -> __bit_spin_unlock() -> __clear_bit()		# non-atomic

And with v4.19-rc1, we have essentially reverted f75d48644c56a due to
84c6591103db ("locking/atomics, asm-generic/bitops/lock.h: Rewrite using
atomic_fetch_*()")

So what we have with 4.19-rc1 is

	static inline void __clear_bit_unlock(unsigned int nr, volatile unsigned long *p)
	{
		unsigned long old;

		p += ((nr) / 32);
		old = READ_ONCE(*p);	/* the "typecheck magic": a plain, non-atomic load */
		old &= ~(1UL << ((nr) % 32));
		atomic_long_set_release((atomic_long_t *)p, old);
	}

So @p is being r-m-w non-atomically. The lock variant uses an atomic op...

	int test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p)
	{
		...
		old = atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p);
		...
	}

Now I don't know why we don't see the issue with LLSC atomics; perhaps the
race window shrinks because the generated code itself is leaner, etc.

Am I still missing something?

-Vineet

^ permalink raw reply	[flat|nested] 89+ messages in thread
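To make the interleaving Vineet is asking about concrete, here is a minimal
user-space sketch, assuming spinlock-backed (!LLSC) atomics and a plain
release store. All names (fetch_or_locked, clear_bit_plain_store, atomic_lock)
are invented for illustration; this is not the kernel code.

	/* Sketch only: models a !LLSC arch where atomic RMW ops are
	 * implemented under a spinlock but the unlock path does a plain
	 * store. Initialize atomic_lock with pthread_spin_init() first.
	 */
	#include <pthread.h>

	static pthread_spinlock_t atomic_lock;	/* the arch's atomic-ops lock */
	static volatile unsigned long word;	/* the bitmap word @p points at */

	/* What test_and_set_bit_lock() boils down to on such an arch. */
	static unsigned long fetch_or_locked(unsigned long mask)
	{
		unsigned long old;

		pthread_spin_lock(&atomic_lock);
		old = word;			/* A: read            */
		word = old | mask;		/* B: write back      */
		pthread_spin_unlock(&atomic_lock);
		return old;
	}

	/* __clear_bit_unlock() if the release store were a plain store. */
	static void clear_bit_plain_store(unsigned long mask)
	{
		word &= ~mask;			/* C: plain RMW store */
	}

	/*
	 * Lost-store interleaving (CPU0 owns the bit lock, CPU1 spins on it):
	 *
	 *	CPU1: A  old = word		sees the lock bit set
	 *	CPU0: C  word &= ~mask		unlock: clears the lock bit
	 *	CPU1: B  word = old | mask	writes back the STALE value,
	 *					re-setting the lock bit
	 *
	 * The unlock at C is lost: the word says "locked" but nobody owns
	 * the lock, so every later fetch_or_locked() spins forever.
	 */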
* Re: __clear_bit_lock to use atomic clear_bit (was Re: Patch "asm-generic/bitops/lock.h)
@ 2018-08-31  7:24 ` Peter Zijlstra
  0 siblings, 0 replies; 89+ messages in thread
From: Peter Zijlstra @ 2018-08-31  7:24 UTC (permalink / raw)
  To: Vineet Gupta
  Cc: Eugeniy Paltsev, linux-kernel, will.deacon, mingo, tglx,
	linux-snps-arc, Alexey Brodkin, yamada.masahiro, linux-arm-kernel,
	linux-arch

On Fri, Aug 31, 2018 at 12:29:27AM +0000, Vineet Gupta wrote:
> For !LLSC atomics, ARC has always had atomic_set() DTRT, even in the git
> revision from 2016. The problem was not in the atomics, but in the
> asymmetric way the slub bit lock etc. worked, i.e.
>
> 	slab_lock()   -> bit_spin_lock()     -> test_and_set_bit()	# atomic
> 	slab_unlock() -> __bit_spin_unlock() -> __clear_bit()		# non-atomic
>
> So @p is being r-m-w non-atomically. The lock variant uses an atomic op...
>
> Now I don't know why we don't see the issue with LLSC atomics; perhaps the
> race window shrinks because the generated code itself is leaner, etc.
>
> Am I still missing something?

Yes :-)

So there are 2 things to consider:

 1) this whole test_and_set_bit() + __clear_bit() combo only works if we
    have the guarantee that no other bit will change while we have our
    'lock' bit set. This means that @old is invariant.

 2) atomic ops and stores work as 'expected' -- which is true for all
    hardware LL/SC or CAS implementations, but not for spinlock based
    atomics.

The bug in f75d48644c56a was the atomic test_and_set losing the
__clear_bit() store. With LL/SC this cannot happen, because the competing
store (__clear_bit) will cause the SC to fail; then we'll retry, and the
second LL observes the new value.

So the main point is that test_and_set must not lose a store.
atomic_fetch_or() vs atomic_set() ensures this.

NOTE: another possible solution for spinlock based bitops is making
test_and_set 'smarter':

	spin_lock();
	val = READ_ONCE(word);
	if (!(val & bit)) {
		val |= bit;
		WRITE_ONCE(word, val);
	}
	spin_unlock();

But that is not something that works generically (it doesn't cover the
other atomic ops), and therefore atomic_set() is required to take the
spinlock too, which also cures the problem.
^ permalink raw reply [flat|nested] 89+ messages in thread
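Continuing the sketch above, the cure Peter describes for spinlock-based
atomics is for atomic_set() to take the same lock as the RMW ops
(arch/arc/include/asm/atomic.h:108 being the real-world instance he cites).
The model below reuses word and atomic_lock from the earlier sketch;
atomic_set_locked() and clear_bit_unlock_fixed() are invented names, loosely
modeled on that idea rather than copied from the ARC code.

	/* Hedged model of a spinlock-based atomic_set(): it takes the same
	 * lock as the RMW ops, so the release store can no longer land in
	 * the middle of a competing fetch_or_locked().
	 */
	static void atomic_set_locked(volatile unsigned long *p, unsigned long val)
	{
		pthread_spin_lock(&atomic_lock);
		*p = val;
		pthread_spin_unlock(&atomic_lock);
	}

	/* The 4.19-rc1 __clear_bit_unlock() shape, with the store now atomic. */
	static void clear_bit_unlock_fixed(unsigned long mask)
	{
		/* A plain load is fine here: per Peter's point 1), no bit
		 * other than our (set) lock bit changes while we own the
		 * lock, so the value read is invariant.
		 */
		unsigned long old = word;

		atomic_set_locked(&word, old & ~mask);
	}

With the store serialized against fetch_or_locked(), it completes either
before a competing read (which then sees the bit clear) or after a competing
write-back (which merely rewrote the already-set bit); in neither ordering is
the unlock lost.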
end of thread, other threads:[~2020-04-14  1:19 UTC | newest]

Thread overview: 89+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-08-29 18:33 Patch "asm-generic/bitops/lock.h: Rewrite using atomic_fetch_" causes kernel crash Eugeniy Paltsev
2018-08-29 21:16 ` Vineet Gupta
2018-08-30  9:35 ` Will Deacon
2018-08-30  9:44 ` Peter Zijlstra
2018-08-30  9:51 ` Will Deacon
2018-08-30 11:53 ` Eugeniy Paltsev
2018-08-30 13:57 ` Will Deacon
2018-08-30 14:17 ` Peter Zijlstra
2018-08-30 14:23 ` Will Deacon
2018-08-30 14:29 ` Peter Zijlstra
2018-08-30 14:43 ` Peter Zijlstra
2020-04-14  1:19 ` Vineet Gupta
2018-08-30 20:31 ` Vineet Gupta
2018-08-30 20:45 ` Peter Zijlstra
2018-08-31  0:30 ` Vineet Gupta
2018-08-31  9:53 ` Will Deacon
2018-08-30 14:46 ` Eugeniy Paltsev
2018-08-30 17:15 ` Peter Zijlstra
2018-08-31  0:42 ` Vineet Gupta
2018-08-31  0:29 ` __clear_bit_lock to use atomic clear_bit (was Re: Patch "asm-generic/bitops/lock.h) Vineet Gupta
2018-08-31  7:24 ` Peter Zijlstra