* [PATCH] asm-generic/mmiowb: Allow mmiowb_set_pending() when preemptible()
From: Will Deacon @ 2020-07-16 11:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: kernel-team, linux-riscv, linux-arch, Will Deacon, Arnd Bergmann,
	Paul Walmsley, Guo Ren, Michael Ellerman, Palmer Dabbelt

Although mmiowb() is concerned only with serialising MMIO writes occurring
in contexts where a spinlock is held, the call to mmiowb_set_pending()
from the MMIO write accessors can occur in preemptible contexts, such
as during driver probe() functions where ordering between CPUs is not
usually a concern, assuming that the task migration path provides the
necessary ordering guarantees.
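
As an illustration, a made-up probe() along the lines below (the device,
resource and register offset are all hypothetical; only the writel() call
matters) runs with preemption enabled and no spinlock held, yet the write
accessor still ends up calling mmiowb_set_pending():

  #include <linux/err.h>
  #include <linux/io.h>
  #include <linux/platform_device.h>

  #define FOO_CTRL	0x0		/* hypothetical register offset */

  static int foo_probe(struct platform_device *pdev)
  {
  	void __iomem *base = devm_platform_ioremap_resource(pdev, 0);

  	if (IS_ERR(base))
  		return PTR_ERR(base);

  	/* Preemptible context, no spinlock held. */
  	writel(0x1, base + FOO_CTRL);	/* ends up in mmiowb_set_pending() */
  	return 0;
  }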

Unfortunately, the default implementation of mmiowb_set_pending() is not
preempt-safe, as it makes use of a per-cpu variable to track its
internal state. This has been reported to generate the following splat
on riscv:

 | BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
 | caller is regmap_mmio_write32le+0x1c/0x46
 | CPU: 3 PID: 1 Comm: swapper/0 Not tainted 5.8.0-rc3-hfu+ #1
 | Call Trace:
 |  walk_stackframe+0x0/0x7a
 |  dump_stack+0x6e/0x88
 |  regmap_mmio_write32le+0x18/0x46
 |  check_preemption_disabled+0xa4/0xaa
 |  regmap_mmio_write32le+0x18/0x46
 |  regmap_mmio_write+0x26/0x44
 |  regmap_write+0x28/0x48
 |  sifive_gpio_probe+0xc0/0x1da

Although it's possible to fix the driver in this case, other splats have
been seen from other drivers, including the infamous 8250 UART, and so
it's better to address this problem in the mmiowb core itself.

Fix mmiowb_set_pending() by using raw_cpu_ptr() to get at the mmiowb
state and then only updating the 'mmiowb_pending' field if we are not
preemptible (i.e. we have a non-zero nesting count).
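
For context (not part of the change itself): raw_cpu_ptr() returns the same
per-cpu pointer as this_cpu_ptr() but skips the smp_processor_id() preemption
check that fires the splat above, and nesting_count is only non-zero while a
spinlock is held, i.e. while preemption is already disabled. The sketch below
paraphrases the neighbouring helpers in asm-generic/mmiowb.h rather than
quoting the header verbatim:

  static inline void mmiowb_spin_lock(void)
  {
  	struct mmiowb_state *ms = __mmiowb_state();

  	/* Only called from the spinlock paths, with preemption disabled. */
  	ms->nesting_count++;
  }

  static inline void mmiowb_spin_unlock(void)
  {
  	struct mmiowb_state *ms = __mmiowb_state();

  	if (unlikely(ms->mmiowb_pending)) {
  		ms->mmiowb_pending = 0;
  		mmiowb();	/* order prior MMIO writes before unlock */
  	}

  	ms->nesting_count--;
  }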

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Guo Ren <guoren@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Reported-by: Palmer Dabbelt <palmer@dabbelt.com>
Signed-off-by: Will Deacon <will@kernel.org>
---

I can queue this in the arm64 tree as a fix, as I already have some other
fixes targeting -rc6.

 include/asm-generic/mmiowb.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h
index 9439ff037b2d..5698fca3bf56 100644
--- a/include/asm-generic/mmiowb.h
+++ b/include/asm-generic/mmiowb.h
@@ -27,7 +27,7 @@
 #include <asm/smp.h>
 
 DECLARE_PER_CPU(struct mmiowb_state, __mmiowb_state);
-#define __mmiowb_state()	this_cpu_ptr(&__mmiowb_state)
+#define __mmiowb_state()	raw_cpu_ptr(&__mmiowb_state)
 #else
 #define __mmiowb_state()	arch_mmiowb_state()
 #endif	/* arch_mmiowb_state */
@@ -35,7 +35,9 @@ DECLARE_PER_CPU(struct mmiowb_state, __mmiowb_state);
 static inline void mmiowb_set_pending(void)
 {
 	struct mmiowb_state *ms = __mmiowb_state();
-	ms->mmiowb_pending = ms->nesting_count;
+
+	if (likely(ms->nesting_count))
+		ms->mmiowb_pending = ms->nesting_count;
 }
 
 static inline void mmiowb_spin_lock(void)
-- 
2.27.0.389.gc38d7665816-goog



* Re: [PATCH] asm-generic/mmiowb: Allow mmiowb_set_pending() when preemptible()
From: Emil Renner Berthing @ 2020-07-16 11:54 UTC (permalink / raw)
  To: Will Deacon
  Cc: Linux Kernel Mailing List, linux-arch, Arnd Bergmann,
	kernel-team, Guo Ren, Michael Ellerman, Paul Walmsley,
	Palmer Dabbelt, linux-riscv

On Thu, 16 Jul 2020 at 13:28, Will Deacon <will@kernel.org> wrote:
>
> Although mmiowb() is concerned only with serialising MMIO writes occurring
> in contexts where a spinlock is held, the call to mmiowb_set_pending()
> from the MMIO write accessors can occur in preemptible contexts, such
> as during driver probe() functions where ordering between CPUs is not
> usually a concern, assuming that the task migration path provides the
> necessary ordering guarantees.
>
> Unfortunately, the default implementation of mmiowb_set_pending() is not
> preempt-safe, as it makes use of a per-cpu variable to track its
> internal state. This has been reported to generate the following splat
> on riscv:
>
>  | BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
>  | caller is regmap_mmio_write32le+0x1c/0x46
>  | CPU: 3 PID: 1 Comm: swapper/0 Not tainted 5.8.0-rc3-hfu+ #1
>  | Call Trace:
>  |  walk_stackframe+0x0/0x7a
>  |  dump_stack+0x6e/0x88
>  |  regmap_mmio_write32le+0x18/0x46
>  |  check_preemption_disabled+0xa4/0xaa
>  |  regmap_mmio_write32le+0x18/0x46
>  |  regmap_mmio_write+0x26/0x44
>  |  regmap_write+0x28/0x48
>  |  sifive_gpio_probe+0xc0/0x1da
>
> Although it's possible to fix the driver in this case, other splats have
> been seen from other drivers, including the infamous 8250 UART, and so
> it's better to address this problem in the mmiowb core itself.
>
> Fix mmiowb_set_pending() by using raw_cpu_ptr() to get at the mmiowb
> state and then only updating the 'mmiowb_pending' field if we are not
> preemptible (i.e. we have a non-zero nesting count).
>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Guo Ren <guoren@kernel.org>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Reported-by: Palmer Dabbelt <palmer@dabbelt.com>

Nice. This fixes the problems I saw both in Qemu and on the HiFive Unleashed.

Btw. I was the one who originally stumbled upon this problem and sent
the mail to linux-riscv that Palmer CC'ed you on, so I think this
ought to be
Reported-by: Emil Renner Berthing <kernel@esmil.dk>

In any case you can add
Tested-by: Emil Renner Berthing <kernel@esmil.dk>

> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>
> I can queue this in the arm64 tree as a fix, as I already have some other
> fixes targeting -rc6.
>
>  include/asm-generic/mmiowb.h | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h
> index 9439ff037b2d..5698fca3bf56 100644
> --- a/include/asm-generic/mmiowb.h
> +++ b/include/asm-generic/mmiowb.h
> @@ -27,7 +27,7 @@
>  #include <asm/smp.h>
>
>  DECLARE_PER_CPU(struct mmiowb_state, __mmiowb_state);
> -#define __mmiowb_state()       this_cpu_ptr(&__mmiowb_state)
> +#define __mmiowb_state()       raw_cpu_ptr(&__mmiowb_state)
>  #else
>  #define __mmiowb_state()       arch_mmiowb_state()
>  #endif /* arch_mmiowb_state */
> @@ -35,7 +35,9 @@ DECLARE_PER_CPU(struct mmiowb_state, __mmiowb_state);
>  static inline void mmiowb_set_pending(void)
>  {
>         struct mmiowb_state *ms = __mmiowb_state();
> -       ms->mmiowb_pending = ms->nesting_count;
> +
> +       if (likely(ms->nesting_count))
> +               ms->mmiowb_pending = ms->nesting_count;
>  }
>
>  static inline void mmiowb_spin_lock(void)
> --
> 2.27.0.389.gc38d7665816-goog
>
>


* Re: [PATCH] asm-generic/mmiowb: Allow mmiowb_set_pending() when preemptible()
From: Palmer Dabbelt @ 2020-07-16 19:00 UTC (permalink / raw)
  To: will
  Cc: linux-kernel, kernel-team, linux-riscv, linux-arch, will,
	Arnd Bergmann, Paul Walmsley, guoren, mpe

On Thu, 16 Jul 2020 04:28:16 PDT (-0700), will@kernel.org wrote:
> Although mmiowb() is concerned only with serialising MMIO writes occurring
> in contexts where a spinlock is held, the call to mmiowb_set_pending()
> from the MMIO write accessors can occur in preemptible contexts, such
> as during driver probe() functions where ordering between CPUs is not
> usually a concern, assuming that the task migration path provides the
> necessary ordering guarantees.
>
> Unfortunately, the default implementation of mmiowb_set_pending() is not
> preempt-safe, as it makes use of a per-cpu variable to track its
> internal state. This has been reported to generate the following splat
> on riscv:
>
>  | BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
>  | caller is regmap_mmio_write32le+0x1c/0x46
>  | CPU: 3 PID: 1 Comm: swapper/0 Not tainted 5.8.0-rc3-hfu+ #1
>  | Call Trace:
>  |  walk_stackframe+0x0/0x7a
>  |  dump_stack+0x6e/0x88
>  |  regmap_mmio_write32le+0x18/0x46
>  |  check_preemption_disabled+0xa4/0xaa
>  |  regmap_mmio_write32le+0x18/0x46
>  |  regmap_mmio_write+0x26/0x44
>  |  regmap_write+0x28/0x48
>  |  sifive_gpio_probe+0xc0/0x1da
>
> Although it's possible to fix the driver in this case, other splats have
> been seen from other drivers, including the infamous 8250 UART, and so
> it's better to address this problem in the mmiowb core itself.
>
> Fix mmiowb_set_pending() by using raw_cpu_ptr() to get at the mmiowb
> state and then only updating the 'mmiowb_pending' field if we are not
> preemptible (i.e. we have a non-zero nesting count).
>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Guo Ren <guoren@kernel.org>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Reported-by: Palmer Dabbelt <palmer@dabbelt.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>
> I can queue this in the arm64 tree as a fix, as I already have some other
> fixes targeting -rc6.
>
>  include/asm-generic/mmiowb.h | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h
> index 9439ff037b2d..5698fca3bf56 100644
> --- a/include/asm-generic/mmiowb.h
> +++ b/include/asm-generic/mmiowb.h
> @@ -27,7 +27,7 @@
>  #include <asm/smp.h>
>
>  DECLARE_PER_CPU(struct mmiowb_state, __mmiowb_state);
> -#define __mmiowb_state()	this_cpu_ptr(&__mmiowb_state)
> +#define __mmiowb_state()	raw_cpu_ptr(&__mmiowb_state)
>  #else
>  #define __mmiowb_state()	arch_mmiowb_state()
>  #endif	/* arch_mmiowb_state */
> @@ -35,7 +35,9 @@ DECLARE_PER_CPU(struct mmiowb_state, __mmiowb_state);
>  static inline void mmiowb_set_pending(void)
>  {
>  	struct mmiowb_state *ms = __mmiowb_state();
> -	ms->mmiowb_pending = ms->nesting_count;
> +
> +	if (likely(ms->nesting_count))
> +		ms->mmiowb_pending = ms->nesting_count;
>  }
>
>  static inline void mmiowb_spin_lock(void)

Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>

The arm64 tree works for me.

Thanks!


* Re: [PATCH] asm-generic/mmiowb: Allow mmiowb_set_pending() when preemptible()
From: Will Deacon @ 2020-07-17 10:43 UTC (permalink / raw)
  To: linux-kernel, Will Deacon
  Cc: catalin.marinas, kernel-team, Paul Walmsley, Guo Ren,
	linux-riscv, Michael Ellerman, linux-arch, Palmer Dabbelt,
	Arnd Bergmann

On Thu, 16 Jul 2020 12:28:16 +0100, Will Deacon wrote:
> Although mmiowb() is concerned only with serialising MMIO writes occurring
> in contexts where a spinlock is held, the call to mmiowb_set_pending()
> from the MMIO write accessors can occur in preemptible contexts, such
> as during driver probe() functions where ordering between CPUs is not
> usually a concern, assuming that the task migration path provides the
> necessary ordering guarantees.
> 
> [...]

Applied to arm64 (for-next/fixes), thanks!

[1/1] asm-generic/mmiowb: Allow mmiowb_set_pending() when preemptible()
      https://git.kernel.org/arm64/c/bd024e82e4cd

Cheers,
-- 
Will

https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev


