linux-arm-kernel.lists.infradead.org archive mirror
* [patch 00/13] preempt: Make preempt count unconditional
@ 2020-09-14 20:42 Thomas Gleixner
  2020-09-14 20:42 ` [patch 01/13] lib/debug: Remove pointless ARCH_NO_PREEMPT dependencies Thomas Gleixner
                   ` (14 more replies)
  0 siblings, 15 replies; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, intel-gfx, Matt Turner,
	Valentin Schneider, linux-xtensa, Shuah Khan, Paul E. McKenney,
	Jeff Dike, linux-um, Josh Triplett, Steven Rostedt, rcu,
	linux-m68k, Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, Linus Torvalds, Daniel Vetter,
	linux-alpha, Mathieu Desnoyers, Andrew Morton,
	Daniel Bristot de Oliveira

Folks!

While working on various preempt count related things, I stumbled (again)
over the inconsistency of our preempt count handling.

The handling of preempt_count() is inconsistent across kernel
configurations. On kernels which have PREEMPT_COUNT=n,
preempt_disable/enable() and the lock/unlock functions do not affect the
preempt count; only local_bh_disable/enable() and the _bh variants of
locking, soft interrupt delivery, hard interrupt and NMI context affect it.

It's therefore impossible to have a consistent set of checks which provide
information about the context in which a function is called. In many cases
it makes sense to have separate functions for separate contexts, but there
are valid reasons to avoid that and handle different calling contexts
conditionally.

The lack of such indicators which work on all kernel configurations is a
constant source of trouble because developers either do not understand the
implications or try to work around this inconsistency in weird ways. Nor
do these issues seem to be caught by reviewers and testing.

Recently merged code does:

	 gfp = preemptible() ? GFP_KERNEL : GFP_ATOMIC;

Looks obviously correct, except for the fact that preemptible() is
unconditionally false for CONFIG_PREEMPT_COUNT=n, i.e. all allocations in
that code use GFP_ATOMIC on such kernels.
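
For reference, this is a direct consequence of the stub definitions which
include/linux/preempt.h provides for PREEMPT_COUNT=n kernels (trimmed to
the relevant lines):

	#define preempt_disable()		barrier()
	#define preempt_enable()		barrier()
	#define preemptible()			0

i.e. preemptible() can never return true in such a configuration, no
matter what the actual calling context is.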

Attempts to make preempt count unconditional and consistent have been
rejected in the past with handwaving performance arguments.

Freshly conducted benchmarks did not reveal any measurable impact from
enabling the preempt count unconditionally. On kernels with
CONFIG_PREEMPT_NONE or CONFIG_PREEMPT_VOLUNTARY the preempt count is only
incremented and decremented, but the result of the decrement is not
tested. In contrast, enabling CONFIG_PREEMPT, which tests the result, has
a small but measurable impact due to the conditional branch/call.
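
To illustrate, simplified from include/linux/preempt.h (tracing variants
and the exact config symbols omitted):

	/* PREEMPT_NONE / PREEMPT_VOLUNTARY: plain decrement, no test */
	#define preempt_enable() \
	do { \
		barrier(); \
		preempt_count_dec(); \
	} while (0)

	/* Full preemption: decrement and test, i.e. a conditional call */
	#define preempt_enable() \
	do { \
		barrier(); \
		if (unlikely(preempt_count_dec_and_test())) \
			__preempt_schedule(); \
	} while (0)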

It's about time to make essential functionality of the kernel consistent
across the various preemption models.

The series is also available from git:

   git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git preempt

That's the first part of a larger effort related to preempt count:

 1) The analysis of the usage sites of in_interrupt(), in_atomic() and
    in_softirq() is still ongoing, but so far buggy users are clearly the
    vast majority. There will be separate patch series (currently 46 and
    counting) to address these issues once the analysis is complete in the
    next few days.

 2) The long-discussed state tracking of local irq disable in the preempt
    count, which accounts interrupt-disabled sections as atomic and avoids
    issuing costly instructions (sti, cli, popf or their non-X86
    counterparts) when the state does not change, i.e. for nested
    irq_save() or irq_restore(). I have this working on X86 already, and
    contrary to my earlier attempts this was reasonably straightforward
    due to the recent entry/exit code consolidation. A rough, purely
    illustrative sketch of the idea follows after this list.

    What I've not done yet is to optimize the preempt count handling of
    the [un]lock_irq* operations so they handle the interrupt-disabled
    state and the preempt count modification in one go. That's an obvious
    add-on, but correctness first ...

 3) Lazy interrupt disabling as a straightforward extension of #2. This
    avoids the actual disabling at the CPU level completely: it catches an
    incoming interrupt in the low-level entry code, modifies the
    interrupt-disabled state on the return stack, notes the interrupt as
    pending in software and raises it again when interrupts are
    re-enabled. This still has a few issues which I'm hunting down
    (cpuidle is unhappy ...)
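
To make #2 a bit more concrete, here is a purely illustrative sketch. It
is not the actual implementation, and IRQ_DISABLE_OFFSET is a hypothetical
bit in the preempt count:

	/*
	 * Track "interrupts disabled" in the preempt count and only touch
	 * the hardware flag on 0 <-> 1 transitions, so nested
	 * irq_save()/irq_restore() pairs avoid the costly cli/sti/popf.
	 */
	static inline void sketch_irq_save(unsigned long *flags)
	{
		*flags = preempt_count() & IRQ_DISABLE_OFFSET;
		if (!*flags) {
			raw_local_irq_disable();	/* real cli only once */
			preempt_count_add(IRQ_DISABLE_OFFSET);
		}
	}

	static inline void sketch_irq_restore(unsigned long flags)
	{
		if (!flags) {
			preempt_count_sub(IRQ_DISABLE_OFFSET);
			raw_local_irq_enable();		/* real sti only at the outermost level */
		}
	}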

Thanks,

	tglx
---
 arch/arm/include/asm/assembler.h                                 |   11 --
 arch/arm/kernel/iwmmxt.S                                         |    2 
 arch/arm/mach-ep93xx/crunch-bits.S                               |    2 
 arch/xtensa/kernel/entry.S                                       |    2 
 drivers/gpu/drm/i915/Kconfig.debug                               |    1 
 drivers/gpu/drm/i915/i915_utils.h                                |    3 
 include/linux/bit_spinlock.h                                     |    4 -
 include/linux/lockdep.h                                          |    6 -
 include/linux/pagemap.h                                          |    4 -
 include/linux/preempt.h                                          |   37 +---------
 include/linux/uaccess.h                                          |    6 -
 kernel/Kconfig.preempt                                           |    4 -
 kernel/sched/core.c                                              |    6 -
 lib/Kconfig.debug                                                |    3 
 lib/Kconfig.debug.rej                                            |   14 +--
 tools/testing/selftests/rcutorture/configs/rcu/SRCU-t            |    1 
 tools/testing/selftests/rcutorture/configs/rcu/SRCU-u            |    1 
 tools/testing/selftests/rcutorture/configs/rcu/TINY01            |    1 
 tools/testing/selftests/rcutorture/doc/TINY_RCU.txt              |    5 -
 tools/testing/selftests/rcutorture/doc/TREE_RCU-kconfig.txt      |    1 
 tools/testing/selftests/rcutorture/formal/srcu-cbmc/src/config.h |    1 
 21 files changed, 23 insertions(+), 92 deletions(-)



* [patch 01/13] lib/debug: Remove pointless ARCH_NO_PREEMPT dependencies
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
@ 2020-09-14 20:42 ` Thomas Gleixner
  2020-09-14 20:42 ` [patch 02/13] preempt: Make preempt count unconditional Thomas Gleixner
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, intel-gfx, Matt Turner,
	Valentin Schneider, linux-xtensa, Shuah Khan, Paul E. McKenney,
	Jeff Dike, linux-um, Josh Triplett, Steven Rostedt, rcu,
	linux-m68k, Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, Linus Torvalds, Daniel Vetter,
	linux-alpha, Mathieu Desnoyers, Andrew Morton,
	Daniel Bristot de Oliveira

ARCH_NO_PREEMPT disables the selection of CONFIG_PREEMPT_VOLUNTARY and
CONFIG_PREEMPT, but architectures which set this config option still
support preempt count for hard and softirq accounting.
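
For reference, a simplified sketch of that accounting (not the exact
mainline code): the interrupt entry/exit paths update the preempt count
even with PREEMPT_COUNT=n, which is all that the context checks used by
lockdep and DEBUG_ATOMIC_SLEEP need:

	/* roughly what irq_enter()/irq_exit() do around a handler */
	preempt_count_add(HARDIRQ_OFFSET);	/* in_interrupt()/in_irq() become true */
	/* ... run the interrupt handler ... */
	preempt_count_sub(HARDIRQ_OFFSET);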

There is absolutely no reason to prevent lockdep from using the preempt
counter, nor is there a reason to prevent the enablement of
CONFIG_DEBUG_ATOMIC_SLEEP on such architectures.

Remove the dependencies; this affects ALPHA, HEXAGON, M68K and UM.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: linux-alpha@vger.kernel.org
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Cc: linux-um@lists.infradead.org
Cc: Brian Cain <bcain@codeaurora.org>
Cc: linux-hexagon@vger.kernel.org
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: linux-m68k@lists.linux-m68k.org
---
 lib/Kconfig.debug |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1161,7 +1161,7 @@ config PROVE_LOCKING
 	select DEBUG_RWSEMS
 	select DEBUG_WW_MUTEX_SLOWPATH
 	select DEBUG_LOCK_ALLOC
-	select PREEMPT_COUNT if !ARCH_NO_PREEMPT
+	select PREEMPT_COUNT
 	select TRACE_IRQFLAGS
 	default n
 	help
@@ -1323,7 +1323,6 @@ config DEBUG_ATOMIC_SLEEP
 	bool "Sleep inside atomic section checking"
 	select PREEMPT_COUNT
 	depends on DEBUG_KERNEL
-	depends on !ARCH_NO_PREEMPT
 	help
 	  If you say Y here, various routines which may sleep will become very
 	  noisy if they are called inside atomic sections: when a spinlock is



* [patch 02/13] preempt: Make preempt count unconditional
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
  2020-09-14 20:42 ` [patch 01/13] lib/debug: Remove pointless ARCH_NO_PREEMPT dependencies Thomas Gleixner
@ 2020-09-14 20:42 ` Thomas Gleixner
  2020-09-14 20:42 ` [patch 03/13] preempt: Cleanup PREEMPT_COUNT leftovers Thomas Gleixner
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, intel-gfx, Matt Turner,
	Valentin Schneider, linux-xtensa, Shuah Khan, Paul E. McKenney,
	Jeff Dike, linux-um, Josh Triplett, Steven Rostedt, rcu,
	linux-m68k, Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, Linus Torvalds, Daniel Vetter,
	linux-alpha, Mathieu Desnoyers, Andrew Morton,
	Daniel Bristot de Oliveira

The handling of preempt_count() is inconsistent across kernel
configurations. On kernels which have PREEMPT_COUNT=n,
preempt_disable/enable() and the lock/unlock functions do not affect the
preempt count; only local_bh_disable/enable() and the _bh variants of
locking, soft interrupt delivery, hard interrupt and NMI context affect it.

It's therefore impossible to have a consistent set of checks which provide
information about the context in which a function is called. In many cases
it makes sense to have separate functions for separate contexts, but there
are valid reasons to avoid that and handle different calling contexts
conditionally.

The lack of such indicators which work on all kernel configurations is a
constant source of trouble because developers either do not understand the
implications or try to work around this inconsistency in weird ways. Nor
do these issues seem to be caught by reviewers and testing.

Recently merged code does:

	 gfp = preemptible() ? GFP_KERNEL : GFP_ATOMIC;

Looks obviously correct, except for the fact that preemptible() is
unconditionally false for CONFIG_PREEMPT_COUNT=n, i.e. all allocations in
that code use GFP_ATOMIC on such kernels.

Attempts to make preempt count unconditional and consistent have been
rejected in the past with handwaving performance arguments.

Freshly conducted benchmarks did not reveal any measurable impact from
enabling the preempt count unconditionally. On kernels with
CONFIG_PREEMPT_NONE or CONFIG_PREEMPT_VOLUNTARY the preempt count is only
incremented and decremented, but the result of the decrement is not
tested. In contrast, enabling CONFIG_PREEMPT, which tests the result, has
a small but measurable impact due to the conditional branch/call.

It's about time to make essential functionality of the kernel consistent
across the various preemption models.

Enable CONFIG_PREEMPT_COUNT unconditionally. Follow-up changes will remove
the #ifdeffery and remove the config option at the end.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/Kconfig.preempt |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/kernel/Kconfig.preempt
+++ b/kernel/Kconfig.preempt
@@ -75,8 +75,7 @@ config PREEMPT_RT
 endchoice
 
 config PREEMPT_COUNT
-       bool
+       def_bool y
 
 config PREEMPTION
        bool
-       select PREEMPT_COUNT



* [patch 03/13] preempt: Cleanup PREEMPT_COUNT leftovers
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
  2020-09-14 20:42 ` [patch 01/13] lib/debug: Remove pointless ARCH_NO_PREEMPT dependencies Thomas Gleixner
  2020-09-14 20:42 ` [patch 02/13] preempt: Make preempt count unconditional Thomas Gleixner
@ 2020-09-14 20:42 ` Thomas Gleixner
  2020-09-16 10:56   ` Valentin Schneider
  2020-09-14 20:42 ` [patch 04/13] lockdep: " Thomas Gleixner
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, intel-gfx, Matt Turner,
	Valentin Schneider, linux-xtensa, Shuah Khan, Paul E. McKenney,
	Jeff Dike, linux-um, Josh Triplett, Steven Rostedt, rcu,
	Linus Torvalds, Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, linux-m68k, Daniel Vetter,
	linux-alpha, Mathieu Desnoyers, Andrew Morton,
	Daniel Bristot de Oliveira

CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
---
 include/linux/preempt.h |   37 ++++---------------------------------
 1 file changed, 4 insertions(+), 33 deletions(-)

--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -56,8 +56,7 @@
 #define PREEMPT_DISABLED	(PREEMPT_DISABLE_OFFSET + PREEMPT_ENABLED)
 
 /*
- * Disable preemption until the scheduler is running -- use an unconditional
- * value so that it also works on !PREEMPT_COUNT kernels.
+ * Disable preemption until the scheduler is running.
  *
  * Reset by start_kernel()->sched_init()->init_idle()->init_idle_preempt_count().
  */
@@ -69,7 +68,6 @@
  *
  *    preempt_count() == 2*PREEMPT_DISABLE_OFFSET
  *
- * Note: PREEMPT_DISABLE_OFFSET is 0 for !PREEMPT_COUNT kernels.
  * Note: See finish_task_switch().
  */
 #define FORK_PREEMPT_COUNT	(2*PREEMPT_DISABLE_OFFSET + PREEMPT_ENABLED)
@@ -106,11 +104,7 @@
 /*
  * The preempt_count offset after preempt_disable();
  */
-#if defined(CONFIG_PREEMPT_COUNT)
-# define PREEMPT_DISABLE_OFFSET	PREEMPT_OFFSET
-#else
-# define PREEMPT_DISABLE_OFFSET	0
-#endif
+#define PREEMPT_DISABLE_OFFSET	PREEMPT_OFFSET
 
 /*
  * The preempt_count offset after spin_lock()
@@ -122,8 +116,8 @@
  *
  *  spin_lock_bh()
  *
- * Which need to disable both preemption (CONFIG_PREEMPT_COUNT) and
- * softirqs, such that unlock sequences of:
+ * Which need to disable both preemption and softirqs, such that unlock
+ * sequences of:
  *
  *  spin_unlock();
  *  local_bh_enable();
@@ -164,8 +158,6 @@ extern void preempt_count_sub(int val);
 #define preempt_count_inc() preempt_count_add(1)
 #define preempt_count_dec() preempt_count_sub(1)
 
-#ifdef CONFIG_PREEMPT_COUNT
-
 #define preempt_disable() \
 do { \
 	preempt_count_inc(); \
@@ -231,27 +223,6 @@ do { \
 	__preempt_count_dec(); \
 } while (0)
 
-#else /* !CONFIG_PREEMPT_COUNT */
-
-/*
- * Even if we don't have any preemption, we need preempt disable/enable
- * to be barriers, so that we don't have things like get_user/put_user
- * that can cause faults and scheduling migrate into our preempt-protected
- * region.
- */
-#define preempt_disable()			barrier()
-#define sched_preempt_enable_no_resched()	barrier()
-#define preempt_enable_no_resched()		barrier()
-#define preempt_enable()			barrier()
-#define preempt_check_resched()			do { } while (0)
-
-#define preempt_disable_notrace()		barrier()
-#define preempt_enable_no_resched_notrace()	barrier()
-#define preempt_enable_notrace()		barrier()
-#define preemptible()				0
-
-#endif /* CONFIG_PREEMPT_COUNT */
-
 #ifdef MODULE
 /*
  * Modules have no business playing preemption tricks.
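
As a usage reminder, a minimal and purely hypothetical example (not part
of this patch): after this cleanup the same source always manipulates the
count, regardless of the preemption model:

	static DEFINE_PER_CPU(int, stat_count);

	static void bump_stat(void)
	{
		preempt_disable();		/* always increments the preempt count now */
		__this_cpu_inc(stat_count);
		preempt_enable();		/* always decrements; reschedules only on preemptible kernels */
	}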



* [patch 04/13] lockdep: Cleanup PREEMPT_COUNT leftovers
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
                   ` (2 preceding siblings ...)
  2020-09-14 20:42 ` [patch 03/13] preempt: Cleanup PREEMPT_COUNT leftovers Thomas Gleixner
@ 2020-09-14 20:42 ` Thomas Gleixner
  2020-09-15 16:11   ` Will Deacon
  2020-09-14 20:42 ` [patch 05/13] mm/pagemap: " Thomas Gleixner
                   ` (10 subsequent siblings)
  14 siblings, 1 reply; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, intel-gfx, Matt Turner,
	Valentin Schneider, linux-xtensa, Shuah Khan, Paul E. McKenney,
	Jeff Dike, linux-um, Josh Triplett, Steven Rostedt, rcu,
	linux-m68k, Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, Linus Torvalds, Daniel Vetter,
	linux-alpha, Mathieu Desnoyers, Andrew Morton,
	Daniel Bristot de Oliveira

CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 include/linux/lockdep.h |    6 ++----
 lib/Kconfig.debug       |    1 -
 2 files changed, 2 insertions(+), 5 deletions(-)

--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -585,16 +585,14 @@ do {									\
 
 #define lockdep_assert_preemption_enabled()				\
 do {									\
-	WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)	&&		\
-		     debug_locks			&&		\
+	WARN_ON_ONCE(debug_locks			&&		\
 		     (preempt_count() != 0		||		\
 		      !raw_cpu_read(hardirqs_enabled)));		\
 } while (0)
 
 #define lockdep_assert_preemption_disabled()				\
 do {									\
-	WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)	&&		\
-		     debug_locks			&&		\
+	WARN_ON_ONCE(debug_locks			&&		\
 		     (preempt_count() == 0		&&		\
 		      raw_cpu_read(hardirqs_enabled)));			\
 } while (0)
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1161,7 +1161,6 @@ config PROVE_LOCKING
 	select DEBUG_RWSEMS
 	select DEBUG_WW_MUTEX_SLOWPATH
 	select DEBUG_LOCK_ALLOC
-	select PREEMPT_COUNT
 	select TRACE_IRQFLAGS
 	default n
 	help
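
For context, a minimal and purely hypothetical user of the now
unconditional assertion (not part of this patch):

	static void setup_something(struct mutex *lock)
	{
		/* This path may sleep, so it must not run with preemption disabled */
		lockdep_assert_preemption_enabled();
		mutex_lock(lock);
		/* ... */
		mutex_unlock(lock);
	}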



* [patch 05/13] mm/pagemap: Cleanup PREEMPT_COUNT leftovers
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
                   ` (3 preceding siblings ...)
  2020-09-14 20:42 ` [patch 04/13] lockdep: " Thomas Gleixner
@ 2020-09-14 20:42 ` Thomas Gleixner
  2020-09-14 20:42 ` [patch 06/13] locking/bitspinlock: " Thomas Gleixner
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, intel-gfx, Matt Turner,
	Valentin Schneider, linux-xtensa, Shuah Khan, Paul E. McKenney,
	Jeff Dike, linux-um, Josh Triplett, Steven Rostedt, rcu,
	linux-m68k, Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, Linus Torvalds, Daniel Vetter,
	linux-alpha, Mathieu Desnoyers, Andrew Morton,
	Daniel Bristot de Oliveira

CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
---
 include/linux/pagemap.h |    4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -168,9 +168,7 @@ void release_pages(struct page **pages,
 static inline int __page_cache_add_speculative(struct page *page, int count)
 {
 #ifdef CONFIG_TINY_RCU
-# ifdef CONFIG_PREEMPT_COUNT
-	VM_BUG_ON(!in_atomic() && !irqs_disabled());
-# endif
+	VM_BUG_ON(preemptible());
 	/*
 	 * Preempt must be disabled here - we rely on rcu_read_lock doing
 	 * this for us.
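
Assuming the usual definition of preemptible() as (preempt_count() == 0 &&
!irqs_disabled()), the new assertion is effectively equivalent to the old
one; it just no longer compiles away silently:

	VM_BUG_ON(!in_atomic() && !irqs_disabled());	/* old, active only with PREEMPT_COUNT=y */
	VM_BUG_ON(preemptible());			/* new, always active */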



* [patch 06/13] locking/bitspinlock: Cleanup PREEMPT_COUNT leftovers
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
                   ` (4 preceding siblings ...)
  2020-09-14 20:42 ` [patch 05/13] mm/pagemap: " Thomas Gleixner
@ 2020-09-14 20:42 ` Thomas Gleixner
  2020-09-15 16:10   ` Will Deacon
  2020-09-14 20:42 ` [patch 07/13] uaccess: " Thomas Gleixner
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, intel-gfx, Matt Turner,
	Valentin Schneider, linux-xtensa, Shuah Khan, Paul E. McKenney,
	Jeff Dike, linux-um, Josh Triplett, Steven Rostedt, rcu,
	linux-m68k, Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, Linus Torvalds, Daniel Vetter,
	linux-alpha, Mathieu Desnoyers, Andrew Morton,
	Daniel Bristot de Oliveira

CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/bit_spinlock.h |    4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -90,10 +90,8 @@ static inline int bit_spin_is_locked(int
 {
 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
 	return test_bit(bitnum, addr);
-#elif defined CONFIG_PREEMPT_COUNT
-	return preempt_count();
 #else
-	return 1;
+	return preempt_count();
 #endif
 }
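
For context, a minimal and purely hypothetical usage, showing why
returning preempt_count() is a sane "is locked" indication on UP kernels
(bit_spin_lock() disables preemption):

	struct node {
		unsigned long	flags;	/* bit 0 doubles as the lock bit */
		int		value;
	};

	static void node_update(struct node *n, int v)
	{
		bit_spin_lock(0, &n->flags);	/* sets the bit and disables preemption */
		WARN_ON(!bit_spin_is_locked(0, &n->flags));
		n->value = v;
		bit_spin_unlock(0, &n->flags);
	}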
 



* [patch 07/13] uaccess: Cleanup PREEMPT_COUNT leftovers
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
                   ` (5 preceding siblings ...)
  2020-09-14 20:42 ` [patch 06/13] locking/bitspinlock: " Thomas Gleixner
@ 2020-09-14 20:42 ` Thomas Gleixner
  2020-09-14 20:42 ` [patch 08/13] sched: " Thomas Gleixner
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, intel-gfx, Matt Turner,
	Valentin Schneider, linux-xtensa, Shuah Khan, Paul E. McKenney,
	Jeff Dike, linux-um, Josh Triplett, Steven Rostedt, rcu,
	linux-m68k, Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, Linus Torvalds, Daniel Vetter,
	linux-alpha, Mathieu Desnoyers, Andrew Morton,
	Daniel Bristot de Oliveira

CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/uaccess.h |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -230,9 +230,9 @@ static inline bool pagefault_disabled(vo
  *
  * This function should only be used by the fault handlers. Other users should
  * stick to pagefault_disabled().
- * Please NEVER use preempt_disable() to disable the fault handler. With
- * !CONFIG_PREEMPT_COUNT, this is like a NOP. So the handler won't be disabled.
- * in_atomic() will report different values based on !CONFIG_PREEMPT_COUNT.
+ *
+ * Please NEVER use preempt_disable() or local_irq_disable() to disable the
+ * fault handler.
  */
 #define faulthandler_disabled() (pagefault_disabled() || in_atomic())
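
For reference, a minimal sketch of the intended pattern, which uses the
dedicated pagefault accounting instead of preempt_disable() (dst, uaddr,
len and ret are assumed to exist in the surrounding code):

	pagefault_disable();
	ret = __copy_from_user_inatomic(dst, uaddr, len);	/* fails fast, never faults pages in */
	pagefault_enable();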
 



* [patch 08/13] sched: Cleanup PREEMPT_COUNT leftovers
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
                   ` (6 preceding siblings ...)
  2020-09-14 20:42 ` [patch 07/13] uaccess: " Thomas Gleixner
@ 2020-09-14 20:42 ` Thomas Gleixner
  2020-09-16 10:56   ` Valentin Schneider
  2020-09-14 20:42 ` [patch 09/13] ARM: " Thomas Gleixner
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, intel-gfx, Matt Turner,
	Valentin Schneider, linux-xtensa, Shuah Khan, Paul E. McKenney,
	Jeff Dike, linux-um, Josh Triplett, Steven Rostedt, rcu,
	Linus Torvalds, Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, linux-m68k, Daniel Vetter,
	linux-alpha, Mathieu Desnoyers, Andrew Morton,
	Daniel Bristot de Oliveira

CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
---
 kernel/sched/core.c |    6 +-----
 lib/Kconfig.debug   |    1 -
 2 files changed, 1 insertion(+), 6 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3706,8 +3706,7 @@ asmlinkage __visible void schedule_tail(
 	 * finish_task_switch() for details.
 	 *
 	 * finish_task_switch() will drop rq->lock() and lower preempt_count
-	 * and the preempt_enable() will end up enabling preemption (on
-	 * PREEMPT_COUNT kernels).
+	 * and the preempt_enable() will end up enabling preemption.
 	 */
 
 	rq = finish_task_switch(prev);
@@ -7311,9 +7310,6 @@ void __cant_sleep(const char *file, int
 	if (irqs_disabled())
 		return;
 
-	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
-		return;
-
 	if (preempt_count() > preempt_offset)
 		return;
 
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1320,7 +1320,6 @@ config DEBUG_LOCKDEP
 
 config DEBUG_ATOMIC_SLEEP
 	bool "Sleep inside atomic section checking"
-	select PREEMPT_COUNT
 	depends on DEBUG_KERNEL
 	help
 	  If you say Y here, various routines which may sleep will become very



* [patch 09/13] ARM: Cleanup PREEMPT_COUNT leftovers
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
                   ` (7 preceding siblings ...)
  2020-09-14 20:42 ` [patch 08/13] sched: " Thomas Gleixner
@ 2020-09-14 20:42 ` Thomas Gleixner
  2020-09-14 20:42 ` [patch 10/13] xtensa: " Thomas Gleixner
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, intel-gfx, Matt Turner,
	Valentin Schneider, linux-xtensa, Shuah Khan, Paul E. McKenney,
	Jeff Dike, linux-um, Josh Triplett, Steven Rostedt, rcu,
	linux-m68k, Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, Linus Torvalds, Daniel Vetter,
	linux-alpha, Mathieu Desnoyers, Andrew Morton,
	Daniel Bristot de Oliveira

CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <linux@armlinux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm/include/asm/assembler.h   |   11 -----------
 arch/arm/kernel/iwmmxt.S           |    2 --
 arch/arm/mach-ep93xx/crunch-bits.S |    2 --
 3 files changed, 15 deletions(-)

--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -212,7 +212,6 @@
 /*
  * Increment/decrement the preempt count.
  */
-#ifdef CONFIG_PREEMPT_COUNT
 	.macro	inc_preempt_count, ti, tmp
 	ldr	\tmp, [\ti, #TI_PREEMPT]	@ get preempt count
 	add	\tmp, \tmp, #1			@ increment it
@@ -229,16 +228,6 @@
 	get_thread_info \ti
 	dec_preempt_count \ti, \tmp
 	.endm
-#else
-	.macro	inc_preempt_count, ti, tmp
-	.endm
-
-	.macro	dec_preempt_count, ti, tmp
-	.endm
-
-	.macro	dec_preempt_count_ti, ti, tmp
-	.endm
-#endif
 
 #define USERL(l, x...)				\
 9999:	x;					\
--- a/arch/arm/kernel/iwmmxt.S
+++ b/arch/arm/kernel/iwmmxt.S
@@ -94,9 +94,7 @@ ENTRY(iwmmxt_task_enable)
 	mov	r2, r2				@ cpwait
 	bl	concan_save
 
-#ifdef CONFIG_PREEMPT_COUNT
 	get_thread_info r10
-#endif
 4:	dec_preempt_count r10, r3
 	ret	r9				@ normal exit from exception
 
--- a/arch/arm/mach-ep93xx/crunch-bits.S
+++ b/arch/arm/mach-ep93xx/crunch-bits.S
@@ -191,9 +191,7 @@ ENTRY(crunch_task_enable)
 	cfldr64		mvdx15, [r0, #CRUNCH_MVDX15]
 
 1:
-#ifdef CONFIG_PREEMPT_COUNT
 	get_thread_info r10
-#endif
 2:	dec_preempt_count r10, r3
 	ret	lr
 



* [patch 10/13] xtensa: Cleanup PREEMPT_COUNT leftovers
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
                   ` (8 preceding siblings ...)
  2020-09-14 20:42 ` [patch 09/13] ARM: " Thomas Gleixner
@ 2020-09-14 20:42 ` Thomas Gleixner
  2020-09-14 20:42 ` [patch 11/13] drm/i915: " Thomas Gleixner
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall,
	Max Filippov, linux-kselftest, linux-hexagon, Will Deacon,
	Ingo Molnar, Anton Ivanov, linux-arch, Vincent Guittot,
	Brian Cain, Richard Weinberger, Russell King, David Airlie,
	Ingo Molnar, Geert Uytterhoeven, Mel Gorman, intel-gfx,
	Matt Turner, Valentin Schneider, linux-xtensa, Shuah Khan,
	Paul E. McKenney, Jeff Dike, linux-um, Josh Triplett,
	Steven Rostedt, rcu, linux-m68k, Ivan Kokshaysky, Jani Nikula,
	Rodrigo Vivi, Dietmar Eggemann, linux-arm-kernel,
	Richard Henderson, Chris Zankel, linux-mm, Linus Torvalds,
	Daniel Vetter, linux-alpha, Mathieu Desnoyers, Andrew Morton,
	Daniel Bristot de Oliveira

CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: linux-xtensa@linux-xtensa.org
---
 arch/xtensa/kernel/entry.S |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/xtensa/kernel/entry.S
+++ b/arch/xtensa/kernel/entry.S
@@ -819,7 +819,7 @@ ENTRY(debug_exception)
 	 * preemption if we have HW breakpoints to preserve DEBUGCAUSE.DBNUM
 	 * meaning.
 	 */
-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_HAVE_HW_BREAKPOINT)
+#ifdef CONFIG_HAVE_HW_BREAKPOINT
 	GET_THREAD_INFO(a2, a1)
 	l32i	a3, a2, TI_PRE_COUNT
 	addi	a3, a3, 1



* [patch 11/13] drm/i915: Cleanup PREEMPT_COUNT leftovers
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
                   ` (9 preceding siblings ...)
  2020-09-14 20:42 ` [patch 10/13] xtensa: " Thomas Gleixner
@ 2020-09-14 20:42 ` Thomas Gleixner
  2020-09-14 20:42 ` [patch 12/13] rcutorture: " Thomas Gleixner
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, David Airlie, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, Peter Zijlstra, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, Matt Turner, intel-gfx,
	linux-xtensa, Shuah Khan, Paul E. McKenney, Jeff Dike, linux-um,
	Josh Triplett, Jani Nikula, rcu, Linus Torvalds, Ivan Kokshaysky,
	Steven Rostedt, Rodrigo Vivi, Dietmar Eggemann, linux-arm-kernel,
	Richard Henderson, Chris Zankel, Max Filippov, linux-m68k,
	Valentin Schneider, Daniel Vetter, linux-alpha,
	Mathieu Desnoyers, Andrew Morton, Daniel Bristot de Oliveira

CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: intel-gfx@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
---
 drivers/gpu/drm/i915/Kconfig.debug |    1 -
 drivers/gpu/drm/i915/i915_utils.h  |    3 +--
 2 files changed, 1 insertion(+), 3 deletions(-)

--- a/drivers/gpu/drm/i915/Kconfig.debug
+++ b/drivers/gpu/drm/i915/Kconfig.debug
@@ -20,7 +20,6 @@ config DRM_I915_DEBUG
 	bool "Enable additional driver debugging"
 	depends on DRM_I915
 	select DEBUG_FS
-	select PREEMPT_COUNT
 	select I2C_CHARDEV
 	select STACKDEPOT
 	select DRM_DP_AUX_CHARDEV
--- a/drivers/gpu/drm/i915/i915_utils.h
+++ b/drivers/gpu/drm/i915/i915_utils.h
@@ -337,8 +337,7 @@ wait_remaining_ms_from_jiffies(unsigned
 						   (Wmax))
 #define wait_for(COND, MS)		_wait_for((COND), (MS) * 1000, 10, 1000)
 
-/* If CONFIG_PREEMPT_COUNT is disabled, in_atomic() always reports false. */
-#if defined(CONFIG_DRM_I915_DEBUG) && defined(CONFIG_PREEMPT_COUNT)
+#ifdef CONFIG_DRM_I915_DEBUG
 # define _WAIT_FOR_ATOMIC_CHECK(ATOMIC) WARN_ON_ONCE((ATOMIC) && !in_atomic())
 #else
 # define _WAIT_FOR_ATOMIC_CHECK(ATOMIC) do { } while (0)



* [patch 12/13] rcutorture: Cleanup PREEMPT_COUNT leftovers
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
                   ` (10 preceding siblings ...)
  2020-09-14 20:42 ` [patch 11/13] drm/i915: " Thomas Gleixner
@ 2020-09-14 20:42 ` Thomas Gleixner
  2020-09-14 20:42 ` [patch 13/13] preempt: Remove PREEMPT_COUNT from Kconfig Thomas Gleixner
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	Will Deacon, linux-kselftest, Shuah Khan, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, intel-gfx, Matt Turner,
	Valentin Schneider, linux-xtensa, Paul E. McKenney, Jeff Dike,
	linux-alpha, linux-um, Josh Triplett, Steven Rostedt, rcu,
	Linus Torvalds, Mathieu Desnoyers, Jani Nikula, Rodrigo Vivi,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, linux-m68k, Daniel Vetter,
	linux-hexagon, Ivan Kokshaysky, Andrew Morton,
	Daniel Bristot de Oliveira

CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: rcu@vger.kernel.org
Cc: linux-kselftest@vger.kernel.org
---
 tools/testing/selftests/rcutorture/configs/rcu/SRCU-t            |    1 -
 tools/testing/selftests/rcutorture/configs/rcu/SRCU-u            |    1 -
 tools/testing/selftests/rcutorture/configs/rcu/TINY01            |    1 -
 tools/testing/selftests/rcutorture/doc/TINY_RCU.txt              |    5 ++---
 tools/testing/selftests/rcutorture/doc/TREE_RCU-kconfig.txt      |    1 -
 tools/testing/selftests/rcutorture/formal/srcu-cbmc/src/config.h |    1 -
 6 files changed, 2 insertions(+), 8 deletions(-)

--- a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-t
+++ b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-t
@@ -7,4 +7,3 @@ CONFIG_RCU_TRACE=n
 CONFIG_DEBUG_LOCK_ALLOC=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
 CONFIG_DEBUG_ATOMIC_SLEEP=y
-#CHECK#CONFIG_PREEMPT_COUNT=y
--- a/tools/testing/selftests/rcutorture/configs/rcu/SRCU-u
+++ b/tools/testing/selftests/rcutorture/configs/rcu/SRCU-u
@@ -7,4 +7,3 @@ CONFIG_RCU_TRACE=n
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_PROVE_LOCKING=y
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
-CONFIG_PREEMPT_COUNT=n
--- a/tools/testing/selftests/rcutorture/configs/rcu/TINY01
+++ b/tools/testing/selftests/rcutorture/configs/rcu/TINY01
@@ -10,4 +10,3 @@ CONFIG_RCU_TRACE=n
 #CHECK#CONFIG_RCU_STALL_COMMON=n
 CONFIG_DEBUG_LOCK_ALLOC=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
-CONFIG_PREEMPT_COUNT=n
--- a/tools/testing/selftests/rcutorture/doc/TINY_RCU.txt
+++ b/tools/testing/selftests/rcutorture/doc/TINY_RCU.txt
@@ -3,11 +3,10 @@ This document gives a brief rationale fo
 
 Kconfig Parameters:
 
-CONFIG_DEBUG_LOCK_ALLOC -- Do all three and none of the three.
-CONFIG_PREEMPT_COUNT
+CONFIG_DEBUG_LOCK_ALLOC -- Do both and none of the two.
 CONFIG_RCU_TRACE
 
-The theory here is that randconfig testing will hit the other six possible
+The theory here is that randconfig testing will hit the other two possible
 combinations of these parameters.
 
 
--- a/tools/testing/selftests/rcutorture/doc/TREE_RCU-kconfig.txt
+++ b/tools/testing/selftests/rcutorture/doc/TREE_RCU-kconfig.txt
@@ -43,7 +43,6 @@ CONFIG_64BIT
 
 	Used only to check CONFIG_RCU_FANOUT value, inspection suffices.
 
-CONFIG_PREEMPT_COUNT
 CONFIG_PREEMPT_RCU
 
 	Redundant with CONFIG_PREEMPT, ignore.
--- a/tools/testing/selftests/rcutorture/formal/srcu-cbmc/src/config.h
+++ b/tools/testing/selftests/rcutorture/formal/srcu-cbmc/src/config.h
@@ -8,7 +8,6 @@
 #undef CONFIG_HOTPLUG_CPU
 #undef CONFIG_MODULES
 #undef CONFIG_NO_HZ_FULL_SYSIDLE
-#undef CONFIG_PREEMPT_COUNT
 #undef CONFIG_PREEMPT_RCU
 #undef CONFIG_PROVE_RCU
 #undef CONFIG_RCU_NOCB_CPU



* [patch 13/13] preempt: Remove PREEMPT_COUNT from Kconfig
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
                   ` (11 preceding siblings ...)
  2020-09-14 20:42 ` [patch 12/13] rcutorture: " Thomas Gleixner
@ 2020-09-14 20:42 ` Thomas Gleixner
  2020-09-14 20:54 ` [patch 00/13] preempt: Make preempt count unconditional Steven Rostedt
  2020-09-14 20:59 ` Linus Torvalds
  14 siblings, 0 replies; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 20:42 UTC (permalink / raw)
  To: LKML
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, intel-gfx, Matt Turner,
	Valentin Schneider, linux-xtensa, Shuah Khan, Paul E. McKenney,
	Jeff Dike, linux-um, Josh Triplett, Steven Rostedt, rcu,
	linux-m68k, Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, Linus Torvalds, Daniel Vetter,
	linux-alpha, Mathieu Desnoyers, Andrew Morton,
	Daniel Bristot de Oliveira

All conditionals and irritations are gone.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/Kconfig.preempt |    3 ---
 1 file changed, 3 deletions(-)

--- a/kernel/Kconfig.preempt
+++ b/kernel/Kconfig.preempt
@@ -74,8 +74,5 @@ config PREEMPT_RT
 
 endchoice
 
-config PREEMPT_COUNT
-       def_bool y
-
 config PREEMPTION
        bool



* Re: [patch 00/13] preempt: Make preempt count unconditional
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
                   ` (12 preceding siblings ...)
  2020-09-14 20:42 ` [patch 13/13] preempt: Remove PREEMPT_COUNT from Kconfig Thomas Gleixner
@ 2020-09-14 20:54 ` Steven Rostedt
  2020-09-14 20:59 ` Linus Torvalds
  14 siblings, 0 replies; 22+ messages in thread
From: Steven Rostedt @ 2020-09-14 20:54 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Juri Lelli, Peter Zijlstra, Linus Torvalds,
	Sebastian Andrzej Siewior, Joonas Lahtinen, Lai Jiangshan,
	dri-devel, Ben Segall, linux-mm, linux-kselftest, linux-hexagon,
	Will Deacon, Ingo Molnar, Anton Ivanov, linux-arch,
	Vincent Guittot, Brian Cain, Richard Weinberger, Russell King,
	David Airlie, Ingo Molnar, Geert Uytterhoeven, Mel Gorman,
	intel-gfx, Matt Turner, Valentin Schneider, linux-xtensa,
	Shuah Khan, Paul E. McKenney, Jeff Dike, linux-um, Josh Triplett,
	Jani Nikula, rcu, linux-m68k, Ivan Kokshaysky, Rodrigo Vivi,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, LKML, Daniel Vetter, linux-alpha,
	Mathieu Desnoyers, Andrew Morton, Daniel Bristot de Oliveira

On Mon, 14 Sep 2020 22:42:09 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:

> 21 files changed, 23 insertions(+), 92 deletions(-)

This alone makes it look promising, and hopefully acceptable to Linus :-)

-- Steve


* Re: [patch 00/13] preempt: Make preempt count unconditional
  2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
                   ` (13 preceding siblings ...)
  2020-09-14 20:54 ` [patch 00/13] preempt: Make preempt count unconditional Steven Rostedt
@ 2020-09-14 20:59 ` Linus Torvalds
  2020-09-14 21:55   ` Thomas Gleixner
  2020-09-15 17:25   ` Paul E. McKenney
  14 siblings, 2 replies; 22+ messages in thread
From: Linus Torvalds @ 2020-09-14 20:59 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, Linux-MM,
	open list:KERNEL SELFTEST FRAMEWORK, linux-hexagon, Will Deacon,
	Ingo Molnar, Anton Ivanov, linux-arch, Vincent Guittot,
	Brian Cain, Richard Weinberger, Russell King, David Airlie,
	Ingo Molnar, Geert Uytterhoeven, Mel Gorman, intel-gfx,
	Matt Turner, Valentin Schneider, linux-xtensa, Shuah Khan,
	Paul E. McKenney, Jeff Dike, linux-um, Josh Triplett,
	Steven Rostedt, rcu, linux-m68k, Ivan Kokshaysky, Jani Nikula,
	Rodrigo Vivi, Dietmar Eggemann, Linux ARM, Richard Henderson,
	Chris Zankel, Max Filippov, LKML, Daniel Vetter, alpha,
	Mathieu Desnoyers, Andrew Morton, Daniel Bristot de Oliveira

On Mon, Sep 14, 2020 at 1:45 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Recently merged code does:
>
>          gfp = preemptible() ? GFP_KERNEL : GFP_ATOMIC;
>
> Looks obviously correct, except for the fact that preemptible() is
> unconditionally false for CONFIG_PREEMPT_COUNT=n, i.e. all allocations in
> that code use GFP_ATOMIC on such kernels.

I don't think this is a good reason to entirely get rid of the no-preempt thing.

The above is just garbage. It's bogus. You can't do it.

Blaming the no-preempt code for this bug is extremely unfair, imho.

And the no-preempt code does help make for much better code generation
for simple spinlocks.
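
(Roughly, and simplified: on a !SMP build spin_lock()/spin_unlock() boil
down to preempt_disable()/preempt_enable(), so with PREEMPT_COUNT=n

	spin_lock(&lock);	/* -> barrier() only */
	spin_unlock(&lock);	/* -> barrier() only */

whereas with an unconditional count each side also carries the
preempt_count increment/decrement, plus on CONFIG_PREEMPT the resched
check on the decrement.)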

Where is that horribly buggy recent code? It's not in that exact
format, certainly, since 'grep' doesn't find it.

             Linus


* Re: [patch 00/13] preempt: Make preempt count unconditional
  2020-09-14 20:59 ` Linus Torvalds
@ 2020-09-14 21:55   ` Thomas Gleixner
  2020-09-15 17:25   ` Paul E. McKenney
  1 sibling, 0 replies; 22+ messages in thread
From: Thomas Gleixner @ 2020-09-14 21:55 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, Linux-MM,
	open list:KERNEL SELFTEST FRAMEWORK, linux-hexagon, Will Deacon,
	Ingo Molnar, Anton Ivanov, linux-arch, Vincent Guittot,
	Brian Cain, Richard Weinberger, Russell King, David Airlie,
	Ingo Molnar, Geert Uytterhoeven, Mel Gorman, intel-gfx,
	Matt Turner, Valentin Schneider, linux-xtensa, Shuah Khan,
	Paul E. McKenney, Jeff Dike, linux-um, Josh Triplett,
	Steven Rostedt, rcu, linux-m68k, Ivan Kokshaysky, Jani Nikula,
	Rodrigo Vivi, Dietmar Eggemann, Linux ARM, Richard Henderson,
	Chris Zankel, Max Filippov, LKML, Daniel Vetter, alpha,
	Mathieu Desnoyers, Andrew Morton, Daniel Bristot de Oliveira

On Mon, Sep 14 2020 at 13:59, Linus Torvalds wrote:
> On Mon, Sep 14, 2020 at 1:45 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>>
>> Recently merged code does:
>>
>>          gfp = preemptible() ? GFP_KERNEL : GFP_ATOMIC;
>>
>> Looks obviously correct, except for the fact that preemptible() is
>> unconditionally false for CONFIG_PREEMPT_COUNT=n, i.e. all allocations in
>> that code use GFP_ATOMIC on such kernels.
>
> I don't think this is a good reason to entirely get rid of the
> no-preempt thing.

I did not say that this is a good reason. It just illustrates the
problem.

> The above is just garbage. It's bogus. You can't do it.
>
> Blaming the no-preempt code for this bug is extremely unfair, imho.

I'm not blaming the no-preempt code. I'm blaming inconsistency and there
is no real good argument for inconsistent behaviour, TBH.

> And the no-preempt code does help make for much better code generation
> for simple spinlocks.

Yes, it does generate better code, but I tried hard to spot a difference
in various metrics exposed by perf. It's all in the noise, and I can only
spot a difference when the actual preemption check after the decrement
(which still depends on CONFIG_PREEMPT) is in place, but that's not the
case for PREEMPT_NONE or PREEMPT_VOLUNTARY kernels, where the decrement is
just a decrement w/o any conditional behind it.

> Where is that horribly buggy recent code? It's not in that exact
> format, certainly, since 'grep' doesn't find it.

Bah, that was stuff in next which got dropped again.

But just look at any check which uses preemptible(), especially those
which check !preemptible():

In the X86 #GP handler we have:

	/*
	 * To be potentially processing a kprobe fault and to trust the result
	 * from kprobe_running(), we have to be non-preemptible.
	 */
	if (!preemptible() &&
	    kprobe_running() &&
	    kprobe_fault_handler(regs, X86_TRAP_GP))
		goto exit;

and a similar check in the S390 code in kprobe_exceptions_notify(). That
all magically 'works' because that code might actually have been tested
with lockdep enabled, which enforces PREEMPT_COUNT...

The SG code has some interesting usage as well:

		if (miter->__flags & SG_MITER_ATOMIC) {
			WARN_ON_ONCE(preemptible());
			kunmap_atomic(miter->addr);

How is that WARN_ON_ONCE() supposed to catch anything? Especially as
calling code does:

	flags = SG_MITER_TO_SG;
	if (!preemptible())
		flags |= SG_MITER_ATOMIC;

which is equally useless on kernels which have PREEMPT_COUNT=n.

There are bugs which are related to in_atomic() or other in_***() usage
all over the place as well.

Inconsistency at the core level is a clear recipe for disaster, and at
some point we have to bite the bullet and accept that consistency is more
important than the non-measurable 3 cycles?

Thanks,

        tglx


* Re: [patch 06/13] locking/bitspinlock: Cleanup PREEMPT_COUNT leftovers
  2020-09-14 20:42 ` [patch 06/13] locking/bitspinlock: " Thomas Gleixner
@ 2020-09-15 16:10   ` Will Deacon
  0 siblings, 0 replies; 22+ messages in thread
From: Will Deacon @ 2020-09-15 16:10 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Juri Lelli, Peter Zijlstra, Linus Torvalds,
	Sebastian Andrzej Siewior, Joonas Lahtinen, Lai Jiangshan,
	dri-devel, Ben Segall, linux-mm, linux-kselftest, linux-hexagon,
	Shuah Khan, Ingo Molnar, Anton Ivanov, linux-arch,
	Vincent Guittot, Brian Cain, Richard Weinberger, Russell King,
	David Airlie, Ingo Molnar, Geert Uytterhoeven, Mel Gorman,
	intel-gfx, Matt Turner, Valentin Schneider, linux-xtensa,
	Paul E. McKenney, Jeff Dike, linux-um, Josh Triplett,
	Steven Rostedt, rcu, linux-m68k, Ivan Kokshaysky, Jani Nikula,
	Rodrigo Vivi, Dietmar Eggemann, linux-arm-kernel,
	Richard Henderson, Chris Zankel, Max Filippov, LKML,
	Daniel Vetter, linux-alpha, Mathieu Desnoyers, Andrew Morton,
	Daniel Bristot de Oliveira

On Mon, Sep 14, 2020 at 10:42:15PM +0200, Thomas Gleixner wrote:
> CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
> removed. Clean up the leftovers before doing so.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  include/linux/bit_spinlock.h |    4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> --- a/include/linux/bit_spinlock.h
> +++ b/include/linux/bit_spinlock.h
> @@ -90,10 +90,8 @@ static inline int bit_spin_is_locked(int
>  {
>  #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
>  	return test_bit(bitnum, addr);
> -#elif defined CONFIG_PREEMPT_COUNT
> -	return preempt_count();
>  #else
> -	return 1;
> +	return preempt_count();
>  #endif

Acked-by: Will Deacon <will@kernel.org>

Will


* Re: [patch 04/13] lockdep: Cleanup PREEMPT_COUNT leftovers
  2020-09-14 20:42 ` [patch 04/13] lockdep: " Thomas Gleixner
@ 2020-09-15 16:11   ` Will Deacon
  0 siblings, 0 replies; 22+ messages in thread
From: Will Deacon @ 2020-09-15 16:11 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Shuah Khan, Ingo Molnar,
	Anton Ivanov, linux-arch, Linus Torvalds, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, intel-gfx, Matt Turner,
	Valentin Schneider, linux-xtensa, Paul E. McKenney, Jeff Dike,
	linux-um, Josh Triplett, Steven Rostedt, rcu, linux-m68k,
	Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi, Vincent Guittot,
	Dietmar Eggemann, linux-arm-kernel, Richard Henderson,
	Chris Zankel, Max Filippov, LKML, Daniel Vetter, linux-alpha,
	Mathieu Desnoyers, Andrew Morton, Daniel Bristot de Oliveira

On Mon, Sep 14, 2020 at 10:42:13PM +0200, Thomas Gleixner wrote:
> CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
> removed. Cleanup the leftovers before doing so.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Will Deacon <will@kernel.org>
> ---
>  include/linux/lockdep.h |    6 ++----
>  lib/Kconfig.debug       |    1 -
>  2 files changed, 2 insertions(+), 5 deletions(-)
> 
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -585,16 +585,14 @@ do {									\
>  
>  #define lockdep_assert_preemption_enabled()				\
>  do {									\
> -	WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)	&&		\
> -		     debug_locks			&&		\
> +	WARN_ON_ONCE(debug_locks			&&		\
>  		     (preempt_count() != 0		||		\
>  		      !raw_cpu_read(hardirqs_enabled)));		\
>  } while (0)
>  
>  #define lockdep_assert_preemption_disabled()				\
>  do {									\
> -	WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT)	&&		\
> -		     debug_locks			&&		\
> +	WARN_ON_ONCE(debug_locks			&&		\
>  		     (preempt_count() == 0		&&		\
>  		      raw_cpu_read(hardirqs_enabled)));			\
>  } while (0)
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -1161,7 +1161,6 @@ config PROVE_LOCKING
>  	select DEBUG_RWSEMS
>  	select DEBUG_WW_MUTEX_SLOWPATH
>  	select DEBUG_LOCK_ALLOC
> -	select PREEMPT_COUNT
>  	select TRACE_IRQFLAGS
>  	default n
>  	help
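
As an aside, a hypothetical usage sketch (the per-CPU counter below is made
up for illustration) of what these assertions check now that preempt_count()
is maintained on every configuration:

    #include <linux/lockdep.h>
    #include <linux/percpu.h>

    struct my_stats { unsigned long count; };       /* hypothetical */
    static DEFINE_PER_CPU(struct my_stats, my_stats);

    static void my_stats_account(void)
    {
            /* Warns under lockdep if the caller did not disable
             * preemption around the per-CPU update. */
            lockdep_assert_preemption_disabled();
            __this_cpu_inc(my_stats.count);
    }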

Acked-by: Will Deacon <will@kernel.org>

Will

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [patch 00/13] preempt: Make preempt count unconditional
  2020-09-14 20:59 ` Linus Torvalds
  2020-09-14 21:55   ` Thomas Gleixner
@ 2020-09-15 17:25   ` Paul E. McKenney
  1 sibling, 0 replies; 22+ messages in thread
From: Paul E. McKenney @ 2020-09-15 17:25 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, Linux-MM,
	open list:KERNEL SELFTEST FRAMEWORK, linux-hexagon, Will Deacon,
	Ingo Molnar, Anton Ivanov, linux-arch, Vincent Guittot,
	Brian Cain, Richard Weinberger, Russell King, David Airlie,
	Ingo Molnar, Geert Uytterhoeven, Mel Gorman, intel-gfx,
	Matt Turner, Valentin Schneider, linux-xtensa, Shuah Khan,
	Jeff Dike, linux-um, Josh Triplett, Steven Rostedt, rcu,
	linux-m68k, Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi,
	Thomas Gleixner, Dietmar Eggemann, Linux ARM, Richard Henderson,
	Chris Zankel, Max Filippov, LKML, Daniel Vetter, alpha,
	Mathieu Desnoyers, Andrew Morton, Daniel Bristot de Oliveira

On Mon, Sep 14, 2020 at 01:59:15PM -0700, Linus Torvalds wrote:
> On Mon, Sep 14, 2020 at 1:45 PM Thomas Gleixner <tglx@linutronix.de> wrote:
> >
> > Recently merged code does:
> >
> >          gfp = preemptible() ? GFP_KERNEL : GFP_ATOMIC;
> >
> > Looks obviously correct, except for the fact that preemptible() is
> > unconditionally false for CONFIF_PREEMPT_COUNT=n, i.e. all allocations in
> > that code use GFP_ATOMIC on such kernels.
> 
> I don't think this is a good reason to entirely get rid of the no-preempt thing.
> 
> The above is just garbage. It's bogus. You can't do it.
> 
> Blaming the no-preempt code for this bug is extremely unfair, imho.
> 
> And the no-preempt code does help make for much better code generation
> for simple spinlocks.
> 
> Where is that horribly buggy recent code? It's not in that exact
> format, certainly, since 'grep' doesn't find it.

It would be convenient for that "gfp =" code to work, as this would
allow better cache locality while invoking RCU callbacks, and would
further provide better robustness to callback floods.  The full story
is quite long, but here are the alternatives that have not yet been proven to be
abject failures:

1.	Use workqueues to do the allocations in a clean context.
	While waiting for the allocations, the callbacks are queued
	in the old cache-busting manner.  This functions correctly,
	but in the meantime (which on busy systems can be some time)
	the cache locality and robustness are lost.

2.	Provide the ability to allocate memory in raw atomic context.
	This is extremely effective, especially when used in combination
	with #1 above, but as you might suspect, the MM guys don't like
	it much.

In contrast, with Thomas's patch series, call_rcu() and kvfree_rcu()
could just look at preemptible() to see whether or not it was safe to
allocate memory, even in !PREEMPT kernels -- and in the common case,
it almost always would be safe.  It is quite possible that this approach
would work in isolation, or failing that, that adding #1 above would do
the trick.
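
For illustration, a minimal sketch of that pattern (a hypothetical helper,
not the actual call_rcu()/kvfree_rcu() code):

    #include <linux/gfp.h>
    #include <linux/preempt.h>

    /* With preempt_count() unconditional, preemptible() gives a truthful
     * answer on every preemption model, so GFP_KERNEL can be used
     * whenever sleeping is actually allowed. */
    static void *krc_try_alloc_page(void)
    {
            gfp_t gfp = preemptible() ? GFP_KERNEL : GFP_ATOMIC;

            return (void *)__get_free_page(gfp | __GFP_NOWARN);
    }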

I understand that this is all very hand-wavy, and I do apologize for that.
If you really want the full sad story with performance numbers and the
works, let me know!

							Thanx, Paul

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [patch 03/13] preempt: Clenaup PREEMPT_COUNT leftovers
  2020-09-14 20:42 ` [patch 03/13] preempt: Clenaup PREEMPT_COUNT leftovers Thomas Gleixner
@ 2020-09-16 10:56   ` Valentin Schneider
  0 siblings, 0 replies; 22+ messages in thread
From: Valentin Schneider @ 2020-09-16 10:56 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, Matt Turner, intel-gfx,
	linux-xtensa, Shuah Khan, Paul E. McKenney, Jeff Dike, linux-um,
	Josh Triplett, Steven Rostedt, rcu, Linus Torvalds,
	Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi, Dietmar Eggemann,
	linux-arm-kernel, Richard Henderson, Chris Zankel, Max Filippov,
	linux-m68k, LKML, Daniel Vetter, linux-alpha, Mathieu Desnoyers,
	Andrew Morton, Daniel Bristot de Oliveira


On 14/09/20 21:42, Thomas Gleixner wrote:
> CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
> removed. Cleanup the leftovers before doing so.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Juri Lelli <juri.lelli@redhat.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Ben Segall <bsegall@google.com>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Daniel Bristot de Oliveira <bristot@redhat.com>

Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [patch 08/13] sched: Clenaup PREEMPT_COUNT leftovers
  2020-09-14 20:42 ` [patch 08/13] sched: " Thomas Gleixner
@ 2020-09-16 10:56   ` Valentin Schneider
  0 siblings, 0 replies; 22+ messages in thread
From: Valentin Schneider @ 2020-09-16 10:56 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Juri Lelli, Peter Zijlstra, Sebastian Andrzej Siewior,
	Joonas Lahtinen, Lai Jiangshan, dri-devel, Ben Segall, linux-mm,
	linux-kselftest, linux-hexagon, Will Deacon, Ingo Molnar,
	Anton Ivanov, linux-arch, Vincent Guittot, Brian Cain,
	Richard Weinberger, Russell King, David Airlie, Ingo Molnar,
	Geert Uytterhoeven, Mel Gorman, Matt Turner, intel-gfx,
	linux-xtensa, Shuah Khan, Paul E. McKenney, Jeff Dike, linux-um,
	Josh Triplett, Steven Rostedt, rcu, Linus Torvalds,
	Ivan Kokshaysky, Jani Nikula, Rodrigo Vivi, Dietmar Eggemann,
	linux-arm-kernel, Richard Henderson, Chris Zankel, Max Filippov,
	linux-m68k, LKML, Daniel Vetter, linux-alpha, Mathieu Desnoyers,
	Andrew Morton, Daniel Bristot de Oliveira


On 14/09/20 21:42, Thomas Gleixner wrote:
> CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
> removed. Cleanup the leftovers before doing so.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Juri Lelli <juri.lelli@redhat.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Ben Segall <bsegall@google.com>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Daniel Bristot de Oliveira <bristot@redhat.com>

Small nit below;

Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>

> ---
>  kernel/sched/core.c |    6 +-----
>  lib/Kconfig.debug   |    1 -
>  2 files changed, 1 insertion(+), 6 deletions(-)
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3706,8 +3706,7 @@ asmlinkage __visible void schedule_tail(
>        * finish_task_switch() for details.
>        *
>        * finish_task_switch() will drop rq->lock() and lower preempt_count
> -	 * and the preempt_enable() will end up enabling preemption (on
> -	 * PREEMPT_COUNT kernels).

I suppose this wanted to be s/PREEMPT_COUNT/PREEMPT/ in the first place,
which ought to still be relevant.

> +	 * and the preempt_enable() will end up enabling preemption.
>        */
>
>       rq = finish_task_switch(prev);

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2020-09-16 10:57 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-14 20:42 [patch 00/13] preempt: Make preempt count unconditional Thomas Gleixner
2020-09-14 20:42 ` [patch 01/13] lib/debug: Remove pointless ARCH_NO_PREEMPT dependencies Thomas Gleixner
2020-09-14 20:42 ` [patch 02/13] preempt: Make preempt count unconditional Thomas Gleixner
2020-09-14 20:42 ` [patch 03/13] preempt: Clenaup PREEMPT_COUNT leftovers Thomas Gleixner
2020-09-16 10:56   ` Valentin Schneider
2020-09-14 20:42 ` [patch 04/13] lockdep: " Thomas Gleixner
2020-09-15 16:11   ` Will Deacon
2020-09-14 20:42 ` [patch 05/13] mm/pagemap: " Thomas Gleixner
2020-09-14 20:42 ` [patch 06/13] locking/bitspinlock: " Thomas Gleixner
2020-09-15 16:10   ` Will Deacon
2020-09-14 20:42 ` [patch 07/13] uaccess: " Thomas Gleixner
2020-09-14 20:42 ` [patch 08/13] sched: " Thomas Gleixner
2020-09-16 10:56   ` Valentin Schneider
2020-09-14 20:42 ` [patch 09/13] ARM: " Thomas Gleixner
2020-09-14 20:42 ` [patch 10/13] xtensa: " Thomas Gleixner
2020-09-14 20:42 ` [patch 11/13] drm/i915: " Thomas Gleixner
2020-09-14 20:42 ` [patch 12/13] rcutorture: " Thomas Gleixner
2020-09-14 20:42 ` [patch 13/13] preempt: Remove PREEMPT_COUNT from Kconfig Thomas Gleixner
2020-09-14 20:54 ` [patch 00/13] preempt: Make preempt count unconditional Steven Rostedt
2020-09-14 20:59 ` Linus Torvalds
2020-09-14 21:55   ` Thomas Gleixner
2020-09-15 17:25   ` Paul E. McKenney

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).