* [PATCH v3 0/7] arm64 / sched/preempt: support PREEMPT_DYNAMIC with static keys
@ 2022-02-09 15:35 ` Mark Rutland
  0 siblings, 0 replies; 40+ messages in thread
From: Mark Rutland @ 2022-02-09 15:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: ardb, bp, catalin.marinas, dave.hansen, frederic, james.morse,
	joey.gouly, juri.lelli, linux-kernel, luto, mark.rutland, mingo,
	peterz, tglx, valentin.schneider, will

This series enables PREEMPT_DYNAMIC on arm64. To do so, it adds a new
mechanism allowing the preemption functions to be enabled/disabled using
static keys rather than static calls, with architectures selecting
whether they use static calls or static keys.

With non-inline static calls, each function call results in a call to
the (out-of-line) trampoline which either tail-calls its associated
callee or performs an early return.

The key idea is that where we're only enabling/disabling a single
callee, we can inline this trampoline into the start of the callee,
using a static key to decide whether to return early, and leaving the
remaining codegen to the compiler. The overhead should be similar to
(and likely lower than) using a static call trampoline. Since most
codegen is up to the compiler, we sidestep a number of implementation
pain-points (e.g. things like CFI should "just work" as well as they do
for any other functions).
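
As a rough sketch (illustrative only; the key name here is an assumption, and
the actual mechanism is added in patch 5), the inlined form looks something
like:

| /* Sketch only: a static key folds the enable/disable decision into the callee. */
| DEFINE_STATIC_KEY_TRUE(sk_dynamic_cond_resched);
|
| int dynamic_cond_resched(void)
| {
| 	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
| 		return 0;		/* disabled: early return */
| 	return __cond_resched();	/* enabled: fall through to the callee */
| }

Flipping the key only patches the branch at the top of the function; the rest
of the body is ordinary compiler-generated code.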

The bulk of the diffstat for kernel/sched/core.c is shuffling the
PREEMPT_DYNAMIC code later in the file, and the actual additions are
fairly trivial.

I've given this very light build+boot testing so far.

Since v1 [1]:
* Rework Kconfig text to be clearer
* Rework arm64 entry code
* Clarify commit messages.

Since v2 [2]:
* Add missing includes
* Always provide prototype for preempt_schedule()
* Always provide prototype for preempt_schedule_notrace()
* Fix __cond_resched() to default to disabled
* Fix might_resched() to default to disabled
* Clarify example in commit message

[1] https://lore.kernel.org/r/20211109172408.49641-1-mark.rutland@arm.com/
[2] https://lore.kernel.org/r/20220204150557.434610-1-mark.rutland@arm.com/

Mark Rutland (7):
  sched/preempt: move PREEMPT_DYNAMIC logic later
  sched/preempt: refactor sched_dynamic_update()
  sched/preempt: simplify irqentry_exit_cond_resched() callers
  sched/preempt: decouple HAVE_PREEMPT_DYNAMIC from GENERIC_ENTRY
  sched/preempt: add PREEMPT_DYNAMIC using static keys
  arm64: entry: centralize preemption decision
  arm64: support PREEMPT_DYNAMIC

 arch/Kconfig                     |  37 +++-
 arch/arm64/Kconfig               |   1 +
 arch/arm64/include/asm/preempt.h |  19 +-
 arch/arm64/kernel/entry-common.c |  28 ++-
 arch/x86/Kconfig                 |   2 +-
 arch/x86/include/asm/preempt.h   |  10 +-
 include/linux/entry-common.h     |  15 +-
 include/linux/kernel.h           |   7 +-
 include/linux/sched.h            |  10 +-
 kernel/entry/common.c            |  23 +-
 kernel/sched/core.c              | 347 ++++++++++++++++++-------------
 11 files changed, 327 insertions(+), 172 deletions(-)

-- 
2.30.2


* [PATCH v3 1/7] sched/preempt: move PREEMPT_DYNAMIC logic later
  2022-02-09 15:35 ` Mark Rutland
@ 2022-02-09 15:35   ` Mark Rutland
  -1 siblings, 0 replies; 40+ messages in thread
From: Mark Rutland @ 2022-02-09 15:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: ardb, bp, catalin.marinas, dave.hansen, frederic, james.morse,
	joey.gouly, juri.lelli, linux-kernel, luto, mark.rutland, mingo,
	peterz, tglx, valentin.schneider, will

The PREEMPT_DYNAMIC logic in kernel/sched/core.c patches static calls
for a bunch of preemption functions. While most are defined prior to
this, the definition of cond_resched() is later in the file, and so we
only have its declarations from include/linux/sched.h.

In subsequent patches we'd like to define some macros alongside the
definition of each of the preemption functions, which we can use within
sched_dynamic_update(). For this to be possible, the PREEMPT_DYNAMIC
logic needs to be placed after the various preemption functions.

As a preparatory step, this patch moves the PREEMPT_DYNAMIC logic after
the various preemption functions, with no other changes -- this is
purely a move.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
---
 kernel/sched/core.c | 272 ++++++++++++++++++++++----------------------
 1 file changed, 136 insertions(+), 136 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 848eaa0efe0e..6e8998267102 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6553,142 +6553,6 @@ EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
 
 #endif /* CONFIG_PREEMPTION */
 
-#ifdef CONFIG_PREEMPT_DYNAMIC
-
-#include <linux/entry-common.h>
-
-/*
- * SC:cond_resched
- * SC:might_resched
- * SC:preempt_schedule
- * SC:preempt_schedule_notrace
- * SC:irqentry_exit_cond_resched
- *
- *
- * NONE:
- *   cond_resched               <- __cond_resched
- *   might_resched              <- RET0
- *   preempt_schedule           <- NOP
- *   preempt_schedule_notrace   <- NOP
- *   irqentry_exit_cond_resched <- NOP
- *
- * VOLUNTARY:
- *   cond_resched               <- __cond_resched
- *   might_resched              <- __cond_resched
- *   preempt_schedule           <- NOP
- *   preempt_schedule_notrace   <- NOP
- *   irqentry_exit_cond_resched <- NOP
- *
- * FULL:
- *   cond_resched               <- RET0
- *   might_resched              <- RET0
- *   preempt_schedule           <- preempt_schedule
- *   preempt_schedule_notrace   <- preempt_schedule_notrace
- *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
- */
-
-enum {
-	preempt_dynamic_undefined = -1,
-	preempt_dynamic_none,
-	preempt_dynamic_voluntary,
-	preempt_dynamic_full,
-};
-
-int preempt_dynamic_mode = preempt_dynamic_undefined;
-
-int sched_dynamic_mode(const char *str)
-{
-	if (!strcmp(str, "none"))
-		return preempt_dynamic_none;
-
-	if (!strcmp(str, "voluntary"))
-		return preempt_dynamic_voluntary;
-
-	if (!strcmp(str, "full"))
-		return preempt_dynamic_full;
-
-	return -EINVAL;
-}
-
-void sched_dynamic_update(int mode)
-{
-	/*
-	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
-	 * the ZERO state, which is invalid.
-	 */
-	static_call_update(cond_resched, __cond_resched);
-	static_call_update(might_resched, __cond_resched);
-	static_call_update(preempt_schedule, __preempt_schedule_func);
-	static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
-	static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
-
-	switch (mode) {
-	case preempt_dynamic_none:
-		static_call_update(cond_resched, __cond_resched);
-		static_call_update(might_resched, (void *)&__static_call_return0);
-		static_call_update(preempt_schedule, NULL);
-		static_call_update(preempt_schedule_notrace, NULL);
-		static_call_update(irqentry_exit_cond_resched, NULL);
-		pr_info("Dynamic Preempt: none\n");
-		break;
-
-	case preempt_dynamic_voluntary:
-		static_call_update(cond_resched, __cond_resched);
-		static_call_update(might_resched, __cond_resched);
-		static_call_update(preempt_schedule, NULL);
-		static_call_update(preempt_schedule_notrace, NULL);
-		static_call_update(irqentry_exit_cond_resched, NULL);
-		pr_info("Dynamic Preempt: voluntary\n");
-		break;
-
-	case preempt_dynamic_full:
-		static_call_update(cond_resched, (void *)&__static_call_return0);
-		static_call_update(might_resched, (void *)&__static_call_return0);
-		static_call_update(preempt_schedule, __preempt_schedule_func);
-		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
-		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
-		pr_info("Dynamic Preempt: full\n");
-		break;
-	}
-
-	preempt_dynamic_mode = mode;
-}
-
-static int __init setup_preempt_mode(char *str)
-{
-	int mode = sched_dynamic_mode(str);
-	if (mode < 0) {
-		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
-		return 0;
-	}
-
-	sched_dynamic_update(mode);
-	return 1;
-}
-__setup("preempt=", setup_preempt_mode);
-
-static void __init preempt_dynamic_init(void)
-{
-	if (preempt_dynamic_mode == preempt_dynamic_undefined) {
-		if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
-			sched_dynamic_update(preempt_dynamic_none);
-		} else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
-			sched_dynamic_update(preempt_dynamic_voluntary);
-		} else {
-			/* Default static call setting, nothing to do */
-			WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
-			preempt_dynamic_mode = preempt_dynamic_full;
-			pr_info("Dynamic Preempt: full\n");
-		}
-	}
-}
-
-#else /* !CONFIG_PREEMPT_DYNAMIC */
-
-static inline void preempt_dynamic_init(void) { }
-
-#endif /* #ifdef CONFIG_PREEMPT_DYNAMIC */
-
 /*
  * This is the entry point to schedule() from kernel preemption
  * off of irq context.
@@ -8263,6 +8127,142 @@ int __cond_resched_rwlock_write(rwlock_t *lock)
 }
 EXPORT_SYMBOL(__cond_resched_rwlock_write);
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+
+#include <linux/entry-common.h>
+
+/*
+ * SC:cond_resched
+ * SC:might_resched
+ * SC:preempt_schedule
+ * SC:preempt_schedule_notrace
+ * SC:irqentry_exit_cond_resched
+ *
+ *
+ * NONE:
+ *   cond_resched               <- __cond_resched
+ *   might_resched              <- RET0
+ *   preempt_schedule           <- NOP
+ *   preempt_schedule_notrace   <- NOP
+ *   irqentry_exit_cond_resched <- NOP
+ *
+ * VOLUNTARY:
+ *   cond_resched               <- __cond_resched
+ *   might_resched              <- __cond_resched
+ *   preempt_schedule           <- NOP
+ *   preempt_schedule_notrace   <- NOP
+ *   irqentry_exit_cond_resched <- NOP
+ *
+ * FULL:
+ *   cond_resched               <- RET0
+ *   might_resched              <- RET0
+ *   preempt_schedule           <- preempt_schedule
+ *   preempt_schedule_notrace   <- preempt_schedule_notrace
+ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
+ */
+
+enum {
+	preempt_dynamic_undefined = -1,
+	preempt_dynamic_none,
+	preempt_dynamic_voluntary,
+	preempt_dynamic_full,
+};
+
+int preempt_dynamic_mode = preempt_dynamic_undefined;
+
+int sched_dynamic_mode(const char *str)
+{
+	if (!strcmp(str, "none"))
+		return preempt_dynamic_none;
+
+	if (!strcmp(str, "voluntary"))
+		return preempt_dynamic_voluntary;
+
+	if (!strcmp(str, "full"))
+		return preempt_dynamic_full;
+
+	return -EINVAL;
+}
+
+void sched_dynamic_update(int mode)
+{
+	/*
+	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
+	 * the ZERO state, which is invalid.
+	 */
+	static_call_update(cond_resched, __cond_resched);
+	static_call_update(might_resched, __cond_resched);
+	static_call_update(preempt_schedule, __preempt_schedule_func);
+	static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+	static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+
+	switch (mode) {
+	case preempt_dynamic_none:
+		static_call_update(cond_resched, __cond_resched);
+		static_call_update(might_resched, (void *)&__static_call_return0);
+		static_call_update(preempt_schedule, NULL);
+		static_call_update(preempt_schedule_notrace, NULL);
+		static_call_update(irqentry_exit_cond_resched, NULL);
+		pr_info("Dynamic Preempt: none\n");
+		break;
+
+	case preempt_dynamic_voluntary:
+		static_call_update(cond_resched, __cond_resched);
+		static_call_update(might_resched, __cond_resched);
+		static_call_update(preempt_schedule, NULL);
+		static_call_update(preempt_schedule_notrace, NULL);
+		static_call_update(irqentry_exit_cond_resched, NULL);
+		pr_info("Dynamic Preempt: voluntary\n");
+		break;
+
+	case preempt_dynamic_full:
+		static_call_update(cond_resched, (void *)&__static_call_return0);
+		static_call_update(might_resched, (void *)&__static_call_return0);
+		static_call_update(preempt_schedule, __preempt_schedule_func);
+		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+		pr_info("Dynamic Preempt: full\n");
+		break;
+	}
+
+	preempt_dynamic_mode = mode;
+}
+
+static int __init setup_preempt_mode(char *str)
+{
+	int mode = sched_dynamic_mode(str);
+	if (mode < 0) {
+		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
+		return 0;
+	}
+
+	sched_dynamic_update(mode);
+	return 1;
+}
+__setup("preempt=", setup_preempt_mode);
+
+static void __init preempt_dynamic_init(void)
+{
+	if (preempt_dynamic_mode == preempt_dynamic_undefined) {
+		if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
+			sched_dynamic_update(preempt_dynamic_none);
+		} else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
+			sched_dynamic_update(preempt_dynamic_voluntary);
+		} else {
+			/* Default static call setting, nothing to do */
+			WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
+			preempt_dynamic_mode = preempt_dynamic_full;
+			pr_info("Dynamic Preempt: full\n");
+		}
+	}
+}
+
+#else /* !CONFIG_PREEMPT_DYNAMIC */
+
+static inline void preempt_dynamic_init(void) { }
+
+#endif /* #ifdef CONFIG_PREEMPT_DYNAMIC */
+
 /**
  * yield - yield the current processor to other threads.
  *
-- 
2.30.2


* [PATCH v3 2/7] sched/preempt: refactor sched_dynamic_update()
  2022-02-09 15:35 ` Mark Rutland
@ 2022-02-09 15:35   ` Mark Rutland
  -1 siblings, 0 replies; 40+ messages in thread
From: Mark Rutland @ 2022-02-09 15:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: ardb, bp, catalin.marinas, dave.hansen, frederic, james.morse,
	joey.gouly, juri.lelli, linux-kernel, luto, mark.rutland, mingo,
	peterz, tglx, valentin.schneider, will

Currently sched_dynamic_update needs to open-code the enabled/disabled
function names for each preemption model it supports, when in practice
this is a boolean enabled/disabled state for each function.

Make this clearer and avoid repetition by defining the enabled/disabled
states at the function definition, and using helper macros to perform the
static_call_update(). Where x86 currently overrides the enabled
function, it is made to provide both the enabled and disabled states for
consistency, with defaults provided by the core code otherwise.
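
To illustrate the token pasting, given the definitions added in the diff
below:

| #define cond_resched_dynamic_enabled	__cond_resched
| #define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
| #define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
| #define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)

... preempt_dynamic_enable(cond_resched) expands to
static_call_update(cond_resched, __cond_resched), and
preempt_dynamic_disable(cond_resched) expands to
static_call_update(cond_resched, ((void *)&__static_call_return0)).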

In subsequent patches this will allow us to support PREEMPT_DYNAMIC
without static calls.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
---
 arch/x86/include/asm/preempt.h | 10 +++---
 include/linux/entry-common.h   |  2 ++
 kernel/sched/core.c            | 59 +++++++++++++++++++++-------------
 3 files changed, 45 insertions(+), 26 deletions(-)

diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index fe5efbcba824..5f6daea1ee24 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -108,16 +108,18 @@ static __always_inline bool should_resched(int preempt_offset)
 extern asmlinkage void preempt_schedule(void);
 extern asmlinkage void preempt_schedule_thunk(void);
 
-#define __preempt_schedule_func preempt_schedule_thunk
+#define preempt_schedule_dynamic_enabled	preempt_schedule_thunk
+#define preempt_schedule_dynamic_disabled	NULL
 
 extern asmlinkage void preempt_schedule_notrace(void);
 extern asmlinkage void preempt_schedule_notrace_thunk(void);
 
-#define __preempt_schedule_notrace_func preempt_schedule_notrace_thunk
+#define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace_thunk
+#define preempt_schedule_notrace_dynamic_disabled	NULL
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
 
-DECLARE_STATIC_CALL(preempt_schedule, __preempt_schedule_func);
+DECLARE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
 
 #define __preempt_schedule() \
 do { \
@@ -125,7 +127,7 @@ do { \
 	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule) : ASM_CALL_CONSTRAINT); \
 } while (0)
 
-DECLARE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+DECLARE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
 
 #define __preempt_schedule_notrace() \
 do { \
diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index 2e2b8d6140ed..a01ac1a0a292 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -456,6 +456,8 @@ irqentry_state_t noinstr irqentry_enter(struct pt_regs *regs);
  */
 void irqentry_exit_cond_resched(void);
 #ifdef CONFIG_PREEMPT_DYNAMIC
+#define irqentry_exit_cond_resched_dynamic_enabled	irqentry_exit_cond_resched
+#define irqentry_exit_cond_resched_dynamic_disabled	NULL
 DECLARE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
 #endif
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6e8998267102..414165c430f4 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6489,7 +6489,11 @@ NOKPROBE_SYMBOL(preempt_schedule);
 EXPORT_SYMBOL(preempt_schedule);
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
-DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func);
+#ifndef preempt_schedule_dynamic_enabled
+#define preempt_schedule_dynamic_enabled	preempt_schedule
+#define preempt_schedule_dynamic_disabled	NULL
+#endif
+DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
 EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
 #endif
 
@@ -6547,7 +6551,11 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
 EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
-DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+#ifndef preempt_schedule_notrace_dynamic_enabled
+#define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace
+#define preempt_schedule_notrace_dynamic_disabled	NULL
+#endif
+DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
 EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
 #endif
 
@@ -8058,9 +8066,13 @@ EXPORT_SYMBOL(__cond_resched);
 #endif
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
+#define cond_resched_dynamic_enabled	__cond_resched
+#define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
 DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
 EXPORT_STATIC_CALL_TRAMP(cond_resched);
 
+#define might_resched_dynamic_enabled	__cond_resched
+#define might_resched_dynamic_disabled	((void *)&__static_call_return0)
 DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
 EXPORT_STATIC_CALL_TRAMP(might_resched);
 #endif
@@ -8184,43 +8196,46 @@ int sched_dynamic_mode(const char *str)
 	return -EINVAL;
 }
 
+#define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
+#define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)
+
 void sched_dynamic_update(int mode)
 {
 	/*
 	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
 	 * the ZERO state, which is invalid.
 	 */
-	static_call_update(cond_resched, __cond_resched);
-	static_call_update(might_resched, __cond_resched);
-	static_call_update(preempt_schedule, __preempt_schedule_func);
-	static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
-	static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+	preempt_dynamic_enable(cond_resched);
+	preempt_dynamic_enable(might_resched);
+	preempt_dynamic_enable(preempt_schedule);
+	preempt_dynamic_enable(preempt_schedule_notrace);
+	preempt_dynamic_enable(irqentry_exit_cond_resched);
 
 	switch (mode) {
 	case preempt_dynamic_none:
-		static_call_update(cond_resched, __cond_resched);
-		static_call_update(might_resched, (void *)&__static_call_return0);
-		static_call_update(preempt_schedule, NULL);
-		static_call_update(preempt_schedule_notrace, NULL);
-		static_call_update(irqentry_exit_cond_resched, NULL);
+		preempt_dynamic_enable(cond_resched);
+		preempt_dynamic_disable(might_resched);
+		preempt_dynamic_disable(preempt_schedule);
+		preempt_dynamic_disable(preempt_schedule_notrace);
+		preempt_dynamic_disable(irqentry_exit_cond_resched);
 		pr_info("Dynamic Preempt: none\n");
 		break;
 
 	case preempt_dynamic_voluntary:
-		static_call_update(cond_resched, __cond_resched);
-		static_call_update(might_resched, __cond_resched);
-		static_call_update(preempt_schedule, NULL);
-		static_call_update(preempt_schedule_notrace, NULL);
-		static_call_update(irqentry_exit_cond_resched, NULL);
+		preempt_dynamic_enable(cond_resched);
+		preempt_dynamic_enable(might_resched);
+		preempt_dynamic_disable(preempt_schedule);
+		preempt_dynamic_disable(preempt_schedule_notrace);
+		preempt_dynamic_disable(irqentry_exit_cond_resched);
 		pr_info("Dynamic Preempt: voluntary\n");
 		break;
 
 	case preempt_dynamic_full:
-		static_call_update(cond_resched, (void *)&__static_call_return0);
-		static_call_update(might_resched, (void *)&__static_call_return0);
-		static_call_update(preempt_schedule, __preempt_schedule_func);
-		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
-		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+		preempt_dynamic_disable(cond_resched);
+		preempt_dynamic_disable(might_resched);
+		preempt_dynamic_enable(preempt_schedule);
+		preempt_dynamic_enable(preempt_schedule_notrace);
+		preempt_dynamic_enable(irqentry_exit_cond_resched);
 		pr_info("Dynamic Preempt: full\n");
 		break;
 	}
-- 
2.30.2


* [PATCH v3 3/7] sched/preempt: simplify irqentry_exit_cond_resched() callers
  2022-02-09 15:35 ` Mark Rutland
@ 2022-02-09 15:35   ` Mark Rutland
  -1 siblings, 0 replies; 40+ messages in thread
From: Mark Rutland @ 2022-02-09 15:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: ardb, bp, catalin.marinas, dave.hansen, frederic, james.morse,
	joey.gouly, juri.lelli, linux-kernel, luto, mark.rutland, mingo,
	peterz, tglx, valentin.schneider, will

Currently callers of irqentry_exit_cond_resched() need to be aware of
whether the function should be indirected via a static call, leading to
ugly ifdeffery in callers.

Save them the hassle with a static inline wrapper that does the right
thing. The raw_irqentry_exit_cond_resched() will also be useful in
subsequent patches which will add conditional wrappers for preemption
functions.

Note: in arch/x86/entry/common.c, xen_pv_evtchn_do_upcall() always calls
irqentry_exit_cond_resched() directly, even when PREEMPT_DYNAMIC is in
use. I believe this is a latent bug (which this patch corrects), but I'm
not entirely certain this wasn't deliberate.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Valentin Schneider <valentin.schneider@arm.com>
---
 include/linux/entry-common.h |  9 ++++++---
 kernel/entry/common.c        | 12 ++++--------
 2 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index a01ac1a0a292..dfd84c59b144 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -454,11 +454,14 @@ irqentry_state_t noinstr irqentry_enter(struct pt_regs *regs);
  *
  * Conditional reschedule with additional sanity checks.
  */
-void irqentry_exit_cond_resched(void);
+void raw_irqentry_exit_cond_resched(void);
 #ifdef CONFIG_PREEMPT_DYNAMIC
-#define irqentry_exit_cond_resched_dynamic_enabled	irqentry_exit_cond_resched
+#define irqentry_exit_cond_resched_dynamic_enabled	raw_irqentry_exit_cond_resched
 #define irqentry_exit_cond_resched_dynamic_disabled	NULL
-DECLARE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+DECLARE_STATIC_CALL(irqentry_exit_cond_resched, raw_irqentry_exit_cond_resched);
+#define irqentry_exit_cond_resched()	static_call(irqentry_exit_cond_resched)()
+#else
+#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()
 #endif
 
 /**
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index bad713684c2e..1739ca79613b 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -380,7 +380,7 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
 	return ret;
 }
 
-void irqentry_exit_cond_resched(void)
+void raw_irqentry_exit_cond_resched(void)
 {
 	if (!preempt_count()) {
 		/* Sanity check RCU and thread stack */
@@ -392,7 +392,7 @@ void irqentry_exit_cond_resched(void)
 	}
 }
 #ifdef CONFIG_PREEMPT_DYNAMIC
-DEFINE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+DEFINE_STATIC_CALL(irqentry_exit_cond_resched, raw_irqentry_exit_cond_resched);
 #endif
 
 noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
@@ -420,13 +420,9 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
 		}
 
 		instrumentation_begin();
-		if (IS_ENABLED(CONFIG_PREEMPTION)) {
-#ifdef CONFIG_PREEMPT_DYNAMIC
-			static_call(irqentry_exit_cond_resched)();
-#else
+		if (IS_ENABLED(CONFIG_PREEMPTION))
 			irqentry_exit_cond_resched();
-#endif
-		}
+
 		/* Covers both tracing and lockdep */
 		trace_hardirqs_on();
 		instrumentation_end();
-- 
2.30.2


* [PATCH v3 4/7] sched/preempt: decouple HAVE_PREEMPT_DYNAMIC from GENERIC_ENTRY
  2022-02-09 15:35 ` Mark Rutland
@ 2022-02-09 15:35   ` Mark Rutland
  -1 siblings, 0 replies; 40+ messages in thread
From: Mark Rutland @ 2022-02-09 15:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: ardb, bp, catalin.marinas, dave.hansen, frederic, james.morse,
	joey.gouly, juri.lelli, linux-kernel, luto, mark.rutland, mingo,
	peterz, tglx, valentin.schneider, will

Now that the enabled/disabled states for the preemption functions are
declared alongside their definitions, the core PREEMPT_DYNAMIC logic is
no longer tied to GENERIC_ENTRY, and can safely be selected so long as
an architecture provides enabled/disabled states for
irqentry_exit_cond_resched().

Make it possible to select HAVE_PREEMPT_DYNAMIC without GENERIC_ENTRY.

For existing users of HAVE_PREEMPT_DYNAMIC there should be no functional
change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
---
 arch/Kconfig        | 1 -
 kernel/sched/core.c | 2 ++
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 678a80713b21..601691f1570f 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1279,7 +1279,6 @@ config HAVE_STATIC_CALL_INLINE
 config HAVE_PREEMPT_DYNAMIC
 	bool
 	depends on HAVE_STATIC_CALL
-	depends on GENERIC_ENTRY
 	help
 	   Select this if the architecture support boot time preempt setting
 	   on top of static calls. It is strongly advised to support inline
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 414165c430f4..3bf7f90d0ef7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8141,7 +8141,9 @@ EXPORT_SYMBOL(__cond_resched_rwlock_write);
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
 
+#ifdef CONFIG_GENERIC_ENTRY
 #include <linux/entry-common.h>
+#endif
 
 /*
  * SC:cond_resched
-- 
2.30.2


* [PATCH v3 5/7] sched/preempt: add PREEMPT_DYNAMIC using static keys
  2022-02-09 15:35 ` Mark Rutland
@ 2022-02-09 15:35   ` Mark Rutland
  -1 siblings, 0 replies; 40+ messages in thread
From: Mark Rutland @ 2022-02-09 15:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: ardb, bp, catalin.marinas, dave.hansen, frederic, james.morse,
	joey.gouly, juri.lelli, linux-kernel, luto, mark.rutland, mingo,
	peterz, tglx, valentin.schneider, will

Where an architecture selects HAVE_STATIC_CALL but not
HAVE_STATIC_CALL_INLINE, each static call has an out-of-line trampoline
which will either branch to a callee or return to the caller.

On such architectures, a number of constraints can conspire to make
those trampolines more complicated and potentially less useful than we'd
like. For example:

* Hardware and software control flow integrity schemes can require the
  addition of "landing pad" instructions (e.g. `BTI` for arm64), which
  will also be present at the "real" callee.

* Limited branch ranges can require that trampolines generate or load an
  address into a register and perform an indirect branch (or at least
  have a slow path that does so). This loses some of the benefits of
  having a direct branch.

* Interaction with SW CFI schemes can be complicated and fragile, e.g.
  requiring that we can recognise idiomatic codegen and remove
  indirections understood by the scheme, at least until clang provides
  more helpful mechanisms for dealing with this.

For PREEMPT_DYNAMIC, we don't need the full power of static calls, as we
really only need to enable/disable specific preemption functions. We can
achieve the same effect without a number of the pain points above by
using static keys to fold early returns into the preemption functions
themselves rather than in an out-of-line trampoline, effectively
inlining the trampoline into the start of the function.
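
As a sketch of the key-based form (simplified from the cond_resched() case
added by this patch, with EXPORT_SYMBOL() omitted), each such function gains
a wrapper along the lines of:

| static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
|
| int __sched dynamic_cond_resched(void)
| {
|	/*
|	 * This branch site is a NOP while the key is disabled, so we fall
|	 * straight into the early return below.
|	 */
|	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
|		return 0;
|	return __cond_resched();
| }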

For arm64, this results in good code generation. For example, the
dynamic_cond_resched() wrapper looks as follows when enabled. When
disabled, the first `B` is replaced with a `NOP`, resulting in an early
return.

| <dynamic_cond_resched>:
|        bti     c
|        b       <dynamic_cond_resched+0x10>     // or `nop`
|        mov     w0, #0x0
|        ret
|        mrs     x0, sp_el0
|        ldr     x0, [x0, #8]
|        cbnz    x0, <dynamic_cond_resched+0x8>
|        paciasp
|        stp     x29, x30, [sp, #-16]!
|        mov     x29, sp
|        bl      <preempt_schedule_common>
|        mov     w0, #0x1
|        ldp     x29, x30, [sp], #16
|        autiasp
|        ret

... compared to the regular form of the function:

| <__cond_resched>:
|        bti     c
|        mrs     x0, sp_el0
|        ldr     x1, [x0, #8]
|        cbz     x1, <__cond_resched+0x18>
|        mov     w0, #0x0
|        ret
|        paciasp
|        stp     x29, x30, [sp, #-16]!
|        mov     x29, sp
|        bl      <preempt_schedule_common>
|        mov     w0, #0x1
|        ldp     x29, x30, [sp], #16
|        autiasp
|        ret

Any architecture which implements static keys should be able to use this
to implement PREEMPT_DYNAMIC with similar cost to non-inlined static
calls.
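
sched_dynamic_update() itself doesn't need to know which mechanism is in
use; only the per-function enable/disable helpers differ. Roughly, taking
cond_resched() as an example:

| /* HAVE_PREEMPT_DYNAMIC_CALL: retarget the trampoline */
| preempt_dynamic_enable(cond_resched);	/* static_call_update(cond_resched, __cond_resched) */
| preempt_dynamic_disable(cond_resched);	/* static_call_update(cond_resched, (void *)&__static_call_return0) */
|
| /* HAVE_PREEMPT_DYNAMIC_KEY: flip the key tested in the wrapper */
| preempt_dynamic_enable(cond_resched);	/* static_key_enable(&sk_dynamic_cond_resched.key) */
| preempt_dynamic_disable(cond_resched);	/* static_key_disable(&sk_dynamic_cond_resched.key) */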

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
---
 arch/Kconfig                 | 36 ++++++++++++++++++++++--
 arch/x86/Kconfig             |  2 +-
 include/linux/entry-common.h | 10 +++++--
 include/linux/kernel.h       |  7 ++++-
 include/linux/sched.h        | 10 ++++++-
 kernel/entry/common.c        | 11 ++++++++
 kernel/sched/core.c          | 54 ++++++++++++++++++++++++++++++++++--
 7 files changed, 120 insertions(+), 10 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 601691f1570f..7587ed30b9dc 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1278,11 +1278,41 @@ config HAVE_STATIC_CALL_INLINE
 
 config HAVE_PREEMPT_DYNAMIC
 	bool
+
+config HAVE_PREEMPT_DYNAMIC_CALL
+	bool
 	depends on HAVE_STATIC_CALL
+	select HAVE_PREEMPT_DYNAMIC
 	help
-	   Select this if the architecture support boot time preempt setting
-	   on top of static calls. It is strongly advised to support inline
-	   static call to avoid any overhead.
+	   An architecture should select this if it can handle the preemption
+	   model being selected at boot time using static calls.
+
+	   Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a
+	   preemption function will be patched directly.
+
+	   Where an architecture does not select HAVE_STATIC_CALL_INLINE, any
+	   call to a preemption function will go through a trampoline, and the
+	   trampoline will be patched.
+
+	   It is strongly advised to support inline static call to avoid any
+	   overhead.
+
+config HAVE_PREEMPT_DYNAMIC_KEY
+	bool
+	depends on JUMP_LABEL
+	select HAVE_PREEMPT_DYNAMIC
+	help
+	   An architecture should select this if it can handle the preemption
+	   model being selected at boot time using static keys.
+
+	   Each preemption function will be given an early return based on a
+	   static key. This should have slightly lower overhead than non-inline
+	   static calls, as this effectively inlines each trampoline into the
+	   start of its callee. This may avoid redundant work, and may
+	   integrate better with CFI schemes.
+
+	   This will have greater overhead than using inline static calls as
+	   the call to the preemption function cannot be entirely elided.
 
 config ARCH_WANT_LD_ORPHAN_WARN
 	bool
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9f5bd41bf660..70f94094fd6f 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -246,7 +246,7 @@ config X86
 	select HAVE_STACK_VALIDATION		if X86_64
 	select HAVE_STATIC_CALL
 	select HAVE_STATIC_CALL_INLINE		if HAVE_STACK_VALIDATION
-	select HAVE_PREEMPT_DYNAMIC
+	select HAVE_PREEMPT_DYNAMIC_CALL
 	select HAVE_RSEQ
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UNSTABLE_SCHED_CLOCK
diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index dfd84c59b144..141952f4fee8 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -456,13 +456,19 @@ irqentry_state_t noinstr irqentry_enter(struct pt_regs *regs);
  */
 void raw_irqentry_exit_cond_resched(void);
 #ifdef CONFIG_PREEMPT_DYNAMIC
+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
 #define irqentry_exit_cond_resched_dynamic_enabled	raw_irqentry_exit_cond_resched
 #define irqentry_exit_cond_resched_dynamic_disabled	NULL
 DECLARE_STATIC_CALL(irqentry_exit_cond_resched, raw_irqentry_exit_cond_resched);
 #define irqentry_exit_cond_resched()	static_call(irqentry_exit_cond_resched)()
-#else
-#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()
+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
+DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
+void dynamic_irqentry_exit_cond_resched(void);
+#define irqentry_exit_cond_resched()	dynamic_irqentry_exit_cond_resched()
 #endif
+#else /* CONFIG_PREEMPT_DYNAMIC */
+#define irqentry_exit_cond_resched()	raw_irqentry_exit_cond_resched()
+#endif /* CONFIG_PREEMPT_DYNAMIC */
 
 /**
  * irqentry_exit - Handle return from exception that used irqentry_enter()
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 33f47a996513..a890428bcc1a 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -99,7 +99,7 @@ struct user;
 extern int __cond_resched(void);
 # define might_resched() __cond_resched()
 
-#elif defined(CONFIG_PREEMPT_DYNAMIC)
+#elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
 
 extern int __cond_resched(void);
 
@@ -110,6 +110,11 @@ static __always_inline void might_resched(void)
 	static_call_mod(might_resched)();
 }
 
+#elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
+
+extern int dynamic_might_resched(void);
+# define might_resched() dynamic_might_resched()
+
 #else
 
 # define might_resched() do { } while (0)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index f5b2be39a78c..8b320fb54290 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2016,7 +2016,7 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
 #if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
 extern int __cond_resched(void);
 
-#ifdef CONFIG_PREEMPT_DYNAMIC
+#if defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
 
 DECLARE_STATIC_CALL(cond_resched, __cond_resched);
 
@@ -2025,6 +2025,14 @@ static __always_inline int _cond_resched(void)
 	return static_call_mod(cond_resched)();
 }
 
+#elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
+extern int dynamic_cond_resched(void);
+
+static __always_inline int _cond_resched(void)
+{
+	return dynamic_cond_resched();
+}
+
 #else
 
 static inline int _cond_resched(void)
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 1739ca79613b..b145249ad91a 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -3,6 +3,7 @@
 #include <linux/context_tracking.h>
 #include <linux/entry-common.h>
 #include <linux/highmem.h>
+#include <linux/jump_label.h>
 #include <linux/livepatch.h>
 #include <linux/audit.h>
 #include <linux/tick.h>
@@ -392,7 +393,17 @@ void raw_irqentry_exit_cond_resched(void)
 	}
 }
 #ifdef CONFIG_PREEMPT_DYNAMIC
+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
 DEFINE_STATIC_CALL(irqentry_exit_cond_resched, raw_irqentry_exit_cond_resched);
+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
+DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
+void dynamic_irqentry_exit_cond_resched(void)
+{
+	if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
+		return;
+	raw_irqentry_exit_cond_resched();
+}
+#endif
 #endif
 
 noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3bf7f90d0ef7..4cca19d40636 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -14,6 +14,7 @@
 
 #include <linux/nospec.h>
 #include <linux/blkdev.h>
+#include <linux/jump_label.h>
 #include <linux/kcov.h>
 #include <linux/scs.h>
 
@@ -6482,21 +6483,31 @@ asmlinkage __visible void __sched notrace preempt_schedule(void)
 	 */
 	if (likely(!preemptible()))
 		return;
-
 	preempt_schedule_common();
 }
 NOKPROBE_SYMBOL(preempt_schedule);
 EXPORT_SYMBOL(preempt_schedule);
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
 #ifndef preempt_schedule_dynamic_enabled
 #define preempt_schedule_dynamic_enabled	preempt_schedule
 #define preempt_schedule_dynamic_disabled	NULL
 #endif
 DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
 EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
+static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
+void __sched notrace dynamic_preempt_schedule(void)
+{
+	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
+		return;
+	preempt_schedule();
+}
+NOKPROBE_SYMBOL(dynamic_preempt_schedule);
+EXPORT_SYMBOL(dynamic_preempt_schedule);
+#endif
 #endif
-
 
 /**
  * preempt_schedule_notrace - preempt_schedule called by tracing
@@ -6551,12 +6562,24 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
 EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
 #ifndef preempt_schedule_notrace_dynamic_enabled
 #define preempt_schedule_notrace_dynamic_enabled	preempt_schedule_notrace
 #define preempt_schedule_notrace_dynamic_disabled	NULL
 #endif
 DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
 EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
+static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
+void __sched notrace dynamic_preempt_schedule_notrace(void)
+{
+	if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
+		return;
+	preempt_schedule_notrace();
+}
+NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
+EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
+#endif
 #endif
 
 #endif /* CONFIG_PREEMPTION */
@@ -8066,6 +8089,7 @@ EXPORT_SYMBOL(__cond_resched);
 #endif
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
 #define cond_resched_dynamic_enabled	__cond_resched
 #define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
 DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
@@ -8075,6 +8099,25 @@ EXPORT_STATIC_CALL_TRAMP(cond_resched);
 #define might_resched_dynamic_disabled	((void *)&__static_call_return0)
 DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
 EXPORT_STATIC_CALL_TRAMP(might_resched);
+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
+static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
+int __sched dynamic_cond_resched(void)
+{
+	if (!static_branch_unlikely(&sk_dynamic_cond_resched))
+		return 0;
+	return __cond_resched();
+}
+EXPORT_SYMBOL(dynamic_cond_resched);
+
+static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
+int __sched dynamic_might_resched(void)
+{
+	if (!static_branch_unlikely(&sk_dynamic_might_resched))
+		return 0;
+	return __cond_resched();
+}
+EXPORT_SYMBOL(dynamic_might_resched);
+#endif
 #endif
 
 /*
@@ -8198,8 +8241,15 @@ int sched_dynamic_mode(const char *str)
 	return -EINVAL;
 }
 
+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
 #define preempt_dynamic_enable(f)	static_call_update(f, f##_dynamic_enabled)
 #define preempt_dynamic_disable(f)	static_call_update(f, f##_dynamic_disabled)
+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
+#define preempt_dynamic_enable(f)	static_key_enable(&sk_dynamic_##f.key)
+#define preempt_dynamic_disable(f)	static_key_disable(&sk_dynamic_##f.key)
+#else
+#error "Unsupported PREEMPT_DYNAMIC mechanism"
+#endif
 
 void sched_dynamic_update(int mode)
 {
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v3 6/7] arm64: entry: centralize premeption decision
  2022-02-09 15:35 ` Mark Rutland
@ 2022-02-09 15:35   ` Mark Rutland
  -1 siblings, 0 replies; 40+ messages in thread
From: Mark Rutland @ 2022-02-09 15:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: ardb, bp, catalin.marinas, dave.hansen, frederic, james.morse,
	joey.gouly, juri.lelli, linux-kernel, luto, mark.rutland, mingo,
	peterz, tglx, valentin.schneider, will

For historical reasons, the decision of whether or not to preempt is
spread across arm64_preempt_schedule_irq() and __el1_irq(), and it would
be clearer if this were all in one place.

Also, arm64_preempt_schedule_irq() calls lockdep_assert_irqs_disabled(),
but this is redundant, as we have a subsequent identical assertion in
__exit_to_kernel_mode(), and preempt_schedule_irq() will
BUG_ON(!irqs_disabled()) anyway.

This patch removes the redundant assertion and centralizes the
preemption decision making within arm64_preempt_schedule_irq().

Other than the slight change to assertion behaviour, there should be no
functional change as a result of this patch.
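
As background for the single check below (a sketch based on arm64's
<asm/thread_info.h> and <asm/preempt.h>; nothing in that layout is changed
by this patch), both pieces of state live in one 64-bit field:

| union {
|	u64	preempt_count;		/* read as a single 64-bit value */
|	struct {			/* little-endian layout shown */
|		u32	count;		/* the regular preempt count */
|		u32	need_resched;	/* cleared when a resched is needed */
|	} preempt;
| };

... so thread_info::preempt_count == 0 means "the preempt count is zero and
a reschedule has been requested", i.e. we both can and should preempt.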

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/entry-common.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index ef7fcefb96bd..2c639b6b676d 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -222,7 +222,16 @@ static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs)
 
 static void __sched arm64_preempt_schedule_irq(void)
 {
-	lockdep_assert_irqs_disabled();
+	if (!IS_ENABLED(CONFIG_PREEMPTION))
+		return;
+
+	/*
+	 * Note: thread_info::preempt_count includes both thread_info::count
+	 * and thread_info::need_resched, and is not equivalent to
+	 * preempt_count().
+	 */
+	if (READ_ONCE(current_thread_info()->preempt_count) != 0)
+		return;
 
 	/*
 	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
@@ -438,14 +447,7 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 	do_interrupt_handler(regs, handler);
 	irq_exit_rcu();
 
-	/*
-	 * Note: thread_info::preempt_count includes both thread_info::count
-	 * and thread_info::need_resched, and is not equivalent to
-	 * preempt_count().
-	 */
-	if (IS_ENABLED(CONFIG_PREEMPTION) &&
-	    READ_ONCE(current_thread_info()->preempt_count) == 0)
-		arm64_preempt_schedule_irq();
+	arm64_preempt_schedule_irq();
 
 	exit_to_kernel_mode(regs);
 }
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH v3 7/7] arm64: support PREEMPT_DYNAMIC
  2022-02-09 15:35 ` Mark Rutland
@ 2022-02-09 15:35   ` Mark Rutland
  -1 siblings, 0 replies; 40+ messages in thread
From: Mark Rutland @ 2022-02-09 15:35 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: ardb, bp, catalin.marinas, dave.hansen, frederic, james.morse,
	joey.gouly, juri.lelli, linux-kernel, luto, mark.rutland, mingo,
	peterz, tglx, valentin.schneider, will

This patch enables support for PREEMPT_DYNAMIC on arm64, allowing the
preemption model to be chosen at boot time.

Specifically, this patch selects HAVE_PREEMPT_DYNAMIC_KEY, so that each
preemption function is an out-of-line call with an early return
depending upon a static key. This leaves almost all the codegen up to
the compiler, and side-steps a number of pain points with static calls
(e.g. interaction with CFI schemes). This should have no worse overhead
than using non-inline static calls, as those use out-of-line trampolines
with early returns.

For example, the dynamic_cond_resched() wrapper looks as follows when
enabled. When disabled, the first `B` is replaced with a `NOP`,
resulting in an early return.

| <dynamic_cond_resched>:
|        bti     c
|        b       <dynamic_cond_resched+0x10>     // or `nop`
|        mov     w0, #0x0
|        ret
|        mrs     x0, sp_el0
|        ldr     x0, [x0, #8]
|        cbnz    x0, <dynamic_cond_resched+0x8>
|        paciasp
|        stp     x29, x30, [sp, #-16]!
|        mov     x29, sp
|        bl      <preempt_schedule_common>
|        mov     w0, #0x1
|        ldp     x29, x30, [sp], #16
|        autiasp
|        ret

... compared to the regular form of the function:

| <__cond_resched>:
|        bti     c
|        mrs     x0, sp_el0
|        ldr     x1, [x0, #8]
|        cbz     x1, <__cond_resched+0x18>
|        mov     w0, #0x0
|        ret
|        paciasp
|        stp     x29, x30, [sp, #-16]!
|        mov     x29, sp
|        bl      <preempt_schedule_common>
|        mov     w0, #0x1
|        ldp     x29, x30, [sp], #16
|        autiasp
|        ret

Since arm64 does not yet use the generic entry code, we must define our
own `sk_dynamic_irqentry_exit_cond_resched`, which will be
enabled/disabled by the common code in kernel/sched/core.c. All other
preemption functions and associated static keys are defined there.
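
To illustrate how the pieces line up (a rough sketch, not additional code
beyond what this patch and the core changes add):

| /* arm64 IRQ return path (added below), macro-expanded: */
| if (!static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
|	return;		/* the chosen model says: don't preempt here */
|
| /* common code, from sched_dynamic_update() in kernel/sched/core.c: */
| preempt_dynamic_enable(irqentry_exit_cond_resched);
|	/* => static_key_enable(&sk_dynamic_irqentry_exit_cond_resched.key) */

... so the model chosen at boot (e.g. via the existing preempt= command line
option) ends up simply flipping this key.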

Note that PREEMPT_DYNAMIC is `def bool y`, so this will default to
enabled.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/preempt.h | 19 +++++++++++++++++--
 arch/arm64/kernel/entry-common.c | 10 +++++++++-
 3 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f2b5a4abef21..3831d922a81d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -192,6 +192,7 @@ config ARM64
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
+	select HAVE_PREEMPT_DYNAMIC_KEY
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_POSIX_CPU_TIMERS_TASK_WORK
 	select HAVE_FUNCTION_ARG_ACCESS_API
diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
index e83f0982b99c..0159b625cc7f 100644
--- a/arch/arm64/include/asm/preempt.h
+++ b/arch/arm64/include/asm/preempt.h
@@ -2,6 +2,7 @@
 #ifndef __ASM_PREEMPT_H
 #define __ASM_PREEMPT_H
 
+#include <linux/jump_label.h>
 #include <linux/thread_info.h>
 
 #define PREEMPT_NEED_RESCHED	BIT(32)
@@ -80,10 +81,24 @@ static inline bool should_resched(int preempt_offset)
 }
 
 #ifdef CONFIG_PREEMPTION
+
 void preempt_schedule(void);
-#define __preempt_schedule() preempt_schedule()
 void preempt_schedule_notrace(void);
-#define __preempt_schedule_notrace() preempt_schedule_notrace()
+
+#ifdef CONFIG_PREEMPT_DYNAMIC
+
+DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
+void dynamic_preempt_schedule(void);
+#define __preempt_schedule()		dynamic_preempt_schedule()
+void dynamic_preempt_schedule_notrace(void);
+#define __preempt_schedule_notrace()	dynamic_preempt_schedule_notrace()
+
+#else /* CONFIG_PREEMPT_DYNAMIC */
+
+#define __preempt_schedule()		preempt_schedule()
+#define __preempt_schedule_notrace()	preempt_schedule_notrace()
+
+#endif /* CONFIG_PREEMPT_DYNAMIC */
 #endif /* CONFIG_PREEMPTION */
 
 #endif /* __ASM_PREEMPT_H */
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 2c639b6b676d..675352ec1368 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -220,9 +220,17 @@ static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs)
 		lockdep_hardirqs_on(CALLER_ADDR0);
 }
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
+#define need_irq_preemption() \
+	(static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
+#else
+#define need_irq_preemption()	(IS_ENABLED(CONFIG_PREEMPTION))
+#endif
+
 static void __sched arm64_preempt_schedule_irq(void)
 {
-	if (!IS_ENABLED(CONFIG_PREEMPTION))
+	if (!need_irq_preemption())
 		return;
 
 	/*
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH v3 5/7] sched/preempt: add PREEMPT_DYNAMIC using static keys
  2022-02-09 15:35   ` Mark Rutland
@ 2022-02-09 17:48     ` Frederic Weisbecker
  -1 siblings, 0 replies; 40+ messages in thread
From: Frederic Weisbecker @ 2022-02-09 17:48 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, ardb, bp, catalin.marinas, dave.hansen,
	james.morse, joey.gouly, juri.lelli, linux-kernel, luto, mingo,
	peterz, tglx, valentin.schneider, will

On Wed, Feb 09, 2022 at 03:35:33PM +0000, Mark Rutland wrote:
> +config HAVE_PREEMPT_DYNAMIC_KEY
> +	bool
> +	depends on JUMP_LABEL

This should probably be:

     depends on HAVE_ARCH_JUMP_LABEL && CC_HAS_ASM_GOTO
     select JUMP_LABEL
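
i.e. the whole entry would end up looking something like (untested):

     config HAVE_PREEMPT_DYNAMIC_KEY
     	bool
     	depends on HAVE_ARCH_JUMP_LABEL && CC_HAS_ASM_GOTO
     	select HAVE_PREEMPT_DYNAMIC
     	select JUMP_LABEL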

Otherwise you may run into trouble if CONFIG_JUMP_LABEL is initially n.

Thanks.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v3 6/7] arm64: entry: centralize premeption decision
  2022-02-09 15:35   ` Mark Rutland
@ 2022-02-09 18:10     ` Catalin Marinas
  -1 siblings, 0 replies; 40+ messages in thread
From: Catalin Marinas @ 2022-02-09 18:10 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, ardb, bp, dave.hansen, frederic, james.morse,
	joey.gouly, juri.lelli, linux-kernel, luto, mingo, peterz, tglx,
	valentin.schneider, will

On Wed, Feb 09, 2022 at 03:35:34PM +0000, Mark Rutland wrote:
> For historical reasons, the decision of whether or not to preempt is
> spread across arm64_preempt_schedule_irq() and __el1_irq(), and it would
> be clearer if this were all in one place.
> 
> Also, arm64_preempt_schedule_irq() calls lockdep_assert_irqs_disabled(),
> but this is redundant, as we have a subsequent identical assertion in
> __exit_to_kernel_mode(), and preempt_schedule_irq() will
> BUG_ON(!irqs_disabled()) anyway.
> 
> This patch removes the redundant assertion and centralizes the
> preemption decision making within arm64_preempt_schedule_irq().
> 
> Other than the slight change to assertion behaviour, there should be no
> functional change as a result of this patch.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Joey Gouly <joey.gouly@arm.com>
> Cc: Valentin Schneider <valentin.schneider@arm.com>
> Cc: Will Deacon <will@kernel.org>

I acked this patch in v2, has anything changed? Well, here it is again:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

BTW, you have a typo in the subject.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v3 7/7] arm64: support PREEMPT_DYNAMIC
  2022-02-09 15:35   ` Mark Rutland
@ 2022-02-09 18:13     ` Catalin Marinas
  -1 siblings, 0 replies; 40+ messages in thread
From: Catalin Marinas @ 2022-02-09 18:13 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, ardb, bp, dave.hansen, frederic, james.morse,
	joey.gouly, juri.lelli, linux-kernel, luto, mingo, peterz, tglx,
	valentin.schneider, will

On Wed, Feb 09, 2022 at 03:35:35PM +0000, Mark Rutland wrote:
> This patch enables support for PREEMPT_DYNAMIC on arm64, allowing the
> preemption model to be chosen at boot time.
> 
> Specifically, this patch selects HAVE_PREEMPT_DYNAMIC_KEY, so that each
> preemption function is an out-of-line call with an early return
> depending upon a static key. This leaves almost all the codegen up to
> the compiler, and side-steps a number of pain points with static calls
> (e.g. interaction with CFI schemes). This should have no worse overhead
> than using non-inline static calls, as those use out-of-line trampolines
> with early returns.
> 
> For example, the dynamic_cond_resched() wrapper looks as follows when
> enabled. When disabled, the first `B` is replaced with a `NOP`,
> resulting in an early return.
> 
> | <dynamic_cond_resched>:
> |        bti     c
> |        b       <dynamic_cond_resched+0x10>     // or `nop`
> |        mov     w0, #0x0
> |        ret
> |        mrs     x0, sp_el0
> |        ldr     x0, [x0, #8]
> |        cbnz    x0, <dynamic_cond_resched+0x8>
> |        paciasp
> |        stp     x29, x30, [sp, #-16]!
> |        mov     x29, sp
> |        bl      <preempt_schedule_common>
> |        mov     w0, #0x1
> |        ldp     x29, x30, [sp], #16
> |        autiasp
> |        ret
> 
> ... compared to the regular form of the function:
> 
> | <__cond_resched>:
> |        bti     c
> |        mrs     x0, sp_el0
> |        ldr     x1, [x0, #8]
> |        cbz     x1, <__cond_resched+0x18>
> |        mov     w0, #0x0
> |        ret
> |        paciasp
> |        stp     x29, x30, [sp, #-16]!
> |        mov     x29, sp
> |        bl      <preempt_schedule_common>
> |        mov     w0, #0x1
> |        ldp     x29, x30, [sp], #16
> |        autiasp
> |        ret
> 
> Since arm64 does not yet use the generic entry code, we must define our
> own `sk_dynamic_irqentry_exit_cond_resched`, which will be
> enabled/disabled by the common code in kernel/sched/core.c. All other
> preemption functions and associated static keys are defined there.
> 
> Note that PREEMPT_DYNAMIC is `def bool y`, so this will default to
> enabled.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Ard Biesheuvel <ardb@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Frederic Weisbecker <frederic@kernel.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Joey Gouly <joey.gouly@arm.com>
> Cc: Juri Lelli <juri.lelli@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Valentin Schneider <valentin.schneider@arm.com>
> Cc: Will Deacon <will@kernel.org>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v3 7/7] arm64: support PREEMPT_DYNAMIC
  2022-02-09 15:35   ` Mark Rutland
@ 2022-02-09 19:57     ` Frederic Weisbecker
  -1 siblings, 0 replies; 40+ messages in thread
From: Frederic Weisbecker @ 2022-02-09 19:57 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, ardb, bp, catalin.marinas, dave.hansen,
	james.morse, joey.gouly, juri.lelli, linux-kernel, luto, mingo,
	peterz, tglx, valentin.schneider, will

On Wed, Feb 09, 2022 at 03:35:35PM +0000, Mark Rutland wrote:
> Note that PREEMPT_DYNAMIC is `def bool y`, so this will default to
> enabled.

It should probably be "def_bool y if HAVE_STATIC_CALL_INLINE"...

Thanks.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v3 0/7] arm64 / sched/preempt: support PREEMPT_DYNAMIC with static keys
  2022-02-09 15:35 ` Mark Rutland
@ 2022-02-09 19:58   ` Frederic Weisbecker
  -1 siblings, 0 replies; 40+ messages in thread
From: Frederic Weisbecker @ 2022-02-09 19:58 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, ardb, bp, catalin.marinas, dave.hansen,
	james.morse, joey.gouly, juri.lelli, linux-kernel, luto, mingo,
	peterz, tglx, valentin.schneider, will

On Wed, Feb 09, 2022 at 03:35:28PM +0000, Mark Rutland wrote:
> This series enables PREEMPT_DYNAMIC on arm64. To do so, it adds a new
> mechanism allowing the preemption functions to be enabled/disabled using
> static keys rather than static calls, with architectures selecting
> whether they use static calls or static keys.
> 
> With non-inline static calls, each function call results in a call to
> the (out-of-line) trampoline which either tail-calls its associated
> callee or performs an early return.
> 
> The key idea is that where we're only enabling/disabling a single
> callee, we can inline this trampoline into the start of the callee,
> using a static key to decide whether to return early, and leaving the
> remaining codegen to the compiler. The overhead should be similar to
> (and likely lower than) using a static call trampoline. Since most
> codegen is up to the compiler, we sidestep a number of implementation
> pain-points (e.g. things like CFI should "just work" as well as they do
> for any other functions).
> 
> The bulk of the diffstat for kernel/sched/core.c is shuffling the
> PREEMPT_DYNAMIC code later in the file, and the actual additions are
> fairly trivial.
> 
> I've given this very light build+boot testing so far.
> 
> Since v1 [1]:
> * Rework Kconfig text to be clearer
> * Rework arm64 entry code
> * Clarify commit messages.
> 
> Since v2 [2]:
> * Add missing includes
> * Always provide prototype for preempt_schedule()
> * Always provide prototype for preempt_schedule_notrace()
> * Fix __cond_resched() to default to disabled
> * Fix might_resched() to default to disabled
> * Clarify example in commit message

Acked-by: Frederic Weisbecker <frederic@kernel.org>

Thanks!

^ permalink raw reply	[flat|nested] 40+ messages in thread
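
As a concrete illustration of the mechanism described in the cover letter quoted
above, a minimal C sketch of the static-key flavour of such a wrapper (based on
the series description, not the exact definitions in kernel/sched/core.c; the
initial key polarity and annotations such as __sched/EXPORT_SYMBOL are omitted
or illustrative):

  #include <linux/jump_label.h>
  #include <linux/sched.h>

  /* One key per dynamically-switched preemption function. */
  DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);

  int dynamic_cond_resched(void)
  {
          /* Key disabled: the patched-in NOP falls through to an early return. */
          if (!static_branch_unlikely(&sk_dynamic_cond_resched))
                  return 0;
          /* Key enabled: behave like the regular cond_resched() body. */
          return __cond_resched();
  }

Because the key is a jump label, flipping it patches the early-return branch in
place, and the compiler is free to inline __cond_resched() into the wrapper,
which gives the single self-contained function shown in the patch 7
disassembly.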

* Re: [PATCH v3 6/7] arm64: entry: centralize premeption decision
  2022-02-09 18:10     ` Catalin Marinas
@ 2022-02-10  9:19       ` Mark Rutland
  -1 siblings, 0 replies; 40+ messages in thread
From: Mark Rutland @ 2022-02-10  9:19 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-arm-kernel, ardb, bp, dave.hansen, frederic, james.morse,
	joey.gouly, juri.lelli, linux-kernel, luto, mingo, peterz, tglx,
	valentin.schneider, will

On Wed, Feb 09, 2022 at 06:10:52PM +0000, Catalin Marinas wrote:
> On Wed, Feb 09, 2022 at 03:35:34PM +0000, Mark Rutland wrote:
> > For historical reasons, the decision of whether or not to preempt is
> > spread across arm64_preempt_schedule_irq() and __el1_irq(), and it would
> > be clearer if this were all in one place.
> > 
> > Also, arm64_preempt_schedule_irq() calls lockdep_assert_irqs_disabled(),
> > but this is redundant, as we have a subsequent identical assertion in
> > __exit_to_kernel_mode(), and preempt_schedule_irq() will
> > BUG_ON(!irqs_disabled()) anyway.
> > 
> > This patch removes the redundant assertion and centralizes the
> > preemption decision making within arm64_preempt_schedule_irq().
> > 
> > Other than the slight change to assertion behaviour, there should be no
> > functional change as a result of this patch.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: James Morse <james.morse@arm.com>
> > Cc: Joey Gouly <joey.gouly@arm.com>
> > Cc: Valentin Schneider <valentin.schneider@arm.com>
> > Cc: Will Deacon <will@kernel.org>
> 
> I acked this patch in v2, has anything changed? Well, here it is again:

Sorry; I had meant to add your acks.

This patch is the same as in v2; the other patch has some minor changes as in
the cover letter (adding includes and always exposing a couple of function
prototypes).

> Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks!

> BTW, you have a typo in the subject.

I'll go fix that now.

Mark.

^ permalink raw reply	[flat|nested] 40+ messages in thread
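
To make the shape of that centralization concrete, an illustrative sketch of how
the arm64 entry code can combine the moved checks with the arm64-specific static
key mentioned in patch 7 (names follow the thread, but this is a simplification
rather than the verbatim patch; further arm64-specific checks, e.g. for IRQ
priority masking and cpufeature finalization, are omitted):

  #include <linux/jump_label.h>
  #include <linux/preempt.h>
  #include <linux/sched.h>
  #include <linux/thread_info.h>

  #ifdef CONFIG_PREEMPT_DYNAMIC
  /* Toggled by the common PREEMPT_DYNAMIC code in kernel/sched/core.c. */
  DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
  #define need_irq_preemption() \
          (static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
  #else
  #define need_irq_preemption()   IS_ENABLED(CONFIG_PREEMPTION)
  #endif

  static void arm64_preempt_schedule_irq(void)
  {
          /* The whole preemption decision now lives here, not in __el1_irq(). */
          if (!need_irq_preemption())
                  return;

          /*
           * arm64's thread_info::preempt_count folds in need_resched, so any
           * non-zero value means "do not preempt now".
           */
          if (READ_ONCE(current_thread_info()->preempt_count) != 0)
                  return;

          preempt_schedule_irq();
  }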

* Re: [PATCH v3 0/7] arm64 / sched/preempt: support PREEMPT_DYNAMIC with static keys
  2022-02-09 15:35 ` Mark Rutland
@ 2022-02-10  9:29   ` Ard Biesheuvel
  -1 siblings, 0 replies; 40+ messages in thread
From: Ard Biesheuvel @ 2022-02-10  9:29 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Linux ARM, Borislav Petkov, Catalin Marinas, Dave Hansen,
	Frederic Weisbecker, James Morse, joey.gouly, Juri Lelli,
	Linux Kernel Mailing List, Andy Lutomirski, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Will Deacon

On Wed, 9 Feb 2022 at 16:35, Mark Rutland <mark.rutland@arm.com> wrote:
>
> This series enables PREEMPT_DYNAMIC on arm64. To do so, it adds a new
> mechanism allowing the preemption functions to be enabled/disabled using
> static keys rather than static calls, with architectures selecting
> whether they use static calls or static keys.
>
> With non-inline static calls, each function call results in a call to
> the (out-of-line) trampoline which either tail-calls its associated
> callee or performs an early return.
>
> The key idea is that where we're only enabling/disabling a single
> callee, we can inline this trampoline into the start of the callee,
> using a static key to decide whether to return early, and leaving the
> remaining codegen to the compiler. The overhead should be similar to
> (and likely lower than) using a static call trampoline. Since most
> codegen is up to the compiler, we sidestep a number of implementation
> pain-points (e.g. things like CFI should "just work" as well as they do
> for any other functions).
>
> The bulk of the diffstat for kernel/sched/core.c is shuffling the
> PREEMPT_DYNAMIC code later in the file, and the actual additions are
> fairly trivial.
>
> I've given this very light build+boot testing so far.
>
> Since v1 [1]:
> * Rework Kconfig text to be clearer
> * Rework arm64 entry code
> * Clarify commit messages.
>
> Since v2 [2]:
> * Add missing includes
> * Always provide prototype for preempt_schedule()
> * Always provide prototype for preempt_schedule_notrace()
> * Fix __cond_resched() to default to disabled
> * Fix might_resched() to default to disabled
> * Clarify example in commit message
>
> [1] https://lore.kernel.org/r/20211109172408.49641-1-mark.rutland@arm.com/
> [2] https://lore.kernel.org/r/20220204150557.434610-1-mark.rutland@arm.com/
>
> Mark Rutland (7):
>   sched/preempt: move PREEMPT_DYNAMIC logic later
>   sched/preempt: refactor sched_dynamic_update()
>   sched/preempt: simplify irqentry_exit_cond_resched() callers
>   sched/preempt: decouple HAVE_PREEMPT_DYNAMIC from GENERIC_ENTRY
>   sched/preempt: add PREEMPT_DYNAMIC using static keys
>   arm64: entry: centralize premeption decision
>   arm64: support PREEMPT_DYNAMIC
>

Acked-by: Ard Biesheuvel <ardb@kernel.org>


>  arch/Kconfig                     |  37 +++-
>  arch/arm64/Kconfig               |   1 +
>  arch/arm64/include/asm/preempt.h |  19 +-
>  arch/arm64/kernel/entry-common.c |  28 ++-
>  arch/x86/Kconfig                 |   2 +-
>  arch/x86/include/asm/preempt.h   |  10 +-
>  include/linux/entry-common.h     |  15 +-
>  include/linux/kernel.h           |   7 +-
>  include/linux/sched.h            |  10 +-
>  kernel/entry/common.c            |  23 +-
>  kernel/sched/core.c              | 347 ++++++++++++++++++-------------
>  11 files changed, 327 insertions(+), 172 deletions(-)
>
> --
> 2.30.2
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v3 7/7] arm64: support PREEMPT_DYNAMIC
  2022-02-09 19:57     ` Frederic Weisbecker
@ 2022-02-10  9:38       ` Mark Rutland
  -1 siblings, 0 replies; 40+ messages in thread
From: Mark Rutland @ 2022-02-10  9:38 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: linux-arm-kernel, ardb, bp, catalin.marinas, dave.hansen,
	james.morse, joey.gouly, juri.lelli, linux-kernel, luto, mingo,
	peterz, tglx, valentin.schneider, will

On Wed, Feb 09, 2022 at 08:57:09PM +0100, Frederic Weisbecker wrote:
> On Wed, Feb 09, 2022 at 03:35:35PM +0000, Mark Rutland wrote:
> > Note that PREEMPT_DYNAMIC is `def bool y`, so this will default to
> > enabled.
> 
> It should probably be "def_bool y if HAVE_STATIC_CALL_INLINE"...

Sure; I'm more than happy to fold that into patch 5.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v3 5/7] sched/preempt: add PREEMPT_DYNAMIC using static keys
  2022-02-09 17:48     ` Frederic Weisbecker
@ 2022-02-10 10:27       ` Mark Rutland
  -1 siblings, 0 replies; 40+ messages in thread
From: Mark Rutland @ 2022-02-10 10:27 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: linux-arm-kernel, ardb, bp, catalin.marinas, dave.hansen,
	james.morse, joey.gouly, juri.lelli, linux-kernel, luto, mingo,
	peterz, tglx, valentin.schneider, will

On Wed, Feb 09, 2022 at 06:48:01PM +0100, Frederic Weisbecker wrote:
> On Wed, Feb 09, 2022 at 03:35:33PM +0000, Mark Rutland wrote:
> > +config HAVE_PREEMPT_DYNAMIC_KEY
> > +	bool
> > +	depends on JUMP_LABEL
> 
> This should probably be:
> 
>      depends on HAVE_ARCH_JUMP_LABEL && CC_HAS_ASM_GOTO
>      select JUMP_LABEL
> 
> Otherwise you may run into trouble if CONFIG_JUMP_LABEL is initially n.

I'll make that:

 config HAVE_PREEMPT_DYNAMIC_KEY
        bool
        depends on HAVE_ARCH_JUMP_LABEL && CC_HAS_ASM_GOTO
        ...

... and 

 config PREEMPT_DYNAMIC
        bool "Preemption behaviour defined on boot"
        depends on HAVE_PREEMPT_DYNAMIC && !PREEMPT_RT
        select JUMP_LABEL if HAVE_PREEMPT_DYNAMIC_KEY
        ...

So that we don't force JUMP_LABEL on even when people aren't using it.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 40+ messages in thread
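
For context on the runtime side of patch 5 that this Kconfig discussion feeds
into, a sketch of how the common code can flip such keys when the preemption
model is selected (abbreviated to two of the switched functions; macro and enum
names follow the series description and may differ from the final code):

  #include <linux/jump_label.h>

  /* Assumes per-function keys named sk_dynamic_<func>, as in the sketches above. */
  #define preempt_dynamic_enable(f)       static_key_enable(&sk_dynamic_##f.key)
  #define preempt_dynamic_disable(f)      static_key_disable(&sk_dynamic_##f.key)

  static void sched_dynamic_update(int mode)
  {
          switch (mode) {
          case preempt_dynamic_none:
                  /* cond_resched() points stay active; no IRQ-exit preemption. */
                  preempt_dynamic_enable(cond_resched);
                  preempt_dynamic_disable(irqentry_exit_cond_resched);
                  break;
          case preempt_dynamic_full:
                  /* cond_resched() becomes a no-op; preempt on IRQ exit instead. */
                  preempt_dynamic_disable(cond_resched);
                  preempt_dynamic_enable(irqentry_exit_cond_resched);
                  break;
          }
  }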

* Re: [PATCH v3 7/7] arm64: support PREEMPT_DYNAMIC
  2022-02-10  9:38       ` Mark Rutland
@ 2022-02-10 12:00         ` Mark Rutland
  -1 siblings, 0 replies; 40+ messages in thread
From: Mark Rutland @ 2022-02-10 12:00 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: linux-arm-kernel, ardb, bp, catalin.marinas, dave.hansen,
	james.morse, joey.gouly, juri.lelli, linux-kernel, luto, mingo,
	peterz, tglx, valentin.schneider, will

On Thu, Feb 10, 2022 at 09:38:37AM +0000, Mark Rutland wrote:
> On Wed, Feb 09, 2022 at 08:57:09PM +0100, Frederic Weisbecker wrote:
> > On Wed, Feb 09, 2022 at 03:35:35PM +0000, Mark Rutland wrote:
> > > Note that PREEMPT_DYNAMIC is `def bool y`, so this will default to
> > > enabled.
> > 
> > It should probably be "def_bool y if HAVE_STATIC_CALL_INLINE"...
> 
> Sure; I'm more than happy to fold that into patch 5.

For the moment I've made that:

	def_bool y if HAVE_PREEMPT_DYNAMIC_CALL

... since that fit more neatly with the other bits I had to add, and didn't
change the existing behaviour of 32-bit x86.

Please shout if you think that should be HAVE_STATIC_CALL_INLINE specifically!

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v3 7/7] arm64: support PREEMPT_DYNAMIC
  2022-02-10 12:00         ` Mark Rutland
@ 2022-02-10 15:58           ` Frederic Weisbecker
  -1 siblings, 0 replies; 40+ messages in thread
From: Frederic Weisbecker @ 2022-02-10 15:58 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, ardb, bp, catalin.marinas, dave.hansen,
	james.morse, joey.gouly, juri.lelli, linux-kernel, luto, mingo,
	peterz, tglx, valentin.schneider, will

On Thu, Feb 10, 2022 at 12:00:56PM +0000, Mark Rutland wrote:
> On Thu, Feb 10, 2022 at 09:38:37AM +0000, Mark Rutland wrote:
> > On Wed, Feb 09, 2022 at 08:57:09PM +0100, Frederic Weisbecker wrote:
> > > On Wed, Feb 09, 2022 at 03:35:35PM +0000, Mark Rutland wrote:
> > > > Note that PREEMPT_DYNAMIC is `def bool y`, so this will default to
> > > > enabled.
> > > 
> > > It should probably be "def_bool y if HAVE_STATIC_CALL_INLINE"...
> > 
> > Sure; I'm more than happy to fold that into patch 5.
> 
> For the moment I've made that:
> 
> 	def_bool y if HAVE_PREEMPT_DYNAMIC_CALL
> 
> ... since that fit more neatly with the other bits I had to add, and didn't
> change the existing behaviour of 32-bit x86.
> 
> Please shout if you think that should be HAVE_STATIC_CALL_INLINE specifically!

I seem to remember peterz didn't mind keeping it default y as long as
HAVE_STATIC_CALL* is set. So I guess that's fine.

Thanks!

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v3 5/7] sched/preempt: add PREEMPT_DYNAMIC using static keys
  2022-02-10 10:27       ` Mark Rutland
@ 2022-02-10 15:59         ` Frederic Weisbecker
  -1 siblings, 0 replies; 40+ messages in thread
From: Frederic Weisbecker @ 2022-02-10 15:59 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, ardb, bp, catalin.marinas, dave.hansen,
	james.morse, joey.gouly, juri.lelli, linux-kernel, luto, mingo,
	peterz, tglx, valentin.schneider, will

On Thu, Feb 10, 2022 at 10:27:39AM +0000, Mark Rutland wrote:
> On Wed, Feb 09, 2022 at 06:48:01PM +0100, Frederic Weisbecker wrote:
> > On Wed, Feb 09, 2022 at 03:35:33PM +0000, Mark Rutland wrote:
> > > +config HAVE_PREEMPT_DYNAMIC_KEY
> > > +	bool
> > > +	depends on JUMP_LABEL
> > 
> > This should probably be:
> > 
> >      depends on HAVE_ARCH_JUMP_LABEL && CC_HAS_ASM_GOTO
> >      select JUMP_LABEL
> > 
> > Otherwise you may run into trouble if CONFIG_JUMP_LABEL is initially n.
> 
> I'll make that:
> 
>  config HAVE_PREEMPT_DYNAMIC_KEY
>         bool
>         depends on HAVE_ARCH_JUMP_LABEL && CC_HAS_ASM_GOTO
>         ...
> 
> ... and 
> 
>  config PREEMPT_DYNAMIC
>         bool "Preemption behaviour defined on boot"
>         depends on HAVE_PREEMPT_DYNAMIC && !PREEMPT_RT
>         select JUMP_LABEL if HAVE_PREEMPT_DYNAMIC_KEY
>         ...
> 
> So that we don't force JUMP_LABEL on even when people aren't using it.

Much better!

Thanks!

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2022-02-10 16:00 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-09 15:35 [PATCH v3 0/7] arm64 / sched/preempt: support PREEMPT_DYNAMIC with static keys Mark Rutland
2022-02-09 15:35 ` Mark Rutland
2022-02-09 15:35 ` [PATCH v3 1/7] sched/preempt: move PREEMPT_DYNAMIC logic later Mark Rutland
2022-02-09 15:35   ` Mark Rutland
2022-02-09 15:35 ` [PATCH v3 2/7] sched/preempt: refactor sched_dynamic_update() Mark Rutland
2022-02-09 15:35   ` Mark Rutland
2022-02-09 15:35 ` [PATCH v3 3/7] sched/preempt: simplify irqentry_exit_cond_resched() callers Mark Rutland
2022-02-09 15:35   ` Mark Rutland
2022-02-09 15:35 ` [PATCH v3 4/7] sched/preempt: decouple HAVE_PREEMPT_DYNAMIC from GENERIC_ENTRY Mark Rutland
2022-02-09 15:35   ` Mark Rutland
2022-02-09 15:35 ` [PATCH v3 5/7] sched/preempt: add PREEMPT_DYNAMIC using static keys Mark Rutland
2022-02-09 15:35   ` Mark Rutland
2022-02-09 17:48   ` Frederic Weisbecker
2022-02-09 17:48     ` Frederic Weisbecker
2022-02-10 10:27     ` Mark Rutland
2022-02-10 10:27       ` Mark Rutland
2022-02-10 15:59       ` Frederic Weisbecker
2022-02-10 15:59         ` Frederic Weisbecker
2022-02-09 15:35 ` [PATCH v3 6/7] arm64: entry: centralize premeption decision Mark Rutland
2022-02-09 15:35   ` Mark Rutland
2022-02-09 18:10   ` Catalin Marinas
2022-02-09 18:10     ` Catalin Marinas
2022-02-10  9:19     ` Mark Rutland
2022-02-10  9:19       ` Mark Rutland
2022-02-09 15:35 ` [PATCH v3 7/7] arm64: support PREEMPT_DYNAMIC Mark Rutland
2022-02-09 15:35   ` Mark Rutland
2022-02-09 18:13   ` Catalin Marinas
2022-02-09 18:13     ` Catalin Marinas
2022-02-09 19:57   ` Frederic Weisbecker
2022-02-09 19:57     ` Frederic Weisbecker
2022-02-10  9:38     ` Mark Rutland
2022-02-10  9:38       ` Mark Rutland
2022-02-10 12:00       ` Mark Rutland
2022-02-10 12:00         ` Mark Rutland
2022-02-10 15:58         ` Frederic Weisbecker
2022-02-10 15:58           ` Frederic Weisbecker
2022-02-09 19:58 ` [PATCH v3 0/7] arm64 / sched/preempt: support PREEMPT_DYNAMIC with static keys Frederic Weisbecker
2022-02-09 19:58   ` Frederic Weisbecker
2022-02-10  9:29 ` Ard Biesheuvel
2022-02-10  9:29   ` Ard Biesheuvel
