* [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4
@ 2021-01-18 14:12 Frederic Weisbecker
  2021-01-18 14:12 ` [RFC PATCH 1/8] static_call/x86: Add __static_call_return0() Frederic Weisbecker
                   ` (9 more replies)
  0 siblings, 10 replies; 61+ messages in thread
From: Frederic Weisbecker @ 2021-01-18 14:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Frederic Weisbecker, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

Hi,

Here is a new version of the feature that allows selecting the preemption
flavour at boot time. Note that it doesn't entirely mimic the real
config-based preemption flavours, because at least the preempt-RCU
implementation is present in any case.

Also, there is still some work to do on subsystems that may play
their own games with CONFIG_PREEMPT.

In this version:

* Restore the initial simple __static_call_return0() implementation.

* Uninline __static_call_return0 on all flavours since its address is
always needed by DEFINE_STATIC_CALL()

* Introduce DEFINE_STATIC_CALL_RET0()

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
	preempt/dynamic-v4

HEAD: b5f3b1da9df4197d0b0ffe0f55f0f6a8c838d75f

Thanks,
	Frederic
---

Peter Zijlstra (Intel) (4):
      preempt/dynamic: Provide cond_resched() and might_resched() static calls
      preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
      preempt/dynamic: Provide irqentry_exit_cond_resched() static call
      preempt/dynamic: Support dynamic preempt with preempt= boot option

Peter Zijlstra (2):
      static_call/x86: Add __static_call_return0()
      static_call: Pull some static_call declarations to the type headers

Frederic Weisbecker (1):
      static_call: Provide DEFINE_STATIC_CALL_RET0()

Michal Hocko (1):
      preempt: Introduce CONFIG_PREEMPT_DYNAMIC


 Documentation/admin-guide/kernel-parameters.txt |  7 ++
 arch/Kconfig                                    |  9 +++
 arch/x86/Kconfig                                |  1 +
 arch/x86/include/asm/preempt.h                  | 34 ++++++---
 arch/x86/kernel/static_call.c                   | 17 ++++-
 include/linux/entry-common.h                    |  4 ++
 include/linux/kernel.h                          | 23 ++++--
 include/linux/sched.h                           | 27 ++++++-
 include/linux/static_call.h                     | 43 ++++--------
 include/linux/static_call_types.h               | 29 ++++++++
 kernel/Kconfig.preempt                          | 19 +++++
 kernel/entry/common.c                           | 10 ++-
 kernel/sched/core.c                             | 93 ++++++++++++++++++++++++-
 kernel/static_call.c                            |  5 ++
 14 files changed, 271 insertions(+), 50 deletions(-)

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [RFC PATCH 1/8] static_call/x86: Add __static_call_return0()
  2021-01-18 14:12 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Frederic Weisbecker
@ 2021-01-18 14:12 ` Frederic Weisbecker
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
  2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra
  2021-01-18 14:12 ` [RFC PATCH 2/8] static_call: Provide DEFINE_STATIC_CALL_RET0() Frederic Weisbecker
                   ` (8 subsequent siblings)
  9 siblings, 2 replies; 61+ messages in thread
From: Frederic Weisbecker @ 2021-01-18 14:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: Peter Zijlstra <peterz@infradead.org>

Provide a stub function that returns 0 and wire up the static call site
patching to replace the CALL with a single 5-byte instruction that
clears %RAX, the return value register.

The function can be cast to any function pointer type that has a
single %RAX return (including pointers). Also provide a version that
returns an int for convenience. We clear the entire %RAX register
in any case, whether the return value is 32 or 64 bits, since %RAX is
always a scratch register anyway.
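
As an illustrative aside (not part of this patch; "my_hook" is a made-up
name), an int-returning static call can be pointed at this stub, at
which point the x86 patching below replaces the CALL with the 5-byte
xor:

	/* Default implementation for the example hook. */
	static int my_hook_default(void)
	{
		return 1;
	}
	DEFINE_STATIC_CALL(my_hook, my_hook_default);

	/*
	 * Disable the hook: callers of static_call(my_hook)() then run
	 * "xorq %rax, %rax" inline instead of calling a function.
	 */
	static_call_update(my_hook,
			   (typeof(&my_hook_default))__static_call_return0);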

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 arch/x86/kernel/static_call.c | 17 +++++++++++++++--
 include/linux/static_call.h   |  2 ++
 kernel/static_call.c          |  5 +++++
 3 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
index ca9a380d9c0b..9442c4136c38 100644
--- a/arch/x86/kernel/static_call.c
+++ b/arch/x86/kernel/static_call.c
@@ -11,14 +11,26 @@ enum insn_type {
 	RET = 3,  /* tramp / site cond-tail-call */
 };
 
+/*
+ * data16 data16 xorq %rax, %rax - a single 5 byte instruction that clears %rax
+ * The REX.W cancels the effect of any data16.
+ */
+static const u8 xor5rax[] = { 0x66, 0x66, 0x48, 0x31, 0xc0 };
+
 static void __ref __static_call_transform(void *insn, enum insn_type type, void *func)
 {
+	const void *emulate = NULL;
 	int size = CALL_INSN_SIZE;
 	const void *code;
 
 	switch (type) {
 	case CALL:
 		code = text_gen_insn(CALL_INSN_OPCODE, insn, func);
+		if (func == &__static_call_return0) {
+			emulate = code;
+			code = &xor5rax;
+		}
+
 		break;
 
 	case NOP:
@@ -41,7 +53,7 @@ static void __ref __static_call_transform(void *insn, enum insn_type type, void
 	if (unlikely(system_state == SYSTEM_BOOTING))
 		return text_poke_early(insn, code, size);
 
-	text_poke_bp(insn, code, size, NULL);
+	text_poke_bp(insn, code, size, emulate);
 }
 
 static void __static_call_validate(void *insn, bool tail)
@@ -54,7 +66,8 @@ static void __static_call_validate(void *insn, bool tail)
 			return;
 	} else {
 		if (opcode == CALL_INSN_OPCODE ||
-		    !memcmp(insn, ideal_nops[NOP_ATOMIC5], 5))
+		    !memcmp(insn, ideal_nops[NOP_ATOMIC5], 5) ||
+		    !memcmp(insn, xor5rax, 5))
 			return;
 	}
 
diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 695da4c9b338..9f05d60aca70 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -134,6 +134,8 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool
 			     STATIC_CALL_TRAMP_ADDR(name), func);	\
 })
 
+extern long __static_call_return0(void);
+
 #ifdef CONFIG_HAVE_STATIC_CALL_INLINE
 
 extern int __init static_call_init(void);
diff --git a/kernel/static_call.c b/kernel/static_call.c
index 84565c2a41b8..0bc11b5ce681 100644
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -438,6 +438,11 @@ int __init static_call_init(void)
 }
 early_initcall(static_call_init);
 
+long __static_call_return0(void)
+{
+	return 0;
+}
+
 #ifdef CONFIG_STATIC_CALL_SELFTEST
 
 static int func_a(int x)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [RFC PATCH 2/8] static_call: Provide DEFINE_STATIC_CALL_RET0()
  2021-01-18 14:12 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Frederic Weisbecker
  2021-01-18 14:12 ` [RFC PATCH 1/8] static_call/x86: Add __static_call_return0() Frederic Weisbecker
@ 2021-01-18 14:12 ` Frederic Weisbecker
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Frederic Weisbecker
  2021-02-17 13:17   ` tip-bot2 for Frederic Weisbecker
  2021-01-18 14:12 ` [RFC PATCH 3/8] static_call: Pull some static_call declarations to the type headers Frederic Weisbecker
                   ` (7 subsequent siblings)
  9 siblings, 2 replies; 61+ messages in thread
From: Frederic Weisbecker @ 2021-01-18 14:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Frederic Weisbecker, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

DECLARE_STATIC_CALL() must pass the original function targeted for a
given static call. But DEFINE_STATIC_CALL() may want to initialize it
as off. In this case we can't pass NULL (for functions without a return
value) or __static_call_return0 (for functions returning a value)
directly to DEFINE_STATIC_CALL(), as that may trigger a static call
redeclaration with a different function prototype. Type casts can't
work around that either, as they don't get along with typeof().

The proper way to do that for functions that don't return a value is
to use DEFINE_STATIC_CALL_NULL(). But functions returning an actual
value don't have an equivalent yet.

Provide DEFINE_STATIC_CALL_RET0() to fill that gap.
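
For instance, here is a sketch mirroring how a later patch in this
series initializes cond_resched() in the full-preemption default:

	extern int __cond_resched(void);

	/* Starts out as "return 0", with no redeclaration conflict. */
	DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);

	/* Later, e.g. for preempt=voluntary, switch to the real thing: */
	static_call_update(cond_resched, __cond_resched);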

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/static_call.h | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 9f05d60aca70..076f124c957a 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -160,13 +160,13 @@ extern void __static_call_update(struct static_call_key *key, void *tramp, void
 extern int static_call_mod_init(struct module *mod);
 extern int static_call_text_reserved(void *start, void *end);
 
-#define DEFINE_STATIC_CALL(name, _func)					\
+#define __DEFINE_STATIC_CALL(name, _func, _func_init)			\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = _func,						\
+		.func = _func_init,					\
 		.type = 1,						\
 	};								\
-	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func)
+	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func_init)
 
 #define DEFINE_STATIC_CALL_NULL(name, _func)				\
 	DECLARE_STATIC_CALL(name, _func);				\
@@ -195,12 +195,12 @@ struct static_call_key {
 	void *func;
 };
 
-#define DEFINE_STATIC_CALL(name, _func)					\
+#define __DEFINE_STATIC_CALL(name, _func, _func_init)			\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = _func,						\
+		.func = _func_init,					\
 	};								\
-	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func)
+	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func_init)
 
 #define DEFINE_STATIC_CALL_NULL(name, _func)				\
 	DECLARE_STATIC_CALL(name, _func);				\
@@ -242,10 +242,10 @@ struct static_call_key {
 	void *func;
 };
 
-#define DEFINE_STATIC_CALL(name, _func)					\
+#define __DEFINE_STATIC_CALL(name, _func, _func_init)			\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = _func,						\
+		.func = _func_init,					\
 	}
 
 #define DEFINE_STATIC_CALL_NULL(name, _func)				\
@@ -297,4 +297,10 @@ static inline int static_call_text_reserved(void *start, void *end)
 
 #endif /* CONFIG_HAVE_STATIC_CALL */
 
+#define DEFINE_STATIC_CALL(name, _func)					\
+	__DEFINE_STATIC_CALL(name, _func, _func)
+
+#define DEFINE_STATIC_CALL_RET0(name, _func)				\
+	__DEFINE_STATIC_CALL(name, _func, __static_call_return0)
+
 #endif /* _LINUX_STATIC_CALL_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [RFC PATCH 3/8] static_call: Pull some static_call declarations to the type headers
  2021-01-18 14:12 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Frederic Weisbecker
  2021-01-18 14:12 ` [RFC PATCH 1/8] static_call/x86: Add __static_call_return0() Frederic Weisbecker
  2021-01-18 14:12 ` [RFC PATCH 2/8] static_call: Provide DEFINE_STATIC_CALL_RET0() Frederic Weisbecker
@ 2021-01-18 14:12 ` Frederic Weisbecker
  2021-01-18 17:06   ` kernel test robot
                     ` (4 more replies)
  2021-01-18 14:12 ` [RFC PATCH 4/8] preempt: Introduce CONFIG_PREEMPT_DYNAMIC Frederic Weisbecker
                   ` (6 subsequent siblings)
  9 siblings, 5 replies; 61+ messages in thread
From: Frederic Weisbecker @ 2021-01-18 14:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: Peter Zijlstra <peterz@infradead.org>

Some static call declarations are going to be needed in low level header
files. Move the necessary material to the dedicated static call types
header to avoid inclusion dependency hell.
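
The intended pattern, sketched after what a later patch in this series
does in linux/kernel.h, is that a low level header only pulls in the
lightweight types header:

	#include <linux/static_call_types.h>

	extern int __cond_resched(void);

	DECLARE_STATIC_CALL(might_resched, __cond_resched);

	static __always_inline void might_resched(void)
	{
		static_call(might_resched)();
	}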

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/static_call.h       | 23 -----------------------
 include/linux/static_call_types.h | 29 +++++++++++++++++++++++++++++
 2 files changed, 29 insertions(+), 23 deletions(-)

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 076f124c957a..c272ccd811de 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -107,26 +107,10 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool
 
 #define STATIC_CALL_TRAMP_ADDR(name) &STATIC_CALL_TRAMP(name)
 
-/*
- * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
- * the symbol table so that objtool can reference it when it generates the
- * .static_call_sites section.
- */
-#define __static_call(name)						\
-({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
-})
-
 #else
 #define STATIC_CALL_TRAMP_ADDR(name) NULL
 #endif
 
-
-#define DECLARE_STATIC_CALL(name, func)					\
-	extern struct static_call_key STATIC_CALL_KEY(name);		\
-	extern typeof(func) STATIC_CALL_TRAMP(name);
-
 #define static_call_update(name, func)					\
 ({									\
 	BUILD_BUG_ON(!__same_type(*(func), STATIC_CALL_TRAMP(name)));	\
@@ -134,8 +118,6 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool
 			     STATIC_CALL_TRAMP_ADDR(name), func);	\
 })
 
-extern long __static_call_return0(void);
-
 #ifdef CONFIG_HAVE_STATIC_CALL_INLINE
 
 extern int __init static_call_init(void);
@@ -176,7 +158,6 @@ extern int static_call_text_reserved(void *start, void *end);
 	};								\
 	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
 
-#define static_call(name)	__static_call(name)
 #define static_call_cond(name)	(void)__static_call(name)
 
 #define EXPORT_STATIC_CALL(name)					\
@@ -209,7 +190,6 @@ struct static_call_key {
 	};								\
 	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
 
-#define static_call(name)	__static_call(name)
 #define static_call_cond(name)	(void)__static_call(name)
 
 static inline
@@ -254,9 +234,6 @@ struct static_call_key {
 		.func = NULL,						\
 	}
 
-#define static_call(name)						\
-	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
-
 static inline void __static_call_nop(void) { }
 
 /*
diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 89135bb35bf7..632a9e79c003 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -4,6 +4,7 @@
 
 #include <linux/types.h>
 #include <linux/stringify.h>
+#include <linux/compiler.h>
 
 #define STATIC_CALL_KEY_PREFIX		__SCK__
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
@@ -32,4 +33,32 @@ struct static_call_site {
 	s32 key;
 };
 
+#define DECLARE_STATIC_CALL(name, func)					\
+	extern struct static_call_key STATIC_CALL_KEY(name);		\
+	extern typeof(func) STATIC_CALL_TRAMP(name);
+
+extern long __static_call_return0(void);
+
+#ifdef CONFIG_HAVE_STATIC_CALL
+
+/*
+ * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
+ * the symbol table so that objtool can reference it when it generates the
+ * .static_call_sites section.
+ */
+#define __static_call(name)						\
+({									\
+	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
+	&STATIC_CALL_TRAMP(name);					\
+})
+
+#define static_call(name)	__static_call(name)
+
+#else
+
+#define static_call(name)						\
+	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
+
+#endif /* CONFIG_HAVE_STATIC_CALL */
+
 #endif /* _STATIC_CALL_TYPES_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [RFC PATCH 4/8] preempt: Introduce CONFIG_PREEMPT_DYNAMIC
  2021-01-18 14:12 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Frederic Weisbecker
                   ` (2 preceding siblings ...)
  2021-01-18 14:12 ` [RFC PATCH 3/8] static_call: Pull some static_call declarations to the type headers Frederic Weisbecker
@ 2021-01-18 14:12 ` Frederic Weisbecker
  2021-01-22 16:53   ` Peter Zijlstra
                     ` (2 more replies)
  2021-01-18 14:12 ` [RFC PATCH 5/8] preempt/dynamic: Provide cond_resched() and might_resched() static calls Frederic Weisbecker
                   ` (5 subsequent siblings)
  9 siblings, 3 replies; 61+ messages in thread
From: Frederic Weisbecker @ 2021-01-18 14:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Michal Hocko, Mel Gorman, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: Michal Hocko <mhocko@kernel.org>

Preemption mode selection is currently hardcoded in Kconfig choices.
Introduce a dedicated option to tune the preemption flavour at boot time.

This will only be available on architectures that support static calls
efficiently, in order not to tempt users into a feature whose additional
overhead might be prohibitive or undesirable.

CONFIG_PREEMPT_DYNAMIC is automatically selected by CONFIG_PREEMPT if
the architecture provides the necessary support (CONFIG_HAVE_STATIC_CALL_INLINE,
CONFIG_GENERIC_ENTRY, and the __preempt_schedule_function() /
__preempt_schedule_notrace_function() wrappers).
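
An architecture then opts in with a one-line Kconfig select; a sketch
with a made-up arch symbol (the x86 hunk below does the equivalent):

	config HYPOTHETICAL_ARCH
		select HAVE_PREEMPT_DYNAMIC	if HAVE_STATIC_CALL_INLINE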

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 .../admin-guide/kernel-parameters.txt         |  7 +++++++
 arch/Kconfig                                  |  9 +++++++++
 arch/x86/Kconfig                              |  1 +
 kernel/Kconfig.preempt                        | 19 +++++++++++++++++++
 4 files changed, 36 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 9e3cdb271d06..75e6c5d736fb 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3916,6 +3916,13 @@
 			Format: {"off"}
 			Disable Hardware Transactional Memory
 
+	preempt=	[KNL]
+			Select preemption mode if you have CONFIG_PREEMPT_DYNAMIC
+			none - Limited to cond_resched() calls
+			voluntary - Limited to cond_resched() and might_sleep() calls
+			full - Any section that isn't explicitly preempt disabled
+			       can be preempted anytime.
+
 	print-fatal-signals=
 			[KNL] debug: print fatal signals
 
diff --git a/arch/Kconfig b/arch/Kconfig
index 24862d15f3a3..84db237bdb67 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1090,6 +1090,15 @@ config HAVE_STATIC_CALL_INLINE
 	bool
 	depends on HAVE_STATIC_CALL
 
+config HAVE_PREEMPT_DYNAMIC
+	bool
+	depends on HAVE_STATIC_CALL_INLINE
+	depends on GENERIC_ENTRY
+	help
+	   Select this if the architecture supports boot time preempt setting
+	   on top of static calls. It is strongly advised to support inline
+	   static calls to avoid any overhead.
+
 config ARCH_WANT_LD_ORPHAN_WARN
 	bool
 	help
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 21f851179ff0..cdae92ec29d2 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -224,6 +224,7 @@ config X86
 	select HAVE_STACK_VALIDATION		if X86_64
 	select HAVE_STATIC_CALL
 	select HAVE_STATIC_CALL_INLINE		if HAVE_STACK_VALIDATION
+	select HAVE_PREEMPT_DYNAMIC		if HAVE_STATIC_CALL_INLINE
 	select HAVE_RSEQ
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UNSTABLE_SCHED_CLOCK
diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
index bf82259cff96..416017301660 100644
--- a/kernel/Kconfig.preempt
+++ b/kernel/Kconfig.preempt
@@ -40,6 +40,7 @@ config PREEMPT
 	depends on !ARCH_NO_PREEMPT
 	select PREEMPTION
 	select UNINLINE_SPIN_UNLOCK if !ARCH_INLINE_SPIN_UNLOCK
+	select PREEMPT_DYNAMIC if HAVE_PREEMPT_DYNAMIC
 	help
 	  This option reduces the latency of the kernel by making
 	  all kernel code (that is not executing in a critical section)
@@ -80,3 +81,21 @@ config PREEMPT_COUNT
 config PREEMPTION
        bool
        select PREEMPT_COUNT
+
+config PREEMPT_DYNAMIC
+	bool
+	help
+	  This option allows defining the preemption model on the kernel
+	  command line and thus overriding the default preemption model
+	  chosen at compile time.
+
+	  The feature is primarily interesting for Linux distributions that
+	  provide a pre-built kernel binary and want to reduce the number of
+	  kernel flavors they offer while still covering different usecases.
+
+	  The runtime overhead is negligible with HAVE_STATIC_CALL_INLINE enabled
+	  but if runtime patching is not available for the specific architecture
+	  then the potential overhead should be considered.
+
+	  This is interesting if you want the same pre-built kernel to be
+	  used for both server and desktop workloads.
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [RFC PATCH 5/8] preempt/dynamic: Provide cond_resched() and might_resched() static calls
  2021-01-18 14:12 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Frederic Weisbecker
                   ` (3 preceding siblings ...)
  2021-01-18 14:12 ` [RFC PATCH 4/8] preempt: Introduce CONFIG_PREEMPT_DYNAMIC Frederic Weisbecker
@ 2021-01-18 14:12 ` Frederic Weisbecker
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra (Intel)
  2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
  2021-01-18 14:12 ` [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() " Frederic Weisbecker
                   ` (4 subsequent siblings)
  9 siblings, 2 replies; 61+ messages in thread
From: Frederic Weisbecker @ 2021-01-18 14:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: "Peter Zijlstra (Intel)" <peterz@infradead.org>

Provide static calls to control cond_resched() (called in !CONFIG_PREEMPT)
and might_resched() (called in CONFIG_PREEMPT_VOLUNTARY) so that we
can override their behaviour when the preempt= boot option is used.

Since the default behaviour is full preemption, both calls are
ignored when preempt= isn't passed.
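
A later patch in this series rewires these defaults from the preempt=
handler; for example, its "voluntary" branch boils down to:

	static_call_update(cond_resched, __cond_resched);
	static_call_update(might_resched, __cond_resched);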

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
[branch might_resched() directly to __cond_resched(), only define static
calls when PREEMPT_DYNAMIC]
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/kernel.h | 23 +++++++++++++++++++----
 include/linux/sched.h  | 27 ++++++++++++++++++++++++---
 kernel/sched/core.c    | 16 +++++++++++++---
 3 files changed, 56 insertions(+), 10 deletions(-)

diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index f7902d8c1048..cfd3d349f905 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -15,7 +15,7 @@
 #include <linux/typecheck.h>
 #include <linux/printk.h>
 #include <linux/build_bug.h>
-
+#include <linux/static_call_types.h>
 #include <asm/byteorder.h>
 
 #include <uapi/linux/kernel.h>
@@ -81,11 +81,26 @@ struct pt_regs;
 struct user;
 
 #ifdef CONFIG_PREEMPT_VOLUNTARY
-extern int _cond_resched(void);
-# define might_resched() _cond_resched()
+
+extern int __cond_resched(void);
+# define might_resched() __cond_resched()
+
+#elif defined(CONFIG_PREEMPT_DYNAMIC)
+
+extern int __cond_resched(void);
+
+DECLARE_STATIC_CALL(might_resched, __cond_resched);
+
+static __always_inline void might_resched(void)
+{
+	static_call(might_resched)();
+}
+
 #else
+
 # define might_resched() do { } while (0)
-#endif
+
+#endif /* CONFIG_PREEMPT_* */
 
 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
 extern void ___might_sleep(const char *file, int line, int preempt_offset);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6e3a5eeec509..86bcb589da09 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1871,11 +1871,32 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
  * value indicates whether a reschedule was done in fact.
  * cond_resched_lock() will drop the spinlock before scheduling,
  */
-#ifndef CONFIG_PREEMPTION
-extern int _cond_resched(void);
+#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
+extern int __cond_resched(void);
+
+#ifdef CONFIG_PREEMPT_DYNAMIC
+
+DECLARE_STATIC_CALL(cond_resched, __cond_resched);
+
+static __always_inline int _cond_resched(void)
+{
+	return static_call(cond_resched)();
+}
+
 #else
+
+static inline int _cond_resched(void)
+{
+	return __cond_resched();
+}
+
+#endif /* CONFIG_PREEMPT_DYNAMIC */
+
+#else
+
 static inline int _cond_resched(void) { return 0; }
-#endif
+
+#endif /* !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC) */
 
 #define cond_resched() ({			\
 	___might_sleep(__FILE__, __LINE__, 0);	\
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 15d2562118d1..d6de12b4eef2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6654,17 +6654,27 @@ SYSCALL_DEFINE0(sched_yield)
 	return 0;
 }
 
-#ifndef CONFIG_PREEMPTION
-int __sched _cond_resched(void)
+#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
+int __sched __cond_resched(void)
 {
 	if (should_resched(0)) {
 		preempt_schedule_common();
 		return 1;
 	}
+#ifndef CONFIG_PREEMPT_RCU
 	rcu_all_qs();
+#endif
 	return 0;
 }
-EXPORT_SYMBOL(_cond_resched);
+EXPORT_SYMBOL(__cond_resched);
+#endif
+
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
+EXPORT_STATIC_CALL(cond_resched);
+
+DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
+EXPORT_STATIC_CALL(might_resched);
 #endif
 
 /*
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-18 14:12 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Frederic Weisbecker
                   ` (4 preceding siblings ...)
  2021-01-18 14:12 ` [RFC PATCH 5/8] preempt/dynamic: Provide cond_resched() and might_resched() static calls Frederic Weisbecker
@ 2021-01-18 14:12 ` Frederic Weisbecker
  2021-01-21 21:58   ` Peter Zijlstra
                     ` (3 more replies)
  2021-01-18 14:12 ` [RFC PATCH 7/8] preempt/dynamic: Provide irqentry_exit_cond_resched() static call Frederic Weisbecker
                   ` (3 subsequent siblings)
  9 siblings, 4 replies; 61+ messages in thread
From: Frederic Weisbecker @ 2021-01-18 14:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: "Peter Zijlstra (Intel)" <peterz@infradead.org>

Provide static calls to control preempt_schedule[_notrace]()
(called in CONFIG_PREEMPT) so that we can override their behaviour when
the preempt= boot option is used.

Since the default behaviour is full preemption, both calls are
initialized to the arch-provided wrapper, if any.
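
For the non-preemptible modes, a later patch in this series points these
calls at NULL, which the static call infrastructure turns into a NOP at
the call site; its "none" branch boils down to:

	static_call_update(preempt_schedule,
			   (typeof(&preempt_schedule)) NULL);
	static_call_update(preempt_schedule_notrace,
			   (typeof(&preempt_schedule_notrace)) NULL);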

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
[only define static calls when PREEMPT_DYNAMIC, make it less dependent
on x86 with __preempt_schedule_func()]
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 arch/x86/include/asm/preempt.h | 34 ++++++++++++++++++++++++++--------
 kernel/sched/core.c            | 12 ++++++++++++
 2 files changed, 38 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 69485ca13665..3db9cb8b1a25 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -5,6 +5,7 @@
 #include <asm/rmwcc.h>
 #include <asm/percpu.h>
 #include <linux/thread_info.h>
+#include <linux/static_call_types.h>
 
 DECLARE_PER_CPU(int, __preempt_count);
 
@@ -103,16 +104,33 @@ static __always_inline bool should_resched(int preempt_offset)
 }
 
 #ifdef CONFIG_PREEMPTION
-  extern asmlinkage void preempt_schedule_thunk(void);
-# define __preempt_schedule() \
-	asm volatile ("call preempt_schedule_thunk" : ASM_CALL_CONSTRAINT)
 
-  extern asmlinkage void preempt_schedule(void);
-  extern asmlinkage void preempt_schedule_notrace_thunk(void);
-# define __preempt_schedule_notrace() \
-	asm volatile ("call preempt_schedule_notrace_thunk" : ASM_CALL_CONSTRAINT)
+extern asmlinkage void preempt_schedule(void);
+extern asmlinkage void preempt_schedule_thunk(void);
+
+#define __preempt_schedule_func() preempt_schedule_thunk
+
+DECLARE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
+
+#define __preempt_schedule() \
+do { \
+	__ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule)); \
+	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule) : ASM_CALL_CONSTRAINT); \
+} while (0)
+
+extern asmlinkage void preempt_schedule_notrace(void);
+extern asmlinkage void preempt_schedule_notrace_thunk(void);
+
+#define __preempt_schedule_notrace_func() preempt_schedule_notrace_thunk
+
+DECLARE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
+
+#define __preempt_schedule_notrace() \
+do { \
+	__ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule_notrace)); \
+	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule_notrace) : ASM_CALL_CONSTRAINT); \
+} while (0)
 
-  extern asmlinkage void preempt_schedule_notrace(void);
 #endif
 
 #endif /* __ASM_PREEMPT_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d6de12b4eef2..faff4b546c5f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5251,6 +5251,12 @@ asmlinkage __visible void __sched notrace preempt_schedule(void)
 NOKPROBE_SYMBOL(preempt_schedule);
 EXPORT_SYMBOL(preempt_schedule);
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
+EXPORT_STATIC_CALL(preempt_schedule);
+#endif
+
+
 /**
  * preempt_schedule_notrace - preempt_schedule called by tracing
  *
@@ -5303,6 +5309,12 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
 }
 EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
+EXPORT_STATIC_CALL(preempt_schedule_notrace);
+#endif
+
+
 #endif /* CONFIG_PREEMPTION */
 
 /*
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [RFC PATCH 7/8] preempt/dynamic: Provide irqentry_exit_cond_resched() static call
  2021-01-18 14:12 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Frederic Weisbecker
                   ` (5 preceding siblings ...)
  2021-01-18 14:12 ` [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() " Frederic Weisbecker
@ 2021-01-18 14:12 ` Frederic Weisbecker
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra (Intel)
  2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
  2021-01-18 14:12 ` [RFC PATCH 8/8] preempt/dynamic: Support dynamic preempt with preempt= boot option Frederic Weisbecker
                   ` (2 subsequent siblings)
  9 siblings, 2 replies; 61+ messages in thread
From: Frederic Weisbecker @ 2021-01-18 14:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: "Peter Zijlstra (Intel)" <peterz@infradead.org>

Provide a static call to control IRQ preemption (called in CONFIG_PREEMPT)
so that we can override its behaviour when the preempt= boot option is
used.

Since the default behaviour is full preemption, the call is
initialized to provide IRQ preemption when preempt= isn't passed.
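
As with the other sites, a later patch in this series can then disable
IRQ preemption by NULLing the call, turning it into a NOP:

	static_call_update(irqentry_exit_cond_resched,
			   (typeof(&irqentry_exit_cond_resched)) NULL);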

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/entry-common.h |  4 ++++
 kernel/entry/common.c        | 10 +++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index ca86a00abe86..1401c93b65e1 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_ENTRYCOMMON_H
 #define __LINUX_ENTRYCOMMON_H
 
+#include <linux/static_call_types.h>
 #include <linux/tracehook.h>
 #include <linux/syscalls.h>
 #include <linux/seccomp.h>
@@ -453,6 +454,9 @@ irqentry_state_t noinstr irqentry_enter(struct pt_regs *regs);
  * Conditional reschedule with additional sanity checks.
  */
 void irqentry_exit_cond_resched(void);
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DECLARE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+#endif
 
 /**
  * irqentry_exit - Handle return from exception that used irqentry_enter()
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 378341642f94..84fa7ec28c36 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -393,6 +393,9 @@ void irqentry_exit_cond_resched(void)
 			preempt_schedule_irq();
 	}
 }
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+#endif
 
 noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
 {
@@ -419,8 +422,13 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
 		}
 
 		instrumentation_begin();
-		if (IS_ENABLED(CONFIG_PREEMPTION))
+		if (IS_ENABLED(CONFIG_PREEMPTION)) {
+#ifdef CONFIG_PREEMPT_DYNAMIC
+			static_call(irqentry_exit_cond_resched)();
+#else
 			irqentry_exit_cond_resched();
+#endif
+		}
 		/* Covers both tracing and lockdep */
 		trace_hardirqs_on();
 		instrumentation_end();
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [RFC PATCH 8/8] preempt/dynamic: Support dynamic preempt with preempt= boot option
  2021-01-18 14:12 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Frederic Weisbecker
                   ` (6 preceding siblings ...)
  2021-01-18 14:12 ` [RFC PATCH 7/8] preempt/dynamic: Provide irqentry_exit_cond_resched() static call Frederic Weisbecker
@ 2021-01-18 14:12 ` Frederic Weisbecker
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra (Intel)
                     ` (2 more replies)
  2021-01-21 21:22 ` [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Peter Zijlstra
  2021-01-22 15:02 ` Paul E. McKenney
  9 siblings, 3 replies; 61+ messages in thread
From: Frederic Weisbecker @ 2021-01-18 14:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: "Peter Zijlstra (Intel)" <peterz@infradead.org>

Support the preempt= boot option and patch the static call sites
accordingly.
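
As an illustrative example (not part of the diff): booting a
CONFIG_PREEMPT_DYNAMIC=y kernel with

	preempt=voluntary

on the command line switches the defaults over during early init, and
the kernel logs "Dynamic Preempt: voluntary" via the pr_info() below.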

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/sched/core.c | 67 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 66 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index faff4b546c5f..509bd51f59aa 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -12,6 +12,7 @@
 
 #include "sched.h"
 
+#include <linux/entry-common.h>
 #include <linux/nospec.h>
 
 #include <linux/kcov.h>
@@ -5314,9 +5315,73 @@ DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
 EXPORT_STATIC_CALL(preempt_schedule_notrace);
 #endif
 
-
 #endif /* CONFIG_PREEMPTION */
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+
+/*
+ * SC:cond_resched
+ * SC:might_resched
+ * SC:preempt_schedule
+ * SC:preempt_schedule_notrace
+ * SC:irqentry_exit_cond_resched
+ *
+ *
+ * NONE:
+ *   cond_resched               <- __cond_resched
+ *   might_resched              <- RET0
+ *   preempt_schedule           <- NOP
+ *   preempt_schedule_notrace   <- NOP
+ *   irqentry_exit_cond_resched <- NOP
+ *
+ * VOLUNTARY:
+ *   cond_resched               <- __cond_resched
+ *   might_resched              <- __cond_resched
+ *   preempt_schedule           <- NOP
+ *   preempt_schedule_notrace   <- NOP
+ *   irqentry_exit_cond_resched <- NOP
+ *
+ * FULL:
+ *   cond_resched               <- RET0
+ *   might_resched              <- RET0
+ *   preempt_schedule           <- preempt_schedule
+ *   preempt_schedule_notrace   <- preempt_schedule_notrace
+ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
+ */
+static int __init setup_preempt_mode(char *str)
+{
+	if (!strcmp(str, "none")) {
+		static_call_update(cond_resched, __cond_resched);
+		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
+		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
+		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
+		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else if (!strcmp(str, "voluntary")) {
+		static_call_update(cond_resched, __cond_resched);
+		static_call_update(might_resched, __cond_resched);
+		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
+		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
+		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else if (!strcmp(str, "full")) {
+		static_call_update(cond_resched, (typeof(&__cond_resched)) __static_call_return0);
+		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
+		static_call_update(preempt_schedule, __preempt_schedule_func());
+		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func());
+		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else {
+		pr_warn("Dynamic Preempt: Unsupported preempt mode %s, default to full\n", str);
+		return 1;
+	}
+	return 0;
+}
+__setup("preempt=", setup_preempt_mode);
+
+#endif /* CONFIG_PREEMPT_DYNAMIC */
+
+
 /*
  * This is the entry point to schedule() from kernel preemption
  * off of irq context.
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 3/8] static_call: Pull some static_call declarations to the type headers
  2021-01-18 14:12 ` [RFC PATCH 3/8] static_call: Pull some static_call declarations to the type headers Frederic Weisbecker
@ 2021-01-18 17:06   ` kernel test robot
  2021-01-19 10:26   ` kernel test robot
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 61+ messages in thread
From: kernel test robot @ 2021-01-18 17:06 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 1897 bytes --]

Hi Frederic,

[FYI, it's a private test report for your RFC patch.]
[auto build test WARNING on tip/sched/core]
[also build test WARNING on linus/master v5.11-rc4 next-20210118]
[cannot apply to tip/x86/core]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Frederic-Weisbecker/static_call-x86-Add-__static_call_return0/20210118-231312
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 65bcf072e20ed7597caa902f170f293662b0af3c
config: x86_64-randconfig-s022-20210118 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.3-208-g46a52ca4-dirty
        # https://github.com/0day-ci/linux/commit/56103459734b6ee3e431c9907b11e0008319ae34
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Frederic-Weisbecker/static_call-x86-Add-__static_call_return0/20210118-231312
        git checkout 56103459734b6ee3e431c9907b11e0008319ae34
        # save the attached .config to linux build tree
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> Warning: Kernel ABI header at 'tools/include/linux/static_call_types.h' differs from latest version at 'include/linux/static_call_types.h'
--
>> Warning: Kernel ABI header at 'tools/include/linux/static_call_types.h' differs from latest version at 'include/linux/static_call_types.h'

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 28335 bytes --]

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 3/8] static_call: Pull some static_call declarations to the type headers
  2021-01-18 14:12 ` [RFC PATCH 3/8] static_call: Pull some static_call declarations to the type headers Frederic Weisbecker
  2021-01-18 17:06   ` kernel test robot
@ 2021-01-19 10:26   ` kernel test robot
  2021-01-19 10:46   ` Jürgen Groß
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 61+ messages in thread
From: kernel test robot @ 2021-01-19 10:26 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 2324 bytes --]

Hi Frederic,

[FYI, it's a private test report for your RFC patch.]
[auto build test WARNING on tip/sched/core]
[also build test WARNING on tip/core/entry linus/master v5.11-rc4 next-20210118]
[cannot apply to tip/x86/core]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Frederic-Weisbecker/static_call-x86-Add-__static_call_return0/20210118-231312
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 65bcf072e20ed7597caa902f170f293662b0af3c
config: x86_64-randconfig-r006-20210118 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project 95d146182fdf2315e74943b93fb3bb0cbafc5d89)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # https://github.com/0day-ci/linux/commit/56103459734b6ee3e431c9907b11e0008319ae34
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Frederic-Weisbecker/static_call-x86-Add-__static_call_return0/20210118-231312
        git checkout 56103459734b6ee3e431c9907b11e0008319ae34
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> Warning: Kernel ABI header at 'tools/include/linux/static_call_types.h' differs from latest version at 'include/linux/static_call_types.h'
--
>> Warning: Kernel ABI header at 'tools/include/linux/static_call_types.h' differs from latest version at 'include/linux/static_call_types.h'
--
>> Warning: Kernel ABI header at 'tools/include/linux/static_call_types.h' differs from latest version at 'include/linux/static_call_types.h'

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 39638 bytes --]

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 3/8] static_call: Pull some static_call declarations to the type headers
  2021-01-18 14:12 ` [RFC PATCH 3/8] static_call: Pull some static_call declarations to the type headers Frederic Weisbecker
  2021-01-18 17:06   ` kernel test robot
  2021-01-19 10:26   ` kernel test robot
@ 2021-01-19 10:46   ` Jürgen Groß
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
  2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra
  4 siblings, 0 replies; 61+ messages in thread
From: Jürgen Groß @ 2021-01-19 10:46 UTC (permalink / raw)
  To: Frederic Weisbecker, Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko


[-- Attachment #1.1.1: Type: text/plain, Size: 1007 bytes --]

On 18.01.21 15:12, Frederic Weisbecker wrote:
> From: Peter Zijlstra <peterz@infradead.org>
> 
> Some static call declarations are going to be needed on low level header
> files. Move the necessary material to the dedicated static call types
> header to avoid inclusion dependency hell.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Paul E. McKenney <paulmck@kernel.org>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>

Could the patches please be reordered to have this one first in the
series? I could make use of it in my paravirt simplification series.

I'll include this patch (in slightly modified form, i.e. without the
__static_call_return0() parts) in my series in order to make it
build.

My tests suggest you should also update
tools/include/linux/static_call_types.h


Juergen

[-- Attachment #1.1.2: OpenPGP_0xB0DE9DD628BF132F.asc --]
[-- Type: application/pgp-keys, Size: 3135 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 495 bytes --]

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4
  2021-01-18 14:12 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Frederic Weisbecker
                   ` (7 preceding siblings ...)
  2021-01-18 14:12 ` [RFC PATCH 8/8] preempt/dynamic: Support dynamic preempt with preempt= boot option Frederic Weisbecker
@ 2021-01-21 21:22 ` Peter Zijlstra
  2021-01-22 15:02 ` Paul E. McKenney
  9 siblings, 0 replies; 61+ messages in thread
From: Peter Zijlstra @ 2021-01-21 21:22 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Mon, Jan 18, 2021 at 03:12:15PM +0100, Frederic Weisbecker wrote:
> * Uninline __static_call_return0 on all flavours since its address is
> always needed by DEFINE_STATIC_CALL()

obj-$(CONFIG_HAVE_STATIC_CALL_INLINE) += static_call.o

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-18 14:12 ` [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() " Frederic Weisbecker
@ 2021-01-21 21:58   ` Peter Zijlstra
  2021-01-21 22:25     ` Peter Zijlstra
  2021-01-22 16:52   ` Peter Zijlstra
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2021-01-21 21:58 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Mon, Jan 18, 2021 at 03:12:21PM +0100, Frederic Weisbecker wrote:
> diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
> index 69485ca13665..3db9cb8b1a25 100644
> --- a/arch/x86/include/asm/preempt.h
> +++ b/arch/x86/include/asm/preempt.h
> @@ -5,6 +5,7 @@
>  #include <asm/rmwcc.h>
>  #include <asm/percpu.h>
>  #include <linux/thread_info.h>
> +#include <linux/static_call_types.h>
>  
>  DECLARE_PER_CPU(int, __preempt_count);
>  
> @@ -103,16 +104,33 @@ static __always_inline bool should_resched(int preempt_offset)
>  }
>  
>  #ifdef CONFIG_PREEMPTION
> -  extern asmlinkage void preempt_schedule_thunk(void);
> -# define __preempt_schedule() \
> -	asm volatile ("call preempt_schedule_thunk" : ASM_CALL_CONSTRAINT)
>  
> -  extern asmlinkage void preempt_schedule(void);
> -  extern asmlinkage void preempt_schedule_notrace_thunk(void);
> -# define __preempt_schedule_notrace() \
> -	asm volatile ("call preempt_schedule_notrace_thunk" : ASM_CALL_CONSTRAINT)
> +extern asmlinkage void preempt_schedule(void);
> +extern asmlinkage void preempt_schedule_thunk(void);
> +
> +#define __preempt_schedule_func() preempt_schedule_thunk
> +
> +DECLARE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
> +
> +#define __preempt_schedule() \
> +do { \
> +	__ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule)); \
> +	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule) : ASM_CALL_CONSTRAINT); \
> +} while (0)
> +
> +extern asmlinkage void preempt_schedule_notrace(void);
> +extern asmlinkage void preempt_schedule_notrace_thunk(void);
> +
> +#define __preempt_schedule_notrace_func() preempt_schedule_notrace_thunk
> +
> +DECLARE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
> +
> +#define __preempt_schedule_notrace() \
> +do { \
> +	__ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule_notrace)); \
> +	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule_notrace) : ASM_CALL_CONSTRAINT); \
> +} while (0)
>  
> -  extern asmlinkage void preempt_schedule_notrace(void);
>  #endif

I'm thinking the above doesn't build for !PREEMPT_DYNAMIC, given it
relies on the STATIC_CALL unconditionally, but we only define it for
PREEMPT_DYNAMIC:

> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index d6de12b4eef2..faff4b546c5f 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5251,6 +5251,12 @@ asmlinkage __visible void __sched notrace preempt_schedule(void)
>  NOKPROBE_SYMBOL(preempt_schedule);
>  EXPORT_SYMBOL(preempt_schedule);
>  
> +#ifdef CONFIG_PREEMPT_DYNAMIC
> +DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
> +EXPORT_STATIC_CALL(preempt_schedule);
> +#endif
> +
> +
>  /**
>   * preempt_schedule_notrace - preempt_schedule called by tracing
>   *
> @@ -5303,6 +5309,12 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
>  }
>  EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
>  
> +#ifdef CONFIG_PREEMPT_DYNAMIC
> +DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
> +EXPORT_STATIC_CALL(preempt_schedule_notrace);
> +#endif
> +
> +
>  #endif /* CONFIG_PREEMPTION */
>  
>  /*
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-21 21:58   ` Peter Zijlstra
@ 2021-01-21 22:25     ` Peter Zijlstra
  0 siblings, 0 replies; 61+ messages in thread
From: Peter Zijlstra @ 2021-01-21 22:25 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Thu, Jan 21, 2021 at 10:58:26PM +0100, Peter Zijlstra wrote:
> I'm thinking the above doesn't build for !PREEMPT_DYNAMIC, given it
> relies on the STATIC_CALL unconditionally, but we only define it for
> PREEMPT_DYNAMIC:

Ooh, I see, x86 cannot get there anymore.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4
  2021-01-18 14:12 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Frederic Weisbecker
                   ` (8 preceding siblings ...)
  2021-01-21 21:22 ` [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Peter Zijlstra
@ 2021-01-22 15:02 ` Paul E. McKenney
  9 siblings, 0 replies; 61+ messages in thread
From: Paul E. McKenney @ 2021-01-22 15:02 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Peter Zijlstra, LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Ingo Molnar, Michal Hocko

On Mon, Jan 18, 2021 at 03:12:15PM +0100, Frederic Weisbecker wrote:
> Hi,
> 
> Here is a new version of the feature that allows selecting the preemption
> flavour at boot time. Note that it doesn't entirely mimic the real
> config-based preemption flavours, because at least the preempt-RCU
> implementation is present in any case.
> 
> Also, there is still some work to do on subsystems that may play
> their own games with CONFIG_PREEMPT.
> 
> In this version:
> 
> * Restore the initial simple __static_call_return0() implementation.
> 
> * Uninline __static_call_return0 on all flavours since its address is
> always needed by DEFINE_STATIC_CALL()
> 
> * Introduce DEFINE_STATIC_CALL_RET0()
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
> 	preempt/dynamic-v4
> 
> HEAD: b5f3b1da9df4197d0b0ffe0f55f0f6a8c838d75f

I gave these a quick test and got the following:

Warning: Kernel ABI header at 'tools/include/linux/static_call_types.h' differs from latest version at 'include/linux/static_call_types.h'.

Other than that, looks good.

							Thanx, Paul

> Thanks,
> 	Frederic
> ---
> 
> Peter Zijlstra (Intel) (4):
>       preempt/dynamic: Provide cond_resched() and might_resched() static calls
>       preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
>       preempt/dynamic: Provide irqentry_exit_cond_resched() static call
>       preempt/dynamic: Support dynamic preempt with preempt= boot option
> 
> Peter Zijlstra (2):
>       static_call/x86: Add __static_call_return0()
>       static_call: Pull some static_call declarations to the type headers
> 
> Frederic Weisbecker (1):
>       static_call: Provide DEFINE_STATIC_CALL_RET0()
> 
> Michal Hocko (1):
>       preempt: Introduce CONFIG_PREEMPT_DYNAMIC
> 
> 
>  Documentation/admin-guide/kernel-parameters.txt |  7 ++
>  arch/Kconfig                                    |  9 +++
>  arch/x86/Kconfig                                |  1 +
>  arch/x86/include/asm/preempt.h                  | 34 ++++++---
>  arch/x86/kernel/static_call.c                   | 17 ++++-
>  include/linux/entry-common.h                    |  4 ++
>  include/linux/kernel.h                          | 23 ++++--
>  include/linux/sched.h                           | 27 ++++++-
>  include/linux/static_call.h                     | 43 ++++--------
>  include/linux/static_call_types.h               | 29 ++++++++
>  kernel/Kconfig.preempt                          | 19 +++++
>  kernel/entry/common.c                           | 10 ++-
>  kernel/sched/core.c                             | 93 ++++++++++++++++++++++++-
>  kernel/static_call.c                            |  5 ++
>  14 files changed, 271 insertions(+), 50 deletions(-)

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-18 14:12 ` [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() " Frederic Weisbecker
  2021-01-21 21:58   ` Peter Zijlstra
@ 2021-01-22 16:52   ` Peter Zijlstra
  2021-01-22 16:57     ` Ard Biesheuvel
                       ` (2 more replies)
  2021-02-08 12:00   ` [tip: sched/core] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls tip-bot2 for Peter Zijlstra (Intel)
  2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
  3 siblings, 3 replies; 61+ messages in thread
From: Peter Zijlstra @ 2021-01-22 16:52 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko, Josh Poimboeuf,
	rostedt, jbaron, ardb

On Mon, Jan 18, 2021 at 03:12:21PM +0100, Frederic Weisbecker wrote:
> +#ifdef CONFIG_PREEMPT_DYNAMIC
> +DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
> +EXPORT_STATIC_CALL(preempt_schedule);
> +#endif

> +#ifdef CONFIG_PREEMPT_DYNAMIC
> +DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
> +EXPORT_STATIC_CALL(preempt_schedule_notrace);
> +#endif

So one of the things I hate most about this is that it allows 'random'
modules to hijack preemption by rewriting these callsites. Once you
export the key, we've lost.

I've tried a number of things, but this is the only one I could come up
with that actually stands a chance against malicious modules (vbox and
the like).

It's somewhat elaborate, but afaict it actually works.
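
A minimal sketch of the kind of hijack this is meant to stop, assuming
the key stays exported as above (the module and its replacement
function are hypothetical, and the prototype is assumed to match the
declared call):

/*
 * Hypothetical out-of-tree module: with __SCK__preempt_schedule
 * exported, nothing stops it from retargeting every
 * preempt_schedule() call site in the kernel.
 */
#include <linux/module.h>
#include <linux/static_call.h>

static asmlinkage void hijack_preempt_schedule(void)
{
	/* silently swallow preemption */
}

static int __init hijack_init(void)
{
	static_call_update(preempt_schedule, hijack_preempt_schedule);
	return 0;
}
module_init(hijack_init);
MODULE_LICENSE("GPL");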

---

--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -114,7 +114,7 @@ DECLARE_STATIC_CALL(preempt_schedule, __
 
 #define __preempt_schedule() \
 do { \
-	__ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule)); \
+	__STATIC_CALL_MOD_ADDRESSABLE(preempt_schedule); \
 	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule) : ASM_CALL_CONSTRAINT); \
 } while (0)
 
@@ -127,7 +127,7 @@ DECLARE_STATIC_CALL(preempt_schedule_not
 
 #define __preempt_schedule_notrace() \
 do { \
-	__ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule_notrace)); \
+	__STATIC_CALL_MOD_ADDRESSABLE(preempt_schedule_notrace); \
 	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule_notrace) : ASM_CALL_CONSTRAINT); \
 } while (0)
 
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -93,7 +93,7 @@ DECLARE_STATIC_CALL(might_resched, __con
 
 static __always_inline void might_resched(void)
 {
-	static_call(might_resched)();
+	static_call_mod(might_resched)();
 }
 
 #else
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1880,7 +1880,7 @@ DECLARE_STATIC_CALL(cond_resched, __cond
 
 static __always_inline int _cond_resched(void)
 {
-	return static_call(cond_resched)();
+	return static_call_mod(cond_resched)();
 }
 
 #else
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -107,6 +107,10 @@ extern void arch_static_call_transform(v
 
 #define STATIC_CALL_TRAMP_ADDR(name) &STATIC_CALL_TRAMP(name)
 
+#define static_call_register(name) \
+	__static_call_register(&STATIC_CALL_KEY(name), \
+			       &STATIC_CALL_TRAMP(name))
+
 #else
 #define STATIC_CALL_TRAMP_ADDR(name) NULL
 #endif
@@ -138,6 +142,7 @@ struct static_call_key {
 	};
 };
 
+extern int __static_call_register(struct static_call_key *key, void *tramp);
 extern void __static_call_update(struct static_call_key *key, void *tramp, void *func);
 extern int static_call_mod_init(struct module *mod);
 extern int static_call_text_reserved(void *start, void *end);
@@ -162,6 +167,9 @@ extern long __static_call_return0(void);
 
 #define static_call_cond(name)	(void)__static_call(name)
 
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
@@ -194,6 +202,11 @@ struct static_call_key {
 
 #define static_call_cond(name)	(void)__static_call(name)
 
+static inline int __static_call_register(struct static_call_key *key, void *tramp)
+{
+	return 0;
+}
+
 static inline
 void __static_call_update(struct static_call_key *key, void *tramp, void *func)
 {
@@ -213,6 +226,9 @@ static inline long __static_call_return0
 	return 0;
 }
 
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -39,17 +39,39 @@ struct static_call_site {
 
 #ifdef CONFIG_HAVE_STATIC_CALL
 
+#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
+
+#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
 /*
  * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
  * the symbol table so that objtool can reference it when it generates the
  * .static_call_sites section.
  */
+#define __STATIC_CALL_ADDRESSABLE(name) \
+	__ADDRESSABLE(STATIC_CALL_KEY(name))
+
 #define __static_call(name)						\
 ({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
+	__STATIC_CALL_ADDRESSABLE(name);				\
+	__raw_static_call(name);					\
 })
 
+#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#define __STATIC_CALL_ADDRESSABLE(name)
+#define __static_call(name)	__raw_static_call(name)
+
+#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#ifdef MODULE
+#define __STATIC_CALL_MOD_ADDRESSABLE(name)
+#define static_call_mod(name)	__raw_static_call(name)
+#else
+#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_mod(name)	__static_call(name)
+#endif
+
 #define static_call(name)	__static_call(name)
 
 #else
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5268,7 +5268,7 @@ EXPORT_SYMBOL(preempt_schedule);
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
 DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
-EXPORT_STATIC_CALL(preempt_schedule);
+EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
 #endif
 
 
@@ -5326,7 +5326,7 @@ EXPORT_SYMBOL_GPL(preempt_schedule_notra
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
 DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
-EXPORT_STATIC_CALL(preempt_schedule_notrace);
+EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
 #endif
 
 #endif /* CONFIG_PREEMPTION */
@@ -6879,10 +6879,10 @@ EXPORT_SYMBOL(__cond_resched);
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
 DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
-EXPORT_STATIC_CALL(cond_resched);
+EXPORT_STATIC_CALL_TRAMP(cond_resched);
 
 DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
-EXPORT_STATIC_CALL(might_resched);
+EXPORT_STATIC_CALL_TRAMP(might_resched);
 #endif
 
 /*
@@ -8096,6 +8096,13 @@ void __init sched_init(void)
 
 	init_uclamp();
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+	static_call_register(cond_resched);
+	static_call_register(might_resched);
+	static_call_register(preempt_schedule);
+	static_call_register(preempt_schedule_notrace);
+#endif
+
 	scheduler_running = 1;
 }
 
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -323,10 +323,85 @@ static int __static_call_mod_text_reserv
 	return ret;
 }
 
+struct static_call_ass {
+	struct rb_node node;
+	struct static_call_key *key;
+	unsigned long tramp;
+};
+
+static struct rb_root static_call_asses;
+
+#define __node_2_ass(_n) \
+	rb_entry((_n), struct static_call_ass, node)
+
+static inline bool ass_less(struct rb_node *a, const struct rb_node *b)
+{
+	return __node_2_ass(a)->tramp < __node_2_ass(b)->tramp;
+}
+
+static inline int ass_cmp(const void *a, const struct rb_node *b)
+{
+	if (*(unsigned long *)a < __node_2_ass(b)->tramp)
+		return -1;
+
+	if (*(unsigned long *)a > __node_2_ass(b)->tramp)
+		return 1;
+
+	return 0;
+}
+
+int __static_call_register(struct static_call_key *key, void *tramp)
+{
+	struct static_call_ass *ass = kzalloc(sizeof(*ass), GFP_KERNEL);
+	if (!ass)
+		return -ENOMEM;
+
+	ass->key = key;
+	ass->tramp = (unsigned long)tramp;
+
+	/* trampolines should be aligned */
+	WARN_ON_ONCE(ass->tramp & STATIC_CALL_SITE_FLAGS);
+
+	rb_add(&ass->node, &static_call_asses, &ass_less);
+	return 0;
+}
+
+static struct static_call_ass *static_call_find_ass(unsigned long addr)
+{
+	struct rb_node *node = rb_find(&addr, &static_call_asses, &ass_cmp);
+	if (!node)
+		return NULL;
+	return __node_2_ass(node);
+}
+
 static int static_call_add_module(struct module *mod)
 {
-	return __static_call_init(mod, mod->static_call_sites,
-				  mod->static_call_sites + mod->num_static_call_sites);
+	struct static_call_site *start = mod->static_call_sites;
+	struct static_call_site *stop = start + mod->num_static_call_sites;
+	struct static_call_site *site;
+
+	for (site = start; site != stop; site++) {
+		unsigned long addr = (unsigned long)static_call_key(site);
+		struct static_call_ass *ass;
+
+		/*
+		 * Gotta fix up the keys that point to the trampoline.
+		 */
+		if (!kernel_text_address(addr))
+			continue;
+
+		ass = static_call_find_ass(addr);
+		if (!ass) {
+			pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
+				static_call_addr(site));
+			return -EINVAL;
+		}
+
+		site->key = ((unsigned long)ass->key - (unsigned long)&site->key) |
+			    (site->key & STATIC_CALL_SITE_FLAGS);
+	}
+
+	return __static_call_init(mod, start, stop);
 }
 
 static void static_call_del_module(struct module *mod)
@@ -392,6 +467,11 @@ static struct notifier_block static_call
 
 #else
 
+int __static_call_register(struct static_call_key *key, void *tramp)
+{
+	return 0;
+}
+
 static inline int __static_call_mod_text_reserved(void *start, void *end)
 {
 	return 0;
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -39,17 +39,39 @@ struct static_call_site {
 
 #ifdef CONFIG_HAVE_STATIC_CALL
 
+#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
+
+#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
 /*
  * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
  * the symbol table so that objtool can reference it when it generates the
  * .static_call_sites section.
  */
+#define __STATIC_CALL_ADDRESSABLE(name) \
+	__ADDRESSABLE(STATIC_CALL_KEY(name))
+
 #define __static_call(name)						\
 ({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
+	__STATIC_CALL_ADDRESSABLE(name);				\
+	__raw_static_call(name);					\
 })
 
+#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#define __STATIC_CALL_ADDRESSABLE(name)
+#define __static_call(name)	__raw_static_call(name)
+
+#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#ifdef MODULE
+#define __STATIC_CALL_MOD_ADDRESSABLE(name)
+#define static_call_mod(name)	__raw_static_call(name)
+#else
+#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_mod(name)	__static_call(name)
+#endif
+
 #define static_call(name)	__static_call(name)
 
 #else
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -502,8 +502,16 @@ static int create_static_call_sections(s
 
 		key_sym = find_symbol_by_name(file->elf, tmp);
 		if (!key_sym) {
-			WARN("static_call: can't find static_call_key symbol: %s", tmp);
-			return -1;
+			if (!module) {
+				WARN("static_call: can't find static_call_key symbol: %s", tmp);
+				return -1;
+			}
+			/*
+			 * For static_call_mod() we allow the key to be the
+			 * trampoline address. This is fixed up in
+			 * static_call_add_module().
+			 */
+			key_sym = insn->call_dest;
 		}
 		free(key_name);
 

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 4/8] preempt: Introduce CONFIG_PREEMPT_DYNAMIC
  2021-01-18 14:12 ` [RFC PATCH 4/8] preempt: Introduce CONFIG_PREEMPT_DYNAMIC Frederic Weisbecker
@ 2021-01-22 16:53   ` Peter Zijlstra
  2021-01-28 12:17     ` Frederic Weisbecker
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Michal Hocko
  2021-02-17 13:17   ` tip-bot2 for Michal Hocko
  2 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2021-01-22 16:53 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Michal Hocko, Mel Gorman, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko, ardb, jpoimboe

On Mon, Jan 18, 2021 at 03:12:19PM +0100, Frederic Weisbecker wrote:
> +config HAVE_PREEMPT_DYNAMIC
> +	bool
> +	depends on HAVE_STATIC_CALL_INLINE

I think we can relax this to HAVE_STATIC_CALL; using trampolines
shouldn't be too bad, and that would put it in reach of arm64.

> +	depends on GENERIC_ENTRY
> +	help
> +	   Select this if the architecture supports boot-time preempt setting
> +	   on top of static calls. It is strongly advised to support inline
> +	   static calls to avoid any overhead.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-22 16:52   ` Peter Zijlstra
@ 2021-01-22 16:57     ` Ard Biesheuvel
  2021-01-22 17:08       ` Peter Zijlstra
  2021-01-25 23:40     ` [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls Josh Poimboeuf
  2021-01-26 23:57     ` Josh Poimboeuf
  2 siblings, 1 reply; 61+ messages in thread
From: Ard Biesheuvel @ 2021-01-22 16:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	Josh Poimboeuf, Steven Rostedt (VMware),
	Jason Baron

On Fri, 22 Jan 2021 at 17:53, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Mon, Jan 18, 2021 at 03:12:21PM +0100, Frederic Weisbecker wrote:
> > +#ifdef CONFIG_PREEMPT_DYNAMIC
> > +DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
> > +EXPORT_STATIC_CALL(preempt_schedule);
> > +#endif
>
> > +#ifdef CONFIG_PREEMPT_DYNAMIC
> > +DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
> > +EXPORT_STATIC_CALL(preempt_schedule_notrace);
> > +#endif
>
> So one of the things I hate most about this is that it allows 'random'
> modules to hijack preemption by rewriting these callsites. Once you
> export the key, we've lost.
>

Are these supposed to be switchable at any time? Or only at boot? In
the latter case, can't we drop the associated data structure in
__ro_after_init so it becomes R/O when booting completes?
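
Something like a hypothetical DEFINE_STATIC_CALL_RO(), sketched against
the existing DEFINE_STATIC_CALL() (the reply below explains why this
cannot work: module loading keeps writing the key):

/*
 * Sketch only: same as DEFINE_STATIC_CALL(), but placing the key in
 * .data..ro_after_init so it becomes read-only once boot completes.
 */
#define DEFINE_STATIC_CALL_RO(name, _func)				\
	DECLARE_STATIC_CALL(name, _func);				\
	struct static_call_key STATIC_CALL_KEY(name) __ro_after_init = { \
		.func = _func,						\
	};								\
	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func)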

> I've tried a number of things, but this is the only one I could come up
> with that actually stands a chance against malicious modules (vbox and
> the like).
>
> It's somewhat elaborate, but afaict it actually works.
>
> ---
>
> --- a/arch/x86/include/asm/preempt.h
> +++ b/arch/x86/include/asm/preempt.h
> @@ -114,7 +114,7 @@ DECLARE_STATIC_CALL(preempt_schedule, __
>
>  #define __preempt_schedule() \
>  do { \
> -       __ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule)); \
> +       __STATIC_CALL_MOD_ADDRESSABLE(preempt_schedule); \
>         asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule) : ASM_CALL_CONSTRAINT); \
>  } while (0)
>
> @@ -127,7 +127,7 @@ DECLARE_STATIC_CALL(preempt_schedule_not
>
>  #define __preempt_schedule_notrace() \
>  do { \
> -       __ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule_notrace)); \
> +       __STATIC_CALL_MOD_ADDRESSABLE(preempt_schedule_notrace); \
>         asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule_notrace) : ASM_CALL_CONSTRAINT); \
>  } while (0)
>
> --- a/include/linux/kernel.h
> +++ b/include/linux/kernel.h
> @@ -93,7 +93,7 @@ DECLARE_STATIC_CALL(might_resched, __con
>
>  static __always_inline void might_resched(void)
>  {
> -       static_call(might_resched)();
> +       static_call_mod(might_resched)();
>  }
>
>  #else
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1880,7 +1880,7 @@ DECLARE_STATIC_CALL(cond_resched, __cond
>
>  static __always_inline int _cond_resched(void)
>  {
> -       return static_call(cond_resched)();
> +       return static_call_mod(cond_resched)();
>  }
>
>  #else
> --- a/include/linux/static_call.h
> +++ b/include/linux/static_call.h
> @@ -107,6 +107,10 @@ extern void arch_static_call_transform(v
>
>  #define STATIC_CALL_TRAMP_ADDR(name) &STATIC_CALL_TRAMP(name)
>
> +#define static_call_register(name) \
> +       __static_call_register(&STATIC_CALL_KEY(name), \
> +                              &STATIC_CALL_TRAMP(name))
> +
>  #else
>  #define STATIC_CALL_TRAMP_ADDR(name) NULL
>  #endif
> @@ -138,6 +142,7 @@ struct static_call_key {
>         };
>  };
>
> +extern int __static_call_register(struct static_call_key *key, void *tramp);
>  extern void __static_call_update(struct static_call_key *key, void *tramp, void *func);
>  extern int static_call_mod_init(struct module *mod);
>  extern int static_call_text_reserved(void *start, void *end);
> @@ -162,6 +167,9 @@ extern long __static_call_return0(void);
>
>  #define static_call_cond(name) (void)__static_call(name)
>
> +#define EXPORT_STATIC_CALL_TRAMP(name)                                 \
> +       EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
> +
>  #define EXPORT_STATIC_CALL(name)                                       \
>         EXPORT_SYMBOL(STATIC_CALL_KEY(name));                           \
>         EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
> @@ -194,6 +202,11 @@ struct static_call_key {
>
>  #define static_call_cond(name) (void)__static_call(name)
>
> +static inline int __static_call_register(struct static_call_key *key, void *tramp)
> +{
> +       return 0;
> +}
> +
>  static inline
>  void __static_call_update(struct static_call_key *key, void *tramp, void *func)
>  {
> @@ -213,6 +226,9 @@ static inline long __static_call_return0
>         return 0;
>  }
>
> +#define EXPORT_STATIC_CALL_TRAMP(name)                                 \
> +       EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
> +
>  #define EXPORT_STATIC_CALL(name)                                       \
>         EXPORT_SYMBOL(STATIC_CALL_KEY(name));                           \
>         EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
> --- a/include/linux/static_call_types.h
> +++ b/include/linux/static_call_types.h
> @@ -39,17 +39,39 @@ struct static_call_site {
>
>  #ifdef CONFIG_HAVE_STATIC_CALL
>
> +#define __raw_static_call(name)        (&STATIC_CALL_TRAMP(name))
> +
> +#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
> +
>  /*
>   * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
>   * the symbol table so that objtool can reference it when it generates the
>   * .static_call_sites section.
>   */
> +#define __STATIC_CALL_ADDRESSABLE(name) \
> +       __ADDRESSABLE(STATIC_CALL_KEY(name))
> +
>  #define __static_call(name)                                            \
>  ({                                                                     \
> -       __ADDRESSABLE(STATIC_CALL_KEY(name));                           \
> -       &STATIC_CALL_TRAMP(name);                                       \
> +       __STATIC_CALL_ADDRESSABLE(name);                                \
> +       __raw_static_call(name);                                        \
>  })
>
> +#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
> +
> +#define __STATIC_CALL_ADDRESSABLE(name)
> +#define __static_call(name)    __raw_static_call(name)
> +
> +#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
> +
> +#ifdef MODULE
> +#define __STATIC_CALL_MOD_ADDRESSABLE(name)
> +#define static_call_mod(name)  __raw_static_call(name)
> +#else
> +#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
> +#define static_call_mod(name)  __static_call(name)
> +#endif
> +
>  #define static_call(name)      __static_call(name)
>
>  #else
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5268,7 +5268,7 @@ EXPORT_SYMBOL(preempt_schedule);
>
>  #ifdef CONFIG_PREEMPT_DYNAMIC
>  DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
> -EXPORT_STATIC_CALL(preempt_schedule);
> +EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
>  #endif
>
>
> @@ -5326,7 +5326,7 @@ EXPORT_SYMBOL_GPL(preempt_schedule_notra
>
>  #ifdef CONFIG_PREEMPT_DYNAMIC
>  DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
> -EXPORT_STATIC_CALL(preempt_schedule_notrace);
> +EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
>  #endif
>
>  #endif /* CONFIG_PREEMPTION */
> @@ -6879,10 +6879,10 @@ EXPORT_SYMBOL(__cond_resched);
>
>  #ifdef CONFIG_PREEMPT_DYNAMIC
>  DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
> -EXPORT_STATIC_CALL(cond_resched);
> +EXPORT_STATIC_CALL_TRAMP(cond_resched);
>
>  DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
> -EXPORT_STATIC_CALL(might_resched);
> +EXPORT_STATIC_CALL_TRAMP(might_resched);
>  #endif
>
>  /*
> @@ -8096,6 +8096,13 @@ void __init sched_init(void)
>
>         init_uclamp();
>
> +#ifdef CONFIG_PREEMPT_DYNAMIC
> +       static_call_register(cond_resched);
> +       static_call_register(might_resched);
> +       static_call_register(preempt_schedule);
> +       static_call_register(preempt_schedule_notrace);
> +#endif
> +
>         scheduler_running = 1;
>  }
>
> --- a/kernel/static_call.c
> +++ b/kernel/static_call.c
> @@ -323,10 +323,85 @@ static int __static_call_mod_text_reserv
>         return ret;
>  }
>
> +struct static_call_ass {
> +       struct rb_node node;
> +       struct static_call_key *key;
> +       unsigned long tramp;
> +};
> +
> +static struct rb_root static_call_asses;
> +
> +#define __node_2_ass(_n) \
> +       rb_entry((_n), struct static_call_ass, node)
> +
> +static inline bool ass_less(struct rb_node *a, const struct rb_node *b)
> +{
> +       return __node_2_ass(a)->tramp < __node_2_ass(b)->tramp;
> +}
> +
> +static inline int ass_cmp(const void *a, const struct rb_node *b)
> +{
> +       if (*(unsigned long *)a < __node_2_ass(b)->tramp)
> +               return -1;
> +
> +       if (*(unsigned long *)a > __node_2_ass(b)->tramp)
> +               return 1;
> +
> +       return 0;
> +}
> +
> +int __static_call_register(struct static_call_key *key, void *tramp)
> +{
> +       struct static_call_ass *ass = kzalloc(sizeof(*ass), GFP_KERNEL);
> +       if (!ass)
> +               return -ENOMEM;
> +
> +       ass->key = key;
> +       ass->tramp = (unsigned long)tramp;
> +
> +       /* trampolines should be aligned */
> +       WARN_ON_ONCE(ass->tramp & STATIC_CALL_SITE_FLAGS);
> +
> +       rb_add(&ass->node, &static_call_asses, &ass_less);
> +       return 0;
> +}
> +
> +static struct static_call_ass *static_call_find_ass(unsigned long addr)
> +{
> +       struct rb_node *node = rb_find(&addr, &static_call_asses, &ass_cmp);
> +       if (!node)
> +               return NULL;
> +       return __node_2_ass(node);
> +}
> +
>  static int static_call_add_module(struct module *mod)
>  {
> -       return __static_call_init(mod, mod->static_call_sites,
> -                                 mod->static_call_sites + mod->num_static_call_sites);
> +       struct static_call_site *start = mod->static_call_sites;
> +       struct static_call_site *stop = start + mod->num_static_call_sites;
> +       struct static_call_site *site;
> +
> +       for (site = start; site != stop; site++) {
> +               unsigned long addr = (unsigned long)static_call_key(site);
> +               struct static_call_ass *ass;
> +
> +               /*
> +                * Gotta fix up the keys that point to the trampoline.
> +                */
> +               if (!kernel_text_address(addr))
> +                       continue;
> +
> +               ass = static_call_find_ass(addr);
> +               if (!ass) {
> +                       pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
> +                               static_call_addr(site));
> +                       return -EINVAL;
> +               }
> +
> +               site->key = ((unsigned long)ass->key - (unsigned long)&site->key) |
> +                           (site->key & STATIC_CALL_SITE_FLAGS);
> +       }
> +
> +       return __static_call_init(mod, start, stop);
>  }
>
>  static void static_call_del_module(struct module *mod)
> @@ -392,6 +467,11 @@ static struct notifier_block static_call
>
>  #else
>
> +int __static_call_register(struct static_call_key *key, void *tramp)
> +{
> +       return 0;
> +}
> +
>  static inline int __static_call_mod_text_reserved(void *start, void *end)
>  {
>         return 0;
> --- a/tools/include/linux/static_call_types.h
> +++ b/tools/include/linux/static_call_types.h
> @@ -39,17 +39,39 @@ struct static_call_site {
>
>  #ifdef CONFIG_HAVE_STATIC_CALL
>
> +#define __raw_static_call(name)        (&STATIC_CALL_TRAMP(name))
> +
> +#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
> +
>  /*
>   * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
>   * the symbol table so that objtool can reference it when it generates the
>   * .static_call_sites section.
>   */
> +#define __STATIC_CALL_ADDRESSABLE(name) \
> +       __ADDRESSABLE(STATIC_CALL_KEY(name))
> +
>  #define __static_call(name)                                            \
>  ({                                                                     \
> -       __ADDRESSABLE(STATIC_CALL_KEY(name));                           \
> -       &STATIC_CALL_TRAMP(name);                                       \
> +       __STATIC_CALL_ADDRESSABLE(name);                                \
> +       __raw_static_call(name);                                        \
>  })
>
> +#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
> +
> +#define __STATIC_CALL_ADDRESSABLE(name)
> +#define __static_call(name)    __raw_static_call(name)
> +
> +#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
> +
> +#ifdef MODULE
> +#define __STATIC_CALL_MOD_ADDRESSABLE(name)
> +#define static_call_mod(name)  __raw_static_call(name)
> +#else
> +#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
> +#define static_call_mod(name)  __static_call(name)
> +#endif
> +
>  #define static_call(name)      __static_call(name)
>
>  #else
> --- a/tools/objtool/check.c
> +++ b/tools/objtool/check.c
> @@ -502,8 +502,16 @@ static int create_static_call_sections(s
>
>                 key_sym = find_symbol_by_name(file->elf, tmp);
>                 if (!key_sym) {
> -                       WARN("static_call: can't find static_call_key symbol: %s", tmp);
> -                       return -1;
> +                       if (!module) {
> +                               WARN("static_call: can't find static_call_key symbol: %s", tmp);
> +                               return -1;
> +                       }
> +                       /*
> +                        * For static_call_mod() we allow the key to be the
> +                        * trampoline address. This is fixed up in
> +                        * static_call_add_module().
> +                        */
> +                       key_sym = insn->call_dest;
>                 }
>                 free(key_name);
>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-22 16:57     ` Ard Biesheuvel
@ 2021-01-22 17:08       ` Peter Zijlstra
  2021-02-08 12:00         ` [tip: sched/core] sched: Add /debug/sched_preempt tip-bot2 for Peter Zijlstra
                           ` (2 more replies)
  0 siblings, 3 replies; 61+ messages in thread
From: Peter Zijlstra @ 2021-01-22 17:08 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	Josh Poimboeuf, Steven Rostedt (VMware),
	Jason Baron

On Fri, Jan 22, 2021 at 05:57:53PM +0100, Ard Biesheuvel wrote:
> On Fri, 22 Jan 2021 at 17:53, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Mon, Jan 18, 2021 at 03:12:21PM +0100, Frederic Weisbecker wrote:
> > > +#ifdef CONFIG_PREEMPT_DYNAMIC
> > > +DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
> > > +EXPORT_STATIC_CALL(preempt_schedule);
> > > +#endif
> >
> > > +#ifdef CONFIG_PREEMPT_DYNAMIC
> > > +DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
> > > +EXPORT_STATIC_CALL(preempt_schedule_notrace);
> > > +#endif
> >
> > So one of the things I hate most about this is that it allows 'random'
> > modules to hijack preemption by rewriting these callsites. Once you
> > export the key, we've lost.
> >
> 
> Are these supposed to be switchable at any time? Or only at boot? In
> the latter case, can't we drop the associated data structure in
> __ro_after_init so it becomes R/O when booting completes?

Doesn't work, loading modules modifies the key -- we recently had
someone complain about that for jump_label.
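
To illustrate (field names per the upstream static_call implementation,
simplified): static_call_add_module() chains every module's call sites
into the key, so the key has to stay writable for as long as modules
can load.

struct static_call_mod {
	struct static_call_mod *next;
	struct module *mod;
	struct static_call_site *sites;
};

struct static_call_key {
	void *func;
	union {
		unsigned long type;
		struct static_call_mod *mods;	/* written on module load */
		struct static_call_site *sites;
	};
};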

And also, I have this patch...

---
Subject: sched: Add /debug/sched_preempt
From: Peter Zijlstra <peterz@infradead.org>
Date: Fri Jan 22 13:01:58 CET 2021

Add a debugfs file to muck about with the preempt mode at runtime.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/core.c |  135 ++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 126 insertions(+), 9 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5362,37 +5362,154 @@ EXPORT_STATIC_CALL(preempt_schedule_notr
  *   preempt_schedule_notrace   <- preempt_schedule_notrace
  *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
  */
-static int __init setup_preempt_mode(char *str)
+
+enum {
+	preempt_dynamic_none = 0,
+	preempt_dynamic_voluntary,
+	preempt_dynamic_full,
+};
+
+static int preempt_dynamic_mode = preempt_dynamic_full;
+
+static int sched_dynamic_mode(const char *str)
+{
+	if (!strcmp(str, "none"))
+		return preempt_dynamic_none;
+
+	if (!strcmp(str, "voluntary"))
+		return preempt_dynamic_voluntary;
+
+	if (!strcmp(str, "full"))
+		return preempt_dynamic_full;
+
+	return -EINVAL;
+}
+
+static void sched_dynamic_update(int mode)
 {
-	if (!strcmp(str, "none")) {
+	/*
+	 * Prevent {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
+	 * the ZERO state, which is invalid.
+	 */
+	static_call_update(cond_resched, __cond_resched);
+	static_call_update(might_resched, __cond_resched);
+	static_call_update(preempt_schedule, __preempt_schedule_func());
+	static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func());
+	static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+
+	switch (mode) {
+	case preempt_dynamic_none:
 		static_call_update(cond_resched, __cond_resched);
 		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
 		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
 		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
 		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
-		pr_info("Dynamic Preempt: %s\n", str);
-	} else if (!strcmp(str, "voluntary")) {
+		pr_info("Dynamic Preempt: none\n");
+		break;
+
+	case preempt_dynamic_voluntary:
 		static_call_update(cond_resched, __cond_resched);
 		static_call_update(might_resched, __cond_resched);
 		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
 		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
 		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
-		pr_info("Dynamic Preempt: %s\n", str);
-	} else if (!strcmp(str, "full")) {
+		pr_info("Dynamic Preempt: voluntary\n");
+		break;
+
+	case preempt_dynamic_full:
 		static_call_update(cond_resched, (typeof(&__cond_resched)) __static_call_return0);
 		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
 		static_call_update(preempt_schedule, __preempt_schedule_func());
 		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func());
 		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
-		pr_info("Dynamic Preempt: %s\n", str);
-	} else {
-		pr_warn("Dynamic Preempt: Unsupported preempt mode %s, default to full\n", str);
+		pr_info("Dynamic Preempt: full\n");
+		break;
+	}
+
+	preempt_dynamic_mode = mode;
+}
+
+static int __init setup_preempt_mode(char *str)
+{
+	int mode = sched_dynamic_mode(str);
+	if (mode < 0) {
+		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
 		return 1;
 	}
+
+	sched_dynamic_update(mode);
 	return 0;
 }
 __setup("preempt=", setup_preempt_mode);
 
+#ifdef CONFIG_SCHED_DEBUG
+
+static ssize_t sched_dynamic_write(struct file *filp, const char __user *ubuf,
+				   size_t cnt, loff_t *ppos)
+{
+	char buf[16];
+	int mode;
+
+	if (cnt > 15)
+		cnt = 15;
+
+	if (copy_from_user(&buf, ubuf, cnt))
+		return -EFAULT;
+
+	buf[cnt] = 0;
+	mode = sched_dynamic_mode(strstrip(buf));
+	if (mode < 0)
+		return mode;
+
+	sched_dynamic_update(mode);
+
+	*ppos += cnt;
+
+	return cnt;
+}
+
+static int sched_dynamic_show(struct seq_file *m, void *v)
+{
+	static const char * preempt_modes[] = {
+		"none", "voluntary", "full"
+	};
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(preempt_modes); i++) {
+		if (preempt_dynamic_mode == i)
+			seq_puts(m, "(");
+		seq_puts(m, preempt_modes[i]);
+		if (preempt_dynamic_mode == i)
+			seq_puts(m, ")");
+
+		seq_puts(m, " ");
+	}
+
+	seq_puts(m, "\n");
+	return 0;
+}
+
+static int sched_dynamic_open(struct inode *inode, struct file *filp)
+{
+	return single_open(filp, sched_dynamic_show, NULL);
+}
+
+static const struct file_operations sched_dynamic_fops = {
+	.open		= sched_dynamic_open,
+	.write		= sched_dynamic_write,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static __init int sched_init_debug_dynamic(void)
+{
+	debugfs_create_file("sched_preempt", 0644, NULL, NULL, &sched_dynamic_fops);
+	return 0;
+}
+late_initcall(sched_init_debug_dynamic);
+
+#endif /* CONFIG_SCHED_DEBUG */
 #endif /* CONFIG_PREEMPT_DYNAMIC */
 
 

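For reference, with this applied, reading the file shows the available
modes with the current one bracketed, something like:

	none voluntary (full)

and writing any of the three mode names switches preemption at runtime
via sched_dynamic_update().
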
^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-22 16:52   ` Peter Zijlstra
  2021-01-22 16:57     ` Ard Biesheuvel
@ 2021-01-25 23:40     ` Josh Poimboeuf
  2021-01-26  9:24       ` Peter Zijlstra
  2021-01-26 23:57     ` Josh Poimboeuf
  2 siblings, 1 reply; 61+ messages in thread
From: Josh Poimboeuf @ 2021-01-25 23:40 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Fri, Jan 22, 2021 at 05:52:26PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 18, 2021 at 03:12:21PM +0100, Frederic Weisbecker wrote:
> > +#ifdef CONFIG_PREEMPT_DYNAMIC
> > +DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
> > +EXPORT_STATIC_CALL(preempt_schedule);
> > +#endif
> 
> > +#ifdef CONFIG_PREEMPT_DYNAMIC
> > +DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
> > +EXPORT_STATIC_CALL(preempt_schedule_notrace);
> > +#endif
> 
> So one of the things I hate most about this is that it allows 'random'
> modules to hijack preemption by rewriting these callsites. Once you
> export the key, we've lost.
> 
> I've tried a number of things, but this is the only one I could come up
> with that actually stands a chance against malicious modules (vbox and
> the like).
> 
> It's somewhat elaborate, but afaict it actually works.

What about this hopefully abuse-proof idea, which has less code, less
complexity, no registration, no new data structures, and no CoC
defiance?

Add a writable-by-modules bit to the key struct, which can be set when
you define the key.  Enforce it in __static_call_update() with a call to
__builtin_return_address(0).  WARN when the caller's text isn't in the
kernel proper and the flag isn't set.
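
Roughly (allow_mod is a hypothetical new bit in struct static_call_key):

void __static_call_update(struct static_call_key *key, void *tramp, void *func)
{
	unsigned long caller = (unsigned long)__builtin_return_address(0);

	/* Only kernel-proper text may update keys that didn't opt in. */
	if (!key->allow_mod && !core_kernel_text(caller)) {
		WARN_ONCE(1, "static_call: update from module text denied\n");
		return;
	}

	/* ... existing update path ... */
}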

Hm?

-- 
Josh


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-25 23:40     ` [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls Josh Poimboeuf
@ 2021-01-26  9:24       ` Peter Zijlstra
  0 siblings, 0 replies; 61+ messages in thread
From: Peter Zijlstra @ 2021-01-26  9:24 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Mon, Jan 25, 2021 at 05:40:39PM -0600, Josh Poimboeuf wrote:
> On Fri, Jan 22, 2021 at 05:52:26PM +0100, Peter Zijlstra wrote:
> > On Mon, Jan 18, 2021 at 03:12:21PM +0100, Frederic Weisbecker wrote:
> > > +#ifdef CONFIG_PREEMPT_DYNAMIC
> > > +DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
> > > +EXPORT_STATIC_CALL(preempt_schedule);
> > > +#endif
> > 
> > > +#ifdef CONFIG_PREEMPT_DYNAMIC
> > > +DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
> > > +EXPORT_STATIC_CALL(preempt_schedule_notrace);
> > > +#endif
> > 
> > So one of the things I hate most about this is that it allows 'random'
> > modules to hijack preemption by rewriting these callsites. Once you
> > export the key, we've lost.
> > 
> > I've tried a number of things, but this is the only one I could come up
> > with that actually stands a chance against malicious modules (vbox and
> > the like).
> > 
> > It's somewhat elaborate, but afaict it actually works.
> 
> What about this hopefully abuse-proof idea, which has less code, less
> complexity, no registration, no new data structures, and no CoC
> defiance?
> 
> Add a writable-by-modules bit to the key struct, which can be set when
> you define the key.  Enforce it in __static_call_update() with a call to
> __builtin_return_address(0).  WARN when the caller's text isn't in the
> kernel proper and the flag isn't set.
> 
> Hm?

What stops a module from clearing said bit? It has the key pointer.
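
i.e. nothing prevents (reusing the hypothetical names from the sketches
above):

	/* The exported key pointer lets a module simply opt itself in: */
	STATIC_CALL_KEY(preempt_schedule).allow_mod = 1;
	static_call_update(preempt_schedule, hijack_preempt_schedule);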

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-22 16:52   ` Peter Zijlstra
  2021-01-22 16:57     ` Ard Biesheuvel
  2021-01-25 23:40     ` [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls Josh Poimboeuf
@ 2021-01-26 23:57     ` Josh Poimboeuf
  2021-01-27  9:13       ` Peter Zijlstra
  2 siblings, 1 reply; 61+ messages in thread
From: Josh Poimboeuf @ 2021-01-26 23:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Fri, Jan 22, 2021 at 05:52:26PM +0100, Peter Zijlstra wrote:
>  static int static_call_add_module(struct module *mod)
>  {
> -	return __static_call_init(mod, mod->static_call_sites,
> -				  mod->static_call_sites + mod->num_static_call_sites);
> +	struct static_call_site *start = mod->static_call_sites;
> +	struct static_call_site *stop = start + mod->num_static_call_sites;
> +	struct static_call_site *site;
> +
> +	for (site = start; site != stop; site++) {
> +		unsigned long addr = (unsigned long)static_call_key(site);
> +		struct static_call_ass *ass;
> +
> +		/*
> +		 * Gotta fix up the keys that point to the trampoline.
> +		 */
> +		if (!kernel_text_address(addr))
> +			continue;
> +
> +		ass = static_call_find_ass(addr);
> +		if (!ass) {
> +			pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
> +				static_call_addr(site));
> +			return -EINVAL;
> +		}
> +		site->key = ((unsigned long)ass->key - (unsigned long)&site->key) |
> +			    (site->key & STATIC_CALL_SITE_FLAGS);

Well, I hate it, but I'm not sure I have any better ideas.  It should be
possible to use kallsyms, instead of the rb-tree/register nonsense.  Not
sure about the performance impact though.  Might be a good reason to
speed up kallsyms!

Also I do have some naming suggestions ;-)

-- 
Josh


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-26 23:57     ` Josh Poimboeuf
@ 2021-01-27  9:13       ` Peter Zijlstra
  2021-01-27 11:27         ` Peter Zijlstra
  0 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2021-01-27  9:13 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Tue, Jan 26, 2021 at 05:57:30PM -0600, Josh Poimboeuf wrote:
> On Fri, Jan 22, 2021 at 05:52:26PM +0100, Peter Zijlstra wrote:
> >  static int static_call_add_module(struct module *mod)
> >  {
> > -	return __static_call_init(mod, mod->static_call_sites,
> > -				  mod->static_call_sites + mod->num_static_call_sites);
> > +	struct static_call_site *start = mod->static_call_sites;
> > +	struct static_call_site *stop = start + mod->num_static_call_sites;
> > +	struct static_call_site *site;
> > +
> > +	for (site = start; site != stop; site++) {
> > +		unsigned long addr = (unsigned long)static_call_key(site);
> > +		struct static_call_ass *ass;
> > +
> > +		/*
> > +		 * Gotta fix up the keys that point to the trampoline.
> > +		 */
> > +		if (!kernel_text_address(addr))
> > +			continue;
> > +
> > +		ass = static_call_find_ass(addr);
> > +		if (!ass) {
> > +			pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
> > +				static_call_addr(site));
> > +			return -EINVAL;
> > +		}
> > +		site->key = ((unsigned long)ass->key - (unsigned long)&site->key) |
> > +			    (site->key & STATIC_CALL_SITE_FLAGS);
> 
> Well, I hate it, but I'm not sure I have any better ideas.  It should be
> possible to use kallsyms, instead of the rb-tree/register nonsense.  Not
> sure about the performance impact though.  Might be a good reason to
> speed up kallsyms!

Oh right, let me see if I can make that work.

> Also I do have some naming suggestions ;-)

Nah, we need a little more fun back in the code :-)

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-27  9:13       ` Peter Zijlstra
@ 2021-01-27 11:27         ` Peter Zijlstra
  2021-01-27 15:59           ` Josh Poimboeuf
  0 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2021-01-27 11:27 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Wed, Jan 27, 2021 at 10:13:47AM +0100, Peter Zijlstra wrote:
> On Tue, Jan 26, 2021 at 05:57:30PM -0600, Josh Poimboeuf wrote:

> > Well, I hate it, but I'm not sure I have any better ideas.  It should be
> > possible to use kallsyms, instead of the rb-tree/register nonsense.  Not
> > sure about the performance impact though.  Might be a good reason to
> > speed up kallsyms!
> 
> Oh right, let me see if I can make that work.

Something like so compiles.. but it does make the whole thing depend on
KALLSYMS_ALL, which is somewhat yuck.

Also, kallsyms_lookup_name() is horrible, but not trivial to fix because
of that compression scheme used.
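
For reference, the lookup is a linear scan that has to decompress every
symbol name before it can compare it (kernel/kallsyms.c, lightly
simplified):

unsigned long kallsyms_lookup_name(const char *name)
{
	char namebuf[KSYM_NAME_LEN];
	unsigned long i;
	unsigned int off;

	/* Walk the compressed symbol table, expanding each name. */
	for (i = 0, off = 0; i < kallsyms_num_syms; i++) {
		off = kallsyms_expand_symbol(off, namebuf, ARRAY_SIZE(namebuf));
		if (strcmp(namebuf, name) == 0)
			return kallsyms_sym_address(i);
	}
	return module_kallsyms_lookup_name(name);
}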

---
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -8,6 +8,7 @@
 #include <linux/module.h>
 #include <linux/cpu.h>
 #include <linux/processor.h>
+#include <linux/kallsyms.h>
 #include <asm/sections.h>
 
 extern struct static_call_site __start_static_call_sites[],
@@ -325,8 +326,66 @@ static int __static_call_mod_text_reserv
 
 static int static_call_add_module(struct module *mod)
 {
-	return __static_call_init(mod, mod->static_call_sites,
-				  mod->static_call_sites + mod->num_static_call_sites);
+	struct static_call_site *start = mod->static_call_sites;
+	struct static_call_site *stop = start + mod->num_static_call_sites;
+	struct static_call_site *site;
+
+	struct {
+		unsigned long tramp;
+		unsigned long key;
+	} cache[8] = { { 0, 0}, };
+	int idx = 0;
+
+	for (site = start; site != stop; site++) {
+		unsigned long key, addr = (unsigned long)static_call_key(site);
+		unsigned long sym_size, sym_offset;
+		char sym_name[KSYM_NAME_LEN];
+		const char *name;
+		int i;
+
+		if (!kernel_text_address(addr))
+			continue;
+
+		/*
+		 * Gotta fix up the keys that point to the trampoline.
+		 */
+
+		/* Simple cache to avoid kallsyms */
+		for (i = 0; i < ARRAY_SIZE(cache); i++) {
+			if (cache[i].tramp == addr) {
+				key = cache[i].key;
+				goto got_key;
+			}
+		}
+
+		name = kallsyms_lookup(addr, &sym_size, &sym_offset, NULL, sym_name);
+		if (!name)
+			goto fail;
+
+		if (name != sym_name)
+			strcpy(sym_name, name);
+		memcpy(sym_name, STATIC_CALL_KEY_PREFIX_STR, STATIC_CALL_KEY_PREFIX_LEN);
+		key = kallsyms_lookup_name(sym_name);
+		if (!key)
+			goto fail;
+
+		/* Remember for next time.. */
+		cache[idx].tramp = addr;
+		cache[idx].key = key;
+		idx++;
+		idx %= ARRAY_SIZE(cache);
+
+got_key:
+		site->key = (key - (unsigned long)&site->key) |
+			    (site->key & STATIC_CALL_SITE_FLAGS);
+	}
+
+	return __static_call_init(mod, start, stop);
+
+fail:
+	pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
+		static_call_addr(site));
+	return -EINVAL;
 }
 
 static void static_call_del_module(struct module *mod)

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-27 11:27         ` Peter Zijlstra
@ 2021-01-27 15:59           ` Josh Poimboeuf
  2021-01-27 16:19             ` Peter Zijlstra
  0 siblings, 1 reply; 61+ messages in thread
From: Josh Poimboeuf @ 2021-01-27 15:59 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Wed, Jan 27, 2021 at 12:27:09PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 27, 2021 at 10:13:47AM +0100, Peter Zijlstra wrote:
> > On Tue, Jan 26, 2021 at 05:57:30PM -0600, Josh Poimboeuf wrote:
> 
> > > Well, I hate it, but I'm not sure I have any better ideas.  It should be
> > > possible to use kallsyms, instead of the rb-tree/register nonsense.  Not
> > > sure about the performance impact though.  Might be a good reason to
> > > speed up kallsyms!
> > 
> > Oh right, let me see if I can make that work.
> 
> Something like so compiles.. but it does make the whole thing depend on
> KALLSYMS_ALL, which is somewhat yuck.
> 
> Also, kallsyms_lookup_name() is horrible, but not trivial to fix because
> of that compression scheme used.

The KALLSYMS_ALL dependency doesn't bother me personally but I assume
some of the tinyconfig folks might not appreciate it being on
permanently.

Can DEFINE_STATIC_CALL() make the tramp-key association?

e.g. have DEFINE_STATIC_CALL() add an entry to .static_call_tramp_key
which has an array of 

struct static_call_tramp_key {
	unsigned int tramp;  // PC-relative pointer to tramp
	unsigned int key;    // PC-relative pointer to key
}

and then just scan that instead of kallsyms.
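
The module-load fixup would then be a short scan of that section rather
than a kallsyms lookup; a sketch, assuming the usual linker-generated
section bounds:

extern struct static_call_tramp_key __start_static_call_tramp_key[],
				    __stop_static_call_tramp_key[];

static struct static_call_key *tramp_key_lookup(unsigned long addr)
{
	struct static_call_tramp_key *tk;

	for (tk = __start_static_call_tramp_key;
	     tk != __stop_static_call_tramp_key; tk++) {
		/* Decode the PC-relative trampoline pointer. */
		unsigned long tramp = (unsigned long)&tk->tramp + tk->tramp;

		if (tramp == addr)
			return (void *)((unsigned long)&tk->key + tk->key);
	}
	return NULL;
}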

-- 
Josh


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-27 15:59           ` Josh Poimboeuf
@ 2021-01-27 16:19             ` Peter Zijlstra
  2021-01-27 16:33               ` Josh Poimboeuf
  0 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2021-01-27 16:19 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Wed, Jan 27, 2021 at 09:59:14AM -0600, Josh Poimboeuf wrote:
> On Wed, Jan 27, 2021 at 12:27:09PM +0100, Peter Zijlstra wrote:
> > On Wed, Jan 27, 2021 at 10:13:47AM +0100, Peter Zijlstra wrote:
> > > On Tue, Jan 26, 2021 at 05:57:30PM -0600, Josh Poimboeuf wrote:
> > 
> > > > Well, I hate it, but I'm not sure I have any better ideas.  It should be
> > > > possible to use kallsyms, instead of the rb-tree/register nonsense.  Not
> > > > sure about the performance impact though.  Might be a good reason to
> > > > speed up kallsyms!
> > > 
> > > Oh right, let me see if I can make that work.
> > 
> > Something like so compiles.. but it does make the whole thing depend on
> > KALLSYMS_ALL, which is somewhat yuck.
> > 
> > Also, kallsyms_lookup_name() is horrible, but not trivial to fix because
> > of that compression scheme used.
> 
> The KALLSYMS_ALL dependency doesn't bother me personally but I assume
> some of the tinyconfig folks might not appreciate it being on
> permanently.

I suppose this ought to cure the _ALL thing.

diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
index 7ecd2ccba531..83586cc4d954 100644
--- a/scripts/kallsyms.c
+++ b/scripts/kallsyms.c
@@ -260,6 +260,12 @@ static int symbol_valid(const struct sym_entry *s)
 {
 	const char *name = sym_name(s);
 
+	/*
+	 * Always emit __SCK__ symbols for static_call_add_module().
+	 */
+	if (!strncmp(name, "__SCK__", 7))
+		return 1;
+
 	/* if --all-symbols is not specified, then symbols outside the text
 	 * and inittext sections are discarded */
 	if (!all_symbols) {

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-27 16:19             ` Peter Zijlstra
@ 2021-01-27 16:33               ` Josh Poimboeuf
  2021-01-27 18:44                 ` Peter Zijlstra
  0 siblings, 1 reply; 61+ messages in thread
From: Josh Poimboeuf @ 2021-01-27 16:33 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Wed, Jan 27, 2021 at 05:19:02PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 27, 2021 at 09:59:14AM -0600, Josh Poimboeuf wrote:
> > On Wed, Jan 27, 2021 at 12:27:09PM +0100, Peter Zijlstra wrote:
> > > On Wed, Jan 27, 2021 at 10:13:47AM +0100, Peter Zijlstra wrote:
> > > > On Tue, Jan 26, 2021 at 05:57:30PM -0600, Josh Poimboeuf wrote:
> > > 
> > > > > Well, I hate it, but I'm not sure I have any better ideas.  It should be
> > > > > possible to use kallsyms, instead of the rb-tree/register nonsense.  Not
> > > > > sure about the performance impact though.  Might be a good reason to
> > > > > speed up kallsyms!
> > > > 
> > > > Oh right, let me see if I can make that work.
> > > 
> > > Something like so compiles.. but it does make the whole thing depend on
> > > KALLSYMS_ALL, which is somewhat yuck.
> > > 
> > > Also, kallsyms_lookup_name() is horrible, but not trivial to fix because
> > > of that compression scheme used.
> > 
> > The KALLSYMS_ALL dependency doesn't bother me personally but I assume
> > some of the tinyconfig folks might not appreciate it being on
> > permanently.
> 
> I suppose this ought to cure the _ALL thing.
> 
> diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
> index 7ecd2ccba531..83586cc4d954 100644
> --- a/scripts/kallsyms.c
> +++ b/scripts/kallsyms.c
> @@ -260,6 +260,12 @@ static int symbol_valid(const struct sym_entry *s)
>  {
>  	const char *name = sym_name(s);
>  
> +	/*
> +	 * Always emit __SCK__ symbols for static_call_add_module().
> +	 */
> +	if (!strncmp(name, "__SCK__", 7))
> +		return 1;
> +
>  	/* if --all-symbols is not specified, then symbols outside the text
>  	 * and inittext sections are discarded */
>  	if (!all_symbols) {

This made me LOL for some reason.  I like it though.

What did you think about .static_call_tramp_key?  I could whip up a
patch later unless you beat me to it.

-- 
Josh


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-27 16:33               ` Josh Poimboeuf
@ 2021-01-27 18:44                 ` Peter Zijlstra
  2021-01-27 19:00                   ` Josh Poimboeuf
  0 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2021-01-27 18:44 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Wed, Jan 27, 2021 at 10:33:08AM -0600, Josh Poimboeuf wrote:

> What did you think about .static_call_tramp_key?  I could whip up a
> patch later unless you beat me to it.

Yeah, I'm not sure.. why duplicate information already present in
kallsyms?

There's a fair number of features that already require KALLSYMS, I can't
really be bothered about adding one more (kprobes, function_tracer,
stack_tracer, ftrace_syscalls).

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-27 18:44                 ` Peter Zijlstra
@ 2021-01-27 19:00                   ` Josh Poimboeuf
  2021-01-27 19:02                     ` Josh Poimboeuf
  0 siblings, 1 reply; 61+ messages in thread
From: Josh Poimboeuf @ 2021-01-27 19:00 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Wed, Jan 27, 2021 at 07:44:01PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 27, 2021 at 10:33:08AM -0600, Josh Poimboeuf wrote:
> 
> > What did you think about .static_call_tramp_key?  I could whip up a
> > patch later unless you beat me to it.
> 
> Yeah, I'm not sure.. why duplicate information already present in
> kallsyms?

Well, but it's not exactly duplicating kallsyms.  No need to store
symbol names, just the pointer relationships.  And kallsyms is
presumably slow.

> There's a fair number of features that already require KALLSYMS, I can't
> really be bothered about adding one more (kprobes, function_tracer,
> stack_tracer, ftrace_syscalls).

Right, but I don't think they rely on KALLSYMS_ALL?

-- 
Josh


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-27 19:00                   ` Josh Poimboeuf
@ 2021-01-27 19:02                     ` Josh Poimboeuf
  2021-01-27 23:18                       ` Josh Poimboeuf
  0 siblings, 1 reply; 61+ messages in thread
From: Josh Poimboeuf @ 2021-01-27 19:02 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Wed, Jan 27, 2021 at 01:00:07PM -0600, Josh Poimboeuf wrote:
> On Wed, Jan 27, 2021 at 07:44:01PM +0100, Peter Zijlstra wrote:
> > On Wed, Jan 27, 2021 at 10:33:08AM -0600, Josh Poimboeuf wrote:
> > 
> > > What did you think about .static_call_tramp_key?  I could whip up a
> > > patch later unless you beat me to it.
> > 
> > Yeah, I'm not sure.. why duplicate information already present in
> > kallsyms?
> 
> Well, but it's not exactly duplicating kallsyms.  No need to store
> symbol names, just the pointer relationships.  And kallsyms is
> presumably slow.
> 
> > There's a fair number of features that already require KALLSYMS, I can't
> > really be bothered about adding one more (kprobes, function_tracer,
> > stack_tracer, ftrace_syscalls).
> 
> Right, but I don't think they rely on KALLSYMS_ALL?

Scratch that, I forgot about your last hack.  (That's what I get for
emailing during meetings.)

I mean - your patch is fine... let me just whip up the alternative and
we can compare.

-- 
Josh


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-27 19:02                     ` Josh Poimboeuf
@ 2021-01-27 23:18                       ` Josh Poimboeuf
  2021-02-03 14:04                         ` Peter Zijlstra
                                           ` (4 more replies)
  0 siblings, 5 replies; 61+ messages in thread
From: Josh Poimboeuf @ 2021-01-27 23:18 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Wed, Jan 27, 2021 at 01:02:18PM -0600, Josh Poimboeuf wrote:
> On Wed, Jan 27, 2021 at 01:00:07PM -0600, Josh Poimboeuf wrote:
> > On Wed, Jan 27, 2021 at 07:44:01PM +0100, Peter Zijlstra wrote:
> > > On Wed, Jan 27, 2021 at 10:33:08AM -0600, Josh Poimboeuf wrote:
> > > 
> > > > What did you think about .static_call_tramp_key?  I could whip up a
> > > > patch later unless you beat me to it.
> > > 
> > > Yeah, I'm not sure.. why duplicate information already present in
> > > kallsyms?
> > 
> > Well, but it's not exactly duplicating kallsyms.  No need to store
> > symbol names, just the pointer relationships.  And kallsyms is
> > presumably slow.
> > 
> > > There's a fair number of features that already require KALLSYMS, I can't
> > > really be bothered about adding one more (kprobes, function_tracer,
> > > stack_tracer, ftrace_syscalls).

Here ya go.  It builds...  And the tramp_key section is nice and small.

Relocation section [1497] '.rela.static_call_tramp_key' for section [1496] '.static_call_tramp_key' at offset 0x179ab818 contains 8 entries:
  Offset              Type            Value               Addend Name
  000000000000000000  X86_64_PC32     0x00000000000004c0      +0 __SCT__preempt_schedule
  0x0000000000000004  X86_64_PC32     0x000000000005ee10      +0 __SCK__preempt_schedule
  0x0000000000000008  X86_64_PC32     0x00000000000004c8      +0 __SCT__preempt_schedule_notrace
  0x000000000000000c  X86_64_PC32     0x000000000005ee00      +0 __SCK__preempt_schedule_notrace
  0x0000000000000010  X86_64_PC32     0x00000000000004d0      +0 __SCT__cond_resched
  0x0000000000000014  X86_64_PC32     0x000000000005dd20      +0 __SCK__cond_resched
  0x0000000000000018  X86_64_PC32     0x00000000000004d8      +0 __SCT__might_resched
  0x000000000000001c  X86_64_PC32     0x000000000005dd10      +0 __SCK__might_resched


diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h
index c37f11999d0c..cbb67b6030f9 100644
--- a/arch/x86/include/asm/static_call.h
+++ b/arch/x86/include/asm/static_call.h
@@ -37,4 +37,11 @@
 #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)			\
 	__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; nop; nop; nop; nop")
 
+
+#define ARCH_ADD_TRAMP_KEY(name)					\
+	asm(".pushsection .static_call_tramp_key, \"a\"		\n"	\
+	    ".long " STATIC_CALL_TRAMP_STR(name) " - .		\n"	\
+	    ".long " STATIC_CALL_KEY_STR(name) " - .		\n"	\
+	    ".popsection					\n")
+
 #endif /* _ASM_STATIC_CALL_H */
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index b2b3d81b1535..b0871e282c4f 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -393,7 +393,10 @@
 	. = ALIGN(8);							\
 	__start_static_call_sites = .;					\
 	KEEP(*(.static_call_sites))					\
-	__stop_static_call_sites = .;
+	__stop_static_call_sites = .;					\
+	__start_static_call_tramp_key = .;				\
+	KEEP(*(.static_call_tramp_key))					\
+	__stop_static_call_tramp_key = .;
 
 /*
  * Allow architectures to handle ro_after_init data on their
diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 077330874c60..16bcd5af3d35 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -138,6 +138,12 @@ struct static_call_key {
 	};
 };
 
+/* For finding the key associated with a trampoline */
+struct static_call_tramp_key {
+	s32 tramp;
+	s32 key;
+};
+
 extern void __static_call_update(struct static_call_key *key, void *tramp, void *func);
 extern int static_call_mod_init(struct module *mod);
 extern int static_call_text_reserved(void *start, void *end);
@@ -165,11 +171,18 @@ extern long __static_call_return0(void);
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
-
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
 
+/* Leave the key unexported, so modules can't change static call targets: */
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name));				\
+	ARCH_ADD_TRAMP_KEY(name)
+#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name));			\
+	ARCH_ADD_TRAMP_KEY(name)
+
 #elif defined(CONFIG_HAVE_STATIC_CALL)
 
 static inline int static_call_init(void) { return 0; }
@@ -216,11 +229,16 @@ static inline long __static_call_return0(void)
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
-
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
 
+/* Leave the key unexported, so modules can't change static call targets: */
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
+
 #else /* Generic implementation */
 
 static inline int static_call_init(void) { return 0; }
diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 08f78b1b88b4..ae5662d368b9 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -10,6 +10,7 @@
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
 #define STATIC_CALL_KEY_PREFIX_LEN	(sizeof(STATIC_CALL_KEY_PREFIX_STR) - 1)
 #define STATIC_CALL_KEY(name)		__PASTE(STATIC_CALL_KEY_PREFIX, name)
+#define STATIC_CALL_KEY_STR(name)	__stringify(STATIC_CALL_KEY(name))
 
 #define STATIC_CALL_TRAMP_PREFIX	__SCT__
 #define STATIC_CALL_TRAMP_PREFIX_STR	__stringify(STATIC_CALL_TRAMP_PREFIX)
@@ -39,17 +40,39 @@ struct static_call_site {
 
 #ifdef CONFIG_HAVE_STATIC_CALL
 
+#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
+
+#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
 /*
  * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
  * the symbol table so that objtool can reference it when it generates the
  * .static_call_sites section.
  */
+#define __STATIC_CALL_ADDRESSABLE(name) \
+	__ADDRESSABLE(STATIC_CALL_KEY(name))
+
 #define __static_call(name)						\
 ({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
+	__STATIC_CALL_ADDRESSABLE(name);				\
+	__raw_static_call(name);					\
 })
 
+#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#define __STATIC_CALL_ADDRESSABLE(name)
+#define __static_call(name)	__raw_static_call(name)
+
+#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#ifdef MODULE
+#define __STATIC_CALL_MOD_ADDRESSABLE(name)
+#define static_call_mod(name)	__raw_static_call(name)
+#else
+#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_mod(name)	__static_call(name)
+#endif
+
 #define static_call(name)	__static_call(name)
 
 #else
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 66129245b6a0..9f4564b89e9f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5268,7 +5268,7 @@ EXPORT_SYMBOL(preempt_schedule);
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
 DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
-EXPORT_STATIC_CALL(preempt_schedule);
+EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
 #endif
 
 
@@ -5326,7 +5326,7 @@ EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
 DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
-EXPORT_STATIC_CALL(preempt_schedule_notrace);
+EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
 #endif
 
 #endif /* CONFIG_PREEMPTION */
@@ -6993,10 +6993,10 @@ EXPORT_SYMBOL(__cond_resched);
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
 DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
-EXPORT_STATIC_CALL(cond_resched);
+EXPORT_STATIC_CALL_TRAMP(cond_resched);
 
 DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
-EXPORT_STATIC_CALL(might_resched);
+EXPORT_STATIC_CALL_TRAMP(might_resched);
 #endif
 
 /*
diff --git a/kernel/static_call.c b/kernel/static_call.c
index 0bc11b5ce681..5e6f567976c1 100644
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -12,6 +12,8 @@
 
 extern struct static_call_site __start_static_call_sites[],
 			       __stop_static_call_sites[];
+extern struct static_call_tramp_key __start_static_call_tramp_key[],
+				    __stop_static_call_tramp_key[];
 
 static bool static_call_initialized;
 
@@ -323,10 +325,59 @@ static int __static_call_mod_text_reserved(void *start, void *end)
 	return ret;
 }
 
+static struct static_call_tramp_key *tramp_key_lookup(unsigned long addr)
+{
+	struct static_call_tramp_key *start = __start_static_call_tramp_key;
+	struct static_call_tramp_key *stop = __stop_static_call_tramp_key;
+	struct static_call_tramp_key *tramp_key;
+
+	for (tramp_key = start; tramp_key != stop; tramp_key++) {
+		unsigned long tramp;
+
+		tramp = (long)tramp_key->tramp + (long)&tramp_key->tramp;
+		if (tramp == addr)
+			return tramp_key;
+	}
+
+	return NULL;
+}
+
 static int static_call_add_module(struct module *mod)
 {
-	return __static_call_init(mod, mod->static_call_sites,
-				  mod->static_call_sites + mod->num_static_call_sites);
+	struct static_call_site *start = mod->static_call_sites;
+	struct static_call_site *stop = start + mod->num_static_call_sites;
+	struct static_call_site *site;
+
+	for (site = start; site != stop; site++) {
+		unsigned long addr = (unsigned long)static_call_key(site);
+		struct static_call_tramp_key *tramp_key;
+
+		/*
+		 * If the key is exported, 'addr' points to the key, which
+		 * means modules are allowed to call static_call_update() on
+		 * it.
+		 *
+		 * Otherwise, the key isn't exported, and 'addr' points to the
+		 * trampoline so we need to lookup the key.
+		 *
+		 * We go through this dance to prevent crazy modules from
+		 * abusing sensitive static calls.
+		 */
+		if (!kernel_text_address(addr))
+			continue;
+
+		tramp_key = tramp_key_lookup(addr);
+		if (!tramp_key) {
+			pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
+				static_call_addr(site));
+			return -EINVAL;
+		}
+
+		site->key = ((long)tramp_key->key - (long)&tramp_key->key) |
+			    (site->key & STATIC_CALL_SITE_FLAGS);
+	}
+
+	return __static_call_init(mod, start, stop);
 }
 
 static void static_call_del_module(struct module *mod)
diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h
index 08f78b1b88b4..2a3afb6ebf49 100644
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -39,17 +39,39 @@ struct static_call_site {
 
 #ifdef CONFIG_HAVE_STATIC_CALL
 
+#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
+
+#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
 /*
  * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
  * the symbol table so that objtool can reference it when it generates the
  * .static_call_sites section.
  */
+#define __STATIC_CALL_ADDRESSABLE(name) \
+	__ADDRESSABLE(STATIC_CALL_KEY(name))
+
 #define __static_call(name)						\
 ({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
+	__STATIC_CALL_ADDRESSABLE(name);				\
+	__raw_static_call(name);					\
 })
 
+#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#define __STATIC_CALL_ADDRESSABLE(name)
+#define __static_call(name)	__raw_static_call(name)
+
+#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#ifdef MODULE
+#define __STATIC_CALL_MOD_ADDRESSABLE(name)
+#define static_call_mod(name)	__raw_static_call(name)
+#else
+#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_mod(name)	__static_call(name)
+#endif
+
 #define static_call(name)	__static_call(name)
 
 #else
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 4bd30315eb62..f2e5e5ce1a05 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -502,8 +502,21 @@ static int create_static_call_sections(struct objtool_file *file)
 
 		key_sym = find_symbol_by_name(file->elf, tmp);
 		if (!key_sym) {
-			WARN("static_call: can't find static_call_key symbol: %s", tmp);
-			return -1;
+			if (!module) {
+				WARN("static_call: can't find static_call_key symbol: %s", tmp);
+				return -1;
+			}
+
+			/*
+			 * For modules, the key might not be exported, which
+			 * means the module can make static calls but isn't
+			 * allowed to change them.
+			 *
+			 * In that case we temporarily set the key to be the
+			 * trampoline address.  This is fixed up in
+			 * static_call_add_module().
+			 */
+			key_sym = insn->call_dest;
 		}
 		free(key_name);
 


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 4/8] preempt: Introduce CONFIG_PREEMPT_DYNAMIC
  2021-01-22 16:53   ` Peter Zijlstra
@ 2021-01-28 12:17     ` Frederic Weisbecker
  0 siblings, 0 replies; 61+ messages in thread
From: Frederic Weisbecker @ 2021-01-28 12:17 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Michal Hocko, Mel Gorman, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko, ardb, jpoimboe

On Fri, Jan 22, 2021 at 05:53:43PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 18, 2021 at 03:12:19PM +0100, Frederic Weisbecker wrote:
> > +config HAVE_PREEMPT_DYNAMIC
> > +	bool
> > +	depends on HAVE_STATIC_CALL_INLINE
> 
> I think we can relax this to HAVE_STATIC_CALL, using trampolines
> shouldn't be too bad, and that would put it in reach of arm64.

Why not, but then I need to make CONFIG_PREEMPT_DYNAMIC optional
in order not to make the overhead mandatory for everyone.

> 
> > +	depends on GENERIC_ENTRY
> > +	help
> > +	   Select this if the architecture supports boot time preempt setting
> > +	   on top of static calls. It is strongly advised to support inline
> > +	   static calls to avoid any overhead.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-27 23:18                       ` Josh Poimboeuf
@ 2021-02-03 14:04                         ` Peter Zijlstra
  2021-02-05 15:30                           ` Peter Zijlstra
  2021-02-05 15:22                         ` Peter Zijlstra
                                           ` (3 subsequent siblings)
  4 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2021-02-03 14:04 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Wed, Jan 27, 2021 at 05:18:37PM -0600, Josh Poimboeuf wrote:
> On Wed, Jan 27, 2021 at 01:02:18PM -0600, Josh Poimboeuf wrote:
> > On Wed, Jan 27, 2021 at 01:00:07PM -0600, Josh Poimboeuf wrote:
> > > On Wed, Jan 27, 2021 at 07:44:01PM +0100, Peter Zijlstra wrote:
> > > > On Wed, Jan 27, 2021 at 10:33:08AM -0600, Josh Poimboeuf wrote:
> > > > 
> > > > > What did you think about .static_call_tramp_key?  I could whip up a
> > > > > patch later unless you beat me to it.
> > > > 
> > > > Yeah, I'm not sure.. why duplicate information already present in
> > > > kallsyms?
> > > 
> > > Well, but it's not exactly duplicating kallsyms.  No need to store
> > > symbol names, just the pointer relationships.  And kallsyms is
> > > presumably slow.
> > > 
> > > > There's a fair number of features that already require KALLSYMS, I can't
> > > > really be bothered about adding one more (kprobes, function_tracer,
> > > > stack_tracer, ftrace_syscalls).
> 
> Here ya go.  It builds...  And the tramp_key section is nice and small.
> 
> Relocation section [1497] '.rela.static_call_tramp_key' for section [1496] '.static_call_tramp_key' at offset 0x179ab818 contains 8 entries:
>   Offset              Type            Value               Addend Name
>   0x0000000000000000  X86_64_PC32     0x00000000000004c0      +0 __SCT__preempt_schedule
>   0x0000000000000004  X86_64_PC32     0x000000000005ee10      +0 __SCK__preempt_schedule
>   0x0000000000000008  X86_64_PC32     0x00000000000004c8      +0 __SCT__preempt_schedule_notrace
>   0x000000000000000c  X86_64_PC32     0x000000000005ee00      +0 __SCK__preempt_schedule_notrace
>   0x0000000000000010  X86_64_PC32     0x00000000000004d0      +0 __SCT__cond_resched
>   0x0000000000000014  X86_64_PC32     0x000000000005dd20      +0 __SCK__cond_resched
>   0x0000000000000018  X86_64_PC32     0x00000000000004d8      +0 __SCT__might_resched
>   0x000000000000001c  X86_64_PC32     0x000000000005dd10      +0 __SCK__might_resched
> 

Fair enough I suppose. I'll slap a changelog and your SoB on it and I
suppose I'll go commit the whole lot. Then we can forget about it
again.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-27 23:18                       ` Josh Poimboeuf
  2021-02-03 14:04                         ` Peter Zijlstra
@ 2021-02-05 15:22                         ` Peter Zijlstra
  2021-02-08 12:00                         ` [tip: sched/core] static_call: Allow module use without exposing static_call_key tip-bot2 for Josh Poimboeuf
                                           ` (2 subsequent siblings)
  4 siblings, 0 replies; 61+ messages in thread
From: Peter Zijlstra @ 2021-02-05 15:22 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Wed, Jan 27, 2021 at 05:18:37PM -0600, Josh Poimboeuf wrote:

> +static struct static_call_tramp_key *tramp_key_lookup(unsigned long addr)
> +{
> +	struct static_call_tramp_key *start = __start_static_call_tramp_key;
> +	struct static_call_tramp_key *stop = __stop_static_call_tramp_key;
> +	struct static_call_tramp_key *tramp_key;
> +
> +	for (tramp_key = start; tramp_key != stop; tramp_key++) {
> +		unsigned long tramp;
> +
> +		tramp = (long)tramp_key->tramp + (long)&tramp_key->tramp;
> +		if (tramp == addr)
> +			return tramp_key;
> +	}
> +
> +	return NULL;
> +}
> +
>  static int static_call_add_module(struct module *mod)
>  {
> -	return __static_call_init(mod, mod->static_call_sites,
> -				  mod->static_call_sites + mod->num_static_call_sites);
> +	struct static_call_site *start = mod->static_call_sites;
> +	struct static_call_site *stop = start + mod->num_static_call_sites;
> +	struct static_call_site *site;
> +
> +	for (site = start; site != stop; site++) {
> +		unsigned long addr = (unsigned long)static_call_key(site);
> +		struct static_call_tramp_key *tramp_key;
> +
> +		/*
> +		 * If the key is exported, 'addr' points to the key, which
> +		 * means modules are allowed to call static_call_update() on
> +		 * it.
> +		 *
> +		 * Otherwise, the key isn't exported, and 'addr' points to the
> +		 * trampoline so we need to lookup the key.
> +		 *
> +		 * We go through this dance to prevent crazy modules from
> +		 * abusing sensitive static calls.
> +		 */
> +		if (!kernel_text_address(addr))
> +			continue;
> +
> +		tramp_key = tramp_key_lookup(addr);
> +		if (!tramp_key) {
> +			pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
> +				static_call_addr(site));
> +			return -EINVAL;
> +		}
> +
> +		site->key = ((long)tramp_key->key - (long)&tramp_key->key) |
> +			    (site->key & STATIC_CALL_SITE_FLAGS);
> +	}
> +
> +	return __static_call_init(mod, start, stop);
>  }

I find it works better with this on top..

---
diff --git a/kernel/static_call.c b/kernel/static_call.c
index 5e6f567976c1..6906c6ec4c97 100644
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -325,7 +325,7 @@ static int __static_call_mod_text_reserved(void *start, void *end)
 	return ret;
 }
 
-static struct static_call_tramp_key *tramp_key_lookup(unsigned long addr)
+static unsigned long tramp_key_lookup(unsigned long addr)
 {
 	struct static_call_tramp_key *start = __start_static_call_tramp_key;
 	struct static_call_tramp_key *stop = __stop_static_call_tramp_key;
@@ -336,10 +336,10 @@ static struct static_call_tramp_key *tramp_key_lookup(unsigned long addr)
 
 		tramp = (long)tramp_key->tramp + (long)&tramp_key->tramp;
 		if (tramp == addr)
-			return tramp_key;
+			return (long)tramp_key->key + (long)&tramp_key->key;
 	}
 
-	return NULL;
+	return 0;
 }
 
 static int static_call_add_module(struct module *mod)
@@ -350,7 +350,7 @@ static int static_call_add_module(struct module *mod)
 
 	for (site = start; site != stop; site++) {
 		unsigned long addr = (unsigned long)static_call_key(site);
-		struct static_call_tramp_key *tramp_key;
+		unsigned long key;
 
 		/*
 		 * If the key is exported, 'addr' points to the key, which
@@ -366,14 +366,14 @@ static int static_call_add_module(struct module *mod)
 		if (!kernel_text_address(addr))
 			continue;
 
-		tramp_key = tramp_key_lookup(addr);
-		if (!tramp_key) {
+		key = tramp_key_lookup(addr);
+		if (!key) {
 			pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
 				static_call_addr(site));
 			return -EINVAL;
 		}
 
-		site->key = ((long)tramp_key->key - (long)&tramp_key->key) |
+		site->key = (key - (long)&site->key) |
 			    (site->key & STATIC_CALL_SITE_FLAGS);
 	}
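
The functional change is the last hunk: a static_call_site stores its key
as a PC-relative offset from the site's own key slot, so the rebased value
has to be computed against &site->key rather than against the tramp_key
record. As a worked example with made-up addresses: if &site->key is
0xffffffff81001004 and the key lives at 0xffffffff8105ee10, the stored
value must be 0x5de0c (plus the flag bits), which only comes out right
when the delta is taken from the site's own slot.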
 

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-02-03 14:04                         ` Peter Zijlstra
@ 2021-02-05 15:30                           ` Peter Zijlstra
  2021-02-06  2:31                             ` Josh Poimboeuf
  0 siblings, 1 reply; 61+ messages in thread
From: Peter Zijlstra @ 2021-02-05 15:30 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Wed, Feb 03, 2021 at 03:04:23PM +0100, Peter Zijlstra wrote:
> Fair enough I suppose. I'll slap a changelog and your SoB on it and I
> suppose I'll go commit the whole lot. Then we can forget about it
> again.

FWIW, the whole thing looks like this..

---
Subject: static_call: Allow module use without exposing static_call_key
From: Josh Poimboeuf <jpoimboe@redhat.com>
Date: Wed, 27 Jan 2021 17:18:37 -0600

From: Josh Poimboeuf <jpoimboe@redhat.com>

When exporting static_call_key with EXPORT_STATIC_CALL*(), the module
can use static_call_update() to change the function called.  This is
not desirable in general.

Not exporting static_call_key, however, also disallows usage of
static_call(), since objtool needs the key to construct the
static_call_site.

Solve this by allowing objtool to create the static_call_site using
the trampoline address when it builds a module and cannot find the
static_call_key symbol. The module loader will then try to map the
trampoline back to a key before it constructs the normal sites list.

Doing this requires a trampoline -> key association, so add another
magic section that keeps those.

Originally-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210127231837.ifddpn7rhwdaepiu@treble
---
 arch/x86/include/asm/static_call.h      |    7 ++++
 include/asm-generic/vmlinux.lds.h       |    5 ++
 include/linux/static_call.h             |   22 +++++++++++-
 include/linux/static_call_types.h       |   27 ++++++++++++++-
 kernel/static_call.c                    |   55 ++++++++++++++++++++++++++++++--
 tools/include/linux/static_call_types.h |   27 ++++++++++++++-
 tools/objtool/check.c                   |   17 ++++++++-
 7 files changed, 149 insertions(+), 11 deletions(-)

--- a/arch/x86/include/asm/static_call.h
+++ b/arch/x86/include/asm/static_call.h
@@ -37,4 +37,11 @@
 #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)			\
 	__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; nop; nop; nop; nop")
 
+
+#define ARCH_ADD_TRAMP_KEY(name)					\
+	asm(".pushsection .static_call_tramp_key, \"a\"		\n"	\
+	    ".long " STATIC_CALL_TRAMP_STR(name) " - .		\n"	\
+	    ".long " STATIC_CALL_KEY_STR(name) " - .		\n"	\
+	    ".popsection					\n")
+
 #endif /* _ASM_STATIC_CALL_H */
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -393,7 +393,10 @@
 	. = ALIGN(8);							\
 	__start_static_call_sites = .;					\
 	KEEP(*(.static_call_sites))					\
-	__stop_static_call_sites = .;
+	__stop_static_call_sites = .;					\
+	__start_static_call_tramp_key = .;				\
+	KEEP(*(.static_call_tramp_key))					\
+	__stop_static_call_tramp_key = .;
 
 /*
  * Allow architectures to handle ro_after_init data on their
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -138,6 +138,12 @@ struct static_call_key {
 	};
 };
 
+/* For finding the key associated with a trampoline */
+struct static_call_tramp_key {
+	s32 tramp;
+	s32 key;
+};
+
 extern void __static_call_update(struct static_call_key *key, void *tramp, void *func);
 extern int static_call_mod_init(struct module *mod);
 extern int static_call_text_reserved(void *start, void *end);
@@ -165,11 +171,18 @@ extern long __static_call_return0(void);
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
-
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
 
+/* Leave the key unexported, so modules can't change static call targets: */
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name));				\
+	ARCH_ADD_TRAMP_KEY(name)
+#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name));			\
+	ARCH_ADD_TRAMP_KEY(name)
+
 #elif defined(CONFIG_HAVE_STATIC_CALL)
 
 static inline int static_call_init(void) { return 0; }
@@ -216,11 +229,16 @@ static inline long __static_call_return0
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
-
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
 
+/* Leave the key unexported, so modules can't change static call targets: */
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
+
 #else /* Generic implementation */
 
 static inline int static_call_init(void) { return 0; }
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -10,6 +10,7 @@
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
 #define STATIC_CALL_KEY_PREFIX_LEN	(sizeof(STATIC_CALL_KEY_PREFIX_STR) - 1)
 #define STATIC_CALL_KEY(name)		__PASTE(STATIC_CALL_KEY_PREFIX, name)
+#define STATIC_CALL_KEY_STR(name)	__stringify(STATIC_CALL_KEY(name))
 
 #define STATIC_CALL_TRAMP_PREFIX	__SCT__
 #define STATIC_CALL_TRAMP_PREFIX_STR	__stringify(STATIC_CALL_TRAMP_PREFIX)
@@ -39,17 +40,39 @@ struct static_call_site {
 
 #ifdef CONFIG_HAVE_STATIC_CALL
 
+#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
+
+#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
 /*
  * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
  * the symbol table so that objtool can reference it when it generates the
  * .static_call_sites section.
  */
+#define __STATIC_CALL_ADDRESSABLE(name) \
+	__ADDRESSABLE(STATIC_CALL_KEY(name))
+
 #define __static_call(name)						\
 ({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
+	__STATIC_CALL_ADDRESSABLE(name);				\
+	__raw_static_call(name);					\
 })
 
+#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#define __STATIC_CALL_ADDRESSABLE(name)
+#define __static_call(name)	__raw_static_call(name)
+
+#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#ifdef MODULE
+#define __STATIC_CALL_MOD_ADDRESSABLE(name)
+#define static_call_mod(name)	__raw_static_call(name)
+#else
+#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_mod(name)	__static_call(name)
+#endif
+
 #define static_call(name)	__static_call(name)
 
 #else
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -12,6 +12,8 @@
 
 extern struct static_call_site __start_static_call_sites[],
 			       __stop_static_call_sites[];
+extern struct static_call_tramp_key __start_static_call_tramp_key[],
+				    __stop_static_call_tramp_key[];
 
 static bool static_call_initialized;
 
@@ -323,10 +325,59 @@ static int __static_call_mod_text_reserv
 	return ret;
 }
 
+static unsigned long tramp_key_lookup(unsigned long addr)
+{
+	struct static_call_tramp_key *start = __start_static_call_tramp_key;
+	struct static_call_tramp_key *stop = __stop_static_call_tramp_key;
+	struct static_call_tramp_key *tramp_key;
+
+	for (tramp_key = start; tramp_key != stop; tramp_key++) {
+		unsigned long tramp;
+
+		tramp = (long)tramp_key->tramp + (long)&tramp_key->tramp;
+		if (tramp == addr)
+			return (long)tramp_key->key + (long)&tramp_key->key;
+	}
+
+	return 0;
+}
+
 static int static_call_add_module(struct module *mod)
 {
-	return __static_call_init(mod, mod->static_call_sites,
-				  mod->static_call_sites + mod->num_static_call_sites);
+	struct static_call_site *start = mod->static_call_sites;
+	struct static_call_site *stop = start + mod->num_static_call_sites;
+	struct static_call_site *site;
+
+	for (site = start; site != stop; site++) {
+		unsigned long addr = (unsigned long)static_call_key(site);
+		unsigned long key;
+
+		/*
+		 * If the key is exported, 'addr' points to the key, which
+		 * means modules are allowed to call static_call_update() on
+		 * it.
+		 *
+		 * Otherwise, the key isn't exported, and 'addr' points to the
+		 * trampoline so we need to lookup the key.
+		 *
+		 * We go through this dance to prevent crazy modules from
+		 * abusing sensitive static calls.
+		 */
+		if (!kernel_text_address(addr))
+			continue;
+
+		key = tramp_key_lookup(addr);
+		if (!key) {
+			pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
+				static_call_addr(site));
+			return -EINVAL;
+		}
+
+		site->key = (key - (long)&site->key) |
+			    (site->key & STATIC_CALL_SITE_FLAGS);
+	}
+
+	return __static_call_init(mod, start, stop);
 }
 
 static void static_call_del_module(struct module *mod)
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -10,6 +10,7 @@
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
 #define STATIC_CALL_KEY_PREFIX_LEN	(sizeof(STATIC_CALL_KEY_PREFIX_STR) - 1)
 #define STATIC_CALL_KEY(name)		__PASTE(STATIC_CALL_KEY_PREFIX, name)
+#define STATIC_CALL_KEY_STR(name)	__stringify(STATIC_CALL_KEY(name))
 
 #define STATIC_CALL_TRAMP_PREFIX	__SCT__
 #define STATIC_CALL_TRAMP_PREFIX_STR	__stringify(STATIC_CALL_TRAMP_PREFIX)
@@ -39,17 +40,39 @@ struct static_call_site {
 
 #ifdef CONFIG_HAVE_STATIC_CALL
 
+#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
+
+#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
 /*
  * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
  * the symbol table so that objtool can reference it when it generates the
  * .static_call_sites section.
  */
+#define __STATIC_CALL_ADDRESSABLE(name) \
+	__ADDRESSABLE(STATIC_CALL_KEY(name))
+
 #define __static_call(name)						\
 ({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
+	__STATIC_CALL_ADDRESSABLE(name);				\
+	__raw_static_call(name);					\
 })
 
+#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#define __STATIC_CALL_ADDRESSABLE(name)
+#define __static_call(name)	__raw_static_call(name)
+
+#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#ifdef MODULE
+#define __STATIC_CALL_MOD_ADDRESSABLE(name)
+#define static_call_mod(name)	__raw_static_call(name)
+#else
+#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_mod(name)	__static_call(name)
+#endif
+
 #define static_call(name)	__static_call(name)
 
 #else
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -507,8 +507,21 @@ static int create_static_call_sections(s
 
 		key_sym = find_symbol_by_name(file->elf, tmp);
 		if (!key_sym) {
-			WARN("static_call: can't find static_call_key symbol: %s", tmp);
-			return -1;
+			if (!module) {
+				WARN("static_call: can't find static_call_key symbol: %s", tmp);
+				return -1;
+			}
+
+			/*
+			 * For modules, the key might not be exported, which
+			 * means the module can make static calls but isn't
+			 * allowed to change them.
+			 *
+			 * In that case we temporarily set the key to be the
+			 * trampoline address.  This is fixed up in
+			 * static_call_add_module().
+			 */
+			key_sym = insn->call_dest;
 		}
 		free(key_name);
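
To make the module-side semantics concrete, a hedged sketch (my_hook and
my_default are made-up names, not from this patch):

/* Exporter side (core kernel): define the call, export only the
 * trampoline; the key (__SCK__my_hook) stays private. */
static void my_default(void) { }
DEFINE_STATIC_CALL(my_hook, my_default);
EXPORT_STATIC_CALL_TRAMP_GPL(my_hook);

/* Module side: calling works, retargeting cannot link. */
static int __init my_mod_init(void)
{
	static_call(my_hook)();	/* fine: __SCT__my_hook is exported */
	/*
	 * static_call_update(my_hook, some_fn) would reference
	 * __SCK__my_hook, which is unexported, so a module doing
	 * that fails to load with an unresolved symbol.
	 */
	return 0;
}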
 

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-02-05 15:30                           ` Peter Zijlstra
@ 2021-02-06  2:31                             ` Josh Poimboeuf
  2021-02-06  9:03                               ` Peter Zijlstra
  0 siblings, 1 reply; 61+ messages in thread
From: Josh Poimboeuf @ 2021-02-06  2:31 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Fri, Feb 05, 2021 at 04:30:56PM +0100, Peter Zijlstra wrote:
> On Wed, Feb 03, 2021 at 03:04:23PM +0100, Peter Zijlstra wrote:
> > Fair enough I suppose. I'll slap a changelog and your SoB on it and I
> > > suppose I'll go commit the whole lot. Then we can forget about it
> > again.
> 
> FWIW, the whole thing looks like this..
> 
> ---
> Subject: static_call: Allow module use without exposing static_call_key
> From: Josh Poimboeuf <jpoimboe@redhat.com>
> Date: Wed, 27 Jan 2021 17:18:37 -0600
> 
> From: Josh Poimboeuf <jpoimboe@redhat.com>
> 
> > When exporting static_call_key with EXPORT_STATIC_CALL*(), the module
> can use static_call_update() to change the function called.  This is
> not desirable in general.

Looks good to me, thanks for fixing that up.  Never said I tested it ;-)

-- 
Josh


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-02-06  2:31                             ` Josh Poimboeuf
@ 2021-02-06  9:03                               ` Peter Zijlstra
  0 siblings, 0 replies; 61+ messages in thread
From: Peter Zijlstra @ 2021-02-06  9:03 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Frederic Weisbecker, LKML, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko,
	rostedt, jbaron, ardb

On Fri, Feb 05, 2021 at 08:31:22PM -0600, Josh Poimboeuf wrote:
> On Fri, Feb 05, 2021 at 04:30:56PM +0100, Peter Zijlstra wrote:
> > On Wed, Feb 03, 2021 at 03:04:23PM +0100, Peter Zijlstra wrote:
> > > Fair enough I suppose. I'll slap a changelog and your SoB on it and I
> > > > suppose I'll go commit the whole lot. Then we can forget about it
> > > again.
> > 
> > FWIW, the whole thing looks like this..
> > 
> > ---
> > Subject: static_call: Allow module use without exposing static_call_key
> > From: Josh Poimboeuf <jpoimboe@redhat.com>
> > Date: Wed, 27 Jan 2021 17:18:37 -0600
> > 
> > From: Josh Poimboeuf <jpoimboe@redhat.com>
> > 
> > When exporting static_call_key with EXPORT_STATIC_CALL*(), the module
> > can use static_call_update() to change the function called.  This is
> > not desirable in general.
> 
> Looks good to me, thanks for fixing that up.  Never said I tested it ;-)

Yeah, and I'm the genius that 'tested' it with a MODULES=n build :-)

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [tip: sched/core] sched: Add /debug/sched_preempt
  2021-01-22 17:08       ` Peter Zijlstra
@ 2021-02-08 12:00         ` tip-bot2 for Peter Zijlstra
  2021-02-09 15:45         ` tip-bot2 for Peter Zijlstra
  2021-02-17 13:17         ` tip-bot2 for Peter Zijlstra
  2 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra @ 2021-02-08 12:00 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Peter Zijlstra (Intel), x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     48dd9c490b9402ff7ca0a7ddecd486dba9450a44
Gitweb:        https://git.kernel.org/tip/48dd9c490b9402ff7ca0a7ddecd486dba9450a44
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Fri, 22 Jan 2021 13:01:58 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 05 Feb 2021 17:19:58 +01:00

sched: Add /debug/sched_preempt

Add a debugfs file to muck about with the preempt mode at runtime.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/YAsGiUYf6NyaTplX@hirez.programming.kicks-ass.net
---
 kernel/sched/core.c | 135 ++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 126 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0562992..c659266 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5348,37 +5348,154 @@ EXPORT_STATIC_CALL(preempt_schedule_notrace);
  *   preempt_schedule_notrace   <- preempt_schedule_notrace
  *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
  */
-static int __init setup_preempt_mode(char *str)
+
+enum {
+	preempt_dynamic_none = 0,
+	preempt_dynamic_voluntary,
+	preempt_dynamic_full,
+};
+
+static int preempt_dynamic_mode = preempt_dynamic_full;
+
+static int sched_dynamic_mode(const char *str)
 {
-	if (!strcmp(str, "none")) {
+	if (!strcmp(str, "none"))
+		return 0;
+
+	if (!strcmp(str, "voluntary"))
+		return 1;
+
+	if (!strcmp(str, "full"))
+		return 2;
+
+	return -1;
+}
+
+static void sched_dynamic_update(int mode)
+{
+	/*
+	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
+	 * the ZERO state, which is invalid.
+	 */
+	static_call_update(cond_resched, __cond_resched);
+	static_call_update(might_resched, __cond_resched);
+	static_call_update(preempt_schedule, __preempt_schedule_func);
+	static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+	static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+
+	switch (mode) {
+	case preempt_dynamic_none:
 		static_call_update(cond_resched, __cond_resched);
 		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
 		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
 		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
 		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
-		pr_info("Dynamic Preempt: %s\n", str);
-	} else if (!strcmp(str, "voluntary")) {
+		pr_info("Dynamic Preempt: none\n");
+		break;
+
+	case preempt_dynamic_voluntary:
 		static_call_update(cond_resched, __cond_resched);
 		static_call_update(might_resched, __cond_resched);
 		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
 		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
 		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
-		pr_info("Dynamic Preempt: %s\n", str);
-	} else if (!strcmp(str, "full")) {
+		pr_info("Dynamic Preempt: voluntary\n");
+		break;
+
+	case preempt_dynamic_full:
 		static_call_update(cond_resched, (typeof(&__cond_resched)) __static_call_return0);
 		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
 		static_call_update(preempt_schedule, __preempt_schedule_func);
 		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
 		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
-		pr_info("Dynamic Preempt: %s\n", str);
-	} else {
-		pr_warn("Dynamic Preempt: Unsupported preempt mode %s, default to full\n", str);
+		pr_info("Dynamic Preempt: full\n");
+		break;
+	}
+
+	preempt_dynamic_mode = mode;
+}
+
+static int __init setup_preempt_mode(char *str)
+{
+	int mode = sched_dynamic_mode(str);
+	if (mode < 0) {
+		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
 		return 1;
 	}
+
+	sched_dynamic_update(mode);
 	return 0;
 }
 __setup("preempt=", setup_preempt_mode);
 
+#ifdef CONFIG_SCHED_DEBUG
+
+static ssize_t sched_dynamic_write(struct file *filp, const char __user *ubuf,
+				   size_t cnt, loff_t *ppos)
+{
+	char buf[16];
+	int mode;
+
+	if (cnt > 15)
+		cnt = 15;
+
+	if (copy_from_user(&buf, ubuf, cnt))
+		return -EFAULT;
+
+	buf[cnt] = 0;
+	mode = sched_dynamic_mode(strstrip(buf));
+	if (mode < 0)
+		return mode;
+
+	sched_dynamic_update(mode);
+
+	*ppos += cnt;
+
+	return cnt;
+}
+
+static int sched_dynamic_show(struct seq_file *m, void *v)
+{
+	static const char * preempt_modes[] = {
+		"none", "voluntary", "full"
+	};
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(preempt_modes); i++) {
+		if (preempt_dynamic_mode == i)
+			seq_puts(m, "(");
+		seq_puts(m, preempt_modes[i]);
+		if (preempt_dynamic_mode == i)
+			seq_puts(m, ")");
+
+		seq_puts(m, " ");
+	}
+
+	seq_puts(m, "\n");
+	return 0;
+}
+
+static int sched_dynamic_open(struct inode *inode, struct file *filp)
+{
+	return single_open(filp, sched_dynamic_show, NULL);
+}
+
+static const struct file_operations sched_dynamic_fops = {
+	.open		= sched_dynamic_open,
+	.write		= sched_dynamic_write,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static __init int sched_init_debug_dynamic(void)
+{
+	debugfs_create_file("sched_preempt", 0644, NULL, NULL, &sched_dynamic_fops);
+	return 0;
+}
+late_initcall(sched_init_debug_dynamic);
+
+#endif /* CONFIG_SCHED_DEBUG */
 #endif /* CONFIG_PREEMPT_DYNAMIC */
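
A hedged usage sketch (debugfs assumed mounted at /sys/kernel/debug; the
parenthesised entry marks the current mode, per sched_dynamic_show()
above):

  # cat /sys/kernel/debug/sched_preempt
  none voluntary (full)
  # echo voluntary > /sys/kernel/debug/sched_preempt
  # cat /sys/kernel/debug/sched_preempt
  none (voluntary) full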
 
 

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] static_call: Allow module use without exposing static_call_key
  2021-01-27 23:18                       ` Josh Poimboeuf
  2021-02-03 14:04                         ` Peter Zijlstra
  2021-02-05 15:22                         ` Peter Zijlstra
@ 2021-02-08 12:00                         ` tip-bot2 for Josh Poimboeuf
  2021-02-09 15:45                         ` tip-bot2 for Josh Poimboeuf
  2021-02-17 13:17                         ` tip-bot2 for Josh Poimboeuf
  4 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2021-02-08 12:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel), Josh Poimboeuf, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     5e444f1cadde5cf3da2bd7a89631519806ac91f7
Gitweb:        https://git.kernel.org/tip/5e444f1cadde5cf3da2bd7a89631519806ac91f7
Author:        Josh Poimboeuf <jpoimboe@redhat.com>
AuthorDate:    Wed, 27 Jan 2021 17:18:37 -06:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 05 Feb 2021 17:19:59 +01:00

static_call: Allow module use without exposing static_call_key

When exporting static_call_key with EXPORT_STATIC_CALL*(), the module
can use static_call_update() to change the function called.  This is
not desirable in general.

Not exporting static_call_key, however, also disallows usage of
static_call(), since objtool needs the key to construct the
static_call_site.

Solve this by allowing objtool to create the static_call_site using
the trampoline address when it builds a module and cannot find the
static_call_key symbol. The module loader will then try to map the
trampoline back to a key before it constructs the normal sites list.

Doing this requires a trampoline -> key association, so add another
magic section that keeps those.

Originally-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210127231837.ifddpn7rhwdaepiu@treble
---
 arch/x86/include/asm/static_call.h      |  7 +++-
 include/asm-generic/vmlinux.lds.h       |  5 +-
 include/linux/static_call.h             | 22 +++++++++-
 include/linux/static_call_types.h       | 27 +++++++++++-
 kernel/static_call.c                    | 55 +++++++++++++++++++++++-
 tools/include/linux/static_call_types.h | 27 +++++++++++-
 tools/objtool/check.c                   | 17 ++++++-
 7 files changed, 149 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h
index c37f119..cbb67b6 100644
--- a/arch/x86/include/asm/static_call.h
+++ b/arch/x86/include/asm/static_call.h
@@ -37,4 +37,11 @@
 #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)			\
 	__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; nop; nop; nop; nop")
 
+
+#define ARCH_ADD_TRAMP_KEY(name)					\
+	asm(".pushsection .static_call_tramp_key, \"a\"		\n"	\
+	    ".long " STATIC_CALL_TRAMP_STR(name) " - .		\n"	\
+	    ".long " STATIC_CALL_KEY_STR(name) " - .		\n"	\
+	    ".popsection					\n")
+
 #endif /* _ASM_STATIC_CALL_H */
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index b2b3d81..b0871e2 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -393,7 +393,10 @@
 	. = ALIGN(8);							\
 	__start_static_call_sites = .;					\
 	KEEP(*(.static_call_sites))					\
-	__stop_static_call_sites = .;
+	__stop_static_call_sites = .;					\
+	__start_static_call_tramp_key = .;				\
+	KEEP(*(.static_call_tramp_key))					\
+	__stop_static_call_tramp_key = .;
 
 /*
  * Allow architectures to handle ro_after_init data on their
diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index d69dd8b..85ecc78 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -138,6 +138,12 @@ struct static_call_key {
 	};
 };
 
+/* For finding the key associated with a trampoline */
+struct static_call_tramp_key {
+	s32 tramp;
+	s32 key;
+};
+
 extern void __static_call_update(struct static_call_key *key, void *tramp, void *func);
 extern int static_call_mod_init(struct module *mod);
 extern int static_call_text_reserved(void *start, void *end);
@@ -165,11 +171,18 @@ extern long __static_call_return0(void);
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
-
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
 
+/* Leave the key unexported, so modules can't change static call targets: */
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name));				\
+	ARCH_ADD_TRAMP_KEY(name)
+#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name));			\
+	ARCH_ADD_TRAMP_KEY(name)
+
 #elif defined(CONFIG_HAVE_STATIC_CALL)
 
 static inline int static_call_init(void) { return 0; }
@@ -216,11 +229,16 @@ static inline long __static_call_return0(void)
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
-
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
 
+/* Leave the key unexported, so modules can't change static call targets: */
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
+
 #else /* Generic implementation */
 
 static inline int static_call_init(void) { return 0; }
diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 08f78b1..ae5662d 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -10,6 +10,7 @@
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
 #define STATIC_CALL_KEY_PREFIX_LEN	(sizeof(STATIC_CALL_KEY_PREFIX_STR) - 1)
 #define STATIC_CALL_KEY(name)		__PASTE(STATIC_CALL_KEY_PREFIX, name)
+#define STATIC_CALL_KEY_STR(name)	__stringify(STATIC_CALL_KEY(name))
 
 #define STATIC_CALL_TRAMP_PREFIX	__SCT__
 #define STATIC_CALL_TRAMP_PREFIX_STR	__stringify(STATIC_CALL_TRAMP_PREFIX)
@@ -39,17 +40,39 @@ struct static_call_site {
 
 #ifdef CONFIG_HAVE_STATIC_CALL
 
+#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
+
+#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
 /*
  * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
  * the symbol table so that objtool can reference it when it generates the
  * .static_call_sites section.
  */
+#define __STATIC_CALL_ADDRESSABLE(name) \
+	__ADDRESSABLE(STATIC_CALL_KEY(name))
+
 #define __static_call(name)						\
 ({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
+	__STATIC_CALL_ADDRESSABLE(name);				\
+	__raw_static_call(name);					\
 })
 
+#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#define __STATIC_CALL_ADDRESSABLE(name)
+#define __static_call(name)	__raw_static_call(name)
+
+#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#ifdef MODULE
+#define __STATIC_CALL_MOD_ADDRESSABLE(name)
+#define static_call_mod(name)	__raw_static_call(name)
+#else
+#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_mod(name)	__static_call(name)
+#endif
+
 #define static_call(name)	__static_call(name)
 
 #else
diff --git a/kernel/static_call.c b/kernel/static_call.c
index 0bc11b5..6906c6e 100644
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -12,6 +12,8 @@
 
 extern struct static_call_site __start_static_call_sites[],
 			       __stop_static_call_sites[];
+extern struct static_call_tramp_key __start_static_call_tramp_key[],
+				    __stop_static_call_tramp_key[];
 
 static bool static_call_initialized;
 
@@ -323,10 +325,59 @@ static int __static_call_mod_text_reserved(void *start, void *end)
 	return ret;
 }
 
+static unsigned long tramp_key_lookup(unsigned long addr)
+{
+	struct static_call_tramp_key *start = __start_static_call_tramp_key;
+	struct static_call_tramp_key *stop = __stop_static_call_tramp_key;
+	struct static_call_tramp_key *tramp_key;
+
+	for (tramp_key = start; tramp_key != stop; tramp_key++) {
+		unsigned long tramp;
+
+		tramp = (long)tramp_key->tramp + (long)&tramp_key->tramp;
+		if (tramp == addr)
+			return (long)tramp_key->key + (long)&tramp_key->key;
+	}
+
+	return 0;
+}
+
 static int static_call_add_module(struct module *mod)
 {
-	return __static_call_init(mod, mod->static_call_sites,
-				  mod->static_call_sites + mod->num_static_call_sites);
+	struct static_call_site *start = mod->static_call_sites;
+	struct static_call_site *stop = start + mod->num_static_call_sites;
+	struct static_call_site *site;
+
+	for (site = start; site != stop; site++) {
+		unsigned long addr = (unsigned long)static_call_key(site);
+		unsigned long key;
+
+		/*
+		 * If the key is exported, 'addr' points to the key, which
+		 * means modules are allowed to call static_call_update() on
+		 * it.
+		 *
+		 * Otherwise, the key isn't exported, and 'addr' points to the
+		 * trampoline so we need to lookup the key.
+		 *
+		 * We go through this dance to prevent crazy modules from
+		 * abusing sensitive static calls.
+		 */
+		if (!kernel_text_address(addr))
+			continue;
+
+		key = tramp_key_lookup(addr);
+		if (!key) {
+			pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
+				static_call_addr(site));
+			return -EINVAL;
+		}
+
+		site->key = (key - (long)&site->key) |
+			    (site->key & STATIC_CALL_SITE_FLAGS);
+	}
+
+	return __static_call_init(mod, start, stop);
 }
 
 static void static_call_del_module(struct module *mod)
diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h
index 08f78b1..ae5662d 100644
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -10,6 +10,7 @@
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
 #define STATIC_CALL_KEY_PREFIX_LEN	(sizeof(STATIC_CALL_KEY_PREFIX_STR) - 1)
 #define STATIC_CALL_KEY(name)		__PASTE(STATIC_CALL_KEY_PREFIX, name)
+#define STATIC_CALL_KEY_STR(name)	__stringify(STATIC_CALL_KEY(name))
 
 #define STATIC_CALL_TRAMP_PREFIX	__SCT__
 #define STATIC_CALL_TRAMP_PREFIX_STR	__stringify(STATIC_CALL_TRAMP_PREFIX)
@@ -39,17 +40,39 @@ struct static_call_site {
 
 #ifdef CONFIG_HAVE_STATIC_CALL
 
+#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
+
+#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
 /*
  * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
  * the symbol table so that objtool can reference it when it generates the
  * .static_call_sites section.
  */
+#define __STATIC_CALL_ADDRESSABLE(name) \
+	__ADDRESSABLE(STATIC_CALL_KEY(name))
+
 #define __static_call(name)						\
 ({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
+	__STATIC_CALL_ADDRESSABLE(name);				\
+	__raw_static_call(name);					\
 })
 
+#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#define __STATIC_CALL_ADDRESSABLE(name)
+#define __static_call(name)	__raw_static_call(name)
+
+#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#ifdef MODULE
+#define __STATIC_CALL_MOD_ADDRESSABLE(name)
+#define static_call_mod(name)	__raw_static_call(name)
+#else
+#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_mod(name)	__static_call(name)
+#endif
+
 #define static_call(name)	__static_call(name)
 
 #else
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 5f8d3ee..7bd96d6 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -502,8 +502,21 @@ static int create_static_call_sections(struct objtool_file *file)
 
 		key_sym = find_symbol_by_name(file->elf, tmp);
 		if (!key_sym) {
-			WARN("static_call: can't find static_call_key symbol: %s", tmp);
-			return -1;
+			if (!module) {
+				WARN("static_call: can't find static_call_key symbol: %s", tmp);
+				return -1;
+			}
+
+			/*
+			 * For modules, the key might not be exported, which
+			 * means the module can make static calls but isn't
+			 * allowed to change them.
+			 *
+			 * In that case we temporarily set the key to be the
+			 * trampoline address.  This is fixed up in
+			 * static_call_add_module().
+			 */
+			key_sym = insn->call_dest;
 		}
 		free(key_name);
 

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] preempt/dynamic: Support dynamic preempt with preempt= boot option
  2021-01-18 14:12 ` [RFC PATCH 8/8] preempt/dynamic: Support dynamic preempt with preempt= boot option Frederic Weisbecker
@ 2021-02-08 12:00   ` tip-bot2 for Peter Zijlstra (Intel)
  2021-02-09 15:45   ` tip-bot2 for Peter Zijlstra (Intel)
  2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
  2 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra (Intel) @ 2021-02-08 12:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel), Frederic Weisbecker, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     4f8a0041162ab74ba5f89ee58f8aad59c4c85d22
Gitweb:        https://git.kernel.org/tip/4f8a0041162ab74ba5f89ee58f8aad59c4c85d22
Author:        Peter Zijlstra (Intel) <peterz@infradead.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:23 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 05 Feb 2021 17:19:58 +01:00

preempt/dynamic: Support dynamic preempt with preempt= boot option

Support the preempt= boot option and patch the static call sites
accordingly.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-9-frederic@kernel.org
---
 kernel/sched/core.c | 67 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 66 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index cd0c46f..0562992 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -12,6 +12,7 @@
 
 #include "sched.h"
 
+#include <linux/entry-common.h>
 #include <linux/nospec.h>
 
 #include <linux/kcov.h>
@@ -5314,9 +5315,73 @@ DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func);
 EXPORT_STATIC_CALL(preempt_schedule_notrace);
 #endif
 
-
 #endif /* CONFIG_PREEMPTION */
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+
+/*
+ * SC:cond_resched
+ * SC:might_resched
+ * SC:preempt_schedule
+ * SC:preempt_schedule_notrace
+ * SC:irqentry_exit_cond_resched
+ *
+ *
+ * NONE:
+ *   cond_resched               <- __cond_resched
+ *   might_resched              <- RET0
+ *   preempt_schedule           <- NOP
+ *   preempt_schedule_notrace   <- NOP
+ *   irqentry_exit_cond_resched <- NOP
+ *
+ * VOLUNTARY:
+ *   cond_resched               <- __cond_resched
+ *   might_resched              <- __cond_resched
+ *   preempt_schedule           <- NOP
+ *   preempt_schedule_notrace   <- NOP
+ *   irqentry_exit_cond_resched <- NOP
+ *
+ * FULL:
+ *   cond_resched               <- RET0
+ *   might_resched              <- RET0
+ *   preempt_schedule           <- preempt_schedule
+ *   preempt_schedule_notrace   <- preempt_schedule_notrace
+ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
+ */
+static int __init setup_preempt_mode(char *str)
+{
+	if (!strcmp(str, "none")) {
+		static_call_update(cond_resched, __cond_resched);
+		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
+		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
+		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
+		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else if (!strcmp(str, "voluntary")) {
+		static_call_update(cond_resched, __cond_resched);
+		static_call_update(might_resched, __cond_resched);
+		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
+		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
+		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else if (!strcmp(str, "full")) {
+		static_call_update(cond_resched, (typeof(&__cond_resched)) __static_call_return0);
+		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
+		static_call_update(preempt_schedule, __preempt_schedule_func);
+		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else {
+		pr_warn("Dynamic Preempt: Unsupported preempt mode %s, default to full\n", str);
+		return 1;
+	}
+	return 0;
+}
+__setup("preempt=", setup_preempt_mode);
+
+#endif /* CONFIG_PREEMPT_DYNAMIC */
+
+
 /*
  * This is the entry point to schedule() from kernel preemption
  * off of irq context.

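As an aside, the mode table above can be modelled in a few lines of plain
user-space C, with function pointers standing in for the static calls; a
minimal sketch, all names hypothetical (the kernel instead patches every
call site in place via static_call_update(), and its return-0 stub,
__static_call_return0, returns long and relies on %rax being the return
register):

    #include <stdio.h>
    #include <string.h>

    static int cond_resched_real(void) { puts("resched"); return 1; }
    static int return0(void) { return 0; }   /* models __static_call_return0 */
    static void preempt_real(void) { puts("preempt"); }
    static void nop(void) { }                /* models the NULL/NOP target */

    static int  (*sc_cond_resched)(void)  = cond_resched_real;
    static int  (*sc_might_resched)(void) = cond_resched_real;
    static void (*sc_preempt)(void)       = preempt_real;

    static int setup_mode(const char *str)
    {
            if (!strcmp(str, "none")) {
                    sc_cond_resched  = cond_resched_real;
                    sc_might_resched = return0;
                    sc_preempt       = nop;
            } else if (!strcmp(str, "voluntary")) {
                    sc_cond_resched  = cond_resched_real;
                    sc_might_resched = cond_resched_real;
                    sc_preempt       = nop;
            } else if (!strcmp(str, "full")) {
                    sc_cond_resched  = return0;
                    sc_might_resched = return0;
                    sc_preempt       = preempt_real;
            } else {
                    return -1;
            }
            return 0;
    }

    int main(void)
    {
            setup_mode("none");
            sc_preempt();                    /* NOP in "none" mode */
            sc_might_resched();              /* RET0 in "none" mode */
            return !sc_cond_resched();       /* the real __cond_resched stand-in */
    }
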
^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-18 14:12 ` [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() " Frederic Weisbecker
  2021-01-21 21:58   ` Peter Zijlstra
  2021-01-22 16:52   ` Peter Zijlstra
@ 2021-02-08 12:00   ` tip-bot2 for Peter Zijlstra (Intel)
  2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
  3 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra (Intel) @ 2021-02-08 12:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel), Frederic Weisbecker, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     8c98e8cf723c3ab2ac924b0942dd3b8074f874e5
Gitweb:        https://git.kernel.org/tip/8c98e8cf723c3ab2ac924b0942dd3b8074f874e5
Author:        Peter Zijlstra (Intel) <peterz@infradead.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:21 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 05 Feb 2021 17:19:57 +01:00

preempt/dynamic: Provide preempt_schedule[_notrace]() static calls

Provide static calls to control preempt_schedule[_notrace]()
(called under CONFIG_PREEMPT) so that their behaviour can be overridden
when the preempt= boot option is passed.

Since the default behaviour is full preemption, both calls are
initialized to the arch-provided wrapper, if any.

[fweisbec: only define static calls when PREEMPT_DYNAMIC, make it less
           dependent on x86 with __preempt_schedule_func]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-7-frederic@kernel.org
---
 arch/x86/include/asm/preempt.h | 34 +++++++++++++++++++++++++--------
 kernel/sched/core.c            | 12 ++++++++++++-
 2 files changed, 38 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 69485ca..9b12dce 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -5,6 +5,7 @@
 #include <asm/rmwcc.h>
 #include <asm/percpu.h>
 #include <linux/thread_info.h>
+#include <linux/static_call_types.h>
 
 DECLARE_PER_CPU(int, __preempt_count);
 
@@ -103,16 +104,33 @@ static __always_inline bool should_resched(int preempt_offset)
 }
 
 #ifdef CONFIG_PREEMPTION
-  extern asmlinkage void preempt_schedule_thunk(void);
-# define __preempt_schedule() \
-	asm volatile ("call preempt_schedule_thunk" : ASM_CALL_CONSTRAINT)
 
-  extern asmlinkage void preempt_schedule(void);
-  extern asmlinkage void preempt_schedule_notrace_thunk(void);
-# define __preempt_schedule_notrace() \
-	asm volatile ("call preempt_schedule_notrace_thunk" : ASM_CALL_CONSTRAINT)
+extern asmlinkage void preempt_schedule(void);
+extern asmlinkage void preempt_schedule_thunk(void);
+
+#define __preempt_schedule_func preempt_schedule_thunk
+
+DECLARE_STATIC_CALL(preempt_schedule, __preempt_schedule_func);
+
+#define __preempt_schedule() \
+do { \
+	__ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule)); \
+	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule) : ASM_CALL_CONSTRAINT); \
+} while (0)
+
+extern asmlinkage void preempt_schedule_notrace(void);
+extern asmlinkage void preempt_schedule_notrace_thunk(void);
+
+#define __preempt_schedule_notrace_func preempt_schedule_notrace_thunk
+
+DECLARE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+
+#define __preempt_schedule_notrace() \
+do { \
+	__ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule_notrace)); \
+	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule_notrace) : ASM_CALL_CONSTRAINT); \
+} while (0)
 
-  extern asmlinkage void preempt_schedule_notrace(void);
 #endif
 
 #endif /* __ASM_PREEMPT_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bc7b00b..cd0c46f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5251,6 +5251,12 @@ asmlinkage __visible void __sched notrace preempt_schedule(void)
 NOKPROBE_SYMBOL(preempt_schedule);
 EXPORT_SYMBOL(preempt_schedule);
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func);
+EXPORT_STATIC_CALL(preempt_schedule);
+#endif
+
+
 /**
  * preempt_schedule_notrace - preempt_schedule called by tracing
  *
@@ -5303,6 +5309,12 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
 }
 EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+EXPORT_STATIC_CALL(preempt_schedule_notrace);
+#endif
+
+
 #endif /* CONFIG_PREEMPTION */
 
 /*

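For reference, a minimal usage sketch of the static_call API this builds
on; my_call and the two target functions are hypothetical:

    #include <linux/static_call.h>

    static void target_a(int x) { (void)x; }
    static void target_b(int x) { (void)x; }

    DEFINE_STATIC_CALL(my_call, target_a);

    static void example(void)
    {
            static_call(my_call)(1);             /* direct call to target_a */
            static_call_update(my_call, target_b);
            static_call(my_call)(2);             /* now calls target_b */
    }

The __preempt_schedule() macros above open-code the call in asm instead,
emitting a direct call to the trampoline and keeping the key addressable
for objtool, so the call can be issued from the preempt_enable() paths.
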
^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] preempt: Introduce CONFIG_PREEMPT_DYNAMIC
  2021-01-18 14:12 ` [RFC PATCH 4/8] preempt: Introduce CONFIG_PREEMPT_DYNAMIC Frederic Weisbecker
  2021-01-22 16:53   ` Peter Zijlstra
@ 2021-02-08 12:00   ` tip-bot2 for Michal Hocko
  2021-02-17 13:17   ` tip-bot2 for Michal Hocko
  2 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Michal Hocko @ 2021-02-08 12:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra, Frederic Weisbecker, Michal Hocko, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     5759bcdb871f7f73b033643cd27d6cec33280540
Gitweb:        https://git.kernel.org/tip/5759bcdb871f7f73b033643cd27d6cec33280540
Author:        Michal Hocko <mhocko@kernel.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:19 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 05 Feb 2021 17:19:56 +01:00

preempt: Introduce CONFIG_PREEMPT_DYNAMIC

Preemption mode selection is currently hardcoded into Kconfig choices.
Introduce a dedicated option to tune the preemption flavour at boot time.

This will only be available on architectures that support static calls
efficiently, so that the feature does not carry additional overhead
that might be prohibitive or undesirable.

CONFIG_PREEMPT_DYNAMIC is automatically selected by CONFIG_PREEMPT if
the architecture provides the necessary support (CONFIG_STATIC_CALL_INLINE,
CONFIG_GENERIC_ENTRY, and the __preempt_schedule_function() /
__preempt_schedule_notrace_function() wrappers).

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
[peterz: relax requirement to HAVE_STATIC_CALL]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-5-frederic@kernel.org
---
 Documentation/admin-guide/kernel-parameters.txt |  7 ++++++-
 arch/Kconfig                                    |  9 ++++++++-
 arch/x86/Kconfig                                |  1 +-
 kernel/Kconfig.preempt                          | 19 ++++++++++++++++-
 4 files changed, 36 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index c722ec1..e854863 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3916,6 +3916,13 @@
 			Format: {"off"}
 			Disable Hardware Transactional Memory
 
+	preempt=	[KNL]
+			Select the preemption mode if CONFIG_PREEMPT_DYNAMIC is set.
+			none - Limited to cond_resched() calls
+			voluntary - Limited to cond_resched() and might_sleep() calls
+			full - Any section that isn't explicitly preempt-disabled
+			       can be preempted at any time.
+
 	print-fatal-signals=
 			[KNL] debug: print fatal signals
 
diff --git a/arch/Kconfig b/arch/Kconfig
index 78c6f05..94d3dde 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1090,6 +1090,15 @@ config HAVE_STATIC_CALL_INLINE
 	bool
 	depends on HAVE_STATIC_CALL
 
+config HAVE_PREEMPT_DYNAMIC
+	bool
+	depends on HAVE_STATIC_CALL
+	depends on GENERIC_ENTRY
+	help
+	   Select this if the architecture supports boot time preempt setting
+	   on top of static calls. It is strongly advised to support inline
+	   static calls to avoid any overhead.
+
 config ARCH_WANT_LD_ORPHAN_WARN
 	bool
 	help
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 7b6dd10..d021da9 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -223,6 +223,7 @@ config X86
 	select HAVE_STACK_VALIDATION		if X86_64
 	select HAVE_STATIC_CALL
 	select HAVE_STATIC_CALL_INLINE		if HAVE_STACK_VALIDATION
+	select HAVE_PREEMPT_DYNAMIC
 	select HAVE_RSEQ
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UNSTABLE_SCHED_CLOCK
diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
index bf82259..4160173 100644
--- a/kernel/Kconfig.preempt
+++ b/kernel/Kconfig.preempt
@@ -40,6 +40,7 @@ config PREEMPT
 	depends on !ARCH_NO_PREEMPT
 	select PREEMPTION
 	select UNINLINE_SPIN_UNLOCK if !ARCH_INLINE_SPIN_UNLOCK
+	select PREEMPT_DYNAMIC if HAVE_PREEMPT_DYNAMIC
 	help
 	  This option reduces the latency of the kernel by making
 	  all kernel code (that is not executing in a critical section)
@@ -80,3 +81,21 @@ config PREEMPT_COUNT
 config PREEMPTION
        bool
        select PREEMPT_COUNT
+
+config PREEMPT_DYNAMIC
+	bool
+	help
+	  This option allows the preemption model to be selected on the
+	  kernel command line, thus overriding the default preemption
+	  model chosen at compile time.
+
+	  The feature is primarily interesting for Linux distributions which
+	  provide a pre-built kernel binary, as it reduces the number of
+	  kernel flavors they offer while still covering different use cases.
+
+	  The runtime overhead is negligible with HAVE_STATIC_CALL_INLINE enabled,
+	  but if runtime patching is not available for the specific architecture
+	  then the potential overhead should be considered.
+
+	  Interesting if you want the same pre-built kernel to be used for
+	  both server and desktop workloads.

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] preempt/dynamic: Provide cond_resched() and might_resched() static calls
  2021-01-18 14:12 ` [RFC PATCH 5/8] preempt/dynamic: Provide cond_resched() and might_resched() static calls Frederic Weisbecker
@ 2021-02-08 12:00   ` tip-bot2 for Peter Zijlstra (Intel)
  2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
  1 sibling, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra (Intel) @ 2021-02-08 12:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel), Frederic Weisbecker, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     bf3054bb801cf566e65e5f3d060435dbfa4a2f36
Gitweb:        https://git.kernel.org/tip/bf3054bb801cf566e65e5f3d060435dbfa4a2f36
Author:        Peter Zijlstra (Intel) <peterz@infradead.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:20 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 05 Feb 2021 17:19:56 +01:00

preempt/dynamic: Provide cond_resched() and might_resched() static calls

Provide static calls to control cond_resched() (called in !CONFIG_PREEMPT)
and might_resched() (called in CONFIG_PREEMPT_VOLUNTARY) so that their
behaviour can be overridden when the preempt= boot option is passed.

Since the default behaviour is full preemption, both calls are
ignored when preempt= isn't passed.

[fweisbec: branch might_resched() directly to __cond_resched(), only
           define static calls when PREEMPT_DYNAMIC]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-6-frederic@kernel.org
---
 include/linux/kernel.h | 23 +++++++++++++++++++----
 include/linux/sched.h  | 27 ++++++++++++++++++++++++---
 kernel/sched/core.c    | 16 +++++++++++++---
 3 files changed, 56 insertions(+), 10 deletions(-)

diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index f7902d8..cfd3d34 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -15,7 +15,7 @@
 #include <linux/typecheck.h>
 #include <linux/printk.h>
 #include <linux/build_bug.h>
-
+#include <linux/static_call_types.h>
 #include <asm/byteorder.h>
 
 #include <uapi/linux/kernel.h>
@@ -81,11 +81,26 @@ struct pt_regs;
 struct user;
 
 #ifdef CONFIG_PREEMPT_VOLUNTARY
-extern int _cond_resched(void);
-# define might_resched() _cond_resched()
+
+extern int __cond_resched(void);
+# define might_resched() __cond_resched()
+
+#elif defined(CONFIG_PREEMPT_DYNAMIC)
+
+extern int __cond_resched(void);
+
+DECLARE_STATIC_CALL(might_resched, __cond_resched);
+
+static __always_inline void might_resched(void)
+{
+	static_call(might_resched)();
+}
+
 #else
+
 # define might_resched() do { } while (0)
-#endif
+
+#endif /* CONFIG_PREEMPT_* */
 
 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
 extern void ___might_sleep(const char *file, int line, int preempt_offset);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index e115222..2f35594 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1871,11 +1871,32 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
  * value indicates whether a reschedule was done in fact.
  * cond_resched_lock() will drop the spinlock before scheduling,
  */
-#ifndef CONFIG_PREEMPTION
-extern int _cond_resched(void);
+#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
+extern int __cond_resched(void);
+
+#ifdef CONFIG_PREEMPT_DYNAMIC
+
+DECLARE_STATIC_CALL(cond_resched, __cond_resched);
+
+static __always_inline int _cond_resched(void)
+{
+	return static_call(cond_resched)();
+}
+
 #else
+
+static inline int _cond_resched(void)
+{
+	return __cond_resched();
+}
+
+#endif /* CONFIG_PREEMPT_DYNAMIC */
+
+#else
+
 static inline int _cond_resched(void) { return 0; }
-#endif
+
+#endif /* !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC) */
 
 #define cond_resched() ({			\
 	___might_sleep(__FILE__, __LINE__, 0);	\
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index be3a956..bc7b00b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6771,17 +6771,27 @@ SYSCALL_DEFINE0(sched_yield)
 	return 0;
 }
 
-#ifndef CONFIG_PREEMPTION
-int __sched _cond_resched(void)
+#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
+int __sched __cond_resched(void)
 {
 	if (should_resched(0)) {
 		preempt_schedule_common();
 		return 1;
 	}
+#ifndef CONFIG_PREEMPT_RCU
 	rcu_all_qs();
+#endif
 	return 0;
 }
-EXPORT_SYMBOL(_cond_resched);
+EXPORT_SYMBOL(__cond_resched);
+#endif
+
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
+EXPORT_STATIC_CALL(cond_resched);
+
+DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
+EXPORT_STATIC_CALL(might_resched);
 #endif
 
 /*

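As a usage sketch, a long-running loop yielding through cond_resched();
under CONFIG_PREEMPT_DYNAMIC the call now goes through the static call
and is patched according to preempt= (struct item and process_one() are
hypothetical):

    static void process_many(struct item *items, int n)
    {
            int i;

            for (i = 0; i < n; i++) {
                    process_one(&items[i]);
                    cond_resched(); /* patched to RET0 when preempt=full */
            }
    }
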
^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] preempt/dynamic: Provide irqentry_exit_cond_resched() static call
  2021-01-18 14:12 ` [RFC PATCH 7/8] preempt/dynamic: Provide irqentry_exit_cond_resched() static call Frederic Weisbecker
@ 2021-02-08 12:00   ` tip-bot2 for Peter Zijlstra (Intel)
  2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
  1 sibling, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra (Intel) @ 2021-02-08 12:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel), Frederic Weisbecker, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     74345075999752a7a9c805fe5e2ec770345cd1ca
Gitweb:        https://git.kernel.org/tip/74345075999752a7a9c805fe5e2ec770345cd1ca
Author:        Peter Zijlstra (Intel) <peterz@infradead.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:22 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 05 Feb 2021 17:19:57 +01:00

preempt/dynamic: Provide irqentry_exit_cond_resched() static call

Provide a static call to control IRQ preemption (called in CONFIG_PREEMPT)
so that its behaviour can be overridden when the preempt= boot option is
passed.

Since the default behaviour is full preemption, the call is
initialized to provide IRQ preemption when preempt= isn't passed.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-8-frederic@kernel.org
---
 include/linux/entry-common.h |  4 ++++
 kernel/entry/common.c        | 10 +++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index ca86a00..1401c93 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_ENTRYCOMMON_H
 #define __LINUX_ENTRYCOMMON_H
 
+#include <linux/static_call_types.h>
 #include <linux/tracehook.h>
 #include <linux/syscalls.h>
 #include <linux/seccomp.h>
@@ -453,6 +454,9 @@ irqentry_state_t noinstr irqentry_enter(struct pt_regs *regs);
  * Conditional reschedule with additional sanity checks.
  */
 void irqentry_exit_cond_resched(void);
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DECLARE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+#endif
 
 /**
  * irqentry_exit - Handle return from exception that used irqentry_enter()
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 3783416..84fa7ec 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -393,6 +393,9 @@ void irqentry_exit_cond_resched(void)
 			preempt_schedule_irq();
 	}
 }
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+#endif
 
 noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
 {
@@ -419,8 +422,13 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
 		}
 
 		instrumentation_begin();
-		if (IS_ENABLED(CONFIG_PREEMPTION))
+		if (IS_ENABLED(CONFIG_PREEMPTION)) {
+#ifdef CONFIG_PREEMPT_DYNAMIC
+			static_call(irqentry_exit_cond_resched)();
+#else
 			irqentry_exit_cond_resched();
+#endif
+		}
 		/* Covers both tracing and lockdep */
 		trace_hardirqs_on();
 		instrumentation_end();

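The pattern used here, defining a static call with the same name as its
default target function, can be sketched as follows (my_exit_resched is
a hypothetical name):

    void my_exit_resched(void);
    DEFINE_STATIC_CALL(my_exit_resched, my_exit_resched);

    static void exit_path(void)
    {
            /* defaults to calling my_exit_resched() itself */
            static_call(my_exit_resched)();
    }

The static call key and trampoline get their own prefixed symbol names,
so they don't collide with the function itself.
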
^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] static_call/x86: Add __static_call_return0()
  2021-01-18 14:12 ` [RFC PATCH 1/8] static_call/x86: Add __static_call_return0() Frederic Weisbecker
@ 2021-02-08 12:00   ` tip-bot2 for Peter Zijlstra
  2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra
  1 sibling, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra @ 2021-02-08 12:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel), Frederic Weisbecker, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     2f44200d3f3d6e6abab4e5529335f7852936f3a1
Gitweb:        https://git.kernel.org/tip/2f44200d3f3d6e6abab4e5529335f7852936f3a1
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:16 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 05 Feb 2021 17:19:55 +01:00

static_call/x86: Add __static_call_return0()

Provide a stub function that returns 0 and wire up the static call site
patching to replace the CALL with a single 5 byte instruction that
clears %RAX, the return value register.

The function can be cast to any function pointer type that has a
single %RAX return (including pointers). Also provide a version that
returns an int for convenience. We are clearing the entire %RAX register
in any case, whether the return value is 32 or 64 bits, since %RAX is
always a scratch register anyway.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-2-frederic@kernel.org
---
 arch/x86/kernel/static_call.c | 17 +++++++++++++++--
 include/linux/static_call.h   | 12 ++++++++++++
 kernel/static_call.c          |  5 +++++
 3 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
index ca9a380..9442c41 100644
--- a/arch/x86/kernel/static_call.c
+++ b/arch/x86/kernel/static_call.c
@@ -11,14 +11,26 @@ enum insn_type {
 	RET = 3,  /* tramp / site cond-tail-call */
 };
 
+/*
+ * data16 data16 xorq %rax, %rax - a single 5 byte instruction that clears %rax
+ * The REX.W cancels the effect of any data16.
+ */
+static const u8 xor5rax[] = { 0x66, 0x66, 0x48, 0x31, 0xc0 };
+
 static void __ref __static_call_transform(void *insn, enum insn_type type, void *func)
 {
+	const void *emulate = NULL;
 	int size = CALL_INSN_SIZE;
 	const void *code;
 
 	switch (type) {
 	case CALL:
 		code = text_gen_insn(CALL_INSN_OPCODE, insn, func);
+		if (func == &__static_call_return0) {
+			emulate = code;
+			code = &xor5rax;
+		}
+
 		break;
 
 	case NOP:
@@ -41,7 +53,7 @@ static void __ref __static_call_transform(void *insn, enum insn_type type, void 
 	if (unlikely(system_state == SYSTEM_BOOTING))
 		return text_poke_early(insn, code, size);
 
-	text_poke_bp(insn, code, size, NULL);
+	text_poke_bp(insn, code, size, emulate);
 }
 
 static void __static_call_validate(void *insn, bool tail)
@@ -54,7 +66,8 @@ static void __static_call_validate(void *insn, bool tail)
 			return;
 	} else {
 		if (opcode == CALL_INSN_OPCODE ||
-		    !memcmp(insn, ideal_nops[NOP_ATOMIC5], 5))
+		    !memcmp(insn, ideal_nops[NOP_ATOMIC5], 5) ||
+		    !memcmp(insn, xor5rax, 5))
 			return;
 	}
 
diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index a2c0645..bd6735d 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -142,6 +142,8 @@ extern void __static_call_update(struct static_call_key *key, void *tramp, void 
 extern int static_call_mod_init(struct module *mod);
 extern int static_call_text_reserved(void *start, void *end);
 
+extern long __static_call_return0(void);
+
 #define DEFINE_STATIC_CALL(name, _func)					\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
@@ -206,6 +208,11 @@ static inline int static_call_text_reserved(void *start, void *end)
 	return 0;
 }
 
+static inline long __static_call_return0(void)
+{
+	return 0;
+}
+
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
@@ -222,6 +229,11 @@ struct static_call_key {
 	void *func;
 };
 
+static inline long __static_call_return0(void)
+{
+	return 0;
+}
+
 #define DEFINE_STATIC_CALL(name, _func)					\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
diff --git a/kernel/static_call.c b/kernel/static_call.c
index 84565c2..0bc11b5 100644
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -438,6 +438,11 @@ int __init static_call_init(void)
 }
 early_initcall(static_call_init);
 
+long __static_call_return0(void)
+{
+	return 0;
+}
+
 #ifdef CONFIG_STATIC_CALL_SELFTEST
 
 static int func_a(int x)

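A hedged sketch of using the stub to switch off a value-returning static
call, mirroring the cast pattern from the preempt= patch (my_check and
its target are hypothetical):

    static int my_check_fn(void) { return 1; }

    DEFINE_STATIC_CALL(my_check, my_check_fn);

    static void disable_check(void)
    {
            /* on x86, patches call sites to the 5-byte xor shown above */
            static_call_update(my_check, (typeof(&my_check_fn)) __static_call_return0);
    }
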
^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] static_call: Provide DEFINE_STATIC_CALL_RET0()
  2021-01-18 14:12 ` [RFC PATCH 2/8] static_call: Provide DEFINE_STATIC_CALL_RET0() Frederic Weisbecker
@ 2021-02-08 12:00   ` tip-bot2 for Frederic Weisbecker
  2021-02-17 13:17   ` tip-bot2 for Frederic Weisbecker
  1 sibling, 0 replies; 61+ messages in thread
From: tip-bot2 for Frederic Weisbecker @ 2021-02-08 12:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Frederic Weisbecker, Peter Zijlstra (Intel), x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     50ace20f2cfecd90c88edaf58400b362f42f2960
Gitweb:        https://git.kernel.org/tip/50ace20f2cfecd90c88edaf58400b362f42f2960
Author:        Frederic Weisbecker <frederic@kernel.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:17 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 05 Feb 2021 17:19:55 +01:00

static_call: Provide DEFINE_STATIC_CALL_RET0()

DECLARE_STATIC_CALL() must be passed the original function targeted by a
given static call. But DEFINE_STATIC_CALL() may want to initialize it as
"off". In this case we can't pass NULL (for functions without a return
value) or __static_call_return0 (for functions returning a value) directly
to DEFINE_STATIC_CALL(), as that may trigger a static call redeclaration
with a different function prototype. Nor can type casts work around
this, as they don't get along with typeof().

The proper way to do this for functions that don't return a value is
to use DEFINE_STATIC_CALL_NULL(). But functions returning an actual value
don't have an equivalent yet.

Provide DEFINE_STATIC_CALL_RET0() to solve this situation.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-3-frederic@kernel.org
---
 include/linux/static_call.h | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index bd6735d..d69dd8b 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -144,13 +144,13 @@ extern int static_call_text_reserved(void *start, void *end);
 
 extern long __static_call_return0(void);
 
-#define DEFINE_STATIC_CALL(name, _func)					\
+#define __DEFINE_STATIC_CALL(name, _func, _func_init)			\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = _func,						\
+		.func = _func_init,					\
 		.type = 1,						\
 	};								\
-	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func)
+	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func_init)
 
 #define DEFINE_STATIC_CALL_NULL(name, _func)				\
 	DECLARE_STATIC_CALL(name, _func);				\
@@ -178,12 +178,12 @@ struct static_call_key {
 	void *func;
 };
 
-#define DEFINE_STATIC_CALL(name, _func)					\
+#define __DEFINE_STATIC_CALL(name, _func, _func_init)			\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = _func,						\
+		.func = _func_init,					\
 	};								\
-	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func)
+	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func_init)
 
 #define DEFINE_STATIC_CALL_NULL(name, _func)				\
 	DECLARE_STATIC_CALL(name, _func);				\
@@ -234,10 +234,10 @@ static inline long __static_call_return0(void)
 	return 0;
 }
 
-#define DEFINE_STATIC_CALL(name, _func)					\
+#define __DEFINE_STATIC_CALL(name, _func, _func_init)			\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = _func,						\
+		.func = _func_init,					\
 	}
 
 #define DEFINE_STATIC_CALL_NULL(name, _func)				\
@@ -286,4 +286,10 @@ static inline int static_call_text_reserved(void *start, void *end)
 
 #endif /* CONFIG_HAVE_STATIC_CALL */
 
+#define DEFINE_STATIC_CALL(name, _func)					\
+	__DEFINE_STATIC_CALL(name, _func, _func)
+
+#define DEFINE_STATIC_CALL_RET0(name, _func)				\
+	__DEFINE_STATIC_CALL(name, _func, __static_call_return0)
+
 #endif /* _LINUX_STATIC_CALL_H */

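A short sketch of the new helper (my_resched is a hypothetical target):

    extern int my_resched(void);

    /* declared with the real prototype, but initialized to return 0 */
    DEFINE_STATIC_CALL_RET0(my_sc, my_resched);

    static void enable_resched(void)
    {
            static_call_update(my_sc, my_resched); /* switch to the real target */
    }
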
^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] static_call: Pull some static_call declarations to the type headers
  2021-01-18 14:12 ` [RFC PATCH 3/8] static_call: Pull some static_call declarations to the type headers Frederic Weisbecker
                     ` (2 preceding siblings ...)
  2021-01-19 10:46   ` Jürgen Groß
@ 2021-02-08 12:00   ` tip-bot2 for Peter Zijlstra
  2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra
  4 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra @ 2021-02-08 12:00 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel), Frederic Weisbecker, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     407bc881b21d9b6cd14dd9b09adffc2d8e45fbe9
Gitweb:        https://git.kernel.org/tip/407bc881b21d9b6cd14dd9b09adffc2d8e45fbe9
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:18 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Fri, 05 Feb 2021 17:19:54 +01:00

static_call: Pull some static_call declarations to the type headers

Some static call declarations are going to be needed in low-level header
files. Move the necessary material to the dedicated static call types
header to avoid inclusion dependency hell.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-4-frederic@kernel.org
---
 include/linux/static_call.h             | 21 +-------------------
 include/linux/static_call_types.h       | 27 ++++++++++++++++++++++++-
 tools/include/linux/static_call_types.h | 27 ++++++++++++++++++++++++-
 3 files changed, 54 insertions(+), 21 deletions(-)

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 695da4c..a2c0645 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -107,26 +107,10 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool
 
 #define STATIC_CALL_TRAMP_ADDR(name) &STATIC_CALL_TRAMP(name)
 
-/*
- * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
- * the symbol table so that objtool can reference it when it generates the
- * .static_call_sites section.
- */
-#define __static_call(name)						\
-({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
-})
-
 #else
 #define STATIC_CALL_TRAMP_ADDR(name) NULL
 #endif
 
-
-#define DECLARE_STATIC_CALL(name, func)					\
-	extern struct static_call_key STATIC_CALL_KEY(name);		\
-	extern typeof(func) STATIC_CALL_TRAMP(name);
-
 #define static_call_update(name, func)					\
 ({									\
 	BUILD_BUG_ON(!__same_type(*(func), STATIC_CALL_TRAMP(name)));	\
@@ -174,7 +158,6 @@ extern int static_call_text_reserved(void *start, void *end);
 	};								\
 	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
 
-#define static_call(name)	__static_call(name)
 #define static_call_cond(name)	(void)__static_call(name)
 
 #define EXPORT_STATIC_CALL(name)					\
@@ -207,7 +190,6 @@ struct static_call_key {
 	};								\
 	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
 
-#define static_call(name)	__static_call(name)
 #define static_call_cond(name)	(void)__static_call(name)
 
 static inline
@@ -252,9 +234,6 @@ struct static_call_key {
 		.func = NULL,						\
 	}
 
-#define static_call(name)						\
-	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
-
 static inline void __static_call_nop(void) { }
 
 /*
diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 89135bb..08f78b1 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -4,6 +4,7 @@
 
 #include <linux/types.h>
 #include <linux/stringify.h>
+#include <linux/compiler.h>
 
 #define STATIC_CALL_KEY_PREFIX		__SCK__
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
@@ -32,4 +33,30 @@ struct static_call_site {
 	s32 key;
 };
 
+#define DECLARE_STATIC_CALL(name, func)					\
+	extern struct static_call_key STATIC_CALL_KEY(name);		\
+	extern typeof(func) STATIC_CALL_TRAMP(name);
+
+#ifdef CONFIG_HAVE_STATIC_CALL
+
+/*
+ * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
+ * the symbol table so that objtool can reference it when it generates the
+ * .static_call_sites section.
+ */
+#define __static_call(name)						\
+({									\
+	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
+	&STATIC_CALL_TRAMP(name);					\
+})
+
+#define static_call(name)	__static_call(name)
+
+#else
+
+#define static_call(name)						\
+	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
+
+#endif /* CONFIG_HAVE_STATIC_CALL */
+
 #endif /* _STATIC_CALL_TYPES_H */
diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h
index 89135bb..08f78b1 100644
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -4,6 +4,7 @@
 
 #include <linux/types.h>
 #include <linux/stringify.h>
+#include <linux/compiler.h>
 
 #define STATIC_CALL_KEY_PREFIX		__SCK__
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
@@ -32,4 +33,30 @@ struct static_call_site {
 	s32 key;
 };
 
+#define DECLARE_STATIC_CALL(name, func)					\
+	extern struct static_call_key STATIC_CALL_KEY(name);		\
+	extern typeof(func) STATIC_CALL_TRAMP(name);
+
+#ifdef CONFIG_HAVE_STATIC_CALL
+
+/*
+ * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
+ * the symbol table so that objtool can reference it when it generates the
+ * .static_call_sites section.
+ */
+#define __static_call(name)						\
+({									\
+	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
+	&STATIC_CALL_TRAMP(name);					\
+})
+
+#define static_call(name)	__static_call(name)
+
+#else
+
+#define static_call(name)						\
+	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
+
+#endif /* CONFIG_HAVE_STATIC_CALL */
+
 #endif /* _STATIC_CALL_TYPES_H */

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] static_call: Allow module use without exposing static_call_key
  2021-01-27 23:18                       ` Josh Poimboeuf
                                           ` (2 preceding siblings ...)
  2021-02-08 12:00                         ` [tip: sched/core] static_call: Allow module use without exposing static_call_key tip-bot2 for Josh Poimboeuf
@ 2021-02-09 15:45                         ` tip-bot2 for Josh Poimboeuf
  2021-02-17 13:17                         ` tip-bot2 for Josh Poimboeuf
  4 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2021-02-09 15:45 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel), Josh Poimboeuf, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     8659343e7612746d595d55e7cf695c46f2ed571a
Gitweb:        https://git.kernel.org/tip/8659343e7612746d595d55e7cf695c46f2ed571a
Author:        Josh Poimboeuf <jpoimboe@redhat.com>
AuthorDate:    Wed, 27 Jan 2021 17:18:37 -06:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 09 Feb 2021 16:31:03 +01:00

static_call: Allow module use without exposing static_call_key

When exporting static_call_key with EXPORT_STATIC_CALL*(), the module
can use static_call_update() to change the function called.  This is
not desirable in general.

Not exporting static_call_key however also disallows usage of
static_call(), since objtool needs the key to construct the
static_call_site.

Solve this by allowing objtool to create the static_call_site using
the trampoline address when it builds a module and cannot find the
static_call_key symbol. The module loader will then try to map the
trampoline back to a key before it constructs the normal sites list.

Doing this requires a trampoline -> key association, so add another
magic section that keeps those.

Originally-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210127231837.ifddpn7rhwdaepiu@treble
---
 arch/x86/include/asm/static_call.h      |  7 +++-
 include/asm-generic/vmlinux.lds.h       |  5 +-
 include/linux/static_call.h             | 22 +++++++++-
 include/linux/static_call_types.h       | 27 +++++++++++-
 kernel/static_call.c                    | 55 +++++++++++++++++++++++-
 tools/include/linux/static_call_types.h | 27 +++++++++++-
 tools/objtool/check.c                   | 17 ++++++-
 7 files changed, 149 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h
index c37f119..cbb67b6 100644
--- a/arch/x86/include/asm/static_call.h
+++ b/arch/x86/include/asm/static_call.h
@@ -37,4 +37,11 @@
 #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)			\
 	__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; nop; nop; nop; nop")
 
+
+#define ARCH_ADD_TRAMP_KEY(name)					\
+	asm(".pushsection .static_call_tramp_key, \"a\"		\n"	\
+	    ".long " STATIC_CALL_TRAMP_STR(name) " - .		\n"	\
+	    ".long " STATIC_CALL_KEY_STR(name) " - .		\n"	\
+	    ".popsection					\n")
+
 #endif /* _ASM_STATIC_CALL_H */
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index b2b3d81..b0871e2 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -393,7 +393,10 @@
 	. = ALIGN(8);							\
 	__start_static_call_sites = .;					\
 	KEEP(*(.static_call_sites))					\
-	__stop_static_call_sites = .;
+	__stop_static_call_sites = .;					\
+	__start_static_call_tramp_key = .;				\
+	KEEP(*(.static_call_tramp_key))					\
+	__stop_static_call_tramp_key = .;
 
 /*
  * Allow architectures to handle ro_after_init data on their
diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index d69dd8b..85ecc78 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -138,6 +138,12 @@ struct static_call_key {
 	};
 };
 
+/* For finding the key associated with a trampoline */
+struct static_call_tramp_key {
+	s32 tramp;
+	s32 key;
+};
+
 extern void __static_call_update(struct static_call_key *key, void *tramp, void *func);
 extern int static_call_mod_init(struct module *mod);
 extern int static_call_text_reserved(void *start, void *end);
@@ -165,11 +171,18 @@ extern long __static_call_return0(void);
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
-
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
 
+/* Leave the key unexported, so modules can't change static call targets: */
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name));				\
+	ARCH_ADD_TRAMP_KEY(name)
+#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name));			\
+	ARCH_ADD_TRAMP_KEY(name)
+
 #elif defined(CONFIG_HAVE_STATIC_CALL)
 
 static inline int static_call_init(void) { return 0; }
@@ -216,11 +229,16 @@ static inline long __static_call_return0(void)
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
-
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
 
+/* Leave the key unexported, so modules can't change static call targets: */
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
+
 #else /* Generic implementation */
 
 static inline int static_call_init(void) { return 0; }
diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 08f78b1..ae5662d 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -10,6 +10,7 @@
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
 #define STATIC_CALL_KEY_PREFIX_LEN	(sizeof(STATIC_CALL_KEY_PREFIX_STR) - 1)
 #define STATIC_CALL_KEY(name)		__PASTE(STATIC_CALL_KEY_PREFIX, name)
+#define STATIC_CALL_KEY_STR(name)	__stringify(STATIC_CALL_KEY(name))
 
 #define STATIC_CALL_TRAMP_PREFIX	__SCT__
 #define STATIC_CALL_TRAMP_PREFIX_STR	__stringify(STATIC_CALL_TRAMP_PREFIX)
@@ -39,17 +40,39 @@ struct static_call_site {
 
 #ifdef CONFIG_HAVE_STATIC_CALL
 
+#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
+
+#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
 /*
  * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
  * the symbol table so that objtool can reference it when it generates the
  * .static_call_sites section.
  */
+#define __STATIC_CALL_ADDRESSABLE(name) \
+	__ADDRESSABLE(STATIC_CALL_KEY(name))
+
 #define __static_call(name)						\
 ({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
+	__STATIC_CALL_ADDRESSABLE(name);				\
+	__raw_static_call(name);					\
 })
 
+#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#define __STATIC_CALL_ADDRESSABLE(name)
+#define __static_call(name)	__raw_static_call(name)
+
+#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#ifdef MODULE
+#define __STATIC_CALL_MOD_ADDRESSABLE(name)
+#define static_call_mod(name)	__raw_static_call(name)
+#else
+#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_mod(name)	__static_call(name)
+#endif
+
 #define static_call(name)	__static_call(name)
 
 #else
diff --git a/kernel/static_call.c b/kernel/static_call.c
index 0bc11b5..6906c6e 100644
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -12,6 +12,8 @@
 
 extern struct static_call_site __start_static_call_sites[],
 			       __stop_static_call_sites[];
+extern struct static_call_tramp_key __start_static_call_tramp_key[],
+				    __stop_static_call_tramp_key[];
 
 static bool static_call_initialized;
 
@@ -323,10 +325,59 @@ static int __static_call_mod_text_reserved(void *start, void *end)
 	return ret;
 }
 
+static unsigned long tramp_key_lookup(unsigned long addr)
+{
+	struct static_call_tramp_key *start = __start_static_call_tramp_key;
+	struct static_call_tramp_key *stop = __stop_static_call_tramp_key;
+	struct static_call_tramp_key *tramp_key;
+
+	for (tramp_key = start; tramp_key != stop; tramp_key++) {
+		unsigned long tramp;
+
+		tramp = (long)tramp_key->tramp + (long)&tramp_key->tramp;
+		if (tramp == addr)
+			return (long)tramp_key->key + (long)&tramp_key->key;
+	}
+
+	return 0;
+}
+
 static int static_call_add_module(struct module *mod)
 {
-	return __static_call_init(mod, mod->static_call_sites,
-				  mod->static_call_sites + mod->num_static_call_sites);
+	struct static_call_site *start = mod->static_call_sites;
+	struct static_call_site *stop = start + mod->num_static_call_sites;
+	struct static_call_site *site;
+
+	for (site = start; site != stop; site++) {
+		unsigned long addr = (unsigned long)static_call_key(site);
+		unsigned long key;
+
+		/*
+		 * If the key is exported, 'addr' points to the key, which
+		 * means modules are allowed to call static_call_update() on
+		 * it.
+		 *
+		 * Otherwise, the key isn't exported, and 'addr' points to the
+		 * trampoline so we need to lookup the key.
+		 *
+		 * We go through this dance to prevent crazy modules from
+		 * abusing sensitive static calls.
+		 */
+		if (!kernel_text_address(addr))
+			continue;
+
+		key = tramp_key_lookup(addr);
+		if (!key) {
+			pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
+				static_call_addr(site));
+			return -EINVAL;
+		}
+
+		site->key = (key - (long)&site->key) |
+			    (site->key & STATIC_CALL_SITE_FLAGS);
+	}
+
+	return __static_call_init(mod, start, stop);
 }
 
 static void static_call_del_module(struct module *mod)
diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h
index 08f78b1..ae5662d 100644
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -10,6 +10,7 @@
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
 #define STATIC_CALL_KEY_PREFIX_LEN	(sizeof(STATIC_CALL_KEY_PREFIX_STR) - 1)
 #define STATIC_CALL_KEY(name)		__PASTE(STATIC_CALL_KEY_PREFIX, name)
+#define STATIC_CALL_KEY_STR(name)	__stringify(STATIC_CALL_KEY(name))
 
 #define STATIC_CALL_TRAMP_PREFIX	__SCT__
 #define STATIC_CALL_TRAMP_PREFIX_STR	__stringify(STATIC_CALL_TRAMP_PREFIX)
@@ -39,17 +40,39 @@ struct static_call_site {
 
 #ifdef CONFIG_HAVE_STATIC_CALL
 
+#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
+
+#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
 /*
  * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
  * the symbol table so that objtool can reference it when it generates the
  * .static_call_sites section.
  */
+#define __STATIC_CALL_ADDRESSABLE(name) \
+	__ADDRESSABLE(STATIC_CALL_KEY(name))
+
 #define __static_call(name)						\
 ({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
+	__STATIC_CALL_ADDRESSABLE(name);				\
+	__raw_static_call(name);					\
 })
 
+#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#define __STATIC_CALL_ADDRESSABLE(name)
+#define __static_call(name)	__raw_static_call(name)
+
+#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#ifdef MODULE
+#define __STATIC_CALL_MOD_ADDRESSABLE(name)
+#define static_call_mod(name)	__raw_static_call(name)
+#else
+#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_mod(name)	__static_call(name)
+#endif
+
 #define static_call(name)	__static_call(name)
 
 #else
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 5f8d3ee..7bd96d6 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -502,8 +502,21 @@ static int create_static_call_sections(struct objtool_file *file)
 
 		key_sym = find_symbol_by_name(file->elf, tmp);
 		if (!key_sym) {
-			WARN("static_call: can't find static_call_key symbol: %s", tmp);
-			return -1;
+			if (!module) {
+				WARN("static_call: can't find static_call_key symbol: %s", tmp);
+				return -1;
+			}
+
+			/*
+			 * For modules, the key might not be exported, which
+			 * means the module can make static calls but isn't
+			 * allowed to change them.
+			 *
+			 * In that case we temporarily set the key to be the
+			 * trampoline address.  This is fixed up in
+			 * static_call_add_module().
+			 */
+			key_sym = insn->call_dest;
 		}
 		free(key_name);
 

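A hedged sketch of the resulting core/module split (the my_hook names
are hypothetical):

    /* core kernel: export only the trampoline, not the key */
    DEFINE_STATIC_CALL(my_hook, my_hook_fn);
    EXPORT_STATIC_CALL_TRAMP_GPL(my_hook);

    /* module: may call, but cannot retarget */
    static int mod_fn(void)
    {
            /*
             * static_call_update(my_hook, ...) would fail to link here,
             * since the __SCK__my_hook key symbol is not exported.
             */
            return static_call_mod(my_hook)();
    }
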
^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] sched: Add /debug/sched_preempt
  2021-01-22 17:08       ` Peter Zijlstra
  2021-02-08 12:00         ` [tip: sched/core] sched: Add /debug/sched_preempt tip-bot2 for Peter Zijlstra
@ 2021-02-09 15:45         ` tip-bot2 for Peter Zijlstra
  2021-02-17 13:17         ` tip-bot2 for Peter Zijlstra
  2 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra @ 2021-02-09 15:45 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Peter Zijlstra (Intel), x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     b57f3de85c79f9fbfe2fd84cc6ba548e4e73d02d
Gitweb:        https://git.kernel.org/tip/b57f3de85c79f9fbfe2fd84cc6ba548e4e73d02d
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Fri, 22 Jan 2021 13:01:58 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 09 Feb 2021 16:31:03 +01:00

sched: Add /debug/sched_preempt

Add a debugfs file to muck about with the preempt mode at runtime.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/YAsGiUYf6NyaTplX@hirez.programming.kicks-ass.net
---
 kernel/sched/core.c | 135 ++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 126 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 220393d..cb226f7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5349,37 +5349,154 @@ EXPORT_STATIC_CALL(preempt_schedule_notrace);
  *   preempt_schedule_notrace   <- preempt_schedule_notrace
  *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
  */
-static int __init setup_preempt_mode(char *str)
+
+enum {
+	preempt_dynamic_none = 0,
+	preempt_dynamic_voluntary,
+	preempt_dynamic_full,
+};
+
+static int preempt_dynamic_mode = preempt_dynamic_full;
+
+static int sched_dynamic_mode(const char *str)
 {
-	if (!strcmp(str, "none")) {
+	if (!strcmp(str, "none"))
+		return 0;
+
+	if (!strcmp(str, "voluntary"))
+		return 1;
+
+	if (!strcmp(str, "full"))
+		return 2;
+
+	return -1;
+}
+
+static void sched_dynamic_update(int mode)
+{
+	/*
+	 * Prevent {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
+	 * the ZERO state, which is invalid.
+	 */
+	static_call_update(cond_resched, __cond_resched);
+	static_call_update(might_resched, __cond_resched);
+	static_call_update(preempt_schedule, __preempt_schedule_func);
+	static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+	static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+
+	switch (mode) {
+	case preempt_dynamic_none:
 		static_call_update(cond_resched, __cond_resched);
 		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
 		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
 		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
 		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
-		pr_info("Dynamic Preempt: %s\n", str);
-	} else if (!strcmp(str, "voluntary")) {
+		pr_info("Dynamic Preempt: none\n");
+		break;
+
+	case preempt_dynamic_voluntary:
 		static_call_update(cond_resched, __cond_resched);
 		static_call_update(might_resched, __cond_resched);
 		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
 		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
 		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
-		pr_info("Dynamic Preempt: %s\n", str);
-	} else if (!strcmp(str, "full")) {
+		pr_info("Dynamic Preempt: voluntary\n");
+		break;
+
+	case preempt_dynamic_full:
 		static_call_update(cond_resched, (typeof(&__cond_resched)) __static_call_return0);
 		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
 		static_call_update(preempt_schedule, __preempt_schedule_func);
 		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
 		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
-		pr_info("Dynamic Preempt: %s\n", str);
-	} else {
-		pr_warn("Dynamic Preempt: Unsupported preempt mode %s, default to full\n", str);
+		pr_info("Dynamic Preempt: full\n");
+		break;
+	}
+
+	preempt_dynamic_mode = mode;
+}
+
+static int __init setup_preempt_mode(char *str)
+{
+	int mode = sched_dynamic_mode(str);
+	if (mode < 0) {
+		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
 		return 1;
 	}
+
+	sched_dynamic_update(mode);
 	return 0;
 }
 __setup("preempt=", setup_preempt_mode);
 
+#ifdef CONFIG_SCHED_DEBUG
+
+static ssize_t sched_dynamic_write(struct file *filp, const char __user *ubuf,
+				   size_t cnt, loff_t *ppos)
+{
+	char buf[16];
+	int mode;
+
+	if (cnt > 15)
+		cnt = 15;
+
+	if (copy_from_user(&buf, ubuf, cnt))
+		return -EFAULT;
+
+	buf[cnt] = 0;
+	mode = sched_dynamic_mode(strstrip(buf));
+	if (mode < 0)
+		return mode;
+
+	sched_dynamic_update(mode);
+
+	*ppos += cnt;
+
+	return cnt;
+}
+
+static int sched_dynamic_show(struct seq_file *m, void *v)
+{
+	static const char * preempt_modes[] = {
+		"none", "voluntary", "full"
+	};
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(preempt_modes); i++) {
+		if (preempt_dynamic_mode == i)
+			seq_puts(m, "(");
+		seq_puts(m, preempt_modes[i]);
+		if (preempt_dynamic_mode == i)
+			seq_puts(m, ")");
+
+		seq_puts(m, " ");
+	}
+
+	seq_puts(m, "\n");
+	return 0;
+}
+
+static int sched_dynamic_open(struct inode *inode, struct file *filp)
+{
+	return single_open(filp, sched_dynamic_show, NULL);
+}
+
+static const struct file_operations sched_dynamic_fops = {
+	.open		= sched_dynamic_open,
+	.write		= sched_dynamic_write,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static __init int sched_init_debug_dynamic(void)
+{
+	debugfs_create_file("sched_preempt", 0644, NULL, NULL, &sched_dynamic_fops);
+	return 0;
+}
+late_initcall(sched_init_debug_dynamic);
+
+#endif /* CONFIG_SCHED_DEBUG */
 #endif /* CONFIG_PREEMPT_DYNAMIC */
 
 

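A user-space sketch exercising the new file (assuming debugfs is mounted
at /sys/kernel/debug):

    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/sys/kernel/debug/sched_preempt", "w");

            if (!f)
                    return 1;
            fputs("voluntary", f);  /* same strings as the preempt= option */
            return fclose(f) ? 1 : 0;
    }

Reading the file back lists all modes with the current one in
parentheses, e.g. "none (voluntary) full".
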
^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] preempt/dynamic: Support dynamic preempt with preempt= boot option
  2021-01-18 14:12 ` [RFC PATCH 8/8] preempt/dynamic: Support dynamic preempt with preempt= boot option Frederic Weisbecker
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra (Intel)
@ 2021-02-09 15:45   ` tip-bot2 for Peter Zijlstra (Intel)
  2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
  2 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra (Intel) @ 2021-02-09 15:45 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel), Frederic Weisbecker, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     0e79823f55de3cff95894fbb40440b17910e7378
Gitweb:        https://git.kernel.org/tip/0e79823f55de3cff95894fbb40440b17910e7378
Author:        Peter Zijlstra (Intel) <peterz@infradead.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:23 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 09 Feb 2021 16:30:58 +01:00

preempt/dynamic: Support dynamic preempt with preempt= boot option

Support the preempt= boot option and patch the static call sites
accordingly.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-9-frederic@kernel.org
---
 kernel/sched/core.c | 68 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 67 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index cd0c46f..220393d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5314,9 +5314,75 @@ DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func);
 EXPORT_STATIC_CALL(preempt_schedule_notrace);
 #endif
 
-
 #endif /* CONFIG_PREEMPTION */
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+
+#include <linux/entry-common.h>
+
+/*
+ * SC:cond_resched
+ * SC:might_resched
+ * SC:preempt_schedule
+ * SC:preempt_schedule_notrace
+ * SC:irqentry_exit_cond_resched
+ *
+ *
+ * NONE:
+ *   cond_resched               <- __cond_resched
+ *   might_resched              <- RET0
+ *   preempt_schedule           <- NOP
+ *   preempt_schedule_notrace   <- NOP
+ *   irqentry_exit_cond_resched <- NOP
+ *
+ * VOLUNTARY:
+ *   cond_resched               <- __cond_resched
+ *   might_resched              <- __cond_resched
+ *   preempt_schedule           <- NOP
+ *   preempt_schedule_notrace   <- NOP
+ *   irqentry_exit_cond_resched <- NOP
+ *
+ * FULL:
+ *   cond_resched               <- RET0
+ *   might_resched              <- RET0
+ *   preempt_schedule           <- preempt_schedule
+ *   preempt_schedule_notrace   <- preempt_schedule_notrace
+ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
+ */
+static int __init setup_preempt_mode(char *str)
+{
+	if (!strcmp(str, "none")) {
+		static_call_update(cond_resched, __cond_resched);
+		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
+		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
+		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
+		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else if (!strcmp(str, "voluntary")) {
+		static_call_update(cond_resched, __cond_resched);
+		static_call_update(might_resched, __cond_resched);
+		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
+		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
+		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else if (!strcmp(str, "full")) {
+		static_call_update(cond_resched, (typeof(&__cond_resched)) __static_call_return0);
+		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
+		static_call_update(preempt_schedule, __preempt_schedule_func);
+		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else {
+		pr_warn("Dynamic Preempt: Unsupported preempt mode %s, default to full\n", str);
+		return 1;
+	}
+	return 0;
+}
+__setup("preempt=", setup_preempt_mode);
+
+#endif /* CONFIG_PREEMPT_DYNAMIC */
+
+
 /*
  * This is the entry point to schedule() from kernel preemption
  * off of irq context.

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] sched: Add /debug/sched_preempt
  2021-01-22 17:08       ` Peter Zijlstra
  2021-02-08 12:00         ` [tip: sched/core] sched: Add /debug/sched_preempt tip-bot2 for Peter Zijlstra
  2021-02-09 15:45         ` tip-bot2 for Peter Zijlstra
@ 2021-02-17 13:17         ` tip-bot2 for Peter Zijlstra
  2 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra @ 2021-02-17 13:17 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Peter Zijlstra (Intel), Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     e59e10f8ef63d42fbb99776a5a112841e798b3b5
Gitweb:        https://git.kernel.org/tip/e59e10f8ef63d42fbb99776a5a112841e798b3b5
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Fri, 22 Jan 2021 13:01:58 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 17 Feb 2021 14:12:42 +01:00

sched: Add /debug/sched_preempt

Add a debugfs file to muck about with the preempt mode at runtime.
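
A possible session, assuming debugfs is mounted at /sys/kernel/debug;
sched_dynamic_show() prints every mode and parenthesizes the current
one:

	# cat /sys/kernel/debug/sched_preempt
	none voluntary (full)
	# echo none > /sys/kernel/debug/sched_preempt
	# dmesg | tail -1
	[  ...] Dynamic Preempt: none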

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/YAsGiUYf6NyaTplX@hirez.programming.kicks-ass.net
---
 kernel/sched/core.c | 135 ++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 126 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0c06717..4a17bb5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5363,37 +5363,154 @@ EXPORT_STATIC_CALL(preempt_schedule_notrace);
  *   preempt_schedule_notrace   <- preempt_schedule_notrace
  *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
  */
-static int __init setup_preempt_mode(char *str)
+
+enum {
+	preempt_dynamic_none = 0,
+	preempt_dynamic_voluntary,
+	preempt_dynamic_full,
+};
+
+static int preempt_dynamic_mode = preempt_dynamic_full;
+
+static int sched_dynamic_mode(const char *str)
 {
-	if (!strcmp(str, "none")) {
+	if (!strcmp(str, "none"))
+		return 0;
+
+	if (!strcmp(str, "voluntary"))
+		return 1;
+
+	if (!strcmp(str, "full"))
+		return 2;
+
+	return -1;
+}
+
+static void sched_dynamic_update(int mode)
+{
+	/*
+	 * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
+	 * the ZERO state, which is invalid.
+	 */
+	static_call_update(cond_resched, __cond_resched);
+	static_call_update(might_resched, __cond_resched);
+	static_call_update(preempt_schedule, __preempt_schedule_func);
+	static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+	static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+
+	switch (mode) {
+	case preempt_dynamic_none:
 		static_call_update(cond_resched, __cond_resched);
 		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
 		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
 		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
 		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
-		pr_info("Dynamic Preempt: %s\n", str);
-	} else if (!strcmp(str, "voluntary")) {
+		pr_info("Dynamic Preempt: none\n");
+		break;
+
+	case preempt_dynamic_voluntary:
 		static_call_update(cond_resched, __cond_resched);
 		static_call_update(might_resched, __cond_resched);
 		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
 		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
 		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
-		pr_info("Dynamic Preempt: %s\n", str);
-	} else if (!strcmp(str, "full")) {
+		pr_info("Dynamic Preempt: voluntary\n");
+		break;
+
+	case preempt_dynamic_full:
 		static_call_update(cond_resched, (typeof(&__cond_resched)) __static_call_return0);
 		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
 		static_call_update(preempt_schedule, __preempt_schedule_func);
 		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
 		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
-		pr_info("Dynamic Preempt: %s\n", str);
-	} else {
-		pr_warn("Dynamic Preempt: Unsupported preempt mode %s, default to full\n", str);
+		pr_info("Dynamic Preempt: full\n");
+		break;
+	}
+
+	preempt_dynamic_mode = mode;
+}
+
+static int __init setup_preempt_mode(char *str)
+{
+	int mode = sched_dynamic_mode(str);
+	if (mode < 0) {
+		pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
 		return 1;
 	}
+
+	sched_dynamic_update(mode);
 	return 0;
 }
 __setup("preempt=", setup_preempt_mode);
 
+#ifdef CONFIG_SCHED_DEBUG
+
+static ssize_t sched_dynamic_write(struct file *filp, const char __user *ubuf,
+				   size_t cnt, loff_t *ppos)
+{
+	char buf[16];
+	int mode;
+
+	if (cnt > 15)
+		cnt = 15;
+
+	if (copy_from_user(&buf, ubuf, cnt))
+		return -EFAULT;
+
+	buf[cnt] = 0;
+	mode = sched_dynamic_mode(strstrip(buf));
+	if (mode < 0)
+		return mode;
+
+	sched_dynamic_update(mode);
+
+	*ppos += cnt;
+
+	return cnt;
+}
+
+static int sched_dynamic_show(struct seq_file *m, void *v)
+{
+	static const char * preempt_modes[] = {
+		"none", "voluntary", "full"
+	};
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(preempt_modes); i++) {
+		if (preempt_dynamic_mode == i)
+			seq_puts(m, "(");
+		seq_puts(m, preempt_modes[i]);
+		if (preempt_dynamic_mode == i)
+			seq_puts(m, ")");
+
+		seq_puts(m, " ");
+	}
+
+	seq_puts(m, "\n");
+	return 0;
+}
+
+static int sched_dynamic_open(struct inode *inode, struct file *filp)
+{
+	return single_open(filp, sched_dynamic_show, NULL);
+}
+
+static const struct file_operations sched_dynamic_fops = {
+	.open		= sched_dynamic_open,
+	.write		= sched_dynamic_write,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static __init int sched_init_debug_dynamic(void)
+{
+	debugfs_create_file("sched_preempt", 0644, NULL, NULL, &sched_dynamic_fops);
+	return 0;
+}
+late_initcall(sched_init_debug_dynamic);
+
+#endif /* CONFIG_SCHED_DEBUG */
 #endif /* CONFIG_PREEMPT_DYNAMIC */
 
 

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] static_call: Allow module use without exposing static_call_key
  2021-01-27 23:18                       ` Josh Poimboeuf
                                           ` (3 preceding siblings ...)
  2021-02-09 15:45                         ` tip-bot2 for Josh Poimboeuf
@ 2021-02-17 13:17                         ` tip-bot2 for Josh Poimboeuf
  4 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2021-02-17 13:17 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel), Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     73f44fe19d359635a607e8e8daa0da4001c1cfc2
Gitweb:        https://git.kernel.org/tip/73f44fe19d359635a607e8e8daa0da4001c1cfc2
Author:        Josh Poimboeuf <jpoimboe@redhat.com>
AuthorDate:    Wed, 27 Jan 2021 17:18:37 -06:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 17 Feb 2021 14:12:42 +01:00

static_call: Allow module use without exposing static_call_key

When exporting static_call_key with EXPORT_STATIC_CALL*(), the module
can use static_call_update() to change the function called.  This is
not desirable in general.

Not exporting static_call_key however also disallows usage of
static_call(), since objtool needs the key to construct the
static_call_site.

Solve this by allowing objtool to create the static_call_site using
the trampoline address when it builds a module and cannot find the
static_call_key symbol. The module loader will then try and map the
trampole back to a key before it constructs the normal sites list.

Doing this requires a trampoline -> key association, so add another
magic section that keeps those.
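
A minimal sketch of the resulting usage (my_call and my_default are
hypothetical names; the macros are the ones introduced below):

	/* Core kernel: define the call, export only the trampoline. */
	DEFINE_STATIC_CALL(my_call, my_default);
	EXPORT_STATIC_CALL_TRAMP_GPL(my_call);

	/* A module (with DECLARE_STATIC_CALL() visible in a header)
	 * may invoke the call... */
	static_call_mod(my_call)();

	/* ...but static_call_update(my_call, some_func) fails to link,
	 * since the key (__SCK__my_call) is not exported. */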

Originally-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210127231837.ifddpn7rhwdaepiu@treble
---
 arch/x86/include/asm/static_call.h      |  7 +++-
 include/asm-generic/vmlinux.lds.h       |  5 +-
 include/linux/static_call.h             | 22 +++++++++-
 include/linux/static_call_types.h       | 27 +++++++++++-
 kernel/static_call.c                    | 55 +++++++++++++++++++++++-
 tools/include/linux/static_call_types.h | 27 +++++++++++-
 tools/objtool/check.c                   | 17 ++++++-
 7 files changed, 149 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h
index c37f119..cbb67b6 100644
--- a/arch/x86/include/asm/static_call.h
+++ b/arch/x86/include/asm/static_call.h
@@ -37,4 +37,11 @@
 #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)			\
 	__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; nop; nop; nop; nop")
 
+
+#define ARCH_ADD_TRAMP_KEY(name)					\
+	asm(".pushsection .static_call_tramp_key, \"a\"		\n"	\
+	    ".long " STATIC_CALL_TRAMP_STR(name) " - .		\n"	\
+	    ".long " STATIC_CALL_KEY_STR(name) " - .		\n"	\
+	    ".popsection					\n")
+
 #endif /* _ASM_STATIC_CALL_H */
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index b97c628..3f747de 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -393,7 +393,10 @@
 	. = ALIGN(8);							\
 	__start_static_call_sites = .;					\
 	KEEP(*(.static_call_sites))					\
-	__stop_static_call_sites = .;
+	__stop_static_call_sites = .;					\
+	__start_static_call_tramp_key = .;				\
+	KEEP(*(.static_call_tramp_key))					\
+	__stop_static_call_tramp_key = .;
 
 /*
  * Allow architectures to handle ro_after_init data on their
diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index d69dd8b..85ecc78 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -138,6 +138,12 @@ struct static_call_key {
 	};
 };
 
+/* For finding the key associated with a trampoline */
+struct static_call_tramp_key {
+	s32 tramp;
+	s32 key;
+};
+
 extern void __static_call_update(struct static_call_key *key, void *tramp, void *func);
 extern int static_call_mod_init(struct module *mod);
 extern int static_call_text_reserved(void *start, void *end);
@@ -165,11 +171,18 @@ extern long __static_call_return0(void);
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
-
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
 
+/* Leave the key unexported, so modules can't change static call targets: */
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name));				\
+	ARCH_ADD_TRAMP_KEY(name)
+#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name));			\
+	ARCH_ADD_TRAMP_KEY(name)
+
 #elif defined(CONFIG_HAVE_STATIC_CALL)
 
 static inline int static_call_init(void) { return 0; }
@@ -216,11 +229,16 @@ static inline long __static_call_return0(void)
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
-
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
 
+/* Leave the key unexported, so modules can't change static call targets: */
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
+
 #else /* Generic implementation */
 
 static inline int static_call_init(void) { return 0; }
diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 08f78b1..ae5662d 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -10,6 +10,7 @@
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
 #define STATIC_CALL_KEY_PREFIX_LEN	(sizeof(STATIC_CALL_KEY_PREFIX_STR) - 1)
 #define STATIC_CALL_KEY(name)		__PASTE(STATIC_CALL_KEY_PREFIX, name)
+#define STATIC_CALL_KEY_STR(name)	__stringify(STATIC_CALL_KEY(name))
 
 #define STATIC_CALL_TRAMP_PREFIX	__SCT__
 #define STATIC_CALL_TRAMP_PREFIX_STR	__stringify(STATIC_CALL_TRAMP_PREFIX)
@@ -39,17 +40,39 @@ struct static_call_site {
 
 #ifdef CONFIG_HAVE_STATIC_CALL
 
+#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
+
+#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
 /*
  * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
  * the symbol table so that objtool can reference it when it generates the
  * .static_call_sites section.
  */
+#define __STATIC_CALL_ADDRESSABLE(name) \
+	__ADDRESSABLE(STATIC_CALL_KEY(name))
+
 #define __static_call(name)						\
 ({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
+	__STATIC_CALL_ADDRESSABLE(name);				\
+	__raw_static_call(name);					\
 })
 
+#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#define __STATIC_CALL_ADDRESSABLE(name)
+#define __static_call(name)	__raw_static_call(name)
+
+#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#ifdef MODULE
+#define __STATIC_CALL_MOD_ADDRESSABLE(name)
+#define static_call_mod(name)	__raw_static_call(name)
+#else
+#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_mod(name)	__static_call(name)
+#endif
+
 #define static_call(name)	__static_call(name)
 
 #else
diff --git a/kernel/static_call.c b/kernel/static_call.c
index 0bc11b5..6906c6e 100644
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -12,6 +12,8 @@
 
 extern struct static_call_site __start_static_call_sites[],
 			       __stop_static_call_sites[];
+extern struct static_call_tramp_key __start_static_call_tramp_key[],
+				    __stop_static_call_tramp_key[];
 
 static bool static_call_initialized;
 
@@ -323,10 +325,59 @@ static int __static_call_mod_text_reserved(void *start, void *end)
 	return ret;
 }
 
+static unsigned long tramp_key_lookup(unsigned long addr)
+{
+	struct static_call_tramp_key *start = __start_static_call_tramp_key;
+	struct static_call_tramp_key *stop = __stop_static_call_tramp_key;
+	struct static_call_tramp_key *tramp_key;
+
+	for (tramp_key = start; tramp_key != stop; tramp_key++) {
+		unsigned long tramp;
+
+		tramp = (long)tramp_key->tramp + (long)&tramp_key->tramp;
+		if (tramp == addr)
+			return (long)tramp_key->key + (long)&tramp_key->key;
+	}
+
+	return 0;
+}
+
 static int static_call_add_module(struct module *mod)
 {
-	return __static_call_init(mod, mod->static_call_sites,
-				  mod->static_call_sites + mod->num_static_call_sites);
+	struct static_call_site *start = mod->static_call_sites;
+	struct static_call_site *stop = start + mod->num_static_call_sites;
+	struct static_call_site *site;
+
+	for (site = start; site != stop; site++) {
+		unsigned long addr = (unsigned long)static_call_key(site);
+		unsigned long key;
+
+		/*
+		 * If the key is exported, 'addr' points to the key, which
+		 * means modules are allowed to call static_call_update() on
+		 * it.
+		 *
+		 * Otherwise, the key isn't exported, and 'addr' points to the
+		 * trampoline so we need to lookup the key.
+		 *
+		 * We go through this dance to prevent crazy modules from
+		 * abusing sensitive static calls.
+		 */
+		if (!kernel_text_address(addr))
+			continue;
+
+		key = tramp_key_lookup(addr);
+		if (!key) {
+			pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
+				static_call_addr(site));
+			return -EINVAL;
+		}
+
+		site->key = (key - (long)&site->key) |
+			    (site->key & STATIC_CALL_SITE_FLAGS);
+	}
+
+	return __static_call_init(mod, start, stop);
 }
 
 static void static_call_del_module(struct module *mod)
diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h
index 08f78b1..ae5662d 100644
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -10,6 +10,7 @@
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
 #define STATIC_CALL_KEY_PREFIX_LEN	(sizeof(STATIC_CALL_KEY_PREFIX_STR) - 1)
 #define STATIC_CALL_KEY(name)		__PASTE(STATIC_CALL_KEY_PREFIX, name)
+#define STATIC_CALL_KEY_STR(name)	__stringify(STATIC_CALL_KEY(name))
 
 #define STATIC_CALL_TRAMP_PREFIX	__SCT__
 #define STATIC_CALL_TRAMP_PREFIX_STR	__stringify(STATIC_CALL_TRAMP_PREFIX)
@@ -39,17 +40,39 @@ struct static_call_site {
 
 #ifdef CONFIG_HAVE_STATIC_CALL
 
+#define __raw_static_call(name)	(&STATIC_CALL_TRAMP(name))
+
+#ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
 /*
  * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
  * the symbol table so that objtool can reference it when it generates the
  * .static_call_sites section.
  */
+#define __STATIC_CALL_ADDRESSABLE(name) \
+	__ADDRESSABLE(STATIC_CALL_KEY(name))
+
 #define __static_call(name)						\
 ({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
+	__STATIC_CALL_ADDRESSABLE(name);				\
+	__raw_static_call(name);					\
 })
 
+#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#define __STATIC_CALL_ADDRESSABLE(name)
+#define __static_call(name)	__raw_static_call(name)
+
+#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */
+
+#ifdef MODULE
+#define __STATIC_CALL_MOD_ADDRESSABLE(name)
+#define static_call_mod(name)	__raw_static_call(name)
+#else
+#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_mod(name)	__static_call(name)
+#endif
+
 #define static_call(name)	__static_call(name)
 
 #else
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 4bd3031..f2e5e5c 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -502,8 +502,21 @@ static int create_static_call_sections(struct objtool_file *file)
 
 		key_sym = find_symbol_by_name(file->elf, tmp);
 		if (!key_sym) {
-			WARN("static_call: can't find static_call_key symbol: %s", tmp);
-			return -1;
+			if (!module) {
+				WARN("static_call: can't find static_call_key symbol: %s", tmp);
+				return -1;
+			}
+
+			/*
+			 * For modules, the key might not be exported, which
+			 * means the module can make static calls but isn't
+			 * allowed to change them.
+			 *
+			 * In that case we temporarily set the key to be the
+			 * trampoline address.  This is fixed up in
+			 * static_call_add_module().
+			 */
+			key_sym = insn->call_dest;
 		}
 		free(key_name);
 

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] preempt/dynamic: Provide irqentry_exit_cond_resched() static call
  2021-01-18 14:12 ` [RFC PATCH 7/8] preempt/dynamic: Provide irqentry_exit_cond_resched() static call Frederic Weisbecker
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra (Intel)
@ 2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
  1 sibling, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra (Intel) @ 2021-02-17 13:17 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel),
	Frederic Weisbecker, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     40607ee97e4eec5655cc0f76a720bdc4c63a6434
Gitweb:        https://git.kernel.org/tip/40607ee97e4eec5655cc0f76a720bdc4c63a6434
Author:        Peter Zijlstra (Intel) <peterz@infradead.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:22 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 17 Feb 2021 14:12:42 +01:00

preempt/dynamic: Provide irqentry_exit_cond_resched() static call

Provide a static call to control IRQ preemption (called in CONFIG_PREEMPT)
so that we can override its behaviour when the preemption mode is
overridden with preempt=.

Since the default behaviour is full preemption, its call is
initialized to provide IRQ preemption when preempt= isn't passed.
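
Conceptually, with the definitions from this patch (the NULL update is
what the preempt= handler does later in the series):

	/* default target is the function itself: full IRQ preemption */
	DEFINE_STATIC_CALL(irqentry_exit_cond_resched,
			   irqentry_exit_cond_resched);

	/* preempt=none / preempt=voluntary turn the call site into a NOP */
	static_call_update(irqentry_exit_cond_resched,
			   (typeof(&irqentry_exit_cond_resched)) NULL);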

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-8-frederic@kernel.org
---
 include/linux/entry-common.h |  4 ++++
 kernel/entry/common.c        | 10 +++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index a104b29..883acef 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_ENTRYCOMMON_H
 #define __LINUX_ENTRYCOMMON_H
 
+#include <linux/static_call_types.h>
 #include <linux/tracehook.h>
 #include <linux/syscalls.h>
 #include <linux/seccomp.h>
@@ -454,6 +455,9 @@ irqentry_state_t noinstr irqentry_enter(struct pt_regs *regs);
  * Conditional reschedule with additional sanity checks.
  */
 void irqentry_exit_cond_resched(void);
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DECLARE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+#endif
 
 /**
  * irqentry_exit - Handle return from exception that used irqentry_enter()
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index f9d491b..f09cae3 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -385,6 +385,9 @@ void irqentry_exit_cond_resched(void)
 			preempt_schedule_irq();
 	}
 }
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+#endif
 
 noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
 {
@@ -411,8 +414,13 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
 		}
 
 		instrumentation_begin();
-		if (IS_ENABLED(CONFIG_PREEMPTION))
+		if (IS_ENABLED(CONFIG_PREEMPTION)) {
+#ifdef CONFIG_PREEMPT_DYNAMIC
+			static_call(irqentry_exit_cond_resched)();
+#else
 			irqentry_exit_cond_resched();
+#endif
+		}
 		/* Covers both tracing and lockdep */
 		trace_hardirqs_on();
 		instrumentation_end();

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2021-01-18 14:12 ` [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() " Frederic Weisbecker
                     ` (2 preceding siblings ...)
  2021-02-08 12:00   ` [tip: sched/core] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls tip-bot2 for Peter Zijlstra (Intel)
@ 2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
  3 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra (Intel) @ 2021-02-17 13:17 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel),
	Frederic Weisbecker, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     2c9a98d3bc808717ab63ad928a2b568967775388
Gitweb:        https://git.kernel.org/tip/2c9a98d3bc808717ab63ad928a2b568967775388
Author:        Peter Zijlstra (Intel) <peterz@infradead.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:21 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 17 Feb 2021 14:12:42 +01:00

preempt/dynamic: Provide preempt_schedule[_notrace]() static calls

Provide static calls to control preempt_schedule[_notrace]()
(called in CONFIG_PREEMPT) so that we can override their behaviour when
the preemption mode is overridden with preempt=.

Since the default behaviour is full preemption, both their calls are
initialized to the arch provided wrapper, if any.

[fweisbec: only define static calls when PREEMPT_DYNAMIC, make it less
           dependent on x86 with __preempt_schedule_func]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-7-frederic@kernel.org
---
 arch/x86/include/asm/preempt.h | 34 +++++++++++++++++++++++++--------
 kernel/sched/core.c            | 12 ++++++++++++-
 2 files changed, 38 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 69485ca..9b12dce 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -5,6 +5,7 @@
 #include <asm/rmwcc.h>
 #include <asm/percpu.h>
 #include <linux/thread_info.h>
+#include <linux/static_call_types.h>
 
 DECLARE_PER_CPU(int, __preempt_count);
 
@@ -103,16 +104,33 @@ static __always_inline bool should_resched(int preempt_offset)
 }
 
 #ifdef CONFIG_PREEMPTION
-  extern asmlinkage void preempt_schedule_thunk(void);
-# define __preempt_schedule() \
-	asm volatile ("call preempt_schedule_thunk" : ASM_CALL_CONSTRAINT)
 
-  extern asmlinkage void preempt_schedule(void);
-  extern asmlinkage void preempt_schedule_notrace_thunk(void);
-# define __preempt_schedule_notrace() \
-	asm volatile ("call preempt_schedule_notrace_thunk" : ASM_CALL_CONSTRAINT)
+extern asmlinkage void preempt_schedule(void);
+extern asmlinkage void preempt_schedule_thunk(void);
+
+#define __preempt_schedule_func preempt_schedule_thunk
+
+DECLARE_STATIC_CALL(preempt_schedule, __preempt_schedule_func);
+
+#define __preempt_schedule() \
+do { \
+	__ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule)); \
+	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule) : ASM_CALL_CONSTRAINT); \
+} while (0)
+
+extern asmlinkage void preempt_schedule_notrace(void);
+extern asmlinkage void preempt_schedule_notrace_thunk(void);
+
+#define __preempt_schedule_notrace_func preempt_schedule_notrace_thunk
+
+DECLARE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+
+#define __preempt_schedule_notrace() \
+do { \
+	__ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule_notrace)); \
+	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule_notrace) : ASM_CALL_CONSTRAINT); \
+} while (0)
 
-  extern asmlinkage void preempt_schedule_notrace(void);
 #endif
 
 #endif /* __ASM_PREEMPT_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f7c8fd8..880611c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5265,6 +5265,12 @@ asmlinkage __visible void __sched notrace preempt_schedule(void)
 NOKPROBE_SYMBOL(preempt_schedule);
 EXPORT_SYMBOL(preempt_schedule);
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func);
+EXPORT_STATIC_CALL(preempt_schedule);
+#endif
+
+
 /**
  * preempt_schedule_notrace - preempt_schedule called by tracing
  *
@@ -5317,6 +5323,12 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
 }
 EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+EXPORT_STATIC_CALL(preempt_schedule_notrace);
+#endif
+
+
 #endif /* CONFIG_PREEMPTION */
 
 /*

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] preempt/dynamic: Support dynamic preempt with preempt= boot option
  2021-01-18 14:12 ` [RFC PATCH 8/8] preempt/dynamic: Support dynamic preempt with preempt= boot option Frederic Weisbecker
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra (Intel)
  2021-02-09 15:45   ` tip-bot2 for Peter Zijlstra (Intel)
@ 2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
  2 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra (Intel) @ 2021-02-17 13:17 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel),
	Frederic Weisbecker, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     826bfeb37bb4302ee6042f330c4c0c757152bdb8
Gitweb:        https://git.kernel.org/tip/826bfeb37bb4302ee6042f330c4c0c757152bdb8
Author:        Peter Zijlstra (Intel) <peterz@infradead.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:23 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 17 Feb 2021 14:12:42 +01:00

preempt/dynamic: Support dynamic preempt with preempt= boot option

Support the preempt= boot option and patch the static call sites
accordingly.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-9-frederic@kernel.org
---
 kernel/sched/core.c | 68 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 67 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 880611c..0c06717 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5328,9 +5328,75 @@ DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func);
 EXPORT_STATIC_CALL(preempt_schedule_notrace);
 #endif
 
-
 #endif /* CONFIG_PREEMPTION */
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+
+#include <linux/entry-common.h>
+
+/*
+ * SC:cond_resched
+ * SC:might_resched
+ * SC:preempt_schedule
+ * SC:preempt_schedule_notrace
+ * SC:irqentry_exit_cond_resched
+ *
+ *
+ * NONE:
+ *   cond_resched               <- __cond_resched
+ *   might_resched              <- RET0
+ *   preempt_schedule           <- NOP
+ *   preempt_schedule_notrace   <- NOP
+ *   irqentry_exit_cond_resched <- NOP
+ *
+ * VOLUNTARY:
+ *   cond_resched               <- __cond_resched
+ *   might_resched              <- __cond_resched
+ *   preempt_schedule           <- NOP
+ *   preempt_schedule_notrace   <- NOP
+ *   irqentry_exit_cond_resched <- NOP
+ *
+ * FULL:
+ *   cond_resched               <- RET0
+ *   might_resched              <- RET0
+ *   preempt_schedule           <- preempt_schedule
+ *   preempt_schedule_notrace   <- preempt_schedule_notrace
+ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
+ */
+static int __init setup_preempt_mode(char *str)
+{
+	if (!strcmp(str, "none")) {
+		static_call_update(cond_resched, __cond_resched);
+		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
+		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
+		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
+		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else if (!strcmp(str, "voluntary")) {
+		static_call_update(cond_resched, __cond_resched);
+		static_call_update(might_resched, __cond_resched);
+		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
+		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
+		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else if (!strcmp(str, "full")) {
+		static_call_update(cond_resched, (typeof(&__cond_resched)) __static_call_return0);
+		static_call_update(might_resched, (typeof(&__cond_resched)) __static_call_return0);
+		static_call_update(preempt_schedule, __preempt_schedule_func);
+		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func);
+		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else {
+		pr_warn("Dynamic Preempt: Unsupported preempt mode %s, default to full\n", str);
+		return 1;
+	}
+	return 0;
+}
+__setup("preempt=", setup_preempt_mode);
+
+#endif /* CONFIG_PREEMPT_DYNAMIC */
+
+
 /*
  * This is the entry point to schedule() from kernel preemption
  * off of irq context.

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] preempt/dynamic: Provide cond_resched() and might_resched() static calls
  2021-01-18 14:12 ` [RFC PATCH 5/8] preempt/dynamic: Provide cond_resched() and might_resched() static calls Frederic Weisbecker
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra (Intel)
@ 2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
  1 sibling, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra (Intel) @ 2021-02-17 13:17 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel),
	Frederic Weisbecker, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     b965f1ddb47daa5b8b2e2bc9c921431236830367
Gitweb:        https://git.kernel.org/tip/b965f1ddb47daa5b8b2e2bc9c921431236830367
Author:        Peter Zijlstra (Intel) <peterz@infradead.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:20 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 17 Feb 2021 14:12:42 +01:00

preempt/dynamic: Provide cond_resched() and might_resched() static calls

Provide static calls to control cond_resched() (called in !CONFIG_PREEMPT)
and might_resched() (called in CONFIG_PREEMPT_VOLUNTARY) so that we
can override their behaviour when the preemption mode is overridden
with preempt=.

Since the default behaviour is full preemption, both their calls are
ignored when preempt= isn't passed.
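
The defaults below come straight from the sched/core hunk; the update
calls are what the (later) preempt= handler does for the voluntary
flavour:

	/* default (full preemption): both calls start as RET0 stubs */
	DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
	DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);

	/* preempt=voluntary rewires them to the real implementation */
	static_call_update(cond_resched, __cond_resched);
	static_call_update(might_resched, __cond_resched);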

  [fweisbec: branch might_resched() directly to __cond_resched(), only
             define static calls when PREEMPT_DYNAMIC]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-6-frederic@kernel.org
---
 include/linux/kernel.h | 23 +++++++++++++++++++----
 include/linux/sched.h  | 27 ++++++++++++++++++++++++---
 kernel/sched/core.c    | 16 +++++++++++++---
 3 files changed, 56 insertions(+), 10 deletions(-)

diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index f7902d8..cfd3d34 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -15,7 +15,7 @@
 #include <linux/typecheck.h>
 #include <linux/printk.h>
 #include <linux/build_bug.h>
-
+#include <linux/static_call_types.h>
 #include <asm/byteorder.h>
 
 #include <uapi/linux/kernel.h>
@@ -81,11 +81,26 @@ struct pt_regs;
 struct user;
 
 #ifdef CONFIG_PREEMPT_VOLUNTARY
-extern int _cond_resched(void);
-# define might_resched() _cond_resched()
+
+extern int __cond_resched(void);
+# define might_resched() __cond_resched()
+
+#elif defined(CONFIG_PREEMPT_DYNAMIC)
+
+extern int __cond_resched(void);
+
+DECLARE_STATIC_CALL(might_resched, __cond_resched);
+
+static __always_inline void might_resched(void)
+{
+	static_call(might_resched)();
+}
+
 #else
+
 # define might_resched() do { } while (0)
-#endif
+
+#endif /* CONFIG_PREEMPT_* */
 
 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
 extern void ___might_sleep(const char *file, int line, int preempt_offset);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index e115222..2f35594 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1871,11 +1871,32 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
  * value indicates whether a reschedule was done in fact.
  * cond_resched_lock() will drop the spinlock before scheduling,
  */
-#ifndef CONFIG_PREEMPTION
-extern int _cond_resched(void);
+#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
+extern int __cond_resched(void);
+
+#ifdef CONFIG_PREEMPT_DYNAMIC
+
+DECLARE_STATIC_CALL(cond_resched, __cond_resched);
+
+static __always_inline int _cond_resched(void)
+{
+	return static_call(cond_resched)();
+}
+
 #else
+
+static inline int _cond_resched(void)
+{
+	return __cond_resched();
+}
+
+#endif /* CONFIG_PREEMPT_DYNAMIC */
+
+#else
+
 static inline int _cond_resched(void) { return 0; }
-#endif
+
+#endif /* !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC) */
 
 #define cond_resched() ({			\
 	___might_sleep(__FILE__, __LINE__, 0);	\
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4afbdd2..f7c8fd8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6785,17 +6785,27 @@ SYSCALL_DEFINE0(sched_yield)
 	return 0;
 }
 
-#ifndef CONFIG_PREEMPTION
-int __sched _cond_resched(void)
+#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
+int __sched __cond_resched(void)
 {
 	if (should_resched(0)) {
 		preempt_schedule_common();
 		return 1;
 	}
+#ifndef CONFIG_PREEMPT_RCU
 	rcu_all_qs();
+#endif
 	return 0;
 }
-EXPORT_SYMBOL(_cond_resched);
+EXPORT_SYMBOL(__cond_resched);
+#endif
+
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
+EXPORT_STATIC_CALL(cond_resched);
+
+DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
+EXPORT_STATIC_CALL(might_resched);
 #endif
 
 /*

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] preempt: Introduce CONFIG_PREEMPT_DYNAMIC
  2021-01-18 14:12 ` [RFC PATCH 4/8] preempt: Introduce CONFIG_PREEMPT_DYNAMIC Frederic Weisbecker
  2021-01-22 16:53   ` Peter Zijlstra
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Michal Hocko
@ 2021-02-17 13:17   ` tip-bot2 for Michal Hocko
  2 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Michal Hocko @ 2021-02-17 13:17 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra, Michal Hocko, Frederic Weisbecker, Ingo Molnar,
	x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     6ef869e0647439af0fc28dde162d33320d4e1dd7
Gitweb:        https://git.kernel.org/tip/6ef869e0647439af0fc28dde162d33320d4e1dd7
Author:        Michal Hocko <mhocko@kernel.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:19 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 17 Feb 2021 14:12:24 +01:00

preempt: Introduce CONFIG_PREEMPT_DYNAMIC

Preemption mode selection is currently hardcoded via Kconfig choices.
Introduce a dedicated option to tune the preemption flavour at boot time.

This will only be available on architectures that support static calls
efficiently, so that the feature doesn't come at the cost of additional
overhead that might be prohibitive or undesirable.

CONFIG_PREEMPT_DYNAMIC is automatically selected by CONFIG_PREEMPT if
the architecture provides the necessary support (CONFIG_STATIC_CALL_INLINE,
CONFIG_GENERIC_ENTRY, and provides __preempt_schedule_function() /
__preempt_schedule_notrace_function()).
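
An architecture opts in roughly as the x86 hunk below does:

	config X86
		...
		select HAVE_STATIC_CALL
		select HAVE_STATIC_CALL_INLINE		if HAVE_STACK_VALIDATION
		select HAVE_PREEMPT_DYNAMIC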

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
[peterz: relax requirement to HAVE_STATIC_CALL]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-5-frederic@kernel.org
---
 Documentation/admin-guide/kernel-parameters.txt |  7 ++++++-
 arch/Kconfig                                    |  9 ++++++++-
 arch/x86/Kconfig                                |  1 +-
 kernel/Kconfig.preempt                          | 19 ++++++++++++++++-
 4 files changed, 36 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a10b545..78ab294 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3916,6 +3916,13 @@
 			Format: {"off"}
 			Disable Hardware Transactional Memory
 
+	preempt=	[KNL]
+			Select preemption mode if you have CONFIG_PREEMPT_DYNAMIC
+			none - Limited to cond_resched() calls
+			voluntary - Limited to cond_resched() and might_sleep() calls
+			full - Any section that isn't explicitly preempt disabled
+			       can be preempted anytime.
+
 	print-fatal-signals=
 			[KNL] debug: print fatal signals
 
diff --git a/arch/Kconfig b/arch/Kconfig
index 24862d1..1245079 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1090,6 +1090,15 @@ config HAVE_STATIC_CALL_INLINE
 	bool
 	depends on HAVE_STATIC_CALL
 
+config HAVE_PREEMPT_DYNAMIC
+	bool
+	depends on HAVE_STATIC_CALL
+	depends on GENERIC_ENTRY
+	help
+	   Select this if the architecture supports boot time preempt setting
+	   on top of static calls. It is strongly advised to support inline
+	   static calls to avoid any overhead.
+
 config ARCH_WANT_LD_ORPHAN_WARN
 	bool
 	help
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 21f8511..d3338a8 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -224,6 +224,7 @@ config X86
 	select HAVE_STACK_VALIDATION		if X86_64
 	select HAVE_STATIC_CALL
 	select HAVE_STATIC_CALL_INLINE		if HAVE_STACK_VALIDATION
+	select HAVE_PREEMPT_DYNAMIC
 	select HAVE_RSEQ
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UNSTABLE_SCHED_CLOCK
diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
index bf82259..4160173 100644
--- a/kernel/Kconfig.preempt
+++ b/kernel/Kconfig.preempt
@@ -40,6 +40,7 @@ config PREEMPT
 	depends on !ARCH_NO_PREEMPT
 	select PREEMPTION
 	select UNINLINE_SPIN_UNLOCK if !ARCH_INLINE_SPIN_UNLOCK
+	select PREEMPT_DYNAMIC if HAVE_PREEMPT_DYNAMIC
 	help
 	  This option reduces the latency of the kernel by making
 	  all kernel code (that is not executing in a critical section)
@@ -80,3 +81,21 @@ config PREEMPT_COUNT
 config PREEMPTION
        bool
        select PREEMPT_COUNT
+
+config PREEMPT_DYNAMIC
+	bool
+	help
+	  This option allows defining the preemption model on the kernel
+	  command line parameter, thus overriding the default preemption
+	  model chosen at compile time.
+
+	  The feature is primarily interesting for Linux distributions which
+	  provide a pre-built kernel binary to reduce the number of kernel
+	  flavors they offer while still covering different use cases.
+
+	  The runtime overhead is negligible with HAVE_STATIC_CALL_INLINE enabled
+	  but if runtime patching is not available for the specific architecture
+	  then the potential overhead should be considered.
+
+	  Interesting if you want the same pre-built kernel to be used for
+	  both Server and Desktop workloads.

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] static_call: Pull some static_call declarations to the type headers
  2021-01-18 14:12 ` [RFC PATCH 3/8] static_call: Pull some static_call declarations to the type headers Frederic Weisbecker
                     ` (3 preceding siblings ...)
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
@ 2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra
  4 siblings, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra @ 2021-02-17 13:17 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel),
	Frederic Weisbecker, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     880cfed3a012d7863f42251791cea7fe78c39390
Gitweb:        https://git.kernel.org/tip/880cfed3a012d7863f42251791cea7fe78c39390
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:18 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 17 Feb 2021 14:08:35 +01:00

static_call: Pull some static_call declarations to the type headers

Some static call declarations are going to be needed in low-level header
files. Move the necessary material to the dedicated static call types
header to avoid inclusion dependency hell.
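
For instance, a low-level header can now declare and invoke a static
call with only the lightweight types header (a sketch mirroring the
later cond_resched() patch in this series):

	#include <linux/static_call_types.h>	/* not static_call.h */

	DECLARE_STATIC_CALL(might_resched, __cond_resched);

	static __always_inline void might_resched(void)
	{
		static_call(might_resched)();
	}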

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-4-frederic@kernel.org
---
 include/linux/static_call.h             | 21 +-------------------
 include/linux/static_call_types.h       | 27 ++++++++++++++++++++++++-
 tools/include/linux/static_call_types.h | 27 ++++++++++++++++++++++++-
 3 files changed, 54 insertions(+), 21 deletions(-)

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 695da4c..a2c0645 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -107,26 +107,10 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool
 
 #define STATIC_CALL_TRAMP_ADDR(name) &STATIC_CALL_TRAMP(name)
 
-/*
- * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
- * the symbol table so that objtool can reference it when it generates the
- * .static_call_sites section.
- */
-#define __static_call(name)						\
-({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
-})
-
 #else
 #define STATIC_CALL_TRAMP_ADDR(name) NULL
 #endif
 
-
-#define DECLARE_STATIC_CALL(name, func)					\
-	extern struct static_call_key STATIC_CALL_KEY(name);		\
-	extern typeof(func) STATIC_CALL_TRAMP(name);
-
 #define static_call_update(name, func)					\
 ({									\
 	BUILD_BUG_ON(!__same_type(*(func), STATIC_CALL_TRAMP(name)));	\
@@ -174,7 +158,6 @@ extern int static_call_text_reserved(void *start, void *end);
 	};								\
 	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
 
-#define static_call(name)	__static_call(name)
 #define static_call_cond(name)	(void)__static_call(name)
 
 #define EXPORT_STATIC_CALL(name)					\
@@ -207,7 +190,6 @@ struct static_call_key {
 	};								\
 	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
 
-#define static_call(name)	__static_call(name)
 #define static_call_cond(name)	(void)__static_call(name)
 
 static inline
@@ -252,9 +234,6 @@ struct static_call_key {
 		.func = NULL,						\
 	}
 
-#define static_call(name)						\
-	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
-
 static inline void __static_call_nop(void) { }
 
 /*
diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 89135bb..08f78b1 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -4,6 +4,7 @@
 
 #include <linux/types.h>
 #include <linux/stringify.h>
+#include <linux/compiler.h>
 
 #define STATIC_CALL_KEY_PREFIX		__SCK__
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
@@ -32,4 +33,30 @@ struct static_call_site {
 	s32 key;
 };
 
+#define DECLARE_STATIC_CALL(name, func)					\
+	extern struct static_call_key STATIC_CALL_KEY(name);		\
+	extern typeof(func) STATIC_CALL_TRAMP(name);
+
+#ifdef CONFIG_HAVE_STATIC_CALL
+
+/*
+ * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
+ * the symbol table so that objtool can reference it when it generates the
+ * .static_call_sites section.
+ */
+#define __static_call(name)						\
+({									\
+	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
+	&STATIC_CALL_TRAMP(name);					\
+})
+
+#define static_call(name)	__static_call(name)
+
+#else
+
+#define static_call(name)						\
+	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
+
+#endif /* CONFIG_HAVE_STATIC_CALL */
+
 #endif /* _STATIC_CALL_TYPES_H */
diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h
index 89135bb..08f78b1 100644
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -4,6 +4,7 @@
 
 #include <linux/types.h>
 #include <linux/stringify.h>
+#include <linux/compiler.h>
 
 #define STATIC_CALL_KEY_PREFIX		__SCK__
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
@@ -32,4 +33,30 @@ struct static_call_site {
 	s32 key;
 };
 
+#define DECLARE_STATIC_CALL(name, func)					\
+	extern struct static_call_key STATIC_CALL_KEY(name);		\
+	extern typeof(func) STATIC_CALL_TRAMP(name);
+
+#ifdef CONFIG_HAVE_STATIC_CALL
+
+/*
+ * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
+ * the symbol table so that objtool can reference it when it generates the
+ * .static_call_sites section.
+ */
+#define __static_call(name)						\
+({									\
+	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
+	&STATIC_CALL_TRAMP(name);					\
+})
+
+#define static_call(name)	__static_call(name)
+
+#else
+
+#define static_call(name)						\
+	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
+
+#endif /* CONFIG_HAVE_STATIC_CALL */
+
 #endif /* _STATIC_CALL_TYPES_H */

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] static_call: Provide DEFINE_STATIC_CALL_RET0()
  2021-01-18 14:12 ` [RFC PATCH 2/8] static_call: Provide DEFINE_STATIC_CALL_RET0() Frederic Weisbecker
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Frederic Weisbecker
@ 2021-02-17 13:17   ` tip-bot2 for Frederic Weisbecker
  1 sibling, 0 replies; 61+ messages in thread
From: tip-bot2 for Frederic Weisbecker @ 2021-02-17 13:17 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Frederic Weisbecker, Peter Zijlstra (Intel),
	Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     29fd01944b7273bb630c649a2104b7f9e4ef3fa6
Gitweb:        https://git.kernel.org/tip/29fd01944b7273bb630c649a2104b7f9e4ef3fa6
Author:        Frederic Weisbecker <frederic@kernel.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:17 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 17 Feb 2021 14:08:51 +01:00

static_call: Provide DEFINE_STATIC_CALL_RET0()

DECLARE_STATIC_CALL() must pass the original function targeted for a
given static call. But DEFINE_STATIC_CALL() may want to initialize the
call as disabled. In this case we can't pass NULL (for functions without
return value) or __static_call_return0 (for functions returning a value)
directly to DEFINE_STATIC_CALL(), as that may trigger a static call
redeclaration with a different function prototype. Type casts can't work
around that either, as they don't get along with typeof().

The proper way to do that for functions that don't return a value is
to use DEFINE_STATIC_CALL_NULL(). But functions returning an actual value
don't have an equivalent yet.

Provide DEFINE_STATIC_CALL_RET0() to solve this situation.
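
Usage then looks like the sched/core change later in this series:

	/*
	 * Declared with __cond_resched()'s prototype, but initially
	 * "off": the starting target is __static_call_return0().
	 */
	DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
	EXPORT_STATIC_CALL(cond_resched);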

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-3-frederic@kernel.org
---
 include/linux/static_call.h | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index bd6735d..d69dd8b 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -144,13 +144,13 @@ extern int static_call_text_reserved(void *start, void *end);
 
 extern long __static_call_return0(void);
 
-#define DEFINE_STATIC_CALL(name, _func)					\
+#define __DEFINE_STATIC_CALL(name, _func, _func_init)			\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = _func,						\
+		.func = _func_init,					\
 		.type = 1,						\
 	};								\
-	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func)
+	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func_init)
 
 #define DEFINE_STATIC_CALL_NULL(name, _func)				\
 	DECLARE_STATIC_CALL(name, _func);				\
@@ -178,12 +178,12 @@ struct static_call_key {
 	void *func;
 };
 
-#define DEFINE_STATIC_CALL(name, _func)					\
+#define __DEFINE_STATIC_CALL(name, _func, _func_init)			\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = _func,						\
+		.func = _func_init,					\
 	};								\
-	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func)
+	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func_init)
 
 #define DEFINE_STATIC_CALL_NULL(name, _func)				\
 	DECLARE_STATIC_CALL(name, _func);				\
@@ -234,10 +234,10 @@ static inline long __static_call_return0(void)
 	return 0;
 }
 
-#define DEFINE_STATIC_CALL(name, _func)					\
+#define __DEFINE_STATIC_CALL(name, _func, _func_init)			\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = _func,						\
+		.func = _func_init,					\
 	}
 
 #define DEFINE_STATIC_CALL_NULL(name, _func)				\
@@ -286,4 +286,10 @@ static inline int static_call_text_reserved(void *start, void *end)
 
 #endif /* CONFIG_HAVE_STATIC_CALL */
 
+#define DEFINE_STATIC_CALL(name, _func)					\
+	__DEFINE_STATIC_CALL(name, _func, _func)
+
+#define DEFINE_STATIC_CALL_RET0(name, _func)				\
+	__DEFINE_STATIC_CALL(name, _func, __static_call_return0)
+
 #endif /* _LINUX_STATIC_CALL_H */

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [tip: sched/core] static_call/x86: Add __static_call_return0()
  2021-01-18 14:12 ` [RFC PATCH 1/8] static_call/x86: Add __static_call_return0() Frederic Weisbecker
  2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
@ 2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra
  1 sibling, 0 replies; 61+ messages in thread
From: tip-bot2 for Peter Zijlstra @ 2021-02-17 13:17 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Peter Zijlstra (Intel),
	Frederic Weisbecker, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     3f2a8fc4b15de18644e8a80a09edda168676e22c
Gitweb:        https://git.kernel.org/tip/3f2a8fc4b15de18644e8a80a09edda168676e22c
Author:        Peter Zijlstra <peterz@infradead.org>
AuthorDate:    Mon, 18 Jan 2021 15:12:16 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Wed, 17 Feb 2021 14:08:43 +01:00

static_call/x86: Add __static_call_return0()

Provide a stub function that returns 0 and wire up the static call site
patching to replace the CALL with a single 5-byte instruction that
clears %RAX, the return value register.

The function can be cast to any function pointer type that has a
single %RAX return (including pointers). Also provide a version that
returns an int for convenience. We are clearing the entire %RAX register
in any case, whether the return value is 32 or 64 bits, since %RAX is
always a scratch register anyway.
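
A userspace sketch of why the cast is safe, using only the generic
!CONFIG_HAVE_STATIC_CALL fallback (an indirect call through key.func);
my_func is a hypothetical target:

	#include <stdio.h>

	struct static_call_key { void *func; };

	static long __static_call_return0(void) { return 0; }
	static int my_func(void) { return 42; }

	static struct static_call_key key = { .func = (void *)my_func };

	int main(void)
	{
		int (*call)(void) = (int (*)(void))key.func;
		printf("%d\n", call());	/* 42 */

		/* "disable" the call: any single-%RAX-return prototype
		 * may call the stub, since it clears all of %RAX */
		key.func = (void *)__static_call_return0;
		call = (int (*)(void))key.func;
		printf("%d\n", call());	/* 0 */
		return 0;
	}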

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-2-frederic@kernel.org
---
 arch/x86/kernel/static_call.c | 17 +++++++++++++++--
 include/linux/static_call.h   | 12 ++++++++++++
 kernel/static_call.c          |  5 +++++
 3 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
index ca9a380..9442c41 100644
--- a/arch/x86/kernel/static_call.c
+++ b/arch/x86/kernel/static_call.c
@@ -11,14 +11,26 @@ enum insn_type {
 	RET = 3,  /* tramp / site cond-tail-call */
 };
 
+/*
+ * data16 data16 xorq %rax, %rax - a single 5 byte instruction that clears %rax
+ * The REX.W cancels the effect of any data16.
+ */
+static const u8 xor5rax[] = { 0x66, 0x66, 0x48, 0x31, 0xc0 };
+
 static void __ref __static_call_transform(void *insn, enum insn_type type, void *func)
 {
+	const void *emulate = NULL;
 	int size = CALL_INSN_SIZE;
 	const void *code;
 
 	switch (type) {
 	case CALL:
 		code = text_gen_insn(CALL_INSN_OPCODE, insn, func);
+		if (func == &__static_call_return0) {
+			emulate = code;
+			code = &xor5rax;
+		}
+
 		break;
 
 	case NOP:
@@ -41,7 +53,7 @@ static void __ref __static_call_transform(void *insn, enum insn_type type, void 
 	if (unlikely(system_state == SYSTEM_BOOTING))
 		return text_poke_early(insn, code, size);
 
-	text_poke_bp(insn, code, size, NULL);
+	text_poke_bp(insn, code, size, emulate);
 }
 
 static void __static_call_validate(void *insn, bool tail)
@@ -54,7 +66,8 @@ static void __static_call_validate(void *insn, bool tail)
 			return;
 	} else {
 		if (opcode == CALL_INSN_OPCODE ||
-		    !memcmp(insn, ideal_nops[NOP_ATOMIC5], 5))
+		    !memcmp(insn, ideal_nops[NOP_ATOMIC5], 5) ||
+		    !memcmp(insn, xor5rax, 5))
 			return;
 	}
 
diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index a2c0645..bd6735d 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -142,6 +142,8 @@ extern void __static_call_update(struct static_call_key *key, void *tramp, void 
 extern int static_call_mod_init(struct module *mod);
 extern int static_call_text_reserved(void *start, void *end);
 
+extern long __static_call_return0(void);
+
 #define DEFINE_STATIC_CALL(name, _func)					\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
@@ -206,6 +208,11 @@ static inline int static_call_text_reserved(void *start, void *end)
 	return 0;
 }
 
+static inline long __static_call_return0(void)
+{
+	return 0;
+}
+
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
 	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
@@ -222,6 +229,11 @@ struct static_call_key {
 	void *func;
 };
 
+static inline long __static_call_return0(void)
+{
+	return 0;
+}
+
 #define DEFINE_STATIC_CALL(name, _func)					\
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
diff --git a/kernel/static_call.c b/kernel/static_call.c
index 84565c2..0bc11b5 100644
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -438,6 +438,11 @@ int __init static_call_init(void)
 }
 early_initcall(static_call_init);
 
+long __static_call_return0(void)
+{
+	return 0;
+}
+
 #ifdef CONFIG_STATIC_CALL_SELFTEST
 
 static int func_a(int x)

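Editorial note (a sketch, not text from the patch): the new fourth
argument to text_poke_bp() names the instruction to emulate if a CPU
hits the transient int3 breakpoint while the call site is being
rewritten. The int3 machinery can emulate control-flow instructions
such as CALL, but not the xor, so the freshly generated CALL is
handed over instead:

	/* annotated excerpt of the CALL case above */
	code = text_gen_insn(CALL_INSN_OPCODE, insn, func);
	if (func == &__static_call_return0) {
		emulate = code;    /* racing CPUs really CALL the stub,   */
		code = &xor5rax;   /* which returns the same 0 as the xor */
	}
	/* ... */
	text_poke_bp(insn, code, size, emulate);

Either way the caller observes %rax == 0, so the transition is
invisible to concurrent callers.
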
^ permalink raw reply related	[flat|nested] 61+ messages in thread

end of thread, other threads: [~2021-02-17 13:28 UTC | newest]

Thread overview: 61+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-18 14:12 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Frederic Weisbecker
2021-01-18 14:12 ` [RFC PATCH 1/8] static_call/x86: Add __static_call_return0() Frederic Weisbecker
2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra
2021-01-18 14:12 ` [RFC PATCH 2/8] static_call: Provide DEFINE_STATIC_CALL_RET0() Frederic Weisbecker
2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Frederic Weisbecker
2021-02-17 13:17   ` tip-bot2 for Frederic Weisbecker
2021-01-18 14:12 ` [RFC PATCH 3/8] static_call: Pull some static_call declarations to the type headers Frederic Weisbecker
2021-01-18 17:06   ` kernel test robot
2021-01-19 10:26   ` kernel test robot
2021-01-19 10:46   ` Jürgen Groß
2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra
2021-01-18 14:12 ` [RFC PATCH 4/8] preempt: Introduce CONFIG_PREEMPT_DYNAMIC Frederic Weisbecker
2021-01-22 16:53   ` Peter Zijlstra
2021-01-28 12:17     ` Frederic Weisbecker
2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Michal Hocko
2021-02-17 13:17   ` tip-bot2 for Michal Hocko
2021-01-18 14:12 ` [RFC PATCH 5/8] preempt/dynamic: Provide cond_resched() and might_resched() static calls Frederic Weisbecker
2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra (Intel)
2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
2021-01-18 14:12 ` [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() " Frederic Weisbecker
2021-01-21 21:58   ` Peter Zijlstra
2021-01-21 22:25     ` Peter Zijlstra
2021-01-22 16:52   ` Peter Zijlstra
2021-01-22 16:57     ` Ard Biesheuvel
2021-01-22 17:08       ` Peter Zijlstra
2021-02-08 12:00         ` [tip: sched/core] sched: Add /debug/sched_preempt tip-bot2 for Peter Zijlstra
2021-02-09 15:45         ` tip-bot2 for Peter Zijlstra
2021-02-17 13:17         ` tip-bot2 for Peter Zijlstra
2021-01-25 23:40     ` [RFC PATCH 6/8] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls Josh Poimboeuf
2021-01-26  9:24       ` Peter Zijlstra
2021-01-26 23:57     ` Josh Poimboeuf
2021-01-27  9:13       ` Peter Zijlstra
2021-01-27 11:27         ` Peter Zijlstra
2021-01-27 15:59           ` Josh Poimboeuf
2021-01-27 16:19             ` Peter Zijlstra
2021-01-27 16:33               ` Josh Poimboeuf
2021-01-27 18:44                 ` Peter Zijlstra
2021-01-27 19:00                   ` Josh Poimboeuf
2021-01-27 19:02                     ` Josh Poimboeuf
2021-01-27 23:18                       ` Josh Poimboeuf
2021-02-03 14:04                         ` Peter Zijlstra
2021-02-05 15:30                           ` Peter Zijlstra
2021-02-06  2:31                             ` Josh Poimboeuf
2021-02-06  9:03                               ` Peter Zijlstra
2021-02-05 15:22                         ` Peter Zijlstra
2021-02-08 12:00                         ` [tip: sched/core] static_call: Allow module use without exposing static_call_key tip-bot2 for Josh Poimboeuf
2021-02-09 15:45                         ` tip-bot2 for Josh Poimboeuf
2021-02-17 13:17                         ` tip-bot2 for Josh Poimboeuf
2021-02-08 12:00   ` [tip: sched/core] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls tip-bot2 for Peter Zijlstra (Intel)
2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
2021-01-18 14:12 ` [RFC PATCH 7/8] preempt/dynamic: Provide irqentry_exit_cond_resched() static call Frederic Weisbecker
2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra (Intel)
2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
2021-01-18 14:12 ` [RFC PATCH 8/8] preempt/dynamic: Support dynamic preempt with preempt= boot option Frederic Weisbecker
2021-02-08 12:00   ` [tip: sched/core] " tip-bot2 for Peter Zijlstra (Intel)
2021-02-09 15:45   ` tip-bot2 for Peter Zijlstra (Intel)
2021-02-17 13:17   ` tip-bot2 for Peter Zijlstra (Intel)
2021-01-21 21:22 ` [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v4 Peter Zijlstra
2021-01-22 15:02 ` Paul E. McKenney
