linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v3
@ 2020-11-10  0:56 Frederic Weisbecker
  2020-11-10  0:56 ` [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0() Frederic Weisbecker
                   ` (6 more replies)
  0 siblings, 7 replies; 19+ messages in thread
From: Frederic Weisbecker @ 2020-11-10  0:56 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Frederic Weisbecker, Mel Gorman, Michal Hocko,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

This is a reworked version of what came out of the debate between Michal
Hocko and Peter Zijlstra about tuning the preemption mode through a
kernel boot parameter; see v2 in:

https://lore.kernel.org/lkml/20201009122926.29962-1-mhocko@kernel.org/

I mostly fetched the raw diff from Peter's proof of concept using
static calls, plus some cherry-picking here and there and some rework
from my end. The result is still not complete: I still need to handle
__cond_resched_lock() and other CONFIG_PREEMPT specifics, and to pick
up some other cleanup patches that were in Michal's series.

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
	preempt/dynamic

HEAD: 764be94f20534c96e6f5a16922ad81c0a3bcd868

Thanks,
	Frederic
---

Peter Zijlstra (Intel) (4):
      preempt/dynamic: Provide cond_resched() and might_resched() static calls
      preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
      preempt/dynamic: Provide irqentry_exit_cond_resched() static call
      preempt/dynamic: Support dynamic preempt with preempt= boot option

Peter Zijlstra (2):
      static_call/x86: Add __static_call_returnl0()
      static_call: Pull some static_call declarations to the type headers

Michal Hocko (1):
      preempt: Introduce CONFIG_PREEMPT_DYNAMIC


 Documentation/admin-guide/kernel-parameters.txt |  7 ++
 arch/Kconfig                                    |  9 +++
 arch/x86/Kconfig                                |  1 +
 arch/x86/include/asm/preempt.h                  | 34 ++++++---
 arch/x86/include/asm/text-patching.h            | 26 ++++++-
 arch/x86/kernel/alternative.c                   |  5 ++
 arch/x86/kernel/static_call.c                   | 10 ++-
 include/linux/entry-common.h                    |  4 ++
 include/linux/kernel.h                          | 22 +++++-
 include/linux/sched.h                           | 27 ++++++-
 include/linux/static_call.h                     | 21 ------
 include/linux/static_call_types.h               | 33 +++++++++
 kernel/Kconfig.preempt                          | 19 +++++
 kernel/entry/common.c                           | 10 ++-
 kernel/sched/core.c                             | 93 ++++++++++++++++++++++++-
 kernel/static_call.c                            | 10 +++
 16 files changed, 289 insertions(+), 42 deletions(-)

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0()
  2020-11-10  0:56 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v3 Frederic Weisbecker
@ 2020-11-10  0:56 ` Frederic Weisbecker
  2020-11-10  9:55   ` Peter Zijlstra
  2020-11-10 10:06   ` Peter Zijlstra
  2020-11-10  0:56 ` [RFC PATCH 2/7] static_call: Pull some static_call declarations to the type headers Frederic Weisbecker
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 19+ messages in thread
From: Frederic Weisbecker @ 2020-11-10  0:56 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: Peter Zijlstra <peterz@infradead.org>

Provide a stub function that returns 0 and wire up the static call site
patching to replace the CALL with a single 5-byte instruction that
clears %RAX, the return value register.

The function can be cast to any function pointer type that has a
single %RAX return (including pointers). Also provide a version that
returns an int for convenience. We are clearing the entire %RAX register
in any case, whether the return value is 32 or 64 bits, since %RAX is
always a scratch register anyway.
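
A minimal usage sketch, mirroring how patch 4 of this series wires this
up (the names below are taken from that patch):

  /* Declare/define a static call whose default target is the stub */
  DECLARE_STATIC_CALL(cond_resched, __static_call_return0);
  DEFINE_STATIC_CALL(cond_resched, __static_call_return0);

  /* Call sites just do: */
  static_call(cond_resched)();

  /*
   * While the target is the stub, each call site is patched to the
   * 5-byte "data16 data16 xorq %rax, %rax" instead of a CALL.
   * Updating to a real function restores a normal CALL:
   */
  static_call_update(cond_resched, __cond_resched);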

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
[fweisbec: s/disp16/data16, integrate into text_gen_insn(), elaborate
 comment on the resulting insn, emulate on int3 trap, provide validation,
 uninline __static_call_return0() for HAVE_STATIC_CALL]
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 arch/x86/include/asm/text-patching.h | 26 +++++++++++++++++++++++++-
 arch/x86/kernel/alternative.c        |  5 +++++
 arch/x86/kernel/static_call.c        | 10 ++++++++--
 include/linux/static_call.h          |  9 +++++++++
 kernel/static_call.c                 | 10 ++++++++++
 5 files changed, 57 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index b7421780e4e9..1250f440d1be 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -65,6 +65,9 @@ extern void text_poke_finish(void);
 #define JMP8_INSN_SIZE		2
 #define JMP8_INSN_OPCODE	0xEB
 
+#define XOR5RAX_INSN_SIZE	5
+#define XOR5RAX_INSN_OPCODE	0x31
+
 #define DISP32_SIZE		4
 
 static __always_inline int text_opcode_size(u8 opcode)
@@ -80,6 +83,7 @@ static __always_inline int text_opcode_size(u8 opcode)
 	__CASE(CALL);
 	__CASE(JMP32);
 	__CASE(JMP8);
+	__CASE(XOR5RAX);
 	}
 
 #undef __CASE
@@ -99,8 +103,21 @@ static __always_inline
 void *text_gen_insn(u8 opcode, const void *addr, const void *dest)
 {
 	static union text_poke_insn insn; /* per instance */
-	int size = text_opcode_size(opcode);
+	int size;
 
+	if (opcode == XOR5RAX_INSN_OPCODE) {
+		/*
+		 * data16 data16 xorq %rax, %rax - a single 5 byte instruction that clears %rax
+		 * The REX.W cancels the effect of any data16.
+		 */
+		static union text_poke_insn xor5rax = {
+			.text = { 0x66, 0x66, 0x48, 0x31, 0xc0 },
+		};
+
+		return &xor5rax.text;
+	}
+
+	size = text_opcode_size(opcode);
 	insn.opcode = opcode;
 
 	if (size > 1) {
@@ -165,6 +182,13 @@ void int3_emulate_ret(struct pt_regs *regs)
 	unsigned long ip = int3_emulate_pop(regs);
 	int3_emulate_jmp(regs, ip);
 }
+
+static __always_inline
+void int3_emulate_xor5rax(struct pt_regs *regs)
+{
+	regs->ax = 0;
+}
+
 #endif /* !CONFIG_UML_X86 */
 
 #endif /* _ASM_X86_TEXT_PATCHING_H */
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 2400ad62f330..37592f576a10 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -1125,6 +1125,10 @@ noinstr int poke_int3_handler(struct pt_regs *regs)
 		int3_emulate_jmp(regs, (long)ip + tp->rel32);
 		break;
 
+	case XOR5RAX_INSN_OPCODE:
+		int3_emulate_xor5rax(regs);
+		break;
+
 	default:
 		BUG();
 	}
@@ -1291,6 +1295,7 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr,
 	switch (tp->opcode) {
 	case INT3_INSN_OPCODE:
 	case RET_INSN_OPCODE:
+	case XOR5RAX_INSN_OPCODE:
 		break;
 
 	case CALL_INSN_OPCODE:
diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
index ca9a380d9c0b..3a36eaf3dd1f 100644
--- a/arch/x86/kernel/static_call.c
+++ b/arch/x86/kernel/static_call.c
@@ -18,7 +18,11 @@ static void __ref __static_call_transform(void *insn, enum insn_type type, void
 
 	switch (type) {
 	case CALL:
-		code = text_gen_insn(CALL_INSN_OPCODE, insn, func);
+		if (func == &__static_call_return0 ||
+		    func == &__static_call_returnl0)
+			code = text_gen_insn(XOR5RAX_INSN_OPCODE, insn, func);
+		else
+			code = text_gen_insn(CALL_INSN_OPCODE, insn, func);
 		break;
 
 	case NOP:
@@ -54,7 +58,9 @@ static void __static_call_validate(void *insn, bool tail)
 			return;
 	} else {
 		if (opcode == CALL_INSN_OPCODE ||
-		    !memcmp(insn, ideal_nops[NOP_ATOMIC5], 5))
+		    !memcmp(insn, ideal_nops[NOP_ATOMIC5], 5) ||
+		    !memcmp(insn, text_gen_insn(XOR5RAX_INSN_OPCODE, NULL, NULL),
+			    XOR5RAX_INSN_SIZE))
 			return;
 	}
 
diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 695da4c9b338..055544793430 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -136,6 +136,9 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool
 
 #ifdef CONFIG_HAVE_STATIC_CALL_INLINE
 
+extern int __static_call_return0(void);
+extern long __static_call_returnl0(void);
+
 extern int __init static_call_init(void);
 
 struct static_call_mod {
@@ -187,6 +190,9 @@ extern int static_call_text_reserved(void *start, void *end);
 
 #elif defined(CONFIG_HAVE_STATIC_CALL)
 
+extern int __static_call_return0(void);
+extern long __static_call_returnl0(void);
+
 static inline int static_call_init(void) { return 0; }
 
 struct static_call_key {
@@ -234,6 +240,9 @@ static inline int static_call_text_reserved(void *start, void *end)
 
 #else /* Generic implementation */
 
+static inline int __static_call_return0(void) { return 0; }
+static inline long __static_call_returnl0(void) { return 0; }
+
 static inline int static_call_init(void) { return 0; }
 
 struct static_call_key {
diff --git a/kernel/static_call.c b/kernel/static_call.c
index 84565c2a41b8..3cb371e71be6 100644
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -438,6 +438,16 @@ int __init static_call_init(void)
 }
 early_initcall(static_call_init);
 
+int __static_call_return0(void)
+{
+	return 0;
+}
+
+long __static_call_returnl0(void)
+{
+	return 0;
+}
+
 #ifdef CONFIG_STATIC_CALL_SELFTEST
 
 static int func_a(int x)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH 2/7] static_call: Pull some static_call declarations to the type headers
  2020-11-10  0:56 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v3 Frederic Weisbecker
  2020-11-10  0:56 ` [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0() Frederic Weisbecker
@ 2020-11-10  0:56 ` Frederic Weisbecker
  2020-11-10  0:56 ` [RFC PATCH 3/7] preempt: Introduce CONFIG_PREEMPT_DYNAMIC Frederic Weisbecker
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 19+ messages in thread
From: Frederic Weisbecker @ 2020-11-10  0:56 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: Peter Zijlstra <peterz@infradead.org>

Some static call declarations are going to be needed in low-level header
files. Move the necessary material to the dedicated static call types
header to avoid inclusion dependency hell.
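
For instance (a sketch of the intended use, mirroring the <linux/kernel.h>
hunk in patch 4), a low-level header can now declare and invoke a static
call with only the lightweight types header:

  #include <linux/static_call_types.h>	/* not <linux/static_call.h> */

  DECLARE_STATIC_CALL(might_resched, __static_call_return0);

  static __always_inline void might_resched(void)
  {
  	static_call(might_resched)();
  }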

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/static_call.h       | 30 ----------------------------
 include/linux/static_call_types.h | 33 +++++++++++++++++++++++++++++++
 2 files changed, 33 insertions(+), 30 deletions(-)

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 055544793430..a2c064585c03 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -107,26 +107,10 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool
 
 #define STATIC_CALL_TRAMP_ADDR(name) &STATIC_CALL_TRAMP(name)
 
-/*
- * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
- * the symbol table so that objtool can reference it when it generates the
- * .static_call_sites section.
- */
-#define __static_call(name)						\
-({									\
-	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
-	&STATIC_CALL_TRAMP(name);					\
-})
-
 #else
 #define STATIC_CALL_TRAMP_ADDR(name) NULL
 #endif
 
-
-#define DECLARE_STATIC_CALL(name, func)					\
-	extern struct static_call_key STATIC_CALL_KEY(name);		\
-	extern typeof(func) STATIC_CALL_TRAMP(name);
-
 #define static_call_update(name, func)					\
 ({									\
 	BUILD_BUG_ON(!__same_type(*(func), STATIC_CALL_TRAMP(name)));	\
@@ -136,9 +120,6 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool
 
 #ifdef CONFIG_HAVE_STATIC_CALL_INLINE
 
-extern int __static_call_return0(void);
-extern long __static_call_returnl0(void);
-
 extern int __init static_call_init(void);
 
 struct static_call_mod {
@@ -177,7 +158,6 @@ extern int static_call_text_reserved(void *start, void *end);
 	};								\
 	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
 
-#define static_call(name)	__static_call(name)
 #define static_call_cond(name)	(void)__static_call(name)
 
 #define EXPORT_STATIC_CALL(name)					\
@@ -190,9 +170,6 @@ extern int static_call_text_reserved(void *start, void *end);
 
 #elif defined(CONFIG_HAVE_STATIC_CALL)
 
-extern int __static_call_return0(void);
-extern long __static_call_returnl0(void);
-
 static inline int static_call_init(void) { return 0; }
 
 struct static_call_key {
@@ -213,7 +190,6 @@ struct static_call_key {
 	};								\
 	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
 
-#define static_call(name)	__static_call(name)
 #define static_call_cond(name)	(void)__static_call(name)
 
 static inline
@@ -240,9 +216,6 @@ static inline int static_call_text_reserved(void *start, void *end)
 
 #else /* Generic implementation */
 
-static inline int __static_call_return0(void) { return 0; }
-static inline long __static_call_returnl0(void) { return 0; }
-
 static inline int static_call_init(void) { return 0; }
 
 struct static_call_key {
@@ -261,9 +234,6 @@ struct static_call_key {
 		.func = NULL,						\
 	}
 
-#define static_call(name)						\
-	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
-
 static inline void __static_call_nop(void) { }
 
 /*
diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 89135bb35bf7..437eea5573fc 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -4,6 +4,7 @@
 
 #include <linux/types.h>
 #include <linux/stringify.h>
+#include <linux/compiler.h>
 
 #define STATIC_CALL_KEY_PREFIX		__SCK__
 #define STATIC_CALL_KEY_PREFIX_STR	__stringify(STATIC_CALL_KEY_PREFIX)
@@ -32,4 +33,36 @@ struct static_call_site {
 	s32 key;
 };
 
+#define DECLARE_STATIC_CALL(name, func)					\
+	extern struct static_call_key STATIC_CALL_KEY(name);		\
+	extern typeof(func) STATIC_CALL_TRAMP(name);
+
+#ifdef CONFIG_HAVE_STATIC_CALL
+
+/*
+ * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
+ * the symbol table so that objtool can reference it when it generates the
+ * .static_call_sites section.
+ */
+#define __static_call(name)						\
+({									\
+	__ADDRESSABLE(STATIC_CALL_KEY(name));				\
+	&STATIC_CALL_TRAMP(name);					\
+})
+
+#define static_call(name)	__static_call(name)
+
+extern int __static_call_return0(void);
+extern long __static_call_returnl0(void);
+
+#else
+
+#define static_call(name)						\
+	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
+
+static inline int __static_call_return0(void) { return 0; }
+static inline long __static_call_returnl0(void) { return 0; }
+
+#endif /* CONFIG_HAVE_STATIC_CALL */
+
 #endif /* _STATIC_CALL_TYPES_H */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH 3/7] preempt: Introduce CONFIG_PREEMPT_DYNAMIC
  2020-11-10  0:56 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v3 Frederic Weisbecker
  2020-11-10  0:56 ` [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0() Frederic Weisbecker
  2020-11-10  0:56 ` [RFC PATCH 2/7] static_call: Pull some static_call declarations to the type headers Frederic Weisbecker
@ 2020-11-10  0:56 ` Frederic Weisbecker
  2020-11-10  0:56 ` [RFC PATCH 4/7] preempt/dynamic: Provide cond_resched() and might_resched() static calls Frederic Weisbecker
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 19+ messages in thread
From: Frederic Weisbecker @ 2020-11-10  0:56 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Michal Hocko, Mel Gorman, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: Michal Hocko <mhocko@kernel.org>

Preemption mode selection is currently hardcoded via Kconfig choices.
Introduce a dedicated option to tune the preemption flavour at boot time.

This will only be available on architectures that efficiently support
static calls, in order not to tempt users into the feature at the cost
of additional overhead that might be prohibitive or undesirable.

CONFIG_PREEMPT_DYNAMIC is automatically selected by CONFIG_PREEMPT if
the architecture provides the necessary support
(CONFIG_HAVE_STATIC_CALL_INLINE, CONFIG_GENERIC_ENTRY, and the
__preempt_schedule_func() / __preempt_schedule_notrace_func() wrappers).
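
For an architecture, opting in then boils down to one Kconfig select
(this mirrors the x86 hunk below):

  select HAVE_PREEMPT_DYNAMIC		if HAVE_STATIC_CALL_INLINE

With that in place, CONFIG_PREEMPT selects CONFIG_PREEMPT_DYNAMIC and the
flavour can be picked with the preempt= boot parameter documented below.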

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
[Added documentation, reorganized dependencies on top of static calls,
 etc...]
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 .../admin-guide/kernel-parameters.txt         |  7 +++++++
 arch/Kconfig                                  |  9 +++++++++
 arch/x86/Kconfig                              |  1 +
 kernel/Kconfig.preempt                        | 19 +++++++++++++++++++
 4 files changed, 36 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 526d65d8573a..f03c4c447871 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3902,6 +3902,13 @@
 			Format: {"off"}
 			Disable Hardware Transactional Memory
 
+	preempt=	[KNL]
+			Select preemption mode if you have CONFIG_PREEMPT_DYNAMIC
+			none - Limited to cond_resched() calls
+			voluntary - Limited to cond_resched() and might_sleep() calls
+			full - Any section that isn't explicitly preempt disabled
+			       can be preempted anytime.
+
 	print-fatal-signals=
 			[KNL] debug: print fatal signals
 
diff --git a/arch/Kconfig b/arch/Kconfig
index 56b6ccc0e32d..69e0c7f15a7f 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1028,6 +1028,15 @@ config HAVE_STATIC_CALL_INLINE
 	bool
 	depends on HAVE_STATIC_CALL
 
+config HAVE_PREEMPT_DYNAMIC
+	bool
+	depends on HAVE_STATIC_CALL_INLINE
+	depends on GENERIC_ENTRY
+	help
+	   Select this if the architecture supports boot time preempt setting
+	   on top of static calls. It is strongly advised to support inline
+	   static calls to avoid any overhead.
+
 source "kernel/gcov/Kconfig"
 
 source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f6946b81f74a..8a57fdfd3372 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -217,6 +217,7 @@ config X86
 	select HAVE_STACK_VALIDATION		if X86_64
 	select HAVE_STATIC_CALL
 	select HAVE_STATIC_CALL_INLINE		if HAVE_STACK_VALIDATION
+	select HAVE_PREEMPT_DYNAMIC		if HAVE_STATIC_CALL_INLINE
 	select HAVE_RSEQ
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UNSTABLE_SCHED_CLOCK
diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
index bf82259cff96..416017301660 100644
--- a/kernel/Kconfig.preempt
+++ b/kernel/Kconfig.preempt
@@ -40,6 +40,7 @@ config PREEMPT
 	depends on !ARCH_NO_PREEMPT
 	select PREEMPTION
 	select UNINLINE_SPIN_UNLOCK if !ARCH_INLINE_SPIN_UNLOCK
+	select PREEMPT_DYNAMIC if HAVE_PREEMPT_DYNAMIC
 	help
 	  This option reduces the latency of the kernel by making
 	  all kernel code (that is not executing in a critical section)
@@ -80,3 +81,21 @@ config PREEMPT_COUNT
 config PREEMPTION
        bool
        select PREEMPT_COUNT
+
+config PREEMPT_DYNAMIC
+	bool
+	help
+	  This option allows the preemption model to be selected on the
+	  kernel command line and thus to override the default preemption
+	  model chosen at compile time.
+
+	  The feature is primarily interesting for Linux distributions which
+	  provide a pre-built kernel binary to reduce the number of kernel
+	  flavors they offer while still covering different use cases.
+
+	  The runtime overhead is negligible with HAVE_STATIC_CALL_INLINE enabled
+	  but if runtime patching is not available for the specific architecture
+	  then the potential overhead should be considered.
+
+	  Interesting if you want the same pre-built kernel to be used for
+	  both server and desktop workloads.
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH 4/7] preempt/dynamic: Provide cond_resched() and might_resched() static calls
  2020-11-10  0:56 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v3 Frederic Weisbecker
                   ` (2 preceding siblings ...)
  2020-11-10  0:56 ` [RFC PATCH 3/7] preempt: Introduce CONFIG_PREEMPT_DYNAMIC Frederic Weisbecker
@ 2020-11-10  0:56 ` Frederic Weisbecker
  2020-11-10 10:39   ` Peter Zijlstra
  2020-11-10  0:56 ` [RFC PATCH 5/7] preempt/dynamic: Provide preempt_schedule[_notrace]() " Frederic Weisbecker
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 19+ messages in thread
From: Frederic Weisbecker @ 2020-11-10  0:56 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: "Peter Zijlstra (Intel)" <peterz@infradead.org>

Provide static calls to control cond_resched() (called in !CONFIG_PREEMPT)
and might_resched() (called in CONFIG_PREEMPT_VOLUNTARY) so that we
can override their behaviour when preempt= is overridden.

Since the default behaviour is full preemption, both their calls are
ignored when preempt= isn't passed.
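
Concretely, this is what the different modes end up doing with these two
calls (sketch, based on the mode table and setup code in patch 7):

  /* default / preempt=full: both stay on the 0-returning stub, i.e.
   * the call sites reduce to "xorq %rax, %rax" */

  /* preempt=none */
  static_call_update(cond_resched, __cond_resched);
  static_call_update(might_resched, __static_call_return0);

  /* preempt=voluntary */
  static_call_update(cond_resched, __cond_resched);
  static_call_update(might_resched, __cond_resched);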

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
[branch might_resched() directly to __cond_resched(), only define static
calls when PREEMPT_DYNAMIC]
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/kernel.h | 22 +++++++++++++++++++---
 include/linux/sched.h  | 27 ++++++++++++++++++++++++---
 kernel/sched/core.c    | 16 +++++++++++++---
 3 files changed, 56 insertions(+), 9 deletions(-)

diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 2f05e9128201..ecd820174455 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -15,6 +15,7 @@
 #include <linux/typecheck.h>
 #include <linux/printk.h>
 #include <linux/build_bug.h>
+#include <linux/static_call_types.h>
 #include <asm/byteorder.h>
 #include <asm/div64.h>
 #include <uapi/linux/kernel.h>
@@ -194,11 +195,26 @@ struct pt_regs;
 struct user;
 
 #ifdef CONFIG_PREEMPT_VOLUNTARY
-extern int _cond_resched(void);
-# define might_resched() _cond_resched()
+
+extern int __cond_resched(void);
+# define might_resched() __cond_resched()
+
+#elif defined(CONFIG_PREEMPT_DYNAMIC)
+
+extern int __cond_resched(void);
+
+DECLARE_STATIC_CALL(might_resched, __static_call_return0);
+
+static __always_inline void might_resched(void)
+{
+	static_call(might_resched)();
+}
+
 #else
+
 # define might_resched() do { } while (0)
-#endif
+
+#endif /* CONFIG_PREEMPT_* */
 
 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
 extern void ___might_sleep(const char *file, int line, int preempt_offset);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 063cd120b459..f1d6f274e0dc 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1829,11 +1829,32 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
  * value indicates whether a reschedule was done in fact.
  * cond_resched_lock() will drop the spinlock before scheduling,
  */
-#ifndef CONFIG_PREEMPTION
-extern int _cond_resched(void);
+#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
+extern int __cond_resched(void);
+
+#ifdef CONFIG_PREEMPT_DYNAMIC
+
+DECLARE_STATIC_CALL(cond_resched, __static_call_return0);
+
+static __always_inline int _cond_resched(void)
+{
+	return static_call(cond_resched)();
+}
+
 #else
+
+static inline int _cond_resched(void)
+{
+	return __cond_resched();
+}
+
+#endif /* CONFIG_PREEMPT_DYNAMIC */
+
+#else
+
 static inline int _cond_resched(void) { return 0; }
-#endif
+
+#endif /* !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC) */
 
 #define cond_resched() ({			\
 	___might_sleep(__FILE__, __LINE__, 0);	\
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d2003a7d5ab5..6432d0079510 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6111,17 +6111,27 @@ SYSCALL_DEFINE0(sched_yield)
 	return 0;
 }
 
-#ifndef CONFIG_PREEMPTION
-int __sched _cond_resched(void)
+#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
+int __sched __cond_resched(void)
 {
 	if (should_resched(0)) {
 		preempt_schedule_common();
 		return 1;
 	}
+#ifndef CONFIG_PREEMPT_RCU
 	rcu_all_qs();
+#endif
 	return 0;
 }
-EXPORT_SYMBOL(_cond_resched);
+EXPORT_SYMBOL(__cond_resched);
+#endif
+
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL(cond_resched, __static_call_return0);
+EXPORT_STATIC_CALL(cond_resched);
+
+DEFINE_STATIC_CALL(might_resched, __static_call_return0);
+EXPORT_STATIC_CALL(might_resched);
 #endif
 
 /*
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH 5/7] preempt/dynamic: Provide preempt_schedule[_notrace]() static calls
  2020-11-10  0:56 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v3 Frederic Weisbecker
                   ` (3 preceding siblings ...)
  2020-11-10  0:56 ` [RFC PATCH 4/7] preempt/dynamic: Provide cond_resched() and might_resched() static calls Frederic Weisbecker
@ 2020-11-10  0:56 ` Frederic Weisbecker
  2020-11-10  0:56 ` [RFC PATCH 6/7] preempt/dynamic: Provide irqentry_exit_cond_resched() static call Frederic Weisbecker
  2020-11-10  0:56 ` [RFC PATCH 7/7] preempt/dynamic: Support dynamic preempt with preempt= boot option Frederic Weisbecker
  6 siblings, 0 replies; 19+ messages in thread
From: Frederic Weisbecker @ 2020-11-10  0:56 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: "Peter Zijlstra (Intel)" <peterz@infradead.org>

Provide static calls to control preempt_schedule[_notrace]()
(called in CONFIG_PREEMPT) so that we can override their behaviour when
preempt= is overridden.

Since the default behaviour is full preemption, both their calls are
initialized to the arch-provided wrapper, if any.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
[only define static calls when PREEMPT_DYNAMIC, make it less dependent
on x86 with __preempt_schedule_func()]
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 arch/x86/include/asm/preempt.h | 34 ++++++++++++++++++++++++++--------
 kernel/sched/core.c            | 12 ++++++++++++
 2 files changed, 38 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 69485ca13665..3db9cb8b1a25 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -5,6 +5,7 @@
 #include <asm/rmwcc.h>
 #include <asm/percpu.h>
 #include <linux/thread_info.h>
+#include <linux/static_call_types.h>
 
 DECLARE_PER_CPU(int, __preempt_count);
 
@@ -103,16 +104,33 @@ static __always_inline bool should_resched(int preempt_offset)
 }
 
 #ifdef CONFIG_PREEMPTION
-  extern asmlinkage void preempt_schedule_thunk(void);
-# define __preempt_schedule() \
-	asm volatile ("call preempt_schedule_thunk" : ASM_CALL_CONSTRAINT)
 
-  extern asmlinkage void preempt_schedule(void);
-  extern asmlinkage void preempt_schedule_notrace_thunk(void);
-# define __preempt_schedule_notrace() \
-	asm volatile ("call preempt_schedule_notrace_thunk" : ASM_CALL_CONSTRAINT)
+extern asmlinkage void preempt_schedule(void);
+extern asmlinkage void preempt_schedule_thunk(void);
+
+#define __preempt_schedule_func() preempt_schedule_thunk
+
+DECLARE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
+
+#define __preempt_schedule() \
+do { \
+	__ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule)); \
+	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule) : ASM_CALL_CONSTRAINT); \
+} while (0)
+
+extern asmlinkage void preempt_schedule_notrace(void);
+extern asmlinkage void preempt_schedule_notrace_thunk(void);
+
+#define __preempt_schedule_notrace_func() preempt_schedule_notrace_thunk
+
+DECLARE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
+
+#define __preempt_schedule_notrace() \
+do { \
+	__ADDRESSABLE(STATIC_CALL_KEY(preempt_schedule_notrace)); \
+	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule_notrace) : ASM_CALL_CONSTRAINT); \
+} while (0)
 
-  extern asmlinkage void preempt_schedule_notrace(void);
 #endif
 
 #endif /* __ASM_PREEMPT_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6432d0079510..6715caa17ea7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4709,6 +4709,12 @@ asmlinkage __visible void __sched notrace preempt_schedule(void)
 NOKPROBE_SYMBOL(preempt_schedule);
 EXPORT_SYMBOL(preempt_schedule);
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func());
+EXPORT_STATIC_CALL(preempt_schedule);
+#endif
+
+
 /**
  * preempt_schedule_notrace - preempt_schedule called by tracing
  *
@@ -4761,6 +4767,12 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
 }
 EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
+EXPORT_STATIC_CALL(preempt_schedule_notrace);
+#endif
+
+
 #endif /* CONFIG_PREEMPTION */
 
 /*
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH 6/7] preempt/dynamic: Provide irqentry_exit_cond_resched() static call
  2020-11-10  0:56 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v3 Frederic Weisbecker
                   ` (4 preceding siblings ...)
  2020-11-10  0:56 ` [RFC PATCH 5/7] preempt/dynamic: Provide preempt_schedule[_notrace]() " Frederic Weisbecker
@ 2020-11-10  0:56 ` Frederic Weisbecker
  2020-11-10 10:32   ` Peter Zijlstra
  2020-11-10  0:56 ` [RFC PATCH 7/7] preempt/dynamic: Support dynamic preempt with preempt= boot option Frederic Weisbecker
  6 siblings, 1 reply; 19+ messages in thread
From: Frederic Weisbecker @ 2020-11-10  0:56 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: "Peter Zijlstra (Intel)" <peterz@infradead.org>

Provide a static call to control IRQ preemption (called in CONFIG_PREEMPT)
so that we can override its behaviour when preempt= is overridden.

Since the default behaviour is full preemption, its call is
initialized to provide IRQ preemption when preempt= isn't passed.
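
Sketch of the resulting pieces (the definitions are in the hunks below,
the updates come from patch 7):

  DEFINE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);

  /* irqentry_exit() then calls: */
  static_call(irqentry_exit_cond_resched)();

  /* preempt=none/voluntary patch the reschedule point out: */
  static_call_update(irqentry_exit_cond_resched,
  		     (typeof(&irqentry_exit_cond_resched)) NULL);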

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
[convert from static key to static call, only define static call when
PREEMPT_DYNAMIC]
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/entry-common.h |  4 ++++
 kernel/entry/common.c        | 10 +++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index 474f29638d2c..36738f67f6ad 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_ENTRYCOMMON_H
 #define __LINUX_ENTRYCOMMON_H
 
+#include <linux/static_call_types.h>
 #include <linux/tracehook.h>
 #include <linux/syscalls.h>
 #include <linux/seccomp.h>
@@ -385,6 +386,9 @@ irqentry_state_t noinstr irqentry_enter(struct pt_regs *regs);
  * Conditional reschedule with additional sanity checks.
  */
 void irqentry_exit_cond_resched(void);
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DECLARE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+#endif
 
 /**
  * irqentry_exit - Handle return from exception that used irqentry_enter()
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 2b8366693d5c..c501c951f8bc 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -357,6 +357,9 @@ void irqentry_exit_cond_resched(void)
 			preempt_schedule_irq();
 	}
 }
+#ifdef CONFIG_PREEMPT_DYNAMIC
+DEFINE_STATIC_CALL(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+#endif
 
 noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
 {
@@ -383,8 +386,13 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
 		}
 
 		instrumentation_begin();
-		if (IS_ENABLED(CONFIG_PREEMPTION))
+		if (IS_ENABLED(CONFIG_PREEMPTION)) {
+#ifdef CONFIG_PREEMPT_DYNAMIC
+			static_call(irqentry_exit_cond_resched)();
+#else
 			irqentry_exit_cond_resched();
+#endif
+		}
 		/* Covers both tracing and lockdep */
 		trace_hardirqs_on();
 		instrumentation_end();
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH 7/7] preempt/dynamic: Support dynamic preempt with preempt= boot option
  2020-11-10  0:56 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v3 Frederic Weisbecker
                   ` (5 preceding siblings ...)
  2020-11-10  0:56 ` [RFC PATCH 6/7] preempt/dynamic: Provide irqentry_exit_cond_resched() static call Frederic Weisbecker
@ 2020-11-10  0:56 ` Frederic Weisbecker
  6 siblings, 0 replies; 19+ messages in thread
From: Frederic Weisbecker @ 2020-11-10  0:56 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Frederic Weisbecker,
	Thomas Gleixner, Paul E . McKenney, Ingo Molnar, Michal Hocko

From: "Peter Zijlstra (Intel)" <peterz@infradead.org>

Support the preempt= boot option and patch the static call sites
accordingly.
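
Usage sketch (the exact bootloader syntax is up to the setup; the option
itself is documented in patch 3):

  # appended to the kernel command line:
  preempt=voluntary

  # confirmed in the boot log by the pr_info() below:
  Dynamic Preempt: voluntary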

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
[remove the mad scientist experiments]
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/sched/core.c | 67 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 66 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6715caa17ea7..84ac05d2df3a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -12,6 +12,7 @@
 
 #include "sched.h"
 
+#include <linux/entry-common.h>
 #include <linux/nospec.h>
 
 #include <linux/kcov.h>
@@ -4772,9 +4773,73 @@ DEFINE_STATIC_CALL(preempt_schedule_notrace, __preempt_schedule_notrace_func());
 EXPORT_STATIC_CALL(preempt_schedule_notrace);
 #endif
 
-
 #endif /* CONFIG_PREEMPTION */
 
+#ifdef CONFIG_PREEMPT_DYNAMIC
+
+/*
+ * SC:cond_resched
+ * SC:might_resched
+ * SC:preempt_schedule
+ * SC:preempt_schedule_notrace
+ * SC:irqentry_exit_cond_resched
+ *
+ *
+ * NONE:
+ *   cond_resched               <- __cond_resched
+ *   might_resched              <- RET0
+ *   preempt_schedule           <- NOP
+ *   preempt_schedule_notrace   <- NOP
+ *   irqentry_exit_cond_resched <- NOP
+ *
+ * VOLUNTARY:
+ *   cond_resched               <- __cond_resched
+ *   might_resched              <- __cond_resched
+ *   preempt_schedule           <- NOP
+ *   preempt_schedule_notrace   <- NOP
+ *   irqentry_exit_cond_resched <- NOP
+ *
+ * FULL:
+ *   cond_resched               <- RET0
+ *   might_resched              <- RET0
+ *   preempt_schedule           <- preempt_schedule
+ *   preempt_schedule_notrace   <- preempt_schedule_notrace
+ *   irqentry_exit_cond_resched <- irqentry_exit_cond_resched
+ */
+static int __init setup_preempt_mode(char *str)
+{
+	if (!strcmp(str, "none")) {
+		static_call_update(cond_resched, __cond_resched);
+		static_call_update(might_resched, __static_call_return0);
+		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
+		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
+		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else if (!strcmp(str, "voluntary")) {
+		static_call_update(cond_resched, __cond_resched);
+		static_call_update(might_resched, __cond_resched);
+		static_call_update(preempt_schedule, (typeof(&preempt_schedule)) NULL);
+		static_call_update(preempt_schedule_notrace, (typeof(&preempt_schedule_notrace)) NULL);
+		static_call_update(irqentry_exit_cond_resched, (typeof(&irqentry_exit_cond_resched)) NULL);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else if (!strcmp(str, "full")) {
+		static_call_update(cond_resched, __static_call_return0);
+		static_call_update(might_resched, __static_call_return0);
+		static_call_update(preempt_schedule, __preempt_schedule_func());
+		static_call_update(preempt_schedule_notrace, __preempt_schedule_notrace_func());
+		static_call_update(irqentry_exit_cond_resched, irqentry_exit_cond_resched);
+		pr_info("Dynamic Preempt: %s\n", str);
+	} else {
+		pr_warn("Dynamic Preempt: Unsupported preempt mode %s, default to full\n", str);
+		return 1;
+	}
+	return 0;
+}
+__setup("preempt=", setup_preempt_mode);
+
+#endif /* CONFIG_PREEMPT_DYNAMIC */
+
+
 /*
  * This is the entry point to schedule() from kernel preemption
  * off of irq context.
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0()
  2020-11-10  0:56 ` [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0() Frederic Weisbecker
@ 2020-11-10  9:55   ` Peter Zijlstra
  2020-11-10 10:13     ` Peter Zijlstra
  2020-11-10 13:24     ` Frederic Weisbecker
  2020-11-10 10:06   ` Peter Zijlstra
  1 sibling, 2 replies; 19+ messages in thread
From: Peter Zijlstra @ 2020-11-10  9:55 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Tue, Nov 10, 2020 at 01:56:03AM +0100, Frederic Weisbecker wrote:

> [fweisbec: s/disp16/data16, integrate into text_get_insn(), elaborate
>  comment on the resulting insn, emulate on int3 trap, provide validation,
>  uninline __static_call_return0() for HAVE_STATIC_CALL]

> diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
> index b7421780e4e9..1250f440d1be 100644
> --- a/arch/x86/include/asm/text-patching.h
> +++ b/arch/x86/include/asm/text-patching.h
> @@ -65,6 +65,9 @@ extern void text_poke_finish(void);
>  #define JMP8_INSN_SIZE		2
>  #define JMP8_INSN_OPCODE	0xEB
>  
> +#define XOR5RAX_INSN_SIZE	5
> +#define XOR5RAX_INSN_OPCODE	0x31
> +
>  #define DISP32_SIZE		4
>  
>  static __always_inline int text_opcode_size(u8 opcode)
> @@ -80,6 +83,7 @@ static __always_inline int text_opcode_size(u8 opcode)
>  	__CASE(CALL);
>  	__CASE(JMP32);
>  	__CASE(JMP8);
> +	__CASE(XOR5RAX);
>  	}
>  
>  #undef __CASE
> @@ -99,8 +103,21 @@ static __always_inline
>  void *text_gen_insn(u8 opcode, const void *addr, const void *dest)
>  {
>  	static union text_poke_insn insn; /* per instance */
> -	int size = text_opcode_size(opcode);
> +	int size;
>  
> +	if (opcode == XOR5RAX_INSN_OPCODE) {
> +		/*
> +		 * data16 data16 xorq %rax, %rax - a single 5 byte instruction that clears %rax
> +		 * The REX.W cancels the effect of any data16.
> +		 */
> +		static union text_poke_insn xor5rax = {
> +			.text = { 0x66, 0x66, 0x48, 0x31, 0xc0 },
> +		};
> +
> +		return &xor5rax.text;
> +	}
> +
> +	size = text_opcode_size(opcode);
>  	insn.opcode = opcode;
>  
>  	if (size > 1) {
> @@ -165,6 +182,13 @@ void int3_emulate_ret(struct pt_regs *regs)
>  	unsigned long ip = int3_emulate_pop(regs);
>  	int3_emulate_jmp(regs, ip);
>  }
> +
> +static __always_inline
> +void int3_emulate_xor5rax(struct pt_regs *regs)
> +{
> +	regs->ax = 0;
> +}
> +
>  #endif /* !CONFIG_UML_X86 */
>  
>  #endif /* _ASM_X86_TEXT_PATCHING_H */
> diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> index 2400ad62f330..37592f576a10 100644
> --- a/arch/x86/kernel/alternative.c
> +++ b/arch/x86/kernel/alternative.c
> @@ -1125,6 +1125,10 @@ noinstr int poke_int3_handler(struct pt_regs *regs)
>  		int3_emulate_jmp(regs, (long)ip + tp->rel32);
>  		break;
>  
> +	case XOR5RAX_INSN_OPCODE:
> +		int3_emulate_xor5rax(regs);
> +		break;
> +
>  	default:
>  		BUG();
>  	}
> @@ -1291,6 +1295,7 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr,
>  	switch (tp->opcode) {
>  	case INT3_INSN_OPCODE:
>  	case RET_INSN_OPCODE:
> +	case XOR5RAX_INSN_OPCODE:
>  		break;
>  
>  	case CALL_INSN_OPCODE:

Why did you add full emulation of this? The patch I sent to you used the
text_poke_bp(.emulate) argument to have it emulate an actual call to the
out-of-line version of that function.

That should work fine and is a lot less code.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0()
  2020-11-10  0:56 ` [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0() Frederic Weisbecker
  2020-11-10  9:55   ` Peter Zijlstra
@ 2020-11-10 10:06   ` Peter Zijlstra
  1 sibling, 0 replies; 19+ messages in thread
From: Peter Zijlstra @ 2020-11-10 10:06 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Tue, Nov 10, 2020 at 01:56:03AM +0100, Frederic Weisbecker wrote:

> diff --git a/include/linux/static_call.h b/include/linux/static_call.h
> index 695da4c9b338..055544793430 100644
> --- a/include/linux/static_call.h
> +++ b/include/linux/static_call.h
> @@ -136,6 +136,9 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool
>  
>  #ifdef CONFIG_HAVE_STATIC_CALL_INLINE
>  
> +extern int __static_call_return0(void);
> +extern long __static_call_returnl0(void);
> +
>  extern int __init static_call_init(void);
>  
>  struct static_call_mod {
> @@ -187,6 +190,9 @@ extern int static_call_text_reserved(void *start, void *end);
>  
>  #elif defined(CONFIG_HAVE_STATIC_CALL)
>  
> +extern int __static_call_return0(void);
> +extern long __static_call_returnl0(void);
> +
>  static inline int static_call_init(void) { return 0; }
>  
>  struct static_call_key {
> @@ -234,6 +240,9 @@ static inline int static_call_text_reserved(void *start, void *end)
>  
>  #else /* Generic implementation */
>  
> +static inline int __static_call_return0(void) { return 0; }
> +static inline long __static_call_returnl0(void) { return 0; }
> +
>  static inline int static_call_init(void) { return 0; }
>  
>  struct static_call_key {
> diff --git a/kernel/static_call.c b/kernel/static_call.c
> index 84565c2a41b8..3cb371e71be6 100644
> --- a/kernel/static_call.c
> +++ b/kernel/static_call.c
> @@ -438,6 +438,16 @@ int __init static_call_init(void)
>  }
>  early_initcall(static_call_init);
>  
> +int __static_call_return0(void)
> +{
> +	return 0;
> +}
> +
> +long __static_call_returnl0(void)
> +{
> +	return 0;
> +}
> +
>  #ifdef CONFIG_STATIC_CALL_SELFTEST
>  
>  static int func_a(int x)

So yes, we need the out of line copy, but why do we need the int/long
variants?

AFAICT we only need the long version and can cast it to whatever we need
(provided the return value is no bigger than long).
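
I.e. something along these lines (just a sketch of the suggested cast,
not code from this series):

  static_call_update(cond_resched,
  		     (typeof(&__cond_resched)) __static_call_returnl0);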

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0()
  2020-11-10  9:55   ` Peter Zijlstra
@ 2020-11-10 10:13     ` Peter Zijlstra
  2020-11-10 13:42       ` Frederic Weisbecker
  2020-11-10 13:24     ` Frederic Weisbecker
  1 sibling, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2020-11-10 10:13 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Tue, Nov 10, 2020 at 10:55:15AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 10, 2020 at 01:56:03AM +0100, Frederic Weisbecker wrote:
> 
> > [fweisbec: s/disp16/data16, integrate into text_get_insn(), elaborate
> >  comment on the resulting insn, emulate on int3 trap, provide validation,
> >  uninline __static_call_return0() for HAVE_STATIC_CALL]

> Why did you add full emulation of this? The patch I send to you used the
> text_poke_bp(.emulate) argument to have it emulate an actual call to the
> out-of-line version of that function.
> 
> That should work fine and is a lot less code.

For reference, the below is what I sent you. Actually doing the
__static_call_return0() call while we poke the magic XOR instruction is
much simpler.

---
Subject: static_call/x86: Add __static_call_return0
From: Peter Zijlstra <peterz@infradead.org>
Date: Mon Oct 12 11:43:32 CEST 2020

Provide a stub function that returns 0 and wire up the static call site
patching to replace the CALL with a single 5-byte instruction that
clears %RAX, the return value register.

The function can be cast to any function pointer type that has a
single %RAX return (including pointers).

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/kernel/static_call.c |   11 ++++++++++-
 include/linux/static_call.h   |    6 ++++++
 kernel/static_call.c          |    5 +++++
 3 files changed, 21 insertions(+), 1 deletion(-)

--- a/arch/x86/kernel/static_call.c
+++ b/arch/x86/kernel/static_call.c
@@ -13,12 +13,21 @@ enum insn_type {
 
 static void __ref __static_call_transform(void *insn, enum insn_type type, void *func)
 {
+	/*
+	 * disp16 disp16 xorq %rax, %rax - a single 5 byte instruction that clears %rax
+	 */
+	static const u8 ret0[5] = { 0x66, 0x66, 0x48, 0x31, 0xc0 };
 	int size = CALL_INSN_SIZE;
+	const void *emulate = NULL;
 	const void *code;
 
 	switch (type) {
 	case CALL:
 		code = text_gen_insn(CALL_INSN_OPCODE, insn, func);
+		if (func == &__static_call_return0) {
+			emulate = code;
+			code = ret0;
+		}
 		break;
 
 	case NOP:
@@ -41,7 +50,7 @@ static void __ref __static_call_transfor
 	if (unlikely(system_state == SYSTEM_BOOTING))
 		return text_poke_early(insn, code, size);
 
-	text_poke_bp(insn, code, size, NULL);
+	text_poke_bp(insn, code, size, emulate);
 }
 
 static void __static_call_validate(void *insn, bool tail)
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -136,6 +136,8 @@ extern void arch_static_call_transform(v
 
 #ifdef CONFIG_HAVE_STATIC_CALL_INLINE
 
+extern long __static_call_return0(void);
+
 extern int __init static_call_init(void);
 
 struct static_call_mod {
@@ -187,6 +189,8 @@ extern int static_call_text_reserved(voi
 
 #elif defined(CONFIG_HAVE_STATIC_CALL)
 
+static inline long __static_call_return0(void) { return 0; }
+
 static inline int static_call_init(void) { return 0; }
 
 struct static_call_key {
@@ -234,6 +238,8 @@ static inline int static_call_text_reser
 
 #else /* Generic implementation */
 
+static inline long __static_call_return0(void) { return 0; }
+
 static inline int static_call_init(void) { return 0; }
 
 struct static_call_key {
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -438,6 +438,11 @@ int __init static_call_init(void)
 }
 early_initcall(static_call_init);
 
+long __static_call_return0(void)
+{
+	return 0;
+}
+
 #ifdef CONFIG_STATIC_CALL_SELFTEST
 
 static int func_a(int x)

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 6/7] preempt/dynamic: Provide irqentry_exit_cond_resched() static call
  2020-11-10  0:56 ` [RFC PATCH 6/7] preempt/dynamic: Provide irqentry_exit_cond_resched() static call Frederic Weisbecker
@ 2020-11-10 10:32   ` Peter Zijlstra
  2020-11-10 13:45     ` Frederic Weisbecker
  0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2020-11-10 10:32 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Tue, Nov 10, 2020 at 01:56:08AM +0100, Frederic Weisbecker wrote:
> [convert from static key to static call, only define static call when
> PREEMPT_DYNAMIC]

>  noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
>  {
> @@ -383,8 +386,13 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
>  		}
>  
>  		instrumentation_begin();
> -		if (IS_ENABLED(CONFIG_PREEMPTION))
> +		if (IS_ENABLED(CONFIG_PREEMPTION)) {
> +#ifdef CONFIG_PREEMT_DYNAMIC
> +			static_call(irqentry_exit_cond_resched)();
> +#else
>  			irqentry_exit_cond_resched();
> +#endif
> +		}
>  		/* Covers both tracing and lockdep */
>  		trace_hardirqs_on();
>  		instrumentation_end();

The reason I had this as a static_branch() is that, if you look at
the code-gen of this function, you'll find it will inline the call.

Not sure it matters much, but it avoids a CALL+RET.
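
For comparison, a static-branch version would look roughly like this
(sketch only, the key name is made up):

  DEFINE_STATIC_KEY_TRUE(irqentry_preempt_key);	/* hypothetical name */

  /* ... in irqentry_exit(): */
  if (IS_ENABLED(CONFIG_PREEMPTION)) {
  	if (static_branch_likely(&irqentry_preempt_key))
  		irqentry_exit_cond_resched();	/* same file, can be inlined */
  }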

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 4/7] preempt/dynamic: Provide cond_resched() and might_resched() static calls
  2020-11-10  0:56 ` [RFC PATCH 4/7] preempt/dynamic: Provide cond_resched() and might_resched() static calls Frederic Weisbecker
@ 2020-11-10 10:39   ` Peter Zijlstra
  2020-11-10 10:48     ` Peter Zijlstra
  0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2020-11-10 10:39 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Tue, Nov 10, 2020 at 01:56:06AM +0100, Frederic Weisbecker wrote:

> +#ifdef CONFIG_PREEMPT_DYNAMIC
> +DEFINE_STATIC_CALL(cond_resched, __static_call_return0);
> +EXPORT_STATIC_CALL(cond_resched);
> +
> +DEFINE_STATIC_CALL(might_resched, __static_call_return0);
> +EXPORT_STATIC_CALL(might_resched);
>  #endif

I suppose we want the below and change the above to use
EXPORT_STATIC_CALL_TRAMP().

---
Subject: static_call: EXPORT_STATIC_CALL_TRAMP()
From: Peter Zijlstra <peterz@infradead.org>
Date: Tue Nov 10 11:37:48 CET 2020

For when we want to allow modules to call the static_call() but not
change it.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/static_call.h |   23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -160,13 +160,19 @@ extern int static_call_text_reserved(voi
 
 #define static_call_cond(name)	(void)__static_call(name)
 
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
-	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+	EXPORT_STATIC_CALL_TRAMP(name)
+
+#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
 
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
-	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
+	EXPORT_STATIC_CALL_TRAMP_GPL(name)
 
 #elif defined(CONFIG_HAVE_STATIC_CALL)
 
@@ -206,13 +212,19 @@ static inline int static_call_text_reser
 	return 0;
 }
 
+#define EXPORT_STATIC_CALL_TRAMP(name)					\
+	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
-	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+	EXPORT_STATIC_CALL_TRAMP(name)
+
+#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
 
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
-	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
+	EXPORT_STATIC_CALL_TRAMP_GPL(name)
 
 #else /* Generic implementation */
 
@@ -269,6 +281,9 @@ static inline int static_call_text_reser
 	return 0;
 }
 
+#define EXPORT_STATIC_CALL_TRAMP(name)
+#define EXPORT_STATIC_CALL_TRAMP_GPL(name)
+
 #define EXPORT_STATIC_CALL(name)	EXPORT_SYMBOL(STATIC_CALL_KEY(name))
 #define EXPORT_STATIC_CALL_GPL(name)	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name))
 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 4/7] preempt/dynamic: Provide cond_resched() and might_resched() static calls
  2020-11-10 10:39   ` Peter Zijlstra
@ 2020-11-10 10:48     ` Peter Zijlstra
  2021-01-18 13:58       ` Frederic Weisbecker
  0 siblings, 1 reply; 19+ messages in thread
From: Peter Zijlstra @ 2020-11-10 10:48 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Tue, Nov 10, 2020 at 11:39:09AM +0100, Peter Zijlstra wrote:
> Subject: static_call: EXPORT_STATIC_CALL_TRAMP()
> From: Peter Zijlstra <peterz@infradead.org>
> Date: Tue Nov 10 11:37:48 CET 2020
> 
> For when we want to allow modules to call the static_call() but not
> change it.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> @@ -269,6 +281,9 @@ static inline int static_call_text_reser
>  	return 0;
>  }
>  
> +#define EXPORT_STATIC_CALL_TRAMP(name)
> +#define EXPORT_STATIC_CALL_TRAMP_GPL(name)
> +
>  #define EXPORT_STATIC_CALL(name)	EXPORT_SYMBOL(STATIC_CALL_KEY(name))
>  #define EXPORT_STATIC_CALL_GPL(name)	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name))

Hurmph, this hunk is wrong, it should export the KEY in both cases :/

That's unfortunate but unavoidable I suppose.
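
Presumably the generic-implementation hunk then becomes something like the
below (sketch of the fix, not a posted patch; without static call support,
static_call() dereferences the key directly, so modules calling it still
need the key exported):

  #define EXPORT_STATIC_CALL_TRAMP(name)     EXPORT_SYMBOL(STATIC_CALL_KEY(name))
  #define EXPORT_STATIC_CALL_TRAMP_GPL(name) EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name))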

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0()
  2020-11-10  9:55   ` Peter Zijlstra
  2020-11-10 10:13     ` Peter Zijlstra
@ 2020-11-10 13:24     ` Frederic Weisbecker
  1 sibling, 0 replies; 19+ messages in thread
From: Frederic Weisbecker @ 2020-11-10 13:24 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Tue, Nov 10, 2020 at 10:55:15AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 10, 2020 at 01:56:03AM +0100, Frederic Weisbecker wrote:
> > diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> > index 2400ad62f330..37592f576a10 100644
> > --- a/arch/x86/kernel/alternative.c
> > +++ b/arch/x86/kernel/alternative.c
> > @@ -1125,6 +1125,10 @@ noinstr int poke_int3_handler(struct pt_regs *regs)
> >  		int3_emulate_jmp(regs, (long)ip + tp->rel32);
> >  		break;
> >  
> > +	case XOR5RAX_INSN_OPCODE:
> > +		int3_emulate_xor5rax(regs);
> > +		break;
> > +
> >  	default:
> >  		BUG();
> >  	}
> > @@ -1291,6 +1295,7 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr,
> >  	switch (tp->opcode) {
> >  	case INT3_INSN_OPCODE:
> >  	case RET_INSN_OPCODE:
> > +	case XOR5RAX_INSN_OPCODE:
> >  		break;
> >  
> >  	case CALL_INSN_OPCODE:
> 
> Why did you add full emulation of this? The patch I send to you used the
> text_poke_bp(.emulate) argument to have it emulate an actual call to the
> out-of-line version of that function.
> 
> That should work fine and is a lot less code.

Perhaps I pushed the cleanup a bit too far indeed. I wanted to standardize
it just like any other flavour of text patching. I also thought that the
emulate thing was on its way to being deprecated.

Anyway, I'll restore the old version.

Thanks.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0()
  2020-11-10 10:13     ` Peter Zijlstra
@ 2020-11-10 13:42       ` Frederic Weisbecker
  2020-11-10 13:53         ` Peter Zijlstra
  0 siblings, 1 reply; 19+ messages in thread
From: Frederic Weisbecker @ 2020-11-10 13:42 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Tue, Nov 10, 2020 at 11:13:07AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 10, 2020 at 10:55:15AM +0100, Peter Zijlstra wrote:
> > On Tue, Nov 10, 2020 at 01:56:03AM +0100, Frederic Weisbecker wrote:
> > 
> > > [fweisbec: s/disp16/data16, integrate into text_get_insn(), elaborate
> > >  comment on the resulting insn, emulate on int3 trap, provide validation,
> > >  uninline __static_call_return0() for HAVE_STATIC_CALL]
> 
> > Why did you add full emulation of this? The patch I send to you used the
> > text_poke_bp(.emulate) argument to have it emulate an actual call to the
> > out-of-line version of that function.
> > 
> > That should work fine and is a lot less code.
> 
> For reference; the below is what I send you. Actually doing the
> __static_call_return0() call while we poke the magic XOR instruction is
> much simpler.

Ok, I'll get back to that. I'll just tweak static_call_validate() a bit
so that it is aware of that instruction.

Thanks.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 6/7] preempt/dynamic: Provide irqentry_exit_cond_resched() static call
  2020-11-10 10:32   ` Peter Zijlstra
@ 2020-11-10 13:45     ` Frederic Weisbecker
  0 siblings, 0 replies; 19+ messages in thread
From: Frederic Weisbecker @ 2020-11-10 13:45 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Tue, Nov 10, 2020 at 11:32:21AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 10, 2020 at 01:56:08AM +0100, Frederic Weisbecker wrote:
> > [convert from static key to static call, only define static call when
> > PREEMPT_DYNAMIC]
> 
> >  noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
> >  {
> > @@ -383,8 +386,13 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
> >  		}
> >  
> >  		instrumentation_begin();
> > -		if (IS_ENABLED(CONFIG_PREEMPTION))
> > +		if (IS_ENABLED(CONFIG_PREEMPTION)) {
> > +#ifdef CONFIG_PREEMT_DYNAMIC
> > +			static_call(irqentry_exit_cond_resched)();
> > +#else
> >  			irqentry_exit_cond_resched();
> > +#endif
> > +		}
> >  		/* Covers both tracing and lockdep */
> >  		trace_hardirqs_on();
> >  		instrumentation_end();
> 
> The reason I had this a static_branch() is that because if you look at
> the code-gen of this function, you'll find it will inline the call.
> 
> Not sure it matters much, but it avoids a CALL+RET.

I wouldn't mind turning it into a static key but that adds one more
requirement for the architectures that want this, namely proper
support for static keys. Then again, architectures supporting
inline static calls (for now only x86) will probably have proper static
key support as well. So this probably doesn't matter after all.

Thanks.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0()
  2020-11-10 13:42       ` Frederic Weisbecker
@ 2020-11-10 13:53         ` Peter Zijlstra
  0 siblings, 0 replies; 19+ messages in thread
From: Peter Zijlstra @ 2020-11-10 13:53 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Tue, Nov 10, 2020 at 02:42:23PM +0100, Frederic Weisbecker wrote:
> On Tue, Nov 10, 2020 at 11:13:07AM +0100, Peter Zijlstra wrote:
> > On Tue, Nov 10, 2020 at 10:55:15AM +0100, Peter Zijlstra wrote:
> > > On Tue, Nov 10, 2020 at 01:56:03AM +0100, Frederic Weisbecker wrote:
> > > 
> > > > [fweisbec: s/disp16/data16, integrate into text_get_insn(), elaborate
> > > >  comment on the resulting insn, emulate on int3 trap, provide validation,
> > > >  uninline __static_call_return0() for HAVE_STATIC_CALL]
> > 
> > > Why did you add full emulation of this? The patch I send to you used the
> > > text_poke_bp(.emulate) argument to have it emulate an actual call to the
> > > out-of-line version of that function.
> > > 
> > > That should work fine and is a lot less code.
> > 
> > For reference; the below is what I send you. Actually doing the
> > __static_call_return0() call while we poke the magic XOR instruction is
> > much simpler.
> 
> Ok I'll get back to that. I'll just tweak a bit static_call_validate()
> so that it is aware of that instruction.

Ah yes indeed!

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH 4/7] preempt/dynamic: Provide cond_resched() and might_resched() static calls
  2020-11-10 10:48     ` Peter Zijlstra
@ 2021-01-18 13:58       ` Frederic Weisbecker
  0 siblings, 0 replies; 19+ messages in thread
From: Frederic Weisbecker @ 2021-01-18 13:58 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Mel Gorman, Michal Hocko, Thomas Gleixner,
	Paul E . McKenney, Ingo Molnar, Michal Hocko

On Tue, Nov 10, 2020 at 11:48:33AM +0100, Peter Zijlstra wrote:
> On Tue, Nov 10, 2020 at 11:39:09AM +0100, Peter Zijlstra wrote:
> > Subject: static_call: EXPORT_STATIC_CALL_TRAMP()
> > From: Peter Zijlstra <peterz@infradead.org>
> > Date: Tue Nov 10 11:37:48 CET 2020
> > 
> > For when we want to allow modules to call the static_call() but not
> > change it.
> > 
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> > @@ -269,6 +281,9 @@ static inline int static_call_text_reser
> >  	return 0;
> >  }
> >  
> > +#define EXPORT_STATIC_CALL_TRAMP(name)
> > +#define EXPORT_STATIC_CALL_TRAMP_GPL(name)
> > +
> >  #define EXPORT_STATIC_CALL(name)	EXPORT_SYMBOL(STATIC_CALL_KEY(name))
> >  #define EXPORT_STATIC_CALL_GPL(name)	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name))
> 
> Hurmph, this hunk is wrong, it should export the KEY in both cases :/
> 
> That's unfortunate but unavoidable I suppose.

Right, AFAICT static_call() refers to both key and tramp in any case.

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2021-01-18 13:59 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-10  0:56 [RFC PATCH 0/7] preempt: Tune preemption flavour on boot v3 Frederic Weisbecker
2020-11-10  0:56 ` [RFC PATCH 1/7] static_call/x86: Add __static_call_returnl0() Frederic Weisbecker
2020-11-10  9:55   ` Peter Zijlstra
2020-11-10 10:13     ` Peter Zijlstra
2020-11-10 13:42       ` Frederic Weisbecker
2020-11-10 13:53         ` Peter Zijlstra
2020-11-10 13:24     ` Frederic Weisbecker
2020-11-10 10:06   ` Peter Zijlstra
2020-11-10  0:56 ` [RFC PATCH 2/7] static_call: Pull some static_call declarations to the type headers Frederic Weisbecker
2020-11-10  0:56 ` [RFC PATCH 3/7] preempt: Introduce CONFIG_PREEMPT_DYNAMIC Frederic Weisbecker
2020-11-10  0:56 ` [RFC PATCH 4/7] preempt/dynamic: Provide cond_resched() and might_resched() static calls Frederic Weisbecker
2020-11-10 10:39   ` Peter Zijlstra
2020-11-10 10:48     ` Peter Zijlstra
2021-01-18 13:58       ` Frederic Weisbecker
2020-11-10  0:56 ` [RFC PATCH 5/7] preempt/dynamic: Provide preempt_schedule[_notrace]() " Frederic Weisbecker
2020-11-10  0:56 ` [RFC PATCH 6/7] preempt/dynamic: Provide irqentry_exit_cond_resched() static call Frederic Weisbecker
2020-11-10 10:32   ` Peter Zijlstra
2020-11-10 13:45     ` Frederic Weisbecker
2020-11-10  0:56 ` [RFC PATCH 7/7] preempt/dynamic: Support dynamic preempt with preempt= boot option Frederic Weisbecker
