linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code
@ 2018-12-17 16:03 Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 01/12] Revert "x86/jump-labels: Macrofy inline assembly code to work around GCC inlining bugs" Masahiro Yamada
                   ` (13 more replies)
  0 siblings, 14 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-17 16:03 UTC (permalink / raw)
  To: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Masahiro Yamada, Arnd Bergmann, Andrey Ryabinin, virtualization,
	Luc Van Oostenryck, Alok Kataria, Ard Biesheuvel, Jann Horn,
	linux-arch, Alexey Dobriyan, linux-sparse, Andrew Morton,
	linux-kbuild, Yonghong Song, Michal Marek,
	Arnaldo Carvalho de Melo, Jan Beulich, Nadav Amit,
	David Woodhouse, Alexei Starovoitov, linux-kernel

This series reverts the in-kernel workarounds for inlining issues.

The commit description of 77b0bf55bc67 mentioned
"We also hope that GCC will eventually get fixed,..."

Now, GCC provides a solution.

https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html
explains the new "asm inline" syntax.

The performance issue will eventually be solved.

[About Code cleanups]

I know Nadav Amit is opposed to the full revert.
He has also said that his motivation for macrofying was not only
performance, but also cleanups.

IIUC, the criticism addresses the code duplication between C and ASM.

If so, I'd like to suggest a different approach for cleanups.
Please see the last 3 patches.
IMHO, the preprocessor approach is more straightforward and readable.
Basically, this idea should work because it is what we already do for
__ASM_FORM() etc.

[Quick Test of "asm inline" of GCC 9]

If you want to try the "asm inline" feature, the patch is available:
https://lore.kernel.org/patchwork/patch/1024590/

The number of symbols for arch/x86/configs/x86_64_defconfig:

                            nr_symbols
  [1]    v4.20-rc7       :   96502
  [2]    [1]+full revert :   96705   (+203)
  [3]    [2]+"asm inline":   96568   (-137)

[3]: apply my patch, then replace "asm" -> "asm_inline" for
     _BUG_FLAGS(), refcount_add(), refcount_inc(), refcount_dec(),
     annotate_reachable(), annotate_unreachable()


Changes in v3:
  - Split into per-commit revert (per Nadav Amit)
  - Add some cleanups with preprocessor approach

Changes in v2:
  - Revive clean-ups made by 5bdcd510c2ac (per Peter Zijlstra)
  - Fix commit quoting style (per Peter Zijlstra)

Masahiro Yamada (12):
  Revert "x86/jump-labels: Macrofy inline assembly code to work around
    GCC inlining bugs"
  Revert "x86/cpufeature: Macrofy inline assembly code to work around
    GCC inlining bugs"
  Revert "x86/extable: Macrofy inline assembly code to work around GCC
    inlining bugs"
  Revert "x86/paravirt: Work around GCC inlining bugs when compiling
    paravirt ops"
  Revert "x86/bug: Macrofy the BUG table section handling, to work
    around GCC inlining bugs"
  Revert "x86/alternatives: Macrofy lock prefixes to work around GCC
    inlining bugs"
  Revert "x86/refcount: Work around GCC inlining bug"
  Revert "x86/objtool: Use asm macros to work around GCC inlining bugs"
  Revert "kbuild/Makefile: Prepare for using macros in inline assembly
    code to work around asm() related GCC inlining bugs"
  linux/linkage: add ASM() macro to reduce duplication between C/ASM
    code
  x86/alternatives: consolidate LOCK_PREFIX macro
  x86/asm: consolidate ASM_EXTABLE_* macros

 Makefile                                  |  9 +--
 arch/x86/Makefile                         |  7 ---
 arch/x86/include/asm/alternative-asm.h    | 22 +------
 arch/x86/include/asm/alternative-common.h | 47 +++++++++++++++
 arch/x86/include/asm/alternative.h        | 30 +---------
 arch/x86/include/asm/asm.h                | 46 +++++----------
 arch/x86/include/asm/bug.h                | 98 +++++++++++++------------------
 arch/x86/include/asm/cpufeature.h         | 82 +++++++++++---------------
 arch/x86/include/asm/jump_label.h         | 22 +++++--
 arch/x86/include/asm/paravirt_types.h     | 56 +++++++++---------
 arch/x86/include/asm/refcount.h           | 81 +++++++++++--------------
 arch/x86/kernel/macros.S                  | 16 -----
 include/asm-generic/bug.h                 |  8 +--
 include/linux/compiler.h                  | 56 ++++--------------
 include/linux/linkage.h                   |  8 +++
 scripts/Kbuild.include                    |  4 +-
 scripts/mod/Makefile                      |  2 -
 17 files changed, 249 insertions(+), 345 deletions(-)
 create mode 100644 arch/x86/include/asm/alternative-common.h
 delete mode 100644 arch/x86/kernel/macros.S

-- 
2.7.4


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v3 01/12] Revert "x86/jump-labels: Macrofy inline assembly code to work around GCC inlining bugs"
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
@ 2018-12-17 16:03 ` Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 02/12] Revert "x86/cpufeature: " Masahiro Yamada
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-17 16:03 UTC (permalink / raw)
  To: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Masahiro Yamada, linux-kernel, Ard Biesheuvel, Nadav Amit

This partially reverts commit 5bdcd510c2ac9efaf55c4cbd8d46421d8e2320cd.

The in-kernel workarounds will be replaced with GCC's new
"asm inline" syntax.

Only the asm_volatile_goto parts were reverted.

The other cleanups (removal of unneeded #error, replacement of
STATIC_JUMP_IF_TRUE/FALSE with STATIC_JUMP_IF_NOP/JMP) are kept.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

 arch/x86/include/asm/jump_label.h | 22 +++++++++++++++++-----
 arch/x86/kernel/macros.S          |  1 -
 2 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/jump_label.h b/arch/x86/include/asm/jump_label.h
index a5fb34f..cf88ebf 100644
--- a/arch/x86/include/asm/jump_label.h
+++ b/arch/x86/include/asm/jump_label.h
@@ -20,9 +20,15 @@
 
 static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
 {
-	asm_volatile_goto("STATIC_BRANCH_NOP l_yes=\"%l[l_yes]\" key=\"%c0\" "
-			  "branch=\"%c1\""
-			: :  "i" (key), "i" (branch) : : l_yes);
+	asm_volatile_goto("1:"
+		".byte " __stringify(STATIC_KEY_INIT_NOP) "\n\t"
+		".pushsection __jump_table,  \"aw\" \n\t"
+		_ASM_ALIGN "\n\t"
+		".long 1b - ., %l[l_yes] - . \n\t"
+		_ASM_PTR "%c0 + %c1 - .\n\t"
+		".popsection \n\t"
+		: :  "i" (key), "i" (branch) : : l_yes);
+
 	return false;
 l_yes:
 	return true;
@@ -30,8 +36,14 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran
 
 static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
 {
-	asm_volatile_goto("STATIC_BRANCH_JMP l_yes=\"%l[l_yes]\" key=\"%c0\" "
-			  "branch=\"%c1\""
+	asm_volatile_goto("1:"
+		".byte 0xe9\n\t .long %l[l_yes] - 2f\n\t"
+		"2:\n\t"
+		".pushsection __jump_table,  \"aw\" \n\t"
+		_ASM_ALIGN "\n\t"
+		".long 1b - ., %l[l_yes] - . \n\t"
+		_ASM_PTR "%c0 + %c1 - .\n\t"
+		".popsection \n\t"
 		: :  "i" (key), "i" (branch) : : l_yes);
 
 	return false;
diff --git a/arch/x86/kernel/macros.S b/arch/x86/kernel/macros.S
index 161c950..bf8b9c9 100644
--- a/arch/x86/kernel/macros.S
+++ b/arch/x86/kernel/macros.S
@@ -13,4 +13,3 @@
 #include <asm/paravirt.h>
 #include <asm/asm.h>
 #include <asm/cpufeature.h>
-#include <asm/jump_label.h>
-- 
2.7.4



* [PATCH v3 02/12] Revert "x86/cpufeature: Macrofy inline assembly code to work around GCC inlining bugs"
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 01/12] Revert "x86/jump-labels: Macrofy inline assembly code to work around GCC inlining bugs" Masahiro Yamada
@ 2018-12-17 16:03 ` Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 03/12] Revert "x86/extable: " Masahiro Yamada
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-17 16:03 UTC (permalink / raw)
  To: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Masahiro Yamada, David Woodhouse, Alexei Starovoitov,
	linux-kernel, Nadav Amit

This reverts commit d5a581d84ae6b8a4a740464b80d8d9cf1e7947b2.

The in-kernel workarounds will be replaced with GCC's new
"asm inline" syntax.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

 arch/x86/include/asm/cpufeature.h | 82 +++++++++++++++++----------------------
 arch/x86/kernel/macros.S          |  1 -
 2 files changed, 35 insertions(+), 48 deletions(-)

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 7d44272..aced6c9 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -2,10 +2,10 @@
 #ifndef _ASM_X86_CPUFEATURE_H
 #define _ASM_X86_CPUFEATURE_H
 
-#ifdef __KERNEL__
-#ifndef __ASSEMBLY__
-
 #include <asm/processor.h>
+
+#if defined(__KERNEL__) && !defined(__ASSEMBLY__)
+
 #include <asm/asm.h>
 #include <linux/bitops.h>
 
@@ -161,10 +161,37 @@ extern void clear_cpu_cap(struct cpuinfo_x86 *c, unsigned int bit);
  */
 static __always_inline __pure bool _static_cpu_has(u16 bit)
 {
-	asm_volatile_goto("STATIC_CPU_HAS bitnum=%[bitnum] "
-			  "cap_byte=\"%[cap_byte]\" "
-			  "feature=%P[feature] t_yes=%l[t_yes] "
-			  "t_no=%l[t_no] always=%P[always]"
+	asm_volatile_goto("1: jmp 6f\n"
+		 "2:\n"
+		 ".skip -(((5f-4f) - (2b-1b)) > 0) * "
+			 "((5f-4f) - (2b-1b)),0x90\n"
+		 "3:\n"
+		 ".section .altinstructions,\"a\"\n"
+		 " .long 1b - .\n"		/* src offset */
+		 " .long 4f - .\n"		/* repl offset */
+		 " .word %P[always]\n"		/* always replace */
+		 " .byte 3b - 1b\n"		/* src len */
+		 " .byte 5f - 4f\n"		/* repl len */
+		 " .byte 3b - 2b\n"		/* pad len */
+		 ".previous\n"
+		 ".section .altinstr_replacement,\"ax\"\n"
+		 "4: jmp %l[t_no]\n"
+		 "5:\n"
+		 ".previous\n"
+		 ".section .altinstructions,\"a\"\n"
+		 " .long 1b - .\n"		/* src offset */
+		 " .long 0\n"			/* no replacement */
+		 " .word %P[feature]\n"		/* feature bit */
+		 " .byte 3b - 1b\n"		/* src len */
+		 " .byte 0\n"			/* repl len */
+		 " .byte 0\n"			/* pad len */
+		 ".previous\n"
+		 ".section .altinstr_aux,\"ax\"\n"
+		 "6:\n"
+		 " testb %[bitnum],%[cap_byte]\n"
+		 " jnz %l[t_yes]\n"
+		 " jmp %l[t_no]\n"
+		 ".previous\n"
 		 : : [feature]  "i" (bit),
 		     [always]   "i" (X86_FEATURE_ALWAYS),
 		     [bitnum]   "i" (1 << (bit & 7)),
@@ -199,44 +226,5 @@ static __always_inline __pure bool _static_cpu_has(u16 bit)
 #define CPU_FEATURE_TYPEVAL		boot_cpu_data.x86_vendor, boot_cpu_data.x86, \
 					boot_cpu_data.x86_model
 
-#else /* __ASSEMBLY__ */
-
-.macro STATIC_CPU_HAS bitnum:req cap_byte:req feature:req t_yes:req t_no:req always:req
-1:
-	jmp 6f
-2:
-	.skip -(((5f-4f) - (2b-1b)) > 0) * ((5f-4f) - (2b-1b)),0x90
-3:
-	.section .altinstructions,"a"
-	.long 1b - .		/* src offset */
-	.long 4f - .		/* repl offset */
-	.word \always		/* always replace */
-	.byte 3b - 1b		/* src len */
-	.byte 5f - 4f		/* repl len */
-	.byte 3b - 2b		/* pad len */
-	.previous
-	.section .altinstr_replacement,"ax"
-4:
-	jmp \t_no
-5:
-	.previous
-	.section .altinstructions,"a"
-	.long 1b - .		/* src offset */
-	.long 0			/* no replacement */
-	.word \feature		/* feature bit */
-	.byte 3b - 1b		/* src len */
-	.byte 0			/* repl len */
-	.byte 0			/* pad len */
-	.previous
-	.section .altinstr_aux,"ax"
-6:
-	testb \bitnum,\cap_byte
-	jnz \t_yes
-	jmp \t_no
-	.previous
-.endm
-
-#endif /* __ASSEMBLY__ */
-
-#endif /* __KERNEL__ */
+#endif /* defined(__KERNEL__) && !defined(__ASSEMBLY__) */
 #endif /* _ASM_X86_CPUFEATURE_H */
diff --git a/arch/x86/kernel/macros.S b/arch/x86/kernel/macros.S
index bf8b9c9..7baa40d 100644
--- a/arch/x86/kernel/macros.S
+++ b/arch/x86/kernel/macros.S
@@ -12,4 +12,3 @@
 #include <asm/bug.h>
 #include <asm/paravirt.h>
 #include <asm/asm.h>
-#include <asm/cpufeature.h>
-- 
2.7.4



* [PATCH v3 03/12] Revert "x86/extable: Macrofy inline assembly code to work around GCC inlining bugs"
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 01/12] Revert "x86/jump-labels: Macrofy inline assembly code to work around GCC inlining bugs" Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 02/12] Revert "x86/cpufeature: " Masahiro Yamada
@ 2018-12-17 16:03 ` Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 04/12] Revert "x86/paravirt: Work around GCC inlining bugs when compiling paravirt ops" Masahiro Yamada
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-17 16:03 UTC (permalink / raw)
  To: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Masahiro Yamada, Yonghong Song, linux-kernel,
	Arnaldo Carvalho de Melo, Nadav Amit, Jann Horn

This reverts commit 0474d5d9d2f7f3b11262f7bf87d0e7314ead9200.

The in-kernel workarounds will be replaced with GCC's new
"asm inline" syntax.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

I blindly reverted here, but I will clean it up
in a different way later.

 arch/x86/include/asm/asm.h | 53 +++++++++++++++++++++++++++++-----------------
 arch/x86/kernel/macros.S   |  1 -
 2 files changed, 33 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index 21b0867..6467757b 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -120,25 +120,12 @@
 /* Exception table entry */
 #ifdef __ASSEMBLY__
 # define _ASM_EXTABLE_HANDLE(from, to, handler)			\
-	ASM_EXTABLE_HANDLE from to handler
-
-.macro ASM_EXTABLE_HANDLE from:req to:req handler:req
-	.pushsection "__ex_table","a"
-	.balign 4
-	.long (\from) - .
-	.long (\to) - .
-	.long (\handler) - .
+	.pushsection "__ex_table","a" ;				\
+	.balign 4 ;						\
+	.long (from) - . ;					\
+	.long (to) - . ;					\
+	.long (handler) - . ;					\
 	.popsection
-.endm
-#else /* __ASSEMBLY__ */
-
-# define _ASM_EXTABLE_HANDLE(from, to, handler)			\
-	"ASM_EXTABLE_HANDLE from=" #from " to=" #to		\
-	" handler=\"" #handler "\"\n\t"
-
-/* For C file, we already have NOKPROBE_SYMBOL macro */
-
-#endif /* __ASSEMBLY__ */
 
 # define _ASM_EXTABLE(from, to)					\
 	_ASM_EXTABLE_HANDLE(from, to, ex_handler_default)
@@ -161,7 +148,6 @@
 	_ASM_PTR (entry);					\
 	.popsection
 
-#ifdef __ASSEMBLY__
 .macro ALIGN_DESTINATION
 	/* check for bad alignment of destination */
 	movl %edi,%ecx
@@ -185,7 +171,34 @@
 	_ASM_EXTABLE_UA(100b, 103b)
 	_ASM_EXTABLE_UA(101b, 103b)
 	.endm
-#endif /* __ASSEMBLY__ */
+
+#else
+# define _EXPAND_EXTABLE_HANDLE(x) #x
+# define _ASM_EXTABLE_HANDLE(from, to, handler)			\
+	" .pushsection \"__ex_table\",\"a\"\n"			\
+	" .balign 4\n"						\
+	" .long (" #from ") - .\n"				\
+	" .long (" #to ") - .\n"				\
+	" .long (" _EXPAND_EXTABLE_HANDLE(handler) ") - .\n"	\
+	" .popsection\n"
+
+# define _ASM_EXTABLE(from, to)					\
+	_ASM_EXTABLE_HANDLE(from, to, ex_handler_default)
+
+# define _ASM_EXTABLE_UA(from, to)				\
+	_ASM_EXTABLE_HANDLE(from, to, ex_handler_uaccess)
+
+# define _ASM_EXTABLE_FAULT(from, to)				\
+	_ASM_EXTABLE_HANDLE(from, to, ex_handler_fault)
+
+# define _ASM_EXTABLE_EX(from, to)				\
+	_ASM_EXTABLE_HANDLE(from, to, ex_handler_ext)
+
+# define _ASM_EXTABLE_REFCOUNT(from, to)			\
+	_ASM_EXTABLE_HANDLE(from, to, ex_handler_refcount)
+
+/* For C file, we already have NOKPROBE_SYMBOL macro */
+#endif
 
 #ifndef __ASSEMBLY__
 /*
diff --git a/arch/x86/kernel/macros.S b/arch/x86/kernel/macros.S
index 7baa40d..71d8b71 100644
--- a/arch/x86/kernel/macros.S
+++ b/arch/x86/kernel/macros.S
@@ -11,4 +11,3 @@
 #include <asm/alternative-asm.h>
 #include <asm/bug.h>
 #include <asm/paravirt.h>
-#include <asm/asm.h>
-- 
2.7.4



* [PATCH v3 04/12] Revert "x86/paravirt: Work around GCC inlining bugs when compiling paravirt ops"
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
                   ` (2 preceding siblings ...)
  2018-12-17 16:03 ` [PATCH v3 03/12] Revert "x86/extable: " Masahiro Yamada
@ 2018-12-17 16:03 ` Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 05/12] Revert "x86/bug: Macrofy the BUG table section handling, to work around GCC inlining bugs" Masahiro Yamada
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-17 16:03 UTC (permalink / raw)
  To: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Masahiro Yamada, Nadav Amit, linux-kernel, virtualization,
	Alok Kataria

This reverts commit 494b5168f2de009eb80f198f668da374295098dd.

The in-kernel workarounds will be replaced with GCC's new
"asm inline" syntax.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

 arch/x86/include/asm/paravirt_types.h | 56 ++++++++++++++++++-----------------
 arch/x86/kernel/macros.S              |  1 -
 2 files changed, 29 insertions(+), 28 deletions(-)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 26942ad..488c596 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -348,11 +348,23 @@ extern struct paravirt_patch_template pv_ops;
 #define paravirt_clobber(clobber)		\
 	[paravirt_clobber] "i" (clobber)
 
+/*
+ * Generate some code, and mark it as patchable by the
+ * apply_paravirt() alternate instruction patcher.
+ */
+#define _paravirt_alt(insn_string, type, clobber)	\
+	"771:\n\t" insn_string "\n" "772:\n"		\
+	".pushsection .parainstructions,\"a\"\n"	\
+	_ASM_ALIGN "\n"					\
+	_ASM_PTR " 771b\n"				\
+	"  .byte " type "\n"				\
+	"  .byte 772b-771b\n"				\
+	"  .short " clobber "\n"			\
+	".popsection\n"
+
 /* Generate patchable code, with the default asm parameters. */
-#define paravirt_call							\
-	"PARAVIRT_CALL type=\"%c[paravirt_typenum]\""			\
-	" clobber=\"%c[paravirt_clobber]\""				\
-	" pv_opptr=\"%c[paravirt_opptr]\";"
+#define paravirt_alt(insn_string)					\
+	_paravirt_alt(insn_string, "%c[paravirt_typenum]", "%c[paravirt_clobber]")
 
 /* Simple instruction patching code. */
 #define NATIVE_LABEL(a,x,b) "\n\t.globl " a #x "_" #b "\n" a #x "_" #b ":\n\t"
@@ -373,6 +385,16 @@ unsigned native_patch(u8 type, void *ibuf, unsigned long addr, unsigned len);
 int paravirt_disable_iospace(void);
 
 /*
+ * This generates an indirect call based on the operation type number.
+ * The type number, computed in PARAVIRT_PATCH, is derived from the
+ * offset into the paravirt_patch_template structure, and can therefore be
+ * freely converted back into a structure offset.
+ */
+#define PARAVIRT_CALL					\
+	ANNOTATE_RETPOLINE_SAFE				\
+	"call *%c[paravirt_opptr];"
+
+/*
  * These macros are intended to wrap calls through one of the paravirt
  * ops structs, so that they can be later identified and patched at
  * runtime.
@@ -509,7 +531,7 @@ int paravirt_disable_iospace(void);
 		/* since this condition will never hold */		\
 		if (sizeof(rettype) > sizeof(unsigned long)) {		\
 			asm volatile(pre				\
-				     paravirt_call			\
+				     paravirt_alt(PARAVIRT_CALL)	\
 				     post				\
 				     : call_clbr, ASM_CALL_CONSTRAINT	\
 				     : paravirt_type(op),		\
@@ -519,7 +541,7 @@ int paravirt_disable_iospace(void);
 			__ret = (rettype)((((u64)__edx) << 32) | __eax); \
 		} else {						\
 			asm volatile(pre				\
-				     paravirt_call			\
+				     paravirt_alt(PARAVIRT_CALL)	\
 				     post				\
 				     : call_clbr, ASM_CALL_CONSTRAINT	\
 				     : paravirt_type(op),		\
@@ -546,7 +568,7 @@ int paravirt_disable_iospace(void);
 		PVOP_VCALL_ARGS;					\
 		PVOP_TEST_NULL(op);					\
 		asm volatile(pre					\
-			     paravirt_call				\
+			     paravirt_alt(PARAVIRT_CALL)		\
 			     post					\
 			     : call_clbr, ASM_CALL_CONSTRAINT		\
 			     : paravirt_type(op),			\
@@ -664,26 +686,6 @@ struct paravirt_patch_site {
 extern struct paravirt_patch_site __parainstructions[],
 	__parainstructions_end[];
 
-#else	/* __ASSEMBLY__ */
-
-/*
- * This generates an indirect call based on the operation type number.
- * The type number, computed in PARAVIRT_PATCH, is derived from the
- * offset into the paravirt_patch_template structure, and can therefore be
- * freely converted back into a structure offset.
- */
-.macro PARAVIRT_CALL type:req clobber:req pv_opptr:req
-771:	ANNOTATE_RETPOLINE_SAFE
-	call *\pv_opptr
-772:	.pushsection .parainstructions,"a"
-	_ASM_ALIGN
-	_ASM_PTR 771b
-	.byte \type
-	.byte 772b-771b
-	.short \clobber
-	.popsection
-.endm
-
 #endif	/* __ASSEMBLY__ */
 
 #endif	/* _ASM_X86_PARAVIRT_TYPES_H */
diff --git a/arch/x86/kernel/macros.S b/arch/x86/kernel/macros.S
index 71d8b71..66ccb8e 100644
--- a/arch/x86/kernel/macros.S
+++ b/arch/x86/kernel/macros.S
@@ -10,4 +10,3 @@
 #include <asm/refcount.h>
 #include <asm/alternative-asm.h>
 #include <asm/bug.h>
-#include <asm/paravirt.h>
-- 
2.7.4



* [PATCH v3 05/12] Revert "x86/bug: Macrofy the BUG table section handling, to work around GCC inlining bugs"
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
                   ` (3 preceding siblings ...)
  2018-12-17 16:03 ` [PATCH v3 04/12] Revert "x86/paravirt: Work around GCC inlining bugs when compiling paravirt ops" Masahiro Yamada
@ 2018-12-17 16:03 ` Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 06/12] Revert "x86/alternatives: Macrofy lock prefixes " Masahiro Yamada
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-17 16:03 UTC (permalink / raw)
  To: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Masahiro Yamada, linux-arch, Arnd Bergmann, linux-kernel,
	Nadav Amit

This reverts commit f81f8ad56fd1c7b99b2ed1c314527f7d9ac447c6.

The in-kernel workarounds will be replaced with GCC's new
"asm inline" syntax.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

 arch/x86/include/asm/bug.h | 98 ++++++++++++++++++++--------------------------
 arch/x86/kernel/macros.S   |  1 -
 include/asm-generic/bug.h  |  8 ++--
 3 files changed, 46 insertions(+), 61 deletions(-)

diff --git a/arch/x86/include/asm/bug.h b/arch/x86/include/asm/bug.h
index 5090035..6804d66 100644
--- a/arch/x86/include/asm/bug.h
+++ b/arch/x86/include/asm/bug.h
@@ -4,8 +4,6 @@
 
 #include <linux/stringify.h>
 
-#ifndef __ASSEMBLY__
-
 /*
  * Despite that some emulators terminate on UD2, we use it for WARN().
  *
@@ -22,15 +20,53 @@
 
 #define LEN_UD2		2
 
+#ifdef CONFIG_GENERIC_BUG
+
+#ifdef CONFIG_X86_32
+# define __BUG_REL(val)	".long " __stringify(val)
+#else
+# define __BUG_REL(val)	".long " __stringify(val) " - 2b"
+#endif
+
+#ifdef CONFIG_DEBUG_BUGVERBOSE
+
+#define _BUG_FLAGS(ins, flags)						\
+do {									\
+	asm volatile("1:\t" ins "\n"					\
+		     ".pushsection __bug_table,\"aw\"\n"		\
+		     "2:\t" __BUG_REL(1b) "\t# bug_entry::bug_addr\n"	\
+		     "\t"  __BUG_REL(%c0) "\t# bug_entry::file\n"	\
+		     "\t.word %c1"        "\t# bug_entry::line\n"	\
+		     "\t.word %c2"        "\t# bug_entry::flags\n"	\
+		     "\t.org 2b+%c3\n"					\
+		     ".popsection"					\
+		     : : "i" (__FILE__), "i" (__LINE__),		\
+			 "i" (flags),					\
+			 "i" (sizeof(struct bug_entry)));		\
+} while (0)
+
+#else /* !CONFIG_DEBUG_BUGVERBOSE */
+
 #define _BUG_FLAGS(ins, flags)						\
 do {									\
-	asm volatile("ASM_BUG ins=\"" ins "\" file=%c0 line=%c1 "	\
-		     "flags=%c2 size=%c3"				\
-		     : : "i" (__FILE__), "i" (__LINE__),                \
-			 "i" (flags),                                   \
+	asm volatile("1:\t" ins "\n"					\
+		     ".pushsection __bug_table,\"aw\"\n"		\
+		     "2:\t" __BUG_REL(1b) "\t# bug_entry::bug_addr\n"	\
+		     "\t.word %c0"        "\t# bug_entry::flags\n"	\
+		     "\t.org 2b+%c1\n"					\
+		     ".popsection"					\
+		     : : "i" (flags),					\
 			 "i" (sizeof(struct bug_entry)));		\
 } while (0)
 
+#endif /* CONFIG_DEBUG_BUGVERBOSE */
+
+#else
+
+#define _BUG_FLAGS(ins, flags)  asm volatile(ins)
+
+#endif /* CONFIG_GENERIC_BUG */
+
 #define HAVE_ARCH_BUG
 #define BUG()							\
 do {								\
@@ -46,54 +82,4 @@ do {								\
 
 #include <asm-generic/bug.h>
 
-#else /* __ASSEMBLY__ */
-
-#ifdef CONFIG_GENERIC_BUG
-
-#ifdef CONFIG_X86_32
-.macro __BUG_REL val:req
-	.long \val
-.endm
-#else
-.macro __BUG_REL val:req
-	.long \val - 2b
-.endm
-#endif
-
-#ifdef CONFIG_DEBUG_BUGVERBOSE
-
-.macro ASM_BUG ins:req file:req line:req flags:req size:req
-1:	\ins
-	.pushsection __bug_table,"aw"
-2:	__BUG_REL val=1b	# bug_entry::bug_addr
-	__BUG_REL val=\file	# bug_entry::file
-	.word \line		# bug_entry::line
-	.word \flags		# bug_entry::flags
-	.org 2b+\size
-	.popsection
-.endm
-
-#else /* !CONFIG_DEBUG_BUGVERBOSE */
-
-.macro ASM_BUG ins:req file:req line:req flags:req size:req
-1:	\ins
-	.pushsection __bug_table,"aw"
-2:	__BUG_REL val=1b	# bug_entry::bug_addr
-	.word \flags		# bug_entry::flags
-	.org 2b+\size
-	.popsection
-.endm
-
-#endif /* CONFIG_DEBUG_BUGVERBOSE */
-
-#else /* CONFIG_GENERIC_BUG */
-
-.macro ASM_BUG ins:req file:req line:req flags:req size:req
-	\ins
-.endm
-
-#endif /* CONFIG_GENERIC_BUG */
-
-#endif /* __ASSEMBLY__ */
-
 #endif /* _ASM_X86_BUG_H */
diff --git a/arch/x86/kernel/macros.S b/arch/x86/kernel/macros.S
index 66ccb8e..852487a 100644
--- a/arch/x86/kernel/macros.S
+++ b/arch/x86/kernel/macros.S
@@ -9,4 +9,3 @@
 #include <linux/compiler.h>
 #include <asm/refcount.h>
 #include <asm/alternative-asm.h>
-#include <asm/bug.h>
diff --git a/include/asm-generic/bug.h b/include/asm-generic/bug.h
index cdafa5e..20561a6 100644
--- a/include/asm-generic/bug.h
+++ b/include/asm-generic/bug.h
@@ -17,8 +17,10 @@
 #ifndef __ASSEMBLY__
 #include <linux/kernel.h>
 
-struct bug_entry {
+#ifdef CONFIG_BUG
+
 #ifdef CONFIG_GENERIC_BUG
+struct bug_entry {
 #ifndef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
 	unsigned long	bug_addr;
 #else
@@ -33,10 +35,8 @@ struct bug_entry {
 	unsigned short	line;
 #endif
 	unsigned short	flags;
-#endif	/* CONFIG_GENERIC_BUG */
 };
-
-#ifdef CONFIG_BUG
+#endif	/* CONFIG_GENERIC_BUG */
 
 /*
  * Don't use BUG() or BUG_ON() unless there's really no way out; one
-- 
2.7.4



* [PATCH v3 06/12] Revert "x86/alternatives: Macrofy lock prefixes to work around GCC inlining bugs"
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
                   ` (4 preceding siblings ...)
  2018-12-17 16:03 ` [PATCH v3 05/12] Revert "x86/bug: Macrofy the BUG table section handling, to work around GCC inlining bugs" Masahiro Yamada
@ 2018-12-17 16:03 ` Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 07/12] Revert "x86/refcount: Work around GCC inlining bug" Masahiro Yamada
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-17 16:03 UTC (permalink / raw)
  To: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Masahiro Yamada, David Woodhouse, linux-kernel, Alexey Dobriyan,
	Nadav Amit

This reverts commit 77f48ec28e4ccff94d2e5f4260a83ac27a7f3099.

The in-kernel workarounds will be replaced with GCC's new
"asm inline" syntax.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

I blindly reverted here, but I will clean it up
in a different way later.

 arch/x86/include/asm/alternative-asm.h | 20 ++++++--------------
 arch/x86/include/asm/alternative.h     | 11 +++++++++--
 arch/x86/kernel/macros.S               |  1 -
 3 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/alternative-asm.h b/arch/x86/include/asm/alternative-asm.h
index 8e4ea39..31b627b 100644
--- a/arch/x86/include/asm/alternative-asm.h
+++ b/arch/x86/include/asm/alternative-asm.h
@@ -7,24 +7,16 @@
 #include <asm/asm.h>
 
 #ifdef CONFIG_SMP
-.macro LOCK_PREFIX_HERE
+	.macro LOCK_PREFIX
+672:	lock
 	.pushsection .smp_locks,"a"
 	.balign 4
-	.long 671f - .		# offset
+	.long 672b - .
 	.popsection
-671:
-.endm
-
-.macro LOCK_PREFIX insn:vararg
-	LOCK_PREFIX_HERE
-	lock \insn
-.endm
+	.endm
 #else
-.macro LOCK_PREFIX_HERE
-.endm
-
-.macro LOCK_PREFIX insn:vararg
-.endm
+	.macro LOCK_PREFIX
+	.endm
 #endif
 
 /*
diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index d7faa16..4cd6a3b 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -31,8 +31,15 @@
  */
 
 #ifdef CONFIG_SMP
-#define LOCK_PREFIX_HERE "LOCK_PREFIX_HERE\n\t"
-#define LOCK_PREFIX "LOCK_PREFIX "
+#define LOCK_PREFIX_HERE \
+		".pushsection .smp_locks,\"a\"\n"	\
+		".balign 4\n"				\
+		".long 671f - .\n" /* offset */		\
+		".popsection\n"				\
+		"671:"
+
+#define LOCK_PREFIX LOCK_PREFIX_HERE "\n\tlock; "
+
 #else /* ! CONFIG_SMP */
 #define LOCK_PREFIX_HERE ""
 #define LOCK_PREFIX ""
diff --git a/arch/x86/kernel/macros.S b/arch/x86/kernel/macros.S
index 852487a..f1fe1d5 100644
--- a/arch/x86/kernel/macros.S
+++ b/arch/x86/kernel/macros.S
@@ -8,4 +8,3 @@
 
 #include <linux/compiler.h>
 #include <asm/refcount.h>
-#include <asm/alternative-asm.h>
-- 
2.7.4



* [PATCH v3 07/12] Revert "x86/refcount: Work around GCC inlining bug"
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
                   ` (5 preceding siblings ...)
  2018-12-17 16:03 ` [PATCH v3 06/12] Revert "x86/alternatives: Macrofy lock prefixes " Masahiro Yamada
@ 2018-12-17 16:03 ` Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 08/12] Revert "x86/objtool: Use asm macros to work around GCC inlining bugs" Masahiro Yamada
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-17 16:03 UTC (permalink / raw)
  To: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Masahiro Yamada, Alexey Dobriyan, linux-kernel, Jan Beulich,
	Nadav Amit

This reverts commit 9e1725b410594911cc5981b6c7b4cea4ec054ca8.

The in-kernel workarounds will be replaced with GCC's new
"asm inline" syntax.

Resolved conflicts caused by 288e4521f0f6 ("x86/asm: 'Simplify'
GEN_*_RMWcc() macros").

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

 arch/x86/include/asm/refcount.h | 81 +++++++++++++++++------------------------
 arch/x86/kernel/macros.S        |  1 -
 2 files changed, 33 insertions(+), 49 deletions(-)

diff --git a/arch/x86/include/asm/refcount.h b/arch/x86/include/asm/refcount.h
index a8b5e1e..dbaed55 100644
--- a/arch/x86/include/asm/refcount.h
+++ b/arch/x86/include/asm/refcount.h
@@ -4,41 +4,6 @@
  * x86-specific implementation of refcount_t. Based on PAX_REFCOUNT from
  * PaX/grsecurity.
  */
-
-#ifdef __ASSEMBLY__
-
-#include <asm/asm.h>
-#include <asm/bug.h>
-
-.macro REFCOUNT_EXCEPTION counter:req
-	.pushsection .text..refcount
-111:	lea \counter, %_ASM_CX
-112:	ud2
-	ASM_UNREACHABLE
-	.popsection
-113:	_ASM_EXTABLE_REFCOUNT(112b, 113b)
-.endm
-
-/* Trigger refcount exception if refcount result is negative. */
-.macro REFCOUNT_CHECK_LT_ZERO counter:req
-	js 111f
-	REFCOUNT_EXCEPTION counter="\counter"
-.endm
-
-/* Trigger refcount exception if refcount result is zero or negative. */
-.macro REFCOUNT_CHECK_LE_ZERO counter:req
-	jz 111f
-	REFCOUNT_CHECK_LT_ZERO counter="\counter"
-.endm
-
-/* Trigger refcount exception unconditionally. */
-.macro REFCOUNT_ERROR counter:req
-	jmp 111f
-	REFCOUNT_EXCEPTION counter="\counter"
-.endm
-
-#else /* __ASSEMBLY__ */
-
 #include <linux/refcount.h>
 #include <asm/bug.h>
 
@@ -50,12 +15,35 @@
  * central refcount exception. The fixup address for the exception points
  * back to the regular execution flow in .text.
  */
+#define _REFCOUNT_EXCEPTION				\
+	".pushsection .text..refcount\n"		\
+	"111:\tlea %[var], %%" _ASM_CX "\n"		\
+	"112:\t" ASM_UD2 "\n"				\
+	ASM_UNREACHABLE					\
+	".popsection\n"					\
+	"113:\n"					\
+	_ASM_EXTABLE_REFCOUNT(112b, 113b)
+
+/* Trigger refcount exception if refcount result is negative. */
+#define REFCOUNT_CHECK_LT_ZERO				\
+	"js 111f\n\t"					\
+	_REFCOUNT_EXCEPTION
+
+/* Trigger refcount exception if refcount result is zero or negative. */
+#define REFCOUNT_CHECK_LE_ZERO				\
+	"jz 111f\n\t"					\
+	REFCOUNT_CHECK_LT_ZERO
+
+/* Trigger refcount exception unconditionally. */
+#define REFCOUNT_ERROR					\
+	"jmp 111f\n\t"					\
+	_REFCOUNT_EXCEPTION
 
 static __always_inline void refcount_add(unsigned int i, refcount_t *r)
 {
 	asm volatile(LOCK_PREFIX "addl %1,%0\n\t"
-		"REFCOUNT_CHECK_LT_ZERO counter=\"%[counter]\""
-		: [counter] "+m" (r->refs.counter)
+		REFCOUNT_CHECK_LT_ZERO
+		: [var] "+m" (r->refs.counter)
 		: "ir" (i)
 		: "cc", "cx");
 }
@@ -63,32 +51,31 @@ static __always_inline void refcount_add(unsigned int i, refcount_t *r)
 static __always_inline void refcount_inc(refcount_t *r)
 {
 	asm volatile(LOCK_PREFIX "incl %0\n\t"
-		"REFCOUNT_CHECK_LT_ZERO counter=\"%[counter]\""
-		: [counter] "+m" (r->refs.counter)
+		REFCOUNT_CHECK_LT_ZERO
+		: [var] "+m" (r->refs.counter)
 		: : "cc", "cx");
 }
 
 static __always_inline void refcount_dec(refcount_t *r)
 {
 	asm volatile(LOCK_PREFIX "decl %0\n\t"
-		"REFCOUNT_CHECK_LE_ZERO counter=\"%[counter]\""
-		: [counter] "+m" (r->refs.counter)
+		REFCOUNT_CHECK_LE_ZERO
+		: [var] "+m" (r->refs.counter)
 		: : "cc", "cx");
 }
 
 static __always_inline __must_check
 bool refcount_sub_and_test(unsigned int i, refcount_t *r)
 {
-
 	return GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl",
-					 "REFCOUNT_CHECK_LT_ZERO counter=\"%[var]\"",
+					 REFCOUNT_CHECK_LT_ZERO,
 					 r->refs.counter, e, "er", i, "cx");
 }
 
 static __always_inline __must_check bool refcount_dec_and_test(refcount_t *r)
 {
 	return GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl",
-					"REFCOUNT_CHECK_LT_ZERO counter=\"%[var]\"",
+					REFCOUNT_CHECK_LT_ZERO,
 					r->refs.counter, e, "cx");
 }
 
@@ -106,8 +93,8 @@ bool refcount_add_not_zero(unsigned int i, refcount_t *r)
 
 		/* Did we try to increment from/to an undesirable state? */
 		if (unlikely(c < 0 || c == INT_MAX || result < c)) {
-			asm volatile("REFCOUNT_ERROR counter=\"%[counter]\""
-				     : : [counter] "m" (r->refs.counter)
+			asm volatile(REFCOUNT_ERROR
+				     : : [var] "m" (r->refs.counter)
 				     : "cc", "cx");
 			break;
 		}
@@ -122,6 +109,4 @@ static __always_inline __must_check bool refcount_inc_not_zero(refcount_t *r)
 	return refcount_add_not_zero(1, r);
 }
 
-#endif /* __ASSEMBLY__ */
-
 #endif
diff --git a/arch/x86/kernel/macros.S b/arch/x86/kernel/macros.S
index f1fe1d5..cee28c3 100644
--- a/arch/x86/kernel/macros.S
+++ b/arch/x86/kernel/macros.S
@@ -7,4 +7,3 @@
  */
 
 #include <linux/compiler.h>
-#include <asm/refcount.h>
-- 
2.7.4



* [PATCH v3 08/12] Revert "x86/objtool: Use asm macros to work around GCC inlining bugs"
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
                   ` (6 preceding siblings ...)
  2018-12-17 16:03 ` [PATCH v3 07/12] Revert "x86/refcount: Work around GCC inlining bug" Masahiro Yamada
@ 2018-12-17 16:03 ` Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 09/12] Revert "kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related " Masahiro Yamada
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-17 16:03 UTC (permalink / raw)
  To: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Masahiro Yamada, Luc Van Oostenryck, linux-kernel, linux-sparse,
	Nadav Amit

This reverts commit c06c4d8090513f2974dfdbed2ac98634357ac475.

The in-kernel workarounds will be replaced with GCC's new
"asm inline" syntax.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

 arch/x86/kernel/macros.S |  2 --
 include/linux/compiler.h | 56 +++++++++++-------------------------------------
 2 files changed, 13 insertions(+), 45 deletions(-)

diff --git a/arch/x86/kernel/macros.S b/arch/x86/kernel/macros.S
index cee28c3..cfc1c7d 100644
--- a/arch/x86/kernel/macros.S
+++ b/arch/x86/kernel/macros.S
@@ -5,5 +5,3 @@
  * commonly used. The macros are precompiled into assmebly file which is later
  * assembled together with each compiled file.
  */
-
-#include <linux/compiler.h>
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 06396c1..fc5004a 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -99,13 +99,22 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
  * unique, to convince GCC not to merge duplicate inline asm statements.
  */
 #define annotate_reachable() ({						\
-	asm volatile("ANNOTATE_REACHABLE counter=%c0"			\
-		     : : "i" (__COUNTER__));				\
+	asm volatile("%c0:\n\t"						\
+		     ".pushsection .discard.reachable\n\t"		\
+		     ".long %c0b - .\n\t"				\
+		     ".popsection\n\t" : : "i" (__COUNTER__));		\
 })
 #define annotate_unreachable() ({					\
-	asm volatile("ANNOTATE_UNREACHABLE counter=%c0"			\
-		     : : "i" (__COUNTER__));				\
+	asm volatile("%c0:\n\t"						\
+		     ".pushsection .discard.unreachable\n\t"		\
+		     ".long %c0b - .\n\t"				\
+		     ".popsection\n\t" : : "i" (__COUNTER__));		\
 })
+#define ASM_UNREACHABLE							\
+	"999:\n\t"							\
+	".pushsection .discard.unreachable\n\t"				\
+	".long 999b - .\n\t"						\
+	".popsection\n\t"
 #else
 #define annotate_reachable()
 #define annotate_unreachable()
@@ -293,45 +302,6 @@ static inline void *offset_to_ptr(const int *off)
 	return (void *)((unsigned long)off + *off);
 }
 
-#else /* __ASSEMBLY__ */
-
-#ifdef __KERNEL__
-#ifndef LINKER_SCRIPT
-
-#ifdef CONFIG_STACK_VALIDATION
-.macro ANNOTATE_UNREACHABLE counter:req
-\counter:
-	.pushsection .discard.unreachable
-	.long \counter\()b -.
-	.popsection
-.endm
-
-.macro ANNOTATE_REACHABLE counter:req
-\counter:
-	.pushsection .discard.reachable
-	.long \counter\()b -.
-	.popsection
-.endm
-
-.macro ASM_UNREACHABLE
-999:
-	.pushsection .discard.unreachable
-	.long 999b - .
-	.popsection
-.endm
-#else /* CONFIG_STACK_VALIDATION */
-.macro ANNOTATE_UNREACHABLE counter:req
-.endm
-
-.macro ANNOTATE_REACHABLE counter:req
-.endm
-
-.macro ASM_UNREACHABLE
-.endm
-#endif /* CONFIG_STACK_VALIDATION */
-
-#endif /* LINKER_SCRIPT */
-#endif /* __KERNEL__ */
 #endif /* __ASSEMBLY__ */
 
 /* Compile time object size, -1 for unknown */
-- 
2.7.4



* [PATCH v3 09/12] Revert "kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs"
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
                   ` (7 preceding siblings ...)
  2018-12-17 16:03 ` [PATCH v3 08/12] Revert "x86/objtool: Use asm macros to work around GCC inlining bugs" Masahiro Yamada
@ 2018-12-17 16:03 ` Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 10/12] linux/linkage: add ASM() macro to reduce duplication between C/ASM code Masahiro Yamada
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-17 16:03 UTC (permalink / raw)
  To: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Masahiro Yamada, Logan Gunthorpe, Nadav Amit, linux-kbuild,
	linux-kernel, Michal Marek

This reverts commit 77b0bf55bc675233d22cd5df97605d516d64525e.

A few days after the patch set was applied, a discussion started about
solving the issue more elegantly with the help of the compiler:

  https://lkml.org/lkml/2018/10/7/92

The new "asm inline" syntax was implemented by Segher Boessenkool, and
is now queued up for GCC 9. (People were positive even about back-porting
it to older compilers.)

Since the in-kernel workarounds were merged, some issues have been reported.
The currently urgent one concerns distro packages for module building. More
fundamentally, we cannot build external modules after 'make clean'
because *.s files are globally removed.

We could fix those in Makefiles, but I do not want to mess up the build
system any more.

Given that this issue will be solved in a cleaner way sooner or later,
let's revert the in-kernel workarounds, and wait for GCC 9.

Link: https://lkml.org/lkml/2018/11/15/467
Link: https://marc.info/?t=154212770600037&r=1&w=2
Reported-by: Sedat Dilek <sedat.dilek@gmail.com> # deb/rpm package
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Segher Boessenkool <segher@kernel.crashing.org>
---


 Makefile                 | 9 ++-------
 arch/x86/Makefile        | 7 -------
 arch/x86/kernel/macros.S | 7 -------
 scripts/Kbuild.include   | 4 +---
 scripts/mod/Makefile     | 2 --
 5 files changed, 3 insertions(+), 26 deletions(-)
 delete mode 100644 arch/x86/kernel/macros.S

diff --git a/Makefile b/Makefile
index 56d5270..885c74b 100644
--- a/Makefile
+++ b/Makefile
@@ -1081,7 +1081,7 @@ scripts: scripts_basic scripts_dtc asm-generic gcc-plugins $(autoksyms_h)
 # version.h and scripts_basic is processed / created.
 
 # Listed in dependency order
-PHONY += prepare archprepare macroprepare prepare0 prepare1 prepare2 prepare3
+PHONY += prepare archprepare prepare0 prepare1 prepare2 prepare3
 
 # prepare3 is used to check if we are building in a separate output directory,
 # and if so do:
@@ -1104,9 +1104,7 @@ prepare2: prepare3 outputmakefile asm-generic
 prepare1: prepare2 $(version_h) $(autoksyms_h) include/generated/utsrelease.h
 	$(cmd_crmodverdir)
 
-macroprepare: prepare1 archmacros
-
-archprepare: archheaders archscripts macroprepare scripts_basic
+archprepare: archheaders archscripts prepare1 scripts_basic
 
 prepare0: archprepare gcc-plugins
 	$(Q)$(MAKE) $(build)=.
@@ -1174,9 +1172,6 @@ archheaders:
 PHONY += archscripts
 archscripts:
 
-PHONY += archmacros
-archmacros:
-
 PHONY += __headers
 __headers: $(version_h) scripts_basic uapi-asm-generic archheaders archscripts
 	$(Q)$(MAKE) $(build)=scripts build_unifdef
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 75ef499..85a66c4 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -232,13 +232,6 @@ archscripts: scripts_basic
 archheaders:
 	$(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all
 
-archmacros:
-	$(Q)$(MAKE) $(build)=arch/x86/kernel arch/x86/kernel/macros.s
-
-ASM_MACRO_FLAGS = -Wa,arch/x86/kernel/macros.s
-export ASM_MACRO_FLAGS
-KBUILD_CFLAGS += $(ASM_MACRO_FLAGS)
-
 ###
 # Kernel objects
 
diff --git a/arch/x86/kernel/macros.S b/arch/x86/kernel/macros.S
deleted file mode 100644
index cfc1c7d..0000000
--- a/arch/x86/kernel/macros.S
+++ /dev/null
@@ -1,7 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-
-/*
- * This file includes headers whose assembly part includes macros which are
- * commonly used. The macros are precompiled into assmebly file which is later
- * assembled together with each compiled file.
- */
diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
index bb01555..3d09844 100644
--- a/scripts/Kbuild.include
+++ b/scripts/Kbuild.include
@@ -115,9 +115,7 @@ __cc-option = $(call try-run,\
 
 # Do not attempt to build with gcc plugins during cc-option tests.
 # (And this uses delayed resolution so the flags will be up to date.)
-# In addition, do not include the asm macros which are built later.
-CC_OPTION_FILTERED = $(GCC_PLUGINS_CFLAGS) $(ASM_MACRO_FLAGS)
-CC_OPTION_CFLAGS = $(filter-out $(CC_OPTION_FILTERED),$(KBUILD_CFLAGS))
+CC_OPTION_CFLAGS = $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS))
 
 # cc-option
 # Usage: cflags-y += $(call cc-option,-march=winchip-c6,-march=i586)
diff --git a/scripts/mod/Makefile b/scripts/mod/Makefile
index a5b4af4..42c5d50 100644
--- a/scripts/mod/Makefile
+++ b/scripts/mod/Makefile
@@ -4,8 +4,6 @@ OBJECT_FILES_NON_STANDARD := y
 hostprogs-y	:= modpost mk_elfconfig
 always		:= $(hostprogs-y) empty.o
 
-CFLAGS_REMOVE_empty.o := $(ASM_MACRO_FLAGS)
-
 modpost-objs	:= modpost.o file2alias.o sumversion.o
 
 devicetable-offsets-file := devicetable-offsets.h
-- 
2.7.4



* [PATCH v3 10/12] linux/linkage: add ASM() macro to reduce duplication between C/ASM code
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
                   ` (8 preceding siblings ...)
  2018-12-17 16:03 ` [PATCH v3 09/12] Revert "kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related " Masahiro Yamada
@ 2018-12-17 16:03 ` Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 11/12] x86/alternatives: consolidate LOCK_PREFIX macro Masahiro Yamada
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-17 16:03 UTC (permalink / raw)
  To: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Masahiro Yamada, Andrey Ryabinin, Andrew Morton, linux-kernel

We often duplicate similar assembly code so that it can be used from
both .c and .S files. The difference is mostly the presence of double quotes.

So, here is a new macro, ASM().
(We take a similar approach with __ASM_FORM(), etc.)

The usage is like this:

    #define MY_CODE              \
    ASM(    .section ".text"    )\
    ASM(    movl $1, %eax       )

In C context, the preprocessor expands it into:

  ".section \".text\"" "\n\t" "movl $1, %eax" "\n\t"

This is perfect for use from inline asm(...) in .c files.

In assembly context, the preprocessor expands it into:

  .section ".text" ; movl $1, %eax ;

This is fine for use in .S files.

I used double expansion, like __stringify(), so that macros can be
used in ASM(...).

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

 include/linux/linkage.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index 7c47b1a..80faeae 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -12,6 +12,14 @@
 #define ASM_NL		 ;
 #endif
 
+#ifdef __ASSEMBLY__
+#define _ASM(x...)	x ASM_NL
+#else
+#define _ASM(x...)	#x __stringify(\n\t)
+#endif
+/* Doing two levels allows macros to be used in ASM(...) */
+#define ASM(x...)	_ASM(x)
+
 #ifdef __cplusplus
 #define CPP_ASMLINKAGE extern "C"
 #else
-- 
2.7.4



* [PATCH v3 11/12] x86/alternatives: consolidate LOCK_PREFIX macro
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
                   ` (9 preceding siblings ...)
  2018-12-17 16:03 ` [PATCH v3 10/12] linux/linkage: add ASM() macro to reduce duplication between C/ASM code Masahiro Yamada
@ 2018-12-17 16:03 ` Masahiro Yamada
  2018-12-17 16:03 ` [PATCH v3 12/12] x86/asm: consolidate ASM_EXTABLE_* macros Masahiro Yamada
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-17 16:03 UTC (permalink / raw)
  To: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Masahiro Yamada, David Woodhouse, linux-kernel, Alexey Dobriyan,
	Nadav Amit

LOCK_PREFIX is mostly used in inline asm, but it is also used by
atomic64_cx8_32.S.

Let's unify the definition by using the ASM() macro.

This was previously cleaned up by 77f48ec28e4c ("x86/alternatives:
Macrofy lock prefixes to work around GCC inlining bugs").

Now, I am refactoring the code without the asm-macro approach.

The new header <asm/alternative-common.h> contains macros that
can be used by C and assembly.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

 arch/x86/include/asm/alternative-asm.h    | 14 +--------
 arch/x86/include/asm/alternative-common.h | 47 +++++++++++++++++++++++++++++++
 arch/x86/include/asm/alternative.h        | 37 +-----------------------
 3 files changed, 49 insertions(+), 49 deletions(-)
 create mode 100644 arch/x86/include/asm/alternative-common.h

diff --git a/arch/x86/include/asm/alternative-asm.h b/arch/x86/include/asm/alternative-asm.h
index 31b627b..7425514 100644
--- a/arch/x86/include/asm/alternative-asm.h
+++ b/arch/x86/include/asm/alternative-asm.h
@@ -4,21 +4,9 @@
 
 #ifdef __ASSEMBLY__
 
+#include <asm/alternative-common.h>
 #include <asm/asm.h>
 
-#ifdef CONFIG_SMP
-	.macro LOCK_PREFIX
-672:	lock
-	.pushsection .smp_locks,"a"
-	.balign 4
-	.long 672b - .
-	.popsection
-	.endm
-#else
-	.macro LOCK_PREFIX
-	.endm
-#endif
-
 /*
  * Issue one struct alt_instr descriptor entry (need to put it into
  * the section .altinstructions, see below). This entry contains
diff --git a/arch/x86/include/asm/alternative-common.h b/arch/x86/include/asm/alternative-common.h
new file mode 100644
index 0000000..ae0b58f
--- /dev/null
+++ b/arch/x86/include/asm/alternative-common.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_ALTERNATIVE_COMMON_H
+#define _ASM_X86_ALTERNATIVE_COMMON_H
+
+#include <linux/linkage.h>
+
+/*
+ * Alternative inline assembly for SMP.
+ *
+ * The LOCK_PREFIX macro defined here replaces the LOCK and
+ * LOCK_PREFIX macros used everywhere in the source tree.
+ *
+ * SMP alternatives use the same data structures as the other
+ * alternatives and the X86_FEATURE_UP flag to indicate the case of a
+ * UP system running a SMP kernel.  The existing apply_alternatives()
+ * works fine for patching a SMP kernel for UP.
+ *
+ * The SMP alternative tables can be kept after boot and contain both
+ * UP and SMP versions of the instructions to allow switching back to
+ * SMP at runtime, when hotplugging in a new CPU, which is especially
+ * useful in virtualized environments.
+ *
+ * The very common lock prefix is handled as special case in a
+ * separate table which is a pure address list without replacement ptr
+ * and size information.  That keeps the table sizes small.
+ */
+
+#include <linux/linkage.h>
+#include <linux/stringify.h>
+
+#ifdef CONFIG_SMP
+
+#define LOCK_PREFIX_HERE			 \
+ASM(	.pushsection .smp_locks,"a"		)\
+ASM(	.balign 4				)\
+ASM(	.long 671f - .				)\
+ASM(	.popsection				)\
+ASM( 671:					)
+
+#define LOCK_PREFIX	LOCK_PREFIX_HERE	ASM(lock)
+
+#else /* ! CONFIG_SMP */
+#define LOCK_PREFIX_HERE	ASM()
+#define LOCK_PREFIX		ASM()
+#endif
+
+#endif  /* _ASM_X86_ALTERNATIVE_COMMON_H */
diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index 4cd6a3b..157967c 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -7,44 +7,9 @@
 #include <linux/types.h>
 #include <linux/stddef.h>
 #include <linux/stringify.h>
+#include <asm/alternative-common.h>
 #include <asm/asm.h>
 
-/*
- * Alternative inline assembly for SMP.
- *
- * The LOCK_PREFIX macro defined here replaces the LOCK and
- * LOCK_PREFIX macros used everywhere in the source tree.
- *
- * SMP alternatives use the same data structures as the other
- * alternatives and the X86_FEATURE_UP flag to indicate the case of a
- * UP system running a SMP kernel.  The existing apply_alternatives()
- * works fine for patching a SMP kernel for UP.
- *
- * The SMP alternative tables can be kept after boot and contain both
- * UP and SMP versions of the instructions to allow switching back to
- * SMP at runtime, when hotplugging in a new CPU, which is especially
- * useful in virtualized environments.
- *
- * The very common lock prefix is handled as special case in a
- * separate table which is a pure address list without replacement ptr
- * and size information.  That keeps the table sizes small.
- */
-
-#ifdef CONFIG_SMP
-#define LOCK_PREFIX_HERE \
-		".pushsection .smp_locks,\"a\"\n"	\
-		".balign 4\n"				\
-		".long 671f - .\n" /* offset */		\
-		".popsection\n"				\
-		"671:"
-
-#define LOCK_PREFIX LOCK_PREFIX_HERE "\n\tlock; "
-
-#else /* ! CONFIG_SMP */
-#define LOCK_PREFIX_HERE ""
-#define LOCK_PREFIX ""
-#endif
-
 struct alt_instr {
 	s32 instr_offset;	/* original instruction */
 	s32 repl_offset;	/* offset to replacement instruction */
-- 
2.7.4



* [PATCH v3 12/12] x86/asm: consolidate ASM_EXTABLE_* macros
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
                   ` (10 preceding siblings ...)
  2018-12-17 16:03 ` [PATCH v3 11/12] x86/alternatives: consolidate LOCK_PREFIX macro Masahiro Yamada
@ 2018-12-17 16:03 ` Masahiro Yamada
  2018-12-18 19:43 ` [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Nadav Amit
  2018-12-19 11:20 ` Ingo Molnar
  13 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-17 16:03 UTC (permalink / raw)
  To: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Masahiro Yamada, linux-kernel, Arnaldo Carvalho de Melo,
	Nadav Amit, Jann Horn

These macros are used by both .c and .S files.

Let's unify the definitions by using the ASM() macro.

This was previously cleaned up by 0474d5d9d2f7 ("x86/extable:
Macrofy inline assembly code to work around GCC inlining bugs").

Now, I am refactoring the code without the asm-macro approach.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

 arch/x86/include/asm/asm.h | 57 ++++++++++++----------------------------------
 1 file changed, 15 insertions(+), 42 deletions(-)

diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index 6467757b..cff3b0a 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -118,36 +118,36 @@
 #endif
 
 /* Exception table entry */
-#ifdef __ASSEMBLY__
-# define _ASM_EXTABLE_HANDLE(from, to, handler)			\
-	.pushsection "__ex_table","a" ;				\
-	.balign 4 ;						\
-	.long (from) - . ;					\
-	.long (to) - . ;					\
-	.long (handler) - . ;					\
-	.popsection
-
-# define _ASM_EXTABLE(from, to)					\
+#define _ASM_EXTABLE_HANDLE(from, to, handler)			 \
+ASM(	.pushsection "__ex_table","a" ;				)\
+ASM(	.balign 4 ;						)\
+ASM(	.long (from) - . ;					)\
+ASM(	.long (to) - . ;					)\
+ASM(	.long (handler) - . ;					)\
+ASM(	.popsection						)
+
+#define _ASM_EXTABLE(from, to)					\
 	_ASM_EXTABLE_HANDLE(from, to, ex_handler_default)
 
-# define _ASM_EXTABLE_UA(from, to)				\
+#define _ASM_EXTABLE_UA(from, to)				\
 	_ASM_EXTABLE_HANDLE(from, to, ex_handler_uaccess)
 
-# define _ASM_EXTABLE_FAULT(from, to)				\
+#define _ASM_EXTABLE_FAULT(from, to)				\
 	_ASM_EXTABLE_HANDLE(from, to, ex_handler_fault)
 
-# define _ASM_EXTABLE_EX(from, to)				\
+#define _ASM_EXTABLE_EX(from, to)				\
 	_ASM_EXTABLE_HANDLE(from, to, ex_handler_ext)
 
-# define _ASM_EXTABLE_REFCOUNT(from, to)			\
+#define _ASM_EXTABLE_REFCOUNT(from, to)			\
 	_ASM_EXTABLE_HANDLE(from, to, ex_handler_refcount)
 
-# define _ASM_NOKPROBE(entry)					\
+#define _ASM_NOKPROBE(entry)					\
 	.pushsection "_kprobe_blacklist","aw" ;			\
 	_ASM_ALIGN ;						\
 	_ASM_PTR (entry);					\
 	.popsection
 
+#ifdef __ASSEMBLY__
 .macro ALIGN_DESTINATION
 	/* check for bad alignment of destination */
 	movl %edi,%ecx
@@ -173,34 +173,7 @@
 	.endm
 
 #else
-# define _EXPAND_EXTABLE_HANDLE(x) #x
-# define _ASM_EXTABLE_HANDLE(from, to, handler)			\
-	" .pushsection \"__ex_table\",\"a\"\n"			\
-	" .balign 4\n"						\
-	" .long (" #from ") - .\n"				\
-	" .long (" #to ") - .\n"				\
-	" .long (" _EXPAND_EXTABLE_HANDLE(handler) ") - .\n"	\
-	" .popsection\n"
-
-# define _ASM_EXTABLE(from, to)					\
-	_ASM_EXTABLE_HANDLE(from, to, ex_handler_default)
-
-# define _ASM_EXTABLE_UA(from, to)				\
-	_ASM_EXTABLE_HANDLE(from, to, ex_handler_uaccess)
-
-# define _ASM_EXTABLE_FAULT(from, to)				\
-	_ASM_EXTABLE_HANDLE(from, to, ex_handler_fault)
-
-# define _ASM_EXTABLE_EX(from, to)				\
-	_ASM_EXTABLE_HANDLE(from, to, ex_handler_ext)
-
-# define _ASM_EXTABLE_REFCOUNT(from, to)			\
-	_ASM_EXTABLE_HANDLE(from, to, ex_handler_refcount)
-
-/* For C file, we already have NOKPROBE_SYMBOL macro */
-#endif
 
-#ifndef __ASSEMBLY__
 /*
  * This output constraint should be used for any inline asm which has a "call"
  * instruction.  Otherwise the asm may be inserted before the frame pointer
-- 
2.7.4



* Re: [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
                   ` (11 preceding siblings ...)
  2018-12-17 16:03 ` [PATCH v3 12/12] x86/asm: consolidate ASM_EXTABLE_* macros Masahiro Yamada
@ 2018-12-18 19:43 ` Nadav Amit
  2018-12-19  3:19   ` Masahiro Yamada
  2018-12-19 11:20 ` Ingo Molnar
  13 siblings, 1 reply; 17+ messages in thread
From: Nadav Amit @ 2018-12-18 19:43 UTC (permalink / raw)
  To: Masahiro Yamada
  Cc: X86 ML, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H . Peter Anvin, Richard Biener, Segher Boessenkool,
	Peter Zijlstra, Juergen Gross, Josh Poimboeuf, Kees Cook,
	Linus Torvalds, Arnd Bergmann, Andrey Ryabinin, virtualization,
	Luc Van Oostenryck, Alok Kataria, Ard Biesheuvel, Jann Horn,
	linux-arch, Alexey Dobriyan, linux-sparse, Andrew Morton,
	Linux Kbuild mailing list, Yonghong Song, Michal Marek,
	Arnaldo Carvalho de Melo, Jan Beulich, David Woodhouse,
	Alexei Starovoitov, linux-kernel

> On Dec 17, 2018, at 8:03 AM, Masahiro Yamada <yamada.masahiro@socionext.com> wrote:
> 
> This series reverts the in-kernel workarounds for inlining issues.
> 
> The commit description of 77b0bf55bc67 mentioned
> "We also hope that GCC will eventually get fixed,..."
> 
> Now, GCC provides a solution.
> 
> https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html
> explains the new "asm inline" syntax.
> 
> The performance issue will be eventually solved.
> 
> [About Code cleanups]
> 
> I know Nadam Amit is opposed to the full revert.

My name is Nadav.

> He also claims his motivation for macrofying was not only
> performance, but also cleanups.

Masahiro, I understand your concerns and criticism, and indeed various
alternative solutions exist. I do have my reservations about the one you
propose, since it makes coding more complicated to simplify the Make system.
Yet, more importantly, starting this discussion suddenly now, after RC7,
is strange. Anyhow, since it’s obviously not my call, please don’t make
it sound as if I am a party to the decision.



* Re: [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code
  2018-12-18 19:43 ` [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Nadav Amit
@ 2018-12-19  3:19   ` Masahiro Yamada
  0 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-19  3:19 UTC (permalink / raw)
  To: Nadav Amit, X86 ML, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H . Peter Anvin
  Cc: Richard Biener, Segher Boessenkool, Peter Zijlstra,
	Juergen Gross, Josh Poimboeuf, Kees Cook, Linus Torvalds,
	Arnd Bergmann, Andrey Ryabinin, virtualization,
	Luc Van Oostenryck, Alok Kataria, Ard Biesheuvel, Jann Horn,
	linux-arch, Alexey Dobriyan, linux-sparse, Andrew Morton,
	Linux Kbuild mailing list, Yonghong Song, Michal Marek,
	Arnaldo Carvalho de Melo, Jan Beulich, David Woodhouse,
	Alexei Starovoitov, linux-kernel

On Wed, Dec 19, 2018 at 5:26 AM Nadav Amit <namit@vmware.com> wrote:
>
> > On Dec 17, 2018, at 8:03 AM, Masahiro Yamada <yamada.masahiro@socionext.com> wrote:
> >
> > This series reverts the in-kernel workarounds for inlining issues.
> >
> > The commit description of 77b0bf55bc67 mentioned
> > "We also hope that GCC will eventually get fixed,..."
> >
> > Now, GCC provides a solution.
> >
> > https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html
> > explains the new "asm inline" syntax.
> >
> > The performance issue will be eventually solved.
> >
> > [About Code cleanups]
> >
> > I know Nadam Amit is opposed to the full revert.
>
> My name is Nadav.


Sorry about that.

(I relied on a spell checker. I should be careful about typos in people's names.)



> > He also claims his motivation for macrofying was not only
> > performance, but also cleanups.
>
> Masahiro, I understand your concerns and criticism, and indeed various
> alternative solutions exist. I do have my reservations about the one you
> propose, since it makes coding more complicated to simplify the Make system.
> Yet, more important, starting this discussion suddenly now after RC7 is
> strange.



I did not think this was so sudden.

When I suggested the revert a few weeks ago,
Borislav was for it.
I did not hear from the other x86 maintainers.

Anyway, it is true we are running out of time for the release.


> Anyhow, since it’s obviously not my call, please don’t make it
> sound as if I am a side in the decision.
>

Not my call, either.

That's why I put the x86 maintainers in the TO list,
and other people in CC.

The original patch set went in via x86 tree.
So, its revert set, if we want,
should go in the same path.



Anyway, we have to do something for v4.20.
Maybe discussing the short-term and long-term solutions
separately could move things forward.


[1] Solution for v4.20

[2] Solution for future kernel



If we do not want to see the revert for [1],
the best I can suggest is to copy arch/x86/kernel/macros.s
to include/generated/macros.h
so that it is kept for the external module build.
(It is not literally included by anyone, but should work at least.)



For [2], what do we want to see?




-- 
Best Regards
Masahiro Yamada

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code
  2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
                   ` (12 preceding siblings ...)
  2018-12-18 19:43 ` [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Nadav Amit
@ 2018-12-19 11:20 ` Ingo Molnar
  2018-12-19 14:33   ` Masahiro Yamada
  13 siblings, 1 reply; 17+ messages in thread
From: Ingo Molnar @ 2018-12-19 11:20 UTC (permalink / raw)
  To: Masahiro Yamada
  Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H . Peter Anvin, Richard Biener, Segher Boessenkool,
	Peter Zijlstra, Juergen Gross, Josh Poimboeuf, Kees Cook,
	Linus Torvalds, Arnd Bergmann, Andrey Ryabinin, virtualization,
	Luc Van Oostenryck, Alok Kataria, Ard Biesheuvel, Jann Horn,
	linux-arch, Alexey Dobriyan, linux-sparse, Andrew Morton,
	linux-kbuild, Yonghong Song, Michal Marek,
	Arnaldo Carvalho de Melo, Jan Beulich, Nadav Amit,
	David Woodhouse, Alexei Starovoitov, linux-kernel


* Masahiro Yamada <yamada.masahiro@socionext.com> wrote:

> This series reverts the in-kernel workarounds for inlining issues.
> 
> The commit description of 77b0bf55bc67 mentioned
> "We also hope that GCC will eventually get fixed,..."
> 
> Now, GCC provides a solution.
> 
> https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html
> explains the new "asm inline" syntax.
> 
> The performance issue will be eventually solved.
> 
> [About Code cleanups]
> 
> I know Nadam Amit is opposed to the full revert.
> He also claims his motivation for macrofying was not only
> performance, but also cleanups.
> 
> IIUC, the criticism addresses the code duplication between C and ASM.
> 
> If so, I'd like to suggest a different approach for cleanups.
> Please see the last 3 patches.
> IMHO, preprocessor approach is more straight-forward, and readable.
> Basically, this idea should work because it is what we already do for
> __ASM_FORM() etc.
> 
> [Quick Test of "asm inline" of GCC 9]
> 
> If you want to try "asm inline" feature, the patch is available:
> https://lore.kernel.org/patchwork/patch/1024590/
> 
> The number of symbols for arch/x86/configs/x86_64_defconfig:
> 
>                             nr_symbols
>   [1]    v4.20-rc7       :   96502
>   [2]    [1]+full revert :   96705   (+203)
>   [3]    [2]+"asm inline":   96568   (-137)
> 
> [3]: apply my patch, then replace "asm" -> "asm_inline"
>     for _BUG_FLAGS(), refcount_add(), refcount_inc(), refcount_dec(),
>         annotate_reachable(), annotate_unreachable()
> 
> 
> Changes in v3:
>   - Split into per-commit revert (per Nadav Amit)
>   - Add some cleanups with preprocessor approach
> 
> Changes in v2:
>   - Revive clean-ups made by 5bdcd510c2ac (per Peter Zijlstra)
>   - Fix commit quoting style (per Peter Zijlstra)
> 
> Masahiro Yamada (12):
>   Revert "x86/jump-labels: Macrofy inline assembly code to work around
>     GCC inlining bugs"
>   Revert "x86/cpufeature: Macrofy inline assembly code to work around
>     GCC inlining bugs"
>   Revert "x86/extable: Macrofy inline assembly code to work around GCC
>     inlining bugs"
>   Revert "x86/paravirt: Work around GCC inlining bugs when compiling
>     paravirt ops"
>   Revert "x86/bug: Macrofy the BUG table section handling, to work
>     around GCC inlining bugs"
>   Revert "x86/alternatives: Macrofy lock prefixes to work around GCC
>     inlining bugs"
>   Revert "x86/refcount: Work around GCC inlining bug"
>   Revert "x86/objtool: Use asm macros to work around GCC inlining bugs"
>   Revert "kbuild/Makefile: Prepare for using macros in inline assembly
>     code to work around asm() related GCC inlining bugs"
>   linux/linkage: add ASM() macro to reduce duplication between C/ASM
>     code
>   x86/alternatives: consolidate LOCK_PREFIX macro
>   x86/asm: consolidate ASM_EXTABLE_* macros
> 
>  Makefile                                  |  9 +--
>  arch/x86/Makefile                         |  7 ---
>  arch/x86/include/asm/alternative-asm.h    | 22 +------
>  arch/x86/include/asm/alternative-common.h | 47 +++++++++++++++
>  arch/x86/include/asm/alternative.h        | 30 +---------
>  arch/x86/include/asm/asm.h                | 46 +++++----------
>  arch/x86/include/asm/bug.h                | 98 +++++++++++++------------------
>  arch/x86/include/asm/cpufeature.h         | 82 +++++++++++---------------
>  arch/x86/include/asm/jump_label.h         | 22 +++++--
>  arch/x86/include/asm/paravirt_types.h     | 56 +++++++++---------
>  arch/x86/include/asm/refcount.h           | 81 +++++++++++--------------
>  arch/x86/kernel/macros.S                  | 16 -----
>  include/asm-generic/bug.h                 |  8 +--
>  include/linux/compiler.h                  | 56 ++++--------------
>  include/linux/linkage.h                   |  8 +++
>  scripts/Kbuild.include                    |  4 +-
>  scripts/mod/Makefile                      |  2 -
>  17 files changed, 249 insertions(+), 345 deletions(-)
>  create mode 100644 arch/x86/include/asm/alternative-common.h
>  delete mode 100644 arch/x86/kernel/macros.S

I absolutely agree that this needs to be resolved in v4.20.

So I did the 1-9 reverts manually myself as well, because I think the 
first commit should be reverted fully to get as close to the starting 
point as possible (we are late in the cycle) - and came to the attached 
interdiff between your series and mine.

Does this approach look OK to you, or did I miss something?

Thanks,

	Ingo

=============>

 entry/calling.h          |    2 -
 include/asm/jump_label.h |   50 ++++++++++++++++++++++++++++++++++-------------
 2 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index 25e5a6bda8c3..20d0885b00fb 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -352,7 +352,7 @@ For 32-bit we have the following conventions - kernel is built with
 .macro CALL_enter_from_user_mode
 #ifdef CONFIG_CONTEXT_TRACKING
 #ifdef HAVE_JUMP_LABEL
-	STATIC_BRANCH_JMP l_yes=.Lafter_call_\@, key=context_tracking_enabled, branch=1
+	STATIC_JUMP_IF_FALSE .Lafter_call_\@, context_tracking_enabled, def=0
 #endif
 	call enter_from_user_mode
 .Lafter_call_\@:
diff --git a/arch/x86/include/asm/jump_label.h b/arch/x86/include/asm/jump_label.h
index cf88ebf9a4ca..21efc9d07ed9 100644
--- a/arch/x86/include/asm/jump_label.h
+++ b/arch/x86/include/asm/jump_label.h
@@ -2,6 +2,19 @@
 #ifndef _ASM_X86_JUMP_LABEL_H
 #define _ASM_X86_JUMP_LABEL_H
 
+#ifndef HAVE_JUMP_LABEL
+/*
+ * For better or for worse, if jump labels (the gcc extension) are missing,
+ * then the entire static branch patching infrastructure is compiled out.
+ * If that happens, the code in here will malfunction.  Raise a compiler
+ * error instead.
+ *
+ * In theory, jump labels and the static branch patching infrastructure
+ * could be decoupled to fix this.
+ */
+#error asm/jump_label.h included on a non-jump-label kernel
+#endif
+
 #define JUMP_LABEL_NOP_SIZE 5
 
 #ifdef CONFIG_X86_64
@@ -53,26 +66,37 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool
 
 #else	/* __ASSEMBLY__ */
 
-.macro STATIC_BRANCH_NOP l_yes:req key:req branch:req
-.Lstatic_branch_nop_\@:
-	.byte STATIC_KEY_INIT_NOP
-.Lstatic_branch_no_after_\@:
+.macro STATIC_JUMP_IF_TRUE target, key, def
+.Lstatic_jump_\@:
+	.if \def
+	/* Equivalent to "jmp.d32 \target" */
+	.byte		0xe9
+	.long		\target - .Lstatic_jump_after_\@
+.Lstatic_jump_after_\@:
+	.else
+	.byte		STATIC_KEY_INIT_NOP
+	.endif
 	.pushsection __jump_table, "aw"
 	_ASM_ALIGN
-	.long		.Lstatic_branch_nop_\@ - ., \l_yes - .
-	_ASM_PTR        \key + \branch - .
+	.long		.Lstatic_jump_\@ - ., \target - .
+	_ASM_PTR	\key - .
 	.popsection
 .endm
 
-.macro STATIC_BRANCH_JMP l_yes:req key:req branch:req
-.Lstatic_branch_jmp_\@:
-	.byte 0xe9
-	.long \l_yes - .Lstatic_branch_jmp_after_\@
-.Lstatic_branch_jmp_after_\@:
+.macro STATIC_JUMP_IF_FALSE target, key, def
+.Lstatic_jump_\@:
+	.if \def
+	.byte		STATIC_KEY_INIT_NOP
+	.else
+	/* Equivalent to "jmp.d32 \target" */
+	.byte		0xe9
+	.long		\target - .Lstatic_jump_after_\@
+.Lstatic_jump_after_\@:
+	.endif
 	.pushsection __jump_table, "aw"
 	_ASM_ALIGN
-	.long		.Lstatic_branch_jmp_\@ - ., \l_yes - .
-	_ASM_PTR	\key + \branch - .
+	.long		.Lstatic_jump_\@ - ., \target - .
+	_ASM_PTR	\key + 1 - .
 	.popsection
 .endm
 


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code
  2018-12-19 11:20 ` Ingo Molnar
@ 2018-12-19 14:33   ` Masahiro Yamada
  0 siblings, 0 replies; 17+ messages in thread
From: Masahiro Yamada @ 2018-12-19 14:33 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: X86 ML, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H . Peter Anvin, Richard Biener, Segher Boessenkool,
	Peter Zijlstra, Juergen Gross, Josh Poimboeuf, Kees Cook,
	Linus Torvalds, Arnd Bergmann, Andrey Ryabinin, virtualization,
	Luc Van Oostenryck, Alok Kataria, Ard Biesheuvel, Jann Horn,
	linux-arch, Alexey Dobriyan, linux-sparse, Andrew Morton,
	Linux Kbuild mailing list, Yonghong Song, Michal Marek,
	Arnaldo Carvalho de Melo, Jan Beulich, Nadav Amit,
	David Woodhouse, Alexei Starovoitov, Linux Kernel Mailing List

On Wed, Dec 19, 2018 at 9:44 PM Ingo Molnar <mingo@kernel.org> wrote:
>
>
> * Masahiro Yamada <yamada.masahiro@socionext.com> wrote:
>
> > This series reverts the in-kernel workarounds for inlining issues.
> >
> > The commit description of 77b0bf55bc67 mentioned
> > "We also hope that GCC will eventually get fixed,..."
> >
> > Now, GCC provides a solution.
> >
> > https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html
> > explains the new "asm inline" syntax.
> >
> > The performance issue will be eventually solved.
> >
> > [About Code cleanups]
> >
> > I know Nadam Amit is opposed to the full revert.
> > He also claims his motivation for macrofying was not only
> > performance, but also cleanups.
> >
> > IIUC, the criticism addresses the code duplication between C and ASM.
> >
> > If so, I'd like to suggest a different approach for cleanups.
> > Please see the last 3 patches.
> > IMHO, preprocessor approach is more straight-forward, and readable.
> > Basically, this idea should work because it is what we already do for
> > __ASM_FORM() etc.
> >
> > [Quick Test of "asm inline" of GCC 9]
> >
> > If you want to try "asm inline" feature, the patch is available:
> > https://lore.kernel.org/patchwork/patch/1024590/
> >
> > The number of symbols for arch/x86/configs/x86_64_defconfig:
> >
> >                             nr_symbols
> >   [1]    v4.20-rc7       :   96502
> >   [2]    [1]+full revert :   96705   (+203)
> >   [3]    [2]+"asm inline":   96568   (-137)
> >
> > [3]: apply my patch, then replace "asm" -> "asm_inline"
> >     for _BUG_FLAGS(), refcount_add(), refcount_inc(), refcount_dec(),
> >         annotate_reachable(), annotate_unreachable()
> >
> >
> > Changes in v3:
> >   - Split into per-commit revert (per Nadav Amit)
> >   - Add some cleanups with preprocessor approach
> >
> > Changes in v2:
> >   - Revive clean-ups made by 5bdcd510c2ac (per Peter Zijlstra)
> >   - Fix commit quoting style (per Peter Zijlstra)
> >
> > Masahiro Yamada (12):
> >   Revert "x86/jump-labels: Macrofy inline assembly code to work around
> >     GCC inlining bugs"
> >   Revert "x86/cpufeature: Macrofy inline assembly code to work around
> >     GCC inlining bugs"
> >   Revert "x86/extable: Macrofy inline assembly code to work around GCC
> >     inlining bugs"
> >   Revert "x86/paravirt: Work around GCC inlining bugs when compiling
> >     paravirt ops"
> >   Revert "x86/bug: Macrofy the BUG table section handling, to work
> >     around GCC inlining bugs"
> >   Revert "x86/alternatives: Macrofy lock prefixes to work around GCC
> >     inlining bugs"
> >   Revert "x86/refcount: Work around GCC inlining bug"
> >   Revert "x86/objtool: Use asm macros to work around GCC inlining bugs"
> >   Revert "kbuild/Makefile: Prepare for using macros in inline assembly
> >     code to work around asm() related GCC inlining bugs"
> >   linux/linkage: add ASM() macro to reduce duplication between C/ASM
> >     code
> >   x86/alternatives: consolidate LOCK_PREFIX macro
> >   x86/asm: consolidate ASM_EXTABLE_* macros
> >
> >  Makefile                                  |  9 +--
> >  arch/x86/Makefile                         |  7 ---
> >  arch/x86/include/asm/alternative-asm.h    | 22 +------
> >  arch/x86/include/asm/alternative-common.h | 47 +++++++++++++++
> >  arch/x86/include/asm/alternative.h        | 30 +---------
> >  arch/x86/include/asm/asm.h                | 46 +++++----------
> >  arch/x86/include/asm/bug.h                | 98 +++++++++++++------------------
> >  arch/x86/include/asm/cpufeature.h         | 82 +++++++++++---------------
> >  arch/x86/include/asm/jump_label.h         | 22 +++++--
> >  arch/x86/include/asm/paravirt_types.h     | 56 +++++++++---------
> >  arch/x86/include/asm/refcount.h           | 81 +++++++++++--------------
> >  arch/x86/kernel/macros.S                  | 16 -----
> >  include/asm-generic/bug.h                 |  8 +--
> >  include/linux/compiler.h                  | 56 ++++--------------
> >  include/linux/linkage.h                   |  8 +++
> >  scripts/Kbuild.include                    |  4 +-
> >  scripts/mod/Makefile                      |  2 -
> >  17 files changed, 249 insertions(+), 345 deletions(-)
> >  create mode 100644 arch/x86/include/asm/alternative-common.h
> >  delete mode 100644 arch/x86/kernel/macros.S
>
> I absolutely agree that this needs to be resolved in v4.20.
>
> So I did the 1-9 reverts manually myself as well, because I think the
> first commit should be reverted fully to get as close to the starting
> point as possible (we are late in the cycle) - and came to the attached
> interdiff between your series and mine.
>
> Does this approach look OK to you, or did I miss something?


It looks OK to me.

I thought that part of the diff was a good cleanup,
but we can deal with it later on,
so I do not mind.

Thanks!



> Thanks,
>
>         Ingo
>
> =============>
>
>  entry/calling.h          |    2 -
>  include/asm/jump_label.h |   50 ++++++++++++++++++++++++++++++++++-------------
>  2 files changed, 38 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
> index 25e5a6bda8c3..20d0885b00fb 100644
> --- a/arch/x86/entry/calling.h
> +++ b/arch/x86/entry/calling.h
> @@ -352,7 +352,7 @@ For 32-bit we have the following conventions - kernel is built with
>  .macro CALL_enter_from_user_mode
>  #ifdef CONFIG_CONTEXT_TRACKING
>  #ifdef HAVE_JUMP_LABEL
> -       STATIC_BRANCH_JMP l_yes=.Lafter_call_\@, key=context_tracking_enabled, branch=1
> +       STATIC_JUMP_IF_FALSE .Lafter_call_\@, context_tracking_enabled, def=0
>  #endif
>         call enter_from_user_mode
>  .Lafter_call_\@:
> diff --git a/arch/x86/include/asm/jump_label.h b/arch/x86/include/asm/jump_label.h
> index cf88ebf9a4ca..21efc9d07ed9 100644
> --- a/arch/x86/include/asm/jump_label.h
> +++ b/arch/x86/include/asm/jump_label.h
> @@ -2,6 +2,19 @@
>  #ifndef _ASM_X86_JUMP_LABEL_H
>  #define _ASM_X86_JUMP_LABEL_H
>
> +#ifndef HAVE_JUMP_LABEL
> +/*
> + * For better or for worse, if jump labels (the gcc extension) are missing,
> + * then the entire static branch patching infrastructure is compiled out.
> + * If that happens, the code in here will malfunction.  Raise a compiler
> + * error instead.
> + *
> + * In theory, jump labels and the static branch patching infrastructure
> + * could be decoupled to fix this.
> + */
> +#error asm/jump_label.h included on a non-jump-label kernel
> +#endif
> +
>  #define JUMP_LABEL_NOP_SIZE 5
>
>  #ifdef CONFIG_X86_64
> @@ -53,26 +66,37 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool
>
>  #else  /* __ASSEMBLY__ */
>
> -.macro STATIC_BRANCH_NOP l_yes:req key:req branch:req
> -.Lstatic_branch_nop_\@:
> -       .byte STATIC_KEY_INIT_NOP
> -.Lstatic_branch_no_after_\@:
> +.macro STATIC_JUMP_IF_TRUE target, key, def
> +.Lstatic_jump_\@:
> +       .if \def
> +       /* Equivalent to "jmp.d32 \target" */
> +       .byte           0xe9
> +       .long           \target - .Lstatic_jump_after_\@
> +.Lstatic_jump_after_\@:
> +       .else
> +       .byte           STATIC_KEY_INIT_NOP
> +       .endif
>         .pushsection __jump_table, "aw"
>         _ASM_ALIGN
> -       .long           .Lstatic_branch_nop_\@ - ., \l_yes - .
> -       _ASM_PTR        \key + \branch - .
> +       .long           .Lstatic_jump_\@ - ., \target - .
> +       _ASM_PTR        \key - .
>         .popsection
>  .endm
>
> -.macro STATIC_BRANCH_JMP l_yes:req key:req branch:req
> -.Lstatic_branch_jmp_\@:
> -       .byte 0xe9
> -       .long \l_yes - .Lstatic_branch_jmp_after_\@
> -.Lstatic_branch_jmp_after_\@:
> +.macro STATIC_JUMP_IF_FALSE target, key, def
> +.Lstatic_jump_\@:
> +       .if \def
> +       .byte           STATIC_KEY_INIT_NOP
> +       .else
> +       /* Equivalent to "jmp.d32 \target" */
> +       .byte           0xe9
> +       .long           \target - .Lstatic_jump_after_\@
> +.Lstatic_jump_after_\@:
> +       .endif
>         .pushsection __jump_table, "aw"
>         _ASM_ALIGN
> -       .long           .Lstatic_branch_jmp_\@ - ., \l_yes - .
> -       _ASM_PTR        \key + \branch - .
> +       .long           .Lstatic_jump_\@ - ., \target - .
> +       _ASM_PTR        \key + 1 - .
>         .popsection
>  .endm
>
>


-- 
Best Regards
Masahiro Yamada

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2018-12-19 14:34 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-17 16:03 [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Masahiro Yamada
2018-12-17 16:03 ` [PATCH v3 01/12] Revert "x86/jump-labels: Macrofy inline assembly code to work around GCC inlining bugs" Masahiro Yamada
2018-12-17 16:03 ` [PATCH v3 02/12] Revert "x86/cpufeature: " Masahiro Yamada
2018-12-17 16:03 ` [PATCH v3 03/12] Revert "x86/extable: " Masahiro Yamada
2018-12-17 16:03 ` [PATCH v3 04/12] Revert "x86/paravirt: Work around GCC inlining bugs when compiling paravirt ops" Masahiro Yamada
2018-12-17 16:03 ` [PATCH v3 05/12] Revert "x86/bug: Macrofy the BUG table section handling, to work around GCC inlining bugs" Masahiro Yamada
2018-12-17 16:03 ` [PATCH v3 06/12] Revert "x86/alternatives: Macrofy lock prefixes " Masahiro Yamada
2018-12-17 16:03 ` [PATCH v3 07/12] Revert "x86/refcount: Work around GCC inlining bug" Masahiro Yamada
2018-12-17 16:03 ` [PATCH v3 08/12] Revert "x86/objtool: Use asm macros to work around GCC inlining bugs" Masahiro Yamada
2018-12-17 16:03 ` [PATCH v3 09/12] Revert "kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related " Masahiro Yamada
2018-12-17 16:03 ` [PATCH v3 10/12] linux/linkage: add ASM() macro to reduce duplication between C/ASM code Masahiro Yamada
2018-12-17 16:03 ` [PATCH v3 11/12] x86/alternatives: consolidate LOCK_PREFIX macro Masahiro Yamada
2018-12-17 16:03 ` [PATCH v3 12/12] x86/asm: consolidate ASM_EXTABLE_* macros Masahiro Yamada
2018-12-18 19:43 ` [PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code Nadav Amit
2018-12-19  3:19   ` Masahiro Yamada
2018-12-19 11:20 ` Ingo Molnar
2018-12-19 14:33   ` Masahiro Yamada
