* [PATCH v2 0/7] linkage: better symbol aliasing
@ 2022-01-25 11:31 ` Mark Rutland
  0 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2022-01-25 11:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: acme, ardb, bp, broonie, catalin.marinas, dave.hansen, jpoimboe,
	jslaby, linux-arm-kernel, linux, mark.rutland, mingo, peterz,
	tglx, will

This series aims to make symbol aliasing simpler and more consistent.
The basic idea is to replace SYM_FUNC_START_ALIAS(alias) and
SYM_FUNC_END_ALIAS(alias) with a new SYM_FUNC_ALIAS(alias, name), so
that e.g.

    SYM_FUNC_START(func)
    SYM_FUNC_START_ALIAS(alias1)
    SYM_FUNC_START_ALIAS(alias2)
        ... asm insns ...
    SYM_FUNC_END(func)
    SYM_FUNC_END_ALIAS(alias1)
    SYM_FUNC_END_ALIAS(alias2)
    EXPORT_SYMBOL(alias1)
    EXPORT_SYMBOL(alias2)

... can become:

    SYM_FUNC_START(func)
        ... asm insns ...
    SYM_FUNC_END(func)

    SYM_FUNC_ALIAS(alias1, func)
    EXPORT_SYMBOL(alias1)

    SYM_FUNC_ALIAS(alias2, func)
    EXPORT_SYMBOL(alias2)

This avoids repetition and hopefully makes it easier to ensure
consistency (e.g. so each function has a single canonical name and
associated metadata).
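
With the aliases split out like this, an alias can also be added or
omitted conditionally. As a sketch (reusing the example from patch 3),
a weak alias can become a Kconfig-guarded one:

    #ifndef CONFIG_KASAN
    SYM_FUNC_ALIAS(memmove, __memmove)
    EXPORT_SYMBOL(memmove)
    #endif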

I've build+boot tested arm64 defconfig and build tested arm/i386/x86_64
defconfig, both without issues. I've pushed the series
to my `linkage/alias-rework` branch on git.kernel.org, atop v5.17-rc1:

  git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git linkage/alias-rework
  https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=linkage/alias-rework

Since RFCv1 [1]:
* Drop arm64 dma alias removal (taken via arm64 for v5.17)
* Rename SYM_FUNC_LOCAL_ALIAS() -> SYM_FUNC_ALIAS_LOCAL()
* Update the tools/ copies of x86 routines
* Add preparatory fix for 32-bit arm
* Rebase to v5.17-rc1

[1] https://lore.kernel.org/r/20211206124715.4101571-1-mark.rutland@arm.com/

Thanks,
Mark.

Mark Rutland (7):
  arm: lib: remove leading whitespace in bitop macro
  linkage: add SYM_{ENTRY,START,END}_AT()
  linkage: add SYM_FUNC_ALIAS{,_LOCAL,_WEAK}()
  arm64: clean up symbol aliasing
  x86: clean up symbol aliasing
  linkage: remove SYM_FUNC_{START,END}_ALIAS()
  tools: update x86 string routines

 Documentation/asm-annotations.rst       | 11 ++--
 arch/arm/lib/bitops.h                   |  8 +--
 arch/arm64/include/asm/linkage.h        | 24 -------
 arch/arm64/kvm/hyp/nvhe/cache.S         |  5 +-
 arch/arm64/lib/clear_page.S             |  5 +-
 arch/arm64/lib/copy_page.S              |  5 +-
 arch/arm64/lib/memchr.S                 |  5 +-
 arch/arm64/lib/memcmp.S                 |  6 +-
 arch/arm64/lib/memcpy.S                 | 21 +++---
 arch/arm64/lib/memset.S                 | 12 ++--
 arch/arm64/lib/strchr.S                 |  6 +-
 arch/arm64/lib/strcmp.S                 |  6 +-
 arch/arm64/lib/strlen.S                 |  6 +-
 arch/arm64/lib/strncmp.S                |  6 +-
 arch/arm64/lib/strnlen.S                |  6 +-
 arch/arm64/lib/strrchr.S                |  5 +-
 arch/arm64/mm/cache.S                   | 35 ++++++----
 arch/x86/boot/compressed/head_32.S      |  3 +-
 arch/x86/boot/compressed/head_64.S      |  3 +-
 arch/x86/crypto/aesni-intel_asm.S       |  4 +-
 arch/x86/lib/memcpy_64.S                | 10 +--
 arch/x86/lib/memmove_64.S               |  4 +-
 arch/x86/lib/memset_64.S                |  6 +-
 include/linux/linkage.h                 | 87 +++++++++++++++----------
 tools/arch/x86/lib/memcpy_64.S          | 10 +--
 tools/arch/x86/lib/memset_64.S          |  6 +-
 tools/perf/util/include/linux/linkage.h | 80 +++++++++++++++--------
 27 files changed, 209 insertions(+), 176 deletions(-)

-- 
2.30.2


* [PATCH v2 1/7] arm: lib: remove leading whitespace in bitop macro
  2022-01-25 11:31 ` Mark Rutland
@ 2022-01-25 11:31   ` Mark Rutland
  -1 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2022-01-25 11:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: acme, ardb, bp, broonie, catalin.marinas, dave.hansen, jpoimboe,
	jslaby, linux-arm-kernel, linux, mark.rutland, mingo, peterz,
	tglx, will

The `bitop` macro is used to create assembly functions for bit
operations, handling all the boilerplate such as ENTRY(), ENDPROC(),
etc.

Within the `bitop` macro, the argument to ENTRY() has leading whitespace
for alignment, but there is no leading whitespace for the argument to
ENDPROC().

The leading whitespace in the argument to ENTRY() prevents the value
from safely being token-pasted to form a longer symbol name, as
subsequent patches will need to do, where the value will be passed into
a new SYM_ENTRY_AT() macro:

  ENTRY(name)
  -> SYM_FUNC_START(name)
  -> SYM_START(name, [...])
  -> SYM_START_AT(name, [...])
  -> SYM_ENTRY_AT(name, [...])

... where SYM_ENTRY_AT() will token-paste name to form a local label
used by later macros:

| .set .L____sym_entry__##name, location ASM_NL

... but as this happens before assembler macros are evaluated, and
`name` will expand to `\name`, the token-pasting will retain the leading
space:

| .L____sym_entry__ \name

... and when evaluated within an assembler macro this will result in
build errors:

| [mark@lakrids:~/src/linux]% git clean -qfdx
| [mark@lakrids:~/src/linux]% usekorg 10.3.0 make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- -s multi_v7_defconfig
| [mark@lakrids:~/src/linux]% usekorg 10.3.0 make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- -s arch/arm/lib
| arch/arm/lib/changebit.S: Assembler messages:
| arch/arm/lib/changebit.S:12: Error: expected comma after ".L____sym_entry__"
| make[1]: *** [scripts/Makefile.build:388: arch/arm/lib/changebit.o] Error 1
| make: *** [Makefile:1846: arch/arm/lib] Error 2

This patch removes the leading space such that the name can be
token-pasted.
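
For illustration, with the leading space removed the pasted label is a
single well-formed token:

| .L____sym_entry__\name

... which the assembler macro then expands to one symbol name (e.g.
.L____sym_entry___change_bit, assuming the usual `_change_bit` argument
in changebit.S), rather than two tokens separated by a space.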

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm/lib/bitops.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm/lib/bitops.h b/arch/arm/lib/bitops.h
index 95bd359912889..36195edeb0b9f 100644
--- a/arch/arm/lib/bitops.h
+++ b/arch/arm/lib/bitops.h
@@ -4,7 +4,7 @@
 
 #if __LINUX_ARM_ARCH__ >= 6
 	.macro	bitop, name, instr
-ENTRY(	\name		)
+ENTRY(\name		)
 UNWIND(	.fnstart	)
 	ands	ip, r1, #3
 	strbne	r1, [ip]		@ assert word-aligned
@@ -29,7 +29,7 @@ ENDPROC(\name		)
 	.endm
 
 	.macro	testop, name, instr, store
-ENTRY(	\name		)
+ENTRY(\name		)
 UNWIND(	.fnstart	)
 	ands	ip, r1, #3
 	strbne	r1, [ip]		@ assert word-aligned
@@ -59,7 +59,7 @@ ENDPROC(\name		)
 	.endm
 #else
 	.macro	bitop, name, instr
-ENTRY(	\name		)
+ENTRY(\name		)
 UNWIND(	.fnstart	)
 	ands	ip, r1, #3
 	strbne	r1, [ip]		@ assert word-aligned
@@ -86,7 +86,7 @@ ENDPROC(\name		)
  * to avoid dirtying the data cache.
  */
 	.macro	testop, name, instr, store
-ENTRY(	\name		)
+ENTRY(\name		)
 UNWIND(	.fnstart	)
 	ands	ip, r1, #3
 	strbne	r1, [ip]		@ assert word-aligned
-- 
2.30.2


* [PATCH v2 2/7] linkage: add SYM_{ENTRY,START,END}_AT()
  2022-01-25 11:31 ` Mark Rutland
@ 2022-01-25 11:31   ` Mark Rutland
  -1 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2022-01-25 11:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: acme, ardb, bp, broonie, catalin.marinas, dave.hansen, jpoimboe,
	jslaby, linux-arm-kernel, linux, mark.rutland, mingo, peterz,
	tglx, will

Currently, the SYM_{ENTRY,START,END}() helpers define symbols in terms
of the current position within the section. In subsequent patches we'll
need to define symbols after moving this position.

This patch splits the core out of SYM_{ENTRY,START,END}() into
SYM_{ENTRY,START,END}_AT() macros which take a location argument,
with SYM_{ENTRY,START,END}() passing the current position.
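
As a sketch, with an illustrative function `func`, the existing macros
now expand through the new _AT() forms:

  SYM_END(func, SYM_T_FUNC)
  -> SYM_END_AT(func, ., SYM_T_FUNC)
  -> .type func SYM_T_FUNC
     .size func, .-func

... and SYM_START() likewise emits its alignment first, then passes the
current position (`.`) to SYM_START_AT().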

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Brown <broonie@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/linkage.h | 28 +++++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)

diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index dbf8506decca0..d87c2acda2540 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -147,25 +147,43 @@
 
 /* === generic annotations === */
 
+#ifndef SYM_ENTRY_AT
+#define SYM_ENTRY_AT(name, location, linkage)		\
+	linkage(name) ASM_NL				\
+	.set name, location ASM_NL
+#endif
+
 /* SYM_ENTRY -- use only if you have to for non-paired symbols */
 #ifndef SYM_ENTRY
 #define SYM_ENTRY(name, linkage, align...)		\
-	linkage(name) ASM_NL				\
 	align ASM_NL					\
-	name:
+	SYM_ENTRY_AT(name, ., linkage)
+#endif
+
+/* SYM_START_AT -- use only if you have to */
+#ifndef SYM_START_AT
+#define SYM_START_AT(name, location, linkage)		\
+	SYM_ENTRY_AT(name, location, linkage)
 #endif
 
 /* SYM_START -- use only if you have to */
 #ifndef SYM_START
 #define SYM_START(name, linkage, align...)		\
-	SYM_ENTRY(name, linkage, align)
+	align ASM_NL					\
+	SYM_START_AT(name, ., linkage)
+#endif
+
+/* SYM_END_AT -- use only if you have to */
+#ifndef SYM_END_AT
+#define SYM_END_AT(name, location, sym_type)		\
+	.type name sym_type ASM_NL			\
+	.size name, location-name ASM_NL
 #endif
 
 /* SYM_END -- use only if you have to */
 #ifndef SYM_END
 #define SYM_END(name, sym_type)				\
-	.type name sym_type ASM_NL			\
-	.size name, .-name
+	SYM_END_AT(name, ., sym_type)
 #endif
 
 /* === code annotations === */
-- 
2.30.2


* [PATCH v2 3/7] linkage: add SYM_FUNC_ALIAS{,_LOCAL,_WEAK}()
  2022-01-25 11:31 ` Mark Rutland
@ 2022-01-25 11:31   ` Mark Rutland
  -1 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2022-01-25 11:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: acme, ardb, bp, broonie, catalin.marinas, dave.hansen, jpoimboe,
	jslaby, linux-arm-kernel, linux, mark.rutland, mingo, peterz,
	tglx, will

Currently aliasing an asm function requires adding START and END
annotations for each name, as per Documentation/asm-annotations.rst:

	SYM_FUNC_START_ALIAS(__memset)
	SYM_FUNC_START(memset)
	    ... asm insns ...
	SYM_FUNC_END(memset)
	SYM_FUNC_END_ALIAS(__memset)

This is more painful than necessary to maintain, especially where a
function has many aliases, some of which we may wish to define
conditionally. For example, arm64's memcpy/memmove implementation (which
uses some arch-specific SYM_*() helpers) has:

	SYM_FUNC_START_ALIAS(__memmove)
	SYM_FUNC_START_ALIAS_WEAK_PI(memmove)
	SYM_FUNC_START_ALIAS(__memcpy)
	SYM_FUNC_START_WEAK_PI(memcpy)
	    ... asm insns ...
	SYM_FUNC_END_PI(memcpy)
	EXPORT_SYMBOL(memcpy)
	SYM_FUNC_END_ALIAS(__memcpy)
	EXPORT_SYMBOL(__memcpy)
	SYM_FUNC_END_ALIAS_PI(memmove)
	EXPORT_SYMBOL(memmove)
	SYM_FUNC_END_ALIAS(__memmove)
	EXPORT_SYMBOL(__memmove)

It would be much nicer if we could define the aliases *after* the
standard function definition. This would avoid the need to specify each
symbol name twice, and would make it easier to spot the canonical
function definition. This patch adds new macros to allow us to do so,
which allows the above example to be rewritten more succinctly as:

	SYM_FUNC_START(__pi_memcpy)
	    ... asm insns ...
	SYM_FUNC_END(__pi_memcpy)

	SYM_FUNC_ALIAS(__memcpy, __pi_memcpy)
	EXPORT_SYMBOL(__memcpy)
	SYM_FUNC_ALIAS_WEAK(memcpy, __memcpy)
	EXPORT_SYMBOL(memcpy)

	SYM_FUNC_ALIAS(__pi_memmove, __pi_memcpy)
	SYM_FUNC_ALIAS(__memmove, __pi_memmove)
	EXPORT_SYMBOL(__memmove)
	SYM_FUNC_ALIAS_WEAK(memmove, __memmove)
	EXPORT_SYMBOL(memmove)

The reduction in duplication will also make it possible to replace some
uses of WEAK with more accurate Kconfig guards, e.g.

	#ifndef CONFIG_KASAN
	SYM_FUNC_ALIAS(memmove, __memmove)
	EXPORT_SYMBOL(memmove)
	#endif

... which should make it easier to ensure that symbols are neither used
nor overridden unexpectedly.
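
As a rough sketch of the expansion (using the __pi_memset/__memset names
from the arm64 patch later in this series):

	SYM_FUNC_ALIAS(__memset, __pi_memset)
	-> SYM_START_AT(__memset, .L____sym_entry____pi_memset, SYM_L_GLOBAL)
	   SYM_END_AT(__memset, .L____sym_end____pi_memset, SYM_T_FUNC)

... so the alias picks up the same entry location and size as the
canonical function, without emitting any instructions of its own.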

The existing SYM_FUNC_START_ALIAS() and SYM_FUNC_START_LOCAL_ALIAS() are
marked as deprecated, and will be removed once existing users are moved
over to the new scheme.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Brown <broonie@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 Documentation/asm-annotations.rst | 16 ++++++++++++++--
 include/linux/linkage.h           | 29 +++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/Documentation/asm-annotations.rst b/Documentation/asm-annotations.rst
index f4bf0f6395fb9..4868b58c60fb1 100644
--- a/Documentation/asm-annotations.rst
+++ b/Documentation/asm-annotations.rst
@@ -130,8 +130,20 @@ denoting a range of code via ``SYM_*_START/END`` annotations.
   In fact, this kind of annotation corresponds to the now deprecated ``ENTRY``
   and ``ENDPROC`` macros.
 
-* ``SYM_FUNC_START_ALIAS`` and ``SYM_FUNC_START_LOCAL_ALIAS`` serve for those
-  who decided to have two or more names for one function. The typical use is::
+* ``SYM_FUNC_ALIAS``, ``SYM_FUNC_ALIAS_LOCAL``, and ``SYM_FUNC_ALIAS_WEAK`` can
+  be used to define multiple names for a function. The typical use is::
+
+    SYM_FUNC_START(__memset)
+        ... asm insns ...
+    SYM_FUNC_END(__memset)
+    SYM_FUNC_ALIAS(memset, __memset)
+
+  In this example, one can call ``__memset`` or ``memset`` with the same
+  result, except the debug information for the instructions is generated to
+  the object file only once -- for the non-``ALIAS`` case.
+
+* ``SYM_FUNC_START_ALIAS`` and ``SYM_FUNC_START_LOCAL_ALIAS`` are deprecated
+    ways to define two or more names for one function. The typical use is::
 
     SYM_FUNC_START_ALIAS(__memset)
     SYM_FUNC_START(memset)
diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index d87c2acda2540..becd64e9e5b18 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -150,6 +150,7 @@
 #ifndef SYM_ENTRY_AT
 #define SYM_ENTRY_AT(name, location, linkage)		\
 	linkage(name) ASM_NL				\
+	.set .L____sym_entry__##name, location ASM_NL	\
 	.set name, location ASM_NL
 #endif
 
@@ -177,6 +178,7 @@
 #ifndef SYM_END_AT
 #define SYM_END_AT(name, location, sym_type)		\
 	.type name sym_type ASM_NL			\
+	.set .L____sym_end__##name, location ASM_NL	\
 	.size name, location-name ASM_NL
 #endif
 
@@ -293,6 +295,33 @@
 	SYM_END(name, SYM_T_FUNC)
 #endif
 
+/*
+ * SYM_FUNC_ALIAS_LOCAL -- define a local alias for an existing function
+ */
+#ifndef SYM_FUNC_ALIAS_LOCAL
+#define SYM_FUNC_ALIAS_LOCAL(alias, name)				\
+	SYM_START_AT(alias, .L____sym_entry__##name, SYM_L_LOCAL)	\
+	SYM_END_AT(alias, .L____sym_end__##name, SYM_T_FUNC)
+#endif
+
+/*
+ * SYM_FUNC_ALIAS -- define a global alias for an existing function
+ */
+#ifndef SYM_FUNC_ALIAS
+#define SYM_FUNC_ALIAS(alias, name)					\
+	SYM_START_AT(alias, .L____sym_entry__##name, SYM_L_GLOBAL)	\
+	SYM_END_AT(alias, .L____sym_end__##name, SYM_T_FUNC)
+#endif
+
+/*
+ * SYM_FUNC_ALIAS_WEAK -- define a weak global alias for an existing function
+ */
+#ifndef SYM_FUNC_ALIAS_WEAK
+#define SYM_FUNC_ALIAS_WEAK(alias, name)				\
+	SYM_START_AT(alias, .L____sym_entry__##name, SYM_L_WEAK)	\
+	SYM_END_AT(alias, .L____sym_end__##name, SYM_T_FUNC)
+#endif
+
 /* SYM_CODE_START -- use for non-C (special) functions */
 #ifndef SYM_CODE_START
 #define SYM_CODE_START(name)				\
-- 
2.30.2


* [PATCH v2 4/7] arm64: clean up symbol aliasing
  2022-01-25 11:31 ` Mark Rutland
@ 2022-01-25 11:31   ` Mark Rutland
  -1 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2022-01-25 11:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: acme, ardb, bp, broonie, catalin.marinas, dave.hansen, jpoimboe,
	jslaby, linux-arm-kernel, linux, mark.rutland, mingo, peterz,
	tglx, will

Now that we have SYM_FUNC_ALIAS() and SYM_FUNC_ALIAS_WEAK(), use those
to simplify and more consistently define function aliases across
arch/arm64.

Aliases are now defined in terms of a canonical function name. For
position-independent functions I've made the __pi_<func> name the
canonical name, and defined other aliases in terms of this.

The SYM_FUNC_{START,END}_PI(func) macros obscure the __pi_<func> name,
and make this hard to search for. The SYM_FUNC_START_WEAK_PI() macro
also obscures the fact that the __pi_<func> symbol is global and the
<func> symbol is weak. For clarity, I have removed these macros and used
SYM_FUNC_{START,END}() directly with the __pi_<func> name.

For example:

    SYM_FUNC_START_WEAK_PI(func)
        ... asm insns ...
    SYM_FUNC_END_PI(func)
    EXPORT_SYMBOL(func)

... becomes:

    SYM_FUNC_START(__pi_func)
        ... asm insns ...
    SYM_FUNC_END(__pi_func)

    SYM_FUNC_ALIAS_WEAK(func, __pi_func)
    EXPORT_SYMBOL(func)

For clarity, where there are multiple annotations such as
EXPORT_SYMBOL(), I've tried to keep annotations grouped by symbol. For
example, where a function has a name and an alias which are both
exported, this is organised as:

    SYM_FUNC_START(func)
	... asm insns ...
    SYM_FUNC_END(func)
    EXPORT_SYMBOL(func)

    SYM_FUNC_ALIAS(alias, func)
    EXPORT_SYMBOL(alias)

For consistency with the other string functions, I've defined strchr as
a position-independent function, as it can safely be used as such even
though we have no users today.
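
Concretely, for strchr this means the existing:

    SYM_FUNC_START_WEAK(strchr)
        ... asm insns ...
    SYM_FUNC_END(strchr)
    EXPORT_SYMBOL_NOKASAN(strchr)

... becomes (matching the strchr.S hunk below):

    SYM_FUNC_START(__pi_strchr)
        ... asm insns ...
    SYM_FUNC_END(__pi_strchr)

    SYM_FUNC_ALIAS_WEAK(strchr, __pi_strchr)
    EXPORT_SYMBOL_NOKASAN(strchr)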

As we no longer use SYM_FUNC_{START,END}_ALIAS(), our local copies are
removed. The common versions will be removed by a subsequent patch.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/linkage.h | 24 ----------------------
 arch/arm64/kvm/hyp/nvhe/cache.S  |  5 +++--
 arch/arm64/lib/clear_page.S      |  5 +++--
 arch/arm64/lib/copy_page.S       |  5 +++--
 arch/arm64/lib/memchr.S          |  5 +++--
 arch/arm64/lib/memcmp.S          |  6 +++---
 arch/arm64/lib/memcpy.S          | 21 ++++++++++---------
 arch/arm64/lib/memset.S          | 12 ++++++-----
 arch/arm64/lib/strchr.S          |  6 ++++--
 arch/arm64/lib/strcmp.S          |  6 +++---
 arch/arm64/lib/strlen.S          |  6 +++---
 arch/arm64/lib/strncmp.S         |  6 +++---
 arch/arm64/lib/strnlen.S         |  6 ++++--
 arch/arm64/lib/strrchr.S         |  5 +++--
 arch/arm64/mm/cache.S            | 35 +++++++++++++++++++-------------
 15 files changed, 74 insertions(+), 79 deletions(-)

diff --git a/arch/arm64/include/asm/linkage.h b/arch/arm64/include/asm/linkage.h
index b77e9b3f5371c..43f8c25b3fda6 100644
--- a/arch/arm64/include/asm/linkage.h
+++ b/arch/arm64/include/asm/linkage.h
@@ -39,28 +39,4 @@
 	SYM_START(name, SYM_L_WEAK, SYM_A_NONE)		\
 	bti c ;
 
-/*
- * Annotate a function as position independent, i.e., safe to be called before
- * the kernel virtual mapping is activated.
- */
-#define SYM_FUNC_START_PI(x)			\
-		SYM_FUNC_START_ALIAS(__pi_##x);	\
-		SYM_FUNC_START(x)
-
-#define SYM_FUNC_START_WEAK_PI(x)		\
-		SYM_FUNC_START_ALIAS(__pi_##x);	\
-		SYM_FUNC_START_WEAK(x)
-
-#define SYM_FUNC_START_WEAK_ALIAS_PI(x)		\
-		SYM_FUNC_START_ALIAS(__pi_##x);	\
-		SYM_START(x, SYM_L_WEAK, SYM_A_ALIGN)
-
-#define SYM_FUNC_END_PI(x)			\
-		SYM_FUNC_END(x);		\
-		SYM_FUNC_END_ALIAS(__pi_##x)
-
-#define SYM_FUNC_END_ALIAS_PI(x)		\
-		SYM_FUNC_END_ALIAS(x);		\
-		SYM_FUNC_END_ALIAS(__pi_##x)
-
 #endif
diff --git a/arch/arm64/kvm/hyp/nvhe/cache.S b/arch/arm64/kvm/hyp/nvhe/cache.S
index 958734f4d6b0e..0c367eb5f4e28 100644
--- a/arch/arm64/kvm/hyp/nvhe/cache.S
+++ b/arch/arm64/kvm/hyp/nvhe/cache.S
@@ -7,7 +7,8 @@
 #include <asm/assembler.h>
 #include <asm/alternative.h>
 
-SYM_FUNC_START_PI(dcache_clean_inval_poc)
+SYM_FUNC_START(__pi_dcache_clean_inval_poc)
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(dcache_clean_inval_poc)
+SYM_FUNC_END(__pi_dcache_clean_inval_poc)
+SYM_FUNC_ALIAS(dcache_clean_inval_poc, __pi_dcache_clean_inval_poc)
diff --git a/arch/arm64/lib/clear_page.S b/arch/arm64/lib/clear_page.S
index 1fd5d790ab800..ebde40e7fa2b2 100644
--- a/arch/arm64/lib/clear_page.S
+++ b/arch/arm64/lib/clear_page.S
@@ -14,7 +14,7 @@
  * Parameters:
  *	x0 - dest
  */
-SYM_FUNC_START_PI(clear_page)
+SYM_FUNC_START(__pi_clear_page)
 	mrs	x1, dczid_el0
 	tbnz	x1, #4, 2f	/* Branch if DC ZVA is prohibited */
 	and	w1, w1, #0xf
@@ -35,5 +35,6 @@ SYM_FUNC_START_PI(clear_page)
 	tst	x0, #(PAGE_SIZE - 1)
 	b.ne	2b
 	ret
-SYM_FUNC_END_PI(clear_page)
+SYM_FUNC_END(__pi_clear_page)
+SYM_FUNC_ALIAS(clear_page, __pi_clear_page)
 EXPORT_SYMBOL(clear_page)
diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
index 29144f4cd4492..c336d2ffdec55 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
@@ -17,7 +17,7 @@
  *	x0 - dest
  *	x1 - src
  */
-SYM_FUNC_START_PI(copy_page)
+SYM_FUNC_START(__pi_copy_page)
 alternative_if ARM64_HAS_NO_HW_PREFETCH
 	// Prefetch three cache lines ahead.
 	prfm	pldl1strm, [x1, #128]
@@ -75,5 +75,6 @@ alternative_else_nop_endif
 	stnp	x16, x17, [x0, #112 - 256]
 
 	ret
-SYM_FUNC_END_PI(copy_page)
+SYM_FUNC_END(__pi_copy_page)
+SYM_FUNC_ALIAS(copy_page, __pi_copy_page)
 EXPORT_SYMBOL(copy_page)
diff --git a/arch/arm64/lib/memchr.S b/arch/arm64/lib/memchr.S
index 7c2276fdab543..37a9f2a4f7f4b 100644
--- a/arch/arm64/lib/memchr.S
+++ b/arch/arm64/lib/memchr.S
@@ -38,7 +38,7 @@
 
 	.p2align 4
 	nop
-SYM_FUNC_START_WEAK_PI(memchr)
+SYM_FUNC_START(__pi_memchr)
 	and	chrin, chrin, #0xff
 	lsr	wordcnt, cntin, #3
 	cbz	wordcnt, L(byte_loop)
@@ -71,5 +71,6 @@ CPU_LE(	rev	tmp, tmp)
 L(not_found):
 	mov	result, #0
 	ret
-SYM_FUNC_END_PI(memchr)
+SYM_FUNC_END(__pi_memchr)
+SYM_FUNC_ALIAS_WEAK(memchr, __pi_memchr)
 EXPORT_SYMBOL_NOKASAN(memchr)
diff --git a/arch/arm64/lib/memcmp.S b/arch/arm64/lib/memcmp.S
index 7d956384222ff..a5ccf2c55f911 100644
--- a/arch/arm64/lib/memcmp.S
+++ b/arch/arm64/lib/memcmp.S
@@ -32,7 +32,7 @@
 #define tmp1		x7
 #define tmp2		x8
 
-SYM_FUNC_START_WEAK_PI(memcmp)
+SYM_FUNC_START(__pi_memcmp)
 	subs	limit, limit, 8
 	b.lo	L(less8)
 
@@ -134,6 +134,6 @@ L(byte_loop):
 	b.eq	L(byte_loop)
 	sub	result, data1w, data2w
 	ret
-
-SYM_FUNC_END_PI(memcmp)
+SYM_FUNC_END(__pi_memcmp)
+SYM_FUNC_ALIAS_WEAK(memcmp, __pi_memcmp)
 EXPORT_SYMBOL_NOKASAN(memcmp)
diff --git a/arch/arm64/lib/memcpy.S b/arch/arm64/lib/memcpy.S
index b82fd64ee1e1c..4ab48d49c4515 100644
--- a/arch/arm64/lib/memcpy.S
+++ b/arch/arm64/lib/memcpy.S
@@ -57,10 +57,7 @@
    The loop tail is handled by always copying 64 bytes from the end.
 */
 
-SYM_FUNC_START_ALIAS(__memmove)
-SYM_FUNC_START_WEAK_ALIAS_PI(memmove)
-SYM_FUNC_START_ALIAS(__memcpy)
-SYM_FUNC_START_WEAK_PI(memcpy)
+SYM_FUNC_START(__pi_memcpy)
 	add	srcend, src, count
 	add	dstend, dstin, count
 	cmp	count, 128
@@ -241,12 +238,16 @@ L(copy64_from_start):
 	stp	B_l, B_h, [dstin, 16]
 	stp	C_l, C_h, [dstin]
 	ret
+SYM_FUNC_END(__pi_memcpy)
 
-SYM_FUNC_END_PI(memcpy)
-EXPORT_SYMBOL(memcpy)
-SYM_FUNC_END_ALIAS(__memcpy)
+SYM_FUNC_ALIAS(__memcpy, __pi_memcpy)
 EXPORT_SYMBOL(__memcpy)
-SYM_FUNC_END_ALIAS_PI(memmove)
-EXPORT_SYMBOL(memmove)
-SYM_FUNC_END_ALIAS(__memmove)
+SYM_FUNC_ALIAS_WEAK(memcpy, __memcpy)
+EXPORT_SYMBOL(memcpy)
+
+SYM_FUNC_ALIAS(__pi_memmove, __pi_memcpy)
+
+SYM_FUNC_ALIAS(__memmove, __pi_memmove)
 EXPORT_SYMBOL(__memmove)
+SYM_FUNC_ALIAS_WEAK(memmove, __memmove)
+EXPORT_SYMBOL(memmove)
diff --git a/arch/arm64/lib/memset.S b/arch/arm64/lib/memset.S
index a9c1c9a01ea90..a5aebe82ad73b 100644
--- a/arch/arm64/lib/memset.S
+++ b/arch/arm64/lib/memset.S
@@ -42,8 +42,7 @@ dst		.req	x8
 tmp3w		.req	w9
 tmp3		.req	x9
 
-SYM_FUNC_START_ALIAS(__memset)
-SYM_FUNC_START_WEAK_PI(memset)
+SYM_FUNC_START(__pi_memset)
 	mov	dst, dstin	/* Preserve return value.  */
 	and	A_lw, val, #255
 	orr	A_lw, A_lw, A_lw, lsl #8
@@ -202,7 +201,10 @@ SYM_FUNC_START_WEAK_PI(memset)
 	ands	count, count, zva_bits_x
 	b.ne	.Ltail_maybe_long
 	ret
-SYM_FUNC_END_PI(memset)
-EXPORT_SYMBOL(memset)
-SYM_FUNC_END_ALIAS(__memset)
+SYM_FUNC_END(__pi_memset)
+
+SYM_FUNC_ALIAS(__memset, __pi_memset)
 EXPORT_SYMBOL(__memset)
+
+SYM_FUNC_ALIAS_WEAK(memset, __pi_memset)
+EXPORT_SYMBOL(memset)
diff --git a/arch/arm64/lib/strchr.S b/arch/arm64/lib/strchr.S
index 1f47eae3b0d6d..94ee67a6b212c 100644
--- a/arch/arm64/lib/strchr.S
+++ b/arch/arm64/lib/strchr.S
@@ -18,7 +18,7 @@
  * Returns:
  *	x0 - address of first occurrence of 'c' or 0
  */
-SYM_FUNC_START_WEAK(strchr)
+SYM_FUNC_START(__pi_strchr)
 	and	w1, w1, #0xff
 1:	ldrb	w2, [x0], #1
 	cmp	w2, w1
@@ -28,5 +28,7 @@ SYM_FUNC_START_WEAK(strchr)
 	cmp	w2, w1
 	csel	x0, x0, xzr, eq
 	ret
-SYM_FUNC_END(strchr)
+SYM_FUNC_END(__pi_strchr)
+
+SYM_FUNC_ALIAS_WEAK(strchr, __pi_strchr)
 EXPORT_SYMBOL_NOKASAN(strchr)
diff --git a/arch/arm64/lib/strcmp.S b/arch/arm64/lib/strcmp.S
index 83bcad72ec972..cda7de747efcf 100644
--- a/arch/arm64/lib/strcmp.S
+++ b/arch/arm64/lib/strcmp.S
@@ -41,7 +41,7 @@
 
 	/* Start of performance-critical section  -- one 64B cache line.  */
 	.align 6
-SYM_FUNC_START_WEAK_PI(strcmp)
+SYM_FUNC_START(__pi_strcmp)
 	eor	tmp1, src1, src2
 	mov	zeroones, #REP8_01
 	tst	tmp1, #7
@@ -171,6 +171,6 @@ L(loop_misaligned):
 L(done):
 	sub	result, data1, data2
 	ret
-
-SYM_FUNC_END_PI(strcmp)
+SYM_FUNC_END(__pi_strcmp)
+SYM_FUNC_ALIAS_WEAK(strcmp, __pi_strcmp)
 EXPORT_SYMBOL_NOHWKASAN(strcmp)
diff --git a/arch/arm64/lib/strlen.S b/arch/arm64/lib/strlen.S
index 1648790e91b3c..4919fe81ae540 100644
--- a/arch/arm64/lib/strlen.S
+++ b/arch/arm64/lib/strlen.S
@@ -79,7 +79,7 @@
 	   whether the first fetch, which may be misaligned, crosses a page
 	   boundary.  */
 
-SYM_FUNC_START_WEAK_PI(strlen)
+SYM_FUNC_START(__pi_strlen)
 	and	tmp1, srcin, MIN_PAGE_SIZE - 1
 	mov	zeroones, REP8_01
 	cmp	tmp1, MIN_PAGE_SIZE - 16
@@ -208,6 +208,6 @@ L(page_cross):
 	csel	data1, data1, tmp4, eq
 	csel	data2, data2, tmp2, eq
 	b	L(page_cross_entry)
-
-SYM_FUNC_END_PI(strlen)
+SYM_FUNC_END(__pi_strlen)
+SYM_FUNC_ALIAS_WEAK(strlen, __pi_strlen)
 EXPORT_SYMBOL_NOKASAN(strlen)
diff --git a/arch/arm64/lib/strncmp.S b/arch/arm64/lib/strncmp.S
index e42bcfcd37e6f..a848abcec975e 100644
--- a/arch/arm64/lib/strncmp.S
+++ b/arch/arm64/lib/strncmp.S
@@ -44,7 +44,7 @@
 #define endloop		x15
 #define count		mask
 
-SYM_FUNC_START_WEAK_PI(strncmp)
+SYM_FUNC_START(__pi_strncmp)
 	cbz	limit, L(ret0)
 	eor	tmp1, src1, src2
 	mov	zeroones, #REP8_01
@@ -256,6 +256,6 @@ L(done_loop):
 L(ret0):
 	mov	result, #0
 	ret
-
-SYM_FUNC_END_PI(strncmp)
+SYM_FUNC_END(__pi_strncmp)
+SYM_FUNC_ALIAS_WEAK(strncmp, __pi_strncmp)
 EXPORT_SYMBOL_NOHWKASAN(strncmp)
diff --git a/arch/arm64/lib/strnlen.S b/arch/arm64/lib/strnlen.S
index b72913a990389..d5ac0e10a01db 100644
--- a/arch/arm64/lib/strnlen.S
+++ b/arch/arm64/lib/strnlen.S
@@ -47,7 +47,7 @@ limit_wd	.req	x14
 #define REP8_7f 0x7f7f7f7f7f7f7f7f
 #define REP8_80 0x8080808080808080
 
-SYM_FUNC_START_WEAK_PI(strnlen)
+SYM_FUNC_START(__pi_strnlen)
 	cbz	limit, .Lhit_limit
 	mov	zeroones, #REP8_01
 	bic	src, srcin, #15
@@ -156,5 +156,7 @@ CPU_LE( lsr	tmp2, tmp2, tmp4 )	/* Shift (tmp1 & 63).  */
 .Lhit_limit:
 	mov	len, limit
 	ret
-SYM_FUNC_END_PI(strnlen)
+SYM_FUNC_END(__pi_strnlen)
+
+SYM_FUNC_ALIAS_WEAK(strnlen, __pi_strnlen)
 EXPORT_SYMBOL_NOKASAN(strnlen)
diff --git a/arch/arm64/lib/strrchr.S b/arch/arm64/lib/strrchr.S
index 13132d1ed6d12..a5123cf0ce125 100644
--- a/arch/arm64/lib/strrchr.S
+++ b/arch/arm64/lib/strrchr.S
@@ -18,7 +18,7 @@
  * Returns:
  *	x0 - address of last occurrence of 'c' or 0
  */
-SYM_FUNC_START_WEAK_PI(strrchr)
+SYM_FUNC_START(__pi_strrchr)
 	mov	x3, #0
 	and	w1, w1, #0xff
 1:	ldrb	w2, [x0], #1
@@ -29,5 +29,6 @@ SYM_FUNC_START_WEAK_PI(strrchr)
 	b	1b
 2:	mov	x0, x3
 	ret
-SYM_FUNC_END_PI(strrchr)
+SYM_FUNC_END(__pi_strrchr)
+SYM_FUNC_ALIAS_WEAK(strrchr, __pi_strrchr)
 EXPORT_SYMBOL_NOKASAN(strrchr)
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 7d0563db42014..0ea6cc25dc663 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -107,10 +107,11 @@ SYM_FUNC_END(icache_inval_pou)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START_PI(dcache_clean_inval_poc)
+SYM_FUNC_START(__pi_dcache_clean_inval_poc)
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(dcache_clean_inval_poc)
+SYM_FUNC_END(__pi_dcache_clean_inval_poc)
+SYM_FUNC_ALIAS(dcache_clean_inval_poc, __pi_dcache_clean_inval_poc)
 
 /*
  *	dcache_clean_pou(start, end)
@@ -140,7 +141,7 @@ SYM_FUNC_END(dcache_clean_pou)
  *	- start   - kernel start address of region
  *	- end     - kernel end address of region
  */
-SYM_FUNC_START_PI(dcache_inval_poc)
+SYM_FUNC_START(__pi_dcache_inval_poc)
 	dcache_line_size x2, x3
 	sub	x3, x2, #1
 	tst	x1, x3				// end cache line aligned?
@@ -158,7 +159,8 @@ SYM_FUNC_START_PI(dcache_inval_poc)
 	b.lo	2b
 	dsb	sy
 	ret
-SYM_FUNC_END_PI(dcache_inval_poc)
+SYM_FUNC_END(__pi_dcache_inval_poc)
+SYM_FUNC_ALIAS(dcache_inval_poc, __pi_dcache_inval_poc)
 
 /*
  *	dcache_clean_poc(start, end)
@@ -169,10 +171,11 @@ SYM_FUNC_END_PI(dcache_inval_poc)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START_PI(dcache_clean_poc)
+SYM_FUNC_START(__pi_dcache_clean_poc)
 	dcache_by_line_op cvac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(dcache_clean_poc)
+SYM_FUNC_END(__pi_dcache_clean_poc)
+SYM_FUNC_ALIAS(dcache_clean_poc, __pi_dcache_clean_poc)
 
 /*
  *	dcache_clean_pop(start, end)
@@ -183,13 +186,14 @@ SYM_FUNC_END_PI(dcache_clean_poc)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START_PI(dcache_clean_pop)
+SYM_FUNC_START(__pi_dcache_clean_pop)
 	alternative_if_not ARM64_HAS_DCPOP
 	b	dcache_clean_poc
 	alternative_else_nop_endif
 	dcache_by_line_op cvap, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(dcache_clean_pop)
+SYM_FUNC_END(__pi_dcache_clean_pop)
+SYM_FUNC_ALIAS(dcache_clean_pop, __pi_dcache_clean_pop)
 
 /*
  *	__dma_flush_area(start, size)
@@ -199,11 +203,12 @@ SYM_FUNC_END_PI(dcache_clean_pop)
  *	- start   - virtual start address of region
  *	- size    - size in question
  */
-SYM_FUNC_START_PI(__dma_flush_area)
+SYM_FUNC_START(__pi___dma_flush_area)
 	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__dma_flush_area)
+SYM_FUNC_END(__pi___dma_flush_area)
+SYM_FUNC_ALIAS(__dma_flush_area, __pi___dma_flush_area)
 
 /*
  *	__dma_map_area(start, size, dir)
@@ -211,12 +216,13 @@ SYM_FUNC_END_PI(__dma_flush_area)
  *	- size	- size of region
  *	- dir	- DMA direction
  */
-SYM_FUNC_START_PI(__dma_map_area)
+SYM_FUNC_START(__pi___dma_map_area)
 	add	x1, x0, x1
 	cmp	w2, #DMA_FROM_DEVICE
 	b.eq	__pi_dcache_inval_poc
 	b	__pi_dcache_clean_poc
-SYM_FUNC_END_PI(__dma_map_area)
+SYM_FUNC_END(__pi___dma_map_area)
+SYM_FUNC_ALIAS(__dma_map_area, __pi___dma_map_area)
 
 /*
  *	__dma_unmap_area(start, size, dir)
@@ -224,9 +230,10 @@ SYM_FUNC_END_PI(__dma_map_area)
  *	- size	- size of region
  *	- dir	- DMA direction
  */
-SYM_FUNC_START_PI(__dma_unmap_area)
+SYM_FUNC_START(__pi___dma_unmap_area)
 	add	x1, x0, x1
 	cmp	w2, #DMA_TO_DEVICE
 	b.ne	__pi_dcache_inval_poc
 	ret
-SYM_FUNC_END_PI(__dma_unmap_area)
+SYM_FUNC_END(__pi___dma_unmap_area)
+SYM_FUNC_ALIAS(__dma_unmap_area, __pi___dma_unmap_area)
-- 
2.30.2


index 13132d1ed6d12..a5123cf0ce125 100644
--- a/arch/arm64/lib/strrchr.S
+++ b/arch/arm64/lib/strrchr.S
@@ -18,7 +18,7 @@
  * Returns:
  *	x0 - address of last occurrence of 'c' or 0
  */
-SYM_FUNC_START_WEAK_PI(strrchr)
+SYM_FUNC_START(__pi_strrchr)
 	mov	x3, #0
 	and	w1, w1, #0xff
 1:	ldrb	w2, [x0], #1
@@ -29,5 +29,6 @@ SYM_FUNC_START_WEAK_PI(strrchr)
 	b	1b
 2:	mov	x0, x3
 	ret
-SYM_FUNC_END_PI(strrchr)
+SYM_FUNC_END(__pi_strrchr)
+SYM_FUNC_ALIAS_WEAK(strrchr, __pi_strrchr)
 EXPORT_SYMBOL_NOKASAN(strrchr)
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 7d0563db42014..0ea6cc25dc663 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -107,10 +107,11 @@ SYM_FUNC_END(icache_inval_pou)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START_PI(dcache_clean_inval_poc)
+SYM_FUNC_START(__pi_dcache_clean_inval_poc)
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(dcache_clean_inval_poc)
+SYM_FUNC_END(__pi_dcache_clean_inval_poc)
+SYM_FUNC_ALIAS(dcache_clean_inval_poc, __pi_dcache_clean_inval_poc)
 
 /*
  *	dcache_clean_pou(start, end)
@@ -140,7 +141,7 @@ SYM_FUNC_END(dcache_clean_pou)
  *	- start   - kernel start address of region
  *	- end     - kernel end address of region
  */
-SYM_FUNC_START_PI(dcache_inval_poc)
+SYM_FUNC_START(__pi_dcache_inval_poc)
 	dcache_line_size x2, x3
 	sub	x3, x2, #1
 	tst	x1, x3				// end cache line aligned?
@@ -158,7 +159,8 @@ SYM_FUNC_START_PI(dcache_inval_poc)
 	b.lo	2b
 	dsb	sy
 	ret
-SYM_FUNC_END_PI(dcache_inval_poc)
+SYM_FUNC_END(__pi_dcache_inval_poc)
+SYM_FUNC_ALIAS(dcache_inval_poc, __pi_dcache_inval_poc)
 
 /*
  *	dcache_clean_poc(start, end)
@@ -169,10 +171,11 @@ SYM_FUNC_END_PI(dcache_inval_poc)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START_PI(dcache_clean_poc)
+SYM_FUNC_START(__pi_dcache_clean_poc)
 	dcache_by_line_op cvac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(dcache_clean_poc)
+SYM_FUNC_END(__pi_dcache_clean_poc)
+SYM_FUNC_ALIAS(dcache_clean_poc, __pi_dcache_clean_poc)
 
 /*
  *	dcache_clean_pop(start, end)
@@ -183,13 +186,14 @@ SYM_FUNC_END_PI(dcache_clean_poc)
  *	- start   - virtual start address of region
  *	- end     - virtual end address of region
  */
-SYM_FUNC_START_PI(dcache_clean_pop)
+SYM_FUNC_START(__pi_dcache_clean_pop)
 	alternative_if_not ARM64_HAS_DCPOP
 	b	dcache_clean_poc
 	alternative_else_nop_endif
 	dcache_by_line_op cvap, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(dcache_clean_pop)
+SYM_FUNC_END(__pi_dcache_clean_pop)
+SYM_FUNC_ALIAS(dcache_clean_pop, __pi_dcache_clean_pop)
 
 /*
  *	__dma_flush_area(start, size)
@@ -199,11 +203,12 @@ SYM_FUNC_END_PI(dcache_clean_pop)
  *	- start   - virtual start address of region
  *	- size    - size in question
  */
-SYM_FUNC_START_PI(__dma_flush_area)
+SYM_FUNC_START(__pi___dma_flush_area)
 	add	x1, x0, x1
 	dcache_by_line_op civac, sy, x0, x1, x2, x3
 	ret
-SYM_FUNC_END_PI(__dma_flush_area)
+SYM_FUNC_END(__pi___dma_flush_area)
+SYM_FUNC_ALIAS(__dma_flush_area, __pi___dma_flush_area)
 
 /*
  *	__dma_map_area(start, size, dir)
@@ -211,12 +216,13 @@ SYM_FUNC_END_PI(__dma_flush_area)
  *	- size	- size of region
  *	- dir	- DMA direction
  */
-SYM_FUNC_START_PI(__dma_map_area)
+SYM_FUNC_START(__pi___dma_map_area)
 	add	x1, x0, x1
 	cmp	w2, #DMA_FROM_DEVICE
 	b.eq	__pi_dcache_inval_poc
 	b	__pi_dcache_clean_poc
-SYM_FUNC_END_PI(__dma_map_area)
+SYM_FUNC_END(__pi___dma_map_area)
+SYM_FUNC_ALIAS(__dma_map_area, __pi___dma_map_area)
 
 /*
  *	__dma_unmap_area(start, size, dir)
@@ -224,9 +230,10 @@ SYM_FUNC_END_PI(__dma_map_area)
  *	- size	- size of region
  *	- dir	- DMA direction
  */
-SYM_FUNC_START_PI(__dma_unmap_area)
+SYM_FUNC_START(__pi___dma_unmap_area)
 	add	x1, x0, x1
 	cmp	w2, #DMA_TO_DEVICE
 	b.ne	__pi_dcache_inval_poc
 	ret
-SYM_FUNC_END_PI(__dma_unmap_area)
+SYM_FUNC_END(__pi___dma_unmap_area)
+SYM_FUNC_ALIAS(__dma_unmap_area, __pi___dma_unmap_area)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v2 5/7] x86: clean up symbol aliasing
  2022-01-25 11:31 ` Mark Rutland
@ 2022-01-25 11:31   ` Mark Rutland
  -1 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2022-01-25 11:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: acme, ardb, bp, broonie, catalin.marinas, dave.hansen, jpoimboe,
	jslaby, linux-arm-kernel, linux, mark.rutland, mingo, peterz,
	tglx, will

Now that we have SYM_FUNC_ALIAS() and SYM_FUNC_ALIAS_WEAK(), use those
to simplify the definition of function aliases across arch/x86.

For clarity, where there are multiple annotations such as
EXPORT_SYMBOL(), I've tried to keep annotations grouped by symbol. For
example, where a function has a name and an alias which are both
exported, this is organised as:

	SYM_FUNC_START(func)
	    ... asm insns ...
	SYM_FUNC_END(func)
	EXPORT_SYMBOL(func)

	SYM_FUNC_ALIAS(alias, func)
	EXPORT_SYMBOL(alias)

Where there are only aliases and no exports or other annotations, I have
not bothered with line spacing, e.g.

	SYM_FUNC_START(func)
	    ... asm insns ...
	SYM_FUNC_END(func)
	SYM_FUNC_ALIAS(alias, func)

There should be no functional change as a result of this patch.

The x86 string routines under tools/ will be handled by a subsequent
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Brown <broonie@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/boot/compressed/head_32.S |  3 +--
 arch/x86/boot/compressed/head_64.S |  3 +--
 arch/x86/crypto/aesni-intel_asm.S  |  4 +---
 arch/x86/lib/memcpy_64.S           | 10 +++++-----
 arch/x86/lib/memmove_64.S          |  4 ++--
 arch/x86/lib/memset_64.S           |  6 +++---
 6 files changed, 13 insertions(+), 17 deletions(-)

diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
index 659fad53ca823..3b354eb9516df 100644
--- a/arch/x86/boot/compressed/head_32.S
+++ b/arch/x86/boot/compressed/head_32.S
@@ -152,14 +152,13 @@ SYM_FUNC_END(startup_32)
 
 #ifdef CONFIG_EFI_STUB
 SYM_FUNC_START(efi32_stub_entry)
-SYM_FUNC_START_ALIAS(efi_stub_entry)
 	add	$0x4, %esp
 	movl	8(%esp), %esi	/* save boot_params pointer */
 	call	efi_main
 	/* efi_main returns the possibly relocated address of startup_32 */
 	jmp	*%eax
 SYM_FUNC_END(efi32_stub_entry)
-SYM_FUNC_END_ALIAS(efi_stub_entry)
+SYM_FUNC_ALIAS(efi_stub_entry, efi32_stub_entry)
 #endif
 
 	.text
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index fd9441f404570..dea95301196b8 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -535,7 +535,6 @@ SYM_CODE_END(startup_64)
 #ifdef CONFIG_EFI_STUB
 	.org 0x390
 SYM_FUNC_START(efi64_stub_entry)
-SYM_FUNC_START_ALIAS(efi_stub_entry)
 	and	$~0xf, %rsp			/* realign the stack */
 	movq	%rdx, %rbx			/* save boot_params pointer */
 	call	efi_main
@@ -543,7 +542,7 @@ SYM_FUNC_START_ALIAS(efi_stub_entry)
 	leaq	rva(startup_64)(%rax), %rax
 	jmp	*%rax
 SYM_FUNC_END(efi64_stub_entry)
-SYM_FUNC_END_ALIAS(efi_stub_entry)
+SYM_FUNC_ALIAS(efi_stub_entry, efi64_stub_entry)
 #endif
 
 	.text
diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
index 363699dd72206..837c1e0aa0217 100644
--- a/arch/x86/crypto/aesni-intel_asm.S
+++ b/arch/x86/crypto/aesni-intel_asm.S
@@ -1751,8 +1751,6 @@ SYM_FUNC_END(aesni_gcm_finalize)
 
 #endif
 
-
-SYM_FUNC_START_LOCAL_ALIAS(_key_expansion_128)
 SYM_FUNC_START_LOCAL(_key_expansion_256a)
 	pshufd $0b11111111, %xmm1, %xmm1
 	shufps $0b00010000, %xmm0, %xmm4
@@ -1764,7 +1762,7 @@ SYM_FUNC_START_LOCAL(_key_expansion_256a)
 	add $0x10, TKEYP
 	RET
 SYM_FUNC_END(_key_expansion_256a)
-SYM_FUNC_END_ALIAS(_key_expansion_128)
+SYM_FUNC_ALIAS_LOCAL(_key_expansion_128, _key_expansion_256a)
 
 SYM_FUNC_START_LOCAL(_key_expansion_192a)
 	pshufd $0b01010101, %xmm1, %xmm1
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 59cf2343f3d90..d0d7b9bc6cad3 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -27,8 +27,7 @@
  * Output:
  * rax original destination
  */
-SYM_FUNC_START_ALIAS(__memcpy)
-SYM_FUNC_START_WEAK(memcpy)
+SYM_FUNC_START(__memcpy)
 	ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
 		      "jmp memcpy_erms", X86_FEATURE_ERMS
 
@@ -40,11 +39,12 @@ SYM_FUNC_START_WEAK(memcpy)
 	movl %edx, %ecx
 	rep movsb
 	RET
-SYM_FUNC_END(memcpy)
-SYM_FUNC_END_ALIAS(__memcpy)
-EXPORT_SYMBOL(memcpy)
+SYM_FUNC_END(__memcpy)
 EXPORT_SYMBOL(__memcpy)
 
+SYM_FUNC_ALIAS_WEAK(memcpy, __memcpy)
+EXPORT_SYMBOL(memcpy)
+
 /*
  * memcpy_erms() - enhanced fast string memcpy. This is faster and
  * simpler than memcpy. Use memcpy_erms when possible.
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 50ea390df7128..d83cba364e31d 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -24,7 +24,6 @@
  * Output:
  * rax: dest
  */
-SYM_FUNC_START_WEAK(memmove)
 SYM_FUNC_START(__memmove)
 
 	mov %rdi, %rax
@@ -207,6 +206,7 @@ SYM_FUNC_START(__memmove)
 13:
 	RET
 SYM_FUNC_END(__memmove)
-SYM_FUNC_END_ALIAS(memmove)
 EXPORT_SYMBOL(__memmove)
+
+SYM_FUNC_ALIAS_WEAK(memmove, __memmove)
 EXPORT_SYMBOL(memmove)
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index d624f2bc42f16..fc9ffd3ff3b21 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -17,7 +17,6 @@
  *
  * rax   original destination
  */
-SYM_FUNC_START_WEAK(memset)
 SYM_FUNC_START(__memset)
 	/*
 	 * Some CPUs support enhanced REP MOVSB/STOSB feature. It is recommended
@@ -42,10 +41,11 @@ SYM_FUNC_START(__memset)
 	movq %r9,%rax
 	RET
 SYM_FUNC_END(__memset)
-SYM_FUNC_END_ALIAS(memset)
-EXPORT_SYMBOL(memset)
 EXPORT_SYMBOL(__memset)
 
+SYM_FUNC_ALIAS_WEAK(memset, __memset)
+EXPORT_SYMBOL(memset)
+
 /*
  * ISO C memset - set a memory block to a byte value. This function uses
  * enhanced rep stosb to override the fast string function.
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v2 6/7] linkage: remove SYM_FUNC_{START,END}_ALIAS()
  2022-01-25 11:31 ` Mark Rutland
@ 2022-01-25 11:31   ` Mark Rutland
  -1 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2022-01-25 11:31 UTC (permalink / raw)
  To: linux-kernel
  Cc: acme, ardb, bp, broonie, catalin.marinas, dave.hansen, jpoimboe,
	jslaby, linux-arm-kernel, linux, mark.rutland, mingo, peterz,
	tglx, will

Now that all aliases are defined using SYM_FUNC_ALIAS(), remove the old
SYM_FUNC_{START,END}_ALIAS() macros.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Brown <broonie@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 Documentation/asm-annotations.rst | 13 -------------
 include/linux/linkage.h           | 30 ------------------------------
 2 files changed, 43 deletions(-)

diff --git a/Documentation/asm-annotations.rst b/Documentation/asm-annotations.rst
index 4868b58c60fb1..a64f2ca469d45 100644
--- a/Documentation/asm-annotations.rst
+++ b/Documentation/asm-annotations.rst
@@ -142,19 +142,6 @@ denoting a range of code via ``SYM_*_START/END`` annotations.
   result, except the debug information for the instructions is generated to
   the object file only once -- for the non-``ALIAS`` case.
 
-* ``SYM_FUNC_START_ALIAS`` and ``SYM_FUNC_START_LOCAL_ALIAS`` are deprecated
-    ways to define two or more names for one function. The typical use is::
-
-    SYM_FUNC_START_ALIAS(__memset)
-    SYM_FUNC_START(memset)
-        ... asm insns ...
-    SYM_FUNC_END(memset)
-    SYM_FUNC_END_ALIAS(__memset)
-
-  In this example, one can call ``__memset`` or ``memset`` with the same
-  result, except the debug information for the instructions is generated to
-  the object file only once -- for the non-``ALIAS`` case.
-
 * ``SYM_CODE_START`` and ``SYM_CODE_START_LOCAL`` should be used only in
   special cases -- if you know what you are doing. This is used exclusively
   for interrupt handlers and similar where the calling convention is not the C
diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index becd64e9e5b18..960d9a032861e 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -220,30 +220,8 @@
 	SYM_ENTRY(name, linkage, SYM_A_NONE)
 #endif
 
-/*
- * SYM_FUNC_START_LOCAL_ALIAS -- use where there are two local names for one
- * function
- */
-#ifndef SYM_FUNC_START_LOCAL_ALIAS
-#define SYM_FUNC_START_LOCAL_ALIAS(name)		\
-	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
-#endif
-
-/*
- * SYM_FUNC_START_ALIAS -- use where there are two global names for one
- * function
- */
-#ifndef SYM_FUNC_START_ALIAS
-#define SYM_FUNC_START_ALIAS(name)			\
-	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
-#endif
-
 /* SYM_FUNC_START -- use for global functions */
 #ifndef SYM_FUNC_START
-/*
- * The same as SYM_FUNC_START_ALIAS, but we will need to distinguish these two
- * later.
- */
 #define SYM_FUNC_START(name)				\
 	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
 #endif
@@ -256,7 +234,6 @@
 
 /* SYM_FUNC_START_LOCAL -- use for local functions */
 #ifndef SYM_FUNC_START_LOCAL
-/* the same as SYM_FUNC_START_LOCAL_ALIAS, see comment near SYM_FUNC_START */
 #define SYM_FUNC_START_LOCAL(name)			\
 	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
 #endif
@@ -279,18 +256,11 @@
 	SYM_START(name, SYM_L_WEAK, SYM_A_NONE)
 #endif
 
-/* SYM_FUNC_END_ALIAS -- the end of LOCAL_ALIASed or ALIASed function */
-#ifndef SYM_FUNC_END_ALIAS
-#define SYM_FUNC_END_ALIAS(name)			\
-	SYM_END(name, SYM_T_FUNC)
-#endif
-
 /*
  * SYM_FUNC_END -- the end of SYM_FUNC_START_LOCAL, SYM_FUNC_START,
  * SYM_FUNC_START_WEAK, ...
  */
 #ifndef SYM_FUNC_END
-/* the same as SYM_FUNC_END_ALIAS, see comment near SYM_FUNC_START */
 #define SYM_FUNC_END(name)				\
 	SYM_END(name, SYM_T_FUNC)
 #endif
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH v2 7/7] tools: update x86 string routines
  2022-01-25 11:31 ` Mark Rutland
@ 2022-01-25 11:32   ` Mark Rutland
  -1 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2022-01-25 11:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: acme, ardb, bp, broonie, catalin.marinas, dave.hansen, jpoimboe,
	jslaby, linux-arm-kernel, linux, mark.rutland, mingo, peterz,
	tglx, will

When building the perf tool the build system complains that the x86
string routines are out-of-date:

| Warning: Kernel ABI header at 'tools/arch/x86/lib/memcpy_64.S' differs from latest version at 'arch/x86/lib/memcpy_64.S'
| diff -u tools/arch/x86/lib/memcpy_64.S arch/x86/lib/memcpy_64.S
| Warning: Kernel ABI header at 'tools/arch/x86/lib/memset_64.S' differs from latest version at 'arch/x86/lib/memset_64.S'
| diff -u tools/arch/x86/lib/memset_64.S arch/x86/lib/memset_64.S

This is due to the way the asm-annotations for symbol aliasing were
reworked, which should have no functional/performance impact.

Import the latest versions, adding the new style SYM_FUNC_ALIAS(),
SYM_FUNC_ALIAS_LOCAL(), SYM_FUNC_ALIAS_WEAK() macros into the perf
<linux/linkage.h> header. The old style SYM_FUNC_START_ALIAS() and
SYM_FUNC_END_ALIAS() macros are removed.

Other than removing the build-time warning, there should be no
functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
---
 tools/arch/x86/lib/memcpy_64.S          | 10 ++--
 tools/arch/x86/lib/memset_64.S          |  6 +-
 tools/perf/util/include/linux/linkage.h | 80 ++++++++++++++++---------
 3 files changed, 61 insertions(+), 35 deletions(-)

diff --git a/tools/arch/x86/lib/memcpy_64.S b/tools/arch/x86/lib/memcpy_64.S
index 59cf2343f3d90..d0d7b9bc6cad3 100644
--- a/tools/arch/x86/lib/memcpy_64.S
+++ b/tools/arch/x86/lib/memcpy_64.S
@@ -27,8 +27,7 @@
  * Output:
  * rax original destination
  */
-SYM_FUNC_START_ALIAS(__memcpy)
-SYM_FUNC_START_WEAK(memcpy)
+SYM_FUNC_START(__memcpy)
 	ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
 		      "jmp memcpy_erms", X86_FEATURE_ERMS
 
@@ -40,11 +39,12 @@ SYM_FUNC_START_WEAK(memcpy)
 	movl %edx, %ecx
 	rep movsb
 	RET
-SYM_FUNC_END(memcpy)
-SYM_FUNC_END_ALIAS(__memcpy)
-EXPORT_SYMBOL(memcpy)
+SYM_FUNC_END(__memcpy)
 EXPORT_SYMBOL(__memcpy)
 
+SYM_FUNC_ALIAS_WEAK(memcpy, __memcpy)
+EXPORT_SYMBOL(memcpy)
+
 /*
  * memcpy_erms() - enhanced fast string memcpy. This is faster and
  * simpler than memcpy. Use memcpy_erms when possible.
diff --git a/tools/arch/x86/lib/memset_64.S b/tools/arch/x86/lib/memset_64.S
index d624f2bc42f16..fc9ffd3ff3b21 100644
--- a/tools/arch/x86/lib/memset_64.S
+++ b/tools/arch/x86/lib/memset_64.S
@@ -17,7 +17,6 @@
  *
  * rax   original destination
  */
-SYM_FUNC_START_WEAK(memset)
 SYM_FUNC_START(__memset)
 	/*
 	 * Some CPUs support enhanced REP MOVSB/STOSB feature. It is recommended
@@ -42,10 +41,11 @@ SYM_FUNC_START(__memset)
 	movq %r9,%rax
 	RET
 SYM_FUNC_END(__memset)
-SYM_FUNC_END_ALIAS(memset)
-EXPORT_SYMBOL(memset)
 EXPORT_SYMBOL(__memset)
 
+SYM_FUNC_ALIAS_WEAK(memset, __memset)
+EXPORT_SYMBOL(memset)
+
 /*
  * ISO C memset - set a memory block to a byte value. This function uses
  * enhanced rep stosb to override the fast string function.
diff --git a/tools/perf/util/include/linux/linkage.h b/tools/perf/util/include/linux/linkage.h
index 5acf053fca7d4..b8edb5059bd6d 100644
--- a/tools/perf/util/include/linux/linkage.h
+++ b/tools/perf/util/include/linux/linkage.h
@@ -32,59 +32,59 @@
 
 /* === generic annotations === */
 
+#ifndef SYM_ENTRY_AT
+#define SYM_ENTRY_AT(name, location, linkage)		\
+	linkage(name) ASM_NL				\
+	.set .L____sym_entry__##name, location ASM_NL	\
+	.set name, location ASM_NL
+#endif
+
 /* SYM_ENTRY -- use only if you have to for non-paired symbols */
 #ifndef SYM_ENTRY
 #define SYM_ENTRY(name, linkage, align...)		\
-	linkage(name) ASM_NL				\
 	align ASM_NL					\
-	name:
+	SYM_ENTRY_AT(name, ., linkage)
+#endif
+
+/* SYM_START_AT -- use only if you have to */
+#ifndef SYM_START_AT
+#define SYM_START_AT(name, location, linkage)		\
+	SYM_ENTRY_AT(name, location, linkage)
 #endif
 
 /* SYM_START -- use only if you have to */
 #ifndef SYM_START
 #define SYM_START(name, linkage, align...)		\
-	SYM_ENTRY(name, linkage, align)
+	align ASM_NL					\
+	SYM_START_AT(name, ., linkage)
 #endif
 
-/* SYM_END -- use only if you have to */
-#ifndef SYM_END
-#define SYM_END(name, sym_type)				\
+/* SYM_END_AT -- use only if you have to */
+#ifndef SYM_END_AT
+#define SYM_END_AT(name, location, sym_type)		\
 	.type name sym_type ASM_NL			\
-	.size name, .-name
+	.set .L____sym_end__##name, location ASM_NL	\
+	.size name, location-name ASM_NL
 #endif
 
-/*
- * SYM_FUNC_START_ALIAS -- use where there are two global names for one
- * function
- */
-#ifndef SYM_FUNC_START_ALIAS
-#define SYM_FUNC_START_ALIAS(name)			\
-	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
+/* SYM_END -- use only if you have to */
+#ifndef SYM_END
+#define SYM_END(name, sym_type)				\
+	SYM_END_AT(name, ., sym_type)
 #endif
 
 /* SYM_FUNC_START -- use for global functions */
 #ifndef SYM_FUNC_START
-/*
- * The same as SYM_FUNC_START_ALIAS, but we will need to distinguish these two
- * later.
- */
 #define SYM_FUNC_START(name)				\
 	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
 #endif
 
 /* SYM_FUNC_START_LOCAL -- use for local functions */
 #ifndef SYM_FUNC_START_LOCAL
-/* the same as SYM_FUNC_START_LOCAL_ALIAS, see comment near SYM_FUNC_START */
 #define SYM_FUNC_START_LOCAL(name)			\
 	SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
 #endif
 
-/* SYM_FUNC_END_ALIAS -- the end of LOCAL_ALIASed or ALIASed function */
-#ifndef SYM_FUNC_END_ALIAS
-#define SYM_FUNC_END_ALIAS(name)			\
-	SYM_END(name, SYM_T_FUNC)
-#endif
-
 /* SYM_FUNC_START_WEAK -- use for weak functions */
 #ifndef SYM_FUNC_START_WEAK
 #define SYM_FUNC_START_WEAK(name)			\
@@ -96,9 +96,35 @@
  * SYM_FUNC_START_WEAK, ...
  */
 #ifndef SYM_FUNC_END
-/* the same as SYM_FUNC_END_ALIAS, see comment near SYM_FUNC_START */
 #define SYM_FUNC_END(name)				\
 	SYM_END(name, SYM_T_FUNC)
 #endif
 
-#endif	/* PERF_LINUX_LINKAGE_H_ */
+/*
+ * SYM_FUNC_ALIAS_LOCAL -- define a local alias for an existing function
+ */
+#ifndef SYM_FUNC_ALIAS_LOCAL
+#define SYM_FUNC_ALIAS_LOCAL(alias, name)				\
+	SYM_START_AT(alias, .L____sym_entry__##name, SYM_L_LOCAL)	\
+	SYM_END_AT(alias, .L____sym_end__##name, SYM_T_FUNC)
+#endif
+
+/*
+ * SYM_FUNC_ALIAS -- define a global alias for an existing function
+ */
+#ifndef SYM_FUNC_ALIAS
+#define SYM_FUNC_ALIAS(alias, name)					\
+	SYM_START_AT(alias, .L____sym_entry__##name, SYM_L_GLOBAL)	\
+	SYM_END_AT(alias, .L____sym_end__##name, SYM_T_FUNC)
+#endif
+
+/*
+ * SYM_FUNC_ALIAS_WEAK -- define a weak global alias for an existing function
+ */
+#ifndef SYM_FUNC_ALIAS_WEAK
+#define SYM_FUNC_ALIAS_WEAK(alias, name)				\
+	SYM_START_AT(alias, .L____sym_entry__##name, SYM_L_WEAK)	\
+	SYM_END_AT(alias, .L____sym_end__##name, SYM_T_FUNC)
+#endif
+
+#endif /* PERF_LINUX_LINKAGE_H */
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 0/7] linkage: better symbol aliasing
  2022-01-25 11:31 ` Mark Rutland
@ 2022-01-25 15:28   ` Ard Biesheuvel
  -1 siblings, 0 replies; 32+ messages in thread
From: Ard Biesheuvel @ 2022-01-25 15:28 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Linux Kernel Mailing List, acme, Borislav Petkov, Mark Brown,
	Catalin Marinas, Dave Hansen, Josh Poimboeuf, Jiri Slaby,
	Linux ARM, Russell King, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, Will Deacon

On Tue, 25 Jan 2022 at 12:32, Mark Rutland <mark.rutland@arm.com> wrote:
>
> This series aims to make symbol aliasing simpler and more consistent.
> The basic idea is to replace SYM_FUNC_START_ALIAS(alias) and
> SYM_FUNC_END_ALIAS(alias) with a new SYM_FUNC_ALIAS(alias, name), so
> that e.g.
>
>     SYM_FUNC_START(func)
>     SYM_FUNC_START_ALIAS(alias1)
>     SYM_FUNC_START_ALIAS(alias2)
>         ... asm insns ...
>     SYM_FUNC_END(func)
>     SYM_FUNC_END_ALIAS(alias1)
>     SYM_FUNC_END_ALIAS(alias2)
>     EXPORT_SYMBOL(alias1)
>     EXPORT_SYMBOL(alias2)
>
> ... can become:
>
>     SYM_FUNC_START(name)
>         ... asm insns ...
>     SYM_FUNC_END(name)
>
>     SYM_FUNC_ALIAS(alias1, func)
>     EXPORT_SYMBOL(alias1)
>
>     SYM_FUNC_ALIAS(alias2, func)
>     EXPORT_SYMBOL(alias2)
>
> This avoids repetition and hopefully make it easier to ensure
> consistency (e.g. so each function has a single canonical name and
> associated metadata).
>

I take it this affects the sizes of the alias ELF symbols? Does that matter?

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 0/7] linkage: better symbol aliasing
  2022-01-25 15:28   ` Ard Biesheuvel
@ 2022-01-25 15:45     ` Mark Rutland
  -1 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2022-01-25 15:45 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Linux Kernel Mailing List, acme, Borislav Petkov, Mark Brown,
	Catalin Marinas, Dave Hansen, Josh Poimboeuf, Jiri Slaby,
	Linux ARM, Russell King, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, Will Deacon

On Tue, Jan 25, 2022 at 04:28:11PM +0100, Ard Biesheuvel wrote:
> On Tue, 25 Jan 2022 at 12:32, Mark Rutland <mark.rutland@arm.com> wrote:
> >
> > This series aims to make symbol aliasing simpler and more consistent.
> > The basic idea is to replace SYM_FUNC_START_ALIAS(alias) and
> > SYM_FUNC_END_ALIAS(alias) with a new SYM_FUNC_ALIAS(alias, name), so
> > that e.g.
> >
> >     SYM_FUNC_START(func)
> >     SYM_FUNC_START_ALIAS(alias1)
> >     SYM_FUNC_START_ALIAS(alias2)
> >         ... asm insns ...
> >     SYM_FUNC_END(func)
> >     SYM_FUNC_END_ALIAS(alias1)
> >     SYM_FUNC_END_ALIAS(alias2)
> >     EXPORT_SYMBOL(alias1)
> >     EXPORT_SYMBOL(alias2)
> >
> > ... can become:
> >
> >     SYM_FUNC_START(name)
> >         ... asm insns ...
> >     SYM_FUNC_END(name)
> >
> >     SYM_FUNC_ALIAS(alias1, func)
> >     EXPORT_SYMBOL(alias1)
> >
> >     SYM_FUNC_ALIAS(alias2, func)
> >     EXPORT_SYMBOL(alias2)
> >
> > This avoids repetition and hopefully make it easier to ensure
> > consistency (e.g. so each function has a single canonical name and
> > associated metadata).
> >
> 
> I take it this affects the sizes of the alias ELF symbols? Does that matter?

The alias should be given the same size as the original symbol, unless I've
made an error. If you look at patch 3:

* In SYM_FUNC_START(name), via SYM_ENTRY_AT(name, ...), we create a local label
  for the start of the function: .L____sym_entry__##name

* In SYM_FUNC_END(name), via SYM_END_AT(name, ...), we create a local label for
  the end of the function: .L____sym_end__##name

* In SYM_FUNC_ALIAS*(alias,name), we define the start and end of the alias as
  the start and end of the original symbol using those local labels, e.g.

  | #define SYM_FUNC_ALIAS(alias, name)                                     \
  |         SYM_START_AT(alias, .L____sym_entry__##name, SYM_L_GLOBAL)      \
  |         SYM_END_AT(alias, .L____sym_end__##name, SYM_T_FUNC)

Note that:

* SYM_FUNC_START() uses SYM_START(), which uses SYM_ENTRY_AT()
* SYM_FUNC_END() uses SYM_END(), which uses SYM_END_AT()

... so the definition of the alias is ultimately structurally identical to the
definition of the canonical name, at least for now.
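
To make that concrete, here is a rough sketch (illustrative only: ASM_NL is
rendered as line breaks, and SYM_L_GLOBAL / SYM_T_FUNC are spelled as the
usual .globl / STT_FUNC, whose exact form is an assembler/arch detail) of
what SYM_FUNC_ALIAS(memcpy, __memcpy) boils down to with the definitions
above:

	.globl	memcpy					/* SYM_L_GLOBAL(memcpy) */
	.set	.L____sym_entry__memcpy, .L____sym_entry____memcpy
	.set	memcpy, .L____sym_entry____memcpy	/* same address as __memcpy */
	.type	memcpy STT_FUNC
	.set	.L____sym_end__memcpy, .L____sym_end____memcpy
	.size	memcpy, .L____sym_end____memcpy - memcpy /* same size as __memcpy */

... so nm/readelf should report memcpy and __memcpy with the same value and
size.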

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 0/7] linkage: better symbol aliasing
  2022-01-25 15:45     ` Mark Rutland
@ 2022-01-25 15:49       ` Ard Biesheuvel
  -1 siblings, 0 replies; 32+ messages in thread
From: Ard Biesheuvel @ 2022-01-25 15:49 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Linux Kernel Mailing List, acme, Borislav Petkov, Mark Brown,
	Catalin Marinas, Dave Hansen, Josh Poimboeuf, Jiri Slaby,
	Linux ARM, Russell King, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, Will Deacon

On Tue, 25 Jan 2022 at 16:46, Mark Rutland <mark.rutland@arm.com> wrote:
>
> On Tue, Jan 25, 2022 at 04:28:11PM +0100, Ard Biesheuvel wrote:
> > On Tue, 25 Jan 2022 at 12:32, Mark Rutland <mark.rutland@arm.com> wrote:
> > >
> > > This series aims to make symbol aliasing simpler and more consistent.
> > > The basic idea is to replace SYM_FUNC_START_ALIAS(alias) and
> > > SYM_FUNC_END_ALIAS(alias) with a new SYM_FUNC_ALIAS(alias, name), so
> > > that e.g.
> > >
> > >     SYM_FUNC_START(func)
> > >     SYM_FUNC_START_ALIAS(alias1)
> > >     SYM_FUNC_START_ALIAS(alias2)
> > >         ... asm insns ...
> > >     SYM_FUNC_END(func)
> > >     SYM_FUNC_END_ALIAS(alias1)
> > >     SYM_FUNC_END_ALIAS(alias2)
> > >     EXPORT_SYMBOL(alias1)
> > >     EXPORT_SYMBOL(alias2)
> > >
> > > ... can become:
> > >
> > >     SYM_FUNC_START(name)
> > >         ... asm insns ...
> > >     SYM_FUNC_END(name)
> > >
> > >     SYM_FUNC_ALIAS(alias1, func)
> > >     EXPORT_SYMBOL(alias1)
> > >
> > >     SYM_FUNC_ALIAS(alias2, func)
> > >     EXPORT_SYMBOL(alias2)
> > >
> > > This avoids repetition and hopefully make it easier to ensure
> > > consistency (e.g. so each function has a single canonical name and
> > > associated metadata).
> > >
> >
> > I take it this affects the sizes of the alias ELF symbols? Does that matter?
>
> The alias should be given the same size as the original symbol, unless I've
> made an error. If you look at patch 3:
>
> * In SYM_FUNC_START(name), via SYM_ENTRY_AT(name, ...), we create a local label
>   for the start of the function: .L____sym_entry__##name
>
> * In SYM_FUNC_END(name), via SYM_END_AT(name, ...), we create a local label for
>   the end of the function: .L____sym_end__##name
>
> * In SYM_FUNC_ALIAS*(alias,name), we define the start and end of the alias as
>   the start and end of the original symbol using those local labels, e.g.
>
>   | #define SYM_FUNC_ALIAS(alias, name)                                     \
>   |         SYM_START_AT(alias, .L____sym_entry__##name, SYM_L_GLOBAL)      \
>   |         SYM_END_AT(alias, .L____sym_end__##name, SYM_T_FUNC)
>
> Note that:
>
> * SYM_FUNC_START() uses SYM_START(), which uses SYM_ENTRY_AT()
> * SYM_FUNC_END() uses SYM_END(), which uses SYM_END_AT()
>
> ... so the definition of tha alias is ultimately structurally identical to the
> definition of the canoncial name, at least for now.
>

Ah right, apologies for not looking more carefully - I assumed the
changed placement implied that the aliases had zero size.

And ultimately, I don't think there is an obviously correct answer
anyway; it's just the [apparently non-existent] change in behavior I
was curious about.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 0/7] linkage: better symbol aliasing
  2022-01-25 15:49       ` Ard Biesheuvel
@ 2022-01-25 15:58         ` Mark Rutland
  -1 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2022-01-25 15:58 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Linux Kernel Mailing List, acme, Borislav Petkov, Mark Brown,
	Catalin Marinas, Dave Hansen, Josh Poimboeuf, Jiri Slaby,
	Linux ARM, Russell King, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, Will Deacon

On Tue, Jan 25, 2022 at 04:49:03PM +0100, Ard Biesheuvel wrote:
> On Tue, 25 Jan 2022 at 16:46, Mark Rutland <mark.rutland@arm.com> wrote:
> >
> > On Tue, Jan 25, 2022 at 04:28:11PM +0100, Ard Biesheuvel wrote:
> > > On Tue, 25 Jan 2022 at 12:32, Mark Rutland <mark.rutland@arm.com> wrote:
> > > >
> > > > This series aims to make symbol aliasing simpler and more consistent.
> > > > The basic idea is to replace SYM_FUNC_START_ALIAS(alias) and
> > > > SYM_FUNC_END_ALIAS(alias) with a new SYM_FUNC_ALIAS(alias, name), so
> > > > that e.g.
> > > >
> > > >     SYM_FUNC_START(func)
> > > >     SYM_FUNC_START_ALIAS(alias1)
> > > >     SYM_FUNC_START_ALIAS(alias2)
> > > >         ... asm insns ...
> > > >     SYM_FUNC_END(func)
> > > >     SYM_FUNC_END_ALIAS(alias1)
> > > >     SYM_FUNC_END_ALIAS(alias2)
> > > >     EXPORT_SYMBOL(alias1)
> > > >     EXPORT_SYMBOL(alias2)
> > > >
> > > > ... can become:
> > > >
> > > >     SYM_FUNC_START(name)
> > > >         ... asm insns ...
> > > >     SYM_FUNC_END(name)
> > > >
> > > >     SYM_FUNC_ALIAS(alias1, func)
> > > >     EXPORT_SYMBOL(alias1)
> > > >
> > > >     SYM_FUNC_ALIAS(alias2, func)
> > > >     EXPORT_SYMBOL(alias2)
> > > >
> > > > This avoids repetition and hopefully make it easier to ensure
> > > > consistency (e.g. so each function has a single canonical name and
> > > > associated metadata).
> > > >
> > >
> > > I take it this affects the sizes of the alias ELF symbols? Does that matter?
> >
> > The alias should be given the same size as the original symbol, unless I've
> > made an error. If you look at patch 3:
> >
> > * In SYM_FUNC_START(name), via SYM_ENTRY_AT(name, ...), we create a local label
> >   for the start of the function: .L____sym_entry__##name
> >
> > * In SYM_FUNC_END(name), via SYM_END_AT(name, ...), we create a local label for
> >   the end of the function: .L____sym_end__##name
> >
> > * In SYM_FUNC_ALIAS*(alias,name), we define the start and end of the alias as
> >   the start and end of the original symbol using those local labels, e.g.
> >
> >   | #define SYM_FUNC_ALIAS(alias, name)                                     \
> >   |         SYM_START_AT(alias, .L____sym_entry__##name, SYM_L_GLOBAL)      \
> >   |         SYM_END_AT(alias, .L____sym_end__##name, SYM_T_FUNC)
> >
> > Note that:
> >
> > * SYM_FUNC_START() uses SYM_START(), which uses SYM_ENTRY_AT()
> > * SYM_FUNC_END() uses SYM_END(), which uses SYM_END_AT()
> >
> > ... so the definition of the alias is ultimately structurally identical to the
> > definition of the canonical name, at least for now.
> >
> 
> Ah right, apologies for not looking more carefully - I assumed the
> changed placement implied that the aliases had zero size.

NP; I didn't make that clear in the cover letter, and now it's written up and
archived for future reference. :)
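
For the record, here's roughly what the quoted SYM_FUNC_ALIAS(alias, func)
expands to (a sketch using the patch-3 local label names, given that
SYM_L_GLOBAL() emits .globl and SYM_T_FUNC is STT_FUNC):

	.globl	alias
	.set	alias, .L____sym_entry__func
	.type	alias STT_FUNC
	.size	alias, .L____sym_end__func - alias

... i.e. the alias takes func's start address and is sized by func's end label.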

> And ultimately, I don't think there is an obviously correct answer
> anyway, it's just the [apparently non-existent] change in behavior I
> was curious about.

FWIW, I hadn't really considered whether we actually need that for the aliases;
it was just the path of least resistance implementation-wise, and I'd like to
be able to use the local labels for the bounds in future for other annotations
and sanity-checks.
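
For example, something of this shape (purely hypothetical, not in the series):

	.if (.L____sym_end__func - .L____sym_entry__func) == 0
	.err
	.endif

... i.e. reject an annotated function whose bounds collapse to zero size.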

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 2/7] linkage: add SYM_{ENTRY,START,END}_AT()
  2022-01-25 11:31   ` Mark Rutland
@ 2022-02-10 14:52     ` Mark Rutland
  -1 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2022-02-10 14:52 UTC (permalink / raw)
  To: linux-kernel, Nick Desaulniers, Nathan Chancellor
  Cc: acme, ardb, bp, broonie, catalin.marinas, dave.hansen, jpoimboe,
	jslaby, linux-arm-kernel, linux, mingo, peterz, tglx, will

[adding clang folk]

Nick, Nathan, I have a couple of questions for you below.

On Tue, Jan 25, 2022 at 11:31:55AM +0000, Mark Rutland wrote:
> Currently, the SYM_{ENTRY,START,END}() helpers define symbols in terms
> of the current position within the section. In subsequent patches we'll
> need to define symbols after moving this position.
> 
> This patch splits the core out of SYM_{ENTRY,START,END}() into
> SYM_{ENTRY,START,END}_AT() macros which take a location argument,
> with SYM_{ENTRY,START,END}() passing the current position.
> 
> There should be no functional change as a result of this patch.

Unfortunately, it turns out clang doesn't like this:

| [mark@lakrids:~/src/linux]% usellvm 13.0.0 make ARCH=arm LLVM=1 -s omap1_defconfig
| [mark@lakrids:~/src/linux]% usellvm 13.0.0 make ARCH=arm LLVM=1 -s -j50 Image     
| arch/arm/mach-omap1/ams-delta-fiq-handler.S:272:5: error: expected absolute expression
| .if (qwerty_fiqin_end - qwerty_fiqin_start) > (0x200 - 0x1c)
|     ^
| arch/arm/mach-omap1/ams-delta-fiq-handler.S:273:2: error: .err encountered
|  .err
|  ^
| make[1]: *** [scripts/Makefile.build:389: arch/arm/mach-omap1/ams-delta-fiq-handler.o] Error 1
| make[1]: *** Waiting for unfinished jobs....
| make: *** [Makefile:1831: arch/arm/mach-omap1] Error 2
| make: *** Waiting for unfinished jobs....
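
The failing check reduces to the test-symbol.S case below: with this patch the
qwerty_fiqin_* bounds end up defined via .set rather than as plain labels,
i.e. roughly (a reduced sketch, not the file's actual contents):

	.globl	qwerty_fiqin_start
	.set	qwerty_fiqin_start, .
	nop				@ FIQ handler body elided in this sketch
	.globl	qwerty_fiqin_end
	.set	qwerty_fiqin_end, .

	.if (qwerty_fiqin_end - qwerty_fiqin_start) > (0x200 - 0x1c)
	.err
	.endif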

Both GCC and clang are happy to treat labels as constant expressions:

| [mark@lakrids:~/asm-test]% cat test-label.S                                
|         .text
| 
| start:
|         nop
| end:
| 
|         .if (end - start) == 0
|         .err
|         .endif
| 
| [mark@lakrids:~/asm-test]% usekorg 11.1.0 aarch64-linux-gcc -c test-label.S                                      
| [mark@lakrids:~/asm-test]% usellvm 13.0.0 clang --target=aarch64-linux -c test-label.S

... but only GCC is happy to treat symbol definitions as constants:

| [mark@lakrids:~/asm-test]% cat test-symbol.S 
|         .text
| 
| .set start, .;
|         nop
| .set end, .;
| 
|         .if (end - start) == 0
|         .err
|         .endif
| 
| [mark@lakrids:~/asm-test]% usekorg 11.1.0 aarch64-linux-gcc -c test-symbol.S          
| [mark@lakrids:~/asm-test]% usellvm 13.0.0 clang --target=aarch64-linux -c test-symbol.S
| test-symbol.S:7:6: error: expected absolute expression
|  .if (end - start) == 0
|      ^
| test-symbol.S:8:2: error: .err encountered
|  .err
|  ^

This is obviously a behavioural difference, but I'm not sure whether it's
intentional, or just an artifact of the differing implementations of GNU as and
LLVM's integrated assembler. Nick, Nathan, any thoughts on that?

Does clang have any mechanism other than labels to define location constants
that can be used as absolute expressions? e.g. is there any mechanism to alias
a label which results in the alias also being a constant?

Thanks,
Mark.

> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Ard Biesheuvel <ardb@kernel.org>
> Acked-by: Mark Brown <broonie@kernel.org>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Jiri Slaby <jslaby@suse.cz>
> Cc: Josh Poimboeuf <jpoimboe@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> ---
>  include/linux/linkage.h | 28 +++++++++++++++++++++++-----
>  1 file changed, 23 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/linkage.h b/include/linux/linkage.h
> index dbf8506decca0..d87c2acda2540 100644
> --- a/include/linux/linkage.h
> +++ b/include/linux/linkage.h
> @@ -147,25 +147,43 @@
>  
>  /* === generic annotations === */
>  
> +#ifndef SYM_ENTRY_AT
> +#define SYM_ENTRY_AT(name, location, linkage)		\
> +	linkage(name) ASM_NL				\
> +	.set name, location ASM_NL
> +#endif
> +
>  /* SYM_ENTRY -- use only if you have to for non-paired symbols */
>  #ifndef SYM_ENTRY
>  #define SYM_ENTRY(name, linkage, align...)		\
> -	linkage(name) ASM_NL				\
>  	align ASM_NL					\
> -	name:
> +	SYM_ENTRY_AT(name, ., linkage)
> +#endif
> +
> +/* SYM_START_AT -- use only if you have to */
> +#ifndef SYM_START_AT
> +#define SYM_START_AT(name, location, linkage)		\
> +	SYM_ENTRY_AT(name, location, linkage)
>  #endif
>  
>  /* SYM_START -- use only if you have to */
>  #ifndef SYM_START
>  #define SYM_START(name, linkage, align...)		\
> -	SYM_ENTRY(name, linkage, align)
> +	align ASM_NL					\
> +	SYM_START_AT(name, ., linkage)
> +#endif
> +
> +/* SYM_END_AT -- use only if you have to */
> +#ifndef SYM_END_AT
> +#define SYM_END_AT(name, location, sym_type)		\
> +	.type name sym_type ASM_NL			\
> +	.size name, location-name ASM_NL
>  #endif
>  
>  /* SYM_END -- use only if you have to */
>  #ifndef SYM_END
>  #define SYM_END(name, sym_type)				\
> -	.type name sym_type ASM_NL			\
> -	.size name, .-name
> +	SYM_END_AT(name, ., sym_type)
>  #endif
>  
>  /* === code annotations === */
> -- 
> 2.30.2
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 2/7] linkage: add SYM_{ENTRY,START,END}_AT()
  2022-02-10 14:52     ` Mark Rutland
@ 2022-02-11  1:20       ` Nick Desaulniers
  -1 siblings, 0 replies; 32+ messages in thread
From: Nick Desaulniers @ 2022-02-11  1:20 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-kernel, Nathan Chancellor, acme, ardb, bp, broonie,
	catalin.marinas, dave.hansen, jpoimboe, jslaby, linux-arm-kernel,
	linux, mingo, peterz, tglx, will, llvm, James Y Knight

On Thu, Feb 10, 2022 at 6:52 AM Mark Rutland <mark.rutland@arm.com> wrote:
>
> Both GCC and clang are happy to treat labels as constant expressions:
>
> | [mark@lakrids:~/asm-test]% cat test-label.S
> |         .text
> |
> | start:
> |         nop
> | end:
> |
> |         .if (end - start) == 0
> |         .err
> |         .endif
> |
> | [mark@lakrids:~/asm-test]% usekorg 11.1.0 aarch64-linux-gcc -c test-label.S
> | [mark@lakrids:~/asm-test]% usellvm 13.0.0 clang --target=aarch64-linux -c test-label.S
>
> ... but only GCC is happy to treat symbol definitions as constants:
>
> | [mark@lakrids:~/asm-test]% cat test-symbol.S
> |         .text
> |
> | .set start, .;
> |         nop
> | .set end, .;
> |
> |         .if (end - start) == 0
> |         .err
> |         .endif
> |
> | [mark@lakrids:~/asm-test]% usekorg 11.1.0 aarch64-linux-gcc -c test-symbol.S
> | [mark@lakrids:~/asm-test]% usellvm 13.0.0 clang --target=aarch64-linux -c test-symbol.S
> | test-symbol.S:7:6: error: expected absolute expression
> |  .if (end - start) == 0
> |      ^
> | test-symbol.S:8:2: error: .err encountered
> |  .err
> |  ^
>
> This is obviously a behavioural difference, but I'm not sure whether it's
> intentional, or just an artifact of the differing implementation of GNU as and
> LLVM's integrated assembler. Nich, Nathan, any thoughts on that?
>
> Does clang have any mechanism other than labels to define location constants
> that can be used as absolute expressions? e.g. is there any mechanism to alias
> a label which results in the alias also being a constant?

s/constant/absolute/

Nothing off the top of my head comes to mind as a substitute that will
work as expected today.

I've filed https://github.com/llvm/llvm-project/issues/53728 to
discuss more with folks that understand our AsmParser better.

From what I can tell, our AsmParser is falling down because the
operands of the binary expression themselves are not considered
absolute.

I doubt we will be able to handle the general case of symbol
differencing; the symbol reference operands could refer to symbols in
different sections that haven't been laid out yet (or whose order
could be shuffled by the linker).  But if they're in the same section
(or "fragment" of a section), we might be able to special case this.

For the expression

> .if (qwerty_fiqin_end - qwerty_fiqin_start) > (0x200 - 0x1c)

can you use local labels (`.L` prefix) rather than symbolic
references? or is there a risk of them not being unique per TU?
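
i.e. something like (a hypothetical sketch; names invented):

	.Lqwerty_fiqin_start:
		nop				@ handler body elided
	.Lqwerty_fiqin_end:

	.if (.Lqwerty_fiqin_end - .Lqwerty_fiqin_start) > (0x200 - 0x1c)
	.err
	.endif

That's the test-label.S case above, which both assemblers fold to an absolute
difference within a single section.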
-- 
Thanks,
~Nick Desaulniers

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 2/7] linkage: add SYM_{ENTRY,START,END}_AT()
  2022-02-11  1:20       ` Nick Desaulniers
@ 2022-02-11 11:32         ` Mark Rutland
  -1 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2022-02-11 11:32 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: linux-kernel, Nathan Chancellor, acme, ardb, bp, broonie,
	catalin.marinas, dave.hansen, jpoimboe, jslaby, linux-arm-kernel,
	linux, mingo, peterz, tglx, will, llvm, James Y Knight

On Thu, Feb 10, 2022 at 05:20:10PM -0800, Nick Desaulniers wrote:
> On Thu, Feb 10, 2022 at 6:52 AM Mark Rutland <mark.rutland@arm.com> wrote:
> >
> > Both GCC and clang are happy to treat labels as constant expressions:
> >
> > | [mark@lakrids:~/asm-test]% cat test-label.S
> > |         .text
> > |
> > | start:
> > |         nop
> > | end:
> > |
> > |         .if (end - start) == 0
> > |         .err
> > |         .endif
> > |
> > | [mark@lakrids:~/asm-test]% usekorg 11.1.0 aarch64-linux-gcc -c test-label.S
> > | [mark@lakrids:~/asm-test]% usellvm 13.0.0 clang --target=aarch64-linux -c test-label.S
> >
> > ... but only GCC is happy to treat symbol definitions as constants:
> >
> > | [mark@lakrids:~/asm-test]% cat test-symbol.S
> > |         .text
> > |
> > | .set start, .;
> > |         nop
> > | .set end, .;
> > |
> > |         .if (end - start) == 0
> > |         .err
> > |         .endif
> > |
> > | [mark@lakrids:~/asm-test]% usekorg 11.1.0 aarch64-linux-gcc -c test-symbol.S
> > | [mark@lakrids:~/asm-test]% usellvm 13.0.0 clang --target=aarch64-linux -c test-symbol.S
> > | test-symbol.S:7:6: error: expected absolute expression
> > |  .if (end - start) == 0
> > |      ^
> > | test-symbol.S:8:2: error: .err encountered
> > |  .err
> > |  ^
> >
> > This is obviously a behavioural difference, but I'm not sure whether it's
> > intentional, or just an artifact of the differing implementation of GNU as and
> > LLVM's integrated assembler. Nich, Nathan, any thoughts on that?
> >
> > Does clang have any mechanism other than labels to define location constants
> > that can be used as absolute expressions? e.g. is there any mechanism to alias
> > a label which results in the alias also being a constant?
> 
> s/constant/absolute/

Sorry, yes. I wasn't clear on the terminology when I wrote this, and I realise
now what I said was a bit confused.

IIUC the symbols themselves are relocatable (rather than absolute) whether
they're labels or created via `.set`, since the base of the section hasn't been
set yet. The difference between the two *should* be absolute (since they're
both relocatable relative to the same base), and LLVM can figure that out for
two labels, but not when either is created via `.set`, so it seems some
information is tracked differently for labels and other symbols?

I note LLVM *can* treat the result of a `.set` as absolute, e.g. if I do:

	.set sym_offset, (label_end - label_start)
	.if sym_offset == 0
	.err
	.endif

... LLVM treats `sym_offset` as an absolute value.

> Nothing off the top of my head comes to mind as a substitute that will
> work as expected today.
> 
> I've filed https://github.com/llvm/llvm-project/issues/53728 to
> discuss more with folks that understand our AsmParser better.

Thanks!

> From what I can tell, our AsmParser is falling down because the
> operands of the binary expression themselves are not considered
> absolute.

IIUC even in the label case the operands aren't absolute, but rather that
they're relocatable relative to the same base, and hence the expression can be
evaluated to be absolute (since the base gets cancelled out, removing the
relocation).

So is there something that gets tracked differently for labels and other
symbols?

> I doubt we will be able to handle the general case of symbol
> differencing; the symbol reference operands could refer to symbols in
> different sections that haven't been laid out yet (or whose order
> could be shuffled by the linker).  But if they're in the same section
> (or "fragment" of a section), we might be able to special case this.

Sure, I think the only case that needs to work (or can work conceptually) is
when they're in the same section (or fragment thereof), and that's all the GNU
assembler supports.

> For the expression
> 
> > .if (qwerty_fiqin_end - qwerty_fiqin_start) > (0x200 - 0x1c)
> 
> can you use local labels (`.L` prefix) rather than symbolic
> references? or is there a risk of them not being unique per TU?

For the problem in this patch I might be able to do something of that shape,
but I'll need to factor the SYM_*() helpers differently so that I can use
labels for the primary definition.

I can't do that for the aliases, since I don't know the set of aliases until
after the primary definition is created. Since there's no label aliasing
mechanism, I must use symbol references. However, I think I can propagate the
size by calculating alongside the primary definition, e.g.

	primary_start:
		nop
	primary_end:
	.set primary_size, (primary_end - primary_start)

	.set alias_start, primary_start
	.set alias_end,   primary_end
	.set alias_size,  primary_size

... so I'll have a go at that.

However, that still means I can't treat aliases as entirely equivalent to the
primary definition, e.g.

	.if (alias_end - alias_start) == 0
	.err
	.endif

... and there might be more things of that sort of shape, so it'd be good if
LLVM could handle this for symbol references (in the same section or fragment
thereof), so that we have the chance to use that in future.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 2/7] linkage: add SYM_{ENTRY,START,END}_AT()
  2022-02-11 11:32         ` Mark Rutland
@ 2022-02-11 13:24           ` Mark Rutland
  -1 siblings, 0 replies; 32+ messages in thread
From: Mark Rutland @ 2022-02-11 13:24 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: linux-kernel, Nathan Chancellor, acme, ardb, bp, broonie,
	catalin.marinas, dave.hansen, jpoimboe, jslaby, linux-arm-kernel,
	linux, mingo, peterz, tglx, will, llvm, James Y Knight

On Fri, Feb 11, 2022 at 11:32:27AM +0000, Mark Rutland wrote:
> On Thu, Feb 10, 2022 at 05:20:10PM -0800, Nick Desaulniers wrote:
> > On Thu, Feb 10, 2022 at 6:52 AM Mark Rutland <mark.rutland@arm.com> wrote:
> > For the expression
> > 
> > > .if (qwerty_fiqin_end - qwerty_fiqin_start) > (0x200 - 0x1c)
> > 
> > can you use local labels (`.L` prefix) rather than symbolic
> > references? or is there a risk of them not being unique per TU?
>
> For the problem in this patch I might be able to do something of that shape,
> but I'll need to factor the SYM_*() helpers differently so that I can use
> labels for the primary definition.

FWIW, that refactoring turned out to be easier than I expected, and I actually
prefer the new structure.

I've ended up dropping this patch, and in the next patch I leave
SYM_FUNC_START() unchanged, but calculate the size in SYM_FUNC_END() and
propagate that pre-calculated size to all the aliases:

diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index dbf8506decca..027ab1618bf8 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -165,7 +165,18 @@
 #ifndef SYM_END
 #define SYM_END(name, sym_type)                                \
        .type name sym_type ASM_NL                      \
-       .size name, .-name
+       .set .L__sym_size_##name, .-name ASM_NL         \
+       .size name, .L__sym_size_##name
+#endif
+
+/* SYM_ALIAS -- use only if you have to */
+#ifndef SYM_ALIAS
+#define SYM_ALIAS(alias, name, sym_type, linkage)                      \
+       linkage(alias) ASM_NL                                           \
+       .set alias, name ASM_NL                                         \
+       .type alias sym_type ASM_NL                                     \
+       .set .L__sym_size_##alias, .L__sym_size_##name ASM_NL           \
+       .size alias, .L__sym_size_##alias
 #endif
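
For completeness, one way the patch-3 SYM_FUNC_ALIAS() helper might be layered
on SYM_ALIAS() -- a sketch, not part of the diff above -- keeping the
cover-letter usage unchanged:

	#define SYM_FUNC_ALIAS(alias, name)				\
		SYM_ALIAS(alias, name, SYM_T_FUNC, SYM_L_GLOBAL)

	SYM_FUNC_START(func)
		... asm insns ...
	SYM_FUNC_END(func)

	SYM_FUNC_ALIAS(alias1, func)
	EXPORT_SYMBOL(alias1)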

I still think that in future we *might* want to be able to use two non-label
symbols (in the same section/fragment/etc) to generate an absolute expression,
but that's not a blocker for this series, and for the common cases (e.g.
checking size) we can probably work around that as above.

Thanks for looking at this!

Mark.

^ permalink raw reply related	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2022-02-11 13:26 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-01-25 11:31 [PATCH v2 0/7] linkage: better symbol aliasing Mark Rutland
2022-01-25 11:31 ` Mark Rutland
2022-01-25 11:31 ` [PATCH v2 1/7] arm: lib: remove leading whitespace in bitop macro Mark Rutland
2022-01-25 11:31   ` Mark Rutland
2022-01-25 11:31 ` [PATCH v2 2/7] linkage: add SYM_{ENTRY,START,END}_AT() Mark Rutland
2022-01-25 11:31   ` Mark Rutland
2022-02-10 14:52   ` Mark Rutland
2022-02-10 14:52     ` Mark Rutland
2022-02-11  1:20     ` Nick Desaulniers
2022-02-11  1:20       ` Nick Desaulniers
2022-02-11 11:32       ` Mark Rutland
2022-02-11 11:32         ` Mark Rutland
2022-02-11 13:24         ` Mark Rutland
2022-02-11 13:24           ` Mark Rutland
2022-01-25 11:31 ` [PATCH v2 3/7] linkage: add SYM_FUNC_ALIAS{,_LOCAL,_WEAK}() Mark Rutland
2022-01-25 11:31   ` Mark Rutland
2022-01-25 11:31 ` [PATCH v2 4/7] arm64: clean up symbol aliasing Mark Rutland
2022-01-25 11:31   ` Mark Rutland
2022-01-25 11:31 ` [PATCH v2 5/7] x86: " Mark Rutland
2022-01-25 11:31   ` Mark Rutland
2022-01-25 11:31 ` [PATCH v2 6/7] linkage: remove SYM_FUNC_{START,END}_ALIAS() Mark Rutland
2022-01-25 11:31   ` Mark Rutland
2022-01-25 11:32 ` [PATCH v2 7/7] tools: update x86 string routines Mark Rutland
2022-01-25 11:32   ` Mark Rutland
2022-01-25 15:28 ` [PATCH v2 0/7] linkage: better symbol aliasing Ard Biesheuvel
2022-01-25 15:28   ` Ard Biesheuvel
2022-01-25 15:45   ` Mark Rutland
2022-01-25 15:45     ` Mark Rutland
2022-01-25 15:49     ` Ard Biesheuvel
2022-01-25 15:49       ` Ard Biesheuvel
2022-01-25 15:58       ` Mark Rutland
2022-01-25 15:58         ` Mark Rutland
