* [PATCH v6 0/8] add support for relative references in special sections
@ 2017-12-27  8:50 ` Ard Biesheuvel
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ard Biesheuvel, H. Peter Anvin, Ralf Baechle, Arnd Bergmann,
	Heiko Carstens, Kees Cook, Will Deacon, Michael Ellerman,
	Thomas Garnier, Thomas Gleixner, Serge E. Hallyn, Bjorn Helgaas,
	Benjamin Herrenschmidt, Russell King, Paul Mackerras,
	Catalin Marinas, David S. Miller, Petr Mladek, Ingo Molnar,
	James Morris, Andrew Morton, Nicolas Pitre, Josh Poimboeuf,
	Steven Rostedt, Martin Schwidefsky, Sergey Senozhatsky,
	Linus Torvalds, Jessica Yu, linux-arm-kernel, linux-mips,
	linuxppc-dev, linux-s390, sparclinux, x86

This adds support for emitting special sections such as initcall arrays,
PCI fixups and tracepoints as relative references rather than absolute
references. This reduces the size of these sections by 50% on 64-bit
architectures, but more importantly, it removes the need to carry
relocation metadata for these sections in relocatable kernels (e.g., for
KASLR) that need to fix up these absolute references at boot time. On
arm64, this reduces the vmlinux footprint of such a reference by 8x
(an 8-byte absolute reference + a 24-byte RELA entry vs a 4-byte
relative reference).
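
To illustrate the difference (a minimal sketch, not the actual kernel
macros; the ".example" section, "rel_ref" label and "deref_rel" helper
are made-up names for this example):

  int some_symbol;

  /* absolute: 8 bytes on 64-bit, plus a 24-byte RELA entry on KASLR kernels */
  static int *abs_ref = &some_symbol;

  /* place-relative: 4 bytes, no runtime relocation needed */
  asm("	.section \".example\", \"a\"	\n"
      "rel_ref:				\n"
      "	.long	some_symbol - .		\n"
      "	.previous			\n");

  /* turn the stored 32-bit offset back into a pointer at runtime */
  static inline void *deref_rel(const signed int *offset)
  {
  	return (void *)((unsigned long)offset + *offset);
  }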

Patch #2 was sent out before as a single patch; this series supersedes
that submission. This version makes relative ksymtab entries dependent
on the new Kconfig symbol HAVE_ARCH_PREL32_RELOCATIONS, rather than
trying to infer, from kbuild test robot replies, which architectures
should be blacklisted.

Patch #1 introduces the new Kconfig symbol HAVE_ARCH_PREL32_RELOCATIONS,
and sets it for the main architectures that are expected to benefit the
most from this feature, i.e., 64-bit architectures or ones that use
runtime relocations.

Patches #3 - #5 implement relative references for initcalls, PCI fixups
and tracepoints, respectively, all of which produce sections containing
on the order of 1000 entries on an arm64 defconfig kernel with tracing
enabled. This means we save about 28 KB of vmlinux space with each of
these patches.
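(Roughly: each entry shrinks from an 8-byte pointer plus a 24-byte RELA
entry to a 4-byte offset, i.e., 28 bytes per entry, so ~1000 entries
come to ~28 KB per section.)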

Patches #6 - #8 were added in v5, and implement relative references in
jump tables for arm64 and x86. On arm64, this results in significant
space savings (650+ KB on a typical distro kernel). On x86, the savings
are not as impressive, but still worthwhile. (Note that these patches
do not rely on CONFIG_HAVE_ARCH_PREL32_RELOCATIONS, given that the
inline asm that is emitted is already per-arch.)
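
The gist of the jump table conversion, as a simplified sketch (the
struct and helper below are illustrative only, not the exact
definitions used in #6 - #8):

  /* absolute form: three pointer-sized fields, 24 bytes per entry on 64-bit */
  struct jump_entry_abs {
  	unsigned long code;	/* address of the patched branch/NOP site */
  	unsigned long target;	/* branch target when the key is enabled */
  	unsigned long key;	/* address of the associated static_key */
  };

  /* relative form: three 32-bit offsets, 12 bytes per entry, no RELA needed */
  struct jump_entry_rel {
  	int code;
  	int target;
  	int key;
  };

  static inline unsigned long jump_entry_code(const struct jump_entry_rel *e)
  {
  	return (unsigned long)&e->code + e->code;
  }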

For the arm64 kernel, all patches combined reduce the memory footprint of
vmlinux by about 1.3 MB (using a config copied from Ubuntu that has KASLR
enabled), of which ~1 MB is the size reduction of the RELA section in .init,
and the remaining ~300 KB comes from .text/.data.

Branch:
git://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git relative-special-sections-v6

Changes since v5:
- add missing jump_label prototypes to s390 jump_label.h (#6)
- fix inverted condition in call to jump_entry_is_module_init() (#6)

Changes since v4:
- add patches to convert x86 and arm64 to use relative references for jump
  tables (#6 - #8)
- rename PCI patch and add Bjorn's ack (#4)
- rebase onto v4.15-rc5

Changes since v3:
- fix module unload issue in patch #5 reported by Jessica, by reusing the
  updated routine for_each_tracepoint_range() for the quiescent check at
  module unload time; this requires this routine to be moved before
  tracepoint_module_going() in kernel/tracepoint.c
- add Jessica's ack to #2
- rebase onto v4.14-rc1

Changes since v2:
- Revert my slightly misguided attempt to appease checkpatch, which resulted
  in needless churn and worse code. This v3 is based on v1 with a few tweaks
  that were actually reasonable checkpatch warnings: unnecessary braces (as
  pointed out by Ingo) and other minor whitespace misdemeanors.

Changes since v1:
- Remove checkpatch errors to the extent feasible: in some cases, this
  involves moving extern declarations into C files, and switching to
  struct definitions rather than typedefs. Some errors are impossible
  to fix: please find the remaining ones after the diffstat.
- Use 'int' instead of 'signed int' for the various offset fields: there
  is no ambiguity between architectures regarding its signedness (unlike
  'char')
- Refactor the different patches to be more uniform in the way they define
  the section entry type and accessors in the .h file, and avoid the need to
  add #ifdefs to the C code.

Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Thomas Garnier <thgarnie@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Morris <james.l.morris@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jessica Yu <jeyu@kernel.org>

Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: x86@kernel.org

Ard Biesheuvel (8):
  arch: enable relative relocations for arm64, power, x86 and s390
  module: use relative references for __ksymtab entries
  init: allow initcall tables to be emitted using relative references
  PCI: Add support for relative addressing in quirk tables
  kernel: tracepoints: add support for relative references
  kernel/jump_label: abstract jump_entry member accessors
  arm64/kernel: jump_label: use relative references
  x86/kernel: jump_table: use relative references

 arch/Kconfig                          | 10 ++++
 arch/arm/include/asm/jump_label.h     | 27 +++++++++
 arch/arm64/Kconfig                    |  1 +
 arch/arm64/include/asm/jump_label.h   | 48 +++++++++++++---
 arch/arm64/kernel/jump_label.c        | 22 +++++++-
 arch/arm64/kernel/vmlinux.lds.S       |  2 +-
 arch/mips/include/asm/jump_label.h    | 27 +++++++++
 arch/powerpc/Kconfig                  |  1 +
 arch/powerpc/include/asm/jump_label.h | 27 +++++++++
 arch/s390/Kconfig                     |  1 +
 arch/s390/include/asm/jump_label.h    | 20 +++++++
 arch/sparc/include/asm/jump_label.h   | 27 +++++++++
 arch/tile/include/asm/jump_label.h    | 27 +++++++++
 arch/x86/Kconfig                      |  1 +
 arch/x86/include/asm/Kbuild           |  1 +
 arch/x86/include/asm/export.h         |  5 --
 arch/x86/include/asm/jump_label.h     | 56 +++++++++++++++----
 arch/x86/kernel/jump_label.c          | 59 ++++++++++++++------
 drivers/pci/quirks.c                  | 13 ++++-
 include/asm-generic/export.h          | 12 +++-
 include/linux/compiler.h              | 11 ++++
 include/linux/export.h                | 46 +++++++++++----
 include/linux/init.h                  | 44 +++++++++++----
 include/linux/pci.h                   | 20 +++++++
 include/linux/tracepoint.h            | 19 +++++--
 init/main.c                           | 32 +++++------
 kernel/jump_label.c                   | 38 ++++++-------
 kernel/module.c                       | 33 +++++++++--
 kernel/printk/printk.c                |  4 +-
 kernel/tracepoint.c                   | 50 +++++++++--------
 security/security.c                   |  4 +-
 tools/objtool/special.c               |  4 +-
 32 files changed, 544 insertions(+), 148 deletions(-)
 delete mode 100644 arch/x86/include/asm/export.h

-- 
2.11.0

* [PATCH v6 1/8] arch: enable relative relocations for arm64, power, x86 and s390
@ 2017-12-27  8:50   ` Ard Biesheuvel
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ard Biesheuvel, H. Peter Anvin, Ralf Baechle, Arnd Bergmann,
	Heiko Carstens, Kees Cook, Will Deacon, Michael Ellerman,
	Thomas Garnier, Thomas Gleixner, Serge E. Hallyn, Bjorn Helgaas,
	Benjamin Herrenschmidt, Russell King, Paul Mackerras,
	Catalin Marinas, David S. Miller, Petr Mladek, Ingo Molnar,
	James Morris, Andrew Morton, Nicolas Pitre, Josh Poimboeuf,
	Steven Rostedt, Martin Schwidefsky, Sergey Senozhatsky,
	Linus Torvalds, Jessica Yu, linux-arm-kernel, linux-mips,
	linuxppc-dev, linux-s390, sparclinux, x86

Before updating certain subsystems to use place-relative 32-bit
relocations in special sections, to save space and reduce the
number of absolute relocations that need to be processed at runtime
by relocatable kernels, introduce the Kconfig symbol and define it
for some architectures that should be able to support and benefit
from it.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/Kconfig                    | 10 ++++++++++
 arch/arm64/Kconfig              |  1 +
 arch/arm64/kernel/vmlinux.lds.S |  2 +-
 arch/powerpc/Kconfig            |  1 +
 arch/s390/Kconfig               |  1 +
 arch/x86/Kconfig                |  1 +
 6 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 400b9e1b2f27..dbc036a7bd1b 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -959,4 +959,14 @@ config REFCOUNT_FULL
 	  against various use-after-free conditions that can be used in
 	  security flaw exploits.
 
+config HAVE_ARCH_PREL32_RELOCATIONS
+	bool
+	help
+	  May be selected by an architecture if it supports place-relative
+	  32-bit relocations, both in the toolchain and in the module loader,
+	  in which case relative references can be used in special sections
+	  for PCI fixup, initcalls etc which are only half the size on 64 bit
+	  architectures, and don't require runtime relocation on relocatable
+	  kernels.
+
 source "kernel/gcov/Kconfig"
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c9a7e9e1414f..66c7b9ab2a3d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -89,6 +89,7 @@ config ARM64
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
+	select HAVE_ARCH_PREL32_RELOCATIONS
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 7da3e5c366a0..49ae5b43fe2b 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -156,7 +156,7 @@ SECTIONS
 		CON_INITCALL
 		SECURITY_INITCALL
 		INIT_RAM_FS
-		*(.init.rodata.* .init.bss)	/* from the EFI stub */
+		*(.init.rodata.* .init.bss .init.discard.*)	/* EFI stub */
 	}
 	.exit.data : {
 		ARM_EXIT_KEEP(EXIT_DATA)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index c51e6ce42e7a..e172478e2ae7 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -177,6 +177,7 @@ config PPC
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
+	select HAVE_ARCH_PREL32_RELOCATIONS
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
 	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 829c67986db7..ed29d1ebecd9 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -129,6 +129,7 @@ config S390
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_JUMP_LABEL
 	select CPU_NO_EFFICIENT_FFS if !HAVE_MARCH_Z9_109_FEATURES
+	select HAVE_ARCH_PREL32_RELOCATIONS
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_SOFT_DIRTY
 	select HAVE_ARCH_TRACEHOOK
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d4fc98c50378..9f2bb853aedb 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -115,6 +115,7 @@ config X86
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
 	select HAVE_ARCH_COMPAT_MMAP_BASES	if MMU && COMPAT
+	select HAVE_ARCH_PREL32_RELOCATIONS
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
-- 
2.11.0

* [PATCH v6 2/8] module: use relative references for __ksymtab entries
@ 2017-12-27  8:50   ` Ard Biesheuvel
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ard Biesheuvel, H. Peter Anvin, Ralf Baechle, Arnd Bergmann,
	Heiko Carstens, Kees Cook, Will Deacon, Michael Ellerman,
	Thomas Garnier, Thomas Gleixner, Serge E. Hallyn, Bjorn Helgaas,
	Benjamin Herrenschmidt, Russell King, Paul Mackerras,
	Catalin Marinas, David S. Miller, Petr Mladek, Ingo Molnar,
	James Morris, Andrew Morton, Nicolas Pitre, Josh Poimboeuf,
	Steven Rostedt, Martin Schwidefsky, Sergey Senozhatsky,
	Linus Torvalds, Jessica Yu, linux-arm-kernel, linux-mips,
	linuxppc-dev, linux-s390, sparclinux, x86, Ingo Molnar

An ordinary arm64 defconfig build has ~64 KB worth of __ksymtab
entries, each consisting of two 64-bit fields containing absolute
references, to the symbol itself and to a char array containing
its name, respectively.

When we build the same configuration with KASLR enabled, we end
up with an additional ~192 KB of relocations in the .init section,
i.e., one 24 byte entry for each absolute reference, which all need
to be processed at boot time.
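(Roughly: ~64 KB of 16-byte entries is ~4000 exported symbols, each
needing two 24-byte RELA entries, which works out to ~192 KB.)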

Given that the struct kernel_symbol that describes each entry is
completely local to module.c (except for the references emitted
by EXPORT_SYMBOL() itself), we can easily modify it to contain
two 32-bit relative references instead. This reduces the size of
the __ksymtab section by 50% for all 64-bit architectures, and
gets rid of the runtime relocations entirely for architectures
implementing KASLR, either via standard PIE linking (arm64) or
using custom host tools (x86).

Note that the binary search involving __ksymtab contents relies
on each section being sorted by symbol name. This is implemented
based on the input section names, not the names in the ksymtab
entries, so this patch does not interfere with that.

Given that the use of place-relative relocations requires support
both in the toolchain and in the module loader, we cannot enable
this feature for all architectures. So make it dependent on whether
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS is defined.

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Garnier <thgarnie@google.com>
Cc: Nicolas Pitre <nico@linaro.org>
Acked-by: Jessica Yu <jeyu@kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/x86/include/asm/Kbuild   |  1 +
 arch/x86/include/asm/export.h |  5 ---
 include/asm-generic/export.h  | 12 ++++-
 include/linux/compiler.h      | 11 +++++
 include/linux/export.h        | 46 +++++++++++++++-----
 kernel/module.c               | 33 +++++++++++---
 6 files changed, 84 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/Kbuild b/arch/x86/include/asm/Kbuild
index 5d6a53fd7521..3e8a88dcaa1d 100644
--- a/arch/x86/include/asm/Kbuild
+++ b/arch/x86/include/asm/Kbuild
@@ -9,5 +9,6 @@ generated-y += xen-hypercalls.h
 generic-y += clkdev.h
 generic-y += dma-contiguous.h
 generic-y += early_ioremap.h
+generic-y += export.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
diff --git a/arch/x86/include/asm/export.h b/arch/x86/include/asm/export.h
deleted file mode 100644
index 2a51d66689c5..000000000000
--- a/arch/x86/include/asm/export.h
+++ /dev/null
@@ -1,5 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifdef CONFIG_64BIT
-#define KSYM_ALIGN 16
-#endif
-#include <asm-generic/export.h>
diff --git a/include/asm-generic/export.h b/include/asm-generic/export.h
index 719db1968d81..97ce606459ae 100644
--- a/include/asm-generic/export.h
+++ b/include/asm-generic/export.h
@@ -5,12 +5,10 @@
 #define KSYM_FUNC(x) x
 #endif
 #ifdef CONFIG_64BIT
-#define __put .quad
 #ifndef KSYM_ALIGN
 #define KSYM_ALIGN 8
 #endif
 #else
-#define __put .long
 #ifndef KSYM_ALIGN
 #define KSYM_ALIGN 4
 #endif
@@ -25,6 +23,16 @@
 #define KSYM(name) name
 #endif
 
+.macro __put, val, name
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	.long	\val - ., \name - .
+#elif defined(CONFIG_64BIT)
+	.quad	\val, \name
+#else
+	.long	\val, \name
+#endif
+.endm
+
 /*
  * note on .section use: @progbits vs %progbits nastiness doesn't matter,
  * since we immediately emit into those sections anyway.
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 52e611ab9a6c..fe752d365334 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -327,4 +327,15 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
 	compiletime_assert(__native_word(t),				\
 		"Need native word sized stores/loads for atomicity.")
 
+/*
+ * Force the compiler to emit 'sym' as a symbol, so that we can reference
+ * it from inline assembler. Necessary in case 'sym' could be inlined
+ * otherwise, or eliminated entirely due to lack of references that are
+ * visible to the compiler.
+ */
+#define __ADDRESSABLE(sym) \
+	static void *__attribute__((section(".discard.text"), used))	\
+		__PASTE(__discard_##sym, __LINE__)(void)		\
+			{ return (void *)&sym; }			\
+
 #endif /* __LINUX_COMPILER_H */
diff --git a/include/linux/export.h b/include/linux/export.h
index 1a1dfdb2a5c6..5112d0c41512 100644
--- a/include/linux/export.h
+++ b/include/linux/export.h
@@ -24,12 +24,6 @@
 #define VMLINUX_SYMBOL_STR(x) __VMLINUX_SYMBOL_STR(x)
 
 #ifndef __ASSEMBLY__
-struct kernel_symbol
-{
-	unsigned long value;
-	const char *name;
-};
-
 #ifdef MODULE
 extern struct module __this_module;
 #define THIS_MODULE (&__this_module)
@@ -60,17 +54,47 @@ extern struct module __this_module;
 #define __CRC_SYMBOL(sym, sec)
 #endif
 
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+#include <linux/compiler.h>
+/*
+ * Emit the ksymtab entry as a pair of relative references: this reduces
+ * the size by half on 64-bit architectures, and eliminates the need for
+ * absolute relocations that require runtime processing on relocatable
+ * kernels.
+ */
+#define __KSYMTAB_ENTRY(sym, sec)					\
+	__ADDRESSABLE(sym)						\
+	asm("	.section \"___ksymtab" sec "+" #sym "\", \"a\"	\n"	\
+	    "	.balign	8					\n"	\
+	    VMLINUX_SYMBOL_STR(__ksymtab_##sym) ":		\n"	\
+	    "	.long "	VMLINUX_SYMBOL_STR(sym) "- .		\n"	\
+	    "	.long "	VMLINUX_SYMBOL_STR(__kstrtab_##sym) "- .\n"	\
+	    "	.previous					\n")
+
+struct kernel_symbol {
+	signed int value_offset;
+	signed int name_offset;
+};
+#else
+#define __KSYMTAB_ENTRY(sym, sec)					\
+	static const struct kernel_symbol __ksymtab_##sym		\
+	__attribute__((section("___ksymtab" sec "+" #sym), used))	\
+	= { (unsigned long)&sym, __kstrtab_##sym }
+
+struct kernel_symbol {
+	unsigned long value;
+	const char *name;
+};
+#endif
+
 /* For every exported symbol, place a struct in the __ksymtab section */
 #define ___EXPORT_SYMBOL(sym, sec)					\
 	extern typeof(sym) sym;						\
 	__CRC_SYMBOL(sym, sec)						\
 	static const char __kstrtab_##sym[]				\
-	__attribute__((section("__ksymtab_strings"), aligned(1)))	\
+	__attribute__((section("__ksymtab_strings"), used, aligned(1)))	\
 	= VMLINUX_SYMBOL_STR(sym);					\
-	static const struct kernel_symbol __ksymtab_##sym		\
-	__used								\
-	__attribute__((section("___ksymtab" sec "+" #sym), used))	\
-	= { (unsigned long)&sym, __kstrtab_##sym }
+	__KSYMTAB_ENTRY(sym, sec)
 
 #if defined(__KSYM_DEPS__)
 
diff --git a/kernel/module.c b/kernel/module.c
index dea01ac9cb74..d3a908ffc42c 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -549,12 +549,31 @@ static bool check_symbol(const struct symsearch *syms,
 	return true;
 }
 
+static unsigned long kernel_symbol_value(const struct kernel_symbol *sym)
+{
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	return (unsigned long)&sym->value_offset + sym->value_offset;
+#else
+	return sym->value;
+#endif
+}
+
+static const char *kernel_symbol_name(const struct kernel_symbol *sym)
+{
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	return (const char *)((unsigned long)&sym->name_offset +
+			      sym->name_offset);
+#else
+	return sym->name;
+#endif
+}
+
 static int cmp_name(const void *va, const void *vb)
 {
 	const char *a;
 	const struct kernel_symbol *b;
 	a = va; b = vb;
-	return strcmp(a, b->name);
+	return strcmp(a, kernel_symbol_name(b));
 }
 
 static bool find_symbol_in_section(const struct symsearch *syms,
@@ -2198,7 +2217,7 @@ void *__symbol_get(const char *symbol)
 		sym = NULL;
 	preempt_enable();
 
-	return sym ? (void *)sym->value : NULL;
+	return sym ? (void *)kernel_symbol_value(sym) : NULL;
 }
 EXPORT_SYMBOL_GPL(__symbol_get);
 
@@ -2228,10 +2247,12 @@ static int verify_export_symbols(struct module *mod)
 
 	for (i = 0; i < ARRAY_SIZE(arr); i++) {
 		for (s = arr[i].sym; s < arr[i].sym + arr[i].num; s++) {
-			if (find_symbol(s->name, &owner, NULL, true, false)) {
+			if (find_symbol(kernel_symbol_name(s), &owner, NULL,
+					true, false)) {
 				pr_err("%s: exports duplicate symbol %s"
 				       " (owned by %s)\n",
-				       mod->name, s->name, module_name(owner));
+				       mod->name, kernel_symbol_name(s),
+				       module_name(owner));
 				return -ENOEXEC;
 			}
 		}
@@ -2280,7 +2301,7 @@ static int simplify_symbols(struct module *mod, const struct load_info *info)
 			ksym = resolve_symbol_wait(mod, info, name);
 			/* Ok if resolved.  */
 			if (ksym && !IS_ERR(ksym)) {
-				sym[i].st_value = ksym->value;
+				sym[i].st_value = kernel_symbol_value(ksym);
 				break;
 			}
 
@@ -2540,7 +2561,7 @@ static int is_exported(const char *name, unsigned long value,
 		ks = lookup_symbol(name, __start___ksymtab, __stop___ksymtab);
 	else
 		ks = lookup_symbol(name, mod->syms, mod->syms + mod->num_syms);
-	return ks != NULL && ks->value == value;
+	return ks != NULL && kernel_symbol_value(ks) == value;
 }
 
 /* As per nm */
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH v6 2/8] module: use relative references for __ksymtab entries
@ 2017-12-27  8:50   ` Ard Biesheuvel
  0 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-arm-kernel

An ordinary arm64 defconfig build has ~64 KB worth of __ksymtab
entries, each consisting of two 64-bit fields containing absolute
references, to the symbol itself and to a char array containing
its name, respectively.

When we build the same configuration with KASLR enabled, we end
up with an additional ~192 KB of relocations in the .init section,
i.e., one 24 byte entry for each absolute reference, which all need
to be processed at boot time.

Given how the struct kernel_symbol that describes each entry is
completely local to module.c (except for the references emitted
by EXPORT_SYMBOL() itself), we can easily modify it to contain
two 32-bit relative references instead. This reduces the size of
the __ksymtab section by 50% for all 64-bit architectures, and
gets rid of the runtime relocations entirely for architectures
implementing KASLR, either via standard PIE linking (arm64) or
using custom host tools (x86).

Note that the binary search involving __ksymtab contents relies
on each section being sorted by symbol name. This is implemented
based on the input section names, not the names in the ksymtab
entries, so this patch does not interfere with that.

Given that the use of place-relative relocations requires support
both in the toolchain and in the module loader, we cannot enable
this feature for all architectures. So make it dependent on whether
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS is defined.

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Garnier <thgarnie@google.com>
Cc: Nicolas Pitre <nico@linaro.org>
Acked-by: Jessica Yu <jeyu@kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/x86/include/asm/Kbuild   |  1 +
 arch/x86/include/asm/export.h |  5 ---
 include/asm-generic/export.h  | 12 ++++-
 include/linux/compiler.h      | 11 +++++
 include/linux/export.h        | 46 +++++++++++++++-----
 kernel/module.c               | 33 +++++++++++---
 6 files changed, 84 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/Kbuild b/arch/x86/include/asm/Kbuild
index 5d6a53fd7521..3e8a88dcaa1d 100644
--- a/arch/x86/include/asm/Kbuild
+++ b/arch/x86/include/asm/Kbuild
@@ -9,5 +9,6 @@ generated-y += xen-hypercalls.h
 generic-y += clkdev.h
 generic-y += dma-contiguous.h
 generic-y += early_ioremap.h
+generic-y += export.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
diff --git a/arch/x86/include/asm/export.h b/arch/x86/include/asm/export.h
deleted file mode 100644
index 2a51d66689c5..000000000000
--- a/arch/x86/include/asm/export.h
+++ /dev/null
@@ -1,5 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifdef CONFIG_64BIT
-#define KSYM_ALIGN 16
-#endif
-#include <asm-generic/export.h>
diff --git a/include/asm-generic/export.h b/include/asm-generic/export.h
index 719db1968d81..97ce606459ae 100644
--- a/include/asm-generic/export.h
+++ b/include/asm-generic/export.h
@@ -5,12 +5,10 @@
 #define KSYM_FUNC(x) x
 #endif
 #ifdef CONFIG_64BIT
-#define __put .quad
 #ifndef KSYM_ALIGN
 #define KSYM_ALIGN 8
 #endif
 #else
-#define __put .long
 #ifndef KSYM_ALIGN
 #define KSYM_ALIGN 4
 #endif
@@ -25,6 +23,16 @@
 #define KSYM(name) name
 #endif
 
+.macro __put, val, name
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	.long	\val - ., \name - .
+#elif defined(CONFIG_64BIT)
+	.quad	\val, \name
+#else
+	.long	\val, \name
+#endif
+.endm
+
 /*
  * note on .section use: @progbits vs %progbits nastiness doesn't matter,
  * since we immediately emit into those sections anyway.
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 52e611ab9a6c..fe752d365334 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -327,4 +327,15 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
 	compiletime_assert(__native_word(t),				\
 		"Need native word sized stores/loads for atomicity.")
 
+/*
+ * Force the compiler to emit 'sym' as a symbol, so that we can reference
+ * it from inline assembler. Necessary in case 'sym' could be inlined
+ * otherwise, or eliminated entirely due to lack of references that are
+ * visibile to the compiler.
+ */
+#define __ADDRESSABLE(sym) \
+	static void *__attribute__((section(".discard.text"), used))	\
+		__PASTE(__discard_##sym, __LINE__)(void)		\
+			{ return (void *)&sym; }			\
+
 #endif /* __LINUX_COMPILER_H */
diff --git a/include/linux/export.h b/include/linux/export.h
index 1a1dfdb2a5c6..5112d0c41512 100644
--- a/include/linux/export.h
+++ b/include/linux/export.h
@@ -24,12 +24,6 @@
 #define VMLINUX_SYMBOL_STR(x) __VMLINUX_SYMBOL_STR(x)
 
 #ifndef __ASSEMBLY__
-struct kernel_symbol
-{
-	unsigned long value;
-	const char *name;
-};
-
 #ifdef MODULE
 extern struct module __this_module;
 #define THIS_MODULE (&__this_module)
@@ -60,17 +54,47 @@ extern struct module __this_module;
 #define __CRC_SYMBOL(sym, sec)
 #endif
 
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+#include <linux/compiler.h>
+/*
+ * Emit the ksymtab entry as a pair of relative references: this reduces
+ * the size by half on 64-bit architectures, and eliminates the need for
+ * absolute relocations that require runtime processing on relocatable
+ * kernels.
+ */
+#define __KSYMTAB_ENTRY(sym, sec)					\
+	__ADDRESSABLE(sym)						\
+	asm("	.section \"___ksymtab" sec "+" #sym "\", \"a\"	\n"	\
+	    "	.balign	8					\n"	\
+	    VMLINUX_SYMBOL_STR(__ksymtab_##sym) ":		\n"	\
+	    "	.long "	VMLINUX_SYMBOL_STR(sym) "- .		\n"	\
+	    "	.long "	VMLINUX_SYMBOL_STR(__kstrtab_##sym) "- .\n"	\
+	    "	.previous					\n")
+
+struct kernel_symbol {
+	signed int value_offset;
+	signed int name_offset;
+};
+#else
+#define __KSYMTAB_ENTRY(sym, sec)					\
+	static const struct kernel_symbol __ksymtab_##sym		\
+	__attribute__((section("___ksymtab" sec "+" #sym), used))	\
+	= { (unsigned long)&sym, __kstrtab_##sym }
+
+struct kernel_symbol {
+	unsigned long value;
+	const char *name;
+};
+#endif
+
 /* For every exported symbol, place a struct in the __ksymtab section */
 #define ___EXPORT_SYMBOL(sym, sec)					\
 	extern typeof(sym) sym;						\
 	__CRC_SYMBOL(sym, sec)						\
 	static const char __kstrtab_##sym[]				\
-	__attribute__((section("__ksymtab_strings"), aligned(1)))	\
+	__attribute__((section("__ksymtab_strings"), used, aligned(1)))	\
 	= VMLINUX_SYMBOL_STR(sym);					\
-	static const struct kernel_symbol __ksymtab_##sym		\
-	__used								\
-	__attribute__((section("___ksymtab" sec "+" #sym), used))	\
-	= { (unsigned long)&sym, __kstrtab_##sym }
+	__KSYMTAB_ENTRY(sym, sec)
 
 #if defined(__KSYM_DEPS__)
 
diff --git a/kernel/module.c b/kernel/module.c
index dea01ac9cb74..d3a908ffc42c 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -549,12 +549,31 @@ static bool check_symbol(const struct symsearch *syms,
 	return true;
 }
 
+static unsigned long kernel_symbol_value(const struct kernel_symbol *sym)
+{
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	return (unsigned long)&sym->value_offset + sym->value_offset;
+#else
+	return sym->value;
+#endif
+}
+
+static const char *kernel_symbol_name(const struct kernel_symbol *sym)
+{
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	return (const char *)((unsigned long)&sym->name_offset +
+			      sym->name_offset);
+#else
+	return sym->name;
+#endif
+}
+
 static int cmp_name(const void *va, const void *vb)
 {
 	const char *a;
 	const struct kernel_symbol *b;
 	a = va; b = vb;
-	return strcmp(a, b->name);
+	return strcmp(a, kernel_symbol_name(b));
 }
 
 static bool find_symbol_in_section(const struct symsearch *syms,
@@ -2198,7 +2217,7 @@ void *__symbol_get(const char *symbol)
 		sym = NULL;
 	preempt_enable();
 
-	return sym ? (void *)sym->value : NULL;
+	return sym ? (void *)kernel_symbol_value(sym) : NULL;
 }
 EXPORT_SYMBOL_GPL(__symbol_get);
 
@@ -2228,10 +2247,12 @@ static int verify_export_symbols(struct module *mod)
 
 	for (i = 0; i < ARRAY_SIZE(arr); i++) {
 		for (s = arr[i].sym; s < arr[i].sym + arr[i].num; s++) {
-			if (find_symbol(s->name, &owner, NULL, true, false)) {
+			if (find_symbol(kernel_symbol_name(s), &owner, NULL,
+					true, false)) {
 				pr_err("%s: exports duplicate symbol %s"
 				       " (owned by %s)\n",
-				       mod->name, s->name, module_name(owner));
+				       mod->name, kernel_symbol_name(s),
+				       module_name(owner));
 				return -ENOEXEC;
 			}
 		}
@@ -2280,7 +2301,7 @@ static int simplify_symbols(struct module *mod, const struct load_info *info)
 			ksym = resolve_symbol_wait(mod, info, name);
 			/* Ok if resolved.  */
 			if (ksym && !IS_ERR(ksym)) {
-				sym[i].st_value = ksym->value;
+				sym[i].st_value = kernel_symbol_value(ksym);
 				break;
 			}
 
@@ -2540,7 +2561,7 @@ static int is_exported(const char *name, unsigned long value,
 		ks = lookup_symbol(name, __start___ksymtab, __stop___ksymtab);
 	else
 		ks = lookup_symbol(name, mod->syms, mod->syms + mod->num_syms);
-	return ks != NULL && ks->value = value;
+	return ks != NULL && kernel_symbol_value(ks) = value;
 }
 
 /* As per nm */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH v6 2/8] module: use relative references for __ksymtab entries
@ 2017-12-27  8:50   ` Ard Biesheuvel
  0 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-arm-kernel

An ordinary arm64 defconfig build has ~64 KB worth of __ksymtab
entries, each consisting of two 64-bit fields containing absolute
references, to the symbol itself and to a char array containing
its name, respectively.

When we build the same configuration with KASLR enabled, we end
up with an additional ~192 KB of relocations in the .init section,
i.e., one 24 byte entry for each absolute reference, which all need
to be processed at boot time.

Given how the struct kernel_symbol that describes each entry is
completely local to module.c (except for the references emitted
by EXPORT_SYMBOL() itself), we can easily modify it to contain
two 32-bit relative references instead. This reduces the size of
the __ksymtab section by 50% for all 64-bit architectures, and
gets rid of the runtime relocations entirely for architectures
implementing KASLR, either via standard PIE linking (arm64) or
using custom host tools (x86).
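
As a stand-alone illustration of the trick (plain userspace C, not part of
this patch; all names are made up), a 32-bit place-relative reference
stores "target minus the address of the field itself", and adding the
field's own address back recovers the pointer, exactly as the
kernel_symbol_value() helper further down does:

  #include <stdint.h>
  #include <stdio.h>

  struct rel_ref {
          int32_t offset;         /* (char *)target - (char *)&offset */
  };

  static int answer = 42;
  static struct rel_ref ref;      /* same image as 'answer', so the delta
                                     fits in a signed 32-bit value */

  static void *rel_ref_resolve(const struct rel_ref *r)
  {
          /* add the field's own address back to recover the target */
          return (void *)((intptr_t)&r->offset + r->offset);
  }

  int main(void)
  {
          /* the assembler normally computes "sym - ." at build time */
          ref.offset = (int32_t)((intptr_t)&answer - (intptr_t)&ref.offset);
          printf("%d\n", *(int *)rel_ref_resolve(&ref));  /* prints 42 */
          return 0;
  }

With two such 32-bit fields per entry, a ksymtab entry shrinks from 16 to
8 bytes on 64-bit, and no absolute address is left for the boot-time
relocation code to patch.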

Note that the binary search involving __ksymtab contents relies
on each section being sorted by symbol name. This is implemented
based on the input section names, not the names in the ksymtab
entries, so this patch does not interfere with that.

Given that the use of place-relative relocations requires support
both in the toolchain and in the module loader, we cannot enable
this feature for all architectures. So make it dependent on whether
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS is defined.

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Garnier <thgarnie@google.com>
Cc: Nicolas Pitre <nico@linaro.org>
Acked-by: Jessica Yu <jeyu@kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/x86/include/asm/Kbuild   |  1 +
 arch/x86/include/asm/export.h |  5 ---
 include/asm-generic/export.h  | 12 ++++-
 include/linux/compiler.h      | 11 +++++
 include/linux/export.h        | 46 +++++++++++++++-----
 kernel/module.c               | 33 +++++++++++---
 6 files changed, 84 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/Kbuild b/arch/x86/include/asm/Kbuild
index 5d6a53fd7521..3e8a88dcaa1d 100644
--- a/arch/x86/include/asm/Kbuild
+++ b/arch/x86/include/asm/Kbuild
@@ -9,5 +9,6 @@ generated-y += xen-hypercalls.h
 generic-y += clkdev.h
 generic-y += dma-contiguous.h
 generic-y += early_ioremap.h
+generic-y += export.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
diff --git a/arch/x86/include/asm/export.h b/arch/x86/include/asm/export.h
deleted file mode 100644
index 2a51d66689c5..000000000000
--- a/arch/x86/include/asm/export.h
+++ /dev/null
@@ -1,5 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifdef CONFIG_64BIT
-#define KSYM_ALIGN 16
-#endif
-#include <asm-generic/export.h>
diff --git a/include/asm-generic/export.h b/include/asm-generic/export.h
index 719db1968d81..97ce606459ae 100644
--- a/include/asm-generic/export.h
+++ b/include/asm-generic/export.h
@@ -5,12 +5,10 @@
 #define KSYM_FUNC(x) x
 #endif
 #ifdef CONFIG_64BIT
-#define __put .quad
 #ifndef KSYM_ALIGN
 #define KSYM_ALIGN 8
 #endif
 #else
-#define __put .long
 #ifndef KSYM_ALIGN
 #define KSYM_ALIGN 4
 #endif
@@ -25,6 +23,16 @@
 #define KSYM(name) name
 #endif
 
+.macro __put, val, name
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	.long	\val - ., \name - .
+#elif defined(CONFIG_64BIT)
+	.quad	\val, \name
+#else
+	.long	\val, \name
+#endif
+.endm
+
 /*
  * note on .section use: @progbits vs %progbits nastiness doesn't matter,
  * since we immediately emit into those sections anyway.
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 52e611ab9a6c..fe752d365334 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -327,4 +327,15 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
 	compiletime_assert(__native_word(t),				\
 		"Need native word sized stores/loads for atomicity.")
 
+/*
+ * Force the compiler to emit 'sym' as a symbol, so that we can reference
+ * it from inline assembler. Necessary in case 'sym' could be inlined
+ * otherwise, or eliminated entirely due to lack of references that are
+ * visible to the compiler.
+ */
+#define __ADDRESSABLE(sym) \
+	static void *__attribute__((section(".discard.text"), used))	\
+		__PASTE(__discard_##sym, __LINE__)(void)		\
+			{ return (void *)&sym; }			\
+
 #endif /* __LINUX_COMPILER_H */
diff --git a/include/linux/export.h b/include/linux/export.h
index 1a1dfdb2a5c6..5112d0c41512 100644
--- a/include/linux/export.h
+++ b/include/linux/export.h
@@ -24,12 +24,6 @@
 #define VMLINUX_SYMBOL_STR(x) __VMLINUX_SYMBOL_STR(x)
 
 #ifndef __ASSEMBLY__
-struct kernel_symbol
-{
-	unsigned long value;
-	const char *name;
-};
-
 #ifdef MODULE
 extern struct module __this_module;
 #define THIS_MODULE (&__this_module)
@@ -60,17 +54,47 @@ extern struct module __this_module;
 #define __CRC_SYMBOL(sym, sec)
 #endif
 
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+#include <linux/compiler.h>
+/*
+ * Emit the ksymtab entry as a pair of relative references: this reduces
+ * the size by half on 64-bit architectures, and eliminates the need for
+ * absolute relocations that require runtime processing on relocatable
+ * kernels.
+ */
+#define __KSYMTAB_ENTRY(sym, sec)					\
+	__ADDRESSABLE(sym)						\
+	asm("	.section \"___ksymtab" sec "+" #sym "\", \"a\"	\n"	\
+	    "	.balign	8					\n"	\
+	    VMLINUX_SYMBOL_STR(__ksymtab_##sym) ":		\n"	\
+	    "	.long "	VMLINUX_SYMBOL_STR(sym) "- .		\n"	\
+	    "	.long "	VMLINUX_SYMBOL_STR(__kstrtab_##sym) "- .\n"	\
+	    "	.previous					\n")
+
+struct kernel_symbol {
+	signed int value_offset;
+	signed int name_offset;
+};
+#else
+#define __KSYMTAB_ENTRY(sym, sec)					\
+	static const struct kernel_symbol __ksymtab_##sym		\
+	__attribute__((section("___ksymtab" sec "+" #sym), used))	\
+	= { (unsigned long)&sym, __kstrtab_##sym }
+
+struct kernel_symbol {
+	unsigned long value;
+	const char *name;
+};
+#endif
+
 /* For every exported symbol, place a struct in the __ksymtab section */
 #define ___EXPORT_SYMBOL(sym, sec)					\
 	extern typeof(sym) sym;						\
 	__CRC_SYMBOL(sym, sec)						\
 	static const char __kstrtab_##sym[]				\
-	__attribute__((section("__ksymtab_strings"), aligned(1)))	\
+	__attribute__((section("__ksymtab_strings"), used, aligned(1)))	\
 	= VMLINUX_SYMBOL_STR(sym);					\
-	static const struct kernel_symbol __ksymtab_##sym		\
-	__used								\
-	__attribute__((section("___ksymtab" sec "+" #sym), used))	\
-	= { (unsigned long)&sym, __kstrtab_##sym }
+	__KSYMTAB_ENTRY(sym, sec)
 
 #if defined(__KSYM_DEPS__)
 
diff --git a/kernel/module.c b/kernel/module.c
index dea01ac9cb74..d3a908ffc42c 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -549,12 +549,31 @@ static bool check_symbol(const struct symsearch *syms,
 	return true;
 }
 
+static unsigned long kernel_symbol_value(const struct kernel_symbol *sym)
+{
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	return (unsigned long)&sym->value_offset + sym->value_offset;
+#else
+	return sym->value;
+#endif
+}
+
+static const char *kernel_symbol_name(const struct kernel_symbol *sym)
+{
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	return (const char *)((unsigned long)&sym->name_offset +
+			      sym->name_offset);
+#else
+	return sym->name;
+#endif
+}
+
 static int cmp_name(const void *va, const void *vb)
 {
 	const char *a;
 	const struct kernel_symbol *b;
 	a = va; b = vb;
-	return strcmp(a, b->name);
+	return strcmp(a, kernel_symbol_name(b));
 }
 
 static bool find_symbol_in_section(const struct symsearch *syms,
@@ -2198,7 +2217,7 @@ void *__symbol_get(const char *symbol)
 		sym = NULL;
 	preempt_enable();
 
-	return sym ? (void *)sym->value : NULL;
+	return sym ? (void *)kernel_symbol_value(sym) : NULL;
 }
 EXPORT_SYMBOL_GPL(__symbol_get);
 
@@ -2228,10 +2247,12 @@ static int verify_export_symbols(struct module *mod)
 
 	for (i = 0; i < ARRAY_SIZE(arr); i++) {
 		for (s = arr[i].sym; s < arr[i].sym + arr[i].num; s++) {
-			if (find_symbol(s->name, &owner, NULL, true, false)) {
+			if (find_symbol(kernel_symbol_name(s), &owner, NULL,
+					true, false)) {
 				pr_err("%s: exports duplicate symbol %s"
 				       " (owned by %s)\n",
-				       mod->name, s->name, module_name(owner));
+				       mod->name, kernel_symbol_name(s),
+				       module_name(owner));
 				return -ENOEXEC;
 			}
 		}
@@ -2280,7 +2301,7 @@ static int simplify_symbols(struct module *mod, const struct load_info *info)
 			ksym = resolve_symbol_wait(mod, info, name);
 			/* Ok if resolved.  */
 			if (ksym && !IS_ERR(ksym)) {
-				sym[i].st_value = ksym->value;
+				sym[i].st_value = kernel_symbol_value(ksym);
 				break;
 			}
 
@@ -2540,7 +2561,7 @@ static int is_exported(const char *name, unsigned long value,
 		ks = lookup_symbol(name, __start___ksymtab, __stop___ksymtab);
 	else
 		ks = lookup_symbol(name, mod->syms, mod->syms + mod->num_syms);
-	return ks != NULL && ks->value == value;
+	return ks != NULL && kernel_symbol_value(ks) == value;
 }
 
 /* As per nm */
-- 
2.11.0


* [PATCH v6 3/8] init: allow initcall tables to be emitted using relative references
  2017-12-27  8:50 ` Ard Biesheuvel
  (?)
@ 2017-12-27  8:50   ` Ard Biesheuvel
  -1 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ard Biesheuvel, H. Peter Anvin, Ralf Baechle, Arnd Bergmann,
	Heiko Carstens, Kees Cook, Will Deacon, Michael Ellerman,
	Thomas Garnier, Thomas Gleixner, Serge E. Hallyn, Bjorn Helgaas,
	Benjamin Herrenschmidt, Russell King, Paul Mackerras,
	Catalin Marinas, David S. Miller, Petr Mladek, Ingo Molnar,
	James Morris, Andrew Morton, Nicolas Pitre, Josh Poimboeuf,
	Steven Rostedt, Martin Schwidefsky, Sergey Senozhatsky,
	Linus Torvalds, Jessica Yu, linux-arm-kernel, linux-mips,
	linuxppc-dev, linux-s390, sparclinux, x86

Allow the initcall tables to be emitted using relative references that
are only half the size on 64-bit architectures and don't require fixups
at runtime on relocatable kernels.
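
As an illustration (simplified, not part of the patch; my_init is a
made-up function), with CONFIG_HAVE_ARCH_PREL32_RELOCATIONS set a
core_initcall(my_init) now boils down to roughly:

  __ADDRESSABLE(my_init)
  asm(".section \".initcall1.init\", \"a\"   \n"
      "__initcall_my_init1:                  \n"
      "        .long my_init - .             \n"
      ".previous                             \n");

i.e. a single 32-bit "function minus entry address" value per initcall.
The table walker recovers the function pointer by adding the entry's own
address back to that delta, which is what initcall_from_entry() below does.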

Cc: Petr Mladek <pmladek@suse.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: James Morris <james.l.morris@oracle.com>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 include/linux/init.h   | 44 +++++++++++++++-----
 init/main.c            | 32 +++++++-------
 kernel/printk/printk.c |  4 +-
 security/security.c    |  4 +-
 4 files changed, 53 insertions(+), 31 deletions(-)

diff --git a/include/linux/init.h b/include/linux/init.h
index ea1b31101d9e..125bbea99c6b 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -109,8 +109,24 @@
 typedef int (*initcall_t)(void);
 typedef void (*exitcall_t)(void);
 
-extern initcall_t __con_initcall_start[], __con_initcall_end[];
-extern initcall_t __security_initcall_start[], __security_initcall_end[];
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+typedef signed int initcall_entry_t;
+
+static inline initcall_t initcall_from_entry(initcall_entry_t *entry)
+{
+	return (initcall_t)((unsigned long)entry + *entry);
+}
+#else
+typedef initcall_t initcall_entry_t;
+
+static inline initcall_t initcall_from_entry(initcall_entry_t *entry)
+{
+	return *entry;
+}
+#endif
+
+extern initcall_entry_t __con_initcall_start[], __con_initcall_end[];
+extern initcall_entry_t __security_initcall_start[], __security_initcall_end[];
 
 /* Used for contructor calls. */
 typedef void (*ctor_fn_t)(void);
@@ -160,9 +176,20 @@ extern bool initcall_debug;
  * as KEEP() in the linker script.
  */
 
-#define __define_initcall(fn, id) \
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+#define ___define_initcall(fn, id, __sec)			\
+	__ADDRESSABLE(fn)					\
+	asm(".section	\"" #__sec ".init\", \"a\"	\n"	\
+	"__initcall_" #fn #id ":			\n"	\
+	    ".long "	VMLINUX_SYMBOL_STR(fn) " - .	\n"	\
+	    ".previous					\n");
+#else
+#define ___define_initcall(fn, id, __sec) \
 	static initcall_t __initcall_##fn##id __used \
-	__attribute__((__section__(".initcall" #id ".init"))) = fn;
+		__attribute__((__section__(#__sec ".init"))) = fn;
+#endif
+
+#define __define_initcall(fn, id) ___define_initcall(fn, id, .initcall##id)
 
 /*
  * Early initcalls run before initializing SMP.
@@ -201,13 +228,8 @@ extern bool initcall_debug;
 #define __exitcall(fn)						\
 	static exitcall_t __exitcall_##fn __exit_call = fn
 
-#define console_initcall(fn)					\
-	static initcall_t __initcall_##fn			\
-	__used __section(.con_initcall.init) = fn
-
-#define security_initcall(fn)					\
-	static initcall_t __initcall_##fn			\
-	__used __section(.security_initcall.init) = fn
+#define console_initcall(fn)	___define_initcall(fn,, .con_initcall)
+#define security_initcall(fn)	___define_initcall(fn,, .security_initcall)
 
 struct obs_kernel_param {
 	const char *str;
diff --git a/init/main.c b/init/main.c
index 7b606fc48482..2cbe3c2804ab 100644
--- a/init/main.c
+++ b/init/main.c
@@ -845,18 +845,18 @@ int __init_or_module do_one_initcall(initcall_t fn)
 }
 
 
-extern initcall_t __initcall_start[];
-extern initcall_t __initcall0_start[];
-extern initcall_t __initcall1_start[];
-extern initcall_t __initcall2_start[];
-extern initcall_t __initcall3_start[];
-extern initcall_t __initcall4_start[];
-extern initcall_t __initcall5_start[];
-extern initcall_t __initcall6_start[];
-extern initcall_t __initcall7_start[];
-extern initcall_t __initcall_end[];
-
-static initcall_t *initcall_levels[] __initdata = {
+extern initcall_entry_t __initcall_start[];
+extern initcall_entry_t __initcall0_start[];
+extern initcall_entry_t __initcall1_start[];
+extern initcall_entry_t __initcall2_start[];
+extern initcall_entry_t __initcall3_start[];
+extern initcall_entry_t __initcall4_start[];
+extern initcall_entry_t __initcall5_start[];
+extern initcall_entry_t __initcall6_start[];
+extern initcall_entry_t __initcall7_start[];
+extern initcall_entry_t __initcall_end[];
+
+static initcall_entry_t *initcall_levels[] __initdata = {
 	__initcall0_start,
 	__initcall1_start,
 	__initcall2_start,
@@ -882,7 +882,7 @@ static char *initcall_level_names[] __initdata = {
 
 static void __init do_initcall_level(int level)
 {
-	initcall_t *fn;
+	initcall_entry_t *fn;
 
 	strcpy(initcall_command_line, saved_command_line);
 	parse_args(initcall_level_names[level],
@@ -892,7 +892,7 @@ static void __init do_initcall_level(int level)
 		   NULL, &repair_env_string);
 
 	for (fn = initcall_levels[level]; fn < initcall_levels[level+1]; fn++)
-		do_one_initcall(*fn);
+		do_one_initcall(initcall_from_entry(fn));
 }
 
 static void __init do_initcalls(void)
@@ -923,10 +923,10 @@ static void __init do_basic_setup(void)
 
 static void __init do_pre_smp_initcalls(void)
 {
-	initcall_t *fn;
+	initcall_entry_t *fn;
 
 	for (fn = __initcall_start; fn < __initcall0_start; fn++)
-		do_one_initcall(*fn);
+		do_one_initcall(initcall_from_entry(fn));
 }
 
 /*
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index b9006617710f..0516005261c7 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -2611,7 +2611,7 @@ EXPORT_SYMBOL(unregister_console);
  */
 void __init console_init(void)
 {
-	initcall_t *call;
+	initcall_entry_t *call;
 
 	/* Setup the default TTY line discipline. */
 	n_tty_init();
@@ -2622,7 +2622,7 @@ void __init console_init(void)
 	 */
 	call = __con_initcall_start;
 	while (call < __con_initcall_end) {
-		(*call)();
+		initcall_from_entry(call)();
 		call++;
 	}
 }
diff --git a/security/security.c b/security/security.c
index 1cd8526cb0b7..f648eeff06de 100644
--- a/security/security.c
+++ b/security/security.c
@@ -45,10 +45,10 @@ static __initdata char chosen_lsm[SECURITY_NAME_MAX + 1] =
 
 static void __init do_security_initcalls(void)
 {
-	initcall_t *call;
+	initcall_entry_t *call;
 	call = __security_initcall_start;
 	while (call < __security_initcall_end) {
-		(*call) ();
+		initcall_from_entry(call)();
 		call++;
 	}
 }
-- 
2.11.0


* [PATCH v6 3/8] init: allow initcall tables to be emitted using relative references
@ 2017-12-27  8:50   ` Ard Biesheuvel
  0 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-arm-kernel

Allow the initcall tables to be emitted using relative references that
are only half the size on 64-bit architectures and don't require fixups
at runtime on relocatable kernels.

Cc: Petr Mladek <pmladek@suse.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: James Morris <james.l.morris@oracle.com>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 include/linux/init.h   | 44 +++++++++++++++-----
 init/main.c            | 32 +++++++-------
 kernel/printk/printk.c |  4 +-
 security/security.c    |  4 +-
 4 files changed, 53 insertions(+), 31 deletions(-)

diff --git a/include/linux/init.h b/include/linux/init.h
index ea1b31101d9e..125bbea99c6b 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -109,8 +109,24 @@
 typedef int (*initcall_t)(void);
 typedef void (*exitcall_t)(void);
 
-extern initcall_t __con_initcall_start[], __con_initcall_end[];
-extern initcall_t __security_initcall_start[], __security_initcall_end[];
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+typedef signed int initcall_entry_t;
+
+static inline initcall_t initcall_from_entry(initcall_entry_t *entry)
+{
+	return (initcall_t)((unsigned long)entry + *entry);
+}
+#else
+typedef initcall_t initcall_entry_t;
+
+static inline initcall_t initcall_from_entry(initcall_entry_t *entry)
+{
+	return *entry;
+}
+#endif
+
+extern initcall_entry_t __con_initcall_start[], __con_initcall_end[];
+extern initcall_entry_t __security_initcall_start[], __security_initcall_end[];
 
 /* Used for contructor calls. */
 typedef void (*ctor_fn_t)(void);
@@ -160,9 +176,20 @@ extern bool initcall_debug;
  * as KEEP() in the linker script.
  */
 
-#define __define_initcall(fn, id) \
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+#define ___define_initcall(fn, id, __sec)			\
+	__ADDRESSABLE(fn)					\
+	asm(".section	\"" #__sec ".init\", \"a\"	\n"	\
+	"__initcall_" #fn #id ":			\n"	\
+	    ".long "	VMLINUX_SYMBOL_STR(fn) " - .	\n"	\
+	    ".previous					\n");
+#else
+#define ___define_initcall(fn, id, __sec) \
 	static initcall_t __initcall_##fn##id __used \
-	__attribute__((__section__(".initcall" #id ".init"))) = fn;
+		__attribute__((__section__(#__sec ".init"))) = fn;
+#endif
+
+#define __define_initcall(fn, id) ___define_initcall(fn, id, .initcall##id)
 
 /*
  * Early initcalls run before initializing SMP.
@@ -201,13 +228,8 @@ extern bool initcall_debug;
 #define __exitcall(fn)						\
 	static exitcall_t __exitcall_##fn __exit_call = fn
 
-#define console_initcall(fn)					\
-	static initcall_t __initcall_##fn			\
-	__used __section(.con_initcall.init) = fn
-
-#define security_initcall(fn)					\
-	static initcall_t __initcall_##fn			\
-	__used __section(.security_initcall.init) = fn
+#define console_initcall(fn)	___define_initcall(fn,, .con_initcall)
+#define security_initcall(fn)	___define_initcall(fn,, .security_initcall)
 
 struct obs_kernel_param {
 	const char *str;
diff --git a/init/main.c b/init/main.c
index 7b606fc48482..2cbe3c2804ab 100644
--- a/init/main.c
+++ b/init/main.c
@@ -845,18 +845,18 @@ int __init_or_module do_one_initcall(initcall_t fn)
 }
 
 
-extern initcall_t __initcall_start[];
-extern initcall_t __initcall0_start[];
-extern initcall_t __initcall1_start[];
-extern initcall_t __initcall2_start[];
-extern initcall_t __initcall3_start[];
-extern initcall_t __initcall4_start[];
-extern initcall_t __initcall5_start[];
-extern initcall_t __initcall6_start[];
-extern initcall_t __initcall7_start[];
-extern initcall_t __initcall_end[];
-
-static initcall_t *initcall_levels[] __initdata = {
+extern initcall_entry_t __initcall_start[];
+extern initcall_entry_t __initcall0_start[];
+extern initcall_entry_t __initcall1_start[];
+extern initcall_entry_t __initcall2_start[];
+extern initcall_entry_t __initcall3_start[];
+extern initcall_entry_t __initcall4_start[];
+extern initcall_entry_t __initcall5_start[];
+extern initcall_entry_t __initcall6_start[];
+extern initcall_entry_t __initcall7_start[];
+extern initcall_entry_t __initcall_end[];
+
+static initcall_entry_t *initcall_levels[] __initdata = {
 	__initcall0_start,
 	__initcall1_start,
 	__initcall2_start,
@@ -882,7 +882,7 @@ static char *initcall_level_names[] __initdata = {
 
 static void __init do_initcall_level(int level)
 {
-	initcall_t *fn;
+	initcall_entry_t *fn;
 
 	strcpy(initcall_command_line, saved_command_line);
 	parse_args(initcall_level_names[level],
@@ -892,7 +892,7 @@ static void __init do_initcall_level(int level)
 		   NULL, &repair_env_string);
 
 	for (fn = initcall_levels[level]; fn < initcall_levels[level+1]; fn++)
-		do_one_initcall(*fn);
+		do_one_initcall(initcall_from_entry(fn));
 }
 
 static void __init do_initcalls(void)
@@ -923,10 +923,10 @@ static void __init do_basic_setup(void)
 
 static void __init do_pre_smp_initcalls(void)
 {
-	initcall_t *fn;
+	initcall_entry_t *fn;
 
 	for (fn = __initcall_start; fn < __initcall0_start; fn++)
-		do_one_initcall(*fn);
+		do_one_initcall(initcall_from_entry(fn));
 }
 
 /*
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index b9006617710f..0516005261c7 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -2611,7 +2611,7 @@ EXPORT_SYMBOL(unregister_console);
  */
 void __init console_init(void)
 {
-	initcall_t *call;
+	initcall_entry_t *call;
 
 	/* Setup the default TTY line discipline. */
 	n_tty_init();
@@ -2622,7 +2622,7 @@ void __init console_init(void)
 	 */
 	call = __con_initcall_start;
 	while (call < __con_initcall_end) {
-		(*call)();
+		initcall_from_entry(call)();
 		call++;
 	}
 }
diff --git a/security/security.c b/security/security.c
index 1cd8526cb0b7..f648eeff06de 100644
--- a/security/security.c
+++ b/security/security.c
@@ -45,10 +45,10 @@ static __initdata char chosen_lsm[SECURITY_NAME_MAX + 1] =
 
 static void __init do_security_initcalls(void)
 {
-	initcall_t *call;
+	initcall_entry_t *call;
 	call = __security_initcall_start;
 	while (call < __security_initcall_end) {
-		(*call) ();
+		initcall_from_entry(call)();
 		call++;
 	}
 }
-- 
2.11.0



* [PATCH v6 3/8] init: allow initcall tables to be emitted using relative references
@ 2017-12-27  8:50   ` Ard Biesheuvel
  0 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-arm-kernel

Allow the initcall tables to be emitted using relative references that
are only half the size on 64-bit architectures and don't require fixups
at runtime on relocatable kernels.

Cc: Petr Mladek <pmladek@suse.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: James Morris <james.l.morris@oracle.com>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 include/linux/init.h   | 44 +++++++++++++++-----
 init/main.c            | 32 +++++++-------
 kernel/printk/printk.c |  4 +-
 security/security.c    |  4 +-
 4 files changed, 53 insertions(+), 31 deletions(-)

diff --git a/include/linux/init.h b/include/linux/init.h
index ea1b31101d9e..125bbea99c6b 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -109,8 +109,24 @@
 typedef int (*initcall_t)(void);
 typedef void (*exitcall_t)(void);
 
-extern initcall_t __con_initcall_start[], __con_initcall_end[];
-extern initcall_t __security_initcall_start[], __security_initcall_end[];
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+typedef signed int initcall_entry_t;
+
+static inline initcall_t initcall_from_entry(initcall_entry_t *entry)
+{
+	return (initcall_t)((unsigned long)entry + *entry);
+}
+#else
+typedef initcall_t initcall_entry_t;
+
+static inline initcall_t initcall_from_entry(initcall_entry_t *entry)
+{
+	return *entry;
+}
+#endif
+
+extern initcall_entry_t __con_initcall_start[], __con_initcall_end[];
+extern initcall_entry_t __security_initcall_start[], __security_initcall_end[];
 
 /* Used for contructor calls. */
 typedef void (*ctor_fn_t)(void);
@@ -160,9 +176,20 @@ extern bool initcall_debug;
  * as KEEP() in the linker script.
  */
 
-#define __define_initcall(fn, id) \
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+#define ___define_initcall(fn, id, __sec)			\
+	__ADDRESSABLE(fn)					\
+	asm(".section	\"" #__sec ".init\", \"a\"	\n"	\
+	"__initcall_" #fn #id ":			\n"	\
+	    ".long "	VMLINUX_SYMBOL_STR(fn) " - .	\n"	\
+	    ".previous					\n");
+#else
+#define ___define_initcall(fn, id, __sec) \
 	static initcall_t __initcall_##fn##id __used \
-	__attribute__((__section__(".initcall" #id ".init"))) = fn;
+		__attribute__((__section__(#__sec ".init"))) = fn;
+#endif
+
+#define __define_initcall(fn, id) ___define_initcall(fn, id, .initcall##id)
 
 /*
  * Early initcalls run before initializing SMP.
@@ -201,13 +228,8 @@ extern bool initcall_debug;
 #define __exitcall(fn)						\
 	static exitcall_t __exitcall_##fn __exit_call = fn
 
-#define console_initcall(fn)					\
-	static initcall_t __initcall_##fn			\
-	__used __section(.con_initcall.init) = fn
-
-#define security_initcall(fn)					\
-	static initcall_t __initcall_##fn			\
-	__used __section(.security_initcall.init) = fn
+#define console_initcall(fn)	___define_initcall(fn,, .con_initcall)
+#define security_initcall(fn)	___define_initcall(fn,, .security_initcall)
 
 struct obs_kernel_param {
 	const char *str;
diff --git a/init/main.c b/init/main.c
index 7b606fc48482..2cbe3c2804ab 100644
--- a/init/main.c
+++ b/init/main.c
@@ -845,18 +845,18 @@ int __init_or_module do_one_initcall(initcall_t fn)
 }
 
 
-extern initcall_t __initcall_start[];
-extern initcall_t __initcall0_start[];
-extern initcall_t __initcall1_start[];
-extern initcall_t __initcall2_start[];
-extern initcall_t __initcall3_start[];
-extern initcall_t __initcall4_start[];
-extern initcall_t __initcall5_start[];
-extern initcall_t __initcall6_start[];
-extern initcall_t __initcall7_start[];
-extern initcall_t __initcall_end[];
-
-static initcall_t *initcall_levels[] __initdata = {
+extern initcall_entry_t __initcall_start[];
+extern initcall_entry_t __initcall0_start[];
+extern initcall_entry_t __initcall1_start[];
+extern initcall_entry_t __initcall2_start[];
+extern initcall_entry_t __initcall3_start[];
+extern initcall_entry_t __initcall4_start[];
+extern initcall_entry_t __initcall5_start[];
+extern initcall_entry_t __initcall6_start[];
+extern initcall_entry_t __initcall7_start[];
+extern initcall_entry_t __initcall_end[];
+
+static initcall_entry_t *initcall_levels[] __initdata = {
 	__initcall0_start,
 	__initcall1_start,
 	__initcall2_start,
@@ -882,7 +882,7 @@ static char *initcall_level_names[] __initdata = {
 
 static void __init do_initcall_level(int level)
 {
-	initcall_t *fn;
+	initcall_entry_t *fn;
 
 	strcpy(initcall_command_line, saved_command_line);
 	parse_args(initcall_level_names[level],
@@ -892,7 +892,7 @@ static void __init do_initcall_level(int level)
 		   NULL, &repair_env_string);
 
 	for (fn = initcall_levels[level]; fn < initcall_levels[level+1]; fn++)
-		do_one_initcall(*fn);
+		do_one_initcall(initcall_from_entry(fn));
 }
 
 static void __init do_initcalls(void)
@@ -923,10 +923,10 @@ static void __init do_basic_setup(void)
 
 static void __init do_pre_smp_initcalls(void)
 {
-	initcall_t *fn;
+	initcall_entry_t *fn;
 
 	for (fn = __initcall_start; fn < __initcall0_start; fn++)
-		do_one_initcall(*fn);
+		do_one_initcall(initcall_from_entry(fn));
 }
 
 /*
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index b9006617710f..0516005261c7 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -2611,7 +2611,7 @@ EXPORT_SYMBOL(unregister_console);
  */
 void __init console_init(void)
 {
-	initcall_t *call;
+	initcall_entry_t *call;
 
 	/* Setup the default TTY line discipline. */
 	n_tty_init();
@@ -2622,7 +2622,7 @@ void __init console_init(void)
 	 */
 	call = __con_initcall_start;
 	while (call < __con_initcall_end) {
-		(*call)();
+		initcall_from_entry(call)();
 		call++;
 	}
 }
diff --git a/security/security.c b/security/security.c
index 1cd8526cb0b7..f648eeff06de 100644
--- a/security/security.c
+++ b/security/security.c
@@ -45,10 +45,10 @@ static __initdata char chosen_lsm[SECURITY_NAME_MAX + 1] =
 
 static void __init do_security_initcalls(void)
 {
-	initcall_t *call;
+	initcall_entry_t *call;
 	call = __security_initcall_start;
 	while (call < __security_initcall_end) {
-		(*call) ();
+		initcall_from_entry(call)();
 		call++;
 	}
 }
-- 
2.11.0


* [PATCH v6 4/8] PCI: Add support for relative addressing in quirk tables
  2017-12-27  8:50 ` Ard Biesheuvel
  (?)
@ 2017-12-27  8:50   ` Ard Biesheuvel
  -1 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ard Biesheuvel, H. Peter Anvin, Ralf Baechle, Arnd Bergmann,
	Heiko Carstens, Kees Cook, Will Deacon, Michael Ellerman,
	Thomas Garnier, Thomas Gleixner, Serge E. Hallyn, Bjorn Helgaas,
	Benjamin Herrenschmidt, Russell King, Paul Mackerras,
	Catalin Marinas, David S. Miller, Petr Mladek, Ingo Molnar,
	James Morris, Andrew Morton, Nicolas Pitre, Josh Poimboeuf,
	Steven Rostedt, Martin Schwidefsky, Sergey Senozhatsky,
	Linus Torvalds, Jessica Yu, linux-arm-kernel, linux-mips,
	linuxppc-dev, linux-s390, sparclinux, x86

Allow the PCI quirk tables to be emitted in a way that avoids absolute
references to the hook functions. This reduces the size of the entries,
and, more importantly, makes them invariant under runtime relocation
(e.g., for KASLR)
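
Callers are unaffected by the different emission scheme: a quirk is still
declared exactly as before, and only DECLARE_PCI_FIXUP_SECTION() decides
whether a struct or an inline-asm entry ends up in the section. A
hypothetical example (made-up vendor/device IDs, sketch only):

  static void quirk_example_disable_msi(struct pci_dev *dev)
  {
          dev->no_msi = 1;        /* any ordinary fixup hook works */
  }
  DECLARE_PCI_FIXUP_EARLY(0x1234, 0x5678, quirk_example_disable_msi);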

Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/pci/quirks.c | 13 ++++++++++---
 include/linux/pci.h  | 20 ++++++++++++++++++++
 2 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 10684b17d0bd..b6d51b4d5ce1 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -3556,9 +3556,16 @@ static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f,
 		     f->vendor == (u16) PCI_ANY_ID) &&
 		    (f->device == dev->device ||
 		     f->device == (u16) PCI_ANY_ID)) {
-			calltime = fixup_debug_start(dev, f->hook);
-			f->hook(dev);
-			fixup_debug_report(dev, calltime, f->hook);
+			void (*hook)(struct pci_dev *dev);
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+			hook = (void *)((unsigned long)&f->hook_offset +
+					f->hook_offset);
+#else
+			hook = f->hook;
+#endif
+			calltime = fixup_debug_start(dev, hook);
+			hook(dev);
+			fixup_debug_report(dev, calltime, hook);
 		}
 }
 
diff --git a/include/linux/pci.h b/include/linux/pci.h
index c170c9250c8b..e8c34afb5d4a 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1792,7 +1792,11 @@ struct pci_fixup {
 	u16 device;		/* You can use PCI_ANY_ID here of course */
 	u32 class;		/* You can use PCI_ANY_ID here too */
 	unsigned int class_shift;	/* should be 0, 8, 16 */
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	signed int hook_offset;
+#else
 	void (*hook)(struct pci_dev *dev);
+#endif
 };
 
 enum pci_fixup_pass {
@@ -1806,12 +1810,28 @@ enum pci_fixup_pass {
 	pci_fixup_suspend_late,	/* pci_device_suspend_late() */
 };
 
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+#define __DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class,	\
+				    class_shift, hook)			\
+	__ADDRESSABLE(hook)						\
+	asm(".section "	#sec ", \"a\"				\n"	\
+	    ".balign	16					\n"	\
+	    ".short "	#vendor ", " #device "			\n"	\
+	    ".long "	#class ", " #class_shift "		\n"	\
+	    ".long "	VMLINUX_SYMBOL_STR(hook) " - .		\n"	\
+	    ".previous						\n");
+#define DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class,	\
+				  class_shift, hook)			\
+	__DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class,	\
+				  class_shift, hook)
+#else
 /* Anonymous variables would be nice... */
 #define DECLARE_PCI_FIXUP_SECTION(section, name, vendor, device, class,	\
 				  class_shift, hook)			\
 	static const struct pci_fixup __PASTE(__pci_fixup_##name,__LINE__) __used	\
 	__attribute__((__section__(#section), aligned((sizeof(void *)))))    \
 		= { vendor, device, class, class_shift, hook };
+#endif
 
 #define DECLARE_PCI_FIXUP_CLASS_EARLY(vendor, device, class,		\
 					 class_shift, hook)		\
-- 
2.11.0


* [PATCH v6 4/8] PCI: Add support for relative addressing in quirk tables
@ 2017-12-27  8:50   ` Ard Biesheuvel
  0 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-arm-kernel

Allow the PCI quirk tables to be emitted in a way that avoids absolute
references to the hook functions. This reduces the size of the entries,
and, more importantly, makes them invariant under runtime relocation
(e.g., for KASLR)

Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/pci/quirks.c | 13 ++++++++++---
 include/linux/pci.h  | 20 ++++++++++++++++++++
 2 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 10684b17d0bd..b6d51b4d5ce1 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -3556,9 +3556,16 @@ static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f,
 		     f->vendor == (u16) PCI_ANY_ID) &&
 		    (f->device == dev->device ||
 		     f->device == (u16) PCI_ANY_ID)) {
-			calltime = fixup_debug_start(dev, f->hook);
-			f->hook(dev);
-			fixup_debug_report(dev, calltime, f->hook);
+			void (*hook)(struct pci_dev *dev);
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+			hook = (void *)((unsigned long)&f->hook_offset +
+					f->hook_offset);
+#else
+			hook = f->hook;
+#endif
+			calltime = fixup_debug_start(dev, hook);
+			hook(dev);
+			fixup_debug_report(dev, calltime, hook);
 		}
 }
 
diff --git a/include/linux/pci.h b/include/linux/pci.h
index c170c9250c8b..e8c34afb5d4a 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1792,7 +1792,11 @@ struct pci_fixup {
 	u16 device;		/* You can use PCI_ANY_ID here of course */
 	u32 class;		/* You can use PCI_ANY_ID here too */
 	unsigned int class_shift;	/* should be 0, 8, 16 */
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	signed int hook_offset;
+#else
 	void (*hook)(struct pci_dev *dev);
+#endif
 };
 
 enum pci_fixup_pass {
@@ -1806,12 +1810,28 @@ enum pci_fixup_pass {
 	pci_fixup_suspend_late,	/* pci_device_suspend_late() */
 };
 
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+#define __DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class,	\
+				    class_shift, hook)			\
+	__ADDRESSABLE(hook)						\
+	asm(".section "	#sec ", \"a\"				\n"	\
+	    ".balign	16					\n"	\
+	    ".short "	#vendor ", " #device "			\n"	\
+	    ".long "	#class ", " #class_shift "		\n"	\
+	    ".long "	VMLINUX_SYMBOL_STR(hook) " - .		\n"	\
+	    ".previous						\n");
+#define DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class,	\
+				  class_shift, hook)			\
+	__DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class,	\
+				  class_shift, hook)
+#else
 /* Anonymous variables would be nice... */
 #define DECLARE_PCI_FIXUP_SECTION(section, name, vendor, device, class,	\
 				  class_shift, hook)			\
 	static const struct pci_fixup __PASTE(__pci_fixup_##name,__LINE__) __used	\
 	__attribute__((__section__(#section), aligned((sizeof(void *)))))    \
 		= { vendor, device, class, class_shift, hook };
+#endif
 
 #define DECLARE_PCI_FIXUP_CLASS_EARLY(vendor, device, class,		\
 					 class_shift, hook)		\
-- 
2.11.0



* [PATCH v6 4/8] PCI: Add support for relative addressing in quirk tables
@ 2017-12-27  8:50   ` Ard Biesheuvel
  0 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-arm-kernel

Allow the PCI quirk tables to be emitted in a way that avoids absolute
references to the hook functions. This reduces the size of the entries,
and, more importantly, makes them invariant under runtime relocation
(e.g., for KASLR)

Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/pci/quirks.c | 13 ++++++++++---
 include/linux/pci.h  | 20 ++++++++++++++++++++
 2 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 10684b17d0bd..b6d51b4d5ce1 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -3556,9 +3556,16 @@ static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f,
 		     f->vendor == (u16) PCI_ANY_ID) &&
 		    (f->device == dev->device ||
 		     f->device == (u16) PCI_ANY_ID)) {
-			calltime = fixup_debug_start(dev, f->hook);
-			f->hook(dev);
-			fixup_debug_report(dev, calltime, f->hook);
+			void (*hook)(struct pci_dev *dev);
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+			hook = (void *)((unsigned long)&f->hook_offset +
+					f->hook_offset);
+#else
+			hook = f->hook;
+#endif
+			calltime = fixup_debug_start(dev, hook);
+			hook(dev);
+			fixup_debug_report(dev, calltime, hook);
 		}
 }
 
diff --git a/include/linux/pci.h b/include/linux/pci.h
index c170c9250c8b..e8c34afb5d4a 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1792,7 +1792,11 @@ struct pci_fixup {
 	u16 device;		/* You can use PCI_ANY_ID here of course */
 	u32 class;		/* You can use PCI_ANY_ID here too */
 	unsigned int class_shift;	/* should be 0, 8, 16 */
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+	signed int hook_offset;
+#else
 	void (*hook)(struct pci_dev *dev);
+#endif
 };
 
 enum pci_fixup_pass {
@@ -1806,12 +1810,28 @@ enum pci_fixup_pass {
 	pci_fixup_suspend_late,	/* pci_device_suspend_late() */
 };
 
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+#define __DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class,	\
+				    class_shift, hook)			\
+	__ADDRESSABLE(hook)						\
+	asm(".section "	#sec ", \"a\"				\n"	\
+	    ".balign	16					\n"	\
+	    ".short "	#vendor ", " #device "			\n"	\
+	    ".long "	#class ", " #class_shift "		\n"	\
+	    ".long "	VMLINUX_SYMBOL_STR(hook) " - .		\n"	\
+	    ".previous						\n");
+#define DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class,	\
+				  class_shift, hook)			\
+	__DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class,	\
+				  class_shift, hook)
+#else
 /* Anonymous variables would be nice... */
 #define DECLARE_PCI_FIXUP_SECTION(section, name, vendor, device, class,	\
 				  class_shift, hook)			\
 	static const struct pci_fixup __PASTE(__pci_fixup_##name,__LINE__) __used	\
 	__attribute__((__section__(#section), aligned((sizeof(void *)))))    \
 		= { vendor, device, class, class_shift, hook };
+#endif
 
 #define DECLARE_PCI_FIXUP_CLASS_EARLY(vendor, device, class,		\
 					 class_shift, hook)		\
-- 
2.11.0


* [PATCH v6 5/8] kernel: tracepoints: add support for relative references
  2017-12-27  8:50 ` Ard Biesheuvel
  (?)
@ 2017-12-27  8:50   ` Ard Biesheuvel
  -1 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ard Biesheuvel, H. Peter Anvin, Ralf Baechle, Arnd Bergmann,
	Heiko Carstens, Kees Cook, Will Deacon, Michael Ellerman,
	Thomas Garnier, Thomas Gleixner, Serge E. Hallyn, Bjorn Helgaas,
	Benjamin Herrenschmidt, Russell King, Paul Mackerras,
	Catalin Marinas, David S. Miller, Petr Mladek, Ingo Molnar,
	James Morris, Andrew Morton, Nicolas Pitre, Josh Poimboeuf,
	Steven Rostedt, Martin Schwidefsky, Sergey Senozhatsky,
	Linus Torvalds, Jessica Yu, linux-arm-kernel, linux-mips,
	linuxppc-dev, linux-s390, sparclinux, x86

To avoid the need for relocating absolute references to tracepoint
structures at boot time when running relocatable kernels (which may
take a disproportionate amount of space), add the option to emit
these tables as relative references instead.
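
Consumers of the tracepoint iterators keep seeing plain struct tracepoint
pointers either way; only the table storage changes. A minimal usage
sketch (illustrative callback and initcall, not part of the patch):

  static void sketch_print_tracepoint(struct tracepoint *tp, void *priv)
  {
          pr_info("tracepoint: %s\n", tp->name);
  }

  static int __init sketch_dump_tracepoints(void)
  {
          /* same call whether the table holds pointers or 32-bit offsets */
          for_each_kernel_tracepoint(sketch_print_tracepoint, NULL);
          return 0;
  }
  late_initcall(sketch_dump_tracepoints);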

Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 include/linux/tracepoint.h | 19 ++++++--
 kernel/tracepoint.c        | 50 +++++++++++---------
 2 files changed, 42 insertions(+), 27 deletions(-)

diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index a26ffbe09e71..d02bf1a695e8 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -228,6 +228,19 @@ extern void syscall_unregfunc(void);
 		return static_key_false(&__tracepoint_##name.key);	\
 	}
 
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+#define __TRACEPOINT_ENTRY(name)					 \
+	asm("	.section \"__tracepoints_ptrs\", \"a\"		     \n" \
+	    "	.balign 4					     \n" \
+	    "	.long " VMLINUX_SYMBOL_STR(__tracepoint_##name) " - .\n" \
+	    "	.previous					     \n")
+#else
+#define __TRACEPOINT_ENTRY(name)					 \
+	static struct tracepoint * const __tracepoint_ptr_##name __used	 \
+	__attribute__((section("__tracepoints_ptrs"))) =		 \
+		&__tracepoint_##name
+#endif
+
 /*
  * We have no guarantee that gcc and the linker won't up-align the tracepoint
  * structures, so we create an array of pointers that will be used for iteration
@@ -237,11 +250,9 @@ extern void syscall_unregfunc(void);
 	static const char __tpstrtab_##name[]				 \
 	__attribute__((section("__tracepoints_strings"))) = #name;	 \
 	struct tracepoint __tracepoint_##name				 \
-	__attribute__((section("__tracepoints"))) =			 \
+	__attribute__((section("__tracepoints"), used)) =		 \
 		{ __tpstrtab_##name, STATIC_KEY_INIT_FALSE, reg, unreg, NULL };\
-	static struct tracepoint * const __tracepoint_ptr_##name __used	 \
-	__attribute__((section("__tracepoints_ptrs"))) =		 \
-		&__tracepoint_##name;
+	__TRACEPOINT_ENTRY(name);
 
 #define DEFINE_TRACE(name)						\
 	DEFINE_TRACE_FN(name, NULL, NULL);
diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index 685c50ae6300..05649fef106c 100644
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -327,6 +327,28 @@ int tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data)
 }
 EXPORT_SYMBOL_GPL(tracepoint_probe_unregister);
 
+static void for_each_tracepoint_range(struct tracepoint * const *begin,
+		struct tracepoint * const *end,
+		void (*fct)(struct tracepoint *tp, void *priv),
+		void *priv)
+{
+	if (!begin)
+		return;
+
+	if (IS_ENABLED(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)) {
+		const int *iter;
+
+		for (iter = (const int *)begin; iter < (const int *)end; iter++)
+			fct((struct tracepoint *)((unsigned long)iter + *iter),
+			    priv);
+	} else {
+		struct tracepoint * const *iter;
+
+		for (iter = begin; iter < end; iter++)
+			fct(*iter, priv);
+	}
+}
+
 #ifdef CONFIG_MODULES
 bool trace_module_has_bad_taint(struct module *mod)
 {
@@ -391,15 +413,9 @@ EXPORT_SYMBOL_GPL(unregister_tracepoint_module_notifier);
  * Ensure the tracer unregistered the module's probes before the module
  * teardown is performed. Prevents leaks of probe and data pointers.
  */
-static void tp_module_going_check_quiescent(struct tracepoint * const *begin,
-		struct tracepoint * const *end)
+static void tp_module_going_check_quiescent(struct tracepoint *tp, void *priv)
 {
-	struct tracepoint * const *iter;
-
-	if (!begin)
-		return;
-	for (iter = begin; iter < end; iter++)
-		WARN_ON_ONCE((*iter)->funcs);
+	WARN_ON_ONCE(tp->funcs);
 }
 
 static int tracepoint_module_coming(struct module *mod)
@@ -450,8 +466,9 @@ static void tracepoint_module_going(struct module *mod)
 			 * Called the going notifier before checking for
 			 * quiescence.
 			 */
-			tp_module_going_check_quiescent(mod->tracepoints_ptrs,
-				mod->tracepoints_ptrs + mod->num_tracepoints);
+			for_each_tracepoint_range(mod->tracepoints_ptrs,
+				mod->tracepoints_ptrs + mod->num_tracepoints,
+				tp_module_going_check_quiescent, NULL);
 			break;
 		}
 	}
@@ -503,19 +520,6 @@ static __init int init_tracepoints(void)
 __initcall(init_tracepoints);
 #endif /* CONFIG_MODULES */
 
-static void for_each_tracepoint_range(struct tracepoint * const *begin,
-		struct tracepoint * const *end,
-		void (*fct)(struct tracepoint *tp, void *priv),
-		void *priv)
-{
-	struct tracepoint * const *iter;
-
-	if (!begin)
-		return;
-	for (iter = begin; iter < end; iter++)
-		fct(*iter, priv);
-}
-
 /**
  * for_each_kernel_tracepoint - iteration on all kernel tracepoints
  * @fct: callback
-- 
2.11.0


* [PATCH v6 5/8] kernel: tracepoints: add support for relative references
@ 2017-12-27  8:50   ` Ard Biesheuvel
  0 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-arm-kernel

To avoid the need for relocating absolute references to tracepoint
structures at boot time when running relocatable kernels (which may
take a disproportionate amount of space), add the option to emit
these tables as relative references instead.

Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 include/linux/tracepoint.h | 19 ++++++--
 kernel/tracepoint.c        | 50 +++++++++++---------
 2 files changed, 42 insertions(+), 27 deletions(-)

diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index a26ffbe09e71..d02bf1a695e8 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -228,6 +228,19 @@ extern void syscall_unregfunc(void);
 		return static_key_false(&__tracepoint_##name.key);	\
 	}
 
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+#define __TRACEPOINT_ENTRY(name)					 \
+	asm("	.section \"__tracepoints_ptrs\", \"a\"		     \n" \
+	    "	.balign 4					     \n" \
+	    "	.long " VMLINUX_SYMBOL_STR(__tracepoint_##name) " - .\n" \
+	    "	.previous					     \n")
+#else
+#define __TRACEPOINT_ENTRY(name)					 \
+	static struct tracepoint * const __tracepoint_ptr_##name __used	 \
+	__attribute__((section("__tracepoints_ptrs"))) =		 \
+		&__tracepoint_##name
+#endif
+
 /*
  * We have no guarantee that gcc and the linker won't up-align the tracepoint
  * structures, so we create an array of pointers that will be used for iteration
@@ -237,11 +250,9 @@ extern void syscall_unregfunc(void);
 	static const char __tpstrtab_##name[]				 \
 	__attribute__((section("__tracepoints_strings"))) = #name;	 \
 	struct tracepoint __tracepoint_##name				 \
-	__attribute__((section("__tracepoints"))) =			 \
+	__attribute__((section("__tracepoints"), used)) =		 \
 		{ __tpstrtab_##name, STATIC_KEY_INIT_FALSE, reg, unreg, NULL };\
-	static struct tracepoint * const __tracepoint_ptr_##name __used	 \
-	__attribute__((section("__tracepoints_ptrs"))) =		 \
-		&__tracepoint_##name;
+	__TRACEPOINT_ENTRY(name);
 
 #define DEFINE_TRACE(name)						\
 	DEFINE_TRACE_FN(name, NULL, NULL);
diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index 685c50ae6300..05649fef106c 100644
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -327,6 +327,28 @@ int tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data)
 }
 EXPORT_SYMBOL_GPL(tracepoint_probe_unregister);
 
+static void for_each_tracepoint_range(struct tracepoint * const *begin,
+		struct tracepoint * const *end,
+		void (*fct)(struct tracepoint *tp, void *priv),
+		void *priv)
+{
+	if (!begin)
+		return;
+
+	if (IS_ENABLED(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)) {
+		const int *iter;
+
+		for (iter = (const int *)begin; iter < (const int *)end; iter++)
+			fct((struct tracepoint *)((unsigned long)iter + *iter),
+			    priv);
+	} else {
+		struct tracepoint * const *iter;
+
+		for (iter = begin; iter < end; iter++)
+			fct(*iter, priv);
+	}
+}
+
 #ifdef CONFIG_MODULES
 bool trace_module_has_bad_taint(struct module *mod)
 {
@@ -391,15 +413,9 @@ EXPORT_SYMBOL_GPL(unregister_tracepoint_module_notifier);
  * Ensure the tracer unregistered the module's probes before the module
  * teardown is performed. Prevents leaks of probe and data pointers.
  */
-static void tp_module_going_check_quiescent(struct tracepoint * const *begin,
-		struct tracepoint * const *end)
+static void tp_module_going_check_quiescent(struct tracepoint *tp, void *priv)
 {
-	struct tracepoint * const *iter;
-
-	if (!begin)
-		return;
-	for (iter = begin; iter < end; iter++)
-		WARN_ON_ONCE((*iter)->funcs);
+	WARN_ON_ONCE(tp->funcs);
 }
 
 static int tracepoint_module_coming(struct module *mod)
@@ -450,8 +466,9 @@ static void tracepoint_module_going(struct module *mod)
 			 * Called the going notifier before checking for
 			 * quiescence.
 			 */
-			tp_module_going_check_quiescent(mod->tracepoints_ptrs,
-				mod->tracepoints_ptrs + mod->num_tracepoints);
+			for_each_tracepoint_range(mod->tracepoints_ptrs,
+				mod->tracepoints_ptrs + mod->num_tracepoints,
+				tp_module_going_check_quiescent, NULL);
 			break;
 		}
 	}
@@ -503,19 +520,6 @@ static __init int init_tracepoints(void)
 __initcall(init_tracepoints);
 #endif /* CONFIG_MODULES */
 
-static void for_each_tracepoint_range(struct tracepoint * const *begin,
-		struct tracepoint * const *end,
-		void (*fct)(struct tracepoint *tp, void *priv),
-		void *priv)
-{
-	struct tracepoint * const *iter;
-
-	if (!begin)
-		return;
-	for (iter = begin; iter < end; iter++)
-		fct(*iter, priv);
-}
-
 /**
  * for_each_kernel_tracepoint - iteration on all kernel tracepoints
  * @fct: callback
-- 
2.11.0



* [PATCH v6 5/8] kernel: tracepoints: add support for relative references
@ 2017-12-27  8:50   ` Ard Biesheuvel
  0 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-arm-kernel

To avoid the need for relocating absolute references to tracepoint
structures at boot time when running relocatable kernels (which may
take a disproportionate amount of space), add the option to emit
these tables as relative references instead.

Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 include/linux/tracepoint.h | 19 ++++++--
 kernel/tracepoint.c        | 50 +++++++++++---------
 2 files changed, 42 insertions(+), 27 deletions(-)

diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index a26ffbe09e71..d02bf1a695e8 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -228,6 +228,19 @@ extern void syscall_unregfunc(void);
 		return static_key_false(&__tracepoint_##name.key);	\
 	}
 
+#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+#define __TRACEPOINT_ENTRY(name)					 \
+	asm("	.section \"__tracepoints_ptrs\", \"a\"		     \n" \
+	    "	.balign 4					     \n" \
+	    "	.long " VMLINUX_SYMBOL_STR(__tracepoint_##name) " - .\n" \
+	    "	.previous					     \n")
+#else
+#define __TRACEPOINT_ENTRY(name)					 \
+	static struct tracepoint * const __tracepoint_ptr_##name __used	 \
+	__attribute__((section("__tracepoints_ptrs"))) =		 \
+		&__tracepoint_##name
+#endif
+
 /*
  * We have no guarantee that gcc and the linker won't up-align the tracepoint
  * structures, so we create an array of pointers that will be used for iteration
@@ -237,11 +250,9 @@ extern void syscall_unregfunc(void);
 	static const char __tpstrtab_##name[]				 \
 	__attribute__((section("__tracepoints_strings"))) = #name;	 \
 	struct tracepoint __tracepoint_##name				 \
-	__attribute__((section("__tracepoints"))) =			 \
+	__attribute__((section("__tracepoints"), used)) =		 \
 		{ __tpstrtab_##name, STATIC_KEY_INIT_FALSE, reg, unreg, NULL };\
-	static struct tracepoint * const __tracepoint_ptr_##name __used	 \
-	__attribute__((section("__tracepoints_ptrs"))) =		 \
-		&__tracepoint_##name;
+	__TRACEPOINT_ENTRY(name);
 
 #define DEFINE_TRACE(name)						\
 	DEFINE_TRACE_FN(name, NULL, NULL);
diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index 685c50ae6300..05649fef106c 100644
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -327,6 +327,28 @@ int tracepoint_probe_unregister(struct tracepoint *tp, void *probe, void *data)
 }
 EXPORT_SYMBOL_GPL(tracepoint_probe_unregister);
 
+static void for_each_tracepoint_range(struct tracepoint * const *begin,
+		struct tracepoint * const *end,
+		void (*fct)(struct tracepoint *tp, void *priv),
+		void *priv)
+{
+	if (!begin)
+		return;
+
+	if (IS_ENABLED(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)) {
+		const int *iter;
+
+		for (iter = (const int *)begin; iter < (const int *)end; iter++)
+			fct((struct tracepoint *)((unsigned long)iter + *iter),
+			    priv);
+	} else {
+		struct tracepoint * const *iter;
+
+		for (iter = begin; iter < end; iter++)
+			fct(*iter, priv);
+	}
+}
+
 #ifdef CONFIG_MODULES
 bool trace_module_has_bad_taint(struct module *mod)
 {
@@ -391,15 +413,9 @@ EXPORT_SYMBOL_GPL(unregister_tracepoint_module_notifier);
  * Ensure the tracer unregistered the module's probes before the module
  * teardown is performed. Prevents leaks of probe and data pointers.
  */
-static void tp_module_going_check_quiescent(struct tracepoint * const *begin,
-		struct tracepoint * const *end)
+static void tp_module_going_check_quiescent(struct tracepoint *tp, void *priv)
 {
-	struct tracepoint * const *iter;
-
-	if (!begin)
-		return;
-	for (iter = begin; iter < end; iter++)
-		WARN_ON_ONCE((*iter)->funcs);
+	WARN_ON_ONCE(tp->funcs);
 }
 
 static int tracepoint_module_coming(struct module *mod)
@@ -450,8 +466,9 @@ static void tracepoint_module_going(struct module *mod)
 			 * Called the going notifier before checking for
 			 * quiescence.
 			 */
-			tp_module_going_check_quiescent(mod->tracepoints_ptrs,
-				mod->tracepoints_ptrs + mod->num_tracepoints);
+			for_each_tracepoint_range(mod->tracepoints_ptrs,
+				mod->tracepoints_ptrs + mod->num_tracepoints,
+				tp_module_going_check_quiescent, NULL);
 			break;
 		}
 	}
@@ -503,19 +520,6 @@ static __init int init_tracepoints(void)
 __initcall(init_tracepoints);
 #endif /* CONFIG_MODULES */
 
-static void for_each_tracepoint_range(struct tracepoint * const *begin,
-		struct tracepoint * const *end,
-		void (*fct)(struct tracepoint *tp, void *priv),
-		void *priv)
-{
-	struct tracepoint * const *iter;
-
-	if (!begin)
-		return;
-	for (iter = begin; iter < end; iter++)
-		fct(*iter, priv);
-}
-
 /**
  * for_each_kernel_tracepoint - iteration on all kernel tracepoints
  * @fct: callback
-- 
2.11.0


* [PATCH v6 6/8] kernel/jump_label: abstract jump_entry member accessors
  2017-12-27  8:50 ` Ard Biesheuvel
  (?)
@ 2017-12-27  8:50   ` Ard Biesheuvel
  -1 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ard Biesheuvel, H. Peter Anvin, Ralf Baechle, Arnd Bergmann,
	Heiko Carstens, Kees Cook, Will Deacon, Michael Ellerman,
	Thomas Garnier, Thomas Gleixner, Serge E. Hallyn, Bjorn Helgaas,
	Benjamin Herrenschmidt, Russell King, Paul Mackerras,
	Catalin Marinas, David S. Miller, Petr Mladek, Ingo Molnar,
	James Morris, Andrew Morton, Nicolas Pitre, Josh Poimboeuf,
	Steven Rostedt, Martin Schwidefsky, Sergey Senozhatsky,
	Linus Torvalds, Jessica Yu, linux-arm-kernel, linux-mips,
	linuxppc-dev, linux-s390, sparclinux, x86

In preparation for allowing architectures to use relative references
in jump_label entries [which can dramatically reduce the memory
footprint], introduce abstractions for references to the 'code' and
'key' members of struct jump_entry.
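
A sketch of the intended usage (helper name and placement are
illustrative, not taken from this patch): architecture-independent code
goes through the accessors rather than dereferencing the members, so the
entry layout can later be switched to relative references without
touching it.

  static void sketch_mark_init_entries(struct module *mod,
                                       struct jump_entry *start,
                                       struct jump_entry *stop)
  {
          struct jump_entry *entry;

          for (entry = start; entry < stop; entry++)
                  if (within_module_init(jump_entry_code(entry), mod))
                          jump_entry_set_module_init(entry);
  }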

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/include/asm/jump_label.h     | 27 ++++++++++++++
 arch/arm64/include/asm/jump_label.h   | 27 ++++++++++++++
 arch/mips/include/asm/jump_label.h    | 27 ++++++++++++++
 arch/powerpc/include/asm/jump_label.h | 27 ++++++++++++++
 arch/s390/include/asm/jump_label.h    | 20 +++++++++++
 arch/sparc/include/asm/jump_label.h   | 27 ++++++++++++++
 arch/tile/include/asm/jump_label.h    | 27 ++++++++++++++
 arch/x86/include/asm/jump_label.h     | 27 ++++++++++++++
 kernel/jump_label.c                   | 38 +++++++++-----------
 9 files changed, 225 insertions(+), 22 deletions(-)

diff --git a/arch/arm/include/asm/jump_label.h b/arch/arm/include/asm/jump_label.h
index e12d7d096fc0..7b05b404063a 100644
--- a/arch/arm/include/asm/jump_label.h
+++ b/arch/arm/include/asm/jump_label.h
@@ -45,5 +45,32 @@ struct jump_entry {
 	jump_label_t key;
 };
 
+static inline jump_label_t jump_entry_code(const struct jump_entry *entry)
+{
+	return entry->code;
+}
+
+static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
+{
+	return (struct static_key *)((unsigned long)entry->key & ~1UL);
+}
+
+static inline bool jump_entry_is_branch(const struct jump_entry *entry)
+{
+	return (unsigned long)entry->key & 1UL;
+}
+
+static inline bool jump_entry_is_module_init(const struct jump_entry *entry)
+{
+	return entry->code == 0;
+}
+
+static inline void jump_entry_set_module_init(struct jump_entry *entry)
+{
+	entry->code = 0;
+}
+
+#define jump_label_swap		NULL
+
 #endif  /* __ASSEMBLY__ */
 #endif
diff --git a/arch/arm64/include/asm/jump_label.h b/arch/arm64/include/asm/jump_label.h
index 1b5e0e843c3a..9d6e46355c89 100644
--- a/arch/arm64/include/asm/jump_label.h
+++ b/arch/arm64/include/asm/jump_label.h
@@ -62,5 +62,32 @@ struct jump_entry {
 	jump_label_t key;
 };
 
+static inline jump_label_t jump_entry_code(const struct jump_entry *entry)
+{
+	return entry->code;
+}
+
+static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
+{
+	return (struct static_key *)((unsigned long)entry->key & ~1UL);
+}
+
+static inline bool jump_entry_is_branch(const struct jump_entry *entry)
+{
+	return (unsigned long)entry->key & 1UL;
+}
+
+static inline bool jump_entry_is_module_init(const struct jump_entry *entry)
+{
+	return entry->code == 0;
+}
+
+static inline void jump_entry_set_module_init(struct jump_entry *entry)
+{
+	entry->code = 0;
+}
+
+#define jump_label_swap		NULL
+
 #endif  /* __ASSEMBLY__ */
 #endif	/* __ASM_JUMP_LABEL_H */
diff --git a/arch/mips/include/asm/jump_label.h b/arch/mips/include/asm/jump_label.h
index e77672539e8e..70df9293dc49 100644
--- a/arch/mips/include/asm/jump_label.h
+++ b/arch/mips/include/asm/jump_label.h
@@ -66,5 +66,32 @@ struct jump_entry {
 	jump_label_t key;
 };
 
+static inline jump_label_t jump_entry_code(const struct jump_entry *entry)
+{
+	return entry->code;
+}
+
+static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
+{
+	return (struct static_key *)((unsigned long)entry->key & ~1UL);
+}
+
+static inline bool jump_entry_is_branch(const struct jump_entry *entry)
+{
+	return (unsigned long)entry->key & 1UL;
+}
+
+static inline bool jump_entry_is_module_init(const struct jump_entry *entry)
+{
+	return entry->code == 0;
+}
+
+static inline void jump_entry_set_module_init(struct jump_entry *entry)
+{
+	entry->code = 0;
+}
+
+#define jump_label_swap		NULL
+
 #endif  /* __ASSEMBLY__ */
 #endif /* _ASM_MIPS_JUMP_LABEL_H */
diff --git a/arch/powerpc/include/asm/jump_label.h b/arch/powerpc/include/asm/jump_label.h
index 9a287e0ac8b1..412b2699c9f6 100644
--- a/arch/powerpc/include/asm/jump_label.h
+++ b/arch/powerpc/include/asm/jump_label.h
@@ -59,6 +59,33 @@ struct jump_entry {
 	jump_label_t key;
 };
 
+static inline jump_label_t jump_entry_code(const struct jump_entry *entry)
+{
+	return entry->code;
+}
+
+static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
+{
+	return (struct static_key *)((unsigned long)entry->key & ~1UL);
+}
+
+static inline bool jump_entry_is_branch(const struct jump_entry *entry)
+{
+	return (unsigned long)entry->key & 1UL;
+}
+
+static inline bool jump_entry_is_module_init(const struct jump_entry *entry)
+{
+	return entry->code == 0;
+}
+
+static inline void jump_entry_set_module_init(struct jump_entry *entry)
+{
+	entry->code = 0;
+}
+
+#define jump_label_swap		NULL
+
 #else
 #define ARCH_STATIC_BRANCH(LABEL, KEY)		\
 1098:	nop;					\
diff --git a/arch/s390/include/asm/jump_label.h b/arch/s390/include/asm/jump_label.h
index 40f651292aa7..3d4a08e9514b 100644
--- a/arch/s390/include/asm/jump_label.h
+++ b/arch/s390/include/asm/jump_label.h
@@ -50,5 +50,25 @@ struct jump_entry {
 	jump_label_t key;
 };
 
+static inline jump_label_t jump_entry_code(const struct jump_entry *entry)
+{
+	return entry->code;
+}
+
+static inline jump_label_t jump_entry_key(const struct jump_entry *entry)
+{
+	return entry->key;
+}
+
+static inline bool jump_entry_is_module_init(const struct jump_entry *entry)
+{
+	return entry->code == 0;
+}
+
+static inline void jump_entry_set_module_init(struct jump_entry *entry)
+{
+	entry->code = 0;
+}
+
 #endif  /* __ASSEMBLY__ */
 #endif
diff --git a/arch/sparc/include/asm/jump_label.h b/arch/sparc/include/asm/jump_label.h
index 94eb529dcb77..18e893687f7c 100644
--- a/arch/sparc/include/asm/jump_label.h
+++ b/arch/sparc/include/asm/jump_label.h
@@ -48,5 +48,32 @@ struct jump_entry {
 	jump_label_t key;
 };
 
+static inline jump_label_t jump_entry_code(const struct jump_entry *entry)
+{
+	return entry->code;
+}
+
+static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
+{
+	return (struct static_key *)((unsigned long)entry->key & ~1UL);
+}
+
+static inline bool jump_entry_is_branch(const struct jump_entry *entry)
+{
+	return (unsigned long)entry->key & 1UL;
+}
+
+static inline bool jump_entry_is_module_init(const struct jump_entry *entry)
+{
+	return entry->code == 0;
+}
+
+static inline void jump_entry_set_module_init(struct jump_entry *entry)
+{
+	entry->code = 0;
+}
+
+#define jump_label_swap		NULL
+
 #endif  /* __ASSEMBLY__ */
 #endif
diff --git a/arch/tile/include/asm/jump_label.h b/arch/tile/include/asm/jump_label.h
index cde7573f397b..86acaa6ff33d 100644
--- a/arch/tile/include/asm/jump_label.h
+++ b/arch/tile/include/asm/jump_label.h
@@ -55,4 +55,31 @@ struct jump_entry {
 	jump_label_t key;
 };
 
+static inline jump_label_t jump_entry_code(const struct jump_entry *entry)
+{
+	return entry->code;
+}
+
+static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
+{
+	return (struct static_key *)((unsigned long)entry->key & ~1UL);
+}
+
+static inline bool jump_entry_is_branch(const struct jump_entry *entry)
+{
+	return (unsigned long)entry->key & 1UL;
+}
+
+static inline bool jump_entry_is_module_init(const struct jump_entry *entry)
+{
+	return entry->code == 0;
+}
+
+static inline void jump_entry_set_module_init(struct jump_entry *entry)
+{
+	entry->code = 0;
+}
+
+#define jump_label_swap		NULL
+
 #endif /* _ASM_TILE_JUMP_LABEL_H */
diff --git a/arch/x86/include/asm/jump_label.h b/arch/x86/include/asm/jump_label.h
index 8c0de4282659..009ff2699d07 100644
--- a/arch/x86/include/asm/jump_label.h
+++ b/arch/x86/include/asm/jump_label.h
@@ -74,6 +74,33 @@ struct jump_entry {
 	jump_label_t key;
 };
 
+static inline jump_label_t jump_entry_code(const struct jump_entry *entry)
+{
+	return entry->code;
+}
+
+static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
+{
+	return (struct static_key *)((unsigned long)entry->key & ~1UL);
+}
+
+static inline bool jump_entry_is_branch(const struct jump_entry *entry)
+{
+	return (unsigned long)entry->key & 1UL;
+}
+
+static inline bool jump_entry_is_module_init(const struct jump_entry *entry)
+{
+	return entry->code == 0;
+}
+
+static inline void jump_entry_set_module_init(struct jump_entry *entry)
+{
+	entry->code = 0;
+}
+
+#define jump_label_swap		NULL
+
 #else	/* __ASSEMBLY__ */
 
 .macro STATIC_JUMP_IF_TRUE target, key, def
diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 8594d24e4adc..4f44db58d981 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -37,10 +37,12 @@ static int jump_label_cmp(const void *a, const void *b)
 	const struct jump_entry *jea = a;
 	const struct jump_entry *jeb = b;
 
-	if (jea->key < jeb->key)
+	if ((unsigned long)jump_entry_key(jea) <
+	    (unsigned long)jump_entry_key(jeb))
 		return -1;
 
-	if (jea->key > jeb->key)
+	if ((unsigned long)jump_entry_key(jea) >
+	    (unsigned long)jump_entry_key(jeb))
 		return 1;
 
 	return 0;
@@ -53,7 +55,8 @@ jump_label_sort_entries(struct jump_entry *start, struct jump_entry *stop)
 
 	size = (((unsigned long)stop - (unsigned long)start)
 					/ sizeof(struct jump_entry));
-	sort(start, size, sizeof(struct jump_entry), jump_label_cmp, NULL);
+	sort(start, size, sizeof(struct jump_entry), jump_label_cmp,
+	     jump_label_swap);
 }
 
 static void jump_label_update(struct static_key *key);
@@ -254,8 +257,8 @@ EXPORT_SYMBOL_GPL(jump_label_rate_limit);
 
 static int addr_conflict(struct jump_entry *entry, void *start, void *end)
 {
-	if (entry->code <= (unsigned long)end &&
-		entry->code + JUMP_LABEL_NOP_SIZE > (unsigned long)start)
+	if (jump_entry_code(entry) <= (unsigned long)end &&
+	    jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE > (unsigned long)start)
 		return 1;
 
 	return 0;
@@ -314,16 +317,6 @@ static inline void static_key_set_linked(struct static_key *key)
 	key->type |= JUMP_TYPE_LINKED;
 }
 
-static inline struct static_key *jump_entry_key(struct jump_entry *entry)
-{
-	return (struct static_key *)((unsigned long)entry->key & ~1UL);
-}
-
-static bool jump_entry_branch(struct jump_entry *entry)
-{
-	return (unsigned long)entry->key & 1UL;
-}
-
 /***
  * A 'struct static_key' uses a union such that it either points directly
  * to a table of 'struct jump_entry' or to a linked list of modules which in
@@ -348,7 +341,7 @@ static enum jump_label_type jump_label_type(struct jump_entry *entry)
 {
 	struct static_key *key = jump_entry_key(entry);
 	bool enabled = static_key_enabled(key);
-	bool branch = jump_entry_branch(entry);
+	bool branch = jump_entry_is_branch(entry);
 
 	/* See the comment in linux/jump_label.h */
 	return enabled ^ branch;
@@ -364,7 +357,8 @@ static void __jump_label_update(struct static_key *key,
 		 * kernel_text_address() verifies we are not in core kernel
 		 * init code, see jump_label_invalidate_module_init().
 		 */
-		if (entry->code && kernel_text_address(entry->code))
+		if (!jump_entry_is_module_init(entry) &&
+		    kernel_text_address(jump_entry_code(entry)))
 			arch_jump_label_transform(entry, jump_label_type(entry));
 	}
 }
@@ -417,7 +411,7 @@ static enum jump_label_type jump_label_init_type(struct jump_entry *entry)
 {
 	struct static_key *key = jump_entry_key(entry);
 	bool type = static_key_type(key);
-	bool branch = jump_entry_branch(entry);
+	bool branch = jump_entry_is_branch(entry);
 
 	/* See the comment in linux/jump_label.h */
 	return type ^ branch;
@@ -541,7 +535,7 @@ static int jump_label_add_module(struct module *mod)
 			continue;
 
 		key = iterk;
-		if (within_module(iter->key, mod)) {
+		if (within_module((unsigned long)key, mod)) {
 			static_key_set_entries(key, iter);
 			continue;
 		}
@@ -591,7 +585,7 @@ static void jump_label_del_module(struct module *mod)
 
 		key = jump_entry_key(iter);
 
-		if (within_module(iter->key, mod))
+		if (within_module((unsigned long)key, mod))
 			continue;
 
 		/* No memory during module load */
@@ -634,8 +628,8 @@ static void jump_label_invalidate_module_init(struct module *mod)
 	struct jump_entry *iter;
 
 	for (iter = iter_start; iter < iter_stop; iter++) {
-		if (within_module_init(iter->code, mod))
-			iter->code = 0;
+		if (within_module_init(jump_entry_code(iter), mod))
+			jump_entry_set_module_init(iter);
 	}
 }
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH v6 7/8] arm64/kernel: jump_label: use relative references
  2017-12-27  8:50 ` Ard Biesheuvel
  (?)
@ 2017-12-27  8:50   ` Ard Biesheuvel
  -1 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ard Biesheuvel, H. Peter Anvin, Ralf Baechle, Arnd Bergmann,
	Heiko Carstens, Kees Cook, Will Deacon, Michael Ellerman,
	Thomas Garnier, Thomas Gleixner, Serge E. Hallyn, Bjorn Helgaas,
	Benjamin Herrenschmidt, Russell King, Paul Mackerras,
	Catalin Marinas, David S. Miller, Petr Mladek, Ingo Molnar,
	James Morris, Andrew Morton, Nicolas Pitre, Josh Poimboeuf,
	Steven Rostedt, Martin Schwidefsky, Sergey Senozhatsky,
	Linus Torvalds, Jessica Yu, linux-arm-kernel, linux-mips,
	linuxppc-dev, linux-s390, sparclinux, x86

On a randomly chosen distro kernel build for arm64, vmlinux.o shows the
following sections, containing the jump label entries and the associated
RELA relocation records, respectively:

  ...
  [38088] __jump_table      PROGBITS         0000000000000000  00e19f30
       000000000002ea10  0000000000000000  WA       0     0     8
  [38089] .rela__jump_table RELA             0000000000000000  01fd8bb0
       000000000008be30  0000000000000018   I      38178   38088     8
  ...

In other words, we have 190 KB worth of 'struct jump_entry' instances,
and 573 KB worth of RELA entries to relocate each entry's code, target
and key members. This means the RELA section occupies 10% of the .init
segment, and the two sections combined represent 5% of vmlinux's entire
memory footprint.
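
For reference, these figures are mutually consistent: the entry size shown
for the .rela section is 0x18 (24) bytes, i.e. one Elf64 RELA record, and
each 24-byte jump_entry currently needs three of them (code, target, key):

  190,992 bytes (0x2ea10) of __jump_table / 24 bytes per entry = 7,958 entries
  7,958 entries * 3 records * 24 bytes per record = 572,976 bytes (0x8be30)
  after the conversion: 7,958 entries * 12 bytes each, and no RELA records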

So let's switch from 64-bit absolute references to 32-bit relative
references: this reduces the size of the __jump_table by 50%, and gets
rid of the RELA section entirely.

Note that this requires some extra care in the sorting routine, given
that the offsets change when entries are moved around in the jump_entry
table.
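
Concretely: a place-relative field stored at address A with value V
resolves to the target T = A + V. If sorting moves the entry so that the
field ends up at address B, it must still resolve to T, so the stored
value has to be rewritten as V' = T - B = V + (A - B), i.e. adjusted by
the distance the entry moved. The arch-specific jump_label_swap() added
below applies exactly this correction to all three fields.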

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/jump_label.h | 27 ++++++++++++--------
 arch/arm64/kernel/jump_label.c      | 22 +++++++++++++---
 2 files changed, 36 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/jump_label.h b/arch/arm64/include/asm/jump_label.h
index 9d6e46355c89..5cec68616125 100644
--- a/arch/arm64/include/asm/jump_label.h
+++ b/arch/arm64/include/asm/jump_label.h
@@ -30,8 +30,8 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran
 {
 	asm goto("1: nop\n\t"
 		 ".pushsection __jump_table,  \"aw\"\n\t"
-		 ".align 3\n\t"
-		 ".quad 1b, %l[l_yes], %c0\n\t"
+		 ".align 2\n\t"
+		 ".long 1b - ., %l[l_yes] - ., %c0 - .\n\t"
 		 ".popsection\n\t"
 		 :  :  "i"(&((char *)key)[branch]) :  : l_yes);
 
@@ -44,8 +44,8 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool
 {
 	asm goto("1: b %l[l_yes]\n\t"
 		 ".pushsection __jump_table,  \"aw\"\n\t"
-		 ".align 3\n\t"
-		 ".quad 1b, %l[l_yes], %c0\n\t"
+		 ".align 2\n\t"
+		 ".long 1b - ., %l[l_yes] - ., %c0 - .\n\t"
 		 ".popsection\n\t"
 		 :  :  "i"(&((char *)key)[branch]) :  : l_yes);
 
@@ -57,19 +57,26 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool
 typedef u64 jump_label_t;
 
 struct jump_entry {
-	jump_label_t code;
-	jump_label_t target;
-	jump_label_t key;
+	s32 code;
+	s32 target;
+	s32 key;
 };
 
 static inline jump_label_t jump_entry_code(const struct jump_entry *entry)
 {
-	return entry->code;
+	return (jump_label_t)&entry->code + entry->code;
+}
+
+static inline jump_label_t jump_entry_target(const struct jump_entry *entry)
+{
+	return (jump_label_t)&entry->target + entry->target;
 }
 
 static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
 {
-	return (struct static_key *)((unsigned long)entry->key & ~1UL);
+	unsigned long key = (unsigned long)&entry->key + entry->key;
+
+	return (struct static_key *)(key & ~1UL);
 }
 
 static inline bool jump_entry_is_branch(const struct jump_entry *entry)
@@ -87,7 +94,7 @@ static inline void jump_entry_set_module_init(struct jump_entry *entry)
 	entry->code = 0;
 }
 
-#define jump_label_swap		NULL
+void jump_label_swap(void *a, void *b, int size);
 
 #endif  /* __ASSEMBLY__ */
 #endif	/* __ASM_JUMP_LABEL_H */
diff --git a/arch/arm64/kernel/jump_label.c b/arch/arm64/kernel/jump_label.c
index c2dd1ad3e648..2b8e459e91f7 100644
--- a/arch/arm64/kernel/jump_label.c
+++ b/arch/arm64/kernel/jump_label.c
@@ -25,12 +25,12 @@
 void arch_jump_label_transform(struct jump_entry *entry,
 			       enum jump_label_type type)
 {
-	void *addr = (void *)entry->code;
+	void *addr = (void *)jump_entry_code(entry);
 	u32 insn;
 
 	if (type == JUMP_LABEL_JMP) {
-		insn = aarch64_insn_gen_branch_imm(entry->code,
-						   entry->target,
+		insn = aarch64_insn_gen_branch_imm(jump_entry_code(entry),
+						   jump_entry_target(entry),
 						   AARCH64_INSN_BRANCH_NOLINK);
 	} else {
 		insn = aarch64_insn_gen_nop();
@@ -50,4 +50,20 @@ void arch_jump_label_transform_static(struct jump_entry *entry,
 	 */
 }
 
+void jump_label_swap(void *a, void *b, int size)
+{
+	long delta = (unsigned long)a - (unsigned long)b;
+	struct jump_entry *jea = a;
+	struct jump_entry *jeb = b;
+	struct jump_entry tmp = *jea;
+
+	jea->code	= jeb->code - delta;
+	jea->target	= jeb->target - delta;
+	jea->key	= jeb->key - delta;
+
+	jeb->code	= tmp.code + delta;
+	jeb->target	= tmp.target + delta;
+	jeb->key	= tmp.key + delta;
+}
+
 #endif	/* HAVE_JUMP_LABEL */
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH v6 8/8] x86/kernel: jump_table: use relative references
  2017-12-27  8:50 ` Ard Biesheuvel
  (?)
@ 2017-12-27  8:50   ` Ard Biesheuvel
  -1 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27  8:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ard Biesheuvel, H. Peter Anvin, Ralf Baechle, Arnd Bergmann,
	Heiko Carstens, Kees Cook, Will Deacon, Michael Ellerman,
	Thomas Garnier, Thomas Gleixner, Serge E. Hallyn, Bjorn Helgaas,
	Benjamin Herrenschmidt, Russell King, Paul Mackerras,
	Catalin Marinas, David S. Miller, Petr Mladek, Ingo Molnar,
	James Morris, Andrew Morton, Nicolas Pitre, Josh Poimboeuf,
	Steven Rostedt, Martin Schwidefsky, Sergey Senozhatsky,
	Linus Torvalds, Jessica Yu, linux-arm-kernel, linux-mips,
	linuxppc-dev, linux-s390, sparclinux, x86

Similar to the arm64 case, 64-bit x86 can benefit from using 32-bit
relative references rather than 64-bit absolute ones when emitting
struct jump_entry instances. Not only does this reduce the memory
footprint of the entries themselves by 50%, it also removes the need
for carrying relocation metadata on relocatable builds (i.e., for KASLR),
which saves a fair chunk of .init space as well (although the savings
are not as dramatic as on arm64).
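
To see why the relocation metadata can be dropped: when a relocatable
kernel is moved by an offset R at boot, an absolute field holding a
target address T has to be patched to T + R, which is what the RELA
records (and the boot-time fixup pass) exist for. A place-relative field
stores T - A, and since the entry (A) and its target (T) are shifted by
the same R, that difference is unchanged, so no relocation record or
fixup is needed.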

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/x86/include/asm/jump_label.h | 35 +++++++-----
 arch/x86/kernel/jump_label.c      | 59 ++++++++++++++------
 tools/objtool/special.c           |  4 +-
 3 files changed, 65 insertions(+), 33 deletions(-)

diff --git a/arch/x86/include/asm/jump_label.h b/arch/x86/include/asm/jump_label.h
index 009ff2699d07..91c01af96907 100644
--- a/arch/x86/include/asm/jump_label.h
+++ b/arch/x86/include/asm/jump_label.h
@@ -36,8 +36,8 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran
 	asm_volatile_goto("1:"
 		".byte " __stringify(STATIC_KEY_INIT_NOP) "\n\t"
 		".pushsection __jump_table,  \"aw\" \n\t"
-		_ASM_ALIGN "\n\t"
-		_ASM_PTR "1b, %l[l_yes], %c0 + %c1 \n\t"
+		".balign 4\n\t"
+		".long 1b - ., %l[l_yes] - ., %c0 + %c1 - .\n\t"
 		".popsection \n\t"
 		: :  "i" (key), "i" (branch) : : l_yes);
 
@@ -52,8 +52,8 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool
 		".byte 0xe9\n\t .long %l[l_yes] - 2f\n\t"
 		"2:\n\t"
 		".pushsection __jump_table,  \"aw\" \n\t"
-		_ASM_ALIGN "\n\t"
-		_ASM_PTR "1b, %l[l_yes], %c0 + %c1 \n\t"
+		".balign 4\n\t"
+		".long 1b - ., %l[l_yes] - ., %c0 + %c1 - .\n\t"
 		".popsection \n\t"
 		: :  "i" (key), "i" (branch) : : l_yes);
 
@@ -69,19 +69,26 @@ typedef u32 jump_label_t;
 #endif
 
 struct jump_entry {
-	jump_label_t code;
-	jump_label_t target;
-	jump_label_t key;
+	s32 code;
+	s32 target;
+	s32 key;
 };
 
 static inline jump_label_t jump_entry_code(const struct jump_entry *entry)
 {
-	return entry->code;
+	return (jump_label_t)&entry->code + entry->code;
+}
+
+static inline jump_label_t jump_entry_target(const struct jump_entry *entry)
+{
+	return (jump_label_t)&entry->target + entry->target;
 }
 
 static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
 {
-	return (struct static_key *)((unsigned long)entry->key & ~1UL);
+	unsigned long key = (unsigned long)&entry->key + entry->key;
+
+	return (struct static_key *)(key & ~1UL);
 }
 
 static inline bool jump_entry_is_branch(const struct jump_entry *entry)
@@ -99,7 +106,7 @@ static inline void jump_entry_set_module_init(struct jump_entry *entry)
 	entry->code = 0;
 }
 
-#define jump_label_swap		NULL
+void jump_label_swap(void *a, void *b, int size);
 
 #else	/* __ASSEMBLY__ */
 
@@ -114,8 +121,8 @@ static inline void jump_entry_set_module_init(struct jump_entry *entry)
 	.byte		STATIC_KEY_INIT_NOP
 	.endif
 	.pushsection __jump_table, "aw"
-	_ASM_ALIGN
-	_ASM_PTR	.Lstatic_jump_\@, \target, \key
+	.balign		4
+	.long		.Lstatic_jump_\@ - ., \target - ., \key - .
 	.popsection
 .endm
 
@@ -130,8 +137,8 @@ static inline void jump_entry_set_module_init(struct jump_entry *entry)
 .Lstatic_jump_after_\@:
 	.endif
 	.pushsection __jump_table, "aw"
-	_ASM_ALIGN
-	_ASM_PTR	.Lstatic_jump_\@, \target, \key + 1
+	.balign		4
+	.long		.Lstatic_jump_\@ - ., \target - ., \key - . + 1
 	.popsection
 .endm
 
diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index e56c95be2808..cc5034b42335 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -52,22 +52,24 @@ static void __jump_label_transform(struct jump_entry *entry,
 			 * Jump label is enabled for the first time.
 			 * So we expect a default_nop...
 			 */
-			if (unlikely(memcmp((void *)entry->code, default_nop, 5)
-				     != 0))
-				bug_at((void *)entry->code, __LINE__);
+			if (unlikely(memcmp((void *)jump_entry_code(entry),
+					    default_nop, 5) != 0))
+				bug_at((void *)jump_entry_code(entry),
+				       __LINE__);
 		} else {
 			/*
 			 * ...otherwise expect an ideal_nop. Otherwise
 			 * something went horribly wrong.
 			 */
-			if (unlikely(memcmp((void *)entry->code, ideal_nop, 5)
-				     != 0))
-				bug_at((void *)entry->code, __LINE__);
+			if (unlikely(memcmp((void *)jump_entry_code(entry),
+					    ideal_nop, 5) != 0))
+				bug_at((void *)jump_entry_code(entry),
+				       __LINE__);
 		}
 
 		code.jump = 0xe9;
-		code.offset = entry->target -
-				(entry->code + JUMP_LABEL_NOP_SIZE);
+		code.offset = jump_entry_target(entry) -
+			      (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
 	} else {
 		/*
 		 * We are disabling this jump label. If it is not what
@@ -76,14 +78,18 @@ static void __jump_label_transform(struct jump_entry *entry,
 		 * are converting the default nop to the ideal nop.
 		 */
 		if (init) {
-			if (unlikely(memcmp((void *)entry->code, default_nop, 5) != 0))
-				bug_at((void *)entry->code, __LINE__);
+			if (unlikely(memcmp((void *)jump_entry_code(entry),
+					    default_nop, 5) != 0))
+				bug_at((void *)jump_entry_code(entry),
+				       __LINE__);
 		} else {
 			code.jump = 0xe9;
-			code.offset = entry->target -
-				(entry->code + JUMP_LABEL_NOP_SIZE);
-			if (unlikely(memcmp((void *)entry->code, &code, 5) != 0))
-				bug_at((void *)entry->code, __LINE__);
+			code.offset = jump_entry_target(entry) -
+				(jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
+			if (unlikely(memcmp((void *)jump_entry_code(entry),
+				     &code, 5) != 0))
+				bug_at((void *)jump_entry_code(entry),
+				       __LINE__);
 		}
 		memcpy(&code, ideal_nops[NOP_ATOMIC5], JUMP_LABEL_NOP_SIZE);
 	}
@@ -97,10 +103,13 @@ static void __jump_label_transform(struct jump_entry *entry,
 	 *
 	 */
 	if (poker)
-		(*poker)((void *)entry->code, &code, JUMP_LABEL_NOP_SIZE);
+		(*poker)((void *)jump_entry_code(entry), &code,
+			 JUMP_LABEL_NOP_SIZE);
 	else
-		text_poke_bp((void *)entry->code, &code, JUMP_LABEL_NOP_SIZE,
-			     (void *)entry->code + JUMP_LABEL_NOP_SIZE);
+		text_poke_bp((void *)jump_entry_code(entry), &code,
+			     JUMP_LABEL_NOP_SIZE,
+			     (void *)jump_entry_code(entry) +
+			     JUMP_LABEL_NOP_SIZE);
 }
 
 void arch_jump_label_transform(struct jump_entry *entry,
@@ -140,4 +149,20 @@ __init_or_module void arch_jump_label_transform_static(struct jump_entry *entry,
 		__jump_label_transform(entry, type, text_poke_early, 1);
 }
 
+void jump_label_swap(void *a, void *b, int size)
+{
+	long delta = (unsigned long)a - (unsigned long)b;
+	struct jump_entry *jea = a;
+	struct jump_entry *jeb = b;
+	struct jump_entry tmp = *jea;
+
+	jea->code	= jeb->code - delta;
+	jea->target	= jeb->target - delta;
+	jea->key	= jeb->key - delta;
+
+	jeb->code	= tmp.code + delta;
+	jeb->target	= tmp.target + delta;
+	jeb->key	= tmp.key + delta;
+}
+
 #endif
diff --git a/tools/objtool/special.c b/tools/objtool/special.c
index 84f001d52322..98ae55b39037 100644
--- a/tools/objtool/special.c
+++ b/tools/objtool/special.c
@@ -30,9 +30,9 @@
 #define EX_ORIG_OFFSET		0
 #define EX_NEW_OFFSET		4
 
-#define JUMP_ENTRY_SIZE		24
+#define JUMP_ENTRY_SIZE		12
 #define JUMP_ORIG_OFFSET	0
-#define JUMP_NEW_OFFSET		8
+#define JUMP_NEW_OFFSET		4
 
 #define ALT_ENTRY_SIZE		13
 #define ALT_ORIG_OFFSET		0
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 71+ messages in thread
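
The patch above converts struct jump_entry from absolute pointers to 32-bit
place-relative offsets. As a stand-alone C sketch of that idea (not part of
the patch; all names are illustrative), here is how a field that stores
"target minus my own address" is written and later resolved, mirroring what
jump_entry_code()/jump_entry_key() do:

#include <stdint.h>
#include <stdio.h>

static int key;			/* stand-in for a struct static_key */

struct rel_entry {
	int32_t key;		/* distance from this field to &key */
};

static struct rel_entry entry;	/* same image as 'key', so the offset fits in 32 bits */

int main(void)
{
	/* emit: store the target as an offset from the field's own address */
	entry.key = (int32_t)((intptr_t)&key - (intptr_t)&entry.key);

	/* resolve: add the field's address back, as jump_entry_key() does */
	int *resolved = (int *)((intptr_t)&entry.key + entry.key);

	printf("%s\n", resolved == &key ? "match" : "mismatch");
	return 0;
}

Because each field is relative to its own location, entries cannot simply be
copied around when the table is sorted, which is why the patch also replaces
the NULL jump_label_swap definition with a real function that rebases the
stored offsets by the distance the entries move.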

* Re: [PATCH v6 1/8] arch: enable relative relocations for arm64, power, x86, s390 and x86
  2017-12-27  8:50   ` Ard Biesheuvel
  (?)
@ 2017-12-27 19:54     ` Linus Torvalds
  -1 siblings, 0 replies; 71+ messages in thread
From: Linus Torvalds @ 2017-12-27 19:54 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Linux Kernel Mailing List, H. Peter Anvin, Ralf Baechle,
	Arnd Bergmann, Heiko Carstens, Kees Cook, Will Deacon,
	Michael Ellerman, Thomas Garnier, Thomas Gleixner,
	Serge E. Hallyn, Bjorn Helgaas, Benjamin Herrenschmidt,
	Russell King, Paul Mackerras, Catalin Marinas, David S. Miller,
	Petr Mladek, Ingo Molnar, James Morris, Andrew Morton,
	Nicolas Pitre, Josh Poimboeuf, Steven Rostedt,
	Martin Schwidefsky, Sergey Senozhatsky, Jessica Yu,
	linux-arm-kernel, linux-mips, ppc-dev, linux-s390, sparclinux,
	the arch/x86 maintainers

On Wed, Dec 27, 2017 at 12:50 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> index 7da3e5c366a0..49ae5b43fe2b 100644
> --- a/arch/arm64/kernel/vmlinux.lds.S
> +++ b/arch/arm64/kernel/vmlinux.lds.S
> @@ -156,7 +156,7 @@ SECTIONS
>                 CON_INITCALL
>                 SECURITY_INITCALL
>                 INIT_RAM_FS
> -               *(.init.rodata.* .init.bss)     /* from the EFI stub */
> +               *(.init.rodata.* .init.bss .init.discard.*)     /* EFI stub */
>         }
>         .exit.data : {
>                 ARM_EXIT_KEEP(EXIT_DATA)

Weren't you supposed to explain this part in the commit message?

It isn't obvious why this is mixed up with the Kconfig changes, and
somebody already asked about it. The commit message only talks about
the Kconfig changes, and then suddenly there's that odd vmlinux.lds.S
change in there...

              Linus

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH v6 1/8] arch: enable relative relocations for arm64, power, x86, s390 and x86
  2017-12-27 19:54     ` Linus Torvalds
  (?)
@ 2017-12-27 19:59       ` Ard Biesheuvel
  -1 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27 19:59 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Linux Kernel Mailing List, H. Peter Anvin, Ralf Baechle,
	Arnd Bergmann, Heiko Carstens, Kees Cook, Will Deacon,
	Michael Ellerman, Thomas Garnier, Thomas Gleixner,
	Serge E. Hallyn, Bjorn Helgaas, Benjamin Herrenschmidt,
	Russell King, Paul Mackerras, Catalin Marinas, David S. Miller,
	Petr Mladek, Ingo Molnar, James Morris, Andrew Morton,
	Nicolas Pitre, Josh Poimboeuf, Steven Rostedt,
	Martin Schwidefsky, Sergey Senozhatsky, Jessica Yu,
	linux-arm-kernel, linux-mips, ppc-dev, linux-s390, sparclinux,
	the arch/x86 maintainers

On 27 December 2017 at 19:54, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Wed, Dec 27, 2017 at 12:50 AM, Ard Biesheuvel
> <ard.biesheuvel@linaro.org> wrote:
>> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
>> index 7da3e5c366a0..49ae5b43fe2b 100644
>> --- a/arch/arm64/kernel/vmlinux.lds.S
>> +++ b/arch/arm64/kernel/vmlinux.lds.S
>> @@ -156,7 +156,7 @@ SECTIONS
>>                 CON_INITCALL
>>                 SECURITY_INITCALL
>>                 INIT_RAM_FS
>> -               *(.init.rodata.* .init.bss)     /* from the EFI stub */
>> +               *(.init.rodata.* .init.bss .init.discard.*)     /* EFI stub */
>>         }
>>         .exit.data : {
>>                 ARM_EXIT_KEEP(EXIT_DATA)
>
> Weren't you supposed to explain this part in the commit message?
>

Oops. Apologies, I indeed forgot to update the commit log.

> It isn't obvious why this is mixed up with the Kconfig changes, and
> somebody already asked about it. The commit message only talks about
> the Kconfig changes, and then suddenly there's that odd vmlinux.lds.S
> change in there...
>

Yeah. It doesn't make sense to respin right away for just that, so I
will give people some time to respond, and respin in a week or so.

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH v6 2/8] module: use relative references for __ksymtab entries
  2017-12-27  8:50   ` Ard Biesheuvel
  (?)
@ 2017-12-27 20:07     ` Linus Torvalds
  -1 siblings, 0 replies; 71+ messages in thread
From: Linus Torvalds @ 2017-12-27 20:07 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Linux Kernel Mailing List, H. Peter Anvin, Ralf Baechle,
	Arnd Bergmann, Heiko Carstens, Kees Cook, Will Deacon,
	Michael Ellerman, Thomas Garnier, Thomas Gleixner,
	Serge E. Hallyn, Bjorn Helgaas, Benjamin Herrenschmidt,
	Russell King, Paul Mackerras, Catalin Marinas, David S. Miller,
	Petr Mladek, Ingo Molnar, James Morris, Andrew Morton,
	Nicolas Pitre, Josh Poimboeuf, Steven Rostedt,
	Martin Schwidefsky, Sergey Senozhatsky, Jessica Yu,
	linux-arm-kernel, linux-mips, ppc-dev, linux-s390, sparclinux,
	the arch/x86 maintainers, Ingo Molnar

On Wed, Dec 27, 2017 at 12:50 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> index 52e611ab9a6c..fe752d365334 100644
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -327,4 +327,15 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
>         compiletime_assert(__native_word(t),                            \
>                 "Need native word sized stores/loads for atomicity.")
>
> +/*
> + * Force the compiler to emit 'sym' as a symbol, so that we can reference
> + * it from inline assembler. Necessary in case 'sym' could be inlined
> + * otherwise, or eliminated entirely due to lack of references that are
> + * visibile to the compiler.
> + */
> +#define __ADDRESSABLE(sym) \
> +       static void *__attribute__((section(".discard.text"), used))    \
> +               __PASTE(__discard_##sym, __LINE__)(void)                \
> +                       { return (void *)&sym; }                        \
> +
>  #endif /* __LINUX_COMPILER_H */

Isn't this logically the point where you should add the arm64
vmlinux.lds.S change, and explain how ".discard.text" turns into
".init.discard.text" for some odd arm64 reason?

                   Linus

^ permalink raw reply	[flat|nested] 71+ messages in thread
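
For context, a rough sketch of what the quoted __ADDRESSABLE() macro expands
to and why it exists (the function name below is invented for the example;
the real helper name is pasted together with __LINE__): the relative
__ksymtab entry in this series is emitted from inline assembler, which the
compiler cannot see, so a dummy C-level reference is needed to keep the
exported symbol from being dropped or left un-emitted.

int my_func(int x)			/* made-up exported function */
{
	return x + 1;
}

/* roughly what __ADDRESSABLE(my_func) expands to */
static void *__attribute__((section(".discard.text"), used))
__discard_my_func_42(void)
{
	return (void *)&my_func;
}

The helper sits in .discard.text and is normally discarded at link time; its
only purpose is that the compiler sees &my_func being taken.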

* Re: [PATCH v6 2/8] module: use relative references for __ksymtab entries
  2017-12-27 20:07     ` Linus Torvalds
  (?)
@ 2017-12-27 20:11       ` Ard Biesheuvel
  -1 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27 20:11 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Linux Kernel Mailing List, H. Peter Anvin, Ralf Baechle,
	Arnd Bergmann, Heiko Carstens, Kees Cook, Will Deacon,
	Michael Ellerman, Thomas Garnier, Thomas Gleixner,
	Serge E. Hallyn, Bjorn Helgaas, Benjamin Herrenschmidt,
	Russell King, Paul Mackerras, Catalin Marinas, David S. Miller,
	Petr Mladek, Ingo Molnar, James Morris, Andrew Morton,
	Nicolas Pitre, Josh Poimboeuf, Steven Rostedt,
	Martin Schwidefsky, Sergey Senozhatsky, Jessica Yu,
	linux-arm-kernel, linux-mips, ppc-dev, linux-s390, sparclinux,
	the arch/x86 maintainers, Ingo Molnar

On 27 December 2017 at 20:07, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Wed, Dec 27, 2017 at 12:50 AM, Ard Biesheuvel
> <ard.biesheuvel@linaro.org> wrote:
>> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
>> index 52e611ab9a6c..fe752d365334 100644
>> --- a/include/linux/compiler.h
>> +++ b/include/linux/compiler.h
>> @@ -327,4 +327,15 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
>>         compiletime_assert(__native_word(t),                            \
>>                 "Need native word sized stores/loads for atomicity.")
>>
>> +/*
>> + * Force the compiler to emit 'sym' as a symbol, so that we can reference
>> + * it from inline assembler. Necessary in case 'sym' could be inlined
>> + * otherwise, or eliminated entirely due to lack of references that are
>> + * visibile to the compiler.
>> + */
>> +#define __ADDRESSABLE(sym) \
>> +       static void *__attribute__((section(".discard.text"), used))    \
>> +               __PASTE(__discard_##sym, __LINE__)(void)                \
>> +                       { return (void *)&sym; }                        \
>> +
>>  #endif /* __LINUX_COMPILER_H */
>
> Isn't this logically the point where you should add the arm64
> vmlinux.lds.S change, and explain how ".discard.text" turns into
> ".init.discard.text" for some odd arm64 reason?
>

I tried to keep the generic patches generic, so perhaps I should just
put the arm64 vmlinux.lds.S change in a patch on its own?

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH v6 2/8] module: use relative references for __ksymtab entries
  2017-12-27 20:11       ` Ard Biesheuvel
  (?)
@ 2017-12-27 20:13         ` Linus Torvalds
  -1 siblings, 0 replies; 71+ messages in thread
From: Linus Torvalds @ 2017-12-27 20:13 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Linux Kernel Mailing List, H. Peter Anvin, Ralf Baechle,
	Arnd Bergmann, Heiko Carstens, Kees Cook, Will Deacon,
	Michael Ellerman, Thomas Garnier, Thomas Gleixner,
	Serge E. Hallyn, Bjorn Helgaas, Benjamin Herrenschmidt,
	Russell King, Paul Mackerras, Catalin Marinas, David S. Miller,
	Petr Mladek, Ingo Molnar, James Morris, Andrew Morton,
	Nicolas Pitre, Josh Poimboeuf, Steven Rostedt,
	Martin Schwidefsky, Sergey Senozhatsky, Jessica Yu,
	linux-arm-kernel, linux-mips, ppc-dev, linux-s390, sparclinux,
	the arch/x86 maintainers, Ingo Molnar

On Wed, Dec 27, 2017 at 12:11 PM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
>
> I tried to keep the generic patches generic, so perhaps I should just
> put the arm64 vmlinux.lds.S change in a patch on its own?

I guess it doesn't matter, but regardless of where it gets introduced
I would like to see the explanation for where the heck that magical
".init.discard.text" comes from. It's definitely not obvious from the
patches, and is presumably some odd arm64 special case.

                Linus

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH v6 2/8] module: use relative references for __ksymtab entries
  2017-12-27 20:13         ` Linus Torvalds
  (?)
@ 2017-12-27 20:24           ` Ard Biesheuvel
  -1 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-27 20:24 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Linux Kernel Mailing List, H. Peter Anvin, Ralf Baechle,
	Arnd Bergmann, Heiko Carstens, Kees Cook, Will Deacon,
	Michael Ellerman, Thomas Garnier, Thomas Gleixner,
	Serge E. Hallyn, Bjorn Helgaas, Benjamin Herrenschmidt,
	Russell King, Paul Mackerras, Catalin Marinas, David S. Miller,
	Petr Mladek, Ingo Molnar, James Morris, Andrew Morton,
	Nicolas Pitre, Josh Poimboeuf, Steven Rostedt,
	Martin Schwidefsky, Sergey Senozhatsky, Jessica Yu,
	linux-arm-kernel, linux-mips, ppc-dev, linux-s390, sparclinux,
	the arch/x86 maintainers, Ingo Molnar

On 27 December 2017 at 20:13, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Wed, Dec 27, 2017 at 12:11 PM, Ard Biesheuvel
> <ard.biesheuvel@linaro.org> wrote:
>>
>> I tried to keep the generic patches generic, so perhaps I should just
>> put the arm64 vmlinux.lds.S change in a patch on its own?
>
> I guess it doesn't matter, but regardless of where it gets introduced
> I would like to see the explanation for where the heck that magical
> ".init.discard.text" comes from. It's definitely not obvious from the
> patches, and is presumably some odd arm64 special case.
>

This has to do with the EFI stub. x86 and ARM link it into the
decompressor, and so the code and data are not annotated as __init
(and doing so would involve modifying a lot of code). arm64 does not
have a decompressor, and so the EFI stub is linked into the kernel
proper. To make sure the code ends up in the .init segment, all
sections are prepended with .init at the object level, using objcopy.

Annoyingly, we need this because there is a single instance of a
special section that ends up in the EFI stub code: we build lib/sort.c
again as a EFI libstub object, and given that sort() is exported, we
end up with a ksymtab section in the EFI stub. The sort() thing has
caused issues before [0], so perhaps I should just clone sort.c into
drivers/firmware/efi/libstub and get rid of that hack.

[0] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=29f9007b3182ab3f328a31da13e6b1c9072f7a95

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH v6 2/8] module: use relative references for __ksymtab entries
  2017-12-27 20:24           ` Ard Biesheuvel
  (?)
@ 2017-12-28 12:05             ` Ingo Molnar
  -1 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2017-12-28 12:05 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Linus Torvalds, Linux Kernel Mailing List, H. Peter Anvin,
	Ralf Baechle, Arnd Bergmann, Heiko Carstens, Kees Cook,
	Will Deacon, Michael Ellerman, Thomas Garnier, Thomas Gleixner,
	Serge E. Hallyn, Bjorn Helgaas, Benjamin Herrenschmidt,
	Russell King, Paul Mackerras, Catalin Marinas, David S. Miller,
	Petr Mladek, Ingo Molnar, James Morris, Andrew Morton,
	Nicolas Pitre, Josh Poimboeuf, Steven Rostedt,
	Martin Schwidefsky, Sergey Senozhatsky, Jessica Yu,
	linux-arm-kernel, linux-mips, ppc-dev, linux-s390, sparclinux,
	the arch/x86 maintainers


* Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:

> Annoyingly, we need this because there is a single instance of a
> special section that ends up in the EFI stub code: we build lib/sort.c
> again as an EFI libstub object, and given that sort() is exported, we
> end up with a ksymtab section in the EFI stub. The sort() thing has
> caused issues before [0], so perhaps I should just clone sort.c into
> drivers/firmware/efi/libstub and get rid of that hack.
> 
> [0] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=29f9007b3182ab3f328a31da13e6b1c9072f7a95

If the root problem is early bootstrap code randomly using a generic facility that 
isn't __init, then we should definitely improve tooling to at least detect these 
problems.

As bootstrap code gets improved (KASLR, more complex decompression, etc. etc.) we 
keep using new bits of generic facilities...

So this should definitely not be hidden by open coding that function (which has 
various other disadvantages as well), but should be turned from silent breakage 
either into non-breakage (and do so not only for sort() but for other generic 
functions as well), or should be turned into a build failure.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH v6 2/8] module: use relative references for __ksymtab entries
  2017-12-28 12:05             ` Ingo Molnar
  (?)
@ 2017-12-28 12:39               ` Ard Biesheuvel
  -1 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-28 12:39 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Linus Torvalds, Linux Kernel Mailing List, H. Peter Anvin,
	Ralf Baechle, Arnd Bergmann, Heiko Carstens, Kees Cook,
	Will Deacon, Michael Ellerman, Thomas Garnier, Thomas Gleixner,
	Serge E. Hallyn, Bjorn Helgaas, Benjamin Herrenschmidt,
	Russell King, Paul Mackerras, Catalin Marinas, David S. Miller,
	Petr Mladek, Ingo Molnar, James Morris, Andrew Morton,
	Nicolas Pitre, Josh Poimboeuf, Steven Rostedt,
	Martin Schwidefsky, Sergey Senozhatsky, Jessica Yu,
	linux-arm-kernel, linux-mips, ppc-dev, linux-s390, sparclinux,
	the arch/x86 maintainers

On 28 December 2017 at 12:05, Ingo Molnar <mingo@kernel.org> wrote:
>
> * Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>
>> Annoyingly, we need this because there is a single instance of a
>> special section that ends up in the EFI stub code: we build lib/sort.c
>> again as an EFI libstub object, and given that sort() is exported, we
>> end up with a ksymtab section in the EFI stub. The sort() thing has
>> caused issues before [0], so perhaps I should just clone sort.c into
>> drivers/firmware/efi/libstub and get rid of that hack.
>>
>> [0] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=29f9007b3182ab3f328a31da13e6b1c9072f7a95
>
> If the root problem is early bootstrap code randomly using generic facility that
> isn't __init, then we should definitely improve tooling to at least detect these
> problems.
>
> As bootstrap code gets improved (KASLR, more complex decompression, etc. etc.) we
> keep using new bits of generic facilities...
>
> So this should definitely not be hidden by open coding that function (which has
> various other disadvantages as well), but should be turned from silent breakage
> either into non-breakage (and do so not only for sort() but for other generic
> functions as well), or should be turned into a build failure.
>

We already have safeguards in place to ensure that the arm64 EFI stub
(which is essentially the same executable as the kernel proper) only
pulls in code that has been made available to it explicitly. That is
why sort.c is recompiled for the EFI stub, as well as all other C code
that is shared between the stub and the kernel. We also have a build
time check to ensure that the resulting code does not rely on absolute
symbol references, which will be invalid in the UEFI execution
context.

So the only problem is the unneeded ksymtab/kcrctab sections, which
affected ARM for obscure reasons; typically, they just take up some
space. On x86, the kaslr code deals with a similar issue by
#define'ing _LINUX_EXPORT_H before including linux/export.h, which
also gets rid of these sections, but I was a bit reluctant to copy
that pattern. Perhaps we should enhance linux/export.h for reasons
such as these by adding a macro that nops out EXPORT_SYMBOL()
declarations?
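
To make that concrete, something along these lines would do (the
__DISABLE_EXPORTS name and the exact shape of the guard are only an
illustration, not what linux/export.h contains today):

/*
 * Hypothetical sketch: freestanding users such as the EFI stub (or the
 * x86 kaslr code) define __DISABLE_EXPORTS before including this header,
 * so EXPORT_SYMBOL() and friends expand to nothing and no __ksymtab or
 * __kcrctab entries are emitted for those objects.
 */
#ifdef __DISABLE_EXPORTS
#define EXPORT_SYMBOL(sym)
#define EXPORT_SYMBOL_GPL(sym)
#else
/* ... the regular definitions that emit __ksymtab/__kcrctab entries ... */
#endif

That way the stub would no longer need to spoof _LINUX_EXPORT_H.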

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH v6 5/8] kernel: tracepoints: add support for relative references
  2017-12-27  8:50   ` Ard Biesheuvel
  (?)
@ 2017-12-28 15:42     ` Steven Rostedt
  -1 siblings, 0 replies; 71+ messages in thread
From: Steven Rostedt @ 2017-12-28 15:42 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-kernel, H. Peter Anvin, Ralf Baechle, Arnd Bergmann,
	Heiko Carstens, Kees Cook, Will Deacon, Michael Ellerman,
	Thomas Garnier, Thomas Gleixner, Serge E. Hallyn, Bjorn Helgaas,
	Benjamin Herrenschmidt, Russell King, Paul Mackerras,
	Catalin Marinas, David S. Miller, Petr Mladek, Ingo Molnar,
	James Morris, Andrew Morton, Nicolas Pitre, Josh Poimboeuf,
	Martin Schwidefsky, Sergey Senozhatsky, Linus Torvalds,
	Jessica Yu, linux-arm-kernel, linux-mips, linuxppc-dev,
	linux-s390, sparclinux, x86

On Wed, 27 Dec 2017 08:50:30 +0000
Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:

> To avoid the need for relocating absolute references to tracepoint
> structures at boot time when running relocatable kernels (which may
> take a disproportionate amount of space), add the option to emit
> these tables as relative references instead.
> 

I gave this patch a quick skim. It appears not to modify anything
when CONFIG_HAVE_ARCH_PREL32_RELOCATIONS is not defined. I haven't
thoroughly reviewed or tested it, but if it doesn't break anything,
I'm fine giving you an ack.
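
For reference, a minimal sketch of how such a 32-bit self-relative entry
gets turned back into a pointer (an illustration of the scheme, not
necessarily the exact helper the patch adds):

static inline void *offset_to_ptr(const int *off)
{
	/*
	 * The entry stores "target - &entry", so adding the entry's own
	 * address recovers the absolute address of the target.
	 */
	return (void *)((unsigned long)off + *off);
}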

Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

--  Steve

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH v6 8/8] x86/kernel: jump_table: use relative references
  2017-12-27  8:50   ` Ard Biesheuvel
  (?)
@ 2017-12-28 16:19     ` Steven Rostedt
  -1 siblings, 0 replies; 71+ messages in thread
From: Steven Rostedt @ 2017-12-28 16:19 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-kernel, H. Peter Anvin, Ralf Baechle, Arnd Bergmann,
	Heiko Carstens, Kees Cook, Will Deacon, Michael Ellerman,
	Thomas Garnier, Thomas Gleixner, Serge E. Hallyn, Bjorn Helgaas,
	Benjamin Herrenschmidt, Russell King, Paul Mackerras,
	Catalin Marinas, David S. Miller, Petr Mladek, Ingo Molnar,
	James Morris, Andrew Morton, Nicolas Pitre, Josh Poimboeuf,
	Martin Schwidefsky, Sergey Senozhatsky, Linus Torvalds,
	Jessica Yu, linux-arm-kernel, linux-mips, linuxppc-dev,
	linux-s390, sparclinux, x86

On Wed, 27 Dec 2017 08:50:33 +0000
Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:

>  static inline jump_label_t jump_entry_code(const struct jump_entry *entry)
>  {
> -	return entry->code;
> +	return (jump_label_t)&entry->code + entry->code;

I'm paranoid about doing arithmetic on abstract types. What happens in
the future if jump_label_t becomes a pointer? You will get a different
result.

Could we switch these calculations to something like:

	return (jump_label_t)((long)&entry->code + entry->code);

> +}
> +
> +static inline jump_label_t jump_entry_target(const struct jump_entry *entry)
> +{
> +	return (jump_label_t)&entry->target + entry->target;
>  }
>  
>  static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
>  {
> -	return (struct static_key *)((unsigned long)entry->key & ~1UL);
> +	unsigned long key = (unsigned long)&entry->key + entry->key;
> +
> +	return (struct static_key *)(key & ~1UL);
>  }
>  
>  static inline bool jump_entry_is_branch(const struct jump_entry *entry)
> @@ -99,7 +106,7 @@ static inline void jump_entry_set_module_init(struct jump_entry *entry)
>  	entry->code = 0;
>  }
>  
> -#define jump_label_swap		NULL
> +void jump_label_swap(void *a, void *b, int size);
>  
>  #else	/* __ASSEMBLY__ */
>  
> @@ -114,8 +121,8 @@ static inline void jump_entry_set_module_init(struct jump_entry *entry)
>  	.byte		STATIC_KEY_INIT_NOP
>  	.endif
>  	.pushsection __jump_table, "aw"
> -	_ASM_ALIGN
> -	_ASM_PTR	.Lstatic_jump_\@, \target, \key
> +	.balign		4
> +	.long		.Lstatic_jump_\@ - ., \target - ., \key - .
>  	.popsection
>  .endm
>  
> @@ -130,8 +137,8 @@ static inline void jump_entry_set_module_init(struct jump_entry *entry)
>  .Lstatic_jump_after_\@:
>  	.endif
>  	.pushsection __jump_table, "aw"
> -	_ASM_ALIGN
> -	_ASM_PTR	.Lstatic_jump_\@, \target, \key + 1
> +	.balign		4
> +	.long		.Lstatic_jump_\@ - ., \target - ., \key - . + 1
>  	.popsection
>  .endm
>  
> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
> index e56c95be2808..cc5034b42335 100644
> --- a/arch/x86/kernel/jump_label.c
> +++ b/arch/x86/kernel/jump_label.c
> @@ -52,22 +52,24 @@ static void __jump_label_transform(struct jump_entry *entry,
>  			 * Jump label is enabled for the first time.
>  			 * So we expect a default_nop...
>  			 */
> -			if (unlikely(memcmp((void *)entry->code, default_nop, 5)
> -				     != 0))
> -				bug_at((void *)entry->code, __LINE__);
> +			if (unlikely(memcmp((void *)jump_entry_code(entry),
> +					    default_nop, 5) != 0))
> +				bug_at((void *)jump_entry_code(entry),

You have the functions already made before this patch. Perhaps we
should have a separate patch to use them (here and elsewhere) before
you make the conversion to using relative references. It will help out
in debugging and bisects, to tell whether the use of the functions is
the issue or the conversion to relative references is.

I suggest splitting this into two patches.

-- Steve


> +				       __LINE__);
>  		} else {
>  			/*
>  			 * ...otherwise expect an ideal_nop. Otherwise
>  			 * something went horribly wrong.
>  			 */
> -			if (unlikely(memcmp((void *)entry->code, ideal_nop, 5)
> -				     != 0))
> -				bug_at((void *)entry->code, __LINE__);
> +			if (unlikely(memcmp((void *)jump_entry_code(entry),
> +					    ideal_nop, 5) != 0))
> +				bug_at((void *)jump_entry_code(entry),
> +				       __LINE__);
>  		}
>  
>  		code.jump = 0xe9;
> -		code.offset = entry->target -
> -				(entry->code + JUMP_LABEL_NOP_SIZE);
> +		code.offset = jump_entry_target(entry) -
> +			      (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
>  	} else {
>  		/*
>  		 * We are disabling this jump label. If it is not what
> @@ -76,14 +78,18 @@ static void __jump_label_transform(struct jump_entry *entry,
>  		 * are converting the default nop to the ideal nop.
>  		 */
>  		if (init) {
> -			if (unlikely(memcmp((void *)entry->code, default_nop, 5) != 0))
> -				bug_at((void *)entry->code, __LINE__);
> +			if (unlikely(memcmp((void *)jump_entry_code(entry),
> +					    default_nop, 5) != 0))
> +				bug_at((void *)jump_entry_code(entry),
> +				       __LINE__);
>  		} else {
>  			code.jump = 0xe9;
> -			code.offset = entry->target -
> -				(entry->code + JUMP_LABEL_NOP_SIZE);
> -			if (unlikely(memcmp((void *)entry->code, &code, 5) != 0))
> -				bug_at((void *)entry->code, __LINE__);
> +			code.offset = jump_entry_target(entry) -
> +				(jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
> +			if (unlikely(memcmp((void *)jump_entry_code(entry),
> +				     &code, 5) != 0))
> +				bug_at((void *)jump_entry_code(entry),
> +				       __LINE__);
>  		}
>  		memcpy(&code, ideal_nops[NOP_ATOMIC5], JUMP_LABEL_NOP_SIZE);
>  	}
> @@ -97,10 +103,13 @@ static void __jump_label_transform(struct jump_entry *entry,
>  	 *
>  	 */
>  	if (poker)
> -		(*poker)((void *)entry->code, &code, JUMP_LABEL_NOP_SIZE);
> +		(*poker)((void *)jump_entry_code(entry), &code,
> +			 JUMP_LABEL_NOP_SIZE);
>  	else
> -		text_poke_bp((void *)entry->code, &code, JUMP_LABEL_NOP_SIZE,
> -			     (void *)entry->code + JUMP_LABEL_NOP_SIZE);
> +		text_poke_bp((void *)jump_entry_code(entry), &code,
> +			     JUMP_LABEL_NOP_SIZE,
> +			     (void *)jump_entry_code(entry) +
> +			     JUMP_LABEL_NOP_SIZE);
>  }
>  
>  void arch_jump_label_transform(struct jump_entry *entry,
> @@ -140,4 +149,20 @@ __init_or_module void arch_jump_label_transform_static(struct jump_entry *entry,
>  		__jump_label_transform(entry, type, text_poke_early, 1);
>  }
>  
> +void jump_label_swap(void *a, void *b, int size)
> +{
> +	long delta = (unsigned long)a - (unsigned long)b;
> +	struct jump_entry *jea = a;
> +	struct jump_entry *jeb = b;
> +	struct jump_entry tmp = *jea;
> +
> +	jea->code	= jeb->code - delta;
> +	jea->target	= jeb->target - delta;
> +	jea->key	= jeb->key - delta;
> +
> +	jeb->code	= tmp.code + delta;
> +	jeb->target	= tmp.target + delta;
> +	jeb->key	= tmp.key + delta;
> +}
> +
>  #endif
> diff --git a/tools/objtool/special.c b/tools/objtool/special.c
> index 84f001d52322..98ae55b39037 100644
> --- a/tools/objtool/special.c
> +++ b/tools/objtool/special.c
> @@ -30,9 +30,9 @@
>  #define EX_ORIG_OFFSET		0
>  #define EX_NEW_OFFSET		4
>  
> -#define JUMP_ENTRY_SIZE		24
> +#define JUMP_ENTRY_SIZE		12
>  #define JUMP_ORIG_OFFSET	0
> -#define JUMP_NEW_OFFSET		8
> +#define JUMP_NEW_OFFSET		4
>  
>  #define ALT_ENTRY_SIZE		13
>  #define ALT_ORIG_OFFSET		0

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH v6 8/8] x86/kernel: jump_table: use relative references
  2017-12-28 16:19     ` Steven Rostedt
  (?)
@ 2017-12-28 16:26       ` Ard Biesheuvel
  -1 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-28 16:26 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Linux Kernel Mailing List, H. Peter Anvin, Ralf Baechle,
	Arnd Bergmann, Heiko Carstens, Kees Cook, Will Deacon,
	Michael Ellerman, Thomas Garnier, Thomas Gleixner,
	Serge E. Hallyn, Bjorn Helgaas, Benjamin Herrenschmidt,
	Russell King, Paul Mackerras, Catalin Marinas, David S. Miller,
	Petr Mladek, Ingo Molnar, James Morris, Andrew Morton,
	Nicolas Pitre, Josh Poimboeuf, Martin Schwidefsky,
	Sergey Senozhatsky, Linus Torvalds, Jessica Yu, linux-arm-kernel,
	linux-mips, linuxppc-dev, linux-s390, sparclinux,
	the arch/x86 maintainers

On 28 December 2017 at 16:19, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Wed, 27 Dec 2017 08:50:33 +0000
> Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>
>>  static inline jump_label_t jump_entry_code(const struct jump_entry *entry)
>>  {
>> -     return entry->code;
>> +     return (jump_label_t)&entry->code + entry->code;
>
> I'm paranoid about doing arithmetic on abstract types. What happens in
> the future if jump_label_t becomes a pointer? You will get a different
> result.
>

In general, I share your concern. In this case, however, jump_label_t
is typedef'd three lines up and is never used anywhere else.

> Could we switch these calculations to something like:
>
>         return (jump_label_t)((long)&entry->code + entry->code);
>

jump_label_t is local to this .h file, so it can be defined as u32 or
u64 depending on the word size. I don't mind adding the extra cast,
but I am not sure if your paranoia is justified in this particular
case. Perhaps we should just use 'unsigned long' throughout?
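
For reference, the layout being discussed looks roughly like this on a
64-bit build (the exact type names are an assumption for illustration,
not a copy of the x86 header):

typedef u64 jump_label_t;	/* word-sized absolute address */

struct jump_entry {
	s32 code;	/* patched location, relative to &code */
	s32 target;	/* jump target, relative to &target */
	s32 key;	/* struct static_key, relative to &key */
};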

>> +}
>> +
>> +static inline jump_label_t jump_entry_target(const struct jump_entry *entry)
>> +{
>> +     return (jump_label_t)&entry->target + entry->target;
>>  }
>>
>>  static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
>>  {
>> -     return (struct static_key *)((unsigned long)entry->key & ~1UL);
>> +     unsigned long key = (unsigned long)&entry->key + entry->key;
>> +
>> +     return (struct static_key *)(key & ~1UL);
>>  }
>>
>>  static inline bool jump_entry_is_branch(const struct jump_entry *entry)
>> @@ -99,7 +106,7 @@ static inline void jump_entry_set_module_init(struct jump_entry *entry)
>>       entry->code = 0;
>>  }
>>
>> -#define jump_label_swap              NULL
>> +void jump_label_swap(void *a, void *b, int size);
>>
>>  #else        /* __ASSEMBLY__ */
>>
>> @@ -114,8 +121,8 @@ static inline void jump_entry_set_module_init(struct jump_entry *entry)
>>       .byte           STATIC_KEY_INIT_NOP
>>       .endif
>>       .pushsection __jump_table, "aw"
>> -     _ASM_ALIGN
>> -     _ASM_PTR        .Lstatic_jump_\@, \target, \key
>> +     .balign         4
>> +     .long           .Lstatic_jump_\@ - ., \target - ., \key - .
>>       .popsection
>>  .endm
>>
>> @@ -130,8 +137,8 @@ static inline void jump_entry_set_module_init(struct jump_entry *entry)
>>  .Lstatic_jump_after_\@:
>>       .endif
>>       .pushsection __jump_table, "aw"
>> -     _ASM_ALIGN
>> -     _ASM_PTR        .Lstatic_jump_\@, \target, \key + 1
>> +     .balign         4
>> +     .long           .Lstatic_jump_\@ - ., \target - ., \key - . + 1
>>       .popsection
>>  .endm
>>
>> diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
>> index e56c95be2808..cc5034b42335 100644
>> --- a/arch/x86/kernel/jump_label.c
>> +++ b/arch/x86/kernel/jump_label.c
>> @@ -52,22 +52,24 @@ static void __jump_label_transform(struct jump_entry *entry,
>>                        * Jump label is enabled for the first time.
>>                        * So we expect a default_nop...
>>                        */
>> -                     if (unlikely(memcmp((void *)entry->code, default_nop, 5)
>> -                                  != 0))
>> -                             bug_at((void *)entry->code, __LINE__);
>> +                     if (unlikely(memcmp((void *)jump_entry_code(entry),
>> +                                         default_nop, 5) != 0))
>> +                             bug_at((void *)jump_entry_code(entry),
>
> You have the functions already made before this patch. Perhaps we
> should have a separate patch to use them (here and elsewhere) before
> you make the conversion to using relative references. It will help out
> in debugging and bisects, to tell whether the use of the functions is
> the issue or the conversion to relative references is.
>
> I suggest splitting this into two patches.
>

Fair enough.


>> +                                    __LINE__);
>>               } else {
>>                       /*
>>                        * ...otherwise expect an ideal_nop. Otherwise
>>                        * something went horribly wrong.
>>                        */
>> -                     if (unlikely(memcmp((void *)entry->code, ideal_nop, 5)
>> -                                  != 0))
>> -                             bug_at((void *)entry->code, __LINE__);
>> +                     if (unlikely(memcmp((void *)jump_entry_code(entry),
>> +                                         ideal_nop, 5) != 0))
>> +                             bug_at((void *)jump_entry_code(entry),
>> +                                    __LINE__);
>>               }
>>
>>               code.jump = 0xe9;
>> -             code.offset = entry->target -
>> -                             (entry->code + JUMP_LABEL_NOP_SIZE);
>> +             code.offset = jump_entry_target(entry) -
>> +                           (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
>>       } else {
>>               /*
>>                * We are disabling this jump label. If it is not what
>> @@ -76,14 +78,18 @@ static void __jump_label_transform(struct jump_entry *entry,
>>                * are converting the default nop to the ideal nop.
>>                */
>>               if (init) {
>> -                     if (unlikely(memcmp((void *)entry->code, default_nop, 5) != 0))
>> -                             bug_at((void *)entry->code, __LINE__);
>> +                     if (unlikely(memcmp((void *)jump_entry_code(entry),
>> +                                         default_nop, 5) != 0))
>> +                             bug_at((void *)jump_entry_code(entry),
>> +                                    __LINE__);
>>               } else {
>>                       code.jump = 0xe9;
>> -                     code.offset = entry->target -
>> -                             (entry->code + JUMP_LABEL_NOP_SIZE);
>> -                     if (unlikely(memcmp((void *)entry->code, &code, 5) != 0))
>> -                             bug_at((void *)entry->code, __LINE__);
>> +                     code.offset = jump_entry_target(entry) -
>> +                             (jump_entry_code(entry) + JUMP_LABEL_NOP_SIZE);
>> +                     if (unlikely(memcmp((void *)jump_entry_code(entry),
>> +                                  &code, 5) != 0))
>> +                             bug_at((void *)jump_entry_code(entry),
>> +                                    __LINE__);
>>               }
>>               memcpy(&code, ideal_nops[NOP_ATOMIC5], JUMP_LABEL_NOP_SIZE);
>>       }
>> @@ -97,10 +103,13 @@ static void __jump_label_transform(struct jump_entry *entry,
>>        *
>>        */
>>       if (poker)
>> -             (*poker)((void *)entry->code, &code, JUMP_LABEL_NOP_SIZE);
>> +             (*poker)((void *)jump_entry_code(entry), &code,
>> +                      JUMP_LABEL_NOP_SIZE);
>>       else
>> -             text_poke_bp((void *)entry->code, &code, JUMP_LABEL_NOP_SIZE,
>> -                          (void *)entry->code + JUMP_LABEL_NOP_SIZE);
>> +             text_poke_bp((void *)jump_entry_code(entry), &code,
>> +                          JUMP_LABEL_NOP_SIZE,
>> +                          (void *)jump_entry_code(entry) +
>> +                          JUMP_LABEL_NOP_SIZE);
>>  }
>>
>>  void arch_jump_label_transform(struct jump_entry *entry,
>> @@ -140,4 +149,20 @@ __init_or_module void arch_jump_label_transform_static(struct jump_entry *entry,
>>               __jump_label_transform(entry, type, text_poke_early, 1);
>>  }
>>
>> +void jump_label_swap(void *a, void *b, int size)
>> +{
>> +     long delta = (unsigned long)a - (unsigned long)b;
>> +     struct jump_entry *jea = a;
>> +     struct jump_entry *jeb = b;
>> +     struct jump_entry tmp = *jea;
>> +
>> +     jea->code       = jeb->code - delta;
>> +     jea->target     = jeb->target - delta;
>> +     jea->key        = jeb->key - delta;
>> +
>> +     jeb->code       = tmp.code + delta;
>> +     jeb->target     = tmp.target + delta;
>> +     jeb->key        = tmp.key + delta;
>> +}
>> +
>>  #endif
>> diff --git a/tools/objtool/special.c b/tools/objtool/special.c
>> index 84f001d52322..98ae55b39037 100644
>> --- a/tools/objtool/special.c
>> +++ b/tools/objtool/special.c
>> @@ -30,9 +30,9 @@
>>  #define EX_ORIG_OFFSET               0
>>  #define EX_NEW_OFFSET                4
>>
>> -#define JUMP_ENTRY_SIZE              24
>> +#define JUMP_ENTRY_SIZE              12
>>  #define JUMP_ORIG_OFFSET     0
>> -#define JUMP_NEW_OFFSET              8
>> +#define JUMP_NEW_OFFSET              4
>>
>>  #define ALT_ENTRY_SIZE               13
>>  #define ALT_ORIG_OFFSET              0
>

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH v6 8/8] x86/kernel: jump_table: use relative references
  2017-12-28 16:26       ` Ard Biesheuvel
  (?)
@ 2017-12-28 16:39         ` Steven Rostedt
  -1 siblings, 0 replies; 71+ messages in thread
From: Steven Rostedt @ 2017-12-28 16:39 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Linux Kernel Mailing List, H. Peter Anvin, Ralf Baechle,
	Arnd Bergmann, Heiko Carstens, Kees Cook, Will Deacon,
	Michael Ellerman, Thomas Garnier, Thomas Gleixner,
	Serge E. Hallyn, Bjorn Helgaas, Benjamin Herrenschmidt,
	Russell King, Paul Mackerras, Catalin Marinas, David S. Miller,
	Petr Mladek, Ingo Molnar, James Morris, Andrew Morton,
	Nicolas Pitre, Josh Poimboeuf, Martin Schwidefsky,
	Sergey Senozhatsky, Linus Torvalds, Jessica Yu, linux-arm-kernel,
	linux-mips, linuxppc-dev, linux-s390, sparclinux,
	the arch/x86 maintainers

On Thu, 28 Dec 2017 16:26:07 +0000
Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:

> On 28 December 2017 at 16:19, Steven Rostedt <rostedt@goodmis.org> wrote:
> > On Wed, 27 Dec 2017 08:50:33 +0000
> > Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> >  
> >>  static inline jump_label_t jump_entry_code(const struct jump_entry *entry)
> >>  {
> >> -     return entry->code;
> >> +     return (jump_label_t)&entry->code + entry->code;  
> >
> > I'm paranoid about doing arithmetic on abstract types. What happens in
> > the future if jump_label_t becomes a pointer? You will get a different
> > result.
> >  
> 
> In general, I share your concern. In this case, however, jump_label_t
> is typedef'd three lines up and is never used anywhere else.

I would agree if this was in a .c file, but it's in a header file,
which causes me to be more paranoid.

> 
> > Could we switch these calculations to something like:
> >
> >         return (jump_label_t)((long)&entry->code + entry->code);
> >  
> 
> jump_label_t is local to this .h file, so it can be defined as u32 or
> u64 depending on the word size. I don't mind adding the extra cast,
> but I am not sure if your paranoia is justified in this particular
> case. Perhaps we should just use 'unsigned long' throughout?

Actually, that may be better. Have the return value be jump_label_t,
but the cast be "unsigned long". That way it should always work.

static inline jump_label_t jump_entry_code(...)
{
	return (unsigned long)&entry->code + entry->code;
}


-- Steve

^ permalink raw reply	[flat|nested] 71+ messages in thread
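
The concern above can be made concrete with a small stand-alone example
(the typedef names here are hypothetical, not the kernel's): if
jump_label_t were ever redefined as a pointer type, the addition in
jump_entry_code() would silently scale by the pointee size instead of
adding bytes, which is why casting the base address to `unsigned long'
keeps the byte-granular intent explicit.

#include <stdint.h>
#include <stdio.h>

typedef unsigned long jump_label_int;	/* today: an integer type       */
typedef uint32_t *jump_label_ptr;	/* hypothetical: a pointer type */

int main(void)
{
	uint32_t rel = 8;	/* a stored self-relative offset, in bytes */

	/* integer arithmetic: base address plus 8 bytes, as intended */
	unsigned long as_int = (jump_label_int)&rel + rel;

	/* pointer arithmetic: base address plus 8 * sizeof(uint32_t) bytes */
	unsigned long as_ptr = (unsigned long)((jump_label_ptr)&rel + rel);

	printf("%lu\n", as_ptr - as_int);	/* prints 24, not 0 */
	return 0;
}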

* Re: [PATCH v6 5/8] kernel: tracepoints: add support for relative references
  2017-12-28 15:42     ` Steven Rostedt
  (?)
@ 2017-12-28 23:24       ` Ard Biesheuvel
  -1 siblings, 0 replies; 71+ messages in thread
From: Ard Biesheuvel @ 2017-12-28 23:24 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Linux Kernel Mailing List, H. Peter Anvin, Ralf Baechle,
	Arnd Bergmann, Heiko Carstens, Kees Cook, Will Deacon,
	Michael Ellerman, Thomas Garnier, Thomas Gleixner,
	Serge E. Hallyn, Bjorn Helgaas, Benjamin Herrenschmidt,
	Russell King, Paul Mackerras, Catalin Marinas, David S. Miller,
	Petr Mladek, Ingo Molnar, James Morris, Andrew Morton,
	Nicolas Pitre, Josh Poimboeuf, Martin Schwidefsky,
	Sergey Senozhatsky, Linus Torvalds, Jessica Yu, linux-arm-kernel,
	linux-mips, linuxppc-dev, linux-s390, sparclinux,
	the arch/x86 maintainers

On 28 December 2017 at 15:42, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Wed, 27 Dec 2017 08:50:30 +0000
> Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>
>> To avoid the need for relocating absolute references to tracepoint
>> structures at boot time when running relocatable kernels (which may
>> take a disproportionate amount of space), add the option to emit
>> these tables as relative references instead.
>>
>
> I gave this patch a quick skim over. It appears to not modify anything
> when CONFIG_HAVE_PREL32_RELOCATIONS is not defined. I haven't
> thoroughly reviewed it or tested it. But if it doesn't break anything,
> I'm fine giving you an ack.
>
> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
>

Thank you Steven.

I should mention though (as you don't appear to recall) that an
earlier version of this patch triggered an issue for you

https://marc.info/?l=linux-arch&m=150584374820168&w=2

but I have never managed to reproduce it, neither at the time nor
currently with this v6.


ard@bezzzef:~/linux-2.6$ sudo tools/testing/selftests/ftrace/ftracetest
=== Ftrace unit tests ===
[1] Basic trace file check [PASS]
[2] Basic test for tracers [PASS]
[3] Basic trace clock test [PASS]
[4] Basic event tracing check [PASS]
[5] event tracing - enable/disable with event level files [PASS]
[6] event tracing - restricts events based on pid [PASS]
[7] event tracing - enable/disable with subsystem level files [PASS]
[8] event tracing - enable/disable with top level files [PASS]
[9] ftrace - function graph filters with stack tracer [PASS]
[10] ftrace - function graph filters [PASS]
[11] ftrace - test for function event triggers [PASS]
[12] ftrace - function glob filters [PASS]
[13] ftrace - function pid filters [PASS]
[14] ftrace - function profiler with function tracing [PASS]
[15] ftrace - test reading of set_ftrace_filter [PASS]
[16] ftrace - test for function traceon/off triggers [PASS]
[17] Test creation and deletion of trace instances while setting an event [PASS]
[18] Test creation and deletion of trace instances [PASS]
[19] Kprobe dynamic event - adding and removing [PASS]
[20] Kprobe dynamic event - busy event check [PASS]
[21] Kprobe dynamic event with arguments [PASS]
[22] Kprobes event arguments with types [PASS]
[23] Kprobe event auto/manual naming [PASS]
[24] Kprobe dynamic event with function tracer [PASS]
[25] Kprobe dynamic event - probing module [PASS]
[26] Kretprobe dynamic event with arguments [PASS]
[27] Kretprobe dynamic event with maxactive [PASS]
[28] Register/unregister many kprobe events [PASS]
[29] event trigger - test event enable/disable trigger [PASS]
[30] event trigger - test trigger filter [PASS]
[31] event trigger - test histogram modifiers [PASS]
[32] event trigger - test histogram trigger [PASS]
[33] event trigger - test multiple histogram triggers [PASS]
[34] event trigger - test snapshot-trigger [PASS]
[35] event trigger - test stacktrace-trigger [PASS]
[36] event trigger - test traceon/off trigger [PASS]
[37] (instance)  Basic test for tracers [PASS]
[38] (instance)  Basic trace clock test [PASS]
[39] (instance)  event tracing - enable/disable with event level files [PASS]
[40] (instance)  event tracing - restricts events based on pid [PASS]
[41] (instance)  event tracing - enable/disable with subsystem level
files [PASS]
[42] (instance)  ftrace - test for function event triggers [PASS]
[43] (instance)  ftrace - test for function traceon/off triggers [PASS]
[44] (instance)  event trigger - test event enable/disable trigger [PASS]
[45] (instance)  event trigger - test trigger filter [PASS]
[46] (instance)  event trigger - test histogram modifiers [PASS]
[47] (instance)  event trigger - test histogram trigger [PASS]
[48] (instance)  event trigger - test multiple histogram triggers [PASS]

# of passed:  48
# of failed:  0
# of unresolved:  0
# of untested:  0
# of unsupported:  0
# of xfailed:  0
# of undefined(test bug):  0

^ permalink raw reply	[flat|nested] 71+ messages in thread
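
As a generic sketch of the technique the quoted commit message describes
(the names EMIT_PREL32 and prel32_to_ptr are made up for illustration and
are not the macros from the patch): each table entry records "target minus
the address of the entry itself", which fits in 32 bits and needs no
load-time relocation in a relocatable kernel, and the reader adds the
entry's own address back.

#include <stdint.h>

/* emit a 32-bit self-relative reference to 'target' into section 'sec' */
#define EMIT_PREL32(sec, target)				\
	asm(".pushsection \"" sec "\", \"a\"\n"			\
	    ".balign 4\n"					\
	    ".long " #target " - .\n"				\
	    ".popsection\n")

/* recover the absolute address: the entry's location plus its offset */
static inline void *prel32_to_ptr(const int32_t *entry)
{
	return (void *)((unsigned long)entry + *entry);
}

A caller would use the macro at file scope, e.g.
EMIT_PREL32("__my_table", some_symbol); the kernel's actual plumbing
differs, but the arithmetic is the same.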

* Re: [PATCH v6 2/8] module: use relative references for __ksymtab entries
  2017-12-28 12:39               ` Ard Biesheuvel
                       ` (2 preceding siblings ...)
  (?)
@ 2017-12-29  6:42     ` kbuild test robot
  -1 siblings, 0 replies; 71+ messages in thread
From: kbuild test robot @ 2017-12-29  6:42 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: kbuild-all, linux-kernel, Ard Biesheuvel, H. Peter Anvin,
	Ralf Baechle, Arnd Bergmann, Heiko Carstens, Kees Cook,
	Will Deacon, Michael Ellerman, Thomas Garnier, Thomas Gleixner,
	Serge E. Hallyn, Bjorn Helgaas, Benjamin Herrenschmidt,
	Russell King, Paul Mackerras, Catalin Marinas, David S. Miller,
	Petr Mladek, Ingo Molnar, James Morris, Andrew Morton,
	Nicolas Pitre, Josh Poimboeuf, Steven Rostedt,
	Martin Schwidefsky, Sergey Senozhatsky, Linus Torvalds,
	Jessica Yu, linux-arm-kernel, linux-mips, linuxppc-dev,
	linux-s390, sparclinux, x86, Ingo Molnar

[-- Attachment #1: Type: text/plain, Size: 1299 bytes --]

Hi Ard,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.15-rc5 next-20171222]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Ard-Biesheuvel/add-support-for-relative-references-in-special-sections/20171228-171634
config: s390-gcov_defconfig (attached as .config)
compiler: s390x-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=s390 

All errors (new ones prefixed by >>):

>> arch/s390/kernel/ebcdic.o:(.data+0x118): undefined reference to `__gcov_merge_add'
   arch/s390/kernel/ebcdic.o: In function `_GLOBAL__sub_I_00100_0__ascebc':
>> ebcdic.c:(.text.startup+0xe): undefined reference to `__gcov_init'
   arch/s390/kernel/ebcdic.o: In function `_GLOBAL__sub_D_00100_1__ascebc':
>> ebcdic.c:(.text.exit+0x8): undefined reference to `__gcov_exit'

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 17457 bytes --]

^ permalink raw reply	[flat|nested] 71+ messages in thread

end of thread, other threads:[~2017-12-29  6:43 UTC | newest]

Thread overview: 71+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-12-27  8:50 [PATCH v6 0/8] add support for relative references in special sections Ard Biesheuvel
2017-12-27  8:50 ` Ard Biesheuvel
2017-12-27  8:50 ` Ard Biesheuvel
2017-12-27  8:50 ` [PATCH v6 1/8] arch: enable relative relocations for arm64, power, x86, s390 and x86 Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27 19:54   ` Linus Torvalds
2017-12-27 19:54     ` Linus Torvalds
2017-12-27 19:54     ` Linus Torvalds
2017-12-27 19:59     ` Ard Biesheuvel
2017-12-27 19:59       ` Ard Biesheuvel
2017-12-27 19:59       ` Ard Biesheuvel
2017-12-27  8:50 ` [PATCH v6 2/8] module: use relative references for __ksymtab entries Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27 20:07   ` Linus Torvalds
2017-12-27 20:07     ` Linus Torvalds
2017-12-27 20:07     ` Linus Torvalds
2017-12-27 20:11     ` Ard Biesheuvel
2017-12-27 20:11       ` Ard Biesheuvel
2017-12-27 20:11       ` Ard Biesheuvel
2017-12-27 20:13       ` Linus Torvalds
2017-12-27 20:13         ` Linus Torvalds
2017-12-27 20:13         ` Linus Torvalds
2017-12-27 20:24         ` Ard Biesheuvel
2017-12-27 20:24           ` Ard Biesheuvel
2017-12-27 20:24           ` Ard Biesheuvel
2017-12-28 12:05           ` Ingo Molnar
2017-12-28 12:05             ` Ingo Molnar
2017-12-28 12:05             ` Ingo Molnar
2017-12-28 12:39             ` Ard Biesheuvel
2017-12-28 12:39               ` Ard Biesheuvel
2017-12-28 12:39               ` Ard Biesheuvel
2017-12-29  6:42   ` kbuild test robot
2017-12-29  6:42     ` kbuild test robot
2017-12-29  6:42     ` kbuild test robot
2017-12-29  6:42     ` kbuild test robot
2017-12-29  6:42     ` kbuild test robot
2017-12-27  8:50 ` [PATCH v6 3/8] init: allow initcall tables to be emitted using relative references Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27  8:50 ` [PATCH v6 4/8] PCI: Add support for relative addressing in quirk tables Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27  8:50 ` [PATCH v6 5/8] kernel: tracepoints: add support for relative references Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-28 15:42   ` Steven Rostedt
2017-12-28 15:42     ` Steven Rostedt
2017-12-28 15:42     ` Steven Rostedt
2017-12-28 23:24     ` Ard Biesheuvel
2017-12-28 23:24       ` Ard Biesheuvel
2017-12-28 23:24       ` Ard Biesheuvel
2017-12-27  8:50 ` [PATCH v6 6/8] kernel/jump_label: abstract jump_entry member accessors Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27  8:50 ` [PATCH v6 7/8] arm64/kernel: jump_label: use relative references Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27  8:50 ` [PATCH v6 8/8] x86/kernel: jump_table: " Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-27  8:50   ` Ard Biesheuvel
2017-12-28 16:19   ` Steven Rostedt
2017-12-28 16:19     ` Steven Rostedt
2017-12-28 16:19     ` Steven Rostedt
2017-12-28 16:26     ` Ard Biesheuvel
2017-12-28 16:26       ` Ard Biesheuvel
2017-12-28 16:26       ` Ard Biesheuvel
2017-12-28 16:39       ` Steven Rostedt
2017-12-28 16:39         ` Steven Rostedt
2017-12-28 16:39         ` Steven Rostedt

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.