* [RFC PATCH 00/21] KCFI support
From: Sami Tolvanen @ 2022-04-29 20:36 UTC
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

KCFI is a proposed forward-edge control-flow integrity scheme for
Clang that is better suited for kernel use than the existing CFI
scheme used by CONFIG_CFI_CLANG: KCFI doesn't require LTO, doesn't
alter function references to point to a jump table, and won't break
function address equality. The latest LLVM patches are here:

  https://reviews.llvm.org/D119296
  https://reviews.llvm.org/D124211

This RFC series replaces the current arm64 CFI implementation with
KCFI and adds support for x86_64.

The proposed compiler patches add a built-in function that allows
CFI checks to be disabled for specific indirect calls. This is
needed to avoid emitting unnecessary checks for static_call
trampoline calls that are later patched into direct calls. However,
because the call expression must be passed as an argument to the
built-in, the static_call macro API has to change to include the
call arguments. Patch 14 changes the macros to accept arguments,
and patch 15 disables checks for the generated calls.
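
As an illustration of the API change, a hedged sketch (the function
name and arguments below are hypothetical, not from the series):

  /* before: the macro expands to the trampoline, which is then called */
  ret = static_call(my_func)(arg1, arg2);

  /* after patch 14: arguments go to the macro itself, so patch 15 can
   * wrap the whole generated call expression in the new built-in */
  ret = static_call(my_func, arg1, arg2);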

KCFI also requires assembly functions that are indirectly called
from C code to be annotated with type identifiers. As type
information is only available in C, the compiler emits expected
type identifiers into the symbol table, so they can be referenced
from assembly without having to hardcode type hashes. Patch 7 adds
helper macros for annotating functions, and patches 8 and 18 add
annotations.
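
As a rough sketch of what such an annotation might look like (the
function name is hypothetical, and the helper macros actually added
in patch 7 may differ; this only illustrates the mechanism):

  /* The compiler emits the expected hash as a __kcfi_typeid_<fn>
   * symbol; storing its value directly before the function entry
   * lets indirect-call checks find it without hardcoding hashes. */
  	.4byte	__kcfi_typeid_my_asm_func
  SYM_FUNC_START(my_asm_func)
  	ret
  SYM_FUNC_END(my_asm_func)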

In case of a type mismatch, KCFI always traps. To support error
handling, the compiler generates a .kcfi_traps section that contains
the location of each trap. Patches 9 and 21 add arch-specific error
handlers. In addition, to support x86_64, objtool must be able to
identify KCFI type identifiers that are emitted before function
entries. The compiler generates an additional .kcfi_types section,
which points to each emitted type identifier. Patch 16 adds objtool
support.
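
A minimal sketch of how a handler might recognize a KCFI trap using
this section (the start/stop symbols match the linker script in
patch 6; the 32-bit self-relative entry format is an assumption of
this sketch):

  extern s32 __start___kcfi_traps[], __stop___kcfi_traps[];

  static bool is_kcfi_trap(unsigned long addr)
  {
  	s32 *p;

  	/* each entry locates one compiler-emitted trap instruction */
  	for (p = __start___kcfi_traps; p < __stop___kcfi_traps; ++p)
  		if ((unsigned long)p + *p == addr)
  			return true;

  	return false;
  }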

To test this series, you'll need to compile your own Clang toolchain
with the patches linked above. You can also find the complete source
tree here:

  https://github.com/samitolvanen/llvm-project/commits/kcfi-rfc

This series is also available on GitHub:

  https://github.com/samitolvanen/linux/commits/kcfi-rfc


Sami Tolvanen (21):
  efi/libstub: Filter out CC_FLAGS_CFI
  arm64/vdso: Filter out CC_FLAGS_CFI
  kallsyms: Ignore __kcfi_typeid_
  cfi: Remove CONFIG_CFI_CLANG_SHADOW
  cfi: Drop __CFI_ADDRESSABLE
  cfi: Switch to -fsanitize=kcfi
  cfi: Add type helper macros
  arm64/crypto: Add types to indirect called assembly functions
  arm64: Add CFI error handling
  treewide: Drop function_nocfi
  treewide: Drop WARN_ON_FUNCTION_MISMATCH
  treewide: Drop __cficanonical
  cfi: Add the cfi_unchecked macro
  treewide: static_call: Pass call arguments to the macro
  static_call: Use cfi_unchecked
  objtool: Add support for CONFIG_CFI_CLANG
  x86/tools/relocs: Ignore __kcfi_typeid_ relocations
  x86: Add types to indirect called assembly functions
  x86/purgatory: Disable CFI
  x86/vdso: Disable CFI
  x86: Add support for CONFIG_CFI_CLANG

 Makefile                                  |  13 +-
 arch/Kconfig                              |  18 +-
 arch/arm/include/asm/paravirt.h           |   2 +-
 arch/arm64/crypto/ghash-ce-core.S         |   5 +-
 arch/arm64/crypto/sm3-ce-core.S           |   3 +-
 arch/arm64/include/asm/brk-imm.h          |   2 +
 arch/arm64/include/asm/compiler.h         |  16 -
 arch/arm64/include/asm/ftrace.h           |   2 +-
 arch/arm64/include/asm/insn.h             |   1 +
 arch/arm64/include/asm/mmu_context.h      |   2 +-
 arch/arm64/include/asm/paravirt.h         |   2 +-
 arch/arm64/kernel/acpi_parking_protocol.c |   2 +-
 arch/arm64/kernel/cpufeature.c            |   2 +-
 arch/arm64/kernel/ftrace.c                |   2 +-
 arch/arm64/kernel/machine_kexec.c         |   2 +-
 arch/arm64/kernel/psci.c                  |   2 +-
 arch/arm64/kernel/smp_spin_table.c        |   2 +-
 arch/arm64/kernel/traps.c                 |  57 ++++
 arch/arm64/kernel/vdso/Makefile           |   3 +-
 arch/x86/Kconfig                          |   1 +
 arch/x86/crypto/aesni-intel_glue.c        |   7 +-
 arch/x86/crypto/blowfish-x86_64-asm_64.S  |   5 +-
 arch/x86/entry/vdso/Makefile              |   3 +-
 arch/x86/events/core.c                    |  40 +--
 arch/x86/include/asm/kvm_host.h           |   6 +-
 arch/x86/include/asm/linkage.h            |   7 +
 arch/x86/include/asm/paravirt.h           |   4 +-
 arch/x86/kernel/traps.c                   |  39 ++-
 arch/x86/kvm/cpuid.c                      |   2 +-
 arch/x86/kvm/hyperv.c                     |   4 +-
 arch/x86/kvm/irq.c                        |   2 +-
 arch/x86/kvm/kvm_cache_regs.h             |  10 +-
 arch/x86/kvm/lapic.c                      |  32 +-
 arch/x86/kvm/mmu.h                        |   4 +-
 arch/x86/kvm/mmu/mmu.c                    |   8 +-
 arch/x86/kvm/mmu/spte.c                   |   4 +-
 arch/x86/kvm/pmu.c                        |   4 +-
 arch/x86/kvm/trace.h                      |   4 +-
 arch/x86/kvm/x86.c                        | 326 ++++++++++-----------
 arch/x86/kvm/x86.h                        |   4 +-
 arch/x86/kvm/xen.c                        |   4 +-
 arch/x86/lib/memcpy_64.S                  |   3 +-
 arch/x86/purgatory/Makefile               |   4 +
 arch/x86/tools/relocs.c                   |   1 +
 drivers/cpufreq/amd-pstate.c              |   8 +-
 drivers/firmware/efi/libstub/Makefile     |   2 +
 drivers/firmware/psci/psci.c              |   4 +-
 drivers/misc/lkdtm/usercopy.c             |   2 +-
 include/asm-generic/bug.h                 |  16 -
 include/asm-generic/vmlinux.lds.h         |  38 +--
 include/linux/cfi.h                       |  50 ++--
 include/linux/cfi_types.h                 |  57 ++++
 include/linux/compiler-clang.h            |  10 +-
 include/linux/compiler.h                  |  16 +-
 include/linux/compiler_types.h            |   4 +-
 include/linux/entry-common.h              |   2 +-
 include/linux/init.h                      |   4 +-
 include/linux/kernel.h                    |   2 +-
 include/linux/module.h                    |   8 +-
 include/linux/pci.h                       |   4 +-
 include/linux/perf_event.h                |   6 +-
 include/linux/sched.h                     |   2 +-
 include/linux/static_call.h               |  18 +-
 include/linux/static_call_types.h         |  13 +-
 include/linux/tracepoint.h                |   2 +-
 kernel/cfi.c                              | 340 ++++------------------
 kernel/kthread.c                          |   3 +-
 kernel/module.c                           |  49 +---
 kernel/static_call_inline.c               |   2 +-
 kernel/trace/bpf_trace.c                  |   2 +-
 kernel/workqueue.c                        |   2 +-
 scripts/Makefile.build                    |   3 +-
 scripts/kallsyms.c                        |   1 +
 scripts/link-vmlinux.sh                   |   3 +
 scripts/module.lds.S                      |  24 +-
 security/keys/trusted-keys/trusted_core.c |  14 +-
 tools/include/linux/static_call_types.h   |  13 +-
 tools/objtool/arch/x86/include/arch/elf.h |   2 +
 tools/objtool/builtin-check.c             |   3 +-
 tools/objtool/check.c                     | 128 +++++++-
 tools/objtool/elf.c                       |  13 +
 tools/objtool/include/objtool/arch.h      |   1 +
 tools/objtool/include/objtool/builtin.h   |   2 +-
 tools/objtool/include/objtool/elf.h       |   2 +
 84 files changed, 748 insertions(+), 793 deletions(-)
 create mode 100644 include/linux/cfi_types.h

-- 
2.36.0.464.gb9c8b46e94-goog



* [RFC PATCH 01/21] efi/libstub: Filter out CC_FLAGS_CFI
From: Sami Tolvanen @ 2022-04-29 20:36 UTC
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

Explicitly filter out CC_FLAGS_CFI in preparation for the CFI flags
no longer being included in CC_FLAGS_LTO.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 drivers/firmware/efi/libstub/Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index d0537573501e..234fb2910622 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -39,6 +39,8 @@ KBUILD_CFLAGS			:= $(cflags-y) -Os -DDISABLE_BRANCH_PROFILING \
 
 # remove SCS flags from all objects in this directory
 KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
+# disable CFI
+KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_CFI), $(KBUILD_CFLAGS))
 # disable LTO
 KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_LTO), $(KBUILD_CFLAGS))
 
-- 
2.36.0.464.gb9c8b46e94-goog



* [RFC PATCH 02/21] arm64/vdso: Filter out CC_FLAGS_CFI
From: Sami Tolvanen @ 2022-04-29 20:36 UTC
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

Explicitly filter out CC_FLAGS_CFI in preparation for the CFI flags
no longer being included in CC_FLAGS_LTO.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/arm64/kernel/vdso/Makefile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
index 172452f79e46..6c26e0a76a06 100644
--- a/arch/arm64/kernel/vdso/Makefile
+++ b/arch/arm64/kernel/vdso/Makefile
@@ -33,7 +33,8 @@ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
 # the CFLAGS of vgettimeofday.c to make possible to build the
 # kernel with CONFIG_WERROR enabled.
 CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) $(GCC_PLUGINS_CFLAGS) \
-				$(CC_FLAGS_LTO) -Wmissing-prototypes -Wmissing-declarations
+				$(CC_FLAGS_LTO) $(CC_FLAGS_CFI) \
+				-Wmissing-prototypes -Wmissing-declarations
 KASAN_SANITIZE			:= n
 KCSAN_SANITIZE			:= n
 UBSAN_SANITIZE			:= n
-- 
2.36.0.464.gb9c8b46e94-goog



* [RFC PATCH 03/21] kallsyms: Ignore __kcfi_typeid_
From: Sami Tolvanen @ 2022-04-29 20:36 UTC
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

The compiler generates CFI type identifier symbols for annotating
assembly functions at link time. Ignore them in kallsyms.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 scripts/kallsyms.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
index 8caabddf817c..eebd02e4b832 100644
--- a/scripts/kallsyms.c
+++ b/scripts/kallsyms.c
@@ -118,6 +118,7 @@ static bool is_ignored_symbol(const char *name, char type)
 		"__ThumbV7PILongThunk_",
 		"__LA25Thunk_",		/* mips lld */
 		"__microLA25Thunk_",
+		"__kcfi_typeid_",	/* CFI type identifiers */
 		NULL
 	};
 
-- 
2.36.0.464.gb9c8b46e94-goog



* [RFC PATCH 04/21] cfi: Remove CONFIG_CFI_CLANG_SHADOW
From: Sami Tolvanen @ 2022-04-29 20:36 UTC
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

In preparation for switching to -fsanitize=kcfi, remove support for
the CFI module shadow, which will no longer be needed.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/Kconfig        |  10 --
 include/linux/cfi.h |  12 ---
 kernel/cfi.c        | 237 +-------------------------------------------
 kernel/module.c     |  15 ---
 4 files changed, 1 insertion(+), 273 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 31c4fdc4a4ba..625db6376726 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -739,16 +739,6 @@ config CFI_CLANG
 
 	    https://clang.llvm.org/docs/ControlFlowIntegrity.html
 
-config CFI_CLANG_SHADOW
-	bool "Use CFI shadow to speed up cross-module checks"
-	default y
-	depends on CFI_CLANG && MODULES
-	help
-	  If you select this option, the kernel builds a fast look-up table of
-	  CFI check functions in loaded modules to reduce performance overhead.
-
-	  If unsure, say Y.
-
 config CFI_PERMISSIVE
 	bool "Use CFI in permissive mode"
 	depends on CFI_CLANG
diff --git a/include/linux/cfi.h b/include/linux/cfi.h
index c6dfc1ed0626..4ab51c067007 100644
--- a/include/linux/cfi.h
+++ b/include/linux/cfi.h
@@ -20,18 +20,6 @@ extern void __cfi_check(uint64_t id, void *ptr, void *diag);
 #define __CFI_ADDRESSABLE(fn, __attr) \
 	const void *__cfi_jt_ ## fn __visible __attr = (void *)&fn
 
-#ifdef CONFIG_CFI_CLANG_SHADOW
-
-extern void cfi_module_add(struct module *mod, unsigned long base_addr);
-extern void cfi_module_remove(struct module *mod, unsigned long base_addr);
-
-#else
-
-static inline void cfi_module_add(struct module *mod, unsigned long base_addr) {}
-static inline void cfi_module_remove(struct module *mod, unsigned long base_addr) {}
-
-#endif /* CONFIG_CFI_CLANG_SHADOW */
-
 #else /* !CONFIG_CFI_CLANG */
 
 #ifdef CONFIG_X86_KERNEL_IBT
diff --git a/kernel/cfi.c b/kernel/cfi.c
index 9594cfd1cf2c..2cc0d01ea980 100644
--- a/kernel/cfi.c
+++ b/kernel/cfi.c
@@ -32,237 +32,6 @@ static inline void handle_cfi_failure(void *ptr)
 }
 
 #ifdef CONFIG_MODULES
-#ifdef CONFIG_CFI_CLANG_SHADOW
-/*
- * Index type. A 16-bit index can address at most (2^16)-2 pages (taking
- * into account SHADOW_INVALID), i.e. ~256M with 4k pages.
- */
-typedef u16 shadow_t;
-#define SHADOW_INVALID		((shadow_t)~0UL)
-
-struct cfi_shadow {
-	/* Page index for the beginning of the shadow */
-	unsigned long base;
-	/* An array of __cfi_check locations (as indices to the shadow) */
-	shadow_t shadow[1];
-} __packed;
-
-/*
- * The shadow covers ~128M from the beginning of the module region. If
- * the region is larger, we fall back to __module_address for the rest.
- */
-#define __SHADOW_RANGE		(_UL(SZ_128M) >> PAGE_SHIFT)
-
-/* The in-memory size of struct cfi_shadow, always at least one page */
-#define __SHADOW_PAGES		((__SHADOW_RANGE * sizeof(shadow_t)) >> PAGE_SHIFT)
-#define SHADOW_PAGES		max(1UL, __SHADOW_PAGES)
-#define SHADOW_SIZE		(SHADOW_PAGES << PAGE_SHIFT)
-
-/* The actual size of the shadow array, minus metadata */
-#define SHADOW_ARR_SIZE		(SHADOW_SIZE - offsetof(struct cfi_shadow, shadow))
-#define SHADOW_ARR_SLOTS	(SHADOW_ARR_SIZE / sizeof(shadow_t))
-
-static DEFINE_MUTEX(shadow_update_lock);
-static struct cfi_shadow __rcu *cfi_shadow __read_mostly;
-
-/* Returns the index in the shadow for the given address */
-static inline int ptr_to_shadow(const struct cfi_shadow *s, unsigned long ptr)
-{
-	unsigned long index;
-	unsigned long page = ptr >> PAGE_SHIFT;
-
-	if (unlikely(page < s->base))
-		return -1; /* Outside of module area */
-
-	index = page - s->base;
-
-	if (index >= SHADOW_ARR_SLOTS)
-		return -1; /* Cannot be addressed with shadow */
-
-	return (int)index;
-}
-
-/* Returns the page address for an index in the shadow */
-static inline unsigned long shadow_to_ptr(const struct cfi_shadow *s,
-	int index)
-{
-	if (unlikely(index < 0 || index >= SHADOW_ARR_SLOTS))
-		return 0;
-
-	return (s->base + index) << PAGE_SHIFT;
-}
-
-/* Returns the __cfi_check function address for the given shadow location */
-static inline unsigned long shadow_to_check_fn(const struct cfi_shadow *s,
-	int index)
-{
-	if (unlikely(index < 0 || index >= SHADOW_ARR_SLOTS))
-		return 0;
-
-	if (unlikely(s->shadow[index] == SHADOW_INVALID))
-		return 0;
-
-	/* __cfi_check is always page aligned */
-	return (s->base + s->shadow[index]) << PAGE_SHIFT;
-}
-
-static void prepare_next_shadow(const struct cfi_shadow __rcu *prev,
-		struct cfi_shadow *next)
-{
-	int i, index, check;
-
-	/* Mark everything invalid */
-	memset(next->shadow, 0xFF, SHADOW_ARR_SIZE);
-
-	if (!prev)
-		return; /* No previous shadow */
-
-	/* If the base address didn't change, an update is not needed */
-	if (prev->base == next->base) {
-		memcpy(next->shadow, prev->shadow, SHADOW_ARR_SIZE);
-		return;
-	}
-
-	/* Convert the previous shadow to the new address range */
-	for (i = 0; i < SHADOW_ARR_SLOTS; ++i) {
-		if (prev->shadow[i] == SHADOW_INVALID)
-			continue;
-
-		index = ptr_to_shadow(next, shadow_to_ptr(prev, i));
-		if (index < 0)
-			continue;
-
-		check = ptr_to_shadow(next,
-				shadow_to_check_fn(prev, prev->shadow[i]));
-		if (check < 0)
-			continue;
-
-		next->shadow[index] = (shadow_t)check;
-	}
-}
-
-static void add_module_to_shadow(struct cfi_shadow *s, struct module *mod,
-			unsigned long min_addr, unsigned long max_addr)
-{
-	int check_index;
-	unsigned long check = (unsigned long)mod->cfi_check;
-	unsigned long ptr;
-
-	if (unlikely(!PAGE_ALIGNED(check))) {
-		pr_warn("cfi: not using shadow for module %s\n", mod->name);
-		return;
-	}
-
-	check_index = ptr_to_shadow(s, check);
-	if (check_index < 0)
-		return; /* Module not addressable with shadow */
-
-	/* For each page, store the check function index in the shadow */
-	for (ptr = min_addr; ptr <= max_addr; ptr += PAGE_SIZE) {
-		int index = ptr_to_shadow(s, ptr);
-
-		if (index >= 0) {
-			/* Each page must only contain one module */
-			WARN_ON_ONCE(s->shadow[index] != SHADOW_INVALID);
-			s->shadow[index] = (shadow_t)check_index;
-		}
-	}
-}
-
-static void remove_module_from_shadow(struct cfi_shadow *s, struct module *mod,
-		unsigned long min_addr, unsigned long max_addr)
-{
-	unsigned long ptr;
-
-	for (ptr = min_addr; ptr <= max_addr; ptr += PAGE_SIZE) {
-		int index = ptr_to_shadow(s, ptr);
-
-		if (index >= 0)
-			s->shadow[index] = SHADOW_INVALID;
-	}
-}
-
-typedef void (*update_shadow_fn)(struct cfi_shadow *, struct module *,
-			unsigned long min_addr, unsigned long max_addr);
-
-static void update_shadow(struct module *mod, unsigned long base_addr,
-		update_shadow_fn fn)
-{
-	struct cfi_shadow *prev;
-	struct cfi_shadow *next;
-	unsigned long min_addr, max_addr;
-
-	next = vmalloc(SHADOW_SIZE);
-
-	mutex_lock(&shadow_update_lock);
-	prev = rcu_dereference_protected(cfi_shadow,
-					 mutex_is_locked(&shadow_update_lock));
-
-	if (next) {
-		next->base = base_addr >> PAGE_SHIFT;
-		prepare_next_shadow(prev, next);
-
-		min_addr = (unsigned long)mod->core_layout.base;
-		max_addr = min_addr + mod->core_layout.text_size;
-		fn(next, mod, min_addr & PAGE_MASK, max_addr & PAGE_MASK);
-
-		set_memory_ro((unsigned long)next, SHADOW_PAGES);
-	}
-
-	rcu_assign_pointer(cfi_shadow, next);
-	mutex_unlock(&shadow_update_lock);
-	synchronize_rcu();
-
-	if (prev) {
-		set_memory_rw((unsigned long)prev, SHADOW_PAGES);
-		vfree(prev);
-	}
-}
-
-void cfi_module_add(struct module *mod, unsigned long base_addr)
-{
-	update_shadow(mod, base_addr, add_module_to_shadow);
-}
-
-void cfi_module_remove(struct module *mod, unsigned long base_addr)
-{
-	update_shadow(mod, base_addr, remove_module_from_shadow);
-}
-
-static inline cfi_check_fn ptr_to_check_fn(const struct cfi_shadow __rcu *s,
-	unsigned long ptr)
-{
-	int index;
-
-	if (unlikely(!s))
-		return NULL; /* No shadow available */
-
-	index = ptr_to_shadow(s, ptr);
-	if (index < 0)
-		return NULL; /* Cannot be addressed with shadow */
-
-	return (cfi_check_fn)shadow_to_check_fn(s, index);
-}
-
-static inline cfi_check_fn find_shadow_check_fn(unsigned long ptr)
-{
-	cfi_check_fn fn;
-
-	rcu_read_lock_sched_notrace();
-	fn = ptr_to_check_fn(rcu_dereference_sched(cfi_shadow), ptr);
-	rcu_read_unlock_sched_notrace();
-
-	return fn;
-}
-
-#else /* !CONFIG_CFI_CLANG_SHADOW */
-
-static inline cfi_check_fn find_shadow_check_fn(unsigned long ptr)
-{
-	return NULL;
-}
-
-#endif /* CONFIG_CFI_CLANG_SHADOW */
 
 static inline cfi_check_fn find_module_check_fn(unsigned long ptr)
 {
@@ -291,11 +60,7 @@ static inline cfi_check_fn find_check_fn(unsigned long ptr)
 	 * up if necessary.
 	 */
 	RCU_NONIDLE({
-		if (IS_ENABLED(CONFIG_CFI_CLANG_SHADOW))
-			fn = find_shadow_check_fn(ptr);
-
-		if (!fn)
-			fn = find_module_check_fn(ptr);
+		fn = find_module_check_fn(ptr);
 	});
 
 	return fn;
diff --git a/kernel/module.c b/kernel/module.c
index 6cea788fd965..296fe02323e9 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -2151,8 +2151,6 @@ void __weak module_arch_freeing_init(struct module *mod)
 {
 }
 
-static void cfi_cleanup(struct module *mod);
-
 /* Free a module, remove from lists, etc. */
 static void free_module(struct module *mod)
 {
@@ -2194,9 +2192,6 @@ static void free_module(struct module *mod)
 	synchronize_rcu();
 	mutex_unlock(&module_mutex);
 
-	/* Clean up CFI for the module. */
-	cfi_cleanup(mod);
-
 	/* This may be empty, but that's OK */
 	module_arch_freeing_init(mod);
 	module_memfree(mod->init_layout.base);
@@ -4141,7 +4136,6 @@ static int load_module(struct load_info *info, const char __user *uargs,
 	synchronize_rcu();
 	kfree(mod->args);
  free_arch_cleanup:
-	cfi_cleanup(mod);
 	module_arch_cleanup(mod);
  free_modinfo:
 	free_modinfo(mod);
@@ -4530,15 +4524,6 @@ static void cfi_init(struct module *mod)
 	if (exit)
 		mod->exit = *exit;
 #endif
-
-	cfi_module_add(mod, module_addr_min);
-#endif
-}
-
-static void cfi_cleanup(struct module *mod)
-{
-#ifdef CONFIG_CFI_CLANG
-	cfi_module_remove(mod, module_addr_min);
 #endif
 }
 
-- 
2.36.0.464.gb9c8b46e94-goog



* [RFC PATCH 05/21] cfi: Drop __CFI_ADDRESSABLE
From: Sami Tolvanen @ 2022-04-29 20:36 UTC
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

The __CFI_ADDRESSABLE macro is used for init_module and cleanup_module
to ensure we have the address of the CFI jump table, and with
CONFIG_X86_KERNEL_IBT to ensure LTO won't optimize away the symbols.
As __CFI_ADDRESSABLE is no longer necessary with -fsanitize=kcfi, add
a more flexible version of the __ADDRESSABLE macro and always ensure
these symbols won't be dropped.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 include/linux/cfi.h      | 20 --------------------
 include/linux/compiler.h |  6 ++++--
 include/linux/module.h   |  4 ++--
 3 files changed, 6 insertions(+), 24 deletions(-)

diff --git a/include/linux/cfi.h b/include/linux/cfi.h
index 4ab51c067007..2cdbc0fbd0ab 100644
--- a/include/linux/cfi.h
+++ b/include/linux/cfi.h
@@ -13,26 +13,6 @@ typedef void (*cfi_check_fn)(uint64_t id, void *ptr, void *diag);
 /* Compiler-generated function in each module, and the kernel */
 extern void __cfi_check(uint64_t id, void *ptr, void *diag);
 
-/*
- * Force the compiler to generate a CFI jump table entry for a function
- * and store the jump table address to __cfi_jt_<function>.
- */
-#define __CFI_ADDRESSABLE(fn, __attr) \
-	const void *__cfi_jt_ ## fn __visible __attr = (void *)&fn
-
-#else /* !CONFIG_CFI_CLANG */
-
-#ifdef CONFIG_X86_KERNEL_IBT
-
-#define __CFI_ADDRESSABLE(fn, __attr) \
-	const void *__cfi_jt_ ## fn __visible __attr = (void *)&fn
-
-#endif /* CONFIG_X86_KERNEL_IBT */
-
 #endif /* CONFIG_CFI_CLANG */
 
-#ifndef __CFI_ADDRESSABLE
-#define __CFI_ADDRESSABLE(fn, __attr)
-#endif
-
 #endif /* _LINUX_CFI_H */
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 219aa5ddbc73..9303f5fe5d89 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -221,9 +221,11 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
  * otherwise, or eliminated entirely due to lack of references that are
  * visible to the compiler.
  */
-#define __ADDRESSABLE(sym) \
-	static void * __section(".discard.addressable") __used \
+#define ___ADDRESSABLE(sym, __attrs) \
+	static void * __used __attrs \
 		__UNIQUE_ID(__PASTE(__addressable_,sym)) = (void *)&sym;
+#define __ADDRESSABLE(sym) \
+	___ADDRESSABLE(sym, __section(".discard.addressable"))
 
 /**
  * offset_to_ptr - convert a relative memory offset to an absolute pointer
diff --git a/include/linux/module.h b/include/linux/module.h
index 1e135fd5c076..87857275c047 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -132,7 +132,7 @@ extern void cleanup_module(void);
 	{ return initfn; }					\
 	int init_module(void) __copy(initfn)			\
 		__attribute__((alias(#initfn)));		\
-	__CFI_ADDRESSABLE(init_module, __initdata);
+	___ADDRESSABLE(init_module, __initdata);
 
 /* This is only required if you want to be unloadable. */
 #define module_exit(exitfn)					\
@@ -140,7 +140,7 @@ extern void cleanup_module(void);
 	{ return exitfn; }					\
 	void cleanup_module(void) __copy(exitfn)		\
 		__attribute__((alias(#exitfn)));		\
-	__CFI_ADDRESSABLE(cleanup_module, __exitdata);
+	___ADDRESSABLE(cleanup_module, __exitdata);
 
 #endif
 
-- 
2.36.0.464.gb9c8b46e94-goog



* [RFC PATCH 06/21] cfi: Switch to -fsanitize=kcfi
From: Sami Tolvanen @ 2022-04-29 20:36 UTC
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

Switch from Clang's original forward-edge control-flow integrity
implementation to -fsanitize=kcfi, which is better suited for the
kernel, as it doesn't require LTO, doesn't use a jump table that
requires altering function references, and won't break cross-module
function address equality.
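
For illustration, the check emitted before an indirect call on
x86_64 looks roughly like the sequence below; the exact code is
defined by the LLVM patches linked in the cover letter, and the
hash value and registers here are made up:

  	movl	$0x12345678, %r10d	/* expected type hash of callee */
  	cmpl	%r10d, -4(%r11)		/* hash stored before the entry */
  	je	1f
  	ud2				/* trap on type mismatch */
  1:	callq	*%r11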

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 Makefile                          |  13 +--
 arch/Kconfig                      |   8 +-
 include/asm-generic/vmlinux.lds.h |  38 ++++-----
 include/linux/cfi.h               |  24 +++++-
 include/linux/compiler-clang.h    |   8 +-
 include/linux/module.h            |   4 +-
 kernel/cfi.c                      | 129 ++++++++++++++++--------------
 kernel/module.c                   |  34 +-------
 scripts/module.lds.S              |  24 ++----
 9 files changed, 126 insertions(+), 156 deletions(-)

diff --git a/Makefile b/Makefile
index c3ec1ea42379..22a5d48f5fb4 100644
--- a/Makefile
+++ b/Makefile
@@ -915,18 +915,7 @@ export CC_FLAGS_LTO
 endif
 
 ifdef CONFIG_CFI_CLANG
-CC_FLAGS_CFI	:= -fsanitize=cfi \
-		   -fsanitize-cfi-cross-dso \
-		   -fno-sanitize-cfi-canonical-jump-tables \
-		   -fno-sanitize-trap=cfi \
-		   -fno-sanitize-blacklist
-
-ifdef CONFIG_CFI_PERMISSIVE
-CC_FLAGS_CFI	+= -fsanitize-recover=cfi
-endif
-
-# If LTO flags are filtered out, we must also filter out CFI.
-CC_FLAGS_LTO	+= $(CC_FLAGS_CFI)
+CC_FLAGS_CFI	:= -fsanitize=kcfi -fno-sanitize-blacklist
 KBUILD_CFLAGS	+= $(CC_FLAGS_CFI)
 export CC_FLAGS_CFI
 endif
diff --git a/arch/Kconfig b/arch/Kconfig
index 625db6376726..601379a6173d 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -722,12 +722,8 @@ config ARCH_SUPPORTS_CFI_CLANG
 
 config CFI_CLANG
 	bool "Use Clang's Control Flow Integrity (CFI)"
-	depends on LTO_CLANG && ARCH_SUPPORTS_CFI_CLANG
-	# Clang >= 12:
-	# - https://bugs.llvm.org/show_bug.cgi?id=46258
-	# - https://bugs.llvm.org/show_bug.cgi?id=47479
-	depends on CLANG_VERSION >= 120000
-	select KALLSYMS
+	depends on ARCH_SUPPORTS_CFI_CLANG
+	depends on $(cc-option,-fsanitize=kcfi)
 	help
 	  This option enables Clang’s forward-edge Control Flow Integrity
 	  (CFI) checking, where the compiler injects a runtime check to each
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 69138e9db787..20bfd2f01d6f 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -421,6 +421,22 @@
 	__end_ro_after_init = .;
 #endif
 
+/*
+ * .kcfi_traps contains a list of KCFI trap locations.
+ */
+#ifndef KCFI_TRAPS
+#ifdef CONFIG_CFI_CLANG
+#define KCFI_TRAPS							\
+	__kcfi_traps : AT(ADDR(__kcfi_traps) - LOAD_OFFSET) {		\
+		__start___kcfi_traps = .;				\
+		KEEP(*(.kcfi_traps))					\
+		__stop___kcfi_traps = .;				\
+	}
+#else
+#define KCFI_TRAPS
+#endif
+#endif
+
 /*
  * Read only Data
  */
@@ -529,6 +545,8 @@
 		__stop___modver = .;					\
 	}								\
 									\
+	KCFI_TRAPS							\
+									\
 	RO_EXCEPTION_TABLE						\
 	NOTES								\
 	BTF								\
@@ -537,21 +555,6 @@
 	__end_rodata = .;
 
 
-/*
- * .text..L.cfi.jumptable.* contain Control-Flow Integrity (CFI)
- * jump table entries.
- */
-#ifdef CONFIG_CFI_CLANG
-#define TEXT_CFI_JT							\
-		. = ALIGN(PMD_SIZE);					\
-		__cfi_jt_start = .;					\
-		*(.text..L.cfi.jumptable .text..L.cfi.jumptable.*)	\
-		. = ALIGN(PMD_SIZE);					\
-		__cfi_jt_end = .;
-#else
-#define TEXT_CFI_JT
-#endif
-
 /*
  * Non-instrumentable text section
  */
@@ -579,7 +582,6 @@
 		*(.text..refcount)					\
 		*(.ref.text)						\
 		*(.text.asan.* .text.tsan.*)				\
-		TEXT_CFI_JT						\
 	MEM_KEEP(init.text*)						\
 	MEM_KEEP(exit.text*)						\
 
@@ -1008,8 +1010,7 @@
  * keep any .init_array.* sections.
  * https://bugs.llvm.org/show_bug.cgi?id=46478
  */
-#if defined(CONFIG_GCOV_KERNEL) || defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KCSAN) || \
-	defined(CONFIG_CFI_CLANG)
+#if defined(CONFIG_GCOV_KERNEL) || defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KCSAN)
 # ifdef CONFIG_CONSTRUCTORS
 #  define SANITIZER_DISCARDS						\
 	*(.eh_frame)
@@ -1027,6 +1028,7 @@
 	*(.discard)							\
 	*(.discard.*)							\
 	*(.modinfo)							\
+	*(.kcfi_types)							\
 	/* ld.bfd warns about .gnu.version* even when not emitted */	\
 	*(.gnu.version*)						\
 
diff --git a/include/linux/cfi.h b/include/linux/cfi.h
index 2cdbc0fbd0ab..9cbadfca7e01 100644
--- a/include/linux/cfi.h
+++ b/include/linux/cfi.h
@@ -2,17 +2,33 @@
 /*
  * Clang Control Flow Integrity (CFI) support.
  *
- * Copyright (C) 2021 Google LLC
+ * Copyright (C) 2022 Google LLC
  */
 #ifndef _LINUX_CFI_H
 #define _LINUX_CFI_H
 
+#include <linux/bug.h>
+#include <linux/module.h>
+
 #ifdef CONFIG_CFI_CLANG
-typedef void (*cfi_check_fn)(uint64_t id, void *ptr, void *diag);
 
-/* Compiler-generated function in each module, and the kernel */
-extern void __cfi_check(uint64_t id, void *ptr, void *diag);
+#ifdef CONFIG_MODULES
+void module_cfi_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs, struct module *mod);
+#endif
+
+void *arch_get_cfi_target(unsigned long addr, struct pt_regs *regs);
+enum bug_trap_type report_cfi(unsigned long addr, struct pt_regs *regs);
+#else
+
+#ifdef CONFIG_MODULES
+static inline void module_cfi_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
+				       struct module *mod) {}
+#endif
 
+static inline enum bug_trap_type report_cfi(unsigned long addr, struct pt_regs *regs)
+{
+	return BUG_TRAP_TYPE_NONE;
+}
 #endif /* CONFIG_CFI_CLANG */
 
 #endif /* _LINUX_CFI_H */
diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index babb1347148c..c4ff42859077 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -66,9 +66,6 @@
 # define __noscs	__attribute__((__no_sanitize__("shadow-call-stack")))
 #endif
 
-#define __nocfi		__attribute__((__no_sanitize__("cfi")))
-#define __cficanonical	__attribute__((__cfi_canonical_jump_table__))
-
 /*
  * Turn individual warnings and errors on and off locally, depending
  * on version.
@@ -93,3 +90,8 @@
 
 #define __diag_ignore_all(option, comment) \
 	__diag_clang(11, ignore, option)
+
+#ifdef CONFIG_CFI_CLANG
+/* Disable CFI checking inside a function. */
+#define __nocfi		__attribute__((__no_sanitize__("kcfi")))
+#endif
diff --git a/include/linux/module.h b/include/linux/module.h
index 87857275c047..430ea19f14f6 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -27,7 +27,6 @@
 #include <linux/tracepoint-defs.h>
 #include <linux/srcu.h>
 #include <linux/static_call_types.h>
-#include <linux/cfi.h>
 
 #include <linux/percpu.h>
 #include <asm/module.h>
@@ -389,7 +388,8 @@ struct module {
 	unsigned int num_syms;
 
 #ifdef CONFIG_CFI_CLANG
-	cfi_check_fn cfi_check;
+	unsigned long *kcfi_traps;
+	unsigned long *kcfi_traps_end;
 #endif
 
 	/* Kernel parameters. */
diff --git a/kernel/cfi.c b/kernel/cfi.c
index 2cc0d01ea980..d9907df6576e 100644
--- a/kernel/cfi.c
+++ b/kernel/cfi.c
@@ -1,94 +1,101 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Clang Control Flow Integrity (CFI) error and slowpath handling.
+ * Clang Control Flow Integrity (CFI) error handling.
  *
- * Copyright (C) 2021 Google LLC
+ * Copyright (C) 2022 Google LLC
  */
 
-#include <linux/hardirq.h>
-#include <linux/kallsyms.h>
-#include <linux/module.h>
-#include <linux/mutex.h>
-#include <linux/printk.h>
-#include <linux/ratelimit.h>
-#include <linux/rcupdate.h>
-#include <linux/vmalloc.h>
-#include <asm/cacheflush.h>
-#include <asm/set_memory.h>
-
-/* Compiler-defined handler names */
-#ifdef CONFIG_CFI_PERMISSIVE
-#define cfi_failure_handler	__ubsan_handle_cfi_check_fail
-#else
-#define cfi_failure_handler	__ubsan_handle_cfi_check_fail_abort
-#endif
-
-static inline void handle_cfi_failure(void *ptr)
+#include <linux/cfi.h>
+
+/* Returns the target of the indirect call that follows the trap at `addr`. */
+void * __weak arch_get_cfi_target(unsigned long addr, struct pt_regs *regs)
 {
-	if (IS_ENABLED(CONFIG_CFI_PERMISSIVE))
-		WARN_RATELIMIT(1, "CFI failure (target: %pS):\n", ptr);
-	else
-		panic("CFI failure (target: %pS)\n", ptr);
+	return NULL;
 }
 
 #ifdef CONFIG_MODULES
+/* Populates the `kcfi_traps(_end)?` fields in `struct module`. */
+void module_cfi_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
+			 struct module *mod)
+{
+	char *secstrings;
+	unsigned int i;
+
+	mod->kcfi_traps = NULL;
+	mod->kcfi_traps_end = NULL;
+
+	secstrings = (char *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
+
+	for (i = 1; i < hdr->e_shnum; i++) {
+		if (strcmp(secstrings + sechdrs[i].sh_name, "__kcfi_traps"))
+			continue;
 
-static inline cfi_check_fn find_module_check_fn(unsigned long ptr)
+		mod->kcfi_traps = (unsigned long *)sechdrs[i].sh_addr;
+		mod->kcfi_traps_end = (unsigned long *)(sechdrs[i].sh_addr + sechdrs[i].sh_size);
+		break;
+	}
+}
+
+static bool is_module_cfi_trap(unsigned long addr)
 {
-	cfi_check_fn fn = NULL;
+	bool found = false;
 	struct module *mod;
+	unsigned long *p;
 
 	rcu_read_lock_sched_notrace();
-	mod = __module_address(ptr);
+
+	mod = __module_address(addr);
 	if (mod)
-		fn = mod->cfi_check;
+		for (p = mod->kcfi_traps; !found && p < mod->kcfi_traps_end; ++p)
+			found = (*p == addr);
+
 	rcu_read_unlock_sched_notrace();
 
-	return fn;
+	return found;
 }
 
-static inline cfi_check_fn find_check_fn(unsigned long ptr)
-{
-	cfi_check_fn fn = NULL;
+#else /* CONFIG_MODULES */
 
-	if (is_kernel_text(ptr))
-		return __cfi_check;
+static inline bool is_module_cfi_trap(unsigned long addr)
+{
+	return false;
+}
 
-	/*
-	 * Indirect call checks can happen when RCU is not watching. Both
-	 * the shadow and __module_address use RCU, so we need to wake it
-	 * up if necessary.
-	 */
-	RCU_NONIDLE({
-		fn = find_module_check_fn(ptr);
-	});
+#endif /* CONFIG_MODULES */
 
-	return fn;
-}
+extern unsigned long __start___kcfi_traps[];
+extern unsigned long __stop___kcfi_traps[];
 
-void __cfi_slowpath_diag(uint64_t id, void *ptr, void *diag)
+static bool is_cfi_trap(unsigned long addr)
 {
-	cfi_check_fn fn = find_check_fn((unsigned long)ptr);
+	unsigned long *p;
 
-	if (likely(fn))
-		fn(id, ptr, diag);
-	else /* Don't allow unchecked modules */
-		handle_cfi_failure(ptr);
+	for (p = __start___kcfi_traps; p < __stop___kcfi_traps; ++p)
+		if (*p == addr)
+			return true;
+
+	return is_module_cfi_trap(addr);
 }
-EXPORT_SYMBOL(__cfi_slowpath_diag);
 
-#else /* !CONFIG_MODULES */
+#define __CFI_ERROR_FMT "CFI failure at %pS (target: %pS)\n"
 
-void __cfi_slowpath_diag(uint64_t id, void *ptr, void *diag)
+static enum bug_trap_type __report_cfi(void *addr, void *target, struct pt_regs *regs)
 {
-	handle_cfi_failure(ptr); /* No modules */
+	if (IS_ENABLED(CONFIG_CFI_PERMISSIVE)) {
+		pr_warn(__CFI_ERROR_FMT, addr, target);
+		__warn(NULL, 0, addr, 0, regs, NULL);
+
+		return BUG_TRAP_TYPE_WARN;
+	} else {
+		pr_crit(__CFI_ERROR_FMT, addr, target);
+		return BUG_TRAP_TYPE_BUG;
+	}
 }
-EXPORT_SYMBOL(__cfi_slowpath_diag);
-
-#endif /* CONFIG_MODULES */
 
-void cfi_failure_handler(void *data, void *ptr, void *vtable)
+enum bug_trap_type report_cfi(unsigned long addr, struct pt_regs *regs)
 {
-	handle_cfi_failure(ptr);
+	if (!is_cfi_trap(addr))
+		return BUG_TRAP_TYPE_NONE;
+
+	return __report_cfi((void *)addr, arch_get_cfi_target(addr, regs), regs);
 }
-EXPORT_SYMBOL(cfi_failure_handler);
diff --git a/kernel/module.c b/kernel/module.c
index 296fe02323e9..411ae8c358e6 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -57,6 +57,7 @@
 #include <linux/bsearch.h>
 #include <linux/dynamic_debug.h>
 #include <linux/audit.h>
+#include <linux/cfi.h>
 #include <uapi/linux/module.h>
 #include "module-internal.h"
 
@@ -3871,8 +3872,9 @@ static int complete_formation(struct module *mod, struct load_info *info)
 	if (err < 0)
 		goto out;
 
-	/* This relies on module_mutex for list integrity. */
+	/* These rely on module_mutex for list integrity. */
 	module_bug_finalize(info->hdr, info->sechdrs, mod);
+	module_cfi_finalize(info->hdr, info->sechdrs, mod);
 
 	module_enable_ro(mod, false);
 	module_enable_nx(mod);
@@ -3928,8 +3930,6 @@ static int unknown_module_param_cb(char *param, char *val, const char *modname,
 	return 0;
 }
 
-static void cfi_init(struct module *mod);
-
 /*
  * Allocate and load the module: note that size of section 0 is always
  * zero, and we rely on this for optional sections.
@@ -4059,9 +4059,6 @@ static int load_module(struct load_info *info, const char __user *uargs,
 
 	flush_module_icache(mod);
 
-	/* Setup CFI for the module. */
-	cfi_init(mod);
-
 	/* Now copy in args */
 	mod->args = strndup_user(uargs, ~0UL >> 1);
 	if (IS_ERR(mod->args)) {
@@ -4502,31 +4499,6 @@ int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
 #endif /* CONFIG_LIVEPATCH */
 #endif /* CONFIG_KALLSYMS */
 
-static void cfi_init(struct module *mod)
-{
-#ifdef CONFIG_CFI_CLANG
-	initcall_t *init;
-	exitcall_t *exit;
-
-	rcu_read_lock_sched();
-	mod->cfi_check = (cfi_check_fn)
-		find_kallsyms_symbol_value(mod, "__cfi_check");
-	init = (initcall_t *)
-		find_kallsyms_symbol_value(mod, "__cfi_jt_init_module");
-	exit = (exitcall_t *)
-		find_kallsyms_symbol_value(mod, "__cfi_jt_cleanup_module");
-	rcu_read_unlock_sched();
-
-	/* Fix init/exit functions to point to the CFI jump table */
-	if (init)
-		mod->init = *init;
-#ifdef CONFIG_MODULE_UNLOAD
-	if (exit)
-		mod->exit = *exit;
-#endif
-#endif
-}
-
 /* Maximum number of characters written by module_flags() */
 #define MODULE_FLAGS_BUF_SIZE (TAINT_FLAGS_COUNT + 4)
 
diff --git a/scripts/module.lds.S b/scripts/module.lds.S
index 1d0e1e4dc3d2..ccd75d283840 100644
--- a/scripts/module.lds.S
+++ b/scripts/module.lds.S
@@ -3,20 +3,11 @@
  * Archs are free to supply their own linker scripts.  ld will
  * combine them automatically.
  */
-#ifdef CONFIG_CFI_CLANG
-# include <asm/page.h>
-# define ALIGN_CFI 		ALIGN(PAGE_SIZE)
-# define SANITIZER_DISCARDS	*(.eh_frame)
-#else
-# define ALIGN_CFI
-# define SANITIZER_DISCARDS
-#endif
-
 SECTIONS {
 	/DISCARD/ : {
 		*(.discard)
 		*(.discard.*)
-		SANITIZER_DISCARDS
+		*(.kcfi_types)
 	}
 
 	__ksymtab		0 : { *(SORT(___ksymtab+*)) }
@@ -31,6 +22,10 @@ SECTIONS {
 
 	__patchable_function_entries : { *(__patchable_function_entries) }
 
+#ifdef CONFIG_CFI_CLANG
+	__kcfi_traps 		: { KEEP(*(.kcfi_traps)) }
+#endif
+
 #ifdef CONFIG_LTO_CLANG
 	/*
 	 * With CONFIG_LTO_CLANG, LLD always enables -fdata-sections and
@@ -51,15 +46,6 @@ SECTIONS {
 		*(.rodata .rodata.[0-9a-zA-Z_]*)
 		*(.rodata..L*)
 	}
-
-	/*
-	 * With CONFIG_CFI_CLANG, we assume __cfi_check is at the beginning
-	 * of the .text section, and is aligned to PAGE_SIZE.
-	 */
-	.text : ALIGN_CFI {
-		*(.text.__cfi_check)
-		*(.text .text.[0-9a-zA-Z_]* .text..L.cfi*)
-	}
 #endif
 }
 
-- 
2.36.0.464.gb9c8b46e94-goog


^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [RFC PATCH 07/21] cfi: Add type helper macros
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-04-29 20:36   ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

With CONFIG_CFI_CLANG, assembly functions called indirectly
from C code must be annotated with type identifiers to pass CFI
checking. The compiler emits a __kcfi_typeid_<function> symbol for
each address-taken function declaration in C, which contains the
expected type identifier. Add typed versions of SYM_FUNC_START and
SYM_FUNC_START_ALIAS, which emit the type identifier before the
function.

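As a usage sketch (my_transform is a hypothetical arm64 routine that C
code calls only through a function pointer; patch 8 converts real crypto
functions the same way), the annotation places the expected type
identifier immediately before the function's entry point:

	#include <linux/linkage.h>
	#include <linux/cfi_types.h>

	/* Emits __kcfi_typeid_my_transform (via __CFI_TYPE) between the
	 * alignment directive and the my_transform: label. */
	SYM_TYPED_FUNC_START(my_transform)
		ret
	SYM_FUNC_END(my_transform)
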
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 include/linux/cfi_types.h | 57 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 57 insertions(+)
 create mode 100644 include/linux/cfi_types.h

diff --git a/include/linux/cfi_types.h b/include/linux/cfi_types.h
new file mode 100644
index 000000000000..dd16e755a197
--- /dev/null
+++ b/include/linux/cfi_types.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Clang Control Flow Integrity (CFI) type definitions.
+ */
+#ifndef _LINUX_CFI_TYPES_H
+#define _LINUX_CFI_TYPES_H
+
+#ifdef CONFIG_CFI_CLANG
+#include <linux/linkage.h>
+
+#ifdef __ASSEMBLY__
+/*
+ * Use the __kcfi_typeid_<function> type identifier symbol to
+ * annotate indirectly called assembly functions. The compiler emits
+ * these symbols for all address-taken function declarations in C
+ * code.
+ */
+#ifndef __CFI_TYPE
+#define __CFI_TYPE(name)				\
+	.4byte __kcfi_typeid_##name
+#endif
+
+#define SYM_TYPED_ENTRY(name, fname, linkage, align...)	\
+	linkage(name) ASM_NL				\
+	align ASM_NL					\
+	__CFI_TYPE(fname) ASM_NL			\
+	name:
+
+#define __SYM_TYPED_FUNC_START_ALIAS(name, fname) \
+	SYM_TYPED_ENTRY(name, fname, SYM_L_GLOBAL, SYM_A_ALIGN)
+
+#define __SYM_TYPED_FUNC_START(name, fname) \
+	SYM_TYPED_ENTRY(name, fname, SYM_L_GLOBAL, SYM_A_ALIGN)
+
+#endif /* __ASSEMBLY__ */
+
+#else /* CONFIG_CFI_CLANG */
+
+#ifdef __ASSEMBLY__
+#define __SYM_TYPED_FUNC_START_ALIAS(name, fname) \
+	SYM_FUNC_START_ALIAS(name)
+
+#define __SYM_TYPED_FUNC_START(name, fname) \
+	SYM_FUNC_START(name)
+#endif /* __ASSEMBLY__ */
+
+#endif /* CONFIG_CFI_CLANG */
+
+#ifdef __ASSEMBLY__
+#define SYM_TYPED_FUNC_START_ALIAS(name) \
+	__SYM_TYPED_FUNC_START_ALIAS(name, name)
+
+#define SYM_TYPED_FUNC_START(name) \
+	__SYM_TYPED_FUNC_START(name, name)
+#endif /* __ASSEMBLY__ */
+
+#endif /* _LINUX_CFI_TYPES_H */
-- 
2.36.0.464.gb9c8b46e94-goog


^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [RFC PATCH 08/21] arm64/crypto: Add types to indirect called assembly functions
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-04-29 20:36   ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

With CONFIG_CFI_CLANG, assembly functions indirectly called from C code
must be annotated with type identifiers to pass CFI checking. Use
SYM_TYPED_FUNC_START for indirectly called functions in the crypto code.

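For reference, a hedged sketch of the C side that makes these
annotations work: the __kcfi_typeid_ symbols the .S files now reference
exist because the glue code takes the functions' addresses through
declarations like the one below (prototype as in the ghash-ce-core.S
comment; the function pointer variable is illustrative):

	#include <linux/linkage.h>
	#include <linux/types.h>

	struct ghash_key;

	asmlinkage void pmull_ghash_update_p64(int blocks, u64 dg[],
					       const char *src,
					       struct ghash_key const *k,
					       const char *head);

	/* Taking the address of the declaration is what prompts the
	 * compiler to emit __kcfi_typeid_pmull_ghash_update_p64. */
	static void (*const ghash_update)(int, u64[], const char *,
					  struct ghash_key const *,
					  const char *) = pmull_ghash_update_p64;
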
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/arm64/crypto/ghash-ce-core.S | 5 +++--
 arch/arm64/crypto/sm3-ce-core.S   | 3 ++-
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
index 7868330dd54e..ebe5558929b7 100644
--- a/arch/arm64/crypto/ghash-ce-core.S
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -6,6 +6,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 #include <asm/assembler.h>
 
 	SHASH		.req	v0
@@ -350,11 +351,11 @@ CPU_LE(	rev64		T1.16b, T1.16b	)
 	 * void pmull_ghash_update(int blocks, u64 dg[], const char *src,
 	 *			   struct ghash_key const *k, const char *head)
 	 */
-SYM_FUNC_START(pmull_ghash_update_p64)
+SYM_TYPED_FUNC_START(pmull_ghash_update_p64)
 	__pmull_ghash	p64
 SYM_FUNC_END(pmull_ghash_update_p64)
 
-SYM_FUNC_START(pmull_ghash_update_p8)
+SYM_TYPED_FUNC_START(pmull_ghash_update_p8)
 	__pmull_ghash	p8
 SYM_FUNC_END(pmull_ghash_update_p8)
 
diff --git a/arch/arm64/crypto/sm3-ce-core.S b/arch/arm64/crypto/sm3-ce-core.S
index ef97d3187cb7..ca70cfacd0d0 100644
--- a/arch/arm64/crypto/sm3-ce-core.S
+++ b/arch/arm64/crypto/sm3-ce-core.S
@@ -6,6 +6,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 #include <asm/assembler.h>
 
 	.irp		b, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
@@ -73,7 +74,7 @@
 	 *                       int blocks)
 	 */
 	.text
-SYM_FUNC_START(sm3_ce_transform)
+SYM_TYPED_FUNC_START(sm3_ce_transform)
 	/* load state */
 	ld1		{v8.4s-v9.4s}, [x0]
 	rev64		v8.4s, v8.4s
-- 
2.36.0.464.gb9c8b46e94-goog


^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [RFC PATCH 09/21] arm64: Add CFI error handling
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-04-29 20:36   ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

With -fsanitize=kcfi, CFI always traps. Add arm64 support for handling
CFI failures and determining the target address.

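To make the failure mode concrete, a hedged sketch of the kind of call
this handler fires on (names are hypothetical): the caller's expected
type and the target's type hash to different values, so the
compiler-emitted check falls through to the brk #0x801 instruction that
this patch hooks:

	static int mismatched_target(void *arg)
	{
		return 0;	/* hypothetical function with the "wrong" type */
	}

	static void cfi_violation_demo(void)
	{
		void (*fn)(int);

		fn = (void (*)(int))mismatched_target;	/* prototype mismatch */
		fn(0);	/* hash check fails -> brk #0x801 -> cfi_handler() */
	}
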
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/arm64/include/asm/brk-imm.h |  2 ++
 arch/arm64/include/asm/insn.h    |  1 +
 arch/arm64/kernel/traps.c        | 57 ++++++++++++++++++++++++++++++++
 3 files changed, 60 insertions(+)

diff --git a/arch/arm64/include/asm/brk-imm.h b/arch/arm64/include/asm/brk-imm.h
index ec7720dbe2c8..3a50b70b4404 100644
--- a/arch/arm64/include/asm/brk-imm.h
+++ b/arch/arm64/include/asm/brk-imm.h
@@ -16,6 +16,7 @@
  * 0x400: for dynamic BRK instruction
  * 0x401: for compile time BRK instruction
  * 0x800: kernel-mode BUG() and WARN() traps
+ * 0x801: Control-Flow Integrity traps
  * 0x9xx: tag-based KASAN trap (allowed values 0x900 - 0x9ff)
  */
 #define KPROBES_BRK_IMM			0x004
@@ -25,6 +26,7 @@
 #define KGDB_DYN_DBG_BRK_IMM		0x400
 #define KGDB_COMPILED_DBG_BRK_IMM	0x401
 #define BUG_BRK_IMM			0x800
+#define CFI_BRK_IMM			0x801
 #define KASAN_BRK_IMM			0x900
 #define KASAN_BRK_MASK			0x0ff
 
diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 1e5760d567ae..12225bdfa776 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -334,6 +334,7 @@ __AARCH64_INSN_FUNCS(store_pre,	0x3FE00C00, 0x38000C00)
 __AARCH64_INSN_FUNCS(load_pre,	0x3FE00C00, 0x38400C00)
 __AARCH64_INSN_FUNCS(store_post,	0x3FE00C00, 0x38000400)
 __AARCH64_INSN_FUNCS(load_post,	0x3FE00C00, 0x38400400)
+__AARCH64_INSN_FUNCS(ldur,	0x3FE00C00, 0x38400000)
 __AARCH64_INSN_FUNCS(str_reg,	0x3FE0EC00, 0x38206800)
 __AARCH64_INSN_FUNCS(ldadd,	0x3F20FC00, 0x38200000)
 __AARCH64_INSN_FUNCS(ldclr,	0x3F20FC00, 0x38201000)
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 0529fd57567e..b524411ba663 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -26,6 +26,7 @@
 #include <linux/syscalls.h>
 #include <linux/mm_types.h>
 #include <linux/kasan.h>
+#include <linux/cfi.h>
 
 #include <asm/atomic.h>
 #include <asm/bug.h>
@@ -990,6 +991,55 @@ static struct break_hook bug_break_hook = {
 	.imm = BUG_BRK_IMM,
 };
 
+#ifdef CONFIG_CFI_CLANG
+void *arch_get_cfi_target(unsigned long addr, struct pt_regs *regs)
+{
+	/* The expected CFI check instruction sequence:
+	 *   ldur    wA, [xN, #-4]
+	 *   movk    wB, #nnnnn
+	 *   movk    wB, #nnnnn, lsl #16
+	 *   cmp     wA, wB
+	 *   b.eq    .Ltmp1
+	 *   brk     #0x801		; <- addr
+	 *   .Ltmp1:
+	 *
+	 * Therefore, the target address is in the xN register, which we can
+	 * decode from the ldur instruction.
+	 */
+	u32 insn, rn;
+	void *p = (void *)(addr - 5 * AARCH64_INSN_SIZE);
+
+	if (aarch64_insn_read(p, &insn) || !aarch64_insn_is_ldur(insn))
+		return NULL;
+
+	rn = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RN, insn);
+	return (void *)regs->regs[rn];
+}
+
+static int cfi_handler(struct pt_regs *regs, unsigned int esr)
+{
+	switch (report_cfi(regs->pc, regs)) {
+	case BUG_TRAP_TYPE_BUG:
+		die("Oops - CFI", regs, 0);
+		break;
+
+	case BUG_TRAP_TYPE_WARN:
+		break;
+
+	default:
+		return DBG_HOOK_ERROR;
+	}
+
+	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
+	return DBG_HOOK_HANDLED;
+}
+
+static struct break_hook cfi_break_hook = {
+	.fn = cfi_handler,
+	.imm = CFI_BRK_IMM,
+};
+#endif /* CONFIG_CFI_CLANG */
+
 static int reserved_fault_handler(struct pt_regs *regs, unsigned int esr)
 {
 	pr_err("%s generated an invalid instruction at %pS!\n",
@@ -1063,6 +1113,10 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
 
 	if ((comment & ~KASAN_BRK_MASK) == KASAN_BRK_IMM)
 		return kasan_handler(regs, esr) != DBG_HOOK_HANDLED;
+#endif
+#ifdef CONFIG_CFI_CLANG
+	if ((esr & ESR_ELx_BRK64_ISS_COMMENT_MASK) == CFI_BRK_IMM)
+		return cfi_handler(regs, esr) != DBG_HOOK_HANDLED;
 #endif
 	return bug_handler(regs, esr) != DBG_HOOK_HANDLED;
 }
@@ -1070,6 +1124,9 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
 void __init trap_init(void)
 {
 	register_kernel_break_hook(&bug_break_hook);
+#ifdef CONFIG_CFI_CLANG
+	register_kernel_break_hook(&cfi_break_hook);
+#endif
 	register_kernel_break_hook(&fault_break_hook);
 #ifdef CONFIG_KASAN_SW_TAGS
 	register_kernel_break_hook(&kasan_break_hook);
-- 
2.36.0.464.gb9c8b46e94-goog


^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [RFC PATCH 10/21] treewide: Drop function_nocfi
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-04-29 20:36   ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

With -fsanitize=kcfi, we no longer need function_nocfi() as
the compiler won't change function references to point to a
jump table. Remove all implementations and uses of the macro.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/arm64/include/asm/compiler.h         | 16 ----------------
 arch/arm64/include/asm/ftrace.h           |  2 +-
 arch/arm64/include/asm/mmu_context.h      |  2 +-
 arch/arm64/kernel/acpi_parking_protocol.c |  2 +-
 arch/arm64/kernel/cpufeature.c            |  2 +-
 arch/arm64/kernel/ftrace.c                |  2 +-
 arch/arm64/kernel/machine_kexec.c         |  2 +-
 arch/arm64/kernel/psci.c                  |  2 +-
 arch/arm64/kernel/smp_spin_table.c        |  2 +-
 drivers/firmware/psci/psci.c              |  4 ++--
 drivers/misc/lkdtm/usercopy.c             |  2 +-
 include/linux/compiler.h                  | 10 ----------
 12 files changed, 11 insertions(+), 37 deletions(-)

diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
index dc3ea4080e2e..6fb2e6bcc392 100644
--- a/arch/arm64/include/asm/compiler.h
+++ b/arch/arm64/include/asm/compiler.h
@@ -23,20 +23,4 @@
 #define __builtin_return_address(val)					\
 	(void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
 
-#ifdef CONFIG_CFI_CLANG
-/*
- * With CONFIG_CFI_CLANG, the compiler replaces function address
- * references with the address of the function's CFI jump table
- * entry. The function_nocfi macro always returns the address of the
- * actual function instead.
- */
-#define function_nocfi(x) ({						\
-	void *addr;							\
-	asm("adrp %0, " __stringify(x) "\n\t"				\
-	    "add  %0, %0, :lo12:" __stringify(x)			\
-	    : "=r" (addr));						\
-	addr;								\
-})
-#endif
-
 #endif /* __ASM_COMPILER_H */
diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
index 1494cfa8639b..c96d47cb8f46 100644
--- a/arch/arm64/include/asm/ftrace.h
+++ b/arch/arm64/include/asm/ftrace.h
@@ -26,7 +26,7 @@
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
 #define ARCH_SUPPORTS_FTRACE_OPS 1
 #else
-#define MCOUNT_ADDR		((unsigned long)function_nocfi(_mcount))
+#define MCOUNT_ADDR		((unsigned long)_mcount)
 #endif
 
 /* The BL at the callsite's adjusted rec->ip */
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 6770667b34a3..c9df5ab2c448 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -164,7 +164,7 @@ static inline void __nocfi cpu_replace_ttbr1(pgd_t *pgdp)
 		ttbr1 |= TTBR_CNP_BIT;
 	}
 
-	replace_phys = (void *)__pa_symbol(function_nocfi(idmap_cpu_replace_ttbr1));
+	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
 
 	cpu_install_idmap();
 	replace_phys(ttbr1);
diff --git a/arch/arm64/kernel/acpi_parking_protocol.c b/arch/arm64/kernel/acpi_parking_protocol.c
index bfeeb5319abf..b1990e38aed0 100644
--- a/arch/arm64/kernel/acpi_parking_protocol.c
+++ b/arch/arm64/kernel/acpi_parking_protocol.c
@@ -99,7 +99,7 @@ static int acpi_parking_protocol_cpu_boot(unsigned int cpu)
 	 * that read this address need to convert this address to the
 	 * Boot-Loader's endianness before jumping.
 	 */
-	writeq_relaxed(__pa_symbol(function_nocfi(secondary_entry)),
+	writeq_relaxed(__pa_symbol(secondary_entry),
 		       &mailbox->entry_point);
 	writel_relaxed(cpu_entry->gic_cpu_id, &mailbox->cpu_id);
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d72c4b4d389c..dae07d99508b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1619,7 +1619,7 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 	if (arm64_use_ng_mappings)
 		return;
 
-	remap_fn = (void *)__pa_symbol(function_nocfi(idmap_kpti_install_ng_mappings));
+	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
 
 	cpu_install_idmap();
 	remap_fn(cpu, num_online_cpus(), __pa_symbol(swapper_pg_dir));
diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index 4506c4a90ac1..4128ca6ed485 100644
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -56,7 +56,7 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
 	unsigned long pc;
 	u32 new;
 
-	pc = (unsigned long)function_nocfi(ftrace_call);
+	pc = (unsigned long)ftrace_call;
 	new = aarch64_insn_gen_branch_imm(pc, (unsigned long)func,
 					  AARCH64_INSN_BRANCH_LINK);
 
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index e16b248699d5..4eb5388aa5a6 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -204,7 +204,7 @@ void machine_kexec(struct kimage *kimage)
 		typeof(cpu_soft_restart) *restart;
 
 		cpu_install_idmap();
-		restart = (void *)__pa_symbol(function_nocfi(cpu_soft_restart));
+		restart = (void *)__pa_symbol(cpu_soft_restart);
 		restart(is_hyp_nvhe(), kimage->start, kimage->arch.dtb_mem,
 			0, 0);
 	} else {
diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
index ab7f4c476104..29a8e444db83 100644
--- a/arch/arm64/kernel/psci.c
+++ b/arch/arm64/kernel/psci.c
@@ -38,7 +38,7 @@ static int __init cpu_psci_cpu_prepare(unsigned int cpu)
 
 static int cpu_psci_cpu_boot(unsigned int cpu)
 {
-	phys_addr_t pa_secondary_entry = __pa_symbol(function_nocfi(secondary_entry));
+	phys_addr_t pa_secondary_entry = __pa_symbol(secondary_entry);
 	int err = psci_ops.cpu_on(cpu_logical_map(cpu), pa_secondary_entry);
 	if (err)
 		pr_err("failed to boot CPU%d (%d)\n", cpu, err);
diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
index 7e1624ecab3c..49029eace3ad 100644
--- a/arch/arm64/kernel/smp_spin_table.c
+++ b/arch/arm64/kernel/smp_spin_table.c
@@ -66,7 +66,7 @@ static int smp_spin_table_cpu_init(unsigned int cpu)
 static int smp_spin_table_cpu_prepare(unsigned int cpu)
 {
 	__le64 __iomem *release_addr;
-	phys_addr_t pa_holding_pen = __pa_symbol(function_nocfi(secondary_holding_pen));
+	phys_addr_t pa_holding_pen = __pa_symbol(secondary_holding_pen);
 
 	if (!cpu_release_addr[cpu])
 		return -ENODEV;
diff --git a/drivers/firmware/psci/psci.c b/drivers/firmware/psci/psci.c
index cfb448eabdaa..aa3133cafced 100644
--- a/drivers/firmware/psci/psci.c
+++ b/drivers/firmware/psci/psci.c
@@ -334,7 +334,7 @@ static int __init psci_features(u32 psci_func_id)
 static int psci_suspend_finisher(unsigned long state)
 {
 	u32 power_state = state;
-	phys_addr_t pa_cpu_resume = __pa_symbol(function_nocfi(cpu_resume));
+	phys_addr_t pa_cpu_resume = __pa_symbol(cpu_resume);
 
 	return psci_ops.cpu_suspend(power_state, pa_cpu_resume);
 }
@@ -359,7 +359,7 @@ int psci_cpu_suspend_enter(u32 state)
 
 static int psci_system_suspend(unsigned long unused)
 {
-	phys_addr_t pa_cpu_resume = __pa_symbol(function_nocfi(cpu_resume));
+	phys_addr_t pa_cpu_resume = __pa_symbol(cpu_resume);
 
 	return invoke_psci_fn(PSCI_FN_NATIVE(1_0, SYSTEM_SUSPEND),
 			      pa_cpu_resume, 0, 0);
diff --git a/drivers/misc/lkdtm/usercopy.c b/drivers/misc/lkdtm/usercopy.c
index 9161ce7ed47a..79a17b1c4885 100644
--- a/drivers/misc/lkdtm/usercopy.c
+++ b/drivers/misc/lkdtm/usercopy.c
@@ -318,7 +318,7 @@ void lkdtm_USERCOPY_KERNEL(void)
 
 	pr_info("attempting bad copy_to_user from kernel text: %px\n",
 		vm_mmap);
-	if (copy_to_user((void __user *)user_addr, function_nocfi(vm_mmap),
+	if (copy_to_user((void __user *)user_addr, vm_mmap,
 			 unconst + PAGE_SIZE)) {
 		pr_warn("copy_to_user failed, but lacked Oops\n");
 		goto free_user;
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 9303f5fe5d89..80ed9644d129 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -203,16 +203,6 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 	__v;								\
 })
 
-/*
- * With CONFIG_CFI_CLANG, the compiler replaces function addresses in
- * instrumented C code with jump table addresses. Architectures that
- * support CFI can define this macro to return the actual function address
- * when needed.
- */
-#ifndef function_nocfi
-#define function_nocfi(x) (x)
-#endif
-
 #endif /* __KERNEL__ */
 
 /*
-- 
2.36.0.464.gb9c8b46e94-goog


^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [RFC PATCH 11/21] treewide: Drop WARN_ON_FUNCTION_MISMATCH
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-04-29 20:36   ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

CONFIG_CFI_CLANG no longer breaks cross-module function address
equality, which makes WARN_ON_FUNCTION_MISMATCH unnecessary. Remove
the definition and switch back to WARN_ON_ONCE.
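
To illustrate (a sketch, not part of this patch): with the old
jump-table scheme, a callback pointer assigned in one module could
point at that module's local jump table entry rather than at the
function itself, so a check such as

  WARN_ON_ONCE(timer->function != delayed_work_timer_fn);

could fire spuriously even for a valid callback. With
-fsanitize=kcfi, taking a function's address yields the actual
function on both sides of the comparison, so plain WARN_ON_ONCE is
reliable again.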

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 include/asm-generic/bug.h | 16 ----------------
 kernel/kthread.c          |  3 +--
 kernel/workqueue.c        |  2 +-
 3 files changed, 2 insertions(+), 19 deletions(-)

diff --git a/include/asm-generic/bug.h b/include/asm-generic/bug.h
index edb0e2a602a8..a4c116dec698 100644
--- a/include/asm-generic/bug.h
+++ b/include/asm-generic/bug.h
@@ -219,22 +219,6 @@ void __warn(const char *file, int line, void *caller, unsigned taint,
 # define WARN_ON_SMP(x)			({0;})
 #endif
 
-/*
- * WARN_ON_FUNCTION_MISMATCH() warns if a value doesn't match a
- * function address, and can be useful for catching issues with
- * callback functions, for example.
- *
- * With CONFIG_CFI_CLANG, the warning is disabled because the
- * compiler replaces function addresses taken in C code with
- * local jump table addresses, which breaks cross-module function
- * address equality.
- */
-#if defined(CONFIG_CFI_CLANG) && defined(CONFIG_MODULES)
-# define WARN_ON_FUNCTION_MISMATCH(x, fn) ({ 0; })
-#else
-# define WARN_ON_FUNCTION_MISMATCH(x, fn) WARN_ON_ONCE((x) != (fn))
-#endif
-
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 50265f69a135..dfeb87876b4a 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1050,8 +1050,7 @@ static void __kthread_queue_delayed_work(struct kthread_worker *worker,
 	struct timer_list *timer = &dwork->timer;
 	struct kthread_work *work = &dwork->work;
 
-	WARN_ON_FUNCTION_MISMATCH(timer->function,
-				  kthread_delayed_work_timer_fn);
+	WARN_ON_ONCE(timer->function != kthread_delayed_work_timer_fn);
 
 	/*
 	 * If @delay is 0, queue @dwork->work immediately.  This is for
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 0d2514b4ff0d..18c1a1c09684 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1651,7 +1651,7 @@ static void __queue_delayed_work(int cpu, struct workqueue_struct *wq,
 	struct work_struct *work = &dwork->work;
 
 	WARN_ON_ONCE(!wq);
-	WARN_ON_FUNCTION_MISMATCH(timer->function, delayed_work_timer_fn);
+	WARN_ON_ONCE(timer->function != delayed_work_timer_fn);
 	WARN_ON_ONCE(timer_pending(timer));
 	WARN_ON_ONCE(!list_empty(&work->entry));
 
-- 
2.36.0.464.gb9c8b46e94-goog


^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [RFC PATCH 12/21] treewide: Drop __cficanonical
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-04-29 20:36   ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

CONFIG_CFI_CLANG no longer uses a jump table and therefore won't
change function references to point elsewhere. Remove the
__cficanonical attribute and all uses of it.
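
For background (a sketch; the Clang definition lived in
compiler-clang.h before this series): with the jump-table scheme,
the attribute expanded to

  #define __cficanonical __attribute__((cfi_canonical_jump_table))

which made the jump table entry the canonical address of the
function, keeping addresses consistent for stubs referenced from
linker tables (initcalls, PCI fixups). KCFI leaves function
addresses untouched, so the attribute has nothing left to do.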

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 include/linux/compiler_types.h | 4 ----
 include/linux/init.h           | 4 ++--
 include/linux/pci.h            | 4 ++--
 3 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 1c2c33ae1b37..bdd2526af46a 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -263,10 +263,6 @@ struct ftrace_likely_data {
 # define __nocfi
 #endif
 
-#ifndef __cficanonical
-# define __cficanonical
-#endif
-
 /*
  * Any place that could be marked with the "alloc_size" attribute is also
  * a place to be marked with the "malloc" attribute. Do this as part of the
diff --git a/include/linux/init.h b/include/linux/init.h
index baf0b29a7010..76058c9e0399 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -220,8 +220,8 @@ extern bool initcall_debug;
 	__initcall_name(initstub, __iid, id)
 
 #define __define_initcall_stub(__stub, fn)			\
-	int __init __cficanonical __stub(void);			\
-	int __init __cficanonical __stub(void)			\
+	int __init __stub(void);				\
+	int __init __stub(void)					\
 	{ 							\
 		return fn();					\
 	}							\
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 60adf42460ab..3cc50c4e3c64 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -2021,8 +2021,8 @@ enum pci_fixup_pass {
 #ifdef CONFIG_LTO_CLANG
 #define __DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class,	\
 				  class_shift, hook, stub)		\
-	void __cficanonical stub(struct pci_dev *dev);			\
-	void __cficanonical stub(struct pci_dev *dev)			\
+	void stub(struct pci_dev *dev);					\
+	void stub(struct pci_dev *dev)					\
 	{ 								\
 		hook(dev); 						\
 	}								\
-- 
2.36.0.464.gb9c8b46e94-goog


^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [RFC PATCH 13/21] cfi: Add the cfi_unchecked macro
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-04-29 20:36   ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

The cfi_unchecked macro allows CFI checking to be disabled for a
specific indirect call expression by passing the expression as an
argument to the macro. For example:

  static void call(void (*f)(void)) {
          cfi_unchecked(f());
  }

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 include/linux/compiler-clang.h | 2 ++
 include/linux/compiler_types.h | 4 ++++
 2 files changed, 6 insertions(+)

diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index c4ff42859077..0d6a0e7e36dc 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -94,4 +94,6 @@
 #if CONFIG_CFI_CLANG
 /* Disable CFI checking inside a function. */
 #define __nocfi		__attribute__((__no_sanitize__("kcfi")))
+/* Disable CFI checking for the indirect call expression. */
+#define cfi_unchecked(expr)	__builtin_kcfi_call_unchecked(expr)
 #endif
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index bdd2526af46a..41f547fe9724 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -263,6 +263,10 @@ struct ftrace_likely_data {
 # define __nocfi
 #endif
 
+#ifndef cfi_unchecked
+# define cfi_unchecked(expr)	expr
+#endif
+
 /*
  * Any place that could be marked with the "alloc_size" attribute is also
  * a place to be marked with the "malloc" attribute. Do this as part of the
-- 
2.36.0.464.gb9c8b46e94-goog


^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [RFC PATCH 14/21] treewide: static_call: Pass call arguments to the macro
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-04-29 20:36   ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

Include the function arguments in the static call macro to make it
possible to add a wrapper for the call. This is needed with
CONFIG_CFI_CLANG to disable indirect call checking for static calls
that are patched into direct calls at runtime.

Users of static_call were updated using the following Coccinelle
script and manually adjusted to preserve coding style:

  @@
  expression name;
  expression list args;
  identifier static_call =~ "^static_call(_mod|_cond)?$";
  @@

  - static_call(name)(args)
  + static_call(name, args)
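
For example (illustrative; taken from hunks in this patch), call
sites change as follows:

  /* before */
  static_call(pv_steal_clock)(cpu);
  static_call(x86_pmu_disable_all)();

  /* after */
  static_call(pv_steal_clock, cpu);
  static_call(x86_pmu_disable_all);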

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/arm/include/asm/paravirt.h           |   2 +-
 arch/arm64/include/asm/paravirt.h         |   2 +-
 arch/x86/crypto/aesni-intel_glue.c        |   7 +-
 arch/x86/events/core.c                    |  40 +--
 arch/x86/include/asm/kvm_host.h           |   6 +-
 arch/x86/include/asm/paravirt.h           |   4 +-
 arch/x86/kvm/cpuid.c                      |   2 +-
 arch/x86/kvm/hyperv.c                     |   4 +-
 arch/x86/kvm/irq.c                        |   2 +-
 arch/x86/kvm/kvm_cache_regs.h             |  10 +-
 arch/x86/kvm/lapic.c                      |  32 +--
 arch/x86/kvm/mmu.h                        |   4 +-
 arch/x86/kvm/mmu/mmu.c                    |   8 +-
 arch/x86/kvm/mmu/spte.c                   |   4 +-
 arch/x86/kvm/pmu.c                        |   4 +-
 arch/x86/kvm/trace.h                      |   4 +-
 arch/x86/kvm/x86.c                        | 326 +++++++++++-----------
 arch/x86/kvm/x86.h                        |   4 +-
 arch/x86/kvm/xen.c                        |   4 +-
 drivers/cpufreq/amd-pstate.c              |   8 +-
 include/linux/entry-common.h              |   2 +-
 include/linux/kernel.h                    |   2 +-
 include/linux/perf_event.h                |   6 +-
 include/linux/sched.h                     |   2 +-
 include/linux/static_call.h               |  16 +-
 include/linux/static_call_types.h         |  10 +-
 include/linux/tracepoint.h                |   2 +-
 kernel/static_call_inline.c               |   2 +-
 kernel/trace/bpf_trace.c                  |   2 +-
 security/keys/trusted-keys/trusted_core.c |  14 +-
 30 files changed, 267 insertions(+), 268 deletions(-)

diff --git a/arch/arm/include/asm/paravirt.h b/arch/arm/include/asm/paravirt.h
index 95d5b0d625cd..43c419eadb9a 100644
--- a/arch/arm/include/asm/paravirt.h
+++ b/arch/arm/include/asm/paravirt.h
@@ -15,7 +15,7 @@ DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
 
 static inline u64 paravirt_steal_clock(int cpu)
 {
-	return static_call(pv_steal_clock)(cpu);
+	return static_call(pv_steal_clock, cpu);
 }
 #endif
 
diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
index 9aa193e0e8f2..35a9d649c448 100644
--- a/arch/arm64/include/asm/paravirt.h
+++ b/arch/arm64/include/asm/paravirt.h
@@ -15,7 +15,7 @@ DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
 
 static inline u64 paravirt_steal_clock(int cpu)
 {
-	return static_call(pv_steal_clock)(cpu);
+	return static_call(pv_steal_clock, cpu);
 }
 
 int __init pv_time_init(void);
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 41901ba9d3a2..06182c068145 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -507,10 +507,9 @@ static int ctr_crypt(struct skcipher_request *req)
 	while ((nbytes = walk.nbytes) > 0) {
 		kernel_fpu_begin();
 		if (nbytes & AES_BLOCK_MASK)
-			static_call(aesni_ctr_enc_tfm)(ctx, walk.dst.virt.addr,
-						       walk.src.virt.addr,
-						       nbytes & AES_BLOCK_MASK,
-						       walk.iv);
+			static_call(aesni_ctr_enc_tfm, ctx,
+				    walk.dst.virt.addr, walk.src.virt.addr,
+				    nbytes & AES_BLOCK_MASK, walk.iv);
 		nbytes &= ~AES_BLOCK_MASK;
 
 		if (walk.nbytes == walk.total && nbytes > 0) {
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index eef816fc216d..74315c87220b 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -695,7 +695,7 @@ void x86_pmu_disable_all(void)
 
 struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr)
 {
-	return static_call(x86_pmu_guest_get_msrs)(nr);
+	return static_call(x86_pmu_guest_get_msrs, nr);
 }
 EXPORT_SYMBOL_GPL(perf_guest_get_msrs);
 
@@ -726,7 +726,7 @@ static void x86_pmu_disable(struct pmu *pmu)
 	cpuc->enabled = 0;
 	barrier();
 
-	static_call(x86_pmu_disable_all)();
+	static_call(x86_pmu_disable_all);
 }
 
 void x86_pmu_enable_all(int added)
@@ -991,7 +991,7 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
 	if (cpuc->txn_flags & PERF_PMU_TXN_ADD)
 		n0 -= cpuc->n_txn;
 
-	static_call_cond(x86_pmu_start_scheduling)(cpuc);
+	static_call_cond(x86_pmu_start_scheduling, cpuc);
 
 	for (i = 0, wmin = X86_PMC_IDX_MAX, wmax = 0; i < n; i++) {
 		c = cpuc->event_constraint[i];
@@ -1008,7 +1008,7 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
 		 * change due to external factors (sibling state, allow_tfa).
 		 */
 		if (!c || (c->flags & PERF_X86_EVENT_DYNAMIC)) {
-			c = static_call(x86_pmu_get_event_constraints)(cpuc, i, cpuc->event_list[i]);
+			c = static_call(x86_pmu_get_event_constraints, cpuc, i, cpuc->event_list[i]);
 			cpuc->event_constraint[i] = c;
 		}
 
@@ -1090,7 +1090,7 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
 	 */
 	if (!unsched && assign) {
 		for (i = 0; i < n; i++)
-			static_call_cond(x86_pmu_commit_scheduling)(cpuc, i, assign[i]);
+			static_call_cond(x86_pmu_commit_scheduling, cpuc, i, assign[i]);
 	} else {
 		for (i = n0; i < n; i++) {
 			e = cpuc->event_list[i];
@@ -1098,13 +1098,13 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
 			/*
 			 * release events that failed scheduling
 			 */
-			static_call_cond(x86_pmu_put_event_constraints)(cpuc, e);
+			static_call_cond(x86_pmu_put_event_constraints, cpuc, e);
 
 			cpuc->event_constraint[i] = NULL;
 		}
 	}
 
-	static_call_cond(x86_pmu_stop_scheduling)(cpuc);
+	static_call_cond(x86_pmu_stop_scheduling, cpuc);
 
 	return unsched ? -EINVAL : 0;
 }
@@ -1217,7 +1217,7 @@ static inline void x86_assign_hw_event(struct perf_event *event,
 	hwc->last_cpu = smp_processor_id();
 	hwc->last_tag = ++cpuc->tags[i];
 
-	static_call_cond(x86_pmu_assign)(event, idx);
+	static_call_cond(x86_pmu_assign, event, idx);
 
 	switch (hwc->idx) {
 	case INTEL_PMC_IDX_FIXED_BTS:
@@ -1347,7 +1347,7 @@ static void x86_pmu_enable(struct pmu *pmu)
 	cpuc->enabled = 1;
 	barrier();
 
-	static_call(x86_pmu_enable_all)(added);
+	static_call(x86_pmu_enable_all, added);
 }
 
 static DEFINE_PER_CPU(u64 [X86_PMC_IDX_MAX], pmc_prev_left);
@@ -1472,7 +1472,7 @@ static int x86_pmu_add(struct perf_event *event, int flags)
 	if (cpuc->txn_flags & PERF_PMU_TXN_ADD)
 		goto done_collect;
 
-	ret = static_call(x86_pmu_schedule_events)(cpuc, n, assign);
+	ret = static_call(x86_pmu_schedule_events, cpuc, n, assign);
 	if (ret)
 		goto out;
 	/*
@@ -1494,7 +1494,7 @@ static int x86_pmu_add(struct perf_event *event, int flags)
 	 * This is before x86_pmu_enable() will call x86_pmu_start(),
 	 * so we enable LBRs before an event needs them etc..
 	 */
-	static_call_cond(x86_pmu_add)(event);
+	static_call_cond(x86_pmu_add, event);
 
 	ret = 0;
 out:
@@ -1521,7 +1521,7 @@ static void x86_pmu_start(struct perf_event *event, int flags)
 
 	cpuc->events[idx] = event;
 	__set_bit(idx, cpuc->active_mask);
-	static_call(x86_pmu_enable)(event);
+	static_call(x86_pmu_enable, event);
 	perf_event_update_userpage(event);
 }
 
@@ -1594,7 +1594,7 @@ void x86_pmu_stop(struct perf_event *event, int flags)
 	struct hw_perf_event *hwc = &event->hw;
 
 	if (test_bit(hwc->idx, cpuc->active_mask)) {
-		static_call(x86_pmu_disable)(event);
+		static_call(x86_pmu_disable, event);
 		__clear_bit(hwc->idx, cpuc->active_mask);
 		cpuc->events[hwc->idx] = NULL;
 		WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
@@ -1647,7 +1647,7 @@ static void x86_pmu_del(struct perf_event *event, int flags)
 	if (i >= cpuc->n_events - cpuc->n_added)
 		--cpuc->n_added;
 
-	static_call_cond(x86_pmu_put_event_constraints)(cpuc, event);
+	static_call_cond(x86_pmu_put_event_constraints, cpuc, event);
 
 	/* Delete the array entry. */
 	while (++i < cpuc->n_events) {
@@ -1667,7 +1667,7 @@ static void x86_pmu_del(struct perf_event *event, int flags)
 	 * This is after x86_pmu_stop(); so we disable LBRs after any
 	 * event can need them etc..
 	 */
-	static_call_cond(x86_pmu_del)(event);
+	static_call_cond(x86_pmu_del, event);
 }
 
 int x86_pmu_handle_irq(struct pt_regs *regs)
@@ -1745,7 +1745,7 @@ perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
 		return NMI_DONE;
 
 	start_clock = sched_clock();
-	ret = static_call(x86_pmu_handle_irq)(regs);
+	ret = static_call(x86_pmu_handle_irq, regs);
 	finish_clock = sched_clock();
 
 	perf_sample_event_took(finish_clock - start_clock);
@@ -2217,7 +2217,7 @@ early_initcall(init_hw_perf_events);
 
 static void x86_pmu_read(struct perf_event *event)
 {
-	static_call(x86_pmu_read)(event);
+	static_call(x86_pmu_read, event);
 }
 
 /*
@@ -2298,7 +2298,7 @@ static int x86_pmu_commit_txn(struct pmu *pmu)
 	if (!x86_pmu_initialized())
 		return -EAGAIN;
 
-	ret = static_call(x86_pmu_schedule_events)(cpuc, n, assign);
+	ret = static_call(x86_pmu_schedule_events, cpuc, n, assign);
 	if (ret)
 		return ret;
 
@@ -2638,13 +2638,13 @@ static const struct attribute_group *x86_pmu_attr_groups[] = {
 
 static void x86_pmu_sched_task(struct perf_event_context *ctx, bool sched_in)
 {
-	static_call_cond(x86_pmu_sched_task)(ctx, sched_in);
+	static_call_cond(x86_pmu_sched_task, ctx, sched_in);
 }
 
 static void x86_pmu_swap_task_ctx(struct perf_event_context *prev,
 				  struct perf_event_context *next)
 {
-	static_call_cond(x86_pmu_swap_task_ctx)(prev, next);
+	static_call_cond(x86_pmu_swap_task_ctx, prev, next);
 }
 
 void perf_check_microcode(void)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4ff36610af6a..0d3869f6efc2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1576,7 +1576,7 @@ void kvm_arch_free_vm(struct kvm *kvm);
 static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
 {
 	if (kvm_x86_ops.tlb_remote_flush &&
-	    !static_call(kvm_x86_tlb_remote_flush)(kvm))
+	    !static_call(kvm_x86_tlb_remote_flush, kvm))
 		return 0;
 	else
 		return -ENOTSUPP;
@@ -1953,12 +1953,12 @@ static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
 
 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
 {
-	static_call_cond(kvm_x86_vcpu_blocking)(vcpu);
+	static_call_cond(kvm_x86_vcpu_blocking, vcpu);
 }
 
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 {
-	static_call_cond(kvm_x86_vcpu_unblocking)(vcpu);
+	static_call_cond(kvm_x86_vcpu_unblocking, vcpu);
 }
 
 static inline int kvm_cpu_get_apicid(int mps_cpu)
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 964442b99245..16aa752f1ccb 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -28,7 +28,7 @@ void paravirt_set_sched_clock(u64 (*func)(void));
 
 static inline u64 paravirt_sched_clock(void)
 {
-	return static_call(pv_sched_clock)();
+	return static_call(pv_sched_clock);
 }
 
 struct static_key;
@@ -42,7 +42,7 @@ bool pv_is_native_vcpu_is_preempted(void);
 
 static inline u64 paravirt_steal_clock(int cpu)
 {
-	return static_call(pv_steal_clock)(cpu);
+	return static_call(pv_steal_clock, cpu);
 }
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index b24ca7f4ed7c..e40e9b8b2bd6 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -311,7 +311,7 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	kvm_hv_set_cpuid(vcpu);
 
 	/* Invoke the vendor callback only after the above state is updated. */
-	static_call(kvm_x86_vcpu_after_set_cpuid)(vcpu);
+	static_call(kvm_x86_vcpu_after_set_cpuid, vcpu);
 
 	/*
 	 * Except for the MMU, which needs to do its thing any vendor specific
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 46f9dfb60469..b1b8006f9084 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1335,7 +1335,7 @@ static int kvm_hv_set_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data,
 		}
 
 		/* vmcall/vmmcall */
-		static_call(kvm_x86_patch_hypercall)(vcpu, instructions + i);
+		static_call(kvm_x86_patch_hypercall, vcpu, instructions + i);
 		i += 3;
 
 		/* ret */
@@ -2201,7 +2201,7 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 	 * hypercall generates UD from non zero cpl and real mode
 	 * per HYPER-V spec
 	 */
-	if (static_call(kvm_x86_get_cpl)(vcpu) != 0 || !is_protmode(vcpu)) {
+	if (static_call(kvm_x86_get_cpl, vcpu) != 0 || !is_protmode(vcpu)) {
 		kvm_queue_exception(vcpu, UD_VECTOR);
 		return 1;
 	}
diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c
index 172b05343cfd..b86cf55afe4d 100644
--- a/arch/x86/kvm/irq.c
+++ b/arch/x86/kvm/irq.c
@@ -150,7 +150,7 @@ void __kvm_migrate_timers(struct kvm_vcpu *vcpu)
 {
 	__kvm_migrate_apic_timer(vcpu);
 	__kvm_migrate_pit_timer(vcpu);
-	static_call_cond(kvm_x86_migrate_timers)(vcpu);
+	static_call_cond(kvm_x86_migrate_timers, vcpu);
 }
 
 bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args)
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 3febc342360c..643b4abb2797 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -86,7 +86,7 @@ static inline unsigned long kvm_register_read_raw(struct kvm_vcpu *vcpu, int reg
 		return 0;
 
 	if (!kvm_register_is_available(vcpu, reg))
-		static_call(kvm_x86_cache_reg)(vcpu, reg);
+		static_call(kvm_x86_cache_reg, vcpu, reg);
 
 	return vcpu->arch.regs[reg];
 }
@@ -126,7 +126,7 @@ static inline u64 kvm_pdptr_read(struct kvm_vcpu *vcpu, int index)
 	might_sleep();  /* on svm */
 
 	if (!kvm_register_is_available(vcpu, VCPU_EXREG_PDPTR))
-		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_PDPTR);
+		static_call(kvm_x86_cache_reg, vcpu, VCPU_EXREG_PDPTR);
 
 	return vcpu->arch.walk_mmu->pdptrs[index];
 }
@@ -141,7 +141,7 @@ static inline ulong kvm_read_cr0_bits(struct kvm_vcpu *vcpu, ulong mask)
 	ulong tmask = mask & KVM_POSSIBLE_CR0_GUEST_BITS;
 	if ((tmask & vcpu->arch.cr0_guest_owned_bits) &&
 	    !kvm_register_is_available(vcpu, VCPU_EXREG_CR0))
-		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR0);
+		static_call(kvm_x86_cache_reg, vcpu, VCPU_EXREG_CR0);
 	return vcpu->arch.cr0 & mask;
 }
 
@@ -155,14 +155,14 @@ static inline ulong kvm_read_cr4_bits(struct kvm_vcpu *vcpu, ulong mask)
 	ulong tmask = mask & KVM_POSSIBLE_CR4_GUEST_BITS;
 	if ((tmask & vcpu->arch.cr4_guest_owned_bits) &&
 	    !kvm_register_is_available(vcpu, VCPU_EXREG_CR4))
-		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR4);
+		static_call(kvm_x86_cache_reg, vcpu, VCPU_EXREG_CR4);
 	return vcpu->arch.cr4 & mask;
 }
 
 static inline ulong kvm_read_cr3(struct kvm_vcpu *vcpu)
 {
 	if (!kvm_register_is_available(vcpu, VCPU_EXREG_CR3))
-		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR3);
+		static_call(kvm_x86_cache_reg, vcpu, VCPU_EXREG_CR3);
 	return vcpu->arch.cr3;
 }
 
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 66b0eb0bda94..743b99eb43ef 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -525,7 +525,7 @@ static inline void apic_clear_irr(int vec, struct kvm_lapic *apic)
 	if (unlikely(vcpu->arch.apicv_active)) {
 		/* need to update RVI */
 		kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
-		static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
+		static_call_cond(kvm_x86_hwapic_irr_update, vcpu, apic_find_highest_irr(apic));
 	} else {
 		apic->irr_pending = false;
 		kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
@@ -555,7 +555,7 @@ static inline void apic_set_isr(int vec, struct kvm_lapic *apic)
 	 * just set SVI.
 	 */
 	if (unlikely(vcpu->arch.apicv_active))
-		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, vec);
+		static_call_cond(kvm_x86_hwapic_isr_update, vcpu, vec);
 	else {
 		++apic->isr_count;
 		BUG_ON(apic->isr_count > MAX_APIC_VECTOR);
@@ -603,7 +603,7 @@ static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
 	 * and must be left alone.
 	 */
 	if (unlikely(vcpu->arch.apicv_active))
-		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
+		static_call_cond(kvm_x86_hwapic_isr_update, vcpu, apic_find_highest_isr(apic));
 	else {
 		--apic->isr_count;
 		BUG_ON(apic->isr_count < 0);
@@ -739,7 +739,7 @@ static int apic_has_interrupt_for_ppr(struct kvm_lapic *apic, u32 ppr)
 {
 	int highest_irr;
 	if (kvm_x86_ops.sync_pir_to_irr)
-		highest_irr = static_call(kvm_x86_sync_pir_to_irr)(apic->vcpu);
+		highest_irr = static_call(kvm_x86_sync_pir_to_irr, apic->vcpu);
 	else
 		highest_irr = apic_find_highest_irr(apic);
 	if (highest_irr == -1 || (highest_irr & 0xF0) <= ppr)
@@ -1132,8 +1132,8 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 						       apic->regs + APIC_TMR);
 		}
 
-		static_call(kvm_x86_deliver_interrupt)(apic, delivery_mode,
-						       trig_mode, vector);
+		static_call(kvm_x86_deliver_interrupt, apic, delivery_mode,
+			    trig_mode, vector);
 		break;
 
 	case APIC_DM_REMRD:
@@ -1888,7 +1888,7 @@ static void cancel_hv_timer(struct kvm_lapic *apic)
 {
 	WARN_ON(preemptible());
 	WARN_ON(!apic->lapic_timer.hv_timer_in_use);
-	static_call(kvm_x86_cancel_hv_timer)(apic->vcpu);
+	static_call(kvm_x86_cancel_hv_timer, apic->vcpu);
 	apic->lapic_timer.hv_timer_in_use = false;
 }
 
@@ -1905,7 +1905,7 @@ static bool start_hv_timer(struct kvm_lapic *apic)
 	if (!ktimer->tscdeadline)
 		return false;
 
-	if (static_call(kvm_x86_set_hv_timer)(vcpu, ktimer->tscdeadline, &expired))
+	if (static_call(kvm_x86_set_hv_timer, vcpu, ktimer->tscdeadline, &expired))
 		return false;
 
 	ktimer->hv_timer_in_use = true;
@@ -2329,7 +2329,7 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value)
 		kvm_apic_set_x2apic_id(apic, vcpu->vcpu_id);
 
 	if ((old_value ^ value) & (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE))
-		static_call_cond(kvm_x86_set_virtual_apic_mode)(vcpu);
+		static_call_cond(kvm_x86_set_virtual_apic_mode, vcpu);
 
 	apic->base_address = apic->vcpu->arch.apic_base &
 			     MSR_IA32_APICBASE_BASE;
@@ -2419,9 +2419,9 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vcpu->arch.pv_eoi.msr_val = 0;
 	apic_update_ppr(apic);
 	if (vcpu->arch.apicv_active) {
-		static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
-		static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, -1);
-		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, -1);
+		static_call_cond(kvm_x86_apicv_post_state_restore, vcpu);
+		static_call_cond(kvm_x86_hwapic_irr_update, vcpu, -1);
+		static_call_cond(kvm_x86_hwapic_isr_update, vcpu, -1);
 	}
 
 	vcpu->arch.apic_arb_prio = 0;
@@ -2697,9 +2697,9 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
 	kvm_apic_update_apicv(vcpu);
 	apic->highest_isr_cache = -1;
 	if (vcpu->arch.apicv_active) {
-		static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
-		static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
-		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
+		static_call_cond(kvm_x86_apicv_post_state_restore, vcpu);
+		static_call_cond(kvm_x86_hwapic_irr_update, vcpu, apic_find_highest_irr(apic));
+		static_call_cond(kvm_x86_hwapic_isr_update, vcpu, apic_find_highest_isr(apic));
 	}
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	if (ioapic_in_kernel(vcpu->kvm))
@@ -3002,7 +3002,7 @@ int kvm_apic_accept_events(struct kvm_vcpu *vcpu)
 			/* evaluate pending_events before reading the vector */
 			smp_rmb();
 			sipi_vector = apic->sipi_vector;
-			static_call(kvm_x86_vcpu_deliver_sipi_vector)(vcpu, sipi_vector);
+			static_call(kvm_x86_vcpu_deliver_sipi_vector, vcpu, sipi_vector);
 			vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
 		}
 	}
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index e6cae6f22683..73880aa0b9e2 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -113,7 +113,7 @@ static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
 	if (!VALID_PAGE(root_hpa))
 		return;
 
-	static_call(kvm_x86_load_mmu_pgd)(vcpu, root_hpa,
+	static_call(kvm_x86_load_mmu_pgd, vcpu, root_hpa,
 					  vcpu->arch.mmu->shadow_root_level);
 }
 
@@ -218,7 +218,7 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 {
 	/* strip nested paging fault error codes */
 	unsigned int pfec = access;
-	unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
+	unsigned long rflags = static_call(kvm_x86_get_rflags, vcpu);
 
 	/*
 	 * For explicit supervisor accesses, SMAP is disabled if EFLAGS.AC = 1.
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f9080ee50ffa..0bdf76d94875 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -268,7 +268,7 @@ static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
 	int ret = -ENOTSUPP;
 
 	if (range && kvm_x86_ops.tlb_remote_flush_with_range)
-		ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, range);
+		ret = static_call(kvm_x86_tlb_remote_flush_with_range, kvm, range);
 
 	if (ret)
 		kvm_flush_remote_tlbs(kvm);
@@ -5102,7 +5102,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
 	 * stale entries.  Flushing on alloc also allows KVM to skip the TLB
 	 * flush when freeing a root (see kvm_tdp_mmu_put_root()).
 	 */
-	static_call(kvm_x86_flush_tlb_current)(vcpu);
+	static_call(kvm_x86_flush_tlb_current, vcpu);
 out:
 	return r;
 }
@@ -5408,7 +5408,7 @@ void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 		if (is_noncanonical_address(gva, vcpu))
 			return;
 
-		static_call(kvm_x86_flush_tlb_gva)(vcpu, gva);
+		static_call(kvm_x86_flush_tlb_gva, vcpu, gva);
 	}
 
 	if (!mmu->invlpg)
@@ -5464,7 +5464,7 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
 	}
 
 	if (tlb_flush)
-		static_call(kvm_x86_flush_tlb_gva)(vcpu, gva);
+		static_call(kvm_x86_flush_tlb_gva, vcpu, gva);
 
 	++vcpu->stat.invlpg;
 
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4739b53c9734..6b7bae4778a4 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -131,8 +131,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if (level > PG_LEVEL_4K)
 		spte |= PT_PAGE_SIZE_MASK;
 	if (tdp_enabled)
-		spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
-			kvm_is_mmio_pfn(pfn));
+		spte |= static_call(kvm_x86_get_mt_mask, vcpu, gfn,
+				    kvm_is_mmio_pfn(pfn));
 
 	if (host_writable)
 		spte |= shadow_host_writable_mask;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index eca39f56c231..4361f0e247ee 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -371,7 +371,7 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 		return 1;
 
 	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
-	    (static_call(kvm_x86_get_cpl)(vcpu) != 0) &&
+	    (static_call(kvm_x86_get_cpl, vcpu) != 0) &&
 	    (kvm_read_cr0(vcpu) & X86_CR0_PE))
 		return 1;
 
@@ -523,7 +523,7 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
 		select_user = config & 0x2;
 	}
 
-	return (static_call(kvm_x86_get_cpl)(pmc->vcpu) == 0) ? select_os : select_user;
+	return (static_call(kvm_x86_get_cpl, pmc->vcpu) == 0) ? select_os : select_user;
 }
 
 void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index e3a24b8f04be..a4845e1b5574 100644
--- a/arch/x86/kvm/trace.h
+++ b/arch/x86/kvm/trace.h
@@ -308,7 +308,7 @@ TRACE_EVENT(name,							     \
 		__entry->guest_rip	= kvm_rip_read(vcpu);		     \
 		__entry->isa            = isa;				     \
 		__entry->vcpu_id        = vcpu->vcpu_id;		     \
-		static_call(kvm_x86_get_exit_info)(vcpu,		     \
+		static_call(kvm_x86_get_exit_info, vcpu,		     \
 					  &__entry->exit_reason,	     \
 					  &__entry->info1,		     \
 					  &__entry->info2,		     \
@@ -792,7 +792,7 @@ TRACE_EVENT(kvm_emulate_insn,
 		),
 
 	TP_fast_assign(
-		__entry->csbase = static_call(kvm_x86_get_segment_base)(vcpu, VCPU_SREG_CS);
+		__entry->csbase = static_call(kvm_x86_get_segment_base, vcpu, VCPU_SREG_CS);
 		__entry->len = vcpu->arch.emulate_ctxt->fetch.ptr
 			       - vcpu->arch.emulate_ctxt->fetch.data;
 		__entry->rip = vcpu->arch.emulate_ctxt->_eip - __entry->len;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a6ab19afc638..ca400a219241 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -796,7 +796,7 @@ EXPORT_SYMBOL_GPL(kvm_requeue_exception_e);
  */
 bool kvm_require_cpl(struct kvm_vcpu *vcpu, int required_cpl)
 {
-	if (static_call(kvm_x86_get_cpl)(vcpu) <= required_cpl)
+	if (static_call(kvm_x86_get_cpl, vcpu) <= required_cpl)
 		return true;
 	kvm_queue_exception_e(vcpu, GP_VECTOR, 0);
 	return false;
@@ -918,7 +918,7 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 
 		if (!is_pae(vcpu))
 			return 1;
-		static_call(kvm_x86_get_cs_db_l_bits)(vcpu, &cs_db, &cs_l);
+		static_call(kvm_x86_get_cs_db_l_bits, vcpu, &cs_db, &cs_l);
 		if (cs_l)
 			return 1;
 	}
@@ -932,7 +932,7 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	    (is_64_bit_mode(vcpu) || kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE)))
 		return 1;
 
-	static_call(kvm_x86_set_cr0)(vcpu, cr0);
+	static_call(kvm_x86_set_cr0, vcpu, cr0);
 
 	kvm_post_set_cr0(vcpu, old_cr0, cr0);
 
@@ -1054,7 +1054,7 @@ static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
 
 int kvm_emulate_xsetbv(struct kvm_vcpu *vcpu)
 {
-	if (static_call(kvm_x86_get_cpl)(vcpu) != 0 ||
+	if (static_call(kvm_x86_get_cpl, vcpu) != 0 ||
 	    __kvm_set_xcr(vcpu, kvm_rcx_read(vcpu), kvm_read_edx_eax(vcpu))) {
 		kvm_inject_gp(vcpu, 0);
 		return 1;
@@ -1072,7 +1072,7 @@ bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	if (cr4 & vcpu->arch.cr4_guest_rsvd_bits)
 		return false;
 
-	return static_call(kvm_x86_is_valid_cr4)(vcpu, cr4);
+	return static_call(kvm_x86_is_valid_cr4, vcpu, cr4);
 }
 EXPORT_SYMBOL_GPL(kvm_is_valid_cr4);
 
@@ -1144,7 +1144,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 			return 1;
 	}
 
-	static_call(kvm_x86_set_cr4)(vcpu, cr4);
+	static_call(kvm_x86_set_cr4, vcpu, cr4);
 
 	kvm_post_set_cr4(vcpu, old_cr4, cr4);
 
@@ -1285,7 +1285,7 @@ void kvm_update_dr7(struct kvm_vcpu *vcpu)
 		dr7 = vcpu->arch.guest_debug_dr7;
 	else
 		dr7 = vcpu->arch.dr7;
-	static_call(kvm_x86_set_dr7)(vcpu, dr7);
+	static_call(kvm_x86_set_dr7, vcpu, dr7);
 	vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_BP_ENABLED;
 	if (dr7 & DR7_BP_EN_MASK)
 		vcpu->arch.switch_db_regs |= KVM_DEBUGREG_BP_ENABLED;
@@ -1600,7 +1600,7 @@ static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
 		rdmsrl_safe(msr->index, &msr->data);
 		break;
 	default:
-		return static_call(kvm_x86_get_msr_feature)(msr);
+		return static_call(kvm_x86_get_msr_feature, msr);
 	}
 	return 0;
 }
@@ -1676,7 +1676,7 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	efer &= ~EFER_LMA;
 	efer |= vcpu->arch.efer & EFER_LMA;
 
-	r = static_call(kvm_x86_set_efer)(vcpu, efer);
+	r = static_call(kvm_x86_set_efer, vcpu, efer);
 	if (r) {
 		WARN_ON(r > 0);
 		return r;
@@ -1802,7 +1802,7 @@ static int __kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data,
 	msr.index = index;
 	msr.host_initiated = host_initiated;
 
-	return static_call(kvm_x86_set_msr)(vcpu, &msr);
+	return static_call(kvm_x86_set_msr, vcpu, &msr);
 }
 
 static int kvm_set_msr_ignored_check(struct kvm_vcpu *vcpu,
@@ -1844,7 +1844,7 @@ int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data,
 	msr.index = index;
 	msr.host_initiated = host_initiated;
 
-	ret = static_call(kvm_x86_get_msr)(vcpu, &msr);
+	ret = static_call(kvm_x86_get_msr, vcpu, &msr);
 	if (!ret)
 		*data = msr.data;
 	return ret;
@@ -1912,7 +1912,7 @@ static int complete_emulated_rdmsr(struct kvm_vcpu *vcpu)
 
 static int complete_fast_msr_access(struct kvm_vcpu *vcpu)
 {
-	return static_call(kvm_x86_complete_emulated_msr)(vcpu, vcpu->run->msr.error);
+	return static_call(kvm_x86_complete_emulated_msr, vcpu, vcpu->run->msr.error);
 }
 
 static int complete_fast_rdmsr(struct kvm_vcpu *vcpu)
@@ -1976,7 +1976,7 @@ int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
 		trace_kvm_msr_read_ex(ecx);
 	}
 
-	return static_call(kvm_x86_complete_emulated_msr)(vcpu, r);
+	return static_call(kvm_x86_complete_emulated_msr, vcpu, r);
 }
 EXPORT_SYMBOL_GPL(kvm_emulate_rdmsr);
 
@@ -2001,7 +2001,7 @@ int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
 		trace_kvm_msr_write_ex(ecx, data);
 	}
 
-	return static_call(kvm_x86_complete_emulated_msr)(vcpu, r);
+	return static_call(kvm_x86_complete_emulated_msr, vcpu, r);
 }
 EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);
 
@@ -2507,12 +2507,12 @@ static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 l1_offset)
 	if (is_guest_mode(vcpu))
 		vcpu->arch.tsc_offset = kvm_calc_nested_tsc_offset(
 			l1_offset,
-			static_call(kvm_x86_get_l2_tsc_offset)(vcpu),
-			static_call(kvm_x86_get_l2_tsc_multiplier)(vcpu));
+			static_call(kvm_x86_get_l2_tsc_offset, vcpu),
+			static_call(kvm_x86_get_l2_tsc_multiplier, vcpu));
 	else
 		vcpu->arch.tsc_offset = l1_offset;
 
-	static_call(kvm_x86_write_tsc_offset)(vcpu, vcpu->arch.tsc_offset);
+	static_call(kvm_x86_write_tsc_offset, vcpu, vcpu->arch.tsc_offset);
 }
 
 static void kvm_vcpu_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 l1_multiplier)
@@ -2523,13 +2523,13 @@ static void kvm_vcpu_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 l1_multipli
 	if (is_guest_mode(vcpu))
 		vcpu->arch.tsc_scaling_ratio = kvm_calc_nested_tsc_multiplier(
 			l1_multiplier,
-			static_call(kvm_x86_get_l2_tsc_multiplier)(vcpu));
+			static_call(kvm_x86_get_l2_tsc_multiplier, vcpu));
 	else
 		vcpu->arch.tsc_scaling_ratio = l1_multiplier;
 
 	if (kvm_has_tsc_control)
-		static_call(kvm_x86_write_tsc_multiplier)(
-			vcpu, vcpu->arch.tsc_scaling_ratio);
+		static_call(kvm_x86_write_tsc_multiplier, vcpu,
+			    vcpu->arch.tsc_scaling_ratio);
 }
 
 static inline bool kvm_check_tsc_unstable(void)
@@ -3307,7 +3307,7 @@ static void kvmclock_reset(struct kvm_vcpu *vcpu)
 static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.tlb_flush;
-	static_call(kvm_x86_flush_tlb_all)(vcpu);
+	static_call(kvm_x86_flush_tlb_all, vcpu);
 }
 
 static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
@@ -3325,14 +3325,14 @@ static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
 		kvm_mmu_sync_prev_roots(vcpu);
 	}
 
-	static_call(kvm_x86_flush_tlb_guest)(vcpu);
+	static_call(kvm_x86_flush_tlb_guest, vcpu);
 }
 
 
 static inline void kvm_vcpu_flush_tlb_current(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.tlb_flush;
-	static_call(kvm_x86_flush_tlb_current)(vcpu);
+	static_call(kvm_x86_flush_tlb_current, vcpu);
 }
 
 /*
@@ -4310,7 +4310,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		 * fringe case that is not enabled except via specific settings
 		 * of the module parameters.
 		 */
-		r = static_call(kvm_x86_has_emulated_msr)(kvm, MSR_IA32_SMBASE);
+		r = static_call(kvm_x86_has_emulated_msr, kvm, MSR_IA32_SMBASE);
 		break;
 	case KVM_CAP_NR_VCPUS:
 		r = min_t(unsigned int, num_online_cpus(), KVM_MAX_VCPUS);
@@ -4548,14 +4548,14 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	/* Address WBINVD may be executed by guest */
 	if (need_emulate_wbinvd(vcpu)) {
-		if (static_call(kvm_x86_has_wbinvd_exit)())
+		if (static_call(kvm_x86_has_wbinvd_exit))
 			cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
 		else if (vcpu->cpu != -1 && vcpu->cpu != cpu)
 			smp_call_function_single(vcpu->cpu,
 					wbinvd_ipi, NULL, 1);
 	}
 
-	static_call(kvm_x86_vcpu_load)(vcpu, cpu);
+	static_call(kvm_x86_vcpu_load, vcpu, cpu);
 
 	/* Save host pkru register if supported */
 	vcpu->arch.host_pkru = read_pkru();
@@ -4634,7 +4634,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	int idx;
 
 	if (vcpu->preempted && !vcpu->arch.guest_state_protected)
-		vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl)(vcpu);
+		vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl, vcpu);
 
 	/*
 	 * Take the srcu lock as memslots will be accessed to check the gfn
@@ -4647,14 +4647,14 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 		kvm_steal_time_set_preempted(vcpu);
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 
-	static_call(kvm_x86_vcpu_put)(vcpu);
+	static_call(kvm_x86_vcpu_put, vcpu);
 	vcpu->arch.last_host_tsc = rdtsc();
 }
 
 static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu,
 				    struct kvm_lapic_state *s)
 {
-	static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
+	static_call_cond(kvm_x86_sync_pir_to_irr, vcpu);
 
 	return kvm_apic_get_state(vcpu, s);
 }
@@ -4773,7 +4773,7 @@ static int kvm_vcpu_ioctl_x86_setup_mce(struct kvm_vcpu *vcpu,
 	for (bank = 0; bank < bank_num; bank++)
 		vcpu->arch.mce_banks[bank*4] = ~(u64)0;
 
-	static_call(kvm_x86_setup_mce)(vcpu);
+	static_call(kvm_x86_setup_mce, vcpu);
 out:
 	return r;
 }
@@ -4880,11 +4880,11 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 		vcpu->arch.interrupt.injected && !vcpu->arch.interrupt.soft;
 	events->interrupt.nr = vcpu->arch.interrupt.nr;
 	events->interrupt.soft = 0;
-	events->interrupt.shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
+	events->interrupt.shadow = static_call(kvm_x86_get_interrupt_shadow, vcpu);
 
 	events->nmi.injected = vcpu->arch.nmi_injected;
 	events->nmi.pending = vcpu->arch.nmi_pending != 0;
-	events->nmi.masked = static_call(kvm_x86_get_nmi_mask)(vcpu);
+	events->nmi.masked = static_call(kvm_x86_get_nmi_mask, vcpu);
 	events->nmi.pad = 0;
 
 	events->sipi_vector = 0; /* never valid when reporting to user space */
@@ -4951,13 +4951,13 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 	vcpu->arch.interrupt.nr = events->interrupt.nr;
 	vcpu->arch.interrupt.soft = events->interrupt.soft;
 	if (events->flags & KVM_VCPUEVENT_VALID_SHADOW)
-		static_call(kvm_x86_set_interrupt_shadow)(vcpu,
-						events->interrupt.shadow);
+		static_call(kvm_x86_set_interrupt_shadow, vcpu,
+			    events->interrupt.shadow);
 
 	vcpu->arch.nmi_injected = events->nmi.injected;
 	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
 		vcpu->arch.nmi_pending = events->nmi.pending;
-	static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked);
+	static_call(kvm_x86_set_nmi_mask, vcpu, events->nmi.masked);
 
 	if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR &&
 	    lapic_in_kernel(vcpu))
@@ -5254,7 +5254,7 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 		if (!kvm_x86_ops.enable_direct_tlbflush)
 			return -ENOTTY;
 
-		return static_call(kvm_x86_enable_direct_tlbflush)(vcpu);
+		return static_call(kvm_x86_enable_direct_tlbflush, vcpu);
 
 	case KVM_CAP_HYPERV_ENFORCE_CPUID:
 		return kvm_hv_set_enforce_cpuid(vcpu, cap->args[0]);
@@ -5723,14 +5723,14 @@ static int kvm_vm_ioctl_set_tss_addr(struct kvm *kvm, unsigned long addr)
 
 	if (addr > (unsigned int)(-3 * PAGE_SIZE))
 		return -EINVAL;
-	ret = static_call(kvm_x86_set_tss_addr)(kvm, addr);
+	ret = static_call(kvm_x86_set_tss_addr, kvm, addr);
 	return ret;
 }
 
 static int kvm_vm_ioctl_set_identity_map_addr(struct kvm *kvm,
 					      u64 ident_addr)
 {
-	return static_call(kvm_x86_set_identity_map_addr)(kvm, ident_addr);
+	return static_call(kvm_x86_set_identity_map_addr, kvm, ident_addr);
 }
 
 static int kvm_vm_ioctl_set_nr_mmu_pages(struct kvm *kvm,
@@ -6027,14 +6027,14 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		if (!kvm_x86_ops.vm_copy_enc_context_from)
 			break;
 
-		r = static_call(kvm_x86_vm_copy_enc_context_from)(kvm, cap->args[0]);
+		r = static_call(kvm_x86_vm_copy_enc_context_from, kvm, cap->args[0]);
 		break;
 	case KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM:
 		r = -EINVAL;
 		if (!kvm_x86_ops.vm_move_enc_context_from)
 			break;
 
-		r = static_call(kvm_x86_vm_move_enc_context_from)(kvm, cap->args[0]);
+		r = static_call(kvm_x86_vm_move_enc_context_from, kvm, cap->args[0]);
 		break;
 	case KVM_CAP_EXIT_HYPERCALL:
 		if (cap->args[0] & ~KVM_EXIT_HYPERCALL_VALID_MASK) {
@@ -6525,7 +6525,7 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		if (!kvm_x86_ops.mem_enc_ioctl)
 			goto out;
 
-		r = static_call(kvm_x86_mem_enc_ioctl)(kvm, argp);
+		r = static_call(kvm_x86_mem_enc_ioctl, kvm, argp);
 		break;
 	}
 	case KVM_MEMORY_ENCRYPT_REG_REGION: {
@@ -6539,7 +6539,7 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		if (!kvm_x86_ops.mem_enc_register_region)
 			goto out;
 
-		r = static_call(kvm_x86_mem_enc_register_region)(kvm, &region);
+		r = static_call(kvm_x86_mem_enc_register_region, kvm, &region);
 		break;
 	}
 	case KVM_MEMORY_ENCRYPT_UNREG_REGION: {
@@ -6553,7 +6553,8 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		if (!kvm_x86_ops.mem_enc_unregister_region)
 			goto out;
 
-		r = static_call(kvm_x86_mem_enc_unregister_region)(kvm, &region);
+		r = static_call(kvm_x86_mem_enc_unregister_region, kvm,
+				&region);
 		break;
 	}
 	case KVM_HYPERV_EVENTFD: {
@@ -6661,7 +6662,7 @@ static void kvm_init_msr_list(void)
 	}
 
 	for (i = 0; i < ARRAY_SIZE(emulated_msrs_all); i++) {
-		if (!static_call(kvm_x86_has_emulated_msr)(NULL, emulated_msrs_all[i]))
+		if (!static_call(kvm_x86_has_emulated_msr, NULL, emulated_msrs_all[i]))
 			continue;
 
 		emulated_msrs[num_emulated_msrs++] = emulated_msrs_all[i];
@@ -6724,13 +6725,13 @@ static int vcpu_mmio_read(struct kvm_vcpu *vcpu, gpa_t addr, int len, void *v)
 static void kvm_set_segment(struct kvm_vcpu *vcpu,
 			struct kvm_segment *var, int seg)
 {
-	static_call(kvm_x86_set_segment)(vcpu, var, seg);
+	static_call(kvm_x86_set_segment, vcpu, var, seg);
 }
 
 void kvm_get_segment(struct kvm_vcpu *vcpu,
 		     struct kvm_segment *var, int seg)
 {
-	static_call(kvm_x86_get_segment)(vcpu, var, seg);
+	static_call(kvm_x86_get_segment, vcpu, var, seg);
 }
 
 gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u64 access,
@@ -6753,7 +6754,7 @@ gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva,
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 
-	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0;
 	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_read);
@@ -6763,7 +6764,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_read);
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 
-	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0;
 	access |= PFERR_FETCH_MASK;
 	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
@@ -6773,7 +6774,7 @@ gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva,
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 
-	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0;
 	access |= PFERR_WRITE_MASK;
 	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
@@ -6826,7 +6827,7 @@ static int kvm_fetch_guest_virt(struct x86_emulate_ctxt *ctxt,
 {
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
-	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0;
 	unsigned offset;
 	int ret;
 
@@ -6851,7 +6852,7 @@ int kvm_read_guest_virt(struct kvm_vcpu *vcpu,
 			       gva_t addr, void *val, unsigned int bytes,
 			       struct x86_exception *exception)
 {
-	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0;
 
 	/*
 	 * FIXME: this should call handle_emulation_failure if X86EMUL_IO_NEEDED
@@ -6874,7 +6875,7 @@ static int emulator_read_std(struct x86_emulate_ctxt *ctxt,
 
 	if (system)
 		access |= PFERR_IMPLICIT_ACCESS;
-	else if (static_call(kvm_x86_get_cpl)(vcpu) == 3)
+	else if (static_call(kvm_x86_get_cpl, vcpu) == 3)
 		access |= PFERR_USER_MASK;
 
 	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access, exception);
@@ -6928,7 +6929,7 @@ static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *v
 
 	if (system)
 		access |= PFERR_IMPLICIT_ACCESS;
-	else if (static_call(kvm_x86_get_cpl)(vcpu) == 3)
+	else if (static_call(kvm_x86_get_cpl, vcpu) == 3)
 		access |= PFERR_USER_MASK;
 
 	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
@@ -6949,8 +6950,8 @@ EXPORT_SYMBOL_GPL(kvm_write_guest_virt_system);
 static int kvm_can_emulate_insn(struct kvm_vcpu *vcpu, int emul_type,
 				void *insn, int insn_len)
 {
-	return static_call(kvm_x86_can_emulate_instruction)(vcpu, emul_type,
-							    insn, insn_len);
+	return static_call(kvm_x86_can_emulate_instruction, vcpu, emul_type,
+			   insn, insn_len);
 }
 
 int handle_ud(struct kvm_vcpu *vcpu)
@@ -6995,7 +6996,7 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 				bool write)
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
-	u64 access = ((static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0)
+	u64 access = ((static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0)
 		| (write ? PFERR_WRITE_MASK : 0);
 
 	/*
@@ -7425,7 +7426,7 @@ static int emulator_pio_out_emulated(struct x86_emulate_ctxt *ctxt,
 
 static unsigned long get_segment_base(struct kvm_vcpu *vcpu, int seg)
 {
-	return static_call(kvm_x86_get_segment_base)(vcpu, seg);
+	return static_call(kvm_x86_get_segment_base, vcpu, seg);
 }
 
 static void emulator_invlpg(struct x86_emulate_ctxt *ctxt, ulong address)
@@ -7438,7 +7439,7 @@ static int kvm_emulate_wbinvd_noskip(struct kvm_vcpu *vcpu)
 	if (!need_emulate_wbinvd(vcpu))
 		return X86EMUL_CONTINUE;
 
-	if (static_call(kvm_x86_has_wbinvd_exit)()) {
+	if (static_call(kvm_x86_has_wbinvd_exit)) {
 		int cpu = get_cpu();
 
 		cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
@@ -7543,27 +7544,27 @@ static int emulator_set_cr(struct x86_emulate_ctxt *ctxt, int cr, ulong val)
 
 static int emulator_get_cpl(struct x86_emulate_ctxt *ctxt)
 {
-	return static_call(kvm_x86_get_cpl)(emul_to_vcpu(ctxt));
+	return static_call(kvm_x86_get_cpl, emul_to_vcpu(ctxt));
 }
 
 static void emulator_get_gdt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt)
 {
-	static_call(kvm_x86_get_gdt)(emul_to_vcpu(ctxt), dt);
+	static_call(kvm_x86_get_gdt, emul_to_vcpu(ctxt), dt);
 }
 
 static void emulator_get_idt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt)
 {
-	static_call(kvm_x86_get_idt)(emul_to_vcpu(ctxt), dt);
+	static_call(kvm_x86_get_idt, emul_to_vcpu(ctxt), dt);
 }
 
 static void emulator_set_gdt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt)
 {
-	static_call(kvm_x86_set_gdt)(emul_to_vcpu(ctxt), dt);
+	static_call(kvm_x86_set_gdt, emul_to_vcpu(ctxt), dt);
 }
 
 static void emulator_set_idt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt)
 {
-	static_call(kvm_x86_set_idt)(emul_to_vcpu(ctxt), dt);
+	static_call(kvm_x86_set_idt, emul_to_vcpu(ctxt), dt);
 }
 
 static unsigned long emulator_get_cached_segment_base(
@@ -7721,8 +7722,8 @@ static int emulator_intercept(struct x86_emulate_ctxt *ctxt,
 			      struct x86_instruction_info *info,
 			      enum x86_intercept_stage stage)
 {
-	return static_call(kvm_x86_check_intercept)(emul_to_vcpu(ctxt), info, stage,
-					    &ctxt->exception);
+	return static_call(kvm_x86_check_intercept, emul_to_vcpu(ctxt), info,
+			   stage, &ctxt->exception);
 }
 
 static bool emulator_get_cpuid(struct x86_emulate_ctxt *ctxt,
@@ -7764,7 +7765,7 @@ static void emulator_write_gpr(struct x86_emulate_ctxt *ctxt, unsigned reg, ulon
 
 static void emulator_set_nmi_mask(struct x86_emulate_ctxt *ctxt, bool masked)
 {
-	static_call(kvm_x86_set_nmi_mask)(emul_to_vcpu(ctxt), masked);
+	static_call(kvm_x86_set_nmi_mask, emul_to_vcpu(ctxt), masked);
 }
 
 static unsigned emulator_get_hflags(struct x86_emulate_ctxt *ctxt)
@@ -7782,7 +7783,7 @@ static void emulator_exiting_smm(struct x86_emulate_ctxt *ctxt)
 static int emulator_leave_smm(struct x86_emulate_ctxt *ctxt,
 				  const char *smstate)
 {
-	return static_call(kvm_x86_leave_smm)(emul_to_vcpu(ctxt), smstate);
+	return static_call(kvm_x86_leave_smm, emul_to_vcpu(ctxt), smstate);
 }
 
 static void emulator_triple_fault(struct x86_emulate_ctxt *ctxt)
@@ -7847,7 +7848,7 @@ static const struct x86_emulate_ops emulate_ops = {
 
 static void toggle_interruptibility(struct kvm_vcpu *vcpu, u32 mask)
 {
-	u32 int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
+	u32 int_shadow = static_call(kvm_x86_get_interrupt_shadow, vcpu);
 	/*
 	 * an sti; sti; sequence only disables interrupts for the first
 	 * instruction. So, if the last instruction, be it emulated or
@@ -7858,7 +7859,7 @@ static void toggle_interruptibility(struct kvm_vcpu *vcpu, u32 mask)
 	if (int_shadow & mask)
 		mask = 0;
 	if (unlikely(int_shadow || mask)) {
-		static_call(kvm_x86_set_interrupt_shadow)(vcpu, mask);
+		static_call(kvm_x86_set_interrupt_shadow, vcpu, mask);
 		if (!mask)
 			kvm_make_request(KVM_REQ_EVENT, vcpu);
 	}
@@ -7900,7 +7901,7 @@ static void init_emulate_ctxt(struct kvm_vcpu *vcpu)
 	struct x86_emulate_ctxt *ctxt = vcpu->arch.emulate_ctxt;
 	int cs_db, cs_l;
 
-	static_call(kvm_x86_get_cs_db_l_bits)(vcpu, &cs_db, &cs_l);
+	static_call(kvm_x86_get_cs_db_l_bits, vcpu, &cs_db, &cs_l);
 
 	ctxt->gpa_available = false;
 	ctxt->eflags = kvm_get_rflags(vcpu);
@@ -7960,9 +7961,8 @@ static void prepare_emulation_failure_exit(struct kvm_vcpu *vcpu, u64 *data,
 	 */
 	memset(&info, 0, sizeof(info));
 
-	static_call(kvm_x86_get_exit_info)(vcpu, (u32 *)&info[0], &info[1],
-					   &info[2], (u32 *)&info[3],
-					   (u32 *)&info[4]);
+	static_call(kvm_x86_get_exit_info, vcpu, (u32 *)&info[0], &info[1],
+		    &info[2], (u32 *)&info[3], (u32 *)&info[4]);
 
 	run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 	run->emulation_failure.suberror = KVM_INTERNAL_ERROR_EMULATION;
@@ -8039,7 +8039,7 @@ static int handle_emulation_failure(struct kvm_vcpu *vcpu, int emulation_type)
 
 	kvm_queue_exception(vcpu, UD_VECTOR);
 
-	if (!is_guest_mode(vcpu) && static_call(kvm_x86_get_cpl)(vcpu) == 0) {
+	if (!is_guest_mode(vcpu) && static_call(kvm_x86_get_cpl, vcpu) == 0) {
 		prepare_emulation_ctxt_failure_exit(vcpu);
 		return 0;
 	}
@@ -8228,10 +8228,10 @@ static int kvm_vcpu_do_singlestep(struct kvm_vcpu *vcpu)
 
 int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
 {
-	unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
+	unsigned long rflags = static_call(kvm_x86_get_rflags, vcpu);
 	int r;
 
-	r = static_call(kvm_x86_skip_emulated_instruction)(vcpu);
+	r = static_call(kvm_x86_skip_emulated_instruction, vcpu);
 	if (unlikely(!r))
 		return 0;
 
@@ -8494,7 +8494,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 
 writeback:
 	if (writeback) {
-		unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
+		unsigned long rflags = static_call(kvm_x86_get_rflags, vcpu);
 		toggle_interruptibility(vcpu, ctxt->interruptibility);
 		vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
 		if (!ctxt->have_exception ||
@@ -8505,7 +8505,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 			kvm_rip_write(vcpu, ctxt->eip);
 			if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
 				r = kvm_vcpu_do_singlestep(vcpu);
-			static_call_cond(kvm_x86_update_emulated_instruction)(vcpu);
+			static_call_cond(kvm_x86_update_emulated_instruction, vcpu);
 			__kvm_set_rflags(vcpu, ctxt->eflags);
 		}
 
@@ -9187,7 +9187,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		a3 &= 0xFFFFFFFF;
 	}
 
-	if (static_call(kvm_x86_get_cpl)(vcpu) != 0) {
+	if (static_call(kvm_x86_get_cpl, vcpu) != 0) {
 		ret = -KVM_EPERM;
 		goto out;
 	}
@@ -9266,7 +9266,7 @@ static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt)
 	char instruction[3];
 	unsigned long rip = kvm_rip_read(vcpu);
 
-	static_call(kvm_x86_patch_hypercall)(vcpu, instruction);
+	static_call(kvm_x86_patch_hypercall, vcpu, instruction);
 
 	return emulator_write_emulated(ctxt, rip, instruction, 3,
 		&ctxt->exception);
@@ -9283,7 +9283,7 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *kvm_run = vcpu->run;
 
-	kvm_run->if_flag = static_call(kvm_x86_get_if_flag)(vcpu);
+	kvm_run->if_flag = static_call(kvm_x86_get_if_flag, vcpu);
 	kvm_run->cr8 = kvm_get_cr8(vcpu);
 	kvm_run->apic_base = kvm_get_apic_base(vcpu);
 
@@ -9318,7 +9318,7 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu)
 
 	tpr = kvm_lapic_get_cr8(vcpu);
 
-	static_call(kvm_x86_update_cr8_intercept)(vcpu, tpr, max_irr);
+	static_call(kvm_x86_update_cr8_intercept, vcpu, tpr, max_irr);
 }
 
 
@@ -9336,7 +9336,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 {
 	if (vcpu->arch.exception.error_code && !is_protmode(vcpu))
 		vcpu->arch.exception.error_code = false;
-	static_call(kvm_x86_queue_exception)(vcpu);
+	static_call(kvm_x86_queue_exception, vcpu);
 }
 
 static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
@@ -9366,10 +9366,10 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 	 */
 	else if (!vcpu->arch.exception.pending) {
 		if (vcpu->arch.nmi_injected) {
-			static_call(kvm_x86_inject_nmi)(vcpu);
+			static_call(kvm_x86_inject_nmi, vcpu);
 			can_inject = false;
 		} else if (vcpu->arch.interrupt.injected) {
-			static_call(kvm_x86_inject_irq)(vcpu);
+			static_call(kvm_x86_inject_irq, vcpu);
 			can_inject = false;
 		}
 	}
@@ -9430,7 +9430,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 	 * The kvm_x86_ops hooks communicate this by returning -EBUSY.
 	 */
 	if (vcpu->arch.smi_pending) {
-		r = can_inject ? static_call(kvm_x86_smi_allowed)(vcpu, true) : -EBUSY;
+		r = can_inject ? static_call(kvm_x86_smi_allowed, vcpu, true) : -EBUSY;
 		if (r < 0)
 			goto out;
 		if (r) {
@@ -9439,35 +9439,35 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 			enter_smm(vcpu);
 			can_inject = false;
 		} else
-			static_call(kvm_x86_enable_smi_window)(vcpu);
+			static_call(kvm_x86_enable_smi_window, vcpu);
 	}
 
 	if (vcpu->arch.nmi_pending) {
-		r = can_inject ? static_call(kvm_x86_nmi_allowed)(vcpu, true) : -EBUSY;
+		r = can_inject ? static_call(kvm_x86_nmi_allowed, vcpu, true) : -EBUSY;
 		if (r < 0)
 			goto out;
 		if (r) {
 			--vcpu->arch.nmi_pending;
 			vcpu->arch.nmi_injected = true;
-			static_call(kvm_x86_inject_nmi)(vcpu);
+			static_call(kvm_x86_inject_nmi, vcpu);
 			can_inject = false;
-			WARN_ON(static_call(kvm_x86_nmi_allowed)(vcpu, true) < 0);
+			WARN_ON(static_call(kvm_x86_nmi_allowed, vcpu, true) < 0);
 		}
 		if (vcpu->arch.nmi_pending)
-			static_call(kvm_x86_enable_nmi_window)(vcpu);
+			static_call(kvm_x86_enable_nmi_window, vcpu);
 	}
 
 	if (kvm_cpu_has_injectable_intr(vcpu)) {
-		r = can_inject ? static_call(kvm_x86_interrupt_allowed)(vcpu, true) : -EBUSY;
+		r = can_inject ? static_call(kvm_x86_interrupt_allowed, vcpu, true) : -EBUSY;
 		if (r < 0)
 			goto out;
 		if (r) {
 			kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu), false);
-			static_call(kvm_x86_inject_irq)(vcpu);
-			WARN_ON(static_call(kvm_x86_interrupt_allowed)(vcpu, true) < 0);
+			static_call(kvm_x86_inject_irq, vcpu);
+			WARN_ON(static_call(kvm_x86_interrupt_allowed, vcpu, true) < 0);
 		}
 		if (kvm_cpu_has_injectable_intr(vcpu))
-			static_call(kvm_x86_enable_irq_window)(vcpu);
+			static_call(kvm_x86_enable_irq_window, vcpu);
 	}
 
 	if (is_guest_mode(vcpu) &&
@@ -9495,7 +9495,7 @@ static void process_nmi(struct kvm_vcpu *vcpu)
 	 * If an NMI is already in progress, limit further NMIs to just one.
 	 * Otherwise, allow two (and we'll inject the first one immediately).
 	 */
-	if (static_call(kvm_x86_get_nmi_mask)(vcpu) || vcpu->arch.nmi_injected)
+	if (static_call(kvm_x86_get_nmi_mask, vcpu) || vcpu->arch.nmi_injected)
 		limit = 1;
 
 	vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
@@ -9585,11 +9585,11 @@ static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf)
 	put_smstate(u32, buf, 0x7f7c, seg.limit);
 	put_smstate(u32, buf, 0x7f78, enter_smm_get_segment_flags(&seg));
 
-	static_call(kvm_x86_get_gdt)(vcpu, &dt);
+	static_call(kvm_x86_get_gdt, vcpu, &dt);
 	put_smstate(u32, buf, 0x7f74, dt.address);
 	put_smstate(u32, buf, 0x7f70, dt.size);
 
-	static_call(kvm_x86_get_idt)(vcpu, &dt);
+	static_call(kvm_x86_get_idt, vcpu, &dt);
 	put_smstate(u32, buf, 0x7f58, dt.address);
 	put_smstate(u32, buf, 0x7f54, dt.size);
 
@@ -9639,7 +9639,7 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
 	put_smstate(u32, buf, 0x7e94, seg.limit);
 	put_smstate(u64, buf, 0x7e98, seg.base);
 
-	static_call(kvm_x86_get_idt)(vcpu, &dt);
+	static_call(kvm_x86_get_idt, vcpu, &dt);
 	put_smstate(u32, buf, 0x7e84, dt.size);
 	put_smstate(u64, buf, 0x7e88, dt.address);
 
@@ -9649,7 +9649,7 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
 	put_smstate(u32, buf, 0x7e74, seg.limit);
 	put_smstate(u64, buf, 0x7e78, seg.base);
 
-	static_call(kvm_x86_get_gdt)(vcpu, &dt);
+	static_call(kvm_x86_get_gdt, vcpu, &dt);
 	put_smstate(u32, buf, 0x7e64, dt.size);
 	put_smstate(u64, buf, 0x7e68, dt.address);
 
@@ -9678,28 +9678,28 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	 * state (e.g. leave guest mode) after we've saved the state into the
 	 * SMM state-save area.
 	 */
-	static_call(kvm_x86_enter_smm)(vcpu, buf);
+	static_call(kvm_x86_enter_smm, vcpu, buf);
 
 	kvm_smm_changed(vcpu, true);
 	kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, buf, sizeof(buf));
 
-	if (static_call(kvm_x86_get_nmi_mask)(vcpu))
+	if (static_call(kvm_x86_get_nmi_mask, vcpu))
 		vcpu->arch.hflags |= HF_SMM_INSIDE_NMI_MASK;
 	else
-		static_call(kvm_x86_set_nmi_mask)(vcpu, true);
+		static_call(kvm_x86_set_nmi_mask, vcpu, true);
 
 	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
 	kvm_rip_write(vcpu, 0x8000);
 
 	cr0 = vcpu->arch.cr0 & ~(X86_CR0_PE | X86_CR0_EM | X86_CR0_TS | X86_CR0_PG);
-	static_call(kvm_x86_set_cr0)(vcpu, cr0);
+	static_call(kvm_x86_set_cr0, vcpu, cr0);
 	vcpu->arch.cr0 = cr0;
 
-	static_call(kvm_x86_set_cr4)(vcpu, 0);
+	static_call(kvm_x86_set_cr4, vcpu, 0);
 
 	/* Undocumented: IDT limit is set to zero on entry to SMM.  */
 	dt.address = dt.size = 0;
-	static_call(kvm_x86_set_idt)(vcpu, &dt);
+	static_call(kvm_x86_set_idt, vcpu, &dt);
 
 	kvm_set_dr(vcpu, 7, DR7_FIXED_1);
 
@@ -9730,7 +9730,7 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 
 #ifdef CONFIG_X86_64
 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
-		static_call(kvm_x86_set_efer)(vcpu, 0);
+		static_call(kvm_x86_set_efer, vcpu, 0);
 #endif
 
 	kvm_update_cpuid_runtime(vcpu);
@@ -9769,7 +9769,7 @@ void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.apicv_active = activate;
 	kvm_apic_update_apicv(vcpu);
-	static_call(kvm_x86_refresh_apicv_exec_ctrl)(vcpu);
+	static_call(kvm_x86_refresh_apicv_exec_ctrl, vcpu);
 
 	/*
 	 * When APICv gets disabled, we may still have injected interrupts
@@ -9792,7 +9792,7 @@ void __kvm_set_or_clear_apicv_inhibit(struct kvm *kvm,
 
 	lockdep_assert_held_write(&kvm->arch.apicv_update_lock);
 
-	if (!static_call(kvm_x86_check_apicv_inhibit_reasons)(reason))
+	if (!static_call(kvm_x86_check_apicv_inhibit_reasons, reason))
 		return;
 
 	old = new = kvm->arch.apicv_inhibit_reasons;
@@ -9845,7 +9845,7 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 	if (irqchip_split(vcpu->kvm))
 		kvm_scan_ioapic_routes(vcpu, vcpu->arch.ioapic_handled_vectors);
 	else {
-		static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
+		static_call_cond(kvm_x86_sync_pir_to_irr, vcpu);
 		if (ioapic_in_kernel(vcpu->kvm))
 			kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors);
 	}
@@ -9867,12 +9867,13 @@ static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu)
 		bitmap_or((ulong *)eoi_exit_bitmap,
 			  vcpu->arch.ioapic_handled_vectors,
 			  to_hv_synic(vcpu)->vec_bitmap, 256);
-		static_call_cond(kvm_x86_load_eoi_exitmap)(vcpu, eoi_exit_bitmap);
+		static_call_cond(kvm_x86_load_eoi_exitmap, vcpu,
+				 eoi_exit_bitmap);
 		return;
 	}
 
-	static_call_cond(kvm_x86_load_eoi_exitmap)(
-		vcpu, (u64 *)vcpu->arch.ioapic_handled_vectors);
+	static_call_cond(kvm_x86_load_eoi_exitmap, vcpu,
+		         (u64 *)vcpu->arch.ioapic_handled_vectors);
 }
 
 void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
@@ -9891,7 +9892,7 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
 
 void kvm_arch_guest_memory_reclaimed(struct kvm *kvm)
 {
-	static_call_cond(kvm_x86_guest_memory_reclaimed)(kvm);
+	static_call_cond(kvm_x86_guest_memory_reclaimed, kvm);
 }
 
 static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
@@ -9899,7 +9900,7 @@ static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 	if (!lapic_in_kernel(vcpu))
 		return;
 
-	static_call_cond(kvm_x86_set_apic_access_page_addr)(vcpu);
+	static_call_cond(kvm_x86_set_apic_access_page_addr, vcpu);
 }
 
 void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu)
@@ -10050,10 +10051,10 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
 			kvm_check_async_pf_completion(vcpu);
 		if (kvm_check_request(KVM_REQ_MSR_FILTER_CHANGED, vcpu))
-			static_call(kvm_x86_msr_filter_changed)(vcpu);
+			static_call(kvm_x86_msr_filter_changed, vcpu);
 
 		if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
-			static_call(kvm_x86_update_cpu_dirty_logging)(vcpu);
+			static_call(kvm_x86_update_cpu_dirty_logging, vcpu);
 	}
 
 	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win ||
@@ -10075,7 +10076,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			goto out;
 		}
 		if (req_int_win)
-			static_call(kvm_x86_enable_irq_window)(vcpu);
+			static_call(kvm_x86_enable_irq_window, vcpu);
 
 		if (kvm_lapic_enabled(vcpu)) {
 			update_cr8_intercept(vcpu);
@@ -10090,7 +10091,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 	preempt_disable();
 
-	static_call(kvm_x86_prepare_switch_to_guest)(vcpu);
+	static_call(kvm_x86_prepare_switch_to_guest, vcpu);
 
 	/*
 	 * Disable IRQs before setting IN_GUEST_MODE.  Posted interrupt
@@ -10126,7 +10127,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 * i.e. they can post interrupts even if APICv is temporarily disabled.
 	 */
 	if (kvm_lapic_enabled(vcpu))
-		static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
+		static_call_cond(kvm_x86_sync_pir_to_irr, vcpu);
 
 	if (kvm_vcpu_exit_request(vcpu)) {
 		vcpu->mode = OUTSIDE_GUEST_MODE;
@@ -10140,7 +10141,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 	if (req_immediate_exit) {
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
-		static_call(kvm_x86_request_immediate_exit)(vcpu);
+		static_call(kvm_x86_request_immediate_exit, vcpu);
 	}
 
 	fpregs_assert_state_consistent();
@@ -10171,12 +10172,12 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		 */
 		WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
 
-		exit_fastpath = static_call(kvm_x86_vcpu_run)(vcpu);
+		exit_fastpath = static_call(kvm_x86_vcpu_run, vcpu);
 		if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
 			break;
 
 		if (kvm_lapic_enabled(vcpu))
-			static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
+			static_call_cond(kvm_x86_sync_pir_to_irr, vcpu);
 
 		if (unlikely(kvm_vcpu_exit_request(vcpu))) {
 			exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
@@ -10192,7 +10193,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 */
 	if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)) {
 		WARN_ON(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP);
-		static_call(kvm_x86_sync_dirty_debug_regs)(vcpu);
+		static_call(kvm_x86_sync_dirty_debug_regs, vcpu);
 		kvm_update_dr0123(vcpu);
 		kvm_update_dr7(vcpu);
 	}
@@ -10221,7 +10222,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.xfd_no_write_intercept)
 		fpu_sync_guest_vmexit_xfd_state();
 
-	static_call(kvm_x86_handle_exit_irqoff)(vcpu);
+	static_call(kvm_x86_handle_exit_irqoff, vcpu);
 
 	if (vcpu->arch.guest_fpu.xfd_err)
 		wrmsrl(MSR_IA32_XFD_ERR, 0);
@@ -10275,13 +10276,13 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.apic_attention)
 		kvm_lapic_sync_from_vapic(vcpu);
 
-	r = static_call(kvm_x86_handle_exit)(vcpu, exit_fastpath);
+	r = static_call(kvm_x86_handle_exit, vcpu, exit_fastpath);
 	return r;
 
 cancel_injection:
 	if (req_immediate_exit)
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
-	static_call(kvm_x86_cancel_injection)(vcpu);
+	static_call(kvm_x86_cancel_injection, vcpu);
 	if (unlikely(vcpu->arch.apic_attention))
 		kvm_lapic_sync_from_vapic(vcpu);
 out:
@@ -10554,7 +10555,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		goto out;
 	}
 
-	r = static_call(kvm_x86_vcpu_pre_run)(vcpu);
+	r = static_call(kvm_x86_vcpu_pre_run, vcpu);
 	if (r <= 0)
 		goto out;
 
@@ -10673,10 +10674,10 @@ static void __get_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 	kvm_get_segment(vcpu, &sregs->tr, VCPU_SREG_TR);
 	kvm_get_segment(vcpu, &sregs->ldt, VCPU_SREG_LDTR);
 
-	static_call(kvm_x86_get_idt)(vcpu, &dt);
+	static_call(kvm_x86_get_idt, vcpu, &dt);
 	sregs->idt.limit = dt.size;
 	sregs->idt.base = dt.address;
-	static_call(kvm_x86_get_gdt)(vcpu, &dt);
+	static_call(kvm_x86_get_gdt, vcpu, &dt);
 	sregs->gdt.limit = dt.size;
 	sregs->gdt.base = dt.address;
 
@@ -10857,28 +10858,28 @@ static int __set_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs,
 
 	dt.size = sregs->idt.limit;
 	dt.address = sregs->idt.base;
-	static_call(kvm_x86_set_idt)(vcpu, &dt);
+	static_call(kvm_x86_set_idt, vcpu, &dt);
 	dt.size = sregs->gdt.limit;
 	dt.address = sregs->gdt.base;
-	static_call(kvm_x86_set_gdt)(vcpu, &dt);
+	static_call(kvm_x86_set_gdt, vcpu, &dt);
 
 	vcpu->arch.cr2 = sregs->cr2;
 	*mmu_reset_needed |= kvm_read_cr3(vcpu) != sregs->cr3;
 	vcpu->arch.cr3 = sregs->cr3;
 	kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3);
-	static_call_cond(kvm_x86_post_set_cr3)(vcpu, sregs->cr3);
+	static_call_cond(kvm_x86_post_set_cr3, vcpu, sregs->cr3);
 
 	kvm_set_cr8(vcpu, sregs->cr8);
 
 	*mmu_reset_needed |= vcpu->arch.efer != sregs->efer;
-	static_call(kvm_x86_set_efer)(vcpu, sregs->efer);
+	static_call(kvm_x86_set_efer, vcpu, sregs->efer);
 
 	*mmu_reset_needed |= kvm_read_cr0(vcpu) != sregs->cr0;
-	static_call(kvm_x86_set_cr0)(vcpu, sregs->cr0);
+	static_call(kvm_x86_set_cr0, vcpu, sregs->cr0);
 	vcpu->arch.cr0 = sregs->cr0;
 
 	*mmu_reset_needed |= kvm_read_cr4(vcpu) != sregs->cr4;
-	static_call(kvm_x86_set_cr4)(vcpu, sregs->cr4);
+	static_call(kvm_x86_set_cr4, vcpu, sregs->cr4);
 
 	if (update_pdptrs) {
 		idx = srcu_read_lock(&vcpu->kvm->srcu);
@@ -11048,7 +11049,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 	 */
 	kvm_set_rflags(vcpu, rflags);
 
-	static_call(kvm_x86_update_exception_bitmap)(vcpu);
+	static_call(kvm_x86_update_exception_bitmap, vcpu);
 
 	kvm_arch_vcpu_guestdbg_update_apicv_inhibit(vcpu->kvm);
 
@@ -11255,7 +11256,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.hv_root_tdp = INVALID_PAGE;
 #endif
 
-	r = static_call(kvm_x86_vcpu_create)(vcpu);
+	r = static_call(kvm_x86_vcpu_create, vcpu);
 	if (r)
 		goto free_guest_fpu;
 
@@ -11312,7 +11313,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 	kvmclock_reset(vcpu);
 
-	static_call(kvm_x86_vcpu_free)(vcpu);
+	static_call(kvm_x86_vcpu_free, vcpu);
 
 	kmem_cache_free(x86_emulator_cache, vcpu->arch.emulate_ctxt);
 	free_cpumask_var(vcpu->arch.wbinvd_dirty_mask);
@@ -11419,7 +11420,7 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	cpuid_0x1 = kvm_find_cpuid_entry(vcpu, 1, 0);
 	kvm_rdx_write(vcpu, cpuid_0x1 ? cpuid_0x1->eax : 0x600);
 
-	static_call(kvm_x86_vcpu_reset)(vcpu, init_event);
+	static_call(kvm_x86_vcpu_reset, vcpu, init_event);
 
 	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
 	kvm_rip_write(vcpu, 0xfff0);
@@ -11438,10 +11439,10 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	else
 		new_cr0 |= X86_CR0_NW | X86_CR0_CD;
 
-	static_call(kvm_x86_set_cr0)(vcpu, new_cr0);
-	static_call(kvm_x86_set_cr4)(vcpu, 0);
-	static_call(kvm_x86_set_efer)(vcpu, 0);
-	static_call(kvm_x86_update_exception_bitmap)(vcpu);
+	static_call(kvm_x86_set_cr0, vcpu, new_cr0);
+	static_call(kvm_x86_set_cr4, vcpu, 0);
+	static_call(kvm_x86_set_efer, vcpu, 0);
+	static_call(kvm_x86_update_exception_bitmap, vcpu);
 
 	/*
 	 * On the standard CR0/CR4/EFER modification paths, there are several
@@ -11493,7 +11494,7 @@ int kvm_arch_hardware_enable(void)
 	bool stable, backwards_tsc = false;
 
 	kvm_user_return_msr_cpu_online();
-	ret = static_call(kvm_x86_hardware_enable)();
+	ret = static_call(kvm_x86_hardware_enable);
 	if (ret != 0)
 		return ret;
 
@@ -11575,7 +11576,7 @@ int kvm_arch_hardware_enable(void)
 
 void kvm_arch_hardware_disable(void)
 {
-	static_call(kvm_x86_hardware_disable)();
+	static_call(kvm_x86_hardware_disable);
 	drop_user_return_notifiers();
 }
 
@@ -11625,7 +11626,7 @@ void kvm_arch_hardware_unsetup(void)
 {
 	kvm_unregister_perf_callbacks();
 
-	static_call(kvm_x86_hardware_unsetup)();
+	static_call(kvm_x86_hardware_unsetup);
 }
 
 int kvm_arch_check_processor_compat(void *opaque)
@@ -11665,7 +11666,7 @@ void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
 		pmu->need_cleanup = true;
 		kvm_make_request(KVM_REQ_PMU, vcpu);
 	}
-	static_call(kvm_x86_sched_in)(vcpu, cpu);
+	static_call(kvm_x86_sched_in, vcpu, cpu);
 }
 
 void kvm_arch_free_vm(struct kvm *kvm)
@@ -11725,7 +11726,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm_hv_init_vm(kvm);
 	kvm_xen_init_vm(kvm);
 
-	return static_call(kvm_x86_vm_init)(kvm);
+	return static_call(kvm_x86_vm_init, kvm);
 
 out_page_track:
 	kvm_page_track_cleanup(kvm);
@@ -11864,7 +11865,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 		__x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0);
 		mutex_unlock(&kvm->slots_lock);
 	}
-	static_call_cond(kvm_x86_vm_destroy)(kvm);
+	static_call_cond(kvm_x86_vm_destroy, kvm);
 	kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1));
 	kvm_pic_destroy(kvm);
 	kvm_ioapic_destroy(kvm);
@@ -12147,7 +12148,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
 {
 	return (is_guest_mode(vcpu) &&
-		static_call(kvm_x86_guest_apic_has_interrupt)(vcpu));
+		static_call(kvm_x86_guest_apic_has_interrupt, vcpu));
 }
 
 static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
@@ -12166,12 +12167,12 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 
 	if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
 	    (vcpu->arch.nmi_pending &&
-	     static_call(kvm_x86_nmi_allowed)(vcpu, false)))
+	     static_call(kvm_x86_nmi_allowed, vcpu, false)))
 		return true;
 
 	if (kvm_test_request(KVM_REQ_SMI, vcpu) ||
 	    (vcpu->arch.smi_pending &&
-	     static_call(kvm_x86_smi_allowed)(vcpu, false)))
+	     static_call(kvm_x86_smi_allowed, vcpu, false)))
 		return true;
 
 	if (kvm_arch_interrupt_allowed(vcpu) &&
@@ -12197,7 +12198,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 
 bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->arch.apicv_active && static_call(kvm_x86_dy_apicv_has_pending_interrupt)(vcpu))
+	if (vcpu->arch.apicv_active && static_call(kvm_x86_dy_apicv_has_pending_interrupt, vcpu))
 		return true;
 
 	return false;
@@ -12236,7 +12237,7 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 
 int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu)
 {
-	return static_call(kvm_x86_interrupt_allowed)(vcpu, false);
+	return static_call(kvm_x86_interrupt_allowed, vcpu, false);
 }
 
 unsigned long kvm_get_linear_rip(struct kvm_vcpu *vcpu)
@@ -12262,7 +12263,7 @@ unsigned long kvm_get_rflags(struct kvm_vcpu *vcpu)
 {
 	unsigned long rflags;
 
-	rflags = static_call(kvm_x86_get_rflags)(vcpu);
+	rflags = static_call(kvm_x86_get_rflags, vcpu);
 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)
 		rflags &= ~X86_EFLAGS_TF;
 	return rflags;
@@ -12274,7 +12275,7 @@ static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP &&
 	    kvm_is_linear_rip(vcpu, vcpu->arch.singlestep_rip))
 		rflags |= X86_EFLAGS_TF;
-	static_call(kvm_x86_set_rflags)(vcpu, rflags);
+	static_call(kvm_x86_set_rflags, vcpu, rflags);
 }
 
 void kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
@@ -12405,7 +12406,7 @@ static bool kvm_can_deliver_async_pf(struct kvm_vcpu *vcpu)
 		return false;
 
 	if (vcpu->arch.apf.send_user_only &&
-	    static_call(kvm_x86_get_cpl)(vcpu) == 0)
+	    static_call(kvm_x86_get_cpl, vcpu) == 0)
 		return false;
 
 	if (is_guest_mode(vcpu)) {
@@ -12516,7 +12517,7 @@ bool kvm_arch_can_dequeue_async_page_present(struct kvm_vcpu *vcpu)
 void kvm_arch_start_assignment(struct kvm *kvm)
 {
 	if (atomic_inc_return(&kvm->arch.assigned_device_count) == 1)
-		static_call_cond(kvm_x86_pi_start_assignment)(kvm);
+		static_call_cond(kvm_x86_pi_start_assignment, kvm);
 }
 EXPORT_SYMBOL_GPL(kvm_arch_start_assignment);
 
@@ -12564,8 +12565,7 @@ int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
 
 	irqfd->producer = prod;
 	kvm_arch_start_assignment(irqfd->kvm);
-	ret = static_call(kvm_x86_pi_update_irte)(irqfd->kvm,
-					 prod->irq, irqfd->gsi, 1);
+	ret = static_call(kvm_x86_pi_update_irte, irqfd->kvm, prod->irq, irqfd->gsi, 1);
 
 	if (ret)
 		kvm_arch_end_assignment(irqfd->kvm);
@@ -12589,7 +12589,7 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
 	 * when the irq is masked/disabled or the consumer side (KVM
 	 * in this case) doesn't want to receive the interrupts.
 	 */
-	ret = static_call(kvm_x86_pi_update_irte)(irqfd->kvm, prod->irq, irqfd->gsi, 0);
+	ret = static_call(kvm_x86_pi_update_irte, irqfd->kvm, prod->irq, irqfd->gsi, 0);
 	if (ret)
 		printk(KERN_INFO "irq bypass consumer (token %p) unregistration"
 		       " fails: %d\n", irqfd->consumer.token, ret);
@@ -12600,7 +12600,7 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
 int kvm_arch_update_irqfd_routing(struct kvm *kvm, unsigned int host_irq,
 				   uint32_t guest_irq, bool set)
 {
-	return static_call(kvm_x86_pi_update_irte)(kvm, host_irq, guest_irq, set);
+	return static_call(kvm_x86_pi_update_irte, kvm, host_irq, guest_irq, set);
 }
 
 bool kvm_arch_irqfd_route_changed(struct kvm_kernel_irq_routing_entry *old,
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 588792f00334..4b3b3d9b66b8 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -113,7 +113,7 @@ static inline bool is_64_bit_mode(struct kvm_vcpu *vcpu)
 
 	if (!is_long_mode(vcpu))
 		return false;
-	static_call(kvm_x86_get_cs_db_l_bits)(vcpu, &cs_db, &cs_l);
+	static_call(kvm_x86_get_cs_db_l_bits, vcpu, &cs_db, &cs_l);
 	return cs_l;
 }
 
@@ -248,7 +248,7 @@ static inline bool kvm_check_has_quirk(struct kvm *kvm, u64 quirk)
 
 static inline bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu)
 {
-	return is_smm(vcpu) || static_call(kvm_x86_apic_init_signal_blocked)(vcpu);
+	return is_smm(vcpu) || static_call(kvm_x86_apic_init_signal_blocked, vcpu);
 }
 
 void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index bf6cc25eee76..9c5d966d18e4 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -732,7 +732,7 @@ int kvm_xen_write_hypercall_page(struct kvm_vcpu *vcpu, u64 data)
 		instructions[0] = 0xb8;
 
 		/* vmcall / vmmcall */
-		static_call(kvm_x86_patch_hypercall)(vcpu, instructions + 5);
+		static_call(kvm_x86_patch_hypercall, vcpu, instructions + 5);
 
 		/* ret */
 		instructions[8] = 0xc3;
@@ -867,7 +867,7 @@ int kvm_xen_hypercall(struct kvm_vcpu *vcpu)
 	vcpu->run->exit_reason = KVM_EXIT_XEN;
 	vcpu->run->xen.type = KVM_EXIT_XEN_HCALL;
 	vcpu->run->xen.u.hcall.longmode = longmode;
-	vcpu->run->xen.u.hcall.cpl = static_call(kvm_x86_get_cpl)(vcpu);
+	vcpu->run->xen.u.hcall.cpl = static_call(kvm_x86_get_cpl, vcpu);
 	vcpu->run->xen.u.hcall.input = input;
 	vcpu->run->xen.u.hcall.params[0] = params[0];
 	vcpu->run->xen.u.hcall.params[1] = params[1];
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 7be38bc6a673..06c77ca2b3bb 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -146,7 +146,7 @@ DEFINE_STATIC_CALL(amd_pstate_enable, pstate_enable);
 
 static inline int amd_pstate_enable(bool enable)
 {
-	return static_call(amd_pstate_enable)(enable);
+	return static_call(amd_pstate_enable, enable);
 }
 
 static int pstate_init_perf(struct amd_cpudata *cpudata)
@@ -194,7 +194,7 @@ DEFINE_STATIC_CALL(amd_pstate_init_perf, pstate_init_perf);
 
 static inline int amd_pstate_init_perf(struct amd_cpudata *cpudata)
 {
-	return static_call(amd_pstate_init_perf)(cpudata);
+	return static_call(amd_pstate_init_perf, cpudata);
 }
 
 static void pstate_update_perf(struct amd_cpudata *cpudata, u32 min_perf,
@@ -226,8 +226,8 @@ static inline void amd_pstate_update_perf(struct amd_cpudata *cpudata,
 					  u32 min_perf, u32 des_perf,
 					  u32 max_perf, bool fast_switch)
 {
-	static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf,
-					    max_perf, fast_switch);
+	static_call(amd_pstate_update_perf, cpudata, min_perf, des_perf,
+		    max_perf, fast_switch);
 }
 
 static inline bool amd_pstate_sample(struct amd_cpudata *cpudata)
diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index ab78bd4c2eb0..a7d800a5dbd8 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -421,7 +421,7 @@ void raw_irqentry_exit_cond_resched(void);
 #define irqentry_exit_cond_resched_dynamic_enabled	raw_irqentry_exit_cond_resched
 #define irqentry_exit_cond_resched_dynamic_disabled	NULL
 DECLARE_STATIC_CALL(irqentry_exit_cond_resched, raw_irqentry_exit_cond_resched);
-#define irqentry_exit_cond_resched()	static_call(irqentry_exit_cond_resched)()
+#define irqentry_exit_cond_resched()	static_call(irqentry_exit_cond_resched)
 #elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
 DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
 void dynamic_irqentry_exit_cond_resched(void);
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index fe6efb24d151..7814129fe0c9 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -107,7 +107,7 @@ DECLARE_STATIC_CALL(might_resched, __cond_resched);
 
 static __always_inline void might_resched(void)
 {
-	static_call_mod(might_resched)();
+	static_call_mod(might_resched);
 }
 
 #elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index af97dd427501..2e12811b3730 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1253,15 +1253,15 @@ DECLARE_STATIC_CALL(__perf_guest_handle_intel_pt_intr, *perf_guest_cbs->handle_i
 
 static inline unsigned int perf_guest_state(void)
 {
-	return static_call(__perf_guest_state)();
+	return static_call(__perf_guest_state);
 }
 static inline unsigned long perf_guest_get_ip(void)
 {
-	return static_call(__perf_guest_get_ip)();
+	return static_call(__perf_guest_get_ip);
 }
 static inline unsigned int perf_guest_handle_intel_pt_intr(void)
 {
-	return static_call(__perf_guest_handle_intel_pt_intr)();
+	return static_call(__perf_guest_handle_intel_pt_intr);
 }
 extern void perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *cbs);
 extern void perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *cbs);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index a8911b1f35aa..e8a98ee1442d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2040,7 +2040,7 @@ DECLARE_STATIC_CALL(cond_resched, __cond_resched);
 
 static __always_inline int _cond_resched(void)
 {
-	return static_call_mod(cond_resched)();
+	return static_call_mod(cond_resched);
 }
 
 #elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index df53bed9d71f..7f1219fb98cf 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -21,8 +21,8 @@
  *
  *   __static_call_return0;
  *
- *   static_call(name)(args...);
- *   static_call_cond(name)(args...);
+ *   static_call(name, args...);
+ *   static_call_cond(name, args...);
  *   static_call_update(name, func);
  *   static_call_query(name);
  *
@@ -38,13 +38,13 @@
  *   DEFINE_STATIC_CALL(my_name, func_a);
  *
  *   # Call func_a()
- *   static_call(my_name)(arg1, arg2);
+ *   static_call(my_name, arg1, arg2);
  *
  *   # Update 'my_name' to point to func_b()
  *   static_call_update(my_name, &func_b);
  *
  *   # Call func_b()
- *   static_call(my_name)(arg1, arg2);
+ *   static_call(my_name, arg1, arg2);
  *
  *
  * Implementation details:
@@ -94,7 +94,7 @@
  *
  *   When calling a static_call that can be NULL, use:
  *
- *     static_call_cond(name)(arg1);
+ *     static_call_cond(name, arg1);
  *
  *   which will include the required value tests to avoid NULL-pointer
  *   dereferences.
@@ -204,7 +204,7 @@ extern long __static_call_return0(void);
 	};								\
 	ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)
 
-#define static_call_cond(name)	(void)__static_call(name)
+#define static_call_cond(name, args...)	(void)__static_call(name)(args)
 
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
@@ -246,7 +246,7 @@ static inline int static_call_init(void) { return 0; }
 	};								\
 	ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)
 
-#define static_call_cond(name)	(void)__static_call(name)
+#define static_call_cond(name, args...)	(void)__static_call(name)(args)
 
 static inline
 void __static_call_update(struct static_call_key *key, void *tramp, void *func)
@@ -323,7 +323,7 @@ static inline void __static_call_nop(void) { }
 	(typeof(STATIC_CALL_TRAMP(name))*)func;				\
 })
 
-#define static_call_cond(name)	(void)__static_call_cond(name)
+#define static_call_cond(name, args...)	(void)__static_call_cond(name)(args)
 
 static inline
 void __static_call_update(struct static_call_key *key, void *tramp, void *func)
diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 5a00b8b2cf9f..7e1ce240a2cd 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -81,13 +81,13 @@ struct static_call_key {
 
 #ifdef MODULE
 #define __STATIC_CALL_MOD_ADDRESSABLE(name)
-#define static_call_mod(name)	__raw_static_call(name)
+#define static_call_mod(name, args...)	__raw_static_call(name)(args)
 #else
 #define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
-#define static_call_mod(name)	__static_call(name)
+#define static_call_mod(name, args...)	__static_call(name)(args)
 #endif
 
-#define static_call(name)	__static_call(name)
+#define static_call(name, args...)	__static_call(name)(args)
 
 #else
 
@@ -95,8 +95,8 @@ struct static_call_key {
 	void *func;
 };
 
-#define static_call(name)						\
-	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
+#define static_call(name, args...)					\
+	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))(args)
 
 #endif /* CONFIG_HAVE_STATIC_CALL */
 
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 28031b15f878..1c68fcad48a2 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -170,7 +170,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 			rcu_dereference_raw((&__tracepoint_##name)->funcs); \
 		if (it_func_ptr) {					\
 			__data = (it_func_ptr)->data;			\
-			static_call(tp_func_##name)(__data, args);	\
+			static_call(tp_func_##name, __data, args);	\
 		}							\
 	} while (0)
 #else
diff --git a/kernel/static_call_inline.c b/kernel/static_call_inline.c
index dc5665b62814..9752489fcaab 100644
--- a/kernel/static_call_inline.c
+++ b/kernel/static_call_inline.c
@@ -533,7 +533,7 @@ static int __init test_static_call_init(void)
               if (scd->func)
                       static_call_update(sc_selftest, scd->func);
 
-              WARN_ON(static_call(sc_selftest)(scd->val) != scd->expect);
+              WARN_ON(static_call(sc_selftest, scd->val) != scd->expect);
       }
 
       return 0;
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index d8553f46caa2..fa1a0deddda5 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1096,7 +1096,7 @@ BPF_CALL_3(bpf_get_branch_snapshot, void *, buf, u32, size, u64, flags)
 	static const u32 br_entry_size = sizeof(struct perf_branch_entry);
 	u32 entry_cnt = size / br_entry_size;
 
-	entry_cnt = static_call(perf_snapshot_branch_stack)(buf, entry_cnt);
+	entry_cnt = static_call(perf_snapshot_branch_stack, buf, entry_cnt);
 
 	if (unlikely(flags))
 		return -EINVAL;
diff --git a/security/keys/trusted-keys/trusted_core.c b/security/keys/trusted-keys/trusted_core.c
index 9b9d3ef79cbe..3f48310a4ce3 100644
--- a/security/keys/trusted-keys/trusted_core.c
+++ b/security/keys/trusted-keys/trusted_core.c
@@ -170,15 +170,15 @@ static int trusted_instantiate(struct key *key,
 
 	switch (key_cmd) {
 	case Opt_load:
-		ret = static_call(trusted_key_unseal)(payload, datablob);
+		ret = static_call(trusted_key_unseal, payload, datablob);
 		dump_payload(payload);
 		if (ret < 0)
 			pr_info("key_unseal failed (%d)\n", ret);
 		break;
 	case Opt_new:
 		key_len = payload->key_len;
-		ret = static_call(trusted_key_get_random)(payload->key,
-							  key_len);
+		ret = static_call(trusted_key_get_random, payload->key,
+				  key_len);
 		if (ret < 0)
 			goto out;
 
@@ -188,7 +188,7 @@ static int trusted_instantiate(struct key *key,
 			goto out;
 		}
 
-		ret = static_call(trusted_key_seal)(payload, datablob);
+		ret = static_call(trusted_key_seal, payload, datablob);
 		if (ret < 0)
 			pr_info("key_seal failed (%d)\n", ret);
 		break;
@@ -257,7 +257,7 @@ static int trusted_update(struct key *key, struct key_preparsed_payload *prep)
 	dump_payload(p);
 	dump_payload(new_p);
 
-	ret = static_call(trusted_key_seal)(new_p, datablob);
+	ret = static_call(trusted_key_seal, new_p, datablob);
 	if (ret < 0) {
 		pr_info("key_seal failed (%d)\n", ret);
 		kfree_sensitive(new_p);
@@ -334,7 +334,7 @@ static int __init init_trusted(void)
 				   trusted_key_sources[i].ops->exit);
 		migratable = trusted_key_sources[i].ops->migratable;
 
-		ret = static_call(trusted_key_init)();
+		ret = static_call(trusted_key_init);
 		if (!ret)
 			break;
 	}
@@ -351,7 +351,7 @@ static int __init init_trusted(void)
 
 static void __exit cleanup_trusted(void)
 {
-	static_call_cond(trusted_key_exit)();
+	static_call_cond(trusted_key_exit);
 }
 
 late_initcall(init_trusted);
-- 
2.36.0.464.gb9c8b46e94-goog


^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [RFC PATCH 14/21] treewide: static_call: Pass call arguments to the macro
@ 2022-04-29 20:36   ` Sami Tolvanen
  0 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

Include the function arguments in the static_call() macro itself so
that a wrapper can be added around the entire call expression. This
is needed with CONFIG_CFI_CLANG to disable indirect call checking
for static calls that are patched into direct calls at runtime.
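
For illustration, a rough sketch of the kind of wrapper this makes
possible (hypothetical; the real cfi_unchecked()-based definition is
added separately in this series):

  /*
   * Sketch only: once the arguments are visible to the macro, the
   * entire call expression can be handed to a wrapper that disables
   * the CFI check for this specific call site.
   */
  #define static_call(name, args...) \
          cfi_unchecked(__static_call(name)(args))

With the old static_call(name)(args) form, the macro expands to just
the callable expression and the arguments are applied outside of it,
so there is no single full-call expression for a wrapper to act on.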

Users of static_call() were updated using the following Coccinelle
script, and the results were manually adjusted to preserve coding
style:

  @@
  expression name;
  expression list args;
  identifier static_call =~ "^static_call(_mod|_cond)?$";
  @@

  - static_call(name)(args)
  + static_call(name, args)
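
As a concrete example of the resulting form, here is the paravirt
steal-clock call site updated below:

  /* before */
  return static_call(pv_steal_clock)(cpu);

  /* after */
  return static_call(pv_steal_clock, cpu);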

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/arm/include/asm/paravirt.h           |   2 +-
 arch/arm64/include/asm/paravirt.h         |   2 +-
 arch/x86/crypto/aesni-intel_glue.c        |   7 +-
 arch/x86/events/core.c                    |  40 +--
 arch/x86/include/asm/kvm_host.h           |   6 +-
 arch/x86/include/asm/paravirt.h           |   4 +-
 arch/x86/kvm/cpuid.c                      |   2 +-
 arch/x86/kvm/hyperv.c                     |   4 +-
 arch/x86/kvm/irq.c                        |   2 +-
 arch/x86/kvm/kvm_cache_regs.h             |  10 +-
 arch/x86/kvm/lapic.c                      |  32 +--
 arch/x86/kvm/mmu.h                        |   4 +-
 arch/x86/kvm/mmu/mmu.c                    |   8 +-
 arch/x86/kvm/mmu/spte.c                   |   4 +-
 arch/x86/kvm/pmu.c                        |   4 +-
 arch/x86/kvm/trace.h                      |   4 +-
 arch/x86/kvm/x86.c                        | 326 +++++++++++-----------
 arch/x86/kvm/x86.h                        |   4 +-
 arch/x86/kvm/xen.c                        |   4 +-
 drivers/cpufreq/amd-pstate.c              |   8 +-
 include/linux/entry-common.h              |   2 +-
 include/linux/kernel.h                    |   2 +-
 include/linux/perf_event.h                |   6 +-
 include/linux/sched.h                     |   2 +-
 include/linux/static_call.h               |  16 +-
 include/linux/static_call_types.h         |  10 +-
 include/linux/tracepoint.h                |   2 +-
 kernel/static_call_inline.c               |   2 +-
 kernel/trace/bpf_trace.c                  |   2 +-
 security/keys/trusted-keys/trusted_core.c |  14 +-
 30 files changed, 267 insertions(+), 268 deletions(-)

diff --git a/arch/arm/include/asm/paravirt.h b/arch/arm/include/asm/paravirt.h
index 95d5b0d625cd..43c419eadb9a 100644
--- a/arch/arm/include/asm/paravirt.h
+++ b/arch/arm/include/asm/paravirt.h
@@ -15,7 +15,7 @@ DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
 
 static inline u64 paravirt_steal_clock(int cpu)
 {
-	return static_call(pv_steal_clock)(cpu);
+	return static_call(pv_steal_clock, cpu);
 }
 #endif
 
diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
index 9aa193e0e8f2..35a9d649c448 100644
--- a/arch/arm64/include/asm/paravirt.h
+++ b/arch/arm64/include/asm/paravirt.h
@@ -15,7 +15,7 @@ DECLARE_STATIC_CALL(pv_steal_clock, dummy_steal_clock);
 
 static inline u64 paravirt_steal_clock(int cpu)
 {
-	return static_call(pv_steal_clock)(cpu);
+	return static_call(pv_steal_clock, cpu);
 }
 
 int __init pv_time_init(void);
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 41901ba9d3a2..06182c068145 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -507,10 +507,9 @@ static int ctr_crypt(struct skcipher_request *req)
 	while ((nbytes = walk.nbytes) > 0) {
 		kernel_fpu_begin();
 		if (nbytes & AES_BLOCK_MASK)
-			static_call(aesni_ctr_enc_tfm)(ctx, walk.dst.virt.addr,
-						       walk.src.virt.addr,
-						       nbytes & AES_BLOCK_MASK,
-						       walk.iv);
+			static_call(aesni_ctr_enc_tfm, ctx,
+				    walk.dst.virt.addr, walk.src.virt.addr,
+				    nbytes & AES_BLOCK_MASK, walk.iv);
 		nbytes &= ~AES_BLOCK_MASK;
 
 		if (walk.nbytes == walk.total && nbytes > 0) {
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index eef816fc216d..74315c87220b 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -695,7 +695,7 @@ void x86_pmu_disable_all(void)
 
 struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr)
 {
-	return static_call(x86_pmu_guest_get_msrs)(nr);
+	return static_call(x86_pmu_guest_get_msrs, nr);
 }
 EXPORT_SYMBOL_GPL(perf_guest_get_msrs);
 
@@ -726,7 +726,7 @@ static void x86_pmu_disable(struct pmu *pmu)
 	cpuc->enabled = 0;
 	barrier();
 
-	static_call(x86_pmu_disable_all)();
+	static_call(x86_pmu_disable_all);
 }
 
 void x86_pmu_enable_all(int added)
@@ -991,7 +991,7 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
 	if (cpuc->txn_flags & PERF_PMU_TXN_ADD)
 		n0 -= cpuc->n_txn;
 
-	static_call_cond(x86_pmu_start_scheduling)(cpuc);
+	static_call_cond(x86_pmu_start_scheduling, cpuc);
 
 	for (i = 0, wmin = X86_PMC_IDX_MAX, wmax = 0; i < n; i++) {
 		c = cpuc->event_constraint[i];
@@ -1008,7 +1008,7 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
 		 * change due to external factors (sibling state, allow_tfa).
 		 */
 		if (!c || (c->flags & PERF_X86_EVENT_DYNAMIC)) {
-			c = static_call(x86_pmu_get_event_constraints)(cpuc, i, cpuc->event_list[i]);
+			c = static_call(x86_pmu_get_event_constraints, cpuc, i, cpuc->event_list[i]);
 			cpuc->event_constraint[i] = c;
 		}
 
@@ -1090,7 +1090,7 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
 	 */
 	if (!unsched && assign) {
 		for (i = 0; i < n; i++)
-			static_call_cond(x86_pmu_commit_scheduling)(cpuc, i, assign[i]);
+			static_call_cond(x86_pmu_commit_scheduling, cpuc, i, assign[i]);
 	} else {
 		for (i = n0; i < n; i++) {
 			e = cpuc->event_list[i];
@@ -1098,13 +1098,13 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
 			/*
 			 * release events that failed scheduling
 			 */
-			static_call_cond(x86_pmu_put_event_constraints)(cpuc, e);
+			static_call_cond(x86_pmu_put_event_constraints, cpuc, e);
 
 			cpuc->event_constraint[i] = NULL;
 		}
 	}
 
-	static_call_cond(x86_pmu_stop_scheduling)(cpuc);
+	static_call_cond(x86_pmu_stop_scheduling, cpuc);
 
 	return unsched ? -EINVAL : 0;
 }
@@ -1217,7 +1217,7 @@ static inline void x86_assign_hw_event(struct perf_event *event,
 	hwc->last_cpu = smp_processor_id();
 	hwc->last_tag = ++cpuc->tags[i];
 
-	static_call_cond(x86_pmu_assign)(event, idx);
+	static_call_cond(x86_pmu_assign, event, idx);
 
 	switch (hwc->idx) {
 	case INTEL_PMC_IDX_FIXED_BTS:
@@ -1347,7 +1347,7 @@ static void x86_pmu_enable(struct pmu *pmu)
 	cpuc->enabled = 1;
 	barrier();
 
-	static_call(x86_pmu_enable_all)(added);
+	static_call(x86_pmu_enable_all, added);
 }
 
 static DEFINE_PER_CPU(u64 [X86_PMC_IDX_MAX], pmc_prev_left);
@@ -1472,7 +1472,7 @@ static int x86_pmu_add(struct perf_event *event, int flags)
 	if (cpuc->txn_flags & PERF_PMU_TXN_ADD)
 		goto done_collect;
 
-	ret = static_call(x86_pmu_schedule_events)(cpuc, n, assign);
+	ret = static_call(x86_pmu_schedule_events, cpuc, n, assign);
 	if (ret)
 		goto out;
 	/*
@@ -1494,7 +1494,7 @@ static int x86_pmu_add(struct perf_event *event, int flags)
 	 * This is before x86_pmu_enable() will call x86_pmu_start(),
 	 * so we enable LBRs before an event needs them etc..
 	 */
-	static_call_cond(x86_pmu_add)(event);
+	static_call_cond(x86_pmu_add, event);
 
 	ret = 0;
 out:
@@ -1521,7 +1521,7 @@ static void x86_pmu_start(struct perf_event *event, int flags)
 
 	cpuc->events[idx] = event;
 	__set_bit(idx, cpuc->active_mask);
-	static_call(x86_pmu_enable)(event);
+	static_call(x86_pmu_enable, event);
 	perf_event_update_userpage(event);
 }
 
@@ -1594,7 +1594,7 @@ void x86_pmu_stop(struct perf_event *event, int flags)
 	struct hw_perf_event *hwc = &event->hw;
 
 	if (test_bit(hwc->idx, cpuc->active_mask)) {
-		static_call(x86_pmu_disable)(event);
+		static_call(x86_pmu_disable, event);
 		__clear_bit(hwc->idx, cpuc->active_mask);
 		cpuc->events[hwc->idx] = NULL;
 		WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
@@ -1647,7 +1647,7 @@ static void x86_pmu_del(struct perf_event *event, int flags)
 	if (i >= cpuc->n_events - cpuc->n_added)
 		--cpuc->n_added;
 
-	static_call_cond(x86_pmu_put_event_constraints)(cpuc, event);
+	static_call_cond(x86_pmu_put_event_constraints, cpuc, event);
 
 	/* Delete the array entry. */
 	while (++i < cpuc->n_events) {
@@ -1667,7 +1667,7 @@ static void x86_pmu_del(struct perf_event *event, int flags)
 	 * This is after x86_pmu_stop(); so we disable LBRs after any
 	 * event can need them etc..
 	 */
-	static_call_cond(x86_pmu_del)(event);
+	static_call_cond(x86_pmu_del, event);
 }
 
 int x86_pmu_handle_irq(struct pt_regs *regs)
@@ -1745,7 +1745,7 @@ perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
 		return NMI_DONE;
 
 	start_clock = sched_clock();
-	ret = static_call(x86_pmu_handle_irq)(regs);
+	ret = static_call(x86_pmu_handle_irq, regs);
 	finish_clock = sched_clock();
 
 	perf_sample_event_took(finish_clock - start_clock);
@@ -2217,7 +2217,7 @@ early_initcall(init_hw_perf_events);
 
 static void x86_pmu_read(struct perf_event *event)
 {
-	static_call(x86_pmu_read)(event);
+	static_call(x86_pmu_read, event);
 }
 
 /*
@@ -2298,7 +2298,7 @@ static int x86_pmu_commit_txn(struct pmu *pmu)
 	if (!x86_pmu_initialized())
 		return -EAGAIN;
 
-	ret = static_call(x86_pmu_schedule_events)(cpuc, n, assign);
+	ret = static_call(x86_pmu_schedule_events, cpuc, n, assign);
 	if (ret)
 		return ret;
 
@@ -2638,13 +2638,13 @@ static const struct attribute_group *x86_pmu_attr_groups[] = {
 
 static void x86_pmu_sched_task(struct perf_event_context *ctx, bool sched_in)
 {
-	static_call_cond(x86_pmu_sched_task)(ctx, sched_in);
+	static_call_cond(x86_pmu_sched_task, ctx, sched_in);
 }
 
 static void x86_pmu_swap_task_ctx(struct perf_event_context *prev,
 				  struct perf_event_context *next)
 {
-	static_call_cond(x86_pmu_swap_task_ctx)(prev, next);
+	static_call_cond(x86_pmu_swap_task_ctx, prev, next);
 }
 
 void perf_check_microcode(void)
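
Note the zero-argument sites in the x86_pmu hunks above (for example
x86_pmu_disable_all): they lose the trailing "()" entirely, because
the macro now expands to the call expression itself. In the same toy
model (hypothetical; an empty __VA_ARGS__ relies on the GNU C
behavior the kernel already assumes):

  #define STATIC_CALL(name, ...) name##_fn(__VA_ARGS__)

  static void x86_pmu_disable_all_fn(void) { }

  static void disable_all_example(void)
  {
          /* before: static_call(x86_pmu_disable_all)(); */
          STATIC_CALL(x86_pmu_disable_all); /* -> x86_pmu_disable_all_fn() */
  }
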
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4ff36610af6a..0d3869f6efc2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1576,7 +1576,7 @@ void kvm_arch_free_vm(struct kvm *kvm);
 static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
 {
 	if (kvm_x86_ops.tlb_remote_flush &&
-	    !static_call(kvm_x86_tlb_remote_flush)(kvm))
+	    !static_call(kvm_x86_tlb_remote_flush, kvm))
 		return 0;
 	else
 		return -ENOTSUPP;
@@ -1953,12 +1953,12 @@ static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
 
 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
 {
-	static_call_cond(kvm_x86_vcpu_blocking)(vcpu);
+	static_call_cond(kvm_x86_vcpu_blocking, vcpu);
 }
 
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 {
-	static_call_cond(kvm_x86_vcpu_unblocking)(vcpu);
+	static_call_cond(kvm_x86_vcpu_unblocking, vcpu);
 }
 
 static inline int kvm_cpu_get_apicid(int mps_cpu)
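
The static_call_cond() sites follow the same argument-passing change.
As a loose model of the _cond semantics (the real kernel patches the
call site to a NOP rather than testing a pointer, and the names here
are illustrative): a _cond call returns nothing and must be safe when
no function is installed.

  #define STATIC_CALL_COND(name, ...)              \
          do {                                     \
                  if (name##_fn)                   \
                          name##_fn(__VA_ARGS__);  \
          } while (0)

  /* NULL until a handler is installed */
  static void (*kvm_x86_vcpu_blocking_fn)(int vcpu_id);

  static void vcpu_blocking_example(int vcpu_id)
  {
          STATIC_CALL_COND(kvm_x86_vcpu_blocking, vcpu_id); /* no-op here */
  }
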
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 964442b99245..16aa752f1ccb 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -28,7 +28,7 @@ void paravirt_set_sched_clock(u64 (*func)(void));
 
 static inline u64 paravirt_sched_clock(void)
 {
-	return static_call(pv_sched_clock)();
+	return static_call(pv_sched_clock);
 }
 
 struct static_key;
@@ -42,7 +42,7 @@ bool pv_is_native_vcpu_is_preempted(void);
 
 static inline u64 paravirt_steal_clock(int cpu)
 {
-	return static_call(pv_steal_clock)(cpu);
+	return static_call(pv_steal_clock, cpu);
 }
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index b24ca7f4ed7c..e40e9b8b2bd6 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -311,7 +311,7 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	kvm_hv_set_cpuid(vcpu);
 
 	/* Invoke the vendor callback only after the above state is updated. */
-	static_call(kvm_x86_vcpu_after_set_cpuid)(vcpu);
+	static_call(kvm_x86_vcpu_after_set_cpuid, vcpu);
 
 	/*
 	 * Except for the MMU, which needs to do its thing any vendor specific
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 46f9dfb60469..b1b8006f9084 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1335,7 +1335,7 @@ static int kvm_hv_set_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data,
 		}
 
 		/* vmcall/vmmcall */
-		static_call(kvm_x86_patch_hypercall)(vcpu, instructions + i);
+		static_call(kvm_x86_patch_hypercall, vcpu, instructions + i);
 		i += 3;
 
 		/* ret */
@@ -2201,7 +2201,7 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 	 * hypercall generates UD from non zero cpl and real mode
 	 * per HYPER-V spec
 	 */
-	if (static_call(kvm_x86_get_cpl)(vcpu) != 0 || !is_protmode(vcpu)) {
+	if (static_call(kvm_x86_get_cpl, vcpu) != 0 || !is_protmode(vcpu)) {
 		kvm_queue_exception(vcpu, UD_VECTOR);
 		return 1;
 	}
diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c
index 172b05343cfd..b86cf55afe4d 100644
--- a/arch/x86/kvm/irq.c
+++ b/arch/x86/kvm/irq.c
@@ -150,7 +150,7 @@ void __kvm_migrate_timers(struct kvm_vcpu *vcpu)
 {
 	__kvm_migrate_apic_timer(vcpu);
 	__kvm_migrate_pit_timer(vcpu);
-	static_call_cond(kvm_x86_migrate_timers)(vcpu);
+	static_call_cond(kvm_x86_migrate_timers, vcpu);
 }
 
 bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args)
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 3febc342360c..643b4abb2797 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -86,7 +86,7 @@ static inline unsigned long kvm_register_read_raw(struct kvm_vcpu *vcpu, int reg
 		return 0;
 
 	if (!kvm_register_is_available(vcpu, reg))
-		static_call(kvm_x86_cache_reg)(vcpu, reg);
+		static_call(kvm_x86_cache_reg, vcpu, reg);
 
 	return vcpu->arch.regs[reg];
 }
@@ -126,7 +126,7 @@ static inline u64 kvm_pdptr_read(struct kvm_vcpu *vcpu, int index)
 	might_sleep();  /* on svm */
 
 	if (!kvm_register_is_available(vcpu, VCPU_EXREG_PDPTR))
-		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_PDPTR);
+		static_call(kvm_x86_cache_reg, vcpu, VCPU_EXREG_PDPTR);
 
 	return vcpu->arch.walk_mmu->pdptrs[index];
 }
@@ -141,7 +141,7 @@ static inline ulong kvm_read_cr0_bits(struct kvm_vcpu *vcpu, ulong mask)
 	ulong tmask = mask & KVM_POSSIBLE_CR0_GUEST_BITS;
 	if ((tmask & vcpu->arch.cr0_guest_owned_bits) &&
 	    !kvm_register_is_available(vcpu, VCPU_EXREG_CR0))
-		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR0);
+		static_call(kvm_x86_cache_reg, vcpu, VCPU_EXREG_CR0);
 	return vcpu->arch.cr0 & mask;
 }
 
@@ -155,14 +155,14 @@ static inline ulong kvm_read_cr4_bits(struct kvm_vcpu *vcpu, ulong mask)
 	ulong tmask = mask & KVM_POSSIBLE_CR4_GUEST_BITS;
 	if ((tmask & vcpu->arch.cr4_guest_owned_bits) &&
 	    !kvm_register_is_available(vcpu, VCPU_EXREG_CR4))
-		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR4);
+		static_call(kvm_x86_cache_reg, vcpu, VCPU_EXREG_CR4);
 	return vcpu->arch.cr4 & mask;
 }
 
 static inline ulong kvm_read_cr3(struct kvm_vcpu *vcpu)
 {
 	if (!kvm_register_is_available(vcpu, VCPU_EXREG_CR3))
-		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR3);
+		static_call(kvm_x86_cache_reg, vcpu, VCPU_EXREG_CR3);
 	return vcpu->arch.cr3;
 }
 
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 66b0eb0bda94..743b99eb43ef 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -525,7 +525,7 @@ static inline void apic_clear_irr(int vec, struct kvm_lapic *apic)
 	if (unlikely(vcpu->arch.apicv_active)) {
 		/* need to update RVI */
 		kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
-		static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
+		static_call_cond(kvm_x86_hwapic_irr_update, vcpu, apic_find_highest_irr(apic));
 	} else {
 		apic->irr_pending = false;
 		kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
@@ -555,7 +555,7 @@ static inline void apic_set_isr(int vec, struct kvm_lapic *apic)
 	 * just set SVI.
 	 */
 	if (unlikely(vcpu->arch.apicv_active))
-		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, vec);
+		static_call_cond(kvm_x86_hwapic_isr_update, vcpu, vec);
 	else {
 		++apic->isr_count;
 		BUG_ON(apic->isr_count > MAX_APIC_VECTOR);
@@ -603,7 +603,7 @@ static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
 	 * and must be left alone.
 	 */
 	if (unlikely(vcpu->arch.apicv_active))
-		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
+		static_call_cond(kvm_x86_hwapic_isr_update, vcpu, apic_find_highest_isr(apic));
 	else {
 		--apic->isr_count;
 		BUG_ON(apic->isr_count < 0);
@@ -739,7 +739,7 @@ static int apic_has_interrupt_for_ppr(struct kvm_lapic *apic, u32 ppr)
 {
 	int highest_irr;
 	if (kvm_x86_ops.sync_pir_to_irr)
-		highest_irr = static_call(kvm_x86_sync_pir_to_irr)(apic->vcpu);
+		highest_irr = static_call(kvm_x86_sync_pir_to_irr, apic->vcpu);
 	else
 		highest_irr = apic_find_highest_irr(apic);
 	if (highest_irr == -1 || (highest_irr & 0xF0) <= ppr)
@@ -1132,8 +1132,8 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 						       apic->regs + APIC_TMR);
 		}
 
-		static_call(kvm_x86_deliver_interrupt)(apic, delivery_mode,
-						       trig_mode, vector);
+		static_call(kvm_x86_deliver_interrupt, apic, delivery_mode,
+			    trig_mode, vector);
 		break;
 
 	case APIC_DM_REMRD:
@@ -1888,7 +1888,7 @@ static void cancel_hv_timer(struct kvm_lapic *apic)
 {
 	WARN_ON(preemptible());
 	WARN_ON(!apic->lapic_timer.hv_timer_in_use);
-	static_call(kvm_x86_cancel_hv_timer)(apic->vcpu);
+	static_call(kvm_x86_cancel_hv_timer, apic->vcpu);
 	apic->lapic_timer.hv_timer_in_use = false;
 }
 
@@ -1905,7 +1905,7 @@ static bool start_hv_timer(struct kvm_lapic *apic)
 	if (!ktimer->tscdeadline)
 		return false;
 
-	if (static_call(kvm_x86_set_hv_timer)(vcpu, ktimer->tscdeadline, &expired))
+	if (static_call(kvm_x86_set_hv_timer, vcpu, ktimer->tscdeadline, &expired))
 		return false;
 
 	ktimer->hv_timer_in_use = true;
@@ -2329,7 +2329,7 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value)
 		kvm_apic_set_x2apic_id(apic, vcpu->vcpu_id);
 
 	if ((old_value ^ value) & (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE))
-		static_call_cond(kvm_x86_set_virtual_apic_mode)(vcpu);
+		static_call_cond(kvm_x86_set_virtual_apic_mode, vcpu);
 
 	apic->base_address = apic->vcpu->arch.apic_base &
 			     MSR_IA32_APICBASE_BASE;
@@ -2419,9 +2419,9 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vcpu->arch.pv_eoi.msr_val = 0;
 	apic_update_ppr(apic);
 	if (vcpu->arch.apicv_active) {
-		static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
-		static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, -1);
-		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, -1);
+		static_call_cond(kvm_x86_apicv_post_state_restore, vcpu);
+		static_call_cond(kvm_x86_hwapic_irr_update, vcpu, -1);
+		static_call_cond(kvm_x86_hwapic_isr_update, vcpu, -1);
 	}
 
 	vcpu->arch.apic_arb_prio = 0;
@@ -2697,9 +2697,9 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
 	kvm_apic_update_apicv(vcpu);
 	apic->highest_isr_cache = -1;
 	if (vcpu->arch.apicv_active) {
-		static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
-		static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
-		static_call_cond(kvm_x86_hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
+		static_call_cond(kvm_x86_apicv_post_state_restore, vcpu);
+		static_call_cond(kvm_x86_hwapic_irr_update, vcpu, apic_find_highest_irr(apic));
+		static_call_cond(kvm_x86_hwapic_isr_update, vcpu, apic_find_highest_isr(apic));
 	}
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	if (ioapic_in_kernel(vcpu->kvm))
@@ -3002,7 +3002,7 @@ int kvm_apic_accept_events(struct kvm_vcpu *vcpu)
 			/* evaluate pending_events before reading the vector */
 			smp_rmb();
 			sipi_vector = apic->sipi_vector;
-			static_call(kvm_x86_vcpu_deliver_sipi_vector)(vcpu, sipi_vector);
+			static_call(kvm_x86_vcpu_deliver_sipi_vector, vcpu, sipi_vector);
 			vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
 		}
 	}
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index e6cae6f22683..73880aa0b9e2 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -113,7 +113,7 @@ static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
 	if (!VALID_PAGE(root_hpa))
 		return;
 
-	static_call(kvm_x86_load_mmu_pgd)(vcpu, root_hpa,
-					  vcpu->arch.mmu->shadow_root_level);
+	static_call(kvm_x86_load_mmu_pgd, vcpu, root_hpa,
+		    vcpu->arch.mmu->shadow_root_level);
 }
 
@@ -218,7 +218,7 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 {
 	/* strip nested paging fault error codes */
 	unsigned int pfec = access;
-	unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
+	unsigned long rflags = static_call(kvm_x86_get_rflags, vcpu);
 
 	/*
 	 * For explicit supervisor accesses, SMAP is disabled if EFLAGS.AC = 1.
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f9080ee50ffa..0bdf76d94875 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -268,7 +268,7 @@ static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
 	int ret = -ENOTSUPP;
 
 	if (range && kvm_x86_ops.tlb_remote_flush_with_range)
-		ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, range);
+		ret = static_call(kvm_x86_tlb_remote_flush_with_range, kvm, range);
 
 	if (ret)
 		kvm_flush_remote_tlbs(kvm);
@@ -5102,7 +5102,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
 	 * stale entries.  Flushing on alloc also allows KVM to skip the TLB
 	 * flush when freeing a root (see kvm_tdp_mmu_put_root()).
 	 */
-	static_call(kvm_x86_flush_tlb_current)(vcpu);
+	static_call(kvm_x86_flush_tlb_current, vcpu);
 out:
 	return r;
 }
@@ -5408,7 +5408,7 @@ void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 		if (is_noncanonical_address(gva, vcpu))
 			return;
 
-		static_call(kvm_x86_flush_tlb_gva)(vcpu, gva);
+		static_call(kvm_x86_flush_tlb_gva, vcpu, gva);
 	}
 
 	if (!mmu->invlpg)
@@ -5464,7 +5464,7 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
 	}
 
 	if (tlb_flush)
-		static_call(kvm_x86_flush_tlb_gva)(vcpu, gva);
+		static_call(kvm_x86_flush_tlb_gva, vcpu, gva);
 
 	++vcpu->stat.invlpg;
 
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4739b53c9734..6b7bae4778a4 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -131,8 +131,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if (level > PG_LEVEL_4K)
 		spte |= PT_PAGE_SIZE_MASK;
 	if (tdp_enabled)
-		spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
-			kvm_is_mmio_pfn(pfn));
+		spte |= static_call(kvm_x86_get_mt_mask, vcpu, gfn,
+				    kvm_is_mmio_pfn(pfn));
 
 	if (host_writable)
 		spte |= shadow_host_writable_mask;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index eca39f56c231..4361f0e247ee 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -371,7 +371,7 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 		return 1;
 
 	if (!(kvm_read_cr4(vcpu) & X86_CR4_PCE) &&
-	    (static_call(kvm_x86_get_cpl)(vcpu) != 0) &&
+	    (static_call(kvm_x86_get_cpl, vcpu) != 0) &&
 	    (kvm_read_cr0(vcpu) & X86_CR0_PE))
 		return 1;
 
@@ -523,7 +523,7 @@ static inline bool cpl_is_matched(struct kvm_pmc *pmc)
 		select_user = config & 0x2;
 	}
 
-	return (static_call(kvm_x86_get_cpl)(pmc->vcpu) == 0) ? select_os : select_user;
+	return (static_call(kvm_x86_get_cpl, pmc->vcpu) == 0) ? select_os : select_user;
 }
 
 void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index e3a24b8f04be..a4845e1b5574 100644
--- a/arch/x86/kvm/trace.h
+++ b/arch/x86/kvm/trace.h
@@ -308,7 +308,7 @@ TRACE_EVENT(name,							     \
 		__entry->guest_rip	= kvm_rip_read(vcpu);		     \
 		__entry->isa            = isa;				     \
 		__entry->vcpu_id        = vcpu->vcpu_id;		     \
-		static_call(kvm_x86_get_exit_info)(vcpu,		     \
+		static_call(kvm_x86_get_exit_info, vcpu,		     \
 					  &__entry->exit_reason,	     \
 					  &__entry->info1,		     \
 					  &__entry->info2,		     \
@@ -792,7 +792,7 @@ TRACE_EVENT(kvm_emulate_insn,
 		),
 
 	TP_fast_assign(
-		__entry->csbase = static_call(kvm_x86_get_segment_base)(vcpu, VCPU_SREG_CS);
+		__entry->csbase = static_call(kvm_x86_get_segment_base, vcpu, VCPU_SREG_CS);
 		__entry->len = vcpu->arch.emulate_ctxt->fetch.ptr
 			       - vcpu->arch.emulate_ctxt->fetch.data;
 		__entry->rip = vcpu->arch.emulate_ctxt->_eip - __entry->len;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a6ab19afc638..ca400a219241 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -796,7 +796,7 @@ EXPORT_SYMBOL_GPL(kvm_requeue_exception_e);
  */
 bool kvm_require_cpl(struct kvm_vcpu *vcpu, int required_cpl)
 {
-	if (static_call(kvm_x86_get_cpl)(vcpu) <= required_cpl)
+	if (static_call(kvm_x86_get_cpl, vcpu) <= required_cpl)
 		return true;
 	kvm_queue_exception_e(vcpu, GP_VECTOR, 0);
 	return false;
@@ -918,7 +918,7 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 
 		if (!is_pae(vcpu))
 			return 1;
-		static_call(kvm_x86_get_cs_db_l_bits)(vcpu, &cs_db, &cs_l);
+		static_call(kvm_x86_get_cs_db_l_bits, vcpu, &cs_db, &cs_l);
 		if (cs_l)
 			return 1;
 	}
@@ -932,7 +932,7 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	    (is_64_bit_mode(vcpu) || kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE)))
 		return 1;
 
-	static_call(kvm_x86_set_cr0)(vcpu, cr0);
+	static_call(kvm_x86_set_cr0, vcpu, cr0);
 
 	kvm_post_set_cr0(vcpu, old_cr0, cr0);
 
@@ -1054,7 +1054,7 @@ static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
 
 int kvm_emulate_xsetbv(struct kvm_vcpu *vcpu)
 {
-	if (static_call(kvm_x86_get_cpl)(vcpu) != 0 ||
+	if (static_call(kvm_x86_get_cpl, vcpu) != 0 ||
 	    __kvm_set_xcr(vcpu, kvm_rcx_read(vcpu), kvm_read_edx_eax(vcpu))) {
 		kvm_inject_gp(vcpu, 0);
 		return 1;
@@ -1072,7 +1072,7 @@ bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	if (cr4 & vcpu->arch.cr4_guest_rsvd_bits)
 		return false;
 
-	return static_call(kvm_x86_is_valid_cr4)(vcpu, cr4);
+	return static_call(kvm_x86_is_valid_cr4, vcpu, cr4);
 }
 EXPORT_SYMBOL_GPL(kvm_is_valid_cr4);
 
@@ -1144,7 +1144,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 			return 1;
 	}
 
-	static_call(kvm_x86_set_cr4)(vcpu, cr4);
+	static_call(kvm_x86_set_cr4, vcpu, cr4);
 
 	kvm_post_set_cr4(vcpu, old_cr4, cr4);
 
@@ -1285,7 +1285,7 @@ void kvm_update_dr7(struct kvm_vcpu *vcpu)
 		dr7 = vcpu->arch.guest_debug_dr7;
 	else
 		dr7 = vcpu->arch.dr7;
-	static_call(kvm_x86_set_dr7)(vcpu, dr7);
+	static_call(kvm_x86_set_dr7, vcpu, dr7);
 	vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_BP_ENABLED;
 	if (dr7 & DR7_BP_EN_MASK)
 		vcpu->arch.switch_db_regs |= KVM_DEBUGREG_BP_ENABLED;
@@ -1600,7 +1600,7 @@ static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
 		rdmsrl_safe(msr->index, &msr->data);
 		break;
 	default:
-		return static_call(kvm_x86_get_msr_feature)(msr);
+		return static_call(kvm_x86_get_msr_feature, msr);
 	}
 	return 0;
 }
@@ -1676,7 +1676,7 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	efer &= ~EFER_LMA;
 	efer |= vcpu->arch.efer & EFER_LMA;
 
-	r = static_call(kvm_x86_set_efer)(vcpu, efer);
+	r = static_call(kvm_x86_set_efer, vcpu, efer);
 	if (r) {
 		WARN_ON(r > 0);
 		return r;
@@ -1802,7 +1802,7 @@ static int __kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data,
 	msr.index = index;
 	msr.host_initiated = host_initiated;
 
-	return static_call(kvm_x86_set_msr)(vcpu, &msr);
+	return static_call(kvm_x86_set_msr, vcpu, &msr);
 }
 
 static int kvm_set_msr_ignored_check(struct kvm_vcpu *vcpu,
@@ -1844,7 +1844,7 @@ int __kvm_get_msr(struct kvm_vcpu *vcpu, u32 index, u64 *data,
 	msr.index = index;
 	msr.host_initiated = host_initiated;
 
-	ret = static_call(kvm_x86_get_msr)(vcpu, &msr);
+	ret = static_call(kvm_x86_get_msr, vcpu, &msr);
 	if (!ret)
 		*data = msr.data;
 	return ret;
@@ -1912,7 +1912,7 @@ static int complete_emulated_rdmsr(struct kvm_vcpu *vcpu)
 
 static int complete_fast_msr_access(struct kvm_vcpu *vcpu)
 {
-	return static_call(kvm_x86_complete_emulated_msr)(vcpu, vcpu->run->msr.error);
+	return static_call(kvm_x86_complete_emulated_msr, vcpu, vcpu->run->msr.error);
 }
 
 static int complete_fast_rdmsr(struct kvm_vcpu *vcpu)
@@ -1976,7 +1976,7 @@ int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
 		trace_kvm_msr_read_ex(ecx);
 	}
 
-	return static_call(kvm_x86_complete_emulated_msr)(vcpu, r);
+	return static_call(kvm_x86_complete_emulated_msr, vcpu, r);
 }
 EXPORT_SYMBOL_GPL(kvm_emulate_rdmsr);
 
@@ -2001,7 +2001,7 @@ int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
 		trace_kvm_msr_write_ex(ecx, data);
 	}
 
-	return static_call(kvm_x86_complete_emulated_msr)(vcpu, r);
+	return static_call(kvm_x86_complete_emulated_msr, vcpu, r);
 }
 EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);
 
@@ -2507,12 +2507,12 @@ static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 l1_offset)
 	if (is_guest_mode(vcpu))
 		vcpu->arch.tsc_offset = kvm_calc_nested_tsc_offset(
 			l1_offset,
-			static_call(kvm_x86_get_l2_tsc_offset)(vcpu),
-			static_call(kvm_x86_get_l2_tsc_multiplier)(vcpu));
+			static_call(kvm_x86_get_l2_tsc_offset, vcpu),
+			static_call(kvm_x86_get_l2_tsc_multiplier, vcpu));
 	else
 		vcpu->arch.tsc_offset = l1_offset;
 
-	static_call(kvm_x86_write_tsc_offset)(vcpu, vcpu->arch.tsc_offset);
+	static_call(kvm_x86_write_tsc_offset, vcpu, vcpu->arch.tsc_offset);
 }
 
 static void kvm_vcpu_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 l1_multiplier)
@@ -2523,13 +2523,13 @@ static void kvm_vcpu_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 l1_multipli
 	if (is_guest_mode(vcpu))
 		vcpu->arch.tsc_scaling_ratio = kvm_calc_nested_tsc_multiplier(
 			l1_multiplier,
-			static_call(kvm_x86_get_l2_tsc_multiplier)(vcpu));
+			static_call(kvm_x86_get_l2_tsc_multiplier, vcpu));
 	else
 		vcpu->arch.tsc_scaling_ratio = l1_multiplier;
 
 	if (kvm_has_tsc_control)
-		static_call(kvm_x86_write_tsc_multiplier)(
-			vcpu, vcpu->arch.tsc_scaling_ratio);
+		static_call(kvm_x86_write_tsc_multiplier, vcpu,
+			    vcpu->arch.tsc_scaling_ratio);
 }
 
 static inline bool kvm_check_tsc_unstable(void)
@@ -3307,7 +3307,7 @@ static void kvmclock_reset(struct kvm_vcpu *vcpu)
 static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.tlb_flush;
-	static_call(kvm_x86_flush_tlb_all)(vcpu);
+	static_call(kvm_x86_flush_tlb_all, vcpu);
 }
 
 static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
@@ -3325,14 +3325,14 @@ static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
 		kvm_mmu_sync_prev_roots(vcpu);
 	}
 
-	static_call(kvm_x86_flush_tlb_guest)(vcpu);
+	static_call(kvm_x86_flush_tlb_guest, vcpu);
 }
 
 
 static inline void kvm_vcpu_flush_tlb_current(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.tlb_flush;
-	static_call(kvm_x86_flush_tlb_current)(vcpu);
+	static_call(kvm_x86_flush_tlb_current, vcpu);
 }
 
 /*
@@ -4310,7 +4310,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		 * fringe case that is not enabled except via specific settings
 		 * of the module parameters.
 		 */
-		r = static_call(kvm_x86_has_emulated_msr)(kvm, MSR_IA32_SMBASE);
+		r = static_call(kvm_x86_has_emulated_msr, kvm, MSR_IA32_SMBASE);
 		break;
 	case KVM_CAP_NR_VCPUS:
 		r = min_t(unsigned int, num_online_cpus(), KVM_MAX_VCPUS);
@@ -4548,14 +4548,14 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	/* Address WBINVD may be executed by guest */
 	if (need_emulate_wbinvd(vcpu)) {
-		if (static_call(kvm_x86_has_wbinvd_exit)())
+		if (static_call(kvm_x86_has_wbinvd_exit))
 			cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
 		else if (vcpu->cpu != -1 && vcpu->cpu != cpu)
 			smp_call_function_single(vcpu->cpu,
 					wbinvd_ipi, NULL, 1);
 	}
 
-	static_call(kvm_x86_vcpu_load)(vcpu, cpu);
+	static_call(kvm_x86_vcpu_load, vcpu, cpu);
 
 	/* Save host pkru register if supported */
 	vcpu->arch.host_pkru = read_pkru();
@@ -4634,7 +4634,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	int idx;
 
 	if (vcpu->preempted && !vcpu->arch.guest_state_protected)
-		vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl)(vcpu);
+		vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl, vcpu);
 
 	/*
 	 * Take the srcu lock as memslots will be accessed to check the gfn
@@ -4647,14 +4647,14 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 		kvm_steal_time_set_preempted(vcpu);
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 
-	static_call(kvm_x86_vcpu_put)(vcpu);
+	static_call(kvm_x86_vcpu_put, vcpu);
 	vcpu->arch.last_host_tsc = rdtsc();
 }
 
 static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu,
 				    struct kvm_lapic_state *s)
 {
-	static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
+	static_call_cond(kvm_x86_sync_pir_to_irr, vcpu);
 
 	return kvm_apic_get_state(vcpu, s);
 }
@@ -4773,7 +4773,7 @@ static int kvm_vcpu_ioctl_x86_setup_mce(struct kvm_vcpu *vcpu,
 	for (bank = 0; bank < bank_num; bank++)
 		vcpu->arch.mce_banks[bank*4] = ~(u64)0;
 
-	static_call(kvm_x86_setup_mce)(vcpu);
+	static_call(kvm_x86_setup_mce, vcpu);
 out:
 	return r;
 }
@@ -4880,11 +4880,11 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 		vcpu->arch.interrupt.injected && !vcpu->arch.interrupt.soft;
 	events->interrupt.nr = vcpu->arch.interrupt.nr;
 	events->interrupt.soft = 0;
-	events->interrupt.shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
+	events->interrupt.shadow = static_call(kvm_x86_get_interrupt_shadow, vcpu);
 
 	events->nmi.injected = vcpu->arch.nmi_injected;
 	events->nmi.pending = vcpu->arch.nmi_pending != 0;
-	events->nmi.masked = static_call(kvm_x86_get_nmi_mask)(vcpu);
+	events->nmi.masked = static_call(kvm_x86_get_nmi_mask, vcpu);
 	events->nmi.pad = 0;
 
 	events->sipi_vector = 0; /* never valid when reporting to user space */
@@ -4951,13 +4951,13 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 	vcpu->arch.interrupt.nr = events->interrupt.nr;
 	vcpu->arch.interrupt.soft = events->interrupt.soft;
 	if (events->flags & KVM_VCPUEVENT_VALID_SHADOW)
-		static_call(kvm_x86_set_interrupt_shadow)(vcpu,
-						events->interrupt.shadow);
+		static_call(kvm_x86_set_interrupt_shadow, vcpu,
+			    events->interrupt.shadow);
 
 	vcpu->arch.nmi_injected = events->nmi.injected;
 	if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
 		vcpu->arch.nmi_pending = events->nmi.pending;
-	static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked);
+	static_call(kvm_x86_set_nmi_mask, vcpu, events->nmi.masked);
 
 	if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR &&
 	    lapic_in_kernel(vcpu))
@@ -5254,7 +5254,7 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 		if (!kvm_x86_ops.enable_direct_tlbflush)
 			return -ENOTTY;
 
-		return static_call(kvm_x86_enable_direct_tlbflush)(vcpu);
+		return static_call(kvm_x86_enable_direct_tlbflush, vcpu);
 
 	case KVM_CAP_HYPERV_ENFORCE_CPUID:
 		return kvm_hv_set_enforce_cpuid(vcpu, cap->args[0]);
@@ -5723,14 +5723,14 @@ static int kvm_vm_ioctl_set_tss_addr(struct kvm *kvm, unsigned long addr)
 
 	if (addr > (unsigned int)(-3 * PAGE_SIZE))
 		return -EINVAL;
-	ret = static_call(kvm_x86_set_tss_addr)(kvm, addr);
+	ret = static_call(kvm_x86_set_tss_addr, kvm, addr);
 	return ret;
 }
 
 static int kvm_vm_ioctl_set_identity_map_addr(struct kvm *kvm,
 					      u64 ident_addr)
 {
-	return static_call(kvm_x86_set_identity_map_addr)(kvm, ident_addr);
+	return static_call(kvm_x86_set_identity_map_addr, kvm, ident_addr);
 }
 
 static int kvm_vm_ioctl_set_nr_mmu_pages(struct kvm *kvm,
@@ -6027,14 +6027,14 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		if (!kvm_x86_ops.vm_copy_enc_context_from)
 			break;
 
-		r = static_call(kvm_x86_vm_copy_enc_context_from)(kvm, cap->args[0]);
+		r = static_call(kvm_x86_vm_copy_enc_context_from, kvm, cap->args[0]);
 		break;
 	case KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM:
 		r = -EINVAL;
 		if (!kvm_x86_ops.vm_move_enc_context_from)
 			break;
 
-		r = static_call(kvm_x86_vm_move_enc_context_from)(kvm, cap->args[0]);
+		r = static_call(kvm_x86_vm_move_enc_context_from, kvm, cap->args[0]);
 		break;
 	case KVM_CAP_EXIT_HYPERCALL:
 		if (cap->args[0] & ~KVM_EXIT_HYPERCALL_VALID_MASK) {
@@ -6525,7 +6525,7 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		if (!kvm_x86_ops.mem_enc_ioctl)
 			goto out;
 
-		r = static_call(kvm_x86_mem_enc_ioctl)(kvm, argp);
+		r = static_call(kvm_x86_mem_enc_ioctl, kvm, argp);
 		break;
 	}
 	case KVM_MEMORY_ENCRYPT_REG_REGION: {
@@ -6539,7 +6539,7 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		if (!kvm_x86_ops.mem_enc_register_region)
 			goto out;
 
-		r = static_call(kvm_x86_mem_enc_register_region)(kvm, &region);
+		r = static_call(kvm_x86_mem_enc_register_region, kvm, &region);
 		break;
 	}
 	case KVM_MEMORY_ENCRYPT_UNREG_REGION: {
@@ -6553,7 +6553,8 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		if (!kvm_x86_ops.mem_enc_unregister_region)
 			goto out;
 
-		r = static_call(kvm_x86_mem_enc_unregister_region)(kvm, &region);
+		r = static_call(kvm_x86_mem_enc_unregister_region, kvm,
+				&region);
 		break;
 	}
 	case KVM_HYPERV_EVENTFD: {
@@ -6661,7 +6662,7 @@ static void kvm_init_msr_list(void)
 	}
 
 	for (i = 0; i < ARRAY_SIZE(emulated_msrs_all); i++) {
-		if (!static_call(kvm_x86_has_emulated_msr)(NULL, emulated_msrs_all[i]))
+		if (!static_call(kvm_x86_has_emulated_msr, NULL, emulated_msrs_all[i]))
 			continue;
 
 		emulated_msrs[num_emulated_msrs++] = emulated_msrs_all[i];
@@ -6724,13 +6725,13 @@ static int vcpu_mmio_read(struct kvm_vcpu *vcpu, gpa_t addr, int len, void *v)
 static void kvm_set_segment(struct kvm_vcpu *vcpu,
 			struct kvm_segment *var, int seg)
 {
-	static_call(kvm_x86_set_segment)(vcpu, var, seg);
+	static_call(kvm_x86_set_segment, vcpu, var, seg);
 }
 
 void kvm_get_segment(struct kvm_vcpu *vcpu,
 		     struct kvm_segment *var, int seg)
 {
-	static_call(kvm_x86_get_segment)(vcpu, var, seg);
+	static_call(kvm_x86_get_segment, vcpu, var, seg);
 }
 
 gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u64 access,
@@ -6753,7 +6754,7 @@ gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva,
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 
-	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0;
 	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_read);
@@ -6763,7 +6764,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_gva_to_gpa_read);
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 
-	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0;
 	access |= PFERR_FETCH_MASK;
 	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
@@ -6773,7 +6774,7 @@ gpa_t kvm_mmu_gva_to_gpa_write(struct kvm_vcpu *vcpu, gva_t gva,
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 
-	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0;
 	access |= PFERR_WRITE_MASK;
 	return mmu->gva_to_gpa(vcpu, mmu, gva, access, exception);
 }
@@ -6826,7 +6827,7 @@ static int kvm_fetch_guest_virt(struct x86_emulate_ctxt *ctxt,
 {
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
-	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0;
 	unsigned offset;
 	int ret;
 
@@ -6851,7 +6852,7 @@ int kvm_read_guest_virt(struct kvm_vcpu *vcpu,
 			       gva_t addr, void *val, unsigned int bytes,
 			       struct x86_exception *exception)
 {
-	u64 access = (static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0;
+	u64 access = (static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0;
 
 	/*
 	 * FIXME: this should call handle_emulation_failure if X86EMUL_IO_NEEDED
@@ -6874,7 +6875,7 @@ static int emulator_read_std(struct x86_emulate_ctxt *ctxt,
 
 	if (system)
 		access |= PFERR_IMPLICIT_ACCESS;
-	else if (static_call(kvm_x86_get_cpl)(vcpu) == 3)
+	else if (static_call(kvm_x86_get_cpl, vcpu) == 3)
 		access |= PFERR_USER_MASK;
 
 	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access, exception);
@@ -6928,7 +6929,7 @@ static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *v
 
 	if (system)
 		access |= PFERR_IMPLICIT_ACCESS;
-	else if (static_call(kvm_x86_get_cpl)(vcpu) == 3)
+	else if (static_call(kvm_x86_get_cpl, vcpu) == 3)
 		access |= PFERR_USER_MASK;
 
 	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
@@ -6949,8 +6950,8 @@ EXPORT_SYMBOL_GPL(kvm_write_guest_virt_system);
 static int kvm_can_emulate_insn(struct kvm_vcpu *vcpu, int emul_type,
 				void *insn, int insn_len)
 {
-	return static_call(kvm_x86_can_emulate_instruction)(vcpu, emul_type,
-							    insn, insn_len);
+	return static_call(kvm_x86_can_emulate_instruction, vcpu, emul_type,
+			   insn, insn_len);
 }
 
 int handle_ud(struct kvm_vcpu *vcpu)
@@ -6995,7 +6996,7 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 				bool write)
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
-	u64 access = ((static_call(kvm_x86_get_cpl)(vcpu) == 3) ? PFERR_USER_MASK : 0)
+	u64 access = ((static_call(kvm_x86_get_cpl, vcpu) == 3) ? PFERR_USER_MASK : 0)
 		| (write ? PFERR_WRITE_MASK : 0);
 
 	/*
@@ -7425,7 +7426,7 @@ static int emulator_pio_out_emulated(struct x86_emulate_ctxt *ctxt,
 
 static unsigned long get_segment_base(struct kvm_vcpu *vcpu, int seg)
 {
-	return static_call(kvm_x86_get_segment_base)(vcpu, seg);
+	return static_call(kvm_x86_get_segment_base, vcpu, seg);
 }
 
 static void emulator_invlpg(struct x86_emulate_ctxt *ctxt, ulong address)
@@ -7438,7 +7439,7 @@ static int kvm_emulate_wbinvd_noskip(struct kvm_vcpu *vcpu)
 	if (!need_emulate_wbinvd(vcpu))
 		return X86EMUL_CONTINUE;
 
-	if (static_call(kvm_x86_has_wbinvd_exit)()) {
+	if (static_call(kvm_x86_has_wbinvd_exit)) {
 		int cpu = get_cpu();
 
 		cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
@@ -7543,27 +7544,27 @@ static int emulator_set_cr(struct x86_emulate_ctxt *ctxt, int cr, ulong val)
 
 static int emulator_get_cpl(struct x86_emulate_ctxt *ctxt)
 {
-	return static_call(kvm_x86_get_cpl)(emul_to_vcpu(ctxt));
+	return static_call(kvm_x86_get_cpl, emul_to_vcpu(ctxt));
 }
 
 static void emulator_get_gdt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt)
 {
-	static_call(kvm_x86_get_gdt)(emul_to_vcpu(ctxt), dt);
+	static_call(kvm_x86_get_gdt, emul_to_vcpu(ctxt), dt);
 }
 
 static void emulator_get_idt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt)
 {
-	static_call(kvm_x86_get_idt)(emul_to_vcpu(ctxt), dt);
+	static_call(kvm_x86_get_idt, emul_to_vcpu(ctxt), dt);
 }
 
 static void emulator_set_gdt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt)
 {
-	static_call(kvm_x86_set_gdt)(emul_to_vcpu(ctxt), dt);
+	static_call(kvm_x86_set_gdt, emul_to_vcpu(ctxt), dt);
 }
 
 static void emulator_set_idt(struct x86_emulate_ctxt *ctxt, struct desc_ptr *dt)
 {
-	static_call(kvm_x86_set_idt)(emul_to_vcpu(ctxt), dt);
+	static_call(kvm_x86_set_idt, emul_to_vcpu(ctxt), dt);
 }
 
 static unsigned long emulator_get_cached_segment_base(
@@ -7721,8 +7722,8 @@ static int emulator_intercept(struct x86_emulate_ctxt *ctxt,
 			      struct x86_instruction_info *info,
 			      enum x86_intercept_stage stage)
 {
-	return static_call(kvm_x86_check_intercept)(emul_to_vcpu(ctxt), info, stage,
-					    &ctxt->exception);
+	return static_call(kvm_x86_check_intercept, emul_to_vcpu(ctxt), info,
+			   stage, &ctxt->exception);
 }
 
 static bool emulator_get_cpuid(struct x86_emulate_ctxt *ctxt,
@@ -7764,7 +7765,7 @@ static void emulator_write_gpr(struct x86_emulate_ctxt *ctxt, unsigned reg, ulon
 
 static void emulator_set_nmi_mask(struct x86_emulate_ctxt *ctxt, bool masked)
 {
-	static_call(kvm_x86_set_nmi_mask)(emul_to_vcpu(ctxt), masked);
+	static_call(kvm_x86_set_nmi_mask, emul_to_vcpu(ctxt), masked);
 }
 
 static unsigned emulator_get_hflags(struct x86_emulate_ctxt *ctxt)
@@ -7782,7 +7783,7 @@ static void emulator_exiting_smm(struct x86_emulate_ctxt *ctxt)
 static int emulator_leave_smm(struct x86_emulate_ctxt *ctxt,
 				  const char *smstate)
 {
-	return static_call(kvm_x86_leave_smm)(emul_to_vcpu(ctxt), smstate);
+	return static_call(kvm_x86_leave_smm, emul_to_vcpu(ctxt), smstate);
 }
 
 static void emulator_triple_fault(struct x86_emulate_ctxt *ctxt)
@@ -7847,7 +7848,7 @@ static const struct x86_emulate_ops emulate_ops = {
 
 static void toggle_interruptibility(struct kvm_vcpu *vcpu, u32 mask)
 {
-	u32 int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
+	u32 int_shadow = static_call(kvm_x86_get_interrupt_shadow, vcpu);
 	/*
 	 * an sti; sti; sequence only disable interrupts for the first
 	 * instruction. So, if the last instruction, be it emulated or
@@ -7858,7 +7859,7 @@ static void toggle_interruptibility(struct kvm_vcpu *vcpu, u32 mask)
 	if (int_shadow & mask)
 		mask = 0;
 	if (unlikely(int_shadow || mask)) {
-		static_call(kvm_x86_set_interrupt_shadow)(vcpu, mask);
+		static_call(kvm_x86_set_interrupt_shadow, vcpu, mask);
 		if (!mask)
 			kvm_make_request(KVM_REQ_EVENT, vcpu);
 	}
@@ -7900,7 +7901,7 @@ static void init_emulate_ctxt(struct kvm_vcpu *vcpu)
 	struct x86_emulate_ctxt *ctxt = vcpu->arch.emulate_ctxt;
 	int cs_db, cs_l;
 
-	static_call(kvm_x86_get_cs_db_l_bits)(vcpu, &cs_db, &cs_l);
+	static_call(kvm_x86_get_cs_db_l_bits, vcpu, &cs_db, &cs_l);
 
 	ctxt->gpa_available = false;
 	ctxt->eflags = kvm_get_rflags(vcpu);
@@ -7960,9 +7961,8 @@ static void prepare_emulation_failure_exit(struct kvm_vcpu *vcpu, u64 *data,
 	 */
 	memset(&info, 0, sizeof(info));
 
-	static_call(kvm_x86_get_exit_info)(vcpu, (u32 *)&info[0], &info[1],
-					   &info[2], (u32 *)&info[3],
-					   (u32 *)&info[4]);
+	static_call(kvm_x86_get_exit_info, vcpu, (u32 *)&info[0], &info[1],
+		    &info[2], (u32 *)&info[3], (u32 *)&info[4]);
 
 	run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 	run->emulation_failure.suberror = KVM_INTERNAL_ERROR_EMULATION;
@@ -8039,7 +8039,7 @@ static int handle_emulation_failure(struct kvm_vcpu *vcpu, int emulation_type)
 
 	kvm_queue_exception(vcpu, UD_VECTOR);
 
-	if (!is_guest_mode(vcpu) && static_call(kvm_x86_get_cpl)(vcpu) == 0) {
+	if (!is_guest_mode(vcpu) && static_call(kvm_x86_get_cpl, vcpu) == 0) {
 		prepare_emulation_ctxt_failure_exit(vcpu);
 		return 0;
 	}
@@ -8228,10 +8228,10 @@ static int kvm_vcpu_do_singlestep(struct kvm_vcpu *vcpu)
 
 int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
 {
-	unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
+	unsigned long rflags = static_call(kvm_x86_get_rflags, vcpu);
 	int r;
 
-	r = static_call(kvm_x86_skip_emulated_instruction)(vcpu);
+	r = static_call(kvm_x86_skip_emulated_instruction, vcpu);
 	if (unlikely(!r))
 		return 0;
 
@@ -8494,7 +8494,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 
 writeback:
 	if (writeback) {
-		unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
+		unsigned long rflags = static_call(kvm_x86_get_rflags, vcpu);
 		toggle_interruptibility(vcpu, ctxt->interruptibility);
 		vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
 		if (!ctxt->have_exception ||
@@ -8505,7 +8505,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 			kvm_rip_write(vcpu, ctxt->eip);
 			if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
 				r = kvm_vcpu_do_singlestep(vcpu);
-			static_call_cond(kvm_x86_update_emulated_instruction)(vcpu);
+			static_call_cond(kvm_x86_update_emulated_instruction, vcpu);
 			__kvm_set_rflags(vcpu, ctxt->eflags);
 		}
 
@@ -9187,7 +9187,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		a3 &= 0xFFFFFFFF;
 	}
 
-	if (static_call(kvm_x86_get_cpl)(vcpu) != 0) {
+	if (static_call(kvm_x86_get_cpl, vcpu) != 0) {
 		ret = -KVM_EPERM;
 		goto out;
 	}
@@ -9266,7 +9266,7 @@ static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt)
 	char instruction[3];
 	unsigned long rip = kvm_rip_read(vcpu);
 
-	static_call(kvm_x86_patch_hypercall)(vcpu, instruction);
+	static_call(kvm_x86_patch_hypercall, vcpu, instruction);
 
 	return emulator_write_emulated(ctxt, rip, instruction, 3,
 		&ctxt->exception);
@@ -9283,7 +9283,7 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *kvm_run = vcpu->run;
 
-	kvm_run->if_flag = static_call(kvm_x86_get_if_flag)(vcpu);
+	kvm_run->if_flag = static_call(kvm_x86_get_if_flag, vcpu);
 	kvm_run->cr8 = kvm_get_cr8(vcpu);
 	kvm_run->apic_base = kvm_get_apic_base(vcpu);
 
@@ -9318,7 +9318,7 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu)
 
 	tpr = kvm_lapic_get_cr8(vcpu);
 
-	static_call(kvm_x86_update_cr8_intercept)(vcpu, tpr, max_irr);
+	static_call(kvm_x86_update_cr8_intercept, vcpu, tpr, max_irr);
 }
 
 
@@ -9336,7 +9336,7 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu)
 {
 	if (vcpu->arch.exception.error_code && !is_protmode(vcpu))
 		vcpu->arch.exception.error_code = false;
-	static_call(kvm_x86_queue_exception)(vcpu);
+	static_call(kvm_x86_queue_exception, vcpu);
 }
 
 static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
@@ -9366,10 +9366,10 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 	 */
 	else if (!vcpu->arch.exception.pending) {
 		if (vcpu->arch.nmi_injected) {
-			static_call(kvm_x86_inject_nmi)(vcpu);
+			static_call(kvm_x86_inject_nmi, vcpu);
 			can_inject = false;
 		} else if (vcpu->arch.interrupt.injected) {
-			static_call(kvm_x86_inject_irq)(vcpu);
+			static_call(kvm_x86_inject_irq, vcpu);
 			can_inject = false;
 		}
 	}
@@ -9430,7 +9430,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 	 * The kvm_x86_ops hooks communicate this by returning -EBUSY.
 	 */
 	if (vcpu->arch.smi_pending) {
-		r = can_inject ? static_call(kvm_x86_smi_allowed)(vcpu, true) : -EBUSY;
+		r = can_inject ? static_call(kvm_x86_smi_allowed, vcpu, true) : -EBUSY;
 		if (r < 0)
 			goto out;
 		if (r) {
@@ -9439,35 +9439,35 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool *req_immediate_exit)
 			enter_smm(vcpu);
 			can_inject = false;
 		} else
-			static_call(kvm_x86_enable_smi_window)(vcpu);
+			static_call(kvm_x86_enable_smi_window, vcpu);
 	}
 
 	if (vcpu->arch.nmi_pending) {
-		r = can_inject ? static_call(kvm_x86_nmi_allowed)(vcpu, true) : -EBUSY;
+		r = can_inject ? static_call(kvm_x86_nmi_allowed, vcpu, true) : -EBUSY;
 		if (r < 0)
 			goto out;
 		if (r) {
 			--vcpu->arch.nmi_pending;
 			vcpu->arch.nmi_injected = true;
-			static_call(kvm_x86_inject_nmi)(vcpu);
+			static_call(kvm_x86_inject_nmi, vcpu);
 			can_inject = false;
-			WARN_ON(static_call(kvm_x86_nmi_allowed)(vcpu, true) < 0);
+			WARN_ON(static_call(kvm_x86_nmi_allowed, vcpu, true) < 0);
 		}
 		if (vcpu->arch.nmi_pending)
-			static_call(kvm_x86_enable_nmi_window)(vcpu);
+			static_call(kvm_x86_enable_nmi_window, vcpu);
 	}
 
 	if (kvm_cpu_has_injectable_intr(vcpu)) {
-		r = can_inject ? static_call(kvm_x86_interrupt_allowed)(vcpu, true) : -EBUSY;
+		r = can_inject ? static_call(kvm_x86_interrupt_allowed, vcpu, true) : -EBUSY;
 		if (r < 0)
 			goto out;
 		if (r) {
 			kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu), false);
-			static_call(kvm_x86_inject_irq)(vcpu);
-			WARN_ON(static_call(kvm_x86_interrupt_allowed)(vcpu, true) < 0);
+			static_call(kvm_x86_inject_irq, vcpu);
+			WARN_ON(static_call(kvm_x86_interrupt_allowed, vcpu, true) < 0);
 		}
 		if (kvm_cpu_has_injectable_intr(vcpu))
-			static_call(kvm_x86_enable_irq_window)(vcpu);
+			static_call(kvm_x86_enable_irq_window, vcpu);
 	}
 
 	if (is_guest_mode(vcpu) &&
@@ -9495,7 +9495,7 @@ static void process_nmi(struct kvm_vcpu *vcpu)
 	 * If an NMI is already in progress, limit further NMIs to just one.
 	 * Otherwise, allow two (and we'll inject the first one immediately).
 	 */
-	if (static_call(kvm_x86_get_nmi_mask)(vcpu) || vcpu->arch.nmi_injected)
+	if (static_call(kvm_x86_get_nmi_mask, vcpu) || vcpu->arch.nmi_injected)
 		limit = 1;
 
 	vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
@@ -9585,11 +9585,11 @@ static void enter_smm_save_state_32(struct kvm_vcpu *vcpu, char *buf)
 	put_smstate(u32, buf, 0x7f7c, seg.limit);
 	put_smstate(u32, buf, 0x7f78, enter_smm_get_segment_flags(&seg));
 
-	static_call(kvm_x86_get_gdt)(vcpu, &dt);
+	static_call(kvm_x86_get_gdt, vcpu, &dt);
 	put_smstate(u32, buf, 0x7f74, dt.address);
 	put_smstate(u32, buf, 0x7f70, dt.size);
 
-	static_call(kvm_x86_get_idt)(vcpu, &dt);
+	static_call(kvm_x86_get_idt, vcpu, &dt);
 	put_smstate(u32, buf, 0x7f58, dt.address);
 	put_smstate(u32, buf, 0x7f54, dt.size);
 
@@ -9639,7 +9639,7 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
 	put_smstate(u32, buf, 0x7e94, seg.limit);
 	put_smstate(u64, buf, 0x7e98, seg.base);
 
-	static_call(kvm_x86_get_idt)(vcpu, &dt);
+	static_call(kvm_x86_get_idt, vcpu, &dt);
 	put_smstate(u32, buf, 0x7e84, dt.size);
 	put_smstate(u64, buf, 0x7e88, dt.address);
 
@@ -9649,7 +9649,7 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu, char *buf)
 	put_smstate(u32, buf, 0x7e74, seg.limit);
 	put_smstate(u64, buf, 0x7e78, seg.base);
 
-	static_call(kvm_x86_get_gdt)(vcpu, &dt);
+	static_call(kvm_x86_get_gdt, vcpu, &dt);
 	put_smstate(u32, buf, 0x7e64, dt.size);
 	put_smstate(u64, buf, 0x7e68, dt.address);
 
@@ -9678,28 +9678,28 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	 * state (e.g. leave guest mode) after we've saved the state into the
 	 * SMM state-save area.
 	 */
-	static_call(kvm_x86_enter_smm)(vcpu, buf);
+	static_call(kvm_x86_enter_smm, vcpu, buf);
 
 	kvm_smm_changed(vcpu, true);
 	kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, buf, sizeof(buf));
 
-	if (static_call(kvm_x86_get_nmi_mask)(vcpu))
+	if (static_call(kvm_x86_get_nmi_mask, vcpu))
 		vcpu->arch.hflags |= HF_SMM_INSIDE_NMI_MASK;
 	else
-		static_call(kvm_x86_set_nmi_mask)(vcpu, true);
+		static_call(kvm_x86_set_nmi_mask, vcpu, true);
 
 	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
 	kvm_rip_write(vcpu, 0x8000);
 
 	cr0 = vcpu->arch.cr0 & ~(X86_CR0_PE | X86_CR0_EM | X86_CR0_TS | X86_CR0_PG);
-	static_call(kvm_x86_set_cr0)(vcpu, cr0);
+	static_call(kvm_x86_set_cr0, vcpu, cr0);
 	vcpu->arch.cr0 = cr0;
 
-	static_call(kvm_x86_set_cr4)(vcpu, 0);
+	static_call(kvm_x86_set_cr4, vcpu, 0);
 
 	/* Undocumented: IDT limit is set to zero on entry to SMM.  */
 	dt.address = dt.size = 0;
-	static_call(kvm_x86_set_idt)(vcpu, &dt);
+	static_call(kvm_x86_set_idt, vcpu, &dt);
 
 	kvm_set_dr(vcpu, 7, DR7_FIXED_1);
 
@@ -9730,7 +9730,7 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 
 #ifdef CONFIG_X86_64
 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
-		static_call(kvm_x86_set_efer)(vcpu, 0);
+		static_call(kvm_x86_set_efer, vcpu, 0);
 #endif
 
 	kvm_update_cpuid_runtime(vcpu);
@@ -9769,7 +9769,7 @@ void kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.apicv_active = activate;
 	kvm_apic_update_apicv(vcpu);
-	static_call(kvm_x86_refresh_apicv_exec_ctrl)(vcpu);
+	static_call(kvm_x86_refresh_apicv_exec_ctrl, vcpu);
 
 	/*
 	 * When APICv gets disabled, we may still have injected interrupts
@@ -9792,7 +9792,7 @@ void __kvm_set_or_clear_apicv_inhibit(struct kvm *kvm,
 
 	lockdep_assert_held_write(&kvm->arch.apicv_update_lock);
 
-	if (!static_call(kvm_x86_check_apicv_inhibit_reasons)(reason))
+	if (!static_call(kvm_x86_check_apicv_inhibit_reasons, reason))
 		return;
 
 	old = new = kvm->arch.apicv_inhibit_reasons;
@@ -9845,7 +9845,7 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 	if (irqchip_split(vcpu->kvm))
 		kvm_scan_ioapic_routes(vcpu, vcpu->arch.ioapic_handled_vectors);
 	else {
-		static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
+		static_call_cond(kvm_x86_sync_pir_to_irr, vcpu);
 		if (ioapic_in_kernel(vcpu->kvm))
 			kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors);
 	}
@@ -9867,12 +9867,13 @@ static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu)
 		bitmap_or((ulong *)eoi_exit_bitmap,
 			  vcpu->arch.ioapic_handled_vectors,
 			  to_hv_synic(vcpu)->vec_bitmap, 256);
-		static_call_cond(kvm_x86_load_eoi_exitmap)(vcpu, eoi_exit_bitmap);
+		static_call_cond(kvm_x86_load_eoi_exitmap, vcpu,
+				 eoi_exit_bitmap);
 		return;
 	}
 
-	static_call_cond(kvm_x86_load_eoi_exitmap)(
-		vcpu, (u64 *)vcpu->arch.ioapic_handled_vectors);
+	static_call_cond(kvm_x86_load_eoi_exitmap, vcpu,
+		         (u64 *)vcpu->arch.ioapic_handled_vectors);
 }
 
 void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
@@ -9891,7 +9892,7 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
 
 void kvm_arch_guest_memory_reclaimed(struct kvm *kvm)
 {
-	static_call_cond(kvm_x86_guest_memory_reclaimed)(kvm);
+	static_call_cond(kvm_x86_guest_memory_reclaimed, kvm);
 }
 
 static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
@@ -9899,7 +9900,7 @@ static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 	if (!lapic_in_kernel(vcpu))
 		return;
 
-	static_call_cond(kvm_x86_set_apic_access_page_addr)(vcpu);
+	static_call_cond(kvm_x86_set_apic_access_page_addr, vcpu);
 }
 
 void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu)
@@ -10050,10 +10051,10 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
 			kvm_check_async_pf_completion(vcpu);
 		if (kvm_check_request(KVM_REQ_MSR_FILTER_CHANGED, vcpu))
-			static_call(kvm_x86_msr_filter_changed)(vcpu);
+			static_call(kvm_x86_msr_filter_changed, vcpu);
 
 		if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
-			static_call(kvm_x86_update_cpu_dirty_logging)(vcpu);
+			static_call(kvm_x86_update_cpu_dirty_logging, vcpu);
 	}
 
 	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win ||
@@ -10075,7 +10076,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			goto out;
 		}
 		if (req_int_win)
-			static_call(kvm_x86_enable_irq_window)(vcpu);
+			static_call(kvm_x86_enable_irq_window, vcpu);
 
 		if (kvm_lapic_enabled(vcpu)) {
 			update_cr8_intercept(vcpu);
@@ -10090,7 +10091,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 	preempt_disable();
 
-	static_call(kvm_x86_prepare_switch_to_guest)(vcpu);
+	static_call(kvm_x86_prepare_switch_to_guest, vcpu);
 
 	/*
 	 * Disable IRQs before setting IN_GUEST_MODE.  Posted interrupt
@@ -10126,7 +10127,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 * i.e. they can post interrupts even if APICv is temporarily disabled.
 	 */
 	if (kvm_lapic_enabled(vcpu))
-		static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
+		static_call_cond(kvm_x86_sync_pir_to_irr, vcpu);
 
 	if (kvm_vcpu_exit_request(vcpu)) {
 		vcpu->mode = OUTSIDE_GUEST_MODE;
@@ -10140,7 +10141,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 	if (req_immediate_exit) {
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
-		static_call(kvm_x86_request_immediate_exit)(vcpu);
+		static_call(kvm_x86_request_immediate_exit, vcpu);
 	}
 
 	fpregs_assert_state_consistent();
@@ -10171,12 +10172,12 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		 */
 		WARN_ON_ONCE(kvm_apicv_activated(vcpu->kvm) != kvm_vcpu_apicv_active(vcpu));
 
-		exit_fastpath = static_call(kvm_x86_vcpu_run)(vcpu);
+		exit_fastpath = static_call(kvm_x86_vcpu_run, vcpu);
 		if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
 			break;
 
 		if (kvm_lapic_enabled(vcpu))
-			static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
+			static_call_cond(kvm_x86_sync_pir_to_irr, vcpu);
 
 		if (unlikely(kvm_vcpu_exit_request(vcpu))) {
 			exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
@@ -10192,7 +10193,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 */
 	if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)) {
 		WARN_ON(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP);
-		static_call(kvm_x86_sync_dirty_debug_regs)(vcpu);
+		static_call(kvm_x86_sync_dirty_debug_regs, vcpu);
 		kvm_update_dr0123(vcpu);
 		kvm_update_dr7(vcpu);
 	}
@@ -10221,7 +10222,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.xfd_no_write_intercept)
 		fpu_sync_guest_vmexit_xfd_state();
 
-	static_call(kvm_x86_handle_exit_irqoff)(vcpu);
+	static_call(kvm_x86_handle_exit_irqoff, vcpu);
 
 	if (vcpu->arch.guest_fpu.xfd_err)
 		wrmsrl(MSR_IA32_XFD_ERR, 0);
@@ -10275,13 +10276,13 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.apic_attention)
 		kvm_lapic_sync_from_vapic(vcpu);
 
-	r = static_call(kvm_x86_handle_exit)(vcpu, exit_fastpath);
+	r = static_call(kvm_x86_handle_exit, vcpu, exit_fastpath);
 	return r;
 
 cancel_injection:
 	if (req_immediate_exit)
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
-	static_call(kvm_x86_cancel_injection)(vcpu);
+	static_call(kvm_x86_cancel_injection, vcpu);
 	if (unlikely(vcpu->arch.apic_attention))
 		kvm_lapic_sync_from_vapic(vcpu);
 out:
@@ -10554,7 +10555,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		goto out;
 	}
 
-	r = static_call(kvm_x86_vcpu_pre_run)(vcpu);
+	r = static_call(kvm_x86_vcpu_pre_run, vcpu);
 	if (r <= 0)
 		goto out;
 
@@ -10673,10 +10674,10 @@ static void __get_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 	kvm_get_segment(vcpu, &sregs->tr, VCPU_SREG_TR);
 	kvm_get_segment(vcpu, &sregs->ldt, VCPU_SREG_LDTR);
 
-	static_call(kvm_x86_get_idt)(vcpu, &dt);
+	static_call(kvm_x86_get_idt, vcpu, &dt);
 	sregs->idt.limit = dt.size;
 	sregs->idt.base = dt.address;
-	static_call(kvm_x86_get_gdt)(vcpu, &dt);
+	static_call(kvm_x86_get_gdt, vcpu, &dt);
 	sregs->gdt.limit = dt.size;
 	sregs->gdt.base = dt.address;
 
@@ -10857,28 +10858,28 @@ static int __set_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs,
 
 	dt.size = sregs->idt.limit;
 	dt.address = sregs->idt.base;
-	static_call(kvm_x86_set_idt)(vcpu, &dt);
+	static_call(kvm_x86_set_idt, vcpu, &dt);
 	dt.size = sregs->gdt.limit;
 	dt.address = sregs->gdt.base;
-	static_call(kvm_x86_set_gdt)(vcpu, &dt);
+	static_call(kvm_x86_set_gdt, vcpu, &dt);
 
 	vcpu->arch.cr2 = sregs->cr2;
 	*mmu_reset_needed |= kvm_read_cr3(vcpu) != sregs->cr3;
 	vcpu->arch.cr3 = sregs->cr3;
 	kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3);
-	static_call_cond(kvm_x86_post_set_cr3)(vcpu, sregs->cr3);
+	static_call_cond(kvm_x86_post_set_cr3, vcpu, sregs->cr3);
 
 	kvm_set_cr8(vcpu, sregs->cr8);
 
 	*mmu_reset_needed |= vcpu->arch.efer != sregs->efer;
-	static_call(kvm_x86_set_efer)(vcpu, sregs->efer);
+	static_call(kvm_x86_set_efer, vcpu, sregs->efer);
 
 	*mmu_reset_needed |= kvm_read_cr0(vcpu) != sregs->cr0;
-	static_call(kvm_x86_set_cr0)(vcpu, sregs->cr0);
+	static_call(kvm_x86_set_cr0, vcpu, sregs->cr0);
 	vcpu->arch.cr0 = sregs->cr0;
 
 	*mmu_reset_needed |= kvm_read_cr4(vcpu) != sregs->cr4;
-	static_call(kvm_x86_set_cr4)(vcpu, sregs->cr4);
+	static_call(kvm_x86_set_cr4, vcpu, sregs->cr4);
 
 	if (update_pdptrs) {
 		idx = srcu_read_lock(&vcpu->kvm->srcu);
@@ -11048,7 +11049,7 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 	 */
 	kvm_set_rflags(vcpu, rflags);
 
-	static_call(kvm_x86_update_exception_bitmap)(vcpu);
+	static_call(kvm_x86_update_exception_bitmap, vcpu);
 
 	kvm_arch_vcpu_guestdbg_update_apicv_inhibit(vcpu->kvm);
 
@@ -11255,7 +11256,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.hv_root_tdp = INVALID_PAGE;
 #endif
 
-	r = static_call(kvm_x86_vcpu_create)(vcpu);
+	r = static_call(kvm_x86_vcpu_create, vcpu);
 	if (r)
 		goto free_guest_fpu;
 
@@ -11312,7 +11313,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 	kvmclock_reset(vcpu);
 
-	static_call(kvm_x86_vcpu_free)(vcpu);
+	static_call(kvm_x86_vcpu_free, vcpu);
 
 	kmem_cache_free(x86_emulator_cache, vcpu->arch.emulate_ctxt);
 	free_cpumask_var(vcpu->arch.wbinvd_dirty_mask);
@@ -11419,7 +11420,7 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	cpuid_0x1 = kvm_find_cpuid_entry(vcpu, 1, 0);
 	kvm_rdx_write(vcpu, cpuid_0x1 ? cpuid_0x1->eax : 0x600);
 
-	static_call(kvm_x86_vcpu_reset)(vcpu, init_event);
+	static_call(kvm_x86_vcpu_reset, vcpu, init_event);
 
 	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
 	kvm_rip_write(vcpu, 0xfff0);
@@ -11438,10 +11439,10 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	else
 		new_cr0 |= X86_CR0_NW | X86_CR0_CD;
 
-	static_call(kvm_x86_set_cr0)(vcpu, new_cr0);
-	static_call(kvm_x86_set_cr4)(vcpu, 0);
-	static_call(kvm_x86_set_efer)(vcpu, 0);
-	static_call(kvm_x86_update_exception_bitmap)(vcpu);
+	static_call(kvm_x86_set_cr0, vcpu, new_cr0);
+	static_call(kvm_x86_set_cr4, vcpu, 0);
+	static_call(kvm_x86_set_efer, vcpu, 0);
+	static_call(kvm_x86_update_exception_bitmap, vcpu);
 
 	/*
 	 * On the standard CR0/CR4/EFER modification paths, there are several
@@ -11493,7 +11494,7 @@ int kvm_arch_hardware_enable(void)
 	bool stable, backwards_tsc = false;
 
 	kvm_user_return_msr_cpu_online();
-	ret = static_call(kvm_x86_hardware_enable)();
+	ret = static_call(kvm_x86_hardware_enable);
 	if (ret != 0)
 		return ret;
 
@@ -11575,7 +11576,7 @@ int kvm_arch_hardware_enable(void)
 
 void kvm_arch_hardware_disable(void)
 {
-	static_call(kvm_x86_hardware_disable)();
+	static_call(kvm_x86_hardware_disable);
 	drop_user_return_notifiers();
 }
 
@@ -11625,7 +11626,7 @@ void kvm_arch_hardware_unsetup(void)
 {
 	kvm_unregister_perf_callbacks();
 
-	static_call(kvm_x86_hardware_unsetup)();
+	static_call(kvm_x86_hardware_unsetup);
 }
 
 int kvm_arch_check_processor_compat(void *opaque)
@@ -11665,7 +11666,7 @@ void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
 		pmu->need_cleanup = true;
 		kvm_make_request(KVM_REQ_PMU, vcpu);
 	}
-	static_call(kvm_x86_sched_in)(vcpu, cpu);
+	static_call(kvm_x86_sched_in, vcpu, cpu);
 }
 
 void kvm_arch_free_vm(struct kvm *kvm)
@@ -11725,7 +11726,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm_hv_init_vm(kvm);
 	kvm_xen_init_vm(kvm);
 
-	return static_call(kvm_x86_vm_init)(kvm);
+	return static_call(kvm_x86_vm_init, kvm);
 
 out_page_track:
 	kvm_page_track_cleanup(kvm);
@@ -11864,7 +11865,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 		__x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0);
 		mutex_unlock(&kvm->slots_lock);
 	}
-	static_call_cond(kvm_x86_vm_destroy)(kvm);
+	static_call_cond(kvm_x86_vm_destroy, kvm);
 	kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1));
 	kvm_pic_destroy(kvm);
 	kvm_ioapic_destroy(kvm);
@@ -12147,7 +12148,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
 {
 	return (is_guest_mode(vcpu) &&
-		static_call(kvm_x86_guest_apic_has_interrupt)(vcpu));
+		static_call(kvm_x86_guest_apic_has_interrupt, vcpu));
 }
 
 static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
@@ -12166,12 +12167,12 @@ static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu)
 
 	if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
 	    (vcpu->arch.nmi_pending &&
-	     static_call(kvm_x86_nmi_allowed)(vcpu, false)))
+	     static_call(kvm_x86_nmi_allowed, vcpu, false)))
 		return true;
 
 	if (kvm_test_request(KVM_REQ_SMI, vcpu) ||
 	    (vcpu->arch.smi_pending &&
-	     static_call(kvm_x86_smi_allowed)(vcpu, false)))
+	     static_call(kvm_x86_smi_allowed, vcpu, false)))
 		return true;
 
 	if (kvm_arch_interrupt_allowed(vcpu) &&
@@ -12197,7 +12198,7 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 
 bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->arch.apicv_active && static_call(kvm_x86_dy_apicv_has_pending_interrupt)(vcpu))
+	if (vcpu->arch.apicv_active && static_call(kvm_x86_dy_apicv_has_pending_interrupt, vcpu))
 		return true;
 
 	return false;
@@ -12236,7 +12237,7 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 
 int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu)
 {
-	return static_call(kvm_x86_interrupt_allowed)(vcpu, false);
+	return static_call(kvm_x86_interrupt_allowed, vcpu, false);
 }
 
 unsigned long kvm_get_linear_rip(struct kvm_vcpu *vcpu)
@@ -12262,7 +12263,7 @@ unsigned long kvm_get_rflags(struct kvm_vcpu *vcpu)
 {
 	unsigned long rflags;
 
-	rflags = static_call(kvm_x86_get_rflags)(vcpu);
+	rflags = static_call(kvm_x86_get_rflags, vcpu);
 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)
 		rflags &= ~X86_EFLAGS_TF;
 	return rflags;
@@ -12274,7 +12275,7 @@ static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP &&
 	    kvm_is_linear_rip(vcpu, vcpu->arch.singlestep_rip))
 		rflags |= X86_EFLAGS_TF;
-	static_call(kvm_x86_set_rflags)(vcpu, rflags);
+	static_call(kvm_x86_set_rflags, vcpu, rflags);
 }
 
 void kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
@@ -12405,7 +12406,7 @@ static bool kvm_can_deliver_async_pf(struct kvm_vcpu *vcpu)
 		return false;
 
 	if (vcpu->arch.apf.send_user_only &&
-	    static_call(kvm_x86_get_cpl)(vcpu) == 0)
+	    static_call(kvm_x86_get_cpl, vcpu) == 0)
 		return false;
 
 	if (is_guest_mode(vcpu)) {
@@ -12516,7 +12517,7 @@ bool kvm_arch_can_dequeue_async_page_present(struct kvm_vcpu *vcpu)
 void kvm_arch_start_assignment(struct kvm *kvm)
 {
 	if (atomic_inc_return(&kvm->arch.assigned_device_count) == 1)
-		static_call_cond(kvm_x86_pi_start_assignment)(kvm);
+		static_call_cond(kvm_x86_pi_start_assignment, kvm);
 }
 EXPORT_SYMBOL_GPL(kvm_arch_start_assignment);
 
@@ -12564,8 +12565,7 @@ int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
 
 	irqfd->producer = prod;
 	kvm_arch_start_assignment(irqfd->kvm);
-	ret = static_call(kvm_x86_pi_update_irte)(irqfd->kvm,
-					 prod->irq, irqfd->gsi, 1);
+	ret = static_call(kvm_x86_pi_update_irte, irqfd->kvm, prod->irq, irqfd->gsi, 1);
 
 	if (ret)
 		kvm_arch_end_assignment(irqfd->kvm);
@@ -12589,7 +12589,7 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
 	 * when the irq is masked/disabled or the consumer side (KVM
 	 * int this case doesn't want to receive the interrupts.
 	*/
-	ret = static_call(kvm_x86_pi_update_irte)(irqfd->kvm, prod->irq, irqfd->gsi, 0);
+	ret = static_call(kvm_x86_pi_update_irte, irqfd->kvm, prod->irq, irqfd->gsi, 0);
 	if (ret)
 		printk(KERN_INFO "irq bypass consumer (token %p) unregistration"
 		       " fails: %d\n", irqfd->consumer.token, ret);
@@ -12600,7 +12600,7 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
 int kvm_arch_update_irqfd_routing(struct kvm *kvm, unsigned int host_irq,
 				   uint32_t guest_irq, bool set)
 {
-	return static_call(kvm_x86_pi_update_irte)(kvm, host_irq, guest_irq, set);
+	return static_call(kvm_x86_pi_update_irte, kvm, host_irq, guest_irq, set);
 }
 
 bool kvm_arch_irqfd_route_changed(struct kvm_kernel_irq_routing_entry *old,
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 588792f00334..4b3b3d9b66b8 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -113,7 +113,7 @@ static inline bool is_64_bit_mode(struct kvm_vcpu *vcpu)
 
 	if (!is_long_mode(vcpu))
 		return false;
-	static_call(kvm_x86_get_cs_db_l_bits)(vcpu, &cs_db, &cs_l);
+	static_call(kvm_x86_get_cs_db_l_bits, vcpu, &cs_db, &cs_l);
 	return cs_l;
 }
 
@@ -248,7 +248,7 @@ static inline bool kvm_check_has_quirk(struct kvm *kvm, u64 quirk)
 
 static inline bool kvm_vcpu_latch_init(struct kvm_vcpu *vcpu)
 {
-	return is_smm(vcpu) || static_call(kvm_x86_apic_init_signal_blocked)(vcpu);
+	return is_smm(vcpu) || static_call(kvm_x86_apic_init_signal_blocked, vcpu);
 }
 
 void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index bf6cc25eee76..9c5d966d18e4 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -732,7 +732,7 @@ int kvm_xen_write_hypercall_page(struct kvm_vcpu *vcpu, u64 data)
 		instructions[0] = 0xb8;
 
 		/* vmcall / vmmcall */
-		static_call(kvm_x86_patch_hypercall)(vcpu, instructions + 5);
+		static_call(kvm_x86_patch_hypercall, vcpu, instructions + 5);
 
 		/* ret */
 		instructions[8] = 0xc3;
@@ -867,7 +867,7 @@ int kvm_xen_hypercall(struct kvm_vcpu *vcpu)
 	vcpu->run->exit_reason = KVM_EXIT_XEN;
 	vcpu->run->xen.type = KVM_EXIT_XEN_HCALL;
 	vcpu->run->xen.u.hcall.longmode = longmode;
-	vcpu->run->xen.u.hcall.cpl = static_call(kvm_x86_get_cpl)(vcpu);
+	vcpu->run->xen.u.hcall.cpl = static_call(kvm_x86_get_cpl, vcpu);
 	vcpu->run->xen.u.hcall.input = input;
 	vcpu->run->xen.u.hcall.params[0] = params[0];
 	vcpu->run->xen.u.hcall.params[1] = params[1];
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 7be38bc6a673..06c77ca2b3bb 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -146,7 +146,7 @@ DEFINE_STATIC_CALL(amd_pstate_enable, pstate_enable);
 
 static inline int amd_pstate_enable(bool enable)
 {
-	return static_call(amd_pstate_enable)(enable);
+	return static_call(amd_pstate_enable, enable);
 }
 
 static int pstate_init_perf(struct amd_cpudata *cpudata)
@@ -194,7 +194,7 @@ DEFINE_STATIC_CALL(amd_pstate_init_perf, pstate_init_perf);
 
 static inline int amd_pstate_init_perf(struct amd_cpudata *cpudata)
 {
-	return static_call(amd_pstate_init_perf)(cpudata);
+	return static_call(amd_pstate_init_perf, cpudata);
 }
 
 static void pstate_update_perf(struct amd_cpudata *cpudata, u32 min_perf,
@@ -226,8 +226,8 @@ static inline void amd_pstate_update_perf(struct amd_cpudata *cpudata,
 					  u32 min_perf, u32 des_perf,
 					  u32 max_perf, bool fast_switch)
 {
-	static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf,
-					    max_perf, fast_switch);
+	static_call(amd_pstate_update_perf, cpudata, min_perf, des_perf,
+		    max_perf, fast_switch);
 }
 
 static inline bool amd_pstate_sample(struct amd_cpudata *cpudata)
diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index ab78bd4c2eb0..a7d800a5dbd8 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -421,7 +421,7 @@ void raw_irqentry_exit_cond_resched(void);
 #define irqentry_exit_cond_resched_dynamic_enabled	raw_irqentry_exit_cond_resched
 #define irqentry_exit_cond_resched_dynamic_disabled	NULL
 DECLARE_STATIC_CALL(irqentry_exit_cond_resched, raw_irqentry_exit_cond_resched);
-#define irqentry_exit_cond_resched()	static_call(irqentry_exit_cond_resched)()
+#define irqentry_exit_cond_resched()	static_call(irqentry_exit_cond_resched)
 #elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
 DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
 void dynamic_irqentry_exit_cond_resched(void);
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index fe6efb24d151..7814129fe0c9 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -107,7 +107,7 @@ DECLARE_STATIC_CALL(might_resched, __cond_resched);
 
 static __always_inline void might_resched(void)
 {
-	static_call_mod(might_resched)();
+	static_call_mod(might_resched);
 }
 
 #elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index af97dd427501..2e12811b3730 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1253,15 +1253,15 @@ DECLARE_STATIC_CALL(__perf_guest_handle_intel_pt_intr, *perf_guest_cbs->handle_i
 
 static inline unsigned int perf_guest_state(void)
 {
-	return static_call(__perf_guest_state)();
+	return static_call(__perf_guest_state);
 }
 static inline unsigned long perf_guest_get_ip(void)
 {
-	return static_call(__perf_guest_get_ip)();
+	return static_call(__perf_guest_get_ip);
 }
 static inline unsigned int perf_guest_handle_intel_pt_intr(void)
 {
-	return static_call(__perf_guest_handle_intel_pt_intr)();
+	return static_call(__perf_guest_handle_intel_pt_intr);
 }
 extern void perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *cbs);
 extern void perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *cbs);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index a8911b1f35aa..e8a98ee1442d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2040,7 +2040,7 @@ DECLARE_STATIC_CALL(cond_resched, __cond_resched);
 
 static __always_inline int _cond_resched(void)
 {
-	return static_call_mod(cond_resched)();
+	return static_call_mod(cond_resched);
 }
 
 #elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index df53bed9d71f..7f1219fb98cf 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -21,8 +21,8 @@
  *
  *   __static_call_return0;
  *
- *   static_call(name)(args...);
- *   static_call_cond(name)(args...);
+ *   static_call(name, args...);
+ *   static_call_cond(name, args...);
  *   static_call_update(name, func);
  *   static_call_query(name);
  *
@@ -38,13 +38,13 @@
  *   DEFINE_STATIC_CALL(my_name, func_a);
  *
  *   # Call func_a()
- *   static_call(my_name)(arg1, arg2);
+ *   static_call(my_name, arg1, arg2);
  *
  *   # Update 'my_name' to point to func_b()
  *   static_call_update(my_name, &func_b);
  *
  *   # Call func_b()
- *   static_call(my_name)(arg1, arg2);
+ *   static_call(my_name, arg1, arg2);
  *
  *
  * Implementation details:
@@ -94,7 +94,7 @@
  *
  *   When calling a static_call that can be NULL, use:
  *
- *     static_call_cond(name)(arg1);
+ *     static_call_cond(name, arg1);
  *
  *   which will include the required value tests to avoid NULL-pointer
  *   dereferences.
@@ -204,7 +204,7 @@ extern long __static_call_return0(void);
 	};								\
 	ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)
 
-#define static_call_cond(name)	(void)__static_call(name)
+#define static_call_cond(name, args...)	(void)__static_call(name)(args)
 
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
@@ -246,7 +246,7 @@ static inline int static_call_init(void) { return 0; }
 	};								\
 	ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)
 
-#define static_call_cond(name)	(void)__static_call(name)
+#define static_call_cond(name, args...)	(void)__static_call(name)(args)
 
 static inline
 void __static_call_update(struct static_call_key *key, void *tramp, void *func)
@@ -323,7 +323,7 @@ static inline void __static_call_nop(void) { }
 	(typeof(STATIC_CALL_TRAMP(name))*)func;				\
 })
 
-#define static_call_cond(name)	(void)__static_call_cond(name)
+#define static_call_cond(name, args...)	(void)__static_call_cond(name)(args)
 
 static inline
 void __static_call_update(struct static_call_key *key, void *tramp, void *func)
diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 5a00b8b2cf9f..7e1ce240a2cd 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -81,13 +81,13 @@ struct static_call_key {
 
 #ifdef MODULE
 #define __STATIC_CALL_MOD_ADDRESSABLE(name)
-#define static_call_mod(name)	__raw_static_call(name)
+#define static_call_mod(name, args...)	__raw_static_call(name)(args)
 #else
 #define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
-#define static_call_mod(name)	__static_call(name)
+#define static_call_mod(name, args...)	__static_call(name)(args)
 #endif
 
-#define static_call(name)	__static_call(name)
+#define static_call(name, args...)	__static_call(name)(args)
 
 #else
 
@@ -95,8 +95,8 @@ struct static_call_key {
 	void *func;
 };
 
-#define static_call(name)						\
-	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
+#define static_call(name, args...)					\
+	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))(args)
 
 #endif /* CONFIG_HAVE_STATIC_CALL */
 
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 28031b15f878..1c68fcad48a2 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -170,7 +170,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 			rcu_dereference_raw((&__tracepoint_##name)->funcs); \
 		if (it_func_ptr) {					\
 			__data = (it_func_ptr)->data;			\
-			static_call(tp_func_##name)(__data, args);	\
+			static_call(tp_func_##name, __data, args);	\
 		}							\
 	} while (0)
 #else
diff --git a/kernel/static_call_inline.c b/kernel/static_call_inline.c
index dc5665b62814..9752489fcaab 100644
--- a/kernel/static_call_inline.c
+++ b/kernel/static_call_inline.c
@@ -533,7 +533,7 @@ static int __init test_static_call_init(void)
               if (scd->func)
                       static_call_update(sc_selftest, scd->func);
 
-              WARN_ON(static_call(sc_selftest)(scd->val) != scd->expect);
+              WARN_ON(static_call(sc_selftest, scd->val) != scd->expect);
       }
 
       return 0;
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index d8553f46caa2..fa1a0deddda5 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1096,7 +1096,7 @@ BPF_CALL_3(bpf_get_branch_snapshot, void *, buf, u32, size, u64, flags)
 	static const u32 br_entry_size = sizeof(struct perf_branch_entry);
 	u32 entry_cnt = size / br_entry_size;
 
-	entry_cnt = static_call(perf_snapshot_branch_stack)(buf, entry_cnt);
+	entry_cnt = static_call(perf_snapshot_branch_stack, buf, entry_cnt);
 
 	if (unlikely(flags))
 		return -EINVAL;
diff --git a/security/keys/trusted-keys/trusted_core.c b/security/keys/trusted-keys/trusted_core.c
index 9b9d3ef79cbe..3f48310a4ce3 100644
--- a/security/keys/trusted-keys/trusted_core.c
+++ b/security/keys/trusted-keys/trusted_core.c
@@ -170,15 +170,15 @@ static int trusted_instantiate(struct key *key,
 
 	switch (key_cmd) {
 	case Opt_load:
-		ret = static_call(trusted_key_unseal)(payload, datablob);
+		ret = static_call(trusted_key_unseal, payload, datablob);
 		dump_payload(payload);
 		if (ret < 0)
 			pr_info("key_unseal failed (%d)\n", ret);
 		break;
 	case Opt_new:
 		key_len = payload->key_len;
-		ret = static_call(trusted_key_get_random)(payload->key,
-							  key_len);
+		ret = static_call(trusted_key_get_random, payload->key,
+				  key_len);
 		if (ret < 0)
 			goto out;
 
@@ -188,7 +188,7 @@ static int trusted_instantiate(struct key *key,
 			goto out;
 		}
 
-		ret = static_call(trusted_key_seal)(payload, datablob);
+		ret = static_call(trusted_key_seal, payload, datablob);
 		if (ret < 0)
 			pr_info("key_seal failed (%d)\n", ret);
 		break;
@@ -257,7 +257,7 @@ static int trusted_update(struct key *key, struct key_preparsed_payload *prep)
 	dump_payload(p);
 	dump_payload(new_p);
 
-	ret = static_call(trusted_key_seal)(new_p, datablob);
+	ret = static_call(trusted_key_seal, new_p, datablob);
 	if (ret < 0) {
 		pr_info("key_seal failed (%d)\n", ret);
 		kfree_sensitive(new_p);
@@ -334,7 +334,7 @@ static int __init init_trusted(void)
 				   trusted_key_sources[i].ops->exit);
 		migratable = trusted_key_sources[i].ops->migratable;
 
-		ret = static_call(trusted_key_init)();
+		ret = static_call(trusted_key_init);
 		if (!ret)
 			break;
 	}
@@ -351,7 +351,7 @@ static int __init init_trusted(void)
 
 static void __exit cleanup_trusted(void)
 {
-	static_call_cond(trusted_key_exit)();
+	static_call_cond(trusted_key_exit);
 }
 
 late_initcall(init_trusted);
-- 
2.36.0.464.gb9c8b46e94-goog


* [RFC PATCH 15/21] static_call: Use cfi_unchecked
@ 2022-04-29 20:36 ` Sami Tolvanen
  0 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

With CONFIG_HAVE_STATIC_CALL, static calls are patched into direct
calls at runtime, so emitting indirect call CFI checks for them is
unnecessary. Disable the checks for these call sites with the
cfi_unchecked macro.
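
For illustration, a rough sketch of the before/after expansion at a
call site (the cfi_unchecked macro comes from patch 13; the built-in
name below is only a placeholder for the one added by the proposed
Clang patches):

	/* Hypothetical shorthand for the proposed Clang built-in: */
	#define cfi_unchecked(call)	__builtin_kcfi_call_unchecked(call)

	/* Before: the macro returned the callee, and the compiler
	 * emitted a KCFI check at the indirect call site.
	 */
	__static_call(my_name)(arg1, arg2);

	/* After: the whole call expression is passed to the built-in,
	 * which suppresses the KCFI check for this site only.
	 */
	cfi_unchecked(__static_call(my_name)(arg1, arg2));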

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 include/linux/static_call.h             |  6 ++++--
 include/linux/static_call_types.h       |  9 ++++++---
 tools/include/linux/static_call_types.h | 13 ++++++++-----
 3 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 7f1219fb98cf..f666c841b718 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -204,7 +204,8 @@ extern long __static_call_return0(void);
 	};								\
 	ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)
 
-#define static_call_cond(name, args...)	(void)__static_call(name)(args)
+#define static_call_cond(name, args...)					\
+	(void)cfi_unchecked(__static_call(name)(args))
 
 #define EXPORT_STATIC_CALL(name)					\
 	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
@@ -246,7 +247,8 @@ static inline int static_call_init(void) { return 0; }
 	};								\
 	ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)
 
-#define static_call_cond(name, args...)	(void)__static_call(name)(args)
+#define static_call_cond(name, args...)					\
+	(void)cfi_unchecked(__static_call(name)(args))
 
 static inline
 void __static_call_update(struct static_call_key *key, void *tramp, void *func)
diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 7e1ce240a2cd..faebc1412c86 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -81,13 +81,16 @@ struct static_call_key {
 
 #ifdef MODULE
 #define __STATIC_CALL_MOD_ADDRESSABLE(name)
-#define static_call_mod(name, args...)	__raw_static_call(name)(args)
+#define static_call_mod(name, args...) \
+	cfi_unchecked(__raw_static_call(name)(args))
 #else
 #define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
-#define static_call_mod(name, args...)	__static_call(name)(args)
+#define static_call_mod(name, args...) \
+	cfi_unchecked(__static_call(name)(args))
 #endif
 
-#define static_call(name, args...)	__static_call(name)(args)
+#define static_call(name, args...) \
+	cfi_unchecked(__static_call(name)(args))
 
 #else
 
diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h
index 5a00b8b2cf9f..faebc1412c86 100644
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -81,13 +81,16 @@ struct static_call_key {
 
 #ifdef MODULE
 #define __STATIC_CALL_MOD_ADDRESSABLE(name)
-#define static_call_mod(name)	__raw_static_call(name)
+#define static_call_mod(name, args...) \
+	cfi_unchecked(__raw_static_call(name)(args))
 #else
 #define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
-#define static_call_mod(name)	__static_call(name)
+#define static_call_mod(name, args...) \
+	cfi_unchecked(__static_call(name)(args))
 #endif
 
-#define static_call(name)	__static_call(name)
+#define static_call(name, args...) \
+	cfi_unchecked(__static_call(name)(args))
 
 #else
 
@@ -95,8 +98,8 @@ struct static_call_key {
 	void *func;
 };
 
-#define static_call(name)						\
-	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))
+#define static_call(name, args...)					\
+	((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func))(args)
 
 #endif /* CONFIG_HAVE_STATIC_CALL */
 
-- 
2.36.0.464.gb9c8b46e94-goog


* [RFC PATCH 16/21] objtool: Add support for CONFIG_CFI_CLANG
@ 2022-04-29 20:36 ` Sami Tolvanen
  0 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

With -fsanitize=kcfi, the compiler injects a type identifier before
each function's entry point. Teach objtool to recognize these
identifiers so it neither decodes them as instructions nor flags them
as unreachable code.
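
Conceptually, the identifier is data placed immediately before the
function's entry, which objtool would otherwise try to decode as an
instruction. A minimal sketch of the check the identifier enables (all
names and offsets here are illustrative; on x86-64 the identifier
occupies KCFI_TYPEID_LEN bytes before the entry):

	/* Sketch of a KCFI-checked indirect call, not actual kernel code. */
	int (*target)(int) = some_fn;	/* some_fn is hypothetical */
	u32 hash = *(u32 *)((unsigned long)target - KCFI_PREFIX_OFF);
	if (hash != KCFI_EXPECTED_HASH)	/* hash of the expected fn type */
		__builtin_trap();	/* trap location lands in .kcfi_traps */
	target(0);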

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 scripts/Makefile.build                    |   3 +-
 scripts/link-vmlinux.sh                   |   3 +
 tools/objtool/arch/x86/include/arch/elf.h |   2 +
 tools/objtool/builtin-check.c             |   3 +-
 tools/objtool/check.c                     | 128 ++++++++++++++++++++--
 tools/objtool/elf.c                       |  13 +++
 tools/objtool/include/objtool/arch.h      |   1 +
 tools/objtool/include/objtool/builtin.h   |   2 +-
 tools/objtool/include/objtool/elf.h       |   2 +
 9 files changed, 145 insertions(+), 12 deletions(-)

diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 9717e6f6fb31..c850ac420b60 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -235,7 +235,8 @@ objtool_args =								\
 	$(if $(CONFIG_RETPOLINE), --retpoline)				\
 	$(if $(CONFIG_X86_SMAP), --uaccess)				\
 	$(if $(CONFIG_FTRACE_MCOUNT_USE_OBJTOOL), --mcount)		\
-	$(if $(CONFIG_SLS), --sls)
+	$(if $(CONFIG_SLS), --sls)					\
+	$(if $(CONFIG_CFI_CLANG), --kcfi)
 
 cmd_objtool = $(if $(objtool-enabled), ; $(objtool) $(objtool_args) $@)
 cmd_gen_objtooldep = $(if $(objtool-enabled), { echo ; echo '$@: $$(wildcard $(objtool))' ; } >> $(dot-target).cmd)
diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
index 20f44504a644..d171f8507db2 100755
--- a/scripts/link-vmlinux.sh
+++ b/scripts/link-vmlinux.sh
@@ -152,6 +152,9 @@ objtool_link()
 		if is_enabled CONFIG_SLS; then
 			objtoolopt="${objtoolopt} --sls"
 		fi
+		if is_enabled CONFIG_CFI_CLANG; then
+			objtoolopt="${objtoolopt} --kcfi"
+		fi
 		info OBJTOOL ${1}
 		tools/objtool/objtool ${objtoolcmd} ${objtoolopt} ${1}
 	fi
diff --git a/tools/objtool/arch/x86/include/arch/elf.h b/tools/objtool/arch/x86/include/arch/elf.h
index 69cc4264b28a..8833d989eec7 100644
--- a/tools/objtool/arch/x86/include/arch/elf.h
+++ b/tools/objtool/arch/x86/include/arch/elf.h
@@ -3,4 +3,6 @@
 
 #define R_NONE R_X86_64_NONE
 
+#define KCFI_TYPEID_LEN	6
+
 #endif /* _OBJTOOL_ARCH_ELF */
diff --git a/tools/objtool/builtin-check.c b/tools/objtool/builtin-check.c
index fc6975ab8b06..8a662dcc21be 100644
--- a/tools/objtool/builtin-check.c
+++ b/tools/objtool/builtin-check.c
@@ -21,7 +21,7 @@
 
 bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats,
      lto, vmlinux, mcount, noinstr, backup, sls, dryrun,
-     ibt;
+     ibt, kcfi;
 
 static const char * const check_usage[] = {
 	"objtool check [<options>] file.o",
@@ -49,6 +49,7 @@ const struct option check_options[] = {
 	OPT_BOOLEAN('S', "sls", &sls, "validate straight-line-speculation"),
 	OPT_BOOLEAN(0, "dry-run", &dryrun, "don't write the modifications"),
 	OPT_BOOLEAN(0, "ibt", &ibt, "validate ENDBR placement"),
+	OPT_BOOLEAN('k', "kcfi", &kcfi, "detect control-flow integrity type identifiers"),
 	OPT_END(),
 };
 
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index bd0c2c828940..e6bee2f2996a 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -27,6 +27,12 @@ struct alternative {
 	bool skip_orig;
 };
 
+struct kcfi_type {
+	struct section *sec;
+	unsigned long offset;
+	struct hlist_node hash;
+};
+
 static unsigned long nr_cfi, nr_cfi_reused, nr_cfi_cache;
 
 static struct cfi_init_state initial_func_cfi;
@@ -143,6 +149,99 @@ static bool is_sibling_call(struct instruction *insn)
 	return (is_static_jump(insn) && insn->call_dest);
 }
 
+static int kcfi_bits;
+static struct hlist_head *kcfi_hash;
+
+static void *kcfi_alloc_hash(unsigned long size)
+{
+	kcfi_bits = max(10, ilog2(size));
+	kcfi_hash = mmap(NULL, sizeof(struct hlist_head) << kcfi_bits,
+			PROT_READ|PROT_WRITE,
+			MAP_PRIVATE|MAP_ANON, -1, 0);
+	if (kcfi_hash == (void *)-1L) {
+		WARN("mmap fail kcfi_hash");
+		kcfi_hash = NULL;
+	}  else if (stats) {
+		printf("kcfi_bits: %d\n", kcfi_bits);
+	}
+
+	return kcfi_hash;
+}
+
+static void add_kcfi_type(struct kcfi_type *type)
+{
+	hlist_add_head(&type->hash,
+		&kcfi_hash[hash_min(
+			sec_offset_hash(type->sec, type->offset),
+			kcfi_bits)]);
+}
+
+static bool add_kcfi_types(struct section *sec)
+{
+	struct reloc *reloc;
+
+	list_for_each_entry(reloc, &sec->reloc_list, list) {
+		struct kcfi_type *type;
+
+		if (reloc->sym->type != STT_SECTION) {
+			WARN("unexpected relocation symbol type in %s", sec->name);
+			return false;
+		}
+
+		type = malloc(sizeof(*type));
+		if (!type) {
+			perror("malloc");
+			return false;
+		}
+
+		type->sec = reloc->sym->sec;
+		type->offset = reloc->addend;
+
+		add_kcfi_type(type);
+	}
+
+	return true;
+}
+
+static int read_kcfi_types(struct objtool_file *file)
+{
+	if (!kcfi)
+		return 0;
+
+	if (!kcfi_alloc_hash(file->elf->text_size / 16))
+		return -1;
+
+	if (!for_each_section_by_name(file->elf, ".rela.kcfi_types", add_kcfi_types))
+		return -1;
+
+	return 0;
+}
+
+static bool is_kcfi_typeid(struct elf *elf, struct instruction *insn)
+{
+	struct hlist_head *head;
+	struct kcfi_type *type;
+	struct reloc *reloc;
+
+	if (!kcfi)
+		return false;
+
+	/* Compiler-generated annotation in .kcfi_types. */
+	head = &kcfi_hash[hash_min(sec_offset_hash(insn->sec, insn->offset), kcfi_bits)];
+
+	hlist_for_each_entry(type, head, hash)
+		if (type->sec == insn->sec && type->offset == insn->offset)
+			return true;
+
+	/* Manual annotation (in assembly code). */
+	reloc = find_reloc_by_dest(elf, insn->sec, insn->offset);
+
+	if (reloc && !strncmp(reloc->sym->name, "__kcfi_typeid_", 14))
+		return true;
+
+	return false;
+}
+
 /*
  * This checks to see if the given function is a "noreturn" function.
  *
@@ -388,13 +487,18 @@ static int decode_instructions(struct objtool_file *file)
 			insn->sec = sec;
 			insn->offset = offset;
 
-			ret = arch_decode_instruction(file, sec, offset,
-						      sec->sh.sh_size - offset,
-						      &insn->len, &insn->type,
-						      &insn->immediate,
-						      &insn->stack_ops);
-			if (ret)
-				goto err;
+			if (is_kcfi_typeid(file->elf, insn)) {
+				insn->type = INSN_KCFI_TYPEID;
+				insn->len = KCFI_TYPEID_LEN;
+			} else {
+				ret = arch_decode_instruction(file, sec, offset,
+							      sec->sh.sh_size - offset,
+							      &insn->len, &insn->type,
+							      &insn->immediate,
+							      &insn->stack_ops);
+				if (ret)
+					goto err;
+			}
 
 			/*
 			 * By default, "ud2" is a dead end unless otherwise
@@ -420,7 +524,8 @@ static int decode_instructions(struct objtool_file *file)
 			}
 
 			sym_for_each_insn(file, func, insn) {
-				insn->func = func;
+				if (insn->type != INSN_KCFI_TYPEID)
+					insn->func = func;
 				if (insn->type == INSN_ENDBR && list_empty(&insn->call_node)) {
 					if (insn->offset == insn->func->offset) {
 						list_add_tail(&insn->call_node, &file->endbr_list);
@@ -2219,6 +2324,10 @@ static int decode_sections(struct objtool_file *file)
 	if (ret)
 		return ret;
 
+	ret = read_kcfi_types(file);
+	if (ret)
+		return ret;
+
 	ret = decode_instructions(file);
 	if (ret)
 		return ret;
@@ -3595,7 +3704,8 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instructio
 	int i;
 	struct instruction *prev_insn;
 
-	if (insn->ignore || insn->type == INSN_NOP || insn->type == INSN_TRAP)
+	if (insn->ignore || insn->type == INSN_NOP || insn->type == INSN_TRAP ||
+			insn->type == INSN_KCFI_TYPEID)
 		return true;
 
 	/*
diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
index d7b99a737496..c4e277d41fd2 100644
--- a/tools/objtool/elf.c
+++ b/tools/objtool/elf.c
@@ -120,6 +120,19 @@ struct section *find_section_by_name(const struct elf *elf, const char *name)
 	return NULL;
 }
 
+bool for_each_section_by_name(const struct elf *elf, const char *name,
+			      bool (*callback)(struct section *))
+{
+	struct section *sec;
+
+	elf_hash_for_each_possible(section_name, sec, name_hash, str_hash(name)) {
+		if (!strcmp(sec->name, name) && !callback(sec))
+			return false;
+	}
+
+	return true;
+}
+
 static struct section *find_section_by_index(struct elf *elf,
 					     unsigned int idx)
 {
diff --git a/tools/objtool/include/objtool/arch.h b/tools/objtool/include/objtool/arch.h
index 9b19cc304195..3db5951e7aa9 100644
--- a/tools/objtool/include/objtool/arch.h
+++ b/tools/objtool/include/objtool/arch.h
@@ -28,6 +28,7 @@ enum insn_type {
 	INSN_CLD,
 	INSN_TRAP,
 	INSN_ENDBR,
+	INSN_KCFI_TYPEID,
 	INSN_OTHER,
 };
 
diff --git a/tools/objtool/include/objtool/builtin.h b/tools/objtool/include/objtool/builtin.h
index c39dbfaef6dc..68409070bca5 100644
--- a/tools/objtool/include/objtool/builtin.h
+++ b/tools/objtool/include/objtool/builtin.h
@@ -10,7 +10,7 @@
 extern const struct option check_options[];
 extern bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats,
 	    lto, vmlinux, mcount, noinstr, backup, sls, dryrun,
-	    ibt;
+	    ibt, kcfi;
 
 extern int cmd_parse_options(int argc, const char **argv, const char * const usage[]);
 
diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
index 22ba7e2b816e..7fd3462ce32a 100644
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -148,6 +148,8 @@ int elf_write(struct elf *elf);
 void elf_close(struct elf *elf);
 
 struct section *find_section_by_name(const struct elf *elf, const char *name);
+bool for_each_section_by_name(const struct elf *elf, const char *name,
+			      bool (*callback)(struct section *));
 struct symbol *find_func_by_offset(struct section *sec, unsigned long offset);
 struct symbol *find_symbol_by_offset(struct section *sec, unsigned long offset);
 struct symbol *find_symbol_by_name(const struct elf *elf, const char *name);
-- 
2.36.0.464.gb9c8b46e94-goog


* [RFC PATCH 17/21] x86/tools/relocs: Ignore __kcfi_typeid_ relocations
@ 2022-04-29 20:36 ` Sami Tolvanen
  0 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

Ignore __kcfi_typeid_ symbols. These are absolute, compiler-generated
constants that contain CFI type identifiers, not addresses that need
relocating.
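
In effect, the added regex entry makes relocs treat them like the
existing __crc_ symbols; a simplified C sketch of the resulting
behavior (illustrative only -- the real tool matches through the
sym_regex_kernel table shown in the diff below):

	/* Absolute, compiler-generated hash; no relocation processing. */
	if (!strncmp(sym_name(sym), "__kcfi_typeid_", 14))
		return;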

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/x86/tools/relocs.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
index e2c5b296120d..2925074b9a58 100644
--- a/arch/x86/tools/relocs.c
+++ b/arch/x86/tools/relocs.c
@@ -56,6 +56,7 @@ static const char * const sym_regex_kernel[S_NSYMTYPES] = {
 	"^(xen_irq_disable_direct_reloc$|"
 	"xen_save_fl_direct_reloc$|"
 	"VDSO|"
+	"__kcfi_typeid_|"
 	"__crc_)",
 
 /*
-- 
2.36.0.464.gb9c8b46e94-goog


* [RFC PATCH 18/21] x86: Add types to indirect called assembly functions
@ 2022-04-29 20:36 ` Sami Tolvanen
  0 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

With CONFIG_CFI_CLANG, assembly functions indirectly called from C code
must be annotated with type identifiers to pass CFI checking.
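
The C side does not change; the annotation only gives the assembly
symbol a type hash matching its C prototype. A hedged sketch of the
pairing for one of the functions below (the function pointer here is
invented for illustration):

	/* Prototype whose KCFI hash the asm annotation must match: */
	asmlinkage void blowfish_dec_blk(struct bf_ctx *ctx, u8 *dst,
					 const u8 *src);

	/* Indirect calls through a matching pointer type now pass CFI: */
	void (*decrypt)(struct bf_ctx *, u8 *, const u8 *) = blowfish_dec_blk;
	decrypt(ctx, dst, src);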

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/x86/crypto/blowfish-x86_64-asm_64.S | 5 +++--
 arch/x86/lib/memcpy_64.S                 | 3 ++-
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/crypto/blowfish-x86_64-asm_64.S b/arch/x86/crypto/blowfish-x86_64-asm_64.S
index 802d71582689..4a43e072d2d1 100644
--- a/arch/x86/crypto/blowfish-x86_64-asm_64.S
+++ b/arch/x86/crypto/blowfish-x86_64-asm_64.S
@@ -6,6 +6,7 @@
  */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 
 .file "blowfish-x86_64-asm.S"
 .text
@@ -141,7 +142,7 @@ SYM_FUNC_START(__blowfish_enc_blk)
 	RET;
 SYM_FUNC_END(__blowfish_enc_blk)
 
-SYM_FUNC_START(blowfish_dec_blk)
+SYM_TYPED_FUNC_START(blowfish_dec_blk)
 	/* input:
 	 *	%rdi: ctx
 	 *	%rsi: dst
@@ -332,7 +333,7 @@ SYM_FUNC_START(__blowfish_enc_blk_4way)
 	RET;
 SYM_FUNC_END(__blowfish_enc_blk_4way)
 
-SYM_FUNC_START(blowfish_dec_blk_4way)
+SYM_TYPED_FUNC_START(blowfish_dec_blk_4way)
 	/* input:
 	 *	%rdi: ctx
 	 *	%rsi: dst
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index d0d7b9bc6cad..e5d9b299577f 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -2,6 +2,7 @@
 /* Copyright 2002 Andi Kleen */
 
 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 #include <asm/errno.h>
 #include <asm/cpufeatures.h>
 #include <asm/alternative.h>
@@ -27,7 +28,7 @@
  * Output:
  * rax original destination
  */
-SYM_FUNC_START(__memcpy)
+__SYM_TYPED_FUNC_START(__memcpy, memcpy)
 	ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
 		      "jmp memcpy_erms", X86_FEATURE_ERMS
 
-- 
2.36.0.464.gb9c8b46e94-goog


* [RFC PATCH 19/21] x86/purgatory: Disable CFI
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-04-29 20:36   ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

Disable CONFIG_CFI_CLANG for the stand-alone purgatory.ro.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
---
 arch/x86/purgatory/Makefile | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
index ae53d54d7959..b3fa947fa38b 100644
--- a/arch/x86/purgatory/Makefile
+++ b/arch/x86/purgatory/Makefile
@@ -55,6 +55,10 @@ ifdef CONFIG_RETPOLINE
 PURGATORY_CFLAGS_REMOVE		+= $(RETPOLINE_CFLAGS)
 endif
 
+ifdef CONFIG_CFI_CLANG
+PURGATORY_CFLAGS_REMOVE		+= $(CC_FLAGS_CFI)
+endif
+
 CFLAGS_REMOVE_purgatory.o	+= $(PURGATORY_CFLAGS_REMOVE)
 CFLAGS_purgatory.o		+= $(PURGATORY_CFLAGS)
 
-- 
2.36.0.464.gb9c8b46e94-goog


^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [RFC PATCH 20/21] x86/vdso: Disable CFI
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-04-29 20:36   ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

CC_FLAGS_LTO no longer includes CC_FLAGS_CFI, so filter these flags
out as well.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/x86/entry/vdso/Makefile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 693f8b9031fb..abf41ef0f89e 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -91,7 +91,7 @@ ifneq ($(RETPOLINE_VDSO_CFLAGS),)
 endif
 endif
 
-$(vobjs): KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_LTO) $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
+$(vobjs): KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_LTO) $(CC_FLAGS_CFI) $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
 
 #
 # vDSO code runs in userspace and -pg doesn't help with profiling anyway.
@@ -151,6 +151,7 @@ KBUILD_CFLAGS_32 := $(filter-out -mfentry,$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out $(CC_FLAGS_LTO),$(KBUILD_CFLAGS_32))
+KBUILD_CFLAGS_32 := $(filter-out $(CC_FLAGS_CFI),$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
 KBUILD_CFLAGS_32 += -fno-stack-protector
 KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
-- 
2.36.0.464.gb9c8b46e94-goog


^ permalink raw reply related	[flat|nested] 100+ messages in thread

* [RFC PATCH 21/21] x86: Add support for CONFIG_CFI_CLANG
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-04-29 20:36   ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-29 20:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm, Sami Tolvanen

Add CONFIG_CFI_CLANG error handling and allow the config to be selected
on x86_64.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
---
 arch/x86/Kconfig               |  1 +
 arch/x86/include/asm/linkage.h |  7 ++++++
 arch/x86/kernel/traps.c        | 39 +++++++++++++++++++++++++++++++++-
 3 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b0142e01002e..01db5c5c4dde 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -108,6 +108,7 @@ config X86
 	select ARCH_SUPPORTS_PAGE_TABLE_CHECK	if X86_64
 	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
 	select ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP	if NR_CPUS <= 4096
+	select ARCH_SUPPORTS_CFI_CLANG		if X86_64
 	select ARCH_SUPPORTS_LTO_CLANG
 	select ARCH_SUPPORTS_LTO_CLANG_THIN
 	select ARCH_USE_BUILTIN_BSWAP
diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
index 85865f1645bd..d20acf5ebae3 100644
--- a/arch/x86/include/asm/linkage.h
+++ b/arch/x86/include/asm/linkage.h
@@ -25,6 +25,13 @@
 #define RET	ret
 #endif
 
+#ifdef CONFIG_CFI_CLANG
+#define __CFI_TYPE(name)			\
+	.fill 10, 1, 0x90 ASM_NL		\
+	.4byte __kcfi_typeid_##name ASM_NL	\
+	.fill 2, 1, 0xcc
+#endif
+
 #else /* __ASSEMBLY__ */
 
 #ifdef CONFIG_SLS
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 1563fb995005..b9e46e6ed83b 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -40,6 +40,7 @@
 #include <linux/hardirq.h>
 #include <linux/atomic.h>
 #include <linux/ioasid.h>
+#include <linux/cfi.h>
 
 #include <asm/stacktrace.h>
 #include <asm/processor.h>
@@ -295,6 +296,41 @@ static inline void handle_invalid_op(struct pt_regs *regs)
 		      ILL_ILLOPN, error_get_trap_addr(regs));
 }
 
+#ifdef CONFIG_CFI_CLANG
+void *arch_get_cfi_target(unsigned long addr, struct pt_regs *regs)
+{
+	char buffer[MAX_INSN_SIZE];
+	int offset;
+	struct insn insn;
+	unsigned long *target;
+
+	/*
+	 * The expected CFI check instruction sequence:
+	 *   cmpl    <id>, -6(%reg)	; 7 bytes
+	 *   je      .Ltmp1		; 2 bytes
+	 *   ud2			; <- addr
+	 *   .Ltmp1:
+	 *
+	 * Therefore, the target address is in a register that we can
+	 * decode from the cmpl instruction.
+	 */
+	if (copy_from_kernel_nofault(buffer, (void *)addr - 9, MAX_INSN_SIZE))
+		return NULL;
+	if (insn_decode(&insn, buffer, MAX_INSN_SIZE, INSN_MODE_64))
+		return NULL;
+	if (insn.opcode.value != 0x81)
+		return NULL;
+
+	offset = insn_get_modrm_rm_off(&insn, regs);
+	if (offset < 0)
+		return NULL;
+
+	target = (void *)regs + offset;
+
+	return (void *)*target;
+}
+#endif
+
 static noinstr bool handle_bug(struct pt_regs *regs)
 {
 	bool handled = false;
@@ -312,7 +348,8 @@ static noinstr bool handle_bug(struct pt_regs *regs)
 	 */
 	if (regs->flags & X86_EFLAGS_IF)
 		raw_local_irq_enable();
-	if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN) {
+	if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN ||
+	    report_cfi(regs->ip, regs) == BUG_TRAP_TYPE_WARN) {
 		regs->ip += LEN_UD2;
 		handled = true;
 	}
-- 
2.36.0.464.gb9c8b46e94-goog


^ permalink raw reply related	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 00/21] KCFI support
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-04-29 22:53   ` Kees Cook
  -1 siblings, 0 replies; 100+ messages in thread
From: Kees Cook @ 2022-04-29 22:53 UTC (permalink / raw)
  To: Peter Zijlstra, Mark Rutland, Josh Poimboeuf, Will Deacon,
	Catalin Marinas
  Cc: Sami Tolvanen, Nathan Chancellor, Nick Desaulniers, Joao Moreira,
	Sedat Dilek, Steven Rostedt, linux-kernel, x86, linux-hardening,
	linux-arm-kernel, llvm

On Fri, Apr 29, 2022 at 01:36:23PM -0700, Sami Tolvanen wrote:
> KCFI is a proposed forward-edge control-flow integrity scheme for
> Clang, which is more suitable for kernel use than the existing CFI
> scheme used by CONFIG_CFI_CLANG. KCFI doesn't require LTO, doesn't
> alter function references to point to a jump table, and won't break
> function address equality.

🎉 :)

> The latest LLVM patches are here:
> 
>   https://reviews.llvm.org/D119296
>   https://reviews.llvm.org/D124211
> 
> [...]
> To test this series, you'll need to compile your own Clang toolchain
> with the patches linked above. You can also find the complete source
> tree here:
> 
>   https://github.com/samitolvanen/llvm-project/commits/kcfi-rfc

And note that this RFC is seeking to break a bit of a circular dependency
with regard to the design of __builtin_kcfi_call_unchecked (D124211
above), as the implementation has gone around a few times in review within
LLVM, and we want to make sure that kernel folks are okay with what was
settled on. If there are no objections on the kernel side, then we can
land the KCFI patches, as this is basically the only remaining blocker.

-Kees

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 14/21] treewide: static_call: Pass call arguments to the macro
  2022-04-29 20:36   ` Sami Tolvanen
@ 2022-04-29 23:21     ` Peter Zijlstra
  -1 siblings, 0 replies; 100+ messages in thread
From: Peter Zijlstra @ 2022-04-29 23:21 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: linux-kernel, Kees Cook, Josh Poimboeuf, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm

On Fri, Apr 29, 2022 at 01:36:37PM -0700, Sami Tolvanen wrote:
> Include the function arguments in the static call macro to make it
> possible to add a wrapper for the call. This is needed with
> CONFIG_CFI_CLANG to disable indirect call checking for static calls
> that are patched into direct calls at runtime.
> 
> Users of static_call were updated using the following Coccinelle
> script and manually adjusted to preserve coding style:
> 
>   @@
>   expression name;
>   expression list args;
>   identifier static_call =~ "^static_call(_mod|_cond)?$";
>   @@
> 
>   - static_call(name)(args)
>   + static_call(name, args)

Urgh, sadness.. I worked so hard to get away from that terrible syntax.

Can you explain why this is needed? I don't think there are any indirect
calls to get confused about. That is, if you have STATIC_CALL_INLINE
then the compiler should be emitting direct calls to the trampoline.

At no point will there be an indirect call.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 15/21] static_call: Use cfi_unchecked
  2022-04-29 20:36   ` Sami Tolvanen
@ 2022-04-29 23:23     ` Peter Zijlstra
  -1 siblings, 0 replies; 100+ messages in thread
From: Peter Zijlstra @ 2022-04-29 23:23 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: linux-kernel, Kees Cook, Josh Poimboeuf, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm

On Fri, Apr 29, 2022 at 01:36:38PM -0700, Sami Tolvanen wrote:
> With CONFIG_HAVE_STATIC_CALL, static calls are patched into direct
> calls. Disable indirect call CFI checking for the call sites with the
> cfi_unchecked macro.

-ENOPARSE

There are no indirect calls.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 16/21] objtool: Add support for CONFIG_CFI_CLANG
  2022-04-29 20:36   ` Sami Tolvanen
@ 2022-04-29 23:30     ` Peter Zijlstra
  -1 siblings, 0 replies; 100+ messages in thread
From: Peter Zijlstra @ 2022-04-29 23:30 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: linux-kernel, Kees Cook, Josh Poimboeuf, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm

On Fri, Apr 29, 2022 at 01:36:39PM -0700, Sami Tolvanen wrote:

> +static void *kcfi_alloc_hash(unsigned long size)
> +{
> +	kcfi_bits = max(10, ilog2(size));
> +	kcfi_hash = mmap(NULL, sizeof(struct hlist_head) << kcfi_bits,
> +			PROT_READ|PROT_WRITE,
> +			MAP_PRIVATE|MAP_ANON, -1, 0);
> +	if (kcfi_hash == (void *)-1L) {
> +		WARN("mmap fail kcfi_hash");
> +		kcfi_hash = NULL;
> +	}  else if (stats) {
> +		printf("kcfi_bits: %d\n", kcfi_bits);
> +	}
> +
> +	return kcfi_hash;
> +}
> +
> +static void add_kcfi_type(struct kcfi_type *type)
> +{
> +	hlist_add_head(&type->hash,
> +		&kcfi_hash[hash_min(
> +			sec_offset_hash(type->sec, type->offset),
> +			kcfi_bits)]);

:se cino=(0:0

Also, I'm thinking you can unwrap some lines at least.

> +}
> +
> +static bool is_kcfi_typeid(struct elf *elf, struct instruction *insn)
> +{
> +	struct hlist_head *head;
> +	struct kcfi_type *type;
> +	struct reloc *reloc;
> +
> +	if (!kcfi)
> +		return false;
> +
> +	/* Compiler-generated annotation in .kcfi_types. */
> +	head = &kcfi_hash[hash_min(sec_offset_hash(insn->sec, insn->offset), kcfi_bits)];
> +
> +	hlist_for_each_entry(type, head, hash)
> +		if (type->sec == insn->sec && type->offset == insn->offset)
> +			return true;

missing { }

> +
> +	/* Manual annotation (in assembly code). */
> +	reloc = find_reloc_by_dest(elf, insn->sec, insn->offset);
> +
> +	if (reloc && !strncmp(reloc->sym->name, "__kcfi_typeid_", 14))
> +		return true;
> +
> +	return false;
> +}
> +
>  /*
>   * This checks to see if the given function is a "noreturn" function.
>   *
> @@ -388,13 +487,18 @@ static int decode_instructions(struct objtool_file *file)
>  			insn->sec = sec;
>  			insn->offset = offset;
>  
> -			ret = arch_decode_instruction(file, sec, offset,
> -						      sec->sh.sh_size - offset,
> -						      &insn->len, &insn->type,
> -						      &insn->immediate,
> -						      &insn->stack_ops);
> -			if (ret)
> -				goto err;
> +			if (is_kcfi_typeid(file->elf, insn)) {
> +				insn->type = INSN_KCFI_TYPEID;
> +				insn->len = KCFI_TYPEID_LEN;

Urgh, what does this do for decode speed? This is a hash-lookup for
every single instruction.

Is that kcfi location array sorted by the compiler? Because then you can
keep a running iterator and replace the whole lookup with a simple
equality comparison.

> +			} else {
> +				ret = arch_decode_instruction(file, sec, offset,
> +							      sec->sh.sh_size - offset,
> +							      &insn->len, &insn->type,
> +							      &insn->immediate,
> +							      &insn->stack_ops);
> +				if (ret)
> +					goto err;
> +			}
>  
>  			/*
>  			 * By default, "ud2" is a dead end unless otherwise

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 14/21] treewide: static_call: Pass call arguments to the macro
  2022-04-29 23:21     ` Peter Zijlstra
@ 2022-04-30  0:49       ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-30  0:49 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Kees Cook, Josh Poimboeuf, X86 ML, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm

On Fri, Apr 29, 2022 at 4:21 PM Peter Zijlstra <peterz@infradead.org> wrote:
> Can you explain why this is needed? I don't think there are any indirect
> calls to get confused about. That is, if you have STATIC_CALL_INLINE
> then the compiler should be emitting direct calls to the trampoline.

Clang emits an indirect call for ({ &f; })(), which is optimized into
a direct call when possible. Come to think of it, the recent
InstCombine change to the compiler patch should solve this issue. Let
me double check, I'd be more than happy to drop these two patches.
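
For concreteness, the pattern in question is roughly this (a
simplified sketch of the static_call expansion, not the literal
kernel macros):

  void tramp(int x);    /* trampoline, later patched to a direct call */

  #define my_call(args...) ({ &tramp; })(args)

  /* The statement expression yields a function pointer, so the
   * frontend initially emits an indirect call plus a KCFI type
   * check; only later optimization folds it into "call tramp". */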

Sami

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 16/21] objtool: Add support for CONFIG_CFI_CLANG
  2022-04-29 23:30     ` Peter Zijlstra
@ 2022-04-30  1:00       ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-04-30  1:00 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Kees Cook, Josh Poimboeuf, X86 ML, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm

On Fri, Apr 29, 2022 at 4:30 PM Peter Zijlstra <peterz@infradead.org> wrote:
> Urgh, what does this do for decode speed? This is a hash-lookup for
> every single instruction.

Two actually, since .kcfi_types only contains compiler-emitted
locations and we also have to check for manual type annotations. I
haven't measured performance yet, but I also didn't notice a
significant impact here.

> Is that kcfi location array sorted by the compiler? Because then you can
> keep a running iterator and replace the whole lookup with a simple
> equality comparison.

The compiler generates a separate .kcfi_types section for each text
section and the entries are emitted in order, so this should be
doable.
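
Roughly like this, I think (a sketch only; next_kcfi_type() is a
hypothetical helper for walking the sorted per-section entries):

  static struct kcfi_type *kcfi_iter;

  static bool is_kcfi_typeid_sorted(struct section *sec, unsigned long offset)
  {
          /* entries are emitted in offset order, so advance a cursor
           * instead of hashing every decoded instruction */
          while (kcfi_iter && kcfi_iter->sec == sec &&
                 kcfi_iter->offset < offset)
                  kcfi_iter = next_kcfi_type(kcfi_iter);

          return kcfi_iter && kcfi_iter->sec == sec &&
                 kcfi_iter->offset == offset;
  }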

Sami

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 00/21] KCFI support
  2022-04-29 22:53   ` Kees Cook
@ 2022-04-30  9:02     ` Peter Zijlstra
  -1 siblings, 0 replies; 100+ messages in thread
From: Peter Zijlstra @ 2022-04-30  9:02 UTC (permalink / raw)
  To: Kees Cook
  Cc: Mark Rutland, Josh Poimboeuf, Will Deacon, Catalin Marinas,
	Sami Tolvanen, Nathan Chancellor, Nick Desaulniers, Joao Moreira,
	Sedat Dilek, Steven Rostedt, linux-kernel, x86, linux-hardening,
	linux-arm-kernel, llvm

On Fri, Apr 29, 2022 at 03:53:12PM -0700, Kees Cook wrote:
> On Fri, Apr 29, 2022 at 01:36:23PM -0700, Sami Tolvanen wrote:
> > KCFI is a proposed forward-edge control-flow integrity scheme for
> > Clang, which is more suitable for kernel use than the existing CFI
> > scheme used by CONFIG_CFI_CLANG. KCFI doesn't require LTO, doesn't
> > alter function references to point to a jump table, and won't break
> > function address equality.
> 
> 🎉 :)
> 
> > The latest LLVM patches are here:
> > 
> >   https://reviews.llvm.org/D119296
> >   https://reviews.llvm.org/D124211
> > 
> > [...]
> > To test this series, you'll need to compile your own Clang toolchain
> > with the patches linked above. You can also find the complete source
> > tree here:
> > 
> >   https://github.com/samitolvanen/llvm-project/commits/kcfi-rfc
> 
> And note that this RFC is seeking to break a bit of a circular dependency
> with regard to the design of __builtin_kcfi_call_unchecked (D124211
> above), as the implementation has gone around a few times in review within
> LLVM, and we want to make sure that kernel folks are okay with what was
> settled on. If there are no objections on the kernel side, then we can
> land the KCFI patches, as this is basically the only remaining blocker.

So aside from the static_call usage, was there any other?

Anyway, I think I hate that __builtin, I'd *much* rather see a variable
attribute or qualifier for this, such that one can mark a function
pointer as not doing CFI.

It simply doesn't make sense to have a builtin that operates on an
expression. The whole thing is about indirect calls, IOW function
pointers.
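
Something in this shape (both spellings illustrative; the attribute
does not exist today):

  /* builtin form, as proposed in D124211 -- wraps the call: */
  __builtin_kcfi_call_unchecked(fn(arg));

  /* attribute form -- marks the pointer, not the expression: */
  void (*fn_nocheck)(int) __attribute__((kcfi_unchecked));
  fn_nocheck(arg);      /* no CFI check emitted for this call */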

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 06/21] cfi: Switch to -fsanitize=kcfi
  2022-04-29 20:36   ` Sami Tolvanen
@ 2022-04-30  9:09     ` Peter Zijlstra
  -1 siblings, 0 replies; 100+ messages in thread
From: Peter Zijlstra @ 2022-04-30  9:09 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: linux-kernel, Kees Cook, Josh Poimboeuf, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm

On Fri, Apr 29, 2022 at 01:36:29PM -0700, Sami Tolvanen wrote:

> +CC_FLAGS_CFI	:= -fsanitize=kcfi -fno-sanitize-blacklist

I'm somewhat surprised to see that CFI is a sanitizer. It just doesn't seem
to fit in the line of {UB,KC,KA}SAN and friends.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 21/21] x86: Add support for CONFIG_CFI_CLANG
  2022-04-29 20:36   ` Sami Tolvanen
@ 2022-04-30  9:24     ` Peter Zijlstra
  -1 siblings, 0 replies; 100+ messages in thread
From: Peter Zijlstra @ 2022-04-30  9:24 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: linux-kernel, Kees Cook, Josh Poimboeuf, x86, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm

On Fri, Apr 29, 2022 at 01:36:44PM -0700, Sami Tolvanen wrote:
> Add CONFIG_CFI_CLANG error handling and allow the config to be selected
> on x86_64.

Might be useful to have example output of all this somewhere, because
unless I go build my own clang again, I can't tell from these patches
what actual codegen looks like.

Going from the below, I seem to be able to reverse engineer some of it:

  .long \signature
  int3
  int3
my_func:
  ENDBR
  ...
  ret

And then the callsites look like (clang *always* uses r11, right?):


  cmpl	\signature, -6(%r11)
  je	1f
  ud2
1:
  call __x86_indirect_thunk_r11



> Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
> ---
>  arch/x86/Kconfig               |  1 +
>  arch/x86/include/asm/linkage.h |  7 ++++++
>  arch/x86/kernel/traps.c        | 39 +++++++++++++++++++++++++++++++++-
>  3 files changed, 46 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index b0142e01002e..01db5c5c4dde 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -108,6 +108,7 @@ config X86
>  	select ARCH_SUPPORTS_PAGE_TABLE_CHECK	if X86_64
>  	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
>  	select ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP	if NR_CPUS <= 4096
> +	select ARCH_SUPPORTS_CFI_CLANG		if X86_64
>  	select ARCH_SUPPORTS_LTO_CLANG
>  	select ARCH_SUPPORTS_LTO_CLANG_THIN
>  	select ARCH_USE_BUILTIN_BSWAP
> diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
> index 85865f1645bd..d20acf5ebae3 100644
> --- a/arch/x86/include/asm/linkage.h
> +++ b/arch/x86/include/asm/linkage.h
> @@ -25,6 +25,13 @@
>  #define RET	ret
>  #endif
>  
> +#ifdef CONFIG_CFI_CLANG
> +#define __CFI_TYPE(name)			\
> +	.fill 10, 1, 0x90 ASM_NL		\
> +	.4byte __kcfi_typeid_##name ASM_NL	\
> +	.fill 2, 1, 0xcc
> +#endif
> +
>  #else /* __ASSEMBLY__ */
>  
>  #ifdef CONFIG_SLS
> diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
> index 1563fb995005..b9e46e6ed83b 100644
> --- a/arch/x86/kernel/traps.c
> +++ b/arch/x86/kernel/traps.c
> @@ -40,6 +40,7 @@
>  #include <linux/hardirq.h>
>  #include <linux/atomic.h>
>  #include <linux/ioasid.h>
> +#include <linux/cfi.h>
>  
>  #include <asm/stacktrace.h>
>  #include <asm/processor.h>
> @@ -295,6 +296,41 @@ static inline void handle_invalid_op(struct pt_regs *regs)
>  		      ILL_ILLOPN, error_get_trap_addr(regs));
>  }
>  
> +#ifdef CONFIG_CFI_CLANG
> +void *arch_get_cfi_target(unsigned long addr, struct pt_regs *regs)
> +{
> +	char buffer[MAX_INSN_SIZE];
> +	int offset;
> +	struct insn insn;
> +	unsigned long *target;

Reverse xmas tree order, please.

> +
> +	/*
> +	 * The expected CFI check instruction sequence:
> +	 *   cmpl    <id>, -6(%reg)	; 7 bytes
> +	 *   je      .Ltmp1		; 2 bytes
> +	 *   ud2			; <- addr
> +	 *   .Ltmp1:
> +	 *
> +	 * Therefore, the target address is in a register that we can
> +	 * decode from the cmpl instruction.
> +	 */
> +	if (copy_from_kernel_nofault(buffer, (void *)addr - 9, MAX_INSN_SIZE))
> +		return NULL;
> +	if (insn_decode(&insn, buffer, MAX_INSN_SIZE, INSN_MODE_64))
> +		return NULL;

insn_decode_kernel()

> +	if (insn.opcode.value != 0x81)
> +		return NULL;

That's not sufficient to uniquely identify cmp, you also need to look at
the modrm to find r==7 I think.
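
Something like (untested; X86_MODRM_REG() is the existing helper from
<asm/insn.h>, and reg==7 selects CMP within the 0x81 group):

  	if (insn.opcode.value != 0x81 ||
  	    X86_MODRM_REG(insn.modrm.value) != 7)
  		return NULL;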

> +
> +	offset = insn_get_modrm_rm_off(&insn, regs);
> +	if (offset < 0)
> +		return NULL;
> +
> +	target = (void *)regs + offset;
> +
> +	return (void *)*target;
> +}
> +#endif
> +
>  static noinstr bool handle_bug(struct pt_regs *regs)
>  {
>  	bool handled = false;
> @@ -312,7 +348,8 @@ static noinstr bool handle_bug(struct pt_regs *regs)
>  	 */
>  	if (regs->flags & X86_EFLAGS_IF)
>  		raw_local_irq_enable();
> -	if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN) {
> +	if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN ||
> +	    report_cfi(regs->ip, regs) == BUG_TRAP_TYPE_WARN) {

This way you'll first get a BUG splat and then tack on the CFI thing.
Seems a bit daft to have two splats.

>  		regs->ip += LEN_UD2;
>  		handled = true;
>  	}
> -- 
> 2.36.0.464.gb9c8b46e94-goog
> 

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 00/21] KCFI support
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-04-30 16:07   ` Kenton Groombridge
  -1 siblings, 0 replies; 100+ messages in thread
From: Kenton Groombridge @ 2022-04-30 16:07 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: linux-kernel, Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86,
	Catalin Marinas, Will Deacon, Mark Rutland, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	linux-hardening, linux-arm-kernel, llvm


On 22/04/29 01:36PM, Sami Tolvanen wrote:
> KCFI is a proposed forward-edge control-flow integrity scheme for
> Clang, which is more suitable for kernel use than the existing CFI
> scheme used by CONFIG_CFI_CLANG. KCFI doesn't require LTO, doesn't
> alter function references to point to a jump table, and won't break
> function address equality. The latest LLVM patches are here:
> 
>   https://reviews.llvm.org/D119296
>   https://reviews.llvm.org/D124211

Many thanks for continuing to work on this! As a user who has been
following the evolution of this patch series for a while now, I have a
couple of burning questions:

1) The LLVM patch says that kCFI is not compatible with execute-only
memory. Is there a plan for kCFI if and when execute-only memory is
implemented?

2) kCFI only checks indirect calls, while Clang's traditional CFI has
more schemes, such as bad-cast checking. Are there any major security
tradeoffs as a result of this?

V/R

Kenton Groombridge


^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 14/21] treewide: static_call: Pass call arguments to the macro
  2022-04-30  0:49       ` Sami Tolvanen
@ 2022-05-02  7:46         ` Peter Zijlstra
  -1 siblings, 0 replies; 100+ messages in thread
From: Peter Zijlstra @ 2022-05-02  7:46 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: LKML, Kees Cook, Josh Poimboeuf, X86 ML, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm

On Fri, Apr 29, 2022 at 05:49:21PM -0700, Sami Tolvanen wrote:
> On Fri, Apr 29, 2022 at 4:21 PM Peter Zijlstra <peterz@infradead.org> wrote:
> > Can you explain why this is needed? I don't think there are any indirect
> > calls to get confused about. That is, if you have STATIC_CALL_INLINE
> > then the compiler should be emitting direct calls to the trampoline.
> 
> Clang emits an indirect call for ({ &f; })(), which is optimized into
> a direct call when possible. Come to think of it, the recent
> InstCombine change to the compiler patch should solve this issue. Let
> me double check, I'd be more than happy to drop these two patches.

Oooh, but this must not require any magic. That is, we have a *ton* of
code that relies on constant propagation of function pointers to not
emit indirect calls.

Please make sure that 'just works'.

Look at all the __always_inline functions in rbtree*.h for instance;
some, like latch and augment, rely on quite complicated const
propagation where the actual function pointer is in a const struct.

I've verified all that actually generates direct calls when we did that
code (on GCC, clang wasn't really a thing back then).
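
The shape that must keep folding to a direct call is roughly
(illustrative, modeled on the augment/latch style callbacks):

  struct ops {
          void (*cb)(int);
  };

  static void my_cb(int x) { /* ... */ }
  static const struct ops my_ops = { .cb = my_cb };

  static __always_inline void run(const struct ops *ops, int x)
  {
          ops->cb(x);     /* must const-propagate to "call my_cb" */
  }

  /* run(&my_ops, 0) has to compile to a direct, uninstrumented
   * call to my_cb(), exactly as it does today. */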




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 21/21] x86: Add support for CONFIG_CFI_CLANG
  2022-04-30  9:24     ` Peter Zijlstra
@ 2022-05-02 15:20       ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-05-02 15:20 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Kees Cook, Josh Poimboeuf, X86 ML, Catalin Marinas,
	Will Deacon, Mark Rutland, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, linux-hardening,
	linux-arm-kernel, llvm

On Sat, Apr 30, 2022 at 2:24 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Apr 29, 2022 at 01:36:44PM -0700, Sami Tolvanen wrote:
> > -     if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN) {
> > +     if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN ||
> > +         report_cfi(regs->ip, regs) == BUG_TRAP_TYPE_WARN) {
>
> This way you'll first get a BUG splat and then tack on the CFI thing.

The CFI ud2 isn't in the bug table, which means find_bug returns
BUG_TRAP_TYPE_NONE and report_bug bails out before printing out
anything.

Sami
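
To make the short-circuit concrete, here is a rough sketch of the
handler flow being described; the function names come from the patch
hunk quoted above, and the surrounding kernel code is elided:

    /* For a CFI failure, the ud2 is not in the bug table, so
     * report_bug() returns BUG_TRAP_TYPE_NONE rather than
     * BUG_TRAP_TYPE_WARN, and evaluation falls through to
     * report_cfi(), which prints the only report. */
    if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN ||
        report_cfi(regs->ip, regs) == BUG_TRAP_TYPE_WARN) {
            /* treated as a warning: skip the trap and continue */
    }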

^ permalink raw reply	[flat|nested] 100+ messages in thread


* Re: [RFC PATCH 00/21] KCFI support
  2022-04-30  9:02     ` Peter Zijlstra
@ 2022-05-02 15:22       ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-05-02 15:22 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Kees Cook, Mark Rutland, Josh Poimboeuf, Will Deacon,
	Catalin Marinas, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, LKML, X86 ML,
	linux-hardening, linux-arm-kernel, llvm

On Sat, Apr 30, 2022 at 2:02 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Apr 29, 2022 at 03:53:12PM -0700, Kees Cook wrote:
> > On Fri, Apr 29, 2022 at 01:36:23PM -0700, Sami Tolvanen wrote:
> > > KCFI is a proposed forward-edge control-flow integrity scheme for
> > > Clang, which is more suitable for kernel use than the existing CFI
> > > scheme used by CONFIG_CFI_CLANG. KCFI doesn't require LTO, doesn't
> > > alter function references to point to a jump table, and won't break
> > > function address equality.
> >
> > 🎉 :)
> >
> > > The latest LLVM patches are here:
> > >
> > >   https://reviews.llvm.org/D119296
> > >   https://reviews.llvm.org/D124211
> > >
> > > [...]
> > > To test this series, you'll need to compile your own Clang toolchain
> > > with the patches linked above. You can also find the complete source
> > > tree here:
> > >
> > >   https://github.com/samitolvanen/llvm-project/commits/kcfi-rfc
> >
> > And note that this RFC is seeking to break a bit of a circular dependency
> > with regard to the design of __builtin_kcfi_call_unchecked (D124211
> > above), as the implementation has gone around a few times in review within
> > LLVM, and we want to make sure that kernel folks are okay with what was
> > settled on. If there are no objections on the kernel side, then we can
> > land the KCFI patches, as this is basically the only remaining blocker.
>
> So aside from the static_call usage, was there any other?

Not at the moment, and it looks like we can get rid of that too.

> Anyway, I think I hate that __builtin, I'd *much* rather see a variable
> attribute or qualifier for this, such that one can mark a function
> pointer as not doing CFI.
>
> It simply doesn't make sense to have a builtin that operates on an
> expression. The whole thing is about indirect calls, IOW function
> pointers.

I also thought an attribute would be more convenient, but the compiler
folks prefer a built-in:

https://reviews.llvm.org/D122673

Sami

^ permalink raw reply	[flat|nested] 100+ messages in thread


* Re: [RFC PATCH 00/21] KCFI support
  2022-04-30 16:07   ` Kenton Groombridge
@ 2022-05-02 15:31     ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-05-02 15:31 UTC (permalink / raw)
  To: Sami Tolvanen, LKML, Kees Cook, Josh Poimboeuf, Peter Zijlstra,
	X86 ML, Catalin Marinas, Will Deacon, Mark Rutland,
	Nathan Chancellor, Nick Desaulniers, Joao Moreira, Sedat Dilek,
	Steven Rostedt, linux-hardening, linux-arm-kernel, llvm

On Sat, Apr 30, 2022 at 9:08 AM Kenton Groombridge <me@concord.sh> wrote:
> Many thanks for continuing to work on this! As a user who has been
> following the evolution of this patch series for a while now, I have a
> couple of burning questions:
>
> 1) The LLVM patch says that kCFI is not compatible with execute-only
> memory. Is there a plan ahead for kCFI if and when execute-only memory
> is implemented?

There's no plan for execute-only memory right now; that would require
the type hashes to be moved elsewhere, into read-only memory.

> 2) kCFI only checks indirect calls while Clang's traditional CFI has
> more schemes like bad cast checking and so on. Are there any major
> security tradeoffs as a result of this?

No, cfi-icall is the only scheme that's relevant for the kernel. The
other schemes implemented in Clang are mostly useful for C++.

Sami

^ permalink raw reply	[flat|nested] 100+ messages in thread


* Re: [RFC PATCH 00/21] KCFI support
  2022-05-02 15:22       ` Sami Tolvanen
@ 2022-05-02 19:55         ` Peter Zijlstra
  -1 siblings, 0 replies; 100+ messages in thread
From: Peter Zijlstra @ 2022-05-02 19:55 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: Kees Cook, Mark Rutland, Josh Poimboeuf, Will Deacon,
	Catalin Marinas, Nathan Chancellor, Nick Desaulniers,
	Joao Moreira, Sedat Dilek, Steven Rostedt, LKML, X86 ML,
	linux-hardening, linux-arm-kernel, llvm

On Mon, May 02, 2022 at 08:22:57AM -0700, Sami Tolvanen wrote:

> > Anyway, I think I hate that __builtin, I'd *much* rather see a variable
> > attribute or qualifier for this, such that one can mark a function
> > pointer as not doing CFI.
> >
> > It simply doesn't make sense to have a builtin that operates on an
> > expression. The whole thing is about indirect calls, IOW function
> > pointers.
> 
> I also thought an attribute would be more convenient, but the compiler
> folks prefer a built-in:
> 
> https://reviews.llvm.org/D122673

That seems to mostly worry about C++ things (overload sets, template
specialization, name mangling) that we kernel folks don't much care
about.

I'll stick with saying the type system makes more sense to me, though.

^ permalink raw reply	[flat|nested] 100+ messages in thread


* Re: [RFC PATCH 00/21] KCFI support
  2022-05-02 19:55         ` Peter Zijlstra
@ 2022-05-03 22:35           ` Peter Collingbourne
  -1 siblings, 0 replies; 100+ messages in thread
From: Peter Collingbourne @ 2022-05-03 22:35 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Sami Tolvanen, Kees Cook, Mark Rutland, Josh Poimboeuf,
	Will Deacon, Catalin Marinas, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	LKML, X86 ML, linux-hardening, linux-arm-kernel, llvm

On Mon, May 2, 2022 at 1:02 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Mon, May 02, 2022 at 08:22:57AM -0700, Sami Tolvanen wrote:
>
> > > Anyway, I think I hate that __builtin, I'd *much* rather see a variable
> > > attribute or qualifier for this, such that one can mark a function
> > > pointer as not doing CFI.
> > >
> > > It simply doesn't make sense to have a builtin that operates on an
> > > expression. The whole thing is about indirect calls, IOW function
> > > pointers.
> >
> > I also thought an attribute would be more convenient, but the compiler
> > folks prefer a built-in:
> >
> > https://reviews.llvm.org/D122673
>
> That seems to mostly worry about C++ things (overload sets, template
> specialization, name mangling) we kernel folks don't seem to much care
> about.
>
> I'll stick with saying type system makes more sense to me though.

I'd say it's not only the C++ issues but more the "action at a
distance" implied by having this be part of the type system. With this
in the function type, it's hard to tell whether any particular call
will have CFI disabled without going to look at how the function
pointer is defined. On the other hand, if we explicitly mark up the
calls with CFI disabled, the code becomes easier to audit (think Rust
"unsafe" blocks).

Does it seem any better to you to have this be marked up via the
function expression, rather than the call? The idea is that this would
always compile to a check-free function call, no matter what "func"
is:

__builtin_kcfi_call_unchecked(func)(args)

We already have this, to some degree, with KCFI as implemented: CFI
checks are disabled if the function expression refers to a declared
function. The builtin would extend that, also disabling CFI checks for
function expressions that use it. It also wouldn't preclude a
type-based system later on (the builtin would effectively become a
cast to the "unchecked" type).

Peter
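
For comparison, a sketch of the two possible markup points at a call
site; ret, func, and args are placeholders:

    /* D124211 as proposed: the whole call expression is the
     * argument, so a wrapping macro must also see the arguments. */
    ret = __builtin_kcfi_call_unchecked(func(args));

    /* The alternative suggested above: mark only the function
     * expression; calling the result is then always unchecked. */
    ret = __builtin_kcfi_call_unchecked(func)(args);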

^ permalink raw reply	[flat|nested] 100+ messages in thread


* Re: [RFC PATCH 00/21] KCFI support
  2022-05-03 22:35           ` Peter Collingbourne
@ 2022-05-04  7:34             ` Peter Zijlstra
  -1 siblings, 0 replies; 100+ messages in thread
From: Peter Zijlstra @ 2022-05-04  7:34 UTC (permalink / raw)
  To: Peter Collingbourne
  Cc: Sami Tolvanen, Kees Cook, Mark Rutland, Josh Poimboeuf,
	Will Deacon, Catalin Marinas, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	LKML, X86 ML, linux-hardening, linux-arm-kernel, llvm

On Tue, May 03, 2022 at 03:35:34PM -0700, Peter Collingbourne wrote:
> On Mon, May 2, 2022 at 1:02 PM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Mon, May 02, 2022 at 08:22:57AM -0700, Sami Tolvanen wrote:
> >
> > > > Anyway, I think I hate that __builtin, I'd *much* rather see a variable
> > > > attribute or qualifier for this, such that one can mark a function
> > > > pointer as not doing CFI.
> > > >
> > > > It simply doesn't make sense to have a builtin that operates on an
> > > > expression. The whole thing is about indirect calls, IOW function
> > > > pointers.
> > >
> > > I also thought an attribute would be more convenient, but the compiler
> > > folks prefer a built-in:
> > >
> > > https://reviews.llvm.org/D122673
> >
> > That seems to mostly worry about C++ things (overload sets, template
> > specialization, name mangling) we kernel folks don't seem to much care
> > about.
> >
> > I'll stick with saying type system makes more sense to me though.
> 
> I'd say it's not only the C++ issues but more the "action at a
> distance" that's implied by having this be part of the type system.
> With this being in the function type it's hard to tell whether any
> particular call will have CFI disabled, without needing to go and look
> at how the function pointer is defined.

Look at how we use volatile:

	*(volatile int *)(&foo)

we don't use volatile on actual variable definitions (much), but
instead cast it in at the usage site. The same can be done with this
if so desired.

> On the other hand, if we
> explicitly mark up the calls with CFI disabled, the code becomes
> easier to audit (think Rust "unsafe" blocks).

I don't know any Rust. To me Rust still looks like line noise.

> Does it seem any better to you to have this be marked up via the
> function expression, rather than the call? The idea is that this would
> always compile to a check-free function call, no matter what "func"
> is:
> 
> __builtin_kcfi_call_unchecked(func)(args)
> 
> We already have this, to some degree, with KCFI as implemented: CFI
> checks are disabled if the function expression refers to a declared
> function. The builtin would allow overriding the decision to also
> disable CFI checks for function expressions that use the builtin. It
> also wouldn't preclude a type based system later on (the builtin would
> become effectively a cast to the "unchecked" type).

That's still a bit naff; you've effectively made that builtin a type-cast.
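
By analogy with the volatile cast, a purely hypothetical sketch of a
qualifier-based, cast-at-use-site markup; __cfi_unchecked is an
invented qualifier used only for illustration, not part of any
proposal:

    void (*fp)(void) = target;

    /* Hypothetical: cast the CFI check away at this one use site,
     * the same way volatile is cast in above. */
    ((void (__cfi_unchecked *)(void))fp)();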

^ permalink raw reply	[flat|nested] 100+ messages in thread


* Re: [RFC PATCH 00/21] KCFI support
  2022-04-29 20:36 ` Sami Tolvanen
@ 2022-05-04 16:17   ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2022-05-04 16:17 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: linux-kernel, Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86,
	Catalin Marinas, Will Deacon, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	linux-hardening, linux-arm-kernel, llvm

Hi Sami,

On Fri, Apr 29, 2022 at 01:36:23PM -0700, Sami Tolvanen wrote:
> KCFI is a proposed forward-edge control-flow integrity scheme for
> Clang, which is more suitable for kernel use than the existing CFI
> scheme used by CONFIG_CFI_CLANG. KCFI doesn't require LTO, doesn't
> alter function references to point to a jump table, and won't break
> function address equality. The latest LLVM patches are here:
> 
>   https://reviews.llvm.org/D119296
>   https://reviews.llvm.org/D124211

This is really exciting to see!

I wanted to give this a spin on arm64, but I'm seeing some very odd toolchain
behaviour. I'm not sure if I've done something wrong, or if I'm just hitting an
edge-case, but it looks like using -fsanitize=kcfi causes the toolchain to hit
out-of-memory errors and other issues which look like they could be memory
corruption.

Setup-wise:

* My build machine is a "Intel(R) Xeon(R) CPU E5-2660 v4" with 56 HW threads
  and 64GB of RAM, running x86_64 Debian 11.3.

* I applied D119296 atop LLVM commit 11d3e31c60bd (per the "Parents" part of
  the Revision Contents on https://reviews.llvm.org/D119296), and built that
  with:

  cmake -S llvm -B build -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_PROJECTS='clang;lld'
  cmake --build build

  Aside: I'll go do a Debug build to compare this against.

* I applied this series atop v5.18-rc4.

* I normally build with -j50, and LLVM=1.

Aside from a single ifdef issue in compiler-clang.h, defconfig builds
cleanly, but defconfig + CONFIG_CFI_CLANG produces lots of out-of-memory
errors and some other errors which look spurious. I see a bunch of
errors even when I significantly reduce my build parallelism (e.g. down
to -j10, a 5x reduction).

Some of these don't look right at all, e.g.

| make: *** [Makefile:1823: fs] Error 2
| ^[^[<inline asm>:5:1: error: unexpected token at start of statement
| 93825275602704
| ^
| 1 error generated.
| make[2]: *** [scripts/Makefile.build:289: arch/arm64/kernel/suspend.o] Error 1
| make[2]: *** Waiting for unfinished jobs....
| make[1]: *** [scripts/Makefile.build:551: arch/arm64/kernel] Error 2
                                              
| <inline asm>:5:1: error: unexpected token at start of statement
| @<U+001D><U+001A>8DV
| ^
| 1 error generated.
| make[3]: *** [scripts/Makefile.build:289: drivers/phy/amlogic/phy-meson8b-usb2.o] Error 1
| make[3]: *** Waiting for unfinished jobs....
| make[2]: *** [scripts/Makefile.build:551: drivers/phy/amlogic] Error 2
| make[2]: *** Waiting for unfinished jobs....
| make[1]: *** [scripts/Makefile.build:551: kernel/sched] Error 2
| make: *** [Makefile:1823: kernel] Error 2
| make: *** Waiting for unfinished jobs....

... maybe those are due to memory corruption / bad out-of-memory handling?


Some are out-of-memory errors:

| LLVM ERROR: out of memory
| Allocation failed
| PLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/ and include the crash backtrace, preprocessed source, and associated run script.
| Stack dump:
| 0.      Program arguments: clang -Wp,-MMD,kernel/dma/.pool.o.d -nostdinc -I./arch/arm64/include -I./arch/arm64/include/generated -I./include -I./arch/arm64/include/uapi -I./arch/arm64/include/generated/uapi -I./include/uapi -I./include/generated/uapi -include ./include/linux/compiler-version.h -include ./include/linux/kconfig.h -include ./include/linux/compiler_types.h -D__KERNEL__ -mlittle-endian -DKASAN_SHADOW_SCALE_SHIFT= -Qunused-arguments -fmacro-prefix-map=./= -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE -Werror=implicit-function-declaration -Werror=implicit-int -Werror=return-type -Wno-format-security -std=gnu11 --target=aarch64-linux-gnu -fintegrated-as -Werror=unknown-warning-option -Werror=ignored-optimization-argument -mgeneral-regs-only -DCONFIG_CC_HAS_K_CONSTRAINT=1 -Wno-psabi -fno-asynchronous-unwind-tables -fno-unwind-tables -mbranch-protection=pac-ret+leaf+bti -Wa,-march=armv8.5-a -DARM64_ASM_ARCH=\"armv8.5-a\" -DKASAN_SHADOW_SCALE_SHIFT= -fno-delete-null-pointer-checks -Wno-frame-address -Wno-address-of-packed-member -O2 -Wframe-larger-than=2048 -fstack-protector-strong -Wimplicit-fallthrough -Wno-gnu -Wno-unused-but-set-variable -Wno-unused-const-variable -fno-omit-frame-pointer -fno-optimize-sibling-calls -ftrivial-auto-var-init=zero -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang -fno-stack-clash-protection -fsanitize=kcfi -fno-sanitize-blacklist -Wdeclaration-after-statement -Wvla -Wno-pointer-sign -Wcast-function-type -fno-strict-overflow -fno-stack-check -Werror=date-time -Werror=incompatible-pointer-types -Wno-initializer-overrides -Wno-format -Wno-sign-compare -Wno-format-zero-length -Wno-pointer-to-enum-cast -Wno-tautological-constant-out-of-range-compare -Wno-unaligned-access -mstack-protector-guard=sysreg -mstack-protector-guard-reg=sp_el0 -mstack-protector-guard-offset=1184 -DKBUILD_MODFILE=\"kernel/dma/pool\" -DKBUILD_BASENAME=\"pool\" -DKBUILD_MODNAME=\"pool\" -D__KBUILD_MODNAME=kmod_pool -c -o kernel/dma/pool.o kernel/dma/pool.c
| 1.      <eof> parser at end of file
| 2.      Per-file LLVM IR generation
|  #0 0x00005559ef670830 PrintStackTraceSignalHandler(void*) Signals.cpp:0:0
|  #1 0x00005559ef66e6e4 llvm::sys::CleanupOnSignal(unsigned long) (/home/mark/src/llvm-project/build/bin/clang-15+0x36136e4)
|  #2 0x00005559ef5ab3f8 CrashRecoverySignalHandler(int) CrashRecoveryContext.cpp:0:0
|  #3 0x00007f5ac3547140 __restore_rt (/lib/x86_64-linux-gnu/libpthread.so.0+0x14140)
|  #4 0x00007f5ac302ace1 raise (/lib/x86_64-linux-gnu/libc.so.6+0x3bce1)
|  #5 0x00007f5ac3014537 abort (/lib/x86_64-linux-gnu/libc.so.6+0x25537)
|  #6 0x00005559ef5b2389 (/home/mark/src/llvm-project/build/bin/clang-15+0x3557389)
|  #7 0x00005559ef5e00f7 (/home/mark/src/llvm-project/build/bin/clang-15+0x35850f7)
|  #8 0x00005559ef641191 llvm::raw_svector_ostream::write_impl(char const*, unsigned long) (/home/mark/src/llvm-project/build/bin/clang-15+0x35e6191)
|  #9 0x00005559ef64325e llvm::raw_ostream::write(char const*, unsigned long) (/home/mark/src/llvm-project/build/bin/clang-15+0x35e825e)
| #10 0x00005559ef611dae llvm::Twine::str[abi:cxx11]() const (/home/mark/src/llvm-project/build/bin/clang-15+0x35b6dae)
| #11 0x00005559efac97be clang::CodeGen::CodeGenModule::FinalizeKCFITypePrefixes() (/home/mark/src/llvm-project/build/bin/clang-15+0x3a6e7be)
| #12 0x00005559efafd53c clang::CodeGen::CodeGenModule::Release() (/home/mark/src/llvm-project/build/bin/clang-15+0x3aa253c)
| #13 0x00005559f07564aa (anonymous namespace)::CodeGeneratorImpl::HandleTranslationUnit(clang::ASTContext&) ModuleBuilder.cpp:0:0
| #14 0x00005559f07543e5 clang::BackendConsumer::HandleTranslationUnit(clang::ASTContext&) (/home/mark/src/llvm-project/build/bin/clang-15+0x46f93e5)
| #15 0x00005559f11b85a9 clang::ParseAST(clang::Sema&, bool, bool) (/home/mark/src/llvm-project/build/bin/clang-15+0x515d5a9)
| #16 0x00005559f00cf419 clang::FrontendAction::Execute() (/home/mark/src/llvm-project/build/bin/clang-15+0x4074419)
| #17 0x00005559f005a85b clang::CompilerInstance::ExecuteAction(clang::FrontendAction&) (/home/mark/src/llvm-project/build/bin/clang-15+0x3fff85b)
| #18 0x00005559f0183860 clang::ExecuteCompilerInvocation(clang::CompilerInstance*) (/home/mark/src/llvm-project/build/bin/clang-15+0x4128860)
| #19 0x00005559ed0f051c cc1_main(llvm::ArrayRef<char const*>, char const*, void*) (/home/mark/src/llvm-project/build/bin/clang-15+0x109551c)
| #20 0x00005559ed0ed3f9 ExecuteCC1Tool(llvm::SmallVectorImpl<char const*>&) driver.cpp:0:0
| #21 0x00005559efed5fa5 void llvm::function_ref<void ()>::callback_fn<clang::driver::CC1Command::Execute(llvm::ArrayRef<llvm::Optional<llvm::StringRef> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, bool*) const::'lambda'()>(long) Job.cpp:0:0
| #22 0x00005559ef5ab4f3 llvm::CrashRecoveryContext::RunSafely(llvm::function_ref<void ()>) (/home/mark/src/llvm-project/build/bin/clang-15+0x35504f3)
| #23 0x00005559efed6304 clang::driver::CC1Command::Execute(llvm::ArrayRef<llvm::Optional<llvm::StringRef> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, bool*) const (.part.0) Job.cpp:0:0
| #24 0x00005559efea7b36 clang::driver::Compilation::ExecuteCommand(clang::driver::Command const&, clang::driver::Command const*&) const (/home/mark/src/llvm-project/build/bin/clang-15+0x3e4cb36)
| #25 0x00005559efea84e9 clang::driver::Compilation::ExecuteJobs(clang::driver::JobList const&, llvm::SmallVectorImpl<std::pair<int, clang::driver::Command const*> >&) const (/home/mark/src/llvm-project/build/bin/clang-15+0x3e4d4e9)
| #26 0x00005559efeb6619 clang::driver::Driver::ExecuteCompilation(clang::driver::Compilation&, llvm::SmallVectorImpl<std::pair<int, clang::driver::Command const*> >&) (/home/mark/src/llvm-project/build/bin/clang-15+0x3e5b619)
| #27 0x00005559ed033793 main (/home/mark/src/llvm-project/build/bin/clang-15+0xfd8793)
| #28 0x00007f5ac3015d0a __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x26d0a)
| #29 0x00005559ed0ecdaa _start (/home/mark/src/llvm-project/build/bin/clang-15+0x1091daa)
| clang-15: error: clang frontend command failed with exit code 134 (use -v to see invocation)
| clang version 15.0.0 (https://github.com/llvm/llvm-project.git 1e3994ce3cd7b217678edd589392c3c3c1575880)
| Target: aarch64-unknown-linux-gnu
| Thread model: posix
| InstalledDir: /home/mark/src/llvm-project/build/bin
| clang-15: note: diagnostic msg:
| ********************
| 
| PLEASE ATTACH THE FOLLOWING FILES TO THE BUG REPORT:
| Preprocessed source(s) and associated run script(s) are located at:
| clang-15: note: diagnostic msg: /tmp/pool-b4bab3.c
| clang-15: note: diagnostic msg: /tmp/pool-b4bab3.sh
| clang-15: note: diagnostic msg:
| 
| ********************
| make[2]: *** [scripts/Makefile.build:289: kernel/dma/pool.o] Error 134
| make[1]: *** [scripts/Makefile.build:551: kernel/dma] Error 2
| make[1]: *** Waiting for unfinished jobs....

Note: I've kept those files, but as the c file is 3.9M I have not included that here.


There appear to be other failures too:

| [mark@lakrids:~/src/linux]% PATH=/home/mark/src/llvm-project/build/bin/:$PATH make LLVM=1 ARCH=arm64 -j10 Image -s
| PLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/ and include the crash backtrace, preprocessed source, and associated run script.
| Stack dump:
| 0.      Program arguments: clang -Wp,-MMD,mm/.util.o.d -nostdinc -I./arch/arm64/include -I./arch/arm64/include/generated -I./include -I./arch/arm64/include/uapi -I./arch/arm64/include/generated/uapi -I./i
| nclude/uapi -I./include/generated/uapi -include ./include/linux/compiler-version.h -include ./include/linux/kconfig.h -include ./include/linux/compiler_types.h -D__KERNEL__ -mlittle-endian -DKASAN_SHADOW_SCALE_SHIFT= -Qunused-arguments -fmacro-prefix-map=./= -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE -Werror=implicit-function-declaration -Werror=implicit-int -Werror=return-type -Wno-format-security -std=gnu11 --target=aarch64-linux-gnu -fintegrated-as -Werror=unknown-warning-option -Werror=ignored-optimization-argument -mgeneral-regs-only -DCONFIG_CC_HAS_K_CONSTRAINT=1 -Wno-psabi -fno-asynchronous-unwind-tables -fno-unwind-tables -mbranch-protection=pac-ret+leaf+bti -Wa,-march=armv8.5-a -DARM64_ASM_ARCH=\"armv8.5-a\" -DKASAN_SHADOW_SCALE_SHIFT= -fno-delete-null-pointer-checks -Wno-frame-address -Wno-address-of-packed-member -O2 -Wframe-larger-than=2048 -fstack-protector-strong -Wimplicit-fallthrough -Wno-gnu -Wno-unused-but-set-variable -Wno-unused-const-variable -fno-omit-frame-pointer -fno-optimize-sibling-calls -ftrivial-auto-var-init=zero -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang -fno-stack-clash-protection -fsanitize=kcfi -fno-sanitize-blacklist -Wdeclaration-after-statement -Wvla -Wno-pointer-sign -Wcast-function-type -fno-strict-overflow -fno-stack-check -Werror=date-time -Werror=incompatible-pointer-types -Wno-initializer-overrides -Wno-format -Wno-sign-compare -Wno-format-zero-length -Wno-pointer-to-enum-cast -Wno-tautological-constant-out-of-range-compare -Wno-unaligned-access -mstack-protector-guard=sysreg -mstack-protector-guard-reg=sp_el0 -mstack-protector-guard-offset=1184 -DKBUILD_MODFILE=\"mm/util\" -DKBUILD_BASENAME=\"util\" -DKBUILD_MODNAME=\"util\" -D__KBUILD_MODNAME=kmod_util -c -o mm/util.o mm/util.c
| 1.      <eof> parser at end of file
| 2.      Per-file LLVM IR generation
|  #0 0x0000559484667830 PrintStackTraceSignalHandler(void*) Signals.cpp:0:0
|  #1 0x00005594846656e4 llvm::sys::CleanupOnSignal(unsigned long) (/home/mark/src/llvm-project/build/bin/clang-15+0x36136e4)
|  #2 0x00005594845a23f8 CrashRecoverySignalHandler(int) CrashRecoveryContext.cpp:0:0
|  #3 0x00007f490bbd1140 __restore_rt (/lib/x86_64-linux-gnu/libpthread.so.0+0x14140)
|  #4 0x0000559484608ca8 llvm::Twine::printOneChild(llvm::raw_ostream&, llvm::Twine::Child, llvm::Twine::NodeKind) const (/home/mark/src/llvm-project/build/bin/clang-15+0x35b6ca8)
|  #5 0x0000559484608dae llvm::Twine::str[abi:cxx11]() const (/home/mark/src/llvm-project/build/bin/clang-15+0x35b6dae)
|  #6 0x0000559484ac07be clang::CodeGen::CodeGenModule::FinalizeKCFITypePrefixes() (/home/mark/src/llvm-project/build/bin/clang-15+0x3a6e7be)
|  #7 0x0000559484af453c clang::CodeGen::CodeGenModule::Release() (/home/mark/src/llvm-project/build/bin/clang-15+0x3aa253c)
|  #8 0x000055948574d4aa (anonymous namespace)::CodeGeneratorImpl::HandleTranslationUnit(clang::ASTContext&) ModuleBuilder.cpp:0:0
|  #9 0x000055948574b3e5 clang::BackendConsumer::HandleTranslationUnit(clang::ASTContext&) (/home/mark/src/llvm-project/build/bin/clang-15+0x46f93e5)
| #10 0x00005594861af5a9 clang::ParseAST(clang::Sema&, bool, bool) (/home/mark/src/llvm-project/build/bin/clang-15+0x515d5a9)
| #11 0x00005594850c6419 clang::FrontendAction::Execute() (/home/mark/src/llvm-project/build/bin/clang-15+0x4074419)
| #12 0x000055948505185b clang::CompilerInstance::ExecuteAction(clang::FrontendAction&) (/home/mark/src/llvm-project/build/bin/clang-15+0x3fff85b)
| #13 0x000055948517a860 clang::ExecuteCompilerInvocation(clang::CompilerInstance*) (/home/mark/src/llvm-project/build/bin/clang-15+0x4128860)
| #14 0x00005594820e751c cc1_main(llvm::ArrayRef<char const*>, char const*, void*) (/home/mark/src/llvm-project/build/bin/clang-15+0x109551c)
| #15 0x00005594820e43f9 ExecuteCC1Tool(llvm::SmallVectorImpl<char const*>&) driver.cpp:0:0
| #16 0x0000559484eccfa5 void llvm::function_ref<void ()>::callback_fn<clang::driver::CC1Command::Execute(llvm::ArrayRef<llvm::Optional<llvm::StringRef> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, bool*) const::'lambda'()>(long) Job.cpp:0:0
| #17 0x00005594845a24f3 llvm::CrashRecoveryContext::RunSafely(llvm::function_ref<void ()>) (/home/mark/src/llvm-project/build/bin/clang-15+0x35504f3)
| #18 0x0000559484ecd304 clang::driver::CC1Command::Execute(llvm::ArrayRef<llvm::Optional<llvm::StringRef> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*, bool*) const (.part.0) Job.cpp:0:0
| #19 0x0000559484e9eb36 clang::driver::Compilation::ExecuteCommand(clang::driver::Command const&, clang::driver::Command const*&) const (/home/mark/src/llvm-project/build/bin/clang-15+0x3e4cb36)
| #20 0x0000559484e9f4e9 clang::driver::Compilation::ExecuteJobs(clang::driver::JobList const&, llvm::SmallVectorImpl<std::pair<int, clang::driver::Command const*> >&) const (/home/mark/src/llvm-project/build/bin/clang-15+0x3e4d4e9)
| #21 0x0000559484ead619 clang::driver::Driver::ExecuteCompilation(clang::driver::Compilation&, llvm::SmallVectorImpl<std::pair<int, clang::driver::Command const*> >&) (/home/mark/src/llvm-project/build/bin/clang-15+0x3e5b619)
| #22 0x000055948202a793 main (/home/mark/src/llvm-project/build/bin/clang-15+0xfd8793)
| #23 0x00007f490b69fd0a __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x26d0a)
| #24 0x00005594820e3daa _start (/home/mark/src/llvm-project/build/bin/clang-15+0x1091daa)
| clang-15: error: clang frontend command failed with exit code 135 (use -v to see invocation)
| clang version 15.0.0 (https://github.com/llvm/llvm-project.git 1e3994ce3cd7b217678edd589392c3c3c1575880)
| Target: aarch64-unknown-linux-gnu
| Thread model: posix
| InstalledDir: /home/mark/src/llvm-project/build/bin
| clang-15: note: diagnostic msg:
| ********************
| 
| PLEASE ATTACH THE FOLLOWING FILES TO THE BUG REPORT:
| Preprocessed source(s) and associated run script(s) are located at:
| clang-15: note: diagnostic msg: /tmp/util-30a0f2.c
| clang-15: note: diagnostic msg: /tmp/util-30a0f2.sh
| clang-15: note: diagnostic msg:
| 
| ********************

Note: I've saved those files for now, but the c file is 4.8M, so I haven't included it
inline or attached it here. 

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 100+ messages in thread


Note: I've saved those files for now, but as the .c file is 4.8M, I haven't
included it inline or attached it here.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 00/21] KCFI support
  2022-05-04 16:17   ` Mark Rutland
@ 2022-05-04 16:41     ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-05-04 16:41 UTC (permalink / raw)
  To: Mark Rutland
  Cc: LKML, Kees Cook, Josh Poimboeuf, Peter Zijlstra, X86 ML,
	Catalin Marinas, Will Deacon, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	linux-hardening, linux-arm-kernel, llvm

Hi Mark,

On Wed, May 4, 2022 at 9:18 AM Mark Rutland <mark.rutland@arm.com> wrote:
> I wanted to give this a spin on arm64, but I'm seeing some very odd toolchain
> behaviour. I'm not sure if I've done something wrong, or if I'm just hitting an
> edge-case, but it looks like using -fsanitize=kcfi causes the toolchain to hit
> out-of-memory errors and other issues which look like they could be memory
> corruption.

Thanks for the detailed bug report! It definitely looks like something
is wrong with the recent switch from std::string to Twine in the Clang
code. I didn't see this issue when compiling the arm64 kernel, but
I'll take a closer look and see if I can reproduce it.

Sami

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 00/21] KCFI support
  2022-05-04 16:41     ` Sami Tolvanen
@ 2022-05-04 20:17       ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-05-04 20:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: LKML, Kees Cook, Josh Poimboeuf, Peter Zijlstra, X86 ML,
	Catalin Marinas, Will Deacon, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	linux-hardening, linux-arm-kernel, llvm

On Wed, May 4, 2022 at 9:41 AM Sami Tolvanen <samitolvanen@google.com> wrote:
>
> Hi Mark,
>
> On Wed, May 4, 2022 at 9:18 AM Mark Rutland <mark.rutland@arm.com> wrote:
> > I wanted to give this a spin on arm64, but I'm seeing some very odd toolchain
> > behaviour. I'm not sure if I've done something wrong, or if I'm just hitting an
> > edge-case, but it looks like using -fsanitize=kcfi causes the toolchain to hit
> > out-of-memory errors and other issues which look like they could be memory
> > corruption.
>
> Thanks for the detailed bug report! It definitely looks like something
> is wrong with the recent switch from std::string to Twine in the Clang
> code. I didn't see this issue when compiling the arm64 kernel, but
> I'll take a closer look and see if I can reproduce it.

I was able to reproduce this by turning off assertions in Clang. It
seems to work fine with -DLLVM_ENABLE_ASSERTIONS=ON. I'll go fix.

Sami

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 00/21] KCFI support
  2022-05-04 20:17       ` Sami Tolvanen
@ 2022-05-05 12:36         ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2022-05-05 12:36 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: LKML, Kees Cook, Josh Poimboeuf, Peter Zijlstra, X86 ML,
	Catalin Marinas, Will Deacon, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	linux-hardening, linux-arm-kernel, llvm

On Wed, May 04, 2022 at 01:17:25PM -0700, Sami Tolvanen wrote:
> On Wed, May 4, 2022 at 9:41 AM Sami Tolvanen <samitolvanen@google.com> wrote:
> >
> > Hi Mark,
> >
> > On Wed, May 4, 2022 at 9:18 AM Mark Rutland <mark.rutland@arm.com> wrote:
> > > I wanted to give this a spin on arm64, but I'm seeing some very odd toolchain
> > > behaviour. I'm not sure if I've done something wrong, or if I'm just hitting an
> > > edge-case, but it looks like using -fsanitize=kcfi causes the toolchain to hit
> > > out-of-memory errors and other issues which look like they could be memory
> > > corruption.
> >
> > Thanks for the detailed bug report! It definitely looks like something
> > is wrong with the recent switch from std::string to Twine in the Clang
> > code. I didn't see this issue when compiling the arm64 kernel, but
> > I'll take a closer look and see if I can reproduce it.
> 
> I was able to reproduce this by turning off assertions in Clang. It
> seems to work fine with -DLLVM_ENABLE_ASSERTIONS=ON. I'll go fix.

FWIW, a `-DLLVM_ENABLE_ASSERTIONS=ON` build also seems to work for me when
building a kernel with CONFIG_CFI_CLANG=y. It's much slower than a regular
Release build, so I'm still waiting for that to finish building a kernel, but
it has gotten much further through the build without issues.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 09/21] arm64: Add CFI error handling
  2022-04-29 20:36   ` Sami Tolvanen
@ 2022-05-05 15:44     ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2022-05-05 15:44 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: linux-kernel, Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86,
	Catalin Marinas, Will Deacon, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	linux-hardening, linux-arm-kernel, llvm

Hi Sami,

On Fri, Apr 29, 2022 at 01:36:32PM -0700, Sami Tolvanen wrote:
> With -fsanitize=kcfi, CFI always traps. Add arm64 support for handling
> CFI failures and determining the target address.
> 
> Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
> ---
>  arch/arm64/include/asm/brk-imm.h |  2 ++
>  arch/arm64/include/asm/insn.h    |  1 +
>  arch/arm64/kernel/traps.c        | 57 ++++++++++++++++++++++++++++++++
>  3 files changed, 60 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/brk-imm.h b/arch/arm64/include/asm/brk-imm.h
> index ec7720dbe2c8..3a50b70b4404 100644
> --- a/arch/arm64/include/asm/brk-imm.h
> +++ b/arch/arm64/include/asm/brk-imm.h
> @@ -16,6 +16,7 @@
>   * 0x400: for dynamic BRK instruction
>   * 0x401: for compile time BRK instruction
>   * 0x800: kernel-mode BUG() and WARN() traps
> + * 0x801: Control-Flow Integrity traps
>   * 0x9xx: tag-based KASAN trap (allowed values 0x900 - 0x9ff)

As a high-level thing, it would be good if we could agree on some partitioning
of the BRK immediate space between compiler usage and kernel usage (or have
some way to ask the compiler to use specific values), so that we can allocate
values without clashing.
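
For illustration, one possible split (hypothetical values only, not something
this series defines) could look like:

	/* Sketch: reserve the upper half of the 16-bit BRK immediate
	 * space for compiler-emitted traps such as KCFI. The names and
	 * the 0x8000 boundary here are assumptions. */
	#define KERNEL_BRK_IMM_START	0x0000
	#define KERNEL_BRK_IMM_END	0x7fff
	#define COMPILER_BRK_IMM_START	0x8000
	#define COMPILER_BRK_IMM_END	0xffff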

>   */
>  #define KPROBES_BRK_IMM			0x004
> @@ -25,6 +26,7 @@
>  #define KGDB_DYN_DBG_BRK_IMM		0x400
>  #define KGDB_COMPILED_DBG_BRK_IMM	0x401
>  #define BUG_BRK_IMM			0x800
> +#define CFI_BRK_IMM			0x801
>  #define KASAN_BRK_IMM			0x900
>  #define KASAN_BRK_MASK			0x0ff
>  
> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
> index 1e5760d567ae..12225bdfa776 100644
> --- a/arch/arm64/include/asm/insn.h
> +++ b/arch/arm64/include/asm/insn.h
> @@ -334,6 +334,7 @@ __AARCH64_INSN_FUNCS(store_pre,	0x3FE00C00, 0x38000C00)
>  __AARCH64_INSN_FUNCS(load_pre,	0x3FE00C00, 0x38400C00)
>  __AARCH64_INSN_FUNCS(store_post,	0x3FE00C00, 0x38000400)
>  __AARCH64_INSN_FUNCS(load_post,	0x3FE00C00, 0x38400400)
> +__AARCH64_INSN_FUNCS(ldur,	0x3FE00C00, 0x38400000)
>  __AARCH64_INSN_FUNCS(str_reg,	0x3FE0EC00, 0x38206800)
>  __AARCH64_INSN_FUNCS(ldadd,	0x3F20FC00, 0x38200000)
>  __AARCH64_INSN_FUNCS(ldclr,	0x3F20FC00, 0x38201000)
> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index 0529fd57567e..b524411ba663 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -26,6 +26,7 @@
>  #include <linux/syscalls.h>
>  #include <linux/mm_types.h>
>  #include <linux/kasan.h>
> +#include <linux/cfi.h>
>  
>  #include <asm/atomic.h>
>  #include <asm/bug.h>
> @@ -990,6 +991,55 @@ static struct break_hook bug_break_hook = {
>  	.imm = BUG_BRK_IMM,
>  };
>  
> +#ifdef CONFIG_CFI_CLANG
> +void *arch_get_cfi_target(unsigned long addr, struct pt_regs *regs)
> +{
> +	/* The expected CFI check instruction sequence:
> +	 *   ldur    wA, [xN, #-4]
> +	 *   movk    wB, #nnnnn
> +	 *   movk    wB, #nnnnn, lsl #16
> +	 *   cmp     wA, wB
> +	 *   b.eq    .Ltmp1
> +	 *   brk     #0x801		; <- addr
> +	 *   .Ltmp1:
> +	 *
> +	 * Therefore, the target address is in the xN register, which we can
> +	 * decode from the ldur instruction.
> +	 */
> +	u32 insn, rn;
> +	void *p = (void *)(addr - 5 * AARCH64_INSN_SIZE);

It would be a bit nicer if we could encode the register index into the BRK
immediate, i.e. allocate a range of 32 immediates (or 31 given BLR XZR is
nonsensical), and have:

	BRK #CFI_BRK_IMM + n

... where `n` is the Xn index.

That way the kernel doesn't need to know the specific code sequence and
wouldn't have to decode the instruction to find the relevant register -- we
could determine that from the ESR alone. That would also avoid tying the
compiler into a specific code sequence, and would allow that to change.

Since the BRK immediate is 16 bits, we have enough space to also encode the
index of the wB register, which would allow the kernel's BRK handler to recover
and log the expected type value and the value at the target of the branch
(the latter we can recover from xN, so we don't need wA to be encoded into the
immediate).

With that, the handler can be something like:

| #define CFI_BRK_IMM_TARGET	GENMASK(4, 0)
| #define CFI_BRK_IMM_TYPE	GENMASK(9, 5)
| 
| #define CFI_BRK_IMM_BASE	0x8000
| #define CFI_BRK_IMM_MASK	(CFI_BRK_IMM_TARGET | CFI_BRK_IMM_TYPE)
| 
| static int cfi_handler(struct pt_regs *regs, unsigned long esr)
| {
| 	int reg_target, reg_type;
| 	unsigned long target, type;
| 
| 	/* Both register indices come from the BRK immediate in the ESR. */
| 	reg_target = FIELD_GET(CFI_BRK_IMM_TARGET, esr);
| 	target = pt_regs_read_reg(regs, reg_target);
| 
| 	reg_type = FIELD_GET(CFI_BRK_IMM_TYPE, esr);
| 	type = pt_regs_read_reg(regs, reg_type);
| 
| 	report_cfi_failure(regs,		// regs
| 			   regs->pc,		// BRK address
| 			   target,		// branch target
| 			   type);		// expected type
| 
| 	// TODO: switch over the return value of report_cfi_failure()
| }
| 
| struct break_hook cfi_break_hook = {
| 	.fn = cfi_handler,
| 	.imm = CFI_BRK_IMM_BASE,
| 	.mask = CFI_BRK_IMM_MASK,
| };

... does the compiler side of that sound possible?
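
For concreteness, under the layout above the immediate for a given target/type
register pair would be (a sketch of the proposed encoding, not something the
series defines yet):

	/* e.g. target address in x0, expected type in w17:
	 * CFI_BRK_IMM(0, 17) == 0x8000 | (17 << 5) | 0 == 0x8220,
	 * so the compiler would emit "brk #0x8220" at that call site. */
	#define CFI_BRK_IMM(target_reg, type_reg) \
		(CFI_BRK_IMM_BASE | ((type_reg) << 5) | (target_reg))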

Thanks,
Mark.

>
> +
> +	if (aarch64_insn_read(p, &insn) || !aarch64_insn_is_ldur(insn))
> +		return NULL;
> +
> +	rn = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RN, insn);
> +	return (void *)regs->regs[rn];
> +}
> +
> +static int cfi_handler(struct pt_regs *regs, unsigned int esr)
> +{
> +	switch (report_cfi(regs->pc, regs)) {
> +	case BUG_TRAP_TYPE_BUG:
> +		die("Oops - CFI", regs, 0);
> +		break;
> +
> +	case BUG_TRAP_TYPE_WARN:
> +		break;
> +
> +	default:
> +		return DBG_HOOK_ERROR;
> +	}
> +
> +	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
> +	return DBG_HOOK_HANDLED;
> +}
> +
> +static struct break_hook cfi_break_hook = {
> +	.fn = cfi_handler,
> +	.imm = CFI_BRK_IMM,
> +};
> +#endif /* CONFIG_CFI_CLANG */
> +
>  static int reserved_fault_handler(struct pt_regs *regs, unsigned int esr)
>  {
>  	pr_err("%s generated an invalid instruction at %pS!\n",
> @@ -1063,6 +1113,10 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
>  
>  	if ((comment & ~KASAN_BRK_MASK) == KASAN_BRK_IMM)
>  		return kasan_handler(regs, esr) != DBG_HOOK_HANDLED;
> +#endif
> +#ifdef CONFIG_CFI_CLANG
> +	if ((esr & ESR_ELx_BRK64_ISS_COMMENT_MASK) == CFI_BRK_IMM)
> +		return cfi_handler(regs, esr) != DBG_HOOK_HANDLED;
>  #endif
>  	return bug_handler(regs, esr) != DBG_HOOK_HANDLED;
>  }
> @@ -1070,6 +1124,9 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
>  void __init trap_init(void)
>  {
>  	register_kernel_break_hook(&bug_break_hook);
> +#ifdef CONFIG_CFI_CLANG
> +	register_kernel_break_hook(&cfi_break_hook);
> +#endif
>  	register_kernel_break_hook(&fault_break_hook);
>  #ifdef CONFIG_KASAN_SW_TAGS
>  	register_kernel_break_hook(&kasan_break_hook);
> -- 
> 2.36.0.464.gb9c8b46e94-goog
> 

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 00/21] KCFI support
  2022-05-05 12:36         ` Mark Rutland
@ 2022-05-05 16:00           ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-05-05 16:00 UTC (permalink / raw)
  To: Mark Rutland
  Cc: LKML, Kees Cook, Josh Poimboeuf, Peter Zijlstra, X86 ML,
	Catalin Marinas, Will Deacon, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	linux-hardening, linux-arm-kernel, llvm

On Thu, May 5, 2022 at 5:36 AM Mark Rutland <mark.rutland@arm.com> wrote:
> FWIW, a `-DLLVM_ENABLE_ASSERTIONS=ON` build also seems to work for me when
> building a kernel with CONFIG_CFI_CLANG=y. It's much slower than a regular
> Release build, so I'm still waiting for that to finish building a kernel, but
> it has gotten much further through the build without issues.

Thanks for confirming. This issue should be fixed here if you want to
give it another try:

https://github.com/samitolvanen/llvm-project/commits/kcfi

Sami

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 09/21] arm64: Add CFI error handling
  2022-05-05 15:44     ` Mark Rutland
@ 2022-05-05 16:23       ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-05-05 16:23 UTC (permalink / raw)
  To: Mark Rutland
  Cc: LKML, Kees Cook, Josh Poimboeuf, Peter Zijlstra, X86 ML,
	Catalin Marinas, Will Deacon, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	linux-hardening, linux-arm-kernel, llvm

On Thu, May 5, 2022 at 8:45 AM Mark Rutland <mark.rutland@arm.com> wrote:
> It would be a bit nicer if we could encode the register index into the BRK
> immediate, i.e. allocate a range of 32 immediates (or 31 given BLR XZR is
> nonsensical), and have:
>
>         BRK #CFI_BRK_IMM + n
>
> ... where `n` is the Xn index.
>
> That way the kernel doesn't need to know the specific code sequence and
> wouldn't have to decode the instruction to find the relevant register -- we
> could determine that from the ESR alone. That would also avoid tying the
> compiler into a specific code sequence, and would allow that to change.
>
> Since the BRK immediate is 16 bits, we have enough space to also encode the
> index of the wB register, which would allow the kernel's BRK handler to recover
> and log the expected type value and the value at the target of the branch
> (the latter we can recover from xN, so we don't need wA to be encoded into the
> immediate).

Sure, sounds like a good idea.

> ... does the compiler side of that sound possible?

Yes, this should be doable. I'll take a look and change this in the
next version.

Sami

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 10/21] treewide: Drop function_nocfi
  2022-04-29 20:36   ` Sami Tolvanen
@ 2022-05-05 16:30     ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2022-05-05 16:30 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: linux-kernel, Kees Cook, Josh Poimboeuf, Peter Zijlstra, x86,
	Catalin Marinas, Will Deacon, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	linux-hardening, linux-arm-kernel, llvm

On Fri, Apr 29, 2022 at 01:36:33PM -0700, Sami Tolvanen wrote:
> With -fsanitize=kcfi, we no longer need function_nocfi() as
> the compiler won't change function references to point to a
> jump table. Remove all implementations and uses of the macro.
> 
> Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
> ---
>  arch/arm64/include/asm/compiler.h         | 16 ----------------
>  arch/arm64/include/asm/ftrace.h           |  2 +-
>  arch/arm64/include/asm/mmu_context.h      |  2 +-
>  arch/arm64/kernel/acpi_parking_protocol.c |  2 +-
>  arch/arm64/kernel/cpufeature.c            |  2 +-
>  arch/arm64/kernel/ftrace.c                |  2 +-
>  arch/arm64/kernel/machine_kexec.c         |  2 +-
>  arch/arm64/kernel/psci.c                  |  2 +-
>  arch/arm64/kernel/smp_spin_table.c        |  2 +-
>  drivers/firmware/psci/psci.c              |  4 ++--
>  drivers/misc/lkdtm/usercopy.c             |  2 +-
>  include/linux/compiler.h                  | 10 ----------
>  12 files changed, 11 insertions(+), 37 deletions(-)

Nice!

I also believe that in most cases we can drop the __nocfi annotation on callers
now that we can mark the called assembly function with SYM_TYPED_FUNC_START().

In most cases we needed the __nocfi annotation on a caller because it was
invoking an assembly function at an unusual virtual address (which differed
from the link address), and the existing CFI scheme couldn't handle that. The
kCFI scheme should handle that fine so long as the type ID before the function
is accessible.

The other odd case was where we had the non-cfi address of a target function
(e.g. for callback structures populated in assembly), and that doesn't matter
with kCFI.

In looking at the below I spotted some latent issues. I'll prepare some patches
for those.

> diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
> index dc3ea4080e2e..6fb2e6bcc392 100644
> --- a/arch/arm64/include/asm/compiler.h
> +++ b/arch/arm64/include/asm/compiler.h
> @@ -23,20 +23,4 @@
>  #define __builtin_return_address(val)					\
>  	(void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
>  
> -#ifdef CONFIG_CFI_CLANG
> -/*
> - * With CONFIG_CFI_CLANG, the compiler replaces function address
> - * references with the address of the function's CFI jump table
> - * entry. The function_nocfi macro always returns the address of the
> - * actual function instead.
> - */
> -#define function_nocfi(x) ({						\
> -	void *addr;							\
> -	asm("adrp %0, " __stringify(x) "\n\t"				\
> -	    "add  %0, %0, :lo12:" __stringify(x)			\
> -	    : "=r" (addr));						\
> -	addr;								\
> -})
> -#endif
> -
>  #endif /* __ASM_COMPILER_H */
> diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
> index 1494cfa8639b..c96d47cb8f46 100644
> --- a/arch/arm64/include/asm/ftrace.h
> +++ b/arch/arm64/include/asm/ftrace.h
> @@ -26,7 +26,7 @@
>  #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
>  #define ARCH_SUPPORTS_FTRACE_OPS 1
>  #else
> -#define MCOUNT_ADDR		((unsigned long)function_nocfi(_mcount))
> +#define MCOUNT_ADDR		((unsigned long)_mcount)
>  #endif
>  
>  /* The BL at the callsite's adjusted rec->ip */
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index 6770667b34a3..c9df5ab2c448 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -164,7 +164,7 @@ static inline void __nocfi cpu_replace_ttbr1(pgd_t *pgdp)
>  		ttbr1 |= TTBR_CNP_BIT;
>  	}
>  
> -	replace_phys = (void *)__pa_symbol(function_nocfi(idmap_cpu_replace_ttbr1));
> +	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
>  
>  	cpu_install_idmap();
>  	replace_phys(ttbr1);


As long as we create `idmap_cpu_replace_ttbr1` with SYM_TYPED_FUNC_START(), we
can drop `__nocfi` from `cpu_replace_ttbr1`.

[...]

> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index d72c4b4d389c..dae07d99508b 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1619,7 +1619,7 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
>  	if (arm64_use_ng_mappings)
>  		return;
>  
> -	remap_fn = (void *)__pa_symbol(function_nocfi(idmap_kpti_install_ng_mappings));
> +	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
>  
>  	cpu_install_idmap();
>  	remap_fn(cpu, num_online_cpus(), __pa_symbol(swapper_pg_dir));

There's a latent bug here with the existing CFI scheme, since
`kpti_install_ng_mappings` isn't marked with __nocfi and should explode when
calling `idmap_kpti_install_ng_mappings` via the idmap.

With the kCFI scheme we instead need to mark `idmap_kpti_install_ng_mappings`
with SYM_TYPED_FUNC_START().
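
Roughly, assuming the helper macros from patch 7 work as described, the
annotation would look something like this in arch/arm64/mm/proc.S:

	SYM_TYPED_FUNC_START(idmap_kpti_install_ng_mappings)
		/* existing body unchanged */
	SYM_FUNC_END(idmap_kpti_install_ng_mappings)

together with a C-visible declaration of the function, so the compiler emits
the __kcfi_typeid_idmap_kpti_install_ng_mappings symbol for the annotation to
reference.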

[...]

> diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
> index e16b248699d5..4eb5388aa5a6 100644
> --- a/arch/arm64/kernel/machine_kexec.c
> +++ b/arch/arm64/kernel/machine_kexec.c
> @@ -204,7 +204,7 @@ void machine_kexec(struct kimage *kimage)
>  		typeof(cpu_soft_restart) *restart;
>  
>  		cpu_install_idmap();
> -		restart = (void *)__pa_symbol(function_nocfi(cpu_soft_restart));
> +		restart = (void *)__pa_symbol(cpu_soft_restart);
>  		restart(is_hyp_nvhe(), kimage->start, kimage->arch.dtb_mem,
>  			0, 0);
>  	} else {

There's a latent bug here with the existing CFI scheme, since
`machine_kexec` isn't marked with __nocfi and should explode when calling
`cpu_soft_restart` via the idmap.

With the kCFI scheme we instead need to mark `cpu_soft_restart` with
SYM_TYPED_FUNC_START(). It's currently marked as SYM_CODE() because it doesn't
follow the usual function call conventions, but that also means it's broken for
BTI, and for now (without something like objtool caring about function calling
conventions) SYM_FUNC_START() is fine.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 10/21] treewide: Drop function_nocfi
  2022-05-05 16:30     ` Mark Rutland
@ 2022-05-05 16:51       ` Sami Tolvanen
  -1 siblings, 0 replies; 100+ messages in thread
From: Sami Tolvanen @ 2022-05-05 16:51 UTC (permalink / raw)
  To: Mark Rutland
  Cc: LKML, Kees Cook, Josh Poimboeuf, Peter Zijlstra, X86 ML,
	Catalin Marinas, Will Deacon, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	linux-hardening, linux-arm-kernel, llvm

On Thu, May 5, 2022 at 9:30 AM Mark Rutland <mark.rutland@arm.com> wrote:
> I also believe that in most cases we can drop the __nocfi annotation on callers
> now that we can mark the called assembly function with SYM_TYPED_FUNC_START().

Good point, thanks for pointing that out. I'll add these to the next
version of the series.

> There's a latent bug here with the existing CFI scheme, since
> `kpti_install_ng_mappings` isn't marked with __nocfi and should explode when
> calling `idmap_kpti_install_ng_mappings` via the idmap.

The CONFIG_UNMAP_KERNEL_AT_EL0 version of kpti_install_ng_mappings is
marked __nocfi.

> There's a latent bug here with the existing CFI scheme, since
> `machine_kexec` isn't marked with __nocfi and should explode when calling
> `cpu_soft_restart` via the idmap.

But it's indeed missing from this one.

Sami

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 00/21] KCFI support
  2022-05-05 16:00           ` Sami Tolvanen
@ 2022-05-05 17:14             ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2022-05-05 17:14 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: LKML, Kees Cook, Josh Poimboeuf, Peter Zijlstra, X86 ML,
	Catalin Marinas, Will Deacon, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	linux-hardening, linux-arm-kernel, llvm

On Thu, May 05, 2022 at 09:00:39AM -0700, Sami Tolvanen wrote:
> On Thu, May 5, 2022 at 5:36 AM Mark Rutland <mark.rutland@arm.com> wrote:
> > FWIW, a `-DLLVM_ENABLE_ASSERTIONS=ON` build also seems to work for me when
> > building a kernel with CONFIG_CFI_CLANG=y. It's much slower than a regular
> > Release build, so I'm still waiting for that to finish building a kernel, but
> > it has gotten much further through the build without issues.
> 
> Thanks for confirming. This issue should be fixed here if you want to
> give it another try:
> 
> https://github.com/samitolvanen/llvm-project/commits/kcfi

That works for me, building in Release mode. A defconfig + CFI_CLANG kernel
built with that builds and boots cleanly.

Thanks!

Mark.

^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: [RFC PATCH 10/21] treewide: Drop function_nocfi
  2022-05-05 16:51       ` Sami Tolvanen
@ 2022-05-05 18:03         ` Mark Rutland
  -1 siblings, 0 replies; 100+ messages in thread
From: Mark Rutland @ 2022-05-05 18:03 UTC (permalink / raw)
  To: Sami Tolvanen
  Cc: LKML, Kees Cook, Josh Poimboeuf, Peter Zijlstra, X86 ML,
	Catalin Marinas, Will Deacon, Nathan Chancellor,
	Nick Desaulniers, Joao Moreira, Sedat Dilek, Steven Rostedt,
	linux-hardening, linux-arm-kernel, llvm

On Thu, May 05, 2022 at 09:51:39AM -0700, Sami Tolvanen wrote:
> On Thu, May 5, 2022 at 9:30 AM Mark Rutland <mark.rutland@arm.com> wrote:
> > I also believe that in most cases we can drop the __nocfi annotation on callers
> > now that we can mark the called assembly function with SYM_TYPED_FUNC_START().
> 
> Good point, thanks for pointing that out. I'll add these to the next
> version of the series.

Also, I *think* we can drop __nocfi from __init, and always check calls to
functions in .init.text. IIUC we made those __nocfi because checking them led
to section mismatches and dangling entries in the jump tables after the init
text was discarded, neither of which should be a problem with kCFI.

Unfortunately, that appears to be masking some existing type mismatches; e.g.
psci_dt_init() blows up because it uses the wrong type for its callees (a
mismatched `const`). With that fixed up, arm64 boots fine.
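
To illustrate the kind of mismatch meant here, a contrived sketch (made-up
names, not the actual psci code):

	struct device_node;

	typedef int (*initcall_fn_t)(struct device_node *np);

	/* The callee's type differs from the typedef only by const... */
	static int example_init(const struct device_node *np)
	{
		return 0;
	}

	/*
	 * ...so storing it needs a cast. The cast silences the compiler,
	 * but the two KCFI type hashes still differ, so an indirect call
	 * through init_fn traps.
	 */
	static initcall_fn_t init_fn = (initcall_fn_t)example_init;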

> > There's a latent bug here with the existing CFI scheme, since
> > `kpti_install_ng_mappings` isn't marked with __nocfi and should explode when
> > calling `idmap_kpti_install_ng_mappings` via the idmap.
> 
> The CONFIG_UNMAP_KERNEL_AT_EL0 version of kpti_install_ng_mappings is
> marked __nocfi.

Ah, so it is. Sorry for the noise!

> > There's a latent bug here with the existing CFI scheme, since
> > `machine_kexec` isn't marked with __nocfi and should explode when calling
> > `cpu_soft_restart` via the idmap.
> 
> But it's indeed missing from this one.

Cool; I'll prep a patch that fixes just this, then.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 100+ messages in thread

end of thread, other threads:[~2022-05-05 18:04 UTC | newest]

Thread overview: 100+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-29 20:36 [RFC PATCH 00/21] KCFI support Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 01/21] efi/libstub: Filter out CC_FLAGS_CFI Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 02/21] arm64/vdso: " Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 03/21] kallsyms: Ignore __kcfi_typeid_ Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 04/21] cfi: Remove CONFIG_CFI_CLANG_SHADOW Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 05/21] cfi: Drop __CFI_ADDRESSABLE Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 06/21] cfi: Switch to -fsanitize=kcfi Sami Tolvanen
2022-04-30  9:09   ` Peter Zijlstra
2022-04-29 20:36 ` [RFC PATCH 07/21] cfi: Add type helper macros Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 08/21] arm64/crypto: Add types to indirect called assembly functions Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 09/21] arm64: Add CFI error handling Sami Tolvanen
2022-05-05 15:44   ` Mark Rutland
2022-05-05 16:23     ` Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 10/21] treewide: Drop function_nocfi Sami Tolvanen
2022-05-05 16:30   ` Mark Rutland
2022-05-05 16:51     ` Sami Tolvanen
2022-05-05 18:03       ` Mark Rutland
2022-04-29 20:36 ` [RFC PATCH 11/21] treewide: Drop WARN_ON_FUNCTION_MISMATCH Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 12/21] treewide: Drop __cficanonical Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 13/21] cfi: Add the cfi_unchecked macro Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 14/21] treewide: static_call: Pass call arguments to the macro Sami Tolvanen
2022-04-29 23:21   ` Peter Zijlstra
2022-04-30  0:49     ` Sami Tolvanen
2022-05-02  7:46       ` Peter Zijlstra
2022-04-29 20:36 ` [RFC PATCH 15/21] static_call: Use cfi_unchecked Sami Tolvanen
2022-04-29 23:23   ` Peter Zijlstra
2022-04-29 20:36 ` [RFC PATCH 16/21] objtool: Add support for CONFIG_CFI_CLANG Sami Tolvanen
2022-04-29 23:30   ` Peter Zijlstra
2022-04-30  1:00     ` Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 17/21] x86/tools/relocs: Ignore __kcfi_typeid_ relocations Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 18/21] x86: Add types to indirect called assembly functions Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 19/21] x86/purgatory: Disable CFI Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 20/21] x86/vdso: " Sami Tolvanen
2022-04-29 20:36 ` [RFC PATCH 21/21] x86: Add support for CONFIG_CFI_CLANG Sami Tolvanen
2022-04-30  9:24   ` Peter Zijlstra
2022-05-02 15:20     ` Sami Tolvanen
2022-04-29 22:53 ` [RFC PATCH 00/21] KCFI support Kees Cook
2022-04-30  9:02   ` Peter Zijlstra
2022-05-02 15:22     ` Sami Tolvanen
2022-05-02 19:55       ` Peter Zijlstra
2022-05-03 22:35         ` Peter Collingbourne
2022-05-04  7:34           ` Peter Zijlstra
2022-04-30 16:07 ` Kenton Groombridge
2022-05-02 15:31   ` Sami Tolvanen
2022-05-04 16:17 ` Mark Rutland
2022-05-04 16:41   ` Sami Tolvanen
2022-05-04 20:17     ` Sami Tolvanen
2022-05-05 12:36       ` Mark Rutland
2022-05-05 16:00         ` Sami Tolvanen
2022-05-05 17:14           ` Mark Rutland
