* [PATCH RESEND bpf-next 0/4] Reduce overhead of LSMs with static calls
@ 2023-01-20  0:08 KP Singh
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 1/4] kernel: Add helper macros for loop unrolling KP Singh
                   ` (3 more replies)
  0 siblings, 4 replies; 26+ messages in thread
From: KP Singh @ 2023-01-20  0:08 UTC (permalink / raw)
  To: linux-security-module, bpf
  Cc: ast, daniel, jackmanb, renauld, paul, casey, song, revest, keescook

# Background

LSM hooks (callbacks) are currently invoked as indirect function calls. These
callbacks are registered into a linked list at boot time, since the order of
the LSMs can be configured with the "lsm=" kernel command line parameter.

Indirect function calls have a high overhead due to the retpoline mitigation
for various speculative execution attacks.

Retpolines remain relevant even on newer-generation CPUs: recently discovered
speculative attacks, like Spectre BHB, still need retpolines to mitigate
branch history injection, and retpolines still need to be used in combination
with newer mitigation features like eIBRS.

This overhead is especially significant for the "bpf" LSM, which allows the
user to implement LSM functionality with an eBPF program. To facilitate this,
the "bpf" LSM provides a default callback for all LSM hooks. When enabled,
the "bpf" LSM therefore incurs an unnecessary / avoidable indirect call on
every hook. This is especially bad in OS hot paths (e.g. in the networking
stack). This overhead prevents the adoption of the bpf LSM on performance
critical systems, and also, in general, slows down all LSMs.

Since we know the addresses of the enabled LSM callbacks at compile time, and
only the order is determined at boot time, the LSM framework can allocate
static calls for each of the possible LSM callbacks, and these calls can be
updated once the order is determined at boot.
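
For background, here is a minimal sketch (not part of this series; the
sample_hook names are made up) of the static call API from
include/linux/static_call.h that this builds on:

	#include <linux/static_call.h>

	static int sample_hook_default(int x)
	{
		return 0;
	}

	static int sample_hook_impl(int x)
	{
		return x;
	}

	/* Emits a static call key and a trampoline for the call site. */
	DEFINE_STATIC_CALL(sample_hook, sample_hook_default);

	int dispatch_sample_hook(int x)
	{
		/* Compiled as a direct call to the current target. */
		return static_call(sample_hook)(x);
	}

	void select_sample_hook(void)
	{
		/* Patch the call site to the chosen implementation. */
		static_call_update(sample_hook, &sample_hook_impl);
	}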

This series is a respin of the RFC proposed by Paul Renauld (renauld@google.com)
and Brendan Jackman (jackmanb@google.com) [1].

# Performance improvement

With this patch-set, some syscalls with many LSM hooks in their path
benefitted by an average of ~3%. Here are the results of the relevant Unixbench
system benchmarks with the BPF LSM and a major LSM (in this case AppArmor)
enabled, with and without the series.

Benchmark                                               Delta(%): (+ is better)
===============================================================================
Execl Throughput                                             +2.9015
File Write 1024 bufsize 2000 maxblocks                       +5.4196
Pipe Throughput                                              +7.7434
Pipe-based Context Switching                                 +3.5118
Process Creation                                             +0.3552
Shell Scripts (1 concurrent)                                 +1.7106
System Call Overhead                                         +3.0067
System Benchmarks Index Score (Partial Only):                +3.1809

In the best case, some syscalls like eventfd_create benefitted by about 10%.
The full analysis can be viewed at https://kpsingh.ch/lsm-perf

[1] https://lore.kernel.org/linux-security-module/20200820164753.3256899-1-jackmanb@chromium.org/

KP Singh (4):
  kernel: Add helper macros for loop unrolling
  security: Generate a header with the count of enabled LSMs
  security: Replace indirect LSM hook calls with static calls
  bpf: Only enable BPF LSM hooks when an LSM program is attached

 include/linux/bpf.h              |   1 +
 include/linux/bpf_lsm.h          |   1 +
 include/linux/lsm_hooks.h        |  94 +++++++++++--
 include/linux/unroll.h           |  35 +++++
 kernel/bpf/trampoline.c          |  29 ++++-
 scripts/Makefile                 |   1 +
 scripts/security/.gitignore      |   1 +
 scripts/security/Makefile        |   4 +
 scripts/security/gen_lsm_count.c |  57 ++++++++
 security/Makefile                |  11 ++
 security/bpf/hooks.c             |  26 +++-
 security/security.c              | 217 ++++++++++++++++++++-----------
 12 files changed, 386 insertions(+), 91 deletions(-)
 create mode 100644 include/linux/unroll.h
 create mode 100644 scripts/security/.gitignore
 create mode 100644 scripts/security/Makefile
 create mode 100644 scripts/security/gen_lsm_count.c

-- 
2.39.0.246.g2a6d74b583-goog



* [PATCH RESEND bpf-next 1/4] kernel: Add helper macros for loop unrolling
  2023-01-20  0:08 [PATCH RESEND bpf-next 0/4] Reduce overhead of LSMs with static calls KP Singh
@ 2023-01-20  0:08 ` KP Singh
  2023-01-20  3:48   ` Kees Cook
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs KP Singh
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 26+ messages in thread
From: KP Singh @ 2023-01-20  0:08 UTC (permalink / raw)
  To: linux-security-module, bpf
  Cc: ast, daniel, jackmanb, renauld, paul, casey, song, revest,
	keescook, KP Singh

This makes it easy to initialize blocks of code (e.g. static calls and
keys).

UNROLL(N, MACRO, __VA_ARGS__) calls MACRO N times with the first
argument as the index of the iteration. This allows string pasting to
create unique tokens for variable names, function calls etc.

As an example:

	#include <linux/unroll.h>

	#define MACRO(N, a, b)            \
		int add_##N(int a, int b) \
		{                         \
			return a + b + N; \
		}

	UNROLL(2, MACRO, x, y)

expands to:

	int add_0(int x, int y)
	{
		return x + y + 0;
	}

	int add_1(int x, int y)
	{
		return x + y + 1;
	}

Signed-off-by: KP Singh <kpsingh@kernel.org>
---
 include/linux/unroll.h | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)
 create mode 100644 include/linux/unroll.h

diff --git a/include/linux/unroll.h b/include/linux/unroll.h
new file mode 100644
index 000000000000..e19aef95b94b
--- /dev/null
+++ b/include/linux/unroll.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright (C) 2020 Google LLC.
+ */
+
+#ifndef __UNROLL_H
+#define __UNROLL_H
+
+#define __UNROLL_CONCAT(a, b) a ## _ ## b
+#define UNROLL(N, MACRO, args...) __UNROLL_CONCAT(__UNROLL, N)(MACRO, args)
+
+#define __UNROLL_0(MACRO, args...)
+#define __UNROLL_1(MACRO, args...)  __UNROLL_0(MACRO, args)  MACRO(0, args)
+#define __UNROLL_2(MACRO, args...)  __UNROLL_1(MACRO, args)  MACRO(1, args)
+#define __UNROLL_3(MACRO, args...)  __UNROLL_2(MACRO, args)  MACRO(2, args)
+#define __UNROLL_4(MACRO, args...)  __UNROLL_3(MACRO, args)  MACRO(3, args)
+#define __UNROLL_5(MACRO, args...)  __UNROLL_4(MACRO, args)  MACRO(4, args)
+#define __UNROLL_6(MACRO, args...)  __UNROLL_5(MACRO, args)  MACRO(5, args)
+#define __UNROLL_7(MACRO, args...)  __UNROLL_6(MACRO, args)  MACRO(6, args)
+#define __UNROLL_8(MACRO, args...)  __UNROLL_7(MACRO, args)  MACRO(7, args)
+#define __UNROLL_9(MACRO, args...)  __UNROLL_8(MACRO, args)  MACRO(8, args)
+#define __UNROLL_10(MACRO, args...) __UNROLL_9(MACRO, args)  MACRO(9, args)
+#define __UNROLL_11(MACRO, args...) __UNROLL_10(MACRO, args) MACRO(10, args)
+#define __UNROLL_12(MACRO, args...) __UNROLL_11(MACRO, args) MACRO(11, args)
+#define __UNROLL_13(MACRO, args...) __UNROLL_12(MACRO, args) MACRO(12, args)
+#define __UNROLL_14(MACRO, args...) __UNROLL_13(MACRO, args) MACRO(13, args)
+#define __UNROLL_15(MACRO, args...) __UNROLL_14(MACRO, args) MACRO(14, args)
+#define __UNROLL_16(MACRO, args...) __UNROLL_15(MACRO, args) MACRO(15, args)
+#define __UNROLL_17(MACRO, args...) __UNROLL_16(MACRO, args) MACRO(16, args)
+#define __UNROLL_18(MACRO, args...) __UNROLL_17(MACRO, args) MACRO(17, args)
+#define __UNROLL_19(MACRO, args...) __UNROLL_18(MACRO, args) MACRO(18, args)
+#define __UNROLL_20(MACRO, args...) __UNROLL_19(MACRO, args) MACRO(19, args)
+
+#endif /* __UNROLL_H */
-- 
2.39.0.246.g2a6d74b583-goog



* [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs
  2023-01-20  0:08 [PATCH RESEND bpf-next 0/4] Reduce overhead of LSMs with static calls KP Singh
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 1/4] kernel: Add helper macros for loop unrolling KP Singh
@ 2023-01-20  0:08 ` KP Singh
  2023-01-20  4:04   ` Kees Cook
                     ` (3 more replies)
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls KP Singh
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 4/4] bpf: Only enable BPF LSM hooks when an LSM program is attached KP Singh
  3 siblings, 4 replies; 26+ messages in thread
From: KP Singh @ 2023-01-20  0:08 UTC (permalink / raw)
  To: linux-security-module, bpf
  Cc: ast, daniel, jackmanb, renauld, paul, casey, song, revest,
	keescook, KP Singh

The header defines a MAX_LSM_COUNT constant which is used in a
subsequent patch to generate the static calls for each LSM hook; these
are named using preprocessor token pasting. Since token pasting does not
work with arithmetic expressions, generate a simple lsm_count.h header
that represents the subset of LSMs that can be enabled on a given
kernel based on the config.

While one could generate static calls for all the possible LSMs that the
kernel has, this would be wasteful, as most kernels only enable a
handful of LSMs.
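
For example, on a kernel config where only capabilities, SELinux and the
BPF LSM are enabled, the generated include/generated/lsm_count.h would
contain (modulo whitespace):

	#ifndef _LSM_COUNT_H_
	#define _LSM_COUNT_H_

	#define MAX_LSM_COUNT 3
	#endif /* _LSM_COUNT_H_ */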

Signed-off-by: KP Singh <kpsingh@kernel.org>
---
 scripts/Makefile                 |  1 +
 scripts/security/.gitignore      |  1 +
 scripts/security/Makefile        |  4 +++
 scripts/security/gen_lsm_count.c | 57 ++++++++++++++++++++++++++++++++
 security/Makefile                | 11 ++++++
 5 files changed, 74 insertions(+)
 create mode 100644 scripts/security/.gitignore
 create mode 100644 scripts/security/Makefile
 create mode 100644 scripts/security/gen_lsm_count.c

diff --git a/scripts/Makefile b/scripts/Makefile
index 1575af84d557..9712249c0fb3 100644
--- a/scripts/Makefile
+++ b/scripts/Makefile
@@ -41,6 +41,7 @@ targets += module.lds
 subdir-$(CONFIG_GCC_PLUGINS) += gcc-plugins
 subdir-$(CONFIG_MODVERSIONS) += genksyms
 subdir-$(CONFIG_SECURITY_SELINUX) += selinux
+subdir-$(CONFIG_SECURITY) += security
 
 # Let clean descend into subdirs
 subdir-	+= basic dtc gdb kconfig mod
diff --git a/scripts/security/.gitignore b/scripts/security/.gitignore
new file mode 100644
index 000000000000..684af16735f1
--- /dev/null
+++ b/scripts/security/.gitignore
@@ -0,0 +1 @@
+gen_lsm_count
diff --git a/scripts/security/Makefile b/scripts/security/Makefile
new file mode 100644
index 000000000000..05f7e4109052
--- /dev/null
+++ b/scripts/security/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0
+hostprogs-always-y += gen_lsm_count
+HOST_EXTRACFLAGS += \
+	-I$(srctree)/include/uapi -I$(srctree)/include
diff --git a/scripts/security/gen_lsm_count.c b/scripts/security/gen_lsm_count.c
new file mode 100644
index 000000000000..a9a227724d84
--- /dev/null
+++ b/scripts/security/gen_lsm_count.c
@@ -0,0 +1,57 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* NOTE: we really do want to use the kernel headers here */
+#define __EXPORTED_HEADERS__
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <errno.h>
+#include <ctype.h>
+
+#include <linux/kconfig.h>
+
+#define GEN_MAX_LSM_COUNT (				\
+	/* Capabilities */				\
+	IS_ENABLED(CONFIG_SECURITY) +			\
+	IS_ENABLED(CONFIG_SECURITY_SELINUX) +		\
+	IS_ENABLED(CONFIG_SECURITY_SMACK) +		\
+	IS_ENABLED(CONFIG_SECURITY_TOMOYO) +		\
+	IS_ENABLED(CONFIG_SECURITY_APPARMOR) +		\
+	IS_ENABLED(CONFIG_SECURITY_YAMA) +		\
+	IS_ENABLED(CONFIG_SECURITY_LOADPIN) +		\
+	IS_ENABLED(CONFIG_SECURITY_SAFESETID) +		\
+	IS_ENABLED(CONFIG_SECURITY_LOCKDOWN_LSM) + 	\
+	IS_ENABLED(CONFIG_BPF_LSM) + \
+	IS_ENABLED(CONFIG_SECURITY_LANDLOCK))
+
+const char *progname;
+
+static void usage(void)
+{
+	printf("usage: %s lsm_count.h\n", progname);
+	exit(1);
+}
+
+int main(int argc, char *argv[])
+{
+	FILE *fout;
+
+	progname = argv[0];
+
+	if (argc < 2)
+		usage();
+
+	fout = fopen(argv[1], "w");
+	if (!fout) {
+		fprintf(stderr, "Could not open %s for writing:  %s\n",
+			argv[1], strerror(errno));
+		exit(2);
+	}
+
+	fprintf(fout, "#ifndef _LSM_COUNT_H_\n#define _LSM_COUNT_H_\n\n");
+	fprintf(fout, "\n#define MAX_LSM_COUNT %d\n", GEN_MAX_LSM_COUNT);
+	fprintf(fout, "#endif /* _LSM_COUNT_H_ */\n");
+	exit(0);
+}
diff --git a/security/Makefile b/security/Makefile
index 18121f8f85cd..7a47174831f4 100644
--- a/security/Makefile
+++ b/security/Makefile
@@ -3,6 +3,7 @@
 # Makefile for the kernel security code
 #
 
+gen := include/generated
 obj-$(CONFIG_KEYS)			+= keys/
 
 # always enable default capabilities
@@ -27,3 +28,13 @@ obj-$(CONFIG_SECURITY_LANDLOCK)		+= landlock/
 
 # Object integrity file lists
 obj-$(CONFIG_INTEGRITY)			+= integrity/
+
+$(addprefix $(obj)/,$(obj-y)): $(gen)/lsm_count.h
+
+quiet_cmd_lsm_count = GEN     ${gen}/lsm_count.h
+      cmd_lsm_count = scripts/security/gen_lsm_count ${gen}/lsm_count.h
+
+targets += lsm_count.h
+
+${gen}/lsm_count.h: FORCE
+	$(call if_changed,lsm_count)
-- 
2.39.0.246.g2a6d74b583-goog



* [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-01-20  0:08 [PATCH RESEND bpf-next 0/4] Reduce overhead of LSMs with static calls KP Singh
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 1/4] kernel: Add helper macros for loop unrolling KP Singh
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs KP Singh
@ 2023-01-20  0:08 ` KP Singh
  2023-01-20  4:36   ` Kees Cook
                     ` (2 more replies)
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 4/4] bpf: Only enable BPF LSM hooks when an LSM program is attached KP Singh
  3 siblings, 3 replies; 26+ messages in thread
From: KP Singh @ 2023-01-20  0:08 UTC (permalink / raw)
  To: linux-security-module, bpf
  Cc: ast, daniel, jackmanb, renauld, paul, casey, song, revest,
	keescook, KP Singh

LSM hooks are currently invoked from a linked list as indirect calls.
These calls are dispatched through retpolines as a mitigation for
speculative attacks (branch history / branch target injection), which adds
extra overhead that is especially bad in kernel hot paths:

security_file_ioctl:
   0xffffffff814f0320 <+0>:	endbr64
   0xffffffff814f0324 <+4>:	push   %rbp
   0xffffffff814f0325 <+5>:	push   %r15
   0xffffffff814f0327 <+7>:	push   %r14
   0xffffffff814f0329 <+9>:	push   %rbx
   0xffffffff814f032a <+10>:	mov    %rdx,%rbx
   0xffffffff814f032d <+13>:	mov    %esi,%ebp
   0xffffffff814f032f <+15>:	mov    %rdi,%r14
   0xffffffff814f0332 <+18>:	mov    $0xffffffff834a7030,%r15
   0xffffffff814f0339 <+25>:	mov    (%r15),%r15
   0xffffffff814f033c <+28>:	test   %r15,%r15
   0xffffffff814f033f <+31>:	je     0xffffffff814f0358 <security_file_ioctl+56>
   0xffffffff814f0341 <+33>:	mov    0x18(%r15),%r11
   0xffffffff814f0345 <+37>:	mov    %r14,%rdi
   0xffffffff814f0348 <+40>:	mov    %ebp,%esi
   0xffffffff814f034a <+42>:	mov    %rbx,%rdx

>>> 0xffffffff814f034d <+45>:	call   0xffffffff81f742e0 <__x86_indirect_thunk_array+352>

    Indirect calls that use retpolines lead to overhead, not just due
    to the extra instructions but also due to branch misses.

   0xffffffff814f0352 <+50>:	test   %eax,%eax
   0xffffffff814f0354 <+52>:	je     0xffffffff814f0339 <security_file_ioctl+25>
   0xffffffff814f0356 <+54>:	jmp    0xffffffff814f035a <security_file_ioctl+58>
   0xffffffff814f0358 <+56>:	xor    %eax,%eax
   0xffffffff814f035a <+58>:	pop    %rbx
   0xffffffff814f035b <+59>:	pop    %r14
   0xffffffff814f035d <+61>:	pop    %r15
   0xffffffff814f035f <+63>:	pop    %rbp
   0xffffffff814f0360 <+64>:	jmp    0xffffffff81f747c4 <__x86_return_thunk>

The indirect calls are not really needed, as the addresses of the
enabled LSM callbacks are known at compile time; only their order can
change, and that is determined at boot time via the lsm= kernel command
line parameter.

An array of static calls is defined per LSM hook, and the static calls
are updated at boot time once the order has been determined.

A static key guards whether an LSM static call is enabled or not.
Without this static key, for LSM hooks that return an int, the presence
of a hook that merely returns a default value can create side effects,
which has resulted in bugs [1].

With the hook now exposed as a static call, one can see that the
retpolines are no longer there and the LSM callbacks are invoked
directly:

security_file_ioctl:
   0xffffffff814f0dd0 <+0>:	endbr64
   0xffffffff814f0dd4 <+4>:	push   %rbp
   0xffffffff814f0dd5 <+5>:	push   %r14
   0xffffffff814f0dd7 <+7>:	push   %rbx
   0xffffffff814f0dd8 <+8>:	mov    %rdx,%rbx
   0xffffffff814f0ddb <+11>:	mov    %esi,%ebp
   0xffffffff814f0ddd <+13>:	mov    %rdi,%r14

>>> 0xffffffff814f0de0 <+16>:	jmp    0xffffffff814f0df1 <security_file_ioctl+33>

    Static key enabled for selinux_file_ioctl

>>> 0xffffffff814f0de2 <+18>:	jmp    0xffffffff814f0e08 <security_file_ioctl+56>

   Static key enabled for bpf_lsm_file_ioctl. This key is changed to
   default to false in a later patch of this series, to avoid the
   existing side-effect issues of the BPF LSM [1]

   0xffffffff814f0de4 <+20>:	xor    %eax,%eax
   0xffffffff814f0de6 <+22>:	xchg   %ax,%ax
   0xffffffff814f0de8 <+24>:	pop    %rbx
   0xffffffff814f0de9 <+25>:	pop    %r14
   0xffffffff814f0deb <+27>:	pop    %rbp
   0xffffffff814f0dec <+28>:	jmp    0xffffffff81f767c4 <__x86_return_thunk>
   0xffffffff814f0df1 <+33>:	endbr64
   0xffffffff814f0df5 <+37>:	mov    %r14,%rdi
   0xffffffff814f0df8 <+40>:	mov    %ebp,%esi
   0xffffffff814f0dfa <+42>:	mov    %rbx,%rdx

>>>   0xffffffff814f0dfd <+45>:	call   0xffffffff814fe820 <selinux_file_ioctl>

   Direct call to SELinux.

   0xffffffff814f0e02 <+50>:	test   %eax,%eax
   0xffffffff814f0e04 <+52>:	jne    0xffffffff814f0de8 <security_file_ioctl+24>
   0xffffffff814f0e06 <+54>:	jmp    0xffffffff814f0de2 <security_file_ioctl+18>
   0xffffffff814f0e08 <+56>:	endbr64
   0xffffffff814f0e0c <+60>:	mov    %r14,%rdi
   0xffffffff814f0e0f <+63>:	mov    %ebp,%esi
   0xffffffff814f0e11 <+65>:	mov    %rbx,%rdx

>>>  0xffffffff814f0e14 <+68>:	call   0xffffffff8123b7d0 <bpf_lsm_file_ioctl>

   Direct call to bpf_lsm_file_ioctl

   0xffffffff814f0e19 <+73>:	test   %eax,%eax
   0xffffffff814f0e1b <+75>:	jne    0xffffffff814f0de8 <security_file_ioctl+24>
   0xffffffff814f0e1d <+77>:	jmp    0xffffffff814f0de4 <security_file_ioctl+20>
   0xffffffff814f0e1f <+79>:	endbr64
   0xffffffff814f0e23 <+83>:	mov    %r14,%rdi
   0xffffffff814f0e26 <+86>:	mov    %ebp,%esi
   0xffffffff814f0e28 <+88>:	mov    %rbx,%rdx
   0xffffffff814f0e2b <+91>:	pop    %rbx
   0xffffffff814f0e2c <+92>:	pop    %r14
   0xffffffff814f0e2e <+94>:	pop    %rbp
   0xffffffff814f0e2f <+95>:	ret
   0xffffffff814f0e30 <+96>:	int3
   0xffffffff814f0e31 <+97>:	int3
   0xffffffff814f0e32 <+98>:	int3
   0xffffffff814f0e33 <+99>:	int3

There are some hooks that don't use the call_int_hook and call_void_hook
macros. These hooks are updated to use a new macro called
security_for_each_hook, where the lsm_callback is still invoked as an
indirect call. Currently, there are no performance-sensitive hooks that
use the security_for_each_hook macro. However, if performance-sensitive
hooks are discovered later, they can be updated to use static calls
with loop unrolling as well, using a custom macro.
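
For illustration, on a kernel where MAX_LSM_COUNT is 2, a call such as
call_int_hook(file_ioctl, 0, file, cmd, arg) expands (roughly) to:

	({
		__label__ out;
		int RC = 0;
		do {
			if (static_branch_unlikely(&security_enabled_key_file_ioctl_0)) {
				RC = static_call(lsm_static_call_file_ioctl_0)(file, cmd, arg);
				if (RC != 0)
					goto out;
			}
			if (static_branch_unlikely(&security_enabled_key_file_ioctl_1)) {
				RC = static_call(lsm_static_call_file_ioctl_1)(file, cmd, arg);
				if (RC != 0)
					goto out;
			}
		} while (0);
	out:
		RC;
	})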

[1] https://lore.kernel.org/linux-security-module/20220609234601.2026362-1-kpsingh@kernel.org/

Signed-off-by: KP Singh <kpsingh@kernel.org>
---
 include/linux/lsm_hooks.h |  83 +++++++++++++--
 security/security.c       | 216 ++++++++++++++++++++++++--------------
 2 files changed, 211 insertions(+), 88 deletions(-)

diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
index 0a5ba81f7367..c82d15a4ef50 100644
--- a/include/linux/lsm_hooks.h
+++ b/include/linux/lsm_hooks.h
@@ -28,6 +28,26 @@
 #include <linux/security.h>
 #include <linux/init.h>
 #include <linux/rculist.h>
+#include <linux/static_call.h>
+#include <linux/unroll.h>
+#include <linux/jump_label.h>
+
+/* Include the generated MAX_LSM_COUNT */
+#include <generated/lsm_count.h>
+
+#define SECURITY_HOOK_ENABLED_KEY(HOOK, IDX) security_enabled_key_##HOOK##_##IDX
+
+/*
+ * Identifier for the LSM static calls.
+ * HOOK is an LSM hook as defined in linux/lsm_hook_defs.h
+ * IDX is the index of the static call. 0 <= IDX < MAX_LSM_COUNT
+ */
+#define LSM_STATIC_CALL(HOOK, IDX) lsm_static_call_##HOOK##_##IDX
+
+/*
+ * Call the macro M for each LSM hook MAX_LSM_COUNT times.
+ */
+#define LSM_UNROLL(M, ...) UNROLL(MAX_LSM_COUNT, M, __VA_ARGS__)
 
 /**
  * union security_list_options - Linux Security Module hook function list
@@ -1657,21 +1677,48 @@ union security_list_options {
 	#define LSM_HOOK(RET, DEFAULT, NAME, ...) RET (*NAME)(__VA_ARGS__);
 	#include "lsm_hook_defs.h"
 	#undef LSM_HOOK
+	void *lsm_callback;
 };
 
-struct security_hook_heads {
-	#define LSM_HOOK(RET, DEFAULT, NAME, ...) struct hlist_head NAME;
-	#include "lsm_hook_defs.h"
+/*
+ * @key: static call key as defined by STATIC_CALL_KEY
+ * @trampoline: static call trampoline as defined by STATIC_CALL_TRAMP
+ * @hl: The security_hook_list as initialized by the owning LSM.
+ * @enabled_key: Enabled when the static call has an LSM hook associated.
+ */
+struct lsm_static_call {
+	struct static_call_key *key;
+	void *trampoline;
+	struct security_hook_list *hl;
+	struct static_key *enabled_key;
+};
+
+/*
+ * Table of the static calls for each LSM hook.
+ * Once the LSMs are initialized, their callbacks will be copied to these
+ * tables, filling each hook's slots in order: the first unused slot
+ * receives the next registered callback. A static key per slot records
+ * whether the slot has a callback, so unused slots cost only a
+ * patched-out static branch at run time.
+ */
+struct lsm_static_calls_table {
+	#define LSM_HOOK(RET, DEFAULT, NAME, ...) \
+		struct lsm_static_call NAME[MAX_LSM_COUNT];
+	#include <linux/lsm_hook_defs.h>
 	#undef LSM_HOOK
 } __randomize_layout;
 
 /*
  * Security module hook list structure.
  * For use with generic list macros for common operations.
+ *
+ * struct security_hook_list - Contents of a cacheable, mappable object.
+ * @scalls: The beginning of the array of static calls assigned to this hook.
+ * @hook: The callback for the hook.
+ * @lsm: The name of the lsm that owns this hook.
  */
 struct security_hook_list {
-	struct hlist_node		list;
-	struct hlist_head		*head;
+	struct lsm_static_call	*scalls;
 	union security_list_options	hook;
 	const char			*lsm;
 } __randomize_layout;
@@ -1701,10 +1748,12 @@ struct lsm_blob_sizes {
  * care of the common case and reduces the amount of
  * text involved.
  */
-#define LSM_HOOK_INIT(HEAD, HOOK) \
-	{ .head = &security_hook_heads.HEAD, .hook = { .HEAD = HOOK } }
+#define LSM_HOOK_INIT(NAME, CALLBACK)			\
+	{						\
+		.scalls = static_calls_table.NAME,	\
+		.hook = { .NAME = CALLBACK }		\
+	}
 
-extern struct security_hook_heads security_hook_heads;
 extern char *lsm_names;
 
 extern void security_add_hooks(struct security_hook_list *hooks, int count,
@@ -1756,10 +1805,21 @@ extern struct lsm_info __start_early_lsm_info[], __end_early_lsm_info[];
 static inline void security_delete_hooks(struct security_hook_list *hooks,
 						int count)
 {
-	int i;
+	struct lsm_static_call *scalls;
+	int i, j;
+
+	for (i = 0; i < count; i++) {
+		scalls = hooks[i].scalls;
+		for (j = 0; j < MAX_LSM_COUNT; j++) {
+			if (scalls[j].hl != &hooks[i])
+				continue;
 
-	for (i = 0; i < count; i++)
-		hlist_del_rcu(&hooks[i].list);
+			static_key_disable(scalls[j].enabled_key);
+			__static_call_update(scalls[j].key,
+					     scalls[j].trampoline, NULL);
+			scalls[j].hl = NULL;
+		}
+	}
 }
 #endif /* CONFIG_SECURITY_SELINUX_DISABLE */
 
@@ -1771,5 +1831,6 @@ static inline void security_delete_hooks(struct security_hook_list *hooks,
 #endif /* CONFIG_SECURITY_WRITABLE_HOOKS */
 
 extern int lsm_inode_alloc(struct inode *inode);
+extern struct lsm_static_calls_table static_calls_table __ro_after_init;
 
 #endif /* ! __LINUX_LSM_HOOKS_H */
diff --git a/security/security.c b/security/security.c
index d1571900a8c7..e54d5ba187d1 100644
--- a/security/security.c
+++ b/security/security.c
@@ -29,6 +29,8 @@
 #include <linux/string.h>
 #include <linux/msg.h>
 #include <net/flow.h>
+#include <linux/static_call.h>
+#include <linux/jump_label.h>
 
 #define MAX_LSM_EVM_XATTR	2
 
@@ -74,7 +76,6 @@ const char *const lockdown_reasons[LOCKDOWN_CONFIDENTIALITY_MAX+1] = {
 	[LOCKDOWN_CONFIDENTIALITY_MAX] = "confidentiality",
 };
 
-struct security_hook_heads security_hook_heads __lsm_ro_after_init;
 static BLOCKING_NOTIFIER_HEAD(blocking_lsm_notifier_chain);
 
 static struct kmem_cache *lsm_file_cache;
@@ -93,6 +94,43 @@ static __initconst const char * const builtin_lsm_order = CONFIG_LSM;
 static __initdata struct lsm_info **ordered_lsms;
 static __initdata struct lsm_info *exclusive;
 
+/*
+ * Define static calls and static keys for each LSM hook.
+ */
+
+#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...)			\
+	DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM),		\
+				*((RET(*)(__VA_ARGS__))NULL));		\
+	DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM));
+
+#define LSM_HOOK(RET, DEFAULT, NAME, ...)				\
+	LSM_UNROLL(DEFINE_LSM_STATIC_CALL, NAME, RET, __VA_ARGS__)
+#include <linux/lsm_hook_defs.h>
+#undef LSM_HOOK
+#undef DEFINE_LSM_STATIC_CALL
+
+/*
+ * Initialise a table of static calls for each LSM hook.
+ * DEFINE_STATIC_CALL_NULL invocation above generates a key (STATIC_CALL_KEY)
+ * and a trampoline (STATIC_CALL_TRAMP) which are used to call
+ * __static_call_update when updating the static call.
+ */
+struct lsm_static_calls_table static_calls_table __lsm_ro_after_init = {
+#define INIT_LSM_STATIC_CALL(NUM, NAME)					\
+	(struct lsm_static_call) {					\
+		.key = &STATIC_CALL_KEY(LSM_STATIC_CALL(NAME, NUM)),	\
+		.trampoline = &STATIC_CALL_TRAMP(LSM_STATIC_CALL(NAME, NUM)),\
+		.enabled_key = &SECURITY_HOOK_ENABLED_KEY(NAME, NUM).key,\
+	},
+#define LSM_HOOK(RET, DEFAULT, NAME, ...)				\
+	.NAME = {							\
+		LSM_UNROLL(INIT_LSM_STATIC_CALL, NAME)			\
+	},
+#include <linux/lsm_hook_defs.h>
+#undef LSM_HOOK
+#undef INIT_LSM_STATIC_CALL
+};
+
 static __initdata bool debug;
 #define init_debug(...)						\
 	do {							\
@@ -153,7 +191,7 @@ static void __init append_ordered_lsm(struct lsm_info *lsm, const char *from)
 	if (exists_ordered_lsm(lsm))
 		return;
 
-	if (WARN(last_lsm == LSM_COUNT, "%s: out of LSM slots!?\n", from))
+	if (WARN(last_lsm == LSM_COUNT, "%s: out of LSM static calls!?\n", from))
 		return;
 
 	/* Enable this LSM, if it is not already set. */
@@ -318,6 +356,24 @@ static void __init ordered_lsm_parse(const char *order, const char *origin)
 	kfree(sep);
 }
 
+static void __init lsm_static_call_init(struct security_hook_list *hl)
+{
+	struct lsm_static_call *scall = hl->scalls;
+	int i;
+
+	for (i = 0; i < MAX_LSM_COUNT; i++) {
+		/* Update the first static call that is not used yet */
+		if (!scall->hl) {
+			__static_call_update(scall->key, scall->trampoline, hl->hook.lsm_callback);
+			scall->hl = hl;
+			static_key_enable(scall->enabled_key);
+			return;
+		}
+		scall++;
+	}
+	panic("%s - No static call remaining to add LSM hook.\n", __func__);
+}
+
 static void __init lsm_early_cred(struct cred *cred);
 static void __init lsm_early_task(struct task_struct *task);
 
@@ -395,11 +451,6 @@ int __init early_security_init(void)
 {
 	struct lsm_info *lsm;
 
-#define LSM_HOOK(RET, DEFAULT, NAME, ...) \
-	INIT_HLIST_HEAD(&security_hook_heads.NAME);
-#include "linux/lsm_hook_defs.h"
-#undef LSM_HOOK
-
 	for (lsm = __start_early_lsm_info; lsm < __end_early_lsm_info; lsm++) {
 		if (!lsm->enabled)
 			lsm->enabled = &lsm_enabled_true;
@@ -515,7 +566,7 @@ void __init security_add_hooks(struct security_hook_list *hooks, int count,
 
 	for (i = 0; i < count; i++) {
 		hooks[i].lsm = lsm;
-		hlist_add_tail_rcu(&hooks[i].list, hooks[i].head);
+		lsm_static_call_init(&hooks[i]);
 	}
 
 	/*
@@ -753,28 +804,42 @@ static int lsm_superblock_alloc(struct super_block *sb)
  * call_int_hook:
  *	This is a hook that returns a value.
  */
+#define __CALL_STATIC_VOID(NUM, HOOK, ...)                                   \
+	if (static_branch_unlikely(&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM))) { \
+		static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__);        \
+	}
 
-#define call_void_hook(FUNC, ...)				\
-	do {							\
-		struct security_hook_list *P;			\
-								\
-		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
-			P->hook.FUNC(__VA_ARGS__);		\
+#define call_void_hook(FUNC, ...)                                 \
+	do {                                                      \
+		LSM_UNROLL(__CALL_STATIC_VOID, FUNC, __VA_ARGS__) \
 	} while (0)
 
-#define call_int_hook(FUNC, IRC, ...) ({			\
-	int RC = IRC;						\
-	do {							\
-		struct security_hook_list *P;			\
-								\
-		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
-			RC = P->hook.FUNC(__VA_ARGS__);		\
-			if (RC != 0)				\
-				break;				\
-		}						\
-	} while (0);						\
-	RC;							\
-})
+#define __CALL_STATIC_INT(NUM, R, HOOK, ...)                                 \
+	if (static_branch_unlikely(&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM))) { \
+		R = static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__);    \
+		if (R != 0)                                                  \
+			goto out;                                            \
+	}
+
+#define call_int_hook(FUNC, IRC, ...)                                        \
+	({                                                                   \
+		__label__ out;                                               \
+		int RC = IRC;                                                \
+		do {                                                         \
+			LSM_UNROLL(__CALL_STATIC_INT, RC, FUNC, __VA_ARGS__) \
+									     \
+		} while (0);                                                 \
+out:                                                                         \
+		RC;                                                          \
+	})
+
+#define security_for_each_hook(scall, NAME, expression)                  \
+	for (scall = static_calls_table.NAME;                            \
+	     scall - static_calls_table.NAME < MAX_LSM_COUNT; scall++) { \
+		if (!static_key_enabled(scall->enabled_key))             \
+			continue;                                        \
+		(expression);                                            \
+	}
 
 /* Security operations */
 
@@ -859,7 +924,7 @@ int security_settime64(const struct timespec64 *ts, const struct timezone *tz)
 
 int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
 {
-	struct security_hook_list *hp;
+	struct lsm_static_call *scall;
 	int cap_sys_admin = 1;
 	int rc;
 
@@ -870,13 +935,13 @@ int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
 	 * agree that it should be set it will. If any module
 	 * thinks it should not be set it won't.
 	 */
-	hlist_for_each_entry(hp, &security_hook_heads.vm_enough_memory, list) {
-		rc = hp->hook.vm_enough_memory(mm, pages);
+	security_for_each_hook(scall, vm_enough_memory, ({
+		rc = scall->hl->hook.vm_enough_memory(mm, pages);
 		if (rc <= 0) {
 			cap_sys_admin = 0;
 			break;
 		}
-	}
+	}));
 	return __vm_enough_memory(mm, pages, cap_sys_admin);
 }
 
@@ -918,18 +983,17 @@ int security_fs_context_dup(struct fs_context *fc, struct fs_context *src_fc)
 int security_fs_context_parse_param(struct fs_context *fc,
 				    struct fs_parameter *param)
 {
-	struct security_hook_list *hp;
+	struct lsm_static_call *scall;
 	int trc;
 	int rc = -ENOPARAM;
 
-	hlist_for_each_entry(hp, &security_hook_heads.fs_context_parse_param,
-			     list) {
-		trc = hp->hook.fs_context_parse_param(fc, param);
+	security_for_each_hook(scall, fs_context_parse_param, ({
+		trc = scall->hl->hook.fs_context_parse_param(fc, param);
 		if (trc == 0)
 			rc = 0;
 		else if (trc != -ENOPARAM)
 			return trc;
-	}
+	}));
 	return rc;
 }
 
@@ -1092,18 +1156,19 @@ int security_dentry_init_security(struct dentry *dentry, int mode,
 				  const char **xattr_name, void **ctx,
 				  u32 *ctxlen)
 {
-	struct security_hook_list *hp;
+	struct lsm_static_call *scall;
 	int rc;
 
 	/*
 	 * Only one module will provide a security context.
 	 */
-	hlist_for_each_entry(hp, &security_hook_heads.dentry_init_security, list) {
-		rc = hp->hook.dentry_init_security(dentry, mode, name,
+	security_for_each_hook(scall, dentry_init_security, ({
+		rc = scall->hl->hook.dentry_init_security(dentry, mode, name,
 						   xattr_name, ctx, ctxlen);
 		if (rc != LSM_RET_DEFAULT(dentry_init_security))
 			return rc;
-	}
+	}));
+
 	return LSM_RET_DEFAULT(dentry_init_security);
 }
 EXPORT_SYMBOL(security_dentry_init_security);
@@ -1502,7 +1567,7 @@ int security_inode_getsecurity(struct user_namespace *mnt_userns,
 			       struct inode *inode, const char *name,
 			       void **buffer, bool alloc)
 {
-	struct security_hook_list *hp;
+	struct lsm_static_call *scall;
 	int rc;
 
 	if (unlikely(IS_PRIVATE(inode)))
@@ -1510,17 +1575,17 @@ int security_inode_getsecurity(struct user_namespace *mnt_userns,
 	/*
 	 * Only one module will provide an attribute with a given name.
 	 */
-	hlist_for_each_entry(hp, &security_hook_heads.inode_getsecurity, list) {
-		rc = hp->hook.inode_getsecurity(mnt_userns, inode, name, buffer, alloc);
+	security_for_each_hook(scall, inode_getsecurity, ({
+		rc = scall->hl->hook.inode_getsecurity(mnt_userns, inode, name, buffer, alloc);
 		if (rc != LSM_RET_DEFAULT(inode_getsecurity))
 			return rc;
-	}
+	}));
 	return LSM_RET_DEFAULT(inode_getsecurity);
 }
 
 int security_inode_setsecurity(struct inode *inode, const char *name, const void *value, size_t size, int flags)
 {
-	struct security_hook_list *hp;
+	struct lsm_static_call *scall;
 	int rc;
 
 	if (unlikely(IS_PRIVATE(inode)))
@@ -1528,12 +1593,11 @@ int security_inode_setsecurity(struct inode *inode, const char *name, const void
 	/*
 	 * Only one module will provide an attribute with a given name.
 	 */
-	hlist_for_each_entry(hp, &security_hook_heads.inode_setsecurity, list) {
-		rc = hp->hook.inode_setsecurity(inode, name, value, size,
-								flags);
+	security_for_each_hook(scall, inode_setsecurity, ({
+		rc = scall->hl->hook.inode_setsecurity(inode, name, value, size, flags);
 		if (rc != LSM_RET_DEFAULT(inode_setsecurity))
 			return rc;
-	}
+	}));
 	return LSM_RET_DEFAULT(inode_setsecurity);
 }
 
@@ -1558,7 +1622,7 @@ EXPORT_SYMBOL(security_inode_copy_up);
 
 int security_inode_copy_up_xattr(const char *name)
 {
-	struct security_hook_list *hp;
+	struct lsm_static_call *scall;
 	int rc;
 
 	/*
@@ -1566,12 +1630,11 @@ int security_inode_copy_up_xattr(const char *name)
 	 * xattr), -EOPNOTSUPP if it does not know anything about the xattr or
 	 * any other error code incase of an error.
 	 */
-	hlist_for_each_entry(hp,
-		&security_hook_heads.inode_copy_up_xattr, list) {
-		rc = hp->hook.inode_copy_up_xattr(name);
+	security_for_each_hook(scall, inode_copy_up_xattr, ({
+		rc = scall->hl->hook.inode_copy_up_xattr(name);
 		if (rc != LSM_RET_DEFAULT(inode_copy_up_xattr))
 			return rc;
-	}
+	}));
 
 	return LSM_RET_DEFAULT(inode_copy_up_xattr);
 }
@@ -1968,16 +2031,16 @@ int security_task_prctl(int option, unsigned long arg2, unsigned long arg3,
 {
 	int thisrc;
 	int rc = LSM_RET_DEFAULT(task_prctl);
-	struct security_hook_list *hp;
+	struct lsm_static_call *scall;
 
-	hlist_for_each_entry(hp, &security_hook_heads.task_prctl, list) {
-		thisrc = hp->hook.task_prctl(option, arg2, arg3, arg4, arg5);
+	security_for_each_hook(scall, task_prctl, ({
+		thisrc = scall->hl->hook.task_prctl(option, arg2, arg3, arg4, arg5);
 		if (thisrc != LSM_RET_DEFAULT(task_prctl)) {
 			rc = thisrc;
 			if (thisrc != 0)
 				break;
 		}
-	}
+	}));
 	return rc;
 }
 
@@ -2142,26 +2205,26 @@ EXPORT_SYMBOL(security_d_instantiate);
 int security_getprocattr(struct task_struct *p, const char *lsm,
 			 const char *name, char **value)
 {
-	struct security_hook_list *hp;
+	struct lsm_static_call *scall;
 
-	hlist_for_each_entry(hp, &security_hook_heads.getprocattr, list) {
-		if (lsm != NULL && strcmp(lsm, hp->lsm))
+	security_for_each_hook(scall, getprocattr, ({
+		if (lsm != NULL && strcmp(lsm, scall->hl->lsm))
 			continue;
-		return hp->hook.getprocattr(p, name, value);
-	}
+		return scall->hl->hook.getprocattr(p, name, value);
+	}));
 	return LSM_RET_DEFAULT(getprocattr);
 }
 
 int security_setprocattr(const char *lsm, const char *name, void *value,
 			 size_t size)
 {
-	struct security_hook_list *hp;
+	struct lsm_static_call *scall;
 
-	hlist_for_each_entry(hp, &security_hook_heads.setprocattr, list) {
-		if (lsm != NULL && strcmp(lsm, hp->lsm))
+	security_for_each_hook(scall, setprocattr, ({
+		if (lsm != NULL && strcmp(lsm, scall->hl->lsm))
 			continue;
-		return hp->hook.setprocattr(name, value, size);
-	}
+		return scall->hl->hook.setprocattr(name, value, size);
+	}));
 	return LSM_RET_DEFAULT(setprocattr);
 }
 
@@ -2178,18 +2241,18 @@ EXPORT_SYMBOL(security_ismaclabel);
 
 int security_secid_to_secctx(u32 secid, char **secdata, u32 *seclen)
 {
-	struct security_hook_list *hp;
+	struct lsm_static_call *scall;
 	int rc;
 
 	/*
 	 * Currently, only one LSM can implement secid_to_secctx (i.e this
 	 * LSM hook is not "stackable").
 	 */
-	hlist_for_each_entry(hp, &security_hook_heads.secid_to_secctx, list) {
-		rc = hp->hook.secid_to_secctx(secid, secdata, seclen);
+	security_for_each_hook(scall, secid_to_secctx, ({
+		rc = scall->hl->hook.secid_to_secctx(secid, secdata, seclen);
 		if (rc != LSM_RET_DEFAULT(secid_to_secctx))
 			return rc;
-	}
+	}));
 
 	return LSM_RET_DEFAULT(secid_to_secctx);
 }
@@ -2582,7 +2645,7 @@ int security_xfrm_state_pol_flow_match(struct xfrm_state *x,
 				       struct xfrm_policy *xp,
 				       const struct flowi_common *flic)
 {
-	struct security_hook_list *hp;
+	struct lsm_static_call *scall;
 	int rc = LSM_RET_DEFAULT(xfrm_state_pol_flow_match);
 
 	/*
@@ -2594,11 +2657,10 @@ int security_xfrm_state_pol_flow_match(struct xfrm_state *x,
 	 * For speed optimization, we explicitly break the loop rather than
 	 * using the macro
 	 */
-	hlist_for_each_entry(hp, &security_hook_heads.xfrm_state_pol_flow_match,
-				list) {
-		rc = hp->hook.xfrm_state_pol_flow_match(x, xp, flic);
+	security_for_each_hook(scall, xfrm_state_pol_flow_match, ({
+		rc = scall->hl->hook.xfrm_state_pol_flow_match(x, xp, flic);
 		break;
-	}
+	}));
 	return rc;
 }
 
-- 
2.39.0.246.g2a6d74b583-goog



* [PATCH RESEND bpf-next 4/4] bpf: Only enable BPF LSM hooks when an LSM program is attached
  2023-01-20  0:08 [PATCH RESEND bpf-next 0/4] Reduce overhead of LSMs with static calls KP Singh
                   ` (2 preceding siblings ...)
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls KP Singh
@ 2023-01-20  0:08 ` KP Singh
  3 siblings, 0 replies; 26+ messages in thread
From: KP Singh @ 2023-01-20  0:08 UTC (permalink / raw)
  To: linux-security-module, bpf
  Cc: ast, daniel, jackmanb, renauld, paul, casey, song, revest,
	keescook, KP Singh

BPF LSM hooks have side effects (even when a default value is returned),
as some hooks end up behaving differently due to the very presence of
the hook.

The static keys guarding the BPF LSM hooks are disabled by default and
enabled only when a BPF program implementing the hook logic is attached.
This avoids both the side-effect issue and the minor overhead associated
with the empty callback.

security_file_ioctl:
   0xffffffff814f0e90 <+0>:	endbr64
   0xffffffff814f0e94 <+4>:	push   %rbp
   0xffffffff814f0e95 <+5>:	push   %r14
   0xffffffff814f0e97 <+7>:	push   %rbx
   0xffffffff814f0e98 <+8>:	mov    %rdx,%rbx
   0xffffffff814f0e9b <+11>:	mov    %esi,%ebp
   0xffffffff814f0e9d <+13>:	mov    %rdi,%r14

>>> 0xffffffff814f0ea0 <+16>:	jmp    0xffffffff814f0eb1 <security_file_ioctl+33>

   Static key enabled for SELinux

   0xffffffff814f0ea2 <+18>:	xchg   %ax,%ax

   Static key disabled for BPF. This gets patched to a jmp when a BPF
   LSM program is attached.

   0xffffffff814f0ea4 <+20>:	xor    %eax,%eax
   0xffffffff814f0ea6 <+22>:	xchg   %ax,%ax
   0xffffffff814f0ea8 <+24>:	pop    %rbx
   0xffffffff814f0ea9 <+25>:	pop    %r14
   0xffffffff814f0eab <+27>:	pop    %rbp
   0xffffffff814f0eac <+28>:	jmp    0xffffffff81f767c4 <__x86_return_thunk>
   0xffffffff814f0eb1 <+33>:	endbr64
   0xffffffff814f0eb5 <+37>:	mov    %r14,%rdi
   0xffffffff814f0eb8 <+40>:	mov    %ebp,%esi
   0xffffffff814f0eba <+42>:	mov    %rbx,%rdx
   0xffffffff814f0ebd <+45>:	call   0xffffffff814fe8e0 <selinux_file_ioctl>
   0xffffffff814f0ec2 <+50>:	test   %eax,%eax
   0xffffffff814f0ec4 <+52>:	jne    0xffffffff814f0ea8 <security_file_ioctl+24>
   0xffffffff814f0ec6 <+54>:	jmp    0xffffffff814f0ea2 <security_file_ioctl+18>
   0xffffffff814f0ec8 <+56>:	endbr64
   0xffffffff814f0ecc <+60>:	mov    %r14,%rdi
   0xffffffff814f0ecf <+63>:	mov    %ebp,%esi
   0xffffffff814f0ed1 <+65>:	mov    %rbx,%rdx
   0xffffffff814f0ed4 <+68>:	call   0xffffffff8123b890 <bpf_lsm_file_ioctl>
   0xffffffff814f0ed9 <+73>:	test   %eax,%eax
   0xffffffff814f0edb <+75>:	jne    0xffffffff814f0ea8 <security_file_ioctl+24>
   0xffffffff814f0edd <+77>:	jmp    0xffffffff814f0ea4 <security_file_ioctl+20>
   0xffffffff814f0edf <+79>:	endbr64
   0xffffffff814f0ee3 <+83>:	mov    %r14,%rdi
   0xffffffff814f0ee6 <+86>:	mov    %ebp,%esi
   0xffffffff814f0ee8 <+88>:	mov    %rbx,%rdx
   0xffffffff814f0eeb <+91>:	pop    %rbx
   0xffffffff814f0eec <+92>:	pop    %r14
   0xffffffff814f0eee <+94>:	pop    %rbp
   0xffffffff814f0eef <+95>:	ret
   0xffffffff814f0ef0 <+96>:	int3
   0xffffffff814f0ef1 <+97>:	int3
   0xffffffff814f0ef2 <+98>:	int3
   0xffffffff814f0ef3 <+99>:	int3

Signed-off-by: KP Singh <kpsingh@kernel.org>
---
 include/linux/bpf.h       |  1 +
 include/linux/bpf_lsm.h   |  1 +
 include/linux/lsm_hooks.h | 13 ++++++++++++-
 kernel/bpf/trampoline.c   | 29 +++++++++++++++++++++++++++--
 security/bpf/hooks.c      | 26 +++++++++++++++++++++++++-
 security/security.c       |  3 ++-
 6 files changed, 68 insertions(+), 5 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index ae7771c7d750..4008f4a00851 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1049,6 +1049,7 @@ struct bpf_attach_target_info {
 	long tgt_addr;
 	const char *tgt_name;
 	const struct btf_type *tgt_type;
+	bool is_lsm_target;
 };
 
 #define BPF_DISPATCHER_MAX 48 /* Fits in 2048B */
diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
index 1de7ece5d36d..cce615af93d5 100644
--- a/include/linux/bpf_lsm.h
+++ b/include/linux/bpf_lsm.h
@@ -29,6 +29,7 @@ int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
 
 bool bpf_lsm_is_sleepable_hook(u32 btf_id);
 bool bpf_lsm_is_trusted(const struct bpf_prog *prog);
+void bpf_lsm_toggle_hook(void *addr, bool value);
 
 static inline struct bpf_storage_blob *bpf_inode(
 	const struct inode *inode)
diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
index c82d15a4ef50..5e85d3340a07 100644
--- a/include/linux/lsm_hooks.h
+++ b/include/linux/lsm_hooks.h
@@ -1716,11 +1716,14 @@ struct lsm_static_calls_table {
  * @scalls: The beginning of the array of static calls assigned to this hook.
  * @hook: The callback for the hook.
  * @lsm: The name of the lsm that owns this hook.
+ * @default_state: The state of the LSM hook when initialized. If set to false,
+ * the static key guarding the hook will be set to disabled.
  */
 struct security_hook_list {
 	struct lsm_static_call	*scalls;
 	union security_list_options	hook;
 	const char			*lsm;
+	bool				default_state;
 } __randomize_layout;
 
 /*
@@ -1751,7 +1754,15 @@ struct lsm_blob_sizes {
 #define LSM_HOOK_INIT(NAME, CALLBACK)			\
 	{						\
 		.scalls = static_calls_table.NAME,	\
-		.hook = { .NAME = CALLBACK }		\
+		.hook = { .NAME = CALLBACK },		\
+		.default_state = true			\
+	}
+
+#define LSM_HOOK_INIT_DISABLED(NAME, CALLBACK)		\
+	{						\
+		.scalls = static_calls_table.NAME,	\
+		.hook = { .NAME = CALLBACK },		\
+		.default_state = false			\
 	}
 
 extern char *lsm_names;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index d0ed7d6f5eec..9789ecf6f29c 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -14,6 +14,7 @@
 #include <linux/bpf_verifier.h>
 #include <linux/bpf_lsm.h>
 #include <linux/delay.h>
+#include <linux/bpf_lsm.h>
 
 /* dummy _ops. The verifier will operate on target program's ops. */
 const struct bpf_verifier_ops bpf_extension_verifier_ops = {
@@ -536,7 +537,7 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_tr
 {
 	enum bpf_tramp_prog_type kind;
 	struct bpf_tramp_link *link_exiting;
-	int err = 0;
+	int err = 0, num_lsm_progs = 0;
 	int cnt = 0, i;
 
 	kind = bpf_attach_type_to_tramp(link->link.prog);
@@ -567,8 +568,14 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_tr
 			continue;
 		/* prog already linked */
 		return -EBUSY;
+
+		if (link_exiting->link.prog->type == BPF_PROG_TYPE_LSM)
+			num_lsm_progs++;
 	}
 
+	if (!num_lsm_progs && link->link.prog->type == BPF_PROG_TYPE_LSM)
+		bpf_lsm_toggle_hook(tr->func.addr, true);
+
 	hlist_add_head(&link->tramp_hlist, &tr->progs_hlist[kind]);
 	tr->progs_cnt[kind]++;
 	err = bpf_trampoline_update(tr, true /* lock_direct_mutex */);
@@ -591,8 +598,10 @@ int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline
 
 static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
 {
+	struct bpf_tramp_link *link_exiting;
 	enum bpf_tramp_prog_type kind;
-	int err;
+	bool lsm_link_found = false;
+	int err, num_lsm_progs = 0;
 
 	kind = bpf_attach_type_to_tramp(link->link.prog);
 	if (kind == BPF_TRAMP_REPLACE) {
@@ -602,8 +611,24 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_
 		tr->extension_prog = NULL;
 		return err;
 	}
+
+	if (link->link.prog->type == BPF_PROG_TYPE_LSM) {
+		hlist_for_each_entry(link_exiting, &tr->progs_hlist[kind],
+				     tramp_hlist) {
+			if (link_exiting->link.prog->type == BPF_PROG_TYPE_LSM)
+				num_lsm_progs++;
+
+			if (link_exiting->link.prog == link->link.prog)
+				lsm_link_found = true;
+		}
+	}
+
 	hlist_del_init(&link->tramp_hlist);
 	tr->progs_cnt[kind]--;
+
+	if (lsm_link_found && num_lsm_progs == 1)
+		bpf_lsm_toggle_hook(tr->func.addr, false);
+
 	return bpf_trampoline_update(tr, true /* lock_direct_mutex */);
 }
 
diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c
index e5971fa74fd7..a39799e9f648 100644
--- a/security/bpf/hooks.c
+++ b/security/bpf/hooks.c
@@ -8,7 +8,7 @@
 
 static struct security_hook_list bpf_lsm_hooks[] __lsm_ro_after_init = {
 	#define LSM_HOOK(RET, DEFAULT, NAME, ...) \
-	LSM_HOOK_INIT(NAME, bpf_lsm_##NAME),
+	LSM_HOOK_INIT_DISABLED(NAME, bpf_lsm_##NAME),
 	#include <linux/lsm_hook_defs.h>
 	#undef LSM_HOOK
 	LSM_HOOK_INIT(inode_free_security, bpf_inode_storage_free),
@@ -32,3 +32,27 @@ DEFINE_LSM(bpf) = {
 	.init = bpf_lsm_init,
 	.blobs = &bpf_lsm_blob_sizes
 };
+
+void bpf_lsm_toggle_hook(void *addr, bool value)
+{
+	struct security_hook_list *h;
+	struct lsm_static_call *scalls;
+	int i, j;
+
+	for (i = 0; i < ARRAY_SIZE(bpf_lsm_hooks); i++) {
+		h = &bpf_lsm_hooks[i];
+		scalls = h->scalls;
+		if (h->hook.lsm_callback != addr)
+			continue;
+
+		for (j = 0; j < MAX_LSM_COUNT; j++) {
+			if (scalls[j].hl != h)
+				continue;
+
+			if (value)
+				static_key_enable(scalls[j].enabled_key);
+			else
+				static_key_disable(scalls[j].enabled_key);
+		}
+	}
+}
diff --git a/security/security.c b/security/security.c
index e54d5ba187d1..f74135349429 100644
--- a/security/security.c
+++ b/security/security.c
@@ -366,7 +366,8 @@ static void __init lsm_static_call_init(struct security_hook_list *hl)
 		if (!scall->hl) {
 			__static_call_update(scall->key, scall->trampoline, hl->hook.lsm_callback);
 			scall->hl = hl;
-			static_key_enable(scall->enabled_key);
+			if (hl->default_state)
+				static_key_enable(scall->enabled_key);
 			return;
 		}
 		scall++;
-- 
2.39.0.246.g2a6d74b583-goog



* Re: [PATCH RESEND bpf-next 1/4] kernel: Add helper macros for loop unrolling
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 1/4] kernel: Add helper macros for loop unrolling KP Singh
@ 2023-01-20  3:48   ` Kees Cook
  0 siblings, 0 replies; 26+ messages in thread
From: Kees Cook @ 2023-01-20  3:48 UTC (permalink / raw)
  To: KP Singh
  Cc: linux-security-module, bpf, ast, daniel, jackmanb, renauld, paul,
	casey, song, revest

On Fri, Jan 20, 2023 at 01:08:15AM +0100, KP Singh wrote:
> This makes it easy to initialize blocks of code (e.g. static calls and
> keys).
> 
> UNROLL(N, MACRO, __VA_ARGS__) calls MACRO N times with the first
> argument as the index of the iteration. This allows string pasting to
> create unique tokens for variable names, function calls etc.
> 
> As an example:
> 
> 	#include <linux/unroll.h>
> 
> 	#define MACRO(N, a, b)            \
> 		int add_##N(int a, int b) \
> 		{                         \
> 			return a + b + N; \
> 		}
> 
> 	UNROLL(2, MACRO, x, y)
> 
> expands to:
> 
> 	int add_0(int x, int y)
> 	{
> 		return x + y + 0;
> 	}
> 
> 	int add_1(int x, int y)
> 	{
> 		return x + y + 1;
> 	}
> 
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
>  include/linux/unroll.h | 35 +++++++++++++++++++++++++++++++++++
>  1 file changed, 35 insertions(+)
>  create mode 100644 include/linux/unroll.h
> 
> diff --git a/include/linux/unroll.h b/include/linux/unroll.h
> new file mode 100644
> index 000000000000..e19aef95b94b
> --- /dev/null
> +++ b/include/linux/unroll.h
> @@ -0,0 +1,35 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +/*
> + * Copyright (C) 2020 Google LLC.
> + */
> +
> +#ifndef __UNROLL_H
> +#define __UNROLL_H
> +
> +#define __UNROLL_CONCAT(a, b) a ## _ ## b

#include <linux/kernel.h>
and just use CONCATENATE

> +#define UNROLL(N, MACRO, args...) __UNROLL_CONCAT(__UNROLL, N)(MACRO, args)

Which would be the following (it just loses the _, which isn't important
here):
#define UNROLL(N, MACRO, args...) CONCATENATE(__UNROLL, N)(MACRO, args)

> +#define __UNROLL_0(MACRO, args...)
> +#define __UNROLL_1(MACRO, args...)  __UNROLL_0(MACRO, args)  MACRO(0, args)

But yeah, I like it. It's an expression version of the __MAP() macro
used by the syscall wrappers for arguments.

-- 
Kees Cook


* Re: [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs KP Singh
@ 2023-01-20  4:04   ` Kees Cook
  2023-01-20  7:33   ` kernel test robot
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 26+ messages in thread
From: Kees Cook @ 2023-01-20  4:04 UTC (permalink / raw)
  To: KP Singh
  Cc: linux-security-module, bpf, ast, daniel, jackmanb, renauld, paul,
	casey, song, revest

On Fri, Jan 20, 2023 at 01:08:16AM +0100, KP Singh wrote:
> The header defines a MAX_LSM_COUNT constant which is used in a
> subsequent patch to generate the static calls for each LSM hook; these
> are named using preprocessor token pasting. Since token pasting does not
> work with arithmetic expressions, generate a simple lsm_count.h header
> that represents the subset of LSMs that can be enabled on a given
> kernel based on the config.
> 
> While one could generate static calls for all the possible LSMs that the
> kernel has, this would be wasteful, as most kernels only enable a
> handful of LSMs.

This is a frustrating bit of complexity to deal with, but it seems
worse to leave each security_... callsite with a huge swath of NOPs.

> 
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
>  scripts/Makefile                 |  1 +
>  scripts/security/.gitignore      |  1 +
>  scripts/security/Makefile        |  4 +++
>  scripts/security/gen_lsm_count.c | 57 ++++++++++++++++++++++++++++++++
>  security/Makefile                | 11 ++++++
>  5 files changed, 74 insertions(+)
>  create mode 100644 scripts/security/.gitignore
>  create mode 100644 scripts/security/Makefile
>  create mode 100644 scripts/security/gen_lsm_count.c
> 
> diff --git a/scripts/Makefile b/scripts/Makefile
> index 1575af84d557..9712249c0fb3 100644
> --- a/scripts/Makefile
> +++ b/scripts/Makefile
> @@ -41,6 +41,7 @@ targets += module.lds
>  subdir-$(CONFIG_GCC_PLUGINS) += gcc-plugins
>  subdir-$(CONFIG_MODVERSIONS) += genksyms
>  subdir-$(CONFIG_SECURITY_SELINUX) += selinux
> +subdir-$(CONFIG_SECURITY) += security
>  
>  # Let clean descend into subdirs
>  subdir-	+= basic dtc gdb kconfig mod
> diff --git a/scripts/security/.gitignore b/scripts/security/.gitignore
> new file mode 100644
> index 000000000000..684af16735f1
> --- /dev/null
> +++ b/scripts/security/.gitignore
> @@ -0,0 +1 @@
> +gen_lsm_count
> diff --git a/scripts/security/Makefile b/scripts/security/Makefile
> new file mode 100644
> index 000000000000..05f7e4109052
> --- /dev/null
> +++ b/scripts/security/Makefile
> @@ -0,0 +1,4 @@
> +# SPDX-License-Identifier: GPL-2.0
> +hostprogs-always-y += gen_lsm_count
> +HOST_EXTRACFLAGS += \
> +	-I$(srctree)/include/uapi -I$(srctree)/include
> diff --git a/scripts/security/gen_lsm_count.c b/scripts/security/gen_lsm_count.c
> new file mode 100644
> index 000000000000..a9a227724d84
> --- /dev/null
> +++ b/scripts/security/gen_lsm_count.c
> @@ -0,0 +1,57 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/* NOTE: we really do want to use the kernel headers here */
> +#define __EXPORTED_HEADERS__
> +
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <unistd.h>
> +#include <string.h>
> +#include <errno.h>
> +#include <ctype.h>
> +
> +#include <linux/kconfig.h>
> +
> +#define GEN_MAX_LSM_COUNT (				\
> +	/* Capabilities */				\
> +	IS_ENABLED(CONFIG_SECURITY) +			\
> +	IS_ENABLED(CONFIG_SECURITY_SELINUX) +		\
> +	IS_ENABLED(CONFIG_SECURITY_SMACK) +		\
> +	IS_ENABLED(CONFIG_SECURITY_TOMOYO) +		\
> +	IS_ENABLED(CONFIG_SECURITY_APPARMOR) +		\
> +	IS_ENABLED(CONFIG_SECURITY_YAMA) +		\
> +	IS_ENABLED(CONFIG_SECURITY_LOADPIN) +		\
> +	IS_ENABLED(CONFIG_SECURITY_SAFESETID) +		\
> +	IS_ENABLED(CONFIG_SECURITY_LOCKDOWN_LSM) + 	\
> +	IS_ENABLED(CONFIG_BPF_LSM) + \
> +	IS_ENABLED(CONFIG_SECURITY_LANDLOCK))

I'm bothered that we have another place to collect a list of "all" the
LSMs. The stacking design went out of its way to create DEFINE_LSM() so
there didn't need to be this kind of centralized list.

I don't have a better suggestion, though. Casey has a centralized list
too, so it might make sense (as he mentioned) to use something like
that. It can be arranged to provide a MAX_... macro (that could be
BUILD_BUG_ON checked against a similarly named enum). I'm thinking:

enum lsm_list {
	LSM_SELINUX,
	LSM_SMACK,
	...
	__LSM_MAX,
};

/*
 * We can't use __LSM_MAX directly because we need it for macro
 * concatenation, so just check it against __LSM_MAX at build time.
 */
#define LSM_MAX 15

...
	BUILD_BUG_ON(LSM_MAX != __LSM_MAX);

> +
> +const char *progname;
> +
> +static void usage(void)
> +{
> +	printf("usage: %s lsm_count.h\n", progname);
> +	exit(1);
> +}
> +
> +int main(int argc, char *argv[])
> +{
> +	FILE *fout;
> +
> +	progname = argv[0];
> +
> +	if (argc < 2)
> +		usage();
> +
> +	fout = fopen(argv[1], "w");
> +	if (!fout) {
> +		fprintf(stderr, "Could not open %s for writing:  %s\n",
> +			argv[1], strerror(errno));
> +		exit(2);
> +	}
> +
> +	fprintf(fout, "#ifndef _LSM_COUNT_H_\n#define _LSM_COUNT_H_\n\n");
> +	fprintf(fout, "\n#define MAX_LSM_COUNT %d\n", GEN_MAX_LSM_COUNT);
> +	fprintf(fout, "#endif /* _LSM_COUNT_H_ */\n");
> +	exit(0);
> +}
> diff --git a/security/Makefile b/security/Makefile
> index 18121f8f85cd..7a47174831f4 100644
> --- a/security/Makefile
> +++ b/security/Makefile
> @@ -3,6 +3,7 @@
>  # Makefile for the kernel security code
>  #
>  
> +gen := include/generated
>  obj-$(CONFIG_KEYS)			+= keys/
>  
>  # always enable default capabilities
> @@ -27,3 +28,13 @@ obj-$(CONFIG_SECURITY_LANDLOCK)		+= landlock/
>  
>  # Object integrity file lists
>  obj-$(CONFIG_INTEGRITY)			+= integrity/

Or, if the enum/#define doesn't work, it might be possible to just do
this in the Makefile more directly?

${gen}/lsm_count.h: FORCE
	(echo "#ifndef _LSM_COUNT_H_"; \
	 echo "#define _LSM_COUNT_H_"; \
	 echo -n "#define MAX_LSM_COUNT "; \
	 echo	$(CONFIG_SECURITY_SELINUX) \
		$(CONFIG_SECURITY_SMACK) \
		...
		| wc -w; \
	 echo "#endif /* _LSM_COUNT_H_") >$@

> +
> +$(addprefix $(obj)/,$(obj-y)): $(gen)/lsm_count.h
> +
> +quiet_cmd_lsm_count = GEN     ${gen}/lsm_count.h
> +      cmd_lsm_count = scripts/security/gen_lsm_count ${gen}/lsm_count.h
> +
> +targets += lsm_count.h
> +
> +${gen}/lsm_count.h: FORCE
> +	$(call if_changed,lsm_count)
> -- 
> 2.39.0.246.g2a6d74b583-goog
> 

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls KP Singh
@ 2023-01-20  4:36   ` Kees Cook
  2023-01-20 18:26     ` Casey Schaufler
  2023-02-06 13:04     ` KP Singh
  2023-01-20 10:10   ` kernel test robot
  2023-01-20 10:41   ` kernel test robot
  2 siblings, 2 replies; 26+ messages in thread
From: Kees Cook @ 2023-01-20  4:36 UTC (permalink / raw)
  To: KP Singh
  Cc: linux-security-module, bpf, ast, daniel, jackmanb, renauld, paul,
	casey, song, revest

On Fri, Jan 20, 2023 at 01:08:17AM +0100, KP Singh wrote:
> The indirect calls are not really needed as one knows the addresses of
> enabled LSM callbacks at compile time and only the order can possibly
> change at boot time with the lsm= kernel command line parameter.
> 
> An array of static calls is defined per LSM hook and the static calls
> are updated at boot time once the order has been determined.
> 
> A static key guards whether an LSM static call is enabled or not.
> Without this static key, for LSM hooks that return an int, the presence
> of a hook that returns a default value can create side-effects, which
> has resulted in bugs [1].

I think this patch has too many logic changes in it. There are basically
two things going on here:

- replace list with unrolled calls
- replace calls with static calls

I see why it was merged, since some of the logic that would be added for
step 1 would be immediately replaced, but I think splitting it might
make things a bit more clear.

There are likely a few intermediate steps here too, to rename things,
etc.

> There are some hooks that don't use the call_int_hook and
> call_void_hook. These hooks are updated to use a new macro called
> security_for_each_hook where the lsm_callback is directly invoked as an
> indirect call. Currently, there are no performance sensitive hooks that
> use the security_for_each_hook macro. However, if some performance
> sensitive hooks are discovered, these can be updated to use
> static calls with loop unrolling as well, using a custom macro.
> 
> [1] https://lore.kernel.org/linux-security-module/20220609234601.2026362-1-kpsingh@kernel.org/
> 
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
>  include/linux/lsm_hooks.h |  83 +++++++++++++--
>  security/security.c       | 216 ++++++++++++++++++++++++--------------
>  2 files changed, 211 insertions(+), 88 deletions(-)
> 
> diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
> index 0a5ba81f7367..c82d15a4ef50 100644
> --- a/include/linux/lsm_hooks.h
> +++ b/include/linux/lsm_hooks.h
> @@ -28,6 +28,26 @@
>  #include <linux/security.h>
>  #include <linux/init.h>
>  #include <linux/rculist.h>
> +#include <linux/static_call.h>
> +#include <linux/unroll.h>
> +#include <linux/jump_label.h>
> +
> +/* Include the generated MAX_LSM_COUNT */
> +#include <generated/lsm_count.h>
> +
> +#define SECURITY_HOOK_ENABLED_KEY(HOOK, IDX) security_enabled_key_##HOOK##_##IDX
> +
> +/*
> + * Identifier for the LSM static calls.
> + * HOOK is an LSM hook as defined in linux/lsm_hookdefs.h
> + * IDX is the index of the static call. 0 <= IDX < MAX_LSM_COUNT
> + */
> +#define LSM_STATIC_CALL(HOOK, IDX) lsm_static_call_##HOOK##_##IDX
> +
> +/*
> + * Call the macro M for each LSM hook MAX_LSM_COUNT times.
> + */
> +#define LSM_UNROLL(M, ...) UNROLL(MAX_LSM_COUNT, M, __VA_ARGS__)

I think this should be:

#define LSM_UNROLL(M, ...)	do {			\
		UNROLL(MAX_LSM_COUNT, M, __VA_ARGS__);	\
	} while (0)

or maybe UNROLL needs the do/while.
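
For reference, the hazard the do/while guards against: LSM_UNROLL
expands to a sequence of if-blocks, so a bare-if caller breaks (a
generic illustration, not code from this series):

	if (some_cond)
		LSM_UNROLL(__CALL_STATIC_VOID, inode_free_security, inode);
	else
		handle_other_case();

Without the do/while, only the first expanded if-block is governed by
some_cond, the rest run unconditionally, and the else above no longer
even parses.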

>  
>  /**
>   * union security_list_options - Linux Security Module hook function list
> @@ -1657,21 +1677,48 @@ union security_list_options {
>  	#define LSM_HOOK(RET, DEFAULT, NAME, ...) RET (*NAME)(__VA_ARGS__);
>  	#include "lsm_hook_defs.h"
>  	#undef LSM_HOOK
> +	void *lsm_callback;
>  };
>  
> -struct security_hook_heads {
> -	#define LSM_HOOK(RET, DEFAULT, NAME, ...) struct hlist_head NAME;
> -	#include "lsm_hook_defs.h"
> +/*
> + * @key: static call key as defined by STATIC_CALL_KEY
> + * @trampoline: static call trampoline as defined by STATIC_CALL_TRAMP
> + * @hl: The security_hook_list as initialized by the owning LSM.
> + * @enabled_key: Enabled when the static call has an LSM hook associated.
> + */
> +struct lsm_static_call {
> +	struct static_call_key *key;
> +	void *trampoline;
> +	struct security_hook_list *hl;
> +	struct static_key *enabled_key;
> +};
> +
> +/*
> + * Table of the static calls for each LSM hook.
> + * Once the LSMs are initialized, their callbacks will be copied to these
> + * tables such that the calls are filled backwards (from last to first).
> + * This way, we can jump directly to the first used static call, and execute
> + * all of them after. This essentially makes the entry point
> + * dynamic to adapt the number of static calls to the number of callbacks.
> + */
> +struct lsm_static_calls_table {
> +	#define LSM_HOOK(RET, DEFAULT, NAME, ...) \
> +		struct lsm_static_call NAME[MAX_LSM_COUNT];
> +	#include <linux/lsm_hook_defs.h>
>  	#undef LSM_HOOK
>  } __randomize_layout;
>  
>  /*
>   * Security module hook list structure.
>   * For use with generic list macros for common operations.
> + *
> + * struct security_hook_list - Contents of a cacheable, mappable object.
> + * @scalls: The beginning of the array of static calls assigned to this hook.
> + * @hook: The callback for the hook.
> + * @lsm: The name of the lsm that owns this hook.
>   */
>  struct security_hook_list {
> -	struct hlist_node		list;
> -	struct hlist_head		*head;
> +	struct lsm_static_call	*scalls;
>  	union security_list_options	hook;
>  	const char			*lsm;
>  } __randomize_layout;
> @@ -1701,10 +1748,12 @@ struct lsm_blob_sizes {
>   * care of the common case and reduces the amount of
>   * text involved.
>   */
> -#define LSM_HOOK_INIT(HEAD, HOOK) \
> -	{ .head = &security_hook_heads.HEAD, .hook = { .HEAD = HOOK } }
> +#define LSM_HOOK_INIT(NAME, CALLBACK)			\
> +	{						\
> +		.scalls = static_calls_table.NAME,	\
> +		.hook = { .NAME = CALLBACK }		\
> +	}
>  
> -extern struct security_hook_heads security_hook_heads;
>  extern char *lsm_names;
>  
>  extern void security_add_hooks(struct security_hook_list *hooks, int count,
> @@ -1756,10 +1805,21 @@ extern struct lsm_info __start_early_lsm_info[], __end_early_lsm_info[];
>  static inline void security_delete_hooks(struct security_hook_list *hooks,
>  						int count)

Hey Paul, can we get rid of CONFIG_SECURITY_SELINUX_DISABLE yet? It's
been deprecated for years....

>  {
> -	int i;
> +	struct lsm_static_call *scalls;
> +	int i, j;
> +
> +	for (i = 0; i < count; i++) {
> +		scalls = hooks[i].scalls;
> +		for (j = 0; j < MAX_LSM_COUNT; j++) {
> +			if (scalls[j].hl != &hooks[i])
> +				continue;
>  
> -	for (i = 0; i < count; i++)
> -		hlist_del_rcu(&hooks[i].list);
> +			static_key_disable(scalls[j].enabled_key);
> +			__static_call_update(scalls[j].key,
> +					     scalls[j].trampoline, NULL);
> +			scalls[j].hl = NULL;
> +		}
> +	}
>  }
>  #endif /* CONFIG_SECURITY_SELINUX_DISABLE */
>  
> @@ -1771,5 +1831,6 @@ static inline void security_delete_hooks(struct security_hook_list *hooks,
>  #endif /* CONFIG_SECURITY_WRITABLE_HOOKS */
>  
>  extern int lsm_inode_alloc(struct inode *inode);
> +extern struct lsm_static_calls_table static_calls_table __ro_after_init;
>  
>  #endif /* ! __LINUX_LSM_HOOKS_H */
> diff --git a/security/security.c b/security/security.c
> index d1571900a8c7..e54d5ba187d1 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -29,6 +29,8 @@
>  #include <linux/string.h>
>  #include <linux/msg.h>
>  #include <net/flow.h>
> +#include <linux/static_call.h>
> +#include <linux/jump_label.h>
>  
>  #define MAX_LSM_EVM_XATTR	2
>  
> @@ -74,7 +76,6 @@ const char *const lockdown_reasons[LOCKDOWN_CONFIDENTIALITY_MAX+1] = {
>  	[LOCKDOWN_CONFIDENTIALITY_MAX] = "confidentiality",
>  };
>  
> -struct security_hook_heads security_hook_heads __lsm_ro_after_init;
>  static BLOCKING_NOTIFIER_HEAD(blocking_lsm_notifier_chain);
>  
>  static struct kmem_cache *lsm_file_cache;
> @@ -93,6 +94,43 @@ static __initconst const char * const builtin_lsm_order = CONFIG_LSM;
>  static __initdata struct lsm_info **ordered_lsms;
>  static __initdata struct lsm_info *exclusive;
>  
> +/*
> + * Define static calls and static keys for each LSM hook.
> + */
> +
> +#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...)			\
> +	DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM),		\
> +				*((RET(*)(__VA_ARGS__))NULL));		\
> +	DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM));

Hm, another place where we would benefit from having separated logic for
"is it built?" and "is it enabled by default?" and we could use
DEFINE_STATIC_KEY_MAYBE(). But, since we don't, I think we need to use
DEFINE_STATIC_KEY_TRUE() here or else won't all the calls be
out-of-line? (i.e. the default compiled state will be NOPs?) If we're
trying to optimize for having LSMs, I think we should default to inline
calls. (The machine code in the commit log seems to indicate that they
are out of line -- it uses jumps.)
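
What the "separated logic" variant could look like, for reference;
CONFIG_SECURITY_HOOK_LIKELY is a made-up Kconfig symbol for the sake
of this sketch (mm uses the same pattern for init_on_alloc):

	/* key defaults to enabled when the config expects hooks */
	DEFINE_STATIC_KEY_MAYBE(CONFIG_SECURITY_HOOK_LIKELY,
				SECURITY_HOOK_ENABLED_KEY(NAME, NUM));

	/* ...and the call site picks likely vs. unlikely to match */
	if (static_branch_maybe(CONFIG_SECURITY_HOOK_LIKELY,
				&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM)))
		static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__);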

> [...]
> +#define __CALL_STATIC_VOID(NUM, HOOK, ...)                                   \
> +	if (static_branch_unlikely(&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM))) { \
> +		static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__);        \
> +	}

Same here -- I would expect this to be static_branch_likely() or we'll
get out-of-line branches. Also, IMO, this should be:

	do {
		if (...)
			static_call(...);
	} while (0)


> +#define call_void_hook(FUNC, ...)                                 \
> +	do {                                                      \
> +		LSM_UNROLL(__CALL_STATIC_VOID, FUNC, __VA_ARGS__) \
>  	} while (0)

With the do/while in LSM_UNROLL, this is more readable.

>  
> -#define call_int_hook(FUNC, IRC, ...) ({			\
> -	int RC = IRC;						\
> -	do {							\
> -		struct security_hook_list *P;			\
> -								\
> -		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
> -			RC = P->hook.FUNC(__VA_ARGS__);		\
> -			if (RC != 0)				\
> -				break;				\
> -		}						\
> -	} while (0);						\
> -	RC;							\
> -})
> +#define __CALL_STATIC_INT(NUM, R, HOOK, ...)                                 \
> +	if (static_branch_unlikely(&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM))) { \
> +		R = static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__);    \
> +		if (R != 0)                                                  \
> +			goto out;                                            \
> +	}

I would expect the label to be a passed argument, but maybe since it
never changes, it's fine as-is?

And again, I'd expect a do/while wrapper, and for it to be s_b_likely.
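
Passing the label through would look roughly like this (a sketch of
the suggested variant, not the code as posted):

	#define __CALL_STATIC_INT(NUM, R, HOOK, LABEL, ...)		      \
	do {								      \
		if (static_branch_likely(&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM))) { \
			R = static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__); \
			if (R != 0)					      \
				goto LABEL;				      \
		}							      \
	} while (0);

with call_int_hook() then expanding
LSM_UNROLL(__CALL_STATIC_INT, RC, FUNC, out, __VA_ARGS__).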

> +
> +#define call_int_hook(FUNC, IRC, ...)                                        \
> +	({                                                                   \
> +		__label__ out;                                               \
> +		int RC = IRC;                                                \
> +		do {                                                         \
> +			LSM_UNROLL(__CALL_STATIC_INT, RC, FUNC, __VA_ARGS__) \
> +									     \
> +		} while (0);                                                 \

Then this becomes:

({
	__label__ out;	/* keep "out" local to each expansion */
	int RC = IRC;
	LSM_UNROLL(__CALL_STATIC_INT, RC, FUNC, __VA_ARGS__);
out:
	RC;
})

> +#define security_for_each_hook(scall, NAME, expression)                  \
> +	for (scall = static_calls_table.NAME;                            \
> +	     scall - static_calls_table.NAME < MAX_LSM_COUNT; scall++) { \
> +		if (!static_key_enabled(scall->enabled_key))             \
> +			continue;                                        \
> +		(expression);                                            \
> +	}

Why isn't this using static_branch_enabled()? I would expect this to be:

#define security_for_each_hook(scall, NAME)				\
	for (scall = static_calls_table.NAME;                           \
	     scall - static_calls_table.NAME < MAX_LSM_COUNT; scall++)	\
		if (static_branch_likely(scall->enabled_key))

>  
>  /* Security operations */
>  
> @@ -859,7 +924,7 @@ int security_settime64(const struct timespec64 *ts, const struct timezone *tz)
>  
>  int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
>  {
> -	struct security_hook_list *hp;
> +	struct lsm_static_call *scall;
>  	int cap_sys_admin = 1;
>  	int rc;
>  
> @@ -870,13 +935,13 @@ int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
>  	 * agree that it should be set it will. If any module
>  	 * thinks it should not be set it won't.
>  	 */
> -	hlist_for_each_entry(hp, &security_hook_heads.vm_enough_memory, list) {
> -		rc = hp->hook.vm_enough_memory(mm, pages);
> +	security_for_each_hook(scall, vm_enough_memory, ({
> +		rc = scall->hl->hook.vm_enough_memory(mm, pages);
>  		if (rc <= 0) {
>  			cap_sys_admin = 0;
>  			break;
>  		}
> -	}
> +	}));

Then these replacements don't look weird. This would just be:

	security_for_each_hook(scall, vm_enough_memory) {
		rc = scall->hl->hook.vm_enough_memory(mm, pages);
  		if (rc <= 0) {
  			cap_sys_admin = 0;
  			break;
  		}
	}

I'm excited to have this. The speed improvements are pretty nice.

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs KP Singh
  2023-01-20  4:04   ` Kees Cook
@ 2023-01-20  7:33   ` kernel test robot
  2023-01-20  9:50   ` kernel test robot
  2023-01-20  9:50   ` kernel test robot
  3 siblings, 0 replies; 26+ messages in thread
From: kernel test robot @ 2023-01-20  7:33 UTC (permalink / raw)
  To: KP Singh, linux-security-module, bpf
  Cc: oe-kbuild-all, ast, daniel, jackmanb, renauld, paul, casey, song,
	revest, keescook, KP Singh

Hi KP,

I love your patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/KP-Singh/kernel-Add-helper-macros-for-loop-unrolling/20230120-133309
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20230120000818.1324170-3-kpsingh%40kernel.org
patch subject: [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs
config: i386-tinyconfig (https://download.01.org/0day-ci/archive/20230120/202301201525.vZDnlpJ2-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-8) 11.3.0
reproduce (this is a W=1 build):
        # https://github.com/intel-lab-lkp/linux/commit/831b06220bb29c6db171467b13903dac0ef2faa5
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review KP-Singh/kernel-Add-helper-macros-for-loop-unrolling/20230120-133309
        git checkout 831b06220bb29c6db171467b13903dac0ef2faa5
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        make W=1 O=build_dir ARCH=i386 olddefconfig
        make W=1 O=build_dir ARCH=i386 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> /bin/bash: line 1: scripts/security/gen_lsm_count: No such file or directory

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs KP Singh
  2023-01-20  4:04   ` Kees Cook
  2023-01-20  7:33   ` kernel test robot
@ 2023-01-20  9:50   ` kernel test robot
  2023-01-20  9:50   ` kernel test robot
  3 siblings, 0 replies; 26+ messages in thread
From: kernel test robot @ 2023-01-20  9:50 UTC (permalink / raw)
  To: KP Singh, linux-security-module, bpf
  Cc: oe-kbuild-all, ast, daniel, jackmanb, renauld, paul, casey, song,
	revest, keescook, KP Singh

Hi KP,

I love your patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/KP-Singh/kernel-Add-helper-macros-for-loop-unrolling/20230120-133309
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20230120000818.1324170-3-kpsingh%40kernel.org
patch subject: [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs
config: ia64-allmodconfig (https://download.01.org/0day-ci/archive/20230120/202301201746.Q5765zQF-lkp@intel.com/config)
compiler: ia64-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/831b06220bb29c6db171467b13903dac0ef2faa5
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review KP-Singh/kernel-Add-helper-macros-for-loop-unrolling/20230120-133309
        git checkout 831b06220bb29c6db171467b13903dac0ef2faa5
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=ia64 olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=ia64 prepare

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from scripts/security/gen_lsm_count.c:13:
>> include/linux/kconfig.h:5:10: fatal error: generated/autoconf.h: No such file or directory
       5 | #include <generated/autoconf.h>
         |          ^~~~~~~~~~~~~~~~~~~~~~
   compilation terminated.
   make[3]: *** [scripts/Makefile.host:111: scripts/security/gen_lsm_count] Error 1
   make[3]: Target 'scripts/security/' not remade because of errors.
   make[2]: *** [scripts/Makefile.build:504: scripts/security] Error 2
   make[2]: Target 'scripts/' not remade because of errors.
   make[1]: *** [Makefile:1270: scripts] Error 2
   make[1]: Target 'prepare' not remade because of errors.
   make: *** [Makefile:242: __sub-make] Error 2
   make: Target 'prepare' not remade because of errors.


vim +5 include/linux/kconfig.h

2a11c8ea20bf85 Michal Marek 2011-07-20  4  
2a11c8ea20bf85 Michal Marek 2011-07-20 @5  #include <generated/autoconf.h>
2a11c8ea20bf85 Michal Marek 2011-07-20  6  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs KP Singh
                     ` (2 preceding siblings ...)
  2023-01-20  9:50   ` kernel test robot
@ 2023-01-20  9:50   ` kernel test robot
  3 siblings, 0 replies; 26+ messages in thread
From: kernel test robot @ 2023-01-20  9:50 UTC (permalink / raw)
  To: KP Singh, linux-security-module, bpf
  Cc: llvm, oe-kbuild-all, ast, daniel, jackmanb, renauld, paul, casey,
	song, revest, keescook, KP Singh

Hi KP,

I love your patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/KP-Singh/kernel-Add-helper-macros-for-loop-unrolling/20230120-133309
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20230120000818.1324170-3-kpsingh%40kernel.org
patch subject: [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs
config: s390-randconfig-r044-20230119 (https://download.01.org/0day-ci/archive/20230120/202301201738.WShrQVx6-lkp@intel.com/config)
compiler: clang version 16.0.0 (https://github.com/llvm/llvm-project 4196ca3278f78c6e19246e54ab0ecb364e37d66a)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install s390 cross compiling tool for clang build
        # apt-get install binutils-s390x-linux-gnu
        # https://github.com/intel-lab-lkp/linux/commit/831b06220bb29c6db171467b13903dac0ef2faa5
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review KP-Singh/kernel-Add-helper-macros-for-loop-unrolling/20230120-133309
        git checkout 831b06220bb29c6db171467b13903dac0ef2faa5
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=s390 olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=s390 prepare

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from scripts/security/gen_lsm_count.c:13:
>> include/linux/kconfig.h:5:10: fatal error: 'generated/autoconf.h' file not found
   #include <generated/autoconf.h>
            ^~~~~~~~~~~~~~~~~~~~~~
   1 error generated.
   make[3]: *** [scripts/Makefile.host:111: scripts/security/gen_lsm_count] Error 1
   make[3]: Target 'scripts/security/' not remade because of errors.
   make[2]: *** [scripts/Makefile.build:504: scripts/security] Error 2
   make[2]: Target 'scripts/' not remade because of errors.
   make[1]: *** [Makefile:1270: scripts] Error 2
   make[1]: Target 'prepare' not remade because of errors.
   make: *** [Makefile:242: __sub-make] Error 2
   make: Target 'prepare' not remade because of errors.


vim +5 include/linux/kconfig.h

2a11c8ea20bf85 Michal Marek 2011-07-20  4  
2a11c8ea20bf85 Michal Marek 2011-07-20 @5  #include <generated/autoconf.h>
2a11c8ea20bf85 Michal Marek 2011-07-20  6  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls KP Singh
  2023-01-20  4:36   ` Kees Cook
@ 2023-01-20 10:10   ` kernel test robot
  2023-01-20 10:41   ` kernel test robot
  2 siblings, 0 replies; 26+ messages in thread
From: kernel test robot @ 2023-01-20 10:10 UTC (permalink / raw)
  To: KP Singh, linux-security-module, bpf
  Cc: oe-kbuild-all, ast, daniel, jackmanb, renauld, paul, casey, song,
	revest, keescook, KP Singh

Hi KP,

I love your patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/KP-Singh/kernel-Add-helper-macros-for-loop-unrolling/20230120-133309
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20230120000818.1324170-4-kpsingh%40kernel.org
patch subject: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
config: arc-randconfig-r043-20230119 (https://download.01.org/0day-ci/archive/20230120/202301201833.YM7Hr62n-lkp@intel.com/config)
compiler: arceb-elf-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/1dfb90849b42d8af6e854cd6ae8cd96d1ebfc50a
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review KP-Singh/kernel-Add-helper-macros-for-loop-unrolling/20230120-133309
        git checkout 1dfb90849b42d8af6e854cd6ae8cd96d1ebfc50a
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=arc olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=arc SHELL=/bin/bash kernel/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from include/linux/bpf_lsm.h:12,
                    from kernel/bpf/syscall.c:31:
>> include/linux/lsm_hooks.h:36:10: fatal error: generated/lsm_count.h: No such file or directory
      36 | #include <generated/lsm_count.h>
         |          ^~~~~~~~~~~~~~~~~~~~~~~
   compilation terminated.


vim +36 include/linux/lsm_hooks.h

    34	
    35	/* Include the generated MAX_LSM_COUNT */
  > 36	#include <generated/lsm_count.h>
    37	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-01-20  0:08 ` [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls KP Singh
  2023-01-20  4:36   ` Kees Cook
  2023-01-20 10:10   ` kernel test robot
@ 2023-01-20 10:41   ` kernel test robot
  2 siblings, 0 replies; 26+ messages in thread
From: kernel test robot @ 2023-01-20 10:41 UTC (permalink / raw)
  To: KP Singh, linux-security-module, bpf
  Cc: llvm, oe-kbuild-all, ast, daniel, jackmanb, renauld, paul, casey,
	song, revest, keescook, KP Singh

Hi KP,

I love your patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/KP-Singh/kernel-Add-helper-macros-for-loop-unrolling/20230120-133309
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/20230120000818.1324170-4-kpsingh%40kernel.org
patch subject: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
config: hexagon-randconfig-r041-20230119 (https://download.01.org/0day-ci/archive/20230120/202301201845.mf9dWfym-lkp@intel.com/config)
compiler: clang version 16.0.0 (https://github.com/llvm/llvm-project 4196ca3278f78c6e19246e54ab0ecb364e37d66a)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/1dfb90849b42d8af6e854cd6ae8cd96d1ebfc50a
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review KP-Singh/kernel-Add-helper-macros-for-loop-unrolling/20230120-133309
        git checkout 1dfb90849b42d8af6e854cd6ae8cd96d1ebfc50a
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=hexagon olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=hexagon SHELL=/bin/bash kernel/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from kernel/bpf/syscall.c:5:
   In file included from include/linux/bpf-cgroup.h:11:
   In file included from include/net/sock.h:38:
   In file included from include/linux/hardirq.h:11:
   In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
   In file included from include/asm-generic/hardirq.h:17:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/hexagon/include/asm/io.h:334:
   include/asm-generic/io.h:547:31: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __raw_readb(PCI_IOBASE + addr);
                             ~~~~~~~~~~ ^
   include/asm-generic/io.h:560:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __le16_to_cpu((__le16 __force)__raw_readw(PCI_IOBASE + addr));
                                                           ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/little_endian.h:37:51: note: expanded from macro '__le16_to_cpu'
   #define __le16_to_cpu(x) ((__force __u16)(__le16)(x))
                                                     ^
   In file included from kernel/bpf/syscall.c:5:
   In file included from include/linux/bpf-cgroup.h:11:
   In file included from include/net/sock.h:38:
   In file included from include/linux/hardirq.h:11:
   In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
   In file included from include/asm-generic/hardirq.h:17:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/hexagon/include/asm/io.h:334:
   include/asm-generic/io.h:573:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr));
                                                           ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/little_endian.h:35:51: note: expanded from macro '__le32_to_cpu'
   #define __le32_to_cpu(x) ((__force __u32)(__le32)(x))
                                                     ^
   In file included from kernel/bpf/syscall.c:5:
   In file included from include/linux/bpf-cgroup.h:11:
   In file included from include/net/sock.h:38:
   In file included from include/linux/hardirq.h:11:
   In file included from ./arch/hexagon/include/generated/asm/hardirq.h:1:
   In file included from include/asm-generic/hardirq.h:17:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/hexagon/include/asm/io.h:334:
   include/asm-generic/io.h:584:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writeb(value, PCI_IOBASE + addr);
                               ~~~~~~~~~~ ^
   include/asm-generic/io.h:594:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr);
                                                         ~~~~~~~~~~ ^
   include/asm-generic/io.h:604:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr);
                                                         ~~~~~~~~~~ ^
   In file included from kernel/bpf/syscall.c:31:
   In file included from include/linux/bpf_lsm.h:12:
>> include/linux/lsm_hooks.h:36:10: fatal error: 'generated/lsm_count.h' file not found
   #include <generated/lsm_count.h>
            ^~~~~~~~~~~~~~~~~~~~~~~
   6 warnings and 1 error generated.


vim +36 include/linux/lsm_hooks.h

    34	
    35	/* Include the generated MAX_LSM_COUNT */
  > 36	#include <generated/lsm_count.h>
    37	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-01-20  4:36   ` Kees Cook
@ 2023-01-20 18:26     ` Casey Schaufler
  2023-02-06 13:04     ` KP Singh
  1 sibling, 0 replies; 26+ messages in thread
From: Casey Schaufler @ 2023-01-20 18:26 UTC (permalink / raw)
  To: Kees Cook, KP Singh
  Cc: linux-security-module, bpf, ast, daniel, jackmanb, renauld, paul,
	song, revest, casey

On 1/19/2023 8:36 PM, Kees Cook wrote:
> On Fri, Jan 20, 2023 at 01:08:17AM +0100, KP Singh wrote:
>> The indirect calls are not really needed as one knows the addresses of
>> enabled LSM callbacks at compile time and only the order can possibly
>> change at boot time with the lsm= kernel command line parameter.
>>
>> ...
> Then these replacements don't look weird. This would just be:
>
> 	security_for_each_hook(scall, vm_enough_memory) {
> 		rc = scall->hl->hook.vm_enough_memory(mm, pages);
>   		if (rc <= 0) {
>   			cap_sys_admin = 0;
>   			break;
>   		}
> 	}

That's a whole lot easier to swallow than what was originally proposed.

>
> I'm excited to have this. The speed improvements are pretty nice.
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-01-20  4:36   ` Kees Cook
  2023-01-20 18:26     ` Casey Schaufler
@ 2023-02-06 13:04     ` KP Singh
  2023-02-06 16:29       ` Casey Schaufler
  1 sibling, 1 reply; 26+ messages in thread
From: KP Singh @ 2023-02-06 13:04 UTC (permalink / raw)
  To: Kees Cook
  Cc: linux-security-module, bpf, ast, daniel, jackmanb, renauld, paul,
	casey, song, revest

On Fri, Jan 20, 2023 at 5:36 AM Kees Cook <keescook@chromium.org> wrote:
>
> On Fri, Jan 20, 2023 at 01:08:17AM +0100, KP Singh wrote:
> > The indirect calls are not really needed as one knows the addresses of

[...]

> > A static key guards whether an LSM static call is enabled or not.
> > Without this static key, for LSM hooks that return an int, the presence
> > of a hook that returns a default value can create side-effects, which
> > has resulted in bugs [1].
>
> I think this patch has too many logic changes in it. There are basically
> two things going on here:
>
> - replace list with unrolled calls
> - replace calls with static calls
>
> I see why it was merged, since some of the logic that would be added for
> step 1 would be immediately replaced, but I think splitting it might
> make things a bit more clear.
>
> There are likely a few intermediate steps here too, to rename things,
> etc.

I can try to split this patch, but I am not able to find a decent
slice without duplicating a lot of work that will get squashed in the
end anyway.

>
> > There are some hooks that don't use the call_int_hook and
> > call_void_hook. These hooks are updated to use a new macro called
> > security_for_each_hook where the lsm_callback is directly invoked as an

[...]

> > + * Call the macro M for each LSM hook MAX_LSM_COUNT times.
> > + */
> > +#define LSM_UNROLL(M, ...) UNROLL(MAX_LSM_COUNT, M, __VA_ARGS__)
>
> I think this should be:
>
> #define LSM_UNROLL(M, ...)      do {                    \
>                 UNROLL(MAX_LSM_COUNT, M, __VA_ARGS__);  \
>         } while (0)
>
> or maybe UNROLL needs the do/while.

UNROLL is used for both declarations and loop unrolling. So I have
split the LSM_UNROLL macro into LSM_LOOP_UNROLL (which is
surrounded by a do/while) and LSM_DEFINE_UNROLL, which is not.
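
Roughly, that split (a sketch reconstructed from the description
above, not the respun patch itself):

	/* statement context: invoking the hooks, safe to wrap */
	#define LSM_LOOP_UNROLL(M, ...)				\
		do {						\
			UNROLL(MAX_LSM_COUNT, M, __VA_ARGS__);	\
		} while (0)

	/* declaration context: defining calls, keys and table entries */
	#define LSM_DEFINE_UNROLL(M, ...) \
		UNROLL(MAX_LSM_COUNT, M, __VA_ARGS__)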

>
> >
> >  /**
> >   * union security_list_options - Linux Security Module hook function list
> > @@ -1657,21 +1677,48 @@ union security_list_options {
> >       #define LSM_HOOK(RET, DEFAULT, NAME, ...) RET (*NAME)(__VA_ARGS__);
> >       #include "lsm_hook_defs.h"
> >       #undef LSM_HOOK
> > +     void *lsm_callback;
> >  };
> >
> > -struct security_hook_heads {
> > -     #define LSM_HOOK(RET, DEFAULT, NAME, ...) struct hlist_head NAME;
> > -     #include "lsm_hook_defs.h"
> > +/*
> > + * @key: static call key as defined by STATIC_CALL_KEY
> > + * @trampoline: static call trampoline as defined by STATIC_CALL_TRAMP
> > + * @hl: The security_hook_list as initialized by the owning LSM.

[...]

> > +             struct lsm_static_call NAME[MAX_LSM_COUNT];
> > +     #include <linux/lsm_hook_defs.h>
> >       #undef LSM_HOOK
> >  } __randomize_layout;
> >
> >  /*
> >   * Security module hook list structure.
> >   * For use with generic list macros for common operations.
> > + *
> > + * struct security_hook_list - Contents of a cacheable, mappable object.

[...]

> > -#define LSM_HOOK_INIT(HEAD, HOOK) \
> > -     { .head = &security_hook_heads.HEAD, .hook = { .HEAD = HOOK } }
> > +#define LSM_HOOK_INIT(NAME, CALLBACK)                        \
> > +     {                                               \
> > +             .scalls = static_calls_table.NAME,      \
> > +             .hook = { .NAME = CALLBACK }            \
> > +     }
> >
> > -extern struct security_hook_heads security_hook_heads;
> >  extern char *lsm_names;
> >
> >  extern void security_add_hooks(struct security_hook_list *hooks, int count,
> > @@ -1756,10 +1805,21 @@ extern struct lsm_info __start_early_lsm_info[], __end_early_lsm_info[];
> >  static inline void security_delete_hooks(struct security_hook_list *hooks,
> >                                               int count)
>
> Hey Paul, can we get rid of CONFIG_SECURITY_SELINUX_DISABLE yet? It's
> been deprecated for years....
>
> >  {
> > -     int i;
> > +     struct lsm_static_call *scalls;
> > +     int i, j;
> > +
> > +     for (i = 0; i < count; i++) {
> > +             scalls = hooks[i].scalls;
> > +             for (j = 0; j < MAX_LSM_COUNT; j++) {
> > +                     if (scalls[j].hl != &hooks[i])
> > +                             continue;
> >
> > -     for (i = 0; i < count; i++)
> > -             hlist_del_rcu(&hooks[i].list);
> > +                     static_key_disable(scalls[j].enabled_key);
> > +                     __static_call_update(scalls[j].key,
> > +                                          scalls[j].trampoline, NULL);
> > +                     scalls[j].hl = NULL;
> > +             }
> > +     }
> >  }
> >  #endif /* CONFIG_SECURITY_SELINUX_DISABLE */
> >
> > @@ -1771,5 +1831,6 @@ static inline void security_delete_hooks(struct security_hook_list *hooks,
> >  #endif /* CONFIG_SECURITY_WRITABLE_HOOKS */
> >
> >  extern int lsm_inode_alloc(struct inode *inode);
> > +extern struct lsm_static_calls_table static_calls_table __ro_after_init;

[...]

> >
> > +/*
> > + * Define static calls and static keys for each LSM hook.
> > + */
> > +
> > +#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...)                  \
> > +     DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM),             \
> > +                             *((RET(*)(__VA_ARGS__))NULL));          \
> > +     DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM));
>
> Hm, another place where we would benefit from having separated logic for
> "is it built?" and "is it enabled by default?" and we could use
> DEFINE_STATIC_KEY_MAYBE(). But, since we don't, I think we need to use
> DEFINE_STATIC_KEY_TRUE() here or else won't all the calls be
> out-of-line? (i.e. the default compiled state will be NOPs?) If we're
> trying to optimize for having LSMs, I think we should default to inline
> calls. (The machine code in the commit log seems to indicate that they
> are out of line -- it uses jumps.)
>

I should have added it in the commit description; actually we are
optimizing for "hot paths are less likely to have LSM hooks enabled"
(e.g. socket_sendmsg). But I do see that there are LSMs that have these
enabled. Maybe we can put this behind a config option, possibly
depending on CONFIG_EXPERT?

> > [...]
> > +#define __CALL_STATIC_VOID(NUM, HOOK, ...)                                   \
> > +     if (static_branch_unlikely(&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM))) { \
> > +             static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__);        \
> > +     }
>
> Same here -- I would expect this to be static_branch_likely() or we'll
> get out-of-line branches. Also, IMO, this should be:
>
>         do {
>                 if (...)
>                         static_call(...);
>         } while (0)
>

Note we cannot really omit the semicolon here. We also use the UNROLL
macro for declaring struct members, which cannot assume that the macro
passed to UNROLL will be terminated by a semicolon.

>
> > +#define call_void_hook(FUNC, ...)                                 \
> > +     do {                                                      \
> > +             LSM_UNROLL(__CALL_STATIC_VOID, FUNC, __VA_ARGS__) \
> >       } while (0)
>
> With the do/while in LSM_UNROLL, this is more readable.

Agreed, done.

>
> >
> > -#define call_int_hook(FUNC, IRC, ...) ({                     \
> > -     int RC = IRC;                                           \
> > -     do {                                                    \
> > -             struct security_hook_list *P;                   \
> > -                                                             \
> > -             hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
> > -                     RC = P->hook.FUNC(__VA_ARGS__);         \
> > -                     if (RC != 0)                            \
> > -                             break;                          \
> > -             }                                               \
> > -     } while (0);                                            \
> > -     RC;                                                     \
> > -})
> > +#define __CALL_STATIC_INT(NUM, R, HOOK, ...)                                 \
> > +     if (static_branch_unlikely(&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM))) { \
> > +             R = static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__);    \
> > +             if (R != 0)                                                  \
> > +                     goto out;                                            \
> > +     }
>
> I would expect the label to be a passed argument, but maybe since it
> never changes, it's fine as-is?

I think passing the label as an argument is cleaner, so I went ahead
and did that.

>
> And again, I'd expect a do/while wrapper, and for it to be s_b_likely.

The problem with the do/while wrapper here, again, is that UNROLL
macros are terminated by semicolons, which does not work for unrolling
struct member initialization, in particular for the static_calls_table
where it's used to initialize the array.

Now we could do what I suggested with LSM_LOOP_UNROLL and
LSM_DEFINE_UNROLL and push the logic up into UNROLL itself:

* UNROLL_LOOP: expects a macro that will be surrounded by a do/while
and terminated by a semicolon in the unroll body
* UNROLL_DEFINE (or UNROLL_RAW): where you can pass anything.

What do you think?

>
> > +
> > +#define call_int_hook(FUNC, IRC, ...)                                        \
> > +     ({                                                                   \
> > +             __label__ out;                                               \
> > +             int RC = IRC;                                                \
> > +             do {                                                         \
> > +                     LSM_UNROLL(__CALL_STATIC_INT, RC, FUNC, __VA_ARGS__) \
> > +                                                                          \
> > +             } while (0);                                                 \
>
> Then this becomes:
>
> ({
>         __label__ out;  /* keep "out" local to each expansion */
>         int RC = IRC;
>         LSM_UNROLL(__CALL_STATIC_INT, RC, FUNC, __VA_ARGS__);
> out:
>         RC;
> })
>
> > +#define security_for_each_hook(scall, NAME, expression)                  \
> > +     for (scall = static_calls_table.NAME;                            \
> > +          scall - static_calls_table.NAME < MAX_LSM_COUNT; scall++) { \
> > +             if (!static_key_enabled(scall->enabled_key))             \
> > +                     continue;                                        \
> > +             (expression);                                            \
> > +     }
>
> Why isn't this using static_branch_enabled()? I would expect this to be:

I am guessing you mean static_branch_likely. I tried using
static_branch_likely here and it does not work: the key is now reached
through a loop counter, and static_branch_likely needs it at
compile time. So one needs to resort to the underlying static_key
implementation. Using static_branch_likely there causes:

./arch/x86/include/asm/jump_label.h:27:20: error: invalid operand for
inline asm constraint 'i'
./arch/x86/include/asm/jump_label.h:27:20: error: invalid operand for
inline asm constraint 'i'

The operand needs to be an immediate operand that gets patched at
runtime. I think it's okay: we are already not doing any optimization
here, as we still have the indirect call.

TL;DR: static_branch_likely cannot depend on runtime computations.
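
Spelled out (an illustration, not code from the series):

	struct lsm_static_call *scall = &static_calls_table.NAME[i];

	/* fine: a plain runtime read of the key's state */
	if (static_key_enabled(scall->enabled_key))
		...

	/* does not build: the jump label asm wants the key address as
	 * an "i" (immediate) operand, i.e. a constant known at build
	 * time, and scall->enabled_key is only known at run time */
	if (static_branch_likely(scall->enabled_key))
		...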

>
> #define security_for_each_hook(scall, NAME)                             \
>         for (scall = static_calls_table.NAME;                           \
>              scall - static_calls_table.NAME < MAX_LSM_COUNT; scall++)  \
>                 if (static_branch_likely(scall->enabled_key))

I like this macro, it does make the code easier to read thanks.

>
> >
> >  /* Security operations */
> >
> > @@ -859,7 +924,7 @@ int security_settime64(const struct timespec64 *ts, const struct timezone *tz)
> >
> >  int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
> >  {
> > -     struct security_hook_list *hp;
> > +     struct lsm_static_call *scall;
> >       int cap_sys_admin = 1;
> >       int rc;
> >
> > @@ -870,13 +935,13 @@ int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
> >        * agree that it should be set it will. If any module
> >        * thinks it should not be set it won't.
> >        */
> > -     hlist_for_each_entry(hp, &security_hook_heads.vm_enough_memory, list) {
> > -             rc = hp->hook.vm_enough_memory(mm, pages);
> > +     security_for_each_hook(scall, vm_enough_memory, ({
> > +             rc = scall->hl->hook.vm_enough_memory(mm, pages);
> >               if (rc <= 0) {
> >                       cap_sys_admin = 0;
> >                       break;
> >               }
> > -     }
> > +     }));
>
> Then these replacements don't look weird. This would just be:
>
>         security_for_each_hook(scall, vm_enough_memory) {
>                 rc = scall->hl->hook.vm_enough_memory(mm, pages);
>                 if (rc <= 0) {
>                         cap_sys_admin = 0;
>                         break;
>                 }
>         }

Agreed, Thanks!

>
> I'm excited to have this. The speed improvements are pretty nice.

Yay!


>
> --
> Kees Cook

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-02-06 13:04     ` KP Singh
@ 2023-02-06 16:29       ` Casey Schaufler
  2023-02-06 17:48         ` Song Liu
  0 siblings, 1 reply; 26+ messages in thread
From: Casey Schaufler @ 2023-02-06 16:29 UTC (permalink / raw)
  To: KP Singh, Kees Cook
  Cc: linux-security-module, bpf, ast, daniel, jackmanb, renauld, paul,
	song, revest, casey

On 2/6/2023 5:04 AM, KP Singh wrote:
> On Fri, Jan 20, 2023 at 5:36 AM Kees Cook <keescook@chromium.org> wrote:
>> On Fri, Jan 20, 2023 at 01:08:17AM +0100, KP Singh wrote:
>>> The indirect calls are not really needed as one knows the addresses of
> [...]
>
>>> +/*
>>> + * Define static calls and static keys for each LSM hook.
>>> + */
>>> +
>>> +#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...)                  \
>>> +     DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM),             \
>>> +                             *((RET(*)(__VA_ARGS__))NULL));          \
>>> +     DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM));
>> Hm, another place where we would benefit from having separated logic for
>> "is it built?" and "is it enabled by default?" and we could use
>> DEFINE_STATIC_KEY_MAYBE(). But, since we don't, I think we need to use
>> DEFINE_STATIC_KEY_TRUE() here or else won't all the calls be
>> out-of-line? (i.e. the default compiled state will be NOPs?) If we're
>> trying to optimize for having LSMs, I think we should default to inline
>> calls. (The machine code in the commit log seems to indicate that they
>> are out of line -- it uses jumps.)
>>
> I should have added it in the commit description; actually we are
> optimizing for "hot paths are less likely to have LSM hooks enabled"
> (e.g. socket_sendmsg).

How did you come to that conclusion? Where is there a correlation between
"hot path" and "less likely to be enabled"? 

>  But I do see that there are LSMs that have these
> enabled. Maybe we can put this behind a config option, possibly
> depending on CONFIG_EXPERT?

Help me, as the maintainer of one of those LSMs, understand why that would
be a good idea.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-02-06 16:29       ` Casey Schaufler
@ 2023-02-06 17:48         ` Song Liu
  2023-02-06 18:19           ` KP Singh
  2023-02-06 18:29           ` Casey Schaufler
  0 siblings, 2 replies; 26+ messages in thread
From: Song Liu @ 2023-02-06 17:48 UTC (permalink / raw)
  To: Casey Schaufler
  Cc: KP Singh, Kees Cook, linux-security-module, bpf, ast, daniel,
	jackmanb, renauld, paul, revest

On Mon, Feb 6, 2023 at 8:29 AM Casey Schaufler <casey@schaufler-ca.com> wrote:
>
> On 2/6/2023 5:04 AM, KP Singh wrote:
> > On Fri, Jan 20, 2023 at 5:36 AM Kees Cook <keescook@chromium.org> wrote:
> >> On Fri, Jan 20, 2023 at 01:08:17AM +0100, KP Singh wrote:
> >>> The indirect calls are not really needed as one knows the addresses of
> > [...]
> >
> >>> +/*
> >>> + * Define static calls and static keys for each LSM hook.
> >>> + */
> >>> +
> >>> +#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...)                  \
> >>> +     DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM),             \
> >>> +                             *((RET(*)(__VA_ARGS__))NULL));          \
> >>> +     DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM));
> >> Hm, another place where we would benefit from having separated logic for
> >> "is it built?" and "is it enabled by default?" and we could use
> >> DEFINE_STATIC_KEY_MAYBE(). But, since we don't, I think we need to use
> >> DEFINE_STATIC_KEY_TRUE() here or else won't all the calls be
> >> out-of-line? (i.e. the default compiled state will be NOPs?) If we're
> >> trying to optimize for having LSMs, I think we should default to inline
> >> calls. (The machine code in the commit log seems to indicate that they
> >> are out of line -- it uses jumps.)
> >>
> > I should have added it in the commit description; actually we are
> > optimizing for "hot paths are less likely to have LSM hooks enabled"
> > (e.g. socket_sendmsg).
>
> How did you come to that conclusion? Where is there a correlation between
> "hot path" and "less likely to be enabled"?

I could echo KP's reasoning here. AFAICT, the correlation is that LSM
hooks on a hot path add more performance overhead. In our use cases (Meta),
we are very careful with "small" performance hits. 0.25% is significant
overhead; 1% overhead will not fly without very good reasons (Do we
have to do this? Are there any other alternatives?). If it is possible to
achieve similar security on a different hook, we will not enable the hook on
the hot path. For example, we may not enable socket_sendmsg, but try
to disallow opening such sockets instead.
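
The trade-off Song describes, in BPF LSM terms; a minimal sketch
where the hook choice and policy are illustrative only:

	#include "vmlinux.h"
	#include <errno.h>
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	char LICENSE[] SEC("license") = "GPL";

	/* enforce once at socket creation rather than paying for a
	 * check on every sendmsg in the hot path */
	SEC("lsm/socket_create")
	int BPF_PROG(deny_raw_sockets, int family, int type,
		     int protocol, int kern)
	{
		if (type == SOCK_RAW)
			return -EPERM;
		return 0;
	}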

>
> >  But I do see that there are LSMs that have these
> > enabled. Maybe we can put this behind a config option, possibly
> > depending on CONFIG_EXPERT?
>
> Help me, as the maintainer of one of those LSMs, understand why that would
> be a good idea.

IIUC, this is also from performance concerns. We would like to manage
the complexity at compile time for performance benefits.

Thanks,
Song

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-02-06 17:48         ` Song Liu
@ 2023-02-06 18:19           ` KP Singh
  2023-02-06 18:29           ` Casey Schaufler
  1 sibling, 0 replies; 26+ messages in thread
From: KP Singh @ 2023-02-06 18:19 UTC (permalink / raw)
  To: Song Liu
  Cc: Casey Schaufler, Kees Cook, linux-security-module, bpf, ast,
	daniel, jackmanb, renauld, paul, revest

On Mon, Feb 6, 2023 at 6:49 PM Song Liu <song@kernel.org> wrote:
>
> On Mon, Feb 6, 2023 at 8:29 AM Casey Schaufler <casey@schaufler-ca.com> wrote:
> >
> > On 2/6/2023 5:04 AM, KP Singh wrote:
> > > On Fri, Jan 20, 2023 at 5:36 AM Kees Cook <keescook@chromium.org> wrote:
> > >> On Fri, Jan 20, 2023 at 01:08:17AM +0100, KP Singh wrote:
> > >>> The indirect calls are not really needed as one knows the addresses of
> > > [...]
> > >
> > >>> +/*
> > >>> + * Define static calls and static keys for each LSM hook.
> > >>> + */
> > >>> +
> > >>> +#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...)                  \
> > >>> +     DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM),             \
> > >>> +                             *((RET(*)(__VA_ARGS__))NULL));          \
> > >>> +     DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM));
> > >> Hm, another place where we would benefit from having separated logic for
> > >> "is it built?" and "is it enabled by default?" and we could use
> > >> DEFINE_STATIC_KEY_MAYBE(). But, since we don't, I think we need to use
> > >> DEFINE_STATIC_KEY_TRUE() here or else won't all the calls be
> > >> out-of-line? (i.e. the default compiled state will be NOPs?) If we're
> > >> trying to optimize for having LSMs, I think we should default to inline
> > >> calls. (The machine code in the commit log seems to indicate that they
> > >> are out of line -- it uses jumps.)
> > >>
> > > I should have added it in the commit description; actually we are
> > > optimizing for "hot paths are less likely to have LSM hooks enabled"
> > > (e.g. socket_sendmsg).
> >
> > How did you come to that conclusion? Where is there a correlation between
> > "hot path" and "less likely to be enabled"?
>
> I could echo KP's reasoning here. AFAICT, the correlation is that LSMs on
> hot path will give more performance overhead. In our use cases (Meta),
> we are very careful with "small" performance hits. 0.25% is significant

+1 to everything Song said. I am not saying that one direction is
better than the other, and for distros that have LSMs (like SELinux and
AppArmor) enabled it's okay to have this default to
static_branch_likely. On systems that will have just the BPF LSM
enabled, the opposite is true, i.e. one would never add a
hook on a hot path as the overheads are unacceptable, and when one does
add a hook, they are willing to accept the extra overhead (which is
already much less than with indirect calls). I am okay with the
default being static_branch_likely if that's what the other LSM
maintainers prefer.


> overhead; 1% overhead will not fly without very good reasons (Do we
> have to do this? Are there any other alternatives?). If it is possible to
> achieve similar security on a different hook, we will not enable the hook on
> the hot path. For example, we may not enable socket_sendmsg, but try
> to disallow opening such sockets instead.
>
> >
> > >  But I do see that there are LSMs that have these
> > > enabled. Maybe we can put this behind a config option, possibly
> > > depending on CONFIG_EXPERT?
> >
> > Help me, as the maintainer of one of those LSMs, understand why that would
> > be a good idea.
>
> IIUC, this is also from performance concerns. We would like to manage
> the complexity at compile time for performance benefits.
>
> Thanks,
> Song

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-02-06 17:48         ` Song Liu
  2023-02-06 18:19           ` KP Singh
@ 2023-02-06 18:29           ` Casey Schaufler
  2023-02-06 18:41             ` KP Singh
  2023-02-06 19:05             ` Song Liu
  1 sibling, 2 replies; 26+ messages in thread
From: Casey Schaufler @ 2023-02-06 18:29 UTC (permalink / raw)
  To: Song Liu
  Cc: KP Singh, Kees Cook, linux-security-module, bpf, ast, daniel,
	jackmanb, renauld, paul, revest, casey

On 2/6/2023 9:48 AM, Song Liu wrote:
> On Mon, Feb 6, 2023 at 8:29 AM Casey Schaufler <casey@schaufler-ca.com> wrote:
>> On 2/6/2023 5:04 AM, KP Singh wrote:
>>> On Fri, Jan 20, 2023 at 5:36 AM Kees Cook <keescook@chromium.org> wrote:
>>>> On Fri, Jan 20, 2023 at 01:08:17AM +0100, KP Singh wrote:
>>>>> The indirect calls are not really needed as one knows the addresses of
>>> [...]
>>>
>>>>> +/*
>>>>> + * Define static calls and static keys for each LSM hook.
>>>>> + */
>>>>> +
>>>>> +#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...)                  \
>>>>> +     DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM),             \
>>>>> +                             *((RET(*)(__VA_ARGS__))NULL));          \
>>>>> +     DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM));
>>>> Hm, another place where we would benefit from having separated logic for
>>>> "is it built?" and "is it enabled by default?" and we could use
>>>> DEFINE_STATIC_KEY_MAYBE(). But, since we don't, I think we need to use
>>>> DEFINE_STATIC_KEY_TRUE() here or else won't all the calls be
>>>> out-of-line? (i.e. the default compiled state will be NOPs?) If we're
>>>> trying to optimize for having LSMs, I think we should default to inline
>>>> calls. (The machine code in the commit log seems to indicate that they
>>>> are out of line -- it uses jumps.)
>>>>
>>> I should have added it in the commit description, actually we are
>>> optimizing for "hot paths are less likely to have LSM hooks enabled"
>>> (eg. socket_sendmsg).
>> How did you come to that conclusion? Where is there a correlation between
>> "hot path" and "less likely to be enabled"?
> I could echo KP's reasoning here. AFAICT, the correlation is that LSMs on
> hot path will give more performance overhead. In our use cases (Meta),
> we are very careful with "small" performance hits. 0.25% is significant
> overhead; 1% overhead will not fly without very good reasons (Do we
> have to do this? Are there any other alternatives?). If it is possible to
> achieve similar security on a different hook, we will not enable the hook on
> the hot path. For example, we may not enable socket_sendmsg, but try
> to disallow opening such sockets instead.

I'm not asking about BPF. I'm asking about the impact on other LSMs.
If you're talking strictly about BPF you need to say that. I'm all for
performance improvement. But as I've said before, it should be for all
the security modules, not just BPF.

>
>>>  But I do see that there are LSMs that have these
>>> enabled. Maybe we can put this behind a config option, possibly
>>> depending on CONFIG_EXPERT?
>> Help me, as the maintainer of one of those LSMs, understand why that would
>> be a good idea.
> IIUC, this is also from performance concerns. We would like to manage
> the complexity at compile time for performance benefits.

What complexity? What config option? I know that I'm slow, but it looks
as if you're suggesting making the LSM infrastructure incredibly fragile
and difficult to understand. 

>
> Thanks,
> Song


* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-02-06 18:29           ` Casey Schaufler
@ 2023-02-06 18:41             ` KP Singh
  2023-02-06 18:50               ` Kees Cook
  2023-02-06 19:05             ` Song Liu
  1 sibling, 1 reply; 26+ messages in thread
From: KP Singh @ 2023-02-06 18:41 UTC (permalink / raw)
  To: Casey Schaufler
  Cc: Song Liu, Kees Cook, linux-security-module, bpf, ast, daniel,
	jackmanb, renauld, paul, revest

On Mon, Feb 6, 2023 at 7:29 PM Casey Schaufler <casey@schaufler-ca.com> wrote:
>
> On 2/6/2023 9:48 AM, Song Liu wrote:
> > On Mon, Feb 6, 2023 at 8:29 AM Casey Schaufler <casey@schaufler-ca.com> wrote:
> >> On 2/6/2023 5:04 AM, KP Singh wrote:
> >>> On Fri, Jan 20, 2023 at 5:36 AM Kees Cook <keescook@chromium.org> wrote:
> >>>> On Fri, Jan 20, 2023 at 01:08:17AM +0100, KP Singh wrote:
> >>>>> The indirect calls are not really needed as one knows the addresses of
> >>> [...]
> >>>
> >>>>> +/*
> >>>>> + * Define static calls and static keys for each LSM hook.
> >>>>> + */
> >>>>> +
> >>>>> +#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...)                  \
> >>>>> +     DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM),             \
> >>>>> +                             *((RET(*)(__VA_ARGS__))NULL));          \
> >>>>> +     DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM));
> >>>> Hm, another place where we would benefit from having separated logic for
> >>>> "is it built?" and "is it enabled by default?" and we could use
> >>>> DEFINE_STATIC_KEY_MAYBE(). But, since we don't, I think we need to use
> >>>> DEFINE_STATIC_KEY_TRUE() here or else won't all the calls be
> >>>> out-of-line? (i.e. the default compiled state will be NOPs?) If we're
> >>>> trying to optimize for having LSMs, I think we should default to inline
> >>>> calls. (The machine code in the commit log seems to indicate that they
> >>>> are out of line -- it uses jumps.)
> >>>>
> >>> I should have added it in the commit description, actually we are
> >>> optimizing for "hot paths are less likely to have LSM hooks enabled"
> >>> (eg. socket_sendmsg).
> >> How did you come to that conclusion? Where is there a correlation between
> >> "hot path" and "less likely to be enabled"?
> > I could echo KP's reasoning here. AFAICT, the correlation is that LSMs on
> > hot path will give more performance overhead. In our use cases (Meta),
> > we are very careful with "small" performance hits. 0.25% is significant
> > overhead; 1% overhead will not fly without very good reasons (Do we
> > have to do this? Are there any other alternatives?). If it is possible to
> > achieve similar security on a different hook, we will not enable the hook on
> > the hot path. For example, we may not enable socket_sendmsg, but try
> > to disallow opening such sockets instead.
>
> I'm not asking about BPF. I'm asking about the impact on other LSMs.
> If you're talking strictly about BPF you need to say that. I'm all for
> performance improvement. But as I've said before, it should be for all
> the security modules, not just BPF.

It's a trade-off that will work differently for different LSMs and
distros (based on the LSM they chose), hence the config option. I
even suggested this be behind CONFIG_EXPERT, which basically says
this:

 "This option allows certain base kernel options and settings
 to be disabled or tweaked. This is for specialized
 environments which can tolerate a "non-standard" kernel.
 Only use this if you really know what you are doing."


>
> >
> >>>  But I do see that there are LSMs that have these
> >>> enabled. Maybe we can put this behind a config option, possibly
> >>> depending on CONFIG_EXPERT?
> >> Help me, as the maintainer of one of those LSMs, understand why that would
> >> be a good idea.
> > IIUC, this is also from performance concerns. We would like to manage
> > the complexity at compile time for performance benefits.
>
> What complexity? What config option? I know that I'm slow, but it looks
> as if you're suggesting making the LSM infrastructure incredibly fragile
> and difficult to understand.

I am sorry, but the LSM is a core piece of the kernel that currently
has significant unnecessary overheads (look at the numbers that I
posted). This is not making it fragile, it's making it performant;
such optimisations are everywhere in the kernel, and the LSM
infrastructure has somehow been neglected and is just catching up.
These are resources being wasted that could be saved.

>
> >
> > Thanks,
> > Song


* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-02-06 18:41             ` KP Singh
@ 2023-02-06 18:50               ` Kees Cook
  2023-06-08  2:48                 ` KP Singh
  0 siblings, 1 reply; 26+ messages in thread
From: Kees Cook @ 2023-02-06 18:50 UTC (permalink / raw)
  To: KP Singh
  Cc: Casey Schaufler, Song Liu, linux-security-module, bpf, ast,
	daniel, jackmanb, renauld, paul, revest

On Mon, Feb 06, 2023 at 07:41:04PM +0100, KP Singh wrote:
> On Mon, Feb 6, 2023 at 7:29 PM Casey Schaufler <casey@schaufler-ca.com> wrote:
> >
> > On 2/6/2023 9:48 AM, Song Liu wrote:
> > > On Mon, Feb 6, 2023 at 8:29 AM Casey Schaufler <casey@schaufler-ca.com> wrote:
> > >> On 2/6/2023 5:04 AM, KP Singh wrote:
> > >>> On Fri, Jan 20, 2023 at 5:36 AM Kees Cook <keescook@chromium.org> wrote:
> > >>>> On Fri, Jan 20, 2023 at 01:08:17AM +0100, KP Singh wrote:
> > >>>>> The indirect calls are not really needed as one knows the addresses of
> > >>> [...]
> > >>>
> > >>>>> +/*
> > >>>>> + * Define static calls and static keys for each LSM hook.
> > >>>>> + */
> > >>>>> +
> > >>>>> +#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...)                  \
> > >>>>> +     DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM),             \
> > >>>>> +                             *((RET(*)(__VA_ARGS__))NULL));          \
> > >>>>> +     DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM));
> > >>>> Hm, another place where we would benefit from having separated logic for
> > >>>> "is it built?" and "is it enabled by default?" and we could use
> > >>>> DEFINE_STATIC_KEY_MAYBE(). But, since we don't, I think we need to use
> > >>>> DEFINE_STATIC_KEY_TRUE() here or else won't all the calls be
> > >>>> out-of-line? (i.e. the default compiled state will be NOPs?) If we're
> > >>>> trying to optimize for having LSMs, I think we should default to inline
> > >>>> calls. (The machine code in the commit log seems to indicate that they
> > >>>> are out of line -- it uses jumps.)
> > >>>>
> > >>> I should have added it in the commit description, actually we are
> > >>> optimizing for "hot paths are less likely to have LSM hooks enabled"
> > >>> (eg. socket_sendmsg).
> > >> How did you come to that conclusion? Where is there a correlation between
> > >> "hot path" and "less likely to be enabled"?
> > > I could echo KP's reasoning here. AFAICT, the correlation is that LSMs on
> > > hot path will give more performance overhead. In our use cases (Meta),
> > > we are very careful with "small" performance hits. 0.25% is significant
> > > overhead; 1% overhead will not fly without very good reasons (Do we
> > > have to do this? Are there any other alternatives?). If it is possible to
> > > achieve similar security on a different hook, we will not enable the hook on
> > > the hot path. For example, we may not enable socket_sendmsg, but try
> > > to disallow opening such sockets instead.
> >
> > I'm not asking about BPF. I'm asking about the impact on other LSMs.
> > If you're talking strictly about BPF you need to say that. I'm all for
> > performance improvement. But as I've said before, it should be for all
> > the security modules, not just BPF.
> 
> It's a trade-off that will work differently for different LSMs and
> distros (based on the LSM they chose), hence the config option. I
> even suggested this be behind CONFIG_EXPERT, which basically says
> this:
> 
>  "This option allows certain base kernel options and settings
>  to be disabled or tweaked. This is for specialized
>  environments which can tolerate a "non-standard" kernel.
>  Only use this if you really know what you are doing."

Using the DEFINE_STATIC_KEY_MAYBE() and static_branch_maybe() macros
tied to a new CONFIG seems like it can give us a reasonable knob for
in-line vs out-of-line calls.
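
Something like this, roughly (untested; the CONFIG name is just a
placeholder), so the key definition and the call-site branch follow
the same option:

	DEFINE_STATIC_KEY_MAYBE(CONFIG_SECURITY_HOOK_LIKELY,
				SECURITY_HOOK_ENABLED_KEY(NAME, NUM));

	if (static_branch_maybe(CONFIG_SECURITY_HOOK_LIKELY,
				&SECURITY_HOOK_ENABLED_KEY(NAME, NUM)))
		/* invoke the static call for this hook slot */;

Builds that expect hooks to be active get inline calls, and builds
that expect them to stay off keep the NOP default, without a second
set of macros.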

> > >>>  But I do see that there are LSMs that have these
> > >>> enabled. Maybe we can put this behind a config option, possibly
> > >>> depending on CONFIG_EXPERT?
> > >> Help me, as the maintainer of one of those LSMs, understand why that would
> > >> be a good idea.
> > > IIUC, this is also from performance concerns. We would like to manage
> > > the complexity at compile time for performance benefits.
> >
> > What complexity? What config option? I know that I'm slow, but it looks
> > as if you're suggesting making the LSM infrastructure incredibly fragile
> > and difficult to understand.
> 
> I am sorry, but the LSM is a core piece of the kernel that currently
> has significant unnecessary overheads (look at the numbers that I
> posted). This is not making it fragile, it's making it performant;
> such optimisations are everywhere in the kernel, and the LSM
> infrastructure has somehow been neglected and is just catching up.
> These are resources being wasted that could be saved.

Let's just move forward to v2, which I think will look much cleaner. I
think we can get to both maintainable code and run-time performance with
this series.

-- 
Kees Cook


* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-02-06 18:29           ` Casey Schaufler
  2023-02-06 18:41             ` KP Singh
@ 2023-02-06 19:05             ` Song Liu
  2023-02-06 20:11               ` Casey Schaufler
  1 sibling, 1 reply; 26+ messages in thread
From: Song Liu @ 2023-02-06 19:05 UTC (permalink / raw)
  To: Casey Schaufler
  Cc: KP Singh, Kees Cook, linux-security-module, bpf, ast, daniel,
	jackmanb, renauld, paul, revest

On Mon, Feb 6, 2023 at 10:29 AM Casey Schaufler <casey@schaufler-ca.com> wrote:
>
[...]
> >>> I should have added it in the commit description, actually we are
> >>> optimizing for "hot paths are less likely to have LSM hooks enabled"
> >>> (eg. socket_sendmsg).
> >> How did you come to that conclusion? Where is there a correlation between
> >> "hot path" and "less likely to be enabled"?
> > I could echo KP's reasoning here. AFAICT, the correlation is that LSMs on
> > hot path will give more performance overhead. In our use cases (Meta),
> > we are very careful with "small" performance hits. 0.25% is significant
> > overhead; 1% overhead will not fly without very good reasons (Do we
> > have to do this? Are there any other alternatives?). If it is possible to
> > achieve similar security on a different hook, we will not enable the hook on
> > the hot path. For example, we may not enable socket_sendmsg, but try
> > to disallow opening such sockets instead.
>
> I'm not asking about BPF. I'm asking about the impact on other LSMs.
> If you're talking strictly about BPF you need to say that. I'm all for
> performance improvement. But as I've said before, it should be for all
> the security modules, not just BPF.

I don't think anything here is BPF-specific. The performance-security
tradeoff should be the same for all LSMs. A hook on the hot path is more
expensive than a hook on a cooler path. This is the same for all LSMs, no?

Thanks,
Song


* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-02-06 19:05             ` Song Liu
@ 2023-02-06 20:11               ` Casey Schaufler
  0 siblings, 0 replies; 26+ messages in thread
From: Casey Schaufler @ 2023-02-06 20:11 UTC (permalink / raw)
  To: Song Liu
  Cc: KP Singh, Kees Cook, linux-security-module, bpf, ast, daniel,
	jackmanb, renauld, paul, revest, casey

On 2/6/2023 11:05 AM, Song Liu wrote:
> On Mon, Feb 6, 2023 at 10:29 AM Casey Schaufler <casey@schaufler-ca.com> wrote:
> [...]
>>>>> I should have added it in the commit description, actually we are
>>>>> optimizing for "hot paths are less likely to have LSM hooks enabled"
>>>>> (eg. socket_sendmsg).
>>>> How did you come to that conclusion? Where is there a correlation between
>>>> "hot path" and "less likely to be enabled"?
>>> I could echo KP's reasoning here. AFAICT, the correlation is that LSMs on
>>> hot path will give more performance overhead. In our use cases (Meta),
>>> we are very careful with "small" performance hits. 0.25% is significant
>>> overhead; 1% overhead will not fly without very good reasons (Do we
>>> have to do this? Are there any other alternatives?). If it is possible to
>>> achieve similar security on a different hook, we will not enable the hook on
>>> the hot path. For example, we may not enable socket_sendmsg, but try
>>> to disallow opening such sockets instead.
>> I'm not asking about BPF. I'm asking about the impact on other LSMs.
>> If you're talking strictly about BPF you need to say that. I'm all for
>> performance improvement. But as I've said before, it should be for all
>> the security modules, not just BPF.
> I don't think anything here is BPF-specific. The performance-security
> tradeoff should be the same for all LSMs. A hook on the hot path is more
> expensive than a hook on a cooler path. This is the same for all LSMs, no?

Yes. Alas, for some security schemes it isn't possible to avoid checks in
hot paths. The assumption that "hot paths are less likely to have LSM hooks
enabled" is not generally valid. The statement is question seems to imply
that it's OK to not optimize hot paths. Maybe I read it wrong. I will hold
off further comment until we see the next version.

>
> Thanks,
> Song


* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-02-06 18:50               ` Kees Cook
@ 2023-06-08  2:48                 ` KP Singh
  2023-06-13 21:43                   ` Paul Moore
  0 siblings, 1 reply; 26+ messages in thread
From: KP Singh @ 2023-06-08  2:48 UTC (permalink / raw)
  To: Kees Cook
  Cc: Casey Schaufler, Song Liu, linux-security-module, bpf, ast,
	daniel, jackmanb, renauld, paul, revest

On Mon, Feb 6, 2023 at 7:51 PM Kees Cook <keescook@chromium.org> wrote:
>
> On Mon, Feb 06, 2023 at 07:41:04PM +0100, KP Singh wrote:
> > On Mon, Feb 6, 2023 at 7:29 PM Casey Schaufler <casey@schaufler-ca.com> wrote:
> > >
> > > On 2/6/2023 9:48 AM, Song Liu wrote:
> > > > On Mon, Feb 6, 2023 at 8:29 AM Casey Schaufler <casey@schaufler-ca.com> wrote:
> > > >> On 2/6/2023 5:04 AM, KP Singh wrote:
> > > >>> On Fri, Jan 20, 2023 at 5:36 AM Kees Cook <keescook@chromium.org> wrote:
> > > >>>> On Fri, Jan 20, 2023 at 01:08:17AM +0100, KP Singh wrote:
> > > >>>>> The indirect calls are not really needed as one knows the addresses of
> > > >>> [...]
> > > >>>
> > > >>>>> +/*
> > > >>>>> + * Define static calls and static keys for each LSM hook.
> > > >>>>> + */
> > > >>>>> +
> > > >>>>> +#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...)                  \
> > > >>>>> +     DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM),             \
> > > >>>>> +                             *((RET(*)(__VA_ARGS__))NULL));          \
> > > >>>>> +     DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM));
> > > >>>> Hm, another place where we would benefit from having separated logic for
> > > >>>> "is it built?" and "is it enabled by default?" and we could use
> > > >>>> DEFINE_STATIC_KEY_MAYBE(). But, since we don't, I think we need to use
> > > >>>> DEFINE_STATIC_KEY_TRUE() here or else won't all the calls be
> > > >>>> out-of-line? (i.e. the default compiled state will be NOPs?) If we're
> > > >>>> trying to optimize for having LSMs, I think we should default to inline
> > > >>>> calls. (The machine code in the commit log seems to indicate that they
> > > >>>> are out of line -- it uses jumps.)
> > > >>>>
> > > >>> I should have added it in the commit description, actually we are
> > > >>> optimizing for "hot paths are less likely to have LSM hooks enabled"
> > > >>> (eg. socket_sendmsg).
> > > >> How did you come to that conclusion? Where is there a correlation between
> > > >> "hot path" and "less likely to be enabled"?
> > > > I could echo KP's reasoning here. AFAICT, the correlation is that LSMs on
> > > > hot path will give more performance overhead. In our use cases (Meta),
> > > > we are very careful with "small" performance hits. 0.25% is significant
> > > > overhead; 1% overhead will not fly without very good reasons (Do we
> > > > have to do this? Are there any other alternatives?). If it is possible to
> > > > achieve similar security on a different hook, we will not enable the hook on
> > > > the hot path. For example, we may not enable socket_sendmsg, but try
> > > > to disallow opening such sockets instead.
> > >
> > > I'm not asking about BPF. I'm asking about the impact on other LSMs.
> > > If you're talking strictly about BPF you need to say that. I'm all for
> > > performance improvement. But as I've said before, it should be for all
> > > the security modules, not just BPF.
> >
> > It's a trade-off that will work differently for different LSMs and
> > distros (based on the LSM they chose), hence the config option. I
> > even suggested this be behind CONFIG_EXPERT, which basically says
> > this:
> >
> >  "This option allows certain base kernel options and settings
> >  to be disabled or tweaked. This is for specialized
> >  environments which can tolerate a "non-standard" kernel.
> >  Only use this if you really know what you are doing."
>
> Using the DEFINE_STATIC_KEY_MAYBE() and static_branch_maybe() macros
> tied to a new CONFIG seems like it can give us a reasonable knob for
> in-line vs out-of-line calls.

Coming back to this after a while, as I finally got time to work on
this (work/personal downtime).

I am changing it to DEFINE_STATIC_KEY_TRUE in this patch, and to
DEFINE_STATIC_KEY_MAYBE, guarded by a config called
CONFIG_SECURITY_HOOK_LIKELY, in a subsequent one. I am letting it
default to yes, but distros can change it depending on their choice
of LSM and performance characteristics.
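
Roughly the shape I have in mind for the new option (help text to be
finalized in v2):

	config SECURITY_HOOK_LIKELY
		bool "LSM hooks are likely to be initialized"
		depends on SECURITY
		default y
		help
		  Controls the static key layout of the LSM hook call
		  sites: inline calls by default for kernels that keep
		  most hooks active, NOP (out-of-line) defaults for
		  kernels where most hooks stay disabled, e.g. with
		  only the BPF LSM enabled.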

>
> > > >>>  But I do see that there are LSMs that have these
> > > >>> enabled. Maybe we can put this behind a config option, possibly
> > > >>> depending on CONFIG_EXPERT?
> > > >> Help me, as the maintainer of one of those LSMs, understand why that would
> > > >> be a good idea.
> > > > IIUC, this is also from performance concerns. We would like to manage
> > > > the complexity at compile time for performance benefits.
> > >
> > > What complexity? What config option? I know that I'm slow, but it looks
> > > as if you're suggesting making the LSM infrastructure incredibly fragile
> > > and difficult to understand.
> >
> > I am sorry, but the LSM is a core piece of the kernel that currently
> > has significant unnecessary overheads (look at the numbers that I
> > posted). This is not making it fragile, it's making it performant;
> > such optimisations are everywhere in the kernel, and the LSM
> > infrastructure has somehow been neglected and is just catching up.
> > These are resources being wasted that could be saved.
>
> Let's just move forward to v2, which I think will look much cleaner. I
> think we can get to both maintainable code and run-time performance with
> this series.
>
> --
> Kees Cook


* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-06-08  2:48                 ` KP Singh
@ 2023-06-13 21:43                   ` Paul Moore
  2023-06-13 22:03                     ` KP Singh
  0 siblings, 1 reply; 26+ messages in thread
From: Paul Moore @ 2023-06-13 21:43 UTC (permalink / raw)
  To: KP Singh
  Cc: Kees Cook, Casey Schaufler, Song Liu, linux-security-module, bpf,
	ast, daniel, jackmanb, renauld, revest

On Wed, Jun 7, 2023 at 10:48 PM KP Singh <kpsingh@kernel.org> wrote:
> On Mon, Feb 6, 2023 at 7:51 PM Kees Cook <keescook@chromium.org> wrote:
> > On Mon, Feb 06, 2023 at 07:41:04PM +0100, KP Singh wrote:
> > > On Mon, Feb 6, 2023 at 7:29 PM Casey Schaufler <casey@schaufler-ca.com> wrote:
> > > >
> > > > On 2/6/2023 9:48 AM, Song Liu wrote:
> > > > > On Mon, Feb 6, 2023 at 8:29 AM Casey Schaufler <casey@schaufler-ca.com> wrote:
> > > > >> On 2/6/2023 5:04 AM, KP Singh wrote:
> > > > >>> On Fri, Jan 20, 2023 at 5:36 AM Kees Cook <keescook@chromium.org> wrote:
> > > > >>>> On Fri, Jan 20, 2023 at 01:08:17AM +0100, KP Singh wrote:
> > > > >>>>> The indirect calls are not really needed as one knows the addresses of
> > > > >>> [...]
> > > > >>>
> > > > >>>>> +/*
> > > > >>>>> + * Define static calls and static keys for each LSM hook.
> > > > >>>>> + */
> > > > >>>>> +
> > > > >>>>> +#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...)                  \
> > > > >>>>> +     DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM),             \
> > > > >>>>> +                             *((RET(*)(__VA_ARGS__))NULL));          \
> > > > >>>>> +     DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM));
> > > > >>>> Hm, another place where we would benefit from having separated logic for
> > > > >>>> "is it built?" and "is it enabled by default?" and we could use
> > > > >>>> DEFINE_STATIC_KEY_MAYBE(). But, since we don't, I think we need to use
> > > > >>>> DEFINE_STATIC_KEY_TRUE() here or else won't all the calls be
> > > > >>>> out-of-line? (i.e. the default compiled state will be NOPs?) If we're
> > > > >>>> trying to optimize for having LSMs, I think we should default to inline
> > > > >>>> calls. (The machine code in the commit log seems to indicate that they
> > > > >>>> are out of line -- it uses jumps.)
> > > > >>>>
> > > > >>> I should have added it in the commit description, actually we are
> > > > >>> optimizing for "hot paths are less likely to have LSM hooks enabled"
> > > > >>> (eg. socket_sendmsg).
> > > > >> How did you come to that conclusion? Where is there a correlation between
> > > > >> "hot path" and "less likely to be enabled"?
> > > > > I could echo KP's reasoning here. AFAICT, the correlation is that LSMs on
> > > > > hot path will give more performance overhead. In our use cases (Meta),
> > > > > we are very careful with "small" performance hits. 0.25% is significant
> > > > > overhead; 1% overhead will not fly without very good reasons (Do we
> > > > > have to do this? Are there any other alternatives?). If it is possible to
> > > > > achieve similar security on a different hook, we will not enable the hook on
> > > > > the hot path. For example, we may not enable socket_sendmsg, but try
> > > > > to disallow opening such sockets instead.
> > > >
> > > > I'm not asking about BPF. I'm asking about the impact on other LSMs.
> > > > If you're talking strictly about BPF you need to say that. I'm all for
> > > > performance improvement. But as I've said before, it should be for all
> > > > the security modules, not just BPF.
> > >
> > > It's a trade-off that will work differently for different LSMs and
> > > distros (based on the LSM they chose), hence the config option. I
> > > even suggested this be behind CONFIG_EXPERT, which basically says
> > > this:
> > >
> > >  "This option allows certain base kernel options and settings
> > >  to be disabled or tweaked. This is for specialized
> > >  environments which can tolerate a "non-standard" kernel.
> > >  Only use this if you really know what you are doing."
> >
> > Using the DEFINE_STATIC_KEY_MAYBE() and static_branch_maybe() macros
> > tied to a new CONFIG seems like it can give us a reasonable knob for
> > in-line vs out-of-line calls.
>
> Coming back to this after a while, as I finally got time to work on
> this (work/personal downtime).
>
> I am changing it to DEFINE_STATIC_KEY_TRUE in this patch, and to
> DEFINE_STATIC_KEY_MAYBE, guarded by a config called
> CONFIG_SECURITY_HOOK_LIKELY, in a subsequent one. I am letting it
> default to yes, but distros can change it depending on their choice
> of LSM and performance characteristics.

I'm still more curious about the correctness/isolation aspect that I
mused about back in Jan/Feb on your original posting.

-- 
paul-moore.com


* Re: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
  2023-06-13 21:43                   ` Paul Moore
@ 2023-06-13 22:03                     ` KP Singh
  0 siblings, 0 replies; 26+ messages in thread
From: KP Singh @ 2023-06-13 22:03 UTC (permalink / raw)
  To: Paul Moore
  Cc: Kees Cook, Casey Schaufler, Song Liu, linux-security-module, bpf,
	ast, daniel, jackmanb, renauld, revest

On Tue, Jun 13, 2023 at 11:43 PM Paul Moore <paul@paul-moore.com> wrote:
>
> On Wed, Jun 7, 2023 at 10:48 PM KP Singh <kpsingh@kernel.org> wrote:
> > On Mon, Feb 6, 2023 at 7:51 PM Kees Cook <keescook@chromium.org> wrote:
> > > On Mon, Feb 06, 2023 at 07:41:04PM +0100, KP Singh wrote:
> > > > On Mon, Feb 6, 2023 at 7:29 PM Casey Schaufler <casey@schaufler-ca.com> wrote:
> > > > >
> > > > > On 2/6/2023 9:48 AM, Song Liu wrote:
> > > > > > On Mon, Feb 6, 2023 at 8:29 AM Casey Schaufler <casey@schaufler-ca.com> wrote:
> > > > > >> On 2/6/2023 5:04 AM, KP Singh wrote:
> > > > > >>> On Fri, Jan 20, 2023 at 5:36 AM Kees Cook <keescook@chromium.org> wrote:
> > > > > >>>> On Fri, Jan 20, 2023 at 01:08:17AM +0100, KP Singh wrote:
> > > > > >>>>> The indirect calls are not really needed as one knows the addresses of
> > > > > >>> [...]
> > > > > >>>
> > > > > >>>>> +/*
> > > > > >>>>> + * Define static calls and static keys for each LSM hook.
> > > > > >>>>> + */
> > > > > >>>>> +
> > > > > >>>>> +#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...)                  \
> > > > > >>>>> +     DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM),             \
> > > > > >>>>> +                             *((RET(*)(__VA_ARGS__))NULL));          \
> > > > > >>>>> +     DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM));
> > > > > >>>> Hm, another place where we would benefit from having separated logic for
> > > > > >>>> "is it built?" and "is it enabled by default?" and we could use
> > > > > >>>> DEFINE_STATIC_KEY_MAYBE(). But, since we don't, I think we need to use
> > > > > >>>> DEFINE_STATIC_KEY_TRUE() here or else won't all the calls be
> > > > > >>>> out-of-line? (i.e. the default compiled state will be NOPs?) If we're
> > > > > >>>> trying to optimize for having LSMs, I think we should default to inline
> > > > > >>>> calls. (The machine code in the commit log seems to indicate that they
> > > > > >>>> are out of line -- it uses jumps.)
> > > > > >>>>
> > > > > >>> I should have added it in the commit description, actually we are
> > > > > >>> optimizing for "hot paths are less likely to have LSM hooks enabled"
> > > > > >>> (eg. socket_sendmsg).
> > > > > >> How did you come to that conclusion? Where is there a correlation between
> > > > > >> "hot path" and "less likely to be enabled"?
> > > > > > I could echo KP's reasoning here. AFAICT, the correlation is that LSMs on
> > > > > > hot path will give more performance overhead. In our use cases (Meta),
> > > > > > we are very careful with "small" performance hits. 0.25% is significant
> > > > > > overhead; 1% overhead will not fly without very good reasons (Do we
> > > > > > have to do this? Are there any other alternatives?). If it is possible to
> > > > > > achieve similar security on a different hook, we will not enable the hook on
> > > > > > the hot path. For example, we may not enable socket_sendmsg, but try
> > > > > > to disallow opening such sockets instead.
> > > > >
> > > > > I'm not asking about BPF. I'm asking about the impact on other LSMs.
> > > > > If you're talking strictly about BPF you need to say that. I'm all for
> > > > > performance improvement. But as I've said before, it should be for all
> > > > > the security modules, not just BPF.
> > > >
> > > > It's a trade-off that will work differently for different LSMs and
> > > > distros (based on the LSM they chose), hence the config option. I
> > > > even suggested this be behind CONFIG_EXPERT, which basically says
> > > > this:
> > > >
> > > >  "This option allows certain base kernel options and settings
> > > >  to be disabled or tweaked. This is for specialized
> > > >  environments which can tolerate a "non-standard" kernel.
> > > >  Only use this if you really know what you are doing."
> > >
> > > Using the DEFINE_STATIC_KEY_MAYBE() and static_branch_maybe() macros
> > > tied to a new CONFIG seems like it can give us a reasonable knob for
> > > in-line vs out-of-line calls.
> >
> > Coming back to this after a while, as I finally got time to work on
> > this (work/personal downtime).
> >
> > I am changing it to DEFINE_STATIC_KEY_TRUE in this patch, and to
> > DEFINE_STATIC_KEY_MAYBE, guarded by a config called
> > CONFIG_SECURITY_HOOK_LIKELY, in a subsequent one. I am letting it
> > default to yes, but distros can change it depending on their choice
> > of LSM and performance characteristics.
>
> I'm still more curious about the correctness/isolation aspect that I
> mused about back in Jan/Feb on your original posting.

Thanks, I put some clarifications there. I will post a v2 soon (TM),
though be aware that I have upcoming downtime due to surgery next week.

>
> --
> paul-moore.com



Thread overview: 26+ messages
2023-01-20  0:08 [PATCH RESEND bpf-next 0/4] Reduce overhead of LSMs with static calls KP Singh
2023-01-20  0:08 ` [PATCH RESEND bpf-next 1/4] kernel: Add helper macros for loop unrolling KP Singh
2023-01-20  3:48   ` Kees Cook
2023-01-20  0:08 ` [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs KP Singh
2023-01-20  4:04   ` Kees Cook
2023-01-20  7:33   ` kernel test robot
2023-01-20  9:50   ` kernel test robot
2023-01-20  9:50   ` kernel test robot
2023-01-20  0:08 ` [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls KP Singh
2023-01-20  4:36   ` Kees Cook
2023-01-20 18:26     ` Casey Schaufler
2023-02-06 13:04     ` KP Singh
2023-02-06 16:29       ` Casey Schaufler
2023-02-06 17:48         ` Song Liu
2023-02-06 18:19           ` KP Singh
2023-02-06 18:29           ` Casey Schaufler
2023-02-06 18:41             ` KP Singh
2023-02-06 18:50               ` Kees Cook
2023-06-08  2:48                 ` KP Singh
2023-06-13 21:43                   ` Paul Moore
2023-06-13 22:03                     ` KP Singh
2023-02-06 19:05             ` Song Liu
2023-02-06 20:11               ` Casey Schaufler
2023-01-20 10:10   ` kernel test robot
2023-01-20 10:41   ` kernel test robot
2023-01-20  0:08 ` [PATCH RESEND bpf-next 4/4] bpf: Only enable BPF LSM hooks when an LSM program is attached KP Singh
