* [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI)
@ 2020-02-20 17:52 KP Singh
  2020-02-20 17:52 ` [PATCH bpf-next v4 1/8] bpf: Introduce BPF_PROG_TYPE_LSM KP Singh
                   ` (9 more replies)
  0 siblings, 10 replies; 45+ messages in thread
From: KP Singh @ 2020-02-20 17:52 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner, Florent Revest,
	Brendan Jackman, Martin KaFai Lau, Song Liu, Yonghong Song,
	Serge E. Hallyn, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

# v3 -> v4

  https://lkml.org/lkml/2020/1/23/515

* Moved away from allocating a separate security_hook_heads and adding a
  special case in arch_prepare_bpf_trampoline. Instead, BPF fexit
  trampolines are called from the right place in the LSM hook and are
  toggled by static keys, based on the discussion in:

    https://lore.kernel.org/bpf/CAG48ez25mW+_oCxgCtbiGMX07g_ph79UOJa07h=o_6B6+Q-u5g@mail.gmail.com/

* Since the code does not deal with security_hook_heads anymore, it goes
  from "being a BPF LSM" to "BPF program attachment to LSM hooks".

* Added a new test case which ensures that the BPF programs' return value
  is reflected by the LSM hook.

# v2 -> v3 does not change the overall design and has some minor fixes:

  https://lkml.org/lkml/2020/1/15/843

* LSM_ORDER_LAST is introduced to represent the behaviour of the BPF LSM
* Fixed the inadvertent clobbering of the LSM Hook error codes
* Added GPL license requirement to the commit log
* The lsm_hook_idx is now the more conventional 0-based index
* Some changes were split into a separate patch ("Load btf_vmlinux only
  once per object")
  https://lore.kernel.org/bpf/20200117212825.11755-1-kpsingh@chromium.org/
* Addressed Andrii's feedback on the BTF implementation
* Documentation update for using generated vmlinux.h to simplify
  programs
* Rebase

# Changes since v1:

  https://lkml.org/lkml/2019/12/20/641

* Eliminate the requirement to maintain LSM hooks separately in
  security/bpf/hooks.h. Use BPF trampolines to dynamically allocate
  security hooks.
* Drop the use of securityfs, as bpftool provides the required
  introspection capabilities. Update the tests to use the bpf_skeleton
  and global variables.
* Use O_CLOEXEC anonymous fds to represent BPF attachment, in line with
  the other BPF programs, with the possibility of using bpf program
  pinning in the future to provide "permanent attachment".
* Drop the logic based on prog names for handling re-attachment.
* Drop bpf_lsm_event_output from this series and send it as a separate
  patch.

# Motivation

Google does analysis of rich runtime security data to detect and thwart
threats in real-time. Currently, this is done in custom kernel modules
but we would like to replace this with something that's upstream and
useful to others.

The current kernel infrastructure for providing telemetry (Audit, Perf
etc.) is disjoint from access enforcement (i.e. LSMs).  Augmenting the
information provided by audit requires kernel changes to audit, its
policy language and user-space components. Furthermore, building a MAC
policy based on the newly added telemetry data requires changes to
various LSMs and their respective policy languages.

This patchset allows BPF programs to be attached to LSM hooks. This
facilitates a unified and dynamic (not requiring re-compilation of the
kernel) audit and MAC policy.

# Why an LSM?

Linux Security Modules target security behaviours rather than the
kernel's API. For example, it's easy to miss a newly added system call
for executing processes (e.g. execve, execveat, etc.), but the LSM
framework ensures that all process executions trigger the relevant hooks
irrespective of how the process was executed.

Allowing users to implement LSM hooks at runtime also benefits the LSM
eco-system by enabling a quick feedback loop from the security community
about the kind of behaviours that the LSM Framework should be targeting.

# How does it work?

The patchset introduces a new eBPF (https://docs.cilium.io/en/v1.6/bpf/)
program type, BPF_PROG_TYPE_LSM, which can only be attached to LSM hooks.
Attachment requires CAP_SYS_ADMIN for loading eBPF programs and
CAP_MAC_ADMIN for modifying MAC policies.
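
Concretely, the series adds a new program type and attach type to the
BPF UAPI (as introduced in patch 1/8; existing enum entries elided):

enum bpf_prog_type {
	...
	BPF_PROG_TYPE_EXT,
	BPF_PROG_TYPE_LSM,
};

enum bpf_attach_type {
	...
	BPF_TRACE_FEXIT,
	BPF_LSM_MAC,
	__MAX_BPF_ATTACH_TYPE
};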

The eBPF programs are attached to nop functions (bpf_lsm_<name>) added
in the LSM hooks (when CONFIG_BPF_LSM is enabled) and are executed after
all the statically defined hooks (i.e. the ones declared by static LSMs,
e.g. SELinux, AppArmor, Smack etc.) allow the action. This also ensures
that the statically defined LSM hooks retain the behaviour of "being
read-only after init", i.e. __lsm_ro_after_init, and do not increase the
attack surface.

The branch into this nop function is guarded with a static key (jump
label) and is only taken when a BPF program is attached to the LSM hook.

e.g. for bprm_check_security:

int bpf_lsm_bprm_check_security(struct linux_binprm *bprm)
{
        return 0;
}

DEFINE_STATIC_KEY_FALSE(bpf_lsm_key_bprm_check_security);

// Run all static hooks for bprm_check_security and set RC
if (static_branch_unlikely(&bpf_lsm_key_bprm_check_security)) {
        if (RC == 0)
                RC = bpf_lsm_bprm_check_security(bprm);
}

Upon attachment, a BPF fexit trampoline is attached to the nop function
and the static key for the LSM hook is enabled. The trampoline has code
to handle the conversion from the signature of the hook to the BPF
context and allows the JIT'ed BPF program to be called as a C function
with the same arguments as the LSM hooks. If an attached eBPF program
returns an error (like -EPERM), the behaviour represented by the hook
is denied.
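
For illustration, enabling a hook upon attachment amounts to flipping
its static key (a minimal sketch; the helper name is hypothetical and
the actual enable/disable logic is added in patch 4/8):

#include <linux/jump_label.h>

DECLARE_STATIC_KEY_FALSE(bpf_lsm_key_bprm_check_security);

// Hypothetical helper: toggle the static key guarding a BPF LSM hook
// so the branch into bpf_lsm_<name> is only taken while a program is
// attached.
static void bpf_lsm_toggle_hook(bool enable)
{
	if (enable)
		static_branch_enable(&bpf_lsm_key_bprm_check_security);
	else
		static_branch_disable(&bpf_lsm_key_bprm_check_security);
}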

Audit logs can be written, in a format chosen by the eBPF program, to
the perf events buffer or to global eBPF variables or maps, and can be
further processed in user-space.
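
For example, a program can record what it observed in global variables
(backed by a map) that user-space reads via the skeleton. A minimal
sketch with illustrative names, assuming vmlinux.h and the libbpf
helpers that provide SEC()/BPF_PROG() (note that bpf_lsm_event_output
was split out of this series, as mentioned in the changelog above):

// Globals (backed by a map) that user-space reads via the skeleton.
int monitored_pid = 0;
int exec_count = 0;

SEC("lsm/bprm_check_security")
int BPF_PROG(audit_exec, struct linux_binprm *bprm)
{
	int pid = bpf_get_current_pid_tgid() >> 32;

	// Count executions by the process user-space asked us to watch.
	if (pid == monitored_pid)
		exec_count++;
	return 0;
}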

# BTF Based Design

The current design uses BTF
(https://facebookmicrosites.github.io/bpf/blog/2018/11/14/btf-enhancement.html,
https://lwn.net/Articles/803258/) which allows verifiable read-only
structure accesses by field names rather than fixed offsets. This allows
accessing the hook parameters using a dynamically created context which
provides a certain degree of ABI stability:


// Only declare the structure and fields intended to be used
// in the program
struct vm_area_struct {
  unsigned long vm_start;
} __attribute__((preserve_access_index));

// Declare the eBPF program mprotect_audit which attaches to
// the file_mprotect LSM hook and accepts three arguments.
SEC("lsm/file_mprotect")
int BPF_PROG(mprotect_audit, struct vm_area_struct *vma,
       unsigned long reqprot, unsigned long prot)
{
  unsigned long vm_start = vma->vm_start;

  return 0;
}

By relocating field offsets, BTF makes a large portion of kernel data
structures readily accessible across kernel versions without requiring a
large corpus of BPF helper functions or recompilation with every kernel
version. The BTF type information is also used by the BPF verifier to
validate memory accesses within the BPF program and to prevent arbitrary
writes to kernel memory.

The limitations of BTF compatibility (i.e. field renames, #defines and
changes to the signatures of LSM hooks) are described in BPF CO-RE
(http://vger.kernel.org/bpfconf2019_talks/bpf-core.pdf).

This design requires that the MAC policy (eBPF programs) be updated when
the inspected kernel structures change outside of BTF compatibility
guarantees. In practice, this is only required when a structure field
used by a current policy is removed (or renamed) or when the used LSM
hooks change. We expect the maintenance cost of these changes to be
acceptable as compared to the previous design
(https://lore.kernel.org/bpf/20190910115527.5235-1-kpsingh@chromium.org/).


# Usage Examples

A simple example and some documentation are included in the patchset.
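
For instance, loading and attaching the mprotect_audit program from
user-space with a libbpf-generated skeleton looks roughly like this (a
sketch; the skeleton header name and the auto-generated attach helpers
shown here are illustrative):

// Illustrative sketch: load and attach an LSM program using a
// libbpf-generated skeleton (requires CAP_SYS_ADMIN and CAP_MAC_ADMIN).
#include "lsm_mprotect_audit.skel.h"

int main(int argc, char **argv)
{
	struct lsm_mprotect_audit *skel;
	int err;

	skel = lsm_mprotect_audit__open_and_load();
	if (!skel)
		return 1;

	err = lsm_mprotect_audit__attach(skel);
	if (err)
		goto cleanup;

	// ... read audit data from skel->bss globals ...

cleanup:
	lsm_mprotect_audit__destroy(skel);
	return err;
}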

In order to better illustrate the capabilities of the framework, some
more advanced prototype code (not ready for review) has also been
published separately:

* Logging execution events (including environment variables and
  arguments)
https://github.com/sinkap/linux-krsi/blob/patch/v1/examples/samples/bpf/lsm_audit_env.c
* Detecting deletion of running executables:
https://github.com/sinkap/linux-krsi/blob/patch/v1/examples/samples/bpf/lsm_detect_exec_unlink.c
* Detection of writes to /proc/<pid>/mem:
https://github.com/sinkap/linux-krsi/blob/patch/v1/examples/samples/bpf/lsm_audit_env.c

We have updated Google's internal telemetry infrastructure and have
started deploying this LSM on our Linux Workstations. This gives us more
confidence in the real-world applications of such a system.


KP Singh (8):
  bpf: Introduce BPF_PROG_TYPE_LSM
  security: Refactor declaration of LSM hooks
  bpf: lsm: provide attachment points for BPF LSM programs
  bpf: lsm: Add support for enabling/disabling BPF hooks
  bpf: lsm: Implement attach, detach and execution
  tools/libbpf: Add support for BPF_PROG_TYPE_LSM
  bpf: lsm: Add selftests for BPF_PROG_TYPE_LSM
  bpf: lsm: Add Documentation

 Documentation/bpf/bpf_lsm.rst                 | 147 +++++
 Documentation/bpf/index.rst                   |   1 +
 MAINTAINERS                                   |   1 +
 arch/x86/net/bpf_jit_comp.c                   |  21 +-
 include/linux/bpf.h                           |   7 +
 include/linux/bpf_lsm.h                       |  66 ++
 include/linux/bpf_types.h                     |   4 +
 include/linux/lsm_hook_names.h                | 353 ++++++++++
 include/linux/lsm_hooks.h                     | 622 +-----------------
 include/uapi/linux/bpf.h                      |   2 +
 init/Kconfig                                  |  11 +
 kernel/bpf/Makefile                           |   1 +
 kernel/bpf/bpf_lsm.c                          |  88 +++
 kernel/bpf/btf.c                              |   3 +-
 kernel/bpf/syscall.c                          |  47 +-
 kernel/bpf/trampoline.c                       |  24 +-
 kernel/bpf/verifier.c                         |  19 +-
 kernel/trace/bpf_trace.c                      |  12 +-
 security/security.c                           |  35 +
 tools/include/uapi/linux/bpf.h                |   2 +
 tools/lib/bpf/bpf.c                           |   3 +-
 tools/lib/bpf/libbpf.c                        |  46 +-
 tools/lib/bpf/libbpf.h                        |   4 +
 tools/lib/bpf/libbpf.map                      |   3 +
 tools/lib/bpf/libbpf_probes.c                 |   1 +
 tools/testing/selftests/bpf/lsm_helpers.h     |  19 +
 .../selftests/bpf/prog_tests/lsm_mprotect.c   |  96 +++
 .../selftests/bpf/progs/lsm_mprotect_audit.c  |  48 ++
 .../selftests/bpf/progs/lsm_mprotect_mac.c    |  53 ++
 29 files changed, 1085 insertions(+), 654 deletions(-)
 create mode 100644 Documentation/bpf/bpf_lsm.rst
 create mode 100644 include/linux/bpf_lsm.h
 create mode 100644 include/linux/lsm_hook_names.h
 create mode 100644 kernel/bpf/bpf_lsm.c
 create mode 100644 tools/testing/selftests/bpf/lsm_helpers.h
 create mode 100644 tools/testing/selftests/bpf/prog_tests/lsm_mprotect.c
 create mode 100644 tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c
 create mode 100644 tools/testing/selftests/bpf/progs/lsm_mprotect_mac.c

-- 
2.20.1



* [PATCH bpf-next v4 1/8] bpf: Introduce BPF_PROG_TYPE_LSM
  2020-02-20 17:52 [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) KP Singh
@ 2020-02-20 17:52 ` KP Singh
  2020-02-20 17:52 ` [PATCH bpf-next v4 2/8] security: Refactor declaration of LSM hooks KP Singh
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 45+ messages in thread
From: KP Singh @ 2020-02-20 17:52 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner, Florent Revest,
	Brendan Jackman, Martin KaFai Lau, Song Liu, Yonghong Song,
	Serge E. Hallyn, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

Introduce types and configs for bpf programs that can be attached to
LSM hooks. The programs can be enabled by the config option
CONFIG_BPF_LSM.

Signed-off-by: KP Singh <kpsingh@google.com>
Reviewed-by: Brendan Jackman <jackmanb@google.com>
Reviewed-by: Florent Revest <revest@google.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
---
 MAINTAINERS                    |  1 +
 include/linux/bpf.h            |  3 +++
 include/linux/bpf_types.h      |  4 ++++
 include/uapi/linux/bpf.h       |  2 ++
 init/Kconfig                   | 11 +++++++++++
 kernel/bpf/Makefile            |  1 +
 kernel/bpf/bpf_lsm.c           | 17 +++++++++++++++++
 kernel/trace/bpf_trace.c       | 12 ++++++------
 tools/include/uapi/linux/bpf.h |  2 ++
 tools/lib/bpf/libbpf_probes.c  |  1 +
 10 files changed, 48 insertions(+), 6 deletions(-)
 create mode 100644 kernel/bpf/bpf_lsm.c

diff --git a/MAINTAINERS b/MAINTAINERS
index a0d86490c2c6..0f603e8928d5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3147,6 +3147,7 @@ R:	Martin KaFai Lau <kafai@fb.com>
 R:	Song Liu <songliubraving@fb.com>
 R:	Yonghong Song <yhs@fb.com>
 R:	Andrii Nakryiko <andriin@fb.com>
+R:	KP Singh <kpsingh@chromium.org>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 49b1a70e12c8..c647cef3f4c1 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1429,6 +1429,9 @@ extern const struct bpf_func_proto bpf_strtoul_proto;
 extern const struct bpf_func_proto bpf_tcp_sock_proto;
 extern const struct bpf_func_proto bpf_jiffies64_proto;
 
+const struct bpf_func_proto *bpf_tracing_func_proto(
+	enum bpf_func_id func_id, const struct bpf_prog *prog);
+
 /* Shared helpers among cBPF and eBPF. */
 void bpf_user_rnd_init_once(void);
 u64 bpf_user_rnd_u32(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index c81d4ece79a4..ba0c2d56f8a3 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -70,6 +70,10 @@ BPF_PROG_TYPE(BPF_PROG_TYPE_STRUCT_OPS, bpf_struct_ops,
 	      void *, void *)
 BPF_PROG_TYPE(BPF_PROG_TYPE_EXT, bpf_extension,
 	      void *, void *)
+#ifdef CONFIG_BPF_LSM
+BPF_PROG_TYPE(BPF_PROG_TYPE_LSM, lsm,
+	       void *, void *)
+#endif /* CONFIG_BPF_LSM */
 #endif
 
 BPF_MAP_TYPE(BPF_MAP_TYPE_ARRAY, array_map_ops)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index f1d74a2bd234..2f1e24a8c4a4 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -181,6 +181,7 @@ enum bpf_prog_type {
 	BPF_PROG_TYPE_TRACING,
 	BPF_PROG_TYPE_STRUCT_OPS,
 	BPF_PROG_TYPE_EXT,
+	BPF_PROG_TYPE_LSM,
 };
 
 enum bpf_attach_type {
@@ -210,6 +211,7 @@ enum bpf_attach_type {
 	BPF_TRACE_RAW_TP,
 	BPF_TRACE_FENTRY,
 	BPF_TRACE_FEXIT,
+	BPF_LSM_MAC,
 	__MAX_BPF_ATTACH_TYPE
 };
 
diff --git a/init/Kconfig b/init/Kconfig
index 452bc1835cd4..7d5db2982875 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1617,6 +1617,17 @@ config KALLSYMS_BASE_RELATIVE
 # end of the "standard kernel features (expert users)" menu
 
 # syscall, maps, verifier
+
+config BPF_LSM
+	bool "LSM Instrumentation with BPF"
+	depends on BPF_SYSCALL
+	help
+	  This enables instrumentation of the security hooks with eBPF programs.
+	  The programs are executed after all the statically defined LSM hooks
+	  allow the action.
+
+	  If you are unsure how to answer this question, answer N.
+
 config BPF_SYSCALL
 	bool "Enable bpf() system call"
 	select BPF
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 046ce5d98033..f2d7be596966 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -29,4 +29,5 @@ obj-$(CONFIG_DEBUG_INFO_BTF) += sysfs_btf.o
 endif
 ifeq ($(CONFIG_BPF_JIT),y)
 obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
+obj-${CONFIG_BPF_LSM} += bpf_lsm.o
 endif
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
new file mode 100644
index 000000000000..affb6941622e
--- /dev/null
+++ b/kernel/bpf/bpf_lsm.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright 2019 Google LLC.
+ */
+
+#include <linux/filter.h>
+#include <linux/bpf.h>
+#include <linux/btf.h>
+
+const struct bpf_prog_ops lsm_prog_ops = {
+};
+
+const struct bpf_verifier_ops lsm_verifier_ops = {
+	.get_func_proto = bpf_tracing_func_proto,
+	.is_valid_access = btf_ctx_access,
+};
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 4ddd5ac46094..a69cb8a0042d 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -781,8 +781,8 @@ static const struct bpf_func_proto bpf_send_signal_thread_proto = {
 	.arg1_type	= ARG_ANYTHING,
 };
 
-static const struct bpf_func_proto *
-tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+const struct bpf_func_proto *
+bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
 	switch (func_id) {
 	case BPF_FUNC_map_lookup_elem:
@@ -865,7 +865,7 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_override_return_proto;
 #endif
 	default:
-		return tracing_func_proto(func_id, prog);
+		return bpf_tracing_func_proto(func_id, prog);
 	}
 }
 
@@ -975,7 +975,7 @@ tp_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_get_stack:
 		return &bpf_get_stack_proto_tp;
 	default:
-		return tracing_func_proto(func_id, prog);
+		return bpf_tracing_func_proto(func_id, prog);
 	}
 }
 
@@ -1041,7 +1041,7 @@ pe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_perf_prog_read_value:
 		return &bpf_perf_prog_read_value_proto;
 	default:
-		return tracing_func_proto(func_id, prog);
+		return bpf_tracing_func_proto(func_id, prog);
 	}
 }
 
@@ -1168,7 +1168,7 @@ raw_tp_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_get_stack:
 		return &bpf_get_stack_proto_raw_tp;
 	default:
-		return tracing_func_proto(func_id, prog);
+		return bpf_tracing_func_proto(func_id, prog);
 	}
 }
 
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index f1d74a2bd234..2f1e24a8c4a4 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -181,6 +181,7 @@ enum bpf_prog_type {
 	BPF_PROG_TYPE_TRACING,
 	BPF_PROG_TYPE_STRUCT_OPS,
 	BPF_PROG_TYPE_EXT,
+	BPF_PROG_TYPE_LSM,
 };
 
 enum bpf_attach_type {
@@ -210,6 +211,7 @@ enum bpf_attach_type {
 	BPF_TRACE_RAW_TP,
 	BPF_TRACE_FENTRY,
 	BPF_TRACE_FEXIT,
+	BPF_LSM_MAC,
 	__MAX_BPF_ATTACH_TYPE
 };
 
diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
index b782ebef6ac9..2c92059c0c90 100644
--- a/tools/lib/bpf/libbpf_probes.c
+++ b/tools/lib/bpf/libbpf_probes.c
@@ -108,6 +108,7 @@ probe_load(enum bpf_prog_type prog_type, const struct bpf_insn *insns,
 	case BPF_PROG_TYPE_TRACING:
 	case BPF_PROG_TYPE_STRUCT_OPS:
 	case BPF_PROG_TYPE_EXT:
+	case BPF_PROG_TYPE_LSM:
 	default:
 		break;
 	}
-- 
2.20.1



* [PATCH bpf-next v4 2/8] security: Refactor declaration of LSM hooks
  2020-02-20 17:52 [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) KP Singh
  2020-02-20 17:52 ` [PATCH bpf-next v4 1/8] bpf: Introduce BPF_PROG_TYPE_LSM KP Singh
@ 2020-02-20 17:52 ` KP Singh
  2020-02-20 17:52 ` [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs KP Singh
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 45+ messages in thread
From: KP Singh @ 2020-02-20 17:52 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner, Florent Revest,
	Brendan Jackman, Martin KaFai Lau, Song Liu, Yonghong Song,
	Serge E. Hallyn, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

The information about the different types of LSM hooks is scattered
across two locations, i.e. union security_list_options and
struct security_hook_heads. Rather than duplicating this information
even further for BPF_PROG_TYPE_LSM, define all the hooks with the
LSM_HOOK macro in lsm_hook_names.h, which is then used to generate all
the data structures required by the LSM framework.
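
Both existing structures can then be generated from the single hook
list, e.g. (this is the pattern used in the diff below):

union security_list_options {
	#define LSM_HOOK(RET, NAME, ...) RET (*NAME)(__VA_ARGS__);
	#include <linux/lsm_hook_names.h>
	#undef LSM_HOOK
};

struct security_hook_heads {
	#define LSM_HOOK(RET, NAME, ...) struct hlist_head NAME;
	#include <linux/lsm_hook_names.h>
	#undef LSM_HOOK
};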

Signed-off-by: KP Singh <kpsingh@google.com>
---
 include/linux/lsm_hook_names.h | 353 +++++++++++++++++++
 include/linux/lsm_hooks.h      | 622 +--------------------------------
 2 files changed, 359 insertions(+), 616 deletions(-)
 create mode 100644 include/linux/lsm_hook_names.h

diff --git a/include/linux/lsm_hook_names.h b/include/linux/lsm_hook_names.h
new file mode 100644
index 000000000000..1137a3e70bf5
--- /dev/null
+++ b/include/linux/lsm_hook_names.h
@@ -0,0 +1,353 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Linux Security Module Hook declarations.
+ *
+ * Copyright (C) 2001 WireX Communications, Inc <chris@wirex.com>
+ * Copyright (C) 2001 Greg Kroah-Hartman <greg@kroah.com>
+ * Copyright (C) 2001 Networks Associates Technology, Inc <ssmalley@nai.com>
+ * Copyright (C) 2001 James Morris <jmorris@intercode.com.au>
+ * Copyright (C) 2001 Silicon Graphics, Inc. (Trust Technology Group)
+ * Copyright (C) 2015 Intel Corporation.
+ * Copyright (C) 2015 Casey Schaufler <casey@schaufler-ca.com>
+ * Copyright (C) 2016 Mellanox Techonologies
+ * Copyright 2019 Google LLC.
+ */
+
+/* The macro LSM_HOOK is used to define the data structures required by
+ * the LSM framework using the pattern:
+ *
+ * struct security_hook_heads {
+ *   #define LSM_HOOK(RET, NAME, ...) struct hlist_head NAME;
+ *   #include <linux/lsm_hook_names.h>
+ *   #undef LSM_HOOK
+ * };
+ */
+LSM_HOOK(int, binder_set_context_mgr, struct task_struct *mgr)
+LSM_HOOK(int, binder_transaction, struct task_struct *from,
+	 struct task_struct *to)
+LSM_HOOK(int, binder_transfer_binder, struct task_struct *from,
+	 struct task_struct *to)
+LSM_HOOK(int, binder_transfer_file, struct task_struct *from,
+	 struct task_struct *to, struct file *file)
+LSM_HOOK(int, ptrace_access_check, struct task_struct *child, unsigned int mode)
+LSM_HOOK(int, ptrace_traceme, struct task_struct *parent)
+LSM_HOOK(int, capget, struct task_struct *target, kernel_cap_t *effective,
+	 kernel_cap_t *inheritable, kernel_cap_t *permitted)
+LSM_HOOK(int, capset, struct cred *new, const struct cred *old,
+	 const kernel_cap_t *effective, const kernel_cap_t *inheritable,
+	 const kernel_cap_t *permitted)
+LSM_HOOK(int, capable, const struct cred *cred, struct user_namespace *ns,
+	 int cap, unsigned int opts)
+LSM_HOOK(int, quotactl, int cmds, int type, int id, struct super_block *sb)
+LSM_HOOK(int, quota_on, struct dentry *dentry)
+LSM_HOOK(int, syslog, int type)
+LSM_HOOK(int, settime, const struct timespec64 *ts, const struct timezone *tz)
+LSM_HOOK(int, vm_enough_memory, struct mm_struct *mm, long pages)
+LSM_HOOK(int, bprm_set_creds, struct linux_binprm *bprm)
+LSM_HOOK(int, bprm_check_security, struct linux_binprm *bprm)
+LSM_HOOK(void, bprm_committing_creds, struct linux_binprm *bprm)
+LSM_HOOK(void, bprm_committed_creds, struct linux_binprm *bprm)
+LSM_HOOK(int, fs_context_dup, struct fs_context *fc, struct fs_context *src_sc)
+LSM_HOOK(int, fs_context_parse_param, struct fs_context *fc,
+	 struct fs_parameter *param)
+LSM_HOOK(int, sb_alloc_security, struct super_block *sb)
+LSM_HOOK(void, sb_free_security, struct super_block *sb)
+LSM_HOOK(void, sb_free_mnt_opts, void *mnt_opts)
+LSM_HOOK(int, sb_eat_lsm_opts, char *orig, void **mnt_opts)
+LSM_HOOK(int, sb_remount, struct super_block *sb, void *mnt_opts)
+LSM_HOOK(int, sb_kern_mount, struct super_block *sb)
+LSM_HOOK(int, sb_show_options, struct seq_file *m, struct super_block *sb)
+LSM_HOOK(int, sb_statfs, struct dentry *dentry)
+LSM_HOOK(int, sb_mount, const char *dev_name, const struct path *path,
+	 const char *type, unsigned long flags, void *data)
+LSM_HOOK(int, sb_umount, struct vfsmount *mnt, int flags)
+LSM_HOOK(int, sb_pivotroot, const struct path *old_path,
+	 const struct path *new_path)
+LSM_HOOK(int, sb_set_mnt_opts, struct super_block *sb, void *mnt_opts,
+	 unsigned long kern_flags, unsigned long *set_kern_flags)
+LSM_HOOK(int, sb_clone_mnt_opts, const struct super_block *oldsb,
+	 struct super_block *newsb, unsigned long kern_flags,
+	 unsigned long *set_kern_flags)
+LSM_HOOK(int, sb_add_mnt_opt, const char *option, const char *val, int len,
+	 void **mnt_opts)
+LSM_HOOK(int, move_mount, const struct path *from_path,
+	 const struct path *to_path)
+LSM_HOOK(int, dentry_init_security, struct dentry *dentry, int mode,
+	 const struct qstr *name, void **ctx, u32 *ctxlen)
+LSM_HOOK(int, dentry_create_files_as, struct dentry *dentry, int mode,
+	 struct qstr *name, const struct cred *old, struct cred *new)
+#ifdef CONFIG_SECURITY_PATH
+LSM_HOOK(int, path_unlink, const struct path *dir, struct dentry *dentry)
+LSM_HOOK(int, path_mkdir, const struct path *dir, struct dentry *dentry,
+	 umode_t mode)
+LSM_HOOK(int, path_rmdir, const struct path *dir, struct dentry *dentry)
+LSM_HOOK(int, path_mknod, const struct path *dir, struct dentry *dentry,
+	 umode_t mode, unsigned int dev)
+LSM_HOOK(int, path_truncate, const struct path *path)
+LSM_HOOK(int, path_symlink, const struct path *dir, struct dentry *dentry,
+	 const char *old_name)
+LSM_HOOK(int, path_link, struct dentry *old_dentry, const struct path *new_dir,
+	 struct dentry *new_dentry)
+LSM_HOOK(int, path_rename, const struct path *old_dir,
+	 struct dentry *old_dentry, const struct path *new_dir,
+	 struct dentry *new_dentry)
+LSM_HOOK(int, path_chmod, const struct path *path, umode_t mode)
+LSM_HOOK(int, path_chown, const struct path *path, kuid_t uid, kgid_t gid)
+LSM_HOOK(int, path_chroot, const struct path *path)
+#endif
+
+/* Needed for inode based security check */
+LSM_HOOK(int, path_notify, const struct path *path, u64 mask,
+	 unsigned int obj_type)
+LSM_HOOK(int, inode_alloc_security, struct inode *inode)
+LSM_HOOK(void, inode_free_security, struct inode *inode)
+LSM_HOOK(int, inode_init_security, struct inode *inode, struct inode *dir,
+	 const struct qstr *qstr, const char **name, void **value, size_t *len)
+LSM_HOOK(int, inode_create, struct inode *dir, struct dentry *dentry,
+	 umode_t mode)
+LSM_HOOK(int, inode_link, struct dentry *old_dentry, struct inode *dir,
+	 struct dentry *new_dentry)
+LSM_HOOK(int, inode_unlink, struct inode *dir, struct dentry *dentry)
+LSM_HOOK(int, inode_symlink, struct inode *dir, struct dentry *dentry,
+	 const char *old_name)
+LSM_HOOK(int, inode_mkdir, struct inode *dir, struct dentry *dentry,
+	 umode_t mode)
+LSM_HOOK(int, inode_rmdir, struct inode *dir, struct dentry *dentry)
+LSM_HOOK(int, inode_mknod, struct inode *dir, struct dentry *dentry,
+	 umode_t mode, dev_t dev)
+LSM_HOOK(int, inode_rename, struct inode *old_dir, struct dentry *old_dentry,
+	 struct inode *new_dir, struct dentry *new_dentry)
+LSM_HOOK(int, inode_readlink, struct dentry *dentry)
+LSM_HOOK(int, inode_follow_link, struct dentry *dentry, struct inode *inode,
+	 bool rcu)
+LSM_HOOK(int, inode_permission, struct inode *inode, int mask)
+LSM_HOOK(int, inode_setattr, struct dentry *dentry, struct iattr *attr)
+LSM_HOOK(int, inode_getattr, const struct path *path)
+LSM_HOOK(int, inode_setxattr, struct dentry *dentry, const char *name,
+	 const void *value, size_t size, int flags)
+LSM_HOOK(void, inode_post_setxattr, struct dentry *dentry, const char *name,
+	 const void *value, size_t size, int flags)
+LSM_HOOK(int, inode_getxattr, struct dentry *dentry, const char *name)
+LSM_HOOK(int, inode_listxattr, struct dentry *dentry)
+LSM_HOOK(int, inode_removexattr, struct dentry *dentry, const char *name)
+LSM_HOOK(int, inode_need_killpriv, struct dentry *dentry)
+LSM_HOOK(int, inode_killpriv, struct dentry *dentry)
+LSM_HOOK(int, inode_getsecurity, struct inode *inode, const char *name,
+	 void **buffer, bool alloc)
+LSM_HOOK(int, inode_setsecurity, struct inode *inode, const char *name,
+	 const void *value, size_t size, int flags)
+LSM_HOOK(int, inode_listsecurity, struct inode *inode, char *buffer,
+	 size_t buffer_size)
+LSM_HOOK(void, inode_getsecid, struct inode *inode, u32 *secid)
+LSM_HOOK(int, inode_copy_up, struct dentry *src, struct cred **new)
+LSM_HOOK(int, inode_copy_up_xattr, const char *name)
+LSM_HOOK(int, kernfs_init_security, struct kernfs_node *kn_dir,
+	 struct kernfs_node *kn)
+LSM_HOOK(int, file_permission, struct file *file, int mask)
+LSM_HOOK(int, file_alloc_security, struct file *file)
+LSM_HOOK(void, file_free_security, struct file *file)
+LSM_HOOK(int, file_ioctl, struct file *file, unsigned int cmd,
+	 unsigned long arg)
+LSM_HOOK(int, mmap_addr, unsigned long addr)
+LSM_HOOK(int, mmap_file, struct file *file, unsigned long reqprot,
+	 unsigned long prot, unsigned long flags)
+LSM_HOOK(int, file_mprotect, struct vm_area_struct *vma, unsigned long reqprot,
+	 unsigned long prot)
+LSM_HOOK(int, file_lock, struct file *file, unsigned int cmd)
+LSM_HOOK(int, file_fcntl, struct file *file, unsigned int cmd,
+	 unsigned long arg)
+LSM_HOOK(void, file_set_fowner, struct file *file)
+LSM_HOOK(int, file_send_sigiotask, struct task_struct *tsk,
+	 struct fown_struct *fown, int sig)
+LSM_HOOK(int, file_receive, struct file *file)
+LSM_HOOK(int, file_open, struct file *file)
+LSM_HOOK(int, task_alloc, struct task_struct *task, unsigned long clone_flags)
+LSM_HOOK(void, task_free, struct task_struct *task)
+LSM_HOOK(int, cred_alloc_blank, struct cred *cred, gfp_t gfp)
+LSM_HOOK(void, cred_free, struct cred *cred)
+LSM_HOOK(int, cred_prepare, struct cred *new, const struct cred *old, gfp_t gfp)
+LSM_HOOK(void, cred_transfer, struct cred *new, const struct cred *old)
+LSM_HOOK(void, cred_getsecid, const struct cred *c, u32 *secid)
+LSM_HOOK(int, kernel_act_as, struct cred *new, u32 secid)
+LSM_HOOK(int, kernel_create_files_as, struct cred *new, struct inode *inode)
+LSM_HOOK(int, kernel_module_request, char *kmod_name)
+LSM_HOOK(int, kernel_load_data, enum kernel_load_data_id id)
+LSM_HOOK(int, kernel_read_file, struct file *file, enum kernel_read_file_id id)
+LSM_HOOK(int, kernel_post_read_file, struct file *file, char *buf, loff_t size,
+	 enum kernel_read_file_id id)
+LSM_HOOK(int, task_fix_setuid, struct cred *new, const struct cred *old,
+	 int flags)
+LSM_HOOK(int, task_setpgid, struct task_struct *p, pid_t pgid)
+LSM_HOOK(int, task_getpgid, struct task_struct *p)
+LSM_HOOK(int, task_getsid, struct task_struct *p)
+LSM_HOOK(void, task_getsecid, struct task_struct *p, u32 *secid)
+LSM_HOOK(int, task_setnice, struct task_struct *p, int nice)
+LSM_HOOK(int, task_setioprio, struct task_struct *p, int ioprio)
+LSM_HOOK(int, task_getioprio, struct task_struct *p)
+LSM_HOOK(int, task_prlimit, const struct cred *cred, const struct cred *tcred,
+	 unsigned int flags)
+LSM_HOOK(int, task_setrlimit, struct task_struct *p, unsigned int resource,
+	 struct rlimit *new_rlim)
+LSM_HOOK(int, task_setscheduler, struct task_struct *p)
+LSM_HOOK(int, task_getscheduler, struct task_struct *p)
+LSM_HOOK(int, task_movememory, struct task_struct *p)
+LSM_HOOK(int, task_kill, struct task_struct *p, struct kernel_siginfo *info,
+	 int sig, const struct cred *cred)
+LSM_HOOK(int, task_prctl, int option, unsigned long arg2, unsigned long arg3,
+	 unsigned long arg4, unsigned long arg5)
+LSM_HOOK(void, task_to_inode, struct task_struct *p, struct inode *inode)
+LSM_HOOK(int, ipc_permission, struct kern_ipc_perm *ipcp, short flag)
+LSM_HOOK(void, ipc_getsecid, struct kern_ipc_perm *ipcp, u32 *secid)
+LSM_HOOK(int, msg_msg_alloc_security, struct msg_msg *msg)
+LSM_HOOK(void, msg_msg_free_security, struct msg_msg *msg)
+LSM_HOOK(int, msg_queue_alloc_security, struct kern_ipc_perm *perm)
+LSM_HOOK(void, msg_queue_free_security, struct kern_ipc_perm *perm)
+LSM_HOOK(int, msg_queue_associate, struct kern_ipc_perm *perm, int msqflg)
+LSM_HOOK(int, msg_queue_msgctl, struct kern_ipc_perm *perm, int cmd)
+LSM_HOOK(int, msg_queue_msgsnd, struct kern_ipc_perm *perm, struct msg_msg *msg,
+	 int msqflg)
+LSM_HOOK(int, msg_queue_msgrcv, struct kern_ipc_perm *perm, struct msg_msg *msg,
+	 struct task_struct *target, long type, int mode)
+LSM_HOOK(int, shm_alloc_security, struct kern_ipc_perm *perm)
+LSM_HOOK(void, shm_free_security, struct kern_ipc_perm *perm)
+LSM_HOOK(int, shm_associate, struct kern_ipc_perm *perm, int shmflg)
+LSM_HOOK(int, shm_shmctl, struct kern_ipc_perm *perm, int cmd)
+LSM_HOOK(int, shm_shmat, struct kern_ipc_perm *perm, char __user *shmaddr,
+	 int shmflg)
+LSM_HOOK(int, sem_alloc_security, struct kern_ipc_perm *perm)
+LSM_HOOK(void, sem_free_security, struct kern_ipc_perm *perm)
+LSM_HOOK(int, sem_associate, struct kern_ipc_perm *perm, int semflg)
+LSM_HOOK(int, sem_semctl, struct kern_ipc_perm *perm, int cmd)
+LSM_HOOK(int, sem_semop, struct kern_ipc_perm *perm, struct sembuf *sops,
+	 unsigned nsops, int alter)
+LSM_HOOK(int, netlink_send, struct sock *sk, struct sk_buff *skb)
+LSM_HOOK(void, d_instantiate, struct dentry *dentry, struct inode *inode)
+LSM_HOOK(int, getprocattr, struct task_struct *p, char *name, char **value)
+LSM_HOOK(int, setprocattr, const char *name, void *value, size_t size)
+LSM_HOOK(int, ismaclabel, const char *name)
+LSM_HOOK(int, secid_to_secctx, u32 secid, char **secdata, u32 *seclen)
+LSM_HOOK(int, secctx_to_secid, const char *secdata, u32 seclen, u32 *secid)
+LSM_HOOK(void, release_secctx, char *secdata, u32 seclen)
+LSM_HOOK(void, inode_invalidate_secctx, struct inode *inode)
+LSM_HOOK(int, inode_notifysecctx, struct inode *inode, void *ctx, u32 ctxlen)
+LSM_HOOK(int, inode_setsecctx, struct dentry *dentry, void *ctx, u32 ctxlen)
+LSM_HOOK(int, inode_getsecctx, struct inode *inode, void **ctx, u32 *ctxlen)
+#ifdef CONFIG_SECURITY_NETWORK
+LSM_HOOK(int, unix_stream_connect, struct sock *sock, struct sock *other,
+	 struct sock *newsk)
+LSM_HOOK(int, unix_may_send, struct socket *sock, struct socket *other)
+LSM_HOOK(int, socket_create, int family, int type, int protocol, int kern)
+LSM_HOOK(int, socket_post_create, struct socket *sock, int family, int type,
+	 int protocol, int kern)
+LSM_HOOK(int, socket_socketpair, struct socket *socka, struct socket *sockb)
+LSM_HOOK(int, socket_bind, struct socket *sock, struct sockaddr *address,
+	 int addrlen)
+LSM_HOOK(int, socket_connect, struct socket *sock, struct sockaddr *address,
+	 int addrlen)
+LSM_HOOK(int, socket_listen, struct socket *sock, int backlog)
+LSM_HOOK(int, socket_accept, struct socket *sock, struct socket *newsock)
+LSM_HOOK(int, socket_sendmsg, struct socket *sock, struct msghdr *msg, int size)
+LSM_HOOK(int, socket_recvmsg, struct socket *sock, struct msghdr *msg, int size,
+	 int flags)
+LSM_HOOK(int, socket_getsockname, struct socket *sock)
+LSM_HOOK(int, socket_getpeername, struct socket *sock)
+LSM_HOOK(int, socket_getsockopt, struct socket *sock, int level, int optname)
+LSM_HOOK(int, socket_setsockopt, struct socket *sock, int level, int optname)
+LSM_HOOK(int, socket_shutdown, struct socket *sock, int how)
+LSM_HOOK(int, socket_sock_rcv_skb, struct sock *sk, struct sk_buff *skb)
+LSM_HOOK(int, socket_getpeersec_stream, struct socket *sock,
+	 char __user *optval, int __user *optlen, unsigned len)
+LSM_HOOK(int, socket_getpeersec_dgram, struct socket *sock, struct sk_buff *skb,
+	 u32 *secid)
+LSM_HOOK(int, sk_alloc_security, struct sock *sk, int family, gfp_t priority)
+LSM_HOOK(void, sk_free_security, struct sock *sk)
+LSM_HOOK(void, sk_clone_security, const struct sock *sk, struct sock *newsk)
+LSM_HOOK(void, sk_getsecid, struct sock *sk, u32 *secid)
+LSM_HOOK(void, sock_graft, struct sock *sk, struct socket *parent)
+LSM_HOOK(int, inet_conn_request, struct sock *sk, struct sk_buff *skb,
+	 struct request_sock *req)
+LSM_HOOK(void, inet_csk_clone, struct sock *newsk,
+	 const struct request_sock *req)
+LSM_HOOK(void, inet_conn_established, struct sock *sk, struct sk_buff *skb)
+LSM_HOOK(int, secmark_relabel_packet, u32 secid)
+LSM_HOOK(void, secmark_refcount_inc, void)
+LSM_HOOK(void, secmark_refcount_dec, void)
+LSM_HOOK(void, req_classify_flow, const struct request_sock *req,
+	 struct flowi *fl)
+LSM_HOOK(int, tun_dev_alloc_security, void **security)
+LSM_HOOK(void, tun_dev_free_security, void *security)
+LSM_HOOK(int, tun_dev_create, void)
+LSM_HOOK(int, tun_dev_attach_queue, void *security)
+LSM_HOOK(int, tun_dev_attach, struct sock *sk, void *security)
+LSM_HOOK(int, tun_dev_open, void *security)
+LSM_HOOK(int, sctp_assoc_request, struct sctp_endpoint *ep, struct sk_buff *skb)
+LSM_HOOK(int, sctp_bind_connect, struct sock *sk, int optname,
+	 struct sockaddr *address, int addrlen)
+LSM_HOOK(void, sctp_sk_clone, struct sctp_endpoint *ep, struct sock *sk,
+	 struct sock *newsk)
+#endif /* CONFIG_SECURITY_NETWORK */
+
+#ifdef CONFIG_SECURITY_INFINIBAND
+LSM_HOOK(int, ib_pkey_access, void *sec, u64 subnet_prefix, u16 pkey)
+LSM_HOOK(int, ib_endport_manage_subnet, void *sec, const char *dev_name,
+	 u8 port_num)
+LSM_HOOK(int, ib_alloc_security, void **sec)
+LSM_HOOK(void, ib_free_security, void *sec)
+#endif /* CONFIG_SECURITY_INFINIBAND */
+
+#ifdef CONFIG_SECURITY_NETWORK_XFRM
+LSM_HOOK(int, xfrm_policy_alloc_security, struct xfrm_sec_ctx **ctxp,
+	 struct xfrm_user_sec_ctx *sec_ctx, gfp_t gfp)
+LSM_HOOK(int, xfrm_policy_clone_security, struct xfrm_sec_ctx *old_ctx,
+	 struct xfrm_sec_ctx **new_ctx)
+LSM_HOOK(void, xfrm_policy_free_security, struct xfrm_sec_ctx *ctx)
+LSM_HOOK(int, xfrm_policy_delete_security, struct xfrm_sec_ctx *ctx)
+LSM_HOOK(int, xfrm_state_alloc, struct xfrm_state *x,
+	 struct xfrm_user_sec_ctx *sec_ctx)
+LSM_HOOK(int, xfrm_state_alloc_acquire, struct xfrm_state *x,
+	 struct xfrm_sec_ctx *polsec, u32 secid)
+LSM_HOOK(void, xfrm_state_free_security, struct xfrm_state *x)
+LSM_HOOK(int, xfrm_state_delete_security, struct xfrm_state *x)
+LSM_HOOK(int, xfrm_policy_lookup, struct xfrm_sec_ctx *ctx, u32 fl_secid,
+	 u8 dir)
+LSM_HOOK(int, xfrm_state_pol_flow_match, struct xfrm_state *x,
+	 struct xfrm_policy *xp, const struct flowi *fl)
+LSM_HOOK(int, xfrm_decode_session, struct sk_buff *skb, u32 *secid, int ckall)
+#endif /* CONFIG_SECURITY_NETWORK_XFRM */
+
+/* key management security hooks */
+#ifdef CONFIG_KEYS
+LSM_HOOK(int, key_alloc, struct key *key, const struct cred *cred,
+	 unsigned long flags)
+LSM_HOOK(void, key_free, struct key *key)
+LSM_HOOK(int, key_permission, key_ref_t key_ref, const struct cred *cred,
+	 unsigned perm)
+LSM_HOOK(int, key_getsecurity, struct key *key, char **_buffer)
+#endif /* CONFIG_KEYS */
+
+#ifdef CONFIG_AUDIT
+LSM_HOOK(int, audit_rule_init, u32 field, u32 op, char *rulestr, void **lsmrule)
+LSM_HOOK(int, audit_rule_known, struct audit_krule *krule)
+LSM_HOOK(int, audit_rule_match, u32 secid, u32 field, u32 op, void *lsmrule)
+LSM_HOOK(void, audit_rule_free, void *lsmrule)
+#endif /* CONFIG_AUDIT */
+
+#ifdef CONFIG_BPF_SYSCALL
+LSM_HOOK(int, bpf, int cmd, union bpf_attr *attr, unsigned int size)
+LSM_HOOK(int, bpf_map, struct bpf_map *map, fmode_t fmode)
+LSM_HOOK(int, bpf_prog, struct bpf_prog *prog)
+LSM_HOOK(int, bpf_map_alloc_security, struct bpf_map *map)
+LSM_HOOK(void, bpf_map_free_security, struct bpf_map *map)
+LSM_HOOK(int, bpf_prog_alloc_security, struct bpf_prog_aux *aux)
+LSM_HOOK(void, bpf_prog_free_security, struct bpf_prog_aux *aux)
+#endif /* CONFIG_BPF_SYSCALL */
+
+LSM_HOOK(int, locked_down, enum lockdown_reason what)
+#ifdef CONFIG_PERF_EVENTS
+LSM_HOOK(int, perf_event_open, struct perf_event_attr *attr, int type)
+LSM_HOOK(int, perf_event_alloc, struct perf_event *event)
+LSM_HOOK(void, perf_event_free, struct perf_event *event)
+LSM_HOOK(int, perf_event_read, struct perf_event *event)
+LSM_HOOK(int, perf_event_write, struct perf_event *event)
+#endif
diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
index 20d8cf194fb7..905954c650ff 100644
--- a/include/linux/lsm_hooks.h
+++ b/include/linux/lsm_hooks.h
@@ -1456,625 +1456,15 @@
  *     @what: kernel feature being accessed
  */
 union security_list_options {
-	int (*binder_set_context_mgr)(struct task_struct *mgr);
-	int (*binder_transaction)(struct task_struct *from,
-					struct task_struct *to);
-	int (*binder_transfer_binder)(struct task_struct *from,
-					struct task_struct *to);
-	int (*binder_transfer_file)(struct task_struct *from,
-					struct task_struct *to,
-					struct file *file);
-
-	int (*ptrace_access_check)(struct task_struct *child,
-					unsigned int mode);
-	int (*ptrace_traceme)(struct task_struct *parent);
-	int (*capget)(struct task_struct *target, kernel_cap_t *effective,
-			kernel_cap_t *inheritable, kernel_cap_t *permitted);
-	int (*capset)(struct cred *new, const struct cred *old,
-			const kernel_cap_t *effective,
-			const kernel_cap_t *inheritable,
-			const kernel_cap_t *permitted);
-	int (*capable)(const struct cred *cred,
-			struct user_namespace *ns,
-			int cap,
-			unsigned int opts);
-	int (*quotactl)(int cmds, int type, int id, struct super_block *sb);
-	int (*quota_on)(struct dentry *dentry);
-	int (*syslog)(int type);
-	int (*settime)(const struct timespec64 *ts, const struct timezone *tz);
-	int (*vm_enough_memory)(struct mm_struct *mm, long pages);
-
-	int (*bprm_set_creds)(struct linux_binprm *bprm);
-	int (*bprm_check_security)(struct linux_binprm *bprm);
-	void (*bprm_committing_creds)(struct linux_binprm *bprm);
-	void (*bprm_committed_creds)(struct linux_binprm *bprm);
-
-	int (*fs_context_dup)(struct fs_context *fc, struct fs_context *src_sc);
-	int (*fs_context_parse_param)(struct fs_context *fc, struct fs_parameter *param);
-
-	int (*sb_alloc_security)(struct super_block *sb);
-	void (*sb_free_security)(struct super_block *sb);
-	void (*sb_free_mnt_opts)(void *mnt_opts);
-	int (*sb_eat_lsm_opts)(char *orig, void **mnt_opts);
-	int (*sb_remount)(struct super_block *sb, void *mnt_opts);
-	int (*sb_kern_mount)(struct super_block *sb);
-	int (*sb_show_options)(struct seq_file *m, struct super_block *sb);
-	int (*sb_statfs)(struct dentry *dentry);
-	int (*sb_mount)(const char *dev_name, const struct path *path,
-			const char *type, unsigned long flags, void *data);
-	int (*sb_umount)(struct vfsmount *mnt, int flags);
-	int (*sb_pivotroot)(const struct path *old_path, const struct path *new_path);
-	int (*sb_set_mnt_opts)(struct super_block *sb,
-				void *mnt_opts,
-				unsigned long kern_flags,
-				unsigned long *set_kern_flags);
-	int (*sb_clone_mnt_opts)(const struct super_block *oldsb,
-					struct super_block *newsb,
-					unsigned long kern_flags,
-					unsigned long *set_kern_flags);
-	int (*sb_add_mnt_opt)(const char *option, const char *val, int len,
-			      void **mnt_opts);
-	int (*move_mount)(const struct path *from_path, const struct path *to_path);
-	int (*dentry_init_security)(struct dentry *dentry, int mode,
-					const struct qstr *name, void **ctx,
-					u32 *ctxlen);
-	int (*dentry_create_files_as)(struct dentry *dentry, int mode,
-					struct qstr *name,
-					const struct cred *old,
-					struct cred *new);
-
-
-#ifdef CONFIG_SECURITY_PATH
-	int (*path_unlink)(const struct path *dir, struct dentry *dentry);
-	int (*path_mkdir)(const struct path *dir, struct dentry *dentry,
-				umode_t mode);
-	int (*path_rmdir)(const struct path *dir, struct dentry *dentry);
-	int (*path_mknod)(const struct path *dir, struct dentry *dentry,
-				umode_t mode, unsigned int dev);
-	int (*path_truncate)(const struct path *path);
-	int (*path_symlink)(const struct path *dir, struct dentry *dentry,
-				const char *old_name);
-	int (*path_link)(struct dentry *old_dentry, const struct path *new_dir,
-				struct dentry *new_dentry);
-	int (*path_rename)(const struct path *old_dir, struct dentry *old_dentry,
-				const struct path *new_dir,
-				struct dentry *new_dentry);
-	int (*path_chmod)(const struct path *path, umode_t mode);
-	int (*path_chown)(const struct path *path, kuid_t uid, kgid_t gid);
-	int (*path_chroot)(const struct path *path);
-#endif
-	/* Needed for inode based security check */
-	int (*path_notify)(const struct path *path, u64 mask,
-				unsigned int obj_type);
-	int (*inode_alloc_security)(struct inode *inode);
-	void (*inode_free_security)(struct inode *inode);
-	int (*inode_init_security)(struct inode *inode, struct inode *dir,
-					const struct qstr *qstr,
-					const char **name, void **value,
-					size_t *len);
-	int (*inode_create)(struct inode *dir, struct dentry *dentry,
-				umode_t mode);
-	int (*inode_link)(struct dentry *old_dentry, struct inode *dir,
-				struct dentry *new_dentry);
-	int (*inode_unlink)(struct inode *dir, struct dentry *dentry);
-	int (*inode_symlink)(struct inode *dir, struct dentry *dentry,
-				const char *old_name);
-	int (*inode_mkdir)(struct inode *dir, struct dentry *dentry,
-				umode_t mode);
-	int (*inode_rmdir)(struct inode *dir, struct dentry *dentry);
-	int (*inode_mknod)(struct inode *dir, struct dentry *dentry,
-				umode_t mode, dev_t dev);
-	int (*inode_rename)(struct inode *old_dir, struct dentry *old_dentry,
-				struct inode *new_dir,
-				struct dentry *new_dentry);
-	int (*inode_readlink)(struct dentry *dentry);
-	int (*inode_follow_link)(struct dentry *dentry, struct inode *inode,
-				 bool rcu);
-	int (*inode_permission)(struct inode *inode, int mask);
-	int (*inode_setattr)(struct dentry *dentry, struct iattr *attr);
-	int (*inode_getattr)(const struct path *path);
-	int (*inode_setxattr)(struct dentry *dentry, const char *name,
-				const void *value, size_t size, int flags);
-	void (*inode_post_setxattr)(struct dentry *dentry, const char *name,
-					const void *value, size_t size,
-					int flags);
-	int (*inode_getxattr)(struct dentry *dentry, const char *name);
-	int (*inode_listxattr)(struct dentry *dentry);
-	int (*inode_removexattr)(struct dentry *dentry, const char *name);
-	int (*inode_need_killpriv)(struct dentry *dentry);
-	int (*inode_killpriv)(struct dentry *dentry);
-	int (*inode_getsecurity)(struct inode *inode, const char *name,
-					void **buffer, bool alloc);
-	int (*inode_setsecurity)(struct inode *inode, const char *name,
-					const void *value, size_t size,
-					int flags);
-	int (*inode_listsecurity)(struct inode *inode, char *buffer,
-					size_t buffer_size);
-	void (*inode_getsecid)(struct inode *inode, u32 *secid);
-	int (*inode_copy_up)(struct dentry *src, struct cred **new);
-	int (*inode_copy_up_xattr)(const char *name);
-
-	int (*kernfs_init_security)(struct kernfs_node *kn_dir,
-				    struct kernfs_node *kn);
-
-	int (*file_permission)(struct file *file, int mask);
-	int (*file_alloc_security)(struct file *file);
-	void (*file_free_security)(struct file *file);
-	int (*file_ioctl)(struct file *file, unsigned int cmd,
-				unsigned long arg);
-	int (*mmap_addr)(unsigned long addr);
-	int (*mmap_file)(struct file *file, unsigned long reqprot,
-				unsigned long prot, unsigned long flags);
-	int (*file_mprotect)(struct vm_area_struct *vma, unsigned long reqprot,
-				unsigned long prot);
-	int (*file_lock)(struct file *file, unsigned int cmd);
-	int (*file_fcntl)(struct file *file, unsigned int cmd,
-				unsigned long arg);
-	void (*file_set_fowner)(struct file *file);
-	int (*file_send_sigiotask)(struct task_struct *tsk,
-					struct fown_struct *fown, int sig);
-	int (*file_receive)(struct file *file);
-	int (*file_open)(struct file *file);
-
-	int (*task_alloc)(struct task_struct *task, unsigned long clone_flags);
-	void (*task_free)(struct task_struct *task);
-	int (*cred_alloc_blank)(struct cred *cred, gfp_t gfp);
-	void (*cred_free)(struct cred *cred);
-	int (*cred_prepare)(struct cred *new, const struct cred *old,
-				gfp_t gfp);
-	void (*cred_transfer)(struct cred *new, const struct cred *old);
-	void (*cred_getsecid)(const struct cred *c, u32 *secid);
-	int (*kernel_act_as)(struct cred *new, u32 secid);
-	int (*kernel_create_files_as)(struct cred *new, struct inode *inode);
-	int (*kernel_module_request)(char *kmod_name);
-	int (*kernel_load_data)(enum kernel_load_data_id id);
-	int (*kernel_read_file)(struct file *file, enum kernel_read_file_id id);
-	int (*kernel_post_read_file)(struct file *file, char *buf, loff_t size,
-				     enum kernel_read_file_id id);
-	int (*task_fix_setuid)(struct cred *new, const struct cred *old,
-				int flags);
-	int (*task_setpgid)(struct task_struct *p, pid_t pgid);
-	int (*task_getpgid)(struct task_struct *p);
-	int (*task_getsid)(struct task_struct *p);
-	void (*task_getsecid)(struct task_struct *p, u32 *secid);
-	int (*task_setnice)(struct task_struct *p, int nice);
-	int (*task_setioprio)(struct task_struct *p, int ioprio);
-	int (*task_getioprio)(struct task_struct *p);
-	int (*task_prlimit)(const struct cred *cred, const struct cred *tcred,
-			    unsigned int flags);
-	int (*task_setrlimit)(struct task_struct *p, unsigned int resource,
-				struct rlimit *new_rlim);
-	int (*task_setscheduler)(struct task_struct *p);
-	int (*task_getscheduler)(struct task_struct *p);
-	int (*task_movememory)(struct task_struct *p);
-	int (*task_kill)(struct task_struct *p, struct kernel_siginfo *info,
-				int sig, const struct cred *cred);
-	int (*task_prctl)(int option, unsigned long arg2, unsigned long arg3,
-				unsigned long arg4, unsigned long arg5);
-	void (*task_to_inode)(struct task_struct *p, struct inode *inode);
-
-	int (*ipc_permission)(struct kern_ipc_perm *ipcp, short flag);
-	void (*ipc_getsecid)(struct kern_ipc_perm *ipcp, u32 *secid);
-
-	int (*msg_msg_alloc_security)(struct msg_msg *msg);
-	void (*msg_msg_free_security)(struct msg_msg *msg);
-
-	int (*msg_queue_alloc_security)(struct kern_ipc_perm *perm);
-	void (*msg_queue_free_security)(struct kern_ipc_perm *perm);
-	int (*msg_queue_associate)(struct kern_ipc_perm *perm, int msqflg);
-	int (*msg_queue_msgctl)(struct kern_ipc_perm *perm, int cmd);
-	int (*msg_queue_msgsnd)(struct kern_ipc_perm *perm, struct msg_msg *msg,
-				int msqflg);
-	int (*msg_queue_msgrcv)(struct kern_ipc_perm *perm, struct msg_msg *msg,
-				struct task_struct *target, long type,
-				int mode);
-
-	int (*shm_alloc_security)(struct kern_ipc_perm *perm);
-	void (*shm_free_security)(struct kern_ipc_perm *perm);
-	int (*shm_associate)(struct kern_ipc_perm *perm, int shmflg);
-	int (*shm_shmctl)(struct kern_ipc_perm *perm, int cmd);
-	int (*shm_shmat)(struct kern_ipc_perm *perm, char __user *shmaddr,
-				int shmflg);
-
-	int (*sem_alloc_security)(struct kern_ipc_perm *perm);
-	void (*sem_free_security)(struct kern_ipc_perm *perm);
-	int (*sem_associate)(struct kern_ipc_perm *perm, int semflg);
-	int (*sem_semctl)(struct kern_ipc_perm *perm, int cmd);
-	int (*sem_semop)(struct kern_ipc_perm *perm, struct sembuf *sops,
-				unsigned nsops, int alter);
-
-	int (*netlink_send)(struct sock *sk, struct sk_buff *skb);
-
-	void (*d_instantiate)(struct dentry *dentry, struct inode *inode);
-
-	int (*getprocattr)(struct task_struct *p, char *name, char **value);
-	int (*setprocattr)(const char *name, void *value, size_t size);
-	int (*ismaclabel)(const char *name);
-	int (*secid_to_secctx)(u32 secid, char **secdata, u32 *seclen);
-	int (*secctx_to_secid)(const char *secdata, u32 seclen, u32 *secid);
-	void (*release_secctx)(char *secdata, u32 seclen);
-
-	void (*inode_invalidate_secctx)(struct inode *inode);
-	int (*inode_notifysecctx)(struct inode *inode, void *ctx, u32 ctxlen);
-	int (*inode_setsecctx)(struct dentry *dentry, void *ctx, u32 ctxlen);
-	int (*inode_getsecctx)(struct inode *inode, void **ctx, u32 *ctxlen);
-
-#ifdef CONFIG_SECURITY_NETWORK
-	int (*unix_stream_connect)(struct sock *sock, struct sock *other,
-					struct sock *newsk);
-	int (*unix_may_send)(struct socket *sock, struct socket *other);
-
-	int (*socket_create)(int family, int type, int protocol, int kern);
-	int (*socket_post_create)(struct socket *sock, int family, int type,
-					int protocol, int kern);
-	int (*socket_socketpair)(struct socket *socka, struct socket *sockb);
-	int (*socket_bind)(struct socket *sock, struct sockaddr *address,
-				int addrlen);
-	int (*socket_connect)(struct socket *sock, struct sockaddr *address,
-				int addrlen);
-	int (*socket_listen)(struct socket *sock, int backlog);
-	int (*socket_accept)(struct socket *sock, struct socket *newsock);
-	int (*socket_sendmsg)(struct socket *sock, struct msghdr *msg,
-				int size);
-	int (*socket_recvmsg)(struct socket *sock, struct msghdr *msg,
-				int size, int flags);
-	int (*socket_getsockname)(struct socket *sock);
-	int (*socket_getpeername)(struct socket *sock);
-	int (*socket_getsockopt)(struct socket *sock, int level, int optname);
-	int (*socket_setsockopt)(struct socket *sock, int level, int optname);
-	int (*socket_shutdown)(struct socket *sock, int how);
-	int (*socket_sock_rcv_skb)(struct sock *sk, struct sk_buff *skb);
-	int (*socket_getpeersec_stream)(struct socket *sock,
-					char __user *optval,
-					int __user *optlen, unsigned len);
-	int (*socket_getpeersec_dgram)(struct socket *sock,
-					struct sk_buff *skb, u32 *secid);
-	int (*sk_alloc_security)(struct sock *sk, int family, gfp_t priority);
-	void (*sk_free_security)(struct sock *sk);
-	void (*sk_clone_security)(const struct sock *sk, struct sock *newsk);
-	void (*sk_getsecid)(struct sock *sk, u32 *secid);
-	void (*sock_graft)(struct sock *sk, struct socket *parent);
-	int (*inet_conn_request)(struct sock *sk, struct sk_buff *skb,
-					struct request_sock *req);
-	void (*inet_csk_clone)(struct sock *newsk,
-				const struct request_sock *req);
-	void (*inet_conn_established)(struct sock *sk, struct sk_buff *skb);
-	int (*secmark_relabel_packet)(u32 secid);
-	void (*secmark_refcount_inc)(void);
-	void (*secmark_refcount_dec)(void);
-	void (*req_classify_flow)(const struct request_sock *req,
-					struct flowi *fl);
-	int (*tun_dev_alloc_security)(void **security);
-	void (*tun_dev_free_security)(void *security);
-	int (*tun_dev_create)(void);
-	int (*tun_dev_attach_queue)(void *security);
-	int (*tun_dev_attach)(struct sock *sk, void *security);
-	int (*tun_dev_open)(void *security);
-	int (*sctp_assoc_request)(struct sctp_endpoint *ep,
-				  struct sk_buff *skb);
-	int (*sctp_bind_connect)(struct sock *sk, int optname,
-				 struct sockaddr *address, int addrlen);
-	void (*sctp_sk_clone)(struct sctp_endpoint *ep, struct sock *sk,
-			      struct sock *newsk);
-#endif	/* CONFIG_SECURITY_NETWORK */
-
-#ifdef CONFIG_SECURITY_INFINIBAND
-	int (*ib_pkey_access)(void *sec, u64 subnet_prefix, u16 pkey);
-	int (*ib_endport_manage_subnet)(void *sec, const char *dev_name,
-					u8 port_num);
-	int (*ib_alloc_security)(void **sec);
-	void (*ib_free_security)(void *sec);
-#endif	/* CONFIG_SECURITY_INFINIBAND */
-
-#ifdef CONFIG_SECURITY_NETWORK_XFRM
-	int (*xfrm_policy_alloc_security)(struct xfrm_sec_ctx **ctxp,
-					  struct xfrm_user_sec_ctx *sec_ctx,
-						gfp_t gfp);
-	int (*xfrm_policy_clone_security)(struct xfrm_sec_ctx *old_ctx,
-						struct xfrm_sec_ctx **new_ctx);
-	void (*xfrm_policy_free_security)(struct xfrm_sec_ctx *ctx);
-	int (*xfrm_policy_delete_security)(struct xfrm_sec_ctx *ctx);
-	int (*xfrm_state_alloc)(struct xfrm_state *x,
-				struct xfrm_user_sec_ctx *sec_ctx);
-	int (*xfrm_state_alloc_acquire)(struct xfrm_state *x,
-					struct xfrm_sec_ctx *polsec,
-					u32 secid);
-	void (*xfrm_state_free_security)(struct xfrm_state *x);
-	int (*xfrm_state_delete_security)(struct xfrm_state *x);
-	int (*xfrm_policy_lookup)(struct xfrm_sec_ctx *ctx, u32 fl_secid,
-					u8 dir);
-	int (*xfrm_state_pol_flow_match)(struct xfrm_state *x,
-						struct xfrm_policy *xp,
-						const struct flowi *fl);
-	int (*xfrm_decode_session)(struct sk_buff *skb, u32 *secid, int ckall);
-#endif	/* CONFIG_SECURITY_NETWORK_XFRM */
-
-	/* key management security hooks */
-#ifdef CONFIG_KEYS
-	int (*key_alloc)(struct key *key, const struct cred *cred,
-				unsigned long flags);
-	void (*key_free)(struct key *key);
-	int (*key_permission)(key_ref_t key_ref, const struct cred *cred,
-				unsigned perm);
-	int (*key_getsecurity)(struct key *key, char **_buffer);
-#endif	/* CONFIG_KEYS */
-
-#ifdef CONFIG_AUDIT
-	int (*audit_rule_init)(u32 field, u32 op, char *rulestr,
-				void **lsmrule);
-	int (*audit_rule_known)(struct audit_krule *krule);
-	int (*audit_rule_match)(u32 secid, u32 field, u32 op, void *lsmrule);
-	void (*audit_rule_free)(void *lsmrule);
-#endif /* CONFIG_AUDIT */
-
-#ifdef CONFIG_BPF_SYSCALL
-	int (*bpf)(int cmd, union bpf_attr *attr,
-				 unsigned int size);
-	int (*bpf_map)(struct bpf_map *map, fmode_t fmode);
-	int (*bpf_prog)(struct bpf_prog *prog);
-	int (*bpf_map_alloc_security)(struct bpf_map *map);
-	void (*bpf_map_free_security)(struct bpf_map *map);
-	int (*bpf_prog_alloc_security)(struct bpf_prog_aux *aux);
-	void (*bpf_prog_free_security)(struct bpf_prog_aux *aux);
-#endif /* CONFIG_BPF_SYSCALL */
-	int (*locked_down)(enum lockdown_reason what);
-#ifdef CONFIG_PERF_EVENTS
-	int (*perf_event_open)(struct perf_event_attr *attr, int type);
-	int (*perf_event_alloc)(struct perf_event *event);
-	void (*perf_event_free)(struct perf_event *event);
-	int (*perf_event_read)(struct perf_event *event);
-	int (*perf_event_write)(struct perf_event *event);
-
-#endif
+	#define LSM_HOOK(RET, NAME, ...) RET (*NAME)(__VA_ARGS__);
+	#include "lsm_hook_names.h"
+	#undef LSM_HOOK
 };
 
 struct security_hook_heads {
-	struct hlist_head binder_set_context_mgr;
-	struct hlist_head binder_transaction;
-	struct hlist_head binder_transfer_binder;
-	struct hlist_head binder_transfer_file;
-	struct hlist_head ptrace_access_check;
-	struct hlist_head ptrace_traceme;
-	struct hlist_head capget;
-	struct hlist_head capset;
-	struct hlist_head capable;
-	struct hlist_head quotactl;
-	struct hlist_head quota_on;
-	struct hlist_head syslog;
-	struct hlist_head settime;
-	struct hlist_head vm_enough_memory;
-	struct hlist_head bprm_set_creds;
-	struct hlist_head bprm_check_security;
-	struct hlist_head bprm_committing_creds;
-	struct hlist_head bprm_committed_creds;
-	struct hlist_head fs_context_dup;
-	struct hlist_head fs_context_parse_param;
-	struct hlist_head sb_alloc_security;
-	struct hlist_head sb_free_security;
-	struct hlist_head sb_free_mnt_opts;
-	struct hlist_head sb_eat_lsm_opts;
-	struct hlist_head sb_remount;
-	struct hlist_head sb_kern_mount;
-	struct hlist_head sb_show_options;
-	struct hlist_head sb_statfs;
-	struct hlist_head sb_mount;
-	struct hlist_head sb_umount;
-	struct hlist_head sb_pivotroot;
-	struct hlist_head sb_set_mnt_opts;
-	struct hlist_head sb_clone_mnt_opts;
-	struct hlist_head sb_add_mnt_opt;
-	struct hlist_head move_mount;
-	struct hlist_head dentry_init_security;
-	struct hlist_head dentry_create_files_as;
-#ifdef CONFIG_SECURITY_PATH
-	struct hlist_head path_unlink;
-	struct hlist_head path_mkdir;
-	struct hlist_head path_rmdir;
-	struct hlist_head path_mknod;
-	struct hlist_head path_truncate;
-	struct hlist_head path_symlink;
-	struct hlist_head path_link;
-	struct hlist_head path_rename;
-	struct hlist_head path_chmod;
-	struct hlist_head path_chown;
-	struct hlist_head path_chroot;
-#endif
-	/* Needed for inode based modules as well */
-	struct hlist_head path_notify;
-	struct hlist_head inode_alloc_security;
-	struct hlist_head inode_free_security;
-	struct hlist_head inode_init_security;
-	struct hlist_head inode_create;
-	struct hlist_head inode_link;
-	struct hlist_head inode_unlink;
-	struct hlist_head inode_symlink;
-	struct hlist_head inode_mkdir;
-	struct hlist_head inode_rmdir;
-	struct hlist_head inode_mknod;
-	struct hlist_head inode_rename;
-	struct hlist_head inode_readlink;
-	struct hlist_head inode_follow_link;
-	struct hlist_head inode_permission;
-	struct hlist_head inode_setattr;
-	struct hlist_head inode_getattr;
-	struct hlist_head inode_setxattr;
-	struct hlist_head inode_post_setxattr;
-	struct hlist_head inode_getxattr;
-	struct hlist_head inode_listxattr;
-	struct hlist_head inode_removexattr;
-	struct hlist_head inode_need_killpriv;
-	struct hlist_head inode_killpriv;
-	struct hlist_head inode_getsecurity;
-	struct hlist_head inode_setsecurity;
-	struct hlist_head inode_listsecurity;
-	struct hlist_head inode_getsecid;
-	struct hlist_head inode_copy_up;
-	struct hlist_head inode_copy_up_xattr;
-	struct hlist_head kernfs_init_security;
-	struct hlist_head file_permission;
-	struct hlist_head file_alloc_security;
-	struct hlist_head file_free_security;
-	struct hlist_head file_ioctl;
-	struct hlist_head mmap_addr;
-	struct hlist_head mmap_file;
-	struct hlist_head file_mprotect;
-	struct hlist_head file_lock;
-	struct hlist_head file_fcntl;
-	struct hlist_head file_set_fowner;
-	struct hlist_head file_send_sigiotask;
-	struct hlist_head file_receive;
-	struct hlist_head file_open;
-	struct hlist_head task_alloc;
-	struct hlist_head task_free;
-	struct hlist_head cred_alloc_blank;
-	struct hlist_head cred_free;
-	struct hlist_head cred_prepare;
-	struct hlist_head cred_transfer;
-	struct hlist_head cred_getsecid;
-	struct hlist_head kernel_act_as;
-	struct hlist_head kernel_create_files_as;
-	struct hlist_head kernel_load_data;
-	struct hlist_head kernel_read_file;
-	struct hlist_head kernel_post_read_file;
-	struct hlist_head kernel_module_request;
-	struct hlist_head task_fix_setuid;
-	struct hlist_head task_setpgid;
-	struct hlist_head task_getpgid;
-	struct hlist_head task_getsid;
-	struct hlist_head task_getsecid;
-	struct hlist_head task_setnice;
-	struct hlist_head task_setioprio;
-	struct hlist_head task_getioprio;
-	struct hlist_head task_prlimit;
-	struct hlist_head task_setrlimit;
-	struct hlist_head task_setscheduler;
-	struct hlist_head task_getscheduler;
-	struct hlist_head task_movememory;
-	struct hlist_head task_kill;
-	struct hlist_head task_prctl;
-	struct hlist_head task_to_inode;
-	struct hlist_head ipc_permission;
-	struct hlist_head ipc_getsecid;
-	struct hlist_head msg_msg_alloc_security;
-	struct hlist_head msg_msg_free_security;
-	struct hlist_head msg_queue_alloc_security;
-	struct hlist_head msg_queue_free_security;
-	struct hlist_head msg_queue_associate;
-	struct hlist_head msg_queue_msgctl;
-	struct hlist_head msg_queue_msgsnd;
-	struct hlist_head msg_queue_msgrcv;
-	struct hlist_head shm_alloc_security;
-	struct hlist_head shm_free_security;
-	struct hlist_head shm_associate;
-	struct hlist_head shm_shmctl;
-	struct hlist_head shm_shmat;
-	struct hlist_head sem_alloc_security;
-	struct hlist_head sem_free_security;
-	struct hlist_head sem_associate;
-	struct hlist_head sem_semctl;
-	struct hlist_head sem_semop;
-	struct hlist_head netlink_send;
-	struct hlist_head d_instantiate;
-	struct hlist_head getprocattr;
-	struct hlist_head setprocattr;
-	struct hlist_head ismaclabel;
-	struct hlist_head secid_to_secctx;
-	struct hlist_head secctx_to_secid;
-	struct hlist_head release_secctx;
-	struct hlist_head inode_invalidate_secctx;
-	struct hlist_head inode_notifysecctx;
-	struct hlist_head inode_setsecctx;
-	struct hlist_head inode_getsecctx;
-#ifdef CONFIG_SECURITY_NETWORK
-	struct hlist_head unix_stream_connect;
-	struct hlist_head unix_may_send;
-	struct hlist_head socket_create;
-	struct hlist_head socket_post_create;
-	struct hlist_head socket_socketpair;
-	struct hlist_head socket_bind;
-	struct hlist_head socket_connect;
-	struct hlist_head socket_listen;
-	struct hlist_head socket_accept;
-	struct hlist_head socket_sendmsg;
-	struct hlist_head socket_recvmsg;
-	struct hlist_head socket_getsockname;
-	struct hlist_head socket_getpeername;
-	struct hlist_head socket_getsockopt;
-	struct hlist_head socket_setsockopt;
-	struct hlist_head socket_shutdown;
-	struct hlist_head socket_sock_rcv_skb;
-	struct hlist_head socket_getpeersec_stream;
-	struct hlist_head socket_getpeersec_dgram;
-	struct hlist_head sk_alloc_security;
-	struct hlist_head sk_free_security;
-	struct hlist_head sk_clone_security;
-	struct hlist_head sk_getsecid;
-	struct hlist_head sock_graft;
-	struct hlist_head inet_conn_request;
-	struct hlist_head inet_csk_clone;
-	struct hlist_head inet_conn_established;
-	struct hlist_head secmark_relabel_packet;
-	struct hlist_head secmark_refcount_inc;
-	struct hlist_head secmark_refcount_dec;
-	struct hlist_head req_classify_flow;
-	struct hlist_head tun_dev_alloc_security;
-	struct hlist_head tun_dev_free_security;
-	struct hlist_head tun_dev_create;
-	struct hlist_head tun_dev_attach_queue;
-	struct hlist_head tun_dev_attach;
-	struct hlist_head tun_dev_open;
-	struct hlist_head sctp_assoc_request;
-	struct hlist_head sctp_bind_connect;
-	struct hlist_head sctp_sk_clone;
-#endif	/* CONFIG_SECURITY_NETWORK */
-#ifdef CONFIG_SECURITY_INFINIBAND
-	struct hlist_head ib_pkey_access;
-	struct hlist_head ib_endport_manage_subnet;
-	struct hlist_head ib_alloc_security;
-	struct hlist_head ib_free_security;
-#endif	/* CONFIG_SECURITY_INFINIBAND */
-#ifdef CONFIG_SECURITY_NETWORK_XFRM
-	struct hlist_head xfrm_policy_alloc_security;
-	struct hlist_head xfrm_policy_clone_security;
-	struct hlist_head xfrm_policy_free_security;
-	struct hlist_head xfrm_policy_delete_security;
-	struct hlist_head xfrm_state_alloc;
-	struct hlist_head xfrm_state_alloc_acquire;
-	struct hlist_head xfrm_state_free_security;
-	struct hlist_head xfrm_state_delete_security;
-	struct hlist_head xfrm_policy_lookup;
-	struct hlist_head xfrm_state_pol_flow_match;
-	struct hlist_head xfrm_decode_session;
-#endif	/* CONFIG_SECURITY_NETWORK_XFRM */
-#ifdef CONFIG_KEYS
-	struct hlist_head key_alloc;
-	struct hlist_head key_free;
-	struct hlist_head key_permission;
-	struct hlist_head key_getsecurity;
-#endif	/* CONFIG_KEYS */
-#ifdef CONFIG_AUDIT
-	struct hlist_head audit_rule_init;
-	struct hlist_head audit_rule_known;
-	struct hlist_head audit_rule_match;
-	struct hlist_head audit_rule_free;
-#endif /* CONFIG_AUDIT */
-#ifdef CONFIG_BPF_SYSCALL
-	struct hlist_head bpf;
-	struct hlist_head bpf_map;
-	struct hlist_head bpf_prog;
-	struct hlist_head bpf_map_alloc_security;
-	struct hlist_head bpf_map_free_security;
-	struct hlist_head bpf_prog_alloc_security;
-	struct hlist_head bpf_prog_free_security;
-#endif /* CONFIG_BPF_SYSCALL */
-	struct hlist_head locked_down;
-#ifdef CONFIG_PERF_EVENTS
-	struct hlist_head perf_event_open;
-	struct hlist_head perf_event_alloc;
-	struct hlist_head perf_event_free;
-	struct hlist_head perf_event_read;
-	struct hlist_head perf_event_write;
-#endif
+	#define LSM_HOOK(RET, NAME, ...) struct hlist_head NAME;
+	#include "lsm_hook_names.h"
+	#undef LSM_HOOK
 } __randomize_layout;
 
 /*
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-20 17:52 [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) KP Singh
  2020-02-20 17:52 ` [PATCH bpf-next v4 1/8] bpf: Introduce BPF_PROG_TYPE_LSM KP Singh
  2020-02-20 17:52 ` [PATCH bpf-next v4 2/8] security: Refactor declaration of LSM hooks KP Singh
@ 2020-02-20 17:52 ` KP Singh
  2020-02-20 23:49   ` Casey Schaufler
  2020-02-21  2:25   ` Alexei Starovoitov
  2020-02-20 17:52 ` [PATCH bpf-next v4 4/8] bpf: lsm: Add support for enabling/disabling BPF hooks KP Singh
                   ` (6 subsequent siblings)
  9 siblings, 2 replies; 45+ messages in thread
From: KP Singh @ 2020-02-20 17:52 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner, Florent Revest,
	Brendan Jackman, Martin KaFai Lau, Song Liu, Yonghong Song,
	Serge E. Hallyn, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

The BPF LSM programs are attached as fexit trampolines to avoid the
overhead of retpolines. These programs cannot be attached to the
security_* wrappers directly, as quite a few security_* functions do
more than just call the LSM callbacks.

This was discussed on the lists in:

  https://lore.kernel.org/bpf/20200123152440.28956-1-kpsingh@chromium.org/T/#m068becce588a0cdf01913f368a97aea4c62d8266

Adding a NOP callback after all the static LSM callbacks are called has
the following benefits:

- The BPF programs run at the right stage of the security_* wrappers.
- They run after all the static LSM hooks have allowed the operation
  and therefore cannot allow an action that was already denied (see
  the sketch below).
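
With this patch, a wrapper like security_file_mprotect conceptually
expands to the following (a sketch, not the literal macro output; the
hook arguments are vma, reqprot and prot):

  rc = 0;	/* IRC, the default return code */
  hlist_for_each_entry(p, &security_hook_heads.file_mprotect, list) {
          rc = p->hook.file_mprotect(vma, reqprot, prot);
          if (rc != 0)
                  break;
  }
  /* The NOP below is where BPF programs attach as fexit trampolines.
   * It is only called if the static hooks allowed the operation.
   */
  if (rc == 0)
          rc = bpf_lsm_file_mprotect(vma, reqprot, prot);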

Some hooks do not use the call_int_hook or call_void_hook macros. The
bpf_lsm_* functions cannot be called for these hooks without first
checking whether a BPF LSM program is attached to them; this check is
added in a subsequent patch. For now, these hooks are marked as NO_BPF
(i.e. attachment of BPF programs is not possible).

Signed-off-by: KP Singh <kpsingh@google.com>
---
 include/linux/bpf_lsm.h | 34 ++++++++++++++++++++++++++++++++++
 kernel/bpf/bpf_lsm.c    | 16 ++++++++++++++++
 security/security.c     |  3 +++
 3 files changed, 53 insertions(+)
 create mode 100644 include/linux/bpf_lsm.h

diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
new file mode 100644
index 000000000000..f867f72f6aa9
--- /dev/null
+++ b/include/linux/bpf_lsm.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright 2019 Google LLC.
+ */
+
+#ifndef _LINUX_BPF_LSM_H
+#define _LINUX_BPF_LSM_H
+
+#include <linux/bpf.h>
+
+#ifdef CONFIG_BPF_LSM
+
+#define LSM_HOOK(RET, NAME, ...) RET bpf_lsm_##NAME(__VA_ARGS__);
+#include <linux/lsm_hook_names.h>
+#undef LSM_HOOK
+
+#define RUN_BPF_LSM_VOID_PROGS(FUNC, ...) bpf_lsm_##FUNC(__VA_ARGS__)
+#define RUN_BPF_LSM_INT_PROGS(RC, FUNC, ...) ({				\
+	do {								\
+		if (RC == 0)						\
+			RC = bpf_lsm_##FUNC(__VA_ARGS__);		\
+	} while (0);							\
+	RC;								\
+})
+
+#else /* !CONFIG_BPF_LSM */
+
+#define RUN_BPF_LSM_INT_PROGS(RC, FUNC, ...) (RC)
+#define RUN_BPF_LSM_VOID_PROGS(FUNC, ...)
+
+#endif /* CONFIG_BPF_LSM */
+
+#endif /* _LINUX_BPF_LSM_H */
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index affb6941622e..abc847c9b9a1 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -7,6 +7,22 @@
 #include <linux/filter.h>
 #include <linux/bpf.h>
 #include <linux/btf.h>
+#include <linux/bpf_lsm.h>
+
+/* For every LSM hook that allows attachment of BPF programs, declare a NOP
+ * function where a BPF program can be attached as an fexit trampoline.
+ */
+#define LSM_HOOK(RET, NAME, ...) LSM_HOOK_##RET(NAME, __VA_ARGS__)
+#define LSM_HOOK_int(NAME, ...) noinline int bpf_lsm_##NAME(__VA_ARGS__)  \
+{									  \
+	return 0;							  \
+}
+
+#define LSM_HOOK_void(NAME, ...) \
+	noinline void bpf_lsm_##NAME(__VA_ARGS__) {}
+
+#include <linux/lsm_hook_names.h>
+#undef LSM_HOOK
 
 const struct bpf_prog_ops lsm_prog_ops = {
 };
diff --git a/security/security.c b/security/security.c
index 565bc9b67276..aa111392a700 100644
--- a/security/security.c
+++ b/security/security.c
@@ -28,6 +28,7 @@
 #include <linux/string.h>
 #include <linux/msg.h>
 #include <net/flow.h>
+#include <linux/bpf_lsm.h>
 
 #define MAX_LSM_EVM_XATTR	2
 
@@ -684,6 +685,7 @@ static void __init lsm_early_task(struct task_struct *task)
 								\
 		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
 			P->hook.FUNC(__VA_ARGS__);		\
+		RUN_BPF_LSM_VOID_PROGS(FUNC, __VA_ARGS__);	\
 	} while (0)
 
 #define call_int_hook(FUNC, IRC, ...) ({			\
@@ -696,6 +698,7 @@ static void __init lsm_early_task(struct task_struct *task)
 			if (RC != 0)				\
 				break;				\
 		}						\
+		RC = RUN_BPF_LSM_INT_PROGS(RC, FUNC, __VA_ARGS__); \
 	} while (0);						\
 	RC;							\
 })
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH bpf-next v4 4/8] bpf: lsm: Add support for enabling/disabling BPF hooks
  2020-02-20 17:52 [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (2 preceding siblings ...)
  2020-02-20 17:52 ` [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs KP Singh
@ 2020-02-20 17:52 ` KP Singh
  2020-02-21 18:57   ` Casey Schaufler
  2020-02-22  4:26   ` Kees Cook
  2020-02-20 17:52 ` [PATCH bpf-next v4 5/8] bpf: lsm: Implement attach, detach and execution KP Singh
                   ` (5 subsequent siblings)
  9 siblings, 2 replies; 45+ messages in thread
From: KP Singh @ 2020-02-20 17:52 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner, Florent Revest,
	Brendan Jackman, Martin KaFai Lau, Song Liu, Yonghong Song,
	Serge E. Hallyn, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

Each LSM hook now defines a static key, i.e. bpf_lsm_key_<name>, and a
bpf_lsm_<name>_set_enabled function to toggle the key, which
enables/disables the branch that executes the BPF programs attached to
the LSM hook.
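
For the file_mprotect hook, for example, this expands to (a sketch of
the macro expansion from the patch below):

  DEFINE_STATIC_KEY_FALSE(bpf_lsm_key_file_mprotect);
  void bpf_lsm_file_mprotect_set_enabled(bool value)
  {
          if (value)
                  static_branch_enable(&bpf_lsm_key_file_mprotect);
          else
                  static_branch_disable(&bpf_lsm_key_file_mprotect);
  }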

Use of static keys was suggested in upstream discussion:

  https://lore.kernel.org/bpf/1cd10710-a81b-8f9b-696d-aa40b0a67225@iogearbox.net/

and results in the following assembly:

  0x0000000000001e31 <+65>:    jmpq   0x1e36 <security_bprm_check+70>
  0x0000000000001e36 <+70>:    nopl   0x0(%rax,%rax,1)
  0x0000000000001e3b <+75>:    xor    %eax,%eax
  0x0000000000001e3d <+77>:    jmp    0x1e25 <security_bprm_check+53>

which avoids an indirect branch and results in lower overhead. This is
especially helpful for LSM hooks in performance hot paths.
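
With the key in place, the guarded call for an int hook reduces to (a
sketch based on the RUN_BPF_LSM_INT_PROGS macro below):

  if (static_branch_unlikely(&bpf_lsm_key_file_mprotect)) {
          if (rc == 0)
                  rc = bpf_lsm_file_mprotect(vma, reqprot, prot);
  }

When no program is attached, the static branch is patched to the NOP
shown above and the BPF path adds no overhead.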

Given the ability to toggle the BPF trampolines, some hooks that do not
use the call_int_hook/call_void_hook macros (as they have different
default return values) also gain support for BPF program attachment.

There are some hooks, like security_setprocattr and
security_getprocattr, which are not instrumented, as they do not
represent any monitoring or access control decisions. If required,
generation of BTF type information for these hooks can also be
blacklisted.

Signed-off-by: KP Singh <kpsingh@google.com>
---
 include/linux/bpf_lsm.h | 30 +++++++++++++++++++++++++++---
 kernel/bpf/bpf_lsm.c    | 28 ++++++++++++++++++++++++++++
 security/security.c     | 32 ++++++++++++++++++++++++++++++++
 3 files changed, 87 insertions(+), 3 deletions(-)

diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
index f867f72f6aa9..53dcda8ace01 100644
--- a/include/linux/bpf_lsm.h
+++ b/include/linux/bpf_lsm.h
@@ -8,27 +8,51 @@
 #define _LINUX_BPF_LSM_H
 
 #include <linux/bpf.h>
+#include <linux/jump_label.h>
 
 #ifdef CONFIG_BPF_LSM
 
+#define LSM_HOOK(RET, NAME, ...)		\
+DECLARE_STATIC_KEY_FALSE(bpf_lsm_key_##NAME);   \
+void bpf_lsm_##NAME##_set_enabled(bool value);
+#include <linux/lsm_hook_names.h>
+#undef LSM_HOOK
+
 #define LSM_HOOK(RET, NAME, ...) RET bpf_lsm_##NAME(__VA_ARGS__);
 #include <linux/lsm_hook_names.h>
 #undef LSM_HOOK
 
-#define RUN_BPF_LSM_VOID_PROGS(FUNC, ...) bpf_lsm_##FUNC(__VA_ARGS__)
+#define HAS_BPF_LSM_PROG(FUNC) (static_branch_unlikely(&bpf_lsm_key_##FUNC))
+
+#define RUN_BPF_LSM_VOID_PROGS(FUNC, ...)				\
+	do {								\
+		if (HAS_BPF_LSM_PROG(FUNC))				\
+			bpf_lsm_##FUNC(__VA_ARGS__);			\
+	} while (0)
+
 #define RUN_BPF_LSM_INT_PROGS(RC, FUNC, ...) ({				\
 	do {								\
-		if (RC == 0)						\
-			RC = bpf_lsm_##FUNC(__VA_ARGS__);		\
+		if (HAS_BPF_LSM_PROG(FUNC)) {				\
+			if (RC == 0)					\
+				RC = bpf_lsm_##FUNC(__VA_ARGS__);	\
+		}							\
 	} while (0);							\
 	RC;								\
 })
 
+int bpf_lsm_set_enabled(const char *name, bool value);
+
 #else /* !CONFIG_BPF_LSM */
 
+#define HAS_BPF_LSM_PROG false
 #define RUN_BPF_LSM_INT_PROGS(RC, FUNC, ...) (RC)
 #define RUN_BPF_LSM_VOID_PROGS(FUNC, ...)
 
+static inline int bpf_lsm_set_enabled(const char *name, bool value)
+{
+	return -EOPNOTSUPP;
+}
+
 #endif /* CONFIG_BPF_LSM */
 
 #endif /* _LINUX_BPF_LSM_H */
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index abc847c9b9a1..d7c44433c003 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -8,6 +8,20 @@
 #include <linux/bpf.h>
 #include <linux/btf.h>
 #include <linux/bpf_lsm.h>
+#include <linux/jump_label.h>
+#include <linux/kallsyms.h>
+
+#define LSM_HOOK(RET, NAME, ...)					\
+	DEFINE_STATIC_KEY_FALSE(bpf_lsm_key_##NAME);			\
+	void bpf_lsm_##NAME##_set_enabled(bool value)			\
+	{								\
+		if (value)						\
+			static_branch_enable(&bpf_lsm_key_##NAME);	\
+		else							\
+			static_branch_disable(&bpf_lsm_key_##NAME);	\
+	}
+#include <linux/lsm_hook_names.h>
+#undef LSM_HOOK
 
 /* For every LSM hook that allows attachment of BPF programs, declare a NOP
  * function where a BPF program can be attached as an fexit trampoline.
@@ -24,6 +38,20 @@
 #include <linux/lsm_hook_names.h>
 #undef LSM_HOOK
 
+int bpf_lsm_set_enabled(const char *name, bool value)
+{
+	char toggle_fn_name[KSYM_NAME_LEN];
+	void (*toggle_fn)(bool value);
+
+	snprintf(toggle_fn_name, KSYM_NAME_LEN, "%s_set_enabled", name);
+	toggle_fn = (void *)kallsyms_lookup_name(toggle_fn_name);
+	if (!toggle_fn)
+		return -ESRCH;
+
+	toggle_fn(value);
+	return 0;
+}
+
 const struct bpf_prog_ops lsm_prog_ops = {
 };
 
diff --git a/security/security.c b/security/security.c
index aa111392a700..569cc07d5e34 100644
--- a/security/security.c
+++ b/security/security.c
@@ -804,6 +804,13 @@ int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
 			break;
 		}
 	}
+#ifdef CONFIG_BPF_LSM
+	if (HAS_BPF_LSM_PROG(vm_enough_memory)) {
+		rc = bpf_lsm_vm_enough_memory(mm, pages);
+		if (rc <= 0)
+			cap_sys_admin = 0;
+	}
+#endif
 	return __vm_enough_memory(mm, pages, cap_sys_admin);
 }
 
@@ -1350,6 +1357,13 @@ int security_inode_getsecurity(struct inode *inode, const char *name, void **buf
 		if (rc != -EOPNOTSUPP)
 			return rc;
 	}
+#ifdef CONFIG_BPF_LSM
+	if (HAS_BPF_LSM_PROG(inode_getsecurity)) {
+		rc = bpf_lsm_inode_getsecurity(inode, name, buffer, alloc);
+		if (rc != -EOPNOTSUPP)
+			return rc;
+	}
+#endif
 	return -EOPNOTSUPP;
 }
 
@@ -1369,6 +1383,14 @@ int security_inode_setsecurity(struct inode *inode, const char *name, const void
 		if (rc != -EOPNOTSUPP)
 			return rc;
 	}
+#ifdef CONFIG_BPF_LSM
+	if (HAS_BPF_LSM_PROG(inode_setsecurity)) {
+		rc = bpf_lsm_inode_setsecurity(inode, name, value, size,
+					       flags);
+		if (rc != -EOPNOTSUPP)
+			return rc;
+	}
+#endif
 	return -EOPNOTSUPP;
 }
 
@@ -1754,6 +1776,12 @@ int security_task_prctl(int option, unsigned long arg2, unsigned long arg3,
 				break;
 		}
 	}
+#ifdef CONFIG_BPF_LSM
+	if (HAS_BPF_LSM_PROG(task_prctl)) {
+		if (rc == -ENOSYS)
+			rc = bpf_lsm_task_prctl(option, arg2, arg3, arg4, arg5);
+	}
+#endif
 	return rc;
 }
 
@@ -2334,6 +2362,10 @@ int security_xfrm_state_pol_flow_match(struct xfrm_state *x,
 		rc = hp->hook.xfrm_state_pol_flow_match(x, xp, fl);
 		break;
 	}
+#ifdef CONFIG_BPF_LSM
+	if (HAS_BPF_LSM_PROG(xfrm_state_pol_flow_match))
+		rc = bpf_lsm_xfrm_state_pol_flow_match(x, xp, fl);
+#endif
 	return rc;
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH bpf-next v4 5/8] bpf: lsm: Implement attach, detach and execution
  2020-02-20 17:52 [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (3 preceding siblings ...)
  2020-02-20 17:52 ` [PATCH bpf-next v4 4/8] bpf: lsm: Add support for enabling/disabling BPF hooks KP Singh
@ 2020-02-20 17:52 ` KP Singh
  2020-02-21  2:17   ` Alexei Starovoitov
  2020-02-20 17:52 ` [PATCH bpf-next v4 6/8] tools/libbpf: Add support for BPF_PROG_TYPE_LSM KP Singh
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 45+ messages in thread
From: KP Singh @ 2020-02-20 17:52 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner, Florent Revest,
	Brendan Jackman, Martin KaFai Lau, Song Liu, Yonghong Song,
	Serge E. Hallyn, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

JITed BPF programs are dynamically attached to the LSM hooks using
fexit trampolines. The trampoline prologue generates code to convert
the signature of the hook to the BPF context, and the newly introduced
BPF_TRAMP_F_OVERRIDE_RETURN flag allows the fexit trampoline to
override the return value of the function it is attached to.

The allocated fexit trampolines are attached to the NOP functions added
at the appropriate places and are executed if all the statically
defined LSM hooks allow the action.

The BPF_PROG_TYPE_LSM programs must have a GPL compatible license and
the following permissions are required to attach a program to a
hook:

- CAP_SYS_ADMIN to load the program
- CAP_MAC_ADMIN to attach it (i.e. to update the security policy)

When the program is loaded (BPF_PROG_LOAD):

* The verifier validates that the program is attaching to a valid
  security hook and updates prog->aux->attach_func_proto.
* The verifier then checks the program's memory accesses against the
  BTF type information (and ensures that no memory is written to).
* An fexit trampoline is initialized (if not present in the lookup
  table).

When an attachment is requested (via BPF_RAW_TRACEPOINT_OPEN, which
routes these programs to bpf_tramp_prog_attach below):

* The fexit trampoline is updated to use the program being attached.
* The static key of the LSM hook is toggled if this is the first
  program being attached to this hook (and not a replacement).
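
For reference, this load/attach sequence reduces to the following raw
bpf(2) calls (a sketch with error handling omitted; ptr_to_u64() is
the usual cast helper, and the BTF id of bpf_lsm_file_mprotect is
assumed to have been resolved from vmlinux BTF):

  union bpf_attr attr = {};

  attr.prog_type = BPF_PROG_TYPE_LSM;
  attr.expected_attach_type = BPF_LSM_MAC;
  attr.attach_btf_id = btf_id;	/* id of bpf_lsm_file_mprotect */
  attr.insns = ptr_to_u64(insns);
  attr.insn_cnt = insn_cnt;
  attr.license = ptr_to_u64("GPL");
  prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));

  memset(&attr, 0, sizeof(attr));
  attr.raw_tracepoint.prog_fd = prog_fd;	/* .name stays NULL */
  link_fd = syscall(__NR_bpf, BPF_RAW_TRACEPOINT_OPEN, &attr,
		    sizeof(attr));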

The attached programs can override the return value of the fexit
trampoline to indicate a MAC Policy decision.

When multiple programs are attached to the hook, each program receives
the return value of the previous program on the stack, and the last
program's return value is used as the return value of the LSM hook.
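
Conceptually, for a hook with programs p1..pN attached, the trampoline
behaves as follows (a sketch, not the generated code):

  ret = 0;		/* return value of the NOP bpf_lsm_<hook>() */
  ret = p1(args, ret);	/* each program sees the previous return value */
  ...
  ret = pN(args, ret);
  return ret;		/* used as the return value of the LSM hook */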

Signed-off-by: KP Singh <kpsingh@google.com>
---
 arch/x86/net/bpf_jit_comp.c | 21 +++++++++++++----
 include/linux/bpf.h         |  4 ++++
 include/linux/bpf_lsm.h     |  8 +++++++
 kernel/bpf/bpf_lsm.c        | 27 +++++++++++++++++++++
 kernel/bpf/btf.c            |  3 ++-
 kernel/bpf/syscall.c        | 47 ++++++++++++++++++++++++++++++-------
 kernel/bpf/trampoline.c     | 24 +++++++++++++++----
 kernel/bpf/verifier.c       | 19 +++++++++++----
 8 files changed, 131 insertions(+), 22 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 9ba08e9abc09..b710abfe06c4 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1362,7 +1362,8 @@ static void restore_regs(const struct btf_func_model *m, u8 **prog, int nr_args,
 }
 
 static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
-		      struct bpf_prog **progs, int prog_cnt, int stack_size)
+		      struct bpf_prog **progs, int prog_cnt, int stack_size,
+		      bool override_return)
 {
 	u8 *prog = *pprog;
 	int cnt = 0, i;
@@ -1384,6 +1385,14 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
 		if (emit_call(&prog, progs[i]->bpf_func, prog))
 			return -EINVAL;
 
+
+		/* If BPF_TRAMP_F_OVERRIDE_RETURN is set, fexit trampolines can
+		 * override the return value of the previous trampoline which is
+		 * then passed on the stack to the next BPF program.
+		 */
+		if (override_return)
+			emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+
 		/* arg1: mov rdi, progs[i] */
 		emit_mov_imm64(&prog, BPF_REG_1, (long) progs[i] >> 32,
 			       (u32) (long) progs[i]);
@@ -1462,6 +1471,7 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
 				struct bpf_prog **fexit_progs, int fexit_cnt,
 				void *orig_call)
 {
+	bool override_return = flags & BPF_TRAMP_F_OVERRIDE_RETURN;
 	int cnt = 0, nr_args = m->nr_args;
 	int stack_size = nr_args * 8;
 	u8 *prog;
@@ -1493,7 +1503,8 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
 	save_regs(m, &prog, nr_args, stack_size);
 
 	if (fentry_cnt)
-		if (invoke_bpf(m, &prog, fentry_progs, fentry_cnt, stack_size))
+		if (invoke_bpf(m, &prog, fentry_progs, fentry_cnt, stack_size,
+			       false))
 			return -EINVAL;
 
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
@@ -1503,18 +1514,20 @@ int arch_prepare_bpf_trampoline(void *image, void *image_end,
 		/* call original function */
 		if (emit_call(&prog, orig_call, prog))
 			return -EINVAL;
+
 		/* remember return value in a stack for bpf prog to access */
 		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
 	}
 
 	if (fexit_cnt)
-		if (invoke_bpf(m, &prog, fexit_progs, fexit_cnt, stack_size))
+		if (invoke_bpf(m, &prog, fexit_progs, fexit_cnt, stack_size,
+			       override_return))
 			return -EINVAL;
 
 	if (flags & BPF_TRAMP_F_RESTORE_REGS)
 		restore_regs(m, &prog, nr_args, stack_size);
 
-	if (flags & BPF_TRAMP_F_CALL_ORIG)
+	if (flags & BPF_TRAMP_F_CALL_ORIG && !override_return)
 		/* restore original return value back into RAX */
 		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
 
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index c647cef3f4c1..e63caadbaef3 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -432,6 +432,10 @@ struct btf_func_model {
  * programs only. Should not be used with normal calls and indirect calls.
  */
 #define BPF_TRAMP_F_SKIP_FRAME		BIT(2)
+/* Override the return value of the original function. This flag only makes
+ * sense for fexit trampolines.
+ */
+#define BPF_TRAMP_F_OVERRIDE_RETURN     BIT(3)
 
 /* Different use cases for BPF trampoline:
  * 1. replace nop at the function entry (kprobe equivalent)
diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
index 53dcda8ace01..8f114affe5c6 100644
--- a/include/linux/bpf_lsm.h
+++ b/include/linux/bpf_lsm.h
@@ -41,6 +41,8 @@ void bpf_lsm_##NAME##_set_enabled(bool value);
 })
 
 int bpf_lsm_set_enabled(const char *name, bool value);
+int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
+			const struct bpf_prog *prog);
 
 #else /* !CONFIG_BPF_LSM */
 
@@ -53,6 +55,12 @@ static inline int bpf_lsm_set_enabled(const char *name, bool value)
 	return -EOPNOTSUPP;
 }
 
+static inline int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
+				      const struct bpf_prog *prog)
+{
+	return -EOPNOTSUPP;
+}
+
 #endif /* CONFIG_BPF_LSM */
 
 #endif /* _LINUX_BPF_LSM_H */
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index d7c44433c003..edeb4ded1d3e 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -10,6 +10,7 @@
 #include <linux/bpf_lsm.h>
 #include <linux/jump_label.h>
 #include <linux/kallsyms.h>
+#include <linux/bpf_verifier.h>
 
 #define LSM_HOOK(RET, NAME, ...)					\
 	DEFINE_STATIC_KEY_FALSE(bpf_lsm_key_##NAME);			\
@@ -52,6 +53,32 @@ int bpf_lsm_set_enabled(const char *name, bool value)
 	return 0;
 }
 
+#define BPF_LSM_SYM_PREFX  "bpf_lsm_"
+
+int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
+			const struct bpf_prog *prog)
+{
+	/* Only CAP_MAC_ADMIN users are allowed to make changes to LSM hooks
+	 */
+	if (!capable(CAP_MAC_ADMIN))
+		return -EPERM;
+
+	if (!prog->gpl_compatible) {
+		bpf_log(vlog,
+			"LSM programs must have a GPL compatible license\n");
+		return -EINVAL;
+	}
+
+	if (strncmp(BPF_LSM_SYM_PREFX, prog->aux->attach_func_name,
+		    strlen(BPF_LSM_SYM_PREFX))) {
+		bpf_log(vlog, "attach_btf_id %u points to wrong type name %s\n",
+			prog->aux->attach_btf_id, prog->aux->attach_func_name);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 const struct bpf_prog_ops lsm_prog_ops = {
 };
 
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 805c43b083e9..0e4cad3c810b 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3710,7 +3710,8 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		nr_args--;
 	}
 
-	if (prog->expected_attach_type == BPF_TRACE_FEXIT &&
+	if ((prog->expected_attach_type == BPF_TRACE_FEXIT ||
+	     prog->expected_attach_type == BPF_LSM_MAC) &&
 	    arg == nr_args) {
 		if (!t)
 			/* Default prog with 5 args. 6th arg is retval. */
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index a91ad518c050..e10e216463ad 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -25,6 +25,7 @@
 #include <linux/nospec.h>
 #include <linux/audit.h>
 #include <uapi/linux/btf.h>
+#include <linux/bpf_lsm.h>
 
 #define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \
 			  (map)->map_type == BPF_MAP_TYPE_CGROUP_ARRAY || \
@@ -1931,6 +1932,7 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
 
 		switch (prog_type) {
 		case BPF_PROG_TYPE_TRACING:
+		case BPF_PROG_TYPE_LSM:
 		case BPF_PROG_TYPE_STRUCT_OPS:
 		case BPF_PROG_TYPE_EXT:
 			break;
@@ -2169,28 +2171,53 @@ static int bpf_obj_get(const union bpf_attr *attr)
 				attr->file_flags);
 }
 
-static int bpf_tracing_prog_release(struct inode *inode, struct file *filp)
+static int bpf_tramp_prog_release(struct inode *inode, struct file *filp)
 {
 	struct bpf_prog *prog = filp->private_data;
 
+	/* Only CAP_MAC_ADMIN users are allowed to make changes to LSM hooks
+	 */
+	if (prog->type == BPF_PROG_TYPE_LSM && !capable(CAP_MAC_ADMIN))
+		return -EPERM;
+
 	WARN_ON_ONCE(bpf_trampoline_unlink_prog(prog));
 	bpf_prog_put(prog);
 	return 0;
 }
 
-static const struct file_operations bpf_tracing_prog_fops = {
-	.release	= bpf_tracing_prog_release,
+static const struct file_operations bpf_tramp_prog_fops = {
+	.release	= bpf_tramp_prog_release,
 	.read		= bpf_dummy_read,
 	.write		= bpf_dummy_write,
 };
 
-static int bpf_tracing_prog_attach(struct bpf_prog *prog)
+static int bpf_tramp_prog_attach(struct bpf_prog *prog)
 {
 	int tr_fd, err;
 
-	if (prog->expected_attach_type != BPF_TRACE_FENTRY &&
-	    prog->expected_attach_type != BPF_TRACE_FEXIT &&
-	    prog->type != BPF_PROG_TYPE_EXT) {
+	switch (prog->type) {
+	case BPF_PROG_TYPE_TRACING:
+		if (prog->expected_attach_type != BPF_TRACE_FENTRY &&
+		    prog->expected_attach_type != BPF_TRACE_FEXIT &&
+		    prog->type != BPF_PROG_TYPE_EXT) {
+			err = -EINVAL;
+			goto out_put_prog;
+		}
+		break;
+	case BPF_PROG_TYPE_LSM:
+		if (prog->expected_attach_type != BPF_LSM_MAC) {
+			err = -EINVAL;
+			goto out_put_prog;
+		}
+		/* Only CAP_MAC_ADMIN users are allowed to make changes to LSM
+		 * hooks.
+		 */
+		if (!capable(CAP_MAC_ADMIN)) {
+			err = -EPERM;
+			goto out_put_prog;
+		}
+		break;
+	default:
 		err = -EINVAL;
 		goto out_put_prog;
 	}
@@ -2199,7 +2226,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog)
 	if (err)
 		goto out_put_prog;
 
-	tr_fd = anon_inode_getfd("bpf-tracing-prog", &bpf_tracing_prog_fops,
+	tr_fd = anon_inode_getfd("bpf-tramp-prog", &bpf_tramp_prog_fops,
 				 prog, O_CLOEXEC);
 	if (tr_fd < 0) {
 		WARN_ON_ONCE(bpf_trampoline_unlink_prog(prog));
@@ -2258,12 +2285,14 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
 	if (prog->type != BPF_PROG_TYPE_RAW_TRACEPOINT &&
 	    prog->type != BPF_PROG_TYPE_TRACING &&
 	    prog->type != BPF_PROG_TYPE_EXT &&
+	    prog->type != BPF_PROG_TYPE_LSM &&
 	    prog->type != BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE) {
 		err = -EINVAL;
 		goto out_put_prog;
 	}
 
 	if (prog->type == BPF_PROG_TYPE_TRACING ||
+	    prog->type == BPF_PROG_TYPE_LSM ||
 	    prog->type == BPF_PROG_TYPE_EXT) {
 		if (attr->raw_tracepoint.name) {
 			/* The attach point for this category of programs
@@ -2275,7 +2304,7 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
 		if (prog->expected_attach_type == BPF_TRACE_RAW_TP)
 			tp_name = prog->aux->attach_func_name;
 		else
-			return bpf_tracing_prog_attach(prog);
+			return bpf_tramp_prog_attach(prog);
 	} else {
 		if (strncpy_from_user(buf,
 				      u64_to_user_ptr(attr->raw_tracepoint.name),
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 6b264a92064b..4974c14258a9 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -5,6 +5,7 @@
 #include <linux/filter.h>
 #include <linux/ftrace.h>
 #include <linux/rbtree_latch.h>
+#include <linux/bpf_lsm.h>
 
 /* dummy _ops. The verifier will operate on target program's ops. */
 const struct bpf_verifier_ops bpf_extension_verifier_ops = {
@@ -195,8 +196,9 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
  */
 #define BPF_MAX_TRAMP_PROGS 40
 
-static int bpf_trampoline_update(struct bpf_trampoline *tr)
+static int bpf_trampoline_update(struct bpf_prog *prog)
 {
+	struct bpf_trampoline *tr = prog->aux->trampoline;
 	void *old_image = tr->image + ((tr->selector + 1) & 1) * BPF_IMAGE_SIZE/2;
 	void *new_image = tr->image + (tr->selector & 1) * BPF_IMAGE_SIZE/2;
 	struct bpf_prog *progs_to_run[BPF_MAX_TRAMP_PROGS];
@@ -223,8 +225,11 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
 	hlist_for_each_entry(aux, &tr->progs_hlist[BPF_TRAMP_FEXIT], tramp_hlist)
 		*progs++ = aux->prog;
 
-	if (fexit_cnt)
+	if (fexit_cnt) {
 		flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
+		if (prog->type == BPF_PROG_TYPE_LSM)
+			flags |= BPF_TRAMP_F_OVERRIDE_RETURN;
+	}
 
 	/* Though the second half of trampoline page is unused a task could be
 	 * preempted in the middle of the first half of trampoline and two
@@ -261,6 +266,7 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(enum bpf_attach_type t)
 	case BPF_TRACE_FENTRY:
 		return BPF_TRAMP_FENTRY;
 	case BPF_TRACE_FEXIT:
+	case BPF_LSM_MAC:
 		return BPF_TRAMP_FEXIT;
 	default:
 		return BPF_TRAMP_REPLACE;
@@ -307,11 +313,17 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog)
 	}
 	hlist_add_head(&prog->aux->tramp_hlist, &tr->progs_hlist[kind]);
 	tr->progs_cnt[kind]++;
-	err = bpf_trampoline_update(prog->aux->trampoline);
+	err = bpf_trampoline_update(prog);
 	if (err) {
 		hlist_del(&prog->aux->tramp_hlist);
 		tr->progs_cnt[kind]--;
 	}
+
+	/* This is the first program to be attached to the LSM hook, the hook
+	 * needs to be enabled.
+	 */
+	if (prog->type == BPF_PROG_TYPE_LSM && tr->progs_cnt[kind] == 1)
+		err = bpf_lsm_set_enabled(prog->aux->attach_func_name, true);
 out:
 	mutex_unlock(&tr->mutex);
 	return err;
@@ -336,7 +348,11 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog)
 	}
 	hlist_del(&prog->aux->tramp_hlist);
 	tr->progs_cnt[kind]--;
-	err = bpf_trampoline_update(prog->aux->trampoline);
+	err = bpf_trampoline_update(prog);
+
+	/* There are no more LSM programs, the hook should be disabled */
+	if (prog->type == BPF_PROG_TYPE_LSM && tr->progs_cnt[kind] == 0)
+		err = bpf_lsm_set_enabled(prog->aux->attach_func_name, false);
 out:
 	mutex_unlock(&tr->mutex);
 	return err;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1cc945daa9c8..6be11889678b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -19,6 +19,7 @@
 #include <linux/sort.h>
 #include <linux/perf_event.h>
 #include <linux/ctype.h>
+#include <linux/bpf_lsm.h>
 
 #include "disasm.h"
 
@@ -6405,8 +6406,9 @@ static int check_return_code(struct bpf_verifier_env *env)
 	struct tnum range = tnum_range(0, 1);
 	int err;
 
-	/* The struct_ops func-ptr's return type could be "void" */
-	if (env->prog->type == BPF_PROG_TYPE_STRUCT_OPS &&
+	/* LSM and struct_ops func-ptr's return type could be "void" */
+	if ((env->prog->type == BPF_PROG_TYPE_STRUCT_OPS ||
+	     env->prog->type == BPF_PROG_TYPE_LSM) &&
 	    !prog->aux->attach_func_proto->type)
 		return 0;
 
@@ -9794,7 +9796,9 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 	if (prog->type == BPF_PROG_TYPE_STRUCT_OPS)
 		return check_struct_ops_btf_id(env);
 
-	if (prog->type != BPF_PROG_TYPE_TRACING && !prog_extension)
+	if (prog->type != BPF_PROG_TYPE_TRACING &&
+	    prog->type != BPF_PROG_TYPE_LSM &&
+	    !prog_extension)
 		return 0;
 
 	if (!btf_id) {
@@ -9924,8 +9928,16 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 		if (!prog_extension)
 			return -EINVAL;
 		/* fallthrough */
+	case BPF_LSM_MAC:
 	case BPF_TRACE_FENTRY:
 	case BPF_TRACE_FEXIT:
+		prog->aux->attach_func_name = tname;
+		if (prog->type == BPF_PROG_TYPE_LSM) {
+			ret = bpf_lsm_verify_prog(&env->log, prog);
+			if (ret < 0)
+				return ret;
+		}
+
 		if (!btf_type_is_func(t)) {
 			verbose(env, "attach_btf_id %u is not a function\n",
 				btf_id);
@@ -9940,7 +9952,6 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 		tr = bpf_trampoline_lookup(key);
 		if (!tr)
 			return -ENOMEM;
-		prog->aux->attach_func_name = tname;
 		/* t is either vmlinux type or another program's type */
 		prog->aux->attach_func_proto = t;
 		mutex_lock(&tr->mutex);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH bpf-next v4 6/8] tools/libbpf: Add support for BPF_PROG_TYPE_LSM
  2020-02-20 17:52 [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (4 preceding siblings ...)
  2020-02-20 17:52 ` [PATCH bpf-next v4 5/8] bpf: lsm: Implement attach, detach and execution KP Singh
@ 2020-02-20 17:52 ` KP Singh
  2020-02-25  6:45   ` Andrii Nakryiko
  2020-02-20 17:52 ` [PATCH bpf-next v4 7/8] bpf: lsm: Add selftests " KP Singh
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 45+ messages in thread
From: KP Singh @ 2020-02-20 17:52 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner, Florent Revest,
	Brendan Jackman, Martin KaFai Lau, Song Liu, Yonghong Song,
	Serge E. Hallyn, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

Since BPF_PROG_TYPE_LSM uses the same attachment mechanism as
BPF_PROG_TYPE_TRACING, the common logic is refactored into a static
function, bpf_program__attach_btf.

A separate API call, bpf_program__attach_lsm, is still added so that
userspace does not break if this mechanism ever changes in the future.
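
A minimal usage sketch (assuming an object file with a program in a
SEC("lsm/file_mprotect") section; error handling omitted):

  struct bpf_object *obj = bpf_object__open("lsm_prog.o");
  struct bpf_program *prog;
  struct bpf_link *link;

  bpf_object__load(obj);
  prog = bpf_object__find_program_by_title(obj, "lsm/file_mprotect");
  link = bpf_program__attach_lsm(prog);

The generic bpf_program__attach(prog) also works, since the new "lsm/"
section definition wires up attach_lsm as its auto-attach function.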

Signed-off-by: KP Singh <kpsingh@google.com>
---
 tools/lib/bpf/bpf.c      |  3 ++-
 tools/lib/bpf/libbpf.c   | 46 ++++++++++++++++++++++++++++++++--------
 tools/lib/bpf/libbpf.h   |  4 ++++
 tools/lib/bpf/libbpf.map |  3 +++
 4 files changed, 46 insertions(+), 10 deletions(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index c6dafe563176..73220176728d 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -235,7 +235,8 @@ int bpf_load_program_xattr(const struct bpf_load_program_attr *load_attr,
 	memset(&attr, 0, sizeof(attr));
 	attr.prog_type = load_attr->prog_type;
 	attr.expected_attach_type = load_attr->expected_attach_type;
-	if (attr.prog_type == BPF_PROG_TYPE_STRUCT_OPS) {
+	if (attr.prog_type == BPF_PROG_TYPE_STRUCT_OPS ||
+	    attr.prog_type == BPF_PROG_TYPE_LSM) {
 		attr.attach_btf_id = load_attr->attach_btf_id;
 	} else if (attr.prog_type == BPF_PROG_TYPE_TRACING ||
 		   attr.prog_type == BPF_PROG_TYPE_EXT) {
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 514b1a524abb..d11139d5e76b 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -2351,16 +2351,14 @@ static int bpf_object__finalize_btf(struct bpf_object *obj)
 
 static inline bool libbpf_prog_needs_vmlinux_btf(struct bpf_program *prog)
 {
-	if (prog->type == BPF_PROG_TYPE_STRUCT_OPS)
+	if (prog->type == BPF_PROG_TYPE_STRUCT_OPS ||
+	    prog->type == BPF_PROG_TYPE_LSM)
 		return true;
 
 	/* BPF_PROG_TYPE_TRACING programs which do not attach to other programs
 	 * also need vmlinux BTF
 	 */
-	if (prog->type == BPF_PROG_TYPE_TRACING && !prog->attach_prog_fd)
-		return true;
-
-	return false;
+	return prog->type == BPF_PROG_TYPE_TRACING && !prog->attach_prog_fd;
 }
 
 static int bpf_object__load_vmlinux_btf(struct bpf_object *obj)
@@ -4855,7 +4853,8 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
 	load_attr.insns = insns;
 	load_attr.insns_cnt = insns_cnt;
 	load_attr.license = license;
-	if (prog->type == BPF_PROG_TYPE_STRUCT_OPS) {
+	if (prog->type == BPF_PROG_TYPE_STRUCT_OPS ||
+	    prog->type == BPF_PROG_TYPE_LSM) {
 		load_attr.attach_btf_id = prog->attach_btf_id;
 	} else if (prog->type == BPF_PROG_TYPE_TRACING ||
 		   prog->type == BPF_PROG_TYPE_EXT) {
@@ -4940,6 +4939,7 @@ int bpf_program__load(struct bpf_program *prog, char *license, __u32 kern_ver)
 	int err = 0, fd, i, btf_id;
 
 	if (prog->type == BPF_PROG_TYPE_TRACING ||
+	    prog->type == BPF_PROG_TYPE_LSM ||
 	    prog->type == BPF_PROG_TYPE_EXT) {
 		btf_id = libbpf_find_attach_btf_id(prog);
 		if (btf_id <= 0)
@@ -6179,6 +6179,7 @@ bool bpf_program__is_##NAME(const struct bpf_program *prog)	\
 }								\
 
 BPF_PROG_TYPE_FNS(socket_filter, BPF_PROG_TYPE_SOCKET_FILTER);
+BPF_PROG_TYPE_FNS(lsm, BPF_PROG_TYPE_LSM);
 BPF_PROG_TYPE_FNS(kprobe, BPF_PROG_TYPE_KPROBE);
 BPF_PROG_TYPE_FNS(sched_cls, BPF_PROG_TYPE_SCHED_CLS);
 BPF_PROG_TYPE_FNS(sched_act, BPF_PROG_TYPE_SCHED_ACT);
@@ -6245,6 +6246,8 @@ static struct bpf_link *attach_raw_tp(const struct bpf_sec_def *sec,
 				      struct bpf_program *prog);
 static struct bpf_link *attach_trace(const struct bpf_sec_def *sec,
 				     struct bpf_program *prog);
+static struct bpf_link *attach_lsm(const struct bpf_sec_def *sec,
+				   struct bpf_program *prog);
 
 struct bpf_sec_def {
 	const char *sec;
@@ -6291,6 +6294,10 @@ static const struct bpf_sec_def section_defs[] = {
 	SEC_DEF("freplace/", EXT,
 		.is_attach_btf = true,
 		.attach_fn = attach_trace),
+	SEC_DEF("lsm/", LSM,
+		.is_attach_btf = true,
+		.expected_attach_type = BPF_LSM_MAC,
+		.attach_fn = attach_lsm),
 	BPF_PROG_SEC("xdp",			BPF_PROG_TYPE_XDP),
 	BPF_PROG_SEC("perf_event",		BPF_PROG_TYPE_PERF_EVENT),
 	BPF_PROG_SEC("lwt_in",			BPF_PROG_TYPE_LWT_IN),
@@ -6553,6 +6560,7 @@ static int bpf_object__collect_struct_ops_map_reloc(struct bpf_object *obj,
 }
 
 #define BTF_TRACE_PREFIX "btf_trace_"
+#define BTF_LSM_PREFIX "bpf_lsm_"
 #define BTF_MAX_NAME_SIZE 128
 
 static int find_btf_by_prefix_kind(const struct btf *btf, const char *prefix,
@@ -6580,6 +6588,9 @@ static inline int __find_vmlinux_btf_id(struct btf *btf, const char *name,
 	if (attach_type == BPF_TRACE_RAW_TP)
 		err = find_btf_by_prefix_kind(btf, BTF_TRACE_PREFIX, name,
 					      BTF_KIND_TYPEDEF);
+	else if (attach_type == BPF_LSM_MAC)
+		err = find_btf_by_prefix_kind(btf, BTF_LSM_PREFIX, name,
+					      BTF_KIND_FUNC);
 	else
 		err = btf__find_by_name_kind(btf, name, BTF_KIND_FUNC);
 
@@ -7354,7 +7365,8 @@ static struct bpf_link *attach_raw_tp(const struct bpf_sec_def *sec,
 	return bpf_program__attach_raw_tracepoint(prog, tp_name);
 }
 
-struct bpf_link *bpf_program__attach_trace(struct bpf_program *prog)
+/* Common logic for all BPF program types that attach to a btf_id */
+static struct bpf_link *bpf_program__attach_btf(struct bpf_program *prog)
 {
 	char errmsg[STRERR_BUFSIZE];
 	struct bpf_link_fd *link;
@@ -7376,7 +7388,7 @@ struct bpf_link *bpf_program__attach_trace(struct bpf_program *prog)
 	if (pfd < 0) {
 		pfd = -errno;
 		free(link);
-		pr_warn("program '%s': failed to attach to trace: %s\n",
+		pr_warn("program '%s': failed to attach to: %s\n",
 			bpf_program__title(prog, false),
 			libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
 		return ERR_PTR(pfd);
@@ -7385,10 +7397,26 @@ struct bpf_link *bpf_program__attach_trace(struct bpf_program *prog)
 	return (struct bpf_link *)link;
 }
 
+struct bpf_link *bpf_program__attach_trace(struct bpf_program *prog)
+{
+	return bpf_program__attach_btf(prog);
+}
+
+struct bpf_link *bpf_program__attach_lsm(struct bpf_program *prog)
+{
+	return bpf_program__attach_btf(prog);
+}
+
 static struct bpf_link *attach_trace(const struct bpf_sec_def *sec,
 				     struct bpf_program *prog)
 {
-	return bpf_program__attach_trace(prog);
+	return bpf_program__attach_btf(prog);
+}
+
+static struct bpf_link *attach_lsm(const struct bpf_sec_def *sec,
+				   struct bpf_program *prog)
+{
+	return bpf_program__attach_btf(prog);
 }
 
 struct bpf_link *bpf_program__attach(struct bpf_program *prog)
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 3fe12c9d1f92..3f72323f205b 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -243,6 +243,8 @@ bpf_program__attach_raw_tracepoint(struct bpf_program *prog,
 
 LIBBPF_API struct bpf_link *
 bpf_program__attach_trace(struct bpf_program *prog);
+LIBBPF_API struct bpf_link *
+bpf_program__attach_lsm(struct bpf_program *prog);
 struct bpf_map;
 LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(struct bpf_map *map);
 struct bpf_insn;
@@ -316,6 +318,7 @@ LIBBPF_API int bpf_program__set_socket_filter(struct bpf_program *prog);
 LIBBPF_API int bpf_program__set_tracepoint(struct bpf_program *prog);
 LIBBPF_API int bpf_program__set_raw_tracepoint(struct bpf_program *prog);
 LIBBPF_API int bpf_program__set_kprobe(struct bpf_program *prog);
+LIBBPF_API int bpf_program__set_lsm(struct bpf_program *prog);
 LIBBPF_API int bpf_program__set_sched_cls(struct bpf_program *prog);
 LIBBPF_API int bpf_program__set_sched_act(struct bpf_program *prog);
 LIBBPF_API int bpf_program__set_xdp(struct bpf_program *prog);
@@ -338,6 +341,7 @@ LIBBPF_API bool bpf_program__is_socket_filter(const struct bpf_program *prog);
 LIBBPF_API bool bpf_program__is_tracepoint(const struct bpf_program *prog);
 LIBBPF_API bool bpf_program__is_raw_tracepoint(const struct bpf_program *prog);
 LIBBPF_API bool bpf_program__is_kprobe(const struct bpf_program *prog);
+LIBBPF_API bool bpf_program__is_lsm(const struct bpf_program *prog);
 LIBBPF_API bool bpf_program__is_sched_cls(const struct bpf_program *prog);
 LIBBPF_API bool bpf_program__is_sched_act(const struct bpf_program *prog);
 LIBBPF_API bool bpf_program__is_xdp(const struct bpf_program *prog);
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index b035122142bb..8df332a528a0 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -227,10 +227,13 @@ LIBBPF_0.0.7 {
 		bpf_probe_large_insn_limit;
 		bpf_prog_attach_xattr;
 		bpf_program__attach;
+		bpf_program__attach_lsm;
 		bpf_program__name;
 		bpf_program__is_extension;
+		bpf_program__is_lsm;
 		bpf_program__is_struct_ops;
 		bpf_program__set_extension;
+		bpf_program__set_lsm;
 		bpf_program__set_struct_ops;
 		btf__align_of;
 		libbpf_find_kernel_btf;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH bpf-next v4 7/8] bpf: lsm: Add selftests for BPF_PROG_TYPE_LSM
  2020-02-20 17:52 [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (5 preceding siblings ...)
  2020-02-20 17:52 ` [PATCH bpf-next v4 6/8] tools/libbpf: Add support for BPF_PROG_TYPE_LSM KP Singh
@ 2020-02-20 17:52 ` KP Singh
  2020-02-20 17:52 ` [PATCH bpf-next v4 8/8] bpf: lsm: Add Documentation KP Singh
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 45+ messages in thread
From: KP Singh @ 2020-02-20 17:52 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner, Florent Revest,
	Brendan Jackman, Martin KaFai Lau, Song Liu, Yonghong Song,
	Serge E. Hallyn, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

* Load a BPF program that hooks into the mprotect calls.
* Attach the program to the "file_mprotect" LSM hook.
* Do an mprotect on some memory allocated on the heap.
* Verify that the return value is overridden.
* Verify that the audit event was received, using the shared global
  result variable.

Signed-off-by: KP Singh <kpsingh@google.com>
Reviewed-by: Brendan Jackman <jackmanb@google.com>
Reviewed-by: Florent Revest <revest@google.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
---
 tools/testing/selftests/bpf/lsm_helpers.h     | 19 ++++
 .../selftests/bpf/prog_tests/lsm_mprotect.c   | 96 +++++++++++++++++++
 .../selftests/bpf/progs/lsm_mprotect_audit.c  | 48 ++++++++++
 .../selftests/bpf/progs/lsm_mprotect_mac.c    | 53 ++++++++++
 4 files changed, 216 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/lsm_helpers.h
 create mode 100644 tools/testing/selftests/bpf/prog_tests/lsm_mprotect.c
 create mode 100644 tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c
 create mode 100644 tools/testing/selftests/bpf/progs/lsm_mprotect_mac.c

diff --git a/tools/testing/selftests/bpf/lsm_helpers.h b/tools/testing/selftests/bpf/lsm_helpers.h
new file mode 100644
index 000000000000..b973ec1c4a0b
--- /dev/null
+++ b/tools/testing/selftests/bpf/lsm_helpers.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright 2019 Google LLC.
+ */
+#ifndef _LSM_HELPERS_H
+#define _LSM_HELPERS_H
+
+struct lsm_mprotect_result {
+	/* This ensures that the LSM Hook only monitors the PID requested
+	 * by the loader
+	 */
+	__u32 monitored_pid;
+	/* The number of mprotect calls for the monitored PID.
+	 */
+	__u32 mprotect_count;
+};
+
+#endif /* _LSM_HELPERS_H */
diff --git a/tools/testing/selftests/bpf/prog_tests/lsm_mprotect.c b/tools/testing/selftests/bpf/prog_tests/lsm_mprotect.c
new file mode 100644
index 000000000000..93c3b5fb2ef0
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/lsm_mprotect.c
@@ -0,0 +1,96 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright 2019 Google LLC.
+ */
+
+#include <test_progs.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include <malloc.h>
+#include "lsm_helpers.h"
+#include "lsm_mprotect_audit.skel.h"
+#include "lsm_mprotect_mac.skel.h"
+
+int heap_mprotect(void)
+{
+	void *buf;
+	long sz;
+
+	sz = sysconf(_SC_PAGESIZE);
+	if (sz < 0)
+		return sz;
+
+	buf = memalign(sz, 2 * sz);
+	if (buf == NULL)
+		return -ENOMEM;
+
+	return mprotect(buf, sz, PROT_READ | PROT_EXEC);
+}
+
+void test_lsm_mprotect_audit(void)
+{
+	struct lsm_mprotect_result *result;
+	struct lsm_mprotect_audit *skel = NULL;
+	int err, duration = 0;
+
+	skel = lsm_mprotect_audit__open_and_load();
+	if (CHECK(!skel, "skel_load", "lsm_mprotect_audit skeleton failed\n"))
+		goto close_prog;
+
+	err = lsm_mprotect_audit__attach(skel);
+	if (CHECK(err, "attach", "lsm_mprotect_audit attach failed: %d\n", err))
+		goto close_prog;
+
+	result = &skel->bss->result;
+	result->monitored_pid = getpid();
+
+	err = heap_mprotect();
+	if (CHECK(err < 0, "heap_mprotect", "err %d errno %d\n", err, errno))
+		goto close_prog;
+
+	/* Make sure mprotect_audit program was triggered
+	 * and detected an mprotect on the heap.
+	 */
+	CHECK_FAIL(result->mprotect_count != 1);
+
+close_prog:
+	lsm_mprotect_audit__destroy(skel);
+}
+
+void test_lsm_mprotect_mac(void)
+{
+	struct lsm_mprotect_result *result;
+	struct lsm_mprotect_mac *skel = NULL;
+	int err, duration = 0;
+
+	skel = lsm_mprotect_mac__open_and_load();
+	if (CHECK(!skel, "skel_load", "lsm_mprotect_mac skeleton failed\n"))
+		goto close_prog;
+
+	err = lsm_mprotect_mac__attach(skel);
+	if (CHECK(err, "attach", "lsm_mprotect_mac attach failed: %d\n", err))
+		goto close_prog;
+
+	result = &skel->bss->result;
+	result->monitored_pid = getpid();
+
+	err = heap_mprotect();
+	if (CHECK(errno != EPERM, "heap_mprotect", "want errno=EPERM, got %d\n",
+		  errno))
+		goto close_prog;
+
+	/* Make sure mprotect_mac program was triggered
+	 * and detected an mprotect on the heap.
+	 */
+	CHECK_FAIL(result->mprotect_count != 1);
+
+close_prog:
+	lsm_mprotect_mac__destroy(skel);
+}
+
+void test_lsm_mprotect(void)
+{
+	test_lsm_mprotect_audit();
+	test_lsm_mprotect_mac();
+}
diff --git a/tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c b/tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c
new file mode 100644
index 000000000000..c68fb02b57fa
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright 2019 Google LLC.
+ */
+
+#include <linux/bpf.h>
+#include <stdbool.h>
+#include "bpf_trace_helpers.h"
+#include  <errno.h>
+#include "lsm_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct lsm_mprotect_result result = {
+	.mprotect_count = 0,
+	.monitored_pid = 0,
+};
+
+/*
+ * Define some of the structs used in the BPF program.
+ * Only the field names and their sizes need to be the
+ * same as the kernel type, the order is irrelevant.
+ */
+struct mm_struct {
+	unsigned long start_brk, brk;
+} __attribute__((preserve_access_index));
+
+struct vm_area_struct {
+	unsigned long vm_start, vm_end;
+	struct mm_struct *vm_mm;
+} __attribute__((preserve_access_index));
+
+SEC("lsm/file_mprotect")
+int BPF_PROG(mprotect_audit, struct vm_area_struct *vma,
+	     unsigned long reqprot, unsigned long prot)
+{
+	__u32 pid = bpf_get_current_pid_tgid();
+	int is_heap = 0;
+
+	is_heap = (vma->vm_start >= vma->vm_mm->start_brk &&
+		   vma->vm_end <= vma->vm_mm->brk);
+
+	if (is_heap && result.monitored_pid == pid)
+		result.mprotect_count++;
+
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/lsm_mprotect_mac.c b/tools/testing/selftests/bpf/progs/lsm_mprotect_mac.c
new file mode 100644
index 000000000000..c0ae344593e8
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/lsm_mprotect_mac.c
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright 2019 Google LLC.
+ */
+
+#include <linux/bpf.h>
+#include <stdbool.h>
+#include "bpf_trace_helpers.h"
+#include  <errno.h>
+#include "lsm_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct lsm_mprotect_result result = {
+	.mprotect_count = 0,
+	.monitored_pid = 0,
+};
+
+/*
+ * Define some of the structs used in the BPF program.
+ * Only the field names and their sizes need to be the
+ * same as the kernel type, the order is irrelevant.
+ */
+struct mm_struct {
+	unsigned long start_brk, brk;
+} __attribute__((preserve_access_index));
+
+struct vm_area_struct {
+	unsigned long vm_start, vm_end;
+	struct mm_struct *vm_mm;
+} __attribute__((preserve_access_index));
+
+SEC("lsm/file_mprotect")
+int BPF_PROG(mprotect_mac, struct vm_area_struct *vma,
+	     unsigned long reqprot, unsigned long prot, int ret)
+{
+	if (ret != 0)
+		return ret;
+
+	__u32 pid = bpf_get_current_pid_tgid();
+	int is_heap = 0;
+
+	is_heap = (vma->vm_start >= vma->vm_mm->start_brk &&
+		   vma->vm_end <= vma->vm_mm->brk);
+
+	if (is_heap && result.monitored_pid == pid) {
+		result.mprotect_count++;
+		ret = -EPERM;
+	}
+
+	return ret;
+}
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH bpf-next v4 8/8] bpf: lsm: Add Documentation
  2020-02-20 17:52 [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (6 preceding siblings ...)
  2020-02-20 17:52 ` [PATCH bpf-next v4 7/8] bpf: lsm: Add selftests " KP Singh
@ 2020-02-20 17:52 ` KP Singh
  2020-02-21 19:19 ` [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) Casey Schaufler
  2020-02-27 18:40 ` Dr. Greg
  9 siblings, 0 replies; 45+ messages in thread
From: KP Singh @ 2020-02-20 17:52 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner, Florent Revest,
	Brendan Jackman, Martin KaFai Lau, Song Liu, Yonghong Song,
	Serge E. Hallyn, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

Document how eBPF programs (BPF_PROG_TYPE_LSM) can be loaded and
attached (BPF_LSM_MAC) to the LSM hooks.

Signed-off-by: KP Singh <kpsingh@google.com>
Reviewed-by: Brendan Jackman <jackmanb@google.com>
Reviewed-by: Florent Revest <revest@google.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
---
 Documentation/bpf/bpf_lsm.rst | 147 ++++++++++++++++++++++++++++++++++
 Documentation/bpf/index.rst   |   1 +
 2 files changed, 148 insertions(+)
 create mode 100644 Documentation/bpf/bpf_lsm.rst

diff --git a/Documentation/bpf/bpf_lsm.rst b/Documentation/bpf/bpf_lsm.rst
new file mode 100644
index 000000000000..9d7ec8cb431d
--- /dev/null
+++ b/Documentation/bpf/bpf_lsm.rst
@@ -0,0 +1,147 @@
+.. SPDX-License-Identifier: GPL-2.0+
+.. Copyright 2019 Google LLC.
+
+================
+LSM BPF Programs
+================
+
+These BPF programs allow runtime instrumentation of the LSM hooks by privileged
+users to implement system-wide MAC (Mandatory Access Control) and Audit
+policies using eBPF. Since these program end up modifying the MAC policies of
+the system, they require both ``CAP_MAC_ADMIN`` and also require
+``CAP_SYS_ADMIN`` for the loading of BPF programs.
+
+Structure
+---------
+
+The example shows an eBPF program that can be attached to the ``file_mprotect``
+LSM hook:
+
+.. c:function:: int file_mprotect(struct vm_area_struct *vma, unsigned long reqprot, unsigned long prot);
+
+eBPF programs that use :doc:`/bpf/btf` do not need to include kernel headers
+for accessing information from the attached eBPF program's context. They can
+simply declare the structures in the eBPF program and only specify the fields
+that need to be accessed.
+
+.. code-block:: c
+
+	struct mm_struct {
+		unsigned long start_brk, brk, start_stack;
+	} __attribute__((preserve_access_index));
+
+	struct vm_area_struct {
+		unsigned long vm_start, vm_end;
+		struct mm_struct *vm_mm;
+	} __attribute__((preserve_access_index));
+
+
+.. note:: Only the size and the names of the fields must match the type in the
+	  kernel; the order of the fields is irrelevant.
+
+This can be further simplified (if one has access to the BTF information at
+build time) by generating the ``vmlinux.h`` with:
+
+.. code-block:: console
+
+        # bpftool btf dump file <path-to-btf-vmlinux> format c > vmlinux.h
+
+.. note:: ``path-to-btf-vmlinux`` can be ``/sys/kernel/btf/vmlinux`` if the
+	  build environment matches the environment the BPF programs are
+	  deployed in.
+
+The ``vmlinux.h`` can then simply be included in the BPF programs without
+requiring the definition of the types.
+
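+For example, assuming the generated header is available as ``vmlinux.h`` on
+the include path, the manual declarations shown above reduce to a single
+include:
+
+.. code-block:: c
+
+	/* Kernel types (mm_struct, vm_area_struct, ...) come from the
+	 * generated header and already carry preserve_access_index.
+	 */
+	#include "vmlinux.h"
+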
+The eBPF programs can be declared using the ``BPF_PROG``
+macro defined in `tools/testing/selftests/bpf/bpf_trace_helpers.h`_. In this
+example:
+
+	* ``"lsm/file_mprotect"`` indicates the LSM hook that the program must
+	  be attached to
+	* ``mprotect_audit`` is the name of the eBPF program
+
+.. code-block:: c
+
+        SEC("lsm/file_mprotect")
+        int BPF_PROG(mprotect_audit, struct vm_area_struct *vma,
+                     unsigned long reqprot, unsigned long prot, int ret)
+	{
+                /* Ret is the return value from the previous BPF program
+                 * or 0 if it's the first hook.
+                 */
+                if (ret != 0)
+                        return ret;
+
+		int is_heap;
+
+		is_heap = (vma->vm_start >= vma->vm_mm->start_brk &&
+			   vma->vm_end <= vma->vm_mm->brk);
+
+		/* Return an -EPERM or write information to the perf events buffer
+		 * for auditing
+		 */
+	}
+
+``__attribute__((preserve_access_index))`` is a clang feature that records
+relocations for these field accesses, so that the offsets can be fixed up at
+load time using the :doc:`/bpf/btf` information. Since the BPF verifier is
+aware of the types, it also validates all the accesses made to the various
+types in the eBPF program.
+
+Loading
+-------
+
+eBPF programs can be loaded with the :manpage:`bpf(2)` syscall's
+``BPF_PROG_LOAD`` operation, or more simply by using the libbpf helper
+``bpf_prog_load_xattr``:
+
+
+.. code-block:: c
+
+	struct bpf_prog_load_attr attr = {
+		.file = "./prog.o",
+	};
+	struct bpf_object *prog_obj;
+	struct bpf_program *prog;
+	int prog_fd;
+
+	bpf_prog_load_xattr(&attr, &prog_obj, &prog_fd);
+
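+The program handle needed for attachment (see the next section) can then be
+looked up from the loaded object, for example by its section name (a sketch;
+error handling omitted):
+
+.. code-block:: c
+
+	struct bpf_program *prog;
+
+	prog = bpf_object__find_program_by_title(prog_obj,
+						 "lsm/file_mprotect");
+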
+Attachment to LSM Hooks
+-----------------------
+
+The LSM allows attachment of eBPF programs as LSM hooks using the
+:manpage:`bpf(2)` syscall's ``BPF_PROG_ATTACH`` operation, or more simply by
+using the libbpf helper ``bpf_program__attach_lsm``. In the code shown below,
+``prog`` is the eBPF program loaded using ``BPF_PROG_LOAD``:
+
+.. code-block:: c
+
+	struct bpf_link *link;
+
+	link = bpf_program__attach_lsm(prog);
+
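+Since attachment can fail (for example, for lack of the required
+capabilities), the returned pointer would typically be checked before use
+with libbpf's error helper:
+
+.. code-block:: c
+
+	if (libbpf_get_error(link)) {
+		/* attachment failed; handle the error */
+	}
+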
+The program can be detached from the LSM hook by *destroying* the ``link``
+returned by ``bpf_program__attach_lsm``:
+
+.. code-block:: c
+
+	bpf_link__destroy(link);
+
+Examples
+--------
+
+Example eBPF programs can be found in
+`tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c`_ and
+`tools/testing/selftests/bpf/progs/lsm_mprotect_mac.c`_, with the corresponding
+userspace code in `tools/testing/selftests/bpf/prog_tests/lsm_mprotect.c`_.
+
+.. Links
+.. _tools/testing/selftests/bpf/bpf_trace_helpers.h:
   https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/tools/testing/selftests/bpf/bpf_trace_helpers.h
+.. _tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c:
+   https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c
+.. _tools/testing/selftests/bpf/progs/lsm_mprotect_mac.c:
+   https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/tools/testing/selftests/bpf/progs/lsm_mprotect_mac.c
+.. _tools/testing/selftests/bpf/prog_tests/lsm_mprotect.c:
+   https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/tools/testing/selftests/bpf/prog_tests/lsm_mprotect.c
diff --git a/Documentation/bpf/index.rst b/Documentation/bpf/index.rst
index 4f5410b61441..2c3d3c0cb7bb 100644
--- a/Documentation/bpf/index.rst
+++ b/Documentation/bpf/index.rst
@@ -45,6 +45,7 @@ Program types
    prog_cgroup_sockopt
    prog_cgroup_sysctl
    prog_flow_dissector
+   bpf_lsm
 
 
 Testing BPF
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-20 17:52 ` [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs KP Singh
@ 2020-02-20 23:49   ` Casey Schaufler
  2020-02-21 11:44     ` KP Singh
  2020-02-22  4:22     ` Kees Cook
  2020-02-21  2:25   ` Alexei Starovoitov
  1 sibling, 2 replies; 45+ messages in thread
From: Casey Schaufler @ 2020-02-20 23:49 UTC (permalink / raw)
  To: KP Singh, LKML, Linux Security Module list

On 2/20/2020 9:52 AM, KP Singh wrote:
> From: KP Singh <kpsingh@google.com>

Sorry about the heavy list pruning - the original set
blows thunderbird up.

>
> The BPF LSM programs are implemented as fexit trampolines to avoid the
> overhead of retpolines. These programs cannot be attached to security_*
> wrappers as there are quite a few security_* functions that do more than
> just calling the LSM callbacks.
>
> This was discussed on the lists in:
>
>   https://lore.kernel.org/bpf/20200123152440.28956-1-kpsingh@chromium.org/T/#m068becce588a0cdf01913f368a97aea4c62d8266
>
> Adding a NOP callback after all the static LSM callbacks are called has
> the following benefits:
>
> - The BPF programs run at the right stage of the security_* wrappers.
> - They run after all the static LSM hooks allowed the operation,
>   therefore cannot allow an action that was already denied.

I still say that the special call-out to BPF is unnecessary.
I remain unconvinced by the arguments. You aren't doing anything
so special that the general mechanism won't work.

>
> There are some hooks which do not call call_int_hook or
> call_void_hook. It's not possible to call the bpf_lsm_* functions
> without checking if there is a BPF LSM program attached to these hooks.
> This is added in a subsequent patch. For now, these hooks are
> marked as NO_BPF (i.e. attachment of BPF programs is not possible).
>
> Signed-off-by: KP Singh <kpsingh@google.com>
> ---
>  include/linux/bpf_lsm.h | 34 ++++++++++++++++++++++++++++++++++
>  kernel/bpf/bpf_lsm.c    | 16 ++++++++++++++++
>  security/security.c     |  3 +++
>  3 files changed, 53 insertions(+)
>  create mode 100644 include/linux/bpf_lsm.h
>
> diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
> new file mode 100644
> index 000000000000..f867f72f6aa9
> --- /dev/null
> +++ b/include/linux/bpf_lsm.h
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +/*
> + * Copyright 2019 Google LLC.
> + */
> +
> +#ifndef _LINUX_BPF_LSM_H
> +#define _LINUX_BPF_LSM_H
> +
> +#include <linux/bpf.h>
> +
> +#ifdef CONFIG_BPF_LSM
> +
> +#define LSM_HOOK(RET, NAME, ...) RET bpf_lsm_##NAME(__VA_ARGS__);
> +#include <linux/lsm_hook_names.h>
> +#undef LSM_HOOK
> +
> +#define RUN_BPF_LSM_VOID_PROGS(FUNC, ...) bpf_lsm_##FUNC(__VA_ARGS__)
> +#define RUN_BPF_LSM_INT_PROGS(RC, FUNC, ...) ({				\
> +	do {								\
> +		if (RC == 0)						\
> +			RC = bpf_lsm_##FUNC(__VA_ARGS__);		\
> +	} while (0);							\
> +	RC;								\
> +})
> +
> +#else /* !CONFIG_BPF_LSM */
> +
> +#define RUN_BPF_LSM_INT_PROGS(RC, FUNC, ...) (RC)
> +#define RUN_BPF_LSM_VOID_PROGS(FUNC, ...)
> +
> +#endif /* CONFIG_BPF_LSM */
> +
> +#endif /* _LINUX_BPF_LSM_H */
> diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
> index affb6941622e..abc847c9b9a1 100644
> --- a/kernel/bpf/bpf_lsm.c
> +++ b/kernel/bpf/bpf_lsm.c
> @@ -7,6 +7,22 @@
>  #include <linux/filter.h>
>  #include <linux/bpf.h>
>  #include <linux/btf.h>
> +#include <linux/bpf_lsm.h>
> +
> +/* For every LSM hook  that allows attachment of BPF programs, declare a NOP
> + * function where a BPF program can be attached as an fexit trampoline.
> + */
> +#define LSM_HOOK(RET, NAME, ...) LSM_HOOK_##RET(NAME, __VA_ARGS__)
> +#define LSM_HOOK_int(NAME, ...) noinline int bpf_lsm_##NAME(__VA_ARGS__)  \
> +{									  \
> +	return 0;							  \
> +}
> +
> +#define LSM_HOOK_void(NAME, ...) \
> +	noinline void bpf_lsm_##NAME(__VA_ARGS__) {}
> +
> +#include <linux/lsm_hook_names.h>
> +#undef LSM_HOOK
>  
>  const struct bpf_prog_ops lsm_prog_ops = {
>  };
> diff --git a/security/security.c b/security/security.c
> index 565bc9b67276..aa111392a700 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -28,6 +28,7 @@
>  #include <linux/string.h>
>  #include <linux/msg.h>
>  #include <net/flow.h>
> +#include <linux/bpf_lsm.h>
>  
>  #define MAX_LSM_EVM_XATTR	2
>  
> @@ -684,6 +685,7 @@ static void __init lsm_early_task(struct task_struct *task)
>  								\
>  		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
>  			P->hook.FUNC(__VA_ARGS__);		\
> +		RUN_BPF_LSM_VOID_PROGS(FUNC, __VA_ARGS__);	\

>  	} while (0)
>  
>  #define call_int_hook(FUNC, IRC, ...) ({			\
> @@ -696,6 +698,7 @@ static void __init lsm_early_task(struct task_struct *task)
>  			if (RC != 0)				\
>  				break;				\
>  		}						\
> +		RC = RUN_BPF_LSM_INT_PROGS(RC, FUNC, __VA_ARGS__); \
>  	} while (0);						\
>  	RC;							\
>  })


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 5/8] bpf: lsm: Implement attach, detach and execution
  2020-02-20 17:52 ` [PATCH bpf-next v4 5/8] bpf: lsm: Implement attach, detach and execution KP Singh
@ 2020-02-21  2:17   ` Alexei Starovoitov
  2020-02-21 12:02     ` KP Singh
  0 siblings, 1 reply; 45+ messages in thread
From: Alexei Starovoitov @ 2020-02-21  2:17 UTC (permalink / raw)
  To: KP Singh
  Cc: linux-kernel, bpf, linux-security-module, Daniel Borkmann,
	James Morris, Kees Cook, Jann Horn, David S. Miller,
	Greg Kroah-Hartman

On Thu, Feb 20, 2020 at 06:52:47PM +0100, KP Singh wrote:
> +
> +	/* This is the first program to be attached to the LSM hook, the hook
> +	 * needs to be enabled.
> +	 */
> +	if (prog->type == BPF_PROG_TYPE_LSM && tr->progs_cnt[kind] == 1)
> +		err = bpf_lsm_set_enabled(prog->aux->attach_func_name, true);
>  out:
>  	mutex_unlock(&tr->mutex);
>  	return err;
> @@ -336,7 +348,11 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog)
>  	}
>  	hlist_del(&prog->aux->tramp_hlist);
>  	tr->progs_cnt[kind]--;
> -	err = bpf_trampoline_update(prog->aux->trampoline);
> +	err = bpf_trampoline_update(prog);
> +
> +	/* There are no more LSM programs, the hook should be disabled */
> +	if (prog->type == BPF_PROG_TYPE_LSM && tr->progs_cnt[kind] == 0)
> +		err = bpf_lsm_set_enabled(prog->aux->attach_func_name, false);

Overall looks good, but I don't think above logic works.
Consider lsm being attached, then fexit, then lsm detached, then fexit detached.
Both are kind==fexit and static_key stays enabled.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-20 17:52 ` [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs KP Singh
  2020-02-20 23:49   ` Casey Schaufler
@ 2020-02-21  2:25   ` Alexei Starovoitov
  2020-02-21 11:47     ` KP Singh
  1 sibling, 1 reply; 45+ messages in thread
From: Alexei Starovoitov @ 2020-02-21  2:25 UTC (permalink / raw)
  To: KP Singh
  Cc: linux-kernel, bpf, linux-security-module, Daniel Borkmann,
	James Morris, Kees Cook, Jann Horn, David S. Miller,
	Greg Kroah-Hartman

On Thu, Feb 20, 2020 at 06:52:45PM +0100, KP Singh wrote:
> From: KP Singh <kpsingh@google.com>
> 
> The BPF LSM programs are implemented as fexit trampolines to avoid the
> overhead of retpolines. These programs cannot be attached to security_*
> wrappers as there are quite a few security_* functions that do more than
> just calling the LSM callbacks.
> 
> This was discussed on the lists in:
> 
>   https://lore.kernel.org/bpf/20200123152440.28956-1-kpsingh@chromium.org/T/#m068becce588a0cdf01913f368a97aea4c62d8266
> 
> Adding a NOP callback after all the static LSM callbacks are called has
> the following benefits:
> 
> - The BPF programs run at the right stage of the security_* wrappers.
> - They run after all the static LSM hooks allowed the operation,
>   therefore cannot allow an action that was already denied.
> 
> There are some hooks which do not call call_int_hook or
> call_void_hook. It's not possible to call the bpf_lsm_* functions
> without checking if there is a BPF LSM program attached to these hooks.
> This is added in a subsequent patch. For now, these hooks are
> marked as NO_BPF (i.e. attachment of BPF programs is not possible).

the commit log doesn't match the code.

> +
> +/* For every LSM hook  that allows attachment of BPF programs, declare a NOP
> + * function where a BPF program can be attached as an fexit trampoline.
> + */
> +#define LSM_HOOK(RET, NAME, ...) LSM_HOOK_##RET(NAME, __VA_ARGS__)
> +#define LSM_HOOK_int(NAME, ...) noinline int bpf_lsm_##NAME(__VA_ARGS__)  \

Did you check generated asm?
I think I saw cases when gcc ignored 'noinline' when function is defined in the
same file and still performed inlining while keeping the function body.
To be safe I think __weak is necessary. That will guarantee noinline.

And please reduce your cc next time. It's way too long.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-20 23:49   ` Casey Schaufler
@ 2020-02-21 11:44     ` KP Singh
  2020-02-21 18:23       ` Casey Schaufler
  2020-02-22  4:22     ` Kees Cook
  1 sibling, 1 reply; 45+ messages in thread
From: KP Singh @ 2020-02-21 11:44 UTC (permalink / raw)
  To: Casey Schaufler; +Cc: KP Singh, LKML, Linux Security Module list, bpf

On 20-Feb 15:49, Casey Schaufler wrote:
> On 2/20/2020 9:52 AM, KP Singh wrote:
> > From: KP Singh <kpsingh@google.com>
> 
> Sorry about the heavy list pruning - the original set
> blows thunderbird up.
> 
> >
> > The BPF LSM programs are implemented as fexit trampolines to avoid the
> > overhead of retpolines. These programs cannot be attached to security_*
> > wrappers as there are quite a few security_* functions that do more than
> > just calling the LSM callbacks.
> >
> > This was discussed on the lists in:
> >
> >   https://lore.kernel.org/bpf/20200123152440.28956-1-kpsingh@chromium.org/T/#m068becce588a0cdf01913f368a97aea4c62d8266
> >
> > Adding a NOP callback after all the static LSM callbacks are called has
> > the following benefits:
> >
> > - The BPF programs run at the right stage of the security_* wrappers.
> > - They run after all the static LSM hooks allowed the operation,
> >   therefore cannot allow an action that was already denied.
> 
> I still say that the special call-out to BPF is unnecessary.
> I remain unconvinced by the arguments. You aren't doing anything
> so special that the general mechanism won't work.

The existing mechanism would work functionally, but the cost of an
indirect call for all the hooks, even those that are completely unused,
is not really acceptable for KRSI’s use cases. It’s easy to avoid, and
I do think that what we’re doing here (with hooks being defined at
runtime) has significant functional differences from existing LSMs.

- KP

> 
> >
> > There are some hooks which do not call call_int_hook or
> > call_void_hook. It's not possible to call the bpf_lsm_* functions
> > without checking if there is a BPF LSM program attached to these hooks.
> > This is added in a subsequent patch. For now, these hooks are
> > marked as NO_BPF (i.e. attachment of BPF programs is not possible).
> >

[...]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-21  2:25   ` Alexei Starovoitov
@ 2020-02-21 11:47     ` KP Singh
  0 siblings, 0 replies; 45+ messages in thread
From: KP Singh @ 2020-02-21 11:47 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: KP Singh, linux-kernel, bpf, linux-security-module,
	Daniel Borkmann, James Morris, Kees Cook, Jann Horn,
	David S. Miller, Greg Kroah-Hartman

On 20-Feb 18:25, Alexei Starovoitov wrote:
> On Thu, Feb 20, 2020 at 06:52:45PM +0100, KP Singh wrote:
> > From: KP Singh <kpsingh@google.com>
> > 
> > The BPF LSM programs are implemented as fexit trampolines to avoid the
> > overhead of retpolines. These programs cannot be attached to security_*
> > wrappers as there are quite a few security_* functions that do more than
> > just calling the LSM callbacks.
> > 
> > This was discussed on the lists in:
> > 
> >   https://lore.kernel.org/bpf/20200123152440.28956-1-kpsingh@chromium.org/T/#m068becce588a0cdf01913f368a97aea4c62d8266
> > 
> > Adding a NOP callback after all the static LSM callbacks are called has
> > the following benefits:
> > 
> > - The BPF programs run at the right stage of the security_* wrappers.
> > - They run after all the static LSM hooks allowed the operation,
> >   therefore cannot allow an action that was already denied.
> > 
> > There are some hooks which do not call call_int_hook or
> > call_void_hook. It's not possible to call the bpf_lsm_* functions
> > without checking if there is a BPF LSM program attached to these hooks.
> > This is added in a subsequent patch. For now, these hooks are
> > marked as NO_BPF (i.e. attachment of BPF programs is not possible).
> 
> the commit log doesn't match the code.

Fixed. Thanks!

> 
> > +
> > +/* For every LSM hook  that allows attachment of BPF programs, declare a NOP
> > + * function where a BPF program can be attached as an fexit trampoline.
> > + */
> > +#define LSM_HOOK(RET, NAME, ...) LSM_HOOK_##RET(NAME, __VA_ARGS__)
> > +#define LSM_HOOK_int(NAME, ...) noinline int bpf_lsm_##NAME(__VA_ARGS__)  \
> 
> Did you check generated asm?
> I think I saw cases when gcc ignored 'noinline' when function is defined in the
> same file and still performed inlining while keeping the function body.
> To be safe I think __weak is necessary. That will guarantee noinline.

Sure, will change it to __weak.

> 
> And please reduce your cc next time. It's way too long.

Will do.

- KP

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 5/8] bpf: lsm: Implement attach, detach and execution
  2020-02-21  2:17   ` Alexei Starovoitov
@ 2020-02-21 12:02     ` KP Singh
  0 siblings, 0 replies; 45+ messages in thread
From: KP Singh @ 2020-02-21 12:02 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: KP Singh, open list, bpf, Linux Security Module list,
	Daniel Borkmann, James Morris, Kees Cook, Jann Horn,
	David S. Miller, Greg Kroah-Hartman

On 20-Feb 18:17, Alexei Starovoitov wrote:
> On Thu, Feb 20, 2020 at 06:52:47PM +0100, KP Singh wrote:
> > +
> > +   /* This is the first program to be attached to the LSM hook, the hook
> > +    * needs to be enabled.
> > +    */
> > +   if (prog->type == BPF_PROG_TYPE_LSM && tr->progs_cnt[kind] == 1)
> > +           err = bpf_lsm_set_enabled(prog->aux->attach_func_name, true);
> >  out:
> >     mutex_unlock(&tr->mutex);
> >     return err;
> > @@ -336,7 +348,11 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog)
> >     }
> >     hlist_del(&prog->aux->tramp_hlist);
> >     tr->progs_cnt[kind]--;
> > -   err = bpf_trampoline_update(prog->aux->trampoline);
> > +   err = bpf_trampoline_update(prog);
> > +
> > +   /* There are no more LSM programs, the hook should be disabled */
> > +   if (prog->type == BPF_PROG_TYPE_LSM && tr->progs_cnt[kind] == 0)
> > +           err = bpf_lsm_set_enabled(prog->aux->attach_func_name, false);
>
> Overall looks good, but I don't think above logic works.
> Consider lsm being attached, then fexit, then lsm detached, then fexit detached.
> Both are kind==fexit and static_key stays enabled.

You're right. I was wary of introducing a new kind (something like
BPF_TRAMP_LSM) since they are just fexit trampolines. For now, I
added nr_lsm_progs as a member in struct bpf_trampoline and refactored
the increment and decrement logic into inline helper functions, e.g.:

static inline void bpf_trampoline_dec_progs(struct bpf_prog *prog,
                                            enum bpf_tramp_prog_type kind)
{
        struct bpf_trampoline *tr = prog->aux->trampoline;

        if (prog->type == BPF_PROG_TYPE_LSM)
                tr->nr_lsm_progs--;

        tr->progs_cnt[kind]--;
}
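
The increment side (a sketch mirroring the helper above; the exact name
may change in the next revision) would be:

static inline void bpf_trampoline_inc_progs(struct bpf_prog *prog,
                                            enum bpf_tramp_prog_type kind)
{
        struct bpf_trampoline *tr = prog->aux->trampoline;

        /* Track LSM programs separately so that the static key can be
         * toggled based on the LSM program count alone.
         */
        if (prog->type == BPF_PROG_TYPE_LSM)
                tr->nr_lsm_progs++;

        tr->progs_cnt[kind]++;
}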

and doing the check as:

  if (prog->type == BPF_PROG_TYPE_LSM && tr->nr_lsm_progs == 0)
        err = bpf_lsm_set_enabled(prog->aux->attach_func_name, false);

This should work. If you're okay with it, I will update it in the next
revision of the patch-set.

- KP

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-21 11:44     ` KP Singh
@ 2020-02-21 18:23       ` Casey Schaufler
  0 siblings, 0 replies; 45+ messages in thread
From: Casey Schaufler @ 2020-02-21 18:23 UTC (permalink / raw)
  To: KP Singh; +Cc: LKML, Linux Security Module list, bpf, Casey Schaufler

On 2/21/2020 3:44 AM, KP Singh wrote:
> On 20-Feb 15:49, Casey Schaufler wrote:
>> On 2/20/2020 9:52 AM, KP Singh wrote:
>>> From: KP Singh <kpsingh@google.com>
>> Sorry about the heavy list pruning - the original set
>> blows thunderbird up.
>>
>>> The BPF LSM programs are implemented as fexit trampolines to avoid the
>>> overhead of retpolines. These programs cannot be attached to security_*
>>> wrappers as there are quite a few security_* functions that do more than
>>> just calling the LSM callbacks.
>>>
>>> This was discussed on the lists in:
>>>
>>>   https://lore.kernel.org/bpf/20200123152440.28956-1-kpsingh@chromium.org/T/#m068becce588a0cdf01913f368a97aea4c62d8266
>>>
>>> Adding a NOP callback after all the static LSM callbacks are called has
>>> the following benefits:
>>>
>>> - The BPF programs run at the right stage of the security_* wrappers.
>>> - They run after all the static LSM hooks allowed the operation,
>>>   therefore cannot allow an action that was already denied.
>> I still say that the special call-out to BPF is unnecessary.
>> I remain unconvinced by the arguments. You aren't doing anything
>> so special that the general mechanism won't work.
> The existing mechanism would work functionally, but the cost of an
> indirect call for all the hooks, even those that are completely unused,
> is not really acceptable for KRSI’s use cases.

Are you at all familiar with the way LSMs were installed
before the current list infrastructure? Every interface had
a hook that got called, even if the installed module did not
provide one. That was deemed acceptable for a good long time.

Way back in the early days of the stacking effort I seriously
considered implementing a new security module that would do
the stacking, and leave the infrastructure alone. Very much
like what you're proposing for BPF modules. It would have worked,
but the list model works better.

>  It’s easy to avoid, and
> I do think that what we’re doing here (with hooks being defined at
> runtime) has significant functional differences from existing LSMs.

KRSI isn't all that different from the other modules.
The way you specify where system policy is restricted
and under which circumstances is different. You're trying
to be extremely general, beyond the Mandatory Access Control
claims of the existing modules. But really, there's nothing
all that special.

I know that you don't want to be making a lot of checks on
empty BPF program lists. You've come up with clever hacks
to avoid doing so. But the cleverer the hack, the more likely
it is to haunt someone else later. It probably won't cause
KRSI any grief, but you can bet someone will take it in the
chin.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 4/8] bpf: lsm: Add support for enabling/disabling BPF hooks
  2020-02-20 17:52 ` [PATCH bpf-next v4 4/8] bpf: lsm: Add support for enabling/disabling BPF hooks KP Singh
@ 2020-02-21 18:57   ` Casey Schaufler
  2020-02-21 19:11     ` James Morris
  2020-02-22  4:26   ` Kees Cook
  1 sibling, 1 reply; 45+ messages in thread
From: Casey Schaufler @ 2020-02-21 18:57 UTC (permalink / raw)
  To: KP Singh; +Cc: LKML, Linux Security Module list, bpf, Casey Schaufler

On 2/20/2020 9:52 AM, KP Singh wrote:
> From: KP Singh <kpsingh@google.com>

Again, sorry for trimming the CC list, but thunderbird ...

>
> Each LSM hook defines a static key i.e. bpf_lsm_<name>
> and a bpf_lsm_<name>_set_enabled function to toggle the key
> which enables/disables the branch which executes the BPF programs
> attached to the LSM hook.
>
> Use of static keys was suggested in upstream discussion:
>
>   https://lore.kernel.org/bpf/1cd10710-a81b-8f9b-696d-aa40b0a67225@iogearbox.net/
>
> and results in the following assembly:
>
>   0x0000000000001e31 <+65>:    jmpq   0x1e36 <security_bprm_check+70>
>   0x0000000000001e36 <+70>:    nopl   0x0(%rax,%rax,1)
>   0x0000000000001e3b <+75>:    xor    %eax,%eax
>   0x0000000000001e3d <+77>:    jmp    0x1e25 <security_bprm_check+53>
>
> which avoids an indirect branch and results in lower overhead which is
> especially helpful for LSM hooks in performance hotpaths.
>
> Given the ability to toggle the BPF trampolines, some hooks which do
> not call call_<int, void>_hooks as they have different default return
> values, also gain support for BPF program attachment.
>
> There are some hooks like security_setprocattr and security_getprocattr
> which are not instrumentable as they do not provide any monitoring or
> access control decisions. If required, generation of BTF type
> information for these hooks can also be blacklisted.
>
> Signed-off-by: KP Singh <kpsingh@google.com>
> ---
>  include/linux/bpf_lsm.h | 30 +++++++++++++++++++++++++++---
>  kernel/bpf/bpf_lsm.c    | 28 ++++++++++++++++++++++++++++
>  security/security.c     | 32 ++++++++++++++++++++++++++++++++
>  3 files changed, 87 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
> index f867f72f6aa9..53dcda8ace01 100644
> --- a/include/linux/bpf_lsm.h
> +++ b/include/linux/bpf_lsm.h
> @@ -8,27 +8,51 @@
>  #define _LINUX_BPF_LSM_H
>  
>  #include <linux/bpf.h>
> +#include <linux/jump_label.h>
>  
>  #ifdef CONFIG_BPF_LSM
>  
> +#define LSM_HOOK(RET, NAME, ...)		\
> +DECLARE_STATIC_KEY_FALSE(bpf_lsm_key_##NAME);   \
> +void bpf_lsm_##NAME##_set_enabled(bool value);
> +#include <linux/lsm_hook_names.h>
> +#undef LSM_HOOK

This is an amazing amount of macro magic. You're creating
dependencies that will make changes to the infrastructure
much more difficult. I think. It's really hard to tell.
At the very least you should have a description of what this
accomplishes, as it's far from obvious.

> +
>  #define LSM_HOOK(RET, NAME, ...) RET bpf_lsm_##NAME(__VA_ARGS__);
>  #include <linux/lsm_hook_names.h>
>  #undef LSM_HOOK
>  
> -#define RUN_BPF_LSM_VOID_PROGS(FUNC, ...) bpf_lsm_##FUNC(__VA_ARGS__)
> +#define HAS_BPF_LSM_PROG(FUNC) (static_branch_unlikely(&bpf_lsm_key_##FUNC))
> +
> +#define RUN_BPF_LSM_VOID_PROGS(FUNC, ...)				\
> +	do {								\
> +		if (HAS_BPF_LSM_PROG(FUNC))				\
> +			bpf_lsm_##FUNC(__VA_ARGS__);			\
> +	} while (0)
> +
>  #define RUN_BPF_LSM_INT_PROGS(RC, FUNC, ...) ({				\
>  	do {								\
> -		if (RC == 0)						\
> -			RC = bpf_lsm_##FUNC(__VA_ARGS__);		\
> +		if (HAS_BPF_LSM_PROG(FUNC)) {				\
> +			if (RC == 0)					\
> +				RC = bpf_lsm_##FUNC(__VA_ARGS__);	\
> +		}							\
>  	} while (0);							\
>  	RC;								\
>  })
>  
> +int bpf_lsm_set_enabled(const char *name, bool value);
> +
>  #else /* !CONFIG_BPF_LSM */
>  
> +#define HAS_BPF_LSM_PROG false
>  #define RUN_BPF_LSM_INT_PROGS(RC, FUNC, ...) (RC)
>  #define RUN_BPF_LSM_VOID_PROGS(FUNC, ...)
>  
> +static inline int bpf_lsm_set_enabled(const char *name, bool value)
> +{
> +	return -EOPNOTSUPP;
> +}
> +
>  #endif /* CONFIG_BPF_LSM */
>  
>  #endif /* _LINUX_BPF_LSM_H */
> diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
> index abc847c9b9a1..d7c44433c003 100644
> --- a/kernel/bpf/bpf_lsm.c
> +++ b/kernel/bpf/bpf_lsm.c
> @@ -8,6 +8,20 @@
>  #include <linux/bpf.h>
>  #include <linux/btf.h>
>  #include <linux/bpf_lsm.h>
> +#include <linux/jump_label.h>
> +#include <linux/kallsyms.h>
> +
> +#define LSM_HOOK(RET, NAME, ...)					\
> +	DEFINE_STATIC_KEY_FALSE(bpf_lsm_key_##NAME);			\
> +	void bpf_lsm_##NAME##_set_enabled(bool value)			\
> +	{								\
> +		if (value)						\
> +			static_branch_enable(&bpf_lsm_key_##NAME);	\
> +		else							\
> +			static_branch_disable(&bpf_lsm_key_##NAME);	\
> +	}
> +#include <linux/lsm_hook_names.h>
> +#undef LSM_HOOK
>  
>  /* For every LSM hook  that allows attachment of BPF programs, declare a NOP
>   * function where a BPF program can be attached as an fexit trampoline.
> @@ -24,6 +38,20 @@
>  #include <linux/lsm_hook_names.h>
>  #undef LSM_HOOK
>  
> +int bpf_lsm_set_enabled(const char *name, bool value)
> +{
> +	char toggle_fn_name[KSYM_NAME_LEN];
> +	void (*toggle_fn)(bool value);
> +
> +	snprintf(toggle_fn_name, KSYM_NAME_LEN, "%s_set_enabled", name);
> +	toggle_fn = (void *)kallsyms_lookup_name(toggle_fn_name);
> +	if (!toggle_fn)
> +		return -ESRCH;
> +
> +	toggle_fn(value);
> +	return 0;
> +}
> +
>  const struct bpf_prog_ops lsm_prog_ops = {
>  };
>  
> diff --git a/security/security.c b/security/security.c
> index aa111392a700..569cc07d5e34 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -804,6 +804,13 @@ int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
>  			break;
>  		}
>  	}
> +#ifdef CONFIG_BPF_LSM
> +	if (HAS_BPF_LSM_PROG(vm_enough_memory)) {
> +		rc = bpf_lsm_vm_enough_memory(mm, pages);
> +		if (rc <= 0)
> +			cap_sys_admin = 0;
> +	}
> +#endif
>  	return __vm_enough_memory(mm, pages, cap_sys_admin);
>  }
>  
> @@ -1350,6 +1357,13 @@ int security_inode_getsecurity(struct inode *inode, const char *name, void **buf
>  		if (rc != -EOPNOTSUPP)
>  			return rc;
>  	}
> +#ifdef CONFIG_BPF_LSM
> +	if (HAS_BPF_LSM_PROG(inode_getsecurity)) {
> +		rc = bpf_lsm_inode_getsecurity(inode, name, buffer, alloc);
> +		if (rc != -EOPNOTSUPP)
> +			return rc;
> +	}
> +#endif
>  	return -EOPNOTSUPP;
>  }
>  
> @@ -1369,6 +1383,14 @@ int security_inode_setsecurity(struct inode *inode, const char *name, const void
>  		if (rc != -EOPNOTSUPP)
>  			return rc;
>  	}
> +#ifdef CONFIG_BPF_LSM
> +	if (HAS_BPF_LSM_PROG(inode_setsecurity)) {
> +		rc = bpf_lsm_inode_setsecurity(inode, name, value, size,
> +					       flags);
> +		if (rc != -EOPNOTSUPP)
> +			return rc;
> +	}
> +#endif
>  	return -EOPNOTSUPP;
>  }
>  
> @@ -1754,6 +1776,12 @@ int security_task_prctl(int option, unsigned long arg2, unsigned long arg3,
>  				break;
>  		}
>  	}
> +#ifdef CONFIG_BPF_LSM
> +	if (HAS_BPF_LSM_PROG(task_prctl)) {
> +		if (rc == -ENOSYS)
> +			rc = bpf_lsm_task_prctl(option, arg2, arg3, arg4, arg5);
> +	}
> +#endif
>  	return rc;
>  }
>  
> @@ -2334,6 +2362,10 @@ int security_xfrm_state_pol_flow_match(struct xfrm_state *x,
>  		rc = hp->hook.xfrm_state_pol_flow_match(x, xp, fl);
>  		break;
>  	}
> +#ifdef CONFIG_BPF_LSM
> +	if (HAS_BPF_LSM_PROG(xfrm_state_pol_flow_match))
> +		rc = bpf_lsm_xfrm_state_pol_flow_match(x, xp, fl);
> +#endif
>  	return rc;
>  }
>  


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 4/8] bpf: lsm: Add support for enabling/disabling BPF hooks
  2020-02-21 18:57   ` Casey Schaufler
@ 2020-02-21 19:11     ` James Morris
  0 siblings, 0 replies; 45+ messages in thread
From: James Morris @ 2020-02-21 19:11 UTC (permalink / raw)
  To: Casey Schaufler; +Cc: KP Singh, LKML, Linux Security Module list, bpf

On Fri, 21 Feb 2020, Casey Schaufler wrote:

> On 2/20/2020 9:52 AM, KP Singh wrote:
> > From: KP Singh <kpsingh@google.com>
> 
> Again, sorry for trimming the CC list, but thunderbird ...

Fix your mail client, please.

-- 
James Morris
<jmorris@namei.org>


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI)
  2020-02-20 17:52 [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (7 preceding siblings ...)
  2020-02-20 17:52 ` [PATCH bpf-next v4 8/8] bpf: lsm: Add Documentation KP Singh
@ 2020-02-21 19:19 ` Casey Schaufler
  2020-02-21 19:41   ` KP Singh
  2020-02-27 18:40 ` Dr. Greg
  9 siblings, 1 reply; 45+ messages in thread
From: Casey Schaufler @ 2020-02-21 19:19 UTC (permalink / raw)
  To: KP Singh
  Cc: Linux Security Module list, LKML, bpf, James Morris, Kees Cook,
	Casey Schaufler

On 2/20/2020 9:52 AM, KP Singh wrote:
> From: KP Singh <kpsingh@google.com>

Again, apologies for the CC list trimming.

>
> # v3 -> v4
>
>   https://lkml.org/lkml/2020/1/23/515
>
> * Moved away from allocating a separate security_hook_heads and adding a
>   new special case for arch_prepare_bpf_trampoline to using BPF fexit
>   trampolines called from the right place in the LSM hook and toggled by
>   static keys based on the discussion in:
>
>     https://lore.kernel.org/bpf/CAG48ez25mW+_oCxgCtbiGMX07g_ph79UOJa07h=o_6B6+Q-u5g@mail.gmail.com/
>
> * Since the code does not deal with security_hook_heads anymore, it goes
>   from "being a BPF LSM" to "BPF program attachment to LSM hooks".

I've finally been able to review the entire patch set.
I can't imagine how it can make sense to add this much
complexity to the LSM infrastructure in support of this
feature. There is macro magic going on that is going to
break, and soon. You are introducing dependencies on BPF
into the infrastructure, and that's unnecessary and most
likely harmful.

Would you please drop the excessive optimization? I understand
that there's been a lot of discussion and debate about it,
but this implementation is out of control, disruptive, and
dangerous to the code around it.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI)
  2020-02-21 19:19 ` [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) Casey Schaufler
@ 2020-02-21 19:41   ` KP Singh
  2020-02-21 22:31     ` Casey Schaufler
  0 siblings, 1 reply; 45+ messages in thread
From: KP Singh @ 2020-02-21 19:41 UTC (permalink / raw)
  To: Casey Schaufler
  Cc: Linux Security Module list, LKML, bpf, James Morris, Kees Cook

On 21-Feb 11:19, Casey Schaufler wrote:
> On 2/20/2020 9:52 AM, KP Singh wrote:
> > From: KP Singh <kpsingh@google.com>
> 
> Again, apologies for the CC list trimming.
> 
> >
> > # v3 -> v4
> >
> >   https://lkml.org/lkml/2020/1/23/515
> >
> > * Moved away from allocating a separate security_hook_heads and adding a
> >   new special case for arch_prepare_bpf_trampoline to using BPF fexit
> >   trampolines called from the right place in the LSM hook and toggled by
> >   static keys based on the discussion in:
> >
> >     https://lore.kernel.org/bpf/CAG48ez25mW+_oCxgCtbiGMX07g_ph79UOJa07h=o_6B6+Q-u5g@mail.gmail.com/
> >
> > * Since the code does not deal with security_hook_heads anymore, it goes
> >   from "being a BPF LSM" to "BPF program attachment to LSM hooks".
> 
> I've finally been able to review the entire patch set.
> I can't imagine how it can make sense to add this much
> complexity to the LSM infrastructure in support of this
> feature. There is macro magic going on that is going to
> break, and soon. You are introducing dependencies on BPF
> into the infrastructure, and that's unnecessary and most
> likely harmful.

We will be happy to document each of the macros in detail. Do note a
few things here:

* There is really nothing magical about them though, the LSM hooks are
  collectively declared in lsm_hook_names.h and are used to declare
  the security_list_options and security_hook_heads for the LSM
  framework (this was previously maintained in two different places):

  For BPF, they declare:

    * bpf_lsm_<name> attachment points and their prototypes.
    * A static key (bpf_lsm_key_<name>) to enable and disable these
       hooks with a function to set its value i.e.
       (bpf_lsm_<name>_set_enabled).

* We have kept the BPF related macros out of security/.
* All the BPF calls in the LSM infrastructure are guarded by
  CONFIG_BPF_LSM (there are only two main calls though, i.e.
  call_int_hook, call_void_hook).

Honestly, the macros aren't any more complicated than
call_int_progs/call_void_progs.

- KP

> 
> Would you please drop the excessive optimization? I understand
> that there's been a lot of discussion and debate about it,
> but this implementation is out of control, disruptive, and
> dangerous to the code around it.
> 
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI)
  2020-02-21 19:41   ` KP Singh
@ 2020-02-21 22:31     ` Casey Schaufler
  2020-02-21 23:09       ` KP Singh
  2020-02-22  0:22       ` Kees Cook
  0 siblings, 2 replies; 45+ messages in thread
From: Casey Schaufler @ 2020-02-21 22:31 UTC (permalink / raw)
  To: KP Singh
  Cc: Linux Security Module list, LKML, bpf, James Morris, Kees Cook,
	Casey Schaufler

On 2/21/2020 11:41 AM, KP Singh wrote:
> On 21-Feb 11:19, Casey Schaufler wrote:
>> On 2/20/2020 9:52 AM, KP Singh wrote:
>>> From: KP Singh <kpsingh@google.com>
>> Again, apologies for the CC list trimming.
>>
>>> # v3 -> v4
>>>
>>>   https://lkml.org/lkml/2020/1/23/515
>>>
>>> * Moved away from allocating a separate security_hook_heads and adding a
>>>   new special case for arch_prepare_bpf_trampoline to using BPF fexit
>>>   trampolines called from the right place in the LSM hook and toggled by
>>>   static keys based on the discussion in:
>>>
>>>     https://lore.kernel.org/bpf/CAG48ez25mW+_oCxgCtbiGMX07g_ph79UOJa07h=o_6B6+Q-u5g@mail.gmail.com/
>>>
>>> * Since the code does not deal with security_hook_heads anymore, it goes
>>>   from "being a BPF LSM" to "BPF program attachment to LSM hooks".
>> I've finally been able to review the entire patch set.
>> I can't imagine how it can make sense to add this much
>> complexity to the LSM infrastructure in support of this
>> feature. There is macro magic going on that is going to
>> break, and soon. You are introducing dependencies on BPF
>> into the infrastructure, and that's unnecessary and most
>> likely harmful.
> We will be happy to document each of the macros in detail. Do note a
> few things here:
>
> * There is really nothing magical about them though,


+#define LSM_HOOK_void(NAME, ...) \
+	noinline void bpf_lsm_##NAME(__VA_ARGS__) {}
+
+#include <linux/lsm_hook_names.h>
+#undef LSM_HOOK

I haven't seen anything this ... novel ... in a very long time.
I see why you want to do this, but you're tying the two sets
of code together unnaturally. When (not if) the two sets diverge
you're going to be introducing another clever way to deal with
the special case.

It's not that I don't understand what you're doing. It's that
I don't like what you're doing. Explanation doesn't make me like
it better.

>  the LSM hooks are
>   collectively declared in lsm_hook_names.h and are used to declare
>   the security_list_options and security_hook_heads for the LSM
>   framework (this was previously maintained in two different places):
>
>   For BPF, they declare:
>
>     * bpf_lsm_<name> attachment points and their prototypes.
>     * A static key (bpf_lsm_key_<name>) to enable and disable these
>        hooks with a function to set its value i.e.
>        (bpf_lsm_<name>_set_enabled).
>
> * We have kept the BPF related macros out of security/.
> * All the BPF calls in the LSM infrastructure are guarded by
>   CONFIG_BPF_LSM (there are only two main calls though, i.e.
>   call_int_hook, call_void_hook).
>
> Honestly, the macros aren't any more complicated than
> call_int_progs/call_void_progs.
>
> - KP
>
>> Would you please drop the excessive optimization? I understand
>> that there's been a lot of discussion and debate about it,
>> but this implementation is out of control, disruptive, and
>> dangerous to the code around it.
>>
>>


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI)
  2020-02-21 22:31     ` Casey Schaufler
@ 2020-02-21 23:09       ` KP Singh
  2020-02-21 23:49         ` Casey Schaufler
  2020-02-22  0:22       ` Kees Cook
  1 sibling, 1 reply; 45+ messages in thread
From: KP Singh @ 2020-02-21 23:09 UTC (permalink / raw)
  To: Casey Schaufler
  Cc: Linux Security Module list, LKML, bpf, James Morris, Kees Cook,
	Alexei Starovoitov, Daniel Borkmann

Thanks Casey,

I appreciate your quick responses!

On 21-Feb 14:31, Casey Schaufler wrote:
> On 2/21/2020 11:41 AM, KP Singh wrote:
> > On 21-Feb 11:19, Casey Schaufler wrote:
> >> On 2/20/2020 9:52 AM, KP Singh wrote:
> >>> From: KP Singh <kpsingh@google.com>
> >> Again, apologies for the CC list trimming.
> >>
> >>> # v3 -> v4
> >>>
> >>>   https://lkml.org/lkml/2020/1/23/515
> >>>
> >>> * Moved away from allocating a separate security_hook_heads and adding a
> >>>   new special case for arch_prepare_bpf_trampoline to using BPF fexit
> >>>   trampolines called from the right place in the LSM hook and toggled by
> >>>   static keys based on the discussion in:
> >>>
> >>>     https://lore.kernel.org/bpf/CAG48ez25mW+_oCxgCtbiGMX07g_ph79UOJa07h=o_6B6+Q-u5g@mail.gmail.com/
> >>>
> >>> * Since the code does not deal with security_hook_heads anymore, it goes
> >>>   from "being a BPF LSM" to "BPF program attachment to LSM hooks".

[...]

> >> likely harmful.
> > We will be happy to document each of the macros in detail. Do note a
> > few things here:
> >
> > * There is really nothing magical about them though,
> 
> 
> +#define LSM_HOOK_void(NAME, ...) \
> +	noinline void bpf_lsm_##NAME(__VA_ARGS__) {}
> +
> +#include <linux/lsm_hook_names.h>
> +#undef LSM_HOOK
> 
> I haven't seen anything this ... novel ... in a very long time.

This is not "novel", it's a fairly common pattern followed in tracing:

For example, the TRACE_INCLUDE macro which is used for tracepoints:

  include/trace/define_trace.h

and used in:

  * include/trace/bpf_probe.h

    https://github.com/torvalds/linux/blob/master/include/trace/bpf_probe.h#L110

  * include/trace/perf.h

    https://github.com/torvalds/linux/blob/master/include/trace/perf.h#L90

  * include/trace/trace_events.h

    https://github.com/torvalds/linux/blob/master/include/trace/trace_events.h#L402

> I see why you want to do this, but you're tying the two sets
> of code together unnaturally. When (not if) the two sets diverge
> you're going to be introducing another clever way to deal with

I don't fully understand what "two sets diverge" means here. All the
BPF headers need is the name, return type and the args. This is the
same information which is needed by the call_{int, void}_hooks and the
LSM declarations (i.e. security_hook_heads and
security_list_options).

> the special case.
> 
> It's not that I don't understand what you're doing. It's that
> I don't like what you're doing. Explanation doesn't make me like
> it better.

As I have previously said, we will be happy to (and have already)
updated our approach based on the consensus we arrive at here. The
best outcome would be to not sacrifice performance as the LSM hooks
are called from various performance critical code-paths.

It would be great to know the maintainers' (BPF and Security)
perspective on this as well.

- KP

> 
> >  the LSM hooks are
> >   collectively declared in lsm_hook_names.h and are used to declare
> >   the security_list_options and security_hook_heads for the LSM
> >   framework (this was previously maintained in two different places):
> >
> >   For BPF, they declare:
> >
> >     * bpf_lsm_<name> attachment points and their prototypes.
> >     * A static key (bpf_lsm_key_<name>) to enable and disable these
> >        hooks with a function to set its value i.e.
> >        (bpf_lsm_<name>_set_enabled).
> >
> > * We have kept the BPF related macros out of security/.
> > * All the BPF calls in the LSM infrastructure are guarded by
> >   CONFIG_BPF_LSM (there are only two main calls though, i.e.
> >   call_int_hook, call_void_hook).
> >
> > Honestly, the macros aren't any more complicated than
> > call_int_progs/call_void_progs.
> >
> > - KP
> >
> >> Would you please drop the excessive optimization? I understand
> >> that there's been a lot of discussion and debate about it,
> >> but this implementation is out of control, disruptive, and
> >> dangerous to the code around it.
> >>
> >>
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI)
  2020-02-21 23:09       ` KP Singh
@ 2020-02-21 23:49         ` Casey Schaufler
  0 siblings, 0 replies; 45+ messages in thread
From: Casey Schaufler @ 2020-02-21 23:49 UTC (permalink / raw)
  To: KP Singh
  Cc: Linux Security Module list, LKML, bpf, James Morris, Kees Cook,
	Alexei Starovoitov, Daniel Borkmann, Casey Schaufler

On 2/21/2020 3:09 PM, KP Singh wrote:
> Thanks Casey,
>
> I appreciate your quick responses!
>
> On 21-Feb 14:31, Casey Schaufler wrote:
>> On 2/21/2020 11:41 AM, KP Singh wrote:
>>> On 21-Feb 11:19, Casey Schaufler wrote:
>>>> On 2/20/2020 9:52 AM, KP Singh wrote:
>>>>> From: KP Singh <kpsingh@google.com>
>>>> Again, apologies for the CC list trimming.
>>>>
>>>>> # v3 -> v4
>>>>>
>>>>>   https://lkml.org/lkml/2020/1/23/515
>>>>>
>>>>> * Moved away from allocating a separate security_hook_heads and adding a
>>>>>   new special case for arch_prepare_bpf_trampoline to using BPF fexit
>>>>>   trampolines called from the right place in the LSM hook and toggled by
>>>>>   static keys based on the discussion in:
>>>>>
>>>>>     https://lore.kernel.org/bpf/CAG48ez25mW+_oCxgCtbiGMX07g_ph79UOJa07h=o_6B6+Q-u5g@mail.gmail.com/
>>>>>
>>>>> * Since the code does not deal with security_hook_heads anymore, it goes
>>>>>   from "being a BPF LSM" to "BPF program attachment to LSM hooks".
> [...]
>
>>>> likely harmful.
>>> We will be happy to document each of the macros in detail. Do note a
>>> few things here:
>>>
>>> * There is really nothing magical about them though,
>>
>> +#define LSM_HOOK_void(NAME, ...) \
>> +	noinline void bpf_lsm_##NAME(__VA_ARGS__) {}
>> +
>> +#include <linux/lsm_hook_names.h>
>> +#undef LSM_HOOK
>>
>> I haven't seen anything this ... novel ... in a very long time.
> This is not "novel", it's a fairly common pattern followed in tracing:
>
> For example, the TRACE_INCLUDE macro which is used for tracepoints:
>
>   include/trace/define_trace.h
>
> and used in:
>
>   * include/trace/bpf_probe.h
>
>     https://github.com/torvalds/linux/blob/master/include/trace/bpf_probe.h#L110
>
>   * include/trace/perf.h
>
>     https://github.com/torvalds/linux/blob/master/include/trace/perf.h#L90
>
>   * include/trace/trace_events.h
>
>     https://github.com/torvalds/linux/blob/master/include/trace/trace_events.h#L402

I can't say I care for that, either, and it's a simpler case.

>> I see why you want to do this, but you're tying the two sets
>> of code together unnaturally. When (not if) the two sets diverge
>> you're going to be introducing another clever way to deal with
> I don't fully understand what "two sets diverge" means here. All the
> BPF headers need is the name, return type and the args. This is the
> same information which is needed by the call_{int, void}_hooks and the
> LSM declarations (i.e. security_hook_heads and
> security_list_options).

As you've noticed, not all the interfaces can use call_{int,void}_hooks.
If you've been following the stacking efforts, you'll see that increasing.

At some point I anticipate a BPF hook that needs different information
than the LSM hook. That's been discussed, too. Asserting that it will
never happen does not make me comfortable.

>> the special case.
>>
>> It's not that I don't understand what you're doing. It's that
>> I don't like what you're doing. Explanation doesn't make me like
>> it better.
> As I have previously said, we will be happy to (and have already)
> updated our approach based on the consensus we arrive at here.

Not to put too fine a point on it, but I have raised the same
objection - that you should use the infrastructure as it is -
each time. I do not see consensus, I see you plowing ahead with
the direction you've chosen in spite of the significant objection.

>  The
> best outcome would be to not sacrifice performance as the LSM hooks
> are called from various performance critical code-paths.

Then help me tune the infrastructure to be better in those cases.

> It would be great to know the maintainers' (BPF and Security)
> perspective on this as well.

Many eyes and all that, but the BPF maintainers haven't been working
with the LSM infrastructure and won't be familiar with its quirks.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI)
  2020-02-21 22:31     ` Casey Schaufler
  2020-02-21 23:09       ` KP Singh
@ 2020-02-22  0:22       ` Kees Cook
  2020-02-22  1:04         ` Casey Schaufler
  1 sibling, 1 reply; 45+ messages in thread
From: Kees Cook @ 2020-02-22  0:22 UTC (permalink / raw)
  To: Casey Schaufler
  Cc: KP Singh, Linux Security Module list, LKML, bpf, James Morris

On Fri, Feb 21, 2020 at 02:31:18PM -0800, Casey Schaufler wrote:
> On 2/21/2020 11:41 AM, KP Singh wrote:
> > On 21-Feb 11:19, Casey Schaufler wrote:
> >> On 2/20/2020 9:52 AM, KP Singh wrote:
> >>> From: KP Singh <kpsingh@google.com>
> >>> # v3 -> v4
> >>>
> >>>   https://lkml.org/lkml/2020/1/23/515
> >>>
> >>> * Moved away from allocating a separate security_hook_heads and adding a
> >>>   new special case for arch_prepare_bpf_trampoline to using BPF fexit
> >>>   trampolines called from the right place in the LSM hook and toggled by
> >>>   static keys based on the discussion in:
> >>>
> >>>     https://lore.kernel.org/bpf/CAG48ez25mW+_oCxgCtbiGMX07g_ph79UOJa07h=o_6B6+Q-u5g@mail.gmail.com/
> >>>
> >>> * Since the code does not deal with security_hook_heads anymore, it goes
> >>>   from "being a BPF LSM" to "BPF program attachment to LSM hooks".
> >> I've finally been able to review the entire patch set.
> >> I can't imagine how it can make sense to add this much
> >> complexity to the LSM infrastructure in support of this
> >> feature. There is macro magic going on that is going to
> >> break, and soon. You are introducing dependencies on BPF
> >> into the infrastructure, and that's unnecessary and most
> >> likely harmful.
> > We will be happy to document each of the macros in detail. Do note a
> > few things here:
> >
> > * There is really nothing magical about them though,
> 
> 
> +#define LSM_HOOK_void(NAME, ...) \
> +	noinline void bpf_lsm_##NAME(__VA_ARGS__) {}
> +
> +#include <linux/lsm_hook_names.h>
> +#undef LSM_HOOK
> 
> I haven't seen anything this ... novel ... in a very long time.
> I see why you want to do this, but you're tying the two sets
> of code together unnaturally. When (not if) the two sets diverge
> you're going to be introducing another clever way to deal with
> the special case.

I really like this approach: it actually _simplifies_ the LSM piece in
that there is no need to keep the union and the hook lists in sync any
more: they're defined once now. (There were already 2 lists, and this
collapses the list into 1 place for all 3 users.) It's very visible in
the diffstat too (~300 lines removed):

 include/linux/lsm_hook_names.h | 353 +++++++++++++++++++
 include/linux/lsm_hooks.h      | 622 +--------------------------------
 2 files changed, 359 insertions(+), 616 deletions(-)

Also, there is no need to worry about divergence: the BPF will always
track the exposed LSM. Backward compat is (AIUI) explicitly a
non-feature.

I don't see why anything here is "harmful"?

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI)
  2020-02-22  0:22       ` Kees Cook
@ 2020-02-22  1:04         ` Casey Schaufler
  2020-02-22  3:36           ` Kees Cook
  0 siblings, 1 reply; 45+ messages in thread
From: Casey Schaufler @ 2020-02-22  1:04 UTC (permalink / raw)
  To: Kees Cook
  Cc: KP Singh, Linux Security Module list, LKML, bpf, James Morris,
	Casey Schaufler

On 2/21/2020 4:22 PM, Kees Cook wrote:
> On Fri, Feb 21, 2020 at 02:31:18PM -0800, Casey Schaufler wrote:
>> On 2/21/2020 11:41 AM, KP Singh wrote:
>>> On 21-Feb 11:19, Casey Schaufler wrote:
>>>> On 2/20/2020 9:52 AM, KP Singh wrote:
>>>>> From: KP Singh <kpsingh@google.com>
>>>>> # v3 -> v4
>>>>>
>>>>>   https://lkml.org/lkml/2020/1/23/515
>>>>>
>>>>> * Moved away from allocating a separate security_hook_heads and adding a
>>>>>   new special case for arch_prepare_bpf_trampoline to using BPF fexit
>>>>>   trampolines called from the right place in the LSM hook and toggled by
>>>>>   static keys based on the discussion in:
>>>>>
>>>>>     https://lore.kernel.org/bpf/CAG48ez25mW+_oCxgCtbiGMX07g_ph79UOJa07h=o_6B6+Q-u5g@mail.gmail.com/
>>>>>
>>>>> * Since the code does not deal with security_hook_heads anymore, it goes
>>>>>   from "being a BPF LSM" to "BPF program attachment to LSM hooks".
>>>> I've finally been able to review the entire patch set.
>>>> I can't imagine how it can make sense to add this much
>>>> complexity to the LSM infrastructure in support of this
>>>> feature. There is macro magic going on that is going to
>>>> break, and soon. You are introducing dependencies on BPF
>>>> into the infrastructure, and that's unnecessary and most
>>>> likely harmful.
>>> We will be happy to document each of the macros in detail. Do note a
>>> few things here:
>>>
>>> * There is really nothing magical about them though,
>>
>> +#define LSM_HOOK_void(NAME, ...) \
>> +	noinline void bpf_lsm_##NAME(__VA_ARGS__) {}
>> +
>> +#include <linux/lsm_hook_names.h>
>> +#undef LSM_HOOK
>>
>> I haven't seen anything this ... novel ... in a very long time.
>> I see why you want to do this, but you're tying the two sets
>> of code together unnaturally. When (not if) the two sets diverge
>> you're going to be introducing another clever way to deal with
>> the special case.
> I really like this approach: it actually _simplifies_ the LSM piece in
> that there is no need to keep the union and the hook lists in sync any
> more: they're defined once now. (There were already 2 lists, and this
> collapses the list into 1 place for all 3 users.) It's very visible in
> the diffstat too (~300 lines removed):

Erk. Too many smart people like this. I still don't, but it's possible
that I could learn to.

>
>  include/linux/lsm_hook_names.h | 353 +++++++++++++++++++
>  include/linux/lsm_hooks.h      | 622 +--------------------------------
>  2 files changed, 359 insertions(+), 616 deletions(-)
>
> Also, there is no need to worry about divergence: the BPF will always
> track the exposed LSM. Backward compat is (AIUI) explicitly a
> non-feature.

As written you're correct, it can't diverge. My concern is about
what happens when someone decides that they want the BPF and hook
to be different. I fear there will be a hideous solution.

> I don't see why anything here is "harmful"?

Injecting large chunks of code via an #include does nothing
for readability. I've seen it fail disastrously many times,
usually after the original author has moved on and entrusted
the code to someone who missed some of the nuance.

I'll drop my objection to this bit, but still object to making
BPF special in the infrastructure. It doesn't need to be and
it is exactly the kind of additional complexity we need to
avoid.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI)
  2020-02-22  1:04         ` Casey Schaufler
@ 2020-02-22  3:36           ` Kees Cook
  0 siblings, 0 replies; 45+ messages in thread
From: Kees Cook @ 2020-02-22  3:36 UTC (permalink / raw)
  To: Casey Schaufler
  Cc: KP Singh, Linux Security Module list, LKML, bpf, James Morris

On Fri, Feb 21, 2020 at 05:04:38PM -0800, Casey Schaufler wrote:
> On 2/21/2020 4:22 PM, Kees Cook wrote:
> > I really like this approach: it actually _simplifies_ the LSM piece in
> > that there is no need to keep the union and the hook lists in sync any
> > more: they're defined once now. (There were already 2 lists, and this
> > collapses the list into 1 place for all 3 users.) It's very visible in
> > the diffstat too (~300 lines removed):
> 
> Erk. Too many smart people like this. I still don't, but it's possible
> that I could learn to.

Well, I admit that I am, perhaps, overly infatuated with "fancy" macros,
but in cases like this where we're operating on a list of stuff and doing
the same thing over and over but with different elements, I've found
this is actually a much nicer way to do it. (E.g. I did something like
this in drivers/misc/lkdtm/core.c to avoid endless typing, and Mimi did
something similar in include/linux/fs.h for keeping kernel_read_file_id
and kernel_read_file_str automatically in sync.) KP's macros are more
extensive, but I think it's a clever way to avoid going crazy as LSM
hooks evolve.

> > Also, there is no need to worry about divergence: the BPF will always
> > track the exposed LSM. Backward compat is (AIUI) explicitly a
> > non-feature.
> 
> As written you're correct, it can't diverge. My concern is about
> what happens when someone decides that they want the BPF and hook
> to be different. I fear there will be a hideous solution.

This is related to some of the discussion at the last Maintainer's
Summit and tracepoints: i.e. the exposure of what is basically kernel
internals to a userspace system. The conclusion there (which, I think,
has been extended strongly into BPF) is that things that produce BPF are
accepted to be strongly tied to kernel version, so if a hook changes, so
must the userspace side. This appears to be proven out in the existing
BPF world, which gives me some evidence that this claim (close tie to
kernel version) isn't an empty promise.

> > I don't see why anything here is "harmful"?
> 
> Injecting large chunks of code via an #include does nothing
> for readability. I've seen it fail disastrously many times,
> usually after the original author has moved on and entrusted
> the code to someone who missed some of the nuance.

I totally agree about wanting to avoid reduced readability. In this case,
I actually think readability is improved since the macro "implementations"
are right above each #include. And then looking at the resulting included
header, all the metadata is visible in one place. But I agree: it is
"unusual", but I think on the whole it's an improvement. (But I share some
of the frustration of the kernel being filled with weird preprocessor
insanity. I will never get back the weeks I spent on trying to improve
the min/max macros.... *sob*)

> I'll drop my objection to this bit, but still object to making
> BPF special in the infrastructure. It doesn't need to be and
> it is exactly the kind of additional complexity we need to
> avoid.

You mean 3/8's RUN_BPF_LSM_*_PROGS() additions to the call_*_hook()s?

I'll go comment on that thread directly instead of splitting the
discussion. :)

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-20 23:49   ` Casey Schaufler
  2020-02-21 11:44     ` KP Singh
@ 2020-02-22  4:22     ` Kees Cook
  2020-02-23 22:08       ` Alexei Starovoitov
                         ` (2 more replies)
  1 sibling, 3 replies; 45+ messages in thread
From: Kees Cook @ 2020-02-22  4:22 UTC (permalink / raw)
  To: Casey Schaufler
  Cc: KP Singh, LKML, Linux Security Module list, Alexei Starovoitov,
	James Morris

On Thu, Feb 20, 2020 at 03:49:05PM -0800, Casey Schaufler wrote:
> On 2/20/2020 9:52 AM, KP Singh wrote:
> > From: KP Singh <kpsingh@google.com>
> 
> Sorry about the heavy list pruning - the original set
> blows Thunderbird up.

(I've added some people back; I had to dig this thread back out of lkml
since I didn't get a direct copy...)

> > The BPF LSM programs are implemented as fexit trampolines to avoid the
> > overhead of retpolines. These programs cannot be attached to security_*
> > wrappers as there are quite a few security_* functions that do more than
> > just calling the LSM callbacks.
> >
> > This was discussed on the lists in:
> >
> >   https://lore.kernel.org/bpf/20200123152440.28956-1-kpsingh@chromium.org/T/#m068becce588a0cdf01913f368a97aea4c62d8266
> >
> > Adding a NOP callback after all the static LSM callbacks are called has
> > the following benefits:
> >
> > - The BPF programs run at the right stage of the security_* wrappers.
> > - They run after all the static LSM hooks allowed the operation,
> >   therefore cannot allow an action that was already denied.
> 
> I still say that the special call-out to BPF is unnecessary.
> I remain unconvinced by the arguments. You aren't doing anything
> so special that the general mechanism won't work.

If I'm understanding this correctly, there are two issues:

1- BPF needs to be run last due to fexit trampolines (?)

2- BPF hooks don't know what may be attached at any given time, so
   ALL LSM hooks need to be universally hooked. THIS turns out to create
   a measurable performance problem in that the cost of the indirect call
   on the (mostly/usually) empty BPF policy is too high.

"1" can be solved a lot of ways, and doesn't seem to be a debated part
of this series.

"2" is interesting -- it creates a performance problem for EVERYONE that
builds in this kernel feature, regardless of them using it. Excepting
SELinux, "traditional" LSMs tend to be relatively sparse in their hooking:

$ grep '^      struct hlist_head' include/linux/lsm_hooks.h | wc -l
230
$ for i in apparmor loadpin lockdown safesetid selinux smack tomoyo yama ; \
  do echo -n "$i " && (cd $i && git grep LSM_HOOK_INIT | wc -l) ; done
apparmor   68
loadpin     3
lockdown    1
safesetid   2
selinux   202
smack     108
tomoyo     28
yama        4

So, trying to avoid the indirect calls is, as you say, an optimization,
but it might be a needed one due to the other limitations.

To me, some questions present themselves:

a) What, exactly, are the performance characteristics of:
	"before"
	"with indirect calls"
	"with static keys optimization"

b) Would there actually be a global benefit to using the static keys
   optimization for other LSMs? (Especially given that they're already
   sparsely populated and policy likely determines utility -- all the
   LSMs would just turn ON all their static keys or turn off ALL their
   static keys depending on having policy loaded.)

If static keys are justified for KRSI (by "a") then it seems the approach
here should stand. If "b" is also true, then we need an additional
series to apply this optimization for the other LSMs (but that seems
distinctly separate from THIS series).
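
For clarity on what "with static keys optimization" means mechanically,
a minimal sketch of a hook wrapper (the key name and the hook choice are
hypothetical; call_int_hook() is the existing list-walking macro in
security/security.c, and the real wrapper does more than this):

#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(bpf_lsm_key_file_open);

int security_file_open(struct file *file)
{
	int rc = call_int_hook(file_open, 0, file);	/* existing hook list */

	/* A patched NOP while nothing is attached; flipped with
	 * static_branch_enable()/disable() at attach/detach time. */
	if (rc == 0 && static_branch_unlikely(&bpf_lsm_key_file_open))
		rc = bpf_lsm_file_open(file);	/* direct call, no retpoline */
	return rc;
}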

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 4/8] bpf: lsm: Add support for enabling/disabling BPF hooks
  2020-02-20 17:52 ` [PATCH bpf-next v4 4/8] bpf: lsm: Add support for enabling/disabling BPF hooks KP Singh
  2020-02-21 18:57   ` Casey Schaufler
@ 2020-02-22  4:26   ` Kees Cook
  1 sibling, 0 replies; 45+ messages in thread
From: Kees Cook @ 2020-02-22  4:26 UTC (permalink / raw)
  To: KP Singh
  Cc: linux-kernel, bpf, linux-security-module, Alexei Starovoitov,
	Daniel Borkmann, James Morris, Thomas Garnier, Michael Halcrow,
	Paul Turner, Brendan Gregg, Jann Horn, Matthew Garrett,
	Christian Brauner, Florent Revest, Brendan Jackman,
	Martin KaFai Lau, Song Liu, Yonghong Song, Serge E. Hallyn,
	David S. Miller, Greg Kroah-Hartman, Nicolas Ferre,
	Stanislav Fomichev, Quentin Monnet, Andrey Ignatov, Joe Stringer

On Thu, Feb 20, 2020 at 06:52:46PM +0100, KP Singh wrote:
> index aa111392a700..569cc07d5e34 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -804,6 +804,13 @@ int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
>  			break;
>  		}
>  	}
> +#ifdef CONFIG_BPF_LSM
> +	if (HAS_BPF_LSM_PROG(vm_enough_memory)) {
> +		rc = bpf_lsm_vm_enough_memory(mm, pages);
> +		if (rc <= 0)
> +			cap_sys_admin = 0;
> +	}
> +#endif

This pattern of using #ifdef in code is not considered best practice.
Using in-code IS_ENABLED(CONFIG_BPF_LSM) is preferred. But since this
pattern always uses HAS_BPF_LSM_PROG(), you could fold the
IS_ENABLED() into the definition of HAS_BPF_LSM_PROG itself -- or more
likely, have the macro defined as:

#ifdef CONFIG_BPF_LSM
# define HAS_BPF_LSM_PROG(x)    ....existing implementation....
#else
# define HAS_BPF_LSM_PROG(x)	false
#endif

Then none of these ifdefs are needed.
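
With that fallback, the quoted hunk becomes plain code with no #ifdef (a
sketch; a declaration of bpf_lsm_vm_enough_memory() still has to be
visible in the !CONFIG_BPF_LSM build so the compiler can discard the
dead branch):

	/* No preprocessor conditional needed at the call site. */
	if (HAS_BPF_LSM_PROG(vm_enough_memory)) {
		rc = bpf_lsm_vm_enough_memory(mm, pages);
		if (rc <= 0)
			cap_sys_admin = 0;
	}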

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-22  4:22     ` Kees Cook
@ 2020-02-23 22:08       ` Alexei Starovoitov
  2020-02-24 16:32         ` Casey Schaufler
  2020-02-24 16:09       ` Casey Schaufler
  2020-02-24 17:23       ` KP Singh
  2 siblings, 1 reply; 45+ messages in thread
From: Alexei Starovoitov @ 2020-02-23 22:08 UTC (permalink / raw)
  To: Kees Cook
  Cc: Casey Schaufler, KP Singh, LKML, Linux Security Module list,
	Alexei Starovoitov, James Morris, bpf, netdev

On Fri, Feb 21, 2020 at 08:22:59PM -0800, Kees Cook wrote:
> 
> If I'm understanding this correctly, there are two issues:
> 
> 1- BPF needs to be run last due to fexit trampolines (?)

no.
The placement of nop call can be anywhere.
BPF trampoline is automagically converting nop call into a sequence
of directly invoked BPF programs.
No linked list traversals and no indirect calls at run-time.
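
Concretely, the nop attachment point in the series is just an empty
noinline stub per hook (a sketch built from the stub shape quoted
earlier in the thread; the hook name is illustrative):

noinline int bpf_lsm_file_open(struct file *file)
{
	return 0;	/* patched by the BPF trampoline when programs attach */
}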

> 2- BPF hooks don't know what may be attached at any given time, so
>    ALL LSM hooks need to be universally hooked. THIS turns out to create
>    a measurable performance problem in that the cost of the indirect call
>    on the (mostly/usually) empty BPF policy is too high.

also no.

> So, trying to avoid the indirect calls is, as you say, an optimization,
> but it might be a needed one due to the other limitations.

I'm convinced that avoiding the cost of retpoline in critical path is a
requirement for any new infrastructure in the kernel.
Not only for security, but for any new infra.
Networking stack converted all such places to conditional calls.
In BPF land we converted indirect calls to direct jumps and direct calls.
It took two years to do so. Adding new indirect calls is not an option.
I'm eagerly waiting for Peter's static_call patches to land to convert
a lot more indirect calls. May be existing LSMs will take advantage
of static_call patches too, but static_call is not an option for BPF.
That's why we introduced BPF trampoline in the last kernel release.

> b) Would there actually be a global benefit to using the static keys
>    optimization for other LSMs?

Yes. Just compiling with CONFIG_SECURITY adds "if (hlist_empty)" check
for every hook. Some of those hooks are in critical path. This load+cmp
can be avoided with static_key optimization. I think it's worth doing.

> If static keys are justified for KRSI

I really like that KRSI costs absolutely zero when it's not enabled.
Attaching BPF prog to one hook preserves zero cost for all other hooks.
And when one hook is BPF powered it's using direct call instead of
super expensive retpoline.

Overall this patch set looks good to me. There was a minor issue with prog
accounting. I expect only that bit to be fixed in v5.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-22  4:22     ` Kees Cook
  2020-02-23 22:08       ` Alexei Starovoitov
@ 2020-02-24 16:09       ` Casey Schaufler
  2020-02-24 17:23       ` KP Singh
  2 siblings, 0 replies; 45+ messages in thread
From: Casey Schaufler @ 2020-02-24 16:09 UTC (permalink / raw)
  To: Kees Cook
  Cc: KP Singh, LKML, Linux Security Module list, Alexei Starovoitov,
	James Morris, Casey Schaufler

On 2/21/2020 8:22 PM, Kees Cook wrote:
> On Thu, Feb 20, 2020 at 03:49:05PM -0800, Casey Schaufler wrote:
>> On 2/20/2020 9:52 AM, KP Singh wrote:
>>> From: KP Singh <kpsingh@google.com>
>> Sorry about the heavy list pruning - the original set
>> blows Thunderbird up.
> (I've added some people back; I had to dig this thread back out of lkml
> since I didn't get a direct copy...)
>
>>> The BPF LSM programs are implemented as fexit trampolines to avoid the
>>> overhead of retpolines. These programs cannot be attached to security_*
>>> wrappers as there are quite a few security_* functions that do more than
>>> just calling the LSM callbacks.
>>>
>>> This was discussed on the lists in:
>>>
>>>   https://lore.kernel.org/bpf/20200123152440.28956-1-kpsingh@chromium.org/T/#m068becce588a0cdf01913f368a97aea4c62d8266
>>>
>>> Adding a NOP callback after all the static LSM callbacks are called has
>>> the following benefits:
>>>
>>> - The BPF programs run at the right stage of the security_* wrappers.
>>> - They run after all the static LSM hooks allowed the operation,
>>>   therefore cannot allow an action that was already denied.
>> I still say that the special call-out to BPF is unnecessary.
>> I remain unconvinced by the arguments. You aren't doing anything
>> so special that the general mechanism won't work.
> If I'm understanding this correctly, there are two issues:
>
> 1- BPF needs to be run last due to fexit trampolines (?)

That's my understanding. As you mention below, there are many
ways to skin that cat.

> 2- BPF hooks don't know what may be attached at any given time, so
>    ALL LSM hooks need to be universally hooked.

Right. But that's exactly what we had before we switched to
the hook lists for stacking. It was perfectly acceptable, and
was accepted that way, for years. People even objected to it
being changed.

>  THIS turns out to create
>    a measurable performance problem in that the cost of the indirect call
>    on the (mostly/usually) empty BPF policy is too high.

Right. Except that it was deemed acceptable back before stacking.
What has changed? 

>
> "1" can be solved a lot of ways, and doesn't seem to be a debated part
> of this series.
>
> "2" is interesting -- it creates a performance problem for EVERYONE that
> builds in this kernel feature, regardless of them using it. Excepting
> SELinux, "traditional" LSMs tend to be relatively sparse in their hooking:
>
> $ grep '^      struct hlist_head' include/linux/lsm_hooks.h | wc -l
> 230
> $ for i in apparmor loadpin lockdown safesetid selinux smack tomoyo yama ; \
>   do echo -n "$i " && (cd $i && git grep LSM_HOOK_INIT | wc -l) ; done
> apparmor   68
> loadpin     3
> lockdown    1
> safesetid   2
> selinux   202
> smack     108
> tomoyo     28
> yama        4
>
> So, trying to avoid the indirect calls is, as you say, an optimization,
> but it might be a needed one due to the other limitations.
>
> To me, some questions present themselves:
>
> a) What, exactly, are the performance characteristics of:
> 	"before"
> 	"with indirect calls"
> 	"with static keys optimization"
>
> b) Would there actually be a global benefit to using the static keys
>    optimization for other LSMs? (Especially given that they're already
>    sparsely populated and policy likely determines utility -- all the
>    LSMs would just turn ON all their static keys or turn off ALL their
>    static keys depending on having policy loaded.)
>
> If static keys are justified for KRSI (by "a") then it seems the approach
> here should stand. If "b" is also true, then we need an additional
> series to apply this optimization for the other LSMs (but that seems
> distinctly separate from THIS series).
>


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-23 22:08       ` Alexei Starovoitov
@ 2020-02-24 16:32         ` Casey Schaufler
  2020-02-24 17:13           ` KP Singh
  0 siblings, 1 reply; 45+ messages in thread
From: Casey Schaufler @ 2020-02-24 16:32 UTC (permalink / raw)
  To: Alexei Starovoitov, Kees Cook
  Cc: KP Singh, LKML, Linux Security Module list, Alexei Starovoitov,
	James Morris, bpf, netdev, Casey Schaufler

On 2/23/2020 2:08 PM, Alexei Starovoitov wrote:
> On Fri, Feb 21, 2020 at 08:22:59PM -0800, Kees Cook wrote:
>> If I'm understanding this correctly, there are two issues:
>>
>> 1- BPF needs to be run last due to fexit trampolines (?)
> no.
> The placement of nop call can be anywhere.
> BPF trampoline is automagically converting nop call into a sequence
> of directly invoked BPF programs.
> No linked list traversals and no indirect calls at run-time.

Then why the insistence that it be last?

>> 2- BPF hooks don't know what may be attached at any given time, so
>>    ALL LSM hooks need to be universally hooked. THIS turns out to create
>>    a measurable performance problem in that the cost of the indirect call
>>    on the (mostly/usually) empty BPF policy is too high.
> also no.

Um, then why not use the infrastructure as is?

>> So, trying to avoid the indirect calls is, as you say, an optimization,
>> but it might be a needed one due to the other limitations.
> I'm convinced that avoiding the cost of retpoline in critical path is a
> requirement for any new infrastructure in the kernel.

Sorry, I haven't gotten that memo.

> Not only for security, but for any new infra.

The LSM infrastructure ain't new.

> Networking stack converted all such places to conditional calls.
> In BPF land we converted indirect calls to direct jumps and direct calls.
> It took two years to do so. Adding new indirect calls is not an option.
> I'm eagerly waiting for Peter's static_call patches to land to convert
> a lot more indirect calls. May be existing LSMs will take advantage
> of static_call patches too, but static_call is not an option for BPF.
> That's why we introduced BPF trampoline in the last kernel release.

Sorry, but I don't see how BPF is so overwhelmingly special.

>> b) Would there actually be a global benefit to using the static keys
>>    optimization for other LSMs?
> Yes. Just compiling with CONFIG_SECURITY adds "if (hlist_empty)" check
> for every hook.

Err, no, it doesn't. It does an hlist_for_each_entry(), which
may be the equivalent on an empty list, but let's not go around
spreading misinformation.
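
For reference, the dispatch in question (lightly abridged from
security/security.c of this era); even on an empty list it still does a
load and a compare on the list head:

#define call_int_hook(FUNC, IRC, ...) ({			\
	int RC = IRC;						\
	do {							\
		struct security_hook_list *P;			\
								\
		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
			RC = P->hook.FUNC(__VA_ARGS__);		\
			if (RC != 0)				\
				break;				\
		}						\
	} while (0);						\
	RC;							\
})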

>  Some of those hooks are in critical path. This load+cmp
> can be avoided with static_key optimization. I think it's worth doing.

I admit to being unfamiliar with the static_key implementation,
but if it would work for a list of hooks rather than a single hook,
I'm all ears.

>> If static keys are justified for KRSI
> I really like that KRSI costs absolutely zero when it's not enabled.

And I dislike that there's security module specific code in security.c,
security.h and/or lsm_hooks.h. KRSI *is not that special*.

> Attaching BPF prog to one hook preserves zero cost for all other hooks.
> And when one hook is BPF powered it's using direct call instead of
> super expensive retpoline.

I'm not objecting to the good it does for KRSI.
I am *strongly* objecting to special casing KRSI.

> Overall this patch set looks good to me. There was a minor issue with prog
> accounting. I expect only that bit to be fixed in v5.


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-24 16:32         ` Casey Schaufler
@ 2020-02-24 17:13           ` KP Singh
  2020-02-24 18:45             ` Casey Schaufler
  0 siblings, 1 reply; 45+ messages in thread
From: KP Singh @ 2020-02-24 17:13 UTC (permalink / raw)
  To: Casey Schaufler
  Cc: Alexei Starovoitov, Kees Cook, LKML, Linux Security Module list,
	Alexei Starovoitov, James Morris, bpf, netdev

On 24-Feb 08:32, Casey Schaufler wrote:
> On 2/23/2020 2:08 PM, Alexei Starovoitov wrote:
> > On Fri, Feb 21, 2020 at 08:22:59PM -0800, Kees Cook wrote:
> >> If I'm understanding this correctly, there are two issues:
> >>
> >> 1- BPF needs to be run last due to fexit trampolines (?)
> > no.
> > The placement of nop call can be anywhere.
> > BPF trampoline is automagically converting nop call into a sequence
> > of directly invoked BPF programs.
> > No linked list traversals and no indirect calls at run-time.
> 
> Then why the insistence that it be last?

I think this came out of the discussion about not being able to
override the other LSMs and introduce a new attack vector with some
arguments discussed at:

  https://lore.kernel.org/bpf/20200109194302.GA85350@google.com/

Let's say we have SELinux + BPF running on the system. BPF should still
respect any decisions made by SELinux. This hasn't got anything to
do with the usage of fexit trampolines.

> 
> >> 2- BPF hooks don't know what may be attached at any given time, so
> >>    ALL LSM hooks need to be universally hooked. THIS turns out to create
> >>    a measurable performance problem in that the cost of the indirect call
> >>    on the (mostly/usually) empty BPF policy is too high.
> > also no.
> 
> Um, then why not use the infrastructure as is?

I think what Kees meant is that BPF allows hooking to all the LSM
hooks, and providing static LSM hook callbacks like traditional LSMs
do has a measurable performance overhead, and this is indeed correct.
This is why we want to provide the following characteristics:

- Without introducing a new attack surface, this was the reason for
  creating a mutable security_hook_heads in v1, which ran the BPF hook
  after all the other LSM hooks.

  This approach still had the issues of an indirect call and an
  extra check when not used. So this was not truly zero overhead even
  after special casing BPF.

- Having a truly zero performance overhead on the system. There are
  other tracing / attachment mechanisms in the kernel which provide
  similar guarantees (using static keys and direct calls), and it
  seems regressive for KRSI to not leverage these known patterns and
  to sacrifice performance, especially in hot paths. This allows users
  to use KRSI alongside other LSMs without paying an extra cost for
  all the possible hooks.

> 
> >> So, trying to avoid the indirect calls is, as you say, an optimization,
> >> but it might be a needed one due to the other limitations.
> > I'm convinced that avoiding the cost of retpoline in critical path is a
> > requirement for any new infrastructure in the kernel.
> 
> Sorry, I haven't gotten that memo.
> 
> > Not only for security, but for any new infra.
> 
> The LSM infrastructure ain't new.

But the ability to attach BPF programs to LSM hooks is new. Also, we
have not had the whole implementation of the LSM hook be mutable
before, and this is essentially what KRSI provides.

> 
> > Networking stack converted all such places to conditional calls.
> > In BPF land we converted indirect calls to direct jumps and direct calls.
> > It took two years to do so. Adding new indirect calls is not an option.
> > I'm eagerly waiting for Peter's static_call patches to land to convert
> > a lot more indirect calls. May be existing LSMs will take advantage
> > of static_call patches too, but static_call is not an option for BPF.
> > That's why we introduced BPF trampoline in the last kernel release.
> 
> Sorry, but I don't see how BPF is so overwhelmingly special.

My analogy here is that if every tracepoint in the kernel were of the
type:

if (trace_foo_enabled) // <-- Overhead here, solved with static key
   trace_foo(a);  // <-- retpoline overhead, solved with fexit trampolines

It would be very hard to justify enabling them on a production system,
and the same can be said for BPF and KRSI.
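
For contrast, a sketch of the pattern tracepoints (and this series) use
instead, where both costs disappear; the names are illustrative and
trace_foo() stands in for the hook body from the analogy above:

#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(foo_enabled_key);

static inline void maybe_trace_foo(int a)
{
	/* A single NOP in the hot path until the key is enabled... */
	if (static_branch_unlikely(&foo_enabled_key))
		trace_foo(a);	/* ...then a direct, non-retpoline call */
}

/* Flipped only at attach/detach time, never in the hot path: */
void foo_attach(void) { static_branch_enable(&foo_enabled_key); }
void foo_detach(void) { static_branch_disable(&foo_enabled_key); }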

- KP

> 
> >> b) Would there actually be a global benefit to using the static keys
> >>    optimization for other LSMs?
> > Yes. Just compiling with CONFIG_SECURITY adds "if (hlist_empty)" check
> > for every hook.
> 
> Err, no, it doesn't. It does an hlist_for_each_entry(), which
> may be the equivalent on an empty list, but let's not go around
> spreading misinformation.
> 
> >  Some of those hooks are in critical path. This load+cmp
> > can be avoided with static_key optimization. I think it's worth doing.
> 
> I admit to being unfamiliar with the static_key implementation,
> but if it would work for a list of hooks rather than a single hook,
> I'm all ears.
> 
> >> If static keys are justified for KRSI
> > I really like that KRSI costs absolutely zero when it's not enabled.
> 
> And I dislike that there's security module specific code in security.c,
> security.h and/or lsm_hooks.h. KRSI *is not that special*.
> 
> > Attaching BPF prog to one hook preserves zero cost for all other hooks.
> > And when one hook is BPF powered it's using direct call instead of
> > super expensive retpoline.
> 
> I'm not objecting to the good it does for KRSI.
> I am *strongly* objecting to special casing KRSI.
> 
> > Overall this patch set looks good to me. There was a minor issue with prog
> > accounting. I expect only that bit to be fixed in v5.
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-22  4:22     ` Kees Cook
  2020-02-23 22:08       ` Alexei Starovoitov
  2020-02-24 16:09       ` Casey Schaufler
@ 2020-02-24 17:23       ` KP Singh
  2 siblings, 0 replies; 45+ messages in thread
From: KP Singh @ 2020-02-24 17:23 UTC (permalink / raw)
  To: Kees Cook
  Cc: Casey Schaufler, LKML, Linux Security Module list,
	Alexei Starovoitov, James Morris

Hi Kees,

Thanks for the feedback!

On 21-Feb 20:22, Kees Cook wrote:
> On Thu, Feb 20, 2020 at 03:49:05PM -0800, Casey Schaufler wrote:
> > On 2/20/2020 9:52 AM, KP Singh wrote:
> > > From: KP Singh <kpsingh@google.com>
> > 
> > Sorry about the heavy list pruning - the original set
> > blows Thunderbird up.
> 
> (I've added some people back; I had to dig this thread back out of lkml
> since I didn't get a direct copy...)
> 
> > > The BPF LSM programs are implemented as fexit trampolines to avoid the
> > > overhead of retpolines. These programs cannot be attached to security_*
> > > wrappers as there are quite a few security_* functions that do more than
> > > just calling the LSM callbacks.
> > >
> > > This was discussed on the lists in:
> > >
> > >   https://lore.kernel.org/bpf/20200123152440.28956-1-kpsingh@chromium.org/T/#m068becce588a0cdf01913f368a97aea4c62d8266
> > >
> > > Adding a NOP callback after all the static LSM callbacks are called has
> > > the following benefits:
> > >
> > > - The BPF programs run at the right stage of the security_* wrappers.
> > > - They run after all the static LSM hooks allowed the operation,
> > >   therefore cannot allow an action that was already denied.
> > 
> > I still say that the special call-out to BPF is unnecessary.
> > I remain unconvinced by the arguments. You aren't doing anything
> > so special that the general mechanism won't work.
> 
> If I'm understanding this correctly, there are two issues:
> 
> 1- BPF needs to be run last due to fexit trampolines (?)
> 
> 2- BPF hooks don't know what may be attached at any given time, so
>    ALL LSM hooks need to be universally hooked. THIS turns out to create
>    a measurable performance problem in that the cost of the indirect call
>    on the (mostly/usually) empty BPF policy is too high.
> 
> "1" can be solved a lot of ways, and doesn't seem to be a debated part
> of this series.
> 
> "2" is interesting -- it creates a performance problem for EVERYONE that
> builds in this kernel feature, regardless of them using it. Excepting
> SELinux, "traditional" LSMs tend to be relatively sparse in their hooking:
> 
> $ grep '^      struct hlist_head' include/linux/lsm_hooks.h | wc -l
> 230
> $ for i in apparmor loadpin lockdown safesetid selinux smack tomoyo yama ; \
>   do echo -n "$i " && (cd $i && git grep LSM_HOOK_INIT | wc -l) ; done
> apparmor   68
> loadpin     3
> lockdown    1
> safesetid   2
> selinux   202
> smack     108
> tomoyo     28
> yama        4
> 
> So, trying to avoid the indirect calls is, as you say, an optimization,
> but it might be a needed one due to the other limitations.
> 
> To me, some questions present themselves:
> 
> a) What, exactly, are the performance characteristics of:
> 	"before"
> 	"with indirect calls"
> 	"with static keys optimization"

Good suggestion!

I will do some analysis and come back with the numbers.

> 
> b) Would there actually be a global benefit to using the static keys
>    optimization for other LSMs? (Especially given that they're already
>    sparsely populated and policy likely determines utility -- all the
>    LSMs would just turn ON all their static keys or turn off ALL their
>    static keys depending on having policy loaded.)

As Alexei mentioned, we can use the patches for static calls after
they are merged:

https://lore.kernel.org/lkml/8bc857824f82462a296a8a3c4913a11a7f801e74.1547073843.git.jpoimboe@redhat.com/

to make the framework better (as a separate series) especially given
that we are unsure how they work with BPF.

- KP

> 
> If static keys are justified for KRSI (by "a") then it seems the approach
> here should stand. If "b" is also true, then we need an additional
> series to apply this optimization for the other LSMs (but that seems
> distinctly separate from THIS series).
> 
> -- 
> Kees Cook

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-24 17:13           ` KP Singh
@ 2020-02-24 18:45             ` Casey Schaufler
  2020-02-24 21:41               ` Kees Cook
  0 siblings, 1 reply; 45+ messages in thread
From: Casey Schaufler @ 2020-02-24 18:45 UTC (permalink / raw)
  To: KP Singh
  Cc: Alexei Starovoitov, Kees Cook, LKML, Linux Security Module list,
	Alexei Starovoitov, James Morris, bpf, netdev, Casey Schaufler

On 2/24/2020 9:13 AM, KP Singh wrote:
> On 24-Feb 08:32, Casey Schaufler wrote:
>> On 2/23/2020 2:08 PM, Alexei Starovoitov wrote:
>>> On Fri, Feb 21, 2020 at 08:22:59PM -0800, Kees Cook wrote:
>>>> If I'm understanding this correctly, there are two issues:
>>>>
>>>> 1- BPF needs to be run last due to fexit trampolines (?)
>>> no.
>>> The placement of nop call can be anywhere.
>>> BPF trampoline is automagically converting nop call into a sequence
>>> of directly invoked BPF programs.
>>> No linked list traversals and no indirect calls at run-time.
>> Then why the insistence that it be last?
> I think this came out of the discussion about not being able to
> override the other LSMs and introduce a new attack vector with some
> arguments discussed at:
>
>   https://lore.kernel.org/bpf/20200109194302.GA85350@google.com/
>
> Let's say we have SELinux + BPF running on the system. BPF should still
> respect any decisions made by SELinux. This hasn't got anything to
> do with the usage of fexit trampolines.

The discussion cited is more about GPL than anything else.

The LSM rule is that any security module must be able to
accept the decisions of others. SELinux has to accept decisions
made ahead of it. It always has, as LSM checks occur after
"traditional" checks, which may fail. The only reason that you
need to be last in this implementation appears to be that you
refuse to use the general mechanisms. You can't blame SELinux
for that.

>>>> 2- BPF hooks don't know what may be attached at any given time, so
>>>>    ALL LSM hooks need to be universally hooked. THIS turns out to create
>>>>    a measurable performance problem in that the cost of the indirect call
>>>>    on the (mostly/usually) empty BPF policy is too high.
>>> also no.
>> Um, then why not use the infrastructure as is?
> I think what Kees meant is that BPF allows hooking to all the LSM
> hooks, and providing static LSM hook callbacks like traditional LSMs
> do has a measurable performance overhead, and this is indeed correct.
> This is why we want to provide the following characteristics:

I was responding to the "also no", which denies both what Kees said
and what you're saying here. 

> - Without introducing a new attack surface, this was the reason for
>   creating a mutable security_hook_heads in v1, which ran the BPF hook
>   after all the other LSM hooks.

Yeah,

>   This approach still had the issues of an indirect call and an
>   extra check when not used. So this was not truly zero overhead even
>   after special casing BPF.

The LSM mechanism is not zero overhead. It never has been. That's why
you can compile it out. You get added value at a price. You get the
ability to use SELinux and KRSI together at a price. If that's unacceptable
you can go the route of seccomp, which doesn't use LSM for many of the
same reasons you're on about.

When LSM was introduced it was expected to be used by the lunatic fringe
people with government mandated security requirements. Today it has a
much greater general application. That's why I'm in the process of
bringing it up to modern use case requirements. Performance is much
more important now than it was before the use of LSM became popular.

> - Having a truly zero performance overhead on the system. There are
>   other tracing / attachment mechanisms in the kernel which provide
>   similar guarantees (using static keys and direct calls), and it
>   seems regressive for KRSI to not leverage these known patterns and
>   to sacrifice performance, especially in hot paths. This allows users
>   to use KRSI alongside other LSMs without paying an extra cost for
>   all the possible hooks.

This is in direct conflict with the aforementioned "also no".

>>>> So, trying to avoid the indirect calls is, as you say, an optimization,
>>>> but it might be a needed one due to the other limitations.
>>> I'm convinced that avoiding the cost of retpoline in critical path is a
>>> requirement for any new infrastructure in the kernel.
>> Sorry, I haven't gotten that memo.
>>
>>> Not only for security, but for any new infra.
>> The LSM infrastructure ain't new.
> But the ability to attach BPF programs to LSM hooks is new.

Stop right there. No, I mean it. Really, stop right there.
I don't give a flying fig (he said, using the polite expression
rather than the vulgar) about what you want to do within a
security module. Attach a BPF program, randomize arbitrary
memory locations, do traditional Bell & LaPadula, it's all the
same to the LSM infrastructure. If you want to do something
that has to work outside that, the way audit and seccomp do,
you need to take that out of the LSM infrastructure. If you
want the convenience of the LSM infrastructure you don't get
to muck it up.

> Also, we
> have not had the whole implementation of the LSM hook be mutable
> before, and this is essentially what KRSI provides.

It can do that wholly within KRSI hooks. You don't need to
put KRSI specific code into security.c.

>>> Networking stack converted all such places to conditional calls.
>>> In BPF land we converted indirect calls to direct jumps and direct calls.
>>> It took two years to do so. Adding new indirect calls is not an option.
>>> I'm eagerly waiting for Peter's static_call patches to land to convert
>>> a lot more indirect calls. May be existing LSMs will take advantage
>>> of static_call patches too, but static_call is not an option for BPF.
>>> That's why we introduced BPF trampoline in the last kernel release.
>> Sorry, but I don't see how BPF is so overwhelmingly special.
> My analogy here is that if every tracepoint in the kernel were of the
> type:
>
> if (trace_foo_enabled) // <-- Overhead here, solved with static key
>    trace_foo(a);  // <-- retpoline overhead, solved with fexit trampolines
>
> It would be very hard to justify enabling them on a production system,
> and the same can be said for BPF and KRSI.

The same can be and has been said about the LSM infrastructure.
If BPF and KRSI are that performance critical you shouldn't be
tying them to LSM, which is known to have overhead. If you can't
accept the LSM overhead, get out of the LSM. Or, help us fix the
LSM infrastructure to make its overhead closer to zero. Whether
you believe it or not, a lot of work has gone into keeping the LSM
overhead as small as possible while remaining sufficiently general
to perform its function.

No. If you're too special to play by LSM rules then you're special
enough to get into the kernel using more direct means.




^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-24 18:45             ` Casey Schaufler
@ 2020-02-24 21:41               ` Kees Cook
  2020-02-24 22:29                 ` Casey Schaufler
                                   ` (2 more replies)
  0 siblings, 3 replies; 45+ messages in thread
From: Kees Cook @ 2020-02-24 21:41 UTC (permalink / raw)
  To: Casey Schaufler
  Cc: KP Singh, Alexei Starovoitov, LKML, Linux Security Module list,
	Alexei Starovoitov, James Morris, bpf, netdev

On Mon, Feb 24, 2020 at 10:45:27AM -0800, Casey Schaufler wrote:
> On 2/24/2020 9:13 AM, KP Singh wrote:
> > On 24-Feb 08:32, Casey Schaufler wrote:
> >> On 2/23/2020 2:08 PM, Alexei Starovoitov wrote:
> >>> On Fri, Feb 21, 2020 at 08:22:59PM -0800, Kees Cook wrote:
> >>>> If I'm understanding this correctly, there are two issues:
> >>>>
> >>>> 1- BPF needs to be run last due to fexit trampolines (?)
> >>> no.
> >>> The placement of nop call can be anywhere.
> >>> BPF trampoline is automagically converting nop call into a sequence
> >>> of directly invoked BPF programs.
> >>> No linked list traversals and no indirect calls at run-time.
> >> Then why the insistence that it be last?
> > I think this came out of the discussion about not being able to
> > override the other LSMs and introduce a new attack vector with some
> > arguments discussed at:
> >
> >   https://lore.kernel.org/bpf/20200109194302.GA85350@google.com/
> >
> > Let's say we have SELinux + BPF running on the system. BPF should still
> > respect any decisions made by SELinux. This hasn't got anything to
> > do with the usage of fexit trampolines.
> 
> The discussion cited is more about GPL than anything else.
> 
> The LSM rule is that any security module must be able to
> accept the decisions of others. SELinux has to accept decisions
> made ahead of it. It always has, as LSM checks occur after
> "traditional" checks, which may fail. The only reason that you
> need to be last in this implementation appears to be that you
> refuse to use the general mechanisms. You can't blame SELinux
> for that.

Okay, this is why I wanted to try to state things plainly. The "in last
position" appears to be the result of a couple design choices:

-the idea of "not wanting to get in the way of other LSMs", while
 admirable, needs to actually be a non-goal to be "just" a stacked LSM
 (as you're saying here Casey). This position _was_ required for the
 non-privileged LSM case to avoid security implications, but that goal
 no longer exists here either.

-optimally using the zero-cost call-outs (static key + fexit trampolines)
 meant it didn't interact well with the existing stacking mechanism.

So, fine, these appear to be design choices, not *specifically*
requirements. Let's move on, I think there is more to unpack here...

> >>>> 2- BPF hooks don't know what may be attached at any given time, so
> >>>>    ALL LSM hooks need to be universally hooked. THIS turns out to create
> >>>>    a measurable performance problem in that the cost of the indirect call
> >>>>    on the (mostly/usually) empty BPF policy is too high.
> >>> also no.

AIUI, there was some confusion on Alexei's reply here. I, perhaps,
was not as clear as I needed to be. I think the later discussion on
performance overheads gets more into the point, and gets us closer to
the objections Alexei had. More below...

> >   This approach still had the issues of an indirect call and an
> >   extra check when not used. So this was not truly zero overhead even
> >   after special casing BPF.
> 
> The LSM mechanism is not zero overhead. It never has been. That's why
> you can compile it out. You get added value at a price. You get the
> ability to use SELinux and KRSI together at a price. If that's unacceptable
> you can go the route of seccomp, which doesn't use LSM for many of the
> same reasons you're on about.
> [...]
> >>>> So, trying to avoid the indirect calls is, as you say, an optimization,
> >>>> but it might be a needed one due to the other limitations.
> >>> I'm convinced that avoiding the cost of retpoline in critical path is a
> >>> requirement for any new infrastructure in the kernel.
> >> Sorry, I haven't gotten that memo.

I agree with Casey here -- it's a nice goal, but those cost evaluations have
not yet(?[1]) hit the LSM world. I think it's a desirable goal, to be
sure, but this does appear to be an early optimization.

> [...]
> It can do that wholly within KRSI hooks. You don't need to
> put KRSI specific code into security.c.

This observation is where I keep coming back to.

Yes, the resulting code is not as fast as it could be. The fact that BPF
triggers the worst-case performance of LSM hooking is the "new" part
here, from what I can see.

I suspect the problem is that folks in the BPF subsystem don't want to
be seen as slowing anything down, even other subsystems, so they don't
want to see this done in the traditional LSM hooking way (which contains
indirect calls).

But the LSM subsystem doesn't want special cases (Casey has worked very
hard to generalize everything there for stacking). It is really hard to
accept adding a new special case when there are still special cases yet
to be worked out even in the LSM code itself[2].

> >>> Networking stack converted all such places to conditional calls.
> >>> In BPF land we converted indirect calls to direct jumps and direct calls.
> >>> It took two years to do so. Adding new indirect calls is not an option.
> >>> I'm eagerly waiting for Peter's static_call patches to land to convert
> >>> a lot more indirect calls. May be existing LSMs will take advantage
> >>> of static_call patches too, but static_call is not an option for BPF.
> >>> That's why we introduced BPF trampoline in the last kernel release.
> >> Sorry, but I don't see how BPF is so overwhelmingly special.
> > My analogy here is that if every tracepoint in the kernel were of the
> > type:
> >
> > if (trace_foo_enabled) // <-- Overhead here, solved with static key
> >    trace_foo(a);  // <-- retpoline overhead, solved with fexit trampolines

This is a helpful distillation; thanks.

static keys (perhaps better described as static branches) make sense to
me; I'm familiar with them being used all over the place[3]. The resulting
"zero performance" branch mechanism is extremely handy.

I had been thinking about the fexit stuff only as a way for BPF to call
into kernel functions directly, and I missed the place where this got
used for calling from the kernel into BPF directly. KP walked me through
the fexit stuff off list. I missed where the NOP stub ("noinline int
bpf_lsm_##NAME(__VA_ARGS__) { return 0; }") was being patched by BPF in
https://lore.kernel.org/lkml/20200220175250.10795-6-kpsingh@chromium.org/
The key bit being "bpf_trampoline_update(prog)"

> > It would be very hard to justify enabling them on a production system,
> > and the same can be said for BPF and KRSI.
> 
> The same can be and has been said about the LSM infrastructure.
> If BPF and KRSI are that performance critical you shouldn't be
> tying them to LSM, which is known to have overhead. If you can't
> accept the LSM overhead, get out of the LSM. Or, help us fix the
> LSM infrastructure to make its overhead closer to zero. Whether
> you believe it or not, a lot of work has gone into keeping the LSM
> overhead as small as possible while remaining sufficiently general
> to perform its function.
> 
> No. If you're too special to play by LSM rules then you're special
> enough to get into the kernel using more direct means.

So, I see the primary conflict here being about the performance
optimizations. AIUI:

- BPF subsystem maintainers do not want any new slowdown associated
  with BPF
- LSM subsystem maintainers do not want any new special cases in
  LSM stacking

So, unless James is going to take this over Casey's objections, the path
forward I see here is:

- land a "slow" KRSI (i.e. one that hooks every hook with a stub).
- optimize calling for all LSMs

Does this seem right to everyone?

-Kees


[1] There is a "known cost to LSM", but as Casey mentions, it's been
generally deemed "acceptable". There have been some recent attempts to
quantify it, but it's not been very clear:
https://lore.kernel.org/linux-security-module/c98000ea-df0e-1ab7-a0e2-b47d913f50c8@tycho.nsa.gov/ (lore is missing half this conversation for some reason)

[2] Casey's work to generalize the LSM interfaces continues and is quite
complex:
https://lore.kernel.org/linux-security-module/20200214234203.7086-1-casey@schaufler-ca.com/

[3] E.g. HARDENED_USERCOPY uses it:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/usercopy.c?h=v5.5#n258
and so does the heap memory auto-initialization:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/slab.h?h=v5.5#n676

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-24 21:41               ` Kees Cook
@ 2020-02-24 22:29                 ` Casey Schaufler
  2020-02-25  5:41                 ` Alexei Starovoitov
  2020-02-25 19:29                 ` KP Singh
  2 siblings, 0 replies; 45+ messages in thread
From: Casey Schaufler @ 2020-02-24 22:29 UTC (permalink / raw)
  To: Kees Cook
  Cc: KP Singh, Alexei Starovoitov, LKML, Linux Security Module list,
	Alexei Starovoitov, James Morris, bpf, netdev, Casey Schaufler

On 2/24/2020 1:41 PM, Kees Cook wrote:
> On Mon, Feb 24, 2020 at 10:45:27AM -0800, Casey Schaufler wrote:
>> On 2/24/2020 9:13 AM, KP Singh wrote:
>>> On 24-Feb 08:32, Casey Schaufler wrote:
>>>> On 2/23/2020 2:08 PM, Alexei Starovoitov wrote:
>>>>> On Fri, Feb 21, 2020 at 08:22:59PM -0800, Kees Cook wrote:
>>>>>> If I'm understanding this correctly, there are two issues:
>>>>>>
>>>>>> 1- BPF needs to be run last due to fexit trampolines (?)
>>>>> no.
>>>>> The placement of nop call can be anywhere.
>>>>> BPF trampoline is automagically converting nop call into a sequence
>>>>> of directly invoked BPF programs.
>>>>> No link list traversals and no indirect calls in run-time.
>>>> Then why the insistence that it be last?
>>> I think this came out of the discussion about not being able to
>>> override the other LSMs and introduce a new attack vector with some
>>> arguments discussed at:
>>>
>>>   https://lore.kernel.org/bpf/20200109194302.GA85350@google.com/
>>>
>>> Let's say we have SELinux + BPF running on the system. BPF should still
>>> respect any decisions made by SELinux. This hasn't got anything to
>>> do with the usage of fexit trampolines.
>> The discussion cited is more about GPL than anything else.
>>
>> The LSM rule is that any security module must be able to
>> accept the decisions of others. SELinux has to accept decisions
>> made ahead of it. It always has, as LSM checks occur after
>> "traditional" checks, which may fail. The only reason that you
>> need to be last in this implementation appears to be that you
>> refuse to use the general mechanisms. You can't blame SELinux
>> for that.
> Okay, this is why I wanted to try to state things plainly. The "in last
> position" appears to be the result of a couple design choices:
>
> -the idea of "not wanting to get in the way of other LSMs", while
>  admirable, needs to actually be a non-goal to be "just" a stacked LSM
>  (as you're saying here Casey). This position _was_ required for the
>  non-privileged LSM case to avoid security implications, but that goal
>  no longer exists here either.

Thanks.

> -optimally using the zero-cost call-outs (static key + fexit trampolines)
>  meant it didn't interact well with the existing stacking mechanism.

Exactly.

> So, fine, these appear to be design choices, not *specifically*
> requirements. Let's move on, I think there is more to unpack here...

Right.

>>>>>> 2- BPF hooks don't know what may be attached at any given time, so
>>>>>>    ALL LSM hooks need to be universally hooked. THIS turns out to create
>>>>>>    a measurable performance problem in that the cost of the indirect call
>>>>>>    on the (mostly/usually) empty BPF policy is too high.
>>>>> also no.
> AIUI, there was some confusion on Alexei's reply here. I, perhaps,
> was not as clear as I needed to be. I think the later discussion on
> performance overheads gets more into the point, and gets us closer to
> the objections Alexei had. More below...

Agreed.

>>>   This approach still had the issues of an indirect call and an
>>>   extra check when not used. So this was not truly zero overhead even
>>>   after special casing BPF.
>> The LSM mechanism is not zero overhead. It never has been. That's why
>> you can compile it out. You get added value at a price. You get the
>> ability to use SELinux and KRSI together at a price. If that's unacceptable
>> you can go the route of seccomp, which doesn't use LSM for many of the
>> same reasons you're on about.
>> [...]
>>>>>> So, trying to avoid the indirect calls is, as you say, an optimization,
>>>>>> but it might be a needed one due to the other limitations.
>>>>> I'm convinced that avoiding the cost of retpoline in critical path is a
>>>>> requirement for any new infrastructure in the kernel.
>>>> Sorry, I haven't gotten that memo.
> I agree with Casey here -- it's a nice goal, but those cost evaluations have
> not yet(?[1]) hit the LSM world. I think it's a desirable goal, to be
> sure, but this does appear to be an early optimization.

Thanks for helping clarify that.

>> [...]
>> It can do that wholly within KRSI hooks. You don't need to
>> put KRSI specific code into security.c.
> This observation is where I keep coming back to.
>
> Yes, the resulting code is not as fast as it could be. The fact that BPF
> triggers the worst-case performance of LSM hooking is the "new" part
> here, from what I can see.

I haven't put this oar in the water before, but mightn't it be
possible to configure which LSM hooks can have BPF programs installed
at compile time, and thus address this issue for the "production" case?
I fully expect that such a configuration option would be hideously ugly
both to implement and use. If the impact of unused BPF hooks is so
great a concern, perhaps it would be worthwhile.

> I suspect the problem is that folks in the BPF subsystem don't want to
> be seen as slowing anything down, even other subsystems, so they don't
> want to see this done in the traditional LSM hooking way (which contains
> indirect calls).

I get that, and it is a laudable goal, but ...

> But the LSM subsystem doesn't want special cases (Casey has worked very
> hard to generalize everything there for stacking). It is really hard to
> accept adding a new special case when there are still special cases yet
> to be worked out even in the LSM code itself[2].

... like Kees says, this isn't the only use case we have to deal with. 

>>>>> Networking stack converted all such places to conditional calls.
>>>>> In BPF land we converted indirect calls to direct jumps and direct calls.
>>>>> It took two years to do so. Adding new indirect calls is not an option.
>>>>> I'm eagerly waiting for Peter's static_call patches to land to convert
>>>>> a lot more indirect calls. May be existing LSMs will take advantage
>>>>> of static_call patches too, but static_call is not an option for BPF.
>>>>> That's why we introduced BPF trampoline in the last kernel release.
>>>> Sorry, but I don't see how BPF is so overwhelmingly special.
>>> My analogy here is that if every tracepoint in the kernel were of the
>>> type:
>>>
>>> if (trace_foo_enabled) // <-- Overhead here, solved with static key
>>>    trace_foo(a);  // <-- retpoline overhead, solved with fexit trampolines
> This is a helpful distillation; thanks.
>
> static keys (perhaps better described as static branches) make sense to
> me; I'm familiar with them being used all over the place[3]. The resulting
> "zero performance" branch mechanism is extremely handy.
>
> I had been thinking about the fexit stuff only as a way for BPF to call
> into kernel functions directly, and I missed the place where this got
> used for calling from the kernel into BPF directly. KP walked me through
> the fexit stuff off list. I missed where the NOP stub ("noinline int
> bpf_lsm_##NAME(__VA_ARGS__) { return 0; }") was being patched by BPF in
> https://lore.kernel.org/lkml/20200220175250.10795-6-kpsingh@chromium.org/
> The key bit being "bpf_trampoline_update(prog)"
>
>>> It would be very hard to justify enabling them on a production system,
>>> and the same can be said for BPF and KRSI.
>> The same can be and has been said about the LSM infrastructure.
>> If BPF and KRSI are that performance critical you shouldn't be
>> tying them to LSM, which is known to have overhead. If you can't
>> accept the LSM overhead, get out of the LSM. Or, help us fix the
>> LSM infrastructure to make its overhead closer to zero. Whether
>> you believe it or not, a lot of work has gone into keeping the LSM
>> overhead as small as possible while remaining sufficiently general
>> to perform its function.
>>
>> No. If you're too special to play by LSM rules then you're special
>> enough to get into the kernel using more direct means.
> So, I see the primary conflict here being about the performance
> optimizations. AIUI:
>
> - BPF subsystem maintainers do not want any new slowdown associated
>   with BPF

Right.

> - LSM subsystem maintainers do not want any new special cases in
>   LSM stacking

Right.

> So, unless James is going to take this over Casey's objections, the path
> forward I see here is:
>
> - land a "slow" KRSI (i.e. one that hooks every hook with a stub).
> - optimize calling for all LSMs
>
> Does this seem right to everyone?

This would be my first choice. The existing list-of-hooks mechanism is
an "obvious" implementation. I have been thinking about possible ways to
make it better, but as y'all may have guessed, I haven't seen all the cool
new optimization techniques.

> -Kees
>
>
> [1] There is a "known cost to LSM", but as Casey mentions, it's been
> generally deemed "acceptable". There have been some recent attempts to
> quantify it, but it's not been very clear:
> https://lore.kernel.org/linux-security-module/c98000ea-df0e-1ab7-a0e2-b47d913f50c8@tycho.nsa.gov/ (lore is missing half this conversation for some reason)
>
> [2] Casey's work to generalize the LSM interfaces continues and is quite
> complex:
> https://lore.kernel.org/linux-security-module/20200214234203.7086-1-casey@schaufler-ca.com/
>
> [3] E.g. HARDENED_USERCOPY uses it:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/usercopy.c?h=v5.5#n258
> and so does the heap memory auto-initialization:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/slab.h?h=v5.5#n676
>


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-24 21:41               ` Kees Cook
  2020-02-24 22:29                 ` Casey Schaufler
@ 2020-02-25  5:41                 ` Alexei Starovoitov
  2020-02-25 15:31                   ` Kees Cook
                                     ` (2 more replies)
  2020-02-25 19:29                 ` KP Singh
  2 siblings, 3 replies; 45+ messages in thread
From: Alexei Starovoitov @ 2020-02-25  5:41 UTC (permalink / raw)
  To: Kees Cook
  Cc: Casey Schaufler, KP Singh, LKML, Linux Security Module list,
	Alexei Starovoitov, James Morris, bpf, netdev

On Mon, Feb 24, 2020 at 01:41:19PM -0800, Kees Cook wrote:
> 
> But the LSM subsystem doesn't want special cases (Casey has worked very
> hard to generalize everything there for stacking). It is really hard to
> accept adding a new special case when there are still special cases yet
> to be worked out even in the LSM code itself[2].
> [2] Casey's work to generalize the LSM interfaces continues and is quite
> complex:
> https://lore.kernel.org/linux-security-module/20200214234203.7086-1-casey@schaufler-ca.com/

I think the key mistake we made is that we classified KRSI as an LSM.
LSM stacking and the lsmblobs that the above set is trying to add are not necessary for KRSI.
I don't see anything in the LSM infra that KRSI can reuse.
The only thing BPF needs is a function to attach to.
It can be a nop function or any other.
security_*() functions are interesting from that angle only.
Hence I propose to reconsider what I was suggesting earlier.
No changes to the security/ directory.
Attach to security_*() funcs via bpf trampoline.
The key observation, vs what I was saying earlier, is that KRSI and LSM are the wrong names.
I think "security" is also a loaded word that should be avoided.
I'm proposing to rename BPF_PROG_TYPE_LSM into BPF_PROG_TYPE_OVERRIDE_RETURN.

> So, unless James is going to take this over Casey's objections, the path
> forward I see here is:
> 
> - land a "slow" KRSI (i.e. one that hooks every hook with a stub).
> - optimize calling for all LSMs

I'm very much surprised that 'slow' KRSI is an option at all.
'slow' KRSI means that CONFIG_SECURITY_KRSI=y adds indirect calls to nop
functions for every place in the kernel that calls security_*().
This is not an acceptable overhead. Even w/o retpoline
this is not something datacenter servers can use.

Another option is to do this:
diff --git a/include/linux/security.h b/include/linux/security.h
index 64b19f050343..7887ce636fb1 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -240,7 +240,7 @@ static inline const char *kernel_load_data_id_str(enum kernel_load_data_id id)
        return kernel_load_data_str[id];
 }

-#ifdef CONFIG_SECURITY
+#if defined(CONFIG_SECURITY) || defined(CONFIG_BPF_OVERRIDE_RETURN)

A single-line change to security.h and a new file, kernel/bpf/override_security.c,
that will look like:
int security_binder_set_context_mgr(struct task_struct *mgr)
{
        return 0;
}

int security_binder_transaction(struct task_struct *from,
                                struct task_struct *to)
{
        return 0;
}
Essentially it will provide the BPF side with a set of nop functions when
CONFIG_SECURITY is off. It may seem a downside that it will force a choice
on kernel users: either they build the kernel with CONFIG_SECURITY and their
choice of LSMs, or they build the kernel with CONFIG_BPF_OVERRIDE_RETURN and use
BPF_PROG_TYPE_OVERRIDE_RETURN programs to enforce any kind of policy. I think
it's a pro, not a con.
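
For what it's worth, such a file would not need to be written by hand.
Assuming a declaration list of the form LSM_HOOK(RET, DEFAULT, NAME, ...)
roughly along the lines of what patch 2/8 of this series introduces, the
whole set of nop functions could be generated (a sketch; void hooks would
need a second macro variant, and the header name here is approximate):

#define LSM_HOOK(RET, DEFAULT, NAME, ...) \
	noinline RET security_##NAME(__VA_ARGS__) { return DEFAULT; }
#include <linux/lsm_hook_defs.h>	/* hook list header, per patch 2/8 */
#undef LSM_HOOK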

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 6/8] tools/libbpf: Add support for BPF_PROG_TYPE_LSM
  2020-02-20 17:52 ` [PATCH bpf-next v4 6/8] tools/libbpf: Add support for BPF_PROG_TYPE_LSM KP Singh
@ 2020-02-25  6:45   ` Andrii Nakryiko
  0 siblings, 0 replies; 45+ messages in thread
From: Andrii Nakryiko @ 2020-02-25  6:45 UTC (permalink / raw)
  To: KP Singh
  Cc: open list, bpf, linux-security-module, Alexei Starovoitov,
	Daniel Borkmann, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Jann Horn,
	Matthew Garrett, Christian Brauner, Florent Revest,
	Brendan Jackman, Martin KaFai Lau, Song Liu, Yonghong Song,
	Serge E. Hallyn, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

On Thu, Feb 20, 2020 at 9:53 AM KP Singh <kpsingh@chromium.org> wrote:
>
> From: KP Singh <kpsingh@google.com>
>
> Since BPF_PROG_TYPE_LSM uses the same attaching mechanism as
> BPF_PROG_TYPE_TRACING, the common logic is refactored into a static
> function bpf_program__attach_btf.
>
> A new API call bpf_program__attach_lsm is still added to avoid userspace
> conflicts if this ever changes in the future.
>
> Signed-off-by: KP Singh <kpsingh@google.com>
> ---
>  tools/lib/bpf/bpf.c      |  3 ++-
>  tools/lib/bpf/libbpf.c   | 46 ++++++++++++++++++++++++++++++++--------
>  tools/lib/bpf/libbpf.h   |  4 ++++
>  tools/lib/bpf/libbpf.map |  3 +++
>  4 files changed, 46 insertions(+), 10 deletions(-)
>
> diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> index c6dafe563176..73220176728d 100644
> --- a/tools/lib/bpf/bpf.c
> +++ b/tools/lib/bpf/bpf.c
> @@ -235,7 +235,8 @@ int bpf_load_program_xattr(const struct bpf_load_program_attr *load_attr,
>         memset(&attr, 0, sizeof(attr));
>         attr.prog_type = load_attr->prog_type;
>         attr.expected_attach_type = load_attr->expected_attach_type;
> -       if (attr.prog_type == BPF_PROG_TYPE_STRUCT_OPS) {
> +       if (attr.prog_type == BPF_PROG_TYPE_STRUCT_OPS ||
> +           attr.prog_type == BPF_PROG_TYPE_LSM) {
>                 attr.attach_btf_id = load_attr->attach_btf_id;
>         } else if (attr.prog_type == BPF_PROG_TYPE_TRACING ||
>                    attr.prog_type == BPF_PROG_TYPE_EXT) {
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 514b1a524abb..d11139d5e76b 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -2351,16 +2351,14 @@ static int bpf_object__finalize_btf(struct bpf_object *obj)
>
>  static inline bool libbpf_prog_needs_vmlinux_btf(struct bpf_program *prog)
>  {
> -       if (prog->type == BPF_PROG_TYPE_STRUCT_OPS)
> +       if (prog->type == BPF_PROG_TYPE_STRUCT_OPS ||
> +           prog->type == BPF_PROG_TYPE_LSM)
>                 return true;
>
>         /* BPF_PROG_TYPE_TRACING programs which do not attach to other programs
>          * also need vmlinux BTF
>          */
> -       if (prog->type == BPF_PROG_TYPE_TRACING && !prog->attach_prog_fd)
> -               return true;
> -
> -       return false;
> +       return prog->type == BPF_PROG_TYPE_TRACING && !prog->attach_prog_fd;


please keep this as is; it makes it easy to add more logic, if necessary

>  }
>
>  static int bpf_object__load_vmlinux_btf(struct bpf_object *obj)
> @@ -4855,7 +4853,8 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
>         load_attr.insns = insns;
>         load_attr.insns_cnt = insns_cnt;
>         load_attr.license = license;

[...]

> -struct bpf_link *bpf_program__attach_trace(struct bpf_program *prog)
> +/* Common logic for all BPF program types that attach to a btf_id */
> +static struct bpf_link *bpf_program__attach_btf(struct bpf_program *prog)
>  {
>         char errmsg[STRERR_BUFSIZE];
>         struct bpf_link_fd *link;
> @@ -7376,7 +7388,7 @@ struct bpf_link *bpf_program__attach_trace(struct bpf_program *prog)
>         if (pfd < 0) {
>                 pfd = -errno;
>                 free(link);
> -               pr_warn("program '%s': failed to attach to trace: %s\n",
> +               pr_warn("program '%s': failed to attach to: %s\n",

to attach to ... what? The %s at the end is just an error message

>                         bpf_program__title(prog, false),
>                         libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
>                 return ERR_PTR(pfd);
> @@ -7385,10 +7397,26 @@ struct bpf_link *bpf_program__attach_trace(struct bpf_program *prog)
>         return (struct bpf_link *)link;
>  }
>

[...]

> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -227,10 +227,13 @@ LIBBPF_0.0.7 {
>                 bpf_probe_large_insn_limit;
>                 bpf_prog_attach_xattr;
>                 bpf_program__attach;
> +               bpf_program__attach_lsm;
>                 bpf_program__name;
>                 bpf_program__is_extension;
> +               bpf_program__is_lsm;
>                 bpf_program__is_struct_ops;
>                 bpf_program__set_extension;
> +               bpf_program__set_lsm;

please make sure to add these to the 0.0.8 version in the new revision

>                 bpf_program__set_struct_ops;
>                 btf__align_of;
>                 libbpf_find_kernel_btf;
> --
> 2.20.1
>
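
For context, a minimal sketch of how the new attach API is meant to be
consumed without the skeleton (the object path, section name and error
handling below are illustrative only):

#include <bpf/libbpf.h>

int attach_lsm_prog(void)
{
	struct bpf_object *obj;
	struct bpf_program *prog;
	struct bpf_link *link;

	obj = bpf_object__open_file("lsm_prog.o", NULL);
	if (libbpf_get_error(obj))
		return -1;
	if (bpf_object__load(obj))
		return -1;

	prog = bpf_object__find_program_by_title(obj, "lsm/file_mprotect");
	if (!prog)
		return -1;

	/* the new API added by this patch; returns a bpf_link on success */
	link = bpf_program__attach_lsm(prog);
	return libbpf_get_error(link) ? -1 : 0;
}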

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-25  5:41                 ` Alexei Starovoitov
@ 2020-02-25 15:31                   ` Kees Cook
  2020-02-25 19:31                   ` KP Singh
  2020-02-26  0:30                   ` Casey Schaufler
  2 siblings, 0 replies; 45+ messages in thread
From: Kees Cook @ 2020-02-25 15:31 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Casey Schaufler, KP Singh, LKML, Linux Security Module list,
	Alexei Starovoitov, James Morris, bpf, netdev

On Mon, Feb 24, 2020 at 09:41:27PM -0800, Alexei Starovoitov wrote:
> I'm proposing to rename BPF_PROG_TYPE_LSM into BPF_PROG_TYPE_OVERRIDE_RETURN.

Isn't the type used to decide which validator to use?

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-24 21:41               ` Kees Cook
  2020-02-24 22:29                 ` Casey Schaufler
  2020-02-25  5:41                 ` Alexei Starovoitov
@ 2020-02-25 19:29                 ` KP Singh
  2 siblings, 0 replies; 45+ messages in thread
From: KP Singh @ 2020-02-25 19:29 UTC (permalink / raw)
  To: Kees Cook
  Cc: Casey Schaufler, Alexei Starovoitov, LKML,
	Linux Security Module list, Alexei Starovoitov, James Morris,
	bpf, netdev

On 24-Feb 13:41, Kees Cook wrote:
> On Mon, Feb 24, 2020 at 10:45:27AM -0800, Casey Schaufler wrote:
> > On 2/24/2020 9:13 AM, KP Singh wrote:
> > > On 24-Feb 08:32, Casey Schaufler wrote:
> > >> On 2/23/2020 2:08 PM, Alexei Starovoitov wrote:
> > >>> On Fri, Feb 21, 2020 at 08:22:59PM -0800, Kees Cook wrote:
> > >>>> If I'm understanding this correctly, there are two issues:
> > >>>>
> > >>>> 1- BPF needs to be run last due to fexit trampolines (?)
> > >>> no.
> > >>> The placement of nop call can be anywhere.
> > >>> BPF trampoline is automagically converting nop call into a sequence
> > >>> of directly invoked BPF programs.
> > >>> No link list traversals and no indirect calls in run-time.
> > >> Then why the insistence that it be last?
> > > I think this came out of the discussion about not being able to
> > > override the other LSMs and introduce a new attack vector with some
> > > arguments discussed at:
> > >
> > >   https://lore.kernel.org/bpf/20200109194302.GA85350@google.com/
> > >
> > > Let's say we have SELinux + BPF running on the system. BPF should still
> > > respect any decisions made by SELinux. This hasn't got anything to
> > > do with the usage of fexit trampolines.
> > 
> > The discussion cited is more about GPL than anything else.
> > 
> > The LSM rule is that any security module must be able to
> > accept the decisions of others. SELinux has to accept decisions
> > made ahead of it. It always has, as LSM checks occur after
> > "traditional" checks, which may fail. The only reason that you
> > need to be last in this implementation appears to be that you
> > refuse to use the general mechanisms. You can't blame SELinux
> > for that.
> 
> Okay, this is why I wanted to try to state things plainly. The "in last
> position" appears to be the result of a couple design choices:
> 
> -the idea of "not wanting to get in the way of other LSMs", while
>  admirable, needs to actually be a non-goal to be "just" a stacked LSM
>  (as you're saying here Casey). This position _was_ required for the
>  non-privileged LSM case to avoid security implications, but that goal
>  no longer exists here either.
> 
> -optimally using the zero-cost call-outs (static key + fexit trampolines)
>  meant it didn't interact well with the existing stacking mechanism.
> 
> So, fine, these appear to be design choices, not *specifically*
> requirements. Let's move on, I think there is more to unpack here...
> 
> > >>>> 2- BPF hooks don't know what may be attached at any given time, so
> > >>>>    ALL LSM hooks need to be universally hooked. THIS turns out to create
> > >>>>    a measurable performance problem in that the cost of the indirect call
> > >>>>    on the (mostly/usually) empty BPF policy is too high.
> > >>> also no.
> 
> AIUI, there was some confusion on Alexei's reply here. I, perhaps,
> was not as clear as I needed to be. I think the later discussion on
> performance overheads gets more into the point, and gets us closer to
> the objections Alexei had. More below...
> 
> > >   This approach still had the issues of an indirect call and an
> > >   extra check when not used. So this was not truly zero overhead even
> > >   after special casing BPF.
> > 
> > The LSM mechanism is not zero overhead. It never has been. That's why
> > you can compile it out. You get added value at a price. You get the
> > ability to use SELinux and KRSI together at a price. If that's unacceptable
> > you can go the route of seccomp, which doesn't use LSM for many of the
> > same reasons you're on about.
> > [...]
> > >>>> So, trying to avoid the indirect calls is, as you say, an optimization,
> > >>>> but it might be a needed one due to the other limitations.
> > >>> I'm convinced that avoiding the cost of retpoline in critical path is a
> > >>> requirement for any new infrastructure in the kernel.
> > >> Sorry, I haven't gotten that memo.
> 
> I agree with Casey here -- it's a nice goal, but those cost evaluations have
> not yet(?[1]) hit the LSM world. I think it's a desirable goal, to be
> sure, but this does appear to be an early optimization.
> 
> > [...]
> > It can do that wholly within KRSI hooks. You don't need to
> > put KRSI specific code into security.c.
> 
> This observation is where I keep coming back to.
> 
> Yes, the resulting code is not as fast as it could be. The fact that BPF
> triggers the worst-case performance of LSM hooking is the "new" part
> here, from what I can see.
> 
> I suspect the problem is that folks in the BPF subsystem don't want to
> be seen as slowing anything down, even other subsystems, so they don't
> want to see this done in the traditional LSM hooking way (which contains
> indirect calls).
> 
> But the LSM subsystem doesn't want special cases (Casey has worked very
> hard to generalize everything there for stacking). It is really hard to
> accept adding a new special case when there are still special cases yet
> to be worked out even in the LSM code itself[2].
> 
> > >>> Networking stack converted all such places to conditional calls.
> > >>> In BPF land we converted indirect calls to direct jumps and direct calls.
> > >>> It took two years to do so. Adding new indirect calls is not an option.
> > >>> I'm eagerly waiting for Peter's static_call patches to land to convert
> > >>> a lot more indirect calls. Maybe existing LSMs will take advantage
> > >>> of static_call patches too, but static_call is not an option for BPF.
> > >>> That's why we introduced BPF trampoline in the last kernel release.
> > >> Sorry, but I don't see how BPF is so overwhelmingly special.
> > > My analogy here is that if every tracepoint in the kernel were of the
> > > type:
> > >
> > > if (trace_foo_enabled) // <-- Overhead here, solved with static key
> > >    trace_foo(a);  // <-- retpoline overhead, solved with fexit trampolines
> 
> This is a helpful distillation; thanks.
> 
> static keys (perhaps better described as static branches) make sense to
> me; I'm familiar with them being used all over the place[3]. The resulting
> "zero performance" branch mechanism is extremely handy.
> 
> I had been thinking about the fexit stuff only as a way for BPF to call
> into kernel functions directly, and I missed the place where this got
> used for calling from the kernel into BPF directly. KP walked me through
> the fexit stuff off list. I missed where the NOP stub ("noinline int
> bpf_lsm_##NAME(__VA_ARGS__) { return 0; }") was being patched by BPF in
> https://lore.kernel.org/lkml/20200220175250.10795-6-kpsingh@chromium.org/
> The key bit being "bpf_trampoline_update(prog)"
> 
> > > It would be very hard to justify enabling them on a production system,
> > > and the same can be said for BPF and KRSI.
> > 
> > The same can be and has been said about the LSM infrastructure.
> > If BPF and KRSI are that performance critical you shouldn't be
> > tying them to LSM, which is known to have overhead. If you can't
> > accept the LSM overhead, get out of the LSM. Or, help us fix the
> > LSM infrastructure to make its overhead closer to zero. Whether
> > you believe it or not, a lot of work has gone into keeping the LSM
> > overhead as small as possible while remaining sufficiently general
> > to perform its function.
> > 
> > No. If you're too special to play by LSM rules then you're special
> > enough to get into the kernel using more direct means.
> 
> So, I see the primary conflict here being about the performance
> optimizations. AIUI:
> 
> - BPF subsystem maintainers do not want any new slowdown associated
>   with BPF
> - LSM subsystem maintainers do not want any new special cases in
>   LSM stacking
> 
> So, unless James is going to take this over Casey's objections, the path
> forward I see here is:
> 
> - land a "slow" KRSI (i.e. one that hooks every hook with a stub).
> - optimize calling for all LSMs

I will work on v5, which registers the nops as standard LSM hooks, and
we can follow up on performance.

- KP

> 
> Does this seem right to everyone?
> 
> -Kees
> 
> 
> [1] There is a "known cost to LSM", but as Casey mentions, it's been
> generally deemed "acceptable". There have been some recent attempts to
> quantify it, but it's not been very clear:
> https://lore.kernel.org/linux-security-module/c98000ea-df0e-1ab7-a0e2-b47d913f50c8@tycho.nsa.gov/ (lore is missing half this conversation for some reason)
> 
> [2] Casey's work to generalize the LSM interfaces continues and is quite
> complex:
> https://lore.kernel.org/linux-security-module/20200214234203.7086-1-casey@schaufler-ca.com/
> 
> [3] E.g. HARDENED_USERCOPY uses it:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/usercopy.c?h=v5.5#n258
> and so does the heap memory auto-initialization:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/slab.h?h=v5.5#n676
> 
> -- 
> Kees Cook

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-25  5:41                 ` Alexei Starovoitov
  2020-02-25 15:31                   ` Kees Cook
@ 2020-02-25 19:31                   ` KP Singh
  2020-02-26  0:30                   ` Casey Schaufler
  2 siblings, 0 replies; 45+ messages in thread
From: KP Singh @ 2020-02-25 19:31 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Kees Cook, Casey Schaufler, LKML, Linux Security Module list,
	Alexei Starovoitov, James Morris, bpf, netdev

On 24-Feb 21:41, Alexei Starovoitov wrote:
> On Mon, Feb 24, 2020 at 01:41:19PM -0800, Kees Cook wrote:
> > 
> > But the LSM subsystem doesn't want special cases (Casey has worked very
> > hard to generalize everything there for stacking). It is really hard to
> > accept adding a new special case when there are still special cases yet
> > to be worked out even in the LSM code itself[2].
> > [2] Casey's work to generalize the LSM interfaces continues and is quite
> > complex:
> > https://lore.kernel.org/linux-security-module/20200214234203.7086-1-casey@schaufler-ca.com/
> 
> I think the key mistake we made is that we classified KRSI as an LSM.
> LSM stacking and the lsmblobs that the above set is trying to add are not necessary for KRSI.
> I don't see anything in the LSM infra that KRSI can reuse.
> The only thing BPF needs is a function to attach to.
> It can be a nop function or any other.
> security_*() functions are interesting from that angle only.
> Hence I propose to reconsider what I was suggesting earlier.
> No changes to the security/ directory.
> Attach to security_*() funcs via bpf trampoline.
> The key observation, vs what I was saying earlier, is that KRSI and LSM are the wrong names.
> I think "security" is also a loaded word that should be avoided.
> I'm proposing to rename BPF_PROG_TYPE_LSM into BPF_PROG_TYPE_OVERRIDE_RETURN.

BPF_PROG_TYPE_OVERRIDE_RETURN seems useful in general as well, and we
already have the implementation figured out as part of the LSM work. I
will split that bit into a separate series.

- KP

> 
> > So, unless James is going to take this over Casey's objections, the path
> > forward I see here is:
> > 
> > - land a "slow" KRSI (i.e. one that hooks every hook with a stub).
> > - optimize calling for all LSMs
> 
> I'm very much surprised that 'slow' KRSI is an option at all.
> 'slow' KRSI means that CONFIG_SECURITY_KRSI=y adds indirect calls to nop
> functions for every place in the kernel that calls security_*().
> This is not an acceptable overhead. Even w/o retpoline
> this is not something datacenter servers can use.
> 
> Another option is to do this:
> diff --git a/include/linux/security.h b/include/linux/security.h
> index 64b19f050343..7887ce636fb1 100644
> --- a/include/linux/security.h
> +++ b/include/linux/security.h
> @@ -240,7 +240,7 @@ static inline const char *kernel_load_data_id_str(enum kernel_load_data_id id)
>         return kernel_load_data_str[id];
>  }
> 
> -#ifdef CONFIG_SECURITY
> +#if defined(CONFIG_SECURITY) || defined(CONFIG_BPF_OVERRIDE_RETURN)
> 
> A single-line change to security.h and a new file, kernel/bpf/override_security.c,
> that will look like:
> int security_binder_set_context_mgr(struct task_struct *mgr)
> {
>         return 0;
> }
> 
> int security_binder_transaction(struct task_struct *from,
>                                 struct task_struct *to)
> {
>         return 0;
> }
> Essentially it will provide the BPF side with a set of nop functions when
> CONFIG_SECURITY is off. It may seem a downside that it will force a choice
> on kernel users: either they build the kernel with CONFIG_SECURITY and their
> choice of LSMs, or they build the kernel with CONFIG_BPF_OVERRIDE_RETURN and use
> BPF_PROG_TYPE_OVERRIDE_RETURN programs to enforce any kind of policy. I think
> it's a pro, not a con.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-25  5:41                 ` Alexei Starovoitov
  2020-02-25 15:31                   ` Kees Cook
  2020-02-25 19:31                   ` KP Singh
@ 2020-02-26  0:30                   ` Casey Schaufler
  2020-02-26  5:15                     ` KP Singh
  2 siblings, 1 reply; 45+ messages in thread
From: Casey Schaufler @ 2020-02-26  0:30 UTC (permalink / raw)
  To: Alexei Starovoitov, Kees Cook
  Cc: KP Singh, LKML, Linux Security Module list, Alexei Starovoitov,
	James Morris, bpf, netdev, Casey Schaufler

On 2/24/2020 9:41 PM, Alexei Starovoitov wrote:
> On Mon, Feb 24, 2020 at 01:41:19PM -0800, Kees Cook wrote:
>> But the LSM subsystem doesn't want special cases (Casey has worked very
>> hard to generalize everything there for stacking). It is really hard to
>> accept adding a new special case when there are still special cases yet
>> to be worked out even in the LSM code itself[2].
>> [2] Casey's work to generalize the LSM interfaces continues and is quite
>> complex:
>> https://lore.kernel.org/linux-security-module/20200214234203.7086-1-casey@schaufler-ca.com/
> I think the key mistake we made is that we classified KRSI as an LSM.
> LSM stacking and the lsmblobs that the above set is trying to add are not necessary for KRSI.
> I don't see anything in the LSM infra that KRSI can reuse.
> The only thing BPF needs is a function to attach to.
> It can be a nop function or any other.
> security_*() functions are interesting from that angle only.
> Hence I propose to reconsider what I was suggesting earlier.
> No changes to the security/ directory.
> Attach to security_*() funcs via bpf trampoline.
> The key observation, vs what I was saying earlier, is that KRSI and LSM are the wrong names.
> I think "security" is also a loaded word that should be avoided.

No argument there.

> I'm proposing to rename BPF_PROG_TYPE_LSM into BPF_PROG_TYPE_OVERRIDE_RETURN.
>
>> So, unless James is going to take this over Casey's objections, the path
>> forward I see here is:
>>
>> - land a "slow" KRSI (i.e. one that hooks every hook with a stub).
>> - optimize calling for all LSMs
> I'm very much surprised that 'slow' KRSI is an option at all.
> 'slow' KRSI means that CONFIG_SECURITY_KRSI=y adds indirect calls to nop
> functions for every place in the kernel that calls security_*().
> This is not an acceptable overhead. Even w/o retpoline
> this is not something datacenter servers can use.

In the universe I live in, data centers will disable hyper-threading,
reducing performance substantially, in the face of hypothetical security
exploits. That's a massively greater performance impact than the handful
of instructions required to do indirect calls. Not to mention the impact
of the BPF programs that have been included. Have you ever looked at what
happens to system performance when polkitd is enabled?


>
> Another option is to do this:
> diff --git a/include/linux/security.h b/include/linux/security.h
> index 64b19f050343..7887ce636fb1 100644
> --- a/include/linux/security.h
> +++ b/include/linux/security.h
> @@ -240,7 +240,7 @@ static inline const char *kernel_load_data_id_str(enum kernel_load_data_id id)
>         return kernel_load_data_str[id];
>  }
>
> -#ifdef CONFIG_SECURITY
> +#if defined(CONFIG_SECURITY) || defined(CONFIG_BPF_OVERRIDE_RETURN)
>
> A single-line change to security.h and a new file, kernel/bpf/override_security.c,
> that will look like:
> int security_binder_set_context_mgr(struct task_struct *mgr)
> {
>         return 0;
> }
>
> int security_binder_transaction(struct task_struct *from,
>                                 struct task_struct *to)
> {
>         return 0;
> }
> Essentially it will provide the BPF side with a set of nop functions when
> CONFIG_SECURITY is off. It may seem a downside that it will force a choice
> on kernel users: either they build the kernel with CONFIG_SECURITY and their
> choice of LSMs, or they build the kernel with CONFIG_BPF_OVERRIDE_RETURN and use
> BPF_PROG_TYPE_OVERRIDE_RETURN programs to enforce any kind of policy. I think
> it's a pro, not a con.

Err, no. All distros use an LSM or two. Unless you can re-implement SELinux
in BPF (good luck with state transitions) you've built a warp drive without
ever having mined dilithium crystals.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-26  0:30                   ` Casey Schaufler
@ 2020-02-26  5:15                     ` KP Singh
  2020-02-26 15:35                       ` Casey Schaufler
  0 siblings, 1 reply; 45+ messages in thread
From: KP Singh @ 2020-02-26  5:15 UTC (permalink / raw)
  To: Casey Schaufler
  Cc: Alexei Starovoitov, Kees Cook, LKML, Linux Security Module list,
	Alexei Starovoitov, James Morris, bpf, netdev

On 25-Feb 16:30, Casey Schaufler wrote:
> On 2/24/2020 9:41 PM, Alexei Starovoitov wrote:
> > On Mon, Feb 24, 2020 at 01:41:19PM -0800, Kees Cook wrote:
> >> But the LSM subsystem doesn't want special cases (Casey has worked very
> >> hard to generalize everything there for stacking). It is really hard to
> >> accept adding a new special case when there are still special cases yet
> >> to be worked out even in the LSM code itself[2].
> >> [2] Casey's work to generalize the LSM interfaces continues and is quite
> >> complex:
> >> https://lore.kernel.org/linux-security-module/20200214234203.7086-1-casey@schaufler-ca.com/
> > I think the key mistake we made is that we classified KRSI as an LSM.
> > LSM stacking and the lsmblobs that the above set is trying to add are not necessary for KRSI.
> > I don't see anything in the LSM infra that KRSI can reuse.
> > The only thing BPF needs is a function to attach to.
> > It can be a nop function or any other.
> > security_*() functions are interesting from that angle only.
> > Hence I propose to reconsider what I was suggesting earlier.
> > No changes to the security/ directory.
> > Attach to security_*() funcs via bpf trampoline.
> > The key observation, vs what I was saying earlier, is that KRSI and LSM are the wrong names.
> > I think "security" is also a loaded word that should be avoided.
> 
> No argument there.
> 
> > I'm proposing to rename BPF_PROG_TYPE_LSM into BPF_PROG_TYPE_OVERRIDE_RETURN.
> >
> >> So, unless James is going to take this over Casey's objections, the path
> >> forward I see here is:
> >>
> >> - land a "slow" KRSI (i.e. one that hooks every hook with a stub).
> >> - optimize calling for all LSMs
> > I'm very much surprised that 'slow' KRSI is an option at all.
> > 'slow' KRSI means that CONFIG_SECURITY_KRSI=y adds indirect calls to nop
> > functions for every place in the kernel that calls security_*().
> > This is not an acceptable overhead. Even w/o retpoline
> > this is not something datacenter servers can use.
> 
> In the universe I live in, data centers will disable hyper-threading,
> reducing performance substantially, in the face of hypothetical security
> exploits. That's a massively greater performance impact than the handful
> of instructions required to do indirect calls. Not to mention the impact

Indirect calls have worse performance implications than just a few
instructions and are especially not suitable for hot paths.

There have been multiple efforts to reduce their usage, e.g. (see the
conditional-call sketch after these links):

  - https://lwn.net/Articles/774743/
  - https://lwn.net/Articles/773985/
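
For readers who have not followed those efforts: under CONFIG_RETPOLINE
the networking pattern boils down to comparing the pointer against the
most likely target and calling that target directly, roughly as in
include/linux/indirect_call_wrapper.h:

#define INDIRECT_CALL_1(f, f1, ...)					\
	({								\
		likely(f == f1) ? f1(__VA_ARGS__) : f(__VA_ARGS__);	\
	})

The common case becomes a direct, retpoline-free call at the cost of a
compare and a branch.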

> of the BPF programs that have been included. Have you ever looked at what

  BPF programs are JIT'ed and optimized to native code.

> happens to system performance when polkitd is enabled?

However, let's discuss all this separately when we follow up with
performance improvements after submitting the initial patch-set.

> 
> 
> >
> > Another option is to do this:
> > diff --git a/include/linux/security.h b/include/linux/security.h
> > index 64b19f050343..7887ce636fb1 100644
> > --- a/include/linux/security.h
> > +++ b/include/linux/security.h
> > @@ -240,7 +240,7 @@ static inline const char *kernel_load_data_id_str(enum kernel_load_data_id id)
> >         return kernel_load_data_str[id];
> >  }
> >
> > -#ifdef CONFIG_SECURITY
> > +#if defined(CONFIG_SECURITY) || defined(CONFIG_BPF_OVERRIDE_RETURN)
> >
> > A single-line change to security.h and a new file, kernel/bpf/override_security.c,
> > that will look like:
> > int security_binder_set_context_mgr(struct task_struct *mgr)
> > {
> >         return 0;
> > }
> >
> > int security_binder_transaction(struct task_struct *from,
> >                                 struct task_struct *to)
> > {
> >         return 0;
> > }
> > Essentially it will provide the BPF side with a set of nop functions when
> > CONFIG_SECURITY is off. It may seem a downside that it will force a choice
> > on kernel users: either they build the kernel with CONFIG_SECURITY and their
> > choice of LSMs, or they build the kernel with CONFIG_BPF_OVERRIDE_RETURN and use
> > BPF_PROG_TYPE_OVERRIDE_RETURN programs to enforce any kind of policy. I think
> > it's a pro, not a con.
> 
> Err, no. All distros use an LSM or two. Unless you can re-implement SELinux

The users mentioned here in this context are (I would assume) the more
performance-sensitive users who would, potentially, disable
CONFIG_SECURITY because of the current performance characteristics.

We can also discuss this separately and only if we find that we need
it for the BPF_OVERRIDE_RET type attachment.

- KP

> in BPF (good luck with state transitions) you've built a warp drive without
> ever having mined dilithium crystals.
> 
> 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs
  2020-02-26  5:15                     ` KP Singh
@ 2020-02-26 15:35                       ` Casey Schaufler
  0 siblings, 0 replies; 45+ messages in thread
From: Casey Schaufler @ 2020-02-26 15:35 UTC (permalink / raw)
  To: KP Singh
  Cc: Alexei Starovoitov, Kees Cook, LKML, Linux Security Module list,
	Alexei Starovoitov, James Morris, bpf, netdev, Casey Schaufler

On 2/25/2020 9:15 PM, KP Singh wrote:
> On 25-Feb 16:30, Casey Schaufler wrote:
>> On 2/24/2020 9:41 PM, Alexei Starovoitov wrote:
>>> On Mon, Feb 24, 2020 at 01:41:19PM -0800, Kees Cook wrote:
>>>> But the LSM subsystem doesn't want special cases (Casey has worked very
>>>> hard to generalize everything there for stacking). It is really hard to
>>>> accept adding a new special case when there are still special cases yet
>>>> to be worked out even in the LSM code itself[2].
>>>> [2] Casey's work to generalize the LSM interfaces continues and is quite
>>>> complex:
>>>> https://lore.kernel.org/linux-security-module/20200214234203.7086-1-casey@schaufler-ca.com/
>>> I think the key mistake we made is that we classified KRSI as an LSM.
>>> LSM stacking and the lsmblobs that the above set is trying to add are not necessary for KRSI.
>>> I don't see anything in the LSM infra that KRSI can reuse.
>>> The only thing BPF needs is a function to attach to.
>>> It can be a nop function or any other.
>>> security_*() functions are interesting from that angle only.
>>> Hence I propose to reconsider what I was suggesting earlier.
>>> No changes to the security/ directory.
>>> Attach to security_*() funcs via bpf trampoline.
>>> The key observation, vs what I was saying earlier, is that KRSI and LSM are the wrong names.
>>> I think "security" is also a loaded word that should be avoided.
>> No argument there.
>>
>>> I'm proposing to rename BPF_PROG_TYPE_LSM into BPF_PROG_TYPE_OVERRIDE_RETURN.
>>>
>>>> So, unless James is going to take this over Casey's objections, the path
>>>> forward I see here is:
>>>>
>>>> - land a "slow" KRSI (i.e. one that hooks every hook with a stub).
>>>> - optimize calling for all LSMs
>>> I'm very much surprised that 'slow' KRSI is an option at all.
>>> 'slow' KRSI means that CONFIG_SECURITY_KRSI=y adds indirect calls to nop
>>> functions for every place in the kernel that calls security_*().
>>> This is not an acceptable overhead. Even w/o retpoline
>>> this is not something datacenter servers can use.
>> In the universe I live in, data centers will disable hyper-threading,
>> reducing performance substantially, in the face of hypothetical security
>> exploits. That's a massively greater performance impact than the handful
>> of instructions required to do indirect calls. Not to mention the impact
> Indirect calls have worse performance implications than just a few
> instructions and are especially not suitable for hot paths.
>
> There have been multiple efforts to reduce their usage e.g.:
>
>   - https://lwn.net/Articles/774743/
>   - https://lwn.net/Articles/773985/
>
>> of the BPF programs that have been included. Have you ever looked at what
>   BPF programs are JIT'ed and optimized to native code.

Doesn't mean people won't write slow code.


>> happens to system performance when polkitd is enabled?
> However, let's discuss all this separately when we follow-up with
> performance improvements after submitting the initial patch-set.

Think performance up front. Don't ignore issues.

>>> Another option is to do this:
>>> diff --git a/include/linux/security.h b/include/linux/security.h
>>> index 64b19f050343..7887ce636fb1 100644
>>> --- a/include/linux/security.h
>>> +++ b/include/linux/security.h
>>> @@ -240,7 +240,7 @@ static inline const char *kernel_load_data_id_str(enum kernel_load_data_id id)
>>>         return kernel_load_data_str[id];
>>>  }
>>>
>>> -#ifdef CONFIG_SECURITY
>>> +#if defined(CONFIG_SECURITY) || defined(CONFIG_BPF_OVERRIDE_RETURN)
>>>
>>> A single-line change to security.h and a new file, kernel/bpf/override_security.c,
>>> that will look like:
>>> int security_binder_set_context_mgr(struct task_struct *mgr)
>>> {
>>>         return 0;
>>> }
>>>
>>> int security_binder_transaction(struct task_struct *from,
>>>                                 struct task_struct *to)
>>> {
>>>         return 0;
>>> }
>>> Essentially it will provide the BPF side with a set of nop functions when
>>> CONFIG_SECURITY is off. It may seem a downside that it will force a choice
>>> on kernel users: either they build the kernel with CONFIG_SECURITY and their
>>> choice of LSMs, or they build the kernel with CONFIG_BPF_OVERRIDE_RETURN and use
>>> BPF_PROG_TYPE_OVERRIDE_RETURN programs to enforce any kind of policy. I think
>>> it's a pro, not a con.
>> Err, no. All distros use an LSM or two. Unless you can re-implement SELinux
> The users mentioned here in this context are (I would assume) the more
> performance-sensitive users who would, potentially, disable
> CONFIG_SECURITY because of the current performance characteristics.

You assume that the most performance-sensitive people would allow
a mechanism to arbitrarily add overhead that is out of their control?
How does that make sense?

> We can also discuss this separately and only if we find that we need
> it for the BPF_OVERRIDE_RET type attachment.
>
> - KP
>
>> in BPF (good luck with state transitions) you've built a warp drive without
>> ever having mined dilithium crystals.
>>
>>

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI)
  2020-02-20 17:52 [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (8 preceding siblings ...)
  2020-02-21 19:19 ` [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) Casey Schaufler
@ 2020-02-27 18:40 ` Dr. Greg
  9 siblings, 0 replies; 45+ messages in thread
From: Dr. Greg @ 2020-02-27 18:40 UTC (permalink / raw)
  To: KP Singh
  Cc: linux-kernel, bpf, linux-security-module, Alexei Starovoitov,
	Daniel Borkmann, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Jann Horn,
	Matthew Garrett, Christian Brauner, Florent Revest,
	Brendan Jackman, Martin KaFai Lau, Song Liu, Yonghong Song,
	Serge E. Hallyn, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

On Thu, Feb 20, 2020 at 06:52:42PM +0100, KP Singh wrote:

Good morning, I hope the week is going well for everyone.

Apologies for being somewhat late with these comments; I've been
recovering from travel.

> # Motivation
> 
> Google does analysis of rich runtime security data to detect and thwart
> threats in real-time. Currently, this is done in custom kernel modules
> but we would like to replace this with something that's upstream and
> useful to others.
> 
> The current kernel infrastructure for providing telemetry (Audit, Perf
> etc.) is disjoint from access enforcement (i.e. LSMs).  Augmenting the
> information provided by audit requires kernel changes to audit, its
> policy language and user-space components. Furthermore, building a MAC
> policy based on the newly added telemetry data requires changes to
> various LSMs and their respective policy languages.
> 
> This patchset allows BPF programs to be attached to LSM hooks. This
> facilitates a unified and dynamic (not requiring re-compilation of the
> kernel) audit and MAC policy.
> 
> # Why an LSM?
> 
> Linux Security Modules target security behaviours rather than the
> kernel's API. For example, it's easy to miss out a newly added system
> call for executing processes (eg. execve, execveat etc.) but the LSM
> framework ensures that all process executions trigger the relevant hooks
> irrespective of how the process was executed.
> 
> Allowing users to implement LSM hooks at runtime also benefits the LSM
> eco-system by enabling a quick feedback loop from the security community
> about the kind of behaviours that the LSM Framework should be targeting.

On the remote possibility that our practical experiences are relevant
to this, I thought I would pitch these comments in, since I see that
LWN is covering the issues and sensitivities surrounding BPF-based
'intelligent' LSM hooks, if I can take the liberty of referring to
them as that.

We namespaced a modified version of the Linux IMA implementation to
provide a mechanism for deterministic system modeling, in order to
support autonomously self-defensive platforms for IOT/INED/SCADA type
applications.  Big picture, the objective was to
provide 'dynamic intelligence' for LSM decisions, presumably an
objective similar to the KRSI initiative.

Our IMA implementation, if you can still call it that, pushes
actor/subject interaction identities up into an SGX enclave that runs
a modeling engine that makes decisions on whether or not a process is
engaging in activity inconsistent with a behavioral map defined by the
platform or container developer.  If the behavior is extra-dimensional
(untrusted), the enclave, via an OCALL, sets the value of a 'bad
actor' variable in the task control structure that is used to indicate
that the context of execution has questionable trust status.

We paired this with a very simple LSM that has each hook check a bit
position in the bad actor variable/bitfield to determine whether or
not the hook should operate on the requested action.  Separate LSM
infrastructure is provided that specifies whether or not the behavior
should be EPERM'ed or logged.  An LSM using this infrastructure also
has the ability, if triggered by the trust status of the context of
execution, to make further assessments based on what information is
supplied via the hook itself.
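
To make the hook-side cost concrete, a sketch of what one such hook
looks like (the per-task bad_actor bitfield and the bit numbering are
our own infrastructure, not upstream structures):

#include <linux/fs.h>
#include <linux/lsm_hooks.h>
#include <linux/sched.h>

#define BAD_ACTOR_FILE_OPEN	0	/* set by the enclave via an OCALL */

static int trust_file_open(struct file *file)
{
	/* single conditional check on the trust status of the context */
	if (test_bit(BAD_ACTOR_FILE_OPEN, &current->bad_actor))
		return -EPERM;	/* or log only, per configuration */
	return 0;
}

static struct security_hook_list trust_hooks[] = {
	LSM_HOOK_INIT(file_open, trust_file_open),
};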

Our field experience and testing have suggested that this architecture
has considerable utility.

In this model, numerous and disparate sections of the kernel can have
input into the trust status of a context of execution.  This
methodology would seem to be consistent with having multiple eBPF tap
points in the kernel that can make decisions on what they perceive to
be security relevant issues and if and how the behavior should be
acted upon by the LSM.

At the LSM level the costs are minimal, essentially a conditional
check for non-zero status.  Performance costs will be with the eBPF
code installed at introspection points.  At the end of the
day, security costs money; if no one is willing to pay the bill we
simply won't have secure systems. That is the fundamental tenet of the
inherent economic barrier to security.

Food for thought if anyone is interested.

Best wishes for a productive remainder of the week.

Dr. Greg

As always,
Dr. Greg Wettstein, Ph.D, Worker
IDfusion, LLC               SGX secured infrastructure and
4206 N. 19th Ave.           autonomously self-defensive platforms.
Fargo, ND  58102
PH: 701-281-1686            EMAIL: greg@idfusion.net
------------------------------------------------------------------------------
"We have to grow some roots before we can even think about having
 any blossoms."
                                -- Terrance George Wieland
                                   Resurrection.

^ permalink raw reply	[flat|nested] 45+ messages in thread

end of thread, other threads:[~2020-02-27 18:42 UTC | newest]

Thread overview: 45+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-20 17:52 [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) KP Singh
2020-02-20 17:52 ` [PATCH bpf-next v4 1/8] bpf: Introduce BPF_PROG_TYPE_LSM KP Singh
2020-02-20 17:52 ` [PATCH bpf-next v4 2/8] security: Refactor declaration of LSM hooks KP Singh
2020-02-20 17:52 ` [PATCH bpf-next v4 3/8] bpf: lsm: provide attachment points for BPF LSM programs KP Singh
2020-02-20 23:49   ` Casey Schaufler
2020-02-21 11:44     ` KP Singh
2020-02-21 18:23       ` Casey Schaufler
2020-02-22  4:22     ` Kees Cook
2020-02-23 22:08       ` Alexei Starovoitov
2020-02-24 16:32         ` Casey Schaufler
2020-02-24 17:13           ` KP Singh
2020-02-24 18:45             ` Casey Schaufler
2020-02-24 21:41               ` Kees Cook
2020-02-24 22:29                 ` Casey Schaufler
2020-02-25  5:41                 ` Alexei Starovoitov
2020-02-25 15:31                   ` Kees Cook
2020-02-25 19:31                   ` KP Singh
2020-02-26  0:30                   ` Casey Schaufler
2020-02-26  5:15                     ` KP Singh
2020-02-26 15:35                       ` Casey Schaufler
2020-02-25 19:29                 ` KP Singh
2020-02-24 16:09       ` Casey Schaufler
2020-02-24 17:23       ` KP Singh
2020-02-21  2:25   ` Alexei Starovoitov
2020-02-21 11:47     ` KP Singh
2020-02-20 17:52 ` [PATCH bpf-next v4 4/8] bpf: lsm: Add support for enabling/disabling BPF hooks KP Singh
2020-02-21 18:57   ` Casey Schaufler
2020-02-21 19:11     ` James Morris
2020-02-22  4:26   ` Kees Cook
2020-02-20 17:52 ` [PATCH bpf-next v4 5/8] bpf: lsm: Implement attach, detach and execution KP Singh
2020-02-21  2:17   ` Alexei Starovoitov
2020-02-21 12:02     ` KP Singh
2020-02-20 17:52 ` [PATCH bpf-next v4 6/8] tools/libbpf: Add support for BPF_PROG_TYPE_LSM KP Singh
2020-02-25  6:45   ` Andrii Nakryiko
2020-02-20 17:52 ` [PATCH bpf-next v4 7/8] bpf: lsm: Add selftests " KP Singh
2020-02-20 17:52 ` [PATCH bpf-next v4 8/8] bpf: lsm: Add Documentation KP Singh
2020-02-21 19:19 ` [PATCH bpf-next v4 0/8] MAC and Audit policy using eBPF (KRSI) Casey Schaufler
2020-02-21 19:41   ` KP Singh
2020-02-21 22:31     ` Casey Schaufler
2020-02-21 23:09       ` KP Singh
2020-02-21 23:49         ` Casey Schaufler
2020-02-22  0:22       ` Kees Cook
2020-02-22  1:04         ` Casey Schaufler
2020-02-22  3:36           ` Kees Cook
2020-02-27 18:40 ` Dr. Greg
