linux-security-module.vger.kernel.org archive mirror
* [PATCH bpf-next v3 00/10]  MAC and Audit policy using eBPF (KRSI)
@ 2020-01-23 15:24 KP Singh
  2020-01-23 15:24 ` [PATCH bpf-next v3 01/10] bpf: btf: Add btf_type_by_name_kind KP Singh
                   ` (9 more replies)
  0 siblings, 10 replies; 44+ messages in thread
From: KP Singh @ 2020-01-23 15:24 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Martin KaFai Lau, Song Liu, Yonghong Song, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

# v2 -> v3 does not change the overall design and has some minor fixes:

* LSM_ORDER_LAST is introduced to represent the behaviour of the BPF LSM
* Fixed the inadvertent clobbering of the LSM Hook error codes
* Added GPL license requirement to the commit log
* The lsm_hook_idx is now the more conventional 0-based index
* Some changes were split into a separate patch ("Load btf_vmlinux only
  once per object")
  https://lore.kernel.org/bpf/20200117212825.11755-1-kpsingh@chromium.org/
* Addressed Andrii's feedback on the BTF implementation
* Documentation update for using generated vmlinux.h to simplify
  programs
* Rebase

# Changes since v1 (https://lore.kernel.org/bpf/20191220154208.15895-1-kpsingh@chromium.org/):

* Eliminate the requirement to maintain LSM hooks separately in
  security/bpf/hooks.h. Use BPF trampolines to dynamically allocate
  security hooks instead.
* Drop the use of securityfs as bpftool provides the required
  introspection capabilities.  Update the tests to use the bpf_skeleton
  and global variables
* Use O_CLOEXEC anonymous fds to represent BPF attachment in line with
  the other BPF programs with the possibility to use bpf program pinning
  in the future to provide "permanent attachment".
* Drop the logic based on prog names for handling re-attachment.
* Drop bpf_lsm_event_output from this series and send it as a separate
  patch.

# Motivation

Google analyses rich runtime security data to detect and thwart threats
in real time. Currently, this is done in custom kernel modules, but we
would like to replace them with something that's upstream and useful to
others.

The current kernel infrastructure for providing telemetry (Audit, Perf
etc.) is disjoint from access enforcement (i.e. LSMs).  Augmenting the
information provided by audit requires kernel changes to audit, its
policy language and user-space components. Furthermore, building a MAC
policy based on the newly added telemetry data requires changes to
various LSMs and their respective policy languages.

This patchset proposes a new stackable and privileged LSM which allows
the LSM hooks to be implemented using eBPF. This facilitates a unified
and dynamic (not requiring re-compilation of the kernel) audit and MAC
policy.

# Why an LSM?

Linux Security Modules target security behaviours rather than the
kernel's API. For example, it's easy to miss a newly added system call
for executing processes (e.g. execve, execveat etc.), but the LSM
framework ensures that all process executions trigger the relevant hooks
irrespective of how the process was executed.

Allowing users to implement LSM hooks at runtime also benefits the LSM
eco-system by enabling a quick feedback loop from the security community
about the kind of behaviours that the LSM Framework should be targeting.

# How does it work?

The LSM introduces a new eBPF (https://docs.cilium.io/en/v1.6/bpf/)
program type BPF_PROG_TYPE_LSM which can only be attached to LSM hooks.
Attachment requires CAP_SYS_ADMIN for loading eBPF programs and
CAP_MAC_ADMIN for modifying MAC policies.

The eBPF programs are attached to a separate security_hook_heads
maintained by the BPF LSM for mutable hooks and executed after all the
statically defined hooks (i.e. the ones declared by SELinux, AppArmor,
Smack etc). This also ensures that statically defined LSM hooks retain
the behaviour of "being read-only after init", i.e. __lsm_ro_after_init.

Upon attachment, a security hook is dynamically allocated with
arch_prepare_bpf_trampoline, which generates code to handle the
conversion from the signature of the hook to the BPF context, allowing
the JIT'ed BPF program to be called as a C function with the same
arguments as the LSM hook. If any of the attached eBPF programs returns
an error (such as -EPERM), the behaviour represented by the hook is
denied.

Audit logs can be written using a format chosen by the eBPF program to
the perf events buffer or to global eBPF variables or maps and can be
further processed in user-space.
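
As a sketch of the audit flow (kernel-side fragment only, not from this
series; the helper set exposed in patch 3 is minimal, so the
availability of bpf_perf_event_output to BPF_PROG_TYPE_LSM, the map name
and the record layout are all assumptions here):

```c
/* Assumes vmlinux.h, bpf/bpf_helpers.h and bpf/bpf_tracing.h. */
struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(u32));
} audit_events SEC(".maps");

struct audit_record {		/* format chosen by the program */
	u32 pid;
	unsigned long vm_start;
};

SEC("lsm/file_mprotect")
int BPF_PROG(mprotect_audit, struct vm_area_struct *vma,
	     unsigned long reqprot, unsigned long prot)
{
	struct audit_record rec = {
		.pid = bpf_get_current_pid_tgid() >> 32,
		.vm_start = vma->vm_start,
	};

	/* Emit the record to the perf events buffer for user-space. */
	bpf_perf_event_output(ctx, &audit_events, BPF_F_CURRENT_CPU,
			      &rec, sizeof(rec));
	return 0;
}
```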

# BTF Based Design

The current design uses BTF
(https://facebookmicrosites.github.io/bpf/blog/2018/11/14/btf-enhancement.html,
https://lwn.net/Articles/803258/) which allows verifiable read-only
structure accesses by field names rather than fixed offsets. This allows
accessing the hook parameters using a dynamically created context which
provides a certain degree of ABI stability:


// Only declare the structure and fields intended to be used
// in the program
struct vm_area_struct {
  unsigned long vm_start;
} __attribute__((preserve_access_index));

// Declare the eBPF program mprotect_audit which attaches to
// the file_mprotect LSM hook and accepts three arguments.
SEC("lsm/file_mprotect")
int BPF_PROG(mprotect_audit, struct vm_area_struct *vma,
       unsigned long reqprot, unsigned long prot)
{
  unsigned long vm_start = vma->vm_start;

  return 0;
}

By relocating field offsets, BTF makes a large portion of kernel data
structures readily accessible across kernel versions without requiring a
large corpus of BPF helper functions or recompilation for every kernel
version. The BTF type information is also used by the BPF verifier to
validate memory accesses within the BPF program, preventing arbitrary
writes to kernel memory.

The limitations of BTF compatibility are described in BPF CO-RE
(http://vger.kernel.org/bpfconf2019_talks/bpf-core.pdf), i.e. field
renames, #defines and changes to the signature of LSM hooks.

This design requires that the MAC policy (the eBPF programs) be updated
when the inspected kernel structures change beyond BTF compatibility
guarantees. In practice, this is only required when a structure field
used by a current policy is removed (or renamed) or when the used LSM
hooks change. We expect the maintenance cost of these changes to be
acceptable as compared to the previous design
(https://lore.kernel.org/bpf/20190910115527.5235-1-kpsingh@chromium.org/).

# Why not tracepoints or kprobes?

In order to do MAC with tracepoints or kprobes, we would need to
override the return value of the security hook. This is not possible
with tracepoints or call-site kprobes.

Attaching to the return boundary (kretprobe) implies that BPF programs
would always be called after all the other LSM hooks, clobbering the
pre-existing LSM semantics.

Enforcing MAC policy with an actual LSM helps leverage the verified
semantics of the framework.

# Usage Examples

A simple example and some documentation are included in the patchset.

In order to better illustrate the capabilities of the framework, some
more advanced prototype (not ready for review) code has also been
published separately:

* Logging execution events (including environment variables and
  arguments)
https://github.com/sinkap/linux-krsi/blob/patch/v1/examples/samples/bpf/lsm_audit_env.c
* Detecting deletion of running executables:
https://github.com/sinkap/linux-krsi/blob/patch/v1/examples/samples/bpf/lsm_detect_exec_unlink.c
* Detection of writes to /proc/<pid>/mem:
https://github.com/sinkap/linux-krsi/blob/patch/v1/examples/samples/bpf/lsm_audit_env.c

We have updated Google's internal telemetry infrastructure and have
started deploying this LSM on our Linux Workstations. This gives us more
confidence in the real-world applications of such a system.

KP Singh (10):
  bpf: btf: Add btf_type_by_name_kind
  bpf: lsm: Add a skeleton and config options
  bpf: lsm: Introduce types for eBPF based LSM
  bpf: lsm: Add mutable hooks list for the BPF LSM
  bpf: lsm: BTF API for LSM hooks
  bpf: lsm: Implement attach, detach and execution
  bpf: lsm: Make the allocated callback RO+X
  tools/libbpf: Add support for BPF_PROG_TYPE_LSM
  bpf: lsm: Add selftests for BPF_PROG_TYPE_LSM
  bpf: lsm: Add Documentation

 Documentation/security/bpf.rst                | 165 +++++++++
 Documentation/security/index.rst              |   1 +
 MAINTAINERS                                   |  11 +
 include/linux/bpf.h                           |   4 +
 include/linux/bpf_lsm.h                       |  99 ++++++
 include/linux/bpf_types.h                     |   4 +
 include/linux/btf.h                           |   3 +
 include/linux/lsm_hooks.h                     |   1 +
 include/uapi/linux/bpf.h                      |   6 +
 kernel/bpf/btf.c                              |  13 +
 kernel/bpf/syscall.c                          |  51 ++-
 kernel/bpf/verifier.c                         |  74 +++-
 security/Kconfig                              |  11 +-
 security/Makefile                             |   2 +
 security/bpf/Kconfig                          |  25 ++
 security/bpf/Makefile                         |   7 +
 security/bpf/hooks.c                          | 315 ++++++++++++++++++
 security/bpf/include/bpf_lsm.h                |  70 ++++
 security/bpf/lsm.c                            |  87 +++++
 security/bpf/ops.c                            |  30 ++
 security/security.c                           |  30 +-
 tools/include/uapi/linux/bpf.h                |   6 +
 tools/lib/bpf/bpf.c                           |   6 +-
 tools/lib/bpf/bpf.h                           |   1 +
 tools/lib/bpf/libbpf.c                        | 104 +++++-
 tools/lib/bpf/libbpf.h                        |   4 +
 tools/lib/bpf/libbpf.map                      |   3 +
 tools/lib/bpf/libbpf_probes.c                 |   1 +
 tools/testing/selftests/bpf/lsm_helpers.h     |  19 ++
 .../bpf/prog_tests/lsm_mprotect_audit.c       |  58 ++++
 .../selftests/bpf/progs/lsm_mprotect_audit.c  |  48 +++
 31 files changed, 1229 insertions(+), 30 deletions(-)
 create mode 100644 Documentation/security/bpf.rst
 create mode 100644 include/linux/bpf_lsm.h
 create mode 100644 security/bpf/Kconfig
 create mode 100644 security/bpf/Makefile
 create mode 100644 security/bpf/hooks.c
 create mode 100644 security/bpf/include/bpf_lsm.h
 create mode 100644 security/bpf/lsm.c
 create mode 100644 security/bpf/ops.c
 create mode 100644 tools/testing/selftests/bpf/lsm_helpers.h
 create mode 100644 tools/testing/selftests/bpf/prog_tests/lsm_mprotect_audit.c
 create mode 100644 tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c

-- 
2.20.1



* [PATCH bpf-next v3 01/10] bpf: btf: Add btf_type_by_name_kind
  2020-01-23 15:24 [PATCH bpf-next v3 00/10] MAC and Audit policy using eBPF (KRSI) KP Singh
@ 2020-01-23 15:24 ` KP Singh
  2020-01-23 20:06   ` Andrii Nakryiko
  2020-01-23 15:24 ` [PATCH bpf-next v3 02/10] bpf: lsm: Add a skeleton and config options KP Singh
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 44+ messages in thread
From: KP Singh @ 2020-01-23 15:24 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Martin KaFai Lau, Song Liu, Yonghong Song, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

- The LSM code combines btf_find_by_name_kind and btf_type_by_id a
  couple of times to figure out the BTF type for security_hook_heads
  and security_list_options.
- Add an extern for btf_vmlinux in btf.h

Signed-off-by: KP Singh <kpsingh@google.com>
Reviewed-by: Brendan Jackman <jackmanb@google.com>
Reviewed-by: Florent Revest <revest@google.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
---
 include/linux/btf.h |  3 +++
 kernel/bpf/btf.c    | 12 ++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/include/linux/btf.h b/include/linux/btf.h
index 5c1ea99b480f..d4e859f90a39 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -15,6 +15,7 @@ struct btf_type;
 union bpf_attr;
 
 extern const struct file_operations btf_fops;
+extern struct btf *btf_vmlinux;
 
 void btf_put(struct btf *btf);
 int btf_new_fd(const union bpf_attr *attr);
@@ -66,6 +67,8 @@ const struct btf_type *
 btf_resolve_size(const struct btf *btf, const struct btf_type *type,
 		 u32 *type_size, const struct btf_type **elem_type,
 		 u32 *total_nelems);
+const struct btf_type *btf_type_by_name_kind(
+	struct btf *btf, const char *name, u8 kind);
 
 #define for_each_member(i, struct_type, member)			\
 	for (i = 0, member = btf_type_member(struct_type);	\
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 32963b6d5a9c..ea53c16802cb 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -441,6 +441,18 @@ const struct btf_type *btf_type_resolve_func_ptr(const struct btf *btf,
 	return NULL;
 }
 
+const struct btf_type *btf_type_by_name_kind(
+	struct btf *btf, const char *name, u8 kind)
+{
+	s32 type_id;
+
+	type_id = btf_find_by_name_kind(btf, name, kind);
+	if (type_id < 0)
+		return ERR_PTR(type_id);
+
+	return btf_type_by_id(btf, type_id);
+}
+
 /* Types that act only as a source, not sink or intermediate
  * type when resolving.
  */
-- 
2.20.1



* [PATCH bpf-next v3 02/10] bpf: lsm: Add a skeleton and config options
  2020-01-23 15:24 [PATCH bpf-next v3 00/10] MAC and Audit policy using eBPF (KRSI) KP Singh
  2020-01-23 15:24 ` [PATCH bpf-next v3 01/10] bpf: btf: Add btf_type_by_name_kind KP Singh
@ 2020-01-23 15:24 ` KP Singh
  2020-02-10 23:52   ` Alexei Starovoitov
  2020-01-23 15:24 ` [PATCH bpf-next v3 03/10] bpf: lsm: Introduce types for eBPF based LSM KP Singh
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 44+ messages in thread
From: KP Singh @ 2020-01-23 15:24 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Martin KaFai Lau, Song Liu, Yonghong Song, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

The LSM can be enabled by CONFIG_SECURITY_BPF.  Without
CONFIG_SECURITY_BPF_ENFORCE, the LSM will run the attached eBPF programs
but not enforce MAC policy based on the return value of the attached
programs.

The BPF LSM has two kinds of hooks:

- Statically defined hooks defined at init with __lsm_ro_after_init
  which are attached to the security_hook_heads defined in
  security/security.c similar to other LSMs.
- Mutable hooks that are attached to a separate security_hook_heads
  maintained by the BPF LSM (introduced in a subsequent patch).

The mutable hooks are always executed after all the static hooks
(irrespective of the position of "bpf" in the list of LSMs). Therefore,
the newly introduced LSM_ORDER_LAST correctly represents the behaviour
of this LSM.

Signed-off-by: KP Singh <kpsingh@google.com>
Reviewed-by: Brendan Jackman <jackmanb@google.com>
Reviewed-by: Florent Revest <revest@google.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
---
 MAINTAINERS               |  7 +++++++
 include/linux/lsm_hooks.h |  1 +
 security/Kconfig          | 11 ++++++-----
 security/Makefile         |  2 ++
 security/bpf/Kconfig      | 22 ++++++++++++++++++++++
 security/bpf/Makefile     |  5 +++++
 security/bpf/lsm.c        | 26 ++++++++++++++++++++++++++
 security/security.c       |  5 +++++
 8 files changed, 74 insertions(+), 5 deletions(-)
 create mode 100644 security/bpf/Kconfig
 create mode 100644 security/bpf/Makefile
 create mode 100644 security/bpf/lsm.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 702382b89c37..e2b7f76a1a70 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3203,6 +3203,13 @@ S:	Supported
 F:	arch/x86/net/
 X:	arch/x86/net/bpf_jit_comp32.c
 
+BPF SECURITY MODULE
+M:	KP Singh <kpsingh@chromium.org>
+L:	linux-security-module@vger.kernel.org
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	security/bpf/
+
 BROADCOM B44 10/100 ETHERNET DRIVER
 M:	Michael Chan <michael.chan@broadcom.com>
 L:	netdev@vger.kernel.org
diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
index 20d8cf194fb7..5f744fcb2275 100644
--- a/include/linux/lsm_hooks.h
+++ b/include/linux/lsm_hooks.h
@@ -2121,6 +2121,7 @@ extern void security_add_hooks(struct security_hook_list *hooks, int count,
 enum lsm_order {
 	LSM_ORDER_FIRST = -1,	/* This is only for capabilities. */
 	LSM_ORDER_MUTABLE = 0,
+	LSM_ORDER_LAST = 1,
 };
 
 struct lsm_info {
diff --git a/security/Kconfig b/security/Kconfig
index 2a1a2d396228..6f1aab195e7d 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -236,6 +236,7 @@ source "security/tomoyo/Kconfig"
 source "security/apparmor/Kconfig"
 source "security/loadpin/Kconfig"
 source "security/yama/Kconfig"
+source "security/bpf/Kconfig"
 source "security/safesetid/Kconfig"
 source "security/lockdown/Kconfig"
 
@@ -277,11 +278,11 @@ endchoice
 
 config LSM
 	string "Ordered list of enabled LSMs"
-	default "lockdown,yama,loadpin,safesetid,integrity,smack,selinux,tomoyo,apparmor" if DEFAULT_SECURITY_SMACK
-	default "lockdown,yama,loadpin,safesetid,integrity,apparmor,selinux,smack,tomoyo" if DEFAULT_SECURITY_APPARMOR
-	default "lockdown,yama,loadpin,safesetid,integrity,tomoyo" if DEFAULT_SECURITY_TOMOYO
-	default "lockdown,yama,loadpin,safesetid,integrity" if DEFAULT_SECURITY_DAC
-	default "lockdown,yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor"
+	default "lockdown,yama,loadpin,safesetid,integrity,smack,selinux,tomoyo,apparmor,bpf" if DEFAULT_SECURITY_SMACK
+	default "lockdown,yama,loadpin,safesetid,integrity,apparmor,selinux,smack,tomoyo,bpf" if DEFAULT_SECURITY_APPARMOR
+	default "lockdown,yama,loadpin,safesetid,integrity,tomoyo,bpf" if DEFAULT_SECURITY_TOMOYO
+	default "lockdown,yama,loadpin,safesetid,integrity,bpf" if DEFAULT_SECURITY_DAC
+	default "lockdown,yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor,bpf"
 	help
 	  A comma-separated list of LSMs, in initialization order.
 	  Any LSMs left off this list will be ignored. This can be
diff --git a/security/Makefile b/security/Makefile
index be1dd9d2cb2f..50e6821dd7b7 100644
--- a/security/Makefile
+++ b/security/Makefile
@@ -12,6 +12,7 @@ subdir-$(CONFIG_SECURITY_YAMA)		+= yama
 subdir-$(CONFIG_SECURITY_LOADPIN)	+= loadpin
 subdir-$(CONFIG_SECURITY_SAFESETID)    += safesetid
 subdir-$(CONFIG_SECURITY_LOCKDOWN_LSM)	+= lockdown
+subdir-$(CONFIG_SECURITY_BPF)		+= bpf
 
 # always enable default capabilities
 obj-y					+= commoncap.o
@@ -29,6 +30,7 @@ obj-$(CONFIG_SECURITY_YAMA)		+= yama/
 obj-$(CONFIG_SECURITY_LOADPIN)		+= loadpin/
 obj-$(CONFIG_SECURITY_SAFESETID)       += safesetid/
 obj-$(CONFIG_SECURITY_LOCKDOWN_LSM)	+= lockdown/
+obj-$(CONFIG_SECURITY_BPF)		+= bpf/
 obj-$(CONFIG_CGROUP_DEVICE)		+= device_cgroup.o
 
 # Object integrity file lists
diff --git a/security/bpf/Kconfig b/security/bpf/Kconfig
new file mode 100644
index 000000000000..a5f6c67ae526
--- /dev/null
+++ b/security/bpf/Kconfig
@@ -0,0 +1,22 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright 2019 Google LLC.
+
+config SECURITY_BPF
+	bool "BPF-based MAC and audit policy"
+	depends on SECURITY
+	depends on BPF_SYSCALL
+	help
+	  This enables instrumentation of the security hooks with
+	  eBPF programs.
+
+	  If you are unsure how to answer this question, answer N.
+
+config SECURITY_BPF_ENFORCE
+	bool "Deny operations based on the evaluation of the attached programs"
+	depends on SECURITY_BPF
+	help
+	  eBPF programs attached to hooks can be used for both auditing and
+	  enforcement. Enabling enforcement implies that the evaluation result
+	  from the attached eBPF programs will allow or deny the operation
+	  guarded by the security hook.
diff --git a/security/bpf/Makefile b/security/bpf/Makefile
new file mode 100644
index 000000000000..26a0ab6f99b7
--- /dev/null
+++ b/security/bpf/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright 2019 Google LLC.
+
+obj-$(CONFIG_SECURITY_BPF) := lsm.o
diff --git a/security/bpf/lsm.c b/security/bpf/lsm.c
new file mode 100644
index 000000000000..dc9ac03c7aa0
--- /dev/null
+++ b/security/bpf/lsm.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright 2019 Google LLC.
+ */
+
+#include <linux/lsm_hooks.h>
+
+/* This is only for internal hooks, always statically shipped as part of the
+ * BPF LSM. Statically defined hooks are appended to the security_hook_heads
+ * which is common for LSMs and R/O after init.
+ */
+static struct security_hook_list bpf_lsm_hooks[] __lsm_ro_after_init = {};
+
+static int __init bpf_lsm_init(void)
+{
+	security_add_hooks(bpf_lsm_hooks, ARRAY_SIZE(bpf_lsm_hooks), "bpf");
+	pr_info("LSM support for eBPF active\n");
+	return 0;
+}
+
+DEFINE_LSM(bpf) = {
+	.name = "bpf",
+	.init = bpf_lsm_init,
+	.order = LSM_ORDER_LAST,
+};
diff --git a/security/security.c b/security/security.c
index cd2d18d2d279..30a8aa700557 100644
--- a/security/security.c
+++ b/security/security.c
@@ -264,6 +264,11 @@ static void __init ordered_lsm_parse(const char *order, const char *origin)
 		}
 	}
 
+	for (lsm = __start_lsm_info; lsm < __end_lsm_info; lsm++) {
+		if (lsm->order == LSM_ORDER_LAST)
+			append_ordered_lsm(lsm, "last");
+	}
+
 	/* Disable all LSMs not in the ordered list. */
 	for (lsm = __start_lsm_info; lsm < __end_lsm_info; lsm++) {
 		if (exists_ordered_lsm(lsm))
-- 
2.20.1



* [PATCH bpf-next v3 03/10] bpf: lsm: Introduce types for eBPF based LSM
  2020-01-23 15:24 [PATCH bpf-next v3 00/10] MAC and Audit policy using eBPF (KRSI) KP Singh
  2020-01-23 15:24 ` [PATCH bpf-next v3 01/10] bpf: btf: Add btf_type_by_name_kind KP Singh
  2020-01-23 15:24 ` [PATCH bpf-next v3 02/10] bpf: lsm: Add a skeleton and config options KP Singh
@ 2020-01-23 15:24 ` KP Singh
  2020-02-10 23:58   ` Alexei Starovoitov
  2020-01-23 15:24 ` [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM KP Singh
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 44+ messages in thread
From: KP Singh @ 2020-01-23 15:24 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Martin KaFai Lau, Song Liu, Yonghong Song, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

Introduce a new eBPF program type BPF_PROG_TYPE_LSM with an expected
attach type of BPF_LSM_MAC. Attachment to LSM hooks is not implemented
in this patch.

When the types for the program are defined, the macros expect that
<prog_name>_prog_ops and <prog_name>_verifier_ops exist. This is
implicitly required by the macro:

  BPF_PROG_TYPE(BPF_PROG_TYPE_LSM, lsm, ...)

Signed-off-by: KP Singh <kpsingh@google.com>
Reviewed-by: Brendan Jackman <jackmanb@google.com>
Reviewed-by: Florent Revest <revest@google.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
---
 include/linux/bpf_types.h      |  4 ++++
 include/uapi/linux/bpf.h       |  2 ++
 kernel/bpf/syscall.c           |  6 ++++++
 security/bpf/Makefile          |  2 +-
 security/bpf/ops.c             | 28 ++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h |  2 ++
 tools/lib/bpf/libbpf_probes.c  |  1 +
 7 files changed, 44 insertions(+), 1 deletion(-)
 create mode 100644 security/bpf/ops.c

diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index c81d4ece79a4..c36790b202e3 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -70,6 +70,10 @@ BPF_PROG_TYPE(BPF_PROG_TYPE_STRUCT_OPS, bpf_struct_ops,
 	      void *, void *)
 BPF_PROG_TYPE(BPF_PROG_TYPE_EXT, bpf_extension,
 	      void *, void *)
+#ifdef CONFIG_SECURITY_BPF
+BPF_PROG_TYPE(BPF_PROG_TYPE_LSM, lsm,
+	       void *, void *)
+#endif /* CONFIG_SECURITY_BPF */
 #endif
 
 BPF_MAP_TYPE(BPF_MAP_TYPE_ARRAY, array_map_ops)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index f1d74a2bd234..2f1e24a8c4a4 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -181,6 +181,7 @@ enum bpf_prog_type {
 	BPF_PROG_TYPE_TRACING,
 	BPF_PROG_TYPE_STRUCT_OPS,
 	BPF_PROG_TYPE_EXT,
+	BPF_PROG_TYPE_LSM,
 };
 
 enum bpf_attach_type {
@@ -210,6 +211,7 @@ enum bpf_attach_type {
 	BPF_TRACE_RAW_TP,
 	BPF_TRACE_FENTRY,
 	BPF_TRACE_FEXIT,
+	BPF_LSM_MAC,
 	__MAX_BPF_ATTACH_TYPE
 };
 
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index a91ad518c050..eab4a36ee889 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2396,6 +2396,9 @@ static int bpf_prog_attach(const union bpf_attr *attr)
 	case BPF_LIRC_MODE2:
 		ptype = BPF_PROG_TYPE_LIRC_MODE2;
 		break;
+	case BPF_LSM_MAC:
+		ptype = BPF_PROG_TYPE_LSM;
+		break;
 	case BPF_FLOW_DISSECTOR:
 		ptype = BPF_PROG_TYPE_FLOW_DISSECTOR;
 		break;
@@ -2427,6 +2430,9 @@ static int bpf_prog_attach(const union bpf_attr *attr)
 	case BPF_PROG_TYPE_LIRC_MODE2:
 		ret = lirc_prog_attach(attr, prog);
 		break;
+	case BPF_PROG_TYPE_LSM:
+		ret = -EOPNOTSUPP;
+		break;
 	case BPF_PROG_TYPE_FLOW_DISSECTOR:
 		ret = skb_flow_dissector_bpf_prog_attach(attr, prog);
 		break;
diff --git a/security/bpf/Makefile b/security/bpf/Makefile
index 26a0ab6f99b7..c78a8a056e7e 100644
--- a/security/bpf/Makefile
+++ b/security/bpf/Makefile
@@ -2,4 +2,4 @@
 #
 # Copyright 2019 Google LLC.
 
-obj-$(CONFIG_SECURITY_BPF) := lsm.o
+obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o
diff --git a/security/bpf/ops.c b/security/bpf/ops.c
new file mode 100644
index 000000000000..81c2bd9c0495
--- /dev/null
+++ b/security/bpf/ops.c
@@ -0,0 +1,28 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright 2019 Google LLC.
+ */
+
+#include <linux/filter.h>
+#include <linux/bpf.h>
+
+const struct bpf_prog_ops lsm_prog_ops = {
+};
+
+static const struct bpf_func_proto *get_bpf_func_proto(
+	enum bpf_func_id func_id, const struct bpf_prog *prog)
+{
+	switch (func_id) {
+	case BPF_FUNC_map_lookup_elem:
+		return &bpf_map_lookup_elem_proto;
+	case BPF_FUNC_get_current_pid_tgid:
+		return &bpf_get_current_pid_tgid_proto;
+	default:
+		return NULL;
+	}
+}
+
+const struct bpf_verifier_ops lsm_verifier_ops = {
+	.get_func_proto = get_bpf_func_proto,
+};
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index f1d74a2bd234..2f1e24a8c4a4 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -181,6 +181,7 @@ enum bpf_prog_type {
 	BPF_PROG_TYPE_TRACING,
 	BPF_PROG_TYPE_STRUCT_OPS,
 	BPF_PROG_TYPE_EXT,
+	BPF_PROG_TYPE_LSM,
 };
 
 enum bpf_attach_type {
@@ -210,6 +211,7 @@ enum bpf_attach_type {
 	BPF_TRACE_RAW_TP,
 	BPF_TRACE_FENTRY,
 	BPF_TRACE_FEXIT,
+	BPF_LSM_MAC,
 	__MAX_BPF_ATTACH_TYPE
 };
 
diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
index b782ebef6ac9..2c92059c0c90 100644
--- a/tools/lib/bpf/libbpf_probes.c
+++ b/tools/lib/bpf/libbpf_probes.c
@@ -108,6 +108,7 @@ probe_load(enum bpf_prog_type prog_type, const struct bpf_insn *insns,
 	case BPF_PROG_TYPE_TRACING:
 	case BPF_PROG_TYPE_STRUCT_OPS:
 	case BPF_PROG_TYPE_EXT:
+	case BPF_PROG_TYPE_LSM:
 	default:
 		break;
 	}
-- 
2.20.1



* [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM
  2020-01-23 15:24 [PATCH bpf-next v3 00/10] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (2 preceding siblings ...)
  2020-01-23 15:24 ` [PATCH bpf-next v3 03/10] bpf: lsm: Introduce types for eBPF based LSM KP Singh
@ 2020-01-23 15:24 ` KP Singh
  2020-01-23 17:03   ` Casey Schaufler
  2020-02-11  3:12   ` Alexei Starovoitov
  2020-01-23 15:24 ` [PATCH bpf-next v3 05/10] bpf: lsm: BTF API for LSM hooks KP Singh
                   ` (5 subsequent siblings)
  9 siblings, 2 replies; 44+ messages in thread
From: KP Singh @ 2020-01-23 15:24 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Martin KaFai Lau, Song Liu, Yonghong Song, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

- The list of hooks registered by an LSM is currently immutable as they
  are declared with __lsm_ro_after_init and they are attached to a
  security_hook_heads struct.
- For the BPF LSM we need to de/register the hooks at runtime. Making
  the existing security_hook_heads mutable broadens the attack surface,
  so a separate security_hook_heads is added for only those hooks that
  ~must~ be mutable.
- These mutable hooks are run only after all the static hooks have
  successfully executed.

This is based on the ideas discussed in:

  https://lore.kernel.org/lkml/20180408065916.GA2832@ircssh-2.c.rugged-nimbus-611.internal

Reviewed-by: Brendan Jackman <jackmanb@google.com>
Reviewed-by: Florent Revest <revest@google.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: KP Singh <kpsingh@google.com>
---
 MAINTAINERS             |  1 +
 include/linux/bpf_lsm.h | 72 +++++++++++++++++++++++++++++++++++++++++
 security/bpf/Kconfig    |  1 +
 security/bpf/Makefile   |  2 +-
 security/bpf/hooks.c    | 20 ++++++++++++
 security/bpf/lsm.c      |  7 ++++
 security/security.c     | 25 +++++++-------
 7 files changed, 116 insertions(+), 12 deletions(-)
 create mode 100644 include/linux/bpf_lsm.h
 create mode 100644 security/bpf/hooks.c

diff --git a/MAINTAINERS b/MAINTAINERS
index e2b7f76a1a70..c606b3d89992 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3209,6 +3209,7 @@ L:	linux-security-module@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	security/bpf/
+F:	include/linux/bpf_lsm.h
 
 BROADCOM B44 10/100 ETHERNET DRIVER
 M:	Michael Chan <michael.chan@broadcom.com>
diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
new file mode 100644
index 000000000000..57c20b2cd2f4
--- /dev/null
+++ b/include/linux/bpf_lsm.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright 2019 Google LLC.
+ */
+
+#ifndef _LINUX_BPF_LSM_H
+#define _LINUX_BPF_LSM_H
+
+#include <linux/bpf.h>
+#include <linux/lsm_hooks.h>
+
+#ifdef CONFIG_SECURITY_BPF
+
+/* Mutable hooks defined at runtime and executed after all the statically
+ * defined LSM hooks.
+ */
+extern struct security_hook_heads bpf_lsm_hook_heads;
+
+int bpf_lsm_srcu_read_lock(void);
+void bpf_lsm_srcu_read_unlock(int idx);
+
+#define CALL_BPF_LSM_VOID_HOOKS(FUNC, ...)			\
+	do {							\
+		struct security_hook_list *P;			\
+		int _idx;					\
+								\
+		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
+			break;					\
+								\
+		_idx = bpf_lsm_srcu_read_lock();		\
+		hlist_for_each_entry(P, &bpf_lsm_hook_heads.FUNC, list) \
+			P->hook.FUNC(__VA_ARGS__);		\
+		bpf_lsm_srcu_read_unlock(_idx);			\
+	} while (0)
+
+#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) ({			\
+	int _ret = 0;						\
+	do {							\
+		struct security_hook_list *P;			\
+		int _idx;					\
+								\
+		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
+			break;					\
+								\
+		_idx = bpf_lsm_srcu_read_lock();		\
+								\
+		hlist_for_each_entry(P,				\
+			&bpf_lsm_hook_heads.FUNC, list) {	\
+			_ret = P->hook.FUNC(__VA_ARGS__);		\
+			if (_ret && IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE)) \
+				break;				\
+		}						\
+		bpf_lsm_srcu_read_unlock(_idx);			\
+	} while (0);						\
+	IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE) ? _ret : 0;	\
+})
+
+#else /* !CONFIG_SECURITY_BPF */
+
+#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) (0)
+#define CALL_BPF_LSM_VOID_HOOKS(...)
+
+static inline int bpf_lsm_srcu_read_lock(void)
+{
+	return 0;
+}
+static inline void bpf_lsm_srcu_read_unlock(int idx) {}
+
+#endif /* CONFIG_SECURITY_BPF */
+
+#endif /* _LINUX_BPF_LSM_H */
diff --git a/security/bpf/Kconfig b/security/bpf/Kconfig
index a5f6c67ae526..595e4ad597ae 100644
--- a/security/bpf/Kconfig
+++ b/security/bpf/Kconfig
@@ -6,6 +6,7 @@ config SECURITY_BPF
 	bool "BPF-based MAC and audit policy"
 	depends on SECURITY
 	depends on BPF_SYSCALL
+	depends on SRCU
 	help
 	  This enables instrumentation of the security hooks with
 	  eBPF programs.
diff --git a/security/bpf/Makefile b/security/bpf/Makefile
index c78a8a056e7e..c526927c337d 100644
--- a/security/bpf/Makefile
+++ b/security/bpf/Makefile
@@ -2,4 +2,4 @@
 #
 # Copyright 2019 Google LLC.
 
-obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o
+obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o hooks.o
diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c
new file mode 100644
index 000000000000..b123d9cb4cd4
--- /dev/null
+++ b/security/bpf/hooks.c
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright 2019 Google LLC.
+ */
+
+#include <linux/bpf_lsm.h>
+#include <linux/srcu.h>
+
+DEFINE_STATIC_SRCU(security_hook_srcu);
+
+int bpf_lsm_srcu_read_lock(void)
+{
+	return srcu_read_lock(&security_hook_srcu);
+}
+
+void bpf_lsm_srcu_read_unlock(int idx)
+{
+	return srcu_read_unlock(&security_hook_srcu, idx);
+}
diff --git a/security/bpf/lsm.c b/security/bpf/lsm.c
index dc9ac03c7aa0..a25a068e1781 100644
--- a/security/bpf/lsm.c
+++ b/security/bpf/lsm.c
@@ -4,6 +4,7 @@
  * Copyright 2019 Google LLC.
  */
 
+#include <linux/bpf_lsm.h>
 #include <linux/lsm_hooks.h>
 
 /* This is only for internal hooks, always statically shipped as part of the
@@ -12,6 +13,12 @@
  */
 static struct security_hook_list bpf_lsm_hooks[] __lsm_ro_after_init = {};
 
+/* Security hooks registered dynamically by the BPF LSM must be accessed by
+ * holding bpf_lsm_srcu_read_lock and bpf_lsm_srcu_read_unlock. The mutable
+ * hooks dynamically allocated by the BPF LSM are appended here.
+ */
+struct security_hook_heads bpf_lsm_hook_heads;
+
 static int __init bpf_lsm_init(void)
 {
 	security_add_hooks(bpf_lsm_hooks, ARRAY_SIZE(bpf_lsm_hooks), "bpf");
diff --git a/security/security.c b/security/security.c
index 30a8aa700557..95a46ca25dcd 100644
--- a/security/security.c
+++ b/security/security.c
@@ -27,6 +27,7 @@
 #include <linux/backing-dev.h>
 #include <linux/string.h>
 #include <linux/msg.h>
+#include <linux/bpf_lsm.h>
 #include <net/flow.h>
 
 #define MAX_LSM_EVM_XATTR	2
@@ -657,20 +658,22 @@ static void __init lsm_early_task(struct task_struct *task)
 								\
 		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
 			P->hook.FUNC(__VA_ARGS__);		\
+		CALL_BPF_LSM_VOID_HOOKS(FUNC, __VA_ARGS__);	\
 	} while (0)
 
-#define call_int_hook(FUNC, IRC, ...) ({			\
-	int RC = IRC;						\
-	do {							\
-		struct security_hook_list *P;			\
-								\
+#define call_int_hook(FUNC, IRC, ...) ({				\
+	int RC = IRC;							\
+	do {								\
+		struct security_hook_list *P;				\
 		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
-			RC = P->hook.FUNC(__VA_ARGS__);		\
-			if (RC != 0)				\
-				break;				\
-		}						\
-	} while (0);						\
-	RC;							\
+			RC = P->hook.FUNC(__VA_ARGS__);			\
+			if (RC != 0)					\
+				break;					\
+		}							\
+		if (RC == 0)						\
+			RC = CALL_BPF_LSM_INT_HOOKS(FUNC, __VA_ARGS__);	\
+	} while (0);							\
+	RC;								\
 })
 
 /* Security operations */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v3 05/10] bpf: lsm: BTF API for LSM hooks
  2020-01-23 15:24 [PATCH bpf-next v3 00/10] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (3 preceding siblings ...)
  2020-01-23 15:24 ` [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM KP Singh
@ 2020-01-23 15:24 ` KP Singh
  2020-01-23 15:24 ` [PATCH bpf-next v3 06/10] bpf: lsm: Implement attach, detach and execution KP Singh
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 44+ messages in thread
From: KP Singh @ 2020-01-23 15:24 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Martin KaFai Lau, Song Liu, Yonghong Song, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

The BTF API provides information required by the BPF verifier to
attach eBPF programs to the LSM hooks by using the BTF information of
two types:

- struct security_hook_heads: This type provides the offset (using the
  lsm_hook_idx passed by the userspace) to which a new dynamically
  allocated security hook must be attached
- union security_list_options: This provides the information about the
  function prototype required by the hook

The type_ids for these types are calculated once during __init as
btf_type_by_name_kind does an expensive linear search with string
comparison. Furthermore, the total number of LSM hooks, which can be
determined from the btf_type_vlen of security_hook_heads, is needed (in
a subsequent patch) to initialize the mutable hooks for the BPF LSM.

When the program is loaded:

- The verifier receives the index of a member in struct
  security_hook_heads to which a program must be attached as
  prog->aux->lsm_hook_idx.
- bpf_lsm_type_by_idx is used to determine the func_proto of
  the LSM hook and update prog->aux->attach_func_proto.
- bpf_lsm_head_by_idx is used to determine the hlist_head to which
  the BPF program must be attached.

Reviewed-by: Brendan Jackman <jackmanb@google.com>
Reviewed-by: Florent Revest <revest@google.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: KP Singh <kpsingh@google.com>
---
 include/linux/bpf_lsm.h        | 12 ++++++
 security/bpf/Kconfig           |  1 +
 security/bpf/Makefile          |  2 +
 security/bpf/hooks.c           | 75 ++++++++++++++++++++++++++++++++++
 security/bpf/include/bpf_lsm.h | 21 ++++++++++
 security/bpf/lsm.c             | 35 ++++++++++++++++
 6 files changed, 146 insertions(+)
 create mode 100644 security/bpf/include/bpf_lsm.h

diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
index 57c20b2cd2f4..5e61c0736001 100644
--- a/include/linux/bpf_lsm.h
+++ b/include/linux/bpf_lsm.h
@@ -19,6 +19,8 @@ extern struct security_hook_heads bpf_lsm_hook_heads;
 
 int bpf_lsm_srcu_read_lock(void);
 void bpf_lsm_srcu_read_unlock(int idx);
+const struct btf_type *bpf_lsm_type_by_idx(struct btf *btf, u32 idx);
+const struct btf_member *bpf_lsm_head_by_idx(struct btf *btf, u32 idx);
 
 #define CALL_BPF_LSM_VOID_HOOKS(FUNC, ...)			\
 	do {							\
@@ -66,6 +68,16 @@ static inline int bpf_lsm_srcu_read_lock(void)
 	return 0;
 }
 static inline void bpf_lsm_srcu_read_unlock(int idx) {}
+static inline const struct btf_type *bpf_lsm_type_by_idx(
+	struct btf *btf, u32 idx)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+static inline const struct btf_member *bpf_lsm_head_by_idx(
+	struct btf *btf, u32 idx)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
 
 #endif /* CONFIG_SECURITY_BPF */
 
diff --git a/security/bpf/Kconfig b/security/bpf/Kconfig
index 595e4ad597ae..9438d899b618 100644
--- a/security/bpf/Kconfig
+++ b/security/bpf/Kconfig
@@ -7,6 +7,7 @@ config SECURITY_BPF
 	depends on SECURITY
 	depends on BPF_SYSCALL
 	depends on SRCU
+	depends on DEBUG_INFO_BTF
 	help
 	  This enables instrumentation of the security hooks with
 	  eBPF programs.
diff --git a/security/bpf/Makefile b/security/bpf/Makefile
index c526927c337d..748b9b7d4bc7 100644
--- a/security/bpf/Makefile
+++ b/security/bpf/Makefile
@@ -3,3 +3,5 @@
 # Copyright 2019 Google LLC.
 
 obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o hooks.o
+
+ccflags-y := -I$(srctree)/security/bpf -I$(srctree)/security/bpf/include
diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c
index b123d9cb4cd4..e9dc6933b6fa 100644
--- a/security/bpf/hooks.c
+++ b/security/bpf/hooks.c
@@ -5,8 +5,12 @@
  */
 
 #include <linux/bpf_lsm.h>
+#include <linux/bpf.h>
+#include <linux/btf.h>
 #include <linux/srcu.h>
 
+#include "bpf_lsm.h"
+
 DEFINE_STATIC_SRCU(security_hook_srcu);
 
 int bpf_lsm_srcu_read_lock(void)
@@ -18,3 +22,74 @@ void bpf_lsm_srcu_read_unlock(int idx)
 {
 	return srcu_read_unlock(&security_hook_srcu, idx);
 }
+
+static inline int validate_hlist_head(struct btf *btf,
+				      const struct btf_member *member)
+{
+	const struct btf_type *t;
+
+	t = btf_type_by_id(btf, member->type);
+	if (unlikely(!t))
+		return -EINVAL;
+
+	if (BTF_INFO_KIND(t->info) != BTF_KIND_STRUCT)
+		return -EINVAL;
+
+	if (t->size != sizeof(struct hlist_head))
+		return -EINVAL;
+
+	return 0;
+}
+
+/* Find the BTF representation of the member with a given index in
+ * struct security_hook_heads.
+ */
+const struct btf_member *bpf_lsm_head_by_idx(struct btf *btf, u32 idx)
+{
+	const struct btf_member *member;
+	int ret;
+
+	if (idx >= btf_type_vlen(bpf_lsm_info.btf_hook_heads))
+		return ERR_PTR(-EINVAL);
+
+	member = btf_type_member(bpf_lsm_info.btf_hook_heads) + idx;
+	ret = validate_hlist_head(btf, member);
+	if (ret < 0)
+		return ERR_PTR(ret);
+
+	return member;
+}
+
+/* Given the index of a member in security_hook_heads, return the
+ * corresponding type for the LSM hook. The members of the union
+ * security_list_options have the same names as those of security_hook_heads,
+ * as ensured by the LSM_HOOK_INIT macro in include/linux/lsm_hooks.h.
+ */
+const struct btf_type *bpf_lsm_type_by_idx(struct btf *btf, u32 idx)
+{
+	const struct btf_member *member, *hook_head = NULL;
+	const struct btf_type *t;
+	u32 i;
+
+	hook_head = bpf_lsm_head_by_idx(btf, idx);
+	if (IS_ERR(hook_head))
+		return ERR_PTR(PTR_ERR(hook_head));
+
+	for_each_member(i, bpf_lsm_info.btf_hook_types, member) {
+		if (hook_head->name_off == member->name_off) {
+			t = btf_type_by_id(btf, member->type);
+			if (unlikely(!t))
+				return ERR_PTR(-EINVAL);
+
+			if (!btf_type_is_ptr(t))
+				return ERR_PTR(-EINVAL);
+
+			t = btf_type_by_id(btf, t->type);
+			if (unlikely(!t))
+				return ERR_PTR(-EINVAL);
+			return t;
+		}
+	}
+
+	return ERR_PTR(-ESRCH);
+}
diff --git a/security/bpf/include/bpf_lsm.h b/security/bpf/include/bpf_lsm.h
new file mode 100644
index 000000000000..f142596d97bd
--- /dev/null
+++ b/security/bpf/include/bpf_lsm.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _BPF_LSM_H
+#define _BPF_LSM_H
+
+#include <linux/filter.h>
+#include <linux/bpf.h>
+#include <linux/btf.h>
+
+struct bpf_lsm_info {
+	/* BTF type for security_hook_heads populated at init.
+	 */
+	const struct btf_type *btf_hook_heads;
+	/* BTF type for security_list_options populated at init.
+	 */
+	const struct btf_type *btf_hook_types;
+};
+
+extern struct bpf_lsm_info bpf_lsm_info;
+
+#endif /* _BPF_LSM_H */
diff --git a/security/bpf/lsm.c b/security/bpf/lsm.c
index a25a068e1781..736e0ee3f926 100644
--- a/security/bpf/lsm.c
+++ b/security/bpf/lsm.c
@@ -7,6 +7,8 @@
 #include <linux/bpf_lsm.h>
 #include <linux/lsm_hooks.h>
 
+#include "bpf_lsm.h"
+
 /* This is only for internal hooks, always statically shipped as part of the
  * BPF LSM. Statically defined hooks are appended to the security_hook_heads
  * which is common for LSMs and R/O after init.
@@ -19,6 +21,39 @@ static struct security_hook_list bpf_lsm_hooks[] __lsm_ro_after_init = {};
  */
 struct security_hook_heads bpf_lsm_hook_heads;
 
+struct bpf_lsm_info bpf_lsm_info;
+
+static __init int bpf_lsm_info_init(void)
+{
+	const struct btf_type *t;
+
+	if (!btf_vmlinux)
+		/* No need to grab any locks because we are still in init */
+		btf_vmlinux = btf_parse_vmlinux();
+
+	if (IS_ERR(btf_vmlinux)) {
+		pr_err("btf_vmlinux is malformed\n");
+		return PTR_ERR(btf_vmlinux);
+	}
+
+	t = btf_type_by_name_kind(btf_vmlinux, "security_hook_heads",
+				  BTF_KIND_STRUCT);
+	if (WARN_ON(IS_ERR(t)))
+		return PTR_ERR(t);
+
+	bpf_lsm_info.btf_hook_heads = t;
+
+	t = btf_type_by_name_kind(btf_vmlinux, "security_list_options",
+				  BTF_KIND_UNION);
+	if (WARN_ON(IS_ERR(t)))
+		return PTR_ERR(t);
+
+	bpf_lsm_info.btf_hook_types = t;
+	return 0;
+}
+
+late_initcall(bpf_lsm_info_init);
+
 static int __init bpf_lsm_init(void)
 {
 	security_add_hooks(bpf_lsm_hooks, ARRAY_SIZE(bpf_lsm_hooks), "bpf");
-- 
2.20.1



* [PATCH bpf-next v3 06/10] bpf: lsm: Implement attach, detach and execution
  2020-01-23 15:24 [PATCH bpf-next v3 00/10] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (4 preceding siblings ...)
  2020-01-23 15:24 ` [PATCH bpf-next v3 05/10] bpf: lsm: BTF API for LSM hooks KP Singh
@ 2020-01-23 15:24 ` KP Singh
  2020-01-23 15:24 ` [PATCH bpf-next v3 07/10] bpf: lsm: Make the allocated callback RO+X KP Singh
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 44+ messages in thread
From: KP Singh @ 2020-01-23 15:24 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Brendan Jackman, Florent Revest, Greg Kroah-Hartman,
	Thomas Garnier, Alexei Starovoitov, Daniel Borkmann,
	James Morris, Kees Cook, Thomas Garnier, Michael Halcrow,
	Paul Turner, Brendan Gregg, Jann Horn, Matthew Garrett,
	Christian Brauner, Mickaël Salaün, Florent Revest,
	Brendan Jackman, Martin KaFai Lau, Song Liu, Yonghong Song,
	Serge E. Hallyn, Mauro Carvalho Chehab, David S. Miller,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

JITed BPF programs are used by the BPF LSM as dynamically allocated
security hooks. arch_prepare_bpf_trampoline generates code to handle
conversion of the signature of the hook to the BPF context and allows
the BPF program to be called directly as a C function.

The BPF_PROG_TYPE_LSM programs must have a GPL compatible license and
the following permissions are required to attach a program to a
hook:

- CAP_SYS_ADMIN to load the program
- CAP_MAC_ADMIN to attach it (i.e. to update the security policy)

When the program is loaded (BPF_PROG_LOAD), the verifier receives the
index of the member in the security_hook_heads struct in the
prog->aux->lsm_hook_idx and uses the BTF API provided by the LSM to:

- Populate the name of the hook in prog->aux->attach_func_name and
  the prototype in prog->aux->attach_func_proto.
- Verify that the offset is valid for a type struct hlist_head.
- Verify the program's accesses based on the attach_func_proto,
  similar to raw_tp BPF programs.

When an attachment (BPF_PROG_ATTACH) is requested:

- The information required to set up a callback is populated in the
  struct bpf_lsm_list.
- A new callback and a bpf_lsm_hook are allocated, and the address of
  the hook is set to the address of the allocated callback.
- The attachment returns an anonymous O_CLOEXEC fd which detaches the
  program on close.

Signed-off-by: KP Singh <kpsingh@google.com>
Reviewed-by: Brendan Jackman <jackmanb@google.com>
Reviewed-by: Florent Revest <revest@google.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
---
 include/linux/bpf.h            |   4 +
 include/linux/bpf_lsm.h        |  15 +++
 include/uapi/linux/bpf.h       |   4 +
 kernel/bpf/syscall.c           |  47 +++++++-
 kernel/bpf/verifier.c          |  73 ++++++++++--
 security/bpf/Kconfig           |   1 +
 security/bpf/hooks.c           | 201 +++++++++++++++++++++++++++++++++
 security/bpf/include/bpf_lsm.h |  49 ++++++++
 security/bpf/lsm.c             |  19 ++++
 security/bpf/ops.c             |   2 +
 tools/include/uapi/linux/bpf.h |   4 +
 11 files changed, 408 insertions(+), 11 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index a9687861fd7e..6d6653a375f6 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -614,6 +614,10 @@ struct bpf_prog_aux {
 	u32 func_cnt; /* used by non-func prog as the number of func progs */
 	u32 func_idx; /* 0 for non-func prog, the index in func array for func prog */
 	u32 attach_btf_id; /* in-kernel BTF type id to attach to */
+	/* Index of the hlist_head in security_hook_heads to which the program
+	 * must be attached.
+	 */
+	u32 lsm_hook_idx;
 	struct bpf_prog *linked_prog;
 	bool verifier_zext; /* Zero extensions has been inserted by verifier. */
 	bool offload_requested;
diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
index 5e61c0736001..b6daa0875c8d 100644
--- a/include/linux/bpf_lsm.h
+++ b/include/linux/bpf_lsm.h
@@ -19,8 +19,11 @@ extern struct security_hook_heads bpf_lsm_hook_heads;
 
 int bpf_lsm_srcu_read_lock(void);
 void bpf_lsm_srcu_read_unlock(int idx);
+int bpf_lsm_verify_prog(const struct bpf_prog *prog);
 const struct btf_type *bpf_lsm_type_by_idx(struct btf *btf, u32 idx);
 const struct btf_member *bpf_lsm_head_by_idx(struct btf *btf, u32 idx);
+int bpf_lsm_attach(struct bpf_prog *prog);
+int bpf_lsm_detach(struct bpf_prog *prog);
 
 #define CALL_BPF_LSM_VOID_HOOKS(FUNC, ...)			\
 	do {							\
@@ -68,6 +71,10 @@ static inline int bpf_lsm_srcu_read_lock(void)
 	return 0;
 }
 static inline void bpf_lsm_srcu_read_unlock(int idx) {}
+static inline int bpf_lsm_verify_prog(const struct bpf_prog *prog)
+{
+	return -EOPNOTSUPP;
+}
 static inline const struct btf_type *bpf_lsm_type_by_idx(
 	struct btf *btf, u32 idx)
 {
@@ -78,6 +85,14 @@ static inline const struct btf_member *bpf_lsm_head_by_idx(
 {
 	return ERR_PTR(-EOPNOTSUPP);
 }
+static inline int bpf_lsm_attach(struct bpf_prog *prog)
+{
+	return -EOPNOTSUPP;
+}
+static inline int bpf_lsm_detach(struct bpf_prog *prog)
+{
+	return -EOPNOTSUPP;
+}
 
 #endif /* CONFIG_SECURITY_BPF */
 
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 2f1e24a8c4a4..a3206de23db9 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -470,6 +470,10 @@ union bpf_attr {
 		__u32		line_info_cnt;	/* number of bpf_line_info records */
 		__u32		attach_btf_id;	/* in-kernel BTF type id to attach to */
 		__u32		attach_prog_fd; /* 0 to attach to vmlinux */
+		/* Index of the hlist_head in security_hook_heads to which the
+		 * program must be attached.
+		 */
+		__u32		lsm_hook_idx;
 	};
 
 	struct { /* anonymous struct used by BPF_OBJ_* commands */
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index eab4a36ee889..851ca97120ac 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -4,6 +4,7 @@
 #include <linux/bpf.h>
 #include <linux/bpf_trace.h>
 #include <linux/bpf_lirc.h>
+#include <linux/bpf_lsm.h>
 #include <linux/btf.h>
 #include <linux/syscalls.h>
 #include <linux/slab.h>
@@ -1993,7 +1994,7 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
 }
 
 /* last field in 'union bpf_attr' used by this command */
-#define	BPF_PROG_LOAD_LAST_FIELD attach_prog_fd
+#define	BPF_PROG_LOAD_LAST_FIELD lsm_hook_idx
 
 static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
 {
@@ -2047,6 +2048,10 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
 
 	prog->expected_attach_type = attr->expected_attach_type;
 	prog->aux->attach_btf_id = attr->attach_btf_id;
+
+	if (type == BPF_PROG_TYPE_LSM)
+		prog->aux->lsm_hook_idx = attr->lsm_hook_idx;
+
 	if (attr->attach_prog_fd) {
 		struct bpf_prog *tgt_prog;
 
@@ -2213,6 +2218,44 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog)
 	return err;
 }
 
+static int bpf_lsm_prog_release(struct inode *inode, struct file *filp)
+{
+	struct bpf_prog *prog = filp->private_data;
+
+	WARN_ON_ONCE(bpf_lsm_detach(prog));
+	bpf_prog_put(prog);
+	return 0;
+}
+
+static const struct file_operations bpf_lsm_prog_fops = {
+	.release	= bpf_lsm_prog_release,
+	.read		= bpf_dummy_read,
+	.write		= bpf_dummy_write,
+};
+
+static int bpf_lsm_prog_attach(struct bpf_prog *prog)
+{
+	int ret;
+
+	if (prog->expected_attach_type != BPF_LSM_MAC)
+		return -EINVAL;
+
+	/* The attach increments the references to the program which is
+	 * decremented on detach as a part of bpf_lsm_hook_free.
+	 */
+	ret = bpf_lsm_attach(prog);
+	if (ret)
+		return ret;
+
+	ret = anon_inode_getfd("bpf-lsm-prog", &bpf_lsm_prog_fops,
+				prog, O_CLOEXEC);
+	if (ret < 0) {
+		bpf_lsm_detach(prog);
+		return ret;
+	}
+	return 0;
+}
+
 struct bpf_raw_tracepoint {
 	struct bpf_raw_event_map *btp;
 	struct bpf_prog *prog;
@@ -2431,7 +2474,7 @@ static int bpf_prog_attach(const union bpf_attr *attr)
 		ret = lirc_prog_attach(attr, prog);
 		break;
 	case BPF_PROG_TYPE_LSM:
-		ret = -EOPNOTSUPP;
+		ret = bpf_lsm_prog_attach(prog);
 		break;
 	case BPF_PROG_TYPE_FLOW_DISSECTOR:
 		ret = skb_flow_dissector_bpf_prog_attach(attr, prog);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1cc945daa9c8..1a4013c68afa 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -19,6 +19,7 @@
 #include <linux/sort.h>
 #include <linux/perf_event.h>
 #include <linux/ctype.h>
+#include <linux/bpf_lsm.h>
 
 #include "disasm.h"
 
@@ -6405,8 +6406,9 @@ static int check_return_code(struct bpf_verifier_env *env)
 	struct tnum range = tnum_range(0, 1);
 	int err;
 
-	/* The struct_ops func-ptr's return type could be "void" */
-	if (env->prog->type == BPF_PROG_TYPE_STRUCT_OPS &&
+	/* LSM and struct_ops func-ptr's return type could be "void" */
+	if ((env->prog->type == BPF_PROG_TYPE_STRUCT_OPS ||
+	     env->prog->type == BPF_PROG_TYPE_LSM) &&
 	    !prog->aux->attach_func_proto->type)
 		return 0;
 
@@ -9775,7 +9777,49 @@ static int check_struct_ops_btf_id(struct bpf_verifier_env *env)
 	return 0;
 }
 
-static int check_attach_btf_id(struct bpf_verifier_env *env)
+/* BPF_PROG_TYPE_LSM programs pass the member index of the LSM hook in the
+ * security_hook_heads as the lsm_hook_idx. The verifier determines the
+ * name and the prototype for the LSM hook using the information in
+ * security_list_options and validates that the offset is a valid hlist_head.
+ */
+static inline int check_attach_btf_id_lsm(struct bpf_verifier_env *env)
+{
+	struct bpf_prog *prog = env->prog;
+	u32 idx = prog->aux->lsm_hook_idx;
+	const struct btf_member *head;
+	const struct btf_type *t;
+	const char *tname;
+	int ret;
+
+	ret = bpf_lsm_verify_prog(prog);
+	if (ret < 0)
+		return -EINVAL;
+
+	t = bpf_lsm_type_by_idx(btf_vmlinux, idx);
+	if (IS_ERR(t)) {
+		verbose(env, "unable to find security_list_option for idx %u in security_hook_heads\n", idx);
+		return -EINVAL;
+	}
+
+	if (!btf_type_is_func_proto(t))
+		return -EINVAL;
+
+	head = bpf_lsm_head_by_idx(btf_vmlinux, idx);
+	if (IS_ERR(head)) {
+		verbose(env, "no security_hook_heads index = %u\n", idx);
+		return PTR_ERR(head);
+	}
+
+	tname = btf_name_by_offset(btf_vmlinux, head->name_off);
+	if (!tname || !tname[0])
+		return -EINVAL;
+
+	prog->aux->attach_func_name = tname;
+	prog->aux->attach_func_proto = t;
+	return 0;
+}
+
+static int check_attach_btf_id_tracing(struct bpf_verifier_env *env)
 {
 	struct bpf_prog *prog = env->prog;
 	bool prog_extension = prog->type == BPF_PROG_TYPE_EXT;
@@ -9791,12 +9835,6 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 	long addr;
 	u64 key;
 
-	if (prog->type == BPF_PROG_TYPE_STRUCT_OPS)
-		return check_struct_ops_btf_id(env);
-
-	if (prog->type != BPF_PROG_TYPE_TRACING && !prog_extension)
-		return 0;
-
 	if (!btf_id) {
 		verbose(env, "Tracing programs must provide btf_id\n");
 		return -EINVAL;
@@ -9981,6 +10019,23 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 	}
 }
 
+static int check_attach_btf_id(struct bpf_verifier_env *env)
+{
+	struct bpf_prog *prog = env->prog;
+
+	switch (prog->type) {
+	case BPF_PROG_TYPE_TRACING:
+	case BPF_PROG_TYPE_EXT:
+		return check_attach_btf_id_tracing(env);
+	case BPF_PROG_TYPE_STRUCT_OPS:
+		return check_struct_ops_btf_id(env);
+	case BPF_PROG_TYPE_LSM:
+		return check_attach_btf_id_lsm(env);
+	default:
+		return 0;
+	}
+}
+
 int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
 	      union bpf_attr __user *uattr)
 {
diff --git a/security/bpf/Kconfig b/security/bpf/Kconfig
index 9438d899b618..a915c549f4b8 100644
--- a/security/bpf/Kconfig
+++ b/security/bpf/Kconfig
@@ -8,6 +8,7 @@ config SECURITY_BPF
 	depends on BPF_SYSCALL
 	depends on SRCU
 	depends on DEBUG_INFO_BTF
+	depends on BPF_JIT && HAVE_EBPF_JIT
 	help
 	  This enables instrumentation of the security hooks with
 	  eBPF programs.
diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c
index e9dc6933b6fa..f1d4fdcdb20e 100644
--- a/security/bpf/hooks.c
+++ b/security/bpf/hooks.c
@@ -6,11 +6,14 @@
 
 #include <linux/bpf_lsm.h>
 #include <linux/bpf.h>
+#include <linux/bpf_verifier.h>
 #include <linux/btf.h>
 #include <linux/srcu.h>
 
 #include "bpf_lsm.h"
 
+#define SECURITY_LIST_HEAD(off) ((void *)&bpf_lsm_hook_heads + off)
+
 DEFINE_STATIC_SRCU(security_hook_srcu);
 
 int bpf_lsm_srcu_read_lock(void)
@@ -41,6 +44,27 @@ static inline int validate_hlist_head(struct btf *btf,
 	return 0;
 }
 
+int bpf_lsm_verify_prog(const struct bpf_prog *prog)
+{
+	u32 num_hooks = btf_type_vlen(bpf_lsm_info.btf_hook_heads);
+	u32 idx = prog->aux->lsm_hook_idx;
+	struct bpf_verifier_log log = {};
+
+	if (!prog->gpl_compatible) {
+		bpf_log(&log,
+			"LSM programs must have a GPL compatible license\n");
+		return -EINVAL;
+	}
+
+	if (idx >= num_hooks) {
+		bpf_log(&log, "lsm_hook_idx should be between 0 and %u\n",
+			num_hooks - 1);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 /* Find the BTF representation of the member with a given index in
  * struct security_hook_heads.
  */
@@ -93,3 +117,180 @@ const struct btf_type *bpf_lsm_type_by_idx(struct btf *btf, u32 idx)
 
 	return ERR_PTR(-ESRCH);
 }
+
+static void *bpf_lsm_get_func_addr(struct security_hook_list *s,
+				   const char *name)
+{
+	const struct btf_member *member;
+	s32 i;
+
+	for_each_member(i, bpf_lsm_info.btf_hook_types, member)
+		if (!strncmp(btf_name_by_offset(btf_vmlinux, member->name_off),
+			     name, strlen(name) + 1))
+			return (void *)&s->hook + member->offset / 8;
+
+	return ERR_PTR(-ESRCH);
+}
+
+static struct bpf_lsm_list *bpf_lsm_list_lookup(struct bpf_prog *prog)
+{
+	struct bpf_verifier_log bpf_log = {};
+	u32 idx = prog->aux->lsm_hook_idx;
+	const struct btf_member *head;
+	struct bpf_lsm_list *list;
+	int ret = 0;
+
+	list = &bpf_lsm_info.hook_lists[idx];
+
+	mutex_lock(&list->mutex);
+
+	if (list->initialized)
+		goto unlock;
+
+	list->attach_type = prog->aux->attach_func_proto;
+
+	ret = btf_distill_func_proto(&bpf_log, btf_vmlinux, list->attach_type,
+				     prog->aux->attach_func_name,
+				     &list->func_model);
+	if (ret)
+		goto unlock;
+
+	head = bpf_lsm_head_by_idx(btf_vmlinux, idx);
+	if (IS_ERR(head)) {
+		ret = PTR_ERR(head);
+		goto unlock;
+	}
+
+	list->security_list_head = SECURITY_LIST_HEAD(head->offset / 8);
+	list->initialized = true;
+unlock:
+	mutex_unlock(&list->mutex);
+	if (ret)
+		return ERR_PTR(ret);
+	return list;
+}
+
+static struct bpf_lsm_hook *bpf_lsm_hook_alloc(struct bpf_lsm_list *list,
+					       struct bpf_prog *prog)
+{
+	struct bpf_lsm_hook *hook;
+	void *image;
+	int ret = 0;
+
+	image = bpf_jit_alloc_exec(PAGE_SIZE);
+	if (!image)
+		return ERR_PTR(-ENOMEM);
+
+	set_vm_flush_reset_perms(image);
+
+	ret = arch_prepare_bpf_trampoline(image, image + PAGE_SIZE,
+		&list->func_model, 0, &prog, 1, NULL, 0, NULL);
+	if (ret < 0) {
+		ret = -EINVAL;
+		goto error;
+	}
+
+	hook = kzalloc(sizeof(struct bpf_lsm_hook), GFP_KERNEL);
+	if (!hook) {
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	hook->image = image;
+	hook->prog = prog;
+	bpf_prog_inc(prog);
+	return hook;
+error:
+	bpf_jit_free_exec(image);
+	return ERR_PTR(ret);
+}
+
+static void bpf_lsm_hook_free(struct bpf_lsm_hook *tr)
+{
+	if (!tr)
+		return;
+
+	if (tr->prog)
+		bpf_prog_put(tr->prog);
+
+	bpf_jit_free_exec(tr->image);
+	kfree(tr);
+}
+
+int bpf_lsm_attach(struct bpf_prog *prog)
+{
+	struct bpf_lsm_hook *hook;
+	struct bpf_lsm_list *list;
+	void **addr;
+	int ret = 0;
+
+	/* Only CAP_MAC_ADMIN users are allowed to make changes to LSM hooks
+	 */
+	if (!capable(CAP_MAC_ADMIN))
+		return -EPERM;
+
+	if (!bpf_lsm_info.initialized)
+		return -EBUSY;
+
+	list = bpf_lsm_list_lookup(prog);
+	if (IS_ERR(list))
+		return PTR_ERR(list);
+
+	hook = bpf_lsm_hook_alloc(list, prog);
+	if (IS_ERR(hook))
+		return PTR_ERR(hook);
+
+	hook->sec_hook.head = list->security_list_head;
+	addr = bpf_lsm_get_func_addr(&hook->sec_hook,
+				     prog->aux->attach_func_name);
+	if (IS_ERR(addr)) {
+		ret = PTR_ERR(addr);
+		goto error;
+	}
+
+	*addr = hook->image;
+
+	mutex_lock(&list->mutex);
+	hlist_add_tail_rcu(&hook->sec_hook.list, hook->sec_hook.head);
+	mutex_unlock(&list->mutex);
+	return 0;
+
+error:
+	bpf_lsm_hook_free(hook);
+	return ret;
+}
+
+int bpf_lsm_detach(struct bpf_prog *prog)
+{
+	struct security_hook_list *sec_hook;
+	struct bpf_lsm_hook *hook = NULL;
+	struct bpf_lsm_list *list;
+	struct hlist_node *n;
+
+	/* Only CAP_MAC_ADMIN users are allowed to make changes to LSM hooks
+	 */
+	if (!capable(CAP_MAC_ADMIN))
+		return -EPERM;
+
+	if (!bpf_lsm_info.initialized)
+		return -EBUSY;
+
+	list = &bpf_lsm_info.hook_lists[prog->aux->lsm_hook_idx];
+
+	mutex_lock(&list->mutex);
+	hlist_for_each_entry_safe(sec_hook, n, list->security_list_head, list) {
+		hook = container_of(sec_hook, struct bpf_lsm_hook, sec_hook);
+		if (hook->prog == prog) {
+			hlist_del_rcu(&hook->sec_hook.list);
+			break;
+		}
+	}
+	mutex_unlock(&list->mutex);
+	/* call_rcu is not used directly as module_memfree cannot run from a
+	 * softirq context. The best way would be to schedule this on a work
+	 * queue.
+	 */
+	synchronize_srcu(&security_hook_srcu);
+	bpf_lsm_hook_free(hook);
+	return 0;
+}
diff --git a/security/bpf/include/bpf_lsm.h b/security/bpf/include/bpf_lsm.h
index f142596d97bd..4c51ce4d29e7 100644
--- a/security/bpf/include/bpf_lsm.h
+++ b/security/bpf/include/bpf_lsm.h
@@ -4,9 +4,50 @@
 #define _BPF_LSM_H
 
 #include <linux/filter.h>
+#include <linux/lsm_hooks.h>
 #include <linux/bpf.h>
 #include <linux/btf.h>
 
+struct bpf_lsm_hook {
+	/* The security_hook_list is initialized dynamically here; static
+	 * LSMs initialize theirs with LSM_HOOK_INIT.
+	 */
+	struct security_hook_list sec_hook;
+	/* The BPF program for which this hook was allocated, this is used upon
+	 * detachment to find the hook corresponding to a program.
+	 */
+	struct bpf_prog *prog;
+	/* The address of the allocated function */
+	void *image;
+};
+
+/* Represents the list of hooks attached to a particular
+ * security_list_head and contains the information required for
+ * attaching and detaching BPF programs.
+ */
+struct bpf_lsm_list {
+	/* Set when the first BPF program is attached and the remaining
+	 * fields are populated.
+	 */
+	bool initialized;
+	/* This mutex is used to serialize accesses to all the fields in
+	 * this structure.
+	 */
+	struct mutex mutex;
+	/* The BTF type for this hook.
+	 */
+	const struct btf_type *attach_type;
+	/* func_model for the setup of the callback.
+	 */
+	struct btf_func_model func_model;
+	/* The list of functions currently associated with the LSM hook.
+	 */
+	struct list_head callback_list;
+	/* The head to which the allocated hooks must be attached.
+	 */
+	struct hlist_head *security_list_head;
+};
+
 struct bpf_lsm_info {
 	/* BTF type for security_hook_heads populated at init.
 	 */
@@ -14,6 +55,14 @@ struct bpf_lsm_info {
 	/* BTF type for security_list_options populated at init.
 	 */
 	const struct btf_type *btf_hook_types;
+	/* Dynamic hooks can only be attached after the LSM is initialized.
+	 */
+	bool initialized;
+	/* The hook_lists array and the mutex in each bpf_lsm_list are
+	 * allocated and initialized at __init; the remaining initialization
+	 * happens when a BPF program is attached to the list.
+	 */
+	struct bpf_lsm_list *hook_lists;
 };
 
 extern struct bpf_lsm_info bpf_lsm_info;
diff --git a/security/bpf/lsm.c b/security/bpf/lsm.c
index 736e0ee3f926..8555879af8c8 100644
--- a/security/bpf/lsm.c
+++ b/security/bpf/lsm.c
@@ -26,6 +26,8 @@ struct bpf_lsm_info bpf_lsm_info;
 static __init int bpf_lsm_info_init(void)
 {
 	const struct btf_type *t;
+	u32 num_hooks;
+	int i;
 
 	if (!btf_vmlinux)
 		/* No need to grab any locks because we are still in init */
@@ -42,6 +44,7 @@ static __init int bpf_lsm_info_init(void)
 		return PTR_ERR(t);
 
 	bpf_lsm_info.btf_hook_heads = t;
+	num_hooks = btf_type_vlen(t);
 
 	t = btf_type_by_name_kind(btf_vmlinux, "security_list_options",
 				  BTF_KIND_UNION);
@@ -49,6 +52,22 @@ static __init int bpf_lsm_info_init(void)
 		return PTR_ERR(t);
 
 	bpf_lsm_info.btf_hook_types = t;
+
+	bpf_lsm_info.hook_lists = kcalloc(num_hooks,
+					  sizeof(struct bpf_lsm_list),
+					  GFP_KERNEL);
+	if (!bpf_lsm_info.hook_lists)
+		return -ENOMEM;
+
+	/* The mutex needs to be initialized at init as it must be held
+	 * when mutating the list. The rest of the information in the list
+	 * is populated lazily when the first LSM hook callback is appended
+	 * to the list.
+	 */
+	for (i = 0; i < num_hooks; i++)
+		mutex_init(&bpf_lsm_info.hook_lists[i].mutex);
+
+	bpf_lsm_info.initialized = true;
 	return 0;
 }
 
diff --git a/security/bpf/ops.c b/security/bpf/ops.c
index 81c2bd9c0495..9a26633ac6f3 100644
--- a/security/bpf/ops.c
+++ b/security/bpf/ops.c
@@ -6,6 +6,7 @@
 
 #include <linux/filter.h>
 #include <linux/bpf.h>
+#include <linux/btf.h>
 
 const struct bpf_prog_ops lsm_prog_ops = {
 };
@@ -25,4 +26,5 @@ static const struct bpf_func_proto *get_bpf_func_proto(
 
 const struct bpf_verifier_ops lsm_verifier_ops = {
 	.get_func_proto = get_bpf_func_proto,
+	.is_valid_access = btf_ctx_access,
 };
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 2f1e24a8c4a4..a3206de23db9 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -470,6 +470,10 @@ union bpf_attr {
 		__u32		line_info_cnt;	/* number of bpf_line_info records */
 		__u32		attach_btf_id;	/* in-kernel BTF type id to attach to */
 		__u32		attach_prog_fd; /* 0 to attach to vmlinux */
+		/* Index of the hlist_head in security_hook_heads to which the
+		 * program must be attached.
+		 */
+		__u32		lsm_hook_idx;
 	};
 
 	struct { /* anonymous struct used by BPF_OBJ_* commands */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v3 07/10] bpf: lsm: Make the allocated callback RO+X
  2020-01-23 15:24 [PATCH bpf-next v3 00/10] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (5 preceding siblings ...)
  2020-01-23 15:24 ` [PATCH bpf-next v3 06/10] bpf: lsm: Implement attach, detach and execution KP Singh
@ 2020-01-23 15:24 ` KP Singh
  2020-01-23 15:24 ` [PATCH bpf-next v3 08/10] tools/libbpf: Add support for BPF_PROG_TYPE_LSM KP Singh
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 44+ messages in thread
From: KP Singh @ 2020-01-23 15:24 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Martin KaFai Lau, Song Liu, Yonghong Song, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

This patch will no longer be needed once arch_prepare_bpf_trampoline
moves to using text_poke.

The two IPI TLB flushes can be further optimized if a new API to handle
W^X in the kernel emerges as an outcome of:

  https://lore.kernel.org/bpf/20200103234725.22846-1-kpsingh@chromium.org/

Signed-off-by: KP Singh <kpsingh@google.com>
Reviewed-by: Brendan Jackman <jackmanb@google.com>
Reviewed-by: Florent Revest <revest@google.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
---
 security/bpf/hooks.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c
index f1d4fdcdb20e..beeeb8c1f9c2 100644
--- a/security/bpf/hooks.c
+++ b/security/bpf/hooks.c
@@ -190,6 +190,15 @@ static struct bpf_lsm_hook *bpf_lsm_hook_alloc(struct bpf_lsm_list *list,
 		goto error;
 	}
 
+	/* First make the page read-only, and only then make it executable to
+	 * prevent it from being W+X in between.
+	 */
+	set_memory_ro((unsigned long)image, 1);
+	/* More checks can be done here to ensure that nothing was changed
+	 * between arch_prepare_bpf_trampoline and set_memory_ro.
+	 */
+	set_memory_x((unsigned long)image, 1);
+
 	hook = kzalloc(sizeof(struct bpf_lsm_hook), GFP_KERNEL);
 	if (!hook) {
 		ret = -ENOMEM;
-- 
2.20.1



* [PATCH bpf-next v3 08/10] tools/libbpf: Add support for BPF_PROG_TYPE_LSM
  2020-01-23 15:24 [PATCH bpf-next v3 00/10] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (6 preceding siblings ...)
  2020-01-23 15:24 ` [PATCH bpf-next v3 07/10] bpf: lsm: Make the allocated callback RO+X KP Singh
@ 2020-01-23 15:24 ` KP Singh
  2020-01-23 18:00   ` Andrii Nakryiko
  2020-01-23 15:24 ` [PATCH bpf-next v3 09/10] bpf: lsm: Add selftests " KP Singh
  2020-01-23 15:24 ` [PATCH bpf-next v3 10/10] bpf: lsm: Add Documentation KP Singh
  9 siblings, 1 reply; 44+ messages in thread
From: KP Singh @ 2020-01-23 15:24 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Martin KaFai Lau, Song Liu, Yonghong Song, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

* Add functionality in libbpf to attach eBPF program to LSM hooks
* Lookup the index of the LSM hook in security_hook_heads and pass it in
  attr->lsm_hook_idx

Signed-off-by: KP Singh <kpsingh@google.com>
Reviewed-by: Brendan Jackman <jackmanb@google.com>
Reviewed-by: Florent Revest <revest@google.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
---
 tools/lib/bpf/bpf.c      |   6 ++-
 tools/lib/bpf/bpf.h      |   1 +
 tools/lib/bpf/libbpf.c   | 104 +++++++++++++++++++++++++++++++++++++--
 tools/lib/bpf/libbpf.h   |   4 ++
 tools/lib/bpf/libbpf.map |   3 ++
 5 files changed, 114 insertions(+), 4 deletions(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index c6dafe563176..60dac1b80e5a 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -235,7 +235,10 @@ int bpf_load_program_xattr(const struct bpf_load_program_attr *load_attr,
 	memset(&attr, 0, sizeof(attr));
 	attr.prog_type = load_attr->prog_type;
 	attr.expected_attach_type = load_attr->expected_attach_type;
-	if (attr.prog_type == BPF_PROG_TYPE_STRUCT_OPS) {
+
+	if (attr.prog_type == BPF_PROG_TYPE_LSM) {
+		attr.lsm_hook_idx = load_attr->lsm_hook_idx;
+	} else if (attr.prog_type == BPF_PROG_TYPE_STRUCT_OPS) {
 		attr.attach_btf_id = load_attr->attach_btf_id;
 	} else if (attr.prog_type == BPF_PROG_TYPE_TRACING ||
 		   attr.prog_type == BPF_PROG_TYPE_EXT) {
@@ -245,6 +248,7 @@ int bpf_load_program_xattr(const struct bpf_load_program_attr *load_attr,
 		attr.prog_ifindex = load_attr->prog_ifindex;
 		attr.kern_version = load_attr->kern_version;
 	}
+
 	attr.insn_cnt = (__u32)load_attr->insns_cnt;
 	attr.insns = ptr_to_u64(load_attr->insns);
 	attr.license = ptr_to_u64(load_attr->license);
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index b976e77316cc..cfd59f7c29a7 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -85,6 +85,7 @@ struct bpf_load_program_attr {
 	union {
 		__u32 prog_ifindex;
 		__u32 attach_btf_id;
+		__u32 lsm_hook_idx;
 	};
 	__u32 prog_btf_fd;
 	__u32 func_info_rec_size;
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index ae34b681ae82..1ecbf6c78b97 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -228,6 +228,7 @@ struct bpf_program {
 	enum bpf_attach_type expected_attach_type;
 	__u32 attach_btf_id;
 	__u32 attach_prog_fd;
+	__u32 lsm_hook_idx;
 	void *func_info;
 	__u32 func_info_rec_size;
 	__u32 func_info_cnt;
@@ -2352,7 +2353,9 @@ static int bpf_object__finalize_btf(struct bpf_object *obj)
 
 static inline bool libbpf_prog_needs_vmlinux_btf(struct bpf_program *prog)
 {
-	if (prog->type == BPF_PROG_TYPE_STRUCT_OPS)
+
+	if (prog->type == BPF_PROG_TYPE_LSM ||
+	    prog->type == BPF_PROG_TYPE_STRUCT_OPS)
 		return true;
 
 	/* BPF_PROG_TYPE_TRACING programs which do not attach to other programs
@@ -4835,7 +4838,10 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
 	load_attr.insns = insns;
 	load_attr.insns_cnt = insns_cnt;
 	load_attr.license = license;
-	if (prog->type == BPF_PROG_TYPE_STRUCT_OPS) {
+
+	if (prog->type == BPF_PROG_TYPE_LSM) {
+		load_attr.lsm_hook_idx = prog->lsm_hook_idx;
+	} else if (prog->type == BPF_PROG_TYPE_STRUCT_OPS) {
 		load_attr.attach_btf_id = prog->attach_btf_id;
 	} else if (prog->type == BPF_PROG_TYPE_TRACING ||
 		   prog->type == BPF_PROG_TYPE_EXT) {
@@ -4845,6 +4851,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
 		load_attr.kern_version = kern_version;
 		load_attr.prog_ifindex = prog->prog_ifindex;
 	}
+
 	/* if .BTF.ext was loaded, kernel supports associated BTF for prog */
 	if (prog->obj->btf_ext)
 		btf_fd = bpf_object__btf_fd(prog->obj);
@@ -4914,10 +4921,11 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
 }
 
 static int libbpf_find_attach_btf_id(struct bpf_program *prog);
+static __s32 find_lsm_hook_idx(struct bpf_program *prog);
 
 int bpf_program__load(struct bpf_program *prog, char *license, __u32 kern_ver)
 {
-	int err = 0, fd, i, btf_id;
+	int err = 0, fd, i, btf_id, idx;
 
 	if (prog->type == BPF_PROG_TYPE_TRACING ||
 	    prog->type == BPF_PROG_TYPE_EXT) {
@@ -4927,6 +4935,13 @@ int bpf_program__load(struct bpf_program *prog, char *license, __u32 kern_ver)
 		prog->attach_btf_id = btf_id;
 	}
 
+	if (prog->type == BPF_PROG_TYPE_LSM) {
+		idx = find_lsm_hook_idx(prog);
+		if (idx < 0)
+			return idx;
+		prog->lsm_hook_idx = idx;
+	}
+
 	if (prog->instances.nr < 0 || !prog->instances.fds) {
 		if (prog->preprocessor) {
 			pr_warn("Internal error: can't load program '%s'\n",
@@ -5084,6 +5099,8 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
 		if (prog->type != BPF_PROG_TYPE_UNSPEC)
 			continue;
 
+
+
 		err = libbpf_prog_type_by_name(prog->section_name, &prog_type,
 					       &attach_type);
 		if (err == -ESRCH)
@@ -6160,6 +6177,7 @@ bool bpf_program__is_##NAME(const struct bpf_program *prog)	\
 }								\
 
 BPF_PROG_TYPE_FNS(socket_filter, BPF_PROG_TYPE_SOCKET_FILTER);
+BPF_PROG_TYPE_FNS(lsm, BPF_PROG_TYPE_LSM);
 BPF_PROG_TYPE_FNS(kprobe, BPF_PROG_TYPE_KPROBE);
 BPF_PROG_TYPE_FNS(sched_cls, BPF_PROG_TYPE_SCHED_CLS);
 BPF_PROG_TYPE_FNS(sched_act, BPF_PROG_TYPE_SCHED_ACT);
@@ -6226,6 +6244,8 @@ static struct bpf_link *attach_raw_tp(const struct bpf_sec_def *sec,
 				      struct bpf_program *prog);
 static struct bpf_link *attach_trace(const struct bpf_sec_def *sec,
 				     struct bpf_program *prog);
+static struct bpf_link *attach_lsm(const struct bpf_sec_def *sec,
+				   struct bpf_program *prog);
 
 struct bpf_sec_def {
 	const char *sec;
@@ -6272,6 +6292,9 @@ static const struct bpf_sec_def section_defs[] = {
 	SEC_DEF("freplace/", EXT,
 		.is_attach_btf = true,
 		.attach_fn = attach_trace),
+	SEC_DEF("lsm/", LSM,
+		.expected_attach_type = BPF_LSM_MAC,
+		.attach_fn = attach_lsm),
 	BPF_PROG_SEC("xdp",			BPF_PROG_TYPE_XDP),
 	BPF_PROG_SEC("perf_event",		BPF_PROG_TYPE_PERF_EVENT),
 	BPF_PROG_SEC("lwt_in",			BPF_PROG_TYPE_LWT_IN),
@@ -6533,6 +6556,44 @@ static int bpf_object__collect_struct_ops_map_reloc(struct bpf_object *obj,
 	return -EINVAL;
 }
 
+static __s32 find_lsm_hook_idx(struct bpf_program *prog)
+{
+	struct btf *btf = prog->obj->btf_vmlinux;
+	const char *name = prog->section_name;
+	const struct bpf_sec_def *sec_def;
+	const struct btf_type *t;
+	struct btf_member *m;
+	__s32 type_id;
+	int i;
+
+	sec_def = find_sec_def(name);
+	if (!sec_def)
+		return -ESRCH;
+
+	name += sec_def->len;
+
+	type_id = btf__find_by_name_kind(btf, "security_hook_heads",
+					 BTF_KIND_STRUCT);
+	if (type_id < 0) {
+		pr_warn("security_hook_heads not found in vmlinux BTF\n");
+		return type_id;
+	}
+
+	t = btf__type_by_id(btf, type_id);
+	if (!t) {
+		pr_warn("Can't find type for security_hook_heads: %u\n", type_id);
+		return -ESRCH;
+	}
+
+	for (m = btf_members(t), i = 0; i < btf_vlen(t); i++, m++) {
+		if (!strcmp(btf__name_by_offset(btf, m->name_off), name))
+			return i;
+	}
+
+	pr_warn("Can't find lsm_hook_idx for %s in security_hook_heads\n", name);
+	return -ESRCH;
+}
+
 #define BTF_TRACE_PREFIX "btf_trace_"
 #define BTF_MAX_NAME_SIZE 128
 
@@ -7372,6 +7433,43 @@ static struct bpf_link *attach_trace(const struct bpf_sec_def *sec,
 	return bpf_program__attach_trace(prog);
 }
 
+struct bpf_link *bpf_program__attach_lsm(struct bpf_program *prog)
+{
+	char errmsg[STRERR_BUFSIZE];
+	struct bpf_link_fd *link;
+	int prog_fd, pfd;
+
+	prog_fd = bpf_program__fd(prog);
+	if (prog_fd < 0) {
+		pr_warn("program '%s': can't attach before loaded\n",
+			bpf_program__title(prog, false));
+		return ERR_PTR(-EINVAL);
+	}
+
+	link = calloc(1, sizeof(*link));
+	if (!link)
+		return ERR_PTR(-ENOMEM);
+	link->link.detach = &bpf_link__detach_fd;
+
+	pfd = bpf_prog_attach(prog_fd, 0, BPF_LSM_MAC, 0);
+	if (pfd < 0) {
+		pfd = -errno;
+		free(link);
+		pr_warn("program '%s': failed to attach: %s\n",
+			bpf_program__title(prog, false),
+			libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+		return ERR_PTR(pfd);
+	}
+	link->fd = pfd;
+	return (struct bpf_link *)link;
+}
+
+static struct bpf_link *attach_lsm(const struct bpf_sec_def *sec,
+				   struct bpf_program *prog)
+{
+	return bpf_program__attach_lsm(prog);
+}
+
 struct bpf_link *bpf_program__attach(struct bpf_program *prog)
 {
 	const struct bpf_sec_def *sec_def;
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 2a5e3b087002..ef09ca980758 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -239,6 +239,8 @@ bpf_program__attach_raw_tracepoint(struct bpf_program *prog,
 
 LIBBPF_API struct bpf_link *
 bpf_program__attach_trace(struct bpf_program *prog);
+LIBBPF_API struct bpf_link *
+bpf_program__attach_lsm(struct bpf_program *prog);
 struct bpf_map;
 LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(struct bpf_map *map);
 struct bpf_insn;
@@ -312,6 +314,7 @@ LIBBPF_API int bpf_program__set_socket_filter(struct bpf_program *prog);
 LIBBPF_API int bpf_program__set_tracepoint(struct bpf_program *prog);
 LIBBPF_API int bpf_program__set_raw_tracepoint(struct bpf_program *prog);
 LIBBPF_API int bpf_program__set_kprobe(struct bpf_program *prog);
+LIBBPF_API int bpf_program__set_lsm(struct bpf_program *prog);
 LIBBPF_API int bpf_program__set_sched_cls(struct bpf_program *prog);
 LIBBPF_API int bpf_program__set_sched_act(struct bpf_program *prog);
 LIBBPF_API int bpf_program__set_xdp(struct bpf_program *prog);
@@ -334,6 +337,7 @@ LIBBPF_API bool bpf_program__is_socket_filter(const struct bpf_program *prog);
 LIBBPF_API bool bpf_program__is_tracepoint(const struct bpf_program *prog);
 LIBBPF_API bool bpf_program__is_raw_tracepoint(const struct bpf_program *prog);
 LIBBPF_API bool bpf_program__is_kprobe(const struct bpf_program *prog);
+LIBBPF_API bool bpf_program__is_lsm(const struct bpf_program *prog);
 LIBBPF_API bool bpf_program__is_sched_cls(const struct bpf_program *prog);
 LIBBPF_API bool bpf_program__is_sched_act(const struct bpf_program *prog);
 LIBBPF_API bool bpf_program__is_xdp(const struct bpf_program *prog);
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index b035122142bb..8df332a528a0 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -227,10 +227,13 @@ LIBBPF_0.0.7 {
 		bpf_probe_large_insn_limit;
 		bpf_prog_attach_xattr;
 		bpf_program__attach;
+		bpf_program__attach_lsm;
 		bpf_program__name;
 		bpf_program__is_extension;
+		bpf_program__is_lsm;
 		bpf_program__is_struct_ops;
 		bpf_program__set_extension;
+		bpf_program__set_lsm;
 		bpf_program__set_struct_ops;
 		btf__align_of;
 		libbpf_find_kernel_btf;
-- 
2.20.1



* [PATCH bpf-next v3 09/10] bpf: lsm: Add selftests for BPF_PROG_TYPE_LSM
  2020-01-23 15:24 [PATCH bpf-next v3 00/10] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (7 preceding siblings ...)
  2020-01-23 15:24 ` [PATCH bpf-next v3 08/10] tools/libbpf: Add support for BPF_PROG_TYPE_LSM KP Singh
@ 2020-01-23 15:24 ` KP Singh
  2020-01-23 15:24 ` [PATCH bpf-next v3 10/10] bpf: lsm: Add Documentation KP Singh
  9 siblings, 0 replies; 44+ messages in thread
From: KP Singh @ 2020-01-23 15:24 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Martin KaFai Lau, Song Liu, Yonghong Song, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

* Load a BPF program that audits mprotect calls
* Attach the program to the "file_mprotect" LSM hook.
* Do an mprotect on some memory allocated on the heap
* Verify that the audit event was received using the shared global
  result variable.

Signed-off-by: KP Singh <kpsingh@google.com>
Reviewed-by: Brendan Jackman <jackmanb@google.com>
Reviewed-by: Florent Revest <revest@google.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
---
 MAINTAINERS                                   |  3 +
 tools/testing/selftests/bpf/lsm_helpers.h     | 19 ++++++
 .../bpf/prog_tests/lsm_mprotect_audit.c       | 58 +++++++++++++++++++
 .../selftests/bpf/progs/lsm_mprotect_audit.c  | 47 +++++++++++++++
 4 files changed, 127 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/lsm_helpers.h
 create mode 100644 tools/testing/selftests/bpf/prog_tests/lsm_mprotect_audit.c
 create mode 100644 tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c

diff --git a/MAINTAINERS b/MAINTAINERS
index c606b3d89992..32236d89d00b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3210,6 +3210,9 @@ L:	bpf@vger.kernel.org
 S:	Maintained
 F:	security/bpf/
 F:	include/linux/bpf_lsm.h
+F:	tools/testing/selftests/bpf/lsm_helpers.h
+F:	tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c
+F:	tools/testing/selftests/bpf/prog_tests/lsm_mprotect_audit.c
 
 BROADCOM B44 10/100 ETHERNET DRIVER
 M:	Michael Chan <michael.chan@broadcom.com>
diff --git a/tools/testing/selftests/bpf/lsm_helpers.h b/tools/testing/selftests/bpf/lsm_helpers.h
new file mode 100644
index 000000000000..8bad08f77654
--- /dev/null
+++ b/tools/testing/selftests/bpf/lsm_helpers.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright 2019 Google LLC.
+ */
+#ifndef _LSM_HELPERS_H
+#define _LSM_HELPERS_H
+
+struct lsm_mprotect_audit_result {
+	/* This ensures that the LSM hook only monitors the PID requested
+	 * by the loader.
+	 */
+	__u32 monitored_pid;
+	/* The number of mprotect calls for the monitored PID.
+	 */
+	__u32 mprotect_count;
+};
+
+#endif /* _LSM_HELPERS_H */
diff --git a/tools/testing/selftests/bpf/prog_tests/lsm_mprotect_audit.c b/tools/testing/selftests/bpf/prog_tests/lsm_mprotect_audit.c
new file mode 100644
index 000000000000..ff90b874eafc
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/lsm_mprotect_audit.c
@@ -0,0 +1,58 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright 2019 Google LLC.
+ */
+
+#include <test_progs.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include <malloc.h>
+#include "lsm_helpers.h"
+#include "lsm_mprotect_audit.skel.h"
+
+int heap_mprotect(void)
+{
+	void *buf;
+	long sz;
+
+	sz = sysconf(_SC_PAGESIZE);
+	if (sz < 0)
+		return sz;
+
+	buf = memalign(sz, 2 * sz);
+	if (buf == NULL)
+		return -ENOMEM;
+
+	return mprotect(buf, sz, PROT_READ | PROT_EXEC);
+}
+
+void test_lsm_mprotect_audit(void)
+{
+	struct lsm_mprotect_audit_result *result;
+	struct lsm_mprotect_audit *skel = NULL;
+	int err, duration = 0;
+
+	skel = lsm_mprotect_audit__open_and_load();
+	if (CHECK(!skel, "skel_load", "lsm_mprotect_audit skeleton failed\n"))
+		goto close_prog;
+
+	err = lsm_mprotect_audit__attach(skel);
+	if (CHECK(err, "attach", "lsm_mprotect_audit attach failed: %d\n", err))
+		goto close_prog;
+
+	result = &skel->bss->result;
+	result->monitored_pid = getpid();
+
+	err = heap_mprotect();
+	if (CHECK(err < 0, "heap_mprotect", "err %d errno %d\n", err, errno))
+		goto close_prog;
+
+	/* Make sure mprotect_audit program was triggered
+	 * and detected an mprotect on the heap.
+	 */
+	CHECK_FAIL(result->mprotect_count != 1);
+
+close_prog:
+	lsm_mprotect_audit__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c b/tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c
new file mode 100644
index 000000000000..231cd64fd0f7
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c
@@ -0,0 +1,47 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright 2019 Google LLC.
+ */
+
+#include <linux/bpf.h>
+#include <stdbool.h>
+#include "bpf_trace_helpers.h"
+#include "lsm_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct lsm_mprotect_audit_result result = {
+	.mprotect_count = 0,
+	.monitored_pid = 0,
+};
+
+/*
+ * Define some of the structs used in the BPF program.
+ * Only the field names and their sizes need to match the
+ * kernel type; the order is irrelevant.
+ */
+struct mm_struct {
+	unsigned long start_brk, brk;
+} __attribute__((preserve_access_index));
+
+struct vm_area_struct {
+	unsigned long vm_start, vm_end;
+	struct mm_struct *vm_mm;
+} __attribute__((preserve_access_index));
+
+SEC("lsm/file_mprotect")
+int BPF_PROG(mprotect_audit, struct vm_area_struct *vma,
+	     unsigned long reqprot, unsigned long prot)
+{
+	__u32 pid = bpf_get_current_pid_tgid();
+	int is_heap = 0;
+
+	is_heap = (vma->vm_start >= vma->vm_mm->start_brk &&
+		   vma->vm_end <= vma->vm_mm->brk);
+
+	if (is_heap && result.monitored_pid == pid)
+		result.mprotect_count++;
+
+	return 0;
+}
-- 
2.20.1



* [PATCH bpf-next v3 10/10] bpf: lsm: Add Documentation
  2020-01-23 15:24 [PATCH bpf-next v3 00/10] MAC and Audit policy using eBPF (KRSI) KP Singh
                   ` (8 preceding siblings ...)
  2020-01-23 15:24 ` [PATCH bpf-next v3 09/10] bpf: lsm: Add selftests " KP Singh
@ 2020-01-23 15:24 ` KP Singh
  9 siblings, 0 replies; 44+ messages in thread
From: KP Singh @ 2020-01-23 15:24 UTC (permalink / raw)
  To: linux-kernel, bpf, linux-security-module
  Cc: Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Martin KaFai Lau, Song Liu, Yonghong Song, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

From: KP Singh <kpsingh@google.com>

Document how eBPF programs (BPF_PROG_TYPE_LSM) can be loaded and
attached (BPF_LSM_MAC) to the LSM hooks.

Signed-off-by: KP Singh <kpsingh@google.com>
Reviewed-by: Brendan Jackman <jackmanb@google.com>
Reviewed-by: Florent Revest <revest@google.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
---
 Documentation/security/bpf.rst   | 165 +++++++++++++++++++++++++++++++
 Documentation/security/index.rst |   1 +
 MAINTAINERS                      |   1 +
 3 files changed, 167 insertions(+)
 create mode 100644 Documentation/security/bpf.rst

diff --git a/Documentation/security/bpf.rst b/Documentation/security/bpf.rst
new file mode 100644
index 000000000000..ec7d147c83b2
--- /dev/null
+++ b/Documentation/security/bpf.rst
@@ -0,0 +1,165 @@
+.. SPDX-License-Identifier: GPL-2.0+
+.. Copyright 2019 Google LLC.
+
+==========================
+eBPF Linux Security Module
+==========================
+
+This LSM allows runtime instrumentation of the LSM hooks by privileged users to
+implement system-wide MAC (Mandatory Access Control) and Audit policies using
+eBPF. The LSM is privileged and stackable and requires both ``CAP_SYS_ADMIN``
+and ``CAP_MAC_ADMIN`` for the loading of BPF programs and modification of MAC
+policies respectively.
+
+eBPF Programs
+==============
+
+`eBPF (extended BPF) <https://cilium.readthedocs.io/en/latest/bpf>`_ is a
+virtual machine-like construct in the Linux Kernel allowing the execution of
+verifiable, just-in-time compiled byte code at various points in the Kernel.
+
+The eBPF LSM adds a new type, ``BPF_PROG_TYPE_LSM``, of eBPF programs which
+have the following characteristics:
+
+   * Multiple eBPF programs can be attached to the same LSM hook
+   * The programs are always run after the static hooks (i.e. the ones
+     registered by SELinux, AppArmor, Smack etc.)
+   * LSM hooks can return an ``-EPERM`` to indicate the decision of the
+     MAC policy being enforced (``CONFIG_SECURITY_BPF_ENFORCE``) or
+     simply be used for auditing.
+   * If ``CONFIG_SECURITY_BPF_ENFORCE`` is enabled and a non-zero error
+     code is returned from the BPF program, no further BPF programs for
+     the hook are executed
+   * eBPF programs can be attached to all the LSM hooks, as
+     :doc:`/bpf/btf` type information is made available for all LSM
+     hooks, allowing the BPF verifier to perform runtime relocations
+     and validation on the programs
+
+Structure
+---------
+
+The example shows an eBPF program that can be attached to the ``file_mprotect``
+LSM hook:
+
+.. c:function:: int file_mprotect(struct vm_area_struct *vma, unsigned long reqprot, unsigned long prot);
+
+eBPF programs that use :doc:`/bpf/btf` do not need to include kernel headers
+for accessing information from the attached eBPF program's context. They can
+simply declare the structures in the eBPF program and only specify the fields
+that need to be accessed.
+
+.. code-block:: c
+
+	struct mm_struct {
+		unsigned long start_brk, brk, start_stack;
+	} __attribute__((preserve_access_index));
+
+	struct vm_area_struct {
+		unsigned long vm_start, vm_end;
+		struct mm_struct *vm_mm;
+	} __attribute__((preserve_access_index));
+
+
+.. note:: Only the names and sizes of the fields must match the type in the
+	  kernel; the order of the fields is irrelevant.
+
+This can be further simplified (if one has access to the BTF information at
+build time) by generating the ``vmlinux.h`` with:
+
+.. code-block:: console
+
+        # bpftool btf dump file <path-to-btf-vmlinux> format c > vmlinux.h
+
+.. note:: ``path-to-btf-vmlinux`` can be ``/sys/kernel/btf/vmlinux`` if the
+	  build environment matches the environment the BPF programs are
+	  deployed in.
+
+The ``vmlinux.h`` can then simply be included in the BPF programs without
+requiring the definition of the types.
+
+The eBPF programs can be declared using the ``BPF_PROG``
+macro defined in `tools/testing/selftests/bpf/bpf_trace_helpers.h`_. In this
+example:
+
+	* ``"lsm/file_mprotect"`` indicates the LSM hook that the program must
+	  be attached to
+	* ``mprotect_audit`` is the name of the eBPF program
+
+.. code-block:: c
+
+        SEC("lsm/file_mprotect")
+        int BPF_PROG(mprotect_audit, struct vm_area_struct *vma,
+                     unsigned long reqprot, unsigned long prot)
+	{
+		int is_heap;
+
+		is_heap = (vma->vm_start >= vma->vm_mm->start_brk &&
+			   vma->vm_end <= vma->vm_mm->brk);
+
+		/*
+		 * Return an -EPERM or write information to the perf events buffer
+		 * for auditing
+		 */
+	}
+
+The ``__attribute__((preserve_access_index))`` is a clang feature that allows
+the BPF verifier to update the offsets for the access at runtime using the
+:doc:`/bpf/btf` information. Since the BPF verifier is aware of the types, it
+also validates all the accesses made to the various types in the eBPF program.
+
+Loading
+-------
+
+eBPF programs can be loaded with the :manpage:`bpf(2)` syscall's
+``BPF_PROG_LOAD`` operation, or more simply by using the libbpf helper
+``bpf_prog_load_xattr``:
+
+
+.. code-block:: c
+
+	struct bpf_prog_load_attr attr = {
+		.file = "./prog.o",
+	};
+	struct bpf_object *prog_obj;
+	struct bpf_program *prog;
+	int prog_fd;
+
+	bpf_prog_load_xattr(&attr, &prog_obj, &prog_fd);
+
+Attachment to LSM Hooks
+-----------------------
+
+The LSM allows attachment of eBPF programs as LSM hooks using :manpage:`bpf(2)`
+syscall's ``BPF_PROG_ATTACH`` operation or more simply by
+using the libbpf helper ``bpf_program__attach_lsm``. In the code shown below
+``prog`` is the eBPF program loaded using ``BPF_PROG_LOAD``:
+
+.. code-block:: c
+
+	struct bpf_link *link;
+
+	link = bpf_program__attach_lsm(prog);
+
+The program can be detached from the LSM hook by *destroying* the ``link``
+returned by ``bpf_program__attach_lsm``:
+
+.. code-block:: c
+
+	bpf_link__destroy(link);
+
+Examples
+--------
+
+An example eBPF program can be found in
+`tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c`_ and the corresponding
+userspace code in
+`tools/testing/selftests/bpf/prog_tests/lsm_mprotect_audit.c`_
+
+.. Links
+.. _tools/testing/selftests/bpf/bpf_trace_helpers.h:
   https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/tools/testing/selftests/bpf/bpf_trace_helpers.h
+.. _tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c:
+   https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c
+.. _tools/testing/selftests/bpf/prog_tests/lsm_mprotect_audit.c:
+   https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/tools/testing/selftests/bpf/prog_tests/lsm_mprotect_audit.c
diff --git a/Documentation/security/index.rst b/Documentation/security/index.rst
index fc503dd689a7..844463df4547 100644
--- a/Documentation/security/index.rst
+++ b/Documentation/security/index.rst
@@ -5,6 +5,7 @@ Security Documentation
 .. toctree::
    :maxdepth: 1
 
+   bpf
    credentials
    IMA-templates
    keys/index
diff --git a/MAINTAINERS b/MAINTAINERS
index 32236d89d00b..e1de1a345205 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3213,6 +3213,7 @@ F:	include/linux/bpf_lsm.h
 F:	tools/testing/selftests/bpf/lsm_helpers.h
 F:	tools/testing/selftests/bpf/progs/lsm_mprotect_audit.c
 F:	tools/testing/selftests/bpf/prog_tests/lsm_mprotect_audit.c
+F:	Documentation/security/bpf.rst
 
 BROADCOM B44 10/100 ETHERNET DRIVER
 M:	Michael Chan <michael.chan@broadcom.com>
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM
  2020-01-23 15:24 ` [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM KP Singh
@ 2020-01-23 17:03   ` Casey Schaufler
  2020-01-23 17:59     ` KP Singh
  2020-02-11  3:12   ` Alexei Starovoitov
  1 sibling, 1 reply; 44+ messages in thread
From: Casey Schaufler @ 2020-01-23 17:03 UTC (permalink / raw)
  To: KP Singh, LKML, Linux Security Module list, bpf; +Cc: Casey Schaufler

On 1/23/2020 7:24 AM, KP Singh wrote:
> From: KP Singh <kpsingh@google.com>
>
> - The list of hooks registered by an LSM is currently immutable as they
>   are declared with __lsm_ro_after_init and they are attached to a
>   security_hook_heads struct.
> - For the BPF LSM we need to de/register the hooks at runtime. Making
>   the existing security_hook_heads mutable broadens an
>   attack vector, so a separate security_hook_heads is added for only
>   those that ~must~ be mutable.
> - These mutable hooks are run only after all the static hooks have
>   successfully executed.
>
> This is based on the ideas discussed in:
>
>   https://lore.kernel.org/lkml/20180408065916.GA2832@ircssh-2.c.rugged-nimbus-611.internal
>
> Reviewed-by: Brendan Jackman <jackmanb@google.com>
> Reviewed-by: Florent Revest <revest@google.com>
> Reviewed-by: Thomas Garnier <thgarnie@google.com>
> Signed-off-by: KP Singh <kpsingh@google.com>
> ---
>  MAINTAINERS             |  1 +
>  include/linux/bpf_lsm.h | 72 +++++++++++++++++++++++++++++++++++++++++
>  security/bpf/Kconfig    |  1 +
>  security/bpf/Makefile   |  2 +-
>  security/bpf/hooks.c    | 20 ++++++++++++
>  security/bpf/lsm.c      |  7 ++++
>  security/security.c     | 25 +++++++-------
>  7 files changed, 116 insertions(+), 12 deletions(-)
>  create mode 100644 include/linux/bpf_lsm.h
>  create mode 100644 security/bpf/hooks.c
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index e2b7f76a1a70..c606b3d89992 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -3209,6 +3209,7 @@ L:	linux-security-module@vger.kernel.org
>  L:	bpf@vger.kernel.org
>  S:	Maintained
>  F:	security/bpf/
> +F:	include/linux/bpf_lsm.h
>  
>  BROADCOM B44 10/100 ETHERNET DRIVER
>  M:	Michael Chan <michael.chan@broadcom.com>
> diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
> new file mode 100644
> index 000000000000..57c20b2cd2f4
> --- /dev/null
> +++ b/include/linux/bpf_lsm.h
> @@ -0,0 +1,72 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +/*
> + * Copyright 2019 Google LLC.
> + */
> +
> +#ifndef _LINUX_BPF_LSM_H
> +#define _LINUX_BPF_LSM_H
> +
> +#include <linux/bpf.h>
> +#include <linux/lsm_hooks.h>
> +
> +#ifdef CONFIG_SECURITY_BPF
> +
> +/* Mutable hooks defined at runtime and executed after all the statically
> + * defined LSM hooks.
> + */
> +extern struct security_hook_heads bpf_lsm_hook_heads;
> +
> +int bpf_lsm_srcu_read_lock(void);
> +void bpf_lsm_srcu_read_unlock(int idx);
> +
> +#define CALL_BPF_LSM_VOID_HOOKS(FUNC, ...)			\
> +	do {							\
> +		struct security_hook_list *P;			\
> +		int _idx;					\
> +								\
> +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
> +			break;					\
> +								\
> +		_idx = bpf_lsm_srcu_read_lock();		\
> +		hlist_for_each_entry(P, &bpf_lsm_hook_heads.FUNC, list) \
> +			P->hook.FUNC(__VA_ARGS__);		\
> +		bpf_lsm_srcu_read_unlock(_idx);			\
> +	} while (0)
> +
> +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) ({			\
> +	int _ret = 0;						\
> +	do {							\
> +		struct security_hook_list *P;			\
> +		int _idx;					\
> +								\
> +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
> +			break;					\
> +								\
> +		_idx = bpf_lsm_srcu_read_lock();		\
> +								\
> +		hlist_for_each_entry(P,				\
> +			&bpf_lsm_hook_heads.FUNC, list) {	\
> +			_ret = P->hook.FUNC(__VA_ARGS__);		\
> +			if (_ret && IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE)) \
> +				break;				\
> +		}						\
> +		bpf_lsm_srcu_read_unlock(_idx);			\
> +	} while (0);						\
> +	IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE) ? _ret : 0;	\
> +})
> +
> +#else /* !CONFIG_SECURITY_BPF */
> +
> +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) (0)
> +#define CALL_BPF_LSM_VOID_HOOKS(...)
> +
> +static inline int bpf_lsm_srcu_read_lock(void)
> +{
> +	return 0;
> +}
> +static inline void bpf_lsm_srcu_read_unlock(int idx) {}
> +
> +#endif /* CONFIG_SECURITY_BPF */
> +
> +#endif /* _LINUX_BPF_LSM_H */
> diff --git a/security/bpf/Kconfig b/security/bpf/Kconfig
> index a5f6c67ae526..595e4ad597ae 100644
> --- a/security/bpf/Kconfig
> +++ b/security/bpf/Kconfig
> @@ -6,6 +6,7 @@ config SECURITY_BPF
>  	bool "BPF-based MAC and audit policy"
>  	depends on SECURITY
>  	depends on BPF_SYSCALL
> +	depends on SRCU
>  	help
>  	  This enables instrumentation of the security hooks with
>  	  eBPF programs.
> diff --git a/security/bpf/Makefile b/security/bpf/Makefile
> index c78a8a056e7e..c526927c337d 100644
> --- a/security/bpf/Makefile
> +++ b/security/bpf/Makefile
> @@ -2,4 +2,4 @@
>  #
>  # Copyright 2019 Google LLC.
>  
> -obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o
> +obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o hooks.o
> diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c
> new file mode 100644
> index 000000000000..b123d9cb4cd4
> --- /dev/null
> +++ b/security/bpf/hooks.c
> @@ -0,0 +1,20 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/*
> + * Copyright 2019 Google LLC.
> + */
> +
> +#include <linux/bpf_lsm.h>
> +#include <linux/srcu.h>
> +
> +DEFINE_STATIC_SRCU(security_hook_srcu);
> +
> +int bpf_lsm_srcu_read_lock(void)
> +{
> +	return srcu_read_lock(&security_hook_srcu);
> +}
> +
> +void bpf_lsm_srcu_read_unlock(int idx)
> +{
> +	return srcu_read_unlock(&security_hook_srcu, idx);
> +}
> diff --git a/security/bpf/lsm.c b/security/bpf/lsm.c
> index dc9ac03c7aa0..a25a068e1781 100644
> --- a/security/bpf/lsm.c
> +++ b/security/bpf/lsm.c
> @@ -4,6 +4,7 @@
>   * Copyright 2019 Google LLC.
>   */
>  
> +#include <linux/bpf_lsm.h>
>  #include <linux/lsm_hooks.h>
>  
>  /* This is only for internal hooks, always statically shipped as part of the
> @@ -12,6 +13,12 @@
>   */
>  static struct security_hook_list bpf_lsm_hooks[] __lsm_ro_after_init = {};
>  
> +/* Security hooks registered dynamically by the BPF LSM and must be accessed
> + * by holding bpf_lsm_srcu_read_lock and bpf_lsm_srcu_read_unlock. The mutable
> + * hooks dynamically allocated by the BPF LSM are appended here.
> + */
> +struct security_hook_heads bpf_lsm_hook_heads;
> +
>  static int __init bpf_lsm_init(void)
>  {
>  	security_add_hooks(bpf_lsm_hooks, ARRAY_SIZE(bpf_lsm_hooks), "bpf");
> diff --git a/security/security.c b/security/security.c
> index 30a8aa700557..95a46ca25dcd 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -27,6 +27,7 @@
>  #include <linux/backing-dev.h>
>  #include <linux/string.h>
>  #include <linux/msg.h>
> +#include <linux/bpf_lsm.h>
>  #include <net/flow.h>
>  
>  #define MAX_LSM_EVM_XATTR	2
> @@ -657,20 +658,22 @@ static void __init lsm_early_task(struct task_struct *task)
>  								\
>  		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
>  			P->hook.FUNC(__VA_ARGS__);		\
> +		CALL_BPF_LSM_VOID_HOOKS(FUNC, __VA_ARGS__);	\

I'm sorry if I wasn't clear on the v2 review.
This does not belong in the infrastructure. You should be
doing all the bpf_lsm hook processing in your module.
bpf_lsm_task_alloc() should loop through all the bpf
task_alloc hooks if they have to be handled differently
from "normal" LSM hooks.

>  	} while (0)
>  
> -#define call_int_hook(FUNC, IRC, ...) ({			\
> -	int RC = IRC;						\
> -	do {							\
> -		struct security_hook_list *P;			\
> -								\
> +#define call_int_hook(FUNC, IRC, ...) ({				\
> +	int RC = IRC;							\
> +	do {								\
> +		struct security_hook_list *P;				\
>  		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
> -			RC = P->hook.FUNC(__VA_ARGS__);		\
> -			if (RC != 0)				\
> -				break;				\
> -		}						\
> -	} while (0);						\
> -	RC;							\
> +			RC = P->hook.FUNC(__VA_ARGS__);			\
> +			if (RC != 0)					\
> +				break;					\
> +		}							\
> +		if (RC == 0)						\
> +			RC = CALL_BPF_LSM_INT_HOOKS(FUNC, __VA_ARGS__);	\
> +	} while (0);							\
> +	RC;								\
>  })
>  
>  /* Security operations */

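The control flow being debated here can be sketched as a small userspace model: static hooks run first and short-circuit on error, the mutable BPF hooks run only when the static hooks all returned 0, and the BPF result is propagated only when ``CONFIG_SECURITY_BPF_ENFORCE`` is set (otherwise audit-only). This is an illustrative sketch of the macro semantics in the patch, not kernel code; all names below are invented:

```c
#include <assert.h>
#include <stddef.h>

typedef int (*hook_fn)(void);

/* Run a list of hooks; stop at the first non-zero return only when
 * stop_on_err is set. stop_on_err models CONFIG_SECURITY_BPF_ENFORCE
 * for the mutable list, and the unconditional short-circuit for the
 * static list.
 */
static int call_hooks(const hook_fn *hooks, size_t n, int stop_on_err)
{
	int rc = 0;

	for (size_t i = 0; i < n; i++) {
		rc = hooks[i]();
		if (rc && stop_on_err)
			break;
	}
	return rc;
}

/* Models the patched call_int_hook(): the mutable BPF hooks run only
 * when the static hooks returned 0, and their return value is
 * discarded unless enforcement is enabled.
 */
static int call_int_hook_model(const hook_fn *stat_hooks, size_t n_stat,
			       const hook_fn *bpf_hooks, size_t n_bpf,
			       int enforce)
{
	int rc = call_hooks(stat_hooks, n_stat, 1);

	if (rc == 0) {
		int bpf_rc = call_hooks(bpf_hooks, n_bpf, enforce);

		rc = enforce ? bpf_rc : 0;
	}
	return rc;
}

static int hook_allow(void) { return 0; }
static int hook_deny(void) { return -13; } /* -EACCES */
```

With enforcement off, a denying BPF hook still runs (for auditing) but the overall hook returns 0.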

* Re: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM
  2020-01-23 17:03   ` Casey Schaufler
@ 2020-01-23 17:59     ` KP Singh
  2020-01-23 19:09       ` Casey Schaufler
  0 siblings, 1 reply; 44+ messages in thread
From: KP Singh @ 2020-01-23 17:59 UTC (permalink / raw)
  To: Casey Schaufler; +Cc: KP Singh, LKML, Linux Security Module list, bpf

On 23-Jan 09:03, Casey Schaufler wrote:
> On 1/23/2020 7:24 AM, KP Singh wrote:
> > From: KP Singh <kpsingh@google.com>
> >
> > - The list of hooks registered by an LSM is currently immutable as they
> >   are declared with __lsm_ro_after_init and they are attached to a
> >   security_hook_heads struct.
> > - For the BPF LSM we need to de/register the hooks at runtime. Making
> >   the existing security_hook_heads mutable broadens an
> >   attack vector, so a separate security_hook_heads is added for only
> >   those that ~must~ be mutable.
> > - These mutable hooks are run only after all the static hooks have
> >   successfully executed.
> >
> > This is based on the ideas discussed in:
> >
> >   https://lore.kernel.org/lkml/20180408065916.GA2832@ircssh-2.c.rugged-nimbus-611.internal
> >
> > Reviewed-by: Brendan Jackman <jackmanb@google.com>
> > Reviewed-by: Florent Revest <revest@google.com>
> > Reviewed-by: Thomas Garnier <thgarnie@google.com>
> > Signed-off-by: KP Singh <kpsingh@google.com>
> > ---
> >  MAINTAINERS             |  1 +
> >  include/linux/bpf_lsm.h | 72 +++++++++++++++++++++++++++++++++++++++++
> >  security/bpf/Kconfig    |  1 +
> >  security/bpf/Makefile   |  2 +-
> >  security/bpf/hooks.c    | 20 ++++++++++++
> >  security/bpf/lsm.c      |  7 ++++
> >  security/security.c     | 25 +++++++-------
> >  7 files changed, 116 insertions(+), 12 deletions(-)
> >  create mode 100644 include/linux/bpf_lsm.h
> >  create mode 100644 security/bpf/hooks.c
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index e2b7f76a1a70..c606b3d89992 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -3209,6 +3209,7 @@ L:	linux-security-module@vger.kernel.org
> >  L:	bpf@vger.kernel.org
> >  S:	Maintained
> >  F:	security/bpf/
> > +F:	include/linux/bpf_lsm.h
> >  
> >  BROADCOM B44 10/100 ETHERNET DRIVER
> >  M:	Michael Chan <michael.chan@broadcom.com>
> > diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
> > new file mode 100644
> > index 000000000000..57c20b2cd2f4
> > --- /dev/null
> > +++ b/include/linux/bpf_lsm.h
> > @@ -0,0 +1,72 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +
> > +/*
> > + * Copyright 2019 Google LLC.
> > + */
> > +
> > +#ifndef _LINUX_BPF_LSM_H
> > +#define _LINUX_BPF_LSM_H
> > +
> > +#include <linux/bpf.h>
> > +#include <linux/lsm_hooks.h>
> > +
> > +#ifdef CONFIG_SECURITY_BPF
> > +
> > +/* Mutable hooks defined at runtime and executed after all the statically
> > + * defined LSM hooks.
> > + */
> > +extern struct security_hook_heads bpf_lsm_hook_heads;
> > +
> > +int bpf_lsm_srcu_read_lock(void);
> > +void bpf_lsm_srcu_read_unlock(int idx);
> > +
> > +#define CALL_BPF_LSM_VOID_HOOKS(FUNC, ...)			\
> > +	do {							\
> > +		struct security_hook_list *P;			\
> > +		int _idx;					\
> > +								\
> > +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
> > +			break;					\
> > +								\
> > +		_idx = bpf_lsm_srcu_read_lock();		\
> > +		hlist_for_each_entry(P, &bpf_lsm_hook_heads.FUNC, list) \
> > +			P->hook.FUNC(__VA_ARGS__);		\
> > +		bpf_lsm_srcu_read_unlock(_idx);			\
> > +	} while (0)
> > +
> > +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) ({			\
> > +	int _ret = 0;						\
> > +	do {							\
> > +		struct security_hook_list *P;			\
> > +		int _idx;					\
> > +								\
> > +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
> > +			break;					\
> > +								\
> > +		_idx = bpf_lsm_srcu_read_lock();		\
> > +								\
> > +		hlist_for_each_entry(P,				\
> > +			&bpf_lsm_hook_heads.FUNC, list) {	\
> > +			_ret = P->hook.FUNC(__VA_ARGS__);		\
> > +			if (_ret && IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE)) \
> > +				break;				\
> > +		}						\
> > +		bpf_lsm_srcu_read_unlock(_idx);			\
> > +	} while (0);						\
> > +	IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE) ? _ret : 0;	\
> > +})
> > +
> > +#else /* !CONFIG_SECURITY_BPF */
> > +
> > +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) (0)
> > +#define CALL_BPF_LSM_VOID_HOOKS(...)
> > +
> > +static inline int bpf_lsm_srcu_read_lock(void)
> > +{
> > +	return 0;
> > +}
> > +static inline void bpf_lsm_srcu_read_unlock(int idx) {}
> > +
> > +#endif /* CONFIG_SECURITY_BPF */
> > +
> > +#endif /* _LINUX_BPF_LSM_H */
> > diff --git a/security/bpf/Kconfig b/security/bpf/Kconfig
> > index a5f6c67ae526..595e4ad597ae 100644
> > --- a/security/bpf/Kconfig
> > +++ b/security/bpf/Kconfig
> > @@ -6,6 +6,7 @@ config SECURITY_BPF
> >  	bool "BPF-based MAC and audit policy"
> >  	depends on SECURITY
> >  	depends on BPF_SYSCALL
> > +	depends on SRCU
> >  	help
> >  	  This enables instrumentation of the security hooks with
> >  	  eBPF programs.
> > diff --git a/security/bpf/Makefile b/security/bpf/Makefile
> > index c78a8a056e7e..c526927c337d 100644
> > --- a/security/bpf/Makefile
> > +++ b/security/bpf/Makefile
> > @@ -2,4 +2,4 @@
> >  #
> >  # Copyright 2019 Google LLC.
> >  
> > -obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o
> > +obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o hooks.o
> > diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c
> > new file mode 100644
> > index 000000000000..b123d9cb4cd4
> > --- /dev/null
> > +++ b/security/bpf/hooks.c
> > @@ -0,0 +1,20 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +/*
> > + * Copyright 2019 Google LLC.
> > + */
> > +
> > +#include <linux/bpf_lsm.h>
> > +#include <linux/srcu.h>
> > +
> > +DEFINE_STATIC_SRCU(security_hook_srcu);
> > +
> > +int bpf_lsm_srcu_read_lock(void)
> > +{
> > +	return srcu_read_lock(&security_hook_srcu);
> > +}
> > +
> > +void bpf_lsm_srcu_read_unlock(int idx)
> > +{
> > +	return srcu_read_unlock(&security_hook_srcu, idx);
> > +}
> > diff --git a/security/bpf/lsm.c b/security/bpf/lsm.c
> > index dc9ac03c7aa0..a25a068e1781 100644
> > --- a/security/bpf/lsm.c
> > +++ b/security/bpf/lsm.c
> > @@ -4,6 +4,7 @@
> >   * Copyright 2019 Google LLC.
> >   */
> >  
> > +#include <linux/bpf_lsm.h>
> >  #include <linux/lsm_hooks.h>
> >  
> >  /* This is only for internal hooks, always statically shipped as part of the
> > @@ -12,6 +13,12 @@
> >   */
> >  static struct security_hook_list bpf_lsm_hooks[] __lsm_ro_after_init = {};
> >  
> > +/* Security hooks registered dynamically by the BPF LSM and must be accessed
> > + * by holding bpf_lsm_srcu_read_lock and bpf_lsm_srcu_read_unlock. The mutable
> > + * hooks dynamically allocated by the BPF LSM are appended here.
> > + */
> > +struct security_hook_heads bpf_lsm_hook_heads;
> > +
> >  static int __init bpf_lsm_init(void)
> >  {
> >  	security_add_hooks(bpf_lsm_hooks, ARRAY_SIZE(bpf_lsm_hooks), "bpf");
> > diff --git a/security/security.c b/security/security.c
> > index 30a8aa700557..95a46ca25dcd 100644
> > --- a/security/security.c
> > +++ b/security/security.c
> > @@ -27,6 +27,7 @@
> >  #include <linux/backing-dev.h>
> >  #include <linux/string.h>
> >  #include <linux/msg.h>
> > +#include <linux/bpf_lsm.h>
> >  #include <net/flow.h>
> >  
> >  #define MAX_LSM_EVM_XATTR	2
> > @@ -657,20 +658,22 @@ static void __init lsm_early_task(struct task_struct *task)
> >  								\
> >  		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
> >  			P->hook.FUNC(__VA_ARGS__);		\
> > +		CALL_BPF_LSM_VOID_HOOKS(FUNC, __VA_ARGS__);	\
> 
> I'm sorry if I wasn't clear on the v2 review.
> This does not belong in the infrastructure. You should be
> doing all the bpf_lsm hook processing in your module.
> bpf_lsm_task_alloc() should loop through all the bpf
> task_alloc hooks if they have to be handled differently
> from "normal" LSM hooks.

The BPF LSM does not define static hooks (the ones registered to
security_hook_heads in security.c with __lsm_ro_after_init) for each
LSM hook. If it tries to do that, one ends up with what was in v1:

  https://lore.kernel.org/bpf/20191220154208.15895-7-kpsingh@chromium.org

This gets quite ugly (security/bpf/hooks.h from v1) and was noted by
the BPF maintainers:

  https://lore.kernel.org/bpf/20191222012722.gdqhppxpfmqfqbld@ast-mbp.dhcp.thefacebook.com/

As I mentioned, some of the ideas we used here are based on:

  https://lore.kernel.org/lkml/20180408065916.GA2832@ircssh-2.c.rugged-nimbus-611.internal

Which gave each LSM the ability to add mutable hooks at runtime. If
you prefer, we can make this generic and allow LSMs to register
mutable hooks, with the BPF LSM being the only LSM that uses it (and
enforce this with a whitelist).

Would this generic approach be something you would consider better
than just calling the BPF mutable hooks directly?

- KP

> 
> >  	} while (0)
> >  
> > -#define call_int_hook(FUNC, IRC, ...) ({			\
> > -	int RC = IRC;						\
> > -	do {							\
> > -		struct security_hook_list *P;			\
> > -								\
> > +#define call_int_hook(FUNC, IRC, ...) ({				\
> > +	int RC = IRC;							\
> > +	do {								\
> > +		struct security_hook_list *P;				\
> >  		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
> > -			RC = P->hook.FUNC(__VA_ARGS__);		\
> > -			if (RC != 0)				\
> > -				break;				\
> > -		}						\
> > -	} while (0);						\
> > -	RC;							\
> > +			RC = P->hook.FUNC(__VA_ARGS__);			\
> > +			if (RC != 0)					\
> > +				break;					\
> > +		}							\
> > +		if (RC == 0)						\
> > +			RC = CALL_BPF_LSM_INT_HOOKS(FUNC, __VA_ARGS__);	\
> > +	} while (0);							\
> > +	RC;								\
> >  })
> >  
> >  /* Security operations */


* Re: [PATCH bpf-next v3 08/10] tools/libbpf: Add support for BPF_PROG_TYPE_LSM
  2020-01-23 15:24 ` [PATCH bpf-next v3 08/10] tools/libbpf: Add support for BPF_PROG_TYPE_LSM KP Singh
@ 2020-01-23 18:00   ` Andrii Nakryiko
  2020-01-24 14:16     ` KP Singh
  0 siblings, 1 reply; 44+ messages in thread
From: Andrii Nakryiko @ 2020-01-23 18:00 UTC (permalink / raw)
  To: KP Singh
  Cc: open list, bpf, linux-security-module, Brendan Jackman,
	Florent Revest, Thomas Garnier, Alexei Starovoitov,
	Daniel Borkmann, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Jann Horn,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Martin KaFai Lau, Song Liu,
	Yonghong Song, Serge E. Hallyn, Mauro Carvalho Chehab,
	David S. Miller, Greg Kroah-Hartman, Nicolas Ferre,
	Stanislav Fomichev, Quentin Monnet, Andrey Ignatov, Joe Stringer

On Thu, Jan 23, 2020 at 7:25 AM KP Singh <kpsingh@chromium.org> wrote:
>
> From: KP Singh <kpsingh@google.com>
>
> * Add functionality in libbpf to attach eBPF program to LSM hooks
> * Lookup the index of the LSM hook in security_hook_heads and pass it in
>   attr->lsm_hook_idx
>
> Signed-off-by: KP Singh <kpsingh@google.com>
> Reviewed-by: Brendan Jackman <jackmanb@google.com>
> Reviewed-by: Florent Revest <revest@google.com>
> Reviewed-by: Thomas Garnier <thgarnie@google.com>
> ---

Looks good, but see few nits below.

Acked-by: Andrii Nakryiko <andriin@fb.com>

>  tools/lib/bpf/bpf.c      |   6 ++-
>  tools/lib/bpf/bpf.h      |   1 +
>  tools/lib/bpf/libbpf.c   | 104 +++++++++++++++++++++++++++++++++++++--
>  tools/lib/bpf/libbpf.h   |   4 ++
>  tools/lib/bpf/libbpf.map |   3 ++
>  5 files changed, 114 insertions(+), 4 deletions(-)
>

[...]

> @@ -5084,6 +5099,8 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
>                 if (prog->type != BPF_PROG_TYPE_UNSPEC)
>                         continue;
>
> +
> +

why these extra lines?

>                 err = libbpf_prog_type_by_name(prog->section_name, &prog_type,
>                                                &attach_type);
>                 if (err == -ESRCH)
> @@ -6160,6 +6177,7 @@ bool bpf_program__is_##NAME(const struct bpf_program *prog)       \
>  }                                                              \
>
>  BPF_PROG_TYPE_FNS(socket_filter, BPF_PROG_TYPE_SOCKET_FILTER);
> +BPF_PROG_TYPE_FNS(lsm, BPF_PROG_TYPE_LSM);
>  BPF_PROG_TYPE_FNS(kprobe, BPF_PROG_TYPE_KPROBE);
>  BPF_PROG_TYPE_FNS(sched_cls, BPF_PROG_TYPE_SCHED_CLS);
>  BPF_PROG_TYPE_FNS(sched_act, BPF_PROG_TYPE_SCHED_ACT);
> @@ -6226,6 +6244,8 @@ static struct bpf_link *attach_raw_tp(const struct bpf_sec_def *sec,
>                                       struct bpf_program *prog);
>  static struct bpf_link *attach_trace(const struct bpf_sec_def *sec,
>                                      struct bpf_program *prog);
> +static struct bpf_link *attach_lsm(const struct bpf_sec_def *sec,
> +                                  struct bpf_program *prog);
>
>  struct bpf_sec_def {
>         const char *sec;
> @@ -6272,6 +6292,9 @@ static const struct bpf_sec_def section_defs[] = {
>         SEC_DEF("freplace/", EXT,
>                 .is_attach_btf = true,
>                 .attach_fn = attach_trace),
> +       SEC_DEF("lsm/", LSM,
> +               .expected_attach_type = BPF_LSM_MAC,

curious, will there be non-MAC LSM programs? If yes, how are they
going to be different, and which prefix will we use then?

> +               .attach_fn = attach_lsm),
>         BPF_PROG_SEC("xdp",                     BPF_PROG_TYPE_XDP),
>         BPF_PROG_SEC("perf_event",              BPF_PROG_TYPE_PERF_EVENT),
>         BPF_PROG_SEC("lwt_in",                  BPF_PROG_TYPE_LWT_IN),
> @@ -6533,6 +6556,44 @@ static int bpf_object__collect_struct_ops_map_reloc(struct bpf_object *obj,
>         return -EINVAL;
>  }
>
> +static __s32 find_lsm_hook_idx(struct bpf_program *prog)

nit: I'd stick to int for return result, we barely ever use __s32 in libbpf.c

[...]

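For context on the section handling under review: libbpf resolves a program's type by matching its ELF section name against a table of prefixes, and for LSM programs the text after the ``lsm/`` prefix names the hook to attach to. A rough userspace sketch of that lookup follows (the function name is invented; this is not libbpf's actual implementation):

```c
#include <stddef.h>
#include <string.h>

/* Toy version of libbpf's section_defs prefix matching: given a
 * section name such as "lsm/file_mprotect", return the attach point
 * name ("file_mprotect") when the "lsm/" prefix matches, else NULL.
 */
static const char *lsm_attach_name(const char *sec)
{
	const char *prefix = "lsm/";
	size_t plen = strlen(prefix);

	if (strncmp(sec, prefix, plen) != 0)
		return NULL;
	return sec + plen;
}
```

In the patch, the extracted hook name is then looked up in ``security_hook_heads`` via BTF to compute ``lsm_hook_idx``.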

* Re: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM
  2020-01-23 17:59     ` KP Singh
@ 2020-01-23 19:09       ` Casey Schaufler
  2020-01-23 22:24         ` KP Singh
  0 siblings, 1 reply; 44+ messages in thread
From: Casey Schaufler @ 2020-01-23 19:09 UTC (permalink / raw)
  To: KP Singh; +Cc: LKML, Linux Security Module list, bpf

On 1/23/2020 9:59 AM, KP Singh wrote:
> On 23-Jan 09:03, Casey Schaufler wrote:
>> On 1/23/2020 7:24 AM, KP Singh wrote:
>>> From: KP Singh <kpsingh@google.com>
>>>
>>> - The list of hooks registered by an LSM is currently immutable as they
>>>   are declared with __lsm_ro_after_init and they are attached to a
>>>   security_hook_heads struct.
>>> - For the BPF LSM we need to de/register the hooks at runtime. Making
>>>   the existing security_hook_heads mutable broadens an
>>>   attack vector, so a separate security_hook_heads is added for only
>>>   those that ~must~ be mutable.
>>> - These mutable hooks are run only after all the static hooks have
>>>   successfully executed.
>>>
>>> This is based on the ideas discussed in:
>>>
>>>   https://lore.kernel.org/lkml/20180408065916.GA2832@ircssh-2.c.rugged-nimbus-611.internal
>>>
>>> Reviewed-by: Brendan Jackman <jackmanb@google.com>
>>> Reviewed-by: Florent Revest <revest@google.com>
>>> Reviewed-by: Thomas Garnier <thgarnie@google.com>
>>> Signed-off-by: KP Singh <kpsingh@google.com>
>>> ---
>>>  MAINTAINERS             |  1 +
>>>  include/linux/bpf_lsm.h | 72 +++++++++++++++++++++++++++++++++++++++++
>>>  security/bpf/Kconfig    |  1 +
>>>  security/bpf/Makefile   |  2 +-
>>>  security/bpf/hooks.c    | 20 ++++++++++++
>>>  security/bpf/lsm.c      |  7 ++++
>>>  security/security.c     | 25 +++++++-------
>>>  7 files changed, 116 insertions(+), 12 deletions(-)
>>>  create mode 100644 include/linux/bpf_lsm.h
>>>  create mode 100644 security/bpf/hooks.c
>>>
>>> diff --git a/MAINTAINERS b/MAINTAINERS
>>> index e2b7f76a1a70..c606b3d89992 100644
>>> --- a/MAINTAINERS
>>> +++ b/MAINTAINERS
>>> @@ -3209,6 +3209,7 @@ L:	linux-security-module@vger.kernel.org
>>>  L:	bpf@vger.kernel.org
>>>  S:	Maintained
>>>  F:	security/bpf/
>>> +F:	include/linux/bpf_lsm.h
>>>  
>>>  BROADCOM B44 10/100 ETHERNET DRIVER
>>>  M:	Michael Chan <michael.chan@broadcom.com>
>>> diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
>>> new file mode 100644
>>> index 000000000000..57c20b2cd2f4
>>> --- /dev/null
>>> +++ b/include/linux/bpf_lsm.h
>>> @@ -0,0 +1,72 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +
>>> +/*
>>> + * Copyright 2019 Google LLC.
>>> + */
>>> +
>>> +#ifndef _LINUX_BPF_LSM_H
>>> +#define _LINUX_BPF_LSM_H
>>> +
>>> +#include <linux/bpf.h>
>>> +#include <linux/lsm_hooks.h>
>>> +
>>> +#ifdef CONFIG_SECURITY_BPF
>>> +
>>> +/* Mutable hooks defined at runtime and executed after all the statically
>>> + * defined LSM hooks.
>>> + */
>>> +extern struct security_hook_heads bpf_lsm_hook_heads;
>>> +
>>> +int bpf_lsm_srcu_read_lock(void);
>>> +void bpf_lsm_srcu_read_unlock(int idx);
>>> +
>>> +#define CALL_BPF_LSM_VOID_HOOKS(FUNC, ...)			\
>>> +	do {							\
>>> +		struct security_hook_list *P;			\
>>> +		int _idx;					\
>>> +								\
>>> +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
>>> +			break;					\
>>> +								\
>>> +		_idx = bpf_lsm_srcu_read_lock();		\
>>> +		hlist_for_each_entry(P, &bpf_lsm_hook_heads.FUNC, list) \
>>> +			P->hook.FUNC(__VA_ARGS__);		\
>>> +		bpf_lsm_srcu_read_unlock(_idx);			\
>>> +	} while (0)
>>> +
>>> +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) ({			\
>>> +	int _ret = 0;						\
>>> +	do {							\
>>> +		struct security_hook_list *P;			\
>>> +		int _idx;					\
>>> +								\
>>> +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
>>> +			break;					\
>>> +								\
>>> +		_idx = bpf_lsm_srcu_read_lock();		\
>>> +								\
>>> +		hlist_for_each_entry(P,				\
>>> +			&bpf_lsm_hook_heads.FUNC, list) {	\
>>> +			_ret = P->hook.FUNC(__VA_ARGS__);		\
>>> +			if (_ret && IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE)) \
>>> +				break;				\
>>> +		}						\
>>> +		bpf_lsm_srcu_read_unlock(_idx);			\
>>> +	} while (0);						\
>>> +	IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE) ? _ret : 0;	\
>>> +})
>>> +
>>> +#else /* !CONFIG_SECURITY_BPF */
>>> +
>>> +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) (0)
>>> +#define CALL_BPF_LSM_VOID_HOOKS(...)
>>> +
>>> +static inline int bpf_lsm_srcu_read_lock(void)
>>> +{
>>> +	return 0;
>>> +}
>>> +static inline void bpf_lsm_srcu_read_unlock(int idx) {}
>>> +
>>> +#endif /* CONFIG_SECURITY_BPF */
>>> +
>>> +#endif /* _LINUX_BPF_LSM_H */
>>> diff --git a/security/bpf/Kconfig b/security/bpf/Kconfig
>>> index a5f6c67ae526..595e4ad597ae 100644
>>> --- a/security/bpf/Kconfig
>>> +++ b/security/bpf/Kconfig
>>> @@ -6,6 +6,7 @@ config SECURITY_BPF
>>>  	bool "BPF-based MAC and audit policy"
>>>  	depends on SECURITY
>>>  	depends on BPF_SYSCALL
>>> +	depends on SRCU
>>>  	help
>>>  	  This enables instrumentation of the security hooks with
>>>  	  eBPF programs.
>>> diff --git a/security/bpf/Makefile b/security/bpf/Makefile
>>> index c78a8a056e7e..c526927c337d 100644
>>> --- a/security/bpf/Makefile
>>> +++ b/security/bpf/Makefile
>>> @@ -2,4 +2,4 @@
>>>  #
>>>  # Copyright 2019 Google LLC.
>>>  
>>> -obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o
>>> +obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o hooks.o
>>> diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c
>>> new file mode 100644
>>> index 000000000000..b123d9cb4cd4
>>> --- /dev/null
>>> +++ b/security/bpf/hooks.c
>>> @@ -0,0 +1,20 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +
>>> +/*
>>> + * Copyright 2019 Google LLC.
>>> + */
>>> +
>>> +#include <linux/bpf_lsm.h>
>>> +#include <linux/srcu.h>
>>> +
>>> +DEFINE_STATIC_SRCU(security_hook_srcu);
>>> +
>>> +int bpf_lsm_srcu_read_lock(void)
>>> +{
>>> +	return srcu_read_lock(&security_hook_srcu);
>>> +}
>>> +
>>> +void bpf_lsm_srcu_read_unlock(int idx)
>>> +{
>>> +	return srcu_read_unlock(&security_hook_srcu, idx);
>>> +}
>>> diff --git a/security/bpf/lsm.c b/security/bpf/lsm.c
>>> index dc9ac03c7aa0..a25a068e1781 100644
>>> --- a/security/bpf/lsm.c
>>> +++ b/security/bpf/lsm.c
>>> @@ -4,6 +4,7 @@
>>>   * Copyright 2019 Google LLC.
>>>   */
>>>  
>>> +#include <linux/bpf_lsm.h>
>>>  #include <linux/lsm_hooks.h>
>>>  
>>>  /* This is only for internal hooks, always statically shipped as part of the
>>> @@ -12,6 +13,12 @@
>>>   */
>>>  static struct security_hook_list bpf_lsm_hooks[] __lsm_ro_after_init = {};
>>>  
>>> +/* Security hooks registered dynamically by the BPF LSM and must be accessed
>>> + * by holding bpf_lsm_srcu_read_lock and bpf_lsm_srcu_read_unlock. The mutable
>>> + * hooks dynamically allocated by the BPF LSM are appended here.
>>> + */
>>> +struct security_hook_heads bpf_lsm_hook_heads;
>>> +
>>>  static int __init bpf_lsm_init(void)
>>>  {
>>>  	security_add_hooks(bpf_lsm_hooks, ARRAY_SIZE(bpf_lsm_hooks), "bpf");
>>> diff --git a/security/security.c b/security/security.c
>>> index 30a8aa700557..95a46ca25dcd 100644
>>> --- a/security/security.c
>>> +++ b/security/security.c
>>> @@ -27,6 +27,7 @@
>>>  #include <linux/backing-dev.h>
>>>  #include <linux/string.h>
>>>  #include <linux/msg.h>
>>> +#include <linux/bpf_lsm.h>
>>>  #include <net/flow.h>
>>>  
>>>  #define MAX_LSM_EVM_XATTR	2
>>> @@ -657,20 +658,22 @@ static void __init lsm_early_task(struct task_struct *task)
>>>  								\
>>>  		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
>>>  			P->hook.FUNC(__VA_ARGS__);		\
>>> +		CALL_BPF_LSM_VOID_HOOKS(FUNC, __VA_ARGS__);	\
>> I'm sorry if I wasn't clear on the v2 review.
>> This does not belong in the infrastructure. You should be
>> doing all the bpf_lsm hook processing in your module.
>> bpf_lsm_task_alloc() should loop through all the bpf
>> task_alloc hooks if they have to be handled differently
>> from "normal" LSM hooks.
> The BPF LSM does not define static hooks (the ones registered to
> security_hook_heads in security.c with __lsm_ro_after_init) for each
> LSM hook. If it tries to do that, one ends up with what was in v1:
>
>   https://lore.kernel.org/bpf/20191220154208.15895-7-kpsingh@chromium.org
>
> This gets quite ugly (security/bpf/hooks.h from v1) and was noted by
> the BPF maintainers:
>
>   https://lore.kernel.org/bpf/20191222012722.gdqhppxpfmqfqbld@ast-mbp.dhcp.thefacebook.com/
>
> As I mentioned, some of the ideas we used here are based on:
>
>   https://lore.kernel.org/lkml/20180408065916.GA2832@ircssh-2.c.rugged-nimbus-611.internal
>
> Which gave each LSM the ability to add mutable hooks at runtime. If
> you prefer, we can make this generic and allow LSMs to register
> mutable hooks, with the BPF LSM being the only LSM that uses it (and
> enforce this with a whitelist).
>
> Would this generic approach be something you would consider better
> than just calling the BPF mutable hooks directly?

What I think makes sense is for the BPF LSM to have a hook
for each of the interfaces and for that hook to handle the
mutable list for the interface. If BPF is not included, there
will be no mutable hooks.

Yes, your v1 got this right.

>
> - KP
>
>>>  	} while (0)
>>>  
>>> -#define call_int_hook(FUNC, IRC, ...) ({			\
>>> -	int RC = IRC;						\
>>> -	do {							\
>>> -		struct security_hook_list *P;			\
>>> -								\
>>> +#define call_int_hook(FUNC, IRC, ...) ({				\
>>> +	int RC = IRC;							\
>>> +	do {								\
>>> +		struct security_hook_list *P;				\
>>>  		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
>>> -			RC = P->hook.FUNC(__VA_ARGS__);		\
>>> -			if (RC != 0)				\
>>> -				break;				\
>>> -		}						\
>>> -	} while (0);						\
>>> -	RC;							\
>>> +			RC = P->hook.FUNC(__VA_ARGS__);			\
>>> +			if (RC != 0)					\
>>> +				break;					\
>>> +		}							\
>>> +		if (RC == 0)						\
>>> +			RC = CALL_BPF_LSM_INT_HOOKS(FUNC, __VA_ARGS__);	\
>>> +	} while (0);							\
>>> +	RC;								\
>>>  })
>>>  
>>>  /* Security operations */



* Re: [PATCH bpf-next v3 01/10] bpf: btf: Add btf_type_by_name_kind
  2020-01-23 15:24 ` [PATCH bpf-next v3 01/10] bpf: btf: Add btf_type_by_name_kind KP Singh
@ 2020-01-23 20:06   ` Andrii Nakryiko
  2020-01-24 14:12     ` KP Singh
  0 siblings, 1 reply; 44+ messages in thread
From: Andrii Nakryiko @ 2020-01-23 20:06 UTC (permalink / raw)
  To: KP Singh
  Cc: open list, bpf, linux-security-module, Brendan Jackman,
	Florent Revest, Thomas Garnier, Alexei Starovoitov,
	Daniel Borkmann, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Jann Horn,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Martin KaFai Lau, Song Liu,
	Yonghong Song, Serge E. Hallyn, Mauro Carvalho Chehab,
	David S. Miller, Greg Kroah-Hartman, Nicolas Ferre,
	Stanislav Fomichev, Quentin Monnet, Andrey Ignatov, Joe Stringer

On Thu, Jan 23, 2020 at 7:25 AM KP Singh <kpsingh@chromium.org> wrote:
>
> From: KP Singh <kpsingh@google.com>
>
> - The LSM code does the combination of btf_find_by_name_kind and
>   btf_type_by_id a couple of times to figure out the BTF type for
>   security_hook_heads and security_list_options.
> - Add an extern for btf_vmlinux in btf.h
>
> Signed-off-by: KP Singh <kpsingh@google.com>
> Reviewed-by: Brendan Jackman <jackmanb@google.com>
> Reviewed-by: Florent Revest <revest@google.com>
> Reviewed-by: Thomas Garnier <thgarnie@google.com>
> ---
>  include/linux/btf.h |  3 +++
>  kernel/bpf/btf.c    | 12 ++++++++++++
>  2 files changed, 15 insertions(+)
>
> diff --git a/include/linux/btf.h b/include/linux/btf.h
> index 5c1ea99b480f..d4e859f90a39 100644
> --- a/include/linux/btf.h
> +++ b/include/linux/btf.h
> @@ -15,6 +15,7 @@ struct btf_type;
>  union bpf_attr;
>
>  extern const struct file_operations btf_fops;
> +extern struct btf *btf_vmlinux;
>
>  void btf_put(struct btf *btf);
>  int btf_new_fd(const union bpf_attr *attr);
> @@ -66,6 +67,8 @@ const struct btf_type *
>  btf_resolve_size(const struct btf *btf, const struct btf_type *type,
>                  u32 *type_size, const struct btf_type **elem_type,
>                  u32 *total_nelems);
> +const struct btf_type *btf_type_by_name_kind(
> +       struct btf *btf, const char *name, u8 kind);
>
>  #define for_each_member(i, struct_type, member)                        \
>         for (i = 0, member = btf_type_member(struct_type);      \
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 32963b6d5a9c..ea53c16802cb 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -441,6 +441,18 @@ const struct btf_type *btf_type_resolve_func_ptr(const struct btf *btf,
>         return NULL;
>  }
>
> +const struct btf_type *btf_type_by_name_kind(
> +       struct btf *btf, const char *name, u8 kind)
> +{
> +       s32 type_id;
> +
> +       type_id = btf_find_by_name_kind(btf, name, kind);
> +       if (type_id < 0)
> +               return ERR_PTR(type_id);
> +
> +       return btf_type_by_id(btf, type_id);
> +}
> +

is it worth having this as a separate global function? If
btf_find_by_name_kind returns a valid ID, then you don't really need to
check btf_type_by_id result, it is always going to be valid. So the
pattern becomes:

type_id = btf_find_by_name_kind(btf, name, kind);
if (type_id < 0)
  goto handle_error;
t = btf_type_by_id(btf, type_id);
/* now just use t */

which is not much more verbose than:

t = btf_type_by_name_kind(btf, name, kind);
if (IS_ERR(t))
  goto handle_error;
/* now use t */


>  /* Types that act only as a source, not sink or intermediate
>   * type when resolving.
>   */
> --
> 2.20.1
>


* Re: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM
  2020-01-23 19:09       ` Casey Schaufler
@ 2020-01-23 22:24         ` KP Singh
  2020-01-23 23:50           ` Casey Schaufler
  0 siblings, 1 reply; 44+ messages in thread
From: KP Singh @ 2020-01-23 22:24 UTC (permalink / raw)
  To: Casey Schaufler; +Cc: LKML, Linux Security Module list, bpf

On 23-Jan 11:09, Casey Schaufler wrote:
> On 1/23/2020 9:59 AM, KP Singh wrote:
> > On 23-Jan 09:03, Casey Schaufler wrote:
> >> On 1/23/2020 7:24 AM, KP Singh wrote:
> >>> From: KP Singh <kpsingh@google.com>
> >>>
> >>> - The list of hooks registered by an LSM is currently immutable as they
> >>>   are declared with __lsm_ro_after_init and they are attached to a
> >>>   security_hook_heads struct.
> >>> - For the BPF LSM we need to de/register the hooks at runtime. Making
> >>>   the existing security_hook_heads mutable broadens an
> >>>   attack vector, so a separate security_hook_heads is added for only
> >>>   those that ~must~ be mutable.
> >>> - These mutable hooks are run only after all the static hooks have
> >>>   successfully executed.
> >>>
> >>> This is based on the ideas discussed in:
> >>>
> >>>   https://lore.kernel.org/lkml/20180408065916.GA2832@ircssh-2.c.rugged-nimbus-611.internal
> >>>
> >>> Reviewed-by: Brendan Jackman <jackmanb@google.com>
> >>> Reviewed-by: Florent Revest <revest@google.com>
> >>> Reviewed-by: Thomas Garnier <thgarnie@google.com>
> >>> Signed-off-by: KP Singh <kpsingh@google.com>
> >>> ---
> >>>  MAINTAINERS             |  1 +
> >>>  include/linux/bpf_lsm.h | 72 +++++++++++++++++++++++++++++++++++++++++
> >>>  security/bpf/Kconfig    |  1 +
> >>>  security/bpf/Makefile   |  2 +-
> >>>  security/bpf/hooks.c    | 20 ++++++++++++
> >>>  security/bpf/lsm.c      |  7 ++++
> >>>  security/security.c     | 25 +++++++-------
> >>>  7 files changed, 116 insertions(+), 12 deletions(-)
> >>>  create mode 100644 include/linux/bpf_lsm.h
> >>>  create mode 100644 security/bpf/hooks.c
> >>>
> >>> diff --git a/MAINTAINERS b/MAINTAINERS
> >>> index e2b7f76a1a70..c606b3d89992 100644
> >>> --- a/MAINTAINERS
> >>> +++ b/MAINTAINERS
> >>> @@ -3209,6 +3209,7 @@ L:	linux-security-module@vger.kernel.org
> >>>  L:	bpf@vger.kernel.org
> >>>  S:	Maintained
> >>>  F:	security/bpf/
> >>> +F:	include/linux/bpf_lsm.h
> >>>  
> >>>  BROADCOM B44 10/100 ETHERNET DRIVER
> >>>  M:	Michael Chan <michael.chan@broadcom.com>
> >>> diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
> >>> new file mode 100644
> >>> index 000000000000..57c20b2cd2f4
> >>> --- /dev/null
> >>> +++ b/include/linux/bpf_lsm.h
> >>> @@ -0,0 +1,72 @@
> >>> +/* SPDX-License-Identifier: GPL-2.0 */
> >>> +
> >>> +/*
> >>> + * Copyright 2019 Google LLC.
> >>> + */
> >>> +
> >>> +#ifndef _LINUX_BPF_LSM_H
> >>> +#define _LINUX_BPF_LSM_H
> >>> +
> >>> +#include <linux/bpf.h>
> >>> +#include <linux/lsm_hooks.h>
> >>> +
> >>> +#ifdef CONFIG_SECURITY_BPF
> >>> +
> >>> +/* Mutable hooks defined at runtime and executed after all the statically
> >>> + * defined LSM hooks.
> >>> + */
> >>> +extern struct security_hook_heads bpf_lsm_hook_heads;
> >>> +
> >>> +int bpf_lsm_srcu_read_lock(void);
> >>> +void bpf_lsm_srcu_read_unlock(int idx);
> >>> +
> >>> +#define CALL_BPF_LSM_VOID_HOOKS(FUNC, ...)			\
> >>> +	do {							\
> >>> +		struct security_hook_list *P;			\
> >>> +		int _idx;					\
> >>> +								\
> >>> +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
> >>> +			break;					\
> >>> +								\
> >>> +		_idx = bpf_lsm_srcu_read_lock();		\
> >>> +		hlist_for_each_entry(P, &bpf_lsm_hook_heads.FUNC, list) \
> >>> +			P->hook.FUNC(__VA_ARGS__);		\
> >>> +		bpf_lsm_srcu_read_unlock(_idx);			\
> >>> +	} while (0)
> >>> +
> >>> +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) ({			\
> >>> +	int _ret = 0;						\
> >>> +	do {							\
> >>> +		struct security_hook_list *P;			\
> >>> +		int _idx;					\
> >>> +								\
> >>> +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
> >>> +			break;					\
> >>> +								\
> >>> +		_idx = bpf_lsm_srcu_read_lock();		\
> >>> +								\
> >>> +		hlist_for_each_entry(P,				\
> >>> +			&bpf_lsm_hook_heads.FUNC, list) {	\
> >>> +			_ret = P->hook.FUNC(__VA_ARGS__);		\
> >>> +			if (_ret && IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE)) \
> >>> +				break;				\
> >>> +		}						\
> >>> +		bpf_lsm_srcu_read_unlock(_idx);			\
> >>> +	} while (0);						\
> >>> +	IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE) ? _ret : 0;	\
> >>> +})
> >>> +
> >>> +#else /* !CONFIG_SECURITY_BPF */
> >>> +
> >>> +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) (0)
> >>> +#define CALL_BPF_LSM_VOID_HOOKS(...)
> >>> +
> >>> +static inline int bpf_lsm_srcu_read_lock(void)
> >>> +{
> >>> +	return 0;
> >>> +}
> >>> +static inline void bpf_lsm_srcu_read_unlock(int idx) {}
> >>> +
> >>> +#endif /* CONFIG_SECURITY_BPF */
> >>> +
> >>> +#endif /* _LINUX_BPF_LSM_H */
> >>> diff --git a/security/bpf/Kconfig b/security/bpf/Kconfig
> >>> index a5f6c67ae526..595e4ad597ae 100644
> >>> --- a/security/bpf/Kconfig
> >>> +++ b/security/bpf/Kconfig
> >>> @@ -6,6 +6,7 @@ config SECURITY_BPF
> >>>  	bool "BPF-based MAC and audit policy"
> >>>  	depends on SECURITY
> >>>  	depends on BPF_SYSCALL
> >>> +	depends on SRCU
> >>>  	help
> >>>  	  This enables instrumentation of the security hooks with
> >>>  	  eBPF programs.
> >>> diff --git a/security/bpf/Makefile b/security/bpf/Makefile
> >>> index c78a8a056e7e..c526927c337d 100644
> >>> --- a/security/bpf/Makefile
> >>> +++ b/security/bpf/Makefile
> >>> @@ -2,4 +2,4 @@
> >>>  #
> >>>  # Copyright 2019 Google LLC.
> >>>  
> >>> -obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o
> >>> +obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o hooks.o
> >>> diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c
> >>> new file mode 100644
> >>> index 000000000000..b123d9cb4cd4
> >>> --- /dev/null
> >>> +++ b/security/bpf/hooks.c
> >>> @@ -0,0 +1,20 @@
> >>> +// SPDX-License-Identifier: GPL-2.0
> >>> +
> >>> +/*
> >>> + * Copyright 2019 Google LLC.
> >>> + */
> >>> +
> >>> +#include <linux/bpf_lsm.h>
> >>> +#include <linux/srcu.h>
> >>> +
> >>> +DEFINE_STATIC_SRCU(security_hook_srcu);
> >>> +
> >>> +int bpf_lsm_srcu_read_lock(void)
> >>> +{
> >>> +	return srcu_read_lock(&security_hook_srcu);
> >>> +}
> >>> +
> >>> +void bpf_lsm_srcu_read_unlock(int idx)
> >>> +{
> >>> +	return srcu_read_unlock(&security_hook_srcu, idx);
> >>> +}
> >>> diff --git a/security/bpf/lsm.c b/security/bpf/lsm.c
> >>> index dc9ac03c7aa0..a25a068e1781 100644
> >>> --- a/security/bpf/lsm.c
> >>> +++ b/security/bpf/lsm.c
> >>> @@ -4,6 +4,7 @@
> >>>   * Copyright 2019 Google LLC.
> >>>   */
> >>>  
> >>> +#include <linux/bpf_lsm.h>
> >>>  #include <linux/lsm_hooks.h>
> >>>  
> >>>  /* This is only for internal hooks, always statically shipped as part of the
> >>> @@ -12,6 +13,12 @@
> >>>   */
> >>>  static struct security_hook_list bpf_lsm_hooks[] __lsm_ro_after_init = {};
> >>>  
> >>> +/* Security hooks registered dynamically by the BPF LSM and must be accessed
> >>> + * by holding bpf_lsm_srcu_read_lock and bpf_lsm_srcu_read_unlock. The mutable
> >>> + * hooks dynamically allocated by the BPF LSM are appended here.
> >>> + */
> >>> +struct security_hook_heads bpf_lsm_hook_heads;
> >>> +
> >>>  static int __init bpf_lsm_init(void)
> >>>  {
> >>>  	security_add_hooks(bpf_lsm_hooks, ARRAY_SIZE(bpf_lsm_hooks), "bpf");
> >>> diff --git a/security/security.c b/security/security.c
> >>> index 30a8aa700557..95a46ca25dcd 100644
> >>> --- a/security/security.c
> >>> +++ b/security/security.c
> >>> @@ -27,6 +27,7 @@
> >>>  #include <linux/backing-dev.h>
> >>>  #include <linux/string.h>
> >>>  #include <linux/msg.h>
> >>> +#include <linux/bpf_lsm.h>
> >>>  #include <net/flow.h>
> >>>  
> >>>  #define MAX_LSM_EVM_XATTR	2
> >>> @@ -657,20 +658,22 @@ static void __init lsm_early_task(struct task_struct *task)
> >>>  								\
> >>>  		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
> >>>  			P->hook.FUNC(__VA_ARGS__);		\
> >>> +		CALL_BPF_LSM_VOID_HOOKS(FUNC, __VA_ARGS__);	\
> >> I'm sorry if I wasn't clear on the v2 review.
> >> This does not belong in the infrastructure. You should be
> >> doing all the bpf_lsm hook processing in your module.
> >> bpf_lsm_task_alloc() should loop through all the bpf
> >> task_alloc hooks if they have to be handled differently
> >> from "normal" LSM hooks.
> > The BPF LSM does not define static hooks (the ones registered to
> > security_hook_heads in security.c with __lsm_ro_after_init) for each
> > LSM hook. If it tries to do that, one ends up with what was in v1:
> >
> >   https://lore.kernel.org/bpf/20191220154208.15895-7-kpsingh@chromium.org
> >
> > This gets quite ugly (security/bpf/hooks.h from v1) and was noted by
> > the BPF maintainers:
> >
> >   https://lore.kernel.org/bpf/20191222012722.gdqhppxpfmqfqbld@ast-mbp.dhcp.thefacebook.com/
> >
> > As I mentioned, some of the ideas we used here are based on:
> >
> >   https://lore.kernel.org/lkml/20180408065916.GA2832@ircssh-2.c.rugged-nimbus-611.internal
> >
> > Which gave each LSM the ability to add mutable hooks at runtime. If
> > you prefer we can make this generic and allow the LSMs to register
> > mutable hooks, with the BPF LSM being the only LSM that uses it (and
> > enforce it with a whitelist).
> >
> > Would this generic approach be something you would consider better
> > than just calling the BPF mutable hooks directly?
> 
> What I think makes sense is for the BPF LSM to have a hook
> for each of the interfaces and for that hook to handle the
> > mutable list for the interface. If BPF is not included there
> will be no mutable hooks. 
> 
> Yes, your v1 got this right.

The BPF LSM does provide mutable LSM hooks, and it ends up being simpler
to implement/maintain when they are treated as such.

The other approaches we have considered are:

- Using macro magic to allocate static hook bodies which call eBPF
  programs as implemented in v1. This entails maintaining a
  separate list of LSM hooks in the BPF LSM which is evident from the
  giant security/bpf/include/hooks.h in:

  https://lore.kernel.org/bpf/20191220154208.15895-7-kpsingh@chromium.org

- Another approach one can think of is to allocate all the trampoline
  images (one page each) at __init and update these images to invoke
  BPF programs when they are attached.

Both these approaches seem to suffer from the downside of doing more
work when it's not really needed (i.e. doing prep work for hooks which
have no eBPF programs attached), and they appear to mask the fact
that what the BPF LSM provides is actually mutable LSM hooks by
allocating static wrappers around mutable callbacks.

Are there other downsides apart from the fact that we have an explicit call
to the mutable hooks in the LSM code? (Note that we want to have these
mutable hooks run after all the static LSM hooks so ordering
would still end up being LSM_ORDER_LAST)

It would be great to hear the maintainers' perspective based on the
trade-offs involved with the different approaches discussed.

We are happy to adapt our approach based on the consensus we reach
here.

- KP

> 
> >
> > - KP
> >
> >>>  	} while (0)
> >>>  
> >>> -#define call_int_hook(FUNC, IRC, ...) ({			\
> >>> -	int RC = IRC;						\
> >>> -	do {							\
> >>> -		struct security_hook_list *P;			\
> >>> -								\
> >>> +#define call_int_hook(FUNC, IRC, ...) ({				\
> >>> +	int RC = IRC;							\
> >>> +	do {								\
> >>> +		struct security_hook_list *P;				\
> >>>  		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
> >>> -			RC = P->hook.FUNC(__VA_ARGS__);		\
> >>> -			if (RC != 0)				\
> >>> -				break;				\
> >>> -		}						\
> >>> -	} while (0);						\
> >>> -	RC;							\
> >>> +			RC = P->hook.FUNC(__VA_ARGS__);			\
> >>> +			if (RC != 0)					\
> >>> +				break;					\
> >>> +		}							\
> >>> +		if (RC == 0)						\
> >>> +			RC = CALL_BPF_LSM_INT_HOOKS(FUNC, __VA_ARGS__);	\
> >>> +	} while (0);							\
> >>> +	RC;								\
> >>>  })
> >>>  
> >>>  /* Security operations */
> 


* Re: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM
  2020-01-23 22:24         ` KP Singh
@ 2020-01-23 23:50           ` Casey Schaufler
  2020-01-24  1:25             ` KP Singh
  0 siblings, 1 reply; 44+ messages in thread
From: Casey Schaufler @ 2020-01-23 23:50 UTC (permalink / raw)
  To: KP Singh; +Cc: LKML, Linux Security Module list, bpf, Casey Schaufler

On 1/23/2020 2:24 PM, KP Singh wrote:
> On 23-Jan 11:09, Casey Schaufler wrote:
>> On 1/23/2020 9:59 AM, KP Singh wrote:
>>> On 23-Jan 09:03, Casey Schaufler wrote:
>>>> On 1/23/2020 7:24 AM, KP Singh wrote:
>>>>> From: KP Singh <kpsingh@google.com>
>>>>>
>>>>> - The list of hooks registered by an LSM is currently immutable as they
>>>>>   are declared with __lsm_ro_after_init and they are attached to a
>>>>>   security_hook_heads struct.
>>>>> - For the BPF LSM we need to de/register the hooks at runtime. Making
>>>>>   the existing security_hook_heads mutable broadens an
>>>>>   attack vector, so a separate security_hook_heads is added for only
>>>>>   those that ~must~ be mutable.
>>>>> - These mutable hooks are run only after all the static hooks have
>>>>>   successfully executed.
>>>>>
>>>>> This is based on the ideas discussed in:
>>>>>
>>>>>   https://lore.kernel.org/lkml/20180408065916.GA2832@ircssh-2.c.rugged-nimbus-611.internal
>>>>>
>>>>> Reviewed-by: Brendan Jackman <jackmanb@google.com>
>>>>> Reviewed-by: Florent Revest <revest@google.com>
>>>>> Reviewed-by: Thomas Garnier <thgarnie@google.com>
>>>>> Signed-off-by: KP Singh <kpsingh@google.com>
>>>>> ---
>>>>>  MAINTAINERS             |  1 +
>>>>>  include/linux/bpf_lsm.h | 72 +++++++++++++++++++++++++++++++++++++++++
>>>>>  security/bpf/Kconfig    |  1 +
>>>>>  security/bpf/Makefile   |  2 +-
>>>>>  security/bpf/hooks.c    | 20 ++++++++++++
>>>>>  security/bpf/lsm.c      |  7 ++++
>>>>>  security/security.c     | 25 +++++++-------
>>>>>  7 files changed, 116 insertions(+), 12 deletions(-)
>>>>>  create mode 100644 include/linux/bpf_lsm.h
>>>>>  create mode 100644 security/bpf/hooks.c
>>>>>
>>>>> diff --git a/MAINTAINERS b/MAINTAINERS
>>>>> index e2b7f76a1a70..c606b3d89992 100644
>>>>> --- a/MAINTAINERS
>>>>> +++ b/MAINTAINERS
>>>>> @@ -3209,6 +3209,7 @@ L:	linux-security-module@vger.kernel.org
>>>>>  L:	bpf@vger.kernel.org
>>>>>  S:	Maintained
>>>>>  F:	security/bpf/
>>>>> +F:	include/linux/bpf_lsm.h
>>>>>  
>>>>>  BROADCOM B44 10/100 ETHERNET DRIVER
>>>>>  M:	Michael Chan <michael.chan@broadcom.com>
>>>>> diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
>>>>> new file mode 100644
>>>>> index 000000000000..57c20b2cd2f4
>>>>> --- /dev/null
>>>>> +++ b/include/linux/bpf_lsm.h
>>>>> @@ -0,0 +1,72 @@
>>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>>> +
>>>>> +/*
>>>>> + * Copyright 2019 Google LLC.
>>>>> + */
>>>>> +
>>>>> +#ifndef _LINUX_BPF_LSM_H
>>>>> +#define _LINUX_BPF_LSM_H
>>>>> +
>>>>> +#include <linux/bpf.h>
>>>>> +#include <linux/lsm_hooks.h>
>>>>> +
>>>>> +#ifdef CONFIG_SECURITY_BPF
>>>>> +
>>>>> +/* Mutable hooks defined at runtime and executed after all the statically
>>>>> + * defined LSM hooks.
>>>>> + */
>>>>> +extern struct security_hook_heads bpf_lsm_hook_heads;
>>>>> +
>>>>> +int bpf_lsm_srcu_read_lock(void);
>>>>> +void bpf_lsm_srcu_read_unlock(int idx);
>>>>> +
>>>>> +#define CALL_BPF_LSM_VOID_HOOKS(FUNC, ...)			\
>>>>> +	do {							\
>>>>> +		struct security_hook_list *P;			\
>>>>> +		int _idx;					\
>>>>> +								\
>>>>> +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
>>>>> +			break;					\
>>>>> +								\
>>>>> +		_idx = bpf_lsm_srcu_read_lock();		\
>>>>> +		hlist_for_each_entry(P, &bpf_lsm_hook_heads.FUNC, list) \
>>>>> +			P->hook.FUNC(__VA_ARGS__);		\
>>>>> +		bpf_lsm_srcu_read_unlock(_idx);			\
>>>>> +	} while (0)
>>>>> +
>>>>> +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) ({			\
>>>>> +	int _ret = 0;						\
>>>>> +	do {							\
>>>>> +		struct security_hook_list *P;			\
>>>>> +		int _idx;					\
>>>>> +								\
>>>>> +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
>>>>> +			break;					\
>>>>> +								\
>>>>> +		_idx = bpf_lsm_srcu_read_lock();		\
>>>>> +								\
>>>>> +		hlist_for_each_entry(P,				\
>>>>> +			&bpf_lsm_hook_heads.FUNC, list) {	\
>>>>> +			_ret = P->hook.FUNC(__VA_ARGS__);		\
>>>>> +			if (_ret && IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE)) \
>>>>> +				break;				\
>>>>> +		}						\
>>>>> +		bpf_lsm_srcu_read_unlock(_idx);			\
>>>>> +	} while (0);						\
>>>>> +	IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE) ? _ret : 0;	\
>>>>> +})
>>>>> +
>>>>> +#else /* !CONFIG_SECURITY_BPF */
>>>>> +
>>>>> +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) (0)
>>>>> +#define CALL_BPF_LSM_VOID_HOOKS(...)
>>>>> +
>>>>> +static inline int bpf_lsm_srcu_read_lock(void)
>>>>> +{
>>>>> +	return 0;
>>>>> +}
>>>>> +static inline void bpf_lsm_srcu_read_unlock(int idx) {}
>>>>> +
>>>>> +#endif /* CONFIG_SECURITY_BPF */
>>>>> +
>>>>> +#endif /* _LINUX_BPF_LSM_H */
>>>>> diff --git a/security/bpf/Kconfig b/security/bpf/Kconfig
>>>>> index a5f6c67ae526..595e4ad597ae 100644
>>>>> --- a/security/bpf/Kconfig
>>>>> +++ b/security/bpf/Kconfig
>>>>> @@ -6,6 +6,7 @@ config SECURITY_BPF
>>>>>  	bool "BPF-based MAC and audit policy"
>>>>>  	depends on SECURITY
>>>>>  	depends on BPF_SYSCALL
>>>>> +	depends on SRCU
>>>>>  	help
>>>>>  	  This enables instrumentation of the security hooks with
>>>>>  	  eBPF programs.
>>>>> diff --git a/security/bpf/Makefile b/security/bpf/Makefile
>>>>> index c78a8a056e7e..c526927c337d 100644
>>>>> --- a/security/bpf/Makefile
>>>>> +++ b/security/bpf/Makefile
>>>>> @@ -2,4 +2,4 @@
>>>>>  #
>>>>>  # Copyright 2019 Google LLC.
>>>>>  
>>>>> -obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o
>>>>> +obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o hooks.o
>>>>> diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c
>>>>> new file mode 100644
>>>>> index 000000000000..b123d9cb4cd4
>>>>> --- /dev/null
>>>>> +++ b/security/bpf/hooks.c
>>>>> @@ -0,0 +1,20 @@
>>>>> +// SPDX-License-Identifier: GPL-2.0
>>>>> +
>>>>> +/*
>>>>> + * Copyright 2019 Google LLC.
>>>>> + */
>>>>> +
>>>>> +#include <linux/bpf_lsm.h>
>>>>> +#include <linux/srcu.h>
>>>>> +
>>>>> +DEFINE_STATIC_SRCU(security_hook_srcu);
>>>>> +
>>>>> +int bpf_lsm_srcu_read_lock(void)
>>>>> +{
>>>>> +	return srcu_read_lock(&security_hook_srcu);
>>>>> +}
>>>>> +
>>>>> +void bpf_lsm_srcu_read_unlock(int idx)
>>>>> +{
>>>>> +	return srcu_read_unlock(&security_hook_srcu, idx);
>>>>> +}
>>>>> diff --git a/security/bpf/lsm.c b/security/bpf/lsm.c
>>>>> index dc9ac03c7aa0..a25a068e1781 100644
>>>>> --- a/security/bpf/lsm.c
>>>>> +++ b/security/bpf/lsm.c
>>>>> @@ -4,6 +4,7 @@
>>>>>   * Copyright 2019 Google LLC.
>>>>>   */
>>>>>  
>>>>> +#include <linux/bpf_lsm.h>
>>>>>  #include <linux/lsm_hooks.h>
>>>>>  
>>>>>  /* This is only for internal hooks, always statically shipped as part of the
>>>>> @@ -12,6 +13,12 @@
>>>>>   */
>>>>>  static struct security_hook_list bpf_lsm_hooks[] __lsm_ro_after_init = {};
>>>>>  
>>>>> +/* Security hooks registered dynamically by the BPF LSM and must be accessed
>>>>> + * by holding bpf_lsm_srcu_read_lock and bpf_lsm_srcu_read_unlock. The mutable
> >>>>> + * hooks dynamically allocated by the BPF LSM are appended here.
>>>>> + */
>>>>> +struct security_hook_heads bpf_lsm_hook_heads;
>>>>> +
>>>>>  static int __init bpf_lsm_init(void)
>>>>>  {
>>>>>  	security_add_hooks(bpf_lsm_hooks, ARRAY_SIZE(bpf_lsm_hooks), "bpf");
>>>>> diff --git a/security/security.c b/security/security.c
>>>>> index 30a8aa700557..95a46ca25dcd 100644
>>>>> --- a/security/security.c
>>>>> +++ b/security/security.c
>>>>> @@ -27,6 +27,7 @@
>>>>>  #include <linux/backing-dev.h>
>>>>>  #include <linux/string.h>
>>>>>  #include <linux/msg.h>
>>>>> +#include <linux/bpf_lsm.h>
>>>>>  #include <net/flow.h>
>>>>>  
>>>>>  #define MAX_LSM_EVM_XATTR	2
>>>>> @@ -657,20 +658,22 @@ static void __init lsm_early_task(struct task_struct *task)
>>>>>  								\
>>>>>  		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
>>>>>  			P->hook.FUNC(__VA_ARGS__);		\
>>>>> +		CALL_BPF_LSM_VOID_HOOKS(FUNC, __VA_ARGS__);	\
>>>> I'm sorry if I wasn't clear on the v2 review.
>>>> This does not belong in the infrastructure. You should be
>>>> doing all the bpf_lsm hook processing in your module.
>>>> bpf_lsm_task_alloc() should loop through all the bpf
>>>> task_alloc hooks if they have to be handled differently
>>>> from "normal" LSM hooks.
>>> The BPF LSM does not define static hooks (the ones registered to
>>> security_hook_heads in security.c with __lsm_ro_after_init) for each
>>> LSM hook. If it tries to do that, one ends up with what was in v1:
>>>
>>>   https://lore.kernel.org/bpf/20191220154208.15895-7-kpsingh@chromium.org
>>>
>>> This gets quite ugly (security/bpf/hooks.h from v1) and was noted by
>>> the BPF maintainers:
>>>
>>>   https://lore.kernel.org/bpf/20191222012722.gdqhppxpfmqfqbld@ast-mbp.dhcp.thefacebook.com/
>>>
>>> As I mentioned, some of the ideas we used here are based on:
>>>
>>>   https://lore.kernel.org/lkml/20180408065916.GA2832@ircssh-2.c.rugged-nimbus-611.internal
>>>
>>> Which gave each LSM the ability to add mutable hooks at runtime. If
>>> you prefer we can make this generic and allow the LSMs to register
>>> mutable hooks, with the BPF LSM being the only LSM that uses it (and
>>> enforce it with a whitelist).
>>>
>>> Would this generic approach be something you would consider better
>>> than just calling the BPF mutable hooks directly?
>> What I think makes sense is for the BPF LSM to have a hook
>> for each of the interfaces and for that hook to handle the
>> mutable list for the interface. If BPF is not included there
>> will be no mutable hooks. 
>>
>> Yes, your v1 got this right.
> The BPF LSM does provide mutable LSM hooks, and it ends up being simpler
> to implement/maintain when they are treated as such.

If you want to put mutable hook handling in the infrastructure
you need to make it general mutable hook handling as opposed to
BPF hook handling. I don't know if that would be acceptable for
all the reasons called out about dynamic module loading.

>
> The other approaches we have considered are:
>
> - Using macro magic to allocate static hook bodies which call eBPF
>   programs as implemented in v1. This entails maintaining a
>   separate list of LSM hooks in the BPF LSM which is evident from the
>   giant security/bpf/include/hooks.h in:
>
>   https://lore.kernel.org/bpf/20191220154208.15895-7-kpsingh@chromium.org

I haven't put much thought into how you might make that cleaner,
but I don't see how you can expect any approach to turn out
smaller than or less ugly than security.c.

>
> - Another approach one can think of is to allocate all the trampoline
>   images (one page each) at __init and update these images to invoke
>   BPF programs when they are attached.
>
> Both these approaches seem to suffer from the downside of doing more
> work when it's not really needed (i.e. doing prep work for hooks which
> have no eBPF programs attached), and they appear to mask the fact
> that what the BPF LSM provides is actually mutable LSM hooks by
> allocating static wrappers around mutable callbacks.

That's a "feature" of the LSM infrastructure. If you're not using a hook
you just don't provide one. It is an artifact of your intent of providing
a general extension that requires you to provide a hook, which may do nothing,
for every interface.

>
> Are there other downsides apart from the fact that we have an explicit call
> to the mutable hooks in the LSM code? (Note that we want to have these
> mutable hooks run after all the static LSM hooks so ordering
> would still end up being LSM_ORDER_LAST)

My intention when I suggested using LSM_ORDER_LAST was to
remove the explicit calls to BPF in the infrastructure.

>
> It would be great to hear the maintainers' perspective based on the
> trade-offs involved with the different approaches discussed.

Please bear in mind that the maintainer (James Morris) didn't
develop the hook list scheme.

> We are happy to adapt our approach based on the consensus we reach
> here.
>
> - KP
>
>>> - KP
>>>
>>>>>  	} while (0)
>>>>>  
>>>>> -#define call_int_hook(FUNC, IRC, ...) ({			\
>>>>> -	int RC = IRC;						\
>>>>> -	do {							\
>>>>> -		struct security_hook_list *P;			\
>>>>> -								\
>>>>> +#define call_int_hook(FUNC, IRC, ...) ({				\
>>>>> +	int RC = IRC;							\
>>>>> +	do {								\
>>>>> +		struct security_hook_list *P;				\
>>>>>  		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
>>>>> -			RC = P->hook.FUNC(__VA_ARGS__);		\
>>>>> -			if (RC != 0)				\
>>>>> -				break;				\
>>>>> -		}						\
>>>>> -	} while (0);						\
>>>>> -	RC;							\
>>>>> +			RC = P->hook.FUNC(__VA_ARGS__);			\
>>>>> +			if (RC != 0)					\
>>>>> +				break;					\
>>>>> +		}							\
>>>>> +		if (RC == 0)						\
>>>>> +			RC = CALL_BPF_LSM_INT_HOOKS(FUNC, __VA_ARGS__);	\
>>>>> +	} while (0);							\
>>>>> +	RC;								\
>>>>>  })
>>>>>  
>>>>>  /* Security operations */



* Re: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM
  2020-01-23 23:50           ` Casey Schaufler
@ 2020-01-24  1:25             ` KP Singh
  2020-01-24 21:55               ` James Morris
  0 siblings, 1 reply; 44+ messages in thread
From: KP Singh @ 2020-01-24  1:25 UTC (permalink / raw)
  To: Casey Schaufler; +Cc: LKML, Linux Security Module list, bpf

On 23-Jan 15:50, Casey Schaufler wrote:
> On 1/23/2020 2:24 PM, KP Singh wrote:
> > On 23-Jan 11:09, Casey Schaufler wrote:
> >> On 1/23/2020 9:59 AM, KP Singh wrote:
> >>> On 23-Jan 09:03, Casey Schaufler wrote:
> >>>> On 1/23/2020 7:24 AM, KP Singh wrote:
> >>>>> From: KP Singh <kpsingh@google.com>
> >>>>>
> >>>>> - The list of hooks registered by an LSM is currently immutable as they
> >>>>>   are declared with __lsm_ro_after_init and they are attached to a
> >>>>>   security_hook_heads struct.
> >>>>> - For the BPF LSM we need to de/register the hooks at runtime. Making
> >>>>>   the existing security_hook_heads mutable broadens an
> >>>>>   attack vector, so a separate security_hook_heads is added for only
> >>>>>   those that ~must~ be mutable.
> >>>>> - These mutable hooks are run only after all the static hooks have
> >>>>>   successfully executed.
> >>>>>
> >>>>> This is based on the ideas discussed in:
> >>>>>
> >>>>>   https://lore.kernel.org/lkml/20180408065916.GA2832@ircssh-2.c.rugged-nimbus-611.internal
> >>>>>
> >>>>> Reviewed-by: Brendan Jackman <jackmanb@google.com>
> >>>>> Reviewed-by: Florent Revest <revest@google.com>
> >>>>> Reviewed-by: Thomas Garnier <thgarnie@google.com>
> >>>>> Signed-off-by: KP Singh <kpsingh@google.com>
> >>>>> ---
> >>>>>  MAINTAINERS             |  1 +
> >>>>>  include/linux/bpf_lsm.h | 72 +++++++++++++++++++++++++++++++++++++++++
> >>>>>  security/bpf/Kconfig    |  1 +
> >>>>>  security/bpf/Makefile   |  2 +-
> >>>>>  security/bpf/hooks.c    | 20 ++++++++++++
> >>>>>  security/bpf/lsm.c      |  7 ++++
> >>>>>  security/security.c     | 25 +++++++-------
> >>>>>  7 files changed, 116 insertions(+), 12 deletions(-)
> >>>>>  create mode 100644 include/linux/bpf_lsm.h
> >>>>>  create mode 100644 security/bpf/hooks.c
> >>>>>
> >>>>> diff --git a/MAINTAINERS b/MAINTAINERS
> >>>>> index e2b7f76a1a70..c606b3d89992 100644
> >>>>> --- a/MAINTAINERS
> >>>>> +++ b/MAINTAINERS
> >>>>> @@ -3209,6 +3209,7 @@ L:	linux-security-module@vger.kernel.org
> >>>>>  L:	bpf@vger.kernel.org
> >>>>>  S:	Maintained
> >>>>>  F:	security/bpf/
> >>>>> +F:	include/linux/bpf_lsm.h
> >>>>>  
> >>>>>  BROADCOM B44 10/100 ETHERNET DRIVER
> >>>>>  M:	Michael Chan <michael.chan@broadcom.com>
> >>>>> diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
> >>>>> new file mode 100644
> >>>>> index 000000000000..57c20b2cd2f4
> >>>>> --- /dev/null
> >>>>> +++ b/include/linux/bpf_lsm.h
> >>>>> @@ -0,0 +1,72 @@
> >>>>> +/* SPDX-License-Identifier: GPL-2.0 */
> >>>>> +
> >>>>> +/*
> >>>>> + * Copyright 2019 Google LLC.
> >>>>> + */
> >>>>> +
> >>>>> +#ifndef _LINUX_BPF_LSM_H
> >>>>> +#define _LINUX_BPF_LSM_H
> >>>>> +
> >>>>> +#include <linux/bpf.h>
> >>>>> +#include <linux/lsm_hooks.h>
> >>>>> +
> >>>>> +#ifdef CONFIG_SECURITY_BPF
> >>>>> +
> >>>>> +/* Mutable hooks defined at runtime and executed after all the statically
> >>>>> + * defined LSM hooks.
> >>>>> + */
> >>>>> +extern struct security_hook_heads bpf_lsm_hook_heads;
> >>>>> +
> >>>>> +int bpf_lsm_srcu_read_lock(void);
> >>>>> +void bpf_lsm_srcu_read_unlock(int idx);
> >>>>> +
> >>>>> +#define CALL_BPF_LSM_VOID_HOOKS(FUNC, ...)			\
> >>>>> +	do {							\
> >>>>> +		struct security_hook_list *P;			\
> >>>>> +		int _idx;					\
> >>>>> +								\
> >>>>> +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
> >>>>> +			break;					\
> >>>>> +								\
> >>>>> +		_idx = bpf_lsm_srcu_read_lock();		\
> >>>>> +		hlist_for_each_entry(P, &bpf_lsm_hook_heads.FUNC, list) \
> >>>>> +			P->hook.FUNC(__VA_ARGS__);		\
> >>>>> +		bpf_lsm_srcu_read_unlock(_idx);			\
> >>>>> +	} while (0)
> >>>>> +
> >>>>> +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) ({			\
> >>>>> +	int _ret = 0;						\
> >>>>> +	do {							\
> >>>>> +		struct security_hook_list *P;			\
> >>>>> +		int _idx;					\
> >>>>> +								\
> >>>>> +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
> >>>>> +			break;					\
> >>>>> +								\
> >>>>> +		_idx = bpf_lsm_srcu_read_lock();		\
> >>>>> +								\
> >>>>> +		hlist_for_each_entry(P,				\
> >>>>> +			&bpf_lsm_hook_heads.FUNC, list) {	\
> >>>>> +			_ret = P->hook.FUNC(__VA_ARGS__);		\
> >>>>> +			if (_ret && IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE)) \
> >>>>> +				break;				\
> >>>>> +		}						\
> >>>>> +		bpf_lsm_srcu_read_unlock(_idx);			\
> >>>>> +	} while (0);						\
> >>>>> +	IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE) ? _ret : 0;	\
> >>>>> +})
> >>>>> +
> >>>>> +#else /* !CONFIG_SECURITY_BPF */
> >>>>> +
> >>>>> +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) (0)
> >>>>> +#define CALL_BPF_LSM_VOID_HOOKS(...)
> >>>>> +
> >>>>> +static inline int bpf_lsm_srcu_read_lock(void)
> >>>>> +{
> >>>>> +	return 0;
> >>>>> +}
> >>>>> +static inline void bpf_lsm_srcu_read_unlock(int idx) {}
> >>>>> +
> >>>>> +#endif /* CONFIG_SECURITY_BPF */
> >>>>> +
> >>>>> +#endif /* _LINUX_BPF_LSM_H */
> >>>>> diff --git a/security/bpf/Kconfig b/security/bpf/Kconfig
> >>>>> index a5f6c67ae526..595e4ad597ae 100644
> >>>>> --- a/security/bpf/Kconfig
> >>>>> +++ b/security/bpf/Kconfig
> >>>>> @@ -6,6 +6,7 @@ config SECURITY_BPF
> >>>>>  	bool "BPF-based MAC and audit policy"
> >>>>>  	depends on SECURITY
> >>>>>  	depends on BPF_SYSCALL
> >>>>> +	depends on SRCU
> >>>>>  	help
> >>>>>  	  This enables instrumentation of the security hooks with
> >>>>>  	  eBPF programs.
> >>>>> diff --git a/security/bpf/Makefile b/security/bpf/Makefile
> >>>>> index c78a8a056e7e..c526927c337d 100644
> >>>>> --- a/security/bpf/Makefile
> >>>>> +++ b/security/bpf/Makefile
> >>>>> @@ -2,4 +2,4 @@
> >>>>>  #
> >>>>>  # Copyright 2019 Google LLC.
> >>>>>  
> >>>>> -obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o
> >>>>> +obj-$(CONFIG_SECURITY_BPF) := lsm.o ops.o hooks.o
> >>>>> diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c
> >>>>> new file mode 100644
> >>>>> index 000000000000..b123d9cb4cd4
> >>>>> --- /dev/null
> >>>>> +++ b/security/bpf/hooks.c
> >>>>> @@ -0,0 +1,20 @@
> >>>>> +// SPDX-License-Identifier: GPL-2.0
> >>>>> +
> >>>>> +/*
> >>>>> + * Copyright 2019 Google LLC.
> >>>>> + */
> >>>>> +
> >>>>> +#include <linux/bpf_lsm.h>
> >>>>> +#include <linux/srcu.h>
> >>>>> +
> >>>>> +DEFINE_STATIC_SRCU(security_hook_srcu);
> >>>>> +
> >>>>> +int bpf_lsm_srcu_read_lock(void)
> >>>>> +{
> >>>>> +	return srcu_read_lock(&security_hook_srcu);
> >>>>> +}
> >>>>> +
> >>>>> +void bpf_lsm_srcu_read_unlock(int idx)
> >>>>> +{
> >>>>> +	return srcu_read_unlock(&security_hook_srcu, idx);
> >>>>> +}
> >>>>> diff --git a/security/bpf/lsm.c b/security/bpf/lsm.c
> >>>>> index dc9ac03c7aa0..a25a068e1781 100644
> >>>>> --- a/security/bpf/lsm.c
> >>>>> +++ b/security/bpf/lsm.c
> >>>>> @@ -4,6 +4,7 @@
> >>>>>   * Copyright 2019 Google LLC.
> >>>>>   */
> >>>>>  
> >>>>> +#include <linux/bpf_lsm.h>
> >>>>>  #include <linux/lsm_hooks.h>
> >>>>>  
> >>>>>  /* This is only for internal hooks, always statically shipped as part of the
> >>>>> @@ -12,6 +13,12 @@
> >>>>>   */
> >>>>>  static struct security_hook_list bpf_lsm_hooks[] __lsm_ro_after_init = {};
> >>>>>  
> >>>>> +/* Security hooks registered dynamically by the BPF LSM must be accessed
> >>>>> + * by holding bpf_lsm_srcu_read_lock and bpf_lsm_srcu_read_unlock. The mutable
> >>>>> + * hooks dynamically allocated by the BPF LSM are appended here.
> >>>>> + */
> >>>>> +struct security_hook_heads bpf_lsm_hook_heads;
> >>>>> +
> >>>>>  static int __init bpf_lsm_init(void)
> >>>>>  {
> >>>>>  	security_add_hooks(bpf_lsm_hooks, ARRAY_SIZE(bpf_lsm_hooks), "bpf");
> >>>>> diff --git a/security/security.c b/security/security.c
> >>>>> index 30a8aa700557..95a46ca25dcd 100644
> >>>>> --- a/security/security.c
> >>>>> +++ b/security/security.c
> >>>>> @@ -27,6 +27,7 @@
> >>>>>  #include <linux/backing-dev.h>
> >>>>>  #include <linux/string.h>
> >>>>>  #include <linux/msg.h>
> >>>>> +#include <linux/bpf_lsm.h>
> >>>>>  #include <net/flow.h>
> >>>>>  
> >>>>>  #define MAX_LSM_EVM_XATTR	2
> >>>>> @@ -657,20 +658,22 @@ static void __init lsm_early_task(struct task_struct *task)
> >>>>>  								\
> >>>>>  		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
> >>>>>  			P->hook.FUNC(__VA_ARGS__);		\
> >>>>> +		CALL_BPF_LSM_VOID_HOOKS(FUNC, __VA_ARGS__);	\
> >>>> I'm sorry if I wasn't clear on the v2 review.
> >>>> This does not belong in the infrastructure. You should be
> >>>> doing all the bpf_lsm hook processing in your module.
> >>>> bpf_lsm_task_alloc() should loop though all the bpf
> >>>> task_alloc hooks if they have to be handled differently
> >>>> from "normal" LSM hooks.
> >>> The BPF LSM does not define static hooks (the ones registered to
> >>> security_hook_heads in security.c with __lsm_ro_after_init) for each
> >>> LSM hook. If it tries to do that one ends up with what was in v1:
> >>>
> >>>   https://lore.kernel.org/bpf/20191220154208.15895-7-kpsingh@chromium.org
> >>>
> >>> This gets quite ugly (security/bpf/hooks.h from v1) and was noted by
> >>> the BPF maintainers:
> >>>
> >>>   https://lore.kernel.org/bpf/20191222012722.gdqhppxpfmqfqbld@ast-mbp.dhcp.thefacebook.com/
> >>>
> >>> As I mentioned, some of the ideas we used here are based on:
> >>>
> >>>   https://lore.kernel.org/lkml/20180408065916.GA2832@ircssh-2.c.rugged-nimbus-611.internal
> >>>
> >>> Which gave each LSM the ability to add mutable hooks at runtime. If
> >>> you prefer we can make this generic and allow the LSMs to register
> >>> mutable hooks with the BPF LSM be the only LSM that uses it (and
> >>> enforce it with a whitelist).
> >>>
> >>> Would this generic approach be something you would consider better
> >>> than just calling the BPF mutable hooks directly?
> >> What I think makes sense is for the BPF LSM to have a hook
> >> for each of the interfaces and for that hook to handle the
> >> mutable list for the interface. If BPF is not included there
> >> will be no mutable hooks.
> >>
> >> Yes, your v1 got this right.
> > BPF LSM does provide mutable LSM hooks and it ends up being simpler
> > to implement/maintain when they are treated as such.
> 
> If you want to put mutable hook handling in the infrastructure
> you need to make it general mutable hook handling as opposed to
> BPF hook handling. I don't know if that would be acceptable for
> all the reasons called out about dynamic module loading.

We can have generic mutable hook handling and if an LSM doesn't
provide a mutable security_hook_heads, it would not allow dynamic
hooks / dynamic module loading.

So, in practice it will just be the BPF LSM that allows mutable hooks
and the other existing LSMs won't. I guess it will be cleaner than
calling the BPF hooks directly from the LSM code (i.e. in security.c).

- KP

> 
> >
> >  The other approaches which we have considered are:
> >
> > - Using macro magic to allocate static hook bodies which call eBPF
> >   programs as implemented in v1. This entails maintaining a
> >   separate list of LSM hooks in the BPF LSM, as is evident from the
> >   giant security/bpf/include/hooks.h in:
> >
> >   https://lore.kernel.org/bpf/20191220154208.15895-7-kpsingh@chromium.org
> 
> I haven't put much thought into how you might make that cleaner,
> but I don't see how you can expect any approach to turn out
> smaller than or less ugly than security.c.
> 
> >
> > - Another approach one can think of is to allocate all the trampoline
> >   images (one page each) at __init and update these images to invoke
> >   BPF programs when they are attached.
> >
> > Both these approaches seem to suffer from the downside of doing more
> > work when it's not really needed (i.e. doing prep work for hooks which
> > have no eBPF programs attached) and they appear to mask the fact
> > that what the BPF LSM provides is actually mutable LSM hooks by
> > allocating static wrappers around mutable callbacks.
> 
> That's a "feature" of the LSM infrastructure. If you're not using a hook
> you just don't provide one. It is an artifact of your intent of providing
> a general extension that requires you provide a hook which may do nothing
> for every interface.
> 
> >
> > Are there other downsides apart from the fact we have an explicit call
> > to the mutable hooks in the LSM code? (Note that we want to have these
> > mutable hooks run after all the static LSM hooks so ordering
> > would still end up being LSM_ORDER_LAST)
> 
> My intention when I suggested using LSM_ORDER_LAST was to
> remove the explicit calls to BPF in the infrastructure.
> 
> >
> > It would be great to hear the maintainers' perspective based on the
> > trade-offs involved with the different approaches discussed.
> 
> Please bear in mind that the maintainer (James Morris) didn't
> develop the hook list scheme.
> 
> > We are happy to adapt our approach based on the consensus we reach
> > here.
> >
> > - KP
> >
> >>> - KP
> >>>
> >>>>>  	} while (0)
> >>>>>  
> >>>>> -#define call_int_hook(FUNC, IRC, ...) ({			\
> >>>>> -	int RC = IRC;						\
> >>>>> -	do {							\
> >>>>> -		struct security_hook_list *P;			\
> >>>>> -								\
> >>>>> +#define call_int_hook(FUNC, IRC, ...) ({				\
> >>>>> +	int RC = IRC;							\
> >>>>> +	do {								\
> >>>>> +		struct security_hook_list *P;				\
> >>>>>  		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
> >>>>> -			RC = P->hook.FUNC(__VA_ARGS__);		\
> >>>>> -			if (RC != 0)				\
> >>>>> -				break;				\
> >>>>> -		}						\
> >>>>> -	} while (0);						\
> >>>>> -	RC;							\
> >>>>> +			RC = P->hook.FUNC(__VA_ARGS__);			\
> >>>>> +			if (RC != 0)					\
> >>>>> +				break;					\
> >>>>> +		}							\
> >>>>> +		if (RC == 0)						\
> >>>>> +			RC = CALL_BPF_LSM_INT_HOOKS(FUNC, __VA_ARGS__);	\
> >>>>> +	} while (0);							\
> >>>>> +	RC;								\
> >>>>>  })
> >>>>>  
> >>>>>  /* Security operations */
> 

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v3 01/10] bpf: btf: Add btf_type_by_name_kind
  2020-01-23 20:06   ` Andrii Nakryiko
@ 2020-01-24 14:12     ` KP Singh
  0 siblings, 0 replies; 44+ messages in thread
From: KP Singh @ 2020-01-24 14:12 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: open list, bpf, linux-security-module, Brendan Jackman,
	Florent Revest, Thomas Garnier, Alexei Starovoitov,
	Daniel Borkmann, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Jann Horn,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Martin KaFai Lau, Song Liu,
	Yonghong Song, Serge E. Hallyn, Mauro Carvalho Chehab,
	David S. Miller, Greg Kroah-Hartman, Nicolas Ferre,
	Stanislav Fomichev, Quentin Monnet, Andrey Ignatov, Joe Stringer

On 23-Jan 12:06, Andrii Nakryiko wrote:
> On Thu, Jan 23, 2020 at 7:25 AM KP Singh <kpsingh@chromium.org> wrote:
> >
> > From: KP Singh <kpsingh@google.com>
> >
> > - The LSM code does the combination of btf_find_by_name_kind and
> >   btf_type_by_id a couple of times to figure out the BTF type for
> >   security_hook_heads and security_list_options.
> > - Add an extern for btf_vmlinux in btf.h
> >
> > Signed-off-by: KP Singh <kpsingh@google.com>
> > Reviewed-by: Brendan Jackman <jackmanb@google.com>
> > Reviewed-by: Florent Revest <revest@google.com>
> > Reviewed-by: Thomas Garnier <thgarnie@google.com>
> > ---
> >  include/linux/btf.h |  3 +++
> >  kernel/bpf/btf.c    | 12 ++++++++++++
> >  2 files changed, 15 insertions(+)
> >
> > diff --git a/include/linux/btf.h b/include/linux/btf.h
> > index 5c1ea99b480f..d4e859f90a39 100644
> > --- a/include/linux/btf.h
> > +++ b/include/linux/btf.h
> > @@ -15,6 +15,7 @@ struct btf_type;
> >  union bpf_attr;
> >
> >  extern const struct file_operations btf_fops;
> > +extern struct btf *btf_vmlinux;
> >
> >  void btf_put(struct btf *btf);
> >  int btf_new_fd(const union bpf_attr *attr);
> > @@ -66,6 +67,8 @@ const struct btf_type *
> >  btf_resolve_size(const struct btf *btf, const struct btf_type *type,
> >                  u32 *type_size, const struct btf_type **elem_type,
> >                  u32 *total_nelems);
> > +const struct btf_type *btf_type_by_name_kind(
> > +       struct btf *btf, const char *name, u8 kind);
> >
> >  #define for_each_member(i, struct_type, member)                        \
> >         for (i = 0, member = btf_type_member(struct_type);      \
> > diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> > index 32963b6d5a9c..ea53c16802cb 100644
> > --- a/kernel/bpf/btf.c
> > +++ b/kernel/bpf/btf.c
> > @@ -441,6 +441,18 @@ const struct btf_type *btf_type_resolve_func_ptr(const struct btf *btf,
> >         return NULL;
> >  }
> >
> > +const struct btf_type *btf_type_by_name_kind(
> > +       struct btf *btf, const char *name, u8 kind)
> > +{
> > +       s32 type_id;
> > +
> > +       type_id = btf_find_by_name_kind(btf, name, kind);
> > +       if (type_id < 0)
> > +               return ERR_PTR(type_id);
> > +
> > +       return btf_type_by_id(btf, type_id);
> > +}
> > +
> 
> is it worth having this as a separate global function? If
> btf_find_by_name_kind returns a valid ID, then you don't really need to
> check btf_type_by_id result, it is always going to be valid. So the
> pattern becomes:

Yeah, you're right. We went from using this a few times in separate
functions to using it twice in the same function (based on the changes
in v2 -> v3).

I will drop this from the next revision.

 
> type_id = btf_find_by_name_kind(btf, name, kind);
> if (type_id < 0)
>   goto handle_error;
> t = btf_type_by_id(btf, type_id);
> /* now just use t */
> 
> which is not much more verbose than:
> 
> t = btf_type_by_name_kind(btf, name, kind);
> if (IS_ERR(t))
>   goto handle_error
> /* now use t */
> 

Agreed.

- KP

> 
> >  /* Types that act only as a source, not sink or intermediate
> >   * type when resolving.
> >   */
> > --
> > 2.20.1
> >

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v3 08/10] tools/libbpf: Add support for BPF_PROG_TYPE_LSM
  2020-01-23 18:00   ` Andrii Nakryiko
@ 2020-01-24 14:16     ` KP Singh
  0 siblings, 0 replies; 44+ messages in thread
From: KP Singh @ 2020-01-24 14:16 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: open list, bpf, linux-security-module, Brendan Jackman,
	Florent Revest, Thomas Garnier, Alexei Starovoitov,
	Daniel Borkmann, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Jann Horn,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Martin KaFai Lau, Song Liu,
	Yonghong Song, Serge E. Hallyn, Mauro Carvalho Chehab,
	David S. Miller, Greg Kroah-Hartman, Nicolas Ferre,
	Stanislav Fomichev, Quentin Monnet, Andrey Ignatov, Joe Stringer

> On Thu, Jan 23, 2020 at 7:25 AM KP Singh <kpsingh@chromium.org> wrote:
> >
> > From: KP Singh <kpsingh@google.com>
> >
> > * Add functionality in libbpf to attach eBPF program to LSM hooks
> > * Lookup the index of the LSM hook in security_hook_heads and pass it in
> >   attr->lsm_hook_idx
> >
> > Signed-off-by: KP Singh <kpsingh@google.com>
> > Reviewed-by: Brendan Jackman <jackmanb@google.com>
> > Reviewed-by: Florent Revest <revest@google.com>
> > Reviewed-by: Thomas Garnier <thgarnie@google.com>
> > ---
> 
> Looks good, but see few nits below.
> 
> Acked-by: Andrii Nakryiko <andriin@fb.com>

Thanks!

> 
> >  tools/lib/bpf/bpf.c      |   6 ++-
> >  tools/lib/bpf/bpf.h      |   1 +
> >  tools/lib/bpf/libbpf.c   | 104 +++++++++++++++++++++++++++++++++++++--
> >  tools/lib/bpf/libbpf.h   |   4 ++
> >  tools/lib/bpf/libbpf.map |   3 ++
> >  5 files changed, 114 insertions(+), 4 deletions(-)
> >
> 
> [...]
> 
> > @@ -5084,6 +5099,8 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
> >                 if (prog->type != BPF_PROG_TYPE_UNSPEC)
> >                         continue;
> >
> > +
> > +
> 
> why these extra lines?

Ah, this might have crept in during my latest rebase. Will remove these.

> 
> >                 err = libbpf_prog_type_by_name(prog->section_name, &prog_type,
> >                                                &attach_type);
> >                 if (err == -ESRCH)
> > @@ -6160,6 +6177,7 @@ bool bpf_program__is_##NAME(const struct bpf_program *prog)       \
> >  }                                                              \
> >
> >  BPF_PROG_TYPE_FNS(socket_filter, BPF_PROG_TYPE_SOCKET_FILTER);
> > +BPF_PROG_TYPE_FNS(lsm, BPF_PROG_TYPE_LSM);
> >  BPF_PROG_TYPE_FNS(kprobe, BPF_PROG_TYPE_KPROBE);
> >  BPF_PROG_TYPE_FNS(sched_cls, BPF_PROG_TYPE_SCHED_CLS);
> >  BPF_PROG_TYPE_FNS(sched_act, BPF_PROG_TYPE_SCHED_ACT);
> > @@ -6226,6 +6244,8 @@ static struct bpf_link *attach_raw_tp(const struct bpf_sec_def *sec,
> >                                       struct bpf_program *prog);
> >  static struct bpf_link *attach_trace(const struct bpf_sec_def *sec,
> >                                      struct bpf_program *prog);
> > +static struct bpf_link *attach_lsm(const struct bpf_sec_def *sec,
> > +                                  struct bpf_program *prog);
> >
> >  struct bpf_sec_def {
> >         const char *sec;
> > @@ -6272,6 +6292,9 @@ static const struct bpf_sec_def section_defs[] = {
> >         SEC_DEF("freplace/", EXT,
> >                 .is_attach_btf = true,
> >                 .attach_fn = attach_trace),
> > +       SEC_DEF("lsm/", LSM,
> > +               .expected_attach_type = BPF_LSM_MAC,
> 
> curious, will there be non-MAC LSM programs? If yes, how are they
> going to be different, and which prefix will we use then?

One can think of BPF_LSM_AUDIT programs, which would only be used to log
information from the LSM hooks and not enforce a policy. Currently,
one can sort of do that by disabling CONFIG_SECURITY_BPF_ENFORCE, but
that's an all-or-none hammer.

> 
> > +               .attach_fn = attach_lsm),
> >         BPF_PROG_SEC("xdp",                     BPF_PROG_TYPE_XDP),
> >         BPF_PROG_SEC("perf_event",              BPF_PROG_TYPE_PERF_EVENT),
> >         BPF_PROG_SEC("lwt_in",                  BPF_PROG_TYPE_LWT_IN),
> > @@ -6533,6 +6556,44 @@ static int bpf_object__collect_struct_ops_map_reloc(struct bpf_object *obj,
> >         return -EINVAL;
> >  }
> >
> > +static __s32 find_lsm_hook_idx(struct bpf_program *prog)
> 
> nit: I'd stick to int for return result, we barely ever use __s32 in libbpf.c

Sure. Changed to int.

- KP

> 
> [...]

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM
  2020-01-24  1:25             ` KP Singh
@ 2020-01-24 21:55               ` James Morris
  0 siblings, 0 replies; 44+ messages in thread
From: James Morris @ 2020-01-24 21:55 UTC (permalink / raw)
  To: KP Singh; +Cc: Casey Schaufler, LKML, Linux Security Module list, bpf

On Thu, 23 Jan 2020, KP Singh wrote:

> 
> > If you want to put mutable hook handling in the infrastructure
> > you need to make it general mutable hook handling as opposed to
> > BPF hook handling. I don't know if that would be acceptable for
> > all the reasons called out about dynamic module loading.
> 
> We can have generic mutable hook handling and if an LSM doesn't
--> provide a mutable security_hook_heads, it would not allow dynamic
> hooks / dynamic module loading.
> 
> So, in practice it will just be the BPF LSM that allows mutable hooks
> and the other existing LSMs won't. I guess it will be cleaner than
> calling the BPF hooks directly from the LSM code (i.e in security.c)

I'm inclined to only have mutable hooks for KRSI, not for all LSMs. This 
is a special case and we don't need to provide this for anyone else.

Btw, folks, PLEASE trim replies.


-- 
James Morris
<jmorris@namei.org>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v3 02/10] bpf: lsm: Add a skeleton and config options
  2020-01-23 15:24 ` [PATCH bpf-next v3 02/10] bpf: lsm: Add a skeleton and config options KP Singh
@ 2020-02-10 23:52   ` Alexei Starovoitov
  2020-02-11 12:45     ` KP Singh
  0 siblings, 1 reply; 44+ messages in thread
From: Alexei Starovoitov @ 2020-02-10 23:52 UTC (permalink / raw)
  To: KP Singh
  Cc: linux-kernel, bpf, linux-security-module, Brendan Jackman,
	Florent Revest, Thomas Garnier, Alexei Starovoitov,
	Daniel Borkmann, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Jann Horn,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Martin KaFai Lau, Song Liu,
	Yonghong Song, Serge E. Hallyn, Mauro Carvalho Chehab,
	David S. Miller, Greg Kroah-Hartman, Nicolas Ferre,
	Stanislav Fomichev, Quentin Monnet, Andrey Ignatov, Joe Stringer

On Thu, Jan 23, 2020 at 07:24:32AM -0800, KP Singh wrote:
>  
> +BPF SECURITY MODULE
> +M:	KP Singh <kpsingh@chromium.org>
> +L:	linux-security-module@vger.kernel.org
> +L:	bpf@vger.kernel.org
> +S:	Maintained
> +F:	security/bpf/

Instead of creating a new entry, I think it's more appropriate
to add this to the main BPF entry, like:
diff --git a/MAINTAINERS b/MAINTAINERS
index c74e4ea714a5..f656ddec0722 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3147,6 +3147,7 @@ R:        Martin KaFai Lau <kafai@fb.com>
 R:     Song Liu <songliubraving@fb.com>
 R:     Yonghong Song <yhs@fb.com>
 R:     Andrii Nakryiko <andriin@fb.com>
+R:     KP Singh <kpsingh@chromium.org>
 L:     netdev@vger.kernel.org
 L:     bpf@vger.kernel.org
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git
@@ -3172,6 +3173,7 @@ F:        samples/bpf/
 F:     tools/bpf/
 F:     tools/lib/bpf/
 F:     tools/testing/selftests/bpf/
+F:     security/bpf/
 K:     bpf
 N:     bpf

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v3 03/10] bpf: lsm: Introduce types for eBPF based LSM
  2020-01-23 15:24 ` [PATCH bpf-next v3 03/10] bpf: lsm: Introduce types for eBPF based LSM KP Singh
@ 2020-02-10 23:58   ` Alexei Starovoitov
  2020-02-11 12:44     ` KP Singh
  0 siblings, 1 reply; 44+ messages in thread
From: Alexei Starovoitov @ 2020-02-10 23:58 UTC (permalink / raw)
  To: KP Singh
  Cc: linux-kernel, bpf, linux-security-module, Brendan Jackman,
	Florent Revest, Thomas Garnier, Alexei Starovoitov,
	Daniel Borkmann, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Jann Horn,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Martin KaFai Lau, Song Liu,
	Yonghong Song, Serge E. Hallyn, Mauro Carvalho Chehab,
	David S. Miller, Greg Kroah-Hartman, Nicolas Ferre,
	Stanislav Fomichev, Quentin Monnet, Andrey Ignatov, Joe Stringer

On Thu, Jan 23, 2020 at 07:24:33AM -0800, KP Singh wrote:
> +
> +static const struct bpf_func_proto *get_bpf_func_proto(
> +	enum bpf_func_id func_id, const struct bpf_prog *prog)
> +{
> +	switch (func_id) {
> +	case BPF_FUNC_map_lookup_elem:
> +		return &bpf_map_lookup_elem_proto;
> +	case BPF_FUNC_get_current_pid_tgid:
> +		return &bpf_get_current_pid_tgid_proto;
> +	default:
> +		return NULL;
> +	}
> +}
> +
> +const struct bpf_verifier_ops lsm_verifier_ops = {
> +	.get_func_proto = get_bpf_func_proto,
> +};

Why artificially limit it like this?
It will cause a lot of churn in the future. For example, allowing map update
and delete, in addition to lookup, will be an obvious next step.
I think allowing tracing_func_proto() from the start is cleaner.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM
  2020-01-23 15:24 ` [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM KP Singh
  2020-01-23 17:03   ` Casey Schaufler
@ 2020-02-11  3:12   ` Alexei Starovoitov
  2020-02-11 12:43     ` KP Singh
  1 sibling, 1 reply; 44+ messages in thread
From: Alexei Starovoitov @ 2020-02-11  3:12 UTC (permalink / raw)
  To: KP Singh
  Cc: linux-kernel, bpf, linux-security-module, Brendan Jackman,
	Florent Revest, Thomas Garnier, Alexei Starovoitov,
	Daniel Borkmann, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Jann Horn,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	kernel-team

On Thu, Jan 23, 2020 at 07:24:34AM -0800, KP Singh wrote:
> +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) ({			\
> +	int _ret = 0;						\
> +	do {							\
> +		struct security_hook_list *P;			\
> +		int _idx;					\
> +								\
> +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
> +			break;					\
> +								\
> +		_idx = bpf_lsm_srcu_read_lock();		\
> +								\
> +		hlist_for_each_entry(P,				\
> +			&bpf_lsm_hook_heads.FUNC, list) {	\
> +			_ret = P->hook.FUNC(__VA_ARGS__);		\
> +			if (_ret && IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE)) \
> +				break;				\
> +		}						\
> +		bpf_lsm_srcu_read_unlock(_idx);			\
> +	} while (0);						\
> +	IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE) ? _ret : 0;	\
> +})

This extra CONFIG_SECURITY_BPF_ENFORCE doesn't make sense to me.
Why do all the work for bpf-lsm and ignore the return code? Such a framework
already exists. For the audit-only case the user could have kprobed
security_*() in security/security.c and had access to exactly the same data.
There is no need for any of these patches if audit is the only use case.
Obviously bpf-lsm has to be capable of making a go/no-go decision, so
my preference is to drop this extra kconfig knob.
I think the agreement seen in earlier comments in this thread is that the
preferred choice is to always have the bpf-based LSM be equivalent to
LSM_ORDER_LAST. In that case, how about using bpf-trampoline fexit logic
instead?
Pros:
- no changes to security/ directory
- no changes to call_int_hook() macro
- patches 4, 5, 6 no longer necessary
- when security is off, all security_*() hooks do a single
  if (hlist_empty(&bpf_lsm_hook_heads.FUNC)) check.
  With patch 4 there will be two such checks. Tiny perf penalty.
  With fexit approach there will be no extra check.
- fexit approach is fast even on kernels compiled with retpoline, since
  its using direct calls
Cons:
- bpf trampoline so far is x86 only and arm64 support is wip

By plugging into fexit I'm proposing to let the bpf-lsm prog type modify the
return value. Currently the bpf-fexit prog type has read-only access to it.
Adding write access is a straightforward verifier change. The bpf progs from
patch 9 will still look exactly the same:
SEC("lsm/file_mprotect")
int BPF_PROG(mprotect_audit, struct vm_area_struct *vma,
            unsigned long reqprot, unsigned long prot) { ... }
The difference is that libbpf will find the btf_id of the
security_file_mprotect() function and add an fexit trampoline to it, instead
of going via security_list_options and its own lsm_hook_idx uapi. I think
reusing the existing tracing facilities to attach and multiplex multiple
programs is cleaner. More code reuse. Unified testing of lsm and tracing, etc.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM
  2020-02-11  3:12   ` Alexei Starovoitov
@ 2020-02-11 12:43     ` KP Singh
  2020-02-11 17:58       ` Alexei Starovoitov
  0 siblings, 1 reply; 44+ messages in thread
From: KP Singh @ 2020-02-11 12:43 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: KP Singh, linux-kernel, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Serge E. Hallyn, Mauro Carvalho Chehab, David S. Miller,
	Greg Kroah-Hartman, kernel-team

On 10-Feb 19:12, Alexei Starovoitov wrote:
> On Thu, Jan 23, 2020 at 07:24:34AM -0800, KP Singh wrote:
> > +#define CALL_BPF_LSM_INT_HOOKS(FUNC, ...) ({			\
> > +	int _ret = 0;						\
> > +	do {							\
> > +		struct security_hook_list *P;			\
> > +		int _idx;					\
> > +								\
> > +		if (hlist_empty(&bpf_lsm_hook_heads.FUNC))	\
> > +			break;					\
> > +								\
> > +		_idx = bpf_lsm_srcu_read_lock();		\
> > +								\
> > +		hlist_for_each_entry(P,				\
> > +			&bpf_lsm_hook_heads.FUNC, list) {	\
> > +			_ret = P->hook.FUNC(__VA_ARGS__);		\
> > +			if (_ret && IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE)) \
> > +				break;				\
> > +		}						\
> > +		bpf_lsm_srcu_read_unlock(_idx);			\
> > +	} while (0);						\
> > +	IS_ENABLED(CONFIG_SECURITY_BPF_ENFORCE) ? _ret : 0;	\
> > +})
> 
> This extra CONFIG_SECURITY_BPF_ENFORCE doesn't make sense to me.

We found it easier to debug the programs if enforcement is disabled.
But we can remove this option if you feel strongly about it.

> Why do all the work for bpf-lsm and ignore the return code? Such a framework
> already exists. For the audit-only case the user could have kprobed
> security_*() in security/security.c and had access to exactly the same data.
> There is no need for any of these patches if audit is the only use case.
> Obviously bpf-lsm has to be capable of making a go/no-go decision, so
> my preference is to drop this extra kconfig knob.
> I think the agreement seen in earlier comments in this thread is that the
> preferred choice is to always have the bpf-based LSM be equivalent to
> LSM_ORDER_LAST. In that case, how about using bpf-trampoline fexit logic
> instead?

Even if the BPF LSM is LSM_ORDER_LAST, this is still different from
adding a trampoline at the exit of the security hooks, for a few
reasons.

> Pros:
> - no changes to security/ directory

* We do need to initialize the BPF LSM as a proper LSM (i.e. in
  security/bpf) because it needs access to security blobs. This is
  only possible statically for now as they should be set after boot
  time to provide guarantees to any helper that uses information in
  security blobs. I have proposed how we could make this dynamic as a
  discussion topic for the BPF conference:

    https://lore.kernel.org/bpf/20200127171516.GA2710@chromium.org

  As you can see from some of the prototype use cases e.g:

    https://github.com/sinkap/linux-krsi/blob/patch/v1/examples/samples/bpf/lsm_detect_exec_unlink.c

  that they do rely on security blobs and that they are key in doing
  meaningful security analysis.

* When using the semantic provided by fexit, the BPF LSM program will
  always be executed and will be able to override / clobber the
  decision of LSMs which appear before it in the ordered list. This
  semantic is very different from what we currently have (i.e. the BPF
  LSM hook is only called if all the other LSMs allow the action) and
  seems to be bypassing the LSM framework.

* Not all security_* wrappers simply call the attached hooks and return
  their exit code, and not all of them pass the same arguments to the
  hook, e.g. security_bprm_check, security_file_open, and
  security_task_alloc, to name just a few. Illustrating this further
  with security_task_alloc as an example:

	rc = call_int_hook(task_alloc, 0, task, clone_flags);
	if (unlikely(rc))
		security_task_free(task);
	return rc;

This means we would leak task_structs in this case. While
call_int_hook is roughly equivalent to an fexit trampoline for most
hooks, it's not for some (quite important) LSM hooks.
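To make the difference concrete, the two dispatch semantics can be modeled in a
few lines of plain C (an illustrative sketch only, not kernel code; all
function names below are made up):

```c
#include <assert.h>
#include <stddef.h>

typedef int (*hook_fn)(void);

static int never_ran;                   /* set if the last hook runs */

static int deny(void) { return -1; }    /* an earlier LSM denies the action */
static int last(void) { never_ran = 1; return 0; }

/* Model of call_int_hook(): walk the ordered hook list and stop at the
 * first non-zero return; a BPF LSM at LSM_ORDER_LAST never runs once an
 * earlier LSM has denied the action. */
static int call_int_hook_model(hook_fn *hooks, size_t n)
{
	int rc = 0;
	for (size_t i = 0; i < n; i++) {
		rc = hooks[i]();
		if (rc)
			break;          /* short-circuit: skip remaining hooks */
	}
	return rc;
}

/* Model of the plain fexit semantics: the BPF program runs after the
 * wrapper unconditionally and can override the earlier decision. */
static int fexit_model(hook_fn *hooks, size_t n, hook_fn bpf_prog)
{
	int rc = call_int_hook_model(hooks, n);
	int bpf_rc = bpf_prog();        /* always executed */
	return bpf_rc ? bpf_rc : rc;    /* may clobber the chain's rc */
}
```

With the short-circuit model the BPF hook never runs once an earlier LSM
denies; with the fexit model it always runs and can override that decision
unless the trampoline adds an explicit rc check.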

- KP

> - no changes to call_int_hook() macro
> - patches 4, 5, 6 no longer necessary
> - when security is off all security_*() hooks do single
>   if (hlist_empty(&bpf_lsm_hook_heads.FUNC)) check.
>   With patch 4 there will two such checks. Tiny perf penalty.
>   With fexit approach there will be no extra check.
> - fexit approach is fast even on kernels compiled with retpoline, since
>   its using direct calls
> Cons:
> - bpf trampoline so far is x86 only and arm64 support is wip
> 
> By plugging into fexit I'm proposing to let bpf-lsm prog type modify return
> value. Currently bpf-fexit prog type has read-only access to it. Adding write
> access is a straightforward verifier change. The bpf progs from patch 9 will
> still look exactly the same way:
> SEC("lsm/file_mprotect")
> int BPF_PROG(mprotect_audit, struct vm_area_struct *vma,
>             unsigned long reqprot, unsigned long prot) { ... }
> The difference that libbpf will be finding btf_id of security_file_mprotect()
> function and adding fexit trampoline to it instead of going via
> security_list_options and its own lsm_hook_idx uapi. I think reusing existing
> tracing facilities to attach and multiplex multiple programs is cleaner. More
> code reuse. Unified testing of lsm and tracing, etc.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v3 03/10] bpf: lsm: Introduce types for eBPF based LSM
  2020-02-10 23:58   ` Alexei Starovoitov
@ 2020-02-11 12:44     ` KP Singh
  0 siblings, 0 replies; 44+ messages in thread
From: KP Singh @ 2020-02-11 12:44 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: KP Singh, linux-kernel, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Martin KaFai Lau, Song Liu, Yonghong Song, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

On 10-Feb 15:58, Alexei Starovoitov wrote:
> On Thu, Jan 23, 2020 at 07:24:33AM -0800, KP Singh wrote:
> > +
> > +static const struct bpf_func_proto *get_bpf_func_proto(
> > +	enum bpf_func_id func_id, const struct bpf_prog *prog)
> > +{
> > +	switch (func_id) {
> > +	case BPF_FUNC_map_lookup_elem:
> > +		return &bpf_map_lookup_elem_proto;
> > +	case BPF_FUNC_get_current_pid_tgid:
> > +		return &bpf_get_current_pid_tgid_proto;
> > +	default:
> > +		return NULL;
> > +	}
> > +}
> > +
> > +const struct bpf_verifier_ops lsm_verifier_ops = {
> > +	.get_func_proto = get_bpf_func_proto,
> > +};
> 
> Why artificially limit it like this?
> It will cause a lot of churn in the future. Like allowing map update and
> delete, in addition to lookup, will be an obvious next step.
> I think allowing tracing_func_proto() from the start is cleaner.

Sure, I will replace it with tracing_func_proto in the next
revision.

- KP



* Re: [PATCH bpf-next v3 02/10] bpf: lsm: Add a skeleton and config options
  2020-02-10 23:52   ` Alexei Starovoitov
@ 2020-02-11 12:45     ` KP Singh
  0 siblings, 0 replies; 44+ messages in thread
From: KP Singh @ 2020-02-11 12:45 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: KP Singh, linux-kernel, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Jann Horn, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Martin KaFai Lau, Song Liu, Yonghong Song, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Nicolas Ferre, Stanislav Fomichev, Quentin Monnet,
	Andrey Ignatov, Joe Stringer

On 10-Feb 15:52, Alexei Starovoitov wrote:
> On Thu, Jan 23, 2020 at 07:24:32AM -0800, KP Singh wrote:
> >  
> > +BPF SECURITY MODULE
> > +M:	KP Singh <kpsingh@chromium.org>
> > +L:	linux-security-module@vger.kernel.org
> > +L:	bpf@vger.kernel.org
> > +S:	Maintained
> > +F:	security/bpf/
> 
> Instead of creating a new entry I think it's more appropriate
> to add this to the main BPF entry like:
> diff --git a/MAINTAINERS b/MAINTAINERS
> index c74e4ea714a5..f656ddec0722 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -3147,6 +3147,7 @@ R:        Martin KaFai Lau <kafai@fb.com>
>  R:     Song Liu <songliubraving@fb.com>
>  R:     Yonghong Song <yhs@fb.com>
>  R:     Andrii Nakryiko <andriin@fb.com>
> +R:     KP Singh <kpsingh@chromium.org>

I am okay with this too :) Will update it in the next revision.

- KP

>  L:     netdev@vger.kernel.org
>  L:     bpf@vger.kernel.org
>  T:     git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git
> @@ -3172,6 +3173,7 @@ F:        samples/bpf/
>  F:     tools/bpf/
>  F:     tools/lib/bpf/
>  F:     tools/testing/selftests/bpf/
> +F:     security/bpf/
>  K:     bpf
>  N:     bpf


* Re: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM
  2020-02-11 12:43     ` KP Singh
@ 2020-02-11 17:58       ` Alexei Starovoitov
  2020-02-11 18:44         ` BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM] Jann Horn
  0 siblings, 1 reply; 44+ messages in thread
From: Alexei Starovoitov @ 2020-02-11 17:58 UTC (permalink / raw)
  To: KP Singh
  Cc: linux-kernel, bpf, linux-security-module, Brendan Jackman,
	Florent Revest, Thomas Garnier, Alexei Starovoitov,
	Daniel Borkmann, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Jann Horn,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	kernel-team

On Tue, Feb 11, 2020 at 01:43:34PM +0100, KP Singh wrote:
> 
> > Pros:
> > - no changes to security/ directory
> 
> * We do need to initialize the BPF LSM as a proper LSM (i.e. in
>   security/bpf) because it needs access to security blobs. This is
>   only possible statically for now as they should be set after boot
>   time to provide guarantees to any helper that uses information in
>   security blobs. I have proposed how we could make this dynamic as a
>   discussion topic for the BPF conference:
> 
>     https://lore.kernel.org/bpf/20200127171516.GA2710@chromium.org
> 
>   As you can see from some of the prototype use cases, e.g.:
> 
>     https://github.com/sinkap/linux-krsi/blob/patch/v1/examples/samples/bpf/lsm_detect_exec_unlink.c
> 
>   they do rely on security blobs, which are key to doing meaningful
>   security analysis.

The above example doesn't use a security blob from the bpf prog.
Are you referring to
https://github.com/sinkap/linux-krsi/blob/patch/v1/examples/security/bpf/ops.c#L455
Then it's a bpf helper that is using it. And that helper could have been
implemented differently. I think it should be a separate discussion on the
merits of such a helper, its api, and its implementation.

At the same time I agree that additional scratch space accessible by the lsm in
inode->i_security, cred->security and other kernel structs is certainly
necessary, but it's a nack to piggyback on the legacy lsm way of doing it. The
implementation of bpf_lsm_blob_sizes.lbs_inode fits one single purpose. It's
fine for a builtin LSM where blob sizes and the code that uses them live in one
place in the kernel and are modified together when the need for more space
arises. For bpf progs such an approach is a non-starter. Progs need a flexible
amount of scratch space. Thankfully this problem is not new. It was solved
already. Please see how bpf_sk_storage is implemented. It's a flexible amount
of scratch space available to bpf progs in every socket. It's done on demand.
No space is wasted when progs are not using it. Not all sockets have to have
it either. I strongly believe that the same approach should be used for
*security' pointer or a new pointer. net/core/bpf_sk_storage.c implementation
is not socket specific. It can be generalized for inode, cred, task, etc.
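For illustration, the on-demand idea can be modeled in userspace as a
lookup-or-create table keyed by object pointer (a deliberately simplified
sketch; the real net/core/bpf_sk_storage.c uses per-object lists, RCU and the
map abstraction, and none of the names below are kernel APIs):

```c
#include <assert.h>
#include <stdlib.h>

/* Per-object scratch space allocated on first use, so objects no program
 * ever touches pay no memory cost.  A tiny linked list stands in for the
 * kernel's per-object storage list + hashed lookup. */
struct local_storage {
	const void *owner;   /* the socket/inode/task the blob belongs to */
	void *data;
	struct local_storage *next;
};

static struct local_storage *storage_list;

/* lookup-or-create, in the spirit of bpf_sk_storage_get() with a
 * create flag */
static void *local_storage_get(const void *owner, size_t size, int create)
{
	struct local_storage *s;

	for (s = storage_list; s; s = s->next)
		if (s->owner == owner)
			return s->data;
	if (!create)
		return NULL;             /* no blob and caller won't allocate */
	s = malloc(sizeof(*s));
	s->owner = owner;
	s->data = calloc(1, size);       /* zeroed blob on first use */
	s->next = storage_list;
	storage_list = s;
	return s->data;
}
```

The point of the design is the `create` flag: untouched objects never get a
blob, and each program can size its own scratch area independently.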

> * When using the semantic provided by fexit, the BPF LSM program will
>   always be executed and will be able to override / clobber the
>   decision of LSMs which appear before it in the ordered list. This
>   semantic is very different from what we currently have (i.e. the BPF
>   LSM hook is only called if all the other LSMs allow the action) and
>   seems to be bypassing the LSM framework.

If that's a concern it's trivial to add an 'if (RC == 0)' check to the fexit
trampoline generator specific to lsm progs.

> * Not all security_* wrappers simply call the attached hooks and return
>   their exit code, and not all of them pass the same arguments to the
>   hook, e.g. security_bprm_check, security_file_open, and
>   security_task_alloc, to name just a few. Illustrating this further
>   with security_task_alloc as an example:
> 
> 	rc = call_int_hook(task_alloc, 0, task, clone_flags);
> 	if (unlikely(rc))
> 		security_task_free(task);
> 	return rc;
> 
> This means we would leak task_structs in this case. While
> call_int_hook is roughly equivalent to an fexit trampoline for most
> hooks, it's not for some (quite important) LSM hooks.

Let's look at them one by one.
1.
security_bprm_check() calling ima_bprm_check.
I think it's layering violation.
Was it that hard to convince vfs folks to add it in fs/exec.c where
it belongs?

2.
security_file_open() calling fsnotify_perm().
Same layering violation and it's clearly broken.
When CONFIG_SECURITY is not defined:
static inline int security_file_open(struct file *file)
{
        return 0;
}
There is no call to fsnotify_perm().
So fsnotify_open/mkdir/etc() work fine with and without CONFIG_SECURITY,
but fsnotify_perm() events can be lost depending on kconfig.
fsnotify_perm() should be moved in fs/open.c.

3.
security_task_alloc(). Hmm.
When CONFIG_SECURITY is enabled and a corresponding LSM with a
non-zero blob_sizes.lbs_task is registered, that hook allocates
memory even if task_alloc is empty.
Doing security_task_free() in that hook also looks wrong.
It should have been:
diff --git a/kernel/fork.c b/kernel/fork.c
index ef82feb4bddc..a0d31e781341 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2062,7 +2062,7 @@ static __latent_entropy struct task_struct *copy_process(
        shm_init_task(p);
        retval = security_task_alloc(p, clone_flags);
        if (retval)
-               goto bad_fork_cleanup_audit;
+               goto bad_fork_cleanup_security;
Same issue with security_file_alloc().
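The cleanup-ordering point can be modeled with a toy version of
copy_process()'s goto-based unwinding (purely illustrative; the names and the
simplified alloc/free behavior below are made up for the sketch, not the real
kernel code paths):

```c
#include <assert.h>

/* Toy model of goto-based error unwinding in copy_process().  The bug:
 * if security_task_alloc() allocates a blob and then fails, jumping to
 * the label *before* the security cleanup leaks the blob. */
static int audit_live, security_live;

static int  audit_alloc(void)   { audit_live = 1; return 0; }
static void audit_free(void)    { audit_live = 0; }
static void security_free(void) { security_live = 0; }

/* models a security hook that allocates its blob, then fails */
static int security_alloc(int hook_rc)
{
	security_live = 1;               /* blob allocated regardless */
	return hook_rc;
}

static int copy_process_model(int hook_rc, int fixed)
{
	int retval;

	retval = audit_alloc();
	if (retval)
		goto bad_fork_cleanup;
	retval = security_alloc(hook_rc);
	if (retval) {
		if (fixed)
			goto bad_fork_cleanup_security; /* frees the blob */
		goto bad_fork_cleanup_audit;            /* buggy: leaks it */
	}
	return 0;

bad_fork_cleanup_security:
	security_free();
bad_fork_cleanup_audit:
	audit_free();
bad_fork_cleanup:
	return retval;
}
```

Jumping one label too early skips exactly one free; the fix is just targeting
the label that unwinds everything allocated so far.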

I think these layering issues should be fixed, but they're not a blocker for
lsm-bpf to proceed. Using fexit mechanism and bpf_sk_storage generalization is
all that is needed. None of it should touch security/*.


* BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-11 17:58       ` Alexei Starovoitov
@ 2020-02-11 18:44         ` Jann Horn
  2020-02-11 19:09           ` Alexei Starovoitov
  0 siblings, 1 reply; 44+ messages in thread
From: Jann Horn @ 2020-02-11 18:44 UTC (permalink / raw)
  To: Alexei Starovoitov, KP Singh
  Cc: kernel list, bpf, linux-security-module, Brendan Jackman,
	Florent Revest, Thomas Garnier, Alexei Starovoitov,
	Daniel Borkmann, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Matthew Garrett,
	Christian Brauner, Mickaël Salaün, Florent Revest,
	Brendan Jackman, Serge E. Hallyn, Mauro Carvalho Chehab,
	David S. Miller, Greg Kroah-Hartman, Kernel Team

On Tue, Feb 11, 2020 at 6:58 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
> On Tue, Feb 11, 2020 at 01:43:34PM +0100, KP Singh wrote:
[...]
> > * When using the semantic provided by fexit, the BPF LSM program will
> >   always be executed and will be able to override / clobber the
> >   decision of LSMs which appear before it in the ordered list. This
> >   semantic is very different from what we currently have (i.e. the BPF
> >   LSM hook is only called if all the other LSMs allow the action) and
> >   seems to be bypassing the LSM framework.
>
> If that's a concern it's trivial to add an 'if (RC == 0)' check to the fexit
> trampoline generator specific to lsm progs.
[...]
> Using fexit mechanism and bpf_sk_storage generalization is
> all that is needed. None of it should touch security/*.

If I understand your suggestion correctly, that seems like a terrible
idea to me from the perspective of inspectability and debuggability.
If at runtime, a function can branch off elsewhere to modify its
decision, I want to see that in the source code. If someone e.g.
changes the parameters or the locking rules around a security hook,
how are they supposed to understand the implications if that happens
through some magic fexit trampoline that is injected at runtime?


* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-11 18:44         ` BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM] Jann Horn
@ 2020-02-11 19:09           ` Alexei Starovoitov
  2020-02-11 19:36             ` Jann Horn
  0 siblings, 1 reply; 44+ messages in thread
From: Alexei Starovoitov @ 2020-02-11 19:09 UTC (permalink / raw)
  To: Jann Horn
  Cc: KP Singh, kernel list, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Kernel Team

On Tue, Feb 11, 2020 at 07:44:05PM +0100, Jann Horn wrote:
> On Tue, Feb 11, 2020 at 6:58 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> > On Tue, Feb 11, 2020 at 01:43:34PM +0100, KP Singh wrote:
> [...]
> > > * When using the semantic provided by fexit, the BPF LSM program will
> > >   always be executed and will be able to override / clobber the
> > >   decision of LSMs which appear before it in the ordered list. This
> > >   semantic is very different from what we currently have (i.e. the BPF
> > >   LSM hook is only called if all the other LSMs allow the action) and
> > >   seems to be bypassing the LSM framework.
> >
> > If that's a concern it's trivial to add an 'if (RC == 0)' check to the fexit
> > trampoline generator specific to lsm progs.
> [...]
> > Using fexit mechanism and bpf_sk_storage generalization is
> > all that is needed. None of it should touch security/*.
> 
> If I understand your suggestion correctly, that seems like a terrible
> idea to me from the perspective of inspectability and debuggability.
> If at runtime, a function can branch off elsewhere to modify its
> decision, I want to see that in the source code. If someone e.g.
> changes the parameters or the locking rules around a security hook,
> how are they supposed to understand the implications if that happens
> through some magic fexit trampoline that is injected at runtime?

I'm not following the concern. There is an error injection facility that is
heavily used with and without bpf. In this case there is really no difference
whether trampoline is used with direct call or indirect callback via function
pointer. Both will jump to bpf prog. The _source code_ of bpf program will
_always_ be available for humans to examine via "bpftool prog dump" since BTF
is required. So from inspectability and debuggability point of view lsm+bpf
stuff is way more visible than any builtin LSM. At any time people will be able
to see what exactly is running on the system. Assuming folks can read C code.


* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-11 19:09           ` Alexei Starovoitov
@ 2020-02-11 19:36             ` Jann Horn
  2020-02-11 20:10               ` Alexei Starovoitov
  0 siblings, 1 reply; 44+ messages in thread
From: Jann Horn @ 2020-02-11 19:36 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: KP Singh, kernel list, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Kernel Team

On Tue, Feb 11, 2020 at 8:09 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
> On Tue, Feb 11, 2020 at 07:44:05PM +0100, Jann Horn wrote:
> > On Tue, Feb 11, 2020 at 6:58 PM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > > On Tue, Feb 11, 2020 at 01:43:34PM +0100, KP Singh wrote:
> > [...]
> > > > * When using the semantic provided by fexit, the BPF LSM program will
> > > >   always be executed and will be able to override / clobber the
> > > >   decision of LSMs which appear before it in the ordered list. This
> > > >   semantic is very different from what we currently have (i.e. the BPF
> > > >   LSM hook is only called if all the other LSMs allow the action) and
> > > >   seems to be bypassing the LSM framework.
> > >
> > > If that's a concern it's trivial to add an 'if (RC == 0)' check to the fexit
> > > trampoline generator specific to lsm progs.
> > [...]
> > > Using fexit mechanism and bpf_sk_storage generalization is
> > > all that is needed. None of it should touch security/*.
> >
> > If I understand your suggestion correctly, that seems like a terrible
> > idea to me from the perspective of inspectability and debuggability.
> > If at runtime, a function can branch off elsewhere to modify its
> > decision, I want to see that in the source code. If someone e.g.
> > changes the parameters or the locking rules around a security hook,
> > how are they supposed to understand the implications if that happens
> > through some magic fexit trampoline that is injected at runtime?
>
> I'm not following the concern. There is error injection facility that is
> heavily used with and without bpf. In this case there is really no difference
> whether trampoline is used with direct call or indirect callback via function
> pointer. Both will jump to bpf prog. The _source code_ of bpf program will
> _always_ be available for humans to examine via "bpftool prog dump" since BTF
> is required. So from inspectability and debuggability point of view lsm+bpf
> stuff is way more visible than any builtin LSM. At any time people will be able
> to see what exactly is running on the system. Assuming folks can read C code.

You said that you want to use fexit without touching security/, which
AFAIU means that the branch from security_*() to the BPF LSM will be
invisible in the *kernel's* source code unless the reader already
knows about the BPF LSM. But maybe I'm just misunderstanding your
idea.

If a random developer is trying to change the locking rules around
security_blah(), and wants to e.g. figure out whether it's okay to
call that thing with a spinlock held, or whether one of the arguments
is actually used, or stuff like that, the obvious way to verify that
is to follow all the direct and indirect calls made from
security_blah(). It's tedious, but it works, unless something is
hooked up to it in a way that isn't visible anywhere in the source code.

I agree that the way in which the call happens behind the scenes
doesn't matter all that much - I don't really care all that much
whether it's an indirect call, a runtime-patched direct call in inline
assembly, or an fexit hook. What I do care about is that someone
reading through any affected function can immediately see that the
branch exists - in other words, ideally, I'd like it to be something
happening in the method body, but if you think that's unacceptable, I
think there should at least be a function attribute that makes it very
clear what's going on.


* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-11 19:36             ` Jann Horn
@ 2020-02-11 20:10               ` Alexei Starovoitov
  2020-02-11 20:33                 ` Jann Horn
  0 siblings, 1 reply; 44+ messages in thread
From: Alexei Starovoitov @ 2020-02-11 20:10 UTC (permalink / raw)
  To: Jann Horn
  Cc: KP Singh, kernel list, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Kernel Team

On Tue, Feb 11, 2020 at 08:36:18PM +0100, Jann Horn wrote:
> On Tue, Feb 11, 2020 at 8:09 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> > On Tue, Feb 11, 2020 at 07:44:05PM +0100, Jann Horn wrote:
> > > On Tue, Feb 11, 2020 at 6:58 PM Alexei Starovoitov
> > > <alexei.starovoitov@gmail.com> wrote:
> > > > On Tue, Feb 11, 2020 at 01:43:34PM +0100, KP Singh wrote:
> > > [...]
> > > > > * When using the semantic provided by fexit, the BPF LSM program will
> > > > >   always be executed and will be able to override / clobber the
> > > > >   decision of LSMs which appear before it in the ordered list. This
> > > > >   semantic is very different from what we currently have (i.e. the BPF
> > > > >   LSM hook is only called if all the other LSMs allow the action) and
> > > > >   seems to be bypassing the LSM framework.
> > > >
> > > > If that's a concern it's trivial to add an 'if (RC == 0)' check to the fexit
> > > > trampoline generator specific to lsm progs.
> > > [...]
> > > > Using fexit mechanism and bpf_sk_storage generalization is
> > > > all that is needed. None of it should touch security/*.
> > >
> > > If I understand your suggestion correctly, that seems like a terrible
> > > idea to me from the perspective of inspectability and debuggability.
> > > If at runtime, a function can branch off elsewhere to modify its
> > > decision, I want to see that in the source code. If someone e.g.
> > > changes the parameters or the locking rules around a security hook,
> > > how are they supposed to understand the implications if that happens
> > > through some magic fexit trampoline that is injected at runtime?
> >
> > I'm not following the concern. There is an error injection facility that is
> > heavily used with and without bpf. In this case there is really no difference
> > whether trampoline is used with direct call or indirect callback via function
> > pointer. Both will jump to bpf prog. The _source code_ of bpf program will
> > _always_ be available for humans to examine via "bpftool prog dump" since BTF
> > is required. So from inspectability and debuggability point of view lsm+bpf
> > stuff is way more visible than any builtin LSM. At any time people will be able
> > to see what exactly is running on the system. Assuming folks can read C code.
> 
> You said that you want to use fexit without touching security/, which
> AFAIU means that the branch from security_*() to the BPF LSM will be
> invisible in the *kernel's* source code unless the reader already
> knows about the BPF LSM. But maybe I'm just misunderstanding your
> idea.
> 
> If a random developer is trying to change the locking rules around
> security_blah(), and wants to e.g. figure out whether it's okay to
> call that thing with a spinlock held, or whether one of the arguments
> is actually used, or stuff like that, the obvious way to verify that
> is to follow all the direct and indirect calls made from
> security_blah(). It's tedious, but it works, unless something is
> hooked up to it in a way that isn't visible anywhere in the source code.
> 
> I agree that the way in which the call happens behind the scenes
> doesn't matter all that much - I don't really care all that much
> whether it's an indirect call, a runtime-patched direct call in inline
> assembly, or an fexit hook. What I do care about is that someone
> reading through any affected function can immediately see that the
> branch exists - in other words, ideally, I'd like it to be something
> happening in the method body, but if you think that's unacceptable, I
> think there should at least be a function attribute that makes it very
> clear what's going on.

Got it. Then let's whitelist them?
All error injection points are marked with ALLOW_ERROR_INJECTION().
We can do something similar here, but let's do it via BTF and avoid
abusing yet another elf section for this mark.
I think BTF_TYPE_EMIT() should work. Just need to pick an explicit enough
name and write an extensive comment about what is going on.
Locking rules and cleanup around security_blah() shouldn't change though.
Like security_task_alloc() should be paired with security_task_free().
And so on. With bpf_sk_storage like logic the alloc/free of scratch
space will be similar to the way socket and bpf progs deal with it.

Some of the lsm hooks are in the critical path, like security_socket_sendmsg(),
and retpoline hurts. If we go with indirect calls right now it will be harder
to optimize later. It took us a long time to come up with the bpf trampoline
and build the bpf dispatcher on top of it to remove a single indirect call from
the XDP runtime. For bpf+lsm it would be good to avoid it from the start.


* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-11 20:10               ` Alexei Starovoitov
@ 2020-02-11 20:33                 ` Jann Horn
  2020-02-11 21:32                   ` Jann Horn
  2020-02-11 21:38                   ` Alexei Starovoitov
  0 siblings, 2 replies; 44+ messages in thread
From: Jann Horn @ 2020-02-11 20:33 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: KP Singh, kernel list, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Kernel Team

On Tue, Feb 11, 2020 at 9:10 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
> On Tue, Feb 11, 2020 at 08:36:18PM +0100, Jann Horn wrote:
> > On Tue, Feb 11, 2020 at 8:09 PM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > > On Tue, Feb 11, 2020 at 07:44:05PM +0100, Jann Horn wrote:
> > > > On Tue, Feb 11, 2020 at 6:58 PM Alexei Starovoitov
> > > > <alexei.starovoitov@gmail.com> wrote:
> > > > > On Tue, Feb 11, 2020 at 01:43:34PM +0100, KP Singh wrote:
> > > > [...]
> > > > > > * When using the semantic provided by fexit, the BPF LSM program will
> > > > > >   always be executed and will be able to override / clobber the
> > > > > >   decision of LSMs which appear before it in the ordered list. This
> > > > > >   semantic is very different from what we currently have (i.e. the BPF
> > > > > >   LSM hook is only called if all the other LSMs allow the action) and
> > > > > >   seems to be bypassing the LSM framework.
> > > > >
> > > > > If that's a concern it's trivial to add an 'if (RC == 0)' check to the fexit
> > > > > trampoline generator specific to lsm progs.
> > > > [...]
> > > > > Using fexit mechanism and bpf_sk_storage generalization is
> > > > > all that is needed. None of it should touch security/*.
> > > >
> > > > If I understand your suggestion correctly, that seems like a terrible
> > > > idea to me from the perspective of inspectability and debuggability.
> > > > If at runtime, a function can branch off elsewhere to modify its
> > > > decision, I want to see that in the source code. If someone e.g.
> > > > changes the parameters or the locking rules around a security hook,
> > > > how are they supposed to understand the implications if that happens
> > > > through some magic fexit trampoline that is injected at runtime?
> > >
> > > I'm not following the concern. There is an error injection facility that is
> > > heavily used with and without bpf. In this case there is really no difference
> > > whether trampoline is used with direct call or indirect callback via function
> > > pointer. Both will jump to bpf prog. The _source code_ of bpf program will
> > > _always_ be available for humans to examine via "bpftool prog dump" since BTF
> > > is required. So from inspectability and debuggability point of view lsm+bpf
> > > stuff is way more visible than any builtin LSM. At any time people will be able
> > > to see what exactly is running on the system. Assuming folks can read C code.
> >
> > You said that you want to use fexit without touching security/, which
> > AFAIU means that the branch from security_*() to the BPF LSM will be
> > invisible in the *kernel's* source code unless the reader already
> > knows about the BPF LSM. But maybe I'm just misunderstanding your
> > idea.
> >
> > If a random developer is trying to change the locking rules around
> > security_blah(), and wants to e.g. figure out whether it's okay to
> > call that thing with a spinlock held, or whether one of the arguments
> > is actually used, or stuff like that, the obvious way to verify that
> > is to follow all the direct and indirect calls made from
> > security_blah(). It's tedious, but it works, unless something is
> > hooked up to it in a way that isn't visible anywhere in the source code.
> >
> > I agree that the way in which the call happens behind the scenes
> > doesn't matter all that much - I don't really care all that much
> > whether it's an indirect call, a runtime-patched direct call in inline
> > assembly, or an fexit hook. What I do care about is that someone
> > reading through any affected function can immediately see that the
> > branch exists - in other words, ideally, I'd like it to be something
> > happening in the method body, but if you think that's unacceptable, I
> > think there should at least be a function attribute that makes it very
> > clear what's going on.
>
> Got it. Then let's whitelist them ?
> All error injection points are marked with ALLOW_ERROR_INJECTION().
> We can do something similar here, but let's do it via BTF and avoid
> abusing yet another elf section for this mark.
> I think BTF_TYPE_EMIT() should work. Just need to pick explicit enough
> name and extensive comment about what is going on.

Sounds reasonable to me. :)

> Locking rules and cleanup around security_blah() shouldn't change though.
> Like security_task_alloc() should be paired with security_task_free().
> And so on. With bpf_sk_storage like logic the alloc/free of scratch
> space will be similar to the way socket and bpf progs deal with it.
>
> Some of the lsm hooks are in the critical path, like security_socket_sendmsg().
> retpoline hurts. If we go with indirect calls right now it will be harder to
> optimize later. It took us a long time to come up with bpf trampoline and build
> bpf dispatcher on top of it to remove a single indirect call from XDP runtime.
> For bpf+lsm it would be good to avoid it from the start.

Just out of curiosity: Are fexit hooks really much cheaper than indirect calls?

AFAIK ftrace on x86-64 replaces the return pointer for fexit
instrumentation (see prepare_ftrace_return()). So when the function
returns, there is one return misprediction for branching into
return_to_handler(), and then the processor's internal return stack
will probably be misaligned so that after ftrace_return_to_handler()
is done running, all the following returns will also be mispredicted.

So I would've thought that fexit hooks would have at least roughly the
same impact as indirect calls - indirect calls via retpoline do one
mispredicted branch, fexit hooks do at least two AFAICS. But I guess
indirect calls could still be slower if fexit benefits from having all
the mispredicted pointers stored on the cache-hot stack while the
indirect branch target is too infrequently accessed to be in L1D, or
something like that?

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-11 20:33                 ` Jann Horn
@ 2020-02-11 21:32                   ` Jann Horn
  2020-02-11 21:38                   ` Alexei Starovoitov
  1 sibling, 0 replies; 44+ messages in thread
From: Jann Horn @ 2020-02-11 21:32 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: KP Singh, kernel list, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Kernel Team

On Tue, Feb 11, 2020 at 9:33 PM Jann Horn <jannh@google.com> wrote:
> On Tue, Feb 11, 2020 at 9:10 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> > On Tue, Feb 11, 2020 at 08:36:18PM +0100, Jann Horn wrote:
> > > On Tue, Feb 11, 2020 at 8:09 PM Alexei Starovoitov
> > > <alexei.starovoitov@gmail.com> wrote:
> > > > On Tue, Feb 11, 2020 at 07:44:05PM +0100, Jann Horn wrote:
> > > > > On Tue, Feb 11, 2020 at 6:58 PM Alexei Starovoitov
> > > > > <alexei.starovoitov@gmail.com> wrote:
> > > > > > On Tue, Feb 11, 2020 at 01:43:34PM +0100, KP Singh wrote:
> > > > > [...]
> > > > > > > * When using the semantic provided by fexit, the BPF LSM program will
> > > > > > >   always be executed and will be able to override / clobber the
> > > > > > >   decision of LSMs which appear before it in the ordered list. This
> > > > > > >   semantic is very different from what we currently have (i.e. the BPF
> > > > > > >   LSM hook is only called if all the other LSMs allow the action) and
> > > > > > >   seems to be bypassing the LSM framework.
> > > > > >
> > > > > > If that's a concern, it's trivial to add an 'if (RC == 0)' check to the fexit
> > > > > > trampoline generator specific to lsm progs.
> > > > > [...]
> > > > > > Using fexit mechanism and bpf_sk_storage generalization is
> > > > > > all that is needed. None of it should touch security/*.
[...]
> > Some of the lsm hooks are in the critical path, like security_socket_sendmsg().
> > retpoline hurts. If we go with indirect calls right now it will be harder to
> > optimize later. It took us a long time to come up with bpf trampoline and build
> > bpf dispatcher on top of it to remove a single indirect call from XDP runtime.
> > For bpf+lsm it would be good to avoid it from the start.
>
> Just out of curiosity: Are fexit hooks really much cheaper than indirect calls?
>
> AFAIK ftrace on x86-64 replaces the return pointer for fexit
> instrumentation (see prepare_ftrace_return()). So when the function
> returns, there is one return misprediction for branching into
> return_to_handler(), and then the processor's internal return stack
> will probably be misaligned so that after ftrace_return_to_handler()
> is done running, all the following returns will also be mispredicted.
>
> So I would've thought that fexit hooks would have at least roughly the
> same impact as indirect calls - indirect calls via retpoline do one
> mispredicted branch, fexit hooks do at least two AFAICS. But I guess
> indirect calls could still be slower if fexit benefits from having all
> the mispredicted pointers stored on the cache-hot stack while the
> indirect branch target is too infrequently accessed to be in L1D, or
> something like that?

Ah, kpsingh explained to me that BPF trampolines work differently from
normal ftrace and don't do the return address poking. Still, the
processor's internal call stack will be misaligned by the BPF
trampoline, right? Since there is one more call than return, if e.g.
security_blah() is hooked, then the CPU will probably predict
security_blah+5 as the target when returning from the trampoline to
security_blah's parent, then when returning from security_blah's
parent to the grandparent predict the parent, and so on, right?

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-11 20:33                 ` Jann Horn
  2020-02-11 21:32                   ` Jann Horn
@ 2020-02-11 21:38                   ` Alexei Starovoitov
  2020-02-11 23:26                     ` Alexei Starovoitov
  1 sibling, 1 reply; 44+ messages in thread
From: Alexei Starovoitov @ 2020-02-11 21:38 UTC (permalink / raw)
  To: Jann Horn
  Cc: KP Singh, kernel list, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Kernel Team

On Tue, Feb 11, 2020 at 09:33:49PM +0100, Jann Horn wrote:
> >
> > Got it. Then let's whitelist them ?
> > All error injection points are marked with ALLOW_ERROR_INJECTION().
> > We can do something similar here, but let's do it via BTF and avoid
> > abusing yet another elf section for this mark.
> > I think BTF_TYPE_EMIT() should work. Just need to pick explicit enough
> > name and extensive comment about what is going on.
> 
> Sounds reasonable to me. :)

awesome :)

> > Locking rules and cleanup around security_blah() shouldn't change though.
> > Like security_task_alloc() should be paired with security_task_free().
> > And so on. With bpf_sk_storage like logic the alloc/free of scratch
> > space will be similar to the way socket and bpf progs deal with it.
> >
> > Some of the lsm hooks are in the critical path, like security_socket_sendmsg().
> > retpoline hurts. If we go with indirect calls right now it will be harder to
> > optimize later. It took us a long time to come up with bpf trampoline and build
> > bpf dispatcher on top of it to remove a single indirect call from XDP runtime.
> > For bpf+lsm it would be good to avoid it from the start.
> 
> Just out of curiosity: Are fexit hooks really much cheaper than indirect calls?
> 
> AFAIK ftrace on x86-64 replaces the return pointer for fexit
> instrumentation (see prepare_ftrace_return()). So when the function
> returns, there is one return misprediction for branching into
> return_to_handler(), and then the processor's internal return stack
> will probably be misaligned so that after ftrace_return_to_handler()
> is done running, all the following returns will also be mispredicted.
> 
> So I would've thought that fexit hooks would have at least roughly the
> same impact as indirect calls - indirect calls via retpoline do one
> mispredicted branch, fexit hooks do at least two AFAICS. But I guess
> indirect calls could still be slower if fexit benefits from having all
> the mispredicted pointers stored on the cache-hot stack while the
> indirect branch target is too infrequently accessed to be in L1D, or
> something like that?

For fexit I've tried the ftrace style of overwriting the return address on the
stack, but it was slower than "leave; add rsp, 8; ret", so I went with that.
Looks like skipping one frame makes Intel's return stack prediction work better
than overwriting. I tested on Broadwell only, though. Later I realized that I
can do "jmp bpf_trampoline" instead of "call bpf_trampoline", so this extra
'add rsp, 8' can be removed and both the direct jump and return stack predictors
will be happy. Will be posting this patch soon.

Old perf numbers of fexit vs original vs kprobe:
https://lore.kernel.org/bpf/20191122011515.255371-1-ast@kernel.org/

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-11 21:38                   ` Alexei Starovoitov
@ 2020-02-11 23:26                     ` Alexei Starovoitov
  2020-02-12  0:09                       ` Daniel Borkmann
  0 siblings, 1 reply; 44+ messages in thread
From: Alexei Starovoitov @ 2020-02-11 23:26 UTC (permalink / raw)
  To: Jann Horn
  Cc: KP Singh, kernel list, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, Daniel Borkmann, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Kernel Team

On Tue, Feb 11, 2020 at 1:38 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, Feb 11, 2020 at 09:33:49PM +0100, Jann Horn wrote:
> > >
> > > Got it. Then let's whitelist them ?
> > > All error injection points are marked with ALLOW_ERROR_INJECTION().
> > > We can do something similar here, but let's do it via BTF and avoid
> > > abusing yet another elf section for this mark.
> > > I think BTF_TYPE_EMIT() should work. Just need to pick explicit enough
> > > name and extensive comment about what is going on.
> >
> > Sounds reasonable to me. :)
>
> awesome :)

Looks like the kernel already provides this whitelisting.
$ bpftool btf dump file /sys/kernel/btf/vmlinux |grep FUNC|grep '\<security_'
gives the list of all LSM hooks that lsm-bpf will be able to attach to.
There are two exceptions: security_add_hooks() and security_init().
Both are '__init', so it's too late for lsm-bpf to touch them.
So filtering BTF funcs by 'security_' prefix will be enough.
It should be documented though.
The number of attachable funcs depends on kconfig, which is
a nice property and further strengthens the point that
lsm-bpf is very much kernel specific.
We probably should blacklist security_bpf*() hooks though.
Otherwise inception fans will have a field day.
Disallowing bpf with bpf :)
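For illustration only, the attach-time name filter sketched above could look
something like the following. The helper name and exact logic are hypothetical
(not from any posted patch); the real check would operate on BTF FUNC entries:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch of the attach-time check discussed above:
 * allow attaching only to functions with the "security_" prefix,
 * and reject the security_bpf*() hooks so that a BPF program cannot
 * gate the BPF syscall itself ("disallowing bpf with bpf").
 */
static bool bpf_lsm_hook_attachable(const char *func_name)
{
	/* Only security_*() entry points are candidate attach targets. */
	if (strncmp(func_name, "security_", strlen("security_")) != 0)
		return false;
	/* Blacklist security_bpf*() to avoid the inception scenario. */
	if (strncmp(func_name, "security_bpf", strlen("security_bpf")) == 0)
		return false;
	return true;
}
```

The __init exceptions (security_add_hooks(), security_init()) are not handled
here; in practice those are discarded after boot and cannot be attached to
anyway.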

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-11 23:26                     ` Alexei Starovoitov
@ 2020-02-12  0:09                       ` Daniel Borkmann
  2020-02-12  2:45                         ` Alexei Starovoitov
  0 siblings, 1 reply; 44+ messages in thread
From: Daniel Borkmann @ 2020-02-12  0:09 UTC (permalink / raw)
  To: Alexei Starovoitov, Jann Horn
  Cc: KP Singh, kernel list, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Matthew Garrett,
	Christian Brauner, Mickaël Salaün, Florent Revest,
	Brendan Jackman, Serge E. Hallyn, Mauro Carvalho Chehab,
	David S. Miller, Greg Kroah-Hartman, Kernel Team

On 2/12/20 12:26 AM, Alexei Starovoitov wrote:
> On Tue, Feb 11, 2020 at 1:38 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
>>
>> On Tue, Feb 11, 2020 at 09:33:49PM +0100, Jann Horn wrote:
>>>>
>>>> Got it. Then let's whitelist them ?
>>>> All error injection points are marked with ALLOW_ERROR_INJECTION().
>>>> We can do something similar here, but let's do it via BTF and avoid
>>>> abusing yet another elf section for this mark.
>>>> I think BTF_TYPE_EMIT() should work. Just need to pick explicit enough
>>>> name and extensive comment about what is going on.
>>>
>>> Sounds reasonable to me. :)
>>
>> awesome :)
> 
> Looks like the kernel already provides this whitelisting.
> $ bpftool btf dump file /sys/kernel/btf/vmlinux |grep FUNC|grep '\<security_'
> gives the list of all LSM hooks that lsm-bpf will be able to attach to.
> There are two exceptions there security_add_hooks() and security_init().
> Both are '__init'. Too late for lsm-bpf to touch.
> So filtering BTF funcs by 'security_' prefix will be enough.
> It should be documented though.
> The number of attachable funcs depends on kconfig which is
> a nice property and further strengthen the point that
> lsm-bpf is very much kernel specific.
> We probably should blacklist security_bpf*() hooks though.

One thing that is not quite clear to me wrt the fexit approach; assuming
we'd whitelist something like security_inode_link():

int security_inode_link(struct dentry *old_dentry, struct inode *dir,
                          struct dentry *new_dentry)
{
         if (unlikely(IS_PRIVATE(d_backing_inode(old_dentry))))
                 return 0;
         return call_int_hook(inode_link, 0, old_dentry, dir, new_dentry);
}

Would this then mean the BPF prog needs to reimplement the above check by
probing old_dentry->d_inode to later ensure its verdict stays 0 there
too, or that such extra code is to be moved to call-sites instead? If
the former, what about more complex logic?

Another approach could be to have a special nop inside the call_int_hook()
macro which would then get patched to avoid these situations. Somewhat
similar to static keys, where it could be defined anywhere in text but
with updating of call_int_hook()'s RC for the verdict.

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-12  0:09                       ` Daniel Borkmann
@ 2020-02-12  2:45                         ` Alexei Starovoitov
  2020-02-12 13:27                           ` Daniel Borkmann
  2020-02-12 15:52                           ` Casey Schaufler
  0 siblings, 2 replies; 44+ messages in thread
From: Alexei Starovoitov @ 2020-02-12  2:45 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: Jann Horn, KP Singh, kernel list, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Matthew Garrett,
	Christian Brauner, Mickaël Salaün, Florent Revest,
	Brendan Jackman, Serge E. Hallyn, Mauro Carvalho Chehab,
	David S. Miller, Greg Kroah-Hartman, Kernel Team

On Wed, Feb 12, 2020 at 01:09:07AM +0100, Daniel Borkmann wrote:
> 
> Another approach could be to have a special nop inside call_int_hook()
> macro which would then get patched to avoid these situations. Somewhat
> similar like static keys where it could be defined anywhere in text but
> with updating of call_int_hook()'s RC for the verdict.

Sounds nice in theory. I couldn't quite picture how that would look
in the code, so I hacked:
diff --git a/security/security.c b/security/security.c
index 565bc9b67276..ce4bc1e5e26c 100644
--- a/security/security.c
+++ b/security/security.c
@@ -28,6 +28,7 @@
 #include <linux/string.h>
 #include <linux/msg.h>
 #include <net/flow.h>
+#include <linux/jump_label.h>

 #define MAX_LSM_EVM_XATTR      2

@@ -678,12 +679,26 @@ static void __init lsm_early_task(struct task_struct *task)
  *     This is a hook that returns a value.
  */

+#define LSM_HOOK_NAME(FUNC) \
+       DEFINE_STATIC_KEY_FALSE(bpf_lsm_key_##FUNC);
+#include <linux/lsm_hook_names.h>
+#undef LSM_HOOK_NAME
+__diag_push();
+__diag_ignore(GCC, 8, "-Wstrict-prototypes", "");
+#define LSM_HOOK_NAME(FUNC) \
+       int bpf_lsm_call_##FUNC() {return 0;}
+#include <linux/lsm_hook_names.h>
+#undef LSM_HOOK_NAME
+__diag_pop();
+
 #define call_void_hook(FUNC, ...)                              \
        do {                                                    \
                struct security_hook_list *P;                   \
                                                                \
                hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
                        P->hook.FUNC(__VA_ARGS__);              \
+               if (static_branch_unlikely(&bpf_lsm_key_##FUNC)) \
+                      (void)bpf_lsm_call_##FUNC(__VA_ARGS__); \
        } while (0)

 #define call_int_hook(FUNC, IRC, ...) ({                       \
@@ -696,6 +711,8 @@ static void __init lsm_early_task(struct task_struct *task)
                        if (RC != 0)                            \
                                break;                          \
                }                                               \
+               if (RC == IRC && static_branch_unlikely(&bpf_lsm_key_##FUNC)) \
+                      RC = bpf_lsm_call_##FUNC(__VA_ARGS__); \
        } while (0);                                            \
        RC;                                                     \
 })

The assembly looks good from both correctness and performance points of view.
union security_list_options can be split into lsm_hook_names.h too
to avoid __diag_ignore. Is that what you have in mind?
I don't see how one can improve the call_int_hook() macro without
a full refactoring of linux/lsm_hooks.h.
imo the static_key doesn't have to be there in the first set. We can add this
optimization later.
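To make the gating semantics of the diff concrete, here is a small userspace
model (plain C; all names are invented for illustration, this is not kernel
code): the BPF hook only runs, and can only change the verdict, while every
previously registered hook has left the default return code in place.

```c
#include <stdbool.h>

/* Userspace model of the patched call_int_hook() above. */
#define MAX_HOOKS 4

static int (*hooks[MAX_HOOKS])(void);
static int nr_hooks;

static bool bpf_hook_enabled;	/* stands in for static_branch_unlikely() */
static int (*bpf_hook)(void);	/* stands in for bpf_lsm_call_##FUNC()    */

/* Sample hooks for demonstration. */
static int hook_allow(void) { return 0; }
static int hook_deny(void)  { return -1; }
static int bpf_deny(void)   { return -13; }

static int call_int_hook_model(int default_rc)
{
	int rc = default_rc;

	for (int i = 0; i < nr_hooks; i++) {
		rc = hooks[i]();
		if (rc != 0)
			break;	/* first denial wins, as in call_int_hook() */
	}
	/* The BPF hook runs only while the verdict equals the default,
	 * so it may add a denial but never overturn another LSM's one. */
	if (rc == default_rc && bpf_hook_enabled)
		rc = bpf_hook();
	return rc;
}
```

This mirrors the `RC == IRC` condition in the diff: with the gate disabled the
macro behaves exactly as today, and with it enabled the BPF program sees only
requests that all other LSMs allowed.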

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-12  2:45                         ` Alexei Starovoitov
@ 2020-02-12 13:27                           ` Daniel Borkmann
  2020-02-12 16:04                             ` KP Singh
  2020-02-12 15:52                           ` Casey Schaufler
  1 sibling, 1 reply; 44+ messages in thread
From: Daniel Borkmann @ 2020-02-12 13:27 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Jann Horn, KP Singh, kernel list, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Matthew Garrett,
	Christian Brauner, Mickaël Salaün, Florent Revest,
	Brendan Jackman, Serge E. Hallyn, Mauro Carvalho Chehab,
	David S. Miller, Greg Kroah-Hartman, Kernel Team

On 2/12/20 3:45 AM, Alexei Starovoitov wrote:
> On Wed, Feb 12, 2020 at 01:09:07AM +0100, Daniel Borkmann wrote:
>>
>> Another approach could be to have a special nop inside call_int_hook()
>> macro which would then get patched to avoid these situations. Somewhat
>> similar like static keys where it could be defined anywhere in text but
>> with updating of call_int_hook()'s RC for the verdict.
> 
> Sounds nice in theory. I couldn't quite picture how that would look
> in the code, so I hacked:
> diff --git a/security/security.c b/security/security.c
> index 565bc9b67276..ce4bc1e5e26c 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -28,6 +28,7 @@
>   #include <linux/string.h>
>   #include <linux/msg.h>
>   #include <net/flow.h>
> +#include <linux/jump_label.h>
> 
>   #define MAX_LSM_EVM_XATTR      2
> 
> @@ -678,12 +679,26 @@ static void __init lsm_early_task(struct task_struct *task)
>    *     This is a hook that returns a value.
>    */
> 
> +#define LSM_HOOK_NAME(FUNC) \
> +       DEFINE_STATIC_KEY_FALSE(bpf_lsm_key_##FUNC);
> +#include <linux/lsm_hook_names.h>
> +#undef LSM_HOOK_NAME
> +__diag_push();
> +__diag_ignore(GCC, 8, "-Wstrict-prototypes", "");
> +#define LSM_HOOK_NAME(FUNC) \
> +       int bpf_lsm_call_##FUNC() {return 0;}
> +#include <linux/lsm_hook_names.h>
> +#undef LSM_HOOK_NAME
> +__diag_pop();
> +
>   #define call_void_hook(FUNC, ...)                              \
>          do {                                                    \
>                  struct security_hook_list *P;                   \
>                                                                  \
>                  hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
>                          P->hook.FUNC(__VA_ARGS__);              \
> +               if (static_branch_unlikely(&bpf_lsm_key_##FUNC)) \
> +                      (void)bpf_lsm_call_##FUNC(__VA_ARGS__); \
>          } while (0)
> 
>   #define call_int_hook(FUNC, IRC, ...) ({                       \
> @@ -696,6 +711,8 @@ static void __init lsm_early_task(struct task_struct *task)
>                          if (RC != 0)                            \
>                                  break;                          \
>                  }                                               \
> +               if (RC == IRC && static_branch_unlikely(&bpf_lsm_key_##FUNC)) \
> +                      RC = bpf_lsm_call_##FUNC(__VA_ARGS__); \

Nit: the `RC == IRC` test could be moved behind the static_branch_unlikely() so
that it would be bypassed when not enabled.

>          } while (0);                                            \
>          RC;                                                     \
>   })
> 
> The assembly looks good from correctness and performance points.
> union security_list_options can be split into lsm_hook_names.h too
> to avoid __diag_ignore. Is that what you have in mind?
> I don't see how one can improve call_int_hook() macro without
> full refactoring of linux/lsm_hooks.h
> imo static_key doesn't have to be there in the first set. We can add this
> optimization later.

Yes, the above diff looks good, and then we'd dynamically attach the program
at bpf_lsm_call_##FUNC()'s fexit hook for a direct jump, so all the security_blah()
internals could stay as-is, which might then also address Jann's concerns wrt
concrete annotation as well as potential locking changes inside security_blah().
Agree that patching out via static key could be optional, but since you were
talking about avoiding indirect jumps...

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-12  2:45                         ` Alexei Starovoitov
  2020-02-12 13:27                           ` Daniel Borkmann
@ 2020-02-12 15:52                           ` Casey Schaufler
  2020-02-12 16:26                             ` KP Singh
  1 sibling, 1 reply; 44+ messages in thread
From: Casey Schaufler @ 2020-02-12 15:52 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: Jann Horn, KP Singh, kernel list, bpf, linux-security-module,
	Brendan Jackman, Florent Revest, Thomas Garnier,
	Alexei Starovoitov, James Morris, Kees Cook, Thomas Garnier,
	Michael Halcrow, Paul Turner, Brendan Gregg, Matthew Garrett,
	Christian Brauner, Mickaël Salaün, Florent Revest,
	Brendan Jackman, Serge E. Hallyn, Mauro Carvalho Chehab,
	David S. Miller, Greg Kroah-Hartman, Kernel Team,
	Casey Schaufler

On 2/11/2020 6:45 PM, Alexei Starovoitov wrote:
> On Wed, Feb 12, 2020 at 01:09:07AM +0100, Daniel Borkmann wrote:
>> Another approach could be to have a special nop inside call_int_hook()
>> macro which would then get patched to avoid these situations. Somewhat
>> similar like static keys where it could be defined anywhere in text but
>> with updating of call_int_hook()'s RC for the verdict.

Tell me again why you can't register your BPF hooks like all the
other security modules do? You keep reintroducing BPF as a special
case, and I don't see why.

> Sounds nice in theory. I couldn't quite picture how that would look
> in the code, so I hacked:
> diff --git a/security/security.c b/security/security.c
> index 565bc9b67276..ce4bc1e5e26c 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -28,6 +28,7 @@
>  #include <linux/string.h>
>  #include <linux/msg.h>
>  #include <net/flow.h>
> +#include <linux/jump_label.h>
>
>  #define MAX_LSM_EVM_XATTR      2
>
> @@ -678,12 +679,26 @@ static void __init lsm_early_task(struct task_struct *task)
>   *     This is a hook that returns a value.
>   */
>
> +#define LSM_HOOK_NAME(FUNC) \
> +       DEFINE_STATIC_KEY_FALSE(bpf_lsm_key_##FUNC);
> +#include <linux/lsm_hook_names.h>
> +#undef LSM_HOOK_NAME
> +__diag_push();
> +__diag_ignore(GCC, 8, "-Wstrict-prototypes", "");
> +#define LSM_HOOK_NAME(FUNC) \
> +       int bpf_lsm_call_##FUNC() {return 0;}
> +#include <linux/lsm_hook_names.h>
> +#undef LSM_HOOK_NAME
> +__diag_pop();
> +
>  #define call_void_hook(FUNC, ...)                              \
>         do {                                                    \
>                 struct security_hook_list *P;                   \
>                                                                 \
>                 hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
>                         P->hook.FUNC(__VA_ARGS__);              \
> +               if (static_branch_unlikely(&bpf_lsm_key_##FUNC)) \
> +                      (void)bpf_lsm_call_##FUNC(__VA_ARGS__); \
>         } while (0)
>
>  #define call_int_hook(FUNC, IRC, ...) ({                       \
> @@ -696,6 +711,8 @@ static void __init lsm_early_task(struct task_struct *task)
>                         if (RC != 0)                            \
>                                 break;                          \
>                 }                                               \
> +               if (RC == IRC && static_branch_unlikely(&bpf_lsm_key_##FUNC)) \
> +                      RC = bpf_lsm_call_##FUNC(__VA_ARGS__); \
>         } while (0);                                            \
>         RC;                                                     \
>  })
>
> The assembly looks good from correctness and performance points.
> union security_list_options can be split into lsm_hook_names.h too
> to avoid __diag_ignore. Is that what you have in mind?
> I don't see how one can improve call_int_hook() macro without
> full refactoring of linux/lsm_hooks.h
> imo static_key doesn't have to be there in the first set. We can add this
> optimization later.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-12 13:27                           ` Daniel Borkmann
@ 2020-02-12 16:04                             ` KP Singh
  0 siblings, 0 replies; 44+ messages in thread
From: KP Singh @ 2020-02-12 16:04 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: Alexei Starovoitov, Jann Horn, KP Singh, kernel list, bpf,
	linux-security-module, Brendan Jackman, Florent Revest,
	Thomas Garnier, Alexei Starovoitov, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Kernel Team

On 12-Feb 14:27, Daniel Borkmann wrote:
> On 2/12/20 3:45 AM, Alexei Starovoitov wrote:
> > On Wed, Feb 12, 2020 at 01:09:07AM +0100, Daniel Borkmann wrote:
> > > 
> > > Another approach could be to have a special nop inside call_int_hook()
> > > macro which would then get patched to avoid these situations. Somewhat
> > > similar like static keys where it could be defined anywhere in text but
> > > with updating of call_int_hook()'s RC for the verdict.
> > 
> > Sounds nice in theory. I couldn't quite picture how that would look
> > in the code, so I hacked:
> > diff --git a/security/security.c b/security/security.c
> > index 565bc9b67276..ce4bc1e5e26c 100644
> > --- a/security/security.c
> > +++ b/security/security.c
> > @@ -28,6 +28,7 @@
> >   #include <linux/string.h>
> >   #include <linux/msg.h>
> >   #include <net/flow.h>
> > +#include <linux/jump_label.h>
> > 
> >   #define MAX_LSM_EVM_XATTR      2
> > 
> > @@ -678,12 +679,26 @@ static void __init lsm_early_task(struct task_struct *task)
> >    *     This is a hook that returns a value.
> >    */
> > 
> > +#define LSM_HOOK_NAME(FUNC) \
> > +       DEFINE_STATIC_KEY_FALSE(bpf_lsm_key_##FUNC);
> > +#include <linux/lsm_hook_names.h>
> > +#undef LSM_HOOK_NAME
> > +__diag_push();
> > +__diag_ignore(GCC, 8, "-Wstrict-prototypes", "");
> > +#define LSM_HOOK_NAME(FUNC) \
> > +       int bpf_lsm_call_##FUNC() {return 0;}
> > +#include <linux/lsm_hook_names.h>
> > +#undef LSM_HOOK_NAME
> > +__diag_pop();
> > +
> >   #define call_void_hook(FUNC, ...)                              \
> >          do {                                                    \
> >                  struct security_hook_list *P;                   \
> >                                                                  \
> >                  hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \
> >                          P->hook.FUNC(__VA_ARGS__);              \
> > +               if (static_branch_unlikely(&bpf_lsm_key_##FUNC)) \
> > +                      (void)bpf_lsm_call_##FUNC(__VA_ARGS__); \
> >          } while (0)
> > 
> >   #define call_int_hook(FUNC, IRC, ...) ({                       \
> > @@ -696,6 +711,8 @@ static void __init lsm_early_task(struct task_struct *task)
> >                          if (RC != 0)                            \
> >                                  break;                          \
> >                  }                                               \
> > +               if (RC == IRC && static_branch_unlikely(&bpf_lsm_key_##FUNC)) \
> > +                      RC = bpf_lsm_call_##FUNC(__VA_ARGS__); \
> 
> Nit: the `RC == IRC` test could be moved behind the static_branch_unlikely() so
> that it would be bypassed when not enabled.
> 
> >          } while (0);                                            \
> >          RC;                                                     \
> >   })
> > 
> > The assembly looks good from correctness and performance points.
> > union security_list_options can be split into lsm_hook_names.h too
> > to avoid __diag_ignore. Is that what you have in mind?
> > I don't see how one can improve call_int_hook() macro without
> > full refactoring of linux/lsm_hooks.h
> > imo static_key doesn't have to be there in the first set. We can add this
> > optimization later.
> 
> Yes, like the above diff looks good, and then we'd dynamically attach the program
> at bpf_lsm_call_##FUNC()'s fexit hook for a direct jump, so all the security_blah()
> internals could stay as-is which then might also address Jann's concerns wrt
> concrete annotation as well as potential locking changes inside security_blah().
> Agree that patching out via static key could be optional but since you were talking
> about avoiding indirect jumps..

I like this approach as well. Will give it a go and update the
patches. Thanks a lot for your inputs!

- KP

> 
> Thanks,
> Daniel

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-12 15:52                           ` Casey Schaufler
@ 2020-02-12 16:26                             ` KP Singh
  2020-02-12 18:59                               ` Casey Schaufler
  0 siblings, 1 reply; 44+ messages in thread
From: KP Singh @ 2020-02-12 16:26 UTC (permalink / raw)
  To: Casey Schaufler
  Cc: Alexei Starovoitov, Daniel Borkmann, Jann Horn, KP Singh,
	kernel list, bpf, linux-security-module, Brendan Jackman,
	Florent Revest, Thomas Garnier, Alexei Starovoitov, James Morris,
	Kees Cook, Thomas Garnier, Michael Halcrow, Paul Turner,
	Brendan Gregg, Matthew Garrett, Christian Brauner,
	Mickaël Salaün, Florent Revest, Brendan Jackman,
	Serge E. Hallyn, Mauro Carvalho Chehab, David S. Miller,
	Greg Kroah-Hartman, Kernel Team

On 12-Feb 07:52, Casey Schaufler wrote:
> On 2/11/2020 6:45 PM, Alexei Starovoitov wrote:
> > On Wed, Feb 12, 2020 at 01:09:07AM +0100, Daniel Borkmann wrote:
> >> Another approach could be to have a special nop inside call_int_hook()
> >> macro which would then get patched to avoid these situations. Somewhat
> >> similar like static keys where it could be defined anywhere in text but
> >> with updating of call_int_hook()'s RC for the verdict.
> 
> Tell me again why you can't register your BPF hooks like all the
> other security modules do? You keep reintroducing BPF as a special
> case, and I don't see why.

I think we tried to answer this in the discussion we had:

 https://lore.kernel.org/bpf/20200123152440.28956-1-kpsingh@chromium.org/T/#meb1eea982e63be0806f9bba58e91160871803752

BPF should not allocate a wrapper (statically registered at init) for
each LSM hook and run the programs from within it, as this implies
adding overhead across the board for every hook even if it's never
used (i.e. no BPF program is attached to the hook).

We can, with the suggestions discussed here, avoid adding unnecessary
overhead for unused hooks. And, as Alexei mentioned, adding overhead
when it's not really needed is especially bad for frequently called
LSM hooks like sock_sendmsg.

The other LSMs do not provide dynamic / mutable hooks, so it makes
sense for them to register the hooks once at load time.

- KP

> > Sounds nice in theory. I couldn't quite picture how that would look
> > in the code, so I hacked:
> > diff --git a/security/security.c b/security/security.c
> > index 565bc9b67276..ce4bc1e5e26c 100644
> > --- a/security/security.c

[...]

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM]
  2020-02-12 16:26                             ` KP Singh
@ 2020-02-12 18:59                               ` Casey Schaufler
  0 siblings, 0 replies; 44+ messages in thread
From: Casey Schaufler @ 2020-02-12 18:59 UTC (permalink / raw)
  To: KP Singh
  Cc: Alexei Starovoitov, Daniel Borkmann, Jann Horn, kernel list, bpf,
	linux-security-module, Brendan Jackman, Florent Revest,
	Thomas Garnier, Alexei Starovoitov, James Morris, Kees Cook,
	Thomas Garnier, Michael Halcrow, Paul Turner, Brendan Gregg,
	Matthew Garrett, Christian Brauner, Mickaël Salaün,
	Florent Revest, Brendan Jackman, Serge E. Hallyn,
	Mauro Carvalho Chehab, David S. Miller, Greg Kroah-Hartman,
	Kernel Team, Casey Schaufler

On 2/12/2020 8:26 AM, KP Singh wrote:
> On 12-Feb 07:52, Casey Schaufler wrote:
>> On 2/11/2020 6:45 PM, Alexei Starovoitov wrote:
>>> On Wed, Feb 12, 2020 at 01:09:07AM +0100, Daniel Borkmann wrote:
>>>> Another approach could be to have a special nop inside call_int_hook()
>>>> macro which would then get patched to avoid these situations. Somewhat
>>>> similar like static keys where it could be defined anywhere in text but
>>>> with updating of call_int_hook()'s RC for the verdict.
>> Tell me again why you can't register your BPF hooks like all the
>> other security modules do? You keep reintroducing BPF as a special
>> case, and I don't see why.
> I think we tried to answer this in the discussion we had:
>
>  https://lore.kernel.org/bpf/20200123152440.28956-1-kpsingh@chromium.org/T/#meb1eea982e63be0806f9bba58e91160871803752

I understand your arguments, but remain unconvinced.

> BPF should not allocate a wrapper (statically registered at init) for
> each LSM hook and run the programs from within it, as this implies
> adding overhead across the board for every hook even if it's never
> used (i.e. no BPF program is attached to the hook).

SELinux would run faster if it didn't have hooks installed where
there is no policy loaded that would ever fail for them. That's
not the infrastructure's problem.

> We can, with the suggestions discussed here, avoid adding unnecessary
> overhead for unused hooks. And, as Alexei mentioned, adding overhead
> when it's not really needed is especially bad for frequently called
> LSM hooks like sock_sendmsg.

You're adding overhead for systems that have BPF built, but not used.

> The other LSMs do not provide dynamic / mutable hooks, so it makes
> sense for them to register the hooks once at load time.

As mentioned above, the hooks may not be mutable, but policy
may make them pointless. That is the security module's problem,
not the infrastructure's.

>
> - KP
>
>>> Sounds nice in theory. I couldn't quite picture how that would look
>>> in the code, so I hacked:
>>> diff --git a/security/security.c b/security/security.c
>>> index 565bc9b67276..ce4bc1e5e26c 100644
>>> --- a/security/security.c
> [...]


^ permalink raw reply	[flat|nested] 44+ messages in thread

end of thread, other threads:[~2020-02-12 18:59 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-23 15:24 [PATCH bpf-next v3 00/10] MAC and Audit policy using eBPF (KRSI) KP Singh
2020-01-23 15:24 ` [PATCH bpf-next v3 01/10] bpf: btf: Add btf_type_by_name_kind KP Singh
2020-01-23 20:06   ` Andrii Nakryiko
2020-01-24 14:12     ` KP Singh
2020-01-23 15:24 ` [PATCH bpf-next v3 02/10] bpf: lsm: Add a skeleton and config options KP Singh
2020-02-10 23:52   ` Alexei Starovoitov
2020-02-11 12:45     ` KP Singh
2020-01-23 15:24 ` [PATCH bpf-next v3 03/10] bpf: lsm: Introduce types for eBPF based LSM KP Singh
2020-02-10 23:58   ` Alexei Starovoitov
2020-02-11 12:44     ` KP Singh
2020-01-23 15:24 ` [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM KP Singh
2020-01-23 17:03   ` Casey Schaufler
2020-01-23 17:59     ` KP Singh
2020-01-23 19:09       ` Casey Schaufler
2020-01-23 22:24         ` KP Singh
2020-01-23 23:50           ` Casey Schaufler
2020-01-24  1:25             ` KP Singh
2020-01-24 21:55               ` James Morris
2020-02-11  3:12   ` Alexei Starovoitov
2020-02-11 12:43     ` KP Singh
2020-02-11 17:58       ` Alexei Starovoitov
2020-02-11 18:44         ` BPF LSM and fexit [was: [PATCH bpf-next v3 04/10] bpf: lsm: Add mutable hooks list for the BPF LSM] Jann Horn
2020-02-11 19:09           ` Alexei Starovoitov
2020-02-11 19:36             ` Jann Horn
2020-02-11 20:10               ` Alexei Starovoitov
2020-02-11 20:33                 ` Jann Horn
2020-02-11 21:32                   ` Jann Horn
2020-02-11 21:38                   ` Alexei Starovoitov
2020-02-11 23:26                     ` Alexei Starovoitov
2020-02-12  0:09                       ` Daniel Borkmann
2020-02-12  2:45                         ` Alexei Starovoitov
2020-02-12 13:27                           ` Daniel Borkmann
2020-02-12 16:04                             ` KP Singh
2020-02-12 15:52                           ` Casey Schaufler
2020-02-12 16:26                             ` KP Singh
2020-02-12 18:59                               ` Casey Schaufler
2020-01-23 15:24 ` [PATCH bpf-next v3 05/10] bpf: lsm: BTF API for LSM hooks KP Singh
2020-01-23 15:24 ` [PATCH bpf-next v3 06/10] bpf: lsm: Implement attach, detach and execution KP Singh
2020-01-23 15:24 ` [PATCH bpf-next v3 07/10] bpf: lsm: Make the allocated callback RO+X KP Singh
2020-01-23 15:24 ` [PATCH bpf-next v3 08/10] tools/libbpf: Add support for BPF_PROG_TYPE_LSM KP Singh
2020-01-23 18:00   ` Andrii Nakryiko
2020-01-24 14:16     ` KP Singh
2020-01-23 15:24 ` [PATCH bpf-next v3 09/10] bpf: lsm: Add selftests " KP Singh
2020-01-23 15:24 ` [PATCH bpf-next v3 10/10] bpf: lsm: Add Documentation KP Singh
