* [PATCH v5 0/3] Support storing struct task_struct objects as kptrs
@ 2022-10-14 21:21 David Vernet
  2022-10-14 21:21 ` [PATCH v5 1/3] bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs David Vernet
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: David Vernet @ 2022-10-14 21:21 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj,
	memxor

Now that BPF supports adding new kernel functions with kfuncs, and storing
kernel objects in maps with kptrs, we can add a set of kfuncs which allow
struct task_struct objects to be stored in maps as referenced kptrs.

The possible use cases for doing this are plentiful.  During tracing,
for example, it would be useful to be able to collect some tasks that
performed a certain operation, and then periodically summarize who they
are, which cgroup they're in, how much CPU time they've utilized, etc.
Doing this now would require storing the task's pids along with some
relevant data to be exported to user space, and later associating the
pids to tasks in other event handlers where the data is recorded.
Another useful by-product of this is that it allows a BPF program to
pin a task, and thereby also to pin e.g. that task's local storage.

In order to support this, we'll need to expand KF_TRUSTED_ARGS to
support receiving trusted, non-refcounted pointers. It currently
supports only PTR_TO_CTX pointers and refcounted pointers. In terms of
implementation, this means that btf_check_func_arg_match() must only
require a refcount for pointers that carry at least one type
modifier, such as PTR_UNTRUSTED. PTR_UNTRUSTED does not cover all
of the possible pointers we need to watch out for, though. For example,
a pointer obtained from walking a struct is considered "trusted" (or at
least, not PTR_UNTRUSTED). To account for this and enable us to expand
KF_TRUSTED_ARGS, this patch set also introduces a new PTR_NESTED type
flag modifier which records if a pointer was obtained from walking a
struct.

This patch set adds this new PTR_NESTED type flag, expands
KF_TRUSTED_ARGS accordingly, adds the new set of kfuncs mentioned above,
and then finally adds a new selftest suite to validate all of this new
behavior.
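
As a sketch of the intended usage pattern (a hypothetical program, not
part of this series -- the map layout, map name "tasks", and program name
"stash_task" are illustrative; the kfunc and map declarations mirror the
selftests added in patch 3):

```c
/* Hypothetical sketch: stash a referenced task kptr in a map from a
 * tracepoint, for later summarization from another program or handler.
 */
struct task_map_value {
	struct task_struct __kptr_ref *task;
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__type(key, int);
	__type(value, struct task_map_value);
	__uint(max_entries, 1024);
} tasks SEC(".maps");

struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
void bpf_task_release(struct task_struct *p) __ksym;

SEC("tp_btf/task_newtask")
int BPF_PROG(stash_task, struct task_struct *p, u64 clone_flags)
{
	struct task_map_value local = {}, *v;
	struct task_struct *acquired, *old;
	int pid = p->pid;

	if (bpf_map_update_elem(&tasks, &pid, &local, BPF_NOEXIST))
		return 0;

	v = bpf_map_lookup_elem(&tasks, &pid);
	if (!v)
		return 0;

	/* p is a trusted tracepoint argument, so bpf_task_acquire()
	 * (KF_ACQUIRE | KF_TRUSTED_ARGS) accepts it after patch 1.
	 */
	acquired = bpf_task_acquire(p);
	if (!acquired)
		return 0;

	/* Move the acquired reference into the map. */
	old = bpf_kptr_xchg(&v->task, acquired);
	if (old)
		bpf_task_release(old);
	return 0;
}
```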

David Vernet (3):
  bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs
  bpf: Add kfuncs for storing struct task_struct * as a kptr
  bpf/selftests: Add selftests for new task kfuncs

 include/linux/bpf.h                           |   6 +
 kernel/bpf/btf.c                              |  11 +-
 kernel/bpf/helpers.c                          |  86 ++++-
 kernel/bpf/verifier.c                         |  12 +-
 tools/testing/selftests/bpf/DENYLIST.s390x    |   1 +
 .../selftests/bpf/prog_tests/task_kfunc.c     | 160 +++++++++
 .../selftests/bpf/progs/task_kfunc_common.h   |  83 +++++
 .../selftests/bpf/progs/task_kfunc_failure.c  | 315 ++++++++++++++++++
 .../selftests/bpf/progs/task_kfunc_success.c  | 132 ++++++++
 tools/testing/selftests/bpf/verifier/calls.c  |   4 +-
 10 files changed, 801 insertions(+), 9 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/task_kfunc.c
 create mode 100644 tools/testing/selftests/bpf/progs/task_kfunc_common.h
 create mode 100644 tools/testing/selftests/bpf/progs/task_kfunc_failure.c
 create mode 100644 tools/testing/selftests/bpf/progs/task_kfunc_success.c

-- 
2.38.0


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v5 1/3] bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs
  2022-10-14 21:21 [PATCH v5 0/3] Support storing struct task_struct objects as kptrs David Vernet
@ 2022-10-14 21:21 ` David Vernet
  2022-10-18  1:32   ` Kumar Kartikeya Dwivedi
  2022-10-14 21:21 ` [PATCH v5 2/3] bpf: Add kfuncs for storing struct task_struct * as a kptr David Vernet
  2022-10-14 21:21 ` [PATCH v5 3/3] bpf/selftests: Add selftests for new task kfuncs David Vernet
  2 siblings, 1 reply; 14+ messages in thread
From: David Vernet @ 2022-10-14 21:21 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj,
	memxor

Kfuncs currently support specifying the KF_TRUSTED_ARGS flag to signal
to the verifier that it should enforce that a BPF program passes it a
"safe", trusted pointer. Currently, "safe" means that the pointer is
either PTR_TO_CTX, or is refcounted. There may be cases, however, where
the kernel passes a BPF program a safe / trusted pointer to an object
that the BPF program wishes to use as a kptr, but because the object
does not yet have a ref_obj_id from the perspective of the verifier, the
program would be unable to pass it to a KF_ACQUIRE | KF_TRUSTED_ARGS
kfunc.

The solution is to expand the set of pointers that are considered
trusted according to KF_TRUSTED_ARGS, so that programs can invoke kfuncs
with these pointers without getting rejected by the verifier.

There is already a PTR_UNTRUSTED flag that is set in some scenarios,
such as when a BPF program reads a kptr directly from a map
without performing a bpf_kptr_xchg() call. These pointers of course can
and should be rejected by the verifier. Unfortunately, however,
PTR_UNTRUSTED does not cover all the cases for safety that need to
be addressed to adequately protect kfuncs. Specifically, pointers
obtained by a BPF program "walking" a struct are _not_ considered
PTR_UNTRUSTED according to BPF. For example, say that we were to add a
kfunc called bpf_task_acquire(), with KF_ACQUIRE | KF_TRUSTED_ARGS, to
acquire a struct task_struct *. If we only used PTR_UNTRUSTED to signal
that a task was unsafe to pass to a kfunc, the verifier would mistakenly
allow the following unsafe BPF program to be loaded:

SEC("tp_btf/task_newtask")
int BPF_PROG(unsafe_acquire_task,
             struct task_struct *task,
             u64 clone_flags)
{
        struct task_struct *acquired, *nested;

        nested = task->last_wakee;

        /* Would not be rejected by the verifier. */
        acquired = bpf_task_acquire(nested);
        if (!acquired)
                return 0;

        bpf_task_release(acquired);
        return 0;
}

To address this, this patch defines a new type flag called PTR_NESTED
which tracks whether a PTR_TO_BTF_ID pointer was retrieved from walking
a struct. A pointer passed directly from the kernel begins with
(PTR_NESTED & type) == 0, meaning of course that it is not nested. Any
pointer received from walking that object, however, would inherit that
flag and become a nested pointer.

With that flag, this patch also updates btf_check_func_arg_match() to
only flag a PTR_TO_BTF_ID object as requiring a refcount if it has any
type modifiers (which of course includes both PTR_UNTRUSTED and
PTR_NESTED). Otherwise, the pointer passes this check and continues
onto the others in btf_check_func_arg_match().

A subsequent patch will add kfuncs for storing a struct task_struct *
as a kptr, and a final patch will validate this behavior by ensuring
that the verifier rejects a kfunc invocation with a nested pointer.

Signed-off-by: David Vernet <void@manifault.com>
---
 include/linux/bpf.h                          |  6 ++++++
 kernel/bpf/btf.c                             | 11 ++++++++++-
 kernel/bpf/verifier.c                        | 12 +++++++++++-
 tools/testing/selftests/bpf/verifier/calls.c |  4 ++--
 4 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 9e7d46d16032..b624024edb4e 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -457,6 +457,12 @@ enum bpf_type_flag {
 	/* Size is known at compile time. */
 	MEM_FIXED_SIZE		= BIT(10 + BPF_BASE_TYPE_BITS),
 
+	/* PTR was obtained from walking a struct. This is used with
+	 * PTR_TO_BTF_ID to determine whether the pointer is safe to pass to a
+	 * kfunc with KF_TRUSTED_ARGS.
+	 */
+	PTR_NESTED		= BIT(11 + BPF_BASE_TYPE_BITS),
+
 	__BPF_TYPE_FLAG_MAX,
 	__BPF_TYPE_LAST_FLAG	= __BPF_TYPE_FLAG_MAX - 1,
 };
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index eba603cec2c5..3d7bad11b10b 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6333,8 +6333,17 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
 		/* Check if argument must be a referenced pointer, args + i has
 		 * been verified to be a pointer (after skipping modifiers).
 		 * PTR_TO_CTX is ok without having non-zero ref_obj_id.
+		 *
+		 * All object pointers must be refcounted, other than:
+		 * - PTR_TO_CTX
+		 * - Trusted pointers (i.e. pointers with no type modifiers)
 		 */
-		if (is_kfunc && trusted_args && (obj_ptr && reg->type != PTR_TO_CTX) && !reg->ref_obj_id) {
+		if (is_kfunc &&
+		    trusted_args &&
+		    obj_ptr &&
+		    base_type(reg->type) != PTR_TO_CTX &&
+		    type_flag(reg->type) &&
+		    !reg->ref_obj_id) {
 			bpf_log(log, "R%d must be referenced\n", regno);
 			return -EINVAL;
 		}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6f6d2d511c06..d16a08ca507b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -581,6 +581,8 @@ static const char *reg_type_str(struct bpf_verifier_env *env,
 		strncpy(prefix, "user_", 32);
 	if (type & MEM_PERCPU)
 		strncpy(prefix, "percpu_", 32);
+	if (type & PTR_NESTED)
+		strncpy(prefix, "nested_", 32);
 	if (type & PTR_UNTRUSTED)
 		strncpy(prefix, "untrusted_", 32);
 
@@ -4558,6 +4560,9 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 	if (type_flag(reg->type) & PTR_UNTRUSTED)
 		flag |= PTR_UNTRUSTED;
 
+	/* All pointers obtained by walking a struct are nested. */
+	flag |= PTR_NESTED;
+
 	if (atype == BPF_READ && value_regno >= 0)
 		mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);
 
@@ -5694,7 +5699,12 @@ static const struct bpf_reg_types scalar_types = { .types = { SCALAR_VALUE } };
 static const struct bpf_reg_types context_types = { .types = { PTR_TO_CTX } };
 static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM | MEM_ALLOC } };
 static const struct bpf_reg_types const_map_ptr_types = { .types = { CONST_PTR_TO_MAP } };
-static const struct bpf_reg_types btf_ptr_types = { .types = { PTR_TO_BTF_ID } };
+static const struct bpf_reg_types btf_ptr_types = {
+	.types = {
+		PTR_TO_BTF_ID,
+		PTR_TO_BTF_ID | PTR_NESTED
+	},
+};
 static const struct bpf_reg_types spin_lock_types = { .types = { PTR_TO_MAP_VALUE } };
 static const struct bpf_reg_types percpu_btf_ptr_types = { .types = { PTR_TO_BTF_ID | MEM_PERCPU } };
 static const struct bpf_reg_types func_ptr_types = { .types = { PTR_TO_FUNC } };
diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
index e1a937277b54..496c29b1a298 100644
--- a/tools/testing/selftests/bpf/verifier/calls.c
+++ b/tools/testing/selftests/bpf/verifier/calls.c
@@ -181,7 +181,7 @@
 	},
 	.result_unpriv = REJECT,
 	.result = REJECT,
-	.errstr = "negative offset ptr_ ptr R1 off=-4 disallowed",
+	.errstr = "negative offset nested_ptr_ ptr R1 off=-4 disallowed",
 },
 {
 	"calls: invalid kfunc call: PTR_TO_BTF_ID with variable offset",
@@ -243,7 +243,7 @@
 	},
 	.result_unpriv = REJECT,
 	.result = REJECT,
-	.errstr = "R1 must be referenced",
+	.errstr = "arg#0 pointer type STRUCT prog_test_ref_kfunc must point to scalar",
 },
 {
 	"calls: valid kfunc call: referenced arg needs refcounted PTR_TO_BTF_ID",
-- 
2.38.0



* [PATCH v5 2/3] bpf: Add kfuncs for storing struct task_struct * as a kptr
  2022-10-14 21:21 [PATCH v5 0/3] Support storing struct task_struct objects as kptrs David Vernet
  2022-10-14 21:21 ` [PATCH v5 1/3] bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs David Vernet
@ 2022-10-14 21:21 ` David Vernet
  2022-10-18  1:55   ` Kumar Kartikeya Dwivedi
  2022-10-14 21:21 ` [PATCH v5 3/3] bpf/selftests: Add selftests for new task kfuncs David Vernet
  2 siblings, 1 reply; 14+ messages in thread
From: David Vernet @ 2022-10-14 21:21 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj,
	memxor

Now that BPF supports adding new kernel functions with kfuncs, and
storing kernel objects in maps with kptrs, we can add a set of kfuncs
which allow struct task_struct objects to be stored in maps as
referenced kptrs. The possible use cases for doing this are plentiful.
During tracing, for example, it would be useful to be able to collect
some tasks that performed a certain operation, and then periodically
summarize who they are, which cgroup they're in, how much CPU time
they've utilized, etc.

In order to enable this, this patch adds three new kfuncs:

struct task_struct *bpf_task_acquire(struct task_struct *p);
struct task_struct *bpf_task_kptr_get(struct task_struct **pp);
void bpf_task_release(struct task_struct *p);

A follow-on patch will add selftests validating these kfuncs.

Signed-off-by: David Vernet <void@manifault.com>
---
 kernel/bpf/helpers.c | 86 +++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 81 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index a6b04faed282..9d0307969977 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1700,20 +1700,96 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 	}
 }
 
-BTF_SET8_START(tracing_btf_ids)
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+		  "Global functions as their definitions will be in vmlinux BTF");
+
+/**
+ * bpf_task_acquire - Acquire a reference to a task. A task acquired by this
+ * kfunc that is not stored in a map as a kptr must be released by calling
+ * bpf_task_release().
+ * @p: The task on which a reference is being acquired.
+ */
+__used noinline
+struct task_struct *bpf_task_acquire(struct task_struct *p)
+{
+	if (!p)
+		return NULL;
+
+	refcount_inc(&p->rcu_users);
+	return p;
+}
+
+/**
+ * bpf_task_kptr_get - Acquire a reference on a struct task_struct kptr. A task
+ * kptr acquired by this kfunc that is not subsequently stored in a map must
+ * be released by calling bpf_task_release().
+ * @pp: A pointer to a task kptr on which a reference is being acquired.
+ */
+__used noinline
+struct task_struct *bpf_task_kptr_get(struct task_struct **pp)
+{
+	struct task_struct *p;
+
+	rcu_read_lock();
+	p = READ_ONCE(*pp);
+	if (p && !refcount_inc_not_zero(&p->rcu_users))
+		p = NULL;
+	rcu_read_unlock();
+
+	return p;
+}
+
+/**
+ * bpf_task_release - Release the reference acquired on a struct task_struct *.
+ * If this kfunc is invoked in an RCU read region, the task_struct is
+ * guaranteed to not be freed until the current grace period has ended, even if
+ * its refcount drops to 0.
+ * @p: The task on which a reference is being released.
+ */
+__used noinline void bpf_task_release(struct task_struct *p)
+{
+	if (!p)
+		return;
+
+	put_task_struct_rcu_user(p);
+}
+
+__diag_pop();
+
+BTF_SET8_START(generic_kfunc_btf_ids)
 #ifdef CONFIG_KEXEC_CORE
 BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE)
 #endif
-BTF_SET8_END(tracing_btf_ids)
+BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RET_NULL | KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_task_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE | KF_TRUSTED_ARGS)
+BTF_SET8_END(generic_kfunc_btf_ids)
 
-static const struct btf_kfunc_id_set tracing_kfunc_set = {
+static const struct btf_kfunc_id_set generic_kfunc_set = {
 	.owner = THIS_MODULE,
-	.set   = &tracing_btf_ids,
+	.set   = &generic_kfunc_btf_ids,
 };
 
+BTF_ID_LIST(generic_kfunc_dtor_ids)
+BTF_ID(struct, task_struct)
+BTF_ID(func, bpf_task_release)
+
 static int __init kfunc_init(void)
 {
-	return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &tracing_kfunc_set);
+	int ret;
+	const struct btf_id_dtor_kfunc generic_kfunc_dtors[] = {
+		{
+			.btf_id       = generic_kfunc_dtor_ids[0],
+			.kfunc_btf_id = generic_kfunc_dtor_ids[1]
+		},
+	};
+
+	ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &generic_kfunc_set);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &generic_kfunc_set);
+	return ret ?: register_btf_id_dtor_kfuncs(generic_kfunc_dtors,
+						  ARRAY_SIZE(generic_kfunc_dtors),
+						  THIS_MODULE);
 }
 
 late_initcall(kfunc_init);
-- 
2.38.0



* [PATCH v5 3/3] bpf/selftests: Add selftests for new task kfuncs
  2022-10-14 21:21 [PATCH v5 0/3] Support storing struct task_struct objects as kptrs David Vernet
  2022-10-14 21:21 ` [PATCH v5 1/3] bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs David Vernet
  2022-10-14 21:21 ` [PATCH v5 2/3] bpf: Add kfuncs for storing struct task_struct * as a kptr David Vernet
@ 2022-10-14 21:21 ` David Vernet
  2022-10-19 17:39   ` David Vernet
  2 siblings, 1 reply; 14+ messages in thread
From: David Vernet @ 2022-10-14 21:21 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj,
	memxor

A previous change added a series of kfuncs for storing struct
task_struct objects as referenced kptrs. This patch adds a new
task_kfunc test suite for validating their expected behavior.
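
The new suite can be run with the usual test_progs subtest filter on a
kernel built with these patches, e.g.:

```
$ cd tools/testing/selftests/bpf
$ ./test_progs -t task_kfunc
```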

Signed-off-by: David Vernet <void@manifault.com>
---
 tools/testing/selftests/bpf/DENYLIST.s390x    |   1 +
 .../selftests/bpf/prog_tests/task_kfunc.c     | 160 +++++++++
 .../selftests/bpf/progs/task_kfunc_common.h   |  83 +++++
 .../selftests/bpf/progs/task_kfunc_failure.c  | 315 ++++++++++++++++++
 .../selftests/bpf/progs/task_kfunc_success.c  | 132 ++++++++
 5 files changed, 691 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/task_kfunc.c
 create mode 100644 tools/testing/selftests/bpf/progs/task_kfunc_common.h
 create mode 100644 tools/testing/selftests/bpf/progs/task_kfunc_failure.c
 create mode 100644 tools/testing/selftests/bpf/progs/task_kfunc_success.c

diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x
index 520f12229b98..323a0e312b3d 100644
--- a/tools/testing/selftests/bpf/DENYLIST.s390x
+++ b/tools/testing/selftests/bpf/DENYLIST.s390x
@@ -52,6 +52,7 @@ skc_to_unix_sock                         # could not attach BPF object unexpecte
 socket_cookie                            # prog_attach unexpected error: -524                                          (trampoline)
 stacktrace_build_id                      # compare_map_keys stackid_hmap vs. stackmap err -2 errno 2                   (?)
 tailcalls                                # tail_calls are not allowed in non-JITed programs with bpf-to-bpf calls      (?)
+task_kfunc                               # JIT does not support calling kernel function
 task_local_storage                       # failed to auto-attach program 'trace_exit_creds': -524                      (trampoline)
 test_bpffs                               # bpffs test  failed 255                                                      (iterator)
 test_bprm_opts                           # failed to auto-attach program 'secure_exec': -524                           (trampoline)
diff --git a/tools/testing/selftests/bpf/prog_tests/task_kfunc.c b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
new file mode 100644
index 000000000000..18492d010c45
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
@@ -0,0 +1,160 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#define _GNU_SOURCE
+#include <sys/wait.h>
+#include <test_progs.h>
+#include <unistd.h>
+
+#include "task_kfunc_failure.skel.h"
+#include "task_kfunc_success.skel.h"
+
+static size_t log_buf_sz = 1 << 20; /* 1 MB */
+static char obj_log_buf[1048576];
+
+static struct task_kfunc_success *open_load_task_kfunc_skel(void)
+{
+	struct task_kfunc_success *skel;
+	int err;
+
+	skel = task_kfunc_success__open();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return NULL;
+
+	skel->bss->pid = getpid();
+
+	err = task_kfunc_success__load(skel);
+	if (!ASSERT_OK(err, "skel_load"))
+		goto cleanup;
+
+	return skel;
+
+cleanup:
+	task_kfunc_success__destroy(skel);
+	return NULL;
+}
+
+static void run_success_test(const char *prog_name)
+{
+	struct task_kfunc_success *skel;
+	int status;
+	pid_t child_pid;
+	struct bpf_program *prog;
+	struct bpf_link *link = NULL;
+
+	skel = open_load_task_kfunc_skel();
+	if (!ASSERT_OK_PTR(skel, "open_load_skel"))
+		return;
+
+	if (!ASSERT_OK(skel->bss->err, "pre_spawn_err"))
+		goto cleanup;
+
+	prog = bpf_object__find_program_by_name(skel->obj, prog_name);
+	if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+		goto cleanup;
+
+	link = bpf_program__attach(prog);
+	if (!ASSERT_OK_PTR(link, "attached_link"))
+		goto cleanup;
+
+	child_pid = fork();
+	if (!ASSERT_GT(child_pid, -1, "child_pid"))
+		goto cleanup;
+	if (child_pid == 0)
+		_exit(0);
+	waitpid(child_pid, &status, 0);
+
+	ASSERT_OK(skel->bss->err, "post_wait_err");
+
+cleanup:
+	bpf_link__destroy(link);
+	task_kfunc_success__destroy(skel);
+}
+
+static const char * const success_tests[] = {
+	"test_task_acquire_release_argument",
+	"test_task_acquire_release_current",
+	"test_task_acquire_leave_in_map",
+	"test_task_xchg_release",
+	"test_task_get_release",
+};
+
+static struct {
+	const char *prog_name;
+	const char *expected_err_msg;
+} failure_tests[] = {
+	{"task_kfunc_acquire_untrusted", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_fp", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_no_null_check", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_trusted_nested", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_null", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_acquire_unreleased", "Unreleased reference"},
+	{"task_kfunc_get_non_kptr_param", "arg#0 expected pointer to map value"},
+	{"task_kfunc_get_non_kptr_acquired", "arg#0 expected pointer to map value"},
+	{"task_kfunc_get_null", "arg#0 expected pointer to map value"},
+	{"task_kfunc_xchg_unreleased", "Unreleased reference"},
+	{"task_kfunc_get_unreleased", "Unreleased reference"},
+	{"task_kfunc_release_untrusted", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_release_fp", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_release_null", "arg#0 pointer type STRUCT task_struct must point"},
+	{"task_kfunc_release_unacquired", "release kernel function bpf_task_release expects refcounted PTR_TO_BTF_ID"},
+};
+
+static void verify_fail(const char *prog_name, const char *expected_err_msg)
+{
+	LIBBPF_OPTS(bpf_object_open_opts, opts);
+	struct task_kfunc_failure *skel;
+	int err, i;
+
+	opts.kernel_log_buf = obj_log_buf;
+	opts.kernel_log_size = log_buf_sz;
+	opts.kernel_log_level = 1;
+
+	skel = task_kfunc_failure__open_opts(&opts);
+	if (!ASSERT_OK_PTR(skel, "task_kfunc_failure__open_opts"))
+		goto cleanup;
+
+	skel->bss->pid = getpid();
+
+	for (i = 0; i < ARRAY_SIZE(failure_tests); i++) {
+		struct bpf_program *prog;
+		const char *curr_name = failure_tests[i].prog_name;
+
+		prog = bpf_object__find_program_by_name(skel->obj, curr_name);
+		if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+			goto cleanup;
+
+		bpf_program__set_autoload(prog, !strcmp(curr_name, prog_name));
+	}
+
+	err = task_kfunc_failure__load(skel);
+	if (!ASSERT_ERR(err, "unexpected load success"))
+		goto cleanup;
+
+	if (!ASSERT_OK_PTR(strstr(obj_log_buf, expected_err_msg), "expected_err_msg")) {
+		fprintf(stderr, "Expected err_msg: %s\n", expected_err_msg);
+		fprintf(stderr, "Verifier output: %s\n", obj_log_buf);
+	}
+
+cleanup:
+	task_kfunc_failure__destroy(skel);
+}
+
+void test_task_kfunc(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(success_tests); i++) {
+		if (!test__start_subtest(success_tests[i]))
+			continue;
+
+		run_success_test(success_tests[i]);
+	}
+
+	for (i = 0; i < ARRAY_SIZE(failure_tests); i++) {
+		if (!test__start_subtest(failure_tests[i].prog_name))
+			continue;
+
+		verify_fail(failure_tests[i].prog_name, failure_tests[i].expected_err_msg);
+	}
+}
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_common.h b/tools/testing/selftests/bpf/progs/task_kfunc_common.h
new file mode 100644
index 000000000000..f51257bdd695
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_common.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#ifndef _TASK_KFUNC_COMMON_H
+#define _TASK_KFUNC_COMMON_H
+
+#include <errno.h>
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+struct __tasks_kfunc_map_value {
+	struct task_struct __kptr_ref * task;
+};
+
+struct hash_map {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__type(key, int);
+	__type(value, struct __tasks_kfunc_map_value);
+	__uint(max_entries, 1);
+} __tasks_kfunc_map SEC(".maps");
+
+struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
+struct task_struct *bpf_task_kptr_get(struct task_struct **pp) __ksym;
+void bpf_task_release(struct task_struct *p) __ksym;
+
+#define TEST_NAME_SZ 128
+
+/* The pid of the test process used to determine if a newly created task is the test task. */
+int pid;
+
+static inline struct __tasks_kfunc_map_value *tasks_kfunc_map_value_lookup(struct task_struct *p)
+{
+	s32 pid;
+	long status;
+
+	status = bpf_probe_read_kernel(&pid, sizeof(pid), &p->pid);
+	if (status)
+		return NULL;
+
+	return bpf_map_lookup_elem(&__tasks_kfunc_map, &pid);
+}
+
+static inline int tasks_kfunc_map_insert(struct task_struct *p)
+{
+	struct __tasks_kfunc_map_value local, *v;
+	long status;
+	struct task_struct *acquired, *old;
+	s32 pid;
+
+	status = bpf_probe_read_kernel(&pid, sizeof(pid), &p->pid);
+	if (status)
+		return status;
+
+	local.task = NULL;
+	status = bpf_map_update_elem(&__tasks_kfunc_map, &pid, &local, BPF_NOEXIST);
+	if (status)
+		return status;
+
+	v = bpf_map_lookup_elem(&__tasks_kfunc_map, &pid);
+	if (!v) {
+		bpf_map_delete_elem(&__tasks_kfunc_map, &pid);
+		return -ENOENT;
+	}
+
+	acquired = bpf_task_acquire(p);
+	old = bpf_kptr_xchg(&v->task, acquired);
+	if (old) {
+		bpf_task_release(old);
+		return -EEXIST;
+	}
+
+	return 0;
+}
+
+static inline bool is_test_kfunc_task(void)
+{
+	int cur_pid = bpf_get_current_pid_tgid() >> 32;
+
+	return pid == cur_pid;
+}
+
+#endif /* _TASK_KFUNC_COMMON_H */
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
new file mode 100644
index 000000000000..74d2f176a2de
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
@@ -0,0 +1,315 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+#include "task_kfunc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+/* Prototype for all of the program trace events below:
+ *
+ * TRACE_EVENT(task_newtask,
+ *         TP_PROTO(struct task_struct *p, u64 clone_flags)
+ */
+
+static struct __tasks_kfunc_map_value *insert_lookup_task(struct task_struct *task)
+{
+	int status;
+
+	status = tasks_kfunc_map_insert(task);
+	if (status)
+		return NULL;
+
+	return tasks_kfunc_map_value_lookup(task);
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_untrusted, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+	struct __tasks_kfunc_map_value *v;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	v = insert_lookup_task(task);
+	if (!v)
+		return 0;
+
+	/* Can't invoke bpf_task_acquire() on an untrusted pointer. */
+	acquired = bpf_task_acquire(v->task);
+	if (!acquired)
+		return 0;
+
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_fp, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired, *stack_task = (struct task_struct *)&clone_flags;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Can't invoke bpf_task_acquire() on a random frame pointer. */
+	acquired = bpf_task_acquire((struct task_struct *)&stack_task);
+	if (!acquired)
+		return 0;
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_no_null_check, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	acquired = bpf_task_acquire(task);
+	/* Can't release a bpf_task_acquire()'d task without a NULL check. */
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_trusted_nested, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Can't invoke bpf_task_acquire() on a trusted pointer at a nonzero offset. */
+	acquired = bpf_task_acquire(task->last_wakee);
+	if (!acquired)
+		return 0;
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_null, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Can't invoke bpf_task_acquire() on a NULL pointer. */
+	acquired = bpf_task_acquire(NULL);
+	if (!acquired)
+		return 0;
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_acquire_unreleased, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	acquired = bpf_task_acquire(task);
+
+	/* Acquired task is never released. */
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_get_non_kptr_param, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Cannot use bpf_task_kptr_get() on a non-kptr, even on a valid task. */
+	kptr = bpf_task_kptr_get(&task);
+	if (!kptr)
+		return 0;
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_get_non_kptr_acquired, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr, *acquired;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	acquired = bpf_task_acquire(task);
+	if (!acquired)
+		return 0;
+
+	/* Cannot use bpf_task_kptr_get() on a non-kptr, even if it was acquired. */
+	kptr = bpf_task_kptr_get(&acquired);
+	bpf_task_release(acquired);
+	if (!kptr)
+		return 0;
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_get_null, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Cannot use bpf_task_kptr_get() on a NULL pointer. */
+	kptr = bpf_task_kptr_get(NULL);
+	if (!kptr)
+		return 0;
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_xchg_unreleased, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+	struct __tasks_kfunc_map_value *v;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	v = insert_lookup_task(task);
+	if (!v)
+		return 0;
+
+	kptr = bpf_kptr_xchg(&v->task, NULL);
+	if (!kptr)
+		return 0;
+
+	/* Kptr retrieved from map is never released. */
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_get_unreleased, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+	struct __tasks_kfunc_map_value *v;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	v = insert_lookup_task(task);
+	if (!v)
+		return 0;
+
+	kptr = bpf_task_kptr_get(&v->task);
+	if (!kptr)
+		return 0;
+
+	/* Kptr acquired above is never released. */
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_release_untrusted, struct task_struct *task, u64 clone_flags)
+{
+	struct __tasks_kfunc_map_value *v;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	v = insert_lookup_task(task);
+	if (!v)
+		return 0;
+
+	/* Can't invoke bpf_task_release() on an untrusted pointer. */
+	bpf_task_release(v->task);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_release_fp, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *acquired = (struct task_struct *)&clone_flags;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Cannot release random frame pointer. */
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_release_null, struct task_struct *task, u64 clone_flags)
+{
+	struct __tasks_kfunc_map_value local, *v;
+	long status;
+	struct task_struct *acquired, *old;
+	s32 pid;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	status = bpf_probe_read_kernel(&pid, sizeof(pid), &task->pid);
+	if (status)
+		return 0;
+
+	local.task = NULL;
+	status = bpf_map_update_elem(&__tasks_kfunc_map, &pid, &local, BPF_NOEXIST);
+	if (status)
+		return status;
+
+	v = bpf_map_lookup_elem(&__tasks_kfunc_map, &pid);
+	if (!v)
+		return status;
+
+	acquired = bpf_task_acquire(task);
+	if (!acquired)
+		return 0;
+
+	old = bpf_kptr_xchg(&v->task, acquired);
+
+	/* old cannot be passed to bpf_task_release() without a NULL check. */
+	bpf_task_release(old);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(task_kfunc_release_unacquired, struct task_struct *task, u64 clone_flags)
+{
+	if (!is_test_kfunc_task())
+		return 0;
+
+	/* Cannot release trusted task pointer which was not acquired. */
+	bpf_task_release(task);
+
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_success.c b/tools/testing/selftests/bpf/progs/task_kfunc_success.c
new file mode 100644
index 000000000000..8d5c05b41d53
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_success.c
@@ -0,0 +1,132 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+#include "task_kfunc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+int err;
+
+/* Prototype for all of the program trace events below:
+ *
+ * TRACE_EVENT(task_newtask,
+ *         TP_PROTO(struct task_struct *p, u64 clone_flags)
+ */
+
+static int test_acquire_release(struct task_struct *task)
+{
+	struct task_struct *acquired;
+
+	acquired = bpf_task_acquire(task);
+	if (!acquired) {
+		err = 1;
+		return 0;
+	}
+
+	bpf_task_release(acquired);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_acquire_release_argument, struct task_struct *task, u64 clone_flags)
+{
+	if (!is_test_kfunc_task())
+		return 0;
+
+	return test_acquire_release(task);
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_acquire_release_current, struct task_struct *task, u64 clone_flags)
+{
+	if (!is_test_kfunc_task())
+		return 0;
+
+	return test_acquire_release(bpf_get_current_task_btf());
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_acquire_leave_in_map, struct task_struct *task, u64 clone_flags)
+{
+	long status;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	status = tasks_kfunc_map_insert(task);
+	if (status)
+		err = 1;
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_xchg_release, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+	struct __tasks_kfunc_map_value *v;
+	long status;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	status = tasks_kfunc_map_insert(task);
+	if (status) {
+		err = 1;
+		return 0;
+	}
+
+	v = tasks_kfunc_map_value_lookup(task);
+	if (!v) {
+		err = 2;
+		return 0;
+	}
+
+	kptr = bpf_kptr_xchg(&v->task, NULL);
+	if (!kptr) {
+		err = 3;
+		return 0;
+	}
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
+
+SEC("tp_btf/task_newtask")
+int BPF_PROG(test_task_get_release, struct task_struct *task, u64 clone_flags)
+{
+	struct task_struct *kptr;
+	struct __tasks_kfunc_map_value *v;
+	long status;
+
+	if (!is_test_kfunc_task())
+		return 0;
+
+	status = tasks_kfunc_map_insert(task);
+	if (status) {
+		err = 1;
+		return 0;
+	}
+
+	v = tasks_kfunc_map_value_lookup(task);
+	if (!v) {
+		err = 2;
+		return 0;
+	}
+
+	kptr = bpf_task_kptr_get(&v->task);
+	if (!kptr) {
+		err = 3;
+		return 0;
+	}
+
+	bpf_task_release(kptr);
+
+	return 0;
+}
-- 
2.38.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 1/3] bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs
  2022-10-14 21:21 ` [PATCH v5 1/3] bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs David Vernet
@ 2022-10-18  1:32   ` Kumar Kartikeya Dwivedi
  2022-10-19 19:37     ` David Vernet
  0 siblings, 1 reply; 14+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2022-10-18  1:32 UTC (permalink / raw)
  To: David Vernet
  Cc: bpf, ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj

On Sat, 15 Oct 2022 at 02:51, David Vernet <void@manifault.com> wrote:
>
> Kfuncs currently support specifying the KF_TRUSTED_ARGS flag to signal
> to the verifier that it should enforce that a BPF program passes it a
> "safe", trusted pointer. Currently, "safe" means that the pointer is
> either PTR_TO_CTX, or is refcounted. There may be cases, however, where
> the kernel passes a BPF program a safe / trusted pointer to an object
> that the BPF program wishes to use as a kptr, but because the object
> does not yet have a ref_obj_id from the perspective of the verifier, the
> program would be unable to pass it to a KF_ACQUIRE | KF_TRUSTED_ARGS
> kfunc.
>
> The solution is to expand the set of pointers that are considered
> trusted according to KF_TRUSTED_ARGS, so that programs can invoke kfuncs
> with these pointers without getting rejected by the verifier.
>
> There is already a PTR_UNTRUSTED flag that is set in some scenarios,
> such as when a BPF program reads a kptr directly from a map
> without performing a bpf_kptr_xchg() call. These pointers of course can
> and should be rejected by the verifier. Unfortunately, however,
> PTR_UNTRUSTED does not cover all the cases for safety that need to
> be addressed to adequately protect kfuncs. Specifically, pointers
> obtained by a BPF program "walking" a struct are _not_ considered
> PTR_UNTRUSTED according to BPF. For example, say that we were to add a
> kfunc called bpf_task_acquire(), with KF_ACQUIRE | KF_TRUSTED_ARGS, to
> acquire a struct task_struct *. If we only used PTR_UNTRUSTED to signal
> that a task was unsafe to pass to a kfunc, the verifier would mistakenly
> allow the following unsafe BPF program to be loaded:
>
> SEC("tp_btf/task_newtask")
> int BPF_PROG(unsafe_acquire_task,
>              struct task_struct *task,
>              u64 clone_flags)
> {
>         struct task_struct *acquired, *nested;
>
>         nested = task->last_wakee;
>
>         /* Would not be rejected by the verifier. */
>         acquired = bpf_task_acquire(nested);
>         if (!acquired)
>                 return 0;
>
>         bpf_task_release(acquired);
>         return 0;
> }
>
> To address this, this patch defines a new type flag called PTR_NESTED
> which tracks whether a PTR_TO_BTF_ID pointer was retrieved from walking
> a struct. A pointer passed directly from the kernel begins with
> (PTR_NESTED & type) == 0, meaning of course that it is not nested. Any
> pointer received from walking that object, however, would inherit that
> flag and become a nested pointer.
>
> With that flag, this patch also updates btf_check_func_arg_match() to
> only flag a PTR_TO_BTF_ID object as requiring a refcount if it has any
> type modifiers (which of course includes both PTR_UNTRUSTED and
> PTR_NESTED). Otherwise, the pointer passes this check and continues
> onto the others in btf_check_func_arg_match().
>
> A subsequent patch will add kfuncs for storing a task kfunc as a kptr,
> and then another patch will validate this feature by ensuring that the
> verifier rejects a kfunc invocation with a nested pointer.
>
> Signed-off-by: David Vernet <void@manifault.com>
> ---

Please tag the patches with [ PATCH bpf-next ... ] subject prefix.

>  include/linux/bpf.h                          |  6 ++++++
>  kernel/bpf/btf.c                             | 11 ++++++++++-
>  kernel/bpf/verifier.c                        | 12 +++++++++++-
>  tools/testing/selftests/bpf/verifier/calls.c |  4 ++--
>  4 files changed, 29 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 9e7d46d16032..b624024edb4e 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -457,6 +457,12 @@ enum bpf_type_flag {
>         /* Size is known at compile time. */
>         MEM_FIXED_SIZE          = BIT(10 + BPF_BASE_TYPE_BITS),
>
> +       /* PTR was obtained from walking a struct. This is used with
> +        * PTR_TO_BTF_ID to determine whether the pointer is safe to pass to a
> +        * kfunc with KF_TRUSTED_ARGS.
> +        */
> +       PTR_NESTED              = BIT(11 + BPF_BASE_TYPE_BITS),
> +
>         __BPF_TYPE_FLAG_MAX,
>         __BPF_TYPE_LAST_FLAG    = __BPF_TYPE_FLAG_MAX - 1,
>  };
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index eba603cec2c5..3d7bad11b10b 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -6333,8 +6333,17 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
>                 /* Check if argument must be a referenced pointer, args + i has
>                  * been verified to be a pointer (after skipping modifiers).
>                  * PTR_TO_CTX is ok without having non-zero ref_obj_id.
> +                *
> +                * All object pointers must be refcounted, other than:
> +                * - PTR_TO_CTX
> +                * - Trusted pointers (i.e. pointers with no type modifiers)
>                  */
> -               if (is_kfunc && trusted_args && (obj_ptr && reg->type != PTR_TO_CTX) && !reg->ref_obj_id) {
> +               if (is_kfunc &&
> +                   trusted_args &&
> +                   obj_ptr &&
> +                   base_type(reg->type) != PTR_TO_CTX &&
> +                   type_flag(reg->type) &&
> +                   !reg->ref_obj_id) {
>                         bpf_log(log, "R%d must be referenced\n", regno);
>                         return -EINVAL;
>                 }
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 6f6d2d511c06..d16a08ca507b 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -581,6 +581,8 @@ static const char *reg_type_str(struct bpf_verifier_env *env,
>                 strncpy(prefix, "user_", 32);
>         if (type & MEM_PERCPU)
>                 strncpy(prefix, "percpu_", 32);
> +       if (type & PTR_NESTED)
> +               strncpy(prefix, "nested_", 32);
>         if (type & PTR_UNTRUSTED)
>                 strncpy(prefix, "untrusted_", 32);
>

Since these are no longer exclusive, the code needs to be updated to
append strings to the prefix buffer.
Maybe just using snprintf with %s%s%s.. would be better, passing ""
when !(type & flag).

> @@ -4558,6 +4560,9 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
>         if (type_flag(reg->type) & PTR_UNTRUSTED)
>                 flag |= PTR_UNTRUSTED;
>
> +       /* All pointers obtained by walking a struct are nested. */
> +       flag |= PTR_NESTED;
> +

Instead of PTR_NESTED, how about PTR_WALK?

>         if (atype == BPF_READ && value_regno >= 0)
>                 mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);
>
> @@ -5694,7 +5699,12 @@ static const struct bpf_reg_types scalar_types = { .types = { SCALAR_VALUE } };
>  static const struct bpf_reg_types context_types = { .types = { PTR_TO_CTX } };
>  static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM | MEM_ALLOC } };
>  static const struct bpf_reg_types const_map_ptr_types = { .types = { CONST_PTR_TO_MAP } };
> -static const struct bpf_reg_types btf_ptr_types = { .types = { PTR_TO_BTF_ID } };
> +static const struct bpf_reg_types btf_ptr_types = {
> +       .types = {
> +               PTR_TO_BTF_ID,
> +               PTR_TO_BTF_ID | PTR_NESTED
> +       },
> +};
>  static const struct bpf_reg_types spin_lock_types = { .types = { PTR_TO_MAP_VALUE } };
>  static const struct bpf_reg_types percpu_btf_ptr_types = { .types = { PTR_TO_BTF_ID | MEM_PERCPU } };
>  static const struct bpf_reg_types func_ptr_types = { .types = { PTR_TO_FUNC } };
> diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
> index e1a937277b54..496c29b1a298 100644
> --- a/tools/testing/selftests/bpf/verifier/calls.c
> +++ b/tools/testing/selftests/bpf/verifier/calls.c
> @@ -181,7 +181,7 @@
>         },
>         .result_unpriv = REJECT,
>         .result = REJECT,
> -       .errstr = "negative offset ptr_ ptr R1 off=-4 disallowed",
> +       .errstr = "negative offset nested_ptr_ ptr R1 off=-4 disallowed",
>  },
>  {
>         "calls: invalid kfunc call: PTR_TO_BTF_ID with variable offset",
> @@ -243,7 +243,7 @@
>         },
>         .result_unpriv = REJECT,
>         .result = REJECT,
> -       .errstr = "R1 must be referenced",
> +       .errstr = "arg#0 pointer type STRUCT prog_test_ref_kfunc must point to scalar",
>  },
>  {
>         "calls: valid kfunc call: referenced arg needs refcounted PTR_TO_BTF_ID",
> --
> 2.38.0
>

> [...]
> @@ -5694,7 +5699,12 @@ static const struct bpf_reg_types scalar_types = { .types = { SCALAR_VALUE } };
>  static const struct bpf_reg_types context_types = { .types = { PTR_TO_CTX } };
>  static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM | MEM_ALLOC } };
>  static const struct bpf_reg_types const_map_ptr_types = { .types = { CONST_PTR_TO_MAP } };
> -static const struct bpf_reg_types btf_ptr_types = { .types = { PTR_TO_BTF_ID } };
> +static const struct bpf_reg_types btf_ptr_types = {
> +       .types = {
> +               PTR_TO_BTF_ID,
> +               PTR_TO_BTF_ID | PTR_NESTED
> +       },
> +};

CI fails, two of those failures are from not updating
check_func_arg_reg_off for PTR_TO_BTF_ID | PTR_WALK, and the other one
is a crash because you didn't add to the list in convert_ctx_access
for PROBE_MEM based load fault handling.

The third issue is a bit more interesting:

; if (fib6_nh->fib_nh_gw_family) {
4375 58: (71) r1 = *(u8 *)(r9 +14)
4376 same insn cannot be used with different pointers

This is because one branch does:
fib6_nh = &rt->fib6_nh[0];
while the other does
fib6_nh = &nh->nh_info->fib6_nh;

So the load insn 58 in one path uses PTR_TO_BTF_ID and in the other
PTR_TO_BTF_ID | PTR_NESTED, so reg_type_mismatch bails on seeing src
!= prev.

I think it's ok to allow this, and we can probably do base_type(src)
!= base_type(prev) here. If the load was not permitted,
check_mem_access before this should have caught it. The check is
mostly to prevent doing stuff like PTR_TO_CTX in one path and
PTR_TO_BTF_ID in the other, because then the convert_ctx_access
handling would be totally incorrect for the load instruction.

Hopefully I'm not missing anything.

But please add some comments about this when updating this particular
check, it's very easy to forget all this later.

> [...]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 2/3] bpf: Add kfuncs for storing struct task_struct * as a kptr
  2022-10-14 21:21 ` [PATCH v5 2/3] bpf: Add kfuncs for storing struct task_struct * as a kptr David Vernet
@ 2022-10-18  1:55   ` Kumar Kartikeya Dwivedi
  0 siblings, 0 replies; 14+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2022-10-18  1:55 UTC (permalink / raw)
  To: David Vernet
  Cc: bpf, ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj

On Sat, 15 Oct 2022 at 02:51, David Vernet <void@manifault.com> wrote:
>
> Now that BPF supports adding new kernel functions with kfuncs, and
> storing kernel objects in maps with kptrs, we can add a set of kfuncs
> which allow struct task_struct objects to be stored in maps as
> referenced kptrs. The possible use cases for doing this are plentiful.
> During tracing, for example, it would be useful to be able to collect
> some tasks that performed a certain operation, and then periodically
> summarize who they are, which cgroup they're in, how much CPU time
> they've utilized, etc.
>
> In order to enable this, this patch adds three new kfuncs:
>
> struct task_struct *bpf_task_acquire(struct task_struct *p);
> struct task_struct *bpf_task_kptr_get(struct task_struct **pp);
> void bpf_task_release(struct task_struct *p);
>
> A follow-on patch will add selftests validating these kfuncs.
>
> Signed-off-by: David Vernet <void@manifault.com>
> ---
>  [...]
> +/**
> + * bpf_task_release - Release the reference acquired on a struct task_struct *.
> + * If this kfunc is invoked in an RCU read region, the task_struct is
> + * guaranteed to not be freed until the current grace period has ended, even if
> + * its refcount drops to 0.
> + * @p: The task on which a reference is being released.
> + */
> +__used noinline void bpf_task_release(struct task_struct *p)
> +{
> +       if (!p)
> +               return;
> +
> +       put_task_struct_rcu_user(p);
> +}
> +
> +__diag_pop();
> +
> +BTF_SET8_START(generic_kfunc_btf_ids)
>  #ifdef CONFIG_KEXEC_CORE
>  BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE)
>  #endif
> -BTF_SET8_END(tracing_btf_ids)
> +BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RET_NULL | KF_TRUSTED_ARGS)
> +BTF_ID_FLAGS(func, bpf_task_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL)
> +BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE | KF_TRUSTED_ARGS)

You can remove KF_TRUSTED_ARGS here for bpf_task_release, if this is
required, it would be a bug.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 3/3] bpf/selftests: Add selftests for new task kfuncs
  2022-10-14 21:21 ` [PATCH v5 3/3] bpf/selftests: Add selftests for new task kfuncs David Vernet
@ 2022-10-19 17:39   ` David Vernet
  2022-10-20  6:19     ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 14+ messages in thread
From: David Vernet @ 2022-10-19 17:39 UTC (permalink / raw)
  To: bpf
  Cc: ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj,
	memxor

On Tue, Oct 18, 2022 at 07:23:23AM +0530, Kumar Kartikeya Dwivedi wrote:

Note: I'm responding to Kumar's email from v3 [0] here on v5 instead,
per his request on [1].

[0]: https://lore.kernel.org/all/CAP01T77PTK+bD2mBrxJShKNPhEypT2+nSHcr3=uuJbrghv_wFg@mail.gmail.com/
[1]: https://lore.kernel.org/all/CAP01T747PKC2jySOZCWu_gauHbBfaj4JE=hbtm4Z4C-Y8b3ZHg@mail.gmail.com/

My apologies again for the silly mistakes and having to send multiple
versions of the patch set.

> On Sat, 15 Oct 2022 at 01:45, David Vernet <void@manifault.com> wrote:
> >
> > A previous change added a series of kfuncs for storing struct
> > task_struct objects as referenced kptrs. This patch adds a new
> > task_kfunc test suite for validating their expected behavior.
> >
> > Signed-off-by: David Vernet <void@manifault.com>
> > ---
> > [...]
> > +
> > +SEC("tp_btf/task_newtask")
> > +int BPF_PROG(task_kfunc_acquire_trusted_nested, struct task_struct *task, u64 clone_flags)
> > +{
> > +       struct task_struct *acquired;
> > +
> > +       if (!is_test_kfunc_task())
> > +               return 0;
> > +
> > +       /* Can't invoke bpf_task_acquire() on a trusted pointer at a nonzero offset. */
> > +       acquired = bpf_task_acquire(task->last_wakee);
>
> The comment is incorrect, that would be &task->last_wakee instead,
> this is PTR_TO_BTF_ID | PTR_NESTED.

Well, it's a nonzero offset from task. But yes, to your point, it's a
misleading comment because the offset is 0 in the verifier. I'll
rephrase this to reflect that it's a nested pointer (or a walked
pointer, whatever nomenclature we end up going with).

> > +       if (!acquired)
> > +               return 0;
> > +       bpf_task_release(acquired);
> > +
> > +       return 0;
> > +}
> > +
> > [...]
> > +
> > +static int test_acquire_release(struct task_struct *task)
> > +{
> > +       struct task_struct *acquired;
> > +
> > +       acquired = bpf_task_acquire(task);
>
> Unfortunately a side effect of this change is that now since
> PTR_TO_BTF_ID without ref_obj_id is considered trusted, the bpf_ct_*
> functions would begin working with tp_btf args. That probably needs to
> be fixed so that they reject them (ideally with a failing test case to
> make sure it doesn't resurface), probably with a new suffix __ref/or
> __owned as added here [0].
>
> Alexei, since you've suggested avoiding adding that suffix, do you see
> any other way out here?
> It's questionable whether bpf_ct_set_timeout/status should work for CT
> not owned by the BPF program.
>
>   [0]: https://lore.kernel.org/bpf/dfb859a6b76a9234baa194e795ae89cb7ca5694b.1662383493.git.lorenzo@kerne

Ah, yeah, it makes sense that some kfuncs really should only ever be
passed an object if the program owns a reference on it. Specifically for
e.g. bpf_ct_set_timeout/status() as you point out, which should only be
passed a struct nf_conn__init that was allocated by bpf_skb_ct_alloc().

It'd be nice if we could just add another flag like KF_REFERENCED_ARGS
or KF_OWNED_ARGS, which would allow a subset of the arguments afforded by
KF_TRUSTED_ARGS: only those with ref_obj_id > 0. That approach wouldn't
allow the flexibility of per-argument specifications that your proposal
of __ref or __owned name suffixes would, but that limitation already
applies to KF_TRUSTED_ARGS as well.

Personally I'm in agreement with Alexei that it's not a user friendly
API to use suffixes in the name like this. If we want to allow kfunc
authors to have per-argument specifiers, using compiler attributes
and/or some kind of tagging is probably the way to do it?

My proposal for now is to add a new KF_OWNED_ARGS flag, and to very
clearly document exactly what that and KF_TRUSTED_ARGS implies for
kfuncs. Later on, we could explore solutions for having per-arg
specifiers. What do you and Alexei think?

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 1/3] bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs
  2022-10-18  1:32   ` Kumar Kartikeya Dwivedi
@ 2022-10-19 19:37     ` David Vernet
  2022-10-20  5:57       ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 14+ messages in thread
From: David Vernet @ 2022-10-19 19:37 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj

On Tue, Oct 18, 2022 at 07:02:15AM +0530, Kumar Kartikeya Dwivedi wrote:
> Please tag the patches with [ PATCH bpf-next ... ] subject prefix.

Sure, will do.

> >  include/linux/bpf.h                          |  6 ++++++
> >  kernel/bpf/btf.c                             | 11 ++++++++++-
> >  kernel/bpf/verifier.c                        | 12 +++++++++++-
> >  tools/testing/selftests/bpf/verifier/calls.c |  4 ++--
> >  4 files changed, 29 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index 9e7d46d16032..b624024edb4e 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -457,6 +457,12 @@ enum bpf_type_flag {
> >         /* Size is known at compile time. */
> >         MEM_FIXED_SIZE          = BIT(10 + BPF_BASE_TYPE_BITS),
> >
> > +       /* PTR was obtained from walking a struct. This is used with
> > +        * PTR_TO_BTF_ID to determine whether the pointer is safe to pass to a
> > +        * kfunc with KF_TRUSTED_ARGS.
> > +        */
> > +       PTR_NESTED              = BIT(11 + BPF_BASE_TYPE_BITS),
> > +
> >         __BPF_TYPE_FLAG_MAX,
> >         __BPF_TYPE_LAST_FLAG    = __BPF_TYPE_FLAG_MAX - 1,
> >  };
> > diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> > index eba603cec2c5..3d7bad11b10b 100644
> > --- a/kernel/bpf/btf.c
> > +++ b/kernel/bpf/btf.c
> > @@ -6333,8 +6333,17 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
> >                 /* Check if argument must be a referenced pointer, args + i has
> >                  * been verified to be a pointer (after skipping modifiers).
> >                  * PTR_TO_CTX is ok without having non-zero ref_obj_id.
> > +                *
> > +                * All object pointers must be refcounted, other than:
> > +                * - PTR_TO_CTX
> > +                * - Trusted pointers (i.e. pointers with no type modifiers)
> >                  */
> > -               if (is_kfunc && trusted_args && (obj_ptr && reg->type != PTR_TO_CTX) && !reg->ref_obj_id) {
> > +               if (is_kfunc &&
> > +                   trusted_args &&
> > +                   obj_ptr &&
> > +                   base_type(reg->type) != PTR_TO_CTX &&
> > +                   type_flag(reg->type) &&
> > +                   !reg->ref_obj_id) {
> >                         bpf_log(log, "R%d must be referenced\n", regno);
> >                         return -EINVAL;
> >                 }
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 6f6d2d511c06..d16a08ca507b 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -581,6 +581,8 @@ static const char *reg_type_str(struct bpf_verifier_env *env,
> >                 strncpy(prefix, "user_", 32);
> >         if (type & MEM_PERCPU)
> >                 strncpy(prefix, "percpu_", 32);
> > +       if (type & PTR_NESTED)
> > +               strncpy(prefix, "nested_", 32);
> >         if (type & PTR_UNTRUSTED)
> >                 strncpy(prefix, "untrusted_", 32);
> >
> 
> Since these are no longer exclusive, the code needs to be updated to
> append strings to the prefix buffer.
> Maybe just using snprintf with %s%s%s.. would be better, passing ""
> when !(type & flag).

Sure, I can make that change. We'll have to increase the size of the
prefix string on the stack, but that's hardly problematic as these
strings are not terribly large.

> > @@ -4558,6 +4560,9 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
> >         if (type_flag(reg->type) & PTR_UNTRUSTED)
> >                 flag |= PTR_UNTRUSTED;
> >
> > +       /* All pointers obtained by walking a struct are nested. */
> > +       flag |= PTR_NESTED;
> > +
> 
> Instead of PTR_NESTED, how about PTR_WALK?

I don't have a strong preference either way, though I would prefer
PTR_WALKED if we go with the latter. Does that work for you?

> >         if (atype == BPF_READ && value_regno >= 0)
> >                 mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);
> >
> > @@ -5694,7 +5699,12 @@ static const struct bpf_reg_types scalar_types = { .types = { SCALAR_VALUE } };
> >  static const struct bpf_reg_types context_types = { .types = { PTR_TO_CTX } };
> >  static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM | MEM_ALLOC } };
> >  static const struct bpf_reg_types const_map_ptr_types = { .types = { CONST_PTR_TO_MAP } };
> > -static const struct bpf_reg_types btf_ptr_types = { .types = { PTR_TO_BTF_ID } };
> > +static const struct bpf_reg_types btf_ptr_types = {
> > +       .types = {
> > +               PTR_TO_BTF_ID,
> > +               PTR_TO_BTF_ID | PTR_NESTED
> > +       },
> > +};
> >  static const struct bpf_reg_types spin_lock_types = { .types = { PTR_TO_MAP_VALUE } };
> >  static const struct bpf_reg_types percpu_btf_ptr_types = { .types = { PTR_TO_BTF_ID | MEM_PERCPU } };
> >  static const struct bpf_reg_types func_ptr_types = { .types = { PTR_TO_FUNC } };
> > diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
> > index e1a937277b54..496c29b1a298 100644
> > --- a/tools/testing/selftests/bpf/verifier/calls.c
> > +++ b/tools/testing/selftests/bpf/verifier/calls.c
> > @@ -181,7 +181,7 @@
> >         },
> >         .result_unpriv = REJECT,
> >         .result = REJECT,
> > -       .errstr = "negative offset ptr_ ptr R1 off=-4 disallowed",
> > +       .errstr = "negative offset nested_ptr_ ptr R1 off=-4 disallowed",
> >  },
> >  {
> >         "calls: invalid kfunc call: PTR_TO_BTF_ID with variable offset",
> > @@ -243,7 +243,7 @@
> >         },
> >         .result_unpriv = REJECT,
> >         .result = REJECT,
> > -       .errstr = "R1 must be referenced",
> > +       .errstr = "arg#0 pointer type STRUCT prog_test_ref_kfunc must point to scalar",
> >  },
> >  {
> >         "calls: valid kfunc call: referenced arg needs refcounted PTR_TO_BTF_ID",
> > --
> > 2.38.0
> >
> 
> On Sat, 15 Oct 2022 at 02:51, David Vernet <void@manifault.com> wrote:
> >
> > Kfuncs currently support specifying the KF_TRUSTED_ARGS flag to signal
> > to the verifier that it should enforce that a BPF program passes it a
> > "safe", trusted pointer. Currently, "safe" means that the pointer is
> > either PTR_TO_CTX, or is refcounted. There may be cases, however, where
> > the kernel passes a BPF program a safe / trusted pointer to an object
> > that the BPF program wishes to use as a kptr, but because the object
> > does not yet have a ref_obj_id from the perspective of the verifier, the
> > program would be unable to pass it to a KF_ACQUIRE | KF_TRUSTED_ARGS
> > kfunc.
> >
> > The solution is to expand the set of pointers that are considered
> > trusted according to KF_TRUSTED_ARGS, so that programs can invoke kfuncs
> > with these pointers without getting rejected by the verifier.
> >
> > There is already a PTR_UNTRUSTED flag that is set in some scenarios,
> > such as when a BPF program reads a kptr directly from a map
> > without performing a bpf_kptr_xchg() call. These pointers of course can
> > and should be rejected by the verifier. Unfortunately, however,
> > PTR_UNTRUSTED does not cover all the cases for safety that need to
> > be addressed to adequately protect kfuncs. Specifically, pointers
> > obtained by a BPF program "walking" a struct are _not_ considered
> > PTR_UNTRUSTED according to BPF. For example, say that we were to add a
> > kfunc called bpf_task_acquire(), with KF_ACQUIRE | KF_TRUSTED_ARGS, to
> > acquire a struct task_struct *. If we only used PTR_UNTRUSTED to signal
> > that a task was unsafe to pass to a kfunc, the verifier would mistakenly
> > allow the following unsafe BPF program to be loaded:
> >
> > SEC("tp_btf/task_newtask")
> > int BPF_PROG(unsafe_acquire_task,
> >              struct task_struct *task,
> >              u64 clone_flags)
> > {
> >         struct task_struct *acquired, *nested;
> >
> >         nested = task->last_wakee;
> >
> >         /* Would not be rejected by the verifier. */
> >         acquired = bpf_task_acquire(nested);
> >         if (!acquired)
> >                 return 0;
> >
> >         bpf_task_release(acquired);
> >         return 0;
> > }
> >
> > To address this, this patch defines a new type flag called PTR_NESTED
> > which tracks whether a PTR_TO_BTF_ID pointer was retrieved from walking
> > a struct. A pointer passed directly from the kernel begins with
> > (PTR_NESTED & type) == 0, meaning of course that it is not nested. Any
> > pointer received from walking that object, however, would inherit that
> > flag and become a nested pointer.
> >
> > With that flag, this patch also updates btf_check_func_arg_match() to
> > only flag a PTR_TO_BTF_ID object as requiring a refcount if it has any
> > type modifiers (which of course includes both PTR_UNTRUSTED and
> > PTR_NESTED). Otherwise, the pointer passes this check and continues
> > onto the others in btf_check_func_arg_match().
> >
> > A subsequent patch will add kfuncs for storing a task kfunc as a kptr,
> > and then another patch will validate this feature by ensuring that the
> > verifier rejects a kfunc invocation with a nested pointer.
> >
> > Signed-off-by: David Vernet <void@manifault.com>
> > ---
> >  include/linux/bpf.h                          |  6 ++++++
> >  kernel/bpf/btf.c                             | 11 ++++++++++-
> >  kernel/bpf/verifier.c                        | 12 +++++++++++-
> >  tools/testing/selftests/bpf/verifier/calls.c |  4 ++--
> >  4 files changed, 29 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index 9e7d46d16032..b624024edb4e 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -457,6 +457,12 @@ enum bpf_type_flag {
> >         /* Size is known at compile time. */
> >         MEM_FIXED_SIZE          = BIT(10 + BPF_BASE_TYPE_BITS),
> >
> > +       /* PTR was obtained from walking a struct. This is used with
> > +        * PTR_TO_BTF_ID to determine whether the pointer is safe to pass to a
> > +        * kfunc with KF_TRUSTED_ARGS.
> > +        */
> > +       PTR_NESTED              = BIT(11 + BPF_BASE_TYPE_BITS),
> > +
> >         __BPF_TYPE_FLAG_MAX,
> >         __BPF_TYPE_LAST_FLAG    = __BPF_TYPE_FLAG_MAX - 1,
> >  };
> > diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> > index eba603cec2c5..3d7bad11b10b 100644
> > --- a/kernel/bpf/btf.c
> > +++ b/kernel/bpf/btf.c
> > @@ -6333,8 +6333,17 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
> >                 /* Check if argument must be a referenced pointer, args + i has
> >                  * been verified to be a pointer (after skipping modifiers).
> >                  * PTR_TO_CTX is ok without having non-zero ref_obj_id.
> > +                *
> > +                * All object pointers must be refcounted, other than:
> > +                * - PTR_TO_CTX
> > +                * - Trusted pointers (i.e. pointers with no type modifiers)
> >                  */
> > -               if (is_kfunc && trusted_args && (obj_ptr && reg->type != PTR_TO_CTX) && !reg->ref_obj_id) {
> > +               if (is_kfunc &&
> > +                   trusted_args &&
> > +                   obj_ptr &&
> > +                   base_type(reg->type) != PTR_TO_CTX &&
> > +                   type_flag(reg->type) &&
> > +                   !reg->ref_obj_id) {
> >                         bpf_log(log, "R%d must be referenced\n", regno);
> >                         return -EINVAL;
> >                 }
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 6f6d2d511c06..d16a08ca507b 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -581,6 +581,8 @@ static const char *reg_type_str(struct bpf_verifier_env *env,
> >                 strncpy(prefix, "user_", 32);
> >         if (type & MEM_PERCPU)
> >                 strncpy(prefix, "percpu_", 32);
> > +       if (type & PTR_NESTED)
> > +               strncpy(prefix, "nested_", 32);
> >         if (type & PTR_UNTRUSTED)
> >                 strncpy(prefix, "untrusted_", 32);
> >
> > @@ -4558,6 +4560,9 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
> >         if (type_flag(reg->type) & PTR_UNTRUSTED)
> >                 flag |= PTR_UNTRUSTED;
> >
> > +       /* All pointers obtained by walking a struct are nested. */
> > +       flag |= PTR_NESTED;
> > +
> >         if (atype == BPF_READ && value_regno >= 0)
> >                 mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);
> >
> > @@ -5694,7 +5699,12 @@ static const struct bpf_reg_types scalar_types = { .types = { SCALAR_VALUE } };
> >  static const struct bpf_reg_types context_types = { .types = { PTR_TO_CTX } };
> >  static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM | MEM_ALLOC } };
> >  static const struct bpf_reg_types const_map_ptr_types = { .types = { CONST_PTR_TO_MAP } };
> > -static const struct bpf_reg_types btf_ptr_types = { .types = { PTR_TO_BTF_ID } };
> > +static const struct bpf_reg_types btf_ptr_types = {
> > +       .types = {
> > +               PTR_TO_BTF_ID,
> > +               PTR_TO_BTF_ID | PTR_NESTED
> > +       },
> > +};
> 
> CI fails, two of those failures are from not updating
> check_func_arg_reg_off for PTR_TO_BTF_ID | PTR_WALK, and the other one

Gah, I didn't think it was necessary for this case as it's not required
for btf_check_func_arg_match(), which will eventually just fail in the
following check:

	if (!btf_type_is_struct(ref_t)) {
		bpf_log(log, "kernel function %s args#%d pointer type %s %s is not supported\n",
			func_name, i, btf_type_str(ref_t),
			ref_tname);
		return -EINVAL;
	}

Note that we also don't include PTR_TO_BTF_ID | PTR_UNTRUSTED here. The
difference for PTR_TO_BTF_ID | PTR_WALK(ED) is of course that we also need to
allow it to work properly for normal helper calls, so I'll make that change.
Thanks for pointing it out. In general, the whole dance between register base
types + modifiers sometimes feels like a mine field...

> is a crash because you didn't add to the list in convert_ctx_access
> for PROBE_MEM based load fault handling.

Sorry, just a plain oversight on my end for this one. Thanks for pointing it
out as well.

> The third issue is a bit more interesting:
> 
> ; if (fib6_nh->fib_nh_gw_family) {
> 4375 58: (71) r1 = *(u8 *)(r9 +14)
> 4376 same insn cannot be used with different pointers
> 
> This is because one branch does:
> fib6_nh = &rt->fib6_nh[0];
> while the other does
> fib6_nh = &nh->nh_info->fib6_nh;
> 
> So the load insn 58 in one path uses PTR_TO_BTF_ID and in the other
> PTR_TO_BTF_ID | PTR_NESTED, so reg_type_mismatch bails on seeing src
> != prev.

Ah, that's tricky. Yeah it seems perfectly reasonable to just check the base
type here. As you point out below, check_mem_access() should guard against
invalid accesses.

> I think it's ok to allow this, and we can probably do base_type(src)
> != base_type(prev) here. If the load was not permitted,
> check_mem_access before this should have caught it. The check is
> mostly to prevent doing stuff like PTR_TO_CTX in one path and
> PTR_TO_BTF_ID in the other, because then the convert_ctx_access
> handling would be totally incorrect for the load instruction.
> 
> Hopefully I'm not missing anything.
> 
> But please add some comments about this when updating this particular
> check, it's very easy to forget all this later.

Definitely, will do in v6.

Thanks a lot for the thorough review.

- David

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 1/3] bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs
  2022-10-19 19:37     ` David Vernet
@ 2022-10-20  5:57       ` Kumar Kartikeya Dwivedi
  2022-10-20  6:11         ` David Vernet
  0 siblings, 1 reply; 14+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2022-10-20  5:57 UTC (permalink / raw)
  To: David Vernet
  Cc: bpf, ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj

On Thu, Oct 20, 2022 at 01:07:09AM IST, David Vernet wrote:
> On Tue, Oct 18, 2022 at 07:02:15AM +0530, Kumar Kartikeya Dwivedi wrote:
> > Please tag the patches with [ PATCH bpf-next ... ] subject prefix.
>
> Sure, will do.
>
> > >  include/linux/bpf.h                          |  6 ++++++
> > >  kernel/bpf/btf.c                             | 11 ++++++++++-
> > >  kernel/bpf/verifier.c                        | 12 +++++++++++-
> > >  tools/testing/selftests/bpf/verifier/calls.c |  4 ++--
> > >  4 files changed, 29 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > > index 9e7d46d16032..b624024edb4e 100644
> > > --- a/include/linux/bpf.h
> > > +++ b/include/linux/bpf.h
> > > @@ -457,6 +457,12 @@ enum bpf_type_flag {
> > >         /* Size is known at compile time. */
> > >         MEM_FIXED_SIZE          = BIT(10 + BPF_BASE_TYPE_BITS),
> > >
> > > +       /* PTR was obtained from walking a struct. This is used with
> > > +        * PTR_TO_BTF_ID to determine whether the pointer is safe to pass to a
> > > +        * kfunc with KF_TRUSTED_ARGS.
> > > +        */
> > > +       PTR_NESTED              = BIT(11 + BPF_BASE_TYPE_BITS),
> > > +
> > >         __BPF_TYPE_FLAG_MAX,
> > >         __BPF_TYPE_LAST_FLAG    = __BPF_TYPE_FLAG_MAX - 1,
> > >  };
> > > diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> > > index eba603cec2c5..3d7bad11b10b 100644
> > > --- a/kernel/bpf/btf.c
> > > +++ b/kernel/bpf/btf.c
> > > @@ -6333,8 +6333,17 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env,
> > >                 /* Check if argument must be a referenced pointer, args + i has
> > >                  * been verified to be a pointer (after skipping modifiers).
> > >                  * PTR_TO_CTX is ok without having non-zero ref_obj_id.
> > > +                *
> > > +                * All object pointers must be refcounted, other than:
> > > +                * - PTR_TO_CTX
> > > +                * - Trusted pointers (i.e. pointers with no type modifiers)
> > >                  */
> > > -               if (is_kfunc && trusted_args && (obj_ptr && reg->type != PTR_TO_CTX) && !reg->ref_obj_id) {
> > > +               if (is_kfunc &&
> > > +                   trusted_args &&
> > > +                   obj_ptr &&
> > > +                   base_type(reg->type) != PTR_TO_CTX &&
> > > +                   type_flag(reg->type) &&
> > > +                   !reg->ref_obj_id) {
> > >                         bpf_log(log, "R%d must be referenced\n", regno);
> > >                         return -EINVAL;
> > >                 }
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index 6f6d2d511c06..d16a08ca507b 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -581,6 +581,8 @@ static const char *reg_type_str(struct bpf_verifier_env *env,
> > >                 strncpy(prefix, "user_", 32);
> > >         if (type & MEM_PERCPU)
> > >                 strncpy(prefix, "percpu_", 32);
> > > +       if (type & PTR_NESTED)
> > > +               strncpy(prefix, "nested_", 32);
> > >         if (type & PTR_UNTRUSTED)
> > >                 strncpy(prefix, "untrusted_", 32);
> > >
> >
> > Since these are no longer exclusive, the code needs to be updated to
> > append strings to the prefix buffer.
> > Maybe just using snprintf with %s%s%s.. would be better, passing ""
> > when !(type & flag).
>
> Sure, I can make that change. We'll have to increase the size of the
> prefix string on the stack, but that's hardly problematic as these
> strings are not terribly large.
>
> > > @@ -4558,6 +4560,9 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
> > >         if (type_flag(reg->type) & PTR_UNTRUSTED)
> > >                 flag |= PTR_UNTRUSTED;
> > >
> > > +       /* All pointers obtained by walking a struct are nested. */
> > > +       flag |= PTR_NESTED;
> > > +
> >
> > Instead of PTR_NESTED, how about PTR_WALK?
>
> I don't have a strong preference between either, though I would prefer
> PTR_WALKED if we go with the latter. Does that work for you?
>

Yes, I just think PTR_NESTED is a bit misleading: the pointer isn't nested
within the old object, we loaded it from that object. The name should just
indicate that the pointer came from a walk of a trusted PTR_TO_BTF_ID.

> > > [...]
> > > @@ -5694,7 +5699,12 @@ static const struct bpf_reg_types scalar_types = { .types = { SCALAR_VALUE } };
> > >  static const struct bpf_reg_types context_types = { .types = { PTR_TO_CTX } };
> > >  static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM | MEM_ALLOC } };
> > >  static const struct bpf_reg_types const_map_ptr_types = { .types = { CONST_PTR_TO_MAP } };
> > > -static const struct bpf_reg_types btf_ptr_types = { .types = { PTR_TO_BTF_ID } };
> > > +static const struct bpf_reg_types btf_ptr_types = {
> > > +       .types = {
> > > +               PTR_TO_BTF_ID,
> > > +               PTR_TO_BTF_ID | PTR_NESTED
> > > +       },
> > > +};
> >
> > CI fails, two of those failures are from not updating
> > check_func_arg_reg_off for PTR_TO_BTF_ID | PTR_WALK, and the other one
>
> Gah, I didn't think it was necessary for this case as it's not required
> for btf_check_func_arg_match(), which will eventually just fail in the
> following check:
>
> 	if (!btf_type_is_struct(ref_t)) {
> 		bpf_log(log, "kernel function %s args#%d pointer type %s %s is not supported\n",
> 			func_name, i, btf_type_str(ref_t),
> 			ref_tname);
> 		return -EINVAL;
> 	}

Why would it fail there? It will still be a struct type.
I think you misunderstand this a bit.

When you have task from tracing ctx arg:
r1 = ctx;
r1 = *(r1 + ...); // PTR_TO_BTF_ID, task_struct, off=0
// r1 = task->next
r1 = *(r1 + offsetof(task_struct, next)); // PTR_TO_BTF_ID | PTR_WALKED, task_struct, off = 0

We loaded a pointer from task_struct into r1.
Now r1 still points to a task_struct, so that check above won't fail for r1.

>
> Note that we also don't include PTR_TO_BTF_ID | PTR_UNTRUSTED here. The
> difference for PTR_TO_BTF_ID | PTR_WALK(ED) is of course that we also need to
> allow it to work properly for normal helper calls, so I'll make that change.
> Thanks for pointing it out. In general, the whole dance between register base
> types + modifiers sometimes feels like a mine field...
>

Yes, I don't like how it's growing and being mixed either. Eventually I think we
should document what combinations are allowed and reject everything else when
initializing reg->type to prevent bugs, but IDK whether something like this
would be accepted.


* Re: [PATCH v5 1/3] bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs
  2022-10-20  5:57       ` Kumar Kartikeya Dwivedi
@ 2022-10-20  6:11         ` David Vernet
  2022-10-20  6:22           ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 14+ messages in thread
From: David Vernet @ 2022-10-20  6:11 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj

On Thu, Oct 20, 2022 at 11:27:49AM +0530, Kumar Kartikeya Dwivedi wrote:
> > Sure, I can make that change. We'll have to increase the size of the
> > prefix string on the stack, but that's hardly problematic as these
> > strings are not terribly large.
> >
> > > > @@ -4558,6 +4560,9 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
> > > >         if (type_flag(reg->type) & PTR_UNTRUSTED)
> > > >                 flag |= PTR_UNTRUSTED;
> > > >
> > > > +       /* All pointers obtained by walking a struct are nested. */
> > > > +       flag |= PTR_NESTED;
> > > > +
> > >
> > > Instead of PTR_NESTED, how about PTR_WALK?
> >
> > I don't have a strong preference between either, though I would prefer
> > PTR_WALKED if we go with the latter. Does that work for you?
> >
> 
> Yes, I just think PTR_NESTED is a bit misleading, it's not nested within the old
> object, we loaded a pointer from it, it should just indicate that the pointer
> came from a walk of a trusted PTR_TO_BTF_ID.

Ok, we'll go with PTR_WALKED.

> > > > [...]
> > > > @@ -5694,7 +5699,12 @@ static const struct bpf_reg_types scalar_types = { .types = { SCALAR_VALUE } };
> > > >  static const struct bpf_reg_types context_types = { .types = { PTR_TO_CTX } };
> > > >  static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM | MEM_ALLOC } };
> > > >  static const struct bpf_reg_types const_map_ptr_types = { .types = { CONST_PTR_TO_MAP } };
> > > > -static const struct bpf_reg_types btf_ptr_types = { .types = { PTR_TO_BTF_ID } };
> > > > +static const struct bpf_reg_types btf_ptr_types = {
> > > > +       .types = {
> > > > +               PTR_TO_BTF_ID,
> > > > +               PTR_TO_BTF_ID | PTR_NESTED
> > > > +       },
> > > > +};
> > >
> > > CI fails, two of those failures are from not updating
> > > check_func_arg_reg_off for PTR_TO_BTF_ID | PTR_WALK, and the other one
> >
> > Gah, I didn't think it was necessary for this case as it's not required
> > for btf_check_func_arg_match(), which will eventually just fail in the
> > following check:
> >
> > 	if (!btf_type_is_struct(ref_t)) {
> > 		bpf_log(log, "kernel function %s args#%d pointer type %s %s is not supported\n",
> > 			func_name, i, btf_type_str(ref_t),
> > 			ref_tname);
> > 		return -EINVAL;
> > 	}
> 
> Why would it fail there? It will still be a struct type.
> I think you misunderstand this a bit.

Apologies, as mentioned below I pasted the wrong if-check by accident.
If I had actually meant to paste that one, then saying I "misunderstand
this a bit" would have been a very generous understatement :-)

> When you have task from tracing ctx arg:
> r1 = ctx;
> r1 = *(r1 + ...); // PTR_TO_BTF_ID, task_struct, off=0
> // r1 = task->next
> r1 = *(r1 + offsetof(task_struct, next)); // PTR_TO_BTF_ID | PTR_WALKED, task_struct, off = 0
> 
> We loaded a pointer from task_struct into r1.
> Now r1 still points to a task_struct, so that check above won't fail for r1.

I meant to paste the if-condition _above_ that one. This is the if-check
we'll fail due to the presence of a type modifier (PTR_WALKED):

	} else if (is_kfunc && (reg->type == PTR_TO_BTF_ID ||
		   (reg2btf_ids[base_type(reg->type)] && !type_flag(reg->type)))) {
		const struct btf_type *reg_ref_t;
		const struct btf *reg_btf;
		const char *reg_ref_tname;
		u32 reg_ref_id;

So we'll never even get to the if check I originally pasted because
reg->type == PTR_TO_BTF_ID will fail for a PTR_WALKED reg. And then
below we'll eventually fail later on here:

	/* Permit pointer to mem, but only when argument
	 * type is pointer to scalar, or struct composed
	 * (recursively) of scalars.
	 * When arg_mem_size is true, the pointer can be
	 * void *.
	 * Also permit initialized local dynamic pointers.
	 */
	if (!btf_type_is_scalar(ref_t) &&
	    !__btf_type_is_scalar_struct(log, btf, ref_t, 0) &&
	    !arg_dynptr &&
	    (arg_mem_size ? !btf_type_is_void(ref_t) : 1)) {
		bpf_log(log,
			"arg#%d pointer type %s %s must point to %sscalar, or struct with scalar\n",
			i, btf_type_str(ref_t), ref_tname, arg_mem_size ? "void, " : "");
		return -EINVAL;
	}

Appreciate the explanation, sorry to have made you type it.

> > Note that we also don't include PTR_TO_BTF_ID | PTR_UNTRUSTED here. The
> > difference for PTR_TO_BTF_ID | PTR_WALK(ED) is of course that we also need to
> > allow it to work properly for normal helper calls, so I'll make that change.
> > Thanks for pointing it out. In general, the whole dance between register base
> > types + modifiers sometimes feels like a mine field...
> >
> 
> Yes, I don't like how it's growing and being mixed either. Eventually I think we
> should document what combinations are allowed and reject everything else when
> initializing reg->type to prevent bugs, but IDK whether something like this
> would be accepted.

That seems like a pretty sane idea. A project for another day...


* Re: [PATCH v5 3/3] bpf/selftests: Add selftests for new task kfuncs
  2022-10-19 17:39   ` David Vernet
@ 2022-10-20  6:19     ` Kumar Kartikeya Dwivedi
  2022-10-20  6:33       ` David Vernet
  0 siblings, 1 reply; 14+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2022-10-20  6:19 UTC (permalink / raw)
  To: David Vernet
  Cc: bpf, ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj

On Wed, Oct 19, 2022 at 11:09:05PM IST, David Vernet wrote:
> > On Sat, 15 Oct 2022 at 01:45, David Vernet <void@manifault.com> wrote:
> > >
> > > A previous change added a series of kfuncs for storing struct
> > > task_struct objects as referenced kptrs. This patch adds a new
> > > task_kfunc test suite for validating their expected behavior.
> > >
> > > Signed-off-by: David Vernet <void@manifault.com>
> > > ---
> > > [...]
> > > +
> > > +SEC("tp_btf/task_newtask")
> > > +int BPF_PROG(task_kfunc_acquire_trusted_nested, struct task_struct *task, u64 clone_flags)
> > > +{
> > > +       struct task_struct *acquired;
> > > +
> > > +       if (!is_test_kfunc_task())
> > > +               return 0;
> > > +
> > > +       /* Can't invoke bpf_task_acquire() on a trusted pointer at a nonzero offset. */
> > > +       acquired = bpf_task_acquire(task->last_wakee);
> >
> > The comment is incorrect, that would be &task->last_wakee instead,
> > this is PTR_TO_BTF_ID | PTR_NESTED.
>
> Well, it's a nonzero offset from task. But yes, to your point, it's a
> misleading comment because the offset is 0 in the verifier. I'll

The load insn has a non-zero offset, but not the destination reg.

What you did was:
r1 = rX + offsetof(task_struct, last_wakee); // r1 == rX + off
r1 = *(r1); // r1 == PTR_TO_BTF_ID of task_struct, off = 0

Embedded structs are different,
&file->f_path means non-zero offset into PTR_TO_BTF_ID of struct file

> rephrase this to reflect that it's a nested pointer (or a walked
> pointer, whatever nomenclature we end up going with).
>
> > > +       if (!acquired)
> > > +               return 0;
> > > +       bpf_task_release(acquired);
> > > +
> > > +       return 0;
> > > +}
> > > +
> > > [...]
> > > +
> > > +static int test_acquire_release(struct task_struct *task)
> > > +{
> > > +       struct task_struct *acquired;
> > > +
> > > +       acquired = bpf_task_acquire(task);
> >
> > Unfortunately a side effect of this change is that now since
> > PTR_TO_BTF_ID without ref_obj_id is considered trusted, the bpf_ct_*
> > functions would begin working with tp_btf args. That probably needs
> > be fixed so that they reject them (ideally with a failing test case to
> > make sure it doesn't resurface), probably with a new suffix __ref/or
> > __owned as added here [0].
> >
> > Alexei, since you've suggested avoiding adding that suffix, do you see
> > any other way out here?
> > It's questionable whether bpf_ct_set_timeout/status should work for CT
> > not owned by the BPF program.
> >
> >   [0]: https://lore.kernel.org/bpf/dfb859a6b76a9234baa194e795ae89cb7ca5694b.1662383493.git.lorenzo@kerne
>
> Ah, yeah, it makes sense that some kfuncs really should only ever be
> passed an object if the program owns a reference on it. Specifically for
> e.g. bpf_ct_set_timeout/status() as you point out, which should only be
> passed a struct nf_conn__init that was allocated by bpf_skb_ct_alloc().
>
> It'd be nice if we could just add another flag like KF_REFERENCED_ARGS
> or KF_OWNED_ARGS, which would allow a subset of arguments affored by
> KF_TRUSTED_ARGS, only those with ref_obj_id > 0. That approach wouldn't
> allow the flexibility of having per-argument specifications as your
> proposal to use __ref or __owned suffixes on the names, but that already
> applies to KF_TRUSTED_ARGS as well.
>
> Personally I'm in agreement with Alexei that it's not a user-friendly
> API to use suffixes in the name like this. If we want to allow kfunc
> authors to have per-argument specifiers, using compiler attributes
> and/or some kind of tagging is probably the way to do it?

Sadly GCC doesn't support BTF tags. So this is the next best tagging approach.

There was also another horrendous proposal from me to add flags to each argument:
https://gist.github.com/kkdwivedi/7839cc9e4f002acc3e15350b1b86c88c#file-kfunc-arg-patch-L137

>
> My proposal for now is to add a new KF_OWNED_ARGS flag, and to very
> clearly document exactly what that and KF_TRUSTED_ARGS implies for
> kfuncs. Later on, we could explore solutions for having per-arg
> specifiers. What do you and Alexei think?

Based on your proposal above:

For KF_OWNED_ARGS, any PTR_TO_BTF_ID should have a non-zero ref_obj_id; for
KF_TRUSTED_ARGS there will be no such condition. Otherwise they will be the
same. Then switch all the CT helpers to KF_OWNED_ARGS.

This should work fine.


* Re: [PATCH v5 1/3] bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs
  2022-10-20  6:11         ` David Vernet
@ 2022-10-20  6:22           ` Kumar Kartikeya Dwivedi
  2022-10-20  6:45             ` David Vernet
  0 siblings, 1 reply; 14+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2022-10-20  6:22 UTC (permalink / raw)
  To: David Vernet
  Cc: bpf, ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj

On Thu, Oct 20, 2022 at 11:41:41AM IST, David Vernet wrote:
> [...]
> Apologies, as mentioned below I pasted the wrong if-check by accident.
> If I had actually meant to paste that one, then saying I "misunderstand
> this a bit" would have been a very generous understatement :-)
>
> > When you have task from tracing ctx arg:
> > r1 = ctx;
> > r1 = *(r1 + ...); // PTR_TO_BTF_ID, task_struct, off=0
> > // r1 = task->next
> > r1 = *(r1 + offsetof(task_struct, next)); // PTR_TO_BTF_ID | PTR_WALKED, task_struct, off = 0
> >
> > We loaded a pointer from task_struct into r1.
> > Now r1 still points to a task_struct, so that check above won't fail for r1.
>
> I meant to paste the if-condition _above_ that one. This is the if-check
> we'll fail due to the presence of a type modifier (PTR_WALKED):
>
> 	} else if (is_kfunc && (reg->type == PTR_TO_BTF_ID ||
> 		   (reg2btf_ids[base_type(reg->type)] && !type_flag(reg->type)))) {
> 		const struct btf_type *reg_ref_t;
> 		const struct btf *reg_btf;
> 		const char *reg_ref_tname;
> 		u32 reg_ref_id;
>
> So we'll never even get to the if check I originally pasted because
> reg->type == PTR_TO_BTF_ID will fail for a PTR_WALKED reg. And then
> below we'll eventually fail later on here:
>
> 	/* Permit pointer to mem, but only when argument
> 	 * type is pointer to scalar, or struct composed
> 	 * (recursively) of scalars.
> 	 * When arg_mem_size is true, the pointer can be
> 	 * void *.
> 	 * Also permit initialized local dynamic pointers.
> 	 */
> 	if (!btf_type_is_scalar(ref_t) &&
> 	    !__btf_type_is_scalar_struct(log, btf, ref_t, 0) &&
> 	    !arg_dynptr &&
> 	    (arg_mem_size ? !btf_type_is_void(ref_t) : 1)) {
> 		bpf_log(log,
> 			"arg#%d pointer type %s %s must point to %sscalar, or struct with scalar\n",
> 			i, btf_type_str(ref_t), ref_tname, arg_mem_size ? "void, " : "");
> 		return -EINVAL;
> 	}
>
> Appreciate the explanation, sorry to have made you type it.
>

Ah, I see. Your analysis is right, but the error in CI comes from the
check_func_arg_reg_off() invocation in check_helper_call(); the code you
quoted is for kfuncs.

Since you have this to preserve backwards compat:
+static const struct bpf_reg_types btf_ptr_types = {
+	.types = {
+		PTR_TO_BTF_ID,
+		PTR_TO_BTF_ID | PTR_NESTED
+	},
+};

It allows passing those with PTR_NESTED to stable helpers.


* Re: [PATCH v5 3/3] bpf/selftests: Add selftests for new task kfuncs
  2022-10-20  6:19     ` Kumar Kartikeya Dwivedi
@ 2022-10-20  6:33       ` David Vernet
  0 siblings, 0 replies; 14+ messages in thread
From: David Vernet @ 2022-10-20  6:33 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj

On Thu, Oct 20, 2022 at 11:49:03AM +0530, Kumar Kartikeya Dwivedi wrote:
> On Wed, Oct 19, 2022 at 11:09:05PM IST, David Vernet wrote:
> > > On Sat, 15 Oct 2022 at 01:45, David Vernet <void@manifault.com> wrote:
> > > >
> > > > A previous change added a series of kfuncs for storing struct
> > > > task_struct objects as referenced kptrs. This patch adds a new
> > > > task_kfunc test suite for validating their expected behavior.
> > > >
> > > > Signed-off-by: David Vernet <void@manifault.com>
> > > > ---
> > > > [...]
> > > > +
> > > > +SEC("tp_btf/task_newtask")
> > > > +int BPF_PROG(task_kfunc_acquire_trusted_nested, struct task_struct *task, u64 clone_flags)
> > > > +{
> > > > +       struct task_struct *acquired;
> > > > +
> > > > +       if (!is_test_kfunc_task())
> > > > +               return 0;
> > > > +
> > > > +       /* Can't invoke bpf_task_acquire() on a trusted pointer at a nonzero offset. */
> > > > +       acquired = bpf_task_acquire(task->last_wakee);
> > >
> > > The comment is incorrect; that would be &task->last_wakee instead.
> > > This is PTR_TO_BTF_ID | PTR_NESTED.
> >
> > Well, it's a nonzero offset from task. But yes, to your point, it's a
> > misleading comment because the offset is 0 in the verifier. I'll
> 
> The load insn has a non-zero offset, but not the destination reg.

Yeah, this is what I meant by hand-wavily saying "0 in the verifier". I
agree that the comment is incorrect as stated, I'll fix it in the next
revision per your suggestion.

> What you did was:
> r1 = rX + offsetof(task_struct, last_wakee); // r1 == rX + off
> r1 = *(r1); // r1 == PTR_TO_BTF_ID of task_struct, off = 0
> 
> Embedded structs are different,
> &file->f_path means non-zero offset into PTR_TO_BTF_ID of struct file
> 
> > rephrase this to reflect that it's a nested pointer (or a walked
> > pointer, whatever nomenclature we end up going with).
> >
> > > > +       if (!acquired)
> > > > +               return 0;
> > > > +       bpf_task_release(acquired);
> > > > +
> > > > +       return 0;
> > > > +}
> > > > +
> > > > [...]
> > > > +
> > > > +static int test_acquire_release(struct task_struct *task)
> > > > +{
> > > > +       struct task_struct *acquired;
> > > > +
> > > > +       acquired = bpf_task_acquire(task);
> > >
> > > Unfortunately a side effect of this change is that now since
> > > PTR_TO_BTF_ID without ref_obj_id is considered trusted, the bpf_ct_*
> > > functions would begin working with tp_btf args. That probably needs to
> > > be fixed so that they reject them (ideally with a failing test case to
> > > make sure it doesn't resurface), likely with a new __ref or __owned
> > > suffix as added here [0].
> > >
> > > Alexei, since you've suggested avoiding adding that suffix, do you see
> > > any other way out here?
> > > It's questionable whether bpf_ct_set_timeout/status should work for CT
> > > not owned by the BPF program.
> > >
> > >   [0]: https://lore.kernel.org/bpf/dfb859a6b76a9234baa194e795ae89cb7ca5694b.1662383493.git.lorenzo@kerne
> >
> > Ah, yeah, it makes sense that some kfuncs really should only ever be
> > passed an object if the program owns a reference on it. Specifically for
> > e.g. bpf_ct_set_timeout/status() as you point out, which should only be
> > passed a struct nf_conn__init that was allocated by bpf_skb_ct_alloc().
> >
> > It'd be nice if we could just add another flag like KF_REFERENCED_ARGS
> > or KF_OWNED_ARGS, which would allow a subset of arguments afforded by
> > KF_TRUSTED_ARGS, only those with ref_obj_id > 0. That approach wouldn't
> > allow the flexibility of having per-argument specifications as your
> > proposal to use __ref or __owned suffixes on the names, but that already
> > applies to KF_TRUSTED_ARGS as well.
> >
> > Personally I'm in agreement with Alexei that it's not a user friendly
> > API to use suffixes in the name like this. If we want to allow kfunc
> > authors to have per-argument specifiers, using compiler attributes
> > and/or some kind of tagging is probably the way to do it?
> 
> Sadly GCC doesn't support BTF tags. So this is the next best tagging approach.
> 
> There was also another horrendous proposal from me to add flags to each argument:
> https://gist.github.com/kkdwivedi/7839cc9e4f002acc3e15350b1b86c88c#file-kfunc-arg-patch-L137

I feel like I must be missing something given that you said this was
horrendous, but I actually don't hate this. This is pretty similar to
what we do for helpers anyways, no? I certainly prefer it over the
suffix naming approach. IMO the problem with that is it kind of requires
users to dive into the verifier to understand how to implement kfuncs.
That also reminds me that in the next revision, I'll also update the
documentation for KF_TRUSTED_ARGS (and KF_OWNED_ARGS) to reflect the new
state of things.

> > My proposal for now is to add a new KF_OWNED_ARGS flag, and to very
> > clearly document exactly what that and KF_TRUSTED_ARGS implies for
> > kfuncs. Later on, we could explore solutions for having per-arg
> > specifiers. What do you and Alexei think?
> 
> Based on your proposal above:
> 
> For KF_OWNED_ARGS, any PTR_TO_BTF_ID should have non-zero ref_obj_id, for
> KF_TRUSTED_ARGS there will be no such condition. Otherwise they will be the
> same. Then switch all CT helpers to KF_OWNED_ARGS.

Exactly

> This should work fine.

Great, I'll get to work on this. Unless Alexei or anyone else objects?


* Re: [PATCH v5 1/3] bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs
  2022-10-20  6:22           ` Kumar Kartikeya Dwivedi
@ 2022-10-20  6:45             ` David Vernet
  0 siblings, 0 replies; 14+ messages in thread
From: David Vernet @ 2022-10-20  6:45 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, ast, daniel, andrii, martin.lau, song, yhs, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, linux-kernel, kernel-team, tj

On Thu, Oct 20, 2022 at 11:52:29AM +0530, Kumar Kartikeya Dwivedi wrote:
> On Thu, Oct 20, 2022 at 11:41:41AM IST, David Vernet wrote:
> > [...]
> > Apologies, as mentioned below I pasted the wrong if-check by accident.
> > If I had actually meant to paste that one, then saying I "misunderstand
> > this a bit" would have been a very generous understatement :-)
> >
> > > When you have task from tracing ctx arg:
> > > r1 = ctx;
> > > r1 = *(r1 + ...); // PTR_TO_BTF_ID, task_struct, off=0
> > > // r1 = task->next
> > > r1 = *(r1 + offsetof(task_struct, next)); // PTR_TO_BTF_ID | PTR_WALKED, task_struct, off = 0
> > >
> > > We loaded a pointer from task_struct into r1.
> > > Now r1 still points to a task_struct, so that check above won't fail for r1.
> >
> > I meant to paste the if-condition _above_ that one. This is the if-check
> > we'll fail due to the presence of a type modifier (PTR_WALKED):
> >
> > 	} else if (is_kfunc && (reg->type == PTR_TO_BTF_ID ||
> > 		   (reg2btf_ids[base_type(reg->type)] && !type_flag(reg->type)))) {
> > 		const struct btf_type *reg_ref_t;
> > 		const struct btf *reg_btf;
> > 		const char *reg_ref_tname;
> > 		u32 reg_ref_id;
> >
> > So we'll never even get to the if check I originally pasted because
> > reg->type == PTR_TO_BTF_ID will fail for a PTR_WALKED reg. And then
> > below we'll eventually fail later on here:
> >
> > 	/* Permit pointer to mem, but only when argument
> > 	 * type is pointer to scalar, or struct composed
> > 	 * (recursively) of scalars.
> > 	 * When arg_mem_size is true, the pointer can be
> > 	 * void *.
> > 	 * Also permit initialized local dynamic pointers.
> > 	 */
> > 	if (!btf_type_is_scalar(ref_t) &&
> > 	    !__btf_type_is_scalar_struct(log, btf, ref_t, 0) &&
> > 	    !arg_dynptr &&
> > 	    (arg_mem_size ? !btf_type_is_void(ref_t) : 1)) {
> > 		bpf_log(log,
> > 			"arg#%d pointer type %s %s must point to %sscalar, or struct with scalar\n",
> > 			i, btf_type_str(ref_t), ref_tname, arg_mem_size ? "void, " : "");
> > 		return -EINVAL;
> > 	}
> >
> > Appreciate the explanation, sorry to have made you type it.
> >
> 
> Ah, I see. Your analysis is right, but the error in CI comes from the
> check_func_arg_reg_off() invocation in check_helper_call(); the code you
> quoted is for kfuncs.

Yeah, in my defense, if you look back at [0] where I fat-fingered the
wrong if statement, I did say that I missed the case for helpers
specifically:

> Note that we also don't include PTR_TO_BTF_ID | PTR_UNTRUSTED here. The
> difference for PTR_TO_BTF_ID | PTR_WALK(ED) is of course that we also need to
> allow it to work properly for normal helper calls, so I'll make that change.

[0]: https://lore.kernel.org/all/Y1BR5c6W4tgljA8q@maniforge.dhcp.thefacebook.com/

Anyways, I think we're on the same page. I already have a local revision
that fixes this.

> Since you have this to preserve backwards compat:
> +static const struct bpf_reg_types btf_ptr_types = {
> +	.types = {
> +		PTR_TO_BTF_ID,
> +		PTR_TO_BTF_ID | PTR_NESTED
> +	},
> +};
> 
> It allows passing those with PTR_NESTED to stable helpers.

I'd need to look over all of this code again to give useful suggestions
here (it's very difficult to keep it all paged in, there's just so much
context), but it'd be nice if we could somehow refactor some of this so
that check_helper_call() and check_kfunc_call() shared more code.
Obviously out of scope for this patch set.


end of thread, other threads:[~2022-10-20  6:46 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-10-14 21:21 [PATCH v5 0/3] Support storing struct task_struct objects as kptrs David Vernet
2022-10-14 21:21 ` [PATCH v5 1/3] bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs David Vernet
2022-10-18  1:32   ` Kumar Kartikeya Dwivedi
2022-10-19 19:37     ` David Vernet
2022-10-20  5:57       ` Kumar Kartikeya Dwivedi
2022-10-20  6:11         ` David Vernet
2022-10-20  6:22           ` Kumar Kartikeya Dwivedi
2022-10-20  6:45             ` David Vernet
2022-10-14 21:21 ` [PATCH v5 2/3] bpf: Add kfuncs for storing struct task_struct * as a kptr David Vernet
2022-10-18  1:55   ` Kumar Kartikeya Dwivedi
2022-10-14 21:21 ` [PATCH v5 3/3] bpf/selftests: Add selftests for new task kfuncs David Vernet
2022-10-19 17:39   ` David Vernet
2022-10-20  6:19     ` Kumar Kartikeya Dwivedi
2022-10-20  6:33       ` David Vernet
