* [PATCH bpf-next v2 0/4] Follow ups for kptr series
@ 2022-05-11 19:46 Kumar Kartikeya Dwivedi
  2022-05-11 19:46 ` [PATCH bpf-next v2 1/4] bpf: Fix sparse warning for bpf_kptr_xchg_proto Kumar Kartikeya Dwivedi
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2022-05-11 19:46 UTC (permalink / raw)
  To: bpf; +Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann

Fix a build time warning, and address comments from Alexei on the merged
version [0].

  [0]: https://lore.kernel.org/bpf/20220424214901.2743946-1-memxor@gmail.com

Changelog:
----------
v1 -> v2
v1: https://lore.kernel.org/bpf/20220510211727.575686-1-memxor@gmail.com

 * Add Fixes tag to patch 1
 * Fix test_progs-noalu32 failure in CI due to different alloc_insn (Alexei)
 * Remove per-CPU struct, use global struct (Alexei)

Kumar Kartikeya Dwivedi (4):
  bpf: Fix sparse warning for bpf_kptr_xchg_proto
  bpf: Prepare prog_test_struct kfuncs for runtime tests
  selftests/bpf: Add negative C tests for kptrs
  selftests/bpf: Add tests for kptr_ref refcounting

 include/linux/bpf.h                           |   1 +
 net/bpf/test_run.c                            |  23 +-
 .../selftests/bpf/prog_tests/map_kptr.c       | 108 ++++-
 tools/testing/selftests/bpf/progs/map_kptr.c  | 106 ++++-
 .../selftests/bpf/progs/map_kptr_fail.c       | 418 ++++++++++++++++++
 .../testing/selftests/bpf/verifier/map_kptr.c |   4 +-
 6 files changed, 649 insertions(+), 11 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/map_kptr_fail.c

-- 
2.35.1


* [PATCH bpf-next v2 1/4] bpf: Fix sparse warning for bpf_kptr_xchg_proto
  2022-05-11 19:46 [PATCH bpf-next v2 0/4] Follow ups for kptr series Kumar Kartikeya Dwivedi
@ 2022-05-11 19:46 ` Kumar Kartikeya Dwivedi
  2022-05-11 19:46 ` [PATCH bpf-next v2 2/4] bpf: Prepare prog_test_struct kfuncs for runtime tests Kumar Kartikeya Dwivedi
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2022-05-11 19:46 UTC (permalink / raw)
  To: bpf
  Cc: kernel test robot, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann

The kernel test robot complained about a missing static storage class
annotation for the bpf_kptr_xchg_proto variable.

sparse: symbol 'bpf_kptr_xchg_proto' was not declared. Should it be static?

This is caused by the missing extern declaration in the header. Add it
to suppress the sparse warning.

Fixes: c0a5a21c25f3 ("bpf: Allow storing referenced kptr in map")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpf.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index aba7ded56436..3ded8711457f 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2246,6 +2246,7 @@ extern const struct bpf_func_proto bpf_find_vma_proto;
 extern const struct bpf_func_proto bpf_loop_proto;
 extern const struct bpf_func_proto bpf_strncmp_proto;
 extern const struct bpf_func_proto bpf_copy_from_user_task_proto;
+extern const struct bpf_func_proto bpf_kptr_xchg_proto;
 
 const struct bpf_func_proto *tracing_prog_func_proto(
   enum bpf_func_id func_id, const struct bpf_prog *prog);
-- 
2.35.1


* [PATCH bpf-next v2 2/4] bpf: Prepare prog_test_struct kfuncs for runtime tests
  2022-05-11 19:46 [PATCH bpf-next v2 0/4] Follow ups for kptr series Kumar Kartikeya Dwivedi
  2022-05-11 19:46 ` [PATCH bpf-next v2 1/4] bpf: Fix sparse warning for bpf_kptr_xchg_proto Kumar Kartikeya Dwivedi
@ 2022-05-11 19:46 ` Kumar Kartikeya Dwivedi
  2022-05-11 19:46 ` [PATCH bpf-next v2 3/4] selftests/bpf: Add negative C tests for kptrs Kumar Kartikeya Dwivedi
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2022-05-11 19:46 UTC (permalink / raw)
  To: bpf; +Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann

In an effort to actually test the refcounting logic at runtime, add a
refcount_t member to prog_test_ref_kfunc and use it in selftests to
verify and test the whole logic more exhaustively.

The kfunc calls for prog_test_member do not require runtime refcounting,
as they are only used in verifier selftests, not during runtime
execution. Hence, their implementation now has a WARN_ON_ONCE, since
they are not meant to be reachable code at runtime and are strictly
used in tests that trigger failure cases in the verifier. The exception
is bpf_kfunc_call_memb_release, which is called from the map free path,
since prog_test_member is embedded in the map value for some verifier
tests; the WARN_ON_ONCE is therefore skipped for it.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 net/bpf/test_run.c                            | 23 ++++++++++++++-----
 .../testing/selftests/bpf/verifier/map_kptr.c |  4 ++--
 2 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 7a1579c91432..4d08cca771c7 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -564,31 +564,36 @@ struct prog_test_ref_kfunc {
 	int b;
 	struct prog_test_member memb;
 	struct prog_test_ref_kfunc *next;
+	refcount_t cnt;
 };
 
 static struct prog_test_ref_kfunc prog_test_struct = {
 	.a = 42,
 	.b = 108,
 	.next = &prog_test_struct,
+	.cnt = REFCOUNT_INIT(1),
 };
 
 noinline struct prog_test_ref_kfunc *
 bpf_kfunc_call_test_acquire(unsigned long *scalar_ptr)
 {
-	/* randomly return NULL */
-	if (get_jiffies_64() % 2)
-		return NULL;
+	refcount_inc(&prog_test_struct.cnt);
 	return &prog_test_struct;
 }
 
 noinline struct prog_test_member *
 bpf_kfunc_call_memb_acquire(void)
 {
-	return &prog_test_struct.memb;
+	WARN_ON_ONCE(1);
+	return NULL;
 }
 
 noinline void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p)
 {
+	if (!p)
+		return;
+
+	refcount_dec(&p->cnt);
 }
 
 noinline void bpf_kfunc_call_memb_release(struct prog_test_member *p)
@@ -597,12 +602,18 @@ noinline void bpf_kfunc_call_memb_release(struct prog_test_member *p)
 
 noinline void bpf_kfunc_call_memb1_release(struct prog_test_member1 *p)
 {
+	WARN_ON_ONCE(1);
 }
 
 noinline struct prog_test_ref_kfunc *
-bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **p, int a, int b)
+bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **pp, int a, int b)
 {
-	return &prog_test_struct;
+	struct prog_test_ref_kfunc *p = READ_ONCE(*pp);
+
+	if (!p)
+		return NULL;
+	refcount_inc(&p->cnt);
+	return p;
 }
 
 struct prog_test_pass1 {
diff --git a/tools/testing/selftests/bpf/verifier/map_kptr.c b/tools/testing/selftests/bpf/verifier/map_kptr.c
index 9113834640e6..6914904344c0 100644
--- a/tools/testing/selftests/bpf/verifier/map_kptr.c
+++ b/tools/testing/selftests/bpf/verifier/map_kptr.c
@@ -212,13 +212,13 @@
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 24),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 32),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.fixup_map_kptr = { 1 },
 	.result = REJECT,
-	.errstr = "access beyond struct prog_test_ref_kfunc at off 24 size 8",
+	.errstr = "access beyond struct prog_test_ref_kfunc at off 32 size 8",
 },
 {
 	"map_kptr: unref: inherit PTR_UNTRUSTED on struct walk",
-- 
2.35.1


* [PATCH bpf-next v2 3/4] selftests/bpf: Add negative C tests for kptrs
  2022-05-11 19:46 [PATCH bpf-next v2 0/4] Follow ups for kptr series Kumar Kartikeya Dwivedi
  2022-05-11 19:46 ` [PATCH bpf-next v2 1/4] bpf: Fix sparse warning for bpf_kptr_xchg_proto Kumar Kartikeya Dwivedi
  2022-05-11 19:46 ` [PATCH bpf-next v2 2/4] bpf: Prepare prog_test_struct kfuncs for runtime tests Kumar Kartikeya Dwivedi
@ 2022-05-11 19:46 ` Kumar Kartikeya Dwivedi
  2022-05-12  3:08   ` Andrii Nakryiko
  2022-05-11 19:46 ` [PATCH bpf-next v2 4/4] selftests/bpf: Add tests for kptr_ref refcounting Kumar Kartikeya Dwivedi
  2022-05-12  0:20 ` [PATCH bpf-next v2 0/4] Follow ups for kptr series patchwork-bot+netdevbpf
  4 siblings, 1 reply; 9+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2022-05-11 19:46 UTC (permalink / raw)
  To: bpf; +Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann

This uses the newly added SEC("?foo") naming to disable autoload of
programs, and then loads them one by one for the object and verifies
that loading fails and matches the returned error string from verifier.
This is similar to already existing verifier tests but provides coverage
for BPF C.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 .../selftests/bpf/prog_tests/map_kptr.c       |  87 +++-
 .../selftests/bpf/progs/map_kptr_fail.c       | 418 ++++++++++++++++++
 2 files changed, 504 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/progs/map_kptr_fail.c

diff --git a/tools/testing/selftests/bpf/prog_tests/map_kptr.c b/tools/testing/selftests/bpf/prog_tests/map_kptr.c
index 9e2fbda64a65..00819277cb17 100644
--- a/tools/testing/selftests/bpf/prog_tests/map_kptr.c
+++ b/tools/testing/selftests/bpf/prog_tests/map_kptr.c
@@ -2,8 +2,86 @@
 #include <test_progs.h>
 
 #include "map_kptr.skel.h"
+#include "map_kptr_fail.skel.h"
 
-void test_map_kptr(void)
+static char log_buf[1024 * 1024];
+
+struct {
+	const char *prog_name;
+	const char *err_msg;
+} map_kptr_fail_tests[] = {
+	{ "size_not_bpf_dw", "kptr access size must be BPF_DW" },
+	{ "non_const_var_off", "kptr access cannot have variable offset" },
+	{ "non_const_var_off_kptr_xchg", "R1 doesn't have constant offset. kptr has to be" },
+	{ "misaligned_access_write", "kptr access misaligned expected=8 off=7" },
+	{ "misaligned_access_read", "kptr access misaligned expected=8 off=1" },
+	{ "reject_var_off_store", "variable untrusted_ptr_ access var_off=(0x0; 0x1e0)" },
+	{ "reject_bad_type_match", "invalid kptr access, R1 type=untrusted_ptr_prog_test_ref_kfunc" },
+	{ "marked_as_untrusted_or_null", "R1 type=untrusted_ptr_or_null_ expected=percpu_ptr_" },
+	{ "correct_btf_id_check_size", "access beyond struct prog_test_ref_kfunc at off 32 size 4" },
+	{ "inherit_untrusted_on_walk", "R1 type=untrusted_ptr_ expected=percpu_ptr_" },
+	{ "reject_kptr_xchg_on_unref", "off=8 kptr isn't referenced kptr" },
+	{ "reject_kptr_get_no_map_val", "arg#0 expected pointer to map value" },
+	{ "reject_kptr_get_no_null_map_val", "arg#0 expected pointer to map value" },
+	{ "reject_kptr_get_no_kptr", "arg#0 no referenced kptr at map value offset=0" },
+	{ "reject_kptr_get_on_unref", "arg#0 no referenced kptr at map value offset=8" },
+	{ "reject_kptr_get_bad_type_match", "kernel function bpf_kfunc_call_test_kptr_get args#0" },
+	{ "mark_ref_as_untrusted_or_null", "R1 type=untrusted_ptr_or_null_ expected=percpu_ptr_" },
+	{ "reject_untrusted_store_to_ref", "store to referenced kptr disallowed" },
+	{ "reject_bad_type_xchg", "invalid kptr access, R2 type=ptr_prog_test_ref_kfunc expected=ptr_prog_test_member" },
+	{ "reject_untrusted_xchg", "R2 type=untrusted_ptr_ expected=ptr_" },
+	{ "reject_member_of_ref_xchg", "invalid kptr access, R2 type=ptr_prog_test_ref_kfunc" },
+	{ "reject_indirect_helper_access", "kptr cannot be accessed indirectly by helper" },
+	{ "reject_indirect_global_func_access", "kptr cannot be accessed indirectly by helper" },
+	{ "kptr_xchg_ref_state", "Unreleased reference id=5 alloc_insn=" },
+	{ "kptr_get_ref_state", "Unreleased reference id=3 alloc_insn=" },
+};
+
+static void test_map_kptr_fail_prog(const char *prog_name, const char *err_msg)
+{
+	LIBBPF_OPTS(bpf_object_open_opts, opts, .kernel_log_buf = log_buf,
+						.kernel_log_size = sizeof(log_buf),
+						.kernel_log_level = 1);
+	struct map_kptr_fail *skel;
+	struct bpf_program *prog;
+	int ret;
+
+	skel = map_kptr_fail__open_opts(&opts);
+	if (!ASSERT_OK_PTR(skel, "map_kptr_fail__open_opts"))
+		return;
+
+	prog = bpf_object__find_program_by_name(skel->obj, prog_name);
+	if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+		goto end;
+
+	bpf_program__set_autoload(prog, true);
+
+	ret = map_kptr_fail__load(skel);
+	if (!ASSERT_ERR(ret, "map_kptr__load must fail"))
+		goto end;
+
+	if (!ASSERT_OK_PTR(strstr(log_buf, err_msg), "expected error message")) {
+		fprintf(stderr, "Expected: %s\n", err_msg);
+		fprintf(stderr, "Verifier: %s\n", log_buf);
+	}
+
+end:
+	map_kptr_fail__destroy(skel);
+}
+
+static void test_map_kptr_fail(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(map_kptr_fail_tests); i++) {
+		if (!test__start_subtest(map_kptr_fail_tests[i].prog_name))
+			continue;
+		test_map_kptr_fail_prog(map_kptr_fail_tests[i].prog_name,
+					map_kptr_fail_tests[i].err_msg);
+	}
+}
+
+static void test_map_kptr_success(void)
 {
 	struct map_kptr *skel;
 	int key = 0, ret;
@@ -35,3 +113,10 @@ void test_map_kptr(void)
 
 	map_kptr__destroy(skel);
 }
+
+void test_map_kptr(void)
+{
+	if (test__start_subtest("success"))
+		test_map_kptr_success();
+	test_map_kptr_fail();
+}
diff --git a/tools/testing/selftests/bpf/progs/map_kptr_fail.c b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
new file mode 100644
index 000000000000..05e209b1b12a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
@@ -0,0 +1,418 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+
+struct map_value {
+	char buf[8];
+	struct prog_test_ref_kfunc __kptr *unref_ptr;
+	struct prog_test_ref_kfunc __kptr_ref *ref_ptr;
+	struct prog_test_member __kptr_ref *ref_memb_ptr;
+};
+
+struct array_map {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__type(key, int);
+	__type(value, struct map_value);
+	__uint(max_entries, 1);
+} array_map SEC(".maps");
+
+extern struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *sp) __ksym;
+extern struct prog_test_ref_kfunc *
+bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **p, int a, int b) __ksym;
+
+SEC("?tc")
+int size_not_bpf_dw(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	*(u32 *)&v->unref_ptr = 0;
+	return 0;
+}
+
+SEC("?tc")
+int non_const_var_off(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0, id;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	id = ctx->protocol;
+	if (id < 4 || id > 12)
+		return 0;
+	*(u64 *)((void *)v + id) = 0;
+
+	return 0;
+}
+
+SEC("?tc")
+int non_const_var_off_kptr_xchg(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0, id;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	id = ctx->protocol;
+	if (id < 4 || id > 12)
+		return 0;
+	bpf_kptr_xchg((void *)v + id, NULL);
+
+	return 0;
+}
+
+SEC("?tc")
+int misaligned_access_write(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	*(void **)((void *)v + 7) = NULL;
+
+	return 0;
+}
+
+SEC("?tc")
+int misaligned_access_read(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	return *(u64 *)((void *)v + 1);
+}
+
+SEC("?tc")
+int reject_var_off_store(struct __sk_buff *ctx)
+{
+	struct prog_test_ref_kfunc *unref_ptr;
+	struct map_value *v;
+	int key = 0, id;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	unref_ptr = v->unref_ptr;
+	if (!unref_ptr)
+		return 0;
+	id = ctx->protocol;
+	if (id < 4 || id > 12)
+		return 0;
+	unref_ptr += id;
+	v->unref_ptr = unref_ptr;
+
+	return 0;
+}
+
+SEC("?tc")
+int reject_bad_type_match(struct __sk_buff *ctx)
+{
+	struct prog_test_ref_kfunc *unref_ptr;
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	unref_ptr = v->unref_ptr;
+	if (!unref_ptr)
+		return 0;
+	unref_ptr = (void *)unref_ptr + 4;
+	v->unref_ptr = unref_ptr;
+
+	return 0;
+}
+
+SEC("?tc")
+int marked_as_untrusted_or_null(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	bpf_this_cpu_ptr(v->unref_ptr);
+	return 0;
+}
+
+SEC("?tc")
+int correct_btf_id_check_size(struct __sk_buff *ctx)
+{
+	struct prog_test_ref_kfunc *p;
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	p = v->unref_ptr;
+	if (!p)
+		return 0;
+	return *(int *)((void *)p + bpf_core_type_size(struct prog_test_ref_kfunc));
+}
+
+SEC("?tc")
+int inherit_untrusted_on_walk(struct __sk_buff *ctx)
+{
+	struct prog_test_ref_kfunc *unref_ptr;
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	unref_ptr = v->unref_ptr;
+	if (!unref_ptr)
+		return 0;
+	unref_ptr = unref_ptr->next;
+	bpf_this_cpu_ptr(unref_ptr);
+	return 0;
+}
+
+SEC("?tc")
+int reject_kptr_xchg_on_unref(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	bpf_kptr_xchg(&v->unref_ptr, NULL);
+	return 0;
+}
+
+SEC("?tc")
+int reject_kptr_get_no_map_val(struct __sk_buff *ctx)
+{
+	bpf_kfunc_call_test_kptr_get((void *)&ctx, 0, 0);
+	return 0;
+}
+
+SEC("?tc")
+int reject_kptr_get_no_null_map_val(struct __sk_buff *ctx)
+{
+	bpf_kfunc_call_test_kptr_get(bpf_map_lookup_elem(&array_map, &(int){0}), 0, 0);
+	return 0;
+}
+
+SEC("?tc")
+int reject_kptr_get_no_kptr(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	bpf_kfunc_call_test_kptr_get((void *)v, 0, 0);
+	return 0;
+}
+
+SEC("?tc")
+int reject_kptr_get_on_unref(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	bpf_kfunc_call_test_kptr_get(&v->unref_ptr, 0, 0);
+	return 0;
+}
+
+SEC("?tc")
+int reject_kptr_get_bad_type_match(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	bpf_kfunc_call_test_kptr_get((void *)&v->ref_memb_ptr, 0, 0);
+	return 0;
+}
+
+SEC("?tc")
+int mark_ref_as_untrusted_or_null(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	bpf_this_cpu_ptr(v->ref_ptr);
+	return 0;
+}
+
+SEC("?tc")
+int reject_untrusted_store_to_ref(struct __sk_buff *ctx)
+{
+	struct prog_test_ref_kfunc *p;
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	p = v->ref_ptr;
+	if (!p)
+		return 0;
+	/* Checkmate, clang */
+	*(struct prog_test_ref_kfunc * volatile *)&v->ref_ptr = p;
+	return 0;
+}
+
+SEC("?tc")
+int reject_untrusted_xchg(struct __sk_buff *ctx)
+{
+	struct prog_test_ref_kfunc *p;
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	p = v->ref_ptr;
+	if (!p)
+		return 0;
+	bpf_kptr_xchg(&v->ref_ptr, p);
+	return 0;
+}
+
+SEC("?tc")
+int reject_bad_type_xchg(struct __sk_buff *ctx)
+{
+	struct prog_test_ref_kfunc *ref_ptr;
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	ref_ptr = bpf_kfunc_call_test_acquire(&(unsigned long){0});
+	if (!ref_ptr)
+		return 0;
+	bpf_kptr_xchg(&v->ref_memb_ptr, ref_ptr);
+	return 0;
+}
+
+SEC("?tc")
+int reject_member_of_ref_xchg(struct __sk_buff *ctx)
+{
+	struct prog_test_ref_kfunc *ref_ptr;
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	ref_ptr = bpf_kfunc_call_test_acquire(&(unsigned long){0});
+	if (!ref_ptr)
+		return 0;
+	bpf_kptr_xchg(&v->ref_memb_ptr, &ref_ptr->memb);
+	return 0;
+}
+
+SEC("?syscall")
+int reject_indirect_helper_access(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	bpf_get_current_comm(v, sizeof(v->buf) + 1);
+	return 0;
+}
+
+__noinline
+int write_func(int *p)
+{
+	return p ? *p = 42 : 0;
+}
+
+SEC("?tc")
+int reject_indirect_global_func_access(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	return write_func((void *)v + 5);
+}
+
+SEC("?tc")
+int kptr_xchg_ref_state(struct __sk_buff *ctx)
+{
+	struct prog_test_ref_kfunc *p;
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	p = bpf_kfunc_call_test_acquire(&(unsigned long){0});
+	if (!p)
+		return 0;
+	bpf_kptr_xchg(&v->ref_ptr, p);
+	return 0;
+}
+
+SEC("?tc")
+int kptr_get_ref_state(struct __sk_buff *ctx)
+{
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 0;
+
+	bpf_kfunc_call_test_kptr_get(&v->ref_ptr, 0, 0);
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.35.1


* [PATCH bpf-next v2 4/4] selftests/bpf: Add tests for kptr_ref refcounting
  2022-05-11 19:46 [PATCH bpf-next v2 0/4] Follow ups for kptr series Kumar Kartikeya Dwivedi
                   ` (2 preceding siblings ...)
  2022-05-11 19:46 ` [PATCH bpf-next v2 3/4] selftests/bpf: Add negative C tests for kptrs Kumar Kartikeya Dwivedi
@ 2022-05-11 19:46 ` Kumar Kartikeya Dwivedi
  2022-05-12  0:20 ` [PATCH bpf-next v2 0/4] Follow ups for kptr series patchwork-bot+netdevbpf
  4 siblings, 0 replies; 9+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2022-05-11 19:46 UTC (permalink / raw)
  To: bpf; +Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann

Check at runtime how various operations for kptr_ref affect its refcount
and verify against the actual count.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 .../selftests/bpf/prog_tests/map_kptr.c       |  27 ++++-
 tools/testing/selftests/bpf/progs/map_kptr.c  | 106 +++++++++++++++++-
 2 files changed, 128 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/map_kptr.c b/tools/testing/selftests/bpf/prog_tests/map_kptr.c
index 00819277cb17..8142dd9eff4c 100644
--- a/tools/testing/selftests/bpf/prog_tests/map_kptr.c
+++ b/tools/testing/selftests/bpf/prog_tests/map_kptr.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <test_progs.h>
+#include <network_helpers.h>
 
 #include "map_kptr.skel.h"
 #include "map_kptr_fail.skel.h"
@@ -81,8 +82,13 @@ static void test_map_kptr_fail(void)
 	}
 }
 
-static void test_map_kptr_success(void)
+static void test_map_kptr_success(bool test_run)
 {
+	LIBBPF_OPTS(bpf_test_run_opts, opts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
 	struct map_kptr *skel;
 	int key = 0, ret;
 	char buf[24];
@@ -91,6 +97,16 @@ static void test_map_kptr_success(void)
 	if (!ASSERT_OK_PTR(skel, "map_kptr__open_and_load"))
 		return;
 
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref), &opts);
+	ASSERT_OK(ret, "test_map_kptr_ref refcount");
+	ASSERT_OK(opts.retval, "test_map_kptr_ref retval");
+	ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref2), &opts);
+	ASSERT_OK(ret, "test_map_kptr_ref2 refcount");
+	ASSERT_OK(opts.retval, "test_map_kptr_ref2 retval");
+
+	if (test_run)
+		return;
+
 	ret = bpf_map_update_elem(bpf_map__fd(skel->maps.array_map), &key, buf, 0);
 	ASSERT_OK(ret, "array_map update");
 	ret = bpf_map_update_elem(bpf_map__fd(skel->maps.array_map), &key, buf, 0);
@@ -116,7 +132,12 @@ static void test_map_kptr_success(void)
 
 void test_map_kptr(void)
 {
-	if (test__start_subtest("success"))
-		test_map_kptr_success();
+	if (test__start_subtest("success")) {
+		test_map_kptr_success(false);
+		/* Do test_run twice, so that we see refcount going back to 1
+		 * after we leave it in map from first iteration.
+		 */
+		test_map_kptr_success(true);
+	}
 	test_map_kptr_fail();
 }
diff --git a/tools/testing/selftests/bpf/progs/map_kptr.c b/tools/testing/selftests/bpf/progs/map_kptr.c
index 1b0e0409eaa5..eb8217803493 100644
--- a/tools/testing/selftests/bpf/progs/map_kptr.c
+++ b/tools/testing/selftests/bpf/progs/map_kptr.c
@@ -141,7 +141,7 @@ SEC("tc")
 int test_map_kptr(struct __sk_buff *ctx)
 {
 	struct map_value *v;
-	int i, key = 0;
+	int key = 0;
 
 #define TEST(map)					\
 	v = bpf_map_lookup_elem(&map, &key);		\
@@ -162,7 +162,7 @@ SEC("tc")
 int test_map_in_map_kptr(struct __sk_buff *ctx)
 {
 	struct map_value *v;
-	int i, key = 0;
+	int key = 0;
 	void *map;
 
 #define TEST(map_in_map)                                \
@@ -187,4 +187,106 @@ int test_map_in_map_kptr(struct __sk_buff *ctx)
 	return 0;
 }
 
+SEC("tc")
+int test_map_kptr_ref(struct __sk_buff *ctx)
+{
+	struct prog_test_ref_kfunc *p, *p_st;
+	unsigned long arg = 0;
+	struct map_value *v;
+	int key = 0, ret;
+
+	p = bpf_kfunc_call_test_acquire(&arg);
+	if (!p)
+		return 1;
+
+	p_st = p->next;
+	if (p_st->cnt.refs.counter != 2) {
+		ret = 2;
+		goto end;
+	}
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v) {
+		ret = 3;
+		goto end;
+	}
+
+	p = bpf_kptr_xchg(&v->ref_ptr, p);
+	if (p) {
+		ret = 4;
+		goto end;
+	}
+	if (p_st->cnt.refs.counter != 2)
+		return 5;
+
+	p = bpf_kfunc_call_test_kptr_get(&v->ref_ptr, 0, 0);
+	if (!p)
+		return 6;
+	if (p_st->cnt.refs.counter != 3) {
+		ret = 7;
+		goto end;
+	}
+	bpf_kfunc_call_test_release(p);
+	if (p_st->cnt.refs.counter != 2)
+		return 8;
+
+	p = bpf_kptr_xchg(&v->ref_ptr, NULL);
+	if (!p)
+		return 9;
+	bpf_kfunc_call_test_release(p);
+	if (p_st->cnt.refs.counter != 1)
+		return 10;
+
+	p = bpf_kfunc_call_test_acquire(&arg);
+	if (!p)
+		return 11;
+	p = bpf_kptr_xchg(&v->ref_ptr, p);
+	if (p) {
+		ret = 12;
+		goto end;
+	}
+	if (p_st->cnt.refs.counter != 2)
+		return 13;
+	/* Leave in map */
+
+	return 0;
+end:
+	bpf_kfunc_call_test_release(p);
+	return ret;
+}
+
+SEC("tc")
+int test_map_kptr_ref2(struct __sk_buff *ctx)
+{
+	struct prog_test_ref_kfunc *p, *p_st;
+	struct map_value *v;
+	int key = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &key);
+	if (!v)
+		return 1;
+
+	p_st = v->ref_ptr;
+	if (!p_st || p_st->cnt.refs.counter != 2)
+		return 2;
+
+	p = bpf_kptr_xchg(&v->ref_ptr, NULL);
+	if (!p)
+		return 3;
+	if (p_st->cnt.refs.counter != 2) {
+		bpf_kfunc_call_test_release(p);
+		return 4;
+	}
+
+	p = bpf_kptr_xchg(&v->ref_ptr, p);
+	if (p) {
+		bpf_kfunc_call_test_release(p);
+		return 5;
+	}
+	if (p_st->cnt.refs.counter != 2)
+		return 6;
+
+	return 0;
+}
+
 char _license[] SEC("license") = "GPL";
-- 
2.35.1


* Re: [PATCH bpf-next v2 0/4] Follow ups for kptr series
  2022-05-11 19:46 [PATCH bpf-next v2 0/4] Follow ups for kptr series Kumar Kartikeya Dwivedi
                   ` (3 preceding siblings ...)
  2022-05-11 19:46 ` [PATCH bpf-next v2 4/4] selftests/bpf: Add tests for kptr_ref refcounting Kumar Kartikeya Dwivedi
@ 2022-05-12  0:20 ` patchwork-bot+netdevbpf
  4 siblings, 0 replies; 9+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-05-12  0:20 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi; +Cc: bpf, ast, andrii, daniel

Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Thu, 12 May 2022 01:16:50 +0530 you wrote:
> Fix a build time warning, and address comments from Alexei on the merged
> version [0].
> 
>   [0]: https://lore.kernel.org/bpf/20220424214901.2743946-1-memxor@gmail.com
> 
> Changelog:
> 
> [...]

Here is the summary with links:
  - [bpf-next,v2,1/4] bpf: Fix sparse warning for bpf_kptr_xchg_proto
    https://git.kernel.org/bpf/bpf-next/c/5b74c690e1c5
  - [bpf-next,v2,2/4] bpf: Prepare prog_test_struct kfuncs for runtime tests
    https://git.kernel.org/bpf/bpf-next/c/5cdccadcac26
  - [bpf-next,v2,3/4] selftests/bpf: Add negative C tests for kptrs
    https://git.kernel.org/bpf/bpf-next/c/04accf794bb2
  - [bpf-next,v2,4/4] selftests/bpf: Add tests for kptr_ref refcounting
    https://git.kernel.org/bpf/bpf-next/c/0ef6740e9777

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



* Re: [PATCH bpf-next v2 3/4] selftests/bpf: Add negative C tests for kptrs
  2022-05-11 19:46 ` [PATCH bpf-next v2 3/4] selftests/bpf: Add negative C tests for kptrs Kumar Kartikeya Dwivedi
@ 2022-05-12  3:08   ` Andrii Nakryiko
  2022-05-12 23:58     ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 9+ messages in thread
From: Andrii Nakryiko @ 2022-05-12  3:08 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann

On Wed, May 11, 2022 at 12:46 PM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> This uses the newly added SEC("?foo") naming to disable autoload of
> programs, and then loads them one by one for the object and verifies
> that loading fails and matches the returned error string from verifier.
> This is similar to already existing verifier tests but provides coverage
> for BPF C.
>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  .../selftests/bpf/prog_tests/map_kptr.c       |  87 +++-
>  .../selftests/bpf/progs/map_kptr_fail.c       | 418 ++++++++++++++++++
>  2 files changed, 504 insertions(+), 1 deletion(-)
>  create mode 100644 tools/testing/selftests/bpf/progs/map_kptr_fail.c
>

[...]

> +
> +static void test_map_kptr_success(void)
>  {
>         struct map_kptr *skel;
>         int key = 0, ret;
> @@ -35,3 +113,10 @@ void test_map_kptr(void)
>
>         map_kptr__destroy(skel);
>  }
> +
> +void test_map_kptr(void)
> +{
> +       if (test__start_subtest("success"))
> +               test_map_kptr_success();
> +       test_map_kptr_fail();

I think the intent for this was to be another subtest, right? Worth
fixing in a follow up?

> +}
> diff --git a/tools/testing/selftests/bpf/progs/map_kptr_fail.c b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
> new file mode 100644
> index 000000000000..05e209b1b12a
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
> @@ -0,0 +1,418 @@

[...]

* Re: [PATCH bpf-next v2 3/4] selftests/bpf: Add negative C tests for kptrs
  2022-05-12  3:08   ` Andrii Nakryiko
@ 2022-05-12 23:58     ` Kumar Kartikeya Dwivedi
  2022-05-13  0:10       ` Andrii Nakryiko
  0 siblings, 1 reply; 9+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2022-05-12 23:58 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann

On Thu, May 12, 2022 at 08:38:16AM IST, Andrii Nakryiko wrote:
> On Wed, May 11, 2022 at 12:46 PM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > This uses the newly added SEC("?foo") naming to disable autoload of
> > programs, and then loads them one by one for the object and verifies
> > that loading fails and matches the returned error string from verifier.
> > This is similar to already existing verifier tests but provides coverage
> > for BPF C.
> >
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  .../selftests/bpf/prog_tests/map_kptr.c       |  87 +++-
> >  .../selftests/bpf/progs/map_kptr_fail.c       | 418 ++++++++++++++++++
> >  2 files changed, 504 insertions(+), 1 deletion(-)
> >  create mode 100644 tools/testing/selftests/bpf/progs/map_kptr_fail.c
> >
>
> [...]
>
> > +
> > +static void test_map_kptr_success(void)
> >  {
> >         struct map_kptr *skel;
> >         int key = 0, ret;
> > @@ -35,3 +113,10 @@ void test_map_kptr(void)
> >
> >         map_kptr__destroy(skel);
> >  }
> > +
> > +void test_map_kptr(void)
> > +{
> > +       if (test__start_subtest("success"))
> > +               test_map_kptr_success();
> > +       test_map_kptr_fail();
>
> I think the intent for this was to be another subtest, right? Worth
> fixing in a follow up?
>

No, instead I am calling test__start_subtest inside it for each failing
program name, so that each of them becomes a subtest. In that case, it
should be alright?

> > +}
> > diff --git a/tools/testing/selftests/bpf/progs/map_kptr_fail.c b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
> > new file mode 100644
> > index 000000000000..05e209b1b12a
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
> > @@ -0,0 +1,418 @@
>
> [...]

--
Kartikeya

* Re: [PATCH bpf-next v2 3/4] selftests/bpf: Add negative C tests for kptrs
  2022-05-12 23:58     ` Kumar Kartikeya Dwivedi
@ 2022-05-13  0:10       ` Andrii Nakryiko
  0 siblings, 0 replies; 9+ messages in thread
From: Andrii Nakryiko @ 2022-05-13  0:10 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann

On Thu, May 12, 2022 at 4:57 PM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> On Thu, May 12, 2022 at 08:38:16AM IST, Andrii Nakryiko wrote:
> > On Wed, May 11, 2022 at 12:46 PM Kumar Kartikeya Dwivedi
> > <memxor@gmail.com> wrote:
> > >
> > > This uses the newly added SEC("?foo") naming to disable autoload of
> > > programs, and then loads them one by one for the object and verifies
> > > that loading fails and matches the returned error string from verifier.
> > > This is similar to already existing verifier tests but provides coverage
> > > for BPF C.
> > >
> > > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > > ---
> > >  .../selftests/bpf/prog_tests/map_kptr.c       |  87 +++-
> > >  .../selftests/bpf/progs/map_kptr_fail.c       | 418 ++++++++++++++++++
> > >  2 files changed, 504 insertions(+), 1 deletion(-)
> > >  create mode 100644 tools/testing/selftests/bpf/progs/map_kptr_fail.c
> > >
> >
> > [...]
> >
> > > +
> > > +static void test_map_kptr_success(void)
> > >  {
> > >         struct map_kptr *skel;
> > >         int key = 0, ret;
> > > @@ -35,3 +113,10 @@ void test_map_kptr(void)
> > >
> > >         map_kptr__destroy(skel);
> > >  }
> > > +
> > > +void test_map_kptr(void)
> > > +{
> > > +       if (test__start_subtest("success"))
> > > +               test_map_kptr_success();
> > > +       test_map_kptr_fail();
> >
> > I think the intent for this was to be another subtest, right? Worth
> > fixing in a follow up?
> >
>
> No, instead I am calling test__start_subtest inside it for each program name
> that is failing, to make them the subtest. In that case, it should be alright?

Ah, ok, it's quite confusing to have test__start_subtest in different
functions (though not illegal, I suppose). I'd rather have that test
loop right here in the test_map_kptr() function, but it's no big deal.
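
Just for illustration, a rough sketch of that alternative layout
(hypothetical, not what this series applies) could look like the
following, reusing the map_kptr_fail_tests table and helpers added by
this patch:

/* Hypothetical restructuring: keep every test__start_subtest() call in
 * one place by moving the failure-test loop into test_map_kptr().
 */
void test_map_kptr(void)
{
	int i;

	if (test__start_subtest("success"))
		test_map_kptr_success();

	for (i = 0; i < ARRAY_SIZE(map_kptr_fail_tests); i++) {
		if (!test__start_subtest(map_kptr_fail_tests[i].prog_name))
			continue;
		test_map_kptr_fail_prog(map_kptr_fail_tests[i].prog_name,
					map_kptr_fail_tests[i].err_msg);
	}
}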

>
> > > +}
> > > diff --git a/tools/testing/selftests/bpf/progs/map_kptr_fail.c b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
> > > new file mode 100644
> > > index 000000000000..05e209b1b12a
> > > --- /dev/null
> > > +++ b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
> > > @@ -0,0 +1,418 @@
> >
> > [...]
>
> --
> Kartikeya
