* [PATCH bpf 0/3] bpf: Enforce NULL check on new _OR_NULL return types
@ 2020-10-19 19:42 Martin KaFai Lau
  2020-10-19 19:42 ` [PATCH bpf 1/3] bpf: Enforce id generation for all may-be-null register type Martin KaFai Lau
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Martin KaFai Lau @ 2020-10-19 19:42 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Hao Luo, kernel-team,
	netdev, Yonghong Song

This set enforces a NULL check on the new helper return types,
RET_PTR_TO_BTF_ID_OR_NULL and RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL.
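
For illustration, the usage pattern that the verifier now enforces for
these return types looks roughly like this (a minimal sketch, not part
of this set; the program name, section name and field read below are
only illustrative):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

SEC("classifier")
int enforce_null_check(struct __sk_buff *skb)
{
	struct tcp_sock *tp;
	struct bpf_sock *sk;

	sk = skb->sk;
	if (!sk)
		return 0;

	/* RET_PTR_TO_BTF_ID_OR_NULL: tp may be NULL */
	tp = bpf_skc_to_tcp_sock(sk);
	if (!tp)
		return 0;

	/* only reachable after the NULL check above */
	return tp->snd_cwnd ? 1 : 0;
}

char _license[] SEC("license") = "GPL";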

Martin KaFai Lau (3):
  bpf: Enforce id generation for all may-be-null register type
  bpf: selftest: Ensure the return value of bpf_skc_to helpers must be
    checked
  bpf: selftest: Ensure the return value of the bpf_per_cpu_ptr() must
    be checked

 kernel/bpf/verifier.c                         | 11 ++--
 .../selftests/bpf/prog_tests/ksyms_btf.c      | 57 +++++++++++++------
 .../bpf/progs/test_ksyms_btf_null_check.c     | 31 ++++++++++
 tools/testing/selftests/bpf/verifier/sock.c   | 25 ++++++++
 4 files changed, 100 insertions(+), 24 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_btf_null_check.c

-- 
2.24.1



* [PATCH bpf 1/3] bpf: Enforce id generation for all may-be-null register type
  2020-10-19 19:42 [PATCH bpf 0/3] bpf: Enforce NULL check on new _OR_NULL return types Martin KaFai Lau
@ 2020-10-19 19:42 ` Martin KaFai Lau
  2020-10-19 23:23   ` Alexei Starovoitov
  2020-10-19 19:42 ` [PATCH bpf 2/3] bpf: selftest: Ensure the return value of bpf_skc_to helpers must be checked Martin KaFai Lau
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 7+ messages in thread
From: Martin KaFai Lau @ 2020-10-19 19:42 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Hao Luo, kernel-team,
	netdev, Yonghong Song

Commit af7ec1383361 ("bpf: Add bpf_skc_to_tcp6_sock() helper")
introduced RET_PTR_TO_BTF_ID_OR_NULL and
commit eaa6bcb71ef6 ("bpf: Introduce bpf_per_cpu_ptr()")
introduced RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL.
Note that for RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL, the reg0->type
could become PTR_TO_MEM_OR_NULL, which is not covered by
BPF_PROBE_MEM.

BPF_REG_0 will then hold an _OR_NULL pointer type.  This _OR_NULL
pointer type requires the bpf program to do an explicit NULL check
first.  After the NULL check, the verifier marks all registers having
the same reg->id as safe to use.  However, the reg->id is not set for
those new _OR_NULL return types.  One way this goes wrong is that
checking one btf_id typed pointer for NULL ends up validating all
other btf_id typed pointers, because all of them have id == 0.  The
later tests will exercise this path.
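
For example (an illustrative fragment only; the actual verifier test
is added in patch 2 of this series), without ids the verifier would
wrongly accept:

	tp = bpf_skc_to_tcp_sock(sk);
	req_sk = bpf_skc_to_tcp_request_sock(sk);
	if (!req_sk)		/* marks every reg with id == 0 as non-NULL */
		return 0;
	return *(__u8 *)tp;	/* unchecked tp access slips through */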

To fix this and to avoid similar issues in the future, this patch
moves the id generation logic out of each individual RET type test
in check_helper_call().  Instead, it does a single
reg_type_may_be_null() test and then generates the id if needed.

This patch also adds a WARN_ON_ONCE in mark_ptr_or_null_reg()
to catch future breakage.

The _OR_NULL pointer usage in bpf_iter_reg.ctx_arg_info is fine
because the existing id generation after check_ctx_access() happens
to cover it.  That path also uses reg_type_may_be_null() to decide
whether id generation is needed.

Fixes: af7ec1383361 ("bpf: Add bpf_skc_to_tcp6_sock() helper")
Fixes: eaa6bcb71ef6 ("bpf: Introduce bpf_per_cpu_ptr()")
Cc: Yonghong Song <yhs@fb.com>
Cc: Hao Luo <haoluo@google.com>
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
---
 kernel/bpf/verifier.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 39d7f44e7c92..6200519582a6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5133,24 +5133,19 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 				regs[BPF_REG_0].id = ++env->id_gen;
 		} else {
 			regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL;
-			regs[BPF_REG_0].id = ++env->id_gen;
 		}
 	} else if (fn->ret_type == RET_PTR_TO_SOCKET_OR_NULL) {
 		mark_reg_known_zero(env, regs, BPF_REG_0);
 		regs[BPF_REG_0].type = PTR_TO_SOCKET_OR_NULL;
-		regs[BPF_REG_0].id = ++env->id_gen;
 	} else if (fn->ret_type == RET_PTR_TO_SOCK_COMMON_OR_NULL) {
 		mark_reg_known_zero(env, regs, BPF_REG_0);
 		regs[BPF_REG_0].type = PTR_TO_SOCK_COMMON_OR_NULL;
-		regs[BPF_REG_0].id = ++env->id_gen;
 	} else if (fn->ret_type == RET_PTR_TO_TCP_SOCK_OR_NULL) {
 		mark_reg_known_zero(env, regs, BPF_REG_0);
 		regs[BPF_REG_0].type = PTR_TO_TCP_SOCK_OR_NULL;
-		regs[BPF_REG_0].id = ++env->id_gen;
 	} else if (fn->ret_type == RET_PTR_TO_ALLOC_MEM_OR_NULL) {
 		mark_reg_known_zero(env, regs, BPF_REG_0);
 		regs[BPF_REG_0].type = PTR_TO_MEM_OR_NULL;
-		regs[BPF_REG_0].id = ++env->id_gen;
 		regs[BPF_REG_0].mem_size = meta.mem_size;
 	} else if (fn->ret_type == RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL ||
 		   fn->ret_type == RET_PTR_TO_MEM_OR_BTF_ID) {
@@ -5199,6 +5194,9 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 		return -EINVAL;
 	}
 
+	if (reg_type_may_be_null(regs[BPF_REG_0].type))
+		regs[BPF_REG_0].id = ++env->id_gen;
+
 	if (is_ptr_cast_function(func_id)) {
 		/* For release_reference() */
 		regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
@@ -7212,7 +7210,8 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
 				 struct bpf_reg_state *reg, u32 id,
 				 bool is_null)
 {
-	if (reg_type_may_be_null(reg->type) && reg->id == id) {
+	if (reg_type_may_be_null(reg->type) && reg->id == id &&
+	    !WARN_ON_ONCE(!reg->id)) {
 		/* Old offset (both fixed and variable parts) should
 		 * have been known-zero, because we don't allow pointer
 		 * arithmetic on pointers that might be NULL.
-- 
2.24.1



* [PATCH bpf 2/3] bpf: selftest: Ensure the return value of bpf_skc_to helpers must be checked
  2020-10-19 19:42 [PATCH bpf 0/3] bpf: Enforce NULL check on new _OR_NULL return types Martin KaFai Lau
  2020-10-19 19:42 ` [PATCH bpf 1/3] bpf: Enforce id generation for all may-be-null register type Martin KaFai Lau
@ 2020-10-19 19:42 ` Martin KaFai Lau
  2020-10-19 19:42 ` [PATCH bpf 3/3] bpf: selftest: Ensure the return value of the bpf_per_cpu_ptr() " Martin KaFai Lau
  2020-10-19 23:30 ` [PATCH bpf 0/3] bpf: Enforce NULL check on new _OR_NULL return types patchwork-bot+netdevbpf
  3 siblings, 0 replies; 7+ messages in thread
From: Martin KaFai Lau @ 2020-10-19 19:42 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Hao Luo, kernel-team,
	netdev, Yonghong Song

This patch tests:

int bpf_cls(struct __sk_buff *skb)
{
	/* REG_6: sk
	 * REG_7: tp
	 * REG_8: req_sk
	 */
	struct tcp_request_sock *req_sk;
	struct tcp_sock *tp;
	struct bpf_sock *sk;

	sk = skb->sk;
	if (!sk)
		return 0;

	tp = bpf_skc_to_tcp_sock(sk);
	req_sk = bpf_skc_to_tcp_request_sock(sk);
	if (!req_sk)
		return 0;

	/* !tp has not been tested, so verifier should reject. */
	return *(__u8 *)tp;
}

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
---
 tools/testing/selftests/bpf/verifier/sock.c | 25 +++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/tools/testing/selftests/bpf/verifier/sock.c b/tools/testing/selftests/bpf/verifier/sock.c
index b1aac2641498..ce13ece08d51 100644
--- a/tools/testing/selftests/bpf/verifier/sock.c
+++ b/tools/testing/selftests/bpf/verifier/sock.c
@@ -631,3 +631,28 @@
 	.prog_type = BPF_PROG_TYPE_SK_REUSEPORT,
 	.result = ACCEPT,
 },
+{
+	"mark null check on return value of bpf_skc_to helpers",
+	.insns = {
+	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+	BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_sock),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+	BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_request_sock),
+	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_8, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_7, 0),
+	BPF_EXIT_INSN(),
+	},
+	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+	.result = REJECT,
+	.errstr = "invalid mem access",
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "unknown func",
+},
-- 
2.24.1



* [PATCH bpf 3/3] bpf: selftest: Ensure the return value of the bpf_per_cpu_ptr() must be checked
  2020-10-19 19:42 [PATCH bpf 0/3] bpf: Enforce NULL check on new _OR_NULL return types Martin KaFai Lau
  2020-10-19 19:42 ` [PATCH bpf 1/3] bpf: Enforce id generation for all may-be-null register type Martin KaFai Lau
  2020-10-19 19:42 ` [PATCH bpf 2/3] bpf: selftest: Ensure the return value of bpf_skc_to helpers must be checked Martin KaFai Lau
@ 2020-10-19 19:42 ` Martin KaFai Lau
  2020-10-20  0:33   ` Hao Luo
  2020-10-19 23:30 ` [PATCH bpf 0/3] bpf: Enforce NULL check on new _OR_NULL return types patchwork-bot+netdevbpf
  3 siblings, 1 reply; 7+ messages in thread
From: Martin KaFai Lau @ 2020-10-19 19:42 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Daniel Borkmann, Hao Luo, kernel-team,
	netdev, Yonghong Song

This patch tests that a pointer returned by bpf_per_cpu_ptr() must be
checked for NULL before it can be accessed.

This patch adds a subtest "null_check", so it moves the ".data..percpu"
existence check to the very beginning, before any subtest runs.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
---
 .../selftests/bpf/prog_tests/ksyms_btf.c      | 57 +++++++++++++------
 .../bpf/progs/test_ksyms_btf_null_check.c     | 31 ++++++++++
 2 files changed, 70 insertions(+), 18 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_btf_null_check.c

diff --git a/tools/testing/selftests/bpf/prog_tests/ksyms_btf.c b/tools/testing/selftests/bpf/prog_tests/ksyms_btf.c
index 28e26bd3e0ca..b58b775d19f3 100644
--- a/tools/testing/selftests/bpf/prog_tests/ksyms_btf.c
+++ b/tools/testing/selftests/bpf/prog_tests/ksyms_btf.c
@@ -5,18 +5,17 @@
 #include <bpf/libbpf.h>
 #include <bpf/btf.h>
 #include "test_ksyms_btf.skel.h"
+#include "test_ksyms_btf_null_check.skel.h"
 
 static int duration;
 
-void test_ksyms_btf(void)
+static void test_basic(void)
 {
 	__u64 runqueues_addr, bpf_prog_active_addr;
 	__u32 this_rq_cpu;
 	int this_bpf_prog_active;
 	struct test_ksyms_btf *skel = NULL;
 	struct test_ksyms_btf__data *data;
-	struct btf *btf;
-	int percpu_datasec;
 	int err;
 
 	err = kallsyms_find("runqueues", &runqueues_addr);
@@ -31,20 +30,6 @@ void test_ksyms_btf(void)
 	if (CHECK(err == -ENOENT, "ksym_find", "symbol 'bpf_prog_active' not found\n"))
 		return;
 
-	btf = libbpf_find_kernel_btf();
-	if (CHECK(IS_ERR(btf), "btf_exists", "failed to load kernel BTF: %ld\n",
-		  PTR_ERR(btf)))
-		return;
-
-	percpu_datasec = btf__find_by_name_kind(btf, ".data..percpu",
-						BTF_KIND_DATASEC);
-	if (percpu_datasec < 0) {
-		printf("%s:SKIP:no PERCPU DATASEC in kernel btf\n",
-		       __func__);
-		test__skip();
-		goto cleanup;
-	}
-
 	skel = test_ksyms_btf__open_and_load();
 	if (CHECK(!skel, "skel_open", "failed to open and load skeleton\n"))
 		goto cleanup;
@@ -83,6 +68,42 @@ void test_ksyms_btf(void)
 	      data->out__bpf_prog_active);
 
 cleanup:
-	btf__free(btf);
 	test_ksyms_btf__destroy(skel);
 }
+
+static void test_null_check(void)
+{
+	struct test_ksyms_btf_null_check *skel;
+
+	skel = test_ksyms_btf_null_check__open_and_load();
+	CHECK(skel, "skel_open", "unexpected load of a prog missing null check\n");
+
+	test_ksyms_btf_null_check__destroy(skel);
+}
+
+void test_ksyms_btf(void)
+{
+	int percpu_datasec;
+	struct btf *btf;
+
+	btf = libbpf_find_kernel_btf();
+	if (CHECK(IS_ERR(btf), "btf_exists", "failed to load kernel BTF: %ld\n",
+		  PTR_ERR(btf)))
+		return;
+
+	percpu_datasec = btf__find_by_name_kind(btf, ".data..percpu",
+						BTF_KIND_DATASEC);
+	btf__free(btf);
+	if (percpu_datasec < 0) {
+		printf("%s:SKIP:no PERCPU DATASEC in kernel btf\n",
+		       __func__);
+		test__skip();
+		return;
+	}
+
+	if (test__start_subtest("basic"))
+		test_basic();
+
+	if (test__start_subtest("null_check"))
+		test_null_check();
+}
diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_btf_null_check.c b/tools/testing/selftests/bpf/progs/test_ksyms_btf_null_check.c
new file mode 100644
index 000000000000..8bc8f7c637bc
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_ksyms_btf_null_check.c
@@ -0,0 +1,31 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+
+#include "vmlinux.h"
+
+#include <bpf/bpf_helpers.h>
+
+extern const struct rq runqueues __ksym; /* struct type global var. */
+extern const int bpf_prog_active __ksym; /* int type global var. */
+
+SEC("raw_tp/sys_enter")
+int handler(const void *ctx)
+{
+	struct rq *rq;
+	int *active;
+	__u32 cpu;
+
+	cpu = bpf_get_smp_processor_id();
+	rq = (struct rq *)bpf_per_cpu_ptr(&runqueues, cpu);
+	active = (int *)bpf_per_cpu_ptr(&bpf_prog_active, cpu);
+	if (active) {
+		/* READ_ONCE */
+		*(volatile int *)active;
+		/* !rq has not been tested, so verifier should reject. */
+		*(volatile int *)(&rq->cpu);
+	}
+
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.24.1



* Re: [PATCH bpf 1/3] bpf: Enforce id generation for all may-be-null register type
  2020-10-19 19:42 ` [PATCH bpf 1/3] bpf: Enforce id generation for all may-be-null register type Martin KaFai Lau
@ 2020-10-19 23:23   ` Alexei Starovoitov
  0 siblings, 0 replies; 7+ messages in thread
From: Alexei Starovoitov @ 2020-10-19 23:23 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Hao Luo, Kernel Team,
	Network Development, Yonghong Song

On Mon, Oct 19, 2020 at 12:43 PM Martin KaFai Lau <kafai@fb.com> wrote:
>
> The commit af7ec1383361 ("bpf: Add bpf_skc_to_tcp6_sock() helper")
> introduces RET_PTR_TO_BTF_ID_OR_NULL and
> the commit eaa6bcb71ef6 ("bpf: Introduce bpf_per_cpu_ptr()")
> introduces RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL.
> Note that for RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL, the reg0->type
> could become PTR_TO_MEM_OR_NULL which is not covered by
> BPF_PROBE_MEM.
>
> The BPF_REG_0 will then hold a _OR_NULL pointer type. This _OR_NULL
> pointer type requires the bpf program to explicitly do a NULL check first.
> After NULL check, the verifier will mark all registers having
> the same reg->id as safe to use.  However, the reg->id
> is not set for those new _OR_NULL return types.  One of the ways
> that may be wrong is, checking NULL for one btf_id typed pointer will
> end up validating all other btf_id typed pointers because
> all of them have id == 0.  The later tests will exercise
> this path.
>
> To fix it and also avoid similar issue in the future, this patch
> moves the id generation logic out of each individual RET type
> test in check_helper_call().  Instead, it does one
> reg_type_may_be_null() test and then do the id generation
> if needed.
>
> This patch also adds a WARN_ON_ONCE in mark_ptr_or_null_reg()
> to catch future breakage.
>
> The _OR_NULL pointer usage in the bpf_iter_reg.ctx_arg_info is
> fine because it just happens that the existing id generation after
> check_ctx_access() has covered it.  It is also using the
> reg_type_may_be_null() to decide if id generation is needed or not.
>
> Fixes: af7ec1383361 ("bpf: Add bpf_skc_to_tcp6_sock() helper")
> Fixes: eaa6bcb71ef6 ("bpf: Introduce bpf_per_cpu_ptr()")
> Cc: Yonghong Song <yhs@fb.com>
> Cc: Hao Luo <haoluo@google.com>
> Signed-off-by: Martin KaFai Lau <kafai@fb.com>

Good catch. The fix makes sense to me. Applied.


* Re: [PATCH bpf 0/3] bpf: Enforce NULL check on new _OR_NULL return types
  2020-10-19 19:42 [PATCH bpf 0/3] bpf: Enforce NULL check on new _OR_NULL return types Martin KaFai Lau
                   ` (2 preceding siblings ...)
  2020-10-19 19:42 ` [PATCH bpf 3/3] bpf: selftest: Ensure the return value of the bpf_per_cpu_ptr() " Martin KaFai Lau
@ 2020-10-19 23:30 ` patchwork-bot+netdevbpf
  3 siblings, 0 replies; 7+ messages in thread
From: patchwork-bot+netdevbpf @ 2020-10-19 23:30 UTC (permalink / raw)
  To: Martin KaFai Lau; +Cc: bpf, ast, daniel, haoluo, kernel-team, netdev, yhs

Hello:

This series was applied to bpf/bpf.git (refs/heads/master):

On Mon, 19 Oct 2020 12:42:06 -0700 you wrote:
> This set enforces NULL check on the new helper return types,
> RET_PTR_TO_BTF_ID_OR_NULL and RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL.
> 
> Martin KaFai Lau (3):
>   bpf: Enforce id generation for all may-be-null register type
>   bpf: selftest: Ensure the return value of bpf_skc_to helpers must be
>     checked
>   bpf: selftest: Ensure the return value of the bpf_per_cpu_ptr() must
>     be checked
> 
> [...]

Here is the summary with links:
  - [bpf,1/3] bpf: Enforce id generation for all may-be-null register type
    https://git.kernel.org/bpf/bpf/c/93c230e3f5bd
  - [bpf,2/3] bpf: selftest: Ensure the return value of bpf_skc_to helpers must be checked
    https://git.kernel.org/bpf/bpf/c/e710bcc6d92c
  - [bpf,3/3] bpf: selftest: Ensure the return value of the bpf_per_cpu_ptr() must be checked
    https://git.kernel.org/bpf/bpf/c/8568c3cefd51

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCH bpf 3/3] bpf: selftest: Ensure the return value of the bpf_per_cpu_ptr() must be checked
  2020-10-19 19:42 ` [PATCH bpf 3/3] bpf: selftest: Ensure the return value of the bpf_per_cpu_ptr() " Martin KaFai Lau
@ 2020-10-20  0:33   ` Hao Luo
  0 siblings, 0 replies; 7+ messages in thread
From: Hao Luo @ 2020-10-20  0:33 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Kernel Team,
	Networking, Yonghong Song

Thanks Martin for catching and fixing this!


On Mon, Oct 19, 2020 at 12:43 PM Martin KaFai Lau <kafai@fb.com> wrote:
>
> This patch tests all pointers returned by bpf_per_cpu_ptr() must be
> tested for NULL first before it can be accessed.
>
> This patch adds a subtest "null_check", so it moves the ".data..percpu"
> existence check to the very beginning and before doing any subtest.
>
> Signed-off-by: Martin KaFai Lau <kafai@fb.com>
> [...]


