bpf.vger.kernel.org archive mirror
* [PATCH bpf-next v2 0/4] bpf: Improve verifier for cond_op and spilled loop index variables
@ 2023-04-06 16:44 Yonghong Song
  2023-04-06 16:44 ` [PATCH bpf-next v2 1/4] bpf: Improve verifier JEQ/JNE insn branch taken checking Yonghong Song
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Yonghong Song @ 2023-04-06 16:44 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau

LLVM commit [1] introduced a hoistMinMax optimization which transforms
  (i < VIRTIO_MAX_SGS) && (i < out_sgs)
into
  upper = MIN(VIRTIO_MAX_SGS, out_sgs)
  ... i < upper ...
and caused a verification failure. Commit [2] worked around the issue by
adding some bpf assembly code to prohibit the above optimization.
This patch set improves the verifier so that verification can succeed
without the above workaround.
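For illustration, the effect of hoistMinMax can be sketched in plain C (a hypothetical loop with an illustrative VIRTIO_MAX_SGS value, not the actual virtio selftest code):

```c
#include <assert.h>

#define VIRTIO_MAX_SGS 6	/* illustrative constant, not the kernel's value */

/* Before hoistMinMax: the loop condition compares i against the
 * constant VIRTIO_MAX_SGS directly, so the bound on i is visible. */
static int count_before(int out_sgs)
{
	int n = 0;

	for (int i = 0; (i < VIRTIO_MAX_SGS) && (i < out_sgs); i++)
		n++;
	return n;
}

/* After hoistMinMax: the two comparisons collapse into a single
 * comparison against MIN(VIRTIO_MAX_SGS, out_sgs); the constant bound
 * is no longer visible in the loop condition itself. */
static int count_after(int out_sgs)
{
	int upper = out_sgs < VIRTIO_MAX_SGS ? out_sgs : VIRTIO_MAX_SGS;
	int n = 0;

	for (int i = 0; i < upper; i++)
		n++;
	return n;
}
```

Both versions compute the same result; the difference is only in what bound information a static analysis can see in the loop condition.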

Without [2], the current verifier will hit the following failures:
  ...
  119: (15) if r1 == 0x0 goto pc+1
  The sequence of 8193 jumps is too complex.
  verification time 525829 usec
  stack depth 64
  processed 156616 insns (limit 1000000) max_states_per_insn 8 total_states 1754 peak_states 1712 mark_read 12
  -- END PROG LOAD LOG --
  libbpf: prog 'trace_virtqueue_add_sgs': failed to load: -14
  libbpf: failed to load object 'loop6.bpf.o'
  ...
The failure is due to the verifier inadequately handling '<const> <cond_op> <non_const>':
it explores both paths and generates the following verification states:
  ...
  89: (07) r2 += 1                      ; R2_w=5
  90: (79) r8 = *(u64 *)(r10 -48)       ; R8_w=scalar() R10=fp0
  91: (79) r1 = *(u64 *)(r10 -56)       ; R1_w=scalar(umax=5,var_off=(0x0; 0x7)) R10=fp0
  92: (ad) if r2 < r1 goto pc+41        ; R0_w=scalar() R1_w=scalar(umin=6,umax=5,var_off=(0x4; 0x3))
      R2_w=5 R6_w=scalar(id=385) R7_w=0 R8_w=scalar() R9_w=scalar(umax=21474836475,var_off=(0x0; 0x7ffffffff))
      R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-32= fp-40_w=4 fp-48=mmmmmmmm fp-56= fp-64=mmmmmmmm
  ...
  89: (07) r2 += 1                      ; R2_w=6
  90: (79) r8 = *(u64 *)(r10 -48)       ; R8_w=scalar() R10=fp0
  91: (79) r1 = *(u64 *)(r10 -56)       ; R1_w=scalar(umax=5,var_off=(0x0; 0x7)) R10=fp0
  92: (ad) if r2 < r1 goto pc+41        ; R0_w=scalar() R1_w=scalar(umin=7,umax=5,var_off=(0x4; 0x3))
      R2_w=6 R6=scalar(id=388) R7=0 R8_w=scalar() R9_w=scalar(umax=25769803770,var_off=(0x0; 0x7ffffffff))
      R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-32= fp-40=5 fp-48=mmmmmmmm fp-56= fp-64=mmmmmmmm
    ...
  89: (07) r2 += 1                      ; R2_w=4088
  90: (79) r8 = *(u64 *)(r10 -48)       ; R8_w=scalar() R10=fp0
  91: (79) r1 = *(u64 *)(r10 -56)       ; R1_w=scalar(umax=5,var_off=(0x0; 0x7)) R10=fp0
  92: (ad) if r2 < r1 goto pc+41        ; R0=scalar() R1=scalar(umin=4089,umax=5,var_off=(0x0; 0x7))
      R2=4088 R6=scalar(id=12634) R7=0 R8=scalar() R9=scalar(umax=17557826301960,var_off=(0x0; 0xfffffffffff))
      R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-32= fp-40=4087 fp-48=mmmmmmmm fp-56= fp-64=mmmmmmmm

Patch 3 fixes the above issue by handling '<const> <cond_op> <non_const>' properly.
While developing selftests for Patch 3, I found some issues with bound deduction for
BPF_JEQ/BPF_JNE and fixed them in Patch 1.

After the above issue is fixed, a second issue shows up:
  ...
  67: (07) r1 += -16                    ; R1_w=fp-16
  ; bpf_probe_read_kernel(&sgp, sizeof(sgp), sgs + i);
  68: (b4) w2 = 8                       ; R2_w=8
  69: (85) call bpf_probe_read_kernel#113       ; R0_w=scalar() fp-16=mmmmmmmm
  ; return sgp;
  70: (79) r6 = *(u64 *)(r10 -16)       ; R6=scalar() R10=fp0
  ; for (n = 0, sgp = get_sgp(sgs, i); sgp && (n < SG_MAX);
  71: (15) if r6 == 0x0 goto pc-49      ; R6=scalar()
  72: (b4) w1 = 0                       ; R1_w=0
  73: (05) goto pc-46
  ; for (i = 0; (i < VIRTIO_MAX_SGS) && (i < out_sgs); i++) {
  28: (bc) w7 = w1                      ; R1_w=0 R7_w=0
  ; bpf_probe_read_kernel(&len, sizeof(len), &sgp->length);
  ...
  23: (79) r3 = *(u64 *)(r10 -40)       ; R3_w=2 R10=fp0
  ; for (i = 0; (i < VIRTIO_MAX_SGS) && (i < out_sgs); i++) {
  24: (07) r3 += 1                      ; R3_w=3
  ; for (i = 0; (i < VIRTIO_MAX_SGS) && (i < out_sgs); i++) {
  25: (79) r1 = *(u64 *)(r10 -56)       ; R1_w=scalar(umax=5,var_off=(0x0; 0x7)) R10=fp0
  26: (ad) if r3 < r1 goto pc+34 61: R0=scalar() R1_w=scalar(umin=4,umax=5,var_off=(0x4; 0x1)) R3_w=3 R6=scalar(id=1658)
     R7=0 R8=scalar(id=1653) R9=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm
     fp-24=mmmm???? fp-32= fp-40=2 fp-56= fp-64=mmmmmmmm
  ; if (sg_is_chain(&sg))
  61: (7b) *(u64 *)(r10 -40) = r3       ; R3_w=3 R10=fp0 fp-40_w=3
    ...
  67: (07) r1 += -16                    ; R1_w=fp-16
  ; bpf_probe_read_kernel(&sgp, sizeof(sgp), sgs + i);
  68: (b4) w2 = 8                       ; R2_w=8
  69: (85) call bpf_probe_read_kernel#113       ; R0_w=scalar() fp-16=mmmmmmmm
  ; return sgp;
  70: (79) r6 = *(u64 *)(r10 -16)
  ; for (n = 0, sgp = get_sgp(sgs, i); sgp && (n < SG_MAX);
  infinite loop detected at insn 71
  verification time 90800 usec
  stack depth 64
  processed 25017 insns (limit 1000000) max_states_per_insn 20 total_states 491 peak_states 169 mark_read 12
  -- END PROG LOAD LOG --
  libbpf: prog 'trace_virtqueue_add_sgs': failed to load: -22
  
Further analysis found that the index variable 'i' is spilled but not marked as
precise, which leads to the 'infinite loop detected' error above. This is trickier
to fix, as identifying an induction variable is not easy in the verifier. Although
a heuristic is possible, let us leave it for now.

  [1] https://reviews.llvm.org/D143726
  [2] Commit 3c2611bac08a ("selftests/bpf: Fix trace_virtqueue_add_sgs test issue with LLVM 17.")

Yonghong Song (4):
  bpf: Improve verifier JEQ/JNE insn branch taken checking
  selftests/bpf: Add tests for non-constant cond_op NE/EQ bound
    deduction
  bpf: Improve handling of pattern '<const> <cond_op> <non_const>' in
    verifier
  selftests/bpf: Add verifier tests for code pattern '<const> <cond_op>
    <non_const>'

 kernel/bpf/verifier.c                         |  20 +
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../verifier_bounds_deduction_non_const.c     | 639 ++++++++++++++++++
 .../progs/verifier_bounds_mix_sign_unsign.c   |   2 +-
 4 files changed, 662 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c

-- 
2.34.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH bpf-next v2 1/4] bpf: Improve verifier JEQ/JNE insn branch taken checking
  2023-04-06 16:44 [PATCH bpf-next v2 0/4] bpf: Improve verifier for cond_op and spilled loop index variables Yonghong Song
@ 2023-04-06 16:44 ` Yonghong Song
  2023-04-06 17:49   ` Dave Marchevsky
  2023-04-06 20:35   ` Andrii Nakryiko
  2023-04-06 16:45 ` [PATCH bpf-next v2 2/4] selftests/bpf: Add tests for non-constant cond_op NE/EQ bound deduction Yonghong Song
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 13+ messages in thread
From: Yonghong Song @ 2023-04-06 16:44 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau

Currently, for BPF_JEQ/BPF_JNE insn, verifier determines
whether the branch is taken or not only if both operands
are constants. Therefore, for the following code snippet,
  0: (85) call bpf_ktime_get_ns#5       ; R0_w=scalar()
  1: (a5) if r0 < 0x3 goto pc+2         ; R0_w=scalar(umin=3)
  2: (b7) r2 = 2                        ; R2_w=2
  3: (1d) if r0 == r2 goto pc+2 6

At insn 3, since r0 is not a constant, the verifier assumes both branches
can be taken, which may lead to improper verification failures.

Compare the umin/umax values against the constant. If the umin value is
greater than the constant, or the umax value is smaller than the constant,
then for JEQ the branch must not be taken, and for JNE the branch must be taken.
The jmp32 mode JEQ/JNE branch-taken checking is handled similarly.
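The rule above can be modeled with a minimal C sketch (simplified: the real is_branch64_taken() also consults the tnum var_off; function names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the JEQ branch-taken decision for 'reg == val':
 * return 1 if the branch is always taken, 0 if never taken,
 * -1 if both outcomes are possible. */
static int jeq_taken(uint64_t umin, uint64_t umax, uint64_t val)
{
	if (umin == umax)		/* register is a known constant */
		return umin == val;
	if (val < umin || val > umax)	/* constant lies outside [umin, umax] */
		return 0;		/* equality is impossible */
	return -1;			/* both outcomes possible */
}

/* JNE is the exact negation of JEQ when the outcome is known. */
static int jne_taken(uint64_t umin, uint64_t umax, uint64_t val)
{
	int eq = jeq_taken(umin, umax, val);

	return eq < 0 ? -1 : !eq;
}
```

For the snippet above, r0 has umin=3 after insn 1, so comparing against the constant 2 at insn 3 yields "never taken" for JEQ and the verifier can prune that branch.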

The following lists the veristat result w.r.t. the changed number
of processed insns during verification:

File                                                   Program                                               Insns (A)  Insns (B)  Insns    (DIFF)
-----------------------------------------------------  ----------------------------------------------------  ---------  ---------  ---------------
test_cls_redirect.bpf.linked3.o                        cls_redirect                                              64980      73472  +8492 (+13.07%)
test_seg6_loop.bpf.linked3.o                           __add_egr_x                                               12425      12423      -2 (-0.02%)
test_tcp_hdr_options.bpf.linked3.o                     estab                                                      2634       2558     -76 (-2.89%)
test_parse_tcp_hdr_opt.bpf.linked3.o                   xdp_ingress_v6                                             1421       1420      -1 (-0.07%)
test_parse_tcp_hdr_opt_dynptr.bpf.linked3.o            xdp_ingress_v6                                             1238       1237      -1 (-0.08%)
test_tc_dtime.bpf.linked3.o                            egress_fwdns_prio100                                        414        411      -3 (-0.72%)

Mostly small improvements, but test_cls_redirect.bpf.linked3.o has a 13% regression.
I checked the verifier log and found this is due to pruning: for some JEQ/JNE
branches impacted by this patch, previously one branch was explored and the
other was pruned due to state equivalence.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 kernel/bpf/verifier.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 56f569811f70..5c6b90e384a5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12651,10 +12651,14 @@ static int is_branch32_taken(struct bpf_reg_state *reg, u32 val, u8 opcode)
 	case BPF_JEQ:
 		if (tnum_is_const(subreg))
 			return !!tnum_equals_const(subreg, val);
+		else if (val < reg->u32_min_value || val > reg->u32_max_value)
+			return 0;
 		break;
 	case BPF_JNE:
 		if (tnum_is_const(subreg))
 			return !tnum_equals_const(subreg, val);
+		else if (val < reg->u32_min_value || val > reg->u32_max_value)
+			return 1;
 		break;
 	case BPF_JSET:
 		if ((~subreg.mask & subreg.value) & val)
@@ -12724,10 +12728,14 @@ static int is_branch64_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
 	case BPF_JEQ:
 		if (tnum_is_const(reg->var_off))
 			return !!tnum_equals_const(reg->var_off, val);
+		else if (val < reg->umin_value || val > reg->umax_value)
+			return 0;
 		break;
 	case BPF_JNE:
 		if (tnum_is_const(reg->var_off))
 			return !tnum_equals_const(reg->var_off, val);
+		else if (val < reg->umin_value || val > reg->umax_value)
+			return 1;
 		break;
 	case BPF_JSET:
 		if ((~reg->var_off.mask & reg->var_off.value) & val)
-- 
2.34.1



* [PATCH bpf-next v2 2/4] selftests/bpf: Add tests for non-constant cond_op NE/EQ bound deduction
  2023-04-06 16:44 [PATCH bpf-next v2 0/4] bpf: Improve verifier for cond_op and spilled loop index variables Yonghong Song
  2023-04-06 16:44 ` [PATCH bpf-next v2 1/4] bpf: Improve verifier JEQ/JNE insn branch taken checking Yonghong Song
@ 2023-04-06 16:45 ` Yonghong Song
  2023-04-06 20:37   ` Andrii Nakryiko
  2023-04-06 16:45 ` [PATCH bpf-next v2 3/4] bpf: Improve handling of pattern '<const> <cond_op> <non_const>' in verifier Yonghong Song
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Yonghong Song @ 2023-04-06 16:45 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau

Add various tests for the code pattern '<non_const> NE/EQ <const>' handled by
the previous verifier patch. Without the verifier patch, these new
tests will fail.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../verifier_bounds_deduction_non_const.c     | 179 ++++++++++++++++++
 2 files changed, 181 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index efc8cf2e18d0..73dff693d411 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -7,6 +7,7 @@
 #include "verifier_array_access.skel.h"
 #include "verifier_basic_stack.skel.h"
 #include "verifier_bounds_deduction.skel.h"
+#include "verifier_bounds_deduction_non_const.skel.h"
 #include "verifier_bounds_mix_sign_unsign.skel.h"
 #include "verifier_cfg.skel.h"
 #include "verifier_cgroup_inv_retcode.skel.h"
@@ -70,6 +71,7 @@ void test_verifier_and(void)                  { RUN(verifier_and); }
 void test_verifier_array_access(void)         { RUN(verifier_array_access); }
 void test_verifier_basic_stack(void)          { RUN(verifier_basic_stack); }
 void test_verifier_bounds_deduction(void)     { RUN(verifier_bounds_deduction); }
+void test_verifier_bounds_deduction_non_const(void)     { RUN(verifier_bounds_deduction_non_const); }
 void test_verifier_bounds_mix_sign_unsign(void) { RUN(verifier_bounds_mix_sign_unsign); }
 void test_verifier_cfg(void)                  { RUN(verifier_cfg); }
 void test_verifier_cgroup_inv_retcode(void)   { RUN(verifier_cgroup_inv_retcode); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c b/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c
new file mode 100644
index 000000000000..fe570d866139
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c
@@ -0,0 +1,179 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <non_const> == <const>, 1")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_1(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 < 3 goto l0_%=;				\
+	r2 = 2;						\
+	if r0 == r2 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <non_const> == <const>, 2")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_2(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 > 3 goto l0_%=;				\
+	r2 = 4;						\
+	if r0 == r2 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <non_const> != <const>, 1")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_3(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 < 3 goto l0_%=;				\
+	r2 = 2;						\
+	if r0 != r2 goto l0_%=;				\
+	goto l1_%=;					\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <non_const> != <const>, 2")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_4(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 > 3 goto l0_%=;				\
+	r2 = 4;						\
+	if r0 != r2 goto l0_%=;				\
+	goto l1_%=;					\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <non_const> == <const>, 1")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_5(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 < 4 goto l0_%=;				\
+	w2 = 3;						\
+	if w0 == w2 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <non_const> == <const>, 2")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_6(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 > 4 goto l0_%=;				\
+	w2 = 5;						\
+	if w0 == w2 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <non_const> != <const>, 1")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_7(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 < 3 goto l0_%=;				\
+	w2 = 2;						\
+	if w0 != w2 goto l0_%=;				\
+	goto l1_%=;					\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <non_const> != <const>, 2")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_8(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 > 3 goto l0_%=;				\
+	w2 = 4;						\
+	if w0 != w2 goto l0_%=;				\
+	goto l1_%=;					\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.34.1



* [PATCH bpf-next v2 3/4] bpf: Improve handling of pattern '<const> <cond_op> <non_const>' in verifier
  2023-04-06 16:44 [PATCH bpf-next v2 0/4] bpf: Improve verifier for cond_op and spilled loop index variables Yonghong Song
  2023-04-06 16:44 ` [PATCH bpf-next v2 1/4] bpf: Improve verifier JEQ/JNE insn branch taken checking Yonghong Song
  2023-04-06 16:45 ` [PATCH bpf-next v2 2/4] selftests/bpf: Add tests for non-constant cond_op NE/EQ bound deduction Yonghong Song
@ 2023-04-06 16:45 ` Yonghong Song
  2023-04-06 18:10   ` Dave Marchevsky
  2023-04-06 16:45 ` [PATCH bpf-next v2 4/4] selftests/bpf: Add verifier tests for code pattern '<const> <cond_op> <non_const>' Yonghong Song
  2023-04-06 22:40 ` [PATCH bpf-next v2 0/4] bpf: Improve verifier for cond_op and spilled loop index variables patchwork-bot+netdevbpf
  4 siblings, 1 reply; 13+ messages in thread
From: Yonghong Song @ 2023-04-06 16:45 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau

Currently, the verifier does not handle '<const> <cond_op> <non_const>' well.
For example,
  ...
  10: (79) r1 = *(u64 *)(r10 -16)       ; R1_w=scalar() R10=fp0
  11: (b7) r2 = 0                       ; R2_w=0
  12: (2d) if r2 > r1 goto pc+2
  13: (b7) r0 = 0
  14: (95) exit
  15: (65) if r1 s> 0x1 goto pc+3
  16: (0f) r0 += r1
  ...
At insn 12, the verifier decides both the true and false branches are possible,
but actually only the false branch is possible.

Currently, the verifier already supports the pattern '<non_const> <cond_op> <const>'.
Add support for the pattern '<const> <cond_op> <non_const>' in a similar way.
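The reduction used here, evaluating 'const OP reg' as 'reg flip(OP) const', can be sketched as follows. The enum and mapping are illustrative, not the kernel's BPF opcode encoding or its flip_opcode() (which also handles signed and 32-bit variants):

```c
#include <assert.h>

enum cond_op { JEQ, JNE, JGT, JLT, JGE, JLE };

/* Swapping the operands of a comparison requires mirroring the
 * inequality; equality tests are symmetric and stay unchanged. */
static enum cond_op flip(enum cond_op op)
{
	switch (op) {
	case JGT: return JLT;
	case JLT: return JGT;
	case JGE: return JLE;
	case JLE: return JGE;
	default:  return op;	/* JEQ/JNE are symmetric */
	}
}

/* Evaluate 'a op b' on concrete values; flipping the opcode and
 * swapping the operands must preserve the result. */
static int eval(unsigned long a, enum cond_op op, unsigned long b)
{
	switch (op) {
	case JEQ: return a == b;
	case JNE: return a != b;
	case JGT: return a > b;
	case JLT: return a < b;
	case JGE: return a >= b;
	case JLE: return a <= b;
	}
	return 0;
}
```

With this reduction, the existing '<non_const> <cond_op> <const>' branch-taken logic can serve both operand orders.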

Also fix selftest 'verifier_bounds_mix_sign_unsign/bounds checks mixing signed and unsigned, variant 10',
which is affected by this change.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 kernel/bpf/verifier.c                                | 12 ++++++++++++
 .../bpf/progs/verifier_bounds_mix_sign_unsign.c      |  2 +-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5c6b90e384a5..3660b573048a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13356,6 +13356,18 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 				       src_reg->var_off.value,
 				       opcode,
 				       is_jmp32);
+	} else if (dst_reg->type == SCALAR_VALUE &&
+		   is_jmp32 && tnum_is_const(tnum_subreg(dst_reg->var_off))) {
+		pred = is_branch_taken(src_reg,
+				       tnum_subreg(dst_reg->var_off).value,
+				       flip_opcode(opcode),
+				       is_jmp32);
+	} else if (dst_reg->type == SCALAR_VALUE &&
+		   !is_jmp32 && tnum_is_const(dst_reg->var_off)) {
+		pred = is_branch_taken(src_reg,
+				       dst_reg->var_off.value,
+				       flip_opcode(opcode),
+				       is_jmp32);
 	} else if (reg_is_pkt_pointer_any(dst_reg) &&
 		   reg_is_pkt_pointer_any(src_reg) &&
 		   !is_jmp32) {
diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds_mix_sign_unsign.c b/tools/testing/selftests/bpf/progs/verifier_bounds_mix_sign_unsign.c
index 91a66357896a..4f40144748a5 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bounds_mix_sign_unsign.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bounds_mix_sign_unsign.c
@@ -354,7 +354,7 @@ __naked void signed_and_unsigned_variant_10(void)
 	call %[bpf_map_lookup_elem];			\
 	if r0 == 0 goto l0_%=;				\
 	r1 = *(u64*)(r10 - 16);				\
-	r2 = 0;						\
+	r2 = -1;						\
 	if r2 > r1 goto l1_%=;				\
 	r0 = 0;						\
 	exit;						\
-- 
2.34.1



* [PATCH bpf-next v2 4/4] selftests/bpf: Add verifier tests for code pattern '<const> <cond_op> <non_const>'
  2023-04-06 16:44 [PATCH bpf-next v2 0/4] bpf: Improve verifier for cond_op and spilled loop index variables Yonghong Song
                   ` (2 preceding siblings ...)
  2023-04-06 16:45 ` [PATCH bpf-next v2 3/4] bpf: Improve handling of pattern '<const> <cond_op> <non_const>' in verifier Yonghong Song
@ 2023-04-06 16:45 ` Yonghong Song
  2023-04-06 20:45   ` Andrii Nakryiko
  2023-04-06 22:40 ` [PATCH bpf-next v2 0/4] bpf: Improve verifier for cond_op and spilled loop index variables patchwork-bot+netdevbpf
  4 siblings, 1 reply; 13+ messages in thread
From: Yonghong Song @ 2023-04-06 16:45 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau

Add various tests for code pattern '<const> <cond_op> <non_const>' to
exercise the previous verifier patch.

The following veristat output shows the change in the number of processed
insns, comparing the previous patch (A) vs. this patch (B):

File                                                   Program                                               Insns (A)  Insns (B)  Insns  (DIFF)
-----------------------------------------------------  ----------------------------------------------------  ---------  ---------  -------------
test_seg6_loop.bpf.linked3.o                           __add_egr_x                                               12423      12314  -109 (-0.88%)

Only one program is affected, with a minor improvement.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 .../verifier_bounds_deduction_non_const.c     | 460 ++++++++++++++++++
 1 file changed, 460 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c b/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c
index fe570d866139..823f727cf210 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c
@@ -176,4 +176,464 @@ l1_%=:							\
 	: __clobber_all);
 }
 
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <const> > <non_const>, 1")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_9(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	r2 = 0;						\
+	if r2 > r0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <const> > <non_const>, 2")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_10(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 < 4 goto l0_%=;				\
+	r2 = 4;						\
+	if r2 > r0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <const> >= <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_11(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 < 4 goto l0_%=;				\
+	r2 = 3;						\
+	if r2 >= r0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <const> < <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_12(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 > 4 goto l0_%=;				\
+	r2 = 4;						\
+	if r2 < r0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <const> <= <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_13(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 >= 4 goto l0_%=;				\
+	r2 = 4;						\
+	if r2 <= r0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <const> == <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_14(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 < 3 goto l0_%=;				\
+	r2 = 2;						\
+	if r2 == r0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <const> s> <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_15(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 s< 4 goto l0_%=;				\
+	r2 = 4;						\
+	if r2 s> r0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <const> s>= <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_16(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 s< 4 goto l0_%=;				\
+	r2 = 3;						\
+	if r2 s>= r0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <const> s< <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_17(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 s> 4 goto l0_%=;				\
+	r2 = 4;						\
+	if r2 s< r0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <const> s<= <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_18(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 s> 4 goto l0_%=;				\
+	r2 = 5;						\
+	if r2 s<= r0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp64, <const> != <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_19(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if r0 < 3 goto l0_%=;				\
+	r2 = 2;						\
+	if r2 != r0 goto l0_%=;				\
+	goto l1_%=;					\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <const> > <non_const>, 1")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_20(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	w2 = 0;						\
+	if w2 > w0 goto l0_%=;				\
+	r0 = 0;						\
+	exit;						\
+l0_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <const> > <non_const>, 2")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_21(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 < 4 goto l0_%=;				\
+	w2 = 4;						\
+	if w2 > w0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <const> >= <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_22(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 < 4 goto l0_%=;				\
+	w2 = 3;						\
+	if w2 >= w0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <const> < <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_23(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 > 4 goto l0_%=;				\
+	w2 = 4;						\
+	if w2 < w0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <const> <= <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_24(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 >= 4 goto l0_%=;				\
+	w2 = 4;						\
+	if w2 <= w0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <const> == <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_25(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 < 4 goto l0_%=;				\
+	w2 = 3;						\
+	if w2 == w0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <const> s> <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_26(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 s< 4 goto l0_%=;				\
+	w2 = 4;						\
+	if w2 s> w0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <const> s>= <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_27(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 s< 4 goto l0_%=;				\
+	w2 = 3;						\
+	if w2 s>= w0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <const> s< <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_28(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 s> 4 goto l0_%=;				\
+	w2 = 5;						\
+	if w2 s< w0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <const> s<= <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_29(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 s>= 4 goto l0_%=;				\
+	w2 = 4;						\
+	if w2 s<= w0 goto l1_%=;				\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("check deducing bounds from non-const, jmp32, <const> != <non_const>")
+__success __retval(0)
+__naked void deducing_bounds_from_non_const_30(void)
+{
+	asm volatile ("					\
+	call %[bpf_ktime_get_ns];			\
+	if w0 < 3 goto l0_%=;				\
+	w2 = 2;						\
+	if w2 != w0 goto l0_%=;				\
+	goto l1_%=;					\
+l0_%=:							\
+	r0 = 0;						\
+	exit;						\
+l1_%=:							\
+	r0 -= r1;					\
+	exit;						\
+"	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";
-- 
2.34.1



* Re: [PATCH bpf-next v2 1/4] bpf: Improve verifier JEQ/JNE insn branch taken checking
  2023-04-06 16:44 ` [PATCH bpf-next v2 1/4] bpf: Improve verifier JEQ/JNE insn branch taken checking Yonghong Song
@ 2023-04-06 17:49   ` Dave Marchevsky
  2023-04-06 20:03     ` Alexei Starovoitov
  2023-04-06 20:35   ` Andrii Nakryiko
  1 sibling, 1 reply; 13+ messages in thread
From: Dave Marchevsky @ 2023-04-06 17:49 UTC (permalink / raw)
  To: Yonghong Song, bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau



On 4/6/23 12:44 PM, Yonghong Song wrote:
> Currently, for BPF_JEQ/BPF_JNE insn, verifier determines
> whether the branch is taken or not only if both operands
> are constants. Therefore, for the following code snippet,
>   0: (85) call bpf_ktime_get_ns#5       ; R0_w=scalar()
>   1: (a5) if r0 < 0x3 goto pc+2         ; R0_w=scalar(umin=3)
>   2: (b7) r2 = 2                        ; R2_w=2
>   3: (1d) if r0 == r2 goto pc+2 6
> 
> At insn 3, since r0 is not a constant, the verifier assumes both branches
> can be taken, which may lead to an improper verification failure.
> 
> Compare the umin/umax values against the constant. If the umin value
> is greater than the constant, or the umax value is smaller than the
> constant, then for JEQ the branch must not be taken, and for JNE the
> branch must be taken. The jmp32 JEQ/JNE branch-taken checking is
> handled similarly.
> 
> The following lists the veristat results w.r.t. the changed number
> of processed insns during verification:
> 
> File                                                   Program                                               Insns (A)  Insns (B)  Insns    (DIFF)
> -----------------------------------------------------  ----------------------------------------------------  ---------  ---------  ---------------
> test_cls_redirect.bpf.linked3.o                        cls_redirect                                              64980      73472  +8492 (+13.07%)
> test_seg6_loop.bpf.linked3.o                           __add_egr_x                                               12425      12423      -2 (-0.02%)
> test_tcp_hdr_options.bpf.linked3.o                     estab                                                      2634       2558     -76 (-2.89%)
> test_parse_tcp_hdr_opt.bpf.linked3.o                   xdp_ingress_v6                                             1421       1420      -1 (-0.07%)
> test_parse_tcp_hdr_opt_dynptr.bpf.linked3.o            xdp_ingress_v6                                             1238       1237      -1 (-0.08%)
> test_tc_dtime.bpf.linked3.o                            egress_fwdns_prio100                                        414        411      -3 (-0.72%)
> 
> Mostly small improvements, but test_cls_redirect.bpf.linked3.o has a 13% regression.
> I checked the verifier log and found that this is due to pruning.
> For some JEQ/JNE branches impacted by this patch,
> one branch is explored and the other is found state-equivalent and
> pruned.

Can you elaborate a bit on this insn increase caused by pruning?

My naive reading of this: there was some state exploration that was
previously pruned due to is_branch_taken returning indeterminate
value (-1), and now that it can concretely say branch is/isn't taken,
states which would've been considered equivalent no longer are.
Is that accurate?

> 
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---

Regardless,

Acked-by: Dave Marchevsky <davemarchevsky@fb.com>


* Re: [PATCH bpf-next v2 3/4] bpf: Improve handling of pattern '<const> <cond_op> <non_const>' in verifier
  2023-04-06 16:45 ` [PATCH bpf-next v2 3/4] bpf: Improve handling of pattern '<const> <cond_op> <non_const>' in verifier Yonghong Song
@ 2023-04-06 18:10   ` Dave Marchevsky
  2023-04-06 20:40     ` Andrii Nakryiko
  0 siblings, 1 reply; 13+ messages in thread
From: Dave Marchevsky @ 2023-04-06 18:10 UTC (permalink / raw)
  To: Yonghong Song, bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau

On 4/6/23 12:45 PM, Yonghong Song wrote:
> Currently, the verifier does not handle '<const> <cond_op> <non_const>' well.
> For example,
>   ...
>   10: (79) r1 = *(u64 *)(r10 -16)       ; R1_w=scalar() R10=fp0
>   11: (b7) r2 = 0                       ; R2_w=0
>   12: (2d) if r2 > r1 goto pc+2
>   13: (b7) r0 = 0
>   14: (95) exit
>   15: (65) if r1 s> 0x1 goto pc+3
>   16: (0f) r0 += r1
>   ...
> At insn 12, the verifier decides both the true and false branches are
> possible, but actually only the false branch is possible.
> 
> Currently, the verifier already supports the pattern '<non_const> <cond_op> <const>'.
> Add support for the pattern '<const> <cond_op> <non_const>' in a similar way.
> 
> Also fix selftest 'verifier_bounds_mix_sign_unsign/bounds checks mixing signed and unsigned, variant 10'
> due to this change.
> 
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---

I still think there's a refactoring opportunity here, but I see your comments
on the related thread in v1 of this series, and don't think finding the
cleanest refactor is a blocker.

Acked-by: Dave Marchevsky <davemarchevsky@fb.com>


* Re: [PATCH bpf-next v2 1/4] bpf: Improve verifier JEQ/JNE insn branch taken checking
  2023-04-06 17:49   ` Dave Marchevsky
@ 2023-04-06 20:03     ` Alexei Starovoitov
  0 siblings, 0 replies; 13+ messages in thread
From: Alexei Starovoitov @ 2023-04-06 20:03 UTC (permalink / raw)
  To: Dave Marchevsky
  Cc: Yonghong Song, bpf, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, Kernel Team, Martin KaFai Lau

On Thu, Apr 6, 2023 at 10:49 AM Dave Marchevsky <davemarchevsky@meta.com> wrote:
>
>
>
> On 4/6/23 12:44 PM, Yonghong Song wrote:
> > Currently, for BPF_JEQ/BPF_JNE insn, verifier determines
> > whether the branch is taken or not only if both operands
> > are constants. Therefore, for the following code snippet,
> >   0: (85) call bpf_ktime_get_ns#5       ; R0_w=scalar()
> >   1: (a5) if r0 < 0x3 goto pc+2         ; R0_w=scalar(umin=3)
> >   2: (b7) r2 = 2                        ; R2_w=2
> >   3: (1d) if r0 == r2 goto pc+2 6
> >
> > At insn 3, since r0 is not a constant, the verifier assumes both branches
> > can be taken, which may lead to an improper verification failure.
> >
> > Compare the umin/umax values against the constant. If the umin value
> > is greater than the constant, or the umax value is smaller than the
> > constant, then for JEQ the branch must not be taken, and for JNE the
> > branch must be taken. The jmp32 JEQ/JNE branch-taken checking is
> > handled similarly.
> >
> > The following lists the veristat results w.r.t. the changed number
> > of processed insns during verification:
> >
> > File                                                   Program                                               Insns (A)  Insns (B)  Insns    (DIFF)
> > -----------------------------------------------------  ----------------------------------------------------  ---------  ---------  ---------------
> > test_cls_redirect.bpf.linked3.o                        cls_redirect                                              64980      73472  +8492 (+13.07%)
> > test_seg6_loop.bpf.linked3.o                           __add_egr_x                                               12425      12423      -2 (-0.02%)
> > test_tcp_hdr_options.bpf.linked3.o                     estab                                                      2634       2558     -76 (-2.89%)
> > test_parse_tcp_hdr_opt.bpf.linked3.o                   xdp_ingress_v6                                             1421       1420      -1 (-0.07%)
> > test_parse_tcp_hdr_opt_dynptr.bpf.linked3.o            xdp_ingress_v6                                             1238       1237      -1 (-0.08%)
> > test_tc_dtime.bpf.linked3.o                            egress_fwdns_prio100                                        414        411      -3 (-0.72%)
> >
> > Mostly small improvements, but test_cls_redirect.bpf.linked3.o has a 13% regression.
> > I checked the verifier log and found that this is due to pruning.
> > For some JEQ/JNE branches impacted by this patch,
> > one branch is explored and the other is found state-equivalent and
> > pruned.
>
> Can you elaborate a bit on this insn increase caused by pruning?
>
> My naive reading of this: there was some state exploration that was
> previously pruned due to is_branch_taken returning indeterminate
> value (-1), and now that it can concretely say branch is/isn't taken,
> states which would've been considered equivalent no longer are.
> Is that accurate?

Pretty much. It's because when is_branch_taken() is certain about the
direction of the branch, it marks the register as precise, and that hurts
state equivalence later.

>
> >
> > Signed-off-by: Yonghong Song <yhs@fb.com>
> > ---
>
> Regardless,
>
> Acked-by: Dave Marchevsky <davemarchevsky@fb.com>


* Re: [PATCH bpf-next v2 1/4] bpf: Improve verifier JEQ/JNE insn branch taken checking
  2023-04-06 16:44 ` [PATCH bpf-next v2 1/4] bpf: Improve verifier JEQ/JNE insn branch taken checking Yonghong Song
  2023-04-06 17:49   ` Dave Marchevsky
@ 2023-04-06 20:35   ` Andrii Nakryiko
  1 sibling, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 20:35 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau

On Thu, Apr 6, 2023 at 9:45 AM Yonghong Song <yhs@fb.com> wrote:
>
> Currently, for BPF_JEQ/BPF_JNE insn, verifier determines
> whether the branch is taken or not only if both operands
> are constants. Therefore, for the following code snippet,
>   0: (85) call bpf_ktime_get_ns#5       ; R0_w=scalar()
>   1: (a5) if r0 < 0x3 goto pc+2         ; R0_w=scalar(umin=3)
>   2: (b7) r2 = 2                        ; R2_w=2
>   3: (1d) if r0 == r2 goto pc+2 6
>
> At insn 3, since r0 is not a constant, the verifier assumes both branches
> can be taken, which may lead to an improper verification failure.
>
> Compare the umin/umax values against the constant. If the umin value
> is greater than the constant, or the umax value is smaller than the
> constant, then for JEQ the branch must not be taken, and for JNE the
> branch must be taken. The jmp32 JEQ/JNE branch-taken checking is
> handled similarly.
>
> The following lists the veristat results w.r.t. the changed number
> of processed insns during verification:
>
> File                                                   Program                                               Insns (A)  Insns (B)  Insns    (DIFF)
> -----------------------------------------------------  ----------------------------------------------------  ---------  ---------  ---------------
> test_cls_redirect.bpf.linked3.o                        cls_redirect                                              64980      73472  +8492 (+13.07%)
> test_seg6_loop.bpf.linked3.o                           __add_egr_x                                               12425      12423      -2 (-0.02%)
> test_tcp_hdr_options.bpf.linked3.o                     estab                                                      2634       2558     -76 (-2.89%)
> test_parse_tcp_hdr_opt.bpf.linked3.o                   xdp_ingress_v6                                             1421       1420      -1 (-0.07%)
> test_parse_tcp_hdr_opt_dynptr.bpf.linked3.o            xdp_ingress_v6                                             1238       1237      -1 (-0.08%)
> test_tc_dtime.bpf.linked3.o                            egress_fwdns_prio100                                        414        411      -3 (-0.72%)
>
> Mostly small improvements, but test_cls_redirect.bpf.linked3.o has a 13% regression.
> I checked the verifier log and found that this is due to pruning.
> For some JEQ/JNE branches impacted by this patch,
> one branch is explored and the other is found state-equivalent and
> pruned.
>
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---

LGTM.

Acked-by: Andrii Nakryiko <andrii@kernel.org>

>  kernel/bpf/verifier.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 56f569811f70..5c6b90e384a5 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -12651,10 +12651,14 @@ static int is_branch32_taken(struct bpf_reg_state *reg, u32 val, u8 opcode)
>         case BPF_JEQ:
>                 if (tnum_is_const(subreg))
>                         return !!tnum_equals_const(subreg, val);
> +               else if (val < reg->u32_min_value || val > reg->u32_max_value)
> +                       return 0;
>                 break;
>         case BPF_JNE:
>                 if (tnum_is_const(subreg))
>                         return !tnum_equals_const(subreg, val);
> +               else if (val < reg->u32_min_value || val > reg->u32_max_value)
> +                       return 1;
>                 break;
>         case BPF_JSET:
>                 if ((~subreg.mask & subreg.value) & val)
> @@ -12724,10 +12728,14 @@ static int is_branch64_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
>         case BPF_JEQ:
>                 if (tnum_is_const(reg->var_off))
>                         return !!tnum_equals_const(reg->var_off, val);
> +               else if (val < reg->umin_value || val > reg->umax_value)
> +                       return 0;
>                 break;
>         case BPF_JNE:
>                 if (tnum_is_const(reg->var_off))
>                         return !tnum_equals_const(reg->var_off, val);
> +               else if (val < reg->umin_value || val > reg->umax_value)
> +                       return 1;
>                 break;
>         case BPF_JSET:
>                 if ((~reg->var_off.mask & reg->var_off.value) & val)
> --
> 2.34.1
>


* Re: [PATCH bpf-next v2 2/4] selftests/bpf: Add tests for non-constant cond_op NE/EQ bound deduction
  2023-04-06 16:45 ` [PATCH bpf-next v2 2/4] selftests/bpf: Add tests for non-constant cond_op NE/EQ bound deduction Yonghong Song
@ 2023-04-06 20:37   ` Andrii Nakryiko
  0 siblings, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 20:37 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau

On Thu, Apr 6, 2023 at 9:47 AM Yonghong Song <yhs@fb.com> wrote:
>
> Add various tests for the code pattern '<non_const> NE/EQ <const>' handling
> implemented in the previous verifier patch. Without the verifier patch,
> these new tests will fail.
>
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---

Makes sense.

Acked-by: Andrii Nakryiko <andrii@kernel.org>

>  .../selftests/bpf/prog_tests/verifier.c       |   2 +
>  .../verifier_bounds_deduction_non_const.c     | 179 ++++++++++++++++++
>  2 files changed, 181 insertions(+)
>  create mode 100644 tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c
>

[...]


* Re: [PATCH bpf-next v2 3/4] bpf: Improve handling of pattern '<const> <cond_op> <non_const>' in verifier
  2023-04-06 18:10   ` Dave Marchevsky
@ 2023-04-06 20:40     ` Andrii Nakryiko
  0 siblings, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 20:40 UTC (permalink / raw)
  To: Dave Marchevsky
  Cc: Yonghong Song, bpf, Alexei Starovoitov, Andrii Nakryiko,
	Daniel Borkmann, kernel-team, Martin KaFai Lau

On Thu, Apr 6, 2023 at 11:10 AM Dave Marchevsky <davemarchevsky@meta.com> wrote:
>
> On 4/6/23 12:45 PM, Yonghong Song wrote:
> > Currently, the verifier does not handle '<const> <cond_op> <non_const>' well.
> > For example,
> >   ...
> >   10: (79) r1 = *(u64 *)(r10 -16)       ; R1_w=scalar() R10=fp0
> >   11: (b7) r2 = 0                       ; R2_w=0
> >   12: (2d) if r2 > r1 goto pc+2
> >   13: (b7) r0 = 0
> >   14: (95) exit
> >   15: (65) if r1 s> 0x1 goto pc+3
> >   16: (0f) r0 += r1
> >   ...
> > At insn 12, the verifier decides both the true and false branches are
> > possible, but actually only the false branch is possible.
> >
> > Currently, the verifier already supports the pattern '<non_const> <cond_op> <const>'.
> > Add support for the pattern '<const> <cond_op> <non_const>' in a similar way.
> >
> > Also fix selftest 'verifier_bounds_mix_sign_unsign/bounds checks mixing signed and unsigned, variant 10'
> > due to this change.
> >
> > Signed-off-by: Yonghong Song <yhs@fb.com>
> > ---
>
> I still think there's a refactoring opportunity here, but I see your comments
> on the related thread in v1 of this series, and don't think finding the
> cleanest refactor is a blocker.

Agreed, but current implementation is not wrong, so:

Acked-by: Andrii Nakryiko <andrii@kernel.org>

>
> Acked-by: Dave Marchevsky <davemarchevsky@fb.com>


* Re: [PATCH bpf-next v2 4/4] selftests/bpf: Add verifier tests for code pattern '<const> <cond_op> <non_const>'
  2023-04-06 16:45 ` [PATCH bpf-next v2 4/4] selftests/bpf: Add verifier tests for code pattern '<const> <cond_op> <non_const>' Yonghong Song
@ 2023-04-06 20:45   ` Andrii Nakryiko
  0 siblings, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2023-04-06 20:45 UTC (permalink / raw)
  To: Yonghong Song
  Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	kernel-team, Martin KaFai Lau

On Thu, Apr 6, 2023 at 9:45 AM Yonghong Song <yhs@fb.com> wrote:
>
> Add various tests for code pattern '<const> <cond_op> <non_const>' to
> exercise the previous verifier patch.
>
> The following veristat stats show the changed number of processed
> insns, comparing the previous patch vs. this patch:
>
> File                                                   Program                                               Insns (A)  Insns (B)  Insns  (DIFF)
> -----------------------------------------------------  ----------------------------------------------------  ---------  ---------  -------------
> test_seg6_loop.bpf.linked3.o                           __add_egr_x                                               12423      12314  -109 (-0.88%)
>

nit: a bit of veristat trivia for future use. If you specify filters
properly, it will size the column widths more tightly and you wouldn't
have to trim the output to only the relevant rows. The above would do,
probably:

sudo ./veristat *.linked3.o -e file,prog,insns -f 'insns_diff!=0'

> Only one program is affected with minor change.
>
> Signed-off-by: Yonghong Song <yhs@fb.com>
> ---
>  .../verifier_bounds_deduction_non_const.c     | 460 ++++++++++++++++++
>  1 file changed, 460 insertions(+)
>

Nice set of tests!

Acked-by: Andrii Nakryiko <andrii@kernel.org>


> diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c b/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c
> index fe570d866139..823f727cf210 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c
> @@ -176,4 +176,464 @@ l1_%=:                                                    \
>         : __clobber_all);
>  }
>

[...]


* Re: [PATCH bpf-next v2 0/4] bpf: Improve verifier for cond_op and spilled loop index variables
  2023-04-06 16:44 [PATCH bpf-next v2 0/4] bpf: Improve verifier for cond_op and spilled loop index variables Yonghong Song
                   ` (3 preceding siblings ...)
  2023-04-06 16:45 ` [PATCH bpf-next v2 4/4] selftests/bpf: Add verifier tests for code pattern '<const> <cond_op> <non_const>' Yonghong Song
@ 2023-04-06 22:40 ` patchwork-bot+netdevbpf
  4 siblings, 0 replies; 13+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-04-06 22:40 UTC (permalink / raw)
  To: Yonghong Song; +Cc: bpf, ast, andrii, daniel, kernel-team, martin.lau

Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Thu, 6 Apr 2023 09:44:50 -0700 you wrote:
> LLVM commit [1] introduced hoistMinMax optimization like
>   (i < VIRTIO_MAX_SGS) && (i < out_sgs)
> to
>   upper = MIN(VIRTIO_MAX_SGS, out_sgs)
>   ... i < upper ...
> and caused a verification failure. Commit [2] worked around the issue by
> adding some bpf assembly code to prohibit the above optimization.
> This patch set improves the verifier so that verification can succeed
> without the above workaround.
> 
> [...]

Here is the summary with links:
  - [bpf-next,v2,1/4] bpf: Improve verifier JEQ/JNE insn branch taken checking
    https://git.kernel.org/bpf/bpf-next/c/13fbcee55706
  - [bpf-next,v2,2/4] selftests/bpf: Add tests for non-constant cond_op NE/EQ bound deduction
    https://git.kernel.org/bpf/bpf-next/c/aec08d677b4d
  - [bpf-next,v2,3/4] bpf: Improve handling of pattern '<const> <cond_op> <non_const>' in verifier
    https://git.kernel.org/bpf/bpf-next/c/953d9f5beaf7
  - [bpf-next,v2,4/4] selftests/bpf: Add verifier tests for code pattern '<const> <cond_op> <non_const>'
    https://git.kernel.org/bpf/bpf-next/c/23a88fae9f20

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



