* Re: [PATCH bpf-next v2 1/2] bpf: add bound tracking for BPF_MOD
  2023-03-24  4:58 ` [PATCH bpf-next v2 1/2] " Xu Kuohai
@ 2023-03-23 17:16   ` Alexei Starovoitov
  2023-03-25  1:57     ` Xu Kuohai
From: Alexei Starovoitov @ 2023-03-23 17:16 UTC
  To: Xu Kuohai
  Cc: bpf, open list:KERNEL SELFTEST FRAMEWORK, LKML,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Shuah Khan

On Thu, Mar 23, 2023 at 8:59 AM Xu Kuohai <xukuohai@huaweicloud.com> wrote:
>
> From: Xu Kuohai <xukuohai@huawei.com>
>
> dst_reg is marked as unknown when a BPF_MOD instruction is verified,
> causing the following BPF prog to be incorrectly rejected.
>
> 0: r0 = 0
> 1: r0 %= 1   // r0 is marked as unknown
> 2: r1 = 0
> 3: r1 += 1
> 4: if r1 < r0 goto pc-2 // verifier treats the loop as unbounded
> 5: exit
>
> To teach the verifier to accept the above prog, this patch adds bound
> tracking for BPF_MOD.
>
> The approach is based on the following rules:
>
> 1. BPF_MOD is unsigned;
>
> 2. For an unsigned constant divisor x:
>
>  a. when x != 0, the resulting dst_reg value is in the range [0, x - 1],
>     and if no wrapping occurs, the result can be narrowed further to
>     [umin mod x, umax mod x];
>
>  b. when x == 0, dst_reg is truncated to 32 bits by mod32 or remains
>     unchanged by mod64.
>
> Signed-off-by: Xu Kuohai <xukuohai@huawei.com>

Same Nack as before.
You haven't answered _why_ anyone needs it.

* [PATCH bpf-next v2 0/2] bpf: add bound tracking for BPF_MOD
@ 2023-03-24  4:58 Xu Kuohai
  2023-03-24  4:58 ` [PATCH bpf-next v2 1/2] " Xu Kuohai
  2023-03-24  4:58 ` [PATCH bpf-next v2 2/2] selftests/bpf: check if verifier tracks dst_reg bound " Xu Kuohai
From: Xu Kuohai @ 2023-03-24  4:58 UTC
  To: bpf, linux-kselftest, linux-kernel
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Shuah Khan

From: Xu Kuohai <xukuohai@huawei.com>

dst_reg is marked as unknown when a BPF_MOD instruction is verified,
causing the following BPF prog to be incorrectly rejected.

0: r0 = 0
1: r0 %= 1   // r0 is marked as unknown
2: r1 = 0
3: r1 += 1
4: if r1 < r0 goto pc-2 // verifier concludes the loop is unbounded
5: exit

To teach the verifier to accept the above prog, this series adds bound
tracking for BPF_MOD.
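
With bound tracking, the verifier can follow the example step by step (an
annotated sketch of the expected ranges, not literal verifier log output):

0: r0 = 0               // r0 range: [0, 0]
1: r0 %= 1              // constant divisor 1: r0 range stays [0, 0]
2: r1 = 0               // r1 range: [0, 0]
3: r1 += 1              // r1 range: [1, 1]
4: if r1 < r0 goto pc-2 // 1 < 0 never holds, so the branch is never taken
5: exit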

v2:
- fix build warning reported by kernel test robot <lkp@intel.com> [0]
- add two more cases and update commit message

[0] https://lore.kernel.org/oe-kbuild-all/202303060036.zK05OC5M-lkp@intel.com

v1: https://lore.kernel.org/bpf/20230306033119.2634976-1-xukuohai@huaweicloud.com

Xu Kuohai (2):
  bpf: add bound tracking for BPF_MOD
  selftests/bpf: check if verifier tracks dst_reg bound for BPF_MOD

 kernel/bpf/verifier.c                      |  98 ++++++-
 tools/testing/selftests/bpf/verifier/mod.c | 320 +++++++++++++++++++++
 2 files changed, 413 insertions(+), 5 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/verifier/mod.c

-- 
2.30.2


* [PATCH bpf-next v2 1/2] bpf: add bound tracking for BPF_MOD
  2023-03-24  4:58 [PATCH bpf-next v2 0/2] bpf: add bound tracking for BPF_MOD Xu Kuohai
@ 2023-03-24  4:58 ` Xu Kuohai
  2023-03-23 17:16   ` Alexei Starovoitov
  2023-03-24  4:58 ` [PATCH bpf-next v2 2/2] selftests/bpf: check if verifier tracks dst_reg bound " Xu Kuohai
From: Xu Kuohai @ 2023-03-24  4:58 UTC
  To: bpf, linux-kselftest, linux-kernel
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Shuah Khan

From: Xu Kuohai <xukuohai@huawei.com>

dst_reg is marked as unknown when a BPF_MOD instruction is verified,
causing the following BPF prog to be incorrectly rejected.

0: r0 = 0
1: r0 %= 1   // r0 is marked as unknown
2: r1 = 0
3: r1 += 1
4: if r1 < r0 goto pc-2 // verifier treats the loop as unbounded
5: exit

To teach the verifier to accept the above prog, this patch adds bound
tracking for BPF_MOD.

The approach is based on the following rules:

1. BPF_MOD is unsigned;

2. For an unsigned constant divisor x:

 a. when x != 0, the resulting dst_reg value is in the range [0, x - 1],
    and if no wrapping occurs, the result can be narrowed further to
    [umin mod x, umax mod x];

 b. when x == 0, dst_reg is truncated to 32 bits by mod32 or remains
    unchanged by mod64.
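
As an illustration, the following is a minimal user-space sketch of rule
2a (plain C, not verifier code; the range struct and mod_bounds() name
are made up for this example):

#include <stdint.h>
#include <stdio.h>

struct range { uint64_t umin, umax; };

/* Model of rule 2a for a constant divisor x != 0: if the dividend range
 * spans fewer than x values and the remainders do not wrap past a
 * multiple of x, [umin mod x, umax mod x] is exact; otherwise only the
 * coarse bound [0, x - 1] holds.
 */
static struct range mod_bounds(struct range r, uint64_t x)
{
	uint64_t lo = r.umin % x, hi = r.umax % x;

	if (r.umax - r.umin < x && lo <= hi)
		return (struct range){ lo, hi };
	return (struct range){ 0, x - 1 };
}

int main(void)
{
	struct range a = mod_bounds((struct range){ 10, 13 }, 5); /* [0, 3] */
	struct range b = mod_bounds((struct range){ 13, 16 }, 5); /* [0, 4] */

	printf("[%llu, %llu] [%llu, %llu]\n",
	       (unsigned long long)a.umin, (unsigned long long)a.umax,
	       (unsigned long long)b.umin, (unsigned long long)b.umax);
	return 0;
}

For example, [10, 13] mod 5 narrows to the exact range [0, 3], while
[13, 16] mod 5 wraps (13 mod 5 = 3 > 16 mod 5 = 1), so only the coarse
bound [0, 4] holds.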

Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
---
 kernel/bpf/verifier.c | 98 ++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 93 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 64f06f6e16bf..e8e37f587d6c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12085,6 +12085,87 @@ static void scalar_min_max_arsh(struct bpf_reg_state *dst_reg,
 	__update_reg_bounds(dst_reg);
 }
 
+static void scalar32_min_max_mod(struct bpf_reg_state *dst_reg,
+				 struct bpf_reg_state *src_reg)
+{
+	u32 val = (u32)src_reg->var_off.value; /* src_reg is constant */
+	u64 umax = dst_reg->u32_max_value; /* do_div requires u64 */
+	u64 umin = dst_reg->u32_min_value; /* do_div requires u64 */
+	u32 umax_rem, umin_rem;
+
+	/* When the mod32 divisor is zero, dst_reg is simply truncated to
+	 * 32 bits: adjust_scalar_min_max_vals() already calls zext_32_to_64()
+	 * for all alu32 ops, so here we just return.
+	 */
+	if (!val)
+		return;
+
+	umax_rem = do_div(umax, val);
+	umin_rem = do_div(umin, val);
+
+	/* no wrapping */
+	if (umax - umin < val && umin_rem <= umax_rem) {
+		dst_reg->var_off = tnum_range(umin_rem, umax_rem);
+		dst_reg->u32_min_value = umin_rem;
+		dst_reg->u32_max_value = umax_rem;
+	} else {
+		dst_reg->var_off = tnum_range(0, val - 1);
+		dst_reg->u32_min_value = 0;
+		dst_reg->u32_max_value = val - 1;
+	}
+
+	/* unsigned bounds cross the s32 sign boundary: signed bounds unknown */
+	if ((s32)dst_reg->u32_min_value > (s32)dst_reg->u32_max_value) {
+		dst_reg->s32_min_value = S32_MIN;
+		dst_reg->s32_max_value = S32_MAX;
+	} else {
+		dst_reg->s32_min_value = (s32)dst_reg->u32_min_value;
+		dst_reg->s32_max_value = (s32)dst_reg->u32_max_value;
+	}
+
+	/* mark reg64 unbounded to deduce 64-bit bounds from var_off */
+	__mark_reg64_unbounded(dst_reg);
+}
+
+static void scalar_min_max_mod(struct bpf_reg_state *dst_reg,
+			       struct bpf_reg_state *src_reg)
+{
+	u64 val = src_reg->var_off.value; /* src_reg is constant */
+	u64 umax = dst_reg->umax_value;
+	u64 umin = dst_reg->umin_value;
+	u64 umax_rem, umin_rem;
+
+	/* dst_reg is untouched when the mod64 divisor is zero */
+	if (!val)
+		return;
+
+	div64_u64_rem(umin, val, &umin_rem);
+	div64_u64_rem(umax, val, &umax_rem);
+
+	/* no wrapping */
+	if (umax - umin < val && umin_rem <= umax_rem) {
+		dst_reg->var_off = tnum_range(umin_rem, umax_rem);
+		dst_reg->umin_value = umin_rem;
+		dst_reg->umax_value = umax_rem;
+	} else {
+		dst_reg->var_off = tnum_range(0, val - 1);
+		dst_reg->umin_value = 0;
+		dst_reg->umax_value = val - 1;
+	}
+
+	/* unsigned bounds cross the s64 sign boundary: signed bounds unknown */
+	if ((s64)dst_reg->umin_value > (s64)dst_reg->umax_value) {
+		dst_reg->smin_value = S64_MIN;
+		dst_reg->smax_value = S64_MAX;
+	} else {
+		dst_reg->smin_value = (s64)dst_reg->umin_value;
+		dst_reg->smax_value = (s64)dst_reg->umax_value;
+	}
+
+	/* mark reg32 unbounded to deduce 32-bit bounds from var_off */
+	__mark_reg32_unbounded(dst_reg);
+}
+
 /* WARNING: This function does calculations on 64-bit values, but the actual
  * execution may occur on 32-bit values. Therefore, things like bitshifts
  * need extra checks in the 32-bit case.
@@ -12159,11 +12240,12 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 	 * and BPF_OR. This is possible because these ops have fairly easy to
 	 * understand and calculate behavior in both 32-bit and 64-bit alu ops.
 	 * See alu32 verifier tests for examples. The second class of
-	 * operations, BPF_LSH, BPF_RSH, and BPF_ARSH, however are not so easy
-	 * with regards to tracking sign/unsigned bounds because the bits may
-	 * cross subreg boundaries in the alu64 case. When this happens we mark
-	 * the reg unbounded in the subreg bound space and use the resulting
-	 * tnum to calculate an approximation of the sign/unsigned bounds.
+	 * operations, BPF_LSH, BPF_RSH, BPF_ARSH and BPF_MOD, however are not
+	 * so easy with regards to tracking sign/unsigned bounds because the
+	 * bits may cross subreg boundaries in the alu64 case. When this happens
+	 * we mark the reg unbounded in the subreg bound space and use the
+	 * resulting tnum to calculate an approximation of the sign/unsigned
+	 * bounds.
 	 */
 	switch (opcode) {
 	case BPF_ADD:
@@ -12235,6 +12317,12 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 		else
 			scalar_min_max_arsh(dst_reg, &src_reg);
 		break;
+	case BPF_MOD:
+		if (alu32)
+			scalar32_min_max_mod(dst_reg, &src_reg);
+		else
+			scalar_min_max_mod(dst_reg, &src_reg);
+		break;
 	default:
 		mark_reg_unknown(env, regs, insn->dst_reg);
 		break;
-- 
2.30.2


* [PATCH bpf-next v2 2/2] selftests/bpf: check if verifier tracks dst_reg bound for BPF_MOD
  2023-03-24  4:58 [PATCH bpf-next v2 0/2] bpf: add bound tracking for BPF_MOD Xu Kuohai
  2023-03-24  4:58 ` [PATCH bpf-next v2 1/2] " Xu Kuohai
@ 2023-03-24  4:58 ` Xu Kuohai
From: Xu Kuohai @ 2023-03-24  4:58 UTC
  To: bpf, linux-kselftest, linux-kernel
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Shuah Khan

From: Xu Kuohai <xukuohai@huawei.com>

Add test cases to check whether the verifier tracks the dst_reg bounds
for BPF_MOD.
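
Each ACCEPT test follows the same skeleton: perform a mod, then run a
loop whose trip count the verifier can bound only if it tracks the mod
result. Roughly, in C terms (an illustrative model of the test pattern,
not actual harness code):

r0 = dividend % divisor;    /* bounds for r0 come from patch 1 */
for (r1 = 1; r1 < r0; r1++) /* bounded only if the verifier bounds r0 */
	;

The two JSLT cases deliberately leave the signed range fully unbounded,
so the loop cannot be bounded and verification fails with "BPF program
is too large."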

Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
---
 tools/testing/selftests/bpf/verifier/mod.c | 320 +++++++++++++++++++++
 1 file changed, 320 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/verifier/mod.c

diff --git a/tools/testing/selftests/bpf/verifier/mod.c b/tools/testing/selftests/bpf/verifier/mod.c
new file mode 100644
index 000000000000..3aec856d5c9f
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/mod.c
@@ -0,0 +1,320 @@
+{
+	"mod64 positive imm",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_ALU64_IMM(BPF_MOD, BPF_REG_0, 1),
+
+	BPF_MOV64_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	/* r0 = 0 */
+	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_0, -2),
+
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+},
+{
+	"mod64 positive reg",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_MOV64_IMM(BPF_REG_1, 1),
+	BPF_ALU64_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+
+	BPF_MOV64_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	/* r0 = 0 */
+	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_0, -2),
+
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+},
+{
+	"mod64 zero",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_MOV64_IMM(BPF_REG_1, 0),
+	BPF_ALU64_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+
+	BPF_MOV64_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	/* r0 = 0 */
+	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_0, -2),
+
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+},
+{
+	"mod64 negative 1",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_MOV64_IMM(BPF_REG_1, -1),
+	BPF_ALU64_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+
+	BPF_MOV64_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	/* r0 = 1 */
+	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_0, -2),
+
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+	.retval = 1,
+},
+{
+	"mod64 negative 2",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, -4),
+	BPF_MOV32_IMM(BPF_REG_1, 5),
+	BPF_ALU64_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+
+	BPF_MOV64_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	/* r0 = 2 */
+	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_0, -2),
+
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+	.retval = 2,
+},
+{
+	"mod64 negative 3",
+	.insns = {
+	BPF_MOV32_IMM(BPF_REG_0, -4),
+	BPF_MOV32_IMM(BPF_REG_1, -5),
+	BPF_ALU64_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+
+	BPF_MOV64_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	/* r0 = 1 */
+	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_0, -2),
+
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+	.retval = 1,
+},
+{
+	"mod64 variable dividend cross signed boundary, with JLT",
+	.insns = {
+	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+		    offsetof(struct xdp_md, data)),
+	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+		    offsetof(struct xdp_md, data_end)),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 11),
+
+	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
+	BPF_LD_IMM64(BPF_REG_0, 0x7fffffffffffff10),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
+	BPF_LD_IMM64(BPF_REG_0, 0x80000000000000ff),
+	BPF_ALU64_REG(BPF_MOD, BPF_REG_1, BPF_REG_0),
+
+	BPF_LD_IMM64(BPF_REG_0, 0x8000000000000000),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+	/* r1 unsigned range is [0x7fffffffffffff10, 0x800000000000000f] */
+	BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_1, -2),
+
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+},
+{
+	"mod64 variable dividend cross signed boundary, with JSLT",
+	.insns = {
+	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+		    offsetof(struct xdp_md, data)),
+	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+		    offsetof(struct xdp_md, data_end)),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 11),
+
+	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
+	BPF_LD_IMM64(BPF_REG_0, 0x7fffffffffffff10),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
+	BPF_LD_IMM64(BPF_REG_0, 0x80000000000000ff),
+	BPF_ALU64_REG(BPF_MOD, BPF_REG_1, BPF_REG_0),
+
+	BPF_LD_IMM64(BPF_REG_0, 0x8000000000000000),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+	/* r1 signed range is [S64_MIN, S64_MAX] */
+	BPF_JMP_REG(BPF_JSLT, BPF_REG_0, BPF_REG_1, -2),
+
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.errstr = "BPF program is too large.",
+	.result = REJECT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+},
+{
+	"mod32 positive imm",
+	.insns = {
+	BPF_MOV32_IMM(BPF_REG_0, 0),
+	BPF_ALU32_IMM(BPF_MOD, BPF_REG_0, 1),
+
+	BPF_MOV64_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	/* r0 = 0 */
+	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_0, -2),
+
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+},
+{
+	"mod32 positive reg",
+	.insns = {
+	BPF_MOV32_IMM(BPF_REG_0, 0),
+	BPF_MOV32_IMM(BPF_REG_1, 1),
+	BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+
+	BPF_MOV64_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	/* r0 = 0 */
+	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_0, -2),
+
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+},
+{
+	"mod32 zero",
+	.insns = {
+	BPF_MOV32_IMM(BPF_REG_0, 0),
+	BPF_MOV32_IMM(BPF_REG_1, 0),
+	BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+
+	BPF_MOV64_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	/* r0 = 0 */
+	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_0, -2),
+
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+},
+{
+	"mod32 negative 1",
+	.insns = {
+	BPF_MOV32_IMM(BPF_REG_0, 1),
+	BPF_MOV32_IMM(BPF_REG_1, -1),
+	BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+
+	BPF_MOV64_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	/* r0 = 1 */
+	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_0, -2),
+
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+	.retval = 1,
+},
+{
+	"mod32 negative 2",
+	.insns = {
+	BPF_MOV32_IMM(BPF_REG_0, -4),
+	BPF_MOV32_IMM(BPF_REG_1, 5),
+	BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+
+	BPF_MOV64_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	/* r0 = 2 */
+	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_0, -2),
+
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+	.retval = 2,
+},
+{
+	"mod32 negative 3",
+	.insns = {
+	BPF_MOV32_IMM(BPF_REG_0, -4),
+	BPF_MOV32_IMM(BPF_REG_1, -5),
+	BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+
+	BPF_MOV64_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	/* r0 = 1 */
+	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_0, -2),
+
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+	.retval = 1,
+},
+{
+	"mod32 variable dividend cross signed boundary, with JLT",
+	.insns = {
+	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+		    offsetof(struct xdp_md, data)),
+	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+		    offsetof(struct xdp_md, data_end)),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 7),
+
+	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
+	BPF_MOV64_IMM(BPF_REG_0, 0x7fffff10),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
+	BPF_ALU32_IMM(BPF_MOD, BPF_REG_1, 0x800000ff),
+
+	BPF_MOV32_IMM(BPF_REG_0, 0x80000000),
+	BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 1),
+	/* r1 unsigned 32-bit range is [0x7fffff10, 0x8000000f] */
+	BPF_JMP32_REG(BPF_JLT, BPF_REG_0, BPF_REG_1, -2),
+
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+},
+{
+	"mod32 variable dividend cross signed boundary, with JSLT",
+	.insns = {
+	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+		    offsetof(struct xdp_md, data)),
+	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+		    offsetof(struct xdp_md, data_end)),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
+	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 7),
+
+	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
+	BPF_MOV64_IMM(BPF_REG_0, 0x7fffff10),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
+	BPF_ALU32_IMM(BPF_MOD, BPF_REG_1, 0x800000ff),
+
+	BPF_MOV32_IMM(BPF_REG_0, 0x80000000),
+	BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 1),
+	/* r1 signed 32-bit range is [S32_MIN, S32_MAX] */
+	BPF_JMP32_REG(BPF_JSLT, BPF_REG_0, BPF_REG_1, -2),
+
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.errstr = "BPF program is too large.",
+	.result = REJECT,
+	.prog_type = BPF_PROG_TYPE_XDP,
+},
-- 
2.30.2


* Re: [PATCH bpf-next v2 1/2] bpf: add bound tracking for BPF_MOD
  2023-03-23 17:16   ` Alexei Starovoitov
@ 2023-03-25  1:57     ` Xu Kuohai
From: Xu Kuohai @ 2023-03-25  1:57 UTC
  To: Alexei Starovoitov
  Cc: bpf, open list:KERNEL SELFTEST FRAMEWORK, LKML,
	Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Mykola Lysenko,
	Shuah Khan

On 3/24/2023 1:16 AM, Alexei Starovoitov wrote:
> On Thu, Mar 23, 2023 at 8:59 AM Xu Kuohai <xukuohai@huaweicloud.com> wrote:
>>
>> From: Xu Kuohai <xukuohai@huawei.com>
>>
>> dst_reg is marked as unknown when a BPF_MOD instruction is verified,
>> causing the following BPF prog to be incorrectly rejected.
>>
>> 0: r0 = 0
>> 1: r0 %= 1   // r0 is marked as unknown
>> 2: r1 = 0
>> 3: r1 += 1
>> 4: if r1 < r0 goto pc-2 // verifier treats the loop as unbounded
>> 5: exit
>>
>> To teach the verifier to accept the above prog, this patch adds bound
>> tracking for BPF_MOD.
>>
>> The approach is based on the following rules:
>>
>> 1. BPF_MOD is unsigned;
>>
>> 2. For an unsigned constant divisor x:
>>
>>   a. when x != 0, the resulting dst_reg value is in the range [0, x - 1],
>>      and if no wrapping occurs, the result can be narrowed further to
>>      [umin mod x, umax mod x];
>>
>>   b. when x == 0, dst_reg is truncated to 32 bits by mod32 or remains
>>      unchanged by mod64.
>>
>> Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
> 
> Same Nack as before.

Sorry, I did not receive a Nack for this patch before.

> You haven't answered _why_ anyone needs it.

No idea. I simply believe it's good to have a tighter estimate of the result.

