* [bpf-next PATCH 1/2] bpf, verifier: Remove redundant var_off.value ops in scalar known reg cases
@ 2020-09-24 18:45 John Fastabend
  2020-09-24 18:45 ` [bpf-next PATCH 2/2] bpf: Add AND verifier test case where 32bit and 64bit bounds differ John Fastabend
  2020-09-25 23:56 ` [bpf-next PATCH 1/2] bpf, verifier: Remove redundant var_off.value ops in scalar known reg cases Alexei Starovoitov
  0 siblings, 2 replies; 4+ messages in thread
From: John Fastabend @ 2020-09-24 18:45 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, john.fastabend

In the BPF_AND and BPF_OR alu cases we have this pattern when the src
and dst tnums are constants.

 1 dst_reg->var_off = tnum_[op](dst_reg->var_off, src_reg.var_off)
 2 scalar32_min_max_[op]
 3       if (known) return
 4 scalar_min_max_[op]
 5       if (known)
 6          __mark_reg_known(dst_reg,
                   dst_reg->var_off.value [op] src_reg.var_off.value)

The result is that in 1 we calculate the var_off value and store it in
the dst_reg. Then in 6 we duplicate this logic, doing the op again on
the value.

The duplication comes from the tnum_[op] handlers because they have
already done the value calculation. For example, this is tnum_and():

 struct tnum tnum_and(struct tnum a, struct tnum b)
 {
	u64 alpha, beta, v;

	alpha = a.value | a.mask;
	beta = b.value | b.mask;
	v = a.value & b.value;
	return TNUM(v, alpha & beta & ~v);
 }
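
To see the equivalence concretely, here is a small standalone sketch
(ordinary userspace C, not verifier code; struct tnum and tnum_and() are
copied locally) showing that once var_off holds the tnum_and() result,
redoing the op against src's value cannot change it:

 #include <assert.h>
 #include <stdint.h>

 struct tnum { uint64_t value; uint64_t mask; };
 #define TNUM(_v, _m) ((struct tnum){ .value = (_v), .mask = (_m) })

 static struct tnum tnum_and(struct tnum a, struct tnum b)
 {
	uint64_t alpha = a.value | a.mask;
	uint64_t beta = b.value | b.mask;
	uint64_t v = a.value & b.value;

	return TNUM(v, alpha & beta & ~v);
 }

 int main(void)
 {
	/* Both registers are known constants, i.e. mask == 0 */
	struct tnum dst = TNUM(0xff00, 0);
	struct tnum src = TNUM(0x0ff0, 0);
	struct tnum res = tnum_and(dst, src);

	/* Step 1 above already computed the op on the values... */
	assert(res.value == (0xff00 & 0x0ff0));
	/* ...so the op in step 6 was a no-op: (d & s) & s == d & s */
	assert((res.value & src.value) == res.value);
	return 0;
 }

The same holds for BPF_OR, since (d | s) | s == d | s.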

So let's remove the redundant op calculation. It's confusing for readers
and unnecessary. It's also harmless because those ops have the property
r1 & r1 = r1 and r1 | r1 = r1, so redoing the op on the already-computed
value gives the same result.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 kernel/bpf/verifier.c |    6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 15ab889b0a3f..33fcc18fdd39 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5818,8 +5818,7 @@ static void scalar_min_max_and(struct bpf_reg_state *dst_reg,
 	u64 umax_val = src_reg->umax_value;
 
 	if (src_known && dst_known) {
-		__mark_reg_known(dst_reg, dst_reg->var_off.value &
-					  src_reg->var_off.value);
+		__mark_reg_known(dst_reg, dst_reg->var_off.value);
 		return;
 	}
 
@@ -5889,8 +5888,7 @@ static void scalar_min_max_or(struct bpf_reg_state *dst_reg,
 	u64 umin_val = src_reg->umin_value;
 
 	if (src_known && dst_known) {
-		__mark_reg_known(dst_reg, dst_reg->var_off.value |
-					  src_reg->var_off.value);
+		__mark_reg_known(dst_reg, dst_reg->var_off.value);
 		return;
 	}
 



* [bpf-next PATCH 2/2] bpf: Add AND verifier test case where 32bit and 64bit bounds differ
  2020-09-24 18:45 [bpf-next PATCH 1/2] bpf, verifier: Remove redundant var_off.value ops in scalar known reg cases John Fastabend
@ 2020-09-24 18:45 ` John Fastabend
  2020-09-25 23:56 ` [bpf-next PATCH 1/2] bpf, verifier: Remove redundant var_off.value ops in scalar known reg cases Alexei Starovoitov
  1 sibling, 0 replies; 4+ messages in thread
From: John Fastabend @ 2020-09-24 18:45 UTC (permalink / raw)
  To: ast, daniel; +Cc: netdev, john.fastabend

If we AND two values together that are known in the 32bit subregs, but
not known in the 64bit registers, we rely on the tnum value to report
that the 32bit subreg is known, and do not use mark_reg_known() directly
from scalar32_min_max_and().

Add an AND test to cover the case with known 32bit subreg, but unknown
64bit reg.
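
As a sanity check, here is a standalone sketch (plain userspace C, with
tnum_and() copied locally; the register state after the shift and add is
written out by hand, and the 32-bit immediate is assumed to sign-extend
as usual for ALU64 ops) of the bounds the AND in the test below
produces:

 #include <assert.h>
 #include <stdint.h>

 struct tnum { uint64_t value; uint64_t mask; };
 #define TNUM(_v, _m) ((struct tnum){ .value = (_v), .mask = (_m) })

 static struct tnum tnum_and(struct tnum a, struct tnum b)
 {
	uint64_t alpha = a.value | a.mask;
	uint64_t beta = b.value | b.mask;
	uint64_t v = a.value & b.value;

	return TNUM(v, alpha & beta & ~v);
 }

 int main(void)
 {
	/* r0 after "r0 = get_prandom_u32(); r0 <<= 32; r0 += 1":
	 * low word is a known 1, high word is fully unknown.
	 */
	struct tnum r0 = TNUM(1, 0xffffffff00000000ULL);
	/* ALU64 immediate 0xFFFF1234, sign-extended to 64 bits */
	struct tnum imm = TNUM(0xffffffffffff1234ULL, 0);
	struct tnum res = tnum_and(r0, imm);

	/* Low 32 bits become a known constant 0 (bit 0 of the
	 * immediate is clear, so the +1 is masked out)...
	 */
	assert((uint32_t)res.value == 0 && (uint32_t)res.mask == 0);
	/* ...while the upper 32 bits stay unknown, so the 64-bit
	 * register as a whole is not a constant.
	 */
	assert((res.mask >> 32) == 0xffffffffULL);
	return 0;
 }

With the low word known to be 0, the verifier can prove the JMP32 JLT
below is always taken and treat the out-of-bounds load as dead code,
hence ACCEPT.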

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 tools/testing/selftests/bpf/verifier/and.c |   16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/tools/testing/selftests/bpf/verifier/and.c b/tools/testing/selftests/bpf/verifier/and.c
index d781bc86e100..ca8fdb1b3f01 100644
--- a/tools/testing/selftests/bpf/verifier/and.c
+++ b/tools/testing/selftests/bpf/verifier/and.c
@@ -48,3 +48,19 @@
 	.result = REJECT,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
+{
+	"check known subreg with unknown reg",
+	.insns = {
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 32),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xFFFF1234),
+	/* Upper bits are unknown but the AND above masks out the 1, zeroing the lower bits */
+	BPF_JMP32_IMM(BPF_JLT, BPF_REG_0, 1, 1),
+	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, 512),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 0
+},



* Re: [bpf-next PATCH 1/2] bpf, verifier: Remove redundant var_off.value ops in scalar known reg cases
  2020-09-24 18:45 [bpf-next PATCH 1/2] bpf, verifier: Remove redundant var_off.value ops in scalar known reg cases John Fastabend
  2020-09-24 18:45 ` [bpf-next PATCH 2/2] bpf: Add AND verifier test case where 32bit and 64bit bounds differ John Fastabend
@ 2020-09-25 23:56 ` Alexei Starovoitov
  2020-09-26  4:36   ` John Fastabend
  1 sibling, 1 reply; 4+ messages in thread
From: Alexei Starovoitov @ 2020-09-25 23:56 UTC (permalink / raw)
  To: John Fastabend
  Cc: Alexei Starovoitov, Daniel Borkmann, Network Development, bpf

On Thu, Sep 24, 2020 at 11:45 AM John Fastabend
<john.fastabend@gmail.com> wrote:
>
> In the BPF_AND and BPF_OR alu cases we have this pattern when the src
> and dst tnums are constants.
>
>  1 dst_reg->var_off = tnum_[op](dst_reg->var_off, src_reg.var_off)
>  2 scalar32_min_max_[op]
>  3       if (known) return
>  4 scalar_min_max_[op]
>  5       if (known)
>  6          __mark_reg_known(dst_reg,
>                    dst_reg->var_off.value [op] src_reg.var_off.value)
>
> The result is that in 1 we calculate the var_off value and store it in
> the dst_reg. Then in 6 we duplicate this logic, doing the op again on
> the value.
>
> The duplication comes from the tnum_[op] handlers because they have
> already done the value calculation. For example, this is tnum_and():
>
>  struct tnum tnum_and(struct tnum a, struct tnum b)
>  {
>         u64 alpha, beta, v;
>
>         alpha = a.value | a.mask;
>         beta = b.value | b.mask;
>         v = a.value & b.value;
>         return TNUM(v, alpha & beta & ~v);
>  }
>
> So let's remove the redundant op calculation. It's confusing for readers
> and unnecessary. It's also harmless because those ops have the property
> r1 & r1 = r1 and r1 | r1 = r1, so redoing the op on the already-computed
> value gives the same result.
>
> Signed-off-by: John Fastabend <john.fastabend@gmail.com>

Applied. Thanks for the follow up.
In the future please always cc bpf@vger for two reasons:
- to get proper 'Link:' integrated in git commit
- to get them into a new instance of
https://patchwork.kernel.org/project/bpf/list
  which we will start using soon to send automatic 'applied' emails.


* Re: [bpf-next PATCH 1/2] bpf, verifier: Remove redundant var_off.value ops in scalar known reg cases
  2020-09-25 23:56 ` [bpf-next PATCH 1/2] bpf, verifier: Remove redundant var_off.value ops in scalar known reg cases Alexei Starovoitov
@ 2020-09-26  4:36   ` John Fastabend
  0 siblings, 0 replies; 4+ messages in thread
From: John Fastabend @ 2020-09-26  4:36 UTC (permalink / raw)
  To: Alexei Starovoitov, John Fastabend
  Cc: Alexei Starovoitov, Daniel Borkmann, Network Development, bpf

Alexei Starovoitov wrote:
> On Thu, Sep 24, 2020 at 11:45 AM John Fastabend
> <john.fastabend@gmail.com> wrote:
> >
> > In the BPF_AND and BPF_OR alu cases we have this pattern when the src
> > and dst tnums are constants.
> >
> >  1 dst_reg->var_off = tnum_[op](dst_reg->var_off, src_reg.var_off)
> >  2 scalar32_min_max_[op]
> >  3       if (known) return
> >  4 scalar_min_max_[op]
> >  5       if (known)
> >  6          __mark_reg_known(dst_reg,
> >                    dst_reg->var_off.value [op] src_reg.var_off.value)
> >
> > The result is that in 1 we calculate the var_off value and store it in
> > the dst_reg. Then in 6 we duplicate this logic, doing the op again on
> > the value.
> >
> > The duplication comes from the tnum_[op] handlers because they have
> > already done the value calculation. For example, this is tnum_and():
> >
> >  struct tnum tnum_and(struct tnum a, struct tnum b)
> >  {
> >         u64 alpha, beta, v;
> >
> >         alpha = a.value | a.mask;
> >         beta = b.value | b.mask;
> >         v = a.value & b.value;
> >         return TNUM(v, alpha & beta & ~v);
> >  }
> >
> > So let's remove the redundant op calculation. It's confusing for readers
> > and unnecessary. It's also harmless because those ops have the property
> > r1 & r1 = r1 and r1 | r1 = r1, so redoing the op on the already-computed
> > value gives the same result.
> >
> > Signed-off-by: John Fastabend <john.fastabend@gmail.com>
> 
> Applied. Thanks for the follow up.
> In the future please always cc bpf@vger for two reasons:
> - to get proper 'Link:' integrated in git commit
> - to get them into a new instance of
> https://patchwork.kernel.org/project/bpf/list

+1

>   which we will start using soon to send automatic 'applied' emails.


Apologies, I updated some scripts and unfortunately a typo dropped a '-'
and cut off bpf@vger from the CC list. I also just used the scripts to
land two more patches without bpf@vger; happy to resend with CC included
if folks want. Sorry for the extra work/noise.

Thanks.

