From: John Fastabend <john.fastabend@gmail.com>
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>,
	John Fastabend <john.fastabend@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>, bpf <bpf@vger.kernel.org>,
	Yonghong Song <yhs@fb.com>, Alexei Starovoitov <ast@kernel.org>
Subject: Re: [bpf PATCH v3] bpf: verifier, do_refine_retval_range may clamp umin to 0 incorrectly
Date: Tue, 04 Feb 2020 11:55:37 -0800	[thread overview]
Message-ID: <5e39cc3957bd1_63882ad0d49345c0c5@john-XPS-13-9370.notmuch> (raw)
In-Reply-To: <CAADnVQ+m70Pzs33mAhsF0JEx+LVoXrTZyC-szhyk+cNo71GgXw@mail.gmail.com>

Alexei Starovoitov wrote:
> On Fri, Jan 31, 2020 at 9:16 AM John Fastabend <john.fastabend@gmail.com> wrote:
> >
> > Also, I don't mind building a pseudo instruction here for sign extension,
> > but it's not clear to me why we are getting different instruction
> > selections; why is sext being chosen in your case?
> 
> Sign extension has to be there if jmp64 is used.
> So the difference is due to -mcpu=v2 vs -mcpu=v3
> v2 does alu32, but not jmp32
> v3 does both.
> By default selftests use -mcpu=probe, which
> detects v2/v3 depending on the running kernel.
> 
> llc -mattr=dwarfris -march=bpf -mcpu=v3  -mattr=+alu32
> ;       usize = bpf_get_stack(ctx, raw_data, max_len, BPF_F_USER_STACK);
>       48:       bf 61 00 00 00 00 00 00 r1 = r6
>       49:       bf 72 00 00 00 00 00 00 r2 = r7
>       50:       b4 03 00 00 20 03 00 00 w3 = 800
>       51:       b7 04 00 00 00 01 00 00 r4 = 256
>       52:       85 00 00 00 43 00 00 00 call 67
>       53:       bc 08 00 00 00 00 00 00 w8 = w0
> ;       if (usize < 0)
>       54:       c6 08 16 00 00 00 00 00 if w8 s< 0 goto +22 <LBB0_6>
> ;       ksize = bpf_get_stack(ctx, raw_data + usize, max_len - usize, 0);
>       55:       1c 89 00 00 00 00 00 00 w9 -= w8
>       56:       bc 81 00 00 00 00 00 00 w1 = w8
>       57:       67 01 00 00 20 00 00 00 r1 <<= 32
>       58:       77 01 00 00 20 00 00 00 r1 >>= 32
>       59:       bf 72 00 00 00 00 00 00 r2 = r7
>       60:       0f 12 00 00 00 00 00 00 r2 += r1
>       61:       bf 61 00 00 00 00 00 00 r1 = r6
>       62:       bc 93 00 00 00 00 00 00 w3 = w9
>       63:       b7 04 00 00 00 00 00 00 r4 = 0
>       64:       85 00 00 00 43 00 00 00 call 67
> ;       if (ksize < 0)
>       65:       c6 00 0b 00 00 00 00 00 if w0 s< 0 goto +11 <LBB0_6>
> 
> llc -mattr=dwarfris -march=bpf -mcpu=v2  -mattr=+alu32
> ;       usize = bpf_get_stack(ctx, raw_data, max_len, BPF_F_USER_STACK);
>       48:       bf 61 00 00 00 00 00 00 r1 = r6
>       49:       bf 72 00 00 00 00 00 00 r2 = r7
>       50:       b4 03 00 00 20 03 00 00 w3 = 800
>       51:       b7 04 00 00 00 01 00 00 r4 = 256
>       52:       85 00 00 00 43 00 00 00 call 67
>       53:       bc 08 00 00 00 00 00 00 w8 = w0
> ;       if (usize < 0)
>       54:       bc 81 00 00 00 00 00 00 w1 = w8
>       55:       67 01 00 00 20 00 00 00 r1 <<= 32
>       56:       c7 01 00 00 20 00 00 00 r1 s>>= 32
>       57:       c5 01 19 00 00 00 00 00 if r1 s< 0 goto +25 <LBB0_6>
> ;       ksize = bpf_get_stack(ctx, raw_data + usize, max_len - usize, 0);
>       58:       1c 89 00 00 00 00 00 00 w9 -= w8
>       59:       bc 81 00 00 00 00 00 00 w1 = w8
>       60:       67 01 00 00 20 00 00 00 r1 <<= 32
>       61:       77 01 00 00 20 00 00 00 r1 >>= 32
>       62:       bf 72 00 00 00 00 00 00 r2 = r7
>       63:       0f 12 00 00 00 00 00 00 r2 += r1
>       64:       bf 61 00 00 00 00 00 00 r1 = r6
>       65:       bc 93 00 00 00 00 00 00 w3 = w9
>       66:       b7 04 00 00 00 00 00 00 r4 = 0
>       67:       85 00 00 00 43 00 00 00 call 67
> ;       if (ksize < 0)
>       68:       bc 01 00 00 00 00 00 00 w1 = w0
>       69:       67 01 00 00 20 00 00 00 r1 <<= 32
>       70:       c7 01 00 00 20 00 00 00 r1 s>>= 32
>       71:       c5 01 0b 00 00 00 00 00 if r1 s< 0 goto +11 <LBB0_6>
> 
> zext is there in both cases and it will be optimized away with your llvm patch.
> So please send it. Don't delay :)

LLVM patch is here: https://reviews.llvm.org/D73985

With the updated LLVM I can pass selftests using the above fix plus the
additional patch below, which gets tighter bounds on 32-bit registers. So
going forward I think we need to review the LLVM patch, commit it assuming
it looks good, and then proceed with this series.

---

bpf: verifier, use tighter smax bound in coerce_reg_to_size when possible
    
When we do a coerce_reg_to_size() we lose possibly valid upper bounds in
the case where (a) smax is non-negative and (b) smax is less than the max
value representable in the new reg size. If both (a) and (b) are satisfied
we can keep the smax bound. (a) is required to ensure we do not drop the
sign bit, and (b) is required to ensure the previously set bits fit inside
the new reg width.
    
Signed-off-by: John Fastabend <john.fastabend@gmail.com>

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1cc945d..e5349d6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2805,7 +2805,8 @@ static void coerce_reg_to_size(struct bpf_reg_state *reg, int size)
 		reg->umax_value = mask;
 	}
 	reg->smin_value = reg->umin_value;
-	reg->smax_value = reg->umax_value;
+	if (reg->smax_value < 0 || reg->smax_value > reg->umax_value)
+		reg->smax_value = reg->umax_value;
 }
 
 static bool bpf_map_is_rdonly(const struct bpf_map *map)

