From: Brendan Jackman <jackmanb@google.com>
To: Jiri Olsa <jolsa@redhat.com>
Cc: bpf <bpf@vger.kernel.org>, Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Andrii Nakryiko <andrii.nakryiko@gmail.com>,
	KP Singh <kpsingh@chromium.org>,
	Florent Revest <revest@chromium.org>,
	John Fastabend <john.fastabend@gmail.com>,
	LKML <linux-kernel@vger.kernel.org>,
	"Naveen N. Rao" <naveen.n.rao@linux.ibm.com>,
	Sandipan Das <sandipan@linux.ibm.com>
Subject: Re: [BUG soft lockup] Re: [PATCH bpf-next v3] bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH
Date: Mon, 28 Jun 2021 11:21:42 +0200
Message-ID: <CA+i-1C0DAr5ecAOV06_fqeCooic4AF=71ur63HJ6ddbj9ceDpQ@mail.gmail.com>
In-Reply-To: <YNiadhIbJBBPeOr6@krava>

On Sun, 27 Jun 2021 at 17:34, Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Tue, Feb 02, 2021 at 01:50:02PM +0000, Brendan Jackman wrote:
>
> SNIP
>
> > diff --git a/tools/testing/selftests/bpf/verifier/atomic_bounds.c b/tools/testing/selftests/bpf/verifier/atomic_bounds.c
> > new file mode 100644
> > index 000000000000..e82183e4914f
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/verifier/atomic_bounds.c
> > @@ -0,0 +1,27 @@
> > +{
> > +     "BPF_ATOMIC bounds propagation, mem->reg",
> > +     .insns = {
> > +             /* a = 0; */
> > +             /*
> > +              * Note this is implemented with two separate instructions,
> > +              * where you might think one would suffice:
> > +              *
> > +              * BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
> > +              *
> > +              * This is because BPF_ST_MEM doesn't seem to set the stack slot
> > +              * type to 0 when storing an immediate.
> > +              */
> > +             BPF_MOV64_IMM(BPF_REG_0, 0),
> > +             BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
> > +             /* b = atomic_fetch_add(&a, 1); */
> > +             BPF_MOV64_IMM(BPF_REG_1, 1),
> > +             BPF_ATOMIC_OP(BPF_DW, BPF_ADD | BPF_FETCH, BPF_REG_10, BPF_REG_1, -8),
> > +             /* Verifier should be able to tell that this infinite loop isn't reachable. */
> > +             /* if (b) while (true) continue; */
> > +             BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, -1),
> > +             BPF_EXIT_INSN(),
> > +     },
> > +     .result = ACCEPT,
> > +     .result_unpriv = REJECT,
> > +     .errstr_unpriv = "back-edge",
> > +},
> >
> > base-commit: 61ca36c8c4eb3bae35a285b1ae18c514cde65439
> > --
> > 2.30.0.365.g02bc693789-goog
> >
>
> hi,
> I tracked a soft lockup on powerpc down to this test:
>
>         [root@ibm-p9z-07-lp1 bpf]# ./test_verifier 25
>         #25/u BPF_ATOMIC bounds propagation, mem->reg SKIP
>         #25/p BPF_ATOMIC bounds propagation, mem->reg
>
>         Message from syslogd@ibm-p9z-07-lp1 at Jun 27 11:24:34 ...
>          kernel:watchdog: BUG: soft lockup - CPU#2 stuck for 26s! [test_verifier:1055]
>
>         Message from syslogd@ibm-p9z-07-lp1 at Jun 27 11:25:04 ...
>          kernel:watchdog: BUG: soft lockup - CPU#2 stuck for 48s! [test_verifier:1055]
>
> please check the console output below... it looks like the verifier
> allowed the loop to happen for some reason on powerpc. Any idea?
>
> I'm on the latest bpf-next/master; I can send the config if needed.
>
> thanks,
> jirka
>
>
> ---
> ibm-p9z-07-lp1 login: [  184.108655] watchdog: BUG: soft lockup - CPU#2 stuck for 26s! [test_verifier:1055]
> [  184.108679] Modules linked in: snd_seq_dummy(E) snd_hrtimer(E) snd_seq(E) snd_seq_device(E) snd_timer(E) snd(E) soundcore(E) bonding(E) tls(E) rfkill(E) pseries_rng(E) drm(E) fuse(E) drm_panel_orientation_quirks(E) xfs(E) libcrc32c(E) sd_mod(E) t10_pi(E) ibmvscsi(E) ibmveth(E) scsi_transport_srp(E) vmx_crypto(E) dm_mirror(E) dm_region_hash(E) dm_log(E) dm_mod(E)
> [  184.108722] CPU: 2 PID: 1055 Comm: test_verifier Tainted: G            E     5.13.0-rc3+ #3
> [  184.108728] NIP:  c00800000131314c LR: c000000000c56918 CTR: c008000001313118
> [  184.108733] REGS: c0000000119ef820 TRAP: 0900   Tainted: G            E      (5.13.0-rc3+)
> [  184.108739] MSR:  8000000000009033 <SF,EE,ME,IR,DR,RI,LE>  CR: 44222840  XER: 20040003
> [  184.108752] CFAR: c008000001313150 IRQMASK: 0
> [  184.108752] GPR00: c000000000c5671c c0000000119efac0 c000000002a08400 0000000000000001
> [  184.108752] GPR04: c0080000010c0048 ffffffffffffffff 0000000001f3f8ec 0000000000000008
> [  184.108752] GPR08: 0000000000000000 c0000000119efae8 0000000000000001 49adb8fcb8417937
> [  184.108752] GPR12: c008000001313118 c00000001ecae400 0000000000000000 0000000000000000
> [  184.108752] GPR16: 0000000000000000 0000000000000000 0000000000000000 c0000000021cf6f8
> [  184.108752] GPR20: 0000000000000000 c0000000119efc34 c0000000119efc30 c0080000010c0048
> [  184.108752] GPR24: c00000000a1dc100 0000000000000001 c000000011fadc80 c0000000021cf638
> [  184.108752] GPR28: c0080000010c0000 0000000000000001 c0000000021cf638 c0000000119efaf0
> [  184.108812] NIP [c00800000131314c] bpf_prog_a2eb9104e5e8a5bf+0x34/0xcee8
> [  184.108819] LR [c000000000c56918] bpf_test_run+0x2f8/0x470
> [  184.108826] Call Trace:
> [  184.108828] [c0000000119efac0] [c0000000119efb30] 0xc0000000119efb30 (unreliable)
> [  184.108835] [c0000000119efb30] [c000000000c5671c] bpf_test_run+0xfc/0x470
> [  184.108841] [c0000000119efc10] [c000000000c57b6c] bpf_prog_test_run_skb+0x38c/0x660
> [  184.108848] [c0000000119efcb0] [c00000000035de6c] __sys_bpf+0x46c/0xd60
> [  184.108854] [c0000000119efd90] [c00000000035e810] sys_bpf+0x30/0x40
> [  184.108859] [c0000000119efdb0] [c00000000002ea34] system_call_exception+0x144/0x280
> [  184.108866] [c0000000119efe10] [c00000000000c570] system_call_vectored_common+0xf0/0x268
> [  184.108874] --- interrupt: 3000 at 0x7fff8bb3ef24
> [  184.108878] NIP:  00007fff8bb3ef24 LR: 0000000000000000 CTR: 0000000000000000
> [  184.108883] REGS: c0000000119efe80 TRAP: 3000   Tainted: G            E      (5.13.0-rc3+)
> [  184.108887] MSR:  800000000280f033 <SF,VEC,VSX,EE,PR,FP,ME,IR,DR,RI,LE>  CR: 28000848  XER: 00000000
> [  184.108903] IRQMASK: 0
> [  184.108903] GPR00: 0000000000000169 00007fffe4577710 00007fff8bc27200 000000000000000a
> [  184.108903] GPR04: 00007fffe45777b8 0000000000000080 0000000000000001 0000000000000008
> [  184.108903] GPR08: 000000000000000a 0000000000000000 0000000000000000 0000000000000000
> [  184.108903] GPR12: 0000000000000000 00007fff8be1c400 0000000000000000 0000000000000000
> [  184.108903] GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
> [  184.108903] GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
> [  184.108903] GPR24: 0000000000000000 0000000000000000 0000000000000000 000000001000d1d0
> [  184.108903] GPR28: 0000000000000002 00007fffe4578128 00007fffe45782c0 00007fffe4577710
> [  184.108960] NIP [00007fff8bb3ef24] 0x7fff8bb3ef24
> [  184.108964] LR [0000000000000000] 0x0
> [  184.108967] --- interrupt: 3000
> [  184.108970] Instruction dump:
> [  184.108974] 60000000 f821ff91 fbe10068 3be10030 39000000 f91ffff8 38600001 393ffff8
> [  184.108985] 7d4048a8 7d4a1a14 7d4049ad 4082fff4 <28230000> 4082fffc 60000000 ebe10068

Hmm, is the test prog from atomic_bounds.c getting JITed there (my
dumb guess at what '0xc0000000119efb30 (unreliable)' means)? That
shouldn't happen; the JIT should bail out and log 'eBPF filter atomic
op code %02x (@%d) unsupported\n' to dmesg instead. I wonder if I
missed something in commit 91c960b0056 ("bpf: Rename BPF_XADD and
prepare to encode other atomics in .imm"). Any idea whether this test
was ever passing on PowerPC?
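
For anyone catching up on the thread, here's roughly what the test
program does, as C (an illustrative sketch only; the function name is
made up and __sync_fetch_and_add stands in for the BPF_ATOMIC |
BPF_FETCH instruction):

  #include <stdint.h>

  /* Illustrative C equivalent of the atomic_bounds test program. */
  void atomic_bounds_equiv(void)
  {
          uint64_t a = 0;                           /* stack slot, known to be 0 */
          uint64_t b = __sync_fetch_and_add(&a, 1); /* fetch returns the OLD value: 0 */

          if (b)           /* with bounds propagation the verifier knows b == 0... */
                  for (;;) /* ...so the infinite loop is provably unreachable */
                          ;
  }

If the loop actually spins at runtime, the fetch must have produced a
nonzero value, which would point at the arch's handling of BPF_FETCH
rather than at the verifier.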
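
The dmesg message I mean comes from the per-arch guard that commit
91c960b0056 added for JITs that only implement the old BPF_XADD
semantics. From memory it's shaped roughly like this (a sketch, not
the exact powerpc code; the error value and arguments may differ, and
'insn' and 'i' are the JIT loop's current instruction and index):

  /*
   * Since 91c960b0056, .imm encodes the atomic operation and BPF_FETCH
   * is OR'd in for the fetching variants, so a strict "imm must be
   * plain BPF_ADD" check also rejects BPF_ADD | BPF_FETCH.
   */
  case BPF_STX | BPF_ATOMIC | BPF_W:
  case BPF_STX | BPF_ATOMIC | BPF_DW:
          if (insn->imm != BPF_ADD) {
                  pr_err_ratelimited(
                          "eBPF filter atomic op code %02x (@%d) unsupported\n",
                          insn->imm, i);
                  return -EOPNOTSUPP;
          }
          /* ...emit the arch's atomic add sequence... */
          break;

If that guard fires, the JIT fails and the prog either runs in the
interpreter (which does handle BPF_FETCH) or fails to load with
JIT-always-on, so either way we shouldn't end up spinning.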

Thread overview: 16+ messages
2021-02-02 13:50 [PATCH bpf-next v3] bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH Brendan Jackman
2021-02-03 17:07 ` Yonghong Song
2021-02-03 17:37   ` Alexei Starovoitov
2021-06-27 15:34 ` [BUG soft lockup] " Jiri Olsa
2021-06-28  9:21   ` Brendan Jackman [this message]
2021-06-29 14:10     ` Jiri Olsa
2021-06-29 16:04       ` Jiri Olsa
2021-06-29 16:25         ` Brendan Jackman
2021-06-29 16:41           ` Jiri Olsa
2021-06-29 21:08             ` Jiri Olsa
2021-06-30 10:34               ` Brendan Jackman
     [not found]                 ` <YNxmwZGtnqiXGnF0@krava>
2021-07-01  8:18                   ` Brendan Jackman
2021-07-01 10:16                     ` Jiri Olsa
2021-07-01 11:02                     ` Naveen N. Rao
2021-07-01 14:15                       ` Jiri Olsa
2021-07-02  9:44                         ` Brendan Jackman
