BPF Archive on lore.kernel.org
From: Wenbo Zhang <ethercflow@gmail.com>
To: Yonghong Song <yhs@fb.com>
Cc: bpf <bpf@vger.kernel.org>, Andrii Nakryiko <andrii.nakryiko@gmail.com>
Subject: Re: tp_btf: if (!struct->pointer_member) always actually false although pointer_member == NULL
Date: Wed, 1 Jul 2020 13:05:23 +0800
Message-ID: <CABtjQmZPFsqVTZ=ofOHq6JNXBjcYdBSnGwXi4xqjwr-tK0bP+g@mail.gmail.com> (raw)
In-Reply-To: <79dbb7c0-449d-83eb-5f4f-7af0cc269168@fb.com>

> Thanks for reporting.
> The assembly code looks correct, so this is mostly related to the kernel.
> Will take a further look.

You're welcome. I've noticed your [PATCH bpf 1/2] "bpf: fix an
incorrect branch elimination by verifier".
I'll try it with the latest bpf tree later. Thank you all.

> For the llvm crash below, it should have been fixed in the latest llvm trunk.
> Please give it a try. Thanks!

Got it, thanks. I'll try it later.
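For context on why the two checks in the quoted program behave differently: `BPF_CORE_READ(q, elevator)` reads the member through a BPF helper call rather than a direct load, so the NULL test runs on the value the helper returned. The following is a rough, hand-written sketch of what the macro reduces to, not libbpf's exact expansion (the real macro in <bpf/bpf_core_read.h> also applies a CO-RE relocation to the member offset):

```c
/* Sketch of `if (!BPF_CORE_READ(q, elevator))` (hand-written
 * approximation; the real macro also relocates the member offset
 * via __builtin_preserve_access_index):
 */
struct elevator_queue *e = NULL;

/* Read q->elevator through the bpf_probe_read() helper instead of a
 * direct PTR_TO_BTF_ID load, so the verifier cannot assume the loaded
 * value is a trusted non-NULL pointer and keeps the branch. */
bpf_probe_read(&e, sizeof(e), &q->elevator);
if (!e)
        bpf_printk("FROM CORE READ: noop\n");
```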

On Tue, Jun 30, 2020 at 4:47 AM Yonghong Song <yhs@fb.com> wrote:
>
>
>
> > On 6/28/20 10:25 PM, Wenbo Zhang wrote:
> > I found that in a tp_btf program, directly accessing a struct's
> > pointer member behaves inconsistently with BPF_CORE_READ. For
> > example:
> >
> > #include "vmlinux.h"
> > #include <bpf/bpf_helpers.h>
> > #include <bpf/bpf_tracing.h>
> > #include <bpf/bpf_core_read.h>
> >
> > SEC("tp_btf/block_rq_issue")
> > int BPF_PROG(tp_btf__block_rq_issue, struct request_queue *q,
> >      struct request *rq)
> > {
> >          /* After echo none > /sys/block/$dev/queue/scheduler,
> >           * the $dev's q->elevator will be set to NULL.
> >           */
> >          if (!q->elevator)
> >                  bpf_printk("direct access: noop\n");
> >          if (!BPF_CORE_READ(q, elevator))
> >                  bpf_printk("FROM CORE READ: noop\n");
> >          return 0;
> > }
> >
> > Although its value is NULL, from trace_pipe I can only see
> >
> >> FROM CORE READ: noop
> >
> > So it seems `if (!q->elevator)` always evaluates to false.
>
> Thanks for reporting.
> The assembly code looks correct, so this is mostly related to the kernel.
> Will take a further look.
>
> For the llvm crash below, it should have been fixed in the latest llvm trunk.
> Please give it a try. Thanks!
>
> >
> > I tested it with kernels 5.7.0-rc7+ and 5.8.0-rc1+; both have this problem.
> > clang version: 10.0.0-4ubuntu1~18.04.1
> >
> > Steps to reproduce:
> > 1. Load and attach this BPF program;
> > 2. Run `cat /sys/kernel/debug/tracing/trace_pipe` in another window;
> > 3. Run `echo none > /sys/block/sdc/queue/scheduler`;  # replace sdc
> > with your device
> > 4. Run `dd if=/dev/zero of=/dev/sdc bs=1MiB count=200 oflag=direct`;
> >
> >
> > The output of  `llvm-objdump-10 -D bio.bpf.o` is:
> >
> >
> > bio.bpf.o:      file format ELF64-BPF
> >
> >
> > Disassembly of section tp_btf/block_rq_issue:
> >
> > 0000000000000000 tp_btf__block_rq_issue:
> >         0:       b7 02 00 00 08 00 00 00 r2 = 8
> >         1:       79 11 00 00 00 00 00 00 r1 = *(u64 *)(r1 + 0)
> >         2:       bf 16 00 00 00 00 00 00 r6 = r1
> >         3:       0f 26 00 00 00 00 00 00 r6 += r2
> >         4:       79 11 08 00 00 00 00 00 r1 = *(u64 *)(r1 + 8)
> >         5:       55 01 0e 00 00 00 00 00 if r1 != 0 goto +14 <LBB0_2>
> >         6:       b7 01 00 00 00 00 00 00 r1 = 0
> >         7:       73 1a fc ff 00 00 00 00 *(u8 *)(r10 - 4) = r1
> >         8:       b7 01 00 00 6f 6f 70 0a r1 = 175140719
> >         9:       63 1a f8 ff 00 00 00 00 *(u32 *)(r10 - 8) = r1
> >        10:       18 01 00 00 63 63 65 73 00 00 00 00 73 3a 20 6e r1 = 7935406810958488419 ll
> >        12:       7b 1a f0 ff 00 00 00 00 *(u64 *)(r10 - 16) = r1
> >        13:       18 01 00 00 64 69 72 65 00 00 00 00 63 74 20 61 r1 = 6998721791186332004 ll
> >        15:       7b 1a e8 ff 00 00 00 00 *(u64 *)(r10 - 24) = r1
> >        16:       bf a1 00 00 00 00 00 00 r1 = r10
> >        17:       07 01 00 00 e8 ff ff ff r1 += -24
> >        18:       b7 02 00 00 15 00 00 00 r2 = 21
> >        19:       85 00 00 00 06 00 00 00 call 6
> >
> > 00000000000000a0 LBB0_2:
> >        20:       bf a1 00 00 00 00 00 00 r1 = r10
> >        21:       07 01 00 00 e8 ff ff ff r1 += -24
> >        22:       b7 02 00 00 08 00 00 00 r2 = 8
> >        23:       bf 63 00 00 00 00 00 00 r3 = r6
> >        24:       85 00 00 00 04 00 00 00 call 4
> >        25:       79 a1 e8 ff 00 00 00 00 r1 = *(u64 *)(r10 - 24)
> >        26:       55 01 0e 00 00 00 00 00 if r1 != 0 goto +14 <LBB0_4>
> >        27:       b7 01 00 00 0a 00 00 00 r1 = 10
> >        28:       6b 1a fc ff 00 00 00 00 *(u16 *)(r10 - 4) = r1
> >        29:       b7 01 00 00 6e 6f 6f 70 r1 = 1886351214
> >        30:       63 1a f8 ff 00 00 00 00 *(u32 *)(r10 - 8) = r1
> >        31:       18 01 00 00 45 20 52 45 00 00 00 00 41 44 3a 20 r1 = 2322243604989485125 ll
> >        33:       7b 1a f0 ff 00 00 00 00 *(u64 *)(r10 - 16) = r1
> >        34:       18 01 00 00 46 52 4f 4d 00 00 00 00 20 43 4f 52 r1 = 5931033040285291078 ll
> >        36:       7b 1a e8 ff 00 00 00 00 *(u64 *)(r10 - 24) = r1
> >        37:       bf a1 00 00 00 00 00 00 r1 = r10
> >        38:       07 01 00 00 e8 ff ff ff r1 += -24
> >        39:       b7 02 00 00 16 00 00 00 r2 = 22
> >        40:       85 00 00 00 06 00 00 00 call 6
> >
> > 0000000000000148 LBB0_4:
> >        41:       b7 00 00 00 00 00 00 00 r0 = 0
> >        42:       95 00 00 00 00 00 00 00 exit
> >
> > Disassembly of section license:
> >
> > 0000000000000000 LICENSE:
> >         0:       47      <unknown>
> >         0:       50      <unknown>
> >         0:       4c      <unknown>
> >         0:       00      <unknown>
> >
> > Disassembly of section .rodata.str1.1:
> >
> > 0000000000000000 .rodata.str1.1:
> >         0:       64 69 72 65 63 74 20 61 w9 <<= 1629516899
> >         1:       63 63 65 73 73 3a 20 6e *(u32 *)(r3 + 29541) = r6
> >         2:       6f 6f 70 0a 00 46 52 4f <unknown>
> >         3:       4d 20 43 4f 52 45 20 52 <unknown>
> >         4:       45 41 44 3a 20 6e 6f 6f <unknown>
> >         5:       70      <unknown>
> >         5:       0a      <unknown>
> >         5:       00      <unknown>
> >
> [...]
> >       709:       69 5f 72 65 6d 61 69 6e <unknown>
> >       710:       69 6e 67 00 62 69 5f 69 <unknown>
> >       711:       74 65 72 00 62 69 5f 65 w5 >>= 1700751714
> >       712:       6e 64 5f 69 6f 00 62 69 if
> >
> >
> > BTW, llvm-objdump core dumps after printing the above info:
> >
> > Stack dump:
> > 0. Program arguments: llvm-objdump-10 -D bio.bpf.o
> > /usr/lib/x86_64-linux-gnu/libLLVM-10.so.1(_ZN4llvm3sys15PrintStackTraceERNS_11raw_ostreamE+0x1f)[0x7f7636d5dc3f]
> > /usr/lib/x86_64-linux-gnu/libLLVM-10.so.1(_ZN4llvm3sys17RunSignalHandlersEv+0x50)[0x7f7636d5bf00]
> > /usr/lib/x86_64-linux-gnu/libLLVM-10.so.1(+0x978205)[0x7f7636d5e205]
> > /lib/x86_64-linux-gnu/libpthread.so.0(+0x12890)[0x7f76361d9890]
> > /usr/lib/x86_64-linux-gnu/libLLVM-10.so.1(+0x21bbed3)[0x7f76385a1ed3]
> > /usr/lib/x86_64-linux-gnu/libLLVM-10.so.1(+0x21baefb)[0x7f76385a0efb]
> > /usr/lib/x86_64-linux-gnu/libLLVM-10.so.1(+0x21bc0ce)[0x7f76385a20ce]
> > llvm-objdump-10[0x41b78c]
> > llvm-objdump-10[0x425278]
> > llvm-objdump-10[0x41f502]
> > llvm-objdump-10[0x41a473]
> > /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7)[0x7f763546db97]
> > llvm-objdump-10[0x41542a]
> > [1]    21636 segmentation fault (core dumped)
> >
> > llvm-objdump-10 --version
> > LLVM (http://llvm.org/):
> >    LLVM version 10.0.0
> >
> >    Optimized build.
> >    Default target: x86_64-pc-linux-gnu
> >    Host CPU: broadwell
> >
> >    Registered Targets:
> >      aarch64    - AArch64 (little endian)
> >      aarch64_32 - AArch64 (little endian ILP32)
> [...]

Thread overview: 4+ messages
2020-06-29  5:25 Wenbo Zhang
2020-06-29  6:24 ` Andrii Nakryiko
2020-06-29 20:47 ` Yonghong Song
2020-07-01  5:05   ` Wenbo Zhang [this message]
