From: Song Liu <songliubraving@fb.com>
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"bpf@vger.kernel.org" <bpf@vger.kernel.org>,
Kernel Team <Kernel-team@fb.com>,
"ast@kernel.org" <ast@kernel.org>,
"daniel@iogearbox.net" <daniel@iogearbox.net>
Subject: Re: [RFC bpf-next 3/4] bpftool: introduce "prog profile" command
Date: Mon, 17 Feb 2020 02:51:59 +0000 [thread overview]
Message-ID: <2AC53E92-FC90-4133-9BCF-5BE627A5B3A2@fb.com> (raw)
In-Reply-To: <20200215004500.gs3ylstfo3aksfbp@ast-mbp>
> On Feb 14, 2020, at 4:45 PM, Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
>
> On Thu, Feb 13, 2020 at 01:01:14PM -0800, Song Liu wrote:
>> With fentry/fexit programs, it is possible to profile BPF program with
>> hardware counters. Introduce bpftool "prog profile", which measures key
>> metrics of a BPF program.
>>
>> The bpftool prog profile command creates per-cpu perf events, then
>> attaches fentry/fexit programs to the target BPF program. The fentry
>> program saves the perf event value to a map. The fexit program reads the
>> perf event again and calculates the difference, which is the
>> instructions/cycles used by the target program.
>>
>> Example input and output:
>>
>> ./bpftool prog profile 20 id 810 cycles instructions
>> cycles: duration 20 run_cnt 1368 miss_cnt 665
>> counter 503377 enabled 668202 running 351857
>> instructions: duration 20 run_cnt 1368 miss_cnt 707
>> counter 398625 enabled 502330 running 272014
>>
>> This command measures cycles and instructions for the BPF program with id
>> 810 for 20 seconds. The program was triggered 1368 times. cycles was not
>> measured in 665 of these runs because of perf event multiplexing (some
>> perf commands were running in the background). In the remaining runs, the
>> BPF program consumed 503377 cycles. The perf_event enabled and running
>> times are 668202 and 351857 respectively.
>
> if (diff.enabled > diff.running) increment miss_cnt.
> Why show this to users?
> I think users will interpret 'miss_cnt' as meaning the data is bogus,
> but it only means that the counter was multiplexed.
> The data is still accurate, no?
We (or the user) need to be careful to get all the math right:
for runs per second, we need the total count (run_cnt);
for cycles per run, we need the non-missed count (run_cnt - miss_cnt).
So miss_cnt is useful for some users.
One tricky thing here is that different events in the same session may
have different miss_cnt values. I just realized that we can probably
avoid most of such cases by only taking samples when all the counters
are counting.
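To make the miss accounting concrete, here is a plain C sketch of the
bookkeeping the fexit program performs, written as ordinary userspace C for
illustration rather than as a loadable BPF program. The value struct mirrors
the kernel's struct bpf_perf_event_value; profile_stats and account_run are
hypothetical names for this sketch, not identifiers from the patch:

```c
#include <stdint.h>

/* Mirrors the kernel's struct bpf_perf_event_value (linux/bpf.h). */
struct bpf_perf_event_value {
	uint64_t counter;
	uint64_t enabled;
	uint64_t running;
};

/* Hypothetical per-event accumulator kept in a map. */
struct profile_stats {
	uint64_t run_cnt;  /* total invocations of the target program */
	uint64_t miss_cnt; /* runs where the counter was multiplexed out */
	uint64_t counter;  /* accumulated delta from non-missed runs only */
};

/* What the fexit program would do with the entry/exit readings:
 * count the run as a "miss" when the counter was not running for the
 * whole interval (enabled delta > running delta), otherwise accumulate
 * the counter delta. */
static void account_run(struct profile_stats *st,
			const struct bpf_perf_event_value *entry,
			const struct bpf_perf_event_value *exit)
{
	uint64_t d_enabled = exit->enabled - entry->enabled;
	uint64_t d_running = exit->running - entry->running;

	st->run_cnt++;
	if (d_enabled > d_running) {
		st->miss_cnt++; /* multiplexed: skip this sample */
		return;
	}
	st->counter += exit->counter - entry->counter;
}
```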
> This condition will probably be hit fairly often, no?
This really depends on the system. Data center servers usually have a
few perf_events running 24/7, so we are more likely to hit multiplexing
on those systems.
>
>> tools/bpf/bpftool/profiler.skel.h | 820 ++++++++++++++++++++++
>
> I think bpftool needs to be built twice to avoid this.
>
> Could you change the output format to be 'perf stat' like:
> 55,766.51 msec task-clock # 0.996 CPUs utilized
> 4,891 context-switches # 0.088 K/sec
> 31 cpu-migrations # 0.001 K/sec
> 1,806,065 page-faults # 0.032 M/sec
> 166,819,295,451 cycles # 2.991 GHz (50.12%)
> 251,115,795,764 instructions # 1.51 insn per cycle (50.10%)
Will try this in the next version.
>
> Also printing 'duration' is unnecessary. The user specified it at the command
> line and it doesn't need to be reported back to the user.
> Can you also make it optional? Until users Ctrl-C's bpftool ?
Yes, I plan to add the Ctrl-C approach. Even with a duration specified,
the user can still hit Ctrl-C and get partial results. I guess we should
show the elapsed duration when the user hits Ctrl-C?
> So it may look like:
> $ ./bpftool prog profile id 810 cycles instructions
> 1,368 run_cnt
> 503,377 cycles
> 398,625 instructions # 0.79 insn per cycle
>
> Computing additional things like 'insn per cycle' do help humans to
> pay attention the issue. Like <1 ipc is not great and the next step
> would be to profile this program for cache misses.
Yes, instruction per cycle is useful. Let me add that.
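For reference, a sketch of how the derived metrics could be computed on the
bpftool side, using the per-run normalization discussed above. per_run and
insn_per_cycle are illustrative helpers for this sketch, not existing
bpftool functions. Note that dividing the raw totals (398625 / 503377,
roughly the 0.79 in the example output) can differ slightly from a
per-measured-run ratio when the two events have different miss_cnt:

```c
#include <stdint.h>

/* Per-run average of a counter. Per the discussion above, divide by the
 * number of runs in which the counter was actually measured
 * (run_cnt - miss_cnt), not by the total run_cnt. */
static double per_run(uint64_t counter, uint64_t run_cnt, uint64_t miss_cnt)
{
	uint64_t measured = run_cnt - miss_cnt;

	return measured ? (double)counter / measured : 0.0;
}

/* Instructions per cycle from the two per-run averages. */
static double insn_per_cycle(double insns_per_run, double cycles_per_run)
{
	return cycles_per_run ? insns_per_run / cycles_per_run : 0.0;
}
```

With the numbers from the example output, cycles were measured in
1368 - 665 = 703 runs and instructions in 1368 - 707 = 661 runs, giving
per-run averages of roughly 716 cycles and 603 instructions.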
Thread overview: 10+ messages
2020-02-13 21:01 [RFC bpf-next 0/4] bpftool: introduce prog profile Song Liu
2020-02-13 21:01 ` [RFC bpf-next 1/4] bpf: allow bpf_perf_event_read_value in all BPF programs Song Liu
2020-02-13 21:01 ` [RFC bpf-next 2/4] libbpf: introduce bpf_program__overwrite_section_name() Song Liu
2020-02-13 21:01 ` [RFC bpf-next 3/4] bpftool: introduce "prog profile" command Song Liu
2020-02-13 21:50 ` Toke Høiland-Jørgensen
2020-02-13 21:55 ` Song Liu
2020-02-13 22:06 ` Toke Høiland-Jørgensen
2020-02-15 0:45 ` Alexei Starovoitov
2020-02-17 2:51 ` Song Liu [this message]
2020-02-13 21:01 ` [RFC bpf-next 4/4] bpftool: Documentation for bpftool prog profile Song Liu