linux-kernel.vger.kernel.org archive mirror
From: Namhyung Kim <namhyung@kernel.org>
To: Song Liu <songliubraving@fb.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Kernel Team <Kernel-team@fb.com>,
	Arnaldo Carvalho de Melo <acme@redhat.com>,
	Jiri Olsa <jolsa@kernel.org>
Subject: Re: [PATCH v2 0/3] perf-stat: share hardware PMCs with BPF
Date: Thu, 18 Mar 2021 13:32:30 +0900	[thread overview]
Message-ID: <CAM9d7cjAngAKo9EazV=iyNncBZY53-rnE5_8SYuJiEuG4f4-yg@mail.gmail.com> (raw)
In-Reply-To: <7D48A756-C253-48DE-B536-826314778404@fb.com>

On Thu, Mar 18, 2021 at 12:52 PM Song Liu <songliubraving@fb.com> wrote:
>
>
>
> > On Mar 17, 2021, at 6:11 AM, Arnaldo Carvalho de Melo <acme@kernel.org> wrote:
> >
> > On Wed, Mar 17, 2021 at 02:29:28PM +0900, Namhyung Kim wrote:
> >> Hi Song,
> >>
> >> On Wed, Mar 17, 2021 at 6:18 AM Song Liu <songliubraving@fb.com> wrote:
> >>>
> >>> perf uses performance monitoring counters (PMCs) to monitor system
> >>> performance. The PMCs are limited hardware resources. For example,
> >>> Intel CPUs have 3x fixed PMCs and 4x programmable PMCs per cpu.
> >>>
> >>> Modern data center systems use these PMCs in many different ways:
> >>> system level monitoring, (maybe nested) container level monitoring, per
> >>> process monitoring, profiling (in sample mode), etc. In some cases,
> >>> there are more active perf_events than available hardware PMCs. To allow
> >>> all perf_events to have a chance to run, it is necessary to do expensive
> >>> time multiplexing of events.
> >>>
> >>> On the other hand, many monitoring tools count the common metrics (cycles,
> >>> instructions). It is a waste to have multiple tools create multiple
> >>> perf_events of "cycles" and occupy multiple PMCs.
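
If I read the cover letter right, the intended usage would be something like
the following (command lines are only illustrative, assuming a perf built
with this series):

    # two monitoring "tools" counting cycles system-wide at the same time
    perf stat --bpf-counters -e cycles -a -- sleep 10 &
    perf stat --bpf-counters -e cycles -a -- sleep 10
    # both sessions read one shared "cycles" perf_event through BPF instead
    # of each pinning its own hardware counter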
> >>
> >> Right, it'd be really helpful when the PMCs are frequently or mostly shared.
> >> But it'd also increase the overhead for uncontended cases as BPF programs
> >> need to run on every context switch.  Depending on the workload, it may
> >> cause a non-negligible performance impact.  So users should be aware of it.
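
To put a number on "every context switch", one could first check the switch
rate on the target machine, e.g.:

    # context switches per second, system wide (illustrative)
    perf stat -e context-switches -a -- sleep 1
    # or watch the "cs" column of:  vmstat 1

The per-switch cost of the BPF programs then scales with that rate.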
> >
> > Would be interesting to, humm, measure both cases to have a firm number
> > of the impact, how many instructions are added when sharing using
> > --bpf-counters?
> >
> > I.e. compare the "expensive time multiplexing of events" with its
> > avoidance by using --bpf-counters.
> >
> > Song, have you performed such measurements?
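
(Not Song's actual harness, just a sketch of how such an A/B comparison
could be scripted; the event list and benchmark line are taken from the
reply below, the rest is illustrative.  The "(xx.xx%)" printed next to each
count by regular perf-stat also shows how much of the time the event was
actually scheduled, i.e. how hard the multiplexing hit.)

    #!/bin/sh
    EVENTS=cycles,cycles,instructions,instructions,ref-cycles,ref-cycles
    for OPT in "" "--bpf-counters"; do
        for i in 1 2 3 4 5; do
            ./perf stat -e $EVENTS $OPT -a -- \
                ./perf bench sched messaging -g 40 -l 50000 -t 2>&1 |
                grep 'Total time'
        done
    done
    # take the median "Total time" of each 5-run group and compare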
>
> I have got some measurements with perf-bench-sched-messaging:
>
> The system: x86_64 with 23 cores (46 HT)
>
> The perf-stat command:
> perf stat -e cycles,cycles,instructions,instructions,ref-cycles,ref-cycles <target, etc.>
>
> The benchmark command and output:
> ./perf bench sched messaging -g 40 -l 50000 -t
> # Running 'sched/messaging' benchmark:
> # 20 sender and receiver threads per group
> # 40 groups == 1600 threads run
>      Total time: 10X.XXX [sec]
>
>
> I use the "Total time" as the measurement, so a smaller number is better.
>
> For each condition, I ran the command 5 times and took the median of
> "Total time".
>
> Baseline (no perf-stat)                 104.873 [sec]
> # global
> perf stat -a                            107.887 [sec]
> perf stat -a --bpf-counters             106.071 [sec]
> # per task
> perf stat                               106.314 [sec]
> perf stat --bpf-counters                105.965 [sec]
> # per cpu
> perf stat -C 1,3,5                      107.063 [sec]
> perf stat -C 1,3,5 --bpf-counters       106.406 [sec]
>
> From the data, --bpf-counters is slightly better than the regular events
> for all targets. I noticed that the results are not very stable; there
> are a couple of 108.xx runs in some of the conditions (both w/ and w/o
> --bpf-counters).
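
Back-of-the-envelope from the medians above, relative to the 104.873 sec
baseline (rough numbers only, given the run-to-run noise you mention):

    perf stat -a                            +2.9%
    perf stat -a --bpf-counters             +1.1%
    perf stat                               +1.4%
    perf stat --bpf-counters                +1.0%
    perf stat -C 1,3,5                      +2.1%
    perf stat -C 1,3,5 --bpf-counters       +1.5%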

Hmm.. so this result is when multiplexing happened, right?
I wonder how/why the regular perf stat is slower..

Thanks,
Namhyung

>
>
> I also measured the average runtime of the BPF programs, with
>
>         sysctl kernel.bpf_stats_enabled=1
>
> For each event, if we have one leader and two followers, the total run
> time is about 340ns. IOW, 340ns for two perf-stat reading instructions,
> 340ns for two perf-stat reading cycles, etc.
>
> Thanks,
> Song
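
If the programs indeed fire on every context switch, a rough estimate from
that figure (my arithmetic, with an assumed switch rate): three event groups
at ~340 ns each is ~1 us of BPF time per switch, so at, say, 10k context
switches per second per CPU that would be on the order of 1% of a CPU.
bpftool prog show reports run_cnt and run_time_ns per program once
kernel.bpf_stats_enabled is set, which would pin the real numbers down.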


Thread overview: 33+ messages
2021-03-16 21:18 [PATCH v2 0/3] perf-stat: share hardware PMCs with BPF Song Liu
2021-03-16 21:18 ` [PATCH v2 1/3] perf-stat: introduce bperf, " Song Liu
2021-03-18  5:54   ` Namhyung Kim
2021-03-18  7:22     ` Song Liu
2021-03-18 13:49       ` Namhyung Kim
2021-03-18 17:16         ` Song Liu
2021-03-18 21:15   ` Jiri Olsa
2021-03-19 18:41     ` Arnaldo Carvalho de Melo
2021-03-19 18:55       ` Jiri Olsa
2021-03-19 22:06         ` Song Liu
2021-03-23  0:53       ` Song Liu
2021-03-23 12:25       ` Arnaldo Carvalho de Melo
2021-03-23 12:37         ` Arnaldo Carvalho de Melo
2021-03-23 18:27           ` Arnaldo Carvalho de Melo
2021-03-16 21:18 ` [PATCH v2 2/3] perf-stat: measure t0 and ref_time after enable_counters() Song Liu
2021-03-16 21:18 ` [PATCH v2 3/3] perf-test: add a test for perf-stat --bpf-counters option Song Liu
2021-03-18  6:07   ` Namhyung Kim
2021-03-18  7:39     ` Song Liu
2021-03-17  5:29 ` [PATCH v2 0/3] perf-stat: share hardware PMCs with BPF Namhyung Kim
2021-03-17  9:19   ` Jiri Olsa
2021-03-17 13:11   ` Arnaldo Carvalho de Melo
2021-03-18  3:52     ` Song Liu
2021-03-18  4:32       ` Namhyung Kim [this message]
2021-03-18  7:03         ` Song Liu
2021-03-18 21:14       ` Jiri Olsa
2021-03-19  0:09         ` Arnaldo
2021-03-19  0:22           ` Song Liu
2021-03-19  0:54             ` Namhyung Kim
2021-03-19 15:35               ` Arnaldo Carvalho de Melo
2021-03-19 15:58                 ` Namhyung Kim
2021-03-19 16:14                   ` Song Liu
2021-03-23 21:10                     ` Arnaldo Carvalho de Melo
2021-03-23 21:26                       ` Song Liu
