From: Andrii Nakryiko <andrii.nakryiko@gmail.com>
To: Y Song <ys114321@gmail.com>
Cc: Ilya Leoshkevich <iii@linux.ibm.com>, bpf <bpf@vger.kernel.org>,
	Networking <netdev@vger.kernel.org>,
	gor@linux.ibm.com, heiko.carstens@de.ibm.com
Subject: Re: [PATCH bpf] selftests/bpf: fix test_send_signal_nmi on s390
Date: Fri, 12 Jul 2019 12:59:26 -0700	[thread overview]
Message-ID: <CAEf4Bza4_Xwdb4euhOyad2n6OtbdbaZP2Hkm-xO2LRVTmyDO4A@mail.gmail.com> (raw)
In-Reply-To: <CAH3MdRWEfrQt6P4eMYgGRE9OgLkjQLqoZnCwFbrxwqKPyrrHpQ@mail.gmail.com>

On Fri, Jul 12, 2019 at 12:55 PM Y Song <ys114321@gmail.com> wrote:
>
> On Fri, Jul 12, 2019 at 11:24 AM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Fri, Jul 12, 2019 at 10:46 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> > >
> > > Many s390 setups (most notably, KVM guests) do not have access to
> > > hardware performance events.
> > >
> > > Therefore, use the software event instead.
> > >
> > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > Acked-by: Vasily Gorbik <gor@linux.ibm.com>
> > > ---
> > >  tools/testing/selftests/bpf/prog_tests/send_signal.c | 9 +++++++++
> > >  1 file changed, 9 insertions(+)
> > >
> > > diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > > index 67cea1686305..4a45ea0b8448 100644
> > > --- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > > +++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > > @@ -176,10 +176,19 @@ static int test_send_signal_tracepoint(void)
> > >  static int test_send_signal_nmi(void)
> > >  {
> > >         struct perf_event_attr attr = {
> > > +#if defined(__s390__)
> > > +               /* Many s390 setups (most notably, KVM guests) do not have
> > > +                * access to hardware performance events.
> > > +                */
> > > +               .sample_period = 1,
> > > +               .type = PERF_TYPE_SOFTWARE,
> > > +               .config = PERF_COUNT_SW_CPU_CLOCK,
> > > +#else
> >
> > Is there any harm in switching all archs to the software event? I'd
> > rather avoid all these special arch cases, which will be really hard
> > to test for people without direct access to those platforms.
>
> I'd still like to use hardware cpu_cycles in order to test the NMI
> path. On a physical box:
> $ perf list
> List of pre-defined events (to be used in -e):
>
>   branch-instructions OR branches                    [Hardware event]
>   branch-misses                                      [Hardware event]
>   bus-cycles                                         [Hardware event]
>   cache-misses                                       [Hardware event]
>   cache-references                                   [Hardware event]
>   cpu-cycles OR cycles                               [Hardware event]
>   instructions                                       [Hardware event]
>   ref-cycles                                         [Hardware event]
>
>   alignment-faults                                   [Software event]
>   bpf-output                                         [Software event]
>   context-switches OR cs                             [Software event]
>   cpu-clock                                          [Software event]
>   cpu-migrations OR migrations                       [Software event]
>   dummy                                              [Software event]
>   emulation-faults                                   [Software event]
>   major-faults                                       [Software event]
>   minor-faults                                       [Software event]
>   page-faults OR faults                              [Software event]
>   task-clock                                         [Software event]
>
>   L1-dcache-load-misses                              [Hardware cache event]
> ...
>
> In a VM:
> $ perf list
> List of pre-defined events (to be used in -e):
>
>   alignment-faults                                   [Software event]
>   bpf-output                                         [Software event]
>   context-switches OR cs                             [Software event]
>   cpu-clock                                          [Software event]
>   cpu-migrations OR migrations                       [Software event]
>   dummy                                              [Software event]
>   emulation-faults                                   [Software event]
>   major-faults                                       [Software event]
>   minor-faults                                       [Software event]
>   page-faults OR faults                              [Software event]
>   task-clock                                         [Software event]
>
>   msr/smi/                                           [Kernel PMU event]
>   msr/tsc/                                           [Kernel PMU event]
> .....
>
> Is it possible to detect at runtime whether the hardware cpu_cycles
> event is available or not? If it is available, let's use the hardware
> one; otherwise, skip the test or fall back to the software one. Note
> that the software one does not really trigger an NMI, so it will take
> the same code path in the kernel as a tracepoint.

Yeah, that's what I was worried about.

Ilya, could you please take a look at how hard it would be to support
this HW vs SW perf event detection?
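
Something along these lines might work as a runtime probe (an untested
sketch; hw_cpu_cycles_available() is a made-up helper name, only the
perf_event_open() syscall itself is real):

#include <stdbool.h>
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>

static int sys_perf_event_open(struct perf_event_attr *attr, pid_t pid,
			       int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

/* Probe whether PERF_COUNT_HW_CPU_CYCLES can actually be opened; if
 * perf_event_open() fails (e.g. in a KVM guest with no usable PMU),
 * the caller can fall back to the software event or skip the test. */
static bool hw_cpu_cycles_available(void)
{
	struct perf_event_attr attr = {
		.size = sizeof(attr),
		.type = PERF_TYPE_HARDWARE,
		.config = PERF_COUNT_HW_CPU_CYCLES,
		.freq = 1,
		.sample_freq = 50,
	};
	int fd = sys_perf_event_open(&attr, 0 /* this task */,
				     -1 /* any CPU */, -1 /* no group */, 0);

	if (fd < 0)
		return false;
	close(fd);
	return true;
}

test_send_signal_nmi() could then pick the HW or SW attr based on the
probe's result, or skip with a clear message when no PMU is available.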

>
> >
> > >                 .sample_freq = 50,
> > >                 .freq = 1,
> > >                 .type = PERF_TYPE_HARDWARE,
> > >                 .config = PERF_COUNT_HW_CPU_CYCLES,
> > > +#endif
> > >         };
> > >
> > >         return test_send_signal_common(&attr, BPF_PROG_TYPE_PERF_EVENT, "perf_event");
> > > --
> > > 2.21.0
> > >

Thread overview: 4+ messages
2019-07-12 17:45 [PATCH bpf] selftests/bpf: fix test_send_signal_nmi on s390 Ilya Leoshkevich
2019-07-12 18:22 ` Andrii Nakryiko
2019-07-12 19:54   ` Y Song
2019-07-12 19:59     ` Andrii Nakryiko [this message]
