* [PATCH bpf] selftests/bpf: fix test_send_signal_nmi on s390
@ 2019-07-12 17:45 Ilya Leoshkevich
  2019-07-12 18:22 ` Andrii Nakryiko
  0 siblings, 1 reply; 4+ messages in thread
From: Ilya Leoshkevich @ 2019-07-12 17:45 UTC (permalink / raw)
  To: bpf, netdev; +Cc: gor, heiko.carstens, Ilya Leoshkevich

Many s390 setups (most notably, KVM guests) do not have access to
hardware performance events.

Therefore, use the software event instead.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Vasily Gorbik <gor@linux.ibm.com>
---
 tools/testing/selftests/bpf/prog_tests/send_signal.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
index 67cea1686305..4a45ea0b8448 100644
--- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
+++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
@@ -176,10 +176,19 @@ static int test_send_signal_tracepoint(void)
 static int test_send_signal_nmi(void)
 {
 	struct perf_event_attr attr = {
+#if defined(__s390__)
+		/* Many s390 setups (most notably, KVM guests) do not have
+		 * access to hardware performance events.
+		 */
+		.sample_period = 1,
+		.type = PERF_TYPE_SOFTWARE,
+		.config = PERF_COUNT_SW_CPU_CLOCK,
+#else
 		.sample_freq = 50,
 		.freq = 1,
 		.type = PERF_TYPE_HARDWARE,
 		.config = PERF_COUNT_HW_CPU_CYCLES,
+#endif
 	};
 
 	return test_send_signal_common(&attr, BPF_PROG_TYPE_PERF_EVENT, "perf_event");
-- 
2.21.0
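
For reference, here is a minimal sketch of how a sampling attr like the
one above is typically opened. perf_event_open() has no glibc wrapper,
so it goes through syscall(2); the helper name and the pid/cpu choices
below are illustrative assumptions, not the exact code in
test_send_signal_common:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Open a per-process sampling event on any CPU (sketch). */
static int open_sampling_event(struct perf_event_attr *attr)
{
	attr->size = sizeof(*attr);

	/* pid = 0: this process, cpu = -1: any CPU,
	 * group_fd = -1: no event group, flags = 0.
	 */
	return syscall(__NR_perf_event_open, attr, 0, -1, -1, 0);
}

The returned fd is what a BPF program is then attached to (via the
PERF_EVENT_IOC_SET_BPF ioctl) and enabled on (PERF_EVENT_IOC_ENABLE).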



* Re: [PATCH bpf] selftests/bpf: fix test_send_signal_nmi on s390
  2019-07-12 17:45 [PATCH bpf] selftests/bpf: fix test_send_signal_nmi on s390 Ilya Leoshkevich
@ 2019-07-12 18:22 ` Andrii Nakryiko
  2019-07-12 19:54   ` Y Song
  0 siblings, 1 reply; 4+ messages in thread
From: Andrii Nakryiko @ 2019-07-12 18:22 UTC (permalink / raw)
  To: Ilya Leoshkevich; +Cc: bpf, Networking, gor, heiko.carstens

On Fri, Jul 12, 2019 at 10:46 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> Many s390 setups (most notably, KVM guests) do not have access to
> hardware performance events.
>
> Therefore, use the software event instead.
>
> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> Acked-by: Vasily Gorbik <gor@linux.ibm.com>
> ---
>  tools/testing/selftests/bpf/prog_tests/send_signal.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> index 67cea1686305..4a45ea0b8448 100644
> --- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
> +++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> @@ -176,10 +176,19 @@ static int test_send_signal_tracepoint(void)
>  static int test_send_signal_nmi(void)
>  {
>         struct perf_event_attr attr = {
> +#if defined(__s390__)
> +               /* Many s390 setups (most notably, KVM guests) do not have
> +                * access to hardware performance events.
> +                */
> +               .sample_period = 1,
> +               .type = PERF_TYPE_SOFTWARE,
> +               .config = PERF_COUNT_SW_CPU_CLOCK,
> +#else

Is there any harm in switching all archs to a software event? I'd rather
avoid all those special arch cases, which will be really hard to test
for people without direct access to them.

>                 .sample_freq = 50,
>                 .freq = 1,
>                 .type = PERF_TYPE_HARDWARE,
>                 .config = PERF_COUNT_HW_CPU_CYCLES,
> +#endif
>         };
>
>         return test_send_signal_common(&attr, BPF_PROG_TYPE_PERF_EVENT, "perf_event");
> --
> 2.21.0
>


* Re: [PATCH bpf] selftests/bpf: fix test_send_signal_nmi on s390
  2019-07-12 18:22 ` Andrii Nakryiko
@ 2019-07-12 19:54   ` Y Song
  2019-07-12 19:59     ` Andrii Nakryiko
  0 siblings, 1 reply; 4+ messages in thread
From: Y Song @ 2019-07-12 19:54 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: Ilya Leoshkevich, bpf, Networking, gor, heiko.carstens

On Fri, Jul 12, 2019 at 11:24 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Jul 12, 2019 at 10:46 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >
> > Many s390 setups (most notably, KVM guests) do not have access to
> > hardware performance events.
> >
> > Therefore, use the software event instead.
> >
> > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > Acked-by: Vasily Gorbik <gor@linux.ibm.com>
> > ---
> >  tools/testing/selftests/bpf/prog_tests/send_signal.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > index 67cea1686305..4a45ea0b8448 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > @@ -176,10 +176,19 @@ static int test_send_signal_tracepoint(void)
> >  static int test_send_signal_nmi(void)
> >  {
> >         struct perf_event_attr attr = {
> > +#if defined(__s390__)
> > +               /* Many s390 setups (most notably, KVM guests) do not have
> > +                * access to hardware performance events.
> > +                */
> > +               .sample_period = 1,
> > +               .type = PERF_TYPE_SOFTWARE,
> > +               .config = PERF_COUNT_SW_CPU_CLOCK,
> > +#else
>
> Is there any harm in switching all archs to a software event? I'd rather
> avoid all those special arch cases, which will be really hard to test
> for people without direct access to them.

I'd still like to use hardware cpu_cycles in order to test NMI.
On a physical box:
$ perf list
List of pre-defined events (to be used in -e):

  branch-instructions OR branches                    [Hardware event]
  branch-misses                                      [Hardware event]
  bus-cycles                                         [Hardware event]
  cache-misses                                       [Hardware event]
  cache-references                                   [Hardware event]
  cpu-cycles OR cycles                               [Hardware event]
  instructions                                       [Hardware event]
  ref-cycles                                         [Hardware event]

  alignment-faults                                   [Software event]
  bpf-output                                         [Software event]
  context-switches OR cs                             [Software event]
  cpu-clock                                          [Software event]
  cpu-migrations OR migrations                       [Software event]
  dummy                                              [Software event]
  emulation-faults                                   [Software event]
  major-faults                                       [Software event]
  minor-faults                                       [Software event]
  page-faults OR faults                              [Software event]
  task-clock                                         [Software event]

  L1-dcache-load-misses                              [Hardware cache event]
...

In a VM:
$ perf list
List of pre-defined events (to be used in -e):

  alignment-faults                                   [Software event]
  bpf-output                                         [Software event]
  context-switches OR cs                             [Software event]
  cpu-clock                                          [Software event]
  cpu-migrations OR migrations                       [Software event]
  dummy                                              [Software event]
  emulation-faults                                   [Software event]
  major-faults                                       [Software event]
  minor-faults                                       [Software event]
  page-faults OR faults                              [Software event]
  task-clock                                         [Software event]

  msr/smi/                                           [Kernel PMU event]
  msr/tsc/                                           [Kernel PMU event]
.....

Is it possible to detect at runtime whether the hardware
cpu_cycles event is available or not?
If it is available, let's use the hardware one; otherwise, skip or fall
back to the software one. The software one does not really go through
NMI, so it will take the same code path in the kernel as the tracepoint.
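
A minimal sketch of such a runtime probe, assuming we simply try to
open the hardware event and fall back on failure (the exact errno for
a missing PMU varies; ENOENT and EOPNOTSUPP are the common ones, so
any open failure is treated as "no hardware events" here):

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Fill attr with the HW cycles event if it can be opened,
 * otherwise with the SW cpu-clock fallback (sketch).
 */
static void choose_sampling_event(struct perf_event_attr *attr)
{
	int fd;

	memset(attr, 0, sizeof(*attr));
	attr->size = sizeof(*attr);
	attr->type = PERF_TYPE_HARDWARE;
	attr->config = PERF_COUNT_HW_CPU_CYCLES;
	attr->freq = 1;
	attr->sample_freq = 50;

	fd = syscall(__NR_perf_event_open, attr, 0, -1, -1, 0);
	if (fd >= 0) {
		/* Hardware events work; keep the NMI-driven setup. */
		close(fd);
		return;
	}

	/* No usable hardware PMU (e.g. an s390 KVM guest). Note that
	 * sample_period and sample_freq share a union in the attr.
	 */
	attr->type = PERF_TYPE_SOFTWARE;
	attr->config = PERF_COUNT_SW_CPU_CLOCK;
	attr->freq = 0;
	attr->sample_period = 1;
}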

>
> >                 .sample_freq = 50,
> >                 .freq = 1,
> >                 .type = PERF_TYPE_HARDWARE,
> >                 .config = PERF_COUNT_HW_CPU_CYCLES,
> > +#endif
> >         };
> >
> >         return test_send_signal_common(&attr, BPF_PROG_TYPE_PERF_EVENT, "perf_event");
> > --
> > 2.21.0
> >


* Re: [PATCH bpf] selftests/bpf: fix test_send_signal_nmi on s390
  2019-07-12 19:54   ` Y Song
@ 2019-07-12 19:59     ` Andrii Nakryiko
  0 siblings, 0 replies; 4+ messages in thread
From: Andrii Nakryiko @ 2019-07-12 19:59 UTC (permalink / raw)
  To: Y Song; +Cc: Ilya Leoshkevich, bpf, Networking, gor, heiko.carstens

On Fri, Jul 12, 2019 at 12:55 PM Y Song <ys114321@gmail.com> wrote:
>
> On Fri, Jul 12, 2019 at 11:24 AM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Fri, Jul 12, 2019 at 10:46 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> > >
> > > Many s390 setups (most notably, KVM guests) do not have access to
> > > hardware performance events.
> > >
> > > Therefore, use the software event instead.
> > >
> > > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > Acked-by: Vasily Gorbik <gor@linux.ibm.com>
> > > ---
> > >  tools/testing/selftests/bpf/prog_tests/send_signal.c | 9 +++++++++
> > >  1 file changed, 9 insertions(+)
> > >
> > > diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > > index 67cea1686305..4a45ea0b8448 100644
> > > --- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > > +++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > > @@ -176,10 +176,19 @@ static int test_send_signal_tracepoint(void)
> > >  static int test_send_signal_nmi(void)
> > >  {
> > >         struct perf_event_attr attr = {
> > > +#if defined(__s390__)
> > > +               /* Many s390 setups (most notably, KVM guests) do not have
> > > +                * access to hardware performance events.
> > > +                */
> > > +               .sample_period = 1,
> > > +               .type = PERF_TYPE_SOFTWARE,
> > > +               .config = PERF_COUNT_SW_CPU_CLOCK,
> > > +#else
> >
> > Is there any harm in switching all archs to a software event? I'd rather
> > avoid all those special arch cases, which will be really hard to test
> > for people without direct access to them.
>
> I'd still like to use hardware cpu_cycles in order to test NMI.
> On a physical box:
> $ perf list
> List of pre-defined events (to be used in -e):
>
>   branch-instructions OR branches                    [Hardware event]
>   branch-misses                                      [Hardware event]
>   bus-cycles                                         [Hardware event]
>   cache-misses                                       [Hardware event]
>   cache-references                                   [Hardware event]
>   cpu-cycles OR cycles                               [Hardware event]
>   instructions                                       [Hardware event]
>   ref-cycles                                         [Hardware event]
>
>   alignment-faults                                   [Software event]
>   bpf-output                                         [Software event]
>   context-switches OR cs                             [Software event]
>   cpu-clock                                          [Software event]
>   cpu-migrations OR migrations                       [Software event]
>   dummy                                              [Software event]
>   emulation-faults                                   [Software event]
>   major-faults                                       [Software event]
>   minor-faults                                       [Software event]
>   page-faults OR faults                              [Software event]
>   task-clock                                         [Software event]
>
>   L1-dcache-load-misses                              [Hardware cache event]
> ...
>
> In a VM:
> $ perf list
> List of pre-defined events (to be used in -e):
>
>   alignment-faults                                   [Software event]
>   bpf-output                                         [Software event]
>   context-switches OR cs                             [Software event]
>   cpu-clock                                          [Software event]
>   cpu-migrations OR migrations                       [Software event]
>   dummy                                              [Software event]
>   emulation-faults                                   [Software event]
>   major-faults                                       [Software event]
>   minor-faults                                       [Software event]
>   page-faults OR faults                              [Software event]
>   task-clock                                         [Software event]
>
>   msr/smi/                                           [Kernel PMU event]
>   msr/tsc/                                           [Kernel PMU event]
> .....
>
> Is it possible to detect at runtime whether the hardware
> cpu_cycles event is available or not?
> If it is available, let's use the hardware one; otherwise, skip or fall
> back to the software one. The software one does not really go through
> NMI, so it will take the same code path in the kernel as the tracepoint.

Yeah, that's what I was worried about.

Ilya, could you please take a look at how hard it would be to do this
HW vs SW perf event support?
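
One possible shape for that, assuming the probe-and-skip direction
rather than a silent software fallback. This is a sketch against the
hunk quoted below, not an actual patch; it relies on the includes
already in send_signal.c (errno.h, stdio.h, unistd.h), and the SKIP
message format is made up:

static int test_send_signal_nmi(void)
{
	struct perf_event_attr attr = {
		.sample_freq = 50,
		.freq = 1,
		.type = PERF_TYPE_HARDWARE,
		.config = PERF_COUNT_HW_CPU_CYCLES,
	};
	int pmu_fd;

	/* Probe once: without a hardware PMU (e.g. s390 KVM guests)
	 * the NMI path cannot be exercised at all, so skip instead of
	 * failing or silently testing a different code path.
	 */
	pmu_fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (pmu_fd < 0) {
		if (errno == ENOENT) {
			printf("%s: SKIP, no HW cycles event\n", __func__);
			return 0;
		}
		return -1;
	}
	close(pmu_fd);

	return test_send_signal_common(&attr, BPF_PROG_TYPE_PERF_EVENT,
				       "perf_event");
}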

>
> >
> > >                 .sample_freq = 50,
> > >                 .freq = 1,
> > >                 .type = PERF_TYPE_HARDWARE,
> > >                 .config = PERF_COUNT_HW_CPU_CYCLES,
> > > +#endif
> > >         };
> > >
> > >         return test_send_signal_common(&attr, BPF_PROG_TYPE_PERF_EVENT, "perf_event");
> > > --
> > > 2.21.0
> > >

