* [PATCH v1 1/2] perf stat: Clear reset_group for each stat run
@ 2022-08-22 21:33 Ian Rogers
  2022-08-22 21:33 ` [PATCH v1 2/2] perf test: Stat test for repeat with a weak group Ian Rogers
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Ian Rogers @ 2022-08-22 21:33 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Ian Rogers, Kan Liang, Andi Kleen, linux-perf-users,
	linux-kernel
  Cc: Stephane Eranian

If a weak group is broken then the reset_group flag remains set for
the next run. Having reset_group set means the counter isn't created,
ultimately leading to a segfault.

A simple reproduction of this is:
perf stat -r2 -e '{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}:W'
which will be added as a test in the next patch.

Fixes: 4804e0111662 ("perf stat: Use affinity for opening events")
Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/builtin-stat.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 7fb81a44672d..54cd29d07ca8 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -826,6 +826,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 	}
 
 	evlist__for_each_entry(evsel_list, counter) {
+		counter->reset_group = false;
 		if (bpf_counter__load(counter, &target))
 			return -1;
 		if (!evsel__is_bpf(counter))
-- 
2.37.2.609.g9ff673ca1a-goog


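To make the failure mode concrete, below is a small standalone C
sketch of the flag's lifecycle across repeated runs. It is an
illustration only: the types and the two-pass flow are simplified
stand-ins invented here, not perf's actual evsel/evlist code.

  #include <stdbool.h>
  #include <stdio.h>

  /* Invented stand-in for perf's evsel; illustration only. */
  struct counter {
          bool reset_group;  /* marked for ungrouped retry after a weak group breaks */
          bool opened;
  };

  /* One stat run, loosely modelled on __run_perf_stat()'s open passes. */
  static void stat_run(struct counter *c, bool group_open_fails, bool with_fix)
  {
          if (with_fix)
                  c->reset_group = false;  /* the one-line fix */
          c->opened = false;

          /* Pass 1: grouped open; counters already marked for retry are skipped. */
          if (!c->reset_group) {
                  if (group_open_fails)
                          c->reset_group = true;  /* weak group breaks */
                  else
                          c->opened = true;
          }

          /* Pass 2: retry counters whose weak group broke in *this* run. */
          if (!c->opened && group_open_fails && c->reset_group)
                  c->opened = true;  /* reopened ungrouped */

          printf("counter %s\n", c->opened ? "opened" : "missing, segfault later");
  }

  int main(void)
  {
          struct counter c = { false, false };

          /* perf stat -r2: the weak group breaks on run 1 but not run 2. */
          stat_run(&c, true, false);   /* run 1: retried ungrouped, opened */
          stat_run(&c, false, false);  /* run 2: stale flag, never opened  */

          struct counter d = { false, false };
          stat_run(&d, true, true);    /* run 1 with the fix: opened */
          stat_run(&d, false, true);   /* run 2 with the fix: opened */
          return 0;
  }

Run 1 breaks the group and retries ungrouped, so it succeeds; run 2
inherits the stale flag, the first pass skips the counter, and the
second pass has no reason to retry, leaving it unopened for the rest
of the run.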

* [PATCH v1 2/2] perf test: Stat test for repeat with a weak group
  2022-08-22 21:33 [PATCH v1 1/2] perf stat: Clear reset_group for each stat run Ian Rogers
@ 2022-08-22 21:33 ` Ian Rogers
  2022-08-23  7:57 ` [PATCH v1 1/2] perf stat: Clear reset_group for each stat run Xing Zhengjun
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Ian Rogers @ 2022-08-22 21:33 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Ian Rogers, Kan Liang, Andi Kleen, linux-perf-users,
	linux-kernel
  Cc: Stephane Eranian

Breaking a weak group requires multiple passes over an evlist; with
multiple runs this can introduce bugs that ultimately lead to
segfaults. Add a test to cover this.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/tests/shell/stat.sh | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/tools/perf/tests/shell/stat.sh b/tools/perf/tests/shell/stat.sh
index 9313ef2739e0..26a51b48aee4 100755
--- a/tools/perf/tests/shell/stat.sh
+++ b/tools/perf/tests/shell/stat.sh
@@ -28,6 +28,24 @@ test_stat_record_report() {
   echo "stat record and report test [Success]"
 }
 
+test_stat_repeat_weak_groups() {
+  echo "stat repeat weak groups test"
+  if ! perf stat -e '{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}' \
+     true 2>&1 | grep -q 'seconds time elapsed'
+  then
+    echo "stat repeat weak groups test [Skipped event parsing failed]"
+    return
+  fi
+  if ! perf stat -r2 -e '{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}:W' \
+    true > /dev/null 2>&1
+  then
+    echo "stat repeat weak groups test [Failed]"
+    err=1
+    return
+  fi
+  echo "stat repeat weak groups test [Success]"
+}
+
 test_topdown_groups() {
   # Topdown events must be grouped with the slots event first. Test that
   # parse-events reorders this.
@@ -75,6 +93,7 @@ test_topdown_weak_groups() {
 
 test_default_stat
 test_stat_record_report
+test_stat_repeat_weak_groups
 test_topdown_groups
 test_topdown_weak_groups
 exit $err
-- 
2.37.2.609.g9ff673ca1a-goog


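For reference, one way to exercise the new test from a perf source
tree might be the following; the test-selection string and the direct
script invocation are assumptions that can vary between perf versions:

  # 'perf test' selects tests by name substring; -v shows script output.
  perf test -v stat
  # Or run the shell script directly from the source tree:
  sh tools/perf/tests/shell/stat.sh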

* Re: [PATCH v1 1/2] perf stat: Clear reset_group for each stat run
  2022-08-22 21:33 [PATCH v1 1/2] perf stat: Clear reset_group for each stat run Ian Rogers
  2022-08-22 21:33 ` [PATCH v1 2/2] perf test: Stat test for repeat with a weak group Ian Rogers
@ 2022-08-23  7:57 ` Xing Zhengjun
  2022-08-23 13:34 ` Arnaldo Carvalho de Melo
  2022-08-23 15:10 ` Andi Kleen
  3 siblings, 0 replies; 7+ messages in thread
From: Xing Zhengjun @ 2022-08-23  7:57 UTC (permalink / raw)
  To: Ian Rogers, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Kan Liang, Andi Kleen, linux-perf-users,
	linux-kernel
  Cc: Stephane Eranian



On 8/23/2022 5:33 AM, Ian Rogers wrote:
> If a weak group is broken then the reset_group flag remains set for
> the next run. Having reset_group set means the counter isn't created,
> ultimately leading to a segfault.
> 
> A simple reproduction of this is:
> perf stat -r2 -e '{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}:W'

It is better to change to a full command:
perf stat -r2 -e '{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}:W' true
> which will be added as a test in the next patch.
> 
> Fixes: 4804e0111662 ("perf stat: Use affinity for opening events")
> Signed-off-by: Ian Rogers <irogers@google.com>

I tested the two patches on both non-hybrid and hybrid machines, and
the "Segmentation fault" disappeared.

Tested-by: Zhengjun Xing <zhengjun.xing@linux.intel.com>

> ---
>   tools/perf/builtin-stat.c | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> index 7fb81a44672d..54cd29d07ca8 100644
> --- a/tools/perf/builtin-stat.c
> +++ b/tools/perf/builtin-stat.c
> @@ -826,6 +826,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
>   	}
>   
>   	evlist__for_each_entry(evsel_list, counter) {
> +		counter->reset_group = false;
>   		if (bpf_counter__load(counter, &target))
>   			return -1;
>   		if (!evsel__is_bpf(counter))

-- 
Zhengjun Xing


* Re: [PATCH v1 1/2] perf stat: Clear reset_group for each stat run
  2022-08-22 21:33 [PATCH v1 1/2] perf stat: Clear reset_group for each stat run Ian Rogers
  2022-08-22 21:33 ` [PATCH v1 2/2] perf test: Stat test for repeat with a weak group Ian Rogers
  2022-08-23  7:57 ` [PATCH v1 1/2] perf stat: Clear reset_group for each stat run Xing Zhengjun
@ 2022-08-23 13:34 ` Arnaldo Carvalho de Melo
  2022-08-23 16:33   ` Ian Rogers
  2022-08-23 15:10 ` Andi Kleen
  3 siblings, 1 reply; 7+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-08-23 13:34 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Kan Liang, Andi Kleen, linux-perf-users,
	linux-kernel, Stephane Eranian

Em Mon, Aug 22, 2022 at 02:33:51PM -0700, Ian Rogers escreveu:
> If a weak group is broken then the reset_group flag remains set for
> the next run. Having reset_group set means the counter isn't created,
> ultimately leading to a segfault.
> 
> A simple reproduction of this is:
> perf stat -r2 -e '{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}:W'
> which will be added as a test in the next patch.

So doing this in that existing BPF-related loop may solve the problem,
but for someone looking just at the source code, without any comment,
it may be cryptic, no?

And then the Fixes tag talks about affinity, adding a bit more
confusion, albeit being the part that does the weak-group logic :-\

Can we have a comment just before:

+             counter->reset_group = false;

stating that this is needed only when using -r?

- Arnaldo
 
> Fixes: 4804e0111662 ("perf stat: Use affinity for opening events")
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/builtin-stat.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> index 7fb81a44672d..54cd29d07ca8 100644
> --- a/tools/perf/builtin-stat.c
> +++ b/tools/perf/builtin-stat.c
> @@ -826,6 +826,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
>  	}
>  
>  	evlist__for_each_entry(evsel_list, counter) {
> +		counter->reset_group = false;
>  		if (bpf_counter__load(counter, &target))
>  			return -1;
>  		if (!evsel__is_bpf(counter))
> -- 
> 2.37.2.609.g9ff673ca1a-goog

-- 

- Arnaldo

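As an aside, the kind of comment being requested might look like the
following; this wording is invented here for illustration and is not
taken from the thread or the applied patch:

	evlist__for_each_entry(evsel_list, counter) {
		/*
		 * A weak group broken during an earlier run of a
		 * repeated (-r) session leaves reset_group set;
		 * clear it so this run creates the counter again.
		 */
		counter->reset_group = false;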

* Re: [PATCH v1 1/2] perf stat: Clear reset_group for each stat run
  2022-08-22 21:33 [PATCH v1 1/2] perf stat: Clear reset_group for each stat run Ian Rogers
                   ` (2 preceding siblings ...)
  2022-08-23 13:34 ` Arnaldo Carvalho de Melo
@ 2022-08-23 15:10 ` Andi Kleen
  2022-08-23 18:42   ` Arnaldo Carvalho de Melo
  3 siblings, 1 reply; 7+ messages in thread
From: Andi Kleen @ 2022-08-23 15:10 UTC (permalink / raw)
  To: Ian Rogers, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Kan Liang, linux-perf-users,
	linux-kernel
  Cc: Stephane Eranian


On 8/22/2022 11:33 PM, Ian Rogers wrote:
> If a weak group is broken then the reset_group flag remains set for
> the next run. Having reset_group set means the counter isn't created,
> ultimately leading to a segfault.
>
> A simple reproduction of this is:
> perf stat -r2 -e '{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}:W'
> which will be added as a test in the next patch.
>
> Fixes: 4804e0111662 ("perf stat: Use affinity for opening events")
> Signed-off-by: Ian Rogers <irogers@google.com>


Makes sense


Reviewed-by: Andi Kleen <ak@linux.intel.com>




* Re: [PATCH v1 1/2] perf stat: Clear reset_group for each stat run
  2022-08-23 13:34 ` Arnaldo Carvalho de Melo
@ 2022-08-23 16:33   ` Ian Rogers
  0 siblings, 0 replies; 7+ messages in thread
From: Ian Rogers @ 2022-08-23 16:33 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Kan Liang, Andi Kleen, linux-perf-users,
	linux-kernel, Stephane Eranian

On Tue, Aug 23, 2022 at 6:34 AM Arnaldo Carvalho de Melo
<acme@kernel.org> wrote:
>
> Em Mon, Aug 22, 2022 at 02:33:51PM -0700, Ian Rogers escreveu:
> > If a weak group is broken then the reset_group flag remains set for
> > the next run. Having reset_group set means the counter isn't created,
> > ultimately leading to a segfault.
> >
> > A simple reproduction of this is:
> > perf stat -r2 -e '{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}:W'
> > which will be added as a test in the next patch.
>
> So doing this in that existing BPF-related loop may solve the problem,
> but for someone looking just at the source code, without any comment,
> it may be cryptic, no?
>
> And then the Fixes tag talks about affinity, adding a bit more
> confusion, albeit being the part that does the weak-group logic :-\
>
> Can we have a comment just before:
>
> +             counter->reset_group = false;
>
> stating that this is needed only when using -r?

It is possible to add a comment but, thinking about it, it would have
said pretty much what the code was doing and so I skipped it. I'm wary
of comments that capture too much of the implementation as they are
prone to becoming stale. Logically this function is just iterating
over the evlist creating counters, but on top of that we have the
affinity optimization. The BPF code didn't need that and so has its
own evlist iteration. We could add another loop just to clear
reset_group, but that didn't seem to make sense. It's unfortunate how
that relates to the Fixes tag, but I don't think we should optimize
for that case.

Thanks,
Ian

> - Arnaldo
>
> > Fixes: 4804e0111662 ("perf stat: Use affinity for opening events")
> > Signed-off-by: Ian Rogers <irogers@google.com>
> > ---
> >  tools/perf/builtin-stat.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> > index 7fb81a44672d..54cd29d07ca8 100644
> > --- a/tools/perf/builtin-stat.c
> > +++ b/tools/perf/builtin-stat.c
> > @@ -826,6 +826,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
> >       }
> >
> >       evlist__for_each_entry(evsel_list, counter) {
> > +             counter->reset_group = false;
> >               if (bpf_counter__load(counter, &target))
> >                       return -1;
> >               if (!evsel__is_bpf(counter))
> > --
> > 2.37.2.609.g9ff673ca1a-goog
>
> --
>
> - Arnaldo

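A rough schematic of the structure Ian describes above; this is a
simplified sketch for orientation, not the exact source:

	/* Plain per-counter pass with no affinity handling; BPF
	 * counters are loaded here and the stale flag is cleared. */
	evlist__for_each_entry(evsel_list, counter) {
		counter->reset_group = false;
		bpf_counter__load(counter, &target);
	}
	/* Affinity-optimized passes then open the non-BPF counters,
	 * batching the open syscalls per CPU; a weak group that fails
	 * here is broken and its members marked with reset_group for
	 * an ungrouped retry later in the same run. */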

* Re: [PATCH v1 1/2] perf stat: Clear reset_group for each stat run
  2022-08-23 15:10 ` Andi Kleen
@ 2022-08-23 18:42   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 7+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-08-23 18:42 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Ian Rogers, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Namhyung Kim, Kan Liang,
	linux-perf-users, linux-kernel, Stephane Eranian

Em Tue, Aug 23, 2022 at 05:10:55PM +0200, Andi Kleen escreveu:
> 
> On 8/22/2022 11:33 PM, Ian Rogers wrote:
> > If a weak group is broken then the reset_group flag remains set for
> > the next run. Having reset_group set means the counter isn't created,
> > ultimately leading to a segfault.
> > 
> > A simple reproduction of this is:
> > perf stat -r2 -e '{cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles,cycles}:W'
> > which will be added as a test in the next patch.
> > 
> > Fixes: 4804e0111662 ("perf stat: Use affinity for opening events")
> > Signed-off-by: Ian Rogers <irogers@google.com>
> 
> 
> Makes sense
> 
> 
> Reviewed-by: Andi Kleen <ak@linux.intel.com>

Ok, applied.

- Arnaldo

