* [PATCH 0/4] Several perf metrics topdown related fixes
From: kan.liang @ 2022-05-13 15:15 UTC
  To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
  Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

Patch 1 is a follow-up to Ian's ("Fix topdown event weak
grouping") [1].

Patch 2 fixes the perf metrics topdown events in a mixed group.
It reuses the function introduced in [1].
Patches 1 & 2 should be applied on top of [1].

Patches 3 & 4 fix other perf metrics topdown related issues.
They can be merged separately.

[1]: https://lore.kernel.org/all/20220512061308.1152233-1-irogers@google.com/

Kan Liang (4):
  perf evsel: Fix topdown events in a weak group for the hybrid
    platform
  perf stat: Always keep perf metrics topdown events in a group
  perf parse-events: Support different format of the topdown event name
  perf parse-events: Move slots event for the hybrid platform too

 tools/perf/arch/x86/util/evlist.c  |  7 ++++---
 tools/perf/arch/x86/util/evsel.c   |  5 +++--
 tools/perf/arch/x86/util/topdown.c | 18 ++++++++++++++++++
 tools/perf/arch/x86/util/topdown.h |  7 +++++++
 tools/perf/builtin-stat.c          |  7 +++++--
 5 files changed, 37 insertions(+), 7 deletions(-)
 create mode 100644 tools/perf/arch/x86/util/topdown.h

-- 
2.35.1


* [PATCH 1/4] perf evsel: Fix topdown events in a weak group for the hybrid platform
From: kan.liang @ 2022-05-13 15:15 UTC
  To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
  Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

The patch ("perf evlist: Keep topdown counters in weak group") fixes the
perf metrics topdown event issue when the topdown events are in a weak
group on a non-hybrid platform. However, it doesn't work for the hybrid
platform.

$./perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1

 Performance counter stats for 'system wide':

       751,765,068      cpu_core/slots/                                               (84.07%)
   <not supported>      cpu_core/topdown-bad-spec/
   <not supported>      cpu_core/topdown-be-bound/
   <not supported>      cpu_core/topdown-fe-bound/
   <not supported>      cpu_core/topdown-retiring/
        12,398,197      cpu_core/branch-instructions/                                     (84.07%)
         1,054,218      cpu_core/branch-misses/                                       (84.24%)
       539,764,637      cpu_core/bus-cycles/                                          (84.64%)
            14,683      cpu_core/cache-misses/                                        (84.87%)
         7,277,809      cpu_core/cache-references/                                     (77.30%)
       222,299,439      cpu_core/cpu-cycles/                                          (77.28%)
        63,661,714      cpu_core/instructions/                                        (84.85%)
                 0      cpu_core/mem-loads/                                           (77.29%)
        12,271,725      cpu_core/mem-stores/                                          (77.30%)
       542,241,102      cpu_core/ref-cycles/                                          (84.85%)
             8,854      cpu_core/cache-misses/                                        (76.71%)
         7,179,013      cpu_core/cache-references/                                     (76.31%)

       1.003245250 seconds time elapsed

A hybrid platform uses different PMU names for its core PMUs, while
the current perf code hard-codes the PMU name "cpu".

The evsel->pmu_name can be used in place of the hard-coded "cpu" to
fix the issue. On a hybrid platform, the pmu_name must be non-NULL,
because there are at least two core PMUs and the PMU has to be
specified. On a non-hybrid platform, the pmu_name may be NULL, because
there is only one core PMU, "cpu". For a NULL pmu_name, we can safely
assume that it is the "cpu" PMU.

With the patch,

$perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1

 Performance counter stats for 'system wide':

       766,620,266      cpu_core/slots/                                               (84.06%)
        73,172,129      cpu_core/topdown-bad-spec/ #      9.5% bad speculation         (84.06%)
       193,443,341      cpu_core/topdown-be-bound/ #     25.0% backend bound           (84.06%)
       403,940,929      cpu_core/topdown-fe-bound/ #     52.3% frontend bound          (84.06%)
       102,070,237      cpu_core/topdown-retiring/ #     13.2% retiring                (84.06%)
        12,364,429      cpu_core/branch-instructions/                                     (84.03%)
         1,080,124      cpu_core/branch-misses/                                       (84.24%)
       564,120,383      cpu_core/bus-cycles/                                          (84.65%)
            36,979      cpu_core/cache-misses/                                        (84.86%)
         7,298,094      cpu_core/cache-references/                                     (77.30%)
       227,174,372      cpu_core/cpu-cycles/                                          (77.31%)
        63,886,523      cpu_core/instructions/                                        (84.87%)
                 0      cpu_core/mem-loads/                                           (77.31%)
        12,208,782      cpu_core/mem-stores/                                          (77.31%)
       566,409,738      cpu_core/ref-cycles/                                          (84.87%)
            23,118      cpu_core/cache-misses/                                        (76.71%)
         7,212,602      cpu_core/cache-references/                                     (76.29%)

       1.003228667 seconds time elapsed

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 tools/perf/arch/x86/util/evsel.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
index 00cb4466b4ca..24510bcb4bf4 100644
--- a/tools/perf/arch/x86/util/evsel.c
+++ b/tools/perf/arch/x86/util/evsel.c
@@ -33,8 +33,9 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
 
 bool arch_evsel__must_be_in_group(const struct evsel *evsel)
 {
-	if ((evsel->pmu_name && strcmp(evsel->pmu_name, "cpu")) ||
-	    !pmu_have_event("cpu", "slots"))
+	const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
+
+	if (!pmu_have_event(pmu_name, "slots"))
 		return false;
 
 	return evsel->name &&
-- 
2.35.1


* [PATCH 2/4] perf stat: Always keep perf metrics topdown events in a group
From: kan.liang @ 2022-05-13 15:15 UTC
  To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
  Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

If any member in a group has a different cpu mask than the other
members, the current perf stat disables the group. When the perf
metrics topdown events are part of the group, the <not supported>
error below is triggered.

$ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
WARNING: grouped events cpus do not match, disabling group:
  anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }

 Performance counter stats for 'system wide':

       141,465,174      slots
   <not supported>      topdown-retiring
     1,605,330,334      uncore_imc_free_running_0/dclk/

The perf metrics topdown events must always be grouped with a slots
event as leader.

With the patch, the topdown events are no longer split off from the
group.

$ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
WARNING: grouped events cpus do not match, disabling group:
  anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }

 Performance counter stats for 'system wide':

       346,110,588      slots
       124,608,256      topdown-retiring
     1,606,869,976      uncore_imc_free_running_0/dclk/

       1.003877592 seconds time elapsed

Fixes: a9a1790247bd ("perf stat: Ensure group is defined on top of the same cpu mask")
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 tools/perf/builtin-stat.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index a96f106dc93a..af2248868a4f 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -272,8 +272,11 @@ static void evlist__check_cpu_maps(struct evlist *evlist)
 		}
 
 		for_each_group_evsel(pos, leader) {
-			evsel__set_leader(pos, pos);
-			pos->core.nr_members = 0;
+			if (!evsel__must_be_in_group(pos) && pos != leader) {
+				evsel__set_leader(pos, pos);
+				pos->core.nr_members = 0;
+				leader->core.nr_members--;
+			}
 		}
 		evsel->core.leader->nr_members = 0;
 	}
-- 
2.35.1


* [PATCH 3/4] perf parse-events: Support different format of the topdown event name
From: kan.liang @ 2022-05-13 15:15 UTC
  To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
  Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

The evsel->name of a topdown event may take different formats: a pure
topdown name (e.g., topdown-fe-bound), or a PMU name plus a topdown
name (e.g., cpu/topdown-fe-bound/). The cpu/topdown-fe-bound/ format
isn't handled by arch_evlist__leader(). It is a very common format on
a hybrid platform, which requires specifying the PMU name for each
event.
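
For illustration (a standalone sketch, not part of this patch): the
old prefix match misses the PMU-qualified name, while a substring
match finds it:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <strings.h>

int main(void)
{
	const char *name = "cpu/topdown-fe-bound/";

	/* Prefix match: compares "cpu/top" with "topdown" -> 0 (no match). */
	printf("strncasecmp: %d\n", !strncasecmp(name, "topdown", 7));
	/* Substring match: finds "topdown" inside the name -> 1 (match). */
	printf("strcasestr:  %d\n", strcasestr(name, "topdown") != NULL);
	return 0;
}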

Without the patch,

$perf stat -e '{instructions,slots,cpu/topdown-fe-bound/}' -a sleep 1

 Performance counter stats for 'system wide':

     <not counted>      instructions
     <not counted>      slots
   <not supported>      cpu/topdown-fe-bound/

       1.003482041 seconds time elapsed

Some events weren't counted. Try disabling the NMI watchdog:
        echo 0 > /proc/sys/kernel/nmi_watchdog
        perf stat ...
        echo 1 > /proc/sys/kernel/nmi_watchdog
The events in group usually have to be from the same PMU. Try reorganizing the group.


With the patch,

$perf stat -e '{instructions,slots,cpu/topdown-fe-bound/}' -a sleep 1

 Performance counter stats for 'system wide':

       157,383,996      slots
        25,011,711      instructions
        27,441,686      cpu/topdown-fe-bound/

       1.003530890 seconds time elapsed

Fixes: bc355822f0d9 ("perf parse-events: Move slots only with topdown")
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 tools/perf/arch/x86/util/evlist.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
index cfc208d71f00..75564a7df15b 100644
--- a/tools/perf/arch/x86/util/evlist.c
+++ b/tools/perf/arch/x86/util/evlist.c
@@ -36,7 +36,7 @@ struct evsel *arch_evlist__leader(struct list_head *list)
 				if (slots == first)
 					return first;
 			}
-			if (!strncasecmp(evsel->name, "topdown", 7))
+			if (strcasestr(evsel->name, "topdown"))
 				has_topdown = true;
 			if (slots && has_topdown)
 				return slots;
-- 
2.35.1


* [PATCH 4/4] perf parse-events: Move slots event for the hybrid platform too
From: kan.liang @ 2022-05-13 15:15 UTC
  To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
  Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

Commit 94dbfd6781a0 ("perf parse-events: Architecture specific
leader override") introduced a feature that reorders the slots event
to fulfill the restriction of the perf metrics topdown group. But the
feature doesn't work on a hybrid machine.

$perf stat -e "{cpu_core/instructions/,cpu_core/slots/,cpu_core/topdown-retiring/}" -a sleep 1

 Performance counter stats for 'system wide':

     <not counted>      cpu_core/instructions/
     <not counted>      cpu_core/slots/
   <not supported>      cpu_core/topdown-retiring/

       1.002871801 seconds time elapsed

A hybrid platform uses different PMU names for its core PMUs, while
the current perf code hard-codes the PMU name "cpu".

Introduce a new function to check whether the system supports the perf
metrics feature. The result is cached for future use.

On x86, the core PMU name always has the "cpu" prefix.

With the patch,

$perf stat -e "{cpu_core/instructions/,cpu_core/slots/,cpu_core/topdown-retiring/}" -a sleep 1

 Performance counter stats for 'system wide':

        76,337,010      cpu_core/slots/
        10,416,809      cpu_core/instructions/
        11,692,372      cpu_core/topdown-retiring/

       1.002805453 seconds time elapsed

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 tools/perf/arch/x86/util/evlist.c  |  5 +++--
 tools/perf/arch/x86/util/topdown.c | 18 ++++++++++++++++++
 tools/perf/arch/x86/util/topdown.h |  7 +++++++
 3 files changed, 28 insertions(+), 2 deletions(-)
 create mode 100644 tools/perf/arch/x86/util/topdown.h

diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
index 75564a7df15b..68f681ad54c1 100644
--- a/tools/perf/arch/x86/util/evlist.c
+++ b/tools/perf/arch/x86/util/evlist.c
@@ -3,6 +3,7 @@
 #include "util/pmu.h"
 #include "util/evlist.h"
 #include "util/parse-events.h"
+#include "topdown.h"
 
 #define TOPDOWN_L1_EVENTS	"{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound}"
 #define TOPDOWN_L2_EVENTS	"{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}"
@@ -25,12 +26,12 @@ struct evsel *arch_evlist__leader(struct list_head *list)
 
 	first = list_first_entry(list, struct evsel, core.node);
 
-	if (!pmu_have_event("cpu", "slots"))
+	if (!topdown_sys_has_perf_metrics())
 		return first;
 
 	/* If there is a slots event and a topdown event then the slots event comes first. */
 	__evlist__for_each_entry(list, evsel) {
-		if (evsel->pmu_name && !strcmp(evsel->pmu_name, "cpu") && evsel->name) {
+		if (evsel->pmu_name && !strncmp(evsel->pmu_name, "cpu", 3) && evsel->name) {
 			if (strcasestr(evsel->name, "slots")) {
 				slots = evsel;
 				if (slots == first)
diff --git a/tools/perf/arch/x86/util/topdown.c b/tools/perf/arch/x86/util/topdown.c
index 2f3d96aa92a5..95b9fdef59ab 100644
--- a/tools/perf/arch/x86/util/topdown.c
+++ b/tools/perf/arch/x86/util/topdown.c
@@ -3,6 +3,24 @@
 #include "api/fs/fs.h"
 #include "util/pmu.h"
 #include "util/topdown.h"
+#include "topdown.h"
+
+bool topdown_sys_has_perf_metrics(void)
+{
+	static bool has_perf_metrics;
+	static bool cached;
+	struct perf_pmu *pmu;
+
+	if (cached)
+		return has_perf_metrics;
+
+	pmu = perf_pmu__find_by_type(PERF_TYPE_RAW);
+	if (pmu && pmu_have_event(pmu->name, "slots"))
+		has_perf_metrics = true;
+
+	cached = true;
+	return has_perf_metrics;
+}
 
 /*
  * Check whether we can use a group for top down.
diff --git a/tools/perf/arch/x86/util/topdown.h b/tools/perf/arch/x86/util/topdown.h
new file mode 100644
index 000000000000..46bf9273e572
--- /dev/null
+++ b/tools/perf/arch/x86/util/topdown.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _TOPDOWN_H
+#define _TOPDOWN_H 1
+
+bool topdown_sys_has_perf_metrics(void);
+
+#endif
-- 
2.35.1


* Re: [PATCH 1/4] perf evsel: Fix topdown events in a weak group for the hybrid platform
From: Ian Rogers @ 2022-05-13 15:39 UTC
  To: kan.liang
  Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
	peterz, zhengjun.xing, adrian.hunter, ak, eranian

On Fri, May 13, 2022 at 8:16 AM <kan.liang@linux.intel.com> wrote:
>
> From: Kan Liang <kan.liang@linux.intel.com>
>
> The patch ("perf evlist: Keep topdown counters in weak group") fixes the
> perf metrics topdown event issue when the topdown events are in a weak
> group on a non-hybrid platform. However, it doesn't work for the hybrid
> platform.
>
> $./perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
>
>  Performance counter stats for 'system wide':
>
>        751,765,068      cpu_core/slots/                                               (84.07%)
>    <not supported>      cpu_core/topdown-bad-spec/
>    <not supported>      cpu_core/topdown-be-bound/
>    <not supported>      cpu_core/topdown-fe-bound/
>    <not supported>      cpu_core/topdown-retiring/
>         12,398,197      cpu_core/branch-instructions/                                     (84.07%)
>          1,054,218      cpu_core/branch-misses/                                       (84.24%)
>        539,764,637      cpu_core/bus-cycles/                                          (84.64%)
>             14,683      cpu_core/cache-misses/                                        (84.87%)
>          7,277,809      cpu_core/cache-references/                                     (77.30%)
>        222,299,439      cpu_core/cpu-cycles/                                          (77.28%)
>         63,661,714      cpu_core/instructions/                                        (84.85%)
>                  0      cpu_core/mem-loads/                                           (77.29%)
>         12,271,725      cpu_core/mem-stores/                                          (77.30%)
>        542,241,102      cpu_core/ref-cycles/                                          (84.85%)
>              8,854      cpu_core/cache-misses/                                        (76.71%)
>          7,179,013      cpu_core/cache-references/                                     (76.31%)
>
>        1.003245250 seconds time elapsed
>
> A hybrid platform uses different PMU names for its core PMUs, while
> the current perf code hard-codes the PMU name "cpu".
>
> The evsel->pmu_name can be used in place of the hard-coded "cpu" to
> fix the issue. On a hybrid platform, the pmu_name must be non-NULL,
> because there are at least two core PMUs and the PMU has to be
> specified. On a non-hybrid platform, the pmu_name may be NULL, because
> there is only one core PMU, "cpu". For a NULL pmu_name, we can safely
> assume that it is the "cpu" PMU.
>
> With the patch,
>
> $perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
>
>  Performance counter stats for 'system wide':
>
>        766,620,266      cpu_core/slots/                                               (84.06%)
>         73,172,129      cpu_core/topdown-bad-spec/ #      9.5% bad speculation         (84.06%)
>        193,443,341      cpu_core/topdown-be-bound/ #     25.0% backend bound           (84.06%)
>        403,940,929      cpu_core/topdown-fe-bound/ #     52.3% frontend bound          (84.06%)
>        102,070,237      cpu_core/topdown-retiring/ #     13.2% retiring                (84.06%)
>         12,364,429      cpu_core/branch-instructions/                                     (84.03%)
>          1,080,124      cpu_core/branch-misses/                                       (84.24%)
>        564,120,383      cpu_core/bus-cycles/                                          (84.65%)
>             36,979      cpu_core/cache-misses/                                        (84.86%)
>          7,298,094      cpu_core/cache-references/                                     (77.30%)
>        227,174,372      cpu_core/cpu-cycles/                                          (77.31%)
>         63,886,523      cpu_core/instructions/                                        (84.87%)
>                  0      cpu_core/mem-loads/                                           (77.31%)
>         12,208,782      cpu_core/mem-stores/                                          (77.31%)
>        566,409,738      cpu_core/ref-cycles/                                          (84.87%)
>             23,118      cpu_core/cache-misses/                                        (76.71%)
>          7,212,602      cpu_core/cache-references/                                     (76.29%)
>
>        1.003228667 seconds time elapsed
>
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> ---
>  tools/perf/arch/x86/util/evsel.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
> index 00cb4466b4ca..24510bcb4bf4 100644
> --- a/tools/perf/arch/x86/util/evsel.c
> +++ b/tools/perf/arch/x86/util/evsel.c
> @@ -33,8 +33,9 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
>
>  bool arch_evsel__must_be_in_group(const struct evsel *evsel)
>  {
> -       if ((evsel->pmu_name && strcmp(evsel->pmu_name, "cpu")) ||
> -           !pmu_have_event("cpu", "slots"))
> +       const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
> +
> +       if (!pmu_have_event(pmu_name, "slots"))

Playing devil's advocate, if I have a PMU for my network accelerator
and it has an event called "slots" then this test will also be true.

The property that is being tested here is "does this CPU have topdown
events" and so allowing any PMU removes the "does this CPU" part of
the equation. I think ideally we'd have arch functions something
like:

bool arch_pmu__has_intel_topdown_events(void)
{
  static bool cached, has_topdown_events;

  if (!cached) {
    has_topdown_events = pmu_have_event("cpu", "slots") ||
                         pmu_have_event("cpu_core", "slots");
    cached = true;
  }
  return has_topdown_events;
}

bool arch_pmu__supports_intel_topdown_events(const char *pmu_name)
{
  if (!pmu_name)
    return false;
  return arch_pmu__has_intel_topdown_events() &&
         (!strcmp(pmu_name, "cpu") || !strcmp(pmu_name, "cpu_core"));
}

bool arch_evsel__is_intel_topdown_event(struct evsel *evsel)
{
  if (!arch_pmu__supports_intel_topdown_events(evsel->pmu_name))
    return false;

  return strcasestr(evsel->name, "slots") || strcasestr(evsel->name, "topdown");
}

This then gives us:

bool arch_evsel__must_be_in_group(const struct evsel *evsel)
{
  return  arch_evsel__is_intel_topdown_event(evsel);
}

These functions can then be reused for the arch_evlist topdown code,
etc. What I don't see in these functions is use of any hybrid
abstraction and so it isn't clear to me how with hybrid something like
this would be plumbed in.

Thanks,
Ian

>                 return false;
>
>         return evsel->name &&
> --
> 2.35.1
>

* Re: [PATCH 1/4] perf evsel: Fix topdown events in a weak group for the hybrid platform
From: Liang, Kan @ 2022-05-13 16:24 UTC
  To: Ian Rogers
  Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
	peterz, zhengjun.xing, adrian.hunter, ak, eranian



On 5/13/2022 11:39 AM, Ian Rogers wrote:
> On Fri, May 13, 2022 at 8:16 AM <kan.liang@linux.intel.com> wrote:
>>
>> From: Kan Liang <kan.liang@linux.intel.com>
>>
>> The patch ("perf evlist: Keep topdown counters in weak group") fixes the
>> perf metrics topdown event issue when the topdown events are in a weak
>> group on a non-hybrid platform. However, it doesn't work for the hybrid
>> platform.
>>
>> $./perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
>> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
>> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
>> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
>> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
>> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
>> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
>>
>>   Performance counter stats for 'system wide':
>>
>>         751,765,068      cpu_core/slots/                                               (84.07%)
>>     <not supported>      cpu_core/topdown-bad-spec/
>>     <not supported>      cpu_core/topdown-be-bound/
>>     <not supported>      cpu_core/topdown-fe-bound/
>>     <not supported>      cpu_core/topdown-retiring/
>>          12,398,197      cpu_core/branch-instructions/                                     (84.07%)
>>           1,054,218      cpu_core/branch-misses/                                       (84.24%)
>>         539,764,637      cpu_core/bus-cycles/                                          (84.64%)
>>              14,683      cpu_core/cache-misses/                                        (84.87%)
>>           7,277,809      cpu_core/cache-references/                                     (77.30%)
>>         222,299,439      cpu_core/cpu-cycles/                                          (77.28%)
>>          63,661,714      cpu_core/instructions/                                        (84.85%)
>>                   0      cpu_core/mem-loads/                                           (77.29%)
>>          12,271,725      cpu_core/mem-stores/                                          (77.30%)
>>         542,241,102      cpu_core/ref-cycles/                                          (84.85%)
>>               8,854      cpu_core/cache-misses/                                        (76.71%)
>>           7,179,013      cpu_core/cache-references/                                     (76.31%)
>>
>>         1.003245250 seconds time elapsed
>>
>> A hybrid platform uses different PMU names for its core PMUs, while
>> the current perf code hard-codes the PMU name "cpu".
>>
>> The evsel->pmu_name can be used in place of the hard-coded "cpu" to
>> fix the issue. On a hybrid platform, the pmu_name must be non-NULL,
>> because there are at least two core PMUs and the PMU has to be
>> specified. On a non-hybrid platform, the pmu_name may be NULL, because
>> there is only one core PMU, "cpu". For a NULL pmu_name, we can safely
>> assume that it is the "cpu" PMU.
>>
>> With the patch,
>>
>> $perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
>> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
>> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
>> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
>> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
>> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
>> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
>>
>>   Performance counter stats for 'system wide':
>>
>>         766,620,266      cpu_core/slots/                                               (84.06%)
>>          73,172,129      cpu_core/topdown-bad-spec/ #      9.5% bad speculation         (84.06%)
>>         193,443,341      cpu_core/topdown-be-bound/ #     25.0% backend bound           (84.06%)
>>         403,940,929      cpu_core/topdown-fe-bound/ #     52.3% frontend bound          (84.06%)
>>         102,070,237      cpu_core/topdown-retiring/ #     13.2% retiring                (84.06%)
>>          12,364,429      cpu_core/branch-instructions/                                     (84.03%)
>>           1,080,124      cpu_core/branch-misses/                                       (84.24%)
>>         564,120,383      cpu_core/bus-cycles/                                          (84.65%)
>>              36,979      cpu_core/cache-misses/                                        (84.86%)
>>           7,298,094      cpu_core/cache-references/                                     (77.30%)
>>         227,174,372      cpu_core/cpu-cycles/                                          (77.31%)
>>          63,886,523      cpu_core/instructions/                                        (84.87%)
>>                   0      cpu_core/mem-loads/                                           (77.31%)
>>          12,208,782      cpu_core/mem-stores/                                          (77.31%)
>>         566,409,738      cpu_core/ref-cycles/                                          (84.87%)
>>              23,118      cpu_core/cache-misses/                                        (76.71%)
>>           7,212,602      cpu_core/cache-references/                                     (76.29%)
>>
>>         1.003228667 seconds time elapsed
>>
>> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
>> ---
>>   tools/perf/arch/x86/util/evsel.c | 5 +++--
>>   1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
>> index 00cb4466b4ca..24510bcb4bf4 100644
>> --- a/tools/perf/arch/x86/util/evsel.c
>> +++ b/tools/perf/arch/x86/util/evsel.c
>> @@ -33,8 +33,9 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
>>
>>   bool arch_evsel__must_be_in_group(const struct evsel *evsel)
>>   {
>> -       if ((evsel->pmu_name && strcmp(evsel->pmu_name, "cpu")) ||
>> -           !pmu_have_event("cpu", "slots"))
>> +       const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
>> +
>> +       if (!pmu_have_event(pmu_name, "slots"))
> 
> Playing devil's advocate, if I have a PMU for my network accelerator
> and it has an event called "slots" then this test will also be true.
>

IIRC, pmu_have_event() should only check the events which are exposed
by the kernel. It's very unlikely that another PMU exposes the exact
same name.

If you still worry about it, I think we can check the PMU type
PERF_TYPE_RAW here, which is reserved for the core PMU. Others cannot
use it.

It looks like arch_evsel__must_be_in_group() is the only user of
evsel__sys_has_perf_metrics() for now, so I made it static.

The other pmu_have_event("cpu", "slots") is in evlist.c.
topdown_sys_has_perf_metrics() from patch 4 should be used to replace
it. I think Zhengjun will post patches for the evlist.c changes.


diff --git a/tools/perf/arch/x86/util/evsel.c 
b/tools/perf/arch/x86/util/evsel.c
index 24510bcb4bf4..a4714174e30f 100644
--- a/tools/perf/arch/x86/util/evsel.c
+++ b/tools/perf/arch/x86/util/evsel.c
@@ -31,11 +31,20 @@ void arch_evsel__fixup_new_cycles(struct 
perf_event_attr *attr)
         free(env.cpuid);
  }

-bool arch_evsel__must_be_in_group(const struct evsel *evsel)
+static bool evsel__sys_has_perf_metrics(const struct evsel *evsel)
  {
         const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";

-       if (!pmu_have_event(pmu_name, "slots"))
+       if ((evsel->core.attr.type == PERF_TYPE_RAW) &&
+           pmu_have_event(pmu_name, "slots"))
+               return true;
+
+       return false;
+}
+
+bool arch_evsel__must_be_in_group(const struct evsel *evsel)
+{
+       if (!evsel__sys_has_perf_metrics(evsel))
                 return false;

         return evsel->name &&

Thanks,
Kan

> The property that is being tested here is "does this CPU have topdown
> events" and so allowing any PMU removes the "does this CPU" part of
> the equation. I think ideally we'd have arch functions something
> like:
> 
> bool arch_pmu__has_intel_topdown_events(void)
> {
>   static bool cached, has_topdown_events;
>
>   if (!cached) {
>     has_topdown_events = pmu_have_event("cpu", "slots") ||
>                          pmu_have_event("cpu_core", "slots");
>     cached = true;
>   }
>   return has_topdown_events;
> }
>
> bool arch_pmu__supports_intel_topdown_events(const char *pmu_name)
> {
>   if (!pmu_name)
>     return false;
>   return arch_pmu__has_intel_topdown_events() &&
>          (!strcmp(pmu_name, "cpu") || !strcmp(pmu_name, "cpu_core"));
> }
>
> bool arch_evsel__is_intel_topdown_event(struct evsel *evsel)
> {
>   if (!arch_pmu__supports_intel_topdown_events(evsel->pmu_name))
>     return false;
>
>   return strcasestr(evsel->name, "slots") || strcasestr(evsel->name, "topdown");
> }
> 
> This then gives us:
> 
> bool arch_evsel__must_be_in_group(const struct evsel *evsel)
> {
>    return  arch_evsel__is_intel_topdown_event(evsel);
> }
> 
> These functions can then be reused for the arch_evlist topdown code,
> etc. What I don't see in these functions is use of any hybrid
> abstraction and so it isn't clear to me how with hybrid something like
> this would be plumbed in.
> 
> Thanks,
> Ian
> 
>>                  return false;
>>
>>          return evsel->name &&
>> --
>> 2.35.1
>>

* Re: [PATCH 2/4] perf stat: Always keep perf metrics topdown events in a group
From: Ian Rogers @ 2022-05-13 16:32 UTC
  To: kan.liang
  Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
	peterz, zhengjun.xing, adrian.hunter, ak, eranian

On Fri, May 13, 2022 at 8:16 AM <kan.liang@linux.intel.com> wrote:
>
> From: Kan Liang <kan.liang@linux.intel.com>
>
> If any member in a group has a different cpu mask than the other
> members, the current perf stat disables the group. When the perf
> metrics topdown events are part of the group, the <not supported>
> error below is triggered.
>
> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
> WARNING: grouped events cpus do not match, disabling group:
>   anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>
>  Performance counter stats for 'system wide':
>
>        141,465,174      slots
>    <not supported>      topdown-retiring
>      1,605,330,334      uncore_imc_free_running_0/dclk/
>
> The perf metrics topdown events must always be grouped with a slots
> event as leader.
>
> With the patch, the topdown events are no longer split off from the
> group.
>
> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
> WARNING: grouped events cpus do not match, disabling group:
>   anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>
>  Performance counter stats for 'system wide':
>
>        346,110,588      slots
>        124,608,256      topdown-retiring
>      1,606,869,976      uncore_imc_free_running_0/dclk/
>
>        1.003877592 seconds time elapsed

Nice! This is based on:
https://lore.kernel.org/lkml/20220512061308.1152233-2-irogers@google.com/
You may end up with a group with the leader having a group count of 1
(itself). I explicitly zeroed that in the change above, but this may
be unnecessary. Maybe we should move this code to helper functions for
sharing and consistency on what the leader count should be.
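
Something like the following (an untested sketch; the helper name is
hypothetical):

static void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader)
{
	/* Break evsel out of the group unless it must stay in it. */
	if (!evsel__must_be_in_group(evsel) && evsel != leader) {
		evsel__set_leader(evsel, evsel);
		evsel->core.nr_members = 0;
		leader->core.nr_members--;
	}
}

That would keep the leader-count bookkeeping in one place for both
callers.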

Thanks,
Ian

> Fixes: a9a1790247bd ("perf stat: Ensure group is defined on top of the same cpu mask")
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> ---
>  tools/perf/builtin-stat.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> index a96f106dc93a..af2248868a4f 100644
> --- a/tools/perf/builtin-stat.c
> +++ b/tools/perf/builtin-stat.c
> @@ -272,8 +272,11 @@ static void evlist__check_cpu_maps(struct evlist *evlist)
>                 }
>
>                 for_each_group_evsel(pos, leader) {
> -                       evsel__set_leader(pos, pos);
> -                       pos->core.nr_members = 0;
> +                       if (!evsel__must_be_in_group(pos) && pos != leader) {
> +                               evsel__set_leader(pos, pos);
> +                               pos->core.nr_members = 0;
> +                               leader->core.nr_members--;
> +                       }
>                 }
>                 evsel->core.leader->nr_members = 0;
>         }
> --
> 2.35.1
>

* Re: [PATCH 1/4] perf evsel: Fix topdown events in a weak group for the hybrid platform
From: Ian Rogers @ 2022-05-13 16:43 UTC
  To: Liang, Kan
  Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
	peterz, zhengjun.xing, adrian.hunter, ak, eranian

On Fri, May 13, 2022 at 9:24 AM Liang, Kan <kan.liang@linux.intel.com> wrote:
>
>
>
> On 5/13/2022 11:39 AM, Ian Rogers wrote:
> > On Fri, May 13, 2022 at 8:16 AM <kan.liang@linux.intel.com> wrote:
> >>
> >> From: Kan Liang <kan.liang@linux.intel.com>
> >>
> >> The patch ("perf evlist: Keep topdown counters in weak group") fixes the
> >> perf metrics topdown event issue when the topdown events are in a weak
> >> group on a non-hybrid platform. However, it doesn't work for the hybrid
> >> platform.
> >>
> >> $./perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
> >> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
> >> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
> >> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
> >> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
> >> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
> >> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
> >>
> >>   Performance counter stats for 'system wide':
> >>
> >>         751,765,068      cpu_core/slots/                                               (84.07%)
> >>     <not supported>      cpu_core/topdown-bad-spec/
> >>     <not supported>      cpu_core/topdown-be-bound/
> >>     <not supported>      cpu_core/topdown-fe-bound/
> >>     <not supported>      cpu_core/topdown-retiring/
> >>          12,398,197      cpu_core/branch-instructions/                                     (84.07%)
> >>           1,054,218      cpu_core/branch-misses/                                       (84.24%)
> >>         539,764,637      cpu_core/bus-cycles/                                          (84.64%)
> >>              14,683      cpu_core/cache-misses/                                        (84.87%)
> >>           7,277,809      cpu_core/cache-references/                                     (77.30%)
> >>         222,299,439      cpu_core/cpu-cycles/                                          (77.28%)
> >>          63,661,714      cpu_core/instructions/                                        (84.85%)
> >>                   0      cpu_core/mem-loads/                                           (77.29%)
> >>          12,271,725      cpu_core/mem-stores/                                          (77.30%)
> >>         542,241,102      cpu_core/ref-cycles/                                          (84.85%)
> >>               8,854      cpu_core/cache-misses/                                        (76.71%)
> >>           7,179,013      cpu_core/cache-references/                                     (76.31%)
> >>
> >>         1.003245250 seconds time elapsed
> >>
> >> A hybrid platform uses different PMU names for its core PMUs, while
> >> the current perf code hard-codes the PMU name "cpu".
> >>
> >> The evsel->pmu_name can be used in place of the hard-coded "cpu" to
> >> fix the issue. On a hybrid platform, the pmu_name must be non-NULL,
> >> because there are at least two core PMUs and the PMU has to be
> >> specified. On a non-hybrid platform, the pmu_name may be NULL, because
> >> there is only one core PMU, "cpu". For a NULL pmu_name, we can safely
> >> assume that it is the "cpu" PMU.
> >>
> >> With the patch,
> >>
> >> $perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
> >> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
> >> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
> >> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
> >> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
> >> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
> >> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
> >>
> >>   Performance counter stats for 'system wide':
> >>
> >>         766,620,266      cpu_core/slots/                                               (84.06%)
> >>          73,172,129      cpu_core/topdown-bad-spec/ #      9.5% bad speculation         (84.06%)
> >>         193,443,341      cpu_core/topdown-be-bound/ #     25.0% backend bound           (84.06%)
> >>         403,940,929      cpu_core/topdown-fe-bound/ #     52.3% frontend bound          (84.06%)
> >>         102,070,237      cpu_core/topdown-retiring/ #     13.2% retiring                (84.06%)
> >>          12,364,429      cpu_core/branch-instructions/                                     (84.03%)
> >>           1,080,124      cpu_core/branch-misses/                                       (84.24%)
> >>         564,120,383      cpu_core/bus-cycles/                                          (84.65%)
> >>              36,979      cpu_core/cache-misses/                                        (84.86%)
> >>           7,298,094      cpu_core/cache-references/                                     (77.30%)
> >>         227,174,372      cpu_core/cpu-cycles/                                          (77.31%)
> >>          63,886,523      cpu_core/instructions/                                        (84.87%)
> >>                   0      cpu_core/mem-loads/                                           (77.31%)
> >>          12,208,782      cpu_core/mem-stores/                                          (77.31%)
> >>         566,409,738      cpu_core/ref-cycles/                                          (84.87%)
> >>              23,118      cpu_core/cache-misses/                                        (76.71%)
> >>           7,212,602      cpu_core/cache-references/                                     (76.29%)
> >>
> >>         1.003228667 seconds time elapsed
> >>
> >> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> >> ---
> >>   tools/perf/arch/x86/util/evsel.c | 5 +++--
> >>   1 file changed, 3 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
> >> index 00cb4466b4ca..24510bcb4bf4 100644
> >> --- a/tools/perf/arch/x86/util/evsel.c
> >> +++ b/tools/perf/arch/x86/util/evsel.c
> >> @@ -33,8 +33,9 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
> >>
> >>   bool arch_evsel__must_be_in_group(const struct evsel *evsel)
> >>   {
> >> -       if ((evsel->pmu_name && strcmp(evsel->pmu_name, "cpu")) ||
> >> -           !pmu_have_event("cpu", "slots"))
> >> +       const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
> >> +
> >> +       if (!pmu_have_event(pmu_name, "slots"))
> >
> > Playing devil's advocate, if I have a PMU for my network accelerator
> > and it has an event called "slots" then this test will also be true.
> >
>
> IIRC, pmu_have_event() should only check the events which are exposed
> by the kernel. It's very unlikely that another PMU exposes the exact
> same name.
>
> If you still worry about it, I think we can check the PMU type
> PERF_TYPE_RAW here, which is reserved for the core PMU. Others cannot
> use it.

That's cool, this isn't documented behavior though:
https://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git/tree/tools/include/uapi/linux/perf_event.h?h=perf/core#n34
and PERF_TYPE_HARDWARE wouldn't seem a wholly unreasonable type. It
kind of feels like depending on a quirk, and so we should bury the
quirk in a helper function and document it :-)
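
For example (an untested sketch based on your diff below, with the
quirk spelled out in a comment):

static bool evsel__sys_has_perf_metrics(const struct evsel *evsel)
{
	const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";

	/*
	 * The PERF_TYPE_RAW type is reserved for the core PMU, so
	 * requiring it filters out other PMUs (e.g. uncore) that might
	 * also happen to expose a "slots" event.
	 */
	return evsel->core.attr.type == PERF_TYPE_RAW &&
	       pmu_have_event(pmu_name, "slots");
}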

> It looks like arch_evsel__must_be_in_group() is the only user of
> evsel__sys_has_perf_metrics() for now, so I made it static.
>
> The other pmu_have_event("cpu", "slots") is in evlist.c.
> topdown_sys_has_perf_metrics() from patch 4 should be used to replace
> it. I think Zhengjun will post patches for the evlist.c changes.

Ok, is Zhengjun putting his changes on top of this and fixing up the
APIs, or is he waiting on these changes to land? Let me know how to
help. I'm guessing landing my changes is the first step.

Thanks,
Ian

> diff --git a/tools/perf/arch/x86/util/evsel.c
> b/tools/perf/arch/x86/util/evsel.c
> index 24510bcb4bf4..a4714174e30f 100644
> --- a/tools/perf/arch/x86/util/evsel.c
> +++ b/tools/perf/arch/x86/util/evsel.c
> @@ -31,11 +31,20 @@ void arch_evsel__fixup_new_cycles(struct
> perf_event_attr *attr)
>          free(env.cpuid);
>   }
>
> -bool arch_evsel__must_be_in_group(const struct evsel *evsel)
> +static bool evsel__sys_has_perf_metrics(const struct evsel *evsel)
>   {
>          const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
>
> -       if (!pmu_have_event(pmu_name, "slots"))
> +       if ((evsel->core.attr.type == PERF_TYPE_RAW) &&
> +           pmu_have_event(pmu_name, "slots"))
> +               return true;
> +
> +       return false;
> +}
> +
> +bool arch_evsel__must_be_in_group(const struct evsel *evsel)
> +{
> +       if (!evsel__sys_has_perf_metrics(evsel))
>                  return false;
>
>          return evsel->name &&
>
> Thanks,
> Kan
>
> > The property that is being tested here is "does this CPU have topdown
> > events" and so allowing any PMU removes the "does this CPU" part of
> > the equation. I think ideally we'd have arch functions something
> > like:
> >
> > bool arch_pmu__has_intel_topdown_events(void)
> > {
> >   static bool cached, has_topdown_events;
> >
> >   if (!cached) {
> >     has_topdown_events = pmu_have_event("cpu", "slots") ||
> >                          pmu_have_event("cpu_core", "slots");
> >     cached = true;
> >   }
> >   return has_topdown_events;
> > }
> >
> > bool arch_pmu__supports_intel_topdown_events(const char *pmu_name)
> > {
> >   if (!pmu_name)
> >     return false;
> >   return arch_pmu__has_intel_topdown_events() &&
> >          (!strcmp(pmu_name, "cpu") || !strcmp(pmu_name, "cpu_core"));
> > }
> >
> > bool arch_evsel__is_intel_topdown_event(struct evsel *evsel)
> > {
> >   if (!arch_pmu__supports_intel_topdown_events(evsel->pmu_name))
> >     return false;
> >
> >   return strcasestr(evsel->name, "slots") || strcasestr(evsel->name, "topdown");
> > }
> >
> > This then gives us:
> >
> > bool arch_evsel__must_be_in_group(const struct evsel *evsel)
> > {
> >    return  arch_evsel__is_intel_topdown_event(evsel);
> > }
> >
> > These functions can then be reused for the arch_evlist topdown code,
> > etc. What I don't see in these functions is use of any hybrid
> > abstraction and so it isn't clear to me how with hybrid something like
> > this would be plumbed in.
> >
> > Thanks,
> > Ian
> >
> >>                  return false;
> >>
> >>          return evsel->name &&
> >> --
> >> 2.35.1
> >>

* Re: [PATCH 3/4] perf parse-events: Support different format of the topdown event name
From: Ian Rogers @ 2022-05-13 16:44 UTC
  To: kan.liang
  Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
	peterz, zhengjun.xing, adrian.hunter, ak, eranian

On Fri, May 13, 2022 at 8:16 AM <kan.liang@linux.intel.com> wrote:
>
> From: Kan Liang <kan.liang@linux.intel.com>
>
> The evsel->name of a topdown event may take different formats: a pure
> topdown name (e.g., topdown-fe-bound), or a PMU name plus a topdown
> name (e.g., cpu/topdown-fe-bound/). The cpu/topdown-fe-bound/ format
> isn't handled by arch_evlist__leader(). It is a very common format on
> a hybrid platform, which requires specifying the PMU name for each
> event.
>
> Without the patch,
>
> $perf stat -e '{instructions,slots,cpu/topdown-fe-bound/}' -a sleep 1
>
>  Performance counter stats for 'system wide':
>
>      <not counted>      instructions
>      <not counted>      slots
>    <not supported>      cpu/topdown-fe-bound/
>
>        1.003482041 seconds time elapsed
>
> Some events weren't counted. Try disabling the NMI watchdog:
>         echo 0 > /proc/sys/kernel/nmi_watchdog
>         perf stat ...
>         echo 1 > /proc/sys/kernel/nmi_watchdog
> The events in group usually have to be from the same PMU. Try reorganizing the group.
>
>
> With the patch,
>
> $perf stat -e '{instructions,slots,cpu/topdown-fe-bound/}' -a sleep 1
>
>  Performance counter stats for 'system wide':
>
>        157,383,996      slots
>         25,011,711      instructions
>         27,441,686      cpu/topdown-fe-bound/
>
>        1.003530890 seconds time elapsed
>
> Fixes: bc355822f0d9 ("perf parse-events: Move slots only with topdown")
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>

Reviewed-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

> ---
>  tools/perf/arch/x86/util/evlist.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
> index cfc208d71f00..75564a7df15b 100644
> --- a/tools/perf/arch/x86/util/evlist.c
> +++ b/tools/perf/arch/x86/util/evlist.c
> @@ -36,7 +36,7 @@ struct evsel *arch_evlist__leader(struct list_head *list)
>                                 if (slots == first)
>                                         return first;
>                         }
> -                       if (!strncasecmp(evsel->name, "topdown", 7))
> +                       if (strcasestr(evsel->name, "topdown"))
>                                 has_topdown = true;
>                         if (slots && has_topdown)
>                                 return slots;
> --
> 2.35.1
>

* Re: [PATCH 4/4] perf parse-events: Move slots event for the hybrid platform too
From: Ian Rogers @ 2022-05-13 17:07 UTC
  To: kan.liang
  Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
	peterz, zhengjun.xing, adrian.hunter, ak, eranian

On Fri, May 13, 2022 at 8:16 AM <kan.liang@linux.intel.com> wrote:
>
> From: Kan Liang <kan.liang@linux.intel.com>
>
> Commit 94dbfd6781a0 ("perf parse-events: Architecture specific
> leader override") introduced a feature that reorders the slots event
> to fulfill the restriction of the perf metrics topdown group. But the
> feature doesn't work on a hybrid machine.
>
> $perf stat -e "{cpu_core/instructions/,cpu_core/slots/,cpu_core/topdown-retiring/}" -a sleep 1
>
>  Performance counter stats for 'system wide':
>
>      <not counted>      cpu_core/instructions/
>      <not counted>      cpu_core/slots/
>    <not supported>      cpu_core/topdown-retiring/
>
>        1.002871801 seconds time elapsed
>
> A hybrid platform uses different PMU names for its core PMUs, while
> the current perf code hard-codes the PMU name "cpu".
>
> Introduce a new function to check whether the system supports the perf
> metrics feature. The result is cached for future use.
>
> On x86, the core PMU name always has the "cpu" prefix.
>
> With the patch,
>
> $perf stat -e "{cpu_core/instructions/,cpu_core/slots/,cpu_core/topdown-retiring/}" -a sleep 1
>
>  Performance counter stats for 'system wide':
>
>         76,337,010      cpu_core/slots/
>         10,416,809      cpu_core/instructions/
>         11,692,372      cpu_core/topdown-retiring/
>
>        1.002805453 seconds time elapsed
>
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> ---
>  tools/perf/arch/x86/util/evlist.c  |  5 +++--
>  tools/perf/arch/x86/util/topdown.c | 18 ++++++++++++++++++
>  tools/perf/arch/x86/util/topdown.h |  7 +++++++
>  3 files changed, 28 insertions(+), 2 deletions(-)
>  create mode 100644 tools/perf/arch/x86/util/topdown.h
>
> diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
> index 75564a7df15b..68f681ad54c1 100644
> --- a/tools/perf/arch/x86/util/evlist.c
> +++ b/tools/perf/arch/x86/util/evlist.c
> @@ -3,6 +3,7 @@
>  #include "util/pmu.h"
>  #include "util/evlist.h"
>  #include "util/parse-events.h"
> +#include "topdown.h"
>
>  #define TOPDOWN_L1_EVENTS      "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound}"
>  #define TOPDOWN_L2_EVENTS      "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}"
> @@ -25,12 +26,12 @@ struct evsel *arch_evlist__leader(struct list_head *list)
>
>         first = list_first_entry(list, struct evsel, core.node);
>
> -       if (!pmu_have_event("cpu", "slots"))
> +       if (!topdown_sys_has_perf_metrics())
>                 return first;
>
>         /* If there is a slots event and a topdown event then the slots event comes first. */
>         __evlist__for_each_entry(list, evsel) {
> -               if (evsel->pmu_name && !strcmp(evsel->pmu_name, "cpu") && evsel->name) {
> +               if (evsel->pmu_name && !strncmp(evsel->pmu_name, "cpu", 3) && evsel->name) {
>                         if (strcasestr(evsel->name, "slots")) {
>                                 slots = evsel;
>                                 if (slots == first)
> diff --git a/tools/perf/arch/x86/util/topdown.c b/tools/perf/arch/x86/util/topdown.c
> index 2f3d96aa92a5..95b9fdef59ab 100644
> --- a/tools/perf/arch/x86/util/topdown.c
> +++ b/tools/perf/arch/x86/util/topdown.c
> @@ -3,6 +3,24 @@
>  #include "api/fs/fs.h"
>  #include "util/pmu.h"
>  #include "util/topdown.h"
> +#include "topdown.h"
> +
> +bool topdown_sys_has_perf_metrics(void)
> +{
> +       static bool has_perf_metrics;
> +       static bool cached;
> +       struct perf_pmu *pmu;
> +
> +       if (cached)
> +               return has_perf_metrics;
> +

Worth a comment here for the meaning of looking up PERF_TYPE_RAW.

Thanks,
Ian
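
One possible shape for the comment Ian is asking for, going by the
explanation Kan gives later in the thread (a sketch, not necessarily
the wording that landed in V2):

	/*
	 * Only the big core PMU supports the perf metrics feature, and
	 * on x86 it is registered with the PERF_TYPE_RAW type. Looking
	 * the PMU up by that type therefore finds the core PMU whatever
	 * its name ("cpu" on non-hybrid platforms, "cpu_core" on
	 * hybrid), and checking it for a "slots" event tells whether
	 * the system supports the perf metrics feature.
	 */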

> +       pmu = perf_pmu__find_by_type(PERF_TYPE_RAW);
> +       if (pmu && pmu_have_event(pmu->name, "slots"))
> +               has_perf_metrics = true;
> +
> +       cached = true;
> +       return has_perf_metrics;
> +}
>
>  /*
>   * Check whether we can use a group for top down.
> diff --git a/tools/perf/arch/x86/util/topdown.h b/tools/perf/arch/x86/util/topdown.h
> new file mode 100644
> index 000000000000..46bf9273e572
> --- /dev/null
> +++ b/tools/perf/arch/x86/util/topdown.h
> @@ -0,0 +1,7 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _TOPDOWN_H
> +#define _TOPDOWN_H 1
> +
> +bool topdown_sys_has_perf_metrics(void);
> +
> +#endif
> --
> 2.35.1
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform
  2022-05-13 16:43       ` Ian Rogers
@ 2022-05-13 17:24         ` Liang, Kan
  0 siblings, 0 replies; 13+ messages in thread
From: Liang, Kan @ 2022-05-13 17:24 UTC (permalink / raw)
  To: Ian Rogers
  Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
	peterz, zhengjun.xing, adrian.hunter, ak, eranian



On 5/13/2022 12:43 PM, Ian Rogers wrote:
> On Fri, May 13, 2022 at 9:24 AM Liang, Kan <kan.liang@linux.intel.com> wrote:
>>
>>
>>
>> On 5/13/2022 11:39 AM, Ian Rogers wrote:
>>> On Fri, May 13, 2022 at 8:16 AM <kan.liang@linux.intel.com> wrote:
>>>>
>>>> From: Kan Liang <kan.liang@linux.intel.com>
>>>>
>>>> The patch ("perf evlist: Keep topdown counters in weak group") fixes the
>>>> perf metrics topdown event issue when the topdown events are in a weak
>>>> group on a non-hybrid platform. However, it doesn't work for the hybrid
>>>> platform.
>>>>
>>>> $./perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
>>>> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
>>>> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
>>>> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
>>>> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
>>>> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
>>>> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
>>>>
>>>>    Performance counter stats for 'system wide':
>>>>
>>>>          751,765,068      cpu_core/slots/                                               (84.07%)
>>>>      <not supported>      cpu_core/topdown-bad-spec/
>>>>      <not supported>      cpu_core/topdown-be-bound/
>>>>      <not supported>      cpu_core/topdown-fe-bound/
>>>>      <not supported>      cpu_core/topdown-retiring/
>>>>           12,398,197      cpu_core/branch-instructions/                                     (84.07%)
>>>>            1,054,218      cpu_core/branch-misses/                                       (84.24%)
>>>>          539,764,637      cpu_core/bus-cycles/                                          (84.64%)
>>>>               14,683      cpu_core/cache-misses/                                        (84.87%)
>>>>            7,277,809      cpu_core/cache-references/                                     (77.30%)
>>>>          222,299,439      cpu_core/cpu-cycles/                                          (77.28%)
>>>>           63,661,714      cpu_core/instructions/                                        (84.85%)
>>>>                    0      cpu_core/mem-loads/                                           (77.29%)
>>>>           12,271,725      cpu_core/mem-stores/                                          (77.30%)
>>>>          542,241,102      cpu_core/ref-cycles/                                          (84.85%)
>>>>                8,854      cpu_core/cache-misses/                                        (76.71%)
>>>>            7,179,013      cpu_core/cache-references/                                     (76.31%)
>>>>
>>>>          1.003245250 seconds time elapsed
>>>>
>>>> A hybrid platform has different PMU names for its core PMUs, while
>>>> the current perf tool hard codes the PMU name "cpu".
>>>>
>>>> The evsel->pmu_name can be used in place of the hard-coded "cpu" to
>>>> fix the issue. For a hybrid platform, the pmu_name must be non-NULL,
>>>> because there are at least two core PMUs and the PMU has to be
>>>> specified. For a non-hybrid platform, the pmu_name may be NULL,
>>>> because there is only one core PMU, "cpu", so a NULL pmu_name can
>>>> safely be assumed to mean the "cpu" PMU.
>>>>
>>>> With the patch,
>>>>
>>>> $perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
>>>> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
>>>> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
>>>> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
>>>> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
>>>> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
>>>> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
>>>>
>>>>    Performance counter stats for 'system wide':
>>>>
>>>>          766,620,266      cpu_core/slots/                                               (84.06%)
>>>>           73,172,129      cpu_core/topdown-bad-spec/ #      9.5% bad speculation         (84.06%)
>>>>          193,443,341      cpu_core/topdown-be-bound/ #     25.0% backend bound           (84.06%)
>>>>          403,940,929      cpu_core/topdown-fe-bound/ #     52.3% frontend bound          (84.06%)
>>>>          102,070,237      cpu_core/topdown-retiring/ #     13.2% retiring                (84.06%)
>>>>           12,364,429      cpu_core/branch-instructions/                                     (84.03%)
>>>>            1,080,124      cpu_core/branch-misses/                                       (84.24%)
>>>>          564,120,383      cpu_core/bus-cycles/                                          (84.65%)
>>>>               36,979      cpu_core/cache-misses/                                        (84.86%)
>>>>            7,298,094      cpu_core/cache-references/                                     (77.30%)
>>>>          227,174,372      cpu_core/cpu-cycles/                                          (77.31%)
>>>>           63,886,523      cpu_core/instructions/                                        (84.87%)
>>>>                    0      cpu_core/mem-loads/                                           (77.31%)
>>>>           12,208,782      cpu_core/mem-stores/                                          (77.31%)
>>>>          566,409,738      cpu_core/ref-cycles/                                          (84.87%)
>>>>               23,118      cpu_core/cache-misses/                                        (76.71%)
>>>>            7,212,602      cpu_core/cache-references/                                     (76.29%)
>>>>
>>>>          1.003228667 seconds time elapsed
>>>>
>>>> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
>>>> ---
>>>>    tools/perf/arch/x86/util/evsel.c | 5 +++--
>>>>    1 file changed, 3 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
>>>> index 00cb4466b4ca..24510bcb4bf4 100644
>>>> --- a/tools/perf/arch/x86/util/evsel.c
>>>> +++ b/tools/perf/arch/x86/util/evsel.c
>>>> @@ -33,8 +33,9 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
>>>>
>>>>    bool arch_evsel__must_be_in_group(const struct evsel *evsel)
>>>>    {
>>>> -       if ((evsel->pmu_name && strcmp(evsel->pmu_name, "cpu")) ||
>>>> -           !pmu_have_event("cpu", "slots"))
>>>> +       const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
>>>> +
>>>> +       if (!pmu_have_event(pmu_name, "slots"))
>>>
>>> Playing devil's advocate, if I have a PMU for my network accelerator
>>> and it has an event called "slots" then this test will also be true.
>>>
>>
>> IIRC, pmu_have_event() only checks events that are exposed by the
>> kernel. It's very unlikely that another PMU exposes the exact same name.
>>
>> If you're still worried about it, I think we can check the PMU type
>> PERF_TYPE_RAW here, which is reserved for the core PMU; other PMUs
>> cannot use it.
> 
> That's cool, this isn't documented behavior though:
> https://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git/tree/tools/include/uapi/linux/perf_event.h?h=perf/core#n34
> and PERF_TYPE_HARDWARE wouldn't seem a wholly unreasonable type. It
> kind of feels like depending on a quirk, and so we should bury the
> quirk in a helper function and document it :-)

PERF_TYPE_HARDWARE and PERF_TYPE_HW_CACHE are aliases for
PERF_TYPE_RAW. PERF_TYPE_HARDWARE is used for the 10 common hardware
events and PERF_TYPE_HW_CACHE is used for the hardware cache events.
Other core events (not including the atom core events on a hybrid
machine) should have the PERF_TYPE_RAW type.

Since the perf metrics are a big-core-only feature, checking both the
PERF_TYPE_RAW type and the slots event should be good enough here.

I will add more comments in V2.
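
For reference, the relevant type values from
include/uapi/linux/perf_event.h:

	PERF_TYPE_HARDWARE   = 0,
	PERF_TYPE_SOFTWARE   = 1,
	PERF_TYPE_TRACEPOINT = 2,
	PERF_TYPE_HW_CACHE   = 3,
	PERF_TYPE_RAW        = 4,
	PERF_TYPE_BREAKPOINT = 5,

On x86, PERF_TYPE_RAW is also the type that the big core PMU registers
with, which is the quirk these checks rely on.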

> 
>> It looks like arch_evsel__must_be_in_group() is the only user of
>> evsel__sys_has_perf_metrics() for now, so I made it static.
>>
>> The other pmu_have_event("cpu", "slots") call is in evlist.c.
>> topdown_sys_has_perf_metrics() in patch 4 should be used to replace it.
>> I think Zhengjun will post patches for the evlist.c changes.
> 
> Ok, is Zhengjun putting his changes on top of this and fixing up the
> APIs, or is he waiting for these changes to land? Let me know how to
> help. I'm guessing landing my changes is the first step.

Right. Your changes are the first step, then this patch set. Zhengjun's
will be on top of ours.

Thanks,
Kan

> 
> Thanks,
> Ian
> 
>> diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
>> index 24510bcb4bf4..a4714174e30f 100644
>> --- a/tools/perf/arch/x86/util/evsel.c
>> +++ b/tools/perf/arch/x86/util/evsel.c
>> @@ -31,11 +31,20 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
>>           free(env.cpuid);
>>    }
>>
>> -bool arch_evsel__must_be_in_group(const struct evsel *evsel)
>> +static bool evsel__sys_has_perf_metrics(const struct evsel *evsel)
>>    {
>>           const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
>>
>> -       if (!pmu_have_event(pmu_name, "slots"))
>> +       if ((evsel->core.attr.type == PERF_TYPE_RAW) &&
>> +           pmu_have_event(pmu_name, "slots"))
>> +               return true;
>> +
>> +       return false;
>> +}
>> +
>> +bool arch_evsel__must_be_in_group(const struct evsel *evsel)
>> +{
>> +       if (!evsel__sys_has_perf_metrics(evsel))
>>                   return false;
>>
>>           return evsel->name &&
>>
>> Thanks,
>> Kan
>>
>>> The property that is being tested here is "does this CPU have topdown
>>> events" and so allowing any PMU removes the "does this CPU" part of
>>> the equation. I think ideally we'd have arch functions something
>>> like:
>>>
>>> bool arch_pmu__has_intel_topdown_events(void)
>>> {
>>>     /* C forbids a non-constant static initializer, so cache by hand. */
>>>     static bool cached, has_topdown_events;
>>>
>>>     if (!cached) {
>>>         has_topdown_events = pmu_have_event("cpu", "slots") ||
>>>                              pmu_have_event("cpu_core", "slots");
>>>         cached = true;
>>>     }
>>>     return has_topdown_events;
>>> }
>>>
>>> bool arch_pmu__supports_intel_topdown_events(const char *pmu_name)
>>> {
>>>     if (!pmu_name)
>>>       return false;
>>>     return arch_pmu__has_intel_topdown_events() &&
>>>            (!strcmp(pmu_name, "cpu") || !strcmp(pmu_name, "cpu_core"));
>>> }
>>>
>>> bool arch_evsel__is_intel_topdown_event(struct evsel *evsel)
>>> {
>>>     if (!arch_pmu__supports_intel_topdown_events(evsel->pmu_name))
>>>       return false;
>>>
>>>     return strcasestr(evsel->name, "slots") || strcasestr(evsel->name, "topdown");
>>> }
>>>
>>> This then gives us:
>>>
>>> bool arch_evsel__must_be_in_group(const struct evsel *evsel)
>>> {
>>>     return  arch_evsel__is_intel_topdown_event(evsel);
>>> }
>>>
>>> These functions can then be reused for the arch_evlist topdown code,
>>> etc. What I don't see in these functions is use of any hybrid
>>> abstraction, so it isn't clear to me how something like this would be
>>> plumbed in with hybrid.
>>>
>>> Thanks,
>>> Ian
>>>
>>>>                   return false;
>>>>
>>>>           return evsel->name &&
>>>> --
>>>> 2.35.1
>>>>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/4] perf stat: Always keep perf metrics topdown events in a group
  2022-05-13 16:32   ` Ian Rogers
@ 2022-05-13 17:29     ` Liang, Kan
  0 siblings, 0 replies; 13+ messages in thread
From: Liang, Kan @ 2022-05-13 17:29 UTC (permalink / raw)
  To: Ian Rogers
  Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
	peterz, zhengjun.xing, adrian.hunter, ak, eranian



On 5/13/2022 12:32 PM, Ian Rogers wrote:
> On Fri, May 13, 2022 at 8:16 AM <kan.liang@linux.intel.com> wrote:
>>
>> From: Kan Liang <kan.liang@linux.intel.com>
>>
>> If any member in a group has a different cpu mask than the other
>> members, the current perf stat disables the group. When the perf metrics
>> topdown events are part of the group, the <not supported> error below
>> will be triggered.
>>
>> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
>> WARNING: grouped events cpus do not match, disabling group:
>>    anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>>
>>   Performance counter stats for 'system wide':
>>
>>         141,465,174      slots
>>     <not supported>      topdown-retiring
>>       1,605,330,334      uncore_imc_free_running_0/dclk/
>>
>> The perf metrics topdown events must always be grouped with a slots
>> event as leader.
>>
>> With the patch, the topdown events are no longer broken out of the
>> group during the splitting.
>>
>> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
>> WARNING: grouped events cpus do not match, disabling group:
>>    anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>>
>>   Performance counter stats for 'system wide':
>>
>>         346,110,588      slots
>>         124,608,256      topdown-retiring
>>       1,606,869,976      uncore_imc_free_running_0/dclk/
>>
>>         1.003877592 seconds time elapsed
> 
> Nice! This is based on:
> https://lore.kernel.org/lkml/20220512061308.1152233-2-irogers@google.com/
> You may end up with a group with the leader having a group count of 1
> (itself). I explicitly zeroed that in the change above, but this may
> be unnecessary. Maybe we should move this code to helper functions for
> sharing and consistency on what the leader count should be.
> 

I think the current code already does evsel->core.leader->nr_members = 0;
for the leader, so I didn't change it.

Yes, a helper function seems reasonable. I will add it in V2.
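
A sketch of what such a helper could look like (the name is
hypothetical; the operations are the ones in the hunk below):

	/* Hypothetical helper: detach @evsel from its group. */
	static void evsel__remove_from_group(struct evsel *evsel,
					     struct evsel *leader)
	{
		if (evsel == leader)
			return;
		evsel__set_leader(evsel, evsel);
		evsel->core.nr_members = 0;
		leader->core.nr_members--;
	}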

Thanks,
Kan
> Thanks,
> Ian
> 
>> Fixes: a9a1790247bd ("perf stat: Ensure group is defined on top of the same cpu mask")
>> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
>> ---
>>   tools/perf/builtin-stat.c | 7 +++++--
>>   1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
>> index a96f106dc93a..af2248868a4f 100644
>> --- a/tools/perf/builtin-stat.c
>> +++ b/tools/perf/builtin-stat.c
>> @@ -272,8 +272,11 @@ static void evlist__check_cpu_maps(struct evlist *evlist)
>>                  }
>>
>>                  for_each_group_evsel(pos, leader) {
>> -                       evsel__set_leader(pos, pos);
>> -                       pos->core.nr_members = 0;
>> +                       if (!evsel__must_be_in_group(pos) && pos != leader) {
>> +                               evsel__set_leader(pos, pos);
>> +                               pos->core.nr_members = 0;
>> +                               leader->core.nr_members--;
>> +                       }
>>                  }
>>                  evsel->core.leader->nr_members = 0;
>>          }
>> --
>> 2.35.1
>>

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2022-05-13 17:29 UTC | newest]

Thread overview: 13+ messages
2022-05-13 15:15 [PATCH 0/4] Several perf metrics topdown related fixes kan.liang
2022-05-13 15:15 ` [PATCH 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform kan.liang
2022-05-13 15:39   ` Ian Rogers
2022-05-13 16:24     ` Liang, Kan
2022-05-13 16:43       ` Ian Rogers
2022-05-13 17:24         ` Liang, Kan
2022-05-13 15:15 ` [PATCH 2/4] perf stat: Always keep perf metrics topdown events in a group kan.liang
2022-05-13 16:32   ` Ian Rogers
2022-05-13 17:29     ` Liang, Kan
2022-05-13 15:15 ` [PATCH 3/4] perf parse-events: Support different format of the topdown event name kan.liang
2022-05-13 16:44   ` Ian Rogers
2022-05-13 15:15 ` [PATCH 4/4] perf parse-events: Move slots event for the hybrid platform too kan.liang
2022-05-13 17:07   ` Ian Rogers
