* [PATCH V3 0/4] Several perf metrics topdown related fixes
@ 2022-05-18 14:38 kan.liang
  2022-05-18 14:38 ` [PATCH V3 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform kan.liang
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: kan.liang @ 2022-05-18 14:38 UTC (permalink / raw)
  To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
  Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

Patch 1 is a follow-up to Ian's ("Fix topdown event weak
grouping") [1].

Patch 2 fixes the perf metrics topdown events in a mixed group.
It reuses the function introduced in [1].
Patches 1 & 2 should be applied on top of [1].

Patches 3 & 4 fix other perf metrics topdown related issues.
They can be merged separately.

[1]: https://lore.kernel.org/all/20220517052724.283874-2-irogers@google.com/

Changes since V2:
- Add more comments for the evsel__sys_has_perf_metrics() and
  topdown_sys_has_perf_metrics()
- Remove the unnecessary evsel->core.leader->nr_members = 0; in patch 2.
  The value has been updated in the new evsel__remove_from_group().
- Add Reviewed-by from Ian for patch 4

Changes since V1:
- Add comments for the evsel__sys_has_perf_metrics() and
  topdown_sys_has_perf_metrics()
- Factor out evsel__remove_from_group()
- Add Reviewed-by from Ian for patch 3

Kan Liang (4):
  perf evsel: Fixes topdown events in a weak group for the hybrid
    platform
  perf stat: Always keep perf metrics topdown events in a group
  perf parse-events: Support different format of the topdown event name
  perf parse-events: Move slots event for the hybrid platform too

 tools/perf/arch/x86/util/evlist.c  |  7 ++++---
 tools/perf/arch/x86/util/evsel.c   | 23 +++++++++++++++++++++--
 tools/perf/arch/x86/util/topdown.c | 25 +++++++++++++++++++++++++
 tools/perf/arch/x86/util/topdown.h |  7 +++++++
 tools/perf/builtin-stat.c          |  7 ++-----
 tools/perf/util/evlist.c           |  6 +-----
 tools/perf/util/evsel.c            | 13 +++++++++++--
 tools/perf/util/evsel.h            |  2 +-
 8 files changed, 72 insertions(+), 18 deletions(-)
 create mode 100644 tools/perf/arch/x86/util/topdown.h

-- 
2.35.1



* [PATCH V3 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform
  2022-05-18 14:38 [PATCH V3 0/4] Several perf metrics topdown related fixes kan.liang
@ 2022-05-18 14:38 ` kan.liang
  2022-05-19  4:31   ` Ian Rogers
  2022-05-18 14:38 ` [PATCH V3 2/4] perf stat: Always keep perf metrics topdown events in a group kan.liang
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 8+ messages in thread
From: kan.liang @ 2022-05-18 14:38 UTC (permalink / raw)
  To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
  Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

The patch ("perf evlist: Keep topdown counters in weak group") fixes the
perf metrics topdown event issue when the topdown events are in a weak
group on a non-hybrid platform. However, it doesn't work for the hybrid
platform.

$./perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1

Performance counter stats for 'system wide':

     751,765,068      cpu_core/slots/                        (84.07%)
 <not supported>      cpu_core/topdown-bad-spec/
 <not supported>      cpu_core/topdown-be-bound/
 <not supported>      cpu_core/topdown-fe-bound/
 <not supported>      cpu_core/topdown-retiring/
      12,398,197      cpu_core/branch-instructions/          (84.07%)
       1,054,218      cpu_core/branch-misses/                (84.24%)
     539,764,637      cpu_core/bus-cycles/                   (84.64%)
          14,683      cpu_core/cache-misses/                 (84.87%)
       7,277,809      cpu_core/cache-references/             (77.30%)
     222,299,439      cpu_core/cpu-cycles/                   (77.28%)
      63,661,714      cpu_core/instructions/                 (84.85%)
               0      cpu_core/mem-loads/                    (77.29%)
      12,271,725      cpu_core/mem-stores/                   (77.30%)
     542,241,102      cpu_core/ref-cycles/                   (84.85%)
           8,854      cpu_core/cache-misses/                 (76.71%)
       7,179,013      cpu_core/cache-references/             (76.31%)

       1.003245250 seconds time elapsed

A hybrid platform has a different PMU name for the core PMUs, while
the current perf code hard codes the PMU name "cpu".

The evsel->pmu_name can be used in place of "cpu" to fix the issue.
For a hybrid platform, the pmu_name must be non-NULL, because there are
at least two core PMUs and the PMU has to be specified.
For a non-hybrid platform, the pmu_name may be NULL, because there is
only one core PMU, "cpu". For a NULL pmu_name, we can safely assume that
it is the "cpu" PMU.

In case other PMUs also define a "slots" event, check the PMU type
as well.
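
As a minimal, hypothetical sketch (not part of the patch) of why the old
hard-coded check misbehaves on a hybrid machine: the previous
arch_evsel__must_be_in_group() bailed out for any PMU name other than the
literal "cpu", so cpu_core topdown events were allowed to leave their weak
group. The helper name below is made up for this sketch.

    #include <assert.h>
    #include <stdbool.h>
    #include <string.h>

    /* Model of the old bail-out condition only; the real fix replaces it
     * with the per-evsel evsel__sys_has_perf_metrics() check in the diff
     * below. */
    static bool old_check_bails_out(const char *pmu_name)
    {
            return pmu_name && strcmp(pmu_name, "cpu");
    }

    int main(void)
    {
            assert(!old_check_bails_out("cpu"));     /* non-hybrid core PMU: kept in group */
            assert(!old_check_bails_out(NULL));      /* legacy event, "cpu" assumed */
            assert(old_check_bails_out("cpu_core")); /* hybrid core PMU: wrongly dropped */
            return 0;
    }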

With the patch,

$perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1

Performance counter stats for 'system wide':

   766,620,266   cpu_core/slots/                                        (84.06%)
    73,172,129   cpu_core/topdown-bad-spec/ #    9.5% bad speculation   (84.06%)
   193,443,341   cpu_core/topdown-be-bound/ #    25.0% backend bound    (84.06%)
   403,940,929   cpu_core/topdown-fe-bound/ #    52.3% frontend bound   (84.06%)
   102,070,237   cpu_core/topdown-retiring/ #    13.2% retiring         (84.06%)
    12,364,429   cpu_core/branch-instructions/                          (84.03%)
     1,080,124   cpu_core/branch-misses/                                (84.24%)
   564,120,383   cpu_core/bus-cycles/                                   (84.65%)
        36,979   cpu_core/cache-misses/                                 (84.86%)
     7,298,094   cpu_core/cache-references/                             (77.30%)
   227,174,372   cpu_core/cpu-cycles/                                   (77.31%)
    63,886,523   cpu_core/instructions/                                 (84.87%)
             0   cpu_core/mem-loads/                                    (77.31%)
    12,208,782   cpu_core/mem-stores/                                   (77.31%)
   566,409,738   cpu_core/ref-cycles/                                   (84.87%)
        23,118   cpu_core/cache-misses/                                 (76.71%)
     7,212,602   cpu_core/cache-references/                             (76.29%)

     1.003228667 seconds time elapsed

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 tools/perf/arch/x86/util/evsel.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
index 00cb4466b4ca..88306183d629 100644
--- a/tools/perf/arch/x86/util/evsel.c
+++ b/tools/perf/arch/x86/util/evsel.c
@@ -31,10 +31,29 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
 	free(env.cpuid);
 }
 
+/* Check whether the evsel's PMU supports the perf metrics */
+static bool evsel__sys_has_perf_metrics(const struct evsel *evsel)
+{
+	const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
+
+	/*
+	 * The PERF_TYPE_RAW type is the core PMU type, e.g., "cpu" PMU
+	 * on a non-hybrid machine, "cpu_core" PMU on a hybrid machine.
+	 * The slots event is only available for the core PMU, which
+	 * supports the perf metrics feature.
+	 * Checking both the PERF_TYPE_RAW type and the slots event
+	 * should be good enough to detect the perf metrics feature.
+	 */
+	if ((evsel->core.attr.type == PERF_TYPE_RAW) &&
+	    pmu_have_event(pmu_name, "slots"))
+		return true;
+
+	return false;
+}
+
 bool arch_evsel__must_be_in_group(const struct evsel *evsel)
 {
-	if ((evsel->pmu_name && strcmp(evsel->pmu_name, "cpu")) ||
-	    !pmu_have_event("cpu", "slots"))
+	if (!evsel__sys_has_perf_metrics(evsel))
 		return false;
 
 	return evsel->name &&
-- 
2.35.1



* [PATCH V3 2/4] perf stat: Always keep perf metrics topdown events in a group
  2022-05-18 14:38 [PATCH V3 0/4] Several perf metrics topdown related fixes kan.liang
  2022-05-18 14:38 ` [PATCH V3 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform kan.liang
@ 2022-05-18 14:38 ` kan.liang
  2022-05-19  4:26   ` Ian Rogers
  2022-05-18 14:38 ` [PATCH V3 3/4] perf parse-events: Support different format of the topdown event name kan.liang
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 8+ messages in thread
From: kan.liang @ 2022-05-18 14:38 UTC (permalink / raw)
  To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
  Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

If any member in a group has a different cpu mask than the other
members, the current perf stat disables the group. When the perf metrics
topdown events are part of the group, the <not supported> error below
is triggered.

$ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
WARNING: grouped events cpus do not match, disabling group:
  anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }

 Performance counter stats for 'system wide':

       141,465,174      slots
   <not supported>      topdown-retiring
     1,605,330,334      uncore_imc_free_running_0/dclk/

The perf metrics topdown events must always be grouped with a slots
event as leader.

Factor out evsel__remove_from_group() to only remove the regular events
from the group.

Remove evsel__must_be_in_group(), since no one uses it anymore.

With the patch, the topdown events are no longer removed from the group
when the group is split.

$ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
WARNING: grouped events cpus do not match, disabling group:
  anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }

 Performance counter stats for 'system wide':

       346,110,588      slots
       124,608,256      topdown-retiring
     1,606,869,976      uncore_imc_free_running_0/dclk/

       1.003877592 seconds time elapsed

Fixes: a9a1790247bd ("perf stat: Ensure group is defined on top of the same cpu mask")
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 tools/perf/builtin-stat.c |  7 ++-----
 tools/perf/util/evlist.c  |  6 +-----
 tools/perf/util/evsel.c   | 13 +++++++++++--
 tools/perf/util/evsel.h   |  2 +-
 4 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index a96f106dc93a..f058e8cddfa8 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -271,11 +271,8 @@ static void evlist__check_cpu_maps(struct evlist *evlist)
 			pr_warning("     %s: %s\n", evsel->name, buf);
 		}
 
-		for_each_group_evsel(pos, leader) {
-			evsel__set_leader(pos, pos);
-			pos->core.nr_members = 0;
-		}
-		evsel->core.leader->nr_members = 0;
+		for_each_group_evsel(pos, leader)
+			evsel__remove_from_group(pos, leader);
 	}
 }
 
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index dfa65a383502..7fc544330fea 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -1795,11 +1795,7 @@ struct evsel *evlist__reset_weak_group(struct evlist *evsel_list, struct evsel *
 			 * them. Some events, like Intel topdown, require being
 			 * in a group and so keep these in the group.
 			 */
-			if (!evsel__must_be_in_group(c2) && c2 != leader) {
-				evsel__set_leader(c2, c2);
-				c2->core.nr_members = 0;
-				leader->core.nr_members--;
-			}
+			evsel__remove_from_group(c2, leader);
 
 			/*
 			 * Set this for all former members of the group
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index b98882cbb286..deb428ee5e50 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -3083,7 +3083,16 @@ bool __weak arch_evsel__must_be_in_group(const struct evsel *evsel __maybe_unuse
 	return false;
 }
 
-bool evsel__must_be_in_group(const struct evsel *evsel)
+/*
+ * Remove an event from a given group (leader).
+ * Some events, e.g., perf metrics Topdown events,
+ * must always be grouped. Ignore the events.
+ */
+void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader)
 {
-	return arch_evsel__must_be_in_group(evsel);
+	if (!arch_evsel__must_be_in_group(evsel) && evsel != leader) {
+		evsel__set_leader(evsel, evsel);
+		evsel->core.nr_members = 0;
+		leader->core.nr_members--;
+	}
 }
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index a36172ed4cf6..47f65f8e7c74 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -483,7 +483,7 @@ bool evsel__has_leader(struct evsel *evsel, struct evsel *leader);
 bool evsel__is_leader(struct evsel *evsel);
 void evsel__set_leader(struct evsel *evsel, struct evsel *leader);
 int evsel__source_count(const struct evsel *evsel);
-bool evsel__must_be_in_group(const struct evsel *evsel);
+void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader);
 
 bool arch_evsel__must_be_in_group(const struct evsel *evsel);
 
-- 
2.35.1



* [PATCH V3 3/4] perf parse-events: Support different format of the topdown event name
  2022-05-18 14:38 [PATCH V3 0/4] Several perf metrics topdown related fixes kan.liang
  2022-05-18 14:38 ` [PATCH V3 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform kan.liang
  2022-05-18 14:38 ` [PATCH V3 2/4] perf stat: Always keep perf metrics topdown events in a group kan.liang
@ 2022-05-18 14:38 ` kan.liang
  2022-05-18 14:39 ` [PATCH V3 4/4] perf parse-events: Move slots event for the hybrid platform too kan.liang
  2022-05-20 14:15 ` [PATCH V3 0/4] Several perf metrics topdown related fixes Arnaldo Carvalho de Melo
  4 siblings, 0 replies; 8+ messages in thread
From: kan.liang @ 2022-05-18 14:38 UTC (permalink / raw)
  To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
  Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

The evsel->name may take different formats for a topdown event: a pure
topdown name (e.g., topdown-fe-bound) or a PMU name plus a topdown name
(e.g., cpu/topdown-fe-bound/). The cpu/topdown-fe-bound/ format isn't
supported by arch_evlist__leader(). This format is very common on a
hybrid platform, which requires specifying the PMU name for each event.
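
The one-line change in the diff below swaps a prefix match for a substring
match. As a small, hypothetical illustration (not part of the patch) of why
that matters for the PMU-qualified spelling:

    #define _GNU_SOURCE
    #include <assert.h>
    #include <string.h>
    #include <strings.h>

    int main(void)
    {
            const char *bare = "topdown-fe-bound";
            const char *qualified = "cpu/topdown-fe-bound/";

            /* Old check: only the bare name starts with "topdown". */
            assert(strncasecmp(bare, "topdown", 7) == 0);
            assert(strncasecmp(qualified, "topdown", 7) != 0);

            /* New check: strcasestr() also finds "topdown" inside the
             * PMU-qualified name used on hybrid platforms. */
            assert(strcasestr(qualified, "topdown") != NULL);
            return 0;
    }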

Without the patch,

$perf stat -e '{instructions,slots,cpu/topdown-fe-bound/}' -a sleep 1

 Performance counter stats for 'system wide':

     <not counted>      instructions
     <not counted>      slots
   <not supported>      cpu/topdown-fe-bound/

       1.003482041 seconds time elapsed

Some events weren't counted. Try disabling the NMI watchdog:
        echo 0 > /proc/sys/kernel/nmi_watchdog
        perf stat ...
        echo 1 > /proc/sys/kernel/nmi_watchdog
The events in group usually have to be from the same PMU. Try reorganizing the group.

With the patch,

perf stat -e '{instructions,slots,cpu/topdown-fe-bound/}' -a sleep 1

 Performance counter stats for 'system wide':

       157,383,996      slots
        25,011,711      instructions
        27,441,686      cpu/topdown-fe-bound/

       1.003530890 seconds time elapsed

Fixes: bc355822f0d9 ("perf parse-events: Move slots only with topdown")
Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 tools/perf/arch/x86/util/evlist.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
index cfc208d71f00..75564a7df15b 100644
--- a/tools/perf/arch/x86/util/evlist.c
+++ b/tools/perf/arch/x86/util/evlist.c
@@ -36,7 +36,7 @@ struct evsel *arch_evlist__leader(struct list_head *list)
 				if (slots == first)
 					return first;
 			}
-			if (!strncasecmp(evsel->name, "topdown", 7))
+			if (strcasestr(evsel->name, "topdown"))
 				has_topdown = true;
 			if (slots && has_topdown)
 				return slots;
-- 
2.35.1



* [PATCH V3 4/4] perf parse-events: Move slots event for the hybrid platform too
  2022-05-18 14:38 [PATCH V3 0/4] Several perf metrics topdown related fixes kan.liang
                   ` (2 preceding siblings ...)
  2022-05-18 14:38 ` [PATCH V3 3/4] perf parse-events: Support different format of the topdown event name kan.liang
@ 2022-05-18 14:39 ` kan.liang
  2022-05-20 14:15 ` [PATCH V3 0/4] Several perf metrics topdown related fixes Arnaldo Carvalho de Melo
  4 siblings, 0 replies; 8+ messages in thread
From: kan.liang @ 2022-05-18 14:39 UTC (permalink / raw)
  To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
  Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang

From: Kan Liang <kan.liang@linux.intel.com>

Commit 94dbfd6781a0 ("perf parse-events: Architecture specific
leader override") introduced a feature to reorder the slots event to
satisfy the restriction of the perf metrics topdown group. However, the
feature doesn't work on a hybrid machine.

$perf stat -e "{cpu_core/instructions/,cpu_core/slots/,cpu_core/topdown-retiring/}" -a sleep 1

 Performance counter stats for 'system wide':

     <not counted>      cpu_core/instructions/
     <not counted>      cpu_core/slots/
   <not supported>      cpu_core/topdown-retiring/

       1.002871801 seconds time elapsed

A hybrid platform has a different PMU name for the core PMUs, while
the current perf code hard codes the PMU name "cpu".

Introduce a new function to check whether the system supports the perf
metrics feature. The result is cached for future use.

For x86, the core PMU name always has the "cpu" prefix.
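
As a small, hypothetical illustration (not part of the patch) of the prefix
check used in the evlist change below: comparing only the first three bytes
accepts every x86 core PMU spelling while still rejecting non-core PMUs.

    #include <assert.h>
    #include <string.h>

    int main(void)
    {
            assert(strncmp("cpu", "cpu", 3) == 0);      /* non-hybrid core PMU */
            assert(strncmp("cpu_core", "cpu", 3) == 0); /* hybrid P-core PMU */
            assert(strncmp("cpu_atom", "cpu", 3) == 0); /* hybrid E-core PMU */
            assert(strncmp("uncore_imc_free_running_0", "cpu", 3) != 0); /* uncore PMU */
            return 0;
    }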

With the patch,

$perf stat -e "{cpu_core/instructions/,cpu_core/slots/,cpu_core/topdown-retiring/}" -a sleep 1

 Performance counter stats for 'system wide':

        76,337,010      cpu_core/slots/
        10,416,809      cpu_core/instructions/
        11,692,372      cpu_core/topdown-retiring/

       1.002805453 seconds time elapsed

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 tools/perf/arch/x86/util/evlist.c  |  5 +++--
 tools/perf/arch/x86/util/topdown.c | 25 +++++++++++++++++++++++++
 tools/perf/arch/x86/util/topdown.h |  7 +++++++
 3 files changed, 35 insertions(+), 2 deletions(-)
 create mode 100644 tools/perf/arch/x86/util/topdown.h

diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
index 75564a7df15b..68f681ad54c1 100644
--- a/tools/perf/arch/x86/util/evlist.c
+++ b/tools/perf/arch/x86/util/evlist.c
@@ -3,6 +3,7 @@
 #include "util/pmu.h"
 #include "util/evlist.h"
 #include "util/parse-events.h"
+#include "topdown.h"
 
 #define TOPDOWN_L1_EVENTS	"{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound}"
 #define TOPDOWN_L2_EVENTS	"{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}"
@@ -25,12 +26,12 @@ struct evsel *arch_evlist__leader(struct list_head *list)
 
 	first = list_first_entry(list, struct evsel, core.node);
 
-	if (!pmu_have_event("cpu", "slots"))
+	if (!topdown_sys_has_perf_metrics())
 		return first;
 
 	/* If there is a slots event and a topdown event then the slots event comes first. */
 	__evlist__for_each_entry(list, evsel) {
-		if (evsel->pmu_name && !strcmp(evsel->pmu_name, "cpu") && evsel->name) {
+		if (evsel->pmu_name && !strncmp(evsel->pmu_name, "cpu", 3) && evsel->name) {
 			if (strcasestr(evsel->name, "slots")) {
 				slots = evsel;
 				if (slots == first)
diff --git a/tools/perf/arch/x86/util/topdown.c b/tools/perf/arch/x86/util/topdown.c
index 2f3d96aa92a5..f4d5422e9960 100644
--- a/tools/perf/arch/x86/util/topdown.c
+++ b/tools/perf/arch/x86/util/topdown.c
@@ -3,6 +3,31 @@
 #include "api/fs/fs.h"
 #include "util/pmu.h"
 #include "util/topdown.h"
+#include "topdown.h"
+
+/* Check whether there is a PMU which supports the perf metrics. */
+bool topdown_sys_has_perf_metrics(void)
+{
+	static bool has_perf_metrics;
+	static bool cached;
+	struct perf_pmu *pmu;
+
+	if (cached)
+		return has_perf_metrics;
+
+	/*
+	 * The perf metrics feature is a core PMU feature.
+	 * The PERF_TYPE_RAW type is the type of a core PMU.
+	 * The slots event is only available when the core PMU
+	 * supports the perf metrics feature.
+	 */
+	pmu = perf_pmu__find_by_type(PERF_TYPE_RAW);
+	if (pmu && pmu_have_event(pmu->name, "slots"))
+		has_perf_metrics = true;
+
+	cached = true;
+	return has_perf_metrics;
+}
 
 /*
  * Check whether we can use a group for top down.
diff --git a/tools/perf/arch/x86/util/topdown.h b/tools/perf/arch/x86/util/topdown.h
new file mode 100644
index 000000000000..46bf9273e572
--- /dev/null
+++ b/tools/perf/arch/x86/util/topdown.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _TOPDOWN_H
+#define _TOPDOWN_H 1
+
+bool topdown_sys_has_perf_metrics(void);
+
+#endif
-- 
2.35.1



* Re: [PATCH V3 2/4] perf stat: Always keep perf metrics topdown events in a group
  2022-05-18 14:38 ` [PATCH V3 2/4] perf stat: Always keep perf metrics topdown events in a group kan.liang
@ 2022-05-19  4:26   ` Ian Rogers
  0 siblings, 0 replies; 8+ messages in thread
From: Ian Rogers @ 2022-05-19  4:26 UTC (permalink / raw)
  To: kan.liang
  Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
	peterz, zhengjun.xing, adrian.hunter, ak, eranian

On Wed, May 18, 2022 at 7:39 AM <kan.liang@linux.intel.com> wrote:
>
> From: Kan Liang <kan.liang@linux.intel.com>
>
> If any member in a group has a different cpu mask than the other
> members, the current perf stat disables the group. When the perf metrics
> topdown events are part of the group, the <not supported> error below
> is triggered.
>
> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
> WARNING: grouped events cpus do not match, disabling group:
>   anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>
>  Performance counter stats for 'system wide':
>
>        141,465,174      slots
>    <not supported>      topdown-retiring
>      1,605,330,334      uncore_imc_free_running_0/dclk/
>
> The perf metrics topdown events must always be grouped with a slots
> event as leader.
>
> Factor out evsel__remove_from_group() to only remove the regular events
> from the group.
>
> Remove evsel__must_be_in_group(), since no one uses it anymore.
>
> With the patch, the topdown events are no longer removed from the group
> when the group is split.
>
> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
> WARNING: grouped events cpus do not match, disabling group:
>   anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>
>  Performance counter stats for 'system wide':
>
>        346,110,588      slots
>        124,608,256      topdown-retiring
>      1,606,869,976      uncore_imc_free_running_0/dclk/
>
>        1.003877592 seconds time elapsed
>
> Fixes: a9a1790247bd ("perf stat: Ensure group is defined on top of the same cpu mask")
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>

Acked-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

> ---
>  tools/perf/builtin-stat.c |  7 ++-----
>  tools/perf/util/evlist.c  |  6 +-----
>  tools/perf/util/evsel.c   | 13 +++++++++++--
>  tools/perf/util/evsel.h   |  2 +-
>  4 files changed, 15 insertions(+), 13 deletions(-)
>
> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> index a96f106dc93a..f058e8cddfa8 100644
> --- a/tools/perf/builtin-stat.c
> +++ b/tools/perf/builtin-stat.c
> @@ -271,11 +271,8 @@ static void evlist__check_cpu_maps(struct evlist *evlist)
>                         pr_warning("     %s: %s\n", evsel->name, buf);
>                 }
>
> -               for_each_group_evsel(pos, leader) {
> -                       evsel__set_leader(pos, pos);
> -                       pos->core.nr_members = 0;
> -               }
> -               evsel->core.leader->nr_members = 0;
> +               for_each_group_evsel(pos, leader)
> +                       evsel__remove_from_group(pos, leader);
>         }
>  }
>
> diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
> index dfa65a383502..7fc544330fea 100644
> --- a/tools/perf/util/evlist.c
> +++ b/tools/perf/util/evlist.c
> @@ -1795,11 +1795,7 @@ struct evsel *evlist__reset_weak_group(struct evlist *evsel_list, struct evsel *
>                          * them. Some events, like Intel topdown, require being
>                          * in a group and so keep these in the group.
>                          */
> -                       if (!evsel__must_be_in_group(c2) && c2 != leader) {
> -                               evsel__set_leader(c2, c2);
> -                               c2->core.nr_members = 0;
> -                               leader->core.nr_members--;
> -                       }
> +                       evsel__remove_from_group(c2, leader);
>
>                         /*
>                          * Set this for all former members of the group
> diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
> index b98882cbb286..deb428ee5e50 100644
> --- a/tools/perf/util/evsel.c
> +++ b/tools/perf/util/evsel.c
> @@ -3083,7 +3083,16 @@ bool __weak arch_evsel__must_be_in_group(const struct evsel *evsel __maybe_unuse
>         return false;
>  }
>
> -bool evsel__must_be_in_group(const struct evsel *evsel)
> +/*
> + * Remove an event from a given group (leader).
> + * Some events, e.g., perf metrics Topdown events,
> + * must always be grouped. Ignore the events.
> + */
> +void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader)
>  {
> -       return arch_evsel__must_be_in_group(evsel);
> +       if (!arch_evsel__must_be_in_group(evsel) && evsel != leader) {
> +               evsel__set_leader(evsel, evsel);
> +               evsel->core.nr_members = 0;
> +               leader->core.nr_members--;
> +       }
>  }
> diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
> index a36172ed4cf6..47f65f8e7c74 100644
> --- a/tools/perf/util/evsel.h
> +++ b/tools/perf/util/evsel.h
> @@ -483,7 +483,7 @@ bool evsel__has_leader(struct evsel *evsel, struct evsel *leader);
>  bool evsel__is_leader(struct evsel *evsel);
>  void evsel__set_leader(struct evsel *evsel, struct evsel *leader);
>  int evsel__source_count(const struct evsel *evsel);
> -bool evsel__must_be_in_group(const struct evsel *evsel);
> +void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader);
>
>  bool arch_evsel__must_be_in_group(const struct evsel *evsel);
>
> --
> 2.35.1
>


* Re: [PATCH V3 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform
  2022-05-18 14:38 ` [PATCH V3 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform kan.liang
@ 2022-05-19  4:31   ` Ian Rogers
  0 siblings, 0 replies; 8+ messages in thread
From: Ian Rogers @ 2022-05-19  4:31 UTC (permalink / raw)
  To: kan.liang
  Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
	peterz, zhengjun.xing, adrian.hunter, ak, eranian

On Wed, May 18, 2022 at 7:39 AM <kan.liang@linux.intel.com> wrote:
>
> From: Kan Liang <kan.liang@linux.intel.com>
>
> The patch ("perf evlist: Keep topdown counters in weak group") fixes the
> perf metrics topdown event issue when the topdown events are in a weak
> group on a non-hybrid platform. However, it doesn't work for the hybrid
> platform.
>
> $./perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
>
> Performance counter stats for 'system wide':
>
>      751,765,068      cpu_core/slots/                        (84.07%)
>  <not supported>      cpu_core/topdown-bad-spec/
>  <not supported>      cpu_core/topdown-be-bound/
>  <not supported>      cpu_core/topdown-fe-bound/
>  <not supported>      cpu_core/topdown-retiring/
>       12,398,197      cpu_core/branch-instructions/          (84.07%)
>        1,054,218      cpu_core/branch-misses/                (84.24%)
>      539,764,637      cpu_core/bus-cycles/                   (84.64%)
>           14,683      cpu_core/cache-misses/                 (84.87%)
>        7,277,809      cpu_core/cache-references/             (77.30%)
>      222,299,439      cpu_core/cpu-cycles/                   (77.28%)
>       63,661,714      cpu_core/instructions/                 (84.85%)
>                0      cpu_core/mem-loads/                    (77.29%)
>       12,271,725      cpu_core/mem-stores/                   (77.30%)
>      542,241,102      cpu_core/ref-cycles/                   (84.85%)
>            8,854      cpu_core/cache-misses/                 (76.71%)
>        7,179,013      cpu_core/cache-references/             (76.31%)
>
>        1.003245250 seconds time elapsed
>
> A hybrid platform has a different PMU name for the core PMUs, while
> the current perf code hard codes the PMU name "cpu".
>
> The evsel->pmu_name can be used in place of "cpu" to fix the issue.
> For a hybrid platform, the pmu_name must be non-NULL, because there are
> at least two core PMUs and the PMU has to be specified.
> For a non-hybrid platform, the pmu_name may be NULL, because there is
> only one core PMU, "cpu". For a NULL pmu_name, we can safely assume that
> it is the "cpu" PMU.
>
> In case other PMUs also define a "slots" event, check the PMU type
> as well.
>
> With the patch,
>
> $perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
>
> Performance counter stats for 'system wide':
>
>    766,620,266   cpu_core/slots/                                        (84.06%)
>     73,172,129   cpu_core/topdown-bad-spec/ #    9.5% bad speculation   (84.06%)
>    193,443,341   cpu_core/topdown-be-bound/ #    25.0% backend bound    (84.06%)
>    403,940,929   cpu_core/topdown-fe-bound/ #    52.3% frontend bound   (84.06%)
>    102,070,237   cpu_core/topdown-retiring/ #    13.2% retiring         (84.06%)
>     12,364,429   cpu_core/branch-instructions/                          (84.03%)
>      1,080,124   cpu_core/branch-misses/                                (84.24%)
>    564,120,383   cpu_core/bus-cycles/                                   (84.65%)
>         36,979   cpu_core/cache-misses/                                 (84.86%)
>      7,298,094   cpu_core/cache-references/                             (77.30%)
>    227,174,372   cpu_core/cpu-cycles/                                   (77.31%)
>     63,886,523   cpu_core/instructions/                                 (84.87%)
>              0   cpu_core/mem-loads/                                    (77.31%)
>     12,208,782   cpu_core/mem-stores/                                   (77.31%)
>    566,409,738   cpu_core/ref-cycles/                                   (84.87%)
>         23,118   cpu_core/cache-misses/                                 (76.71%)
>      7,212,602   cpu_core/cache-references/                             (76.29%)
>
>      1.003228667 seconds time elapsed
>
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>

Acked-by: Ian Rogers <irogers@google.com>

> ---
>  tools/perf/arch/x86/util/evsel.c | 23 +++++++++++++++++++++--
>  1 file changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
> index 00cb4466b4ca..88306183d629 100644
> --- a/tools/perf/arch/x86/util/evsel.c
> +++ b/tools/perf/arch/x86/util/evsel.c
> @@ -31,10 +31,29 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
>         free(env.cpuid);
>  }
>
> +/* Check whether the evsel's PMU supports the perf metrics */
> +static bool evsel__sys_has_perf_metrics(const struct evsel *evsel)

nit: perhaps the function name could be closer to the comment, like
evsel__pmu_has_topdown_metrics. The use of metrics is somewhat
overloaded here with the regular metrics code, but this code lives
under arch/x86 so I guess that's ok.

Thanks,
Ian

> +{
> +       const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
> +
> +       /*
> +        * The PERF_TYPE_RAW type is the core PMU type, e.g., "cpu" PMU
> +        * on a non-hybrid machine, "cpu_core" PMU on a hybrid machine.
> +        * The slots event is only available for the core PMU, which
> +        * supports the perf metrics feature.
> +        * Checking both the PERF_TYPE_RAW type and the slots event
> +        * should be good enough to detect the perf metrics feature.
> +        */
> +       if ((evsel->core.attr.type == PERF_TYPE_RAW) &&
> +           pmu_have_event(pmu_name, "slots"))
> +               return true;
> +
> +       return false;
> +}
> +
>  bool arch_evsel__must_be_in_group(const struct evsel *evsel)
>  {
> -       if ((evsel->pmu_name && strcmp(evsel->pmu_name, "cpu")) ||
> -           !pmu_have_event("cpu", "slots"))
> +       if (!evsel__sys_has_perf_metrics(evsel))
>                 return false;
>
>         return evsel->name &&
> --
> 2.35.1
>


* Re: [PATCH V3 0/4] Several perf metrics topdown related fixes
  2022-05-18 14:38 [PATCH V3 0/4] Several perf metrics topdown related fixes kan.liang
                   ` (3 preceding siblings ...)
  2022-05-18 14:39 ` [PATCH V3 4/4] perf parse-events: Move slots event for the hybrid platform too kan.liang
@ 2022-05-20 14:15 ` Arnaldo Carvalho de Melo
  4 siblings, 0 replies; 8+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-05-20 14:15 UTC (permalink / raw)
  To: kan.liang
  Cc: mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users,
	peterz, zhengjun.xing, adrian.hunter, ak, eranian

On Wed, May 18, 2022 at 07:38:56AM -0700, kan.liang@linux.intel.com wrote:
> From: Kan Liang <kan.liang@linux.intel.com>
> 
> Patch 1 is a follow-up to Ian's ("Fix topdown event weak
> grouping") [1].
> 
> Patch 2 fixes the perf metrics topdown events in a mixed group.
> It reuses the function introduced in [1].
> Patches 1 & 2 should be applied on top of [1].
> 
> Patches 3 & 4 fix other perf metrics topdown related issues.
> They can be merged separately.
> 
> [1]: https://lore.kernel.org/all/20220517052724.283874-2-irogers@google.com/

Thanks, applied.

- Arnaldo

 
> Changes since V2:
> - Add more comments for the evsel__sys_has_perf_metrics() and
>   topdown_sys_has_perf_metrics()
> - Remove the unnecessary evsel->core.leader->nr_members = 0; in patch 2.
>   The value has been updated in the new evsel__remove_from_group().
> - Add Reviewed-by from Ian for patch 4
> 
> Changes since V1:
> - Add comments for the evsel__sys_has_perf_metrics() and
>   topdown_sys_has_perf_metrics()
> - Factor out evsel__remove_from_group()
> - Add Reviewed-by from Ian for patch 3
> 
> Kan Liang (4):
>   perf evsel: Fixes topdown events in a weak group for the hybrid
>     platform
>   perf stat: Always keep perf metrics topdown events in a group
>   perf parse-events: Support different format of the topdown event name
>   perf parse-events: Move slots event for the hybrid platform too
> 
>  tools/perf/arch/x86/util/evlist.c  |  7 ++++---
>  tools/perf/arch/x86/util/evsel.c   | 23 +++++++++++++++++++++--
>  tools/perf/arch/x86/util/topdown.c | 25 +++++++++++++++++++++++++
>  tools/perf/arch/x86/util/topdown.h |  7 +++++++
>  tools/perf/builtin-stat.c          |  7 ++-----
>  tools/perf/util/evlist.c           |  6 +-----
>  tools/perf/util/evsel.c            | 13 +++++++++++--
>  tools/perf/util/evsel.h            |  2 +-
>  8 files changed, 72 insertions(+), 18 deletions(-)
>  create mode 100644 tools/perf/arch/x86/util/topdown.h
> 
> -- 
> 2.35.1

-- 

- Arnaldo
