* [PATCH V2 0/4] Several perf metrics topdown related fixes
@ 2022-05-16 15:24 kan.liang
2022-05-16 15:24 ` [PATCH V2 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform kan.liang
` (3 more replies)
0 siblings, 4 replies; 10+ messages in thread
From: kan.liang @ 2022-05-16 15:24 UTC (permalink / raw)
To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang
From: Kan Liang <kan.liang@linux.intel.com>
Patch 1 is a follow-up to Ian's ("Fix topdown event weak
grouping")[1].
Patch 2 fixes the perf metrics topdown events in a mixed group.
It reuses the function introduced in [1].
Patches 1 & 2 should be applied on top of [1].
Patches 3 & 4 fix other perf metrics topdown related issues.
They can be merged separately.
[1]: https://lore.kernel.org/all/20220512061308.1152233-1-irogers@google.com/
Changes since V1:
- Add comments for the evsel__sys_has_perf_metrics() and
topdown_sys_has_perf_metrics()
- Factor out evsel__remove_from_group()
Kan Liang (4):
perf evsel: Fixes topdown events in a weak group for the hybrid
platform
perf stat: Always keep perf metrics topdown events in a group
perf parse-events: Support different format of the topdown event name
perf parse-events: Move slots event for the hybrid platform too
tools/perf/arch/x86/util/evlist.c | 7 ++++---
tools/perf/arch/x86/util/evsel.c | 21 +++++++++++++++++++--
tools/perf/arch/x86/util/topdown.c | 24 ++++++++++++++++++++++++
tools/perf/arch/x86/util/topdown.h | 7 +++++++
tools/perf/builtin-stat.c | 7 +++----
tools/perf/util/evlist.c | 6 +-----
tools/perf/util/evsel.c | 13 +++++++++++--
tools/perf/util/evsel.h | 2 +-
8 files changed, 70 insertions(+), 17 deletions(-)
create mode 100644 tools/perf/arch/x86/util/topdown.h
--
2.35.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH V2 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform
2022-05-16 15:24 [PATCH V2 0/4] Several perf metrics topdown related fixes kan.liang
@ 2022-05-16 15:24 ` kan.liang
2022-05-17 2:52 ` Ian Rogers
2022-05-16 15:24 ` [PATCH V2 2/4] perf stat: Always keep perf metrics topdown events in a group kan.liang
` (2 subsequent siblings)
3 siblings, 1 reply; 10+ messages in thread
From: kan.liang @ 2022-05-16 15:24 UTC (permalink / raw)
To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang
From: Kan Liang <kan.liang@linux.intel.com>
The patch ("perf evlist: Keep topdown counters in weak group") fixes the
perf metrics topdown event issue when the topdown events are in a weak
group on a non-hybrid platform. However, it doesn't work on a hybrid
platform.
$./perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
Performance counter stats for 'system wide':
751,765,068 cpu_core/slots/ (84.07%)
<not supported> cpu_core/topdown-bad-spec/
<not supported> cpu_core/topdown-be-bound/
<not supported> cpu_core/topdown-fe-bound/
<not supported> cpu_core/topdown-retiring/
12,398,197 cpu_core/branch-instructions/ (84.07%)
1,054,218 cpu_core/branch-misses/ (84.24%)
539,764,637 cpu_core/bus-cycles/ (84.64%)
14,683 cpu_core/cache-misses/ (84.87%)
7,277,809 cpu_core/cache-references/ (77.30%)
222,299,439 cpu_core/cpu-cycles/ (77.28%)
63,661,714 cpu_core/instructions/ (84.85%)
0 cpu_core/mem-loads/ (77.29%)
12,271,725 cpu_core/mem-stores/ (77.30%)
542,241,102 cpu_core/ref-cycles/ (84.85%)
8,854 cpu_core/cache-misses/ (76.71%)
7,179,013 cpu_core/cache-references/ (76.31%)
1.003245250 seconds time elapsed
A hybrid platform uses different PMU names for its core PMUs, while
the current perf code hard-codes the PMU name "cpu".
Use evsel->pmu_name instead of the hard-coded "cpu" to fix the issue.
For a hybrid platform, the pmu_name must be non-NULL, because there are
at least two core PMUs and the PMU has to be specified explicitly.
For a non-hybrid platform, the pmu_name may be NULL, because there is
only one core PMU, "cpu". For a NULL pmu_name, it is safe to assume
the "cpu" PMU.
In case other PMUs also define a "slots" event, check the PMU type
as well.
With the patch,
$perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
Performance counter stats for 'system wide':
766,620,266 cpu_core/slots/ (84.06%)
73,172,129 cpu_core/topdown-bad-spec/ # 9.5% bad speculation (84.06%)
193,443,341 cpu_core/topdown-be-bound/ # 25.0% backend bound (84.06%)
403,940,929 cpu_core/topdown-fe-bound/ # 52.3% frontend bound (84.06%)
102,070,237 cpu_core/topdown-retiring/ # 13.2% retiring (84.06%)
12,364,429 cpu_core/branch-instructions/ (84.03%)
1,080,124 cpu_core/branch-misses/ (84.24%)
564,120,383 cpu_core/bus-cycles/ (84.65%)
36,979 cpu_core/cache-misses/ (84.86%)
7,298,094 cpu_core/cache-references/ (77.30%)
227,174,372 cpu_core/cpu-cycles/ (77.31%)
63,886,523 cpu_core/instructions/ (84.87%)
0 cpu_core/mem-loads/ (77.31%)
12,208,782 cpu_core/mem-stores/ (77.31%)
566,409,738 cpu_core/ref-cycles/ (84.87%)
23,118 cpu_core/cache-misses/ (76.71%)
7,212,602 cpu_core/cache-references/ (76.29%)
1.003228667 seconds time elapsed
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
tools/perf/arch/x86/util/evsel.c | 21 +++++++++++++++++++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
index 00cb4466b4ca..6eda5a491eda 100644
--- a/tools/perf/arch/x86/util/evsel.c
+++ b/tools/perf/arch/x86/util/evsel.c
@@ -31,10 +31,27 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
free(env.cpuid);
}
+static bool evsel__sys_has_perf_metrics(const struct evsel *evsel)
+{
+ const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
+
+ /*
+ * The PERF_TYPE_RAW type is the core PMU type.
+ * The slots event is only available for the core PMU, which
+ * supports the perf metrics feature.
+ * Checking both the PERF_TYPE_RAW type and the slots event
+ * should be good enough to detect the perf metrics feature.
+ */
+ if ((evsel->core.attr.type == PERF_TYPE_RAW) &&
+ pmu_have_event(pmu_name, "slots"))
+ return true;
+
+ return false;
+}
+
bool arch_evsel__must_be_in_group(const struct evsel *evsel)
{
- if ((evsel->pmu_name && strcmp(evsel->pmu_name, "cpu")) ||
- !pmu_have_event("cpu", "slots"))
+ if (!evsel__sys_has_perf_metrics(evsel))
return false;
return evsel->name &&
--
2.35.1
* [PATCH V2 2/4] perf stat: Always keep perf metrics topdown events in a group
2022-05-16 15:24 [PATCH V2 0/4] Several perf metrics topdown related fixes kan.liang
2022-05-16 15:24 ` [PATCH V2 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform kan.liang
@ 2022-05-16 15:24 ` kan.liang
2022-05-17 3:11 ` Ian Rogers
2022-05-16 15:24 ` [PATCH V2 3/4] perf parse-events: Support different format of the topdown event name kan.liang
2022-05-16 15:24 ` [PATCH V2 4/4] perf parse-events: Move slots event for the hybrid platform too kan.liang
3 siblings, 1 reply; 10+ messages in thread
From: kan.liang @ 2022-05-16 15:24 UTC (permalink / raw)
To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang
From: Kan Liang <kan.liang@linux.intel.com>
If any member of a group has a different cpu mask than the other
members, the current perf stat disables the group. When the perf metrics
topdown events are part of such a group, the <not supported> error below
is triggered.
$ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
WARNING: grouped events cpus do not match, disabling group:
anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
Performance counter stats for 'system wide':
141,465,174 slots
<not supported> topdown-retiring
1,605,330,334 uncore_imc_free_running_0/dclk/
The perf metrics topdown events must always be grouped with a slots
event as the leader.
Factor out evsel__remove_from_group() to remove only the regular events
from the group.
Remove evsel__must_be_in_group(), since no one uses it anymore.
With the patch, the topdown events are no longer removed from the
group when the group is split.
$ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
WARNING: grouped events cpus do not match, disabling group:
anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
Performance counter stats for 'system wide':
346,110,588 slots
124,608,256 topdown-retiring
1,606,869,976 uncore_imc_free_running_0/dclk/
1.003877592 seconds time elapsed
Fixes: a9a1790247bd ("perf stat: Ensure group is defined on top of the same cpu mask")
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
tools/perf/builtin-stat.c | 7 +++----
tools/perf/util/evlist.c | 6 +-----
tools/perf/util/evsel.c | 13 +++++++++++--
tools/perf/util/evsel.h | 2 +-
4 files changed, 16 insertions(+), 12 deletions(-)
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index a96f106dc93a..75c88c7939b1 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -271,10 +271,9 @@ static void evlist__check_cpu_maps(struct evlist *evlist)
pr_warning(" %s: %s\n", evsel->name, buf);
}
- for_each_group_evsel(pos, leader) {
- evsel__set_leader(pos, pos);
- pos->core.nr_members = 0;
- }
+ for_each_group_evsel(pos, leader)
+ evsel__remove_from_group(pos, leader);
+
evsel->core.leader->nr_members = 0;
}
}
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index dfa65a383502..7fc544330fea 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -1795,11 +1795,7 @@ struct evsel *evlist__reset_weak_group(struct evlist *evsel_list, struct evsel *
* them. Some events, like Intel topdown, require being
* in a group and so keep these in the group.
*/
- if (!evsel__must_be_in_group(c2) && c2 != leader) {
- evsel__set_leader(c2, c2);
- c2->core.nr_members = 0;
- leader->core.nr_members--;
- }
+ evsel__remove_from_group(c2, leader);
/*
* Set this for all former members of the group
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index b98882cbb286..deb428ee5e50 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -3083,7 +3083,16 @@ bool __weak arch_evsel__must_be_in_group(const struct evsel *evsel __maybe_unuse
return false;
}
-bool evsel__must_be_in_group(const struct evsel *evsel)
+/*
+ * Remove an event from a given group (leader).
+ * Some events, e.g., perf metrics Topdown events,
+ * must always be grouped. Ignore the events.
+ */
+void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader)
{
- return arch_evsel__must_be_in_group(evsel);
+ if (!arch_evsel__must_be_in_group(evsel) && evsel != leader) {
+ evsel__set_leader(evsel, evsel);
+ evsel->core.nr_members = 0;
+ leader->core.nr_members--;
+ }
}
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index a36172ed4cf6..47f65f8e7c74 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -483,7 +483,7 @@ bool evsel__has_leader(struct evsel *evsel, struct evsel *leader);
bool evsel__is_leader(struct evsel *evsel);
void evsel__set_leader(struct evsel *evsel, struct evsel *leader);
int evsel__source_count(const struct evsel *evsel);
-bool evsel__must_be_in_group(const struct evsel *evsel);
+void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader);
bool arch_evsel__must_be_in_group(const struct evsel *evsel);
--
2.35.1
* [PATCH V2 3/4] perf parse-events: Support different format of the topdown event name
2022-05-16 15:24 [PATCH V2 0/4] Several perf metrics topdown related fixes kan.liang
2022-05-16 15:24 ` [PATCH V2 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform kan.liang
2022-05-16 15:24 ` [PATCH V2 2/4] perf stat: Always keep perf metrics topdown events in a group kan.liang
@ 2022-05-16 15:24 ` kan.liang
2022-05-16 15:24 ` [PATCH V2 4/4] perf parse-events: Move slots event for the hybrid platform too kan.liang
3 siblings, 0 replies; 10+ messages in thread
From: kan.liang @ 2022-05-16 15:24 UTC (permalink / raw)
To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang
From: Kan Liang <kan.liang@linux.intel.com>
The evsel->name of a topdown event may take one of two formats: a pure
topdown name (e.g., topdown-fe-bound), or a PMU name plus a topdown name
(e.g., cpu/topdown-fe-bound/). The cpu/topdown-fe-bound/ format isn't
handled by arch_evlist__leader(). This format is very common on a hybrid
platform, which requires specifying the PMU name for each event.
Without the patch,
$perf stat -e '{instructions,slots,cpu/topdown-fe-bound/}' -a sleep 1
Performance counter stats for 'system wide':
<not counted> instructions
<not counted> slots
<not supported> cpu/topdown-fe-bound/
1.003482041 seconds time elapsed
Some events weren't counted. Try disabling the NMI watchdog:
echo 0 > /proc/sys/kernel/nmi_watchdog
perf stat ...
echo 1 > /proc/sys/kernel/nmi_watchdog
The events in group usually have to be from the same PMU. Try reorganizing the group.
With the patch,
perf stat -e '{instructions,slots,cpu/topdown-fe-bound/}' -a sleep 1
Performance counter stats for 'system wide':
157,383,996 slots
25,011,711 instructions
27,441,686 cpu/topdown-fe-bound/
1.003530890 seconds time elapsed
Fixes: bc355822f0d9 ("perf parse-events: Move slots only with topdown")
Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
tools/perf/arch/x86/util/evlist.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
index cfc208d71f00..75564a7df15b 100644
--- a/tools/perf/arch/x86/util/evlist.c
+++ b/tools/perf/arch/x86/util/evlist.c
@@ -36,7 +36,7 @@ struct evsel *arch_evlist__leader(struct list_head *list)
if (slots == first)
return first;
}
- if (!strncasecmp(evsel->name, "topdown", 7))
+ if (strcasestr(evsel->name, "topdown"))
has_topdown = true;
if (slots && has_topdown)
return slots;
--
2.35.1
* [PATCH V2 4/4] perf parse-events: Move slots event for the hybrid platform too
2022-05-16 15:24 [PATCH V2 0/4] Several perf metrics topdown related fixes kan.liang
` (2 preceding siblings ...)
2022-05-16 15:24 ` [PATCH V2 3/4] perf parse-events: Support different format of the topdown event name kan.liang
@ 2022-05-16 15:24 ` kan.liang
2022-05-17 2:54 ` Ian Rogers
3 siblings, 1 reply; 10+ messages in thread
From: kan.liang @ 2022-05-16 15:24 UTC (permalink / raw)
To: acme, mingo, irogers, jolsa, namhyung, linux-kernel, linux-perf-users
Cc: peterz, zhengjun.xing, adrian.hunter, ak, eranian, Kan Liang
From: Kan Liang <kan.liang@linux.intel.com>
Commit 94dbfd6781a0 ("perf parse-events: Architecture specific
leader override") introduced a feature that reorders the slots event to
satisfy the grouping restriction of the perf metrics topdown events. But
the feature doesn't work on a hybrid machine.
$perf stat -e "{cpu_core/instructions/,cpu_core/slots/,cpu_core/topdown-retiring/}" -a sleep 1
Performance counter stats for 'system wide':
<not counted> cpu_core/instructions/
<not counted> cpu_core/slots/
<not supported> cpu_core/topdown-retiring/
1.002871801 seconds time elapsed
A hybrid platform uses different PMU names for its core PMUs, while
the current perf code hard-codes the PMU name "cpu".
Introduce a new function to check whether the system supports the perf
metrics feature. The result is cached for future use.
For x86, the core PMU name always has a "cpu" prefix.
With the patch,
$perf stat -e "{cpu_core/instructions/,cpu_core/slots/,cpu_core/topdown-retiring/}" -a sleep 1
Performance counter stats for 'system wide':
76,337,010 cpu_core/slots/
10,416,809 cpu_core/instructions/
11,692,372 cpu_core/topdown-retiring/
1.002805453 seconds time elapsed
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
tools/perf/arch/x86/util/evlist.c | 5 +++--
tools/perf/arch/x86/util/topdown.c | 24 ++++++++++++++++++++++++
tools/perf/arch/x86/util/topdown.h | 7 +++++++
3 files changed, 34 insertions(+), 2 deletions(-)
create mode 100644 tools/perf/arch/x86/util/topdown.h
diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
index 75564a7df15b..68f681ad54c1 100644
--- a/tools/perf/arch/x86/util/evlist.c
+++ b/tools/perf/arch/x86/util/evlist.c
@@ -3,6 +3,7 @@
#include "util/pmu.h"
#include "util/evlist.h"
#include "util/parse-events.h"
+#include "topdown.h"
#define TOPDOWN_L1_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound}"
#define TOPDOWN_L2_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}"
@@ -25,12 +26,12 @@ struct evsel *arch_evlist__leader(struct list_head *list)
first = list_first_entry(list, struct evsel, core.node);
- if (!pmu_have_event("cpu", "slots"))
+ if (!topdown_sys_has_perf_metrics())
return first;
/* If there is a slots event and a topdown event then the slots event comes first. */
__evlist__for_each_entry(list, evsel) {
- if (evsel->pmu_name && !strcmp(evsel->pmu_name, "cpu") && evsel->name) {
+ if (evsel->pmu_name && !strncmp(evsel->pmu_name, "cpu", 3) && evsel->name) {
if (strcasestr(evsel->name, "slots")) {
slots = evsel;
if (slots == first)
diff --git a/tools/perf/arch/x86/util/topdown.c b/tools/perf/arch/x86/util/topdown.c
index 2f3d96aa92a5..5e86859279e3 100644
--- a/tools/perf/arch/x86/util/topdown.c
+++ b/tools/perf/arch/x86/util/topdown.c
@@ -3,6 +3,30 @@
#include "api/fs/fs.h"
#include "util/pmu.h"
#include "util/topdown.h"
+#include "topdown.h"
+
+bool topdown_sys_has_perf_metrics(void)
+{
+ static bool has_perf_metrics;
+ static bool cached;
+ struct perf_pmu *pmu;
+
+ if (cached)
+ return has_perf_metrics;
+
+ /*
+ * The perf metrics feature is a core PMU feature.
+ * The PERF_TYPE_RAW type is the type of a core PMU.
+ * The slots event is only available when the core PMU
+ * supports the perf metrics feature.
+ */
+ pmu = perf_pmu__find_by_type(PERF_TYPE_RAW);
+ if (pmu && pmu_have_event(pmu->name, "slots"))
+ has_perf_metrics = true;
+
+ cached = true;
+ return has_perf_metrics;
+}
/*
* Check whether we can use a group for top down.
diff --git a/tools/perf/arch/x86/util/topdown.h b/tools/perf/arch/x86/util/topdown.h
new file mode 100644
index 000000000000..46bf9273e572
--- /dev/null
+++ b/tools/perf/arch/x86/util/topdown.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _TOPDOWN_H
+#define _TOPDOWN_H 1
+
+bool topdown_sys_has_perf_metrics(void);
+
+#endif
--
2.35.1
* Re: [PATCH V2 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform
2022-05-16 15:24 ` [PATCH V2 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform kan.liang
@ 2022-05-17 2:52 ` Ian Rogers
2022-05-17 13:43 ` Liang, Kan
0 siblings, 1 reply; 10+ messages in thread
From: Ian Rogers @ 2022-05-17 2:52 UTC (permalink / raw)
To: kan.liang
Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
peterz, zhengjun.xing, adrian.hunter, ak, eranian
On Mon, May 16, 2022 at 8:25 AM <kan.liang@linux.intel.com> wrote:
>
> From: Kan Liang <kan.liang@linux.intel.com>
>
> The patch ("perf evlist: Keep topdown counters in weak group") fixes the
> perf metrics topdown event issue when the topdown events are in a weak
> group on a non-hybrid platform. However, it doesn't work for the hybrid
> platform.
>
> $./perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
>
> Performance counter stats for 'system wide':
>
> 751,765,068 cpu_core/slots/ (84.07%)
> <not supported> cpu_core/topdown-bad-spec/
> <not supported> cpu_core/topdown-be-bound/
> <not supported> cpu_core/topdown-fe-bound/
> <not supported> cpu_core/topdown-retiring/
> 12,398,197 cpu_core/branch-instructions/ (84.07%)
> 1,054,218 cpu_core/branch-misses/ (84.24%)
> 539,764,637 cpu_core/bus-cycles/ (84.64%)
> 14,683 cpu_core/cache-misses/ (84.87%)
> 7,277,809 cpu_core/cache-references/ (77.30%)
> 222,299,439 cpu_core/cpu-cycles/ (77.28%)
> 63,661,714 cpu_core/instructions/ (84.85%)
> 0 cpu_core/mem-loads/ (77.29%)
> 12,271,725 cpu_core/mem-stores/ (77.30%)
> 542,241,102 cpu_core/ref-cycles/ (84.85%)
> 8,854 cpu_core/cache-misses/ (76.71%)
> 7,179,013 cpu_core/cache-references/ (76.31%)
>
> 1.003245250 seconds time elapsed
>
> A hybrid platform has a different PMU name for the core PMUs, while
> the current perf hard code the PMU name "cpu".
>
> The evsel->pmu_name can be used to replace the "cpu" to fix the issue.
> For a hybrid platform, the pmu_name must be non-NULL. Because there are
> at least two core PMUs. The PMU has to be specified.
> For a non-hybrid platform, the pmu_name may be NULL. Because there is
> only one core PMU, "cpu". For a NULL pmu_name, we can safely assume that
> it is a "cpu" PMU.
>
> In case other PMUs also define the "slots" event, checking the PMU type
> as well.
>
> With the patch,
>
> $perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
>
> Performance counter stats for 'system wide':
>
> 766,620,266 cpu_core/slots/ (84.06%)
> 73,172,129 cpu_core/topdown-bad-spec/ # 9.5% bad speculation (84.06%)
> 193,443,341 cpu_core/topdown-be-bound/ # 25.0% backend bound (84.06%)
> 403,940,929 cpu_core/topdown-fe-bound/ # 52.3% frontend bound (84.06%)
> 102,070,237 cpu_core/topdown-retiring/ # 13.2% retiring (84.06%)
> 12,364,429 cpu_core/branch-instructions/ (84.03%)
> 1,080,124 cpu_core/branch-misses/ (84.24%)
> 564,120,383 cpu_core/bus-cycles/ (84.65%)
> 36,979 cpu_core/cache-misses/ (84.86%)
> 7,298,094 cpu_core/cache-references/ (77.30%)
> 227,174,372 cpu_core/cpu-cycles/ (77.31%)
> 63,886,523 cpu_core/instructions/ (84.87%)
> 0 cpu_core/mem-loads/ (77.31%)
> 12,208,782 cpu_core/mem-stores/ (77.31%)
> 566,409,738 cpu_core/ref-cycles/ (84.87%)
> 23,118 cpu_core/cache-misses/ (76.71%)
> 7,212,602 cpu_core/cache-references/ (76.29%)
>
> 1.003228667 seconds time elapsed
>
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> ---
> tools/perf/arch/x86/util/evsel.c | 21 +++++++++++++++++++--
> 1 file changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
> index 00cb4466b4ca..6eda5a491eda 100644
> --- a/tools/perf/arch/x86/util/evsel.c
> +++ b/tools/perf/arch/x86/util/evsel.c
> @@ -31,10 +31,27 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
> free(env.cpuid);
> }
>
> +static bool evsel__sys_has_perf_metrics(const struct evsel *evsel)
> +{
Why have this and not use topdown_sys_has_perf_metrics? It seems
strange to have the mix of evsel and PMU that this function is
testing.
> + const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
> +
> + /*
> + * The PERF_TYPE_RAW type is the core PMU type.
> + * The slots event is only available for the core PMU, which
> + * supports the perf metrics feature.
nit: Does "core PMU" mean /sys/devices/cpu_core ? It would be good to
disambiguate possibly by just using "cpu_core PMU".
Thanks,
Ian
> + * Checking both the PERF_TYPE_RAW type and the slots event
> + * should be good enough to detect the perf metrics feature.
> + */
> + if ((evsel->core.attr.type == PERF_TYPE_RAW) &&
> + pmu_have_event(pmu_name, "slots"))
> + return true;
> +
> + return false;
> +}
> +
> bool arch_evsel__must_be_in_group(const struct evsel *evsel)
> {
> - if ((evsel->pmu_name && strcmp(evsel->pmu_name, "cpu")) ||
> - !pmu_have_event("cpu", "slots"))
> + if (!evsel__sys_has_perf_metrics(evsel))
> return false;
>
> return evsel->name &&
> --
> 2.35.1
>
* Re: [PATCH V2 4/4] perf parse-events: Move slots event for the hybrid platform too
2022-05-16 15:24 ` [PATCH V2 4/4] perf parse-events: Move slots event for the hybrid platform too kan.liang
@ 2022-05-17 2:54 ` Ian Rogers
0 siblings, 0 replies; 10+ messages in thread
From: Ian Rogers @ 2022-05-17 2:54 UTC (permalink / raw)
To: kan.liang
Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
peterz, zhengjun.xing, adrian.hunter, ak, eranian
On Mon, May 16, 2022 at 8:25 AM <kan.liang@linux.intel.com> wrote:
>
> From: Kan Liang <kan.liang@linux.intel.com>
>
> The commit 94dbfd6781a0 ("perf parse-events: Architecture specific
> leader override") introduced a feature to reorder the slots event to
> fulfill the restriction of the perf metrics topdown group. But the
> feature doesn't work on the hybrid machine.
>
> $perf stat -e "{cpu_core/instructions/,cpu_core/slots/,cpu_core/topdown-retiring/}" -a sleep 1
>
> Performance counter stats for 'system wide':
>
> <not counted> cpu_core/instructions/
> <not counted> cpu_core/slots/
> <not supported> cpu_core/topdown-retiring/
>
> 1.002871801 seconds time elapsed
>
> A hybrid platform has a different PMU name for the core PMUs, while
> current perf hard code the PMU name "cpu".
>
> Introduce a new function to check whether the system supports the perf
> metrics feature. The result is cached for the future usage.
>
> For X86, the core PMU name always has "cpu" prefix.
>
> With the patch,
>
> $perf stat -e "{cpu_core/instructions/,cpu_core/slots/,cpu_core/topdown-retiring/}" -a sleep 1
>
> Performance counter stats for 'system wide':
>
> 76,337,010 cpu_core/slots/
> 10,416,809 cpu_core/instructions/
> 11,692,372 cpu_core/topdown-retiring/
>
> 1.002805453 seconds time elapsed
>
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Thanks!
Ian
> ---
> tools/perf/arch/x86/util/evlist.c | 5 +++--
> tools/perf/arch/x86/util/topdown.c | 24 ++++++++++++++++++++++++
> tools/perf/arch/x86/util/topdown.h | 7 +++++++
> 3 files changed, 34 insertions(+), 2 deletions(-)
> create mode 100644 tools/perf/arch/x86/util/topdown.h
>
> diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
> index 75564a7df15b..68f681ad54c1 100644
> --- a/tools/perf/arch/x86/util/evlist.c
> +++ b/tools/perf/arch/x86/util/evlist.c
> @@ -3,6 +3,7 @@
> #include "util/pmu.h"
> #include "util/evlist.h"
> #include "util/parse-events.h"
> +#include "topdown.h"
>
> #define TOPDOWN_L1_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound}"
> #define TOPDOWN_L2_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}"
> @@ -25,12 +26,12 @@ struct evsel *arch_evlist__leader(struct list_head *list)
>
> first = list_first_entry(list, struct evsel, core.node);
>
> - if (!pmu_have_event("cpu", "slots"))
> + if (!topdown_sys_has_perf_metrics())
> return first;
>
> /* If there is a slots event and a topdown event then the slots event comes first. */
> __evlist__for_each_entry(list, evsel) {
> - if (evsel->pmu_name && !strcmp(evsel->pmu_name, "cpu") && evsel->name) {
> + if (evsel->pmu_name && !strncmp(evsel->pmu_name, "cpu", 3) && evsel->name) {
> if (strcasestr(evsel->name, "slots")) {
> slots = evsel;
> if (slots == first)
> diff --git a/tools/perf/arch/x86/util/topdown.c b/tools/perf/arch/x86/util/topdown.c
> index 2f3d96aa92a5..5e86859279e3 100644
> --- a/tools/perf/arch/x86/util/topdown.c
> +++ b/tools/perf/arch/x86/util/topdown.c
> @@ -3,6 +3,30 @@
> #include "api/fs/fs.h"
> #include "util/pmu.h"
> #include "util/topdown.h"
> +#include "topdown.h"
> +
> +bool topdown_sys_has_perf_metrics(void)
> +{
> + static bool has_perf_metrics;
> + static bool cached;
> + struct perf_pmu *pmu;
> +
> + if (cached)
> + return has_perf_metrics;
> +
> + /*
> + * The perf metrics feature is a core PMU feature.
> + * The PERF_TYPE_RAW type is the type of a core PMU.
> + * The slots event is only available when the core PMU
> + * supports the perf metrics feature.
> + */
> + pmu = perf_pmu__find_by_type(PERF_TYPE_RAW);
> + if (pmu && pmu_have_event(pmu->name, "slots"))
> + has_perf_metrics = true;
> +
> + cached = true;
> + return has_perf_metrics;
> +}
>
> /*
> * Check whether we can use a group for top down.
> diff --git a/tools/perf/arch/x86/util/topdown.h b/tools/perf/arch/x86/util/topdown.h
> new file mode 100644
> index 000000000000..46bf9273e572
> --- /dev/null
> +++ b/tools/perf/arch/x86/util/topdown.h
> @@ -0,0 +1,7 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _TOPDOWN_H
> +#define _TOPDOWN_H 1
> +
> +bool topdown_sys_has_perf_metrics(void);
> +
> +#endif
> --
> 2.35.1
>
* Re: [PATCH V2 2/4] perf stat: Always keep perf metrics topdown events in a group
2022-05-16 15:24 ` [PATCH V2 2/4] perf stat: Always keep perf metrics topdown events in a group kan.liang
@ 2022-05-17 3:11 ` Ian Rogers
2022-05-17 13:43 ` Liang, Kan
0 siblings, 1 reply; 10+ messages in thread
From: Ian Rogers @ 2022-05-17 3:11 UTC (permalink / raw)
To: kan.liang
Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
peterz, zhengjun.xing, adrian.hunter, ak, eranian
On Mon, May 16, 2022 at 8:25 AM <kan.liang@linux.intel.com> wrote:
>
> From: Kan Liang <kan.liang@linux.intel.com>
>
> If any member in a group has a different cpu mask than the other
> members, the current perf stat disables group. when the perf metrics
> topdown events are part of the group, the below <not supported> error
> will be triggered.
>
> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
> WARNING: grouped events cpus do not match, disabling group:
> anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>
> Performance counter stats for 'system wide':
>
> 141,465,174 slots
> <not supported> topdown-retiring
> 1,605,330,334 uncore_imc_free_running_0/dclk/
>
> The perf metrics topdown events must always be grouped with a slots
> event as leader.
>
> Factor out evsel__remove_from_group() to only remove the regular events
> from the group.
>
> Remove evsel__must_be_in_group(), since no one use it anymore.
>
> With the patch, the topdown events aren't broken from the group for the
> splitting.
>
> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
> WARNING: grouped events cpus do not match, disabling group:
> anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>
> Performance counter stats for 'system wide':
>
> 346,110,588 slots
> 124,608,256 topdown-retiring
> 1,606,869,976 uncore_imc_free_running_0/dclk/
>
> 1.003877592 seconds time elapsed
>
> Fixes: a9a1790247bd ("perf stat: Ensure group is defined on top of the same cpu mask")
> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
> ---
> tools/perf/builtin-stat.c | 7 +++----
> tools/perf/util/evlist.c | 6 +-----
> tools/perf/util/evsel.c | 13 +++++++++++--
> tools/perf/util/evsel.h | 2 +-
> 4 files changed, 16 insertions(+), 12 deletions(-)
>
> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> index a96f106dc93a..75c88c7939b1 100644
> --- a/tools/perf/builtin-stat.c
> +++ b/tools/perf/builtin-stat.c
> @@ -271,10 +271,9 @@ static void evlist__check_cpu_maps(struct evlist *evlist)
> pr_warning(" %s: %s\n", evsel->name, buf);
> }
>
> - for_each_group_evsel(pos, leader) {
> - evsel__set_leader(pos, pos);
> - pos->core.nr_members = 0;
> - }
> + for_each_group_evsel(pos, leader)
> + evsel__remove_from_group(pos, leader);
> +
> evsel->core.leader->nr_members = 0;
This shouldn't be necessary now.
> }
> }
> diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
> index dfa65a383502..7fc544330fea 100644
> --- a/tools/perf/util/evlist.c
> +++ b/tools/perf/util/evlist.c
> @@ -1795,11 +1795,7 @@ struct evsel *evlist__reset_weak_group(struct evlist *evsel_list, struct evsel *
> * them. Some events, like Intel topdown, require being
> * in a group and so keep these in the group.
> */
> - if (!evsel__must_be_in_group(c2) && c2 != leader) {
> - evsel__set_leader(c2, c2);
> - c2->core.nr_members = 0;
> - leader->core.nr_members--;
> - }
> + evsel__remove_from_group(c2, leader);
>
> /*
> * Set this for all former members of the group
> diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
> index b98882cbb286..deb428ee5e50 100644
> --- a/tools/perf/util/evsel.c
> +++ b/tools/perf/util/evsel.c
> @@ -3083,7 +3083,16 @@ bool __weak arch_evsel__must_be_in_group(const struct evsel *evsel __maybe_unuse
> return false;
> }
>
> -bool evsel__must_be_in_group(const struct evsel *evsel)
> +/*
> + * Remove an event from a given group (leader).
> + * Some events, e.g., perf metrics Topdown events,
> + * must always be grouped. Ignore the events.
> + */
> +void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader)
> {
> - return arch_evsel__must_be_in_group(evsel);
> + if (!arch_evsel__must_be_in_group(evsel) && evsel != leader) {
> + evsel__set_leader(evsel, evsel);
> + evsel->core.nr_members = 0;
> + leader->core.nr_members--;
> + }
Should we also have:
if (leader->core.nr_members == 1)
leader->core.nr_members = 0;
Otherwise, say:
{instructions,cycles}
with a remove of cycles becomes:
{instructions}, cycles
rather than the previous:
instructions,cycles
Actually, looking at:
https://lore.kernel.org/lkml/20220512061308.1152233-2-irogers@google.com/
+ /* Reset the leader count if all entries were removed. */
+ if (leader->core.nr_members)
+ leader->core.nr_members = 0;
is wrong and should be:
+ /* Reset the leader count if all entries were removed. */
+ if (leader->core.nr_members == 1)
+ leader->core.nr_members = 0;
I'll fix and re-send.
Thanks,
Ian
> }
> diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
> index a36172ed4cf6..47f65f8e7c74 100644
> --- a/tools/perf/util/evsel.h
> +++ b/tools/perf/util/evsel.h
> @@ -483,7 +483,7 @@ bool evsel__has_leader(struct evsel *evsel, struct evsel *leader);
> bool evsel__is_leader(struct evsel *evsel);
> void evsel__set_leader(struct evsel *evsel, struct evsel *leader);
> int evsel__source_count(const struct evsel *evsel);
> -bool evsel__must_be_in_group(const struct evsel *evsel);
> +void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader);
>
> bool arch_evsel__must_be_in_group(const struct evsel *evsel);
>
> --
> 2.35.1
>
* Re: [PATCH V2 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform
2022-05-17 2:52 ` Ian Rogers
@ 2022-05-17 13:43 ` Liang, Kan
0 siblings, 0 replies; 10+ messages in thread
From: Liang, Kan @ 2022-05-17 13:43 UTC (permalink / raw)
To: Ian Rogers
Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
peterz, zhengjun.xing, adrian.hunter, ak, eranian
On 5/16/2022 10:52 PM, Ian Rogers wrote:
> On Mon, May 16, 2022 at 8:25 AM <kan.liang@linux.intel.com> wrote:
>>
>> From: Kan Liang <kan.liang@linux.intel.com>
>>
>> The patch ("perf evlist: Keep topdown counters in weak group") fixes the
>> perf metrics topdown event issue when the topdown events are in a weak
>> group on a non-hybrid platform. However, it doesn't work for the hybrid
>> platform.
>>
>> $./perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
>> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
>> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
>> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
>> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
>> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
>> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
>>
>> Performance counter stats for 'system wide':
>>
>> 751,765,068 cpu_core/slots/ (84.07%)
>> <not supported> cpu_core/topdown-bad-spec/
>> <not supported> cpu_core/topdown-be-bound/
>> <not supported> cpu_core/topdown-fe-bound/
>> <not supported> cpu_core/topdown-retiring/
>> 12,398,197 cpu_core/branch-instructions/ (84.07%)
>> 1,054,218 cpu_core/branch-misses/ (84.24%)
>> 539,764,637 cpu_core/bus-cycles/ (84.64%)
>> 14,683 cpu_core/cache-misses/ (84.87%)
>> 7,277,809 cpu_core/cache-references/ (77.30%)
>> 222,299,439 cpu_core/cpu-cycles/ (77.28%)
>> 63,661,714 cpu_core/instructions/ (84.85%)
>> 0 cpu_core/mem-loads/ (77.29%)
>> 12,271,725 cpu_core/mem-stores/ (77.30%)
>> 542,241,102 cpu_core/ref-cycles/ (84.85%)
>> 8,854 cpu_core/cache-misses/ (76.71%)
>> 7,179,013 cpu_core/cache-references/ (76.31%)
>>
>> 1.003245250 seconds time elapsed
>>
>> A hybrid platform has a different PMU name for the core PMUs, while
>> the current perf hard codes the PMU name "cpu".
>>
>> The evsel->pmu_name can be used to replace the "cpu" to fix the issue.
>> For a hybrid platform, the pmu_name must be non-NULL, because there
>> are at least two core PMUs and the PMU has to be specified. For a
>> non-hybrid platform, the pmu_name may be NULL, because there is only
>> one core PMU, "cpu". For a NULL pmu_name, we can safely assume that
>> it is the "cpu" PMU.
>>
>> In case other PMUs also define the "slots" event, check the PMU type
>> as well.
>>
>> With the patch,
>>
>> $perf stat -e '{cpu_core/slots/,cpu_core/topdown-bad-spec/,
>> cpu_core/topdown-be-bound/,cpu_core/topdown-fe-bound/,
>> cpu_core/topdown-retiring/,cpu_core/branch-instructions/,
>> cpu_core/branch-misses/,cpu_core/bus-cycles/,cpu_core/cache-misses/,
>> cpu_core/cache-references/,cpu_core/cpu-cycles/,cpu_core/instructions/,
>> cpu_core/mem-loads/,cpu_core/mem-stores/,cpu_core/ref-cycles/,
>> cpu_core/cache-misses/,cpu_core/cache-references/}:W' -a sleep 1
>>
>> Performance counter stats for 'system wide':
>>
>> 766,620,266 cpu_core/slots/ (84.06%)
>> 73,172,129 cpu_core/topdown-bad-spec/ # 9.5% bad speculation (84.06%)
>> 193,443,341 cpu_core/topdown-be-bound/ # 25.0% backend bound (84.06%)
>> 403,940,929 cpu_core/topdown-fe-bound/ # 52.3% frontend bound (84.06%)
>> 102,070,237 cpu_core/topdown-retiring/ # 13.2% retiring (84.06%)
>> 12,364,429 cpu_core/branch-instructions/ (84.03%)
>> 1,080,124 cpu_core/branch-misses/ (84.24%)
>> 564,120,383 cpu_core/bus-cycles/ (84.65%)
>> 36,979 cpu_core/cache-misses/ (84.86%)
>> 7,298,094 cpu_core/cache-references/ (77.30%)
>> 227,174,372 cpu_core/cpu-cycles/ (77.31%)
>> 63,886,523 cpu_core/instructions/ (84.87%)
>> 0 cpu_core/mem-loads/ (77.31%)
>> 12,208,782 cpu_core/mem-stores/ (77.31%)
>> 566,409,738 cpu_core/ref-cycles/ (84.87%)
>> 23,118 cpu_core/cache-misses/ (76.71%)
>> 7,212,602 cpu_core/cache-references/ (76.29%)
>>
>> 1.003228667 seconds time elapsed
>>
>> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
>> ---
>> tools/perf/arch/x86/util/evsel.c | 21 +++++++++++++++++++--
>> 1 file changed, 19 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
>> index 00cb4466b4ca..6eda5a491eda 100644
>> --- a/tools/perf/arch/x86/util/evsel.c
>> +++ b/tools/perf/arch/x86/util/evsel.c
>> @@ -31,10 +31,27 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
>> free(env.cpuid);
>> }
>>
>> +static bool evsel__sys_has_perf_metrics(const struct evsel *evsel)
>> +{
>
> Why have this and not use topdown_sys_has_perf_metrics? It seems
> strange to have the mix of evsel and PMU that this function is
> testing.
This function tells whether the evsel's PMU supports perf metrics.
The topdown_sys_has_perf_metrics() tells whether there is a PMU on the
machine which supports perf metrics.
I don't think we want to group the topdown events on the cpu_atom.
The evsel__sys_has_perf_metrics() can filter out the cpu_atom case.
I will add comments for the two functions to clarify.
>
>> + const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
>> +
>> + /*
>> + * The PERF_TYPE_RAW type is the core PMU type.
>> + * The slots event is only available for the core PMU, which
>> + * supports the perf metrics feature.
>
> nit: Does "core PMU" mean /sys/devices/cpu_core ? It would be good to
> disambiguate possibly by just using "cpu_core PMU".
No. The core PMU is to be distinguished from the uncore PMU.
For a non-hybrid machine, the core PMU is the "cpu" PMU.
For a hybrid machine, the core PMU is the "cpu_core" PMU.
I don't think we want to specify the "cpu_core" PMU here, because the
function should work for both hybrid and non-hybrid platforms.
Thanks,
Kan
>
> Thanks,
> Ian
>
>
>> + * Checking both the PERF_TYPE_RAW type and the slots event
>> + * should be good enough to detect the perf metrics feature.
>> + */
>> + if ((evsel->core.attr.type == PERF_TYPE_RAW) &&
>> + pmu_have_event(pmu_name, "slots"))
>> + return true;
>> +
>> + return false;
>> +}
>> +
>> bool arch_evsel__must_be_in_group(const struct evsel *evsel)
>> {
>> - if ((evsel->pmu_name && strcmp(evsel->pmu_name, "cpu")) ||
>> - !pmu_have_event("cpu", "slots"))
>> + if (!evsel__sys_has_perf_metrics(evsel))
>> return false;
>>
>> return evsel->name &&
>> --
>> 2.35.1
>>
* Re: [PATCH V2 2/4] perf stat: Always keep perf metrics topdown events in a group
2022-05-17 3:11 ` Ian Rogers
@ 2022-05-17 13:43 ` Liang, Kan
0 siblings, 0 replies; 10+ messages in thread
From: Liang, Kan @ 2022-05-17 13:43 UTC (permalink / raw)
To: Ian Rogers
Cc: acme, mingo, jolsa, namhyung, linux-kernel, linux-perf-users,
peterz, zhengjun.xing, adrian.hunter, ak, eranian
On 5/16/2022 11:11 PM, Ian Rogers wrote:
> On Mon, May 16, 2022 at 8:25 AM <kan.liang@linux.intel.com> wrote:
>>
>> From: Kan Liang <kan.liang@linux.intel.com>
>>
>> If any member in a group has a different cpu mask than the other
>> members, the current perf stat disables the group. When the perf
>> metrics topdown events are part of the group, the <not supported>
>> error below is triggered.
>>
>> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
>> WARNING: grouped events cpus do not match, disabling group:
>> anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>>
>> Performance counter stats for 'system wide':
>>
>> 141,465,174 slots
>> <not supported> topdown-retiring
>> 1,605,330,334 uncore_imc_free_running_0/dclk/
>>
>> The perf metrics topdown events must always be grouped with a slots
>> event as leader.
>>
>> Factor out evsel__remove_from_group() to only remove the regular events
>> from the group.
>>
>> Remove evsel__must_be_in_group(), since no one uses it anymore.
>>
>> With the patch, the topdown events are no longer removed from the
>> group when it is split.
>>
>> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
>> WARNING: grouped events cpus do not match, disabling group:
>> anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>>
>> Performance counter stats for 'system wide':
>>
>> 346,110,588 slots
>> 124,608,256 topdown-retiring
>> 1,606,869,976 uncore_imc_free_running_0/dclk/
>>
>> 1.003877592 seconds time elapsed
>>
>> Fixes: a9a1790247bd ("perf stat: Ensure group is defined on top of the same cpu mask")
>> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
>> ---
>> tools/perf/builtin-stat.c | 7 +++----
>> tools/perf/util/evlist.c | 6 +-----
>> tools/perf/util/evsel.c | 13 +++++++++++--
>> tools/perf/util/evsel.h | 2 +-
>> 4 files changed, 16 insertions(+), 12 deletions(-)
>>
>> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
>> index a96f106dc93a..75c88c7939b1 100644
>> --- a/tools/perf/builtin-stat.c
>> +++ b/tools/perf/builtin-stat.c
>> @@ -271,10 +271,9 @@ static void evlist__check_cpu_maps(struct evlist *evlist)
>> pr_warning(" %s: %s\n", evsel->name, buf);
>> }
>>
>> - for_each_group_evsel(pos, leader) {
>> - evsel__set_leader(pos, pos);
>> - pos->core.nr_members = 0;
>> - }
>> + for_each_group_evsel(pos, leader)
>> + evsel__remove_from_group(pos, leader);
>> +
>> evsel->core.leader->nr_members = 0;
>
> This shouldn't be necessary now.
The leader should already point to itself, which is updated in
evsel__remove_from_group().
I will remove it in V3.
>
>> }
>> }
>> diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
>> index dfa65a383502..7fc544330fea 100644
>> --- a/tools/perf/util/evlist.c
>> +++ b/tools/perf/util/evlist.c
>> @@ -1795,11 +1795,7 @@ struct evsel *evlist__reset_weak_group(struct evlist *evsel_list, struct evsel *
>> * them. Some events, like Intel topdown, require being
>> * in a group and so keep these in the group.
>> */
>> - if (!evsel__must_be_in_group(c2) && c2 != leader) {
>> - evsel__set_leader(c2, c2);
>> - c2->core.nr_members = 0;
>> - leader->core.nr_members--;
>> - }
>> + evsel__remove_from_group(c2, leader);
>>
>> /*
>> * Set this for all former members of the group
>> diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
>> index b98882cbb286..deb428ee5e50 100644
>> --- a/tools/perf/util/evsel.c
>> +++ b/tools/perf/util/evsel.c
>> @@ -3083,7 +3083,16 @@ bool __weak arch_evsel__must_be_in_group(const struct evsel *evsel __maybe_unuse
>> return false;
>> }
>>
>> -bool evsel__must_be_in_group(const struct evsel *evsel)
>> +/*
>> + * Remove an event from a given group (leader).
>> + * Some events, e.g., perf metrics Topdown events,
>> + * must always be grouped. Ignore the events.
>> + */
>> +void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader)
>> {
>> - return arch_evsel__must_be_in_group(evsel);
>> + if (!arch_evsel__must_be_in_group(evsel) && evsel != leader) {
>> + evsel__set_leader(evsel, evsel);
>> + evsel->core.nr_members = 0;
>> + leader->core.nr_members--;
>> + }
>
> Should we also have:
>
> if (leader->core.nr_members == 1)
> leader->core.nr_members = 0;
>
> Otherwise, say:
>
> {instructions,cycles}
>
> with a remove of cycles becomes:
>
> {instructions}, cycles
>
> rather than the previous:
>
> instructions,cycles
>
> Actually, looking at:
> https://lore.kernel.org/lkml/20220512061308.1152233-2-irogers@google.com/
>
> + /* Reset the leader count if all entries were removed. */
> + if (leader->core.nr_members)
> + leader->core.nr_members = 0;
>
> is wrong and should be:
>
> + /* Reset the leader count if all entries were removed. */
> + if (leader->core.nr_members == 1)
> + leader->core.nr_members = 0;
>
For a perf metrics topdown group, the leader's nr_members must be > 1
after the reset. We should not clear it.
For the other weak groups, the leader's nr_members should equal 1
after the reset. We only need to clear it in that case.
I think it makes sense.
Thanks,
Kan
> I'll fix and re-send.
> Thanks,
> Ian
>
>> }
>> diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
>> index a36172ed4cf6..47f65f8e7c74 100644
>> --- a/tools/perf/util/evsel.h
>> +++ b/tools/perf/util/evsel.h
>> @@ -483,7 +483,7 @@ bool evsel__has_leader(struct evsel *evsel, struct evsel *leader);
>> bool evsel__is_leader(struct evsel *evsel);
>> void evsel__set_leader(struct evsel *evsel, struct evsel *leader);
>> int evsel__source_count(const struct evsel *evsel);
>> -bool evsel__must_be_in_group(const struct evsel *evsel);
>> +void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader);
>>
>> bool arch_evsel__must_be_in_group(const struct evsel *evsel);
>>
>> --
>> 2.35.1
>>
end of thread, other threads:[~2022-05-17 13:44 UTC | newest]
Thread overview: 10+ messages
-- links below jump to the message on this page --
2022-05-16 15:24 [PATCH V2 0/4] Several perf metrics topdown related fixes kan.liang
2022-05-16 15:24 ` [PATCH V2 1/4] perf evsel: Fixes topdown events in a weak group for the hybrid platform kan.liang
2022-05-17 2:52 ` Ian Rogers
2022-05-17 13:43 ` Liang, Kan
2022-05-16 15:24 ` [PATCH V2 2/4] perf stat: Always keep perf metrics topdown events in a group kan.liang
2022-05-17 3:11 ` Ian Rogers
2022-05-17 13:43 ` Liang, Kan
2022-05-16 15:24 ` [PATCH V2 3/4] perf parse-events: Support different format of the topdown event name kan.liang
2022-05-16 15:24 ` [PATCH V2 4/4] perf parse-events: Move slots event for the hybrid platform too kan.liang
2022-05-17 2:54 ` Ian Rogers