* [PATCH 0/6] perf/amd: Zen4 IBS extensions support
@ 2022-04-25  4:43 Ravi Bangoria
  2022-04-25  4:43 ` [PATCH 1/6] perf/amd/ibs: Add support for L3 miss filtering Ravi Bangoria
                   ` (6 more replies)
  0 siblings, 7 replies; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-25  4:43 UTC (permalink / raw)
  To: peterz, acme
  Cc: ravi.bangoria, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, rrichter, santosh.shukla

IBS support has been enhanced with two new features in the upcoming
uarch: 1. DataSrc extension and 2. L3 Miss Filtering capability. Both
are indicated by CPUID_Fn8000001B_EAX bit 11.

The DataSrc extension provides additional data source details for
tagged load/store operations. Add support for these new bits in the
perf report/script raw dump.

IBS L3 miss filtering works by tagging an instruction on IBS counter
overflow and generating an NMI if the tagged instruction causes an L3
miss. Samples without an L3 miss are discarded and the counter is
reset with a random value (between 1-15 for the fetch pmu and 1-127
for the op pmu). This helps reduce sampling overhead when the user is
interested only in such samples. One use case for such filtered
samples is to feed data to a page-migration daemon in tiered memory
systems.

Add support for L3 miss filtering in the IBS driver via a new pmu
attribute "l3missonly". Example usage:

  # perf record -a -e ibs_op/l3missonly=1/ --raw-samples sleep 5
  # perf report -D

Some important points to keep in mind while using L3 miss filtering:
1. Hw internally resets the sampling period when the tagged
   instruction does not cause an L3 miss, and there is no way to
   reconstruct the aggregated sampling period when this happens.
2. L3 miss is not the actual event being counted. Rather, IBS counts
   fetches, cycles or uOps depending on the configuration. Thus the
   sampling period has no direct connection to L3 misses.

The 1st point causes sampling period skew, so I've added a warning
message at perf record:

  # perf record -c 10000 -C 0 -e ibs_op/l3missonly=1/
  WARNING: Hw internally resets sampling period when L3 Miss Filtering is enabled
  and tagged operation does not cause L3 Miss. This causes sampling period skew.

The user can configure a smaller sampling period to get more samples
while using l3missonly.

Ravi Bangoria (6):
  perf/amd/ibs: Add support for L3 miss filtering
  perf/amd/ibs: Advertise zen4_ibs_extensions as pmu capability
    attribute
  perf/tool/amd/ibs: Warn about sampling period skew
  perf/tool: Parse non-cpu pmu capabilities
  perf/tool/amd/ibs: Support new IBS bits in raw trace dump
  perf/tool/amd/ibs: Fix comment

 arch/x86/events/amd/ibs.c                     |  76 +++++--
 arch/x86/include/asm/amd-ibs.h                |  18 +-
 arch/x86/include/asm/perf_event.h             |   3 +
 tools/arch/x86/include/asm/amd-ibs.h          |  18 +-
 .../Documentation/perf.data-file-format.txt   |  18 ++
 tools/perf/arch/x86/util/evsel.c              |  31 +++
 tools/perf/util/amd-sample-raw.c              |  68 +++++-
 tools/perf/util/env.c                         |  48 +++-
 tools/perf/util/env.h                         |  11 +
 tools/perf/util/evsel.c                       |   7 +
 tools/perf/util/evsel.h                       |   1 +
 tools/perf/util/header.c                      | 211 ++++++++++++++++++
 tools/perf/util/header.h                      |   1 +
 tools/perf/util/pmu.c                         |  15 +-
 tools/perf/util/pmu.h                         |   2 +
 15 files changed, 483 insertions(+), 45 deletions(-)

-- 
2.27.0


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 1/6] perf/amd/ibs: Add support for L3 miss filtering
  2022-04-25  4:43 [PATCH 0/6] perf/amd: Zen4 IBS extensions support Ravi Bangoria
@ 2022-04-25  4:43 ` Ravi Bangoria
  2022-04-26  9:18   ` Robert Richter
  2022-04-26 10:07   ` Peter Zijlstra
  2022-04-25  4:43 ` [PATCH 2/6] perf/amd/ibs: Advertise zen4_ibs_extensions as pmu capability attribute Ravi Bangoria
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-25  4:43 UTC (permalink / raw)
  To: peterz, acme
  Cc: ravi.bangoria, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, rrichter, santosh.shukla

IBS L3 miss filtering works by tagging an instruction on IBS counter
overflow and generating an NMI if the tagged instruction causes an L3
miss. Samples without an L3 miss are discarded and the counter is
reset with a random value (between 1-15 for the fetch pmu and 1-127
for the op pmu). This helps reduce sampling overhead when the user is
interested only in such samples. One use case for such filtered
samples is to feed data to a page-migration daemon in tiered memory
systems.

Add support for L3 miss filtering in the IBS driver via a new pmu
attribute "l3missonly". Example usage:

  # perf record -a -e ibs_op/l3missonly=1/ --raw-samples sleep 5
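
As a rough illustration only (not part of this patch), the same
request maps onto perf_event_attr roughly as below. The pmu type value
is an assumption and must be read from
/sys/bus/event_source/devices/ibs_op/type at runtime:

  struct perf_event_attr attr = {
          .size          = sizeof(attr),
          .type          = ibs_op_pmu_type,  /* assumption: read from sysfs "type" */
          .config        = 1ULL << 16,       /* l3missonly, i.e. config:16 for ibs_op */
          .sample_period = 10000,
  };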

Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
---
 arch/x86/events/amd/ibs.c         | 42 ++++++++++++++++++++++---------
 arch/x86/include/asm/perf_event.h |  3 +++
 2 files changed, 33 insertions(+), 12 deletions(-)

diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
index 9739019d4b67..a5303d62060c 100644
--- a/arch/x86/events/amd/ibs.c
+++ b/arch/x86/events/amd/ibs.c
@@ -520,16 +520,12 @@ static void perf_ibs_read(struct perf_event *event) { }
 
 PMU_FORMAT_ATTR(rand_en,	"config:57");
 PMU_FORMAT_ATTR(cnt_ctl,	"config:19");
+PMU_EVENT_ATTR_STRING(l3missonly, fetch_l3missonly, "config:59");
+PMU_EVENT_ATTR_STRING(l3missonly, op_l3missonly, "config:16");
 
-static struct attribute *ibs_fetch_format_attrs[] = {
-	&format_attr_rand_en.attr,
-	NULL,
-};
-
-static struct attribute *ibs_op_format_attrs[] = {
-	NULL,	/* &format_attr_cnt_ctl.attr if IBS_CAPS_OPCNT */
-	NULL,
-};
+/* size = nr attrs plus NULL at the end */
+static struct attribute *ibs_fetch_format_attrs[3];
+static struct attribute *ibs_op_format_attrs[3];
 
 static struct perf_ibs perf_ibs_fetch = {
 	.pmu = {
@@ -759,9 +755,9 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
 	return ret;
 }
 
-static __init void perf_event_ibs_init(void)
+static __init void perf_ibs_fetch_prepare(void)
 {
-	struct attribute **attr = ibs_op_format_attrs;
+	struct attribute **format_attrs = perf_ibs_fetch.format_attrs;
 
 	/*
 	 * Some chips fail to reset the fetch count when it is written; instead
@@ -773,11 +769,22 @@ static __init void perf_event_ibs_init(void)
 	if (boot_cpu_data.x86 == 0x19 && boot_cpu_data.x86_model < 0x10)
 		perf_ibs_fetch.fetch_ignore_if_zero_rip = 1;
 
+	*format_attrs++ = &format_attr_rand_en.attr;
+	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
+		perf_ibs_fetch.config_mask |= IBS_FETCH_L3MISSONLY;
+		*format_attrs++ = &fetch_l3missonly.attr.attr;
+	}
+
 	perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
+}
+
+static __init void perf_ibs_op_prepare(void)
+{
+	struct attribute **format_attrs = perf_ibs_op.format_attrs;
 
 	if (ibs_caps & IBS_CAPS_OPCNT) {
 		perf_ibs_op.config_mask |= IBS_OP_CNT_CTL;
-		*attr++ = &format_attr_cnt_ctl.attr;
+		*format_attrs++ = &format_attr_cnt_ctl.attr;
 	}
 
 	if (ibs_caps & IBS_CAPS_OPCNTEXT) {
@@ -786,7 +793,18 @@ static __init void perf_event_ibs_init(void)
 		perf_ibs_op.cnt_mask    |= IBS_OP_MAX_CNT_EXT_MASK;
 	}
 
+	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
+		perf_ibs_op.config_mask |= IBS_OP_L3MISSONLY;
+		*format_attrs++ = &op_l3missonly.attr.attr;
+	}
+
 	perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
+}
+
+static __init void perf_event_ibs_init(void)
+{
+	perf_ibs_fetch_prepare();
+	perf_ibs_op_prepare();
 
 	register_nmi_handler(NMI_LOCAL, perf_ibs_nmi_handler, 0, "perf_ibs");
 	pr_info("perf: AMD IBS detected (0x%08x)\n", ibs_caps);
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index b06e4c573add..a24b637a6e1d 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -391,6 +391,7 @@ struct pebs_xmm {
 #define IBS_CAPS_OPBRNFUSE		(1U<<8)
 #define IBS_CAPS_FETCHCTLEXTD		(1U<<9)
 #define IBS_CAPS_OPDATA4		(1U<<10)
+#define IBS_CAPS_ZEN4IBSEXTENSIONS	(1U<<11)
 
 #define IBS_CAPS_DEFAULT		(IBS_CAPS_AVAIL		\
 					 | IBS_CAPS_FETCHSAM	\
@@ -404,6 +405,7 @@ struct pebs_xmm {
 #define IBSCTL_LVT_OFFSET_MASK		0x0F
 
 /* IBS fetch bits/masks */
+#define IBS_FETCH_L3MISSONLY	(1ULL<<59)
 #define IBS_FETCH_RAND_EN	(1ULL<<57)
 #define IBS_FETCH_VAL		(1ULL<<49)
 #define IBS_FETCH_ENABLE	(1ULL<<48)
@@ -420,6 +422,7 @@ struct pebs_xmm {
 #define IBS_OP_CNT_CTL		(1ULL<<19)
 #define IBS_OP_VAL		(1ULL<<18)
 #define IBS_OP_ENABLE		(1ULL<<17)
+#define IBS_OP_L3MISSONLY	(1ULL<<16)
 #define IBS_OP_MAX_CNT		0x0000FFFFULL
 #define IBS_OP_MAX_CNT_EXT	0x007FFFFFULL	/* not a register bit mask */
 #define IBS_OP_MAX_CNT_EXT_MASK	(0x7FULL<<20)	/* separate upper 7 bits */
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 2/6] perf/amd/ibs: Advertise zen4_ibs_extensions as pmu capability attribute
  2022-04-25  4:43 [PATCH 0/6] perf/amd: Zen4 IBS extensions support Ravi Bangoria
  2022-04-25  4:43 ` [PATCH 1/6] perf/amd/ibs: Add support for L3 miss filtering Ravi Bangoria
@ 2022-04-25  4:43 ` Ravi Bangoria
  2022-04-26  9:57   ` Robert Richter
  2022-04-25  4:43 ` [PATCH 3/6] perf/tool/amd/ibs: Warn about sampling period skew Ravi Bangoria
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-25  4:43 UTC (permalink / raw)
  To: peterz, acme
  Cc: ravi.bangoria, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, rrichter, santosh.shukla

A PMU driver can advertise certain features via capability attributes
(the 'caps' sysfs directory) which can be consumed by userspace tools
like perf. Add a zen4_ibs_extensions capability attribute for the IBS
pmus. This attribute will be enabled when CPUID_Fn8000001B_EAX[11] is
set.

Without patch:

  $ ls /sys/bus/event_source/devices/ibs_op/caps
  ls: cannot access '/sys/.../ibs_op/caps': No such file or directory

With patch:

  $ ls /sys/bus/event_source/devices/ibs_op/caps
  zen4_ibs_extensions
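
A minimal probe sketch (editor's illustration, not part of the patch;
it assumes the standard sysfs mount point, needs <unistd.h>, and the
flag variable name is made up):

  /* access() returns 0 when the caps file exists */
  if (!access("/sys/bus/event_source/devices/ibs_op/caps/zen4_ibs_extensions", F_OK))
          zen4_ibs_extensions = true;   /* hypothetical flag */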

Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
---
 arch/x86/events/amd/ibs.c | 34 ++++++++++++++++++++++++++++------
 1 file changed, 28 insertions(+), 6 deletions(-)

diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
index a5303d62060c..54e12bd7843e 100644
--- a/arch/x86/events/amd/ibs.c
+++ b/arch/x86/events/amd/ibs.c
@@ -95,8 +95,10 @@ struct perf_ibs {
 	struct cpu_perf_ibs __percpu	*pcpu;
 
 	struct attribute		**format_attrs;
+	struct attribute		**caps_attrs;
 	struct attribute_group		format_group;
-	const struct attribute_group	*attr_groups[2];
+	struct attribute_group		caps_group;
+	const struct attribute_group	*attr_groups[3];
 
 	u64				(*get_count)(u64 config);
 };
@@ -522,10 +524,13 @@ PMU_FORMAT_ATTR(rand_en,	"config:57");
 PMU_FORMAT_ATTR(cnt_ctl,	"config:19");
 PMU_EVENT_ATTR_STRING(l3missonly, fetch_l3missonly, "config:59");
 PMU_EVENT_ATTR_STRING(l3missonly, op_l3missonly, "config:16");
+PMU_EVENT_ATTR_STRING(zen4_ibs_extensions, zen4_ibs_extensions, "1");
 
 /* size = nr attrs plus NULL at the end */
 static struct attribute *ibs_fetch_format_attrs[3];
 static struct attribute *ibs_op_format_attrs[3];
+static struct attribute *ibs_fetch_caps_attrs[2];
+static struct attribute *ibs_op_caps_attrs[2];
 
 static struct perf_ibs perf_ibs_fetch = {
 	.pmu = {
@@ -548,6 +553,7 @@ static struct perf_ibs perf_ibs_fetch = {
 	.offset_mask		= { MSR_AMD64_IBSFETCH_REG_MASK },
 	.offset_max		= MSR_AMD64_IBSFETCH_REG_COUNT,
 	.format_attrs		= ibs_fetch_format_attrs,
+	.caps_attrs		= ibs_fetch_caps_attrs,
 
 	.get_count		= get_ibs_fetch_count,
 };
@@ -574,6 +580,7 @@ static struct perf_ibs perf_ibs_op = {
 	.offset_mask		= { MSR_AMD64_IBSOP_REG_MASK },
 	.offset_max		= MSR_AMD64_IBSOP_REG_COUNT,
 	.format_attrs		= ibs_op_format_attrs,
+	.caps_attrs		= ibs_op_caps_attrs,
 
 	.get_count		= get_ibs_op_count,
 };
@@ -728,6 +735,7 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
 {
 	struct cpu_perf_ibs __percpu *pcpu;
 	int ret;
+	int i = 0;
 
 	pcpu = alloc_percpu(struct cpu_perf_ibs);
 	if (!pcpu)
@@ -736,16 +744,26 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
 	perf_ibs->pcpu = pcpu;
 
 	/* register attributes */
+	memset(&perf_ibs->attr_groups, 0, sizeof(perf_ibs->attr_groups));
 	if (perf_ibs->format_attrs[0]) {
 		memset(&perf_ibs->format_group, 0, sizeof(perf_ibs->format_group));
-		perf_ibs->format_group.name	= "format";
-		perf_ibs->format_group.attrs	= perf_ibs->format_attrs;
+		perf_ibs->format_group.name = "format";
+		perf_ibs->format_group.attrs = perf_ibs->format_attrs;
 
-		memset(&perf_ibs->attr_groups, 0, sizeof(perf_ibs->attr_groups));
-		perf_ibs->attr_groups[0]	= &perf_ibs->format_group;
-		perf_ibs->pmu.attr_groups	= perf_ibs->attr_groups;
+		perf_ibs->attr_groups[i++] = &perf_ibs->format_group;
 	}
 
+	if (perf_ibs->caps_attrs[0]) {
+		memset(&perf_ibs->caps_group, 0, sizeof(perf_ibs->caps_group));
+		perf_ibs->caps_group.name = "caps";
+		perf_ibs->caps_group.attrs = perf_ibs->caps_attrs;
+
+		perf_ibs->attr_groups[i++] = &perf_ibs->caps_group;
+	}
+
+	if (i)
+		perf_ibs->pmu.attr_groups = perf_ibs->attr_groups;
+
 	ret = perf_pmu_register(&perf_ibs->pmu, name, -1);
 	if (ret) {
 		perf_ibs->pcpu = NULL;
@@ -758,6 +776,7 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
 static __init void perf_ibs_fetch_prepare(void)
 {
 	struct attribute **format_attrs = perf_ibs_fetch.format_attrs;
+	struct attribute **caps_attr = perf_ibs_fetch.caps_attrs;
 
 	/*
 	 * Some chips fail to reset the fetch count when it is written; instead
@@ -773,6 +792,7 @@ static __init void perf_ibs_fetch_prepare(void)
 	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
 		perf_ibs_fetch.config_mask |= IBS_FETCH_L3MISSONLY;
 		*format_attrs++ = &fetch_l3missonly.attr.attr;
+		*caps_attr++ = &zen4_ibs_extensions.attr.attr;
 	}
 
 	perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
@@ -781,6 +801,7 @@ static __init void perf_ibs_fetch_prepare(void)
 static __init void perf_ibs_op_prepare(void)
 {
 	struct attribute **format_attrs = perf_ibs_op.format_attrs;
+	struct attribute **caps_attr = perf_ibs_op.caps_attrs;
 
 	if (ibs_caps & IBS_CAPS_OPCNT) {
 		perf_ibs_op.config_mask |= IBS_OP_CNT_CTL;
@@ -796,6 +817,7 @@ static __init void perf_ibs_op_prepare(void)
 	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
 		perf_ibs_op.config_mask |= IBS_OP_L3MISSONLY;
 		*format_attrs++ = &op_l3missonly.attr.attr;
+		*caps_attr++ = &zen4_ibs_extensions.attr.attr;
 	}
 
 	perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 3/6] perf/tool/amd/ibs: Warn about sampling period skew
  2022-04-25  4:43 [PATCH 0/6] perf/amd: Zen4 IBS extensions support Ravi Bangoria
  2022-04-25  4:43 ` [PATCH 1/6] perf/amd/ibs: Add support for L3 miss filtering Ravi Bangoria
  2022-04-25  4:43 ` [PATCH 2/6] perf/amd/ibs: Advertise zen4_ibs_extensions as pmu capability attribute Ravi Bangoria
@ 2022-04-25  4:43 ` Ravi Bangoria
  2022-04-26 10:09   ` Robert Richter
  2022-04-25  4:43 ` [PATCH 4/6] perf/tool: Parse non-cpu pmu capabilities Ravi Bangoria
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-25  4:43 UTC (permalink / raw)
  To: peterz, acme
  Cc: ravi.bangoria, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, rrichter, santosh.shukla

When IBS L3 miss filtering is enabled, samples without an L3 miss are
discarded and the counter is reset with a random value (between 1-15
for the fetch pmu and 1-127 for the op pmu). This causes a sampling
period skew, and there is no way to reconstruct the aggregated
sampling period. So print a warning at perf record time if the user
sets l3missonly=1.

Ex:
  # perf record -c 10000 -C 0 -e ibs_op/l3missonly=1/
  WARNING: Hw internally resets sampling period when L3 Miss Filtering is enabled
  and tagged operation does not cause L3 Miss. This causes sampling period skew.

Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
---
 tools/perf/arch/x86/util/evsel.c | 31 +++++++++++++++++++++++++++++++
 tools/perf/util/evsel.c          |  7 +++++++
 tools/perf/util/evsel.h          |  1 +
 3 files changed, 39 insertions(+)

diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
index ac2899a25b7a..acaabdba5db8 100644
--- a/tools/perf/arch/x86/util/evsel.c
+++ b/tools/perf/arch/x86/util/evsel.c
@@ -4,6 +4,8 @@
 #include "util/evsel.h"
 #include "util/env.h"
 #include "linux/string.h"
+#include "util/pmu.h"
+#include "util/debug.h"
 
 void arch_evsel__set_sample_weight(struct evsel *evsel)
 {
@@ -29,3 +31,32 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
 
 	free(env.cpuid);
 }
+
+void arch_evsel__warn_ambiguity(struct evsel *evsel, struct perf_event_attr *attr)
+{
+	struct perf_env *env = evsel__env(evsel);
+	struct perf_pmu *evsel_pmu = evsel__find_pmu(evsel);
+	struct perf_pmu *ibs_fetch_pmu = perf_pmu__find("ibs_fetch");
+	struct perf_pmu *ibs_op_pmu = perf_pmu__find("ibs_op");
+	static int warned_once;
+
+	if (warned_once || !perf_env__cpuid(env) || !env->cpuid ||
+	    !strstarts(env->cpuid, "AuthenticAMD") || !evsel_pmu)
+		return;
+
+	if (ibs_fetch_pmu && ibs_fetch_pmu->type == evsel_pmu->type) {
+		if (attr->config & (1ULL << 59)) {
+			pr_warning(
+"WARNING: Hw internally resets sampling period when L3 Miss Filtering is enabled\n"
+"and tagged operation does not cause L3 Miss. This causes sampling period skew.\n");
+			warned_once = 1;
+		}
+	} else if (ibs_op_pmu && ibs_op_pmu->type == evsel_pmu->type) {
+		if (attr->config & (1ULL << 16)) {
+			pr_warning(
+"WARNING: Hw internally resets sampling period when L3 Miss Filtering is enabled\n"
+"and tagged operation does not cause L3 Miss. This causes sampling period skew.\n");
+			warned_once = 1;
+		}
+	}
+}
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 2a1729e7aee4..4f8b72d4a521 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -1064,6 +1064,11 @@ void __weak arch_evsel__fixup_new_cycles(struct perf_event_attr *attr __maybe_un
 {
 }
 
+void __weak arch_evsel__warn_ambiguity(struct evsel *evsel __maybe_unused,
+				       struct perf_event_attr *attr __maybe_unused)
+{
+}
+
 static void evsel__set_default_freq_period(struct record_opts *opts,
 					   struct perf_event_attr *attr)
 {
@@ -1339,6 +1344,8 @@ void evsel__config(struct evsel *evsel, struct record_opts *opts,
 	 */
 	if (evsel__is_dummy_event(evsel))
 		evsel__reset_sample_bit(evsel, BRANCH_STACK);
+
+	arch_evsel__warn_ambiguity(evsel, attr);
 }
 
 int evsel__set_filter(struct evsel *evsel, const char *filter)
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 041b42d33bf5..195ae30ec45b 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -281,6 +281,7 @@ void evsel__set_sample_id(struct evsel *evsel, bool use_sample_identifier);
 
 void arch_evsel__set_sample_weight(struct evsel *evsel);
 void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr);
+void arch_evsel__warn_ambiguity(struct evsel *evsel, struct perf_event_attr *attr);
 
 int evsel__set_filter(struct evsel *evsel, const char *filter);
 int evsel__append_tp_filter(struct evsel *evsel, const char *filter);
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 4/6] perf/tool: Parse non-cpu pmu capabilities
  2022-04-25  4:43 [PATCH 0/6] perf/amd: Zen4 IBS extensions support Ravi Bangoria
                   ` (2 preceding siblings ...)
  2022-04-25  4:43 ` [PATCH 3/6] perf/tool/amd/ibs: Warn about sampling period skew Ravi Bangoria
@ 2022-04-25  4:43 ` Ravi Bangoria
  2022-04-26 10:37   ` Robert Richter
  2022-04-25  4:43 ` [PATCH 5/6] perf/tool/amd/ibs: Support new IBS bits in raw trace dump Ravi Bangoria
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-25  4:43 UTC (permalink / raw)
  To: peterz, acme
  Cc: ravi.bangoria, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, rrichter, santosh.shukla

PMUs advertise their capabilities via sysfs attribute files, but the
perf tool currently parses only core (cpu) pmu capabilities. Add
support for parsing non-cpu pmu capabilities.
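
A minimal usage sketch of the new helper (illustrative; patch 5/6 uses
the same call to detect zen4_ibs_extensions):

  /* core_type 0 for non-hybrid pmus; returns the value string after "cap=" or NULL */
  char *val = perf_env__find_pmu_cap(env, 0, "ibs_op", "zen4_ibs_extensions");
  if (val)
          printf("zen4_ibs_extensions = %s\n", val);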

Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
---
 .../Documentation/perf.data-file-format.txt   |  18 ++
 tools/perf/util/env.c                         |  48 +++-
 tools/perf/util/env.h                         |  11 +
 tools/perf/util/header.c                      | 211 ++++++++++++++++++
 tools/perf/util/header.h                      |   1 +
 tools/perf/util/pmu.c                         |  15 +-
 tools/perf/util/pmu.h                         |   2 +
 7 files changed, 301 insertions(+), 5 deletions(-)

diff --git a/tools/perf/Documentation/perf.data-file-format.txt b/tools/perf/Documentation/perf.data-file-format.txt
index f56d0e0fbff6..dea3acb36558 100644
--- a/tools/perf/Documentation/perf.data-file-format.txt
+++ b/tools/perf/Documentation/perf.data-file-format.txt
@@ -435,6 +435,24 @@ struct {
 	} [nr_pmu];
 };
 
+	HEADER_PMU_CAPS = 32,
+
+	List of pmu capabilities (except cpu pmu which is already
+	covered by HEADER_CPU_PMU_CAPS)
+
+struct {
+	u32 nr_pmus;
+	struct {
+		u8 core_type;	/* For hybrid topology */
+		char pmu_name[];
+		u16 nr_caps;
+		struct {
+			char name[];
+			char value[];
+		} [nr_caps];
+	} [nr_pmus];
+};
+
 	other bits are reserved and should ignored for now
 	HEADER_FEAT_BITS	= 256,
 
diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
index 579e44c59914..928633f07086 100644
--- a/tools/perf/util/env.c
+++ b/tools/perf/util/env.c
@@ -179,7 +179,7 @@ static void perf_env__purge_bpf(struct perf_env *env __maybe_unused)
 
 void perf_env__exit(struct perf_env *env)
 {
-	int i;
+	int i, j;
 
 	perf_env__purge_bpf(env);
 	perf_env__purge_cgroups(env);
@@ -222,6 +222,14 @@ void perf_env__exit(struct perf_env *env)
 		zfree(&env->hybrid_cpc_nodes[i].pmu_name);
 	}
 	zfree(&env->hybrid_cpc_nodes);
+
+	for (i = 0; i < env->nr_pmus_with_caps; i++) {
+		zfree(&env->env_pmu_caps[i].pmu_name);
+		for (j = 0; j < env->env_pmu_caps[i].nr_caps; j++)
+			zfree(&env->env_pmu_caps[i].pmu_caps[j]);
+		zfree(&env->env_pmu_caps[i].pmu_caps);
+	}
+	zfree(&env->env_pmu_caps);
 }
 
 void perf_env__init(struct perf_env *env)
@@ -527,3 +535,41 @@ int perf_env__numa_node(struct perf_env *env, struct perf_cpu cpu)
 
 	return cpu.cpu >= 0 && cpu.cpu < env->nr_numa_map ? env->numa_map[cpu.cpu] : -1;
 }
+
+char *perf_env__find_pmu_cap(struct perf_env *env, u8 core_type,
+			     const char *pmu_name, const char *cap)
+{
+	struct env_pmu_caps *env_pmu_caps = env->env_pmu_caps;
+	char *cap_eq;
+	int cap_size;
+	char **ptr;
+	int i, j;
+
+	if (!pmu_name || !cap)
+		return NULL;
+
+	cap_size = strlen(cap);
+	cap_eq = zalloc(cap_size + 2);
+	if (!cap_eq)
+		return NULL;
+
+	memcpy(cap_eq, cap, cap_size);
+	cap_eq[cap_size] = '=';
+
+	for (i = 0; i < env->nr_pmus_with_caps; i++) {
+		if (env_pmu_caps[i].core_type != core_type ||
+		    strcmp(env_pmu_caps[i].pmu_name, pmu_name))
+			continue;
+
+		ptr = env_pmu_caps[i].pmu_caps;
+
+		for (j = 0; j < env_pmu_caps[i].nr_caps; j++) {
+			if (!strncmp(ptr[j], cap_eq, cap_size + 1)) {
+				free(cap_eq);
+				return &ptr[j][cap_size + 1];
+			}
+		}
+	}
+	free(cap_eq);
+	return NULL;
+}
diff --git a/tools/perf/util/env.h b/tools/perf/util/env.h
index a3541f98e1fc..2b767f4ae6e0 100644
--- a/tools/perf/util/env.h
+++ b/tools/perf/util/env.h
@@ -50,6 +50,13 @@ struct hybrid_cpc_node {
 	char            *pmu_name;
 };
 
+struct env_pmu_caps {
+	u8	core_type;
+	char	*pmu_name;
+	u16	nr_caps;
+	char	**pmu_caps;
+};
+
 struct perf_env {
 	char			*hostname;
 	char			*os_release;
@@ -75,6 +82,7 @@ struct perf_env {
 	int			nr_cpu_pmu_caps;
 	int			nr_hybrid_nodes;
 	int			nr_hybrid_cpc_nodes;
+	int			nr_pmus_with_caps;
 	char			*cmdline;
 	const char		**cmdline_argv;
 	char			*sibling_cores;
@@ -95,6 +103,7 @@ struct perf_env {
 	unsigned long long	 memory_bsize;
 	struct hybrid_node	*hybrid_nodes;
 	struct hybrid_cpc_node	*hybrid_cpc_nodes;
+	struct env_pmu_caps	*env_pmu_caps;
 #ifdef HAVE_LIBBPF_SUPPORT
 	/*
 	 * bpf_info_lock protects bpf rbtrees. This is needed because the
@@ -172,4 +181,6 @@ bool perf_env__insert_btf(struct perf_env *env, struct btf_node *btf_node);
 struct btf_node *perf_env__find_btf(struct perf_env *env, __u32 btf_id);
 
 int perf_env__numa_node(struct perf_env *env, struct perf_cpu cpu);
+char *perf_env__find_pmu_cap(struct perf_env *env, u8 core_type,
+			     const char *pmu_name, const char *cap);
 #endif /* __PERF_ENV_H */
diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
index d546ff724dbe..425859a001d1 100644
--- a/tools/perf/util/header.c
+++ b/tools/perf/util/header.c
@@ -217,6 +217,19 @@ static int __do_read(struct feat_fd *ff, void *addr, ssize_t size)
 	return __do_read_buf(ff, addr, size);
 }
 
+static int do_read_u16(struct feat_fd *ff, u16 *addr)
+{
+	int ret;
+
+	ret = __do_read(ff, addr, sizeof(*addr));
+	if (ret)
+		return ret;
+
+	if (ff->ph->needs_swap)
+		*addr = bswap_16(*addr);
+	return 0;
+}
+
 static int do_read_u32(struct feat_fd *ff, u32 *addr)
 {
 	int ret;
@@ -1529,6 +1542,77 @@ static int write_hybrid_cpu_pmu_caps(struct feat_fd *ff,
 	return 0;
 }
 
+/*
+ * File format:
+ *
+ * struct {
+ *	u32 nr_pmus;
+ *	struct {
+ *		u8 core_type;
+ *		char pmu_name[];
+ *		u16 nr_caps;
+ *		struct {
+ *			char name[];
+ *			char value[];
+ *		} [nr_caps];
+ *	} [nr_pmus];
+ * };
+ */
+static int write_pmu_caps(struct feat_fd *ff, struct evlist *evlist __maybe_unused)
+{
+	struct perf_pmu_caps *caps = NULL;
+	struct perf_pmu *pmu = NULL;
+	u8 core_type = 0;
+	u32 nr_pmus = 0;
+	int ret;
+
+	while ((pmu = perf_pmu__scan(pmu))) {
+		if (!pmu->name || !strncmp(pmu->name, "cpu", 3) ||
+		    perf_pmu__caps_parse(pmu) <= 0)
+			continue;
+		nr_pmus++;
+	}
+
+	ret = do_write(ff, &nr_pmus, sizeof(nr_pmus));
+	if (ret < 0)
+		return ret;
+
+	if (!nr_pmus)
+		return 0;
+
+	while ((pmu = perf_pmu__scan(pmu))) {
+		if (!pmu->name || !strncmp(pmu->name, "cpu", 3) || !pmu->nr_caps)
+			continue;
+
+		/*
+		 * Currently core_type is always set to 0. But it can be
+		 * used in future for hybrid topology pmus.
+		 */
+		ret = do_write(ff, &core_type, sizeof(core_type));
+		if (ret < 0)
+			return ret;
+
+		ret = do_write_string(ff, pmu->name);
+		if (ret < 0)
+			return ret;
+
+		ret = do_write(ff, &pmu->nr_caps, sizeof(pmu->nr_caps));
+		if (ret < 0)
+			return ret;
+
+		list_for_each_entry(caps, &pmu->caps, list) {
+			ret = do_write_string(ff, caps->name);
+			if (ret < 0)
+				return ret;
+
+			ret = do_write_string(ff, caps->value);
+			if (ret < 0)
+				return ret;
+		}
+	}
+	return 0;
+}
+
 static void print_hostname(struct feat_fd *ff, FILE *fp)
 {
 	fprintf(fp, "# hostname : %s\n", ff->ph->env.hostname);
@@ -2158,6 +2242,31 @@ static void print_mem_topology(struct feat_fd *ff, FILE *fp)
 	}
 }
 
+static void print_pmu_caps(struct feat_fd *ff, FILE *fp)
+{
+	struct env_pmu_caps *env_pmu_caps = ff->ph->env.env_pmu_caps;
+	int nr_pmus_with_caps = ff->ph->env.nr_pmus_with_caps;
+	const char *delimiter = "";
+	char **ptr;
+	int i, j;
+
+	if (!nr_pmus_with_caps)
+		return;
+
+	for (i = 0; i < nr_pmus_with_caps; i++) {
+		fprintf(fp, "# %s pmu capabilities: ", env_pmu_caps[i].pmu_name);
+
+		ptr = env_pmu_caps[i].pmu_caps;
+
+		delimiter = "";
+		for (j = 0; j < env_pmu_caps[i].nr_caps; j++) {
+			fprintf(fp, "%s%s", delimiter, ptr[j]);
+			delimiter = ", ";
+		}
+		fprintf(fp, "\n");
+	}
+}
+
 static int __event_process_build_id(struct perf_record_header_build_id *bev,
 				    char *filename,
 				    struct perf_session *session)
@@ -3268,6 +3377,107 @@ static int process_hybrid_cpu_pmu_caps(struct feat_fd *ff,
 	return ret;
 }
 
+static int __process_pmu_caps(struct feat_fd *ff, struct env_pmu_caps *env_pmu_caps)
+{
+	u16 nr_caps = env_pmu_caps->nr_caps;
+	int name_size, value_size;
+	char *name, *value, *ptr;
+	u16 i;
+
+	env_pmu_caps->pmu_caps = zalloc(sizeof(char *) * nr_caps);
+	if (!env_pmu_caps->pmu_caps)
+		return -1;
+
+	for (i = 0; i < nr_caps; i++) {
+		name = do_read_string(ff);
+		if (!name)
+			goto error;
+
+		value = do_read_string(ff);
+		if (!value)
+			goto free_name;
+
+		name_size = strlen(name);
+		value_size = strlen(value);
+		ptr = zalloc(sizeof(char) * (name_size + value_size + 2));
+		if (!ptr)
+			goto free_value;
+
+		memcpy(ptr, name, name_size);
+		ptr[name_size] = '=';
+		memcpy(ptr + name_size + 1, value, value_size);
+		env_pmu_caps->pmu_caps[i] = ptr;
+
+		free(value);
+		free(name);
+	}
+	return 0;
+
+free_value:
+	free(value);
+free_name:
+	free(name);
+error:
+	for (; i > 0; i--)
+		free(env_pmu_caps->pmu_caps[i - 1]);
+	free(env_pmu_caps->pmu_caps);
+	return -1;
+}
+
+static int process_pmu_caps(struct feat_fd *ff, void *data __maybe_unused)
+{
+	struct env_pmu_caps *env_pmu_caps;
+	u32 nr_pmus;
+	u32 i;
+	u16 j;
+
+	ff->ph->env.nr_pmus_with_caps = 0;
+	ff->ph->env.env_pmu_caps = NULL;
+
+	if (do_read_u32(ff, &nr_pmus))
+		return -1;
+
+	if (!nr_pmus)
+		return 0;
+
+	env_pmu_caps = zalloc(sizeof(struct env_pmu_caps) * nr_pmus);
+	if (!env_pmu_caps)
+		return -ENOMEM;
+
+	for (i = 0; i < nr_pmus; i++) {
+		if (__do_read(ff, &env_pmu_caps[i].core_type, sizeof(env_pmu_caps[i].core_type)))
+			goto error;
+
+		env_pmu_caps[i].pmu_name = do_read_string(ff);
+		if (!env_pmu_caps[i].pmu_name)
+			goto error;
+
+		if (do_read_u16(ff, &env_pmu_caps[i].nr_caps))
+			goto free_pmu_name;
+
+		if (!__process_pmu_caps(ff, &env_pmu_caps[i]))
+			continue;
+
+free_pmu_name:
+		free(env_pmu_caps[i].pmu_name);
+		goto error;
+	}
+
+	ff->ph->env.nr_pmus_with_caps = nr_pmus;
+	ff->ph->env.env_pmu_caps = env_pmu_caps;
+	return 0;
+
+error:
+	for (; i > 0; i--) {
+		free(env_pmu_caps[i - 1].pmu_name);
+		for (j = 0; j < env_pmu_caps[i - 1].nr_caps; j++)
+			free(env_pmu_caps[i - 1].pmu_caps[j]);
+		free(env_pmu_caps[i - 1].pmu_caps);
+	}
+	free(env_pmu_caps);
+	return -1;
+}
+
 #define FEAT_OPR(n, func, __full_only) \
 	[HEADER_##n] = {					\
 		.name	    = __stringify(n),			\
@@ -3331,6 +3541,7 @@ const struct perf_header_feature_ops feat_ops[HEADER_LAST_FEATURE] = {
 	FEAT_OPR(CLOCK_DATA,	clock_data,	false),
 	FEAT_OPN(HYBRID_TOPOLOGY,	hybrid_topology,	true),
 	FEAT_OPR(HYBRID_CPU_PMU_CAPS,	hybrid_cpu_pmu_caps,	false),
+	FEAT_OPR(PMU_CAPS,	pmu_caps,	false),
 };
 
 struct header_print_data {
diff --git a/tools/perf/util/header.h b/tools/perf/util/header.h
index c9e3265832d9..38584419678f 100644
--- a/tools/perf/util/header.h
+++ b/tools/perf/util/header.h
@@ -47,6 +47,7 @@ enum {
 	HEADER_CLOCK_DATA,
 	HEADER_HYBRID_TOPOLOGY,
 	HEADER_HYBRID_CPU_PMU_CAPS,
+	HEADER_PMU_CAPS,
 	HEADER_LAST_FEATURE,
 	HEADER_FEAT_BITS	= 256,
 };
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index 9a1c7e63e663..8d599acb7569 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -1890,16 +1890,22 @@ int perf_pmu__caps_parse(struct perf_pmu *pmu)
 	const char *sysfs = sysfs__mountpoint();
 	DIR *caps_dir;
 	struct dirent *evt_ent;
-	int nr_caps = 0;
+
+	if (pmu->caps_initialized)
+		return pmu->nr_caps;
 
 	if (!sysfs)
 		return -1;
 
+	pmu->nr_caps = 0;
+
 	snprintf(caps_path, PATH_MAX,
 		 "%s" EVENT_SOURCE_DEVICE_PATH "%s/caps", sysfs, pmu->name);
 
-	if (stat(caps_path, &st) < 0)
+	if (stat(caps_path, &st) < 0) {
+		pmu->caps_initialized = true;
 		return 0;	/* no error if caps does not exist */
+	}
 
 	caps_dir = opendir(caps_path);
 	if (!caps_dir)
@@ -1926,13 +1932,14 @@ int perf_pmu__caps_parse(struct perf_pmu *pmu)
 			continue;
 		}
 
-		nr_caps++;
+		pmu->nr_caps++;
 		fclose(file);
 	}
 
 	closedir(caps_dir);
 
-	return nr_caps;
+	pmu->caps_initialized = true;
+	return pmu->nr_caps;
 }
 
 void perf_pmu__warn_invalid_config(struct perf_pmu *pmu, __u64 config,
diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
index 541889fa9f9c..593005e68bea 100644
--- a/tools/perf/util/pmu.h
+++ b/tools/perf/util/pmu.h
@@ -46,6 +46,8 @@ struct perf_pmu {
 	struct perf_cpu_map *cpus;
 	struct list_head format;  /* HEAD struct perf_pmu_format -> list */
 	struct list_head aliases; /* HEAD struct perf_pmu_alias -> list */
+	bool caps_initialized;
+	u16 nr_caps;
 	struct list_head caps;    /* HEAD struct perf_pmu_caps -> list */
 	struct list_head list;    /* ELEM */
 	struct list_head hybrid_list;
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 5/6] perf/tool/amd/ibs: Support new IBS bits in raw trace dump
  2022-04-25  4:43 [PATCH 0/6] perf/amd: Zen4 IBS extensions support Ravi Bangoria
                   ` (3 preceding siblings ...)
  2022-04-25  4:43 ` [PATCH 4/6] perf/tool: Parse non-cpu pmu capabilities Ravi Bangoria
@ 2022-04-25  4:43 ` Ravi Bangoria
  2022-04-26 11:27   ` Robert Richter
  2022-04-25  4:43 ` [PATCH 6/6] perf/tool/amd/ibs: Fix comment Ravi Bangoria
  2022-04-25 20:32 ` [PATCH 0/6] perf/amd: Zen4 IBS extensions support Peter Zijlstra
  6 siblings, 1 reply; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-25  4:43 UTC (permalink / raw)
  To: peterz, acme
  Cc: ravi.bangoria, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, rrichter, santosh.shukla

IBS support has been enhanced with two new features in the upcoming
uarch: 1. DataSrc extension and 2. L3 miss filtering. An additional
set of bits has been introduced in the IBS registers to exploit these
features. Interpret those bits while doing a perf report/script raw
dump.

IBS op pmu ex:
  $ sudo ./perf record -c 130 -a -e ibs_op/l3missonly=1/ --raw-samples
  $ sudo ./perf report -D
  ...
  ibs_op_ctl:     0000004500070008 MaxCnt       128 L3MissOnly 1 En 1
        Val 1 CntCtl 0=cycles CurCnt        69
  ibs_op_data:    0000000000710002 CompToRetCtr     2 TagToRetCtr   113
        BrnRet 0  RipInvalid 0 BrnFuse 0 Microcode 0
  ibs_op_data2:   0000000000000002 CacheHitSt 0=M-state RmtNode 0
        DataSrc 2=A peer cache in a near CCX
  ibs_op_data3:   000000681d1700a1 LdOp 1 StOp 0 DcL1TlbMiss 0
        DcL2TlbMiss 0 DcL1TlbHit2M 0 DcL1TlbHit1G 1 DcL2TlbHit2M 0
        DcMiss 1 DcMisAcc 0 DcWcMemAcc 0 DcUcMemAcc 0 DcLockedOp 0
        DcMissNoMabAlloc 1 DcLinAddrValid 1 DcPhyAddrValid 1
        DcL2TlbHit1G 0 L2Miss 1 SwPf 0 OpMemWidth  8 bytes
        OpDcMissOpenMemReqs  7 DcMissLat   104 TlbRefillLat     0

IBS Fetch pmu ex:
  $ sudo ./perf record -c 130 -a -e ibs_fetch/l3missonly=1/ --raw-samples
  $ sudo ./perf report -D
  ...
  ibs_fetch_ctl:  3c1f00c700080008 MaxCnt     128 Cnt     128 Lat   199
        En 1 Val 1 Comp 1 IcMiss 1 PhyAddrValid        1 L1TlbPgSz 4KB
        L1TlbMiss 0 L2TlbMiss 0 RandEn 0 L2Miss 1 L3MissOnly 1
        FetchOcMiss 1 FetchL3Miss 1

With the DataSrc extensions, the source of data can be decoded as one
of the following (a decoding sketch follows the list):
 - Local L3 or other L1/L2 in CCX.
 - A peer cache in a near CCX.
 - Data returned from DRAM.
 - A peer cache in a far CCX.
 - DRAM address map with "long latency" bit set.
 - Data returned from MMIO/Config/PCI/APIC.
 - Extension Memory (S-Link, GenZ, etc - identified by the CS target
    and/or address map at DF's choice).
 - Peer Agent Memory.
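
The decoding sketch, mirroring what this patch adds in
pr_ibs_op_data2_extended(): the 5-bit source index is split across the
data_src_lo and data_src_hi fields of ibs_op_data2:

  int data_src = (reg.data_src_hi << 3) | reg.data_src_lo;
  /* e.g. ibs_op_data2 = 0x2 above decodes to DataSrc 2 = "A peer cache in a near CCX" */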

Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
---
 arch/x86/include/asm/amd-ibs.h       | 16 ++++---
 tools/arch/x86/include/asm/amd-ibs.h | 16 ++++---
 tools/perf/util/amd-sample-raw.c     | 68 ++++++++++++++++++++++++----
 3 files changed, 80 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/amd-ibs.h b/arch/x86/include/asm/amd-ibs.h
index 46e1df45efc0..f55c299554a8 100644
--- a/arch/x86/include/asm/amd-ibs.h
+++ b/arch/x86/include/asm/amd-ibs.h
@@ -29,7 +29,10 @@ union ibs_fetch_ctl {
 			rand_en:1,	/* 57: random tagging enable */
 			fetch_l2_miss:1,/* 58: L2 miss for sampled fetch
 					 *      (needs IbsFetchComp) */
-			reserved:5;	/* 59-63: reserved */
+			l3_miss_only:1,	/* 59: Collect L3 miss samples only */
+			fetch_oc_miss:1,/* 60: Op cache miss for the sampled fetch */
+			fetch_l3_miss:1,/* 61: L3 cache miss for the sampled fetch */
+			reserved:2;	/* 62-63: reserved */
 	};
 };
 
@@ -38,14 +41,14 @@ union ibs_op_ctl {
 	__u64 val;
 	struct {
 		__u64	opmaxcnt:16,	/* 0-15: periodic op max. count */
-			reserved0:1,	/* 16: reserved */
+			l3_miss_only:1,	/* 16: Collect L3 miss samples only */
 			op_en:1,	/* 17: op sampling enable */
 			op_val:1,	/* 18: op sample valid */
 			cnt_ctl:1,	/* 19: periodic op counter control */
 			opmaxcnt_ext:7,	/* 20-26: upper 7 bits of periodic op maximum count */
-			reserved1:5,	/* 27-31: reserved */
+			reserved0:5,	/* 27-31: reserved */
 			opcurcnt:27,	/* 32-58: periodic op counter current count */
-			reserved2:5;	/* 59-63: reserved */
+			reserved1:5;	/* 59-63: reserved */
 	};
 };
 
@@ -71,11 +74,12 @@ union ibs_op_data {
 union ibs_op_data2 {
 	__u64 val;
 	struct {
-		__u64	data_src:3,	/* 0-2: data source */
+		__u64	data_src_lo:3,	/* 0-2: data source low */
 			reserved0:1,	/* 3: reserved */
 			rmt_node:1,	/* 4: destination node */
 			cache_hit_st:1,	/* 5: cache hit state */
-			reserved1:57;	/* 5-63: reserved */
+			data_src_hi:2,	/* 6-7: data source high */
+			reserved1:56;	/* 8-63: reserved */
 	};
 };
 
diff --git a/tools/arch/x86/include/asm/amd-ibs.h b/tools/arch/x86/include/asm/amd-ibs.h
index 174e7d83fcbd..94b75721c3c8 100644
--- a/tools/arch/x86/include/asm/amd-ibs.h
+++ b/tools/arch/x86/include/asm/amd-ibs.h
@@ -29,7 +29,10 @@ union ibs_fetch_ctl {
 			rand_en:1,	/* 57: random tagging enable */
 			fetch_l2_miss:1,/* 58: L2 miss for sampled fetch
 					 *      (needs IbsFetchComp) */
-			reserved:5;	/* 59-63: reserved */
+			l3_miss_only:1,	/* 59: Collect L3 miss samples only */
+			fetch_oc_miss:1,/* 60: Op cache miss for the sampled fetch */
+			fetch_l3_miss:1,/* 61: L3 cache miss for the sampled fetch */
+			reserved:2;	/* 62-63: reserved */
 	};
 };
 
@@ -38,14 +41,14 @@ union ibs_op_ctl {
 	__u64 val;
 	struct {
 		__u64	opmaxcnt:16,	/* 0-15: periodic op max. count */
-			reserved0:1,	/* 16: reserved */
+			l3_miss_only:1,	/* 16: Collect L3 miss samples only */
 			op_en:1,	/* 17: op sampling enable */
 			op_val:1,	/* 18: op sample valid */
 			cnt_ctl:1,	/* 19: periodic op counter control */
 			opmaxcnt_ext:7,	/* 20-26: upper 7 bits of periodic op maximum count */
-			reserved1:5,	/* 27-31: reserved */
+			reserved0:5,	/* 27-31: reserved */
 			opcurcnt:27,	/* 32-58: periodic op counter current count */
-			reserved2:5;	/* 59-63: reserved */
+			reserved1:5;	/* 59-63: reserved */
 	};
 };
 
@@ -71,11 +74,12 @@ union ibs_op_data {
 union ibs_op_data2 {
 	__u64 val;
 	struct {
-		__u64	data_src:3,	/* 0-2: data source */
+		__u64	data_src_lo:3,	/* 0-2: data source low */
 			reserved0:1,	/* 3: reserved */
 			rmt_node:1,	/* 4: destination node */
 			cache_hit_st:1,	/* 5: cache hit state */
-			reserved1:57;	/* 5-63: reserved */
+			data_src_hi:2,	/* 6-7: data source high */
+			reserved1:56;	/* 8-63: reserved */
 	};
 };
 
diff --git a/tools/perf/util/amd-sample-raw.c b/tools/perf/util/amd-sample-raw.c
index d19d765195c5..8635385b5b34 100644
--- a/tools/perf/util/amd-sample-raw.c
+++ b/tools/perf/util/amd-sample-raw.c
@@ -18,6 +18,7 @@
 #include "pmu-events/pmu-events.h"
 
 static u32 cpu_family, cpu_model, ibs_fetch_type, ibs_op_type;
+static bool zen4_ibs_extensions;
 
 static void pr_ibs_fetch_ctl(union ibs_fetch_ctl reg)
 {
@@ -39,6 +40,7 @@ static void pr_ibs_fetch_ctl(union ibs_fetch_ctl reg)
 	};
 	const char *ic_miss_str = NULL;
 	const char *l1tlb_pgsz_str = NULL;
+	char l3_miss_str[sizeof(" L3MissOnly _ FetchOcMiss _ FetchL3Miss _")] = "";
 
 	if (cpu_family == 0x19 && cpu_model < 0x10) {
 		/*
@@ -53,12 +55,19 @@ static void pr_ibs_fetch_ctl(union ibs_fetch_ctl reg)
 		ic_miss_str = ic_miss_strs[reg.ic_miss];
 	}
 
+	if (zen4_ibs_extensions) {
+		snprintf(l3_miss_str, sizeof(l3_miss_str),
+			 " L3MissOnly %d FetchOcMiss %d FetchL3Miss %d",
+			 reg.l3_miss_only, reg.fetch_oc_miss, reg.fetch_l3_miss);
+	}
+
 	printf("ibs_fetch_ctl:\t%016llx MaxCnt %7d Cnt %7d Lat %5d En %d Val %d Comp %d%s "
-	       "PhyAddrValid %d%s L1TlbMiss %d L2TlbMiss %d RandEn %d%s\n",
+		"PhyAddrValid %d%s L1TlbMiss %d L2TlbMiss %d RandEn %d%s%s\n",
 		reg.val, reg.fetch_maxcnt << 4, reg.fetch_cnt << 4, reg.fetch_lat,
 		reg.fetch_en, reg.fetch_val, reg.fetch_comp, ic_miss_str ? : "",
 		reg.phy_addr_valid, l1tlb_pgsz_str ? : "", reg.l1tlb_miss, reg.l2tlb_miss,
-		reg.rand_en, reg.fetch_comp ? (reg.fetch_l2_miss ? " L2Miss 1" : " L2Miss 0") : "");
+		reg.rand_en, reg.fetch_comp ? (reg.fetch_l2_miss ? " L2Miss 1" : " L2Miss 0") : "",
+		l3_miss_str);
 }
 
 static void pr_ic_ibs_extd_ctl(union ic_ibs_extd_ctl reg)
@@ -68,9 +77,15 @@ static void pr_ic_ibs_extd_ctl(union ic_ibs_extd_ctl reg)
 
 static void pr_ibs_op_ctl(union ibs_op_ctl reg)
 {
-	printf("ibs_op_ctl:\t%016llx MaxCnt %9d En %d Val %d CntCtl %d=%s CurCnt %9d\n",
-	       reg.val, ((reg.opmaxcnt_ext << 16) | reg.opmaxcnt) << 4, reg.op_en, reg.op_val,
-	       reg.cnt_ctl, reg.cnt_ctl ? "uOps" : "cycles", reg.opcurcnt);
+	char l3_miss_only[sizeof(" L3MissOnly _")] = "";
+
+	if (zen4_ibs_extensions)
+		snprintf(l3_miss_only, sizeof(l3_miss_only), " L3MissOnly %d", reg.l3_miss_only);
+
+	printf("ibs_op_ctl:\t%016llx MaxCnt %9d%s En %d Val %d CntCtl %d=%s CurCnt %9d\n",
+		reg.val, ((reg.opmaxcnt_ext << 16) | reg.opmaxcnt) << 4, l3_miss_only,
+		reg.op_en, reg.op_val, reg.cnt_ctl,
+		reg.cnt_ctl ? "uOps" : "cycles", reg.opcurcnt);
 }
 
 static void pr_ibs_op_data(union ibs_op_data reg)
@@ -84,7 +99,34 @@ static void pr_ibs_op_data(union ibs_op_data reg)
 		reg.op_brn_ret, reg.op_rip_invalid, reg.op_brn_fuse, reg.op_microcode);
 }
 
-static void pr_ibs_op_data2(union ibs_op_data2 reg)
+static void pr_ibs_op_data2_extended(union ibs_op_data2 reg)
+{
+	static const char * const data_src_str[] = {
+		"",
+		" DataSrc 1=Local L3 or other L1/L2 in CCX",
+		" DataSrc 2=A peer cache in a near CCX",
+		" DataSrc 3=Data returned from DRAM",
+		" DataSrc 4=(reserved)",
+		" DataSrc 5=A peer cache in a far CCX",
+		" DataSrc 6=DRAM address map with \"long latency\" bit set",
+		" DataSrc 7=Data returned from MMIO/Config/PCI/APIC",
+		" DataSrc 8=Extension Memory (S-Link, GenZ, etc)",
+		" DataSrc 9=(reserved)",
+		" DataSrc 10=(reserved)",
+		" DataSrc 11=(reserved)",
+		" DataSrc 12=Peer Agent Memory",
+		/* 13 to 31 are reserved. Avoid printing them. */
+	};
+	int data_src = (reg.data_src_hi << 3) | reg.data_src_lo;
+
+	printf("ibs_op_data2:\t%016llx %sRmtNode %d%s\n", reg.val,
+		(data_src == 1 || data_src == 2 || data_src == 5) ?
+			(reg.cache_hit_st ? "CacheHitSt 1=O-State " : "CacheHitSt 0=M-state ") : "",
+		reg.rmt_node,
+		data_src < (int)ARRAY_SIZE(data_src_str) ? data_src_str[data_src] : "");
+}
+
+static void pr_ibs_op_data2_default(union ibs_op_data2 reg)
 {
 	static const char * const data_src_str[] = {
 		"",
@@ -98,9 +140,16 @@ static void pr_ibs_op_data2(union ibs_op_data2 reg)
 	};
 
 	printf("ibs_op_data2:\t%016llx %sRmtNode %d%s\n", reg.val,
-	       reg.data_src == 2 ? (reg.cache_hit_st ? "CacheHitSt 1=O-State "
+	       reg.data_src_lo == 2 ? (reg.cache_hit_st ? "CacheHitSt 1=O-State "
 						     : "CacheHitSt 0=M-state ") : "",
-	       reg.rmt_node, data_src_str[reg.data_src]);
+	       reg.rmt_node, data_src_str[reg.data_src_lo]);
+}
+
+static void pr_ibs_op_data2(union ibs_op_data2 reg)
+{
+	if (zen4_ibs_extensions)
+		return pr_ibs_op_data2_extended(reg);
+	pr_ibs_op_data2_default(reg);
 }
 
 static void pr_ibs_op_data3(union ibs_op_data3 reg)
@@ -279,6 +328,9 @@ bool evlist__has_amd_ibs(struct evlist *evlist)
 		pmu_mapping += strlen(pmu_mapping) + 1 /* '\0' */;
 	}
 
+	if (perf_env__find_pmu_cap(env, 0, "ibs_op", "zen4_ibs_extensions"))
+		zen4_ibs_extensions = 1;
+
 	if (ibs_fetch_type || ibs_op_type) {
 		if (!cpu_family)
 			parse_cpuid(env);
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 6/6] perf/tool/amd/ibs: Fix comment
  2022-04-25  4:43 [PATCH 0/6] perf/amd: Zen4 IBS extensions support Ravi Bangoria
                   ` (4 preceding siblings ...)
  2022-04-25  4:43 ` [PATCH 5/6] perf/tool/amd/ibs: Support new IBS bits in raw trace dump Ravi Bangoria
@ 2022-04-25  4:43 ` Ravi Bangoria
  2022-04-26 11:27   ` Robert Richter
  2022-04-25 20:32 ` [PATCH 0/6] perf/amd: Zen4 IBS extensions support Peter Zijlstra
  6 siblings, 1 reply; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-25  4:43 UTC (permalink / raw)
  To: peterz, acme
  Cc: ravi.bangoria, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, rrichter, santosh.shukla

s/IBS Op Data 2/IBS Op Data 1/ for MSR 0xc0011035.

Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
---
 arch/x86/include/asm/amd-ibs.h       | 2 +-
 tools/arch/x86/include/asm/amd-ibs.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/amd-ibs.h b/arch/x86/include/asm/amd-ibs.h
index f55c299554a8..1a5dd23cf99d 100644
--- a/arch/x86/include/asm/amd-ibs.h
+++ b/arch/x86/include/asm/amd-ibs.h
@@ -52,7 +52,7 @@ union ibs_op_ctl {
 	};
 };
 
-/* MSR 0xc0011035: IBS Op Data 2 */
+/* MSR 0xc0011035: IBS Op Data 1 */
 union ibs_op_data {
 	__u64 val;
 	struct {
diff --git a/tools/arch/x86/include/asm/amd-ibs.h b/tools/arch/x86/include/asm/amd-ibs.h
index 94b75721c3c8..0780c848fa6a 100644
--- a/tools/arch/x86/include/asm/amd-ibs.h
+++ b/tools/arch/x86/include/asm/amd-ibs.h
@@ -52,7 +52,7 @@ union ibs_op_ctl {
 	};
 };
 
-/* MSR 0xc0011035: IBS Op Data 2 */
+/* MSR 0xc0011035: IBS Op Data 1 */
 union ibs_op_data {
 	__u64 val;
 	struct {
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH 0/6] perf/amd: Zen4 IBS extensions support
  2022-04-25  4:43 [PATCH 0/6] perf/amd: Zen4 IBS extensions support Ravi Bangoria
                   ` (5 preceding siblings ...)
  2022-04-25  4:43 ` [PATCH 6/6] perf/tool/amd/ibs: Fix comment Ravi Bangoria
@ 2022-04-25 20:32 ` Peter Zijlstra
  2022-04-26  7:00   ` Ravi Bangoria
  6 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2022-04-25 20:32 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp, irogers,
	yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, rrichter, santosh.shukla

On Mon, Apr 25, 2022 at 10:13:17AM +0530, Ravi Bangoria wrote:
> IBS support has been enhanced with two new features in upcoming uarch:
> 1. DataSrc extension and 2. L3 Miss Filtering capability. Both are
> indicated by CPUID_Fn8000001B_EAX bit 11.

Hi Ravi, could you perhaps also look at fixing this existing IBS
problem?

  https://lkml.kernel.org/r/YlVPpVC8chepOdzJ@hirez.programming.kicks-ass.net

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 0/6] perf/amd: Zen4 IBS extensions support
  2022-04-25 20:32 ` [PATCH 0/6] perf/amd: Zen4 IBS extensions support Peter Zijlstra
@ 2022-04-26  7:00   ` Ravi Bangoria
  0 siblings, 0 replies; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-26  7:00 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp, irogers,
	yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, rrichter, santosh.shukla,
	Ravi Bangoria



On 26-Apr-22 2:02 AM, Peter Zijlstra wrote:
> On Mon, Apr 25, 2022 at 10:13:17AM +0530, Ravi Bangoria wrote:
>> IBS support has been enhanced with two new features in upcoming uarch:
>> 1. DataSrc extension and 2. L3 Miss Filtering capability. Both are
>> indicated by CPUID_Fn8000001B_EAX bit 11.
> 
> Hi Ravi, could you perhaps also look at fixing this existing IBS
> problem?
> 
>   https://lkml.kernel.org/r/YlVPpVC8chepOdzJ@hirez.programming.kicks-ass.net

Sure. Will do.

Thanks.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 1/6] perf/amd/ibs: Add support for L3 miss filtering
  2022-04-25  4:43 ` [PATCH 1/6] perf/amd/ibs: Add support for L3 miss filtering Ravi Bangoria
@ 2022-04-26  9:18   ` Robert Richter
  2022-04-26 11:25     ` Ravi Bangoria
  2022-04-26 10:07   ` Peter Zijlstra
  1 sibling, 1 reply; 22+ messages in thread
From: Robert Richter @ 2022-04-26  9:18 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: peterz, acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, santosh.shukla

On 25.04.22 10:13:18, Ravi Bangoria wrote:
> IBS L3 miss filtering works by tagging an instruction on IBS counter
> overflow and generating an NMI if the tagged instruction causes an L3
> miss. Samples without an L3 miss are discarded and counter is reset
> with random value (between 1-15 for fetch pmu and 1-127 for op pmu).
> This helps in reducing sampling overhead when user is interested only
> in such samples. One of the use case of such filtered samples is to
> feed data to page-migration daemon in tiered memory systems.
> 
> Add support for L3 miss filtering in IBS driver via new pmu attribute
> "l3missonly". Example usage:
> 
>   # perf record -a -e ibs_op/l3missonly=1/ --raw-samples sleep 5
> 
> Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
> ---
>  arch/x86/events/amd/ibs.c         | 42 ++++++++++++++++++++++---------
>  arch/x86/include/asm/perf_event.h |  3 +++
>  2 files changed, 33 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
> index 9739019d4b67..a5303d62060c 100644
> --- a/arch/x86/events/amd/ibs.c
> +++ b/arch/x86/events/amd/ibs.c
> @@ -520,16 +520,12 @@ static void perf_ibs_read(struct perf_event *event) { }
>  
>  PMU_FORMAT_ATTR(rand_en,	"config:57");
>  PMU_FORMAT_ATTR(cnt_ctl,	"config:19");
> +PMU_EVENT_ATTR_STRING(l3missonly, fetch_l3missonly, "config:59");
> +PMU_EVENT_ATTR_STRING(l3missonly, op_l3missonly, "config:16");
>  
> -static struct attribute *ibs_fetch_format_attrs[] = {
> -	&format_attr_rand_en.attr,
> -	NULL,
> -};
> -
> -static struct attribute *ibs_op_format_attrs[] = {
> -	NULL,	/* &format_attr_cnt_ctl.attr if IBS_CAPS_OPCNT */
> -	NULL,
> -};
> +/* size = nr attrs plus NULL at the end */
> +static struct attribute *ibs_fetch_format_attrs[3];
> +static struct attribute *ibs_op_format_attrs[3];

Define a macro for the array size.
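
Something along these lines, perhaps (the macro name below is only an
illustration):

	/* nr attrs plus the terminating NULL */
	#define IBS_MAX_FORMAT_ATTRS	3
	static struct attribute *ibs_fetch_format_attrs[IBS_MAX_FORMAT_ATTRS];
	static struct attribute *ibs_op_format_attrs[IBS_MAX_FORMAT_ATTRS];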

>  
>  static struct perf_ibs perf_ibs_fetch = {
>  	.pmu = {
> @@ -759,9 +755,9 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
>  	return ret;
>  }
>  
> -static __init void perf_event_ibs_init(void)
> +static __init void perf_ibs_fetch_prepare(void)

Since this actually initializes the pmu, let's call that
perf_ibs_fetch_init().

For low level init functions it would be good to keep track of the
return code even if it is later not evaluated. So these kinds of
functions should return an error code.

>  {
> -	struct attribute **attr = ibs_op_format_attrs;
> +	struct attribute **format_attrs = perf_ibs_fetch.format_attrs;

I think we could keep this short here with 'attr'.

>  
>  	/*
>  	 * Some chips fail to reset the fetch count when it is written; instead
> @@ -773,11 +769,22 @@ static __init void perf_event_ibs_init(void)
>  	if (boot_cpu_data.x86 == 0x19 && boot_cpu_data.x86_model < 0x10)
>  		perf_ibs_fetch.fetch_ignore_if_zero_rip = 1;
>  
> +	*format_attrs++ = &format_attr_rand_en.attr;
> +	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
> +		perf_ibs_fetch.config_mask |= IBS_FETCH_L3MISSONLY;
> +		*format_attrs++ = &fetch_l3missonly.attr.attr;
> +	}

You should also write the terminating NULL pointer here, even though
the memory is pre-initialized to zero.
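
I.e. something like (illustrative, reusing the lines above):

	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
		perf_ibs_fetch.config_mask |= IBS_FETCH_L3MISSONLY;
		*format_attrs++ = &fetch_l3missonly.attr.attr;
	}
	*format_attrs = NULL;	/* explicit terminator */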

> +
>  	perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
> +}
> +
> +static __init void perf_ibs_op_prepare(void)
> +{
> +	struct attribute **format_attrs = perf_ibs_op.format_attrs;
>  
>  	if (ibs_caps & IBS_CAPS_OPCNT) {
>  		perf_ibs_op.config_mask |= IBS_OP_CNT_CTL;
> -		*attr++ = &format_attr_cnt_ctl.attr;
> +		*format_attrs++ = &format_attr_cnt_ctl.attr;
>  	}
>  
>  	if (ibs_caps & IBS_CAPS_OPCNTEXT) {
> @@ -786,7 +793,18 @@ static __init void perf_event_ibs_init(void)
>  		perf_ibs_op.cnt_mask    |= IBS_OP_MAX_CNT_EXT_MASK;
>  	}
>  
> +	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
> +		perf_ibs_op.config_mask |= IBS_OP_L3MISSONLY;
> +		*format_attrs++ = &op_l3missonly.attr.attr;
> +	}
> +
>  	perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
> +}

Same for this function: *_init(), error code, attrs, terminating NULL.

> +
> +static __init void perf_event_ibs_init(void)
> +{
> +	perf_ibs_fetch_prepare();
> +	perf_ibs_op_prepare();
>  
>  	register_nmi_handler(NMI_LOCAL, perf_ibs_nmi_handler, 0, "perf_ibs");
>  	pr_info("perf: AMD IBS detected (0x%08x)\n", ibs_caps);

The function is now small enough to be squashed into amd_ibs_init().

-Robert

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2/6] perf/amd/ibs: Advertise zen4_ibs_extensions as pmu capability attribute
  2022-04-25  4:43 ` [PATCH 2/6] perf/amd/ibs: Advertise zen4_ibs_extensions as pmu capability attribute Ravi Bangoria
@ 2022-04-26  9:57   ` Robert Richter
  2022-04-26 11:40     ` Ravi Bangoria
  0 siblings, 1 reply; 22+ messages in thread
From: Robert Richter @ 2022-04-26  9:57 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: peterz, acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, santosh.shukla

On 25.04.22 10:13:19, Ravi Bangoria wrote:
> PMU driver can advertise certain feature via capability attribute('caps'
> sysfs directory) which can be consumed by userspace tools like perf. Add
> zen4_ibs_extensions capability attribute for IBS pmus. This attribute
> will be enabled when CPUID_Fn8000001B_EAX[11] is set.
> 
> Without patch:
> 
>   $ ls /sys/bus/event_source/devices/ibs_op/caps
>   ls: cannot access '/sys/.../ibs_op/caps': No such file or directory
> 
> With patch:
> 
>   $ ls /sys/bus/event_source/devices/ibs_op/caps
>   zen4_ibs_extensions
> 
> Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
> ---
>  arch/x86/events/amd/ibs.c | 34 ++++++++++++++++++++++++++++------
>  1 file changed, 28 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
> index a5303d62060c..54e12bd7843e 100644
> --- a/arch/x86/events/amd/ibs.c
> +++ b/arch/x86/events/amd/ibs.c
> @@ -95,8 +95,10 @@ struct perf_ibs {
>  	struct cpu_perf_ibs __percpu	*pcpu;
>  
>  	struct attribute		**format_attrs;
> +	struct attribute		**caps_attrs;
>  	struct attribute_group		format_group;
> -	const struct attribute_group	*attr_groups[2];
> +	struct attribute_group		caps_group;
> +	const struct attribute_group	*attr_groups[3];

Also add a macro for max groups.

>  
>  	u64				(*get_count)(u64 config);
>  };
> @@ -522,10 +524,13 @@ PMU_FORMAT_ATTR(rand_en,	"config:57");
>  PMU_FORMAT_ATTR(cnt_ctl,	"config:19");
>  PMU_EVENT_ATTR_STRING(l3missonly, fetch_l3missonly, "config:59");
>  PMU_EVENT_ATTR_STRING(l3missonly, op_l3missonly, "config:16");
> +PMU_EVENT_ATTR_STRING(zen4_ibs_extensions, zen4_ibs_extensions, "1");
>  
>  /* size = nr attrs plus NULL at the end */
>  static struct attribute *ibs_fetch_format_attrs[3];
>  static struct attribute *ibs_op_format_attrs[3];
> +static struct attribute *ibs_fetch_caps_attrs[2];
> +static struct attribute *ibs_op_caps_attrs[2];
>  
>  static struct perf_ibs perf_ibs_fetch = {
>  	.pmu = {
> @@ -548,6 +553,7 @@ static struct perf_ibs perf_ibs_fetch = {
>  	.offset_mask		= { MSR_AMD64_IBSFETCH_REG_MASK },
>  	.offset_max		= MSR_AMD64_IBSFETCH_REG_COUNT,
>  	.format_attrs		= ibs_fetch_format_attrs,
> +	.caps_attrs		= ibs_fetch_caps_attrs,
>  
>  	.get_count		= get_ibs_fetch_count,
>  };
> @@ -574,6 +580,7 @@ static struct perf_ibs perf_ibs_op = {
>  	.offset_mask		= { MSR_AMD64_IBSOP_REG_MASK },
>  	.offset_max		= MSR_AMD64_IBSOP_REG_COUNT,
>  	.format_attrs		= ibs_op_format_attrs,
> +	.caps_attrs		= ibs_op_caps_attrs,
>  
>  	.get_count		= get_ibs_op_count,
>  };
> @@ -728,6 +735,7 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
>  {
>  	struct cpu_perf_ibs __percpu *pcpu;
>  	int ret;
> +	int i = 0;

'group'? Use a pointer to iterate over the groups here, same as for the attrs?

	const struct attribute_group **group = perf_ibs->attr_groups;

>  
>  	pcpu = alloc_percpu(struct cpu_perf_ibs);
>  	if (!pcpu)
> @@ -736,16 +744,26 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
>  	perf_ibs->pcpu = pcpu;
>  
>  	/* register attributes */
> +	memset(&perf_ibs->attr_groups, 0, sizeof(perf_ibs->attr_groups));

With a termination below this could be dropped.

>  	if (perf_ibs->format_attrs[0]) {
>  		memset(&perf_ibs->format_group, 0, sizeof(perf_ibs->format_group));
> -		perf_ibs->format_group.name	= "format";
> -		perf_ibs->format_group.attrs	= perf_ibs->format_attrs;
> +		perf_ibs->format_group.name = "format";
> +		perf_ibs->format_group.attrs = perf_ibs->format_attrs;
>  
> -		memset(&perf_ibs->attr_groups, 0, sizeof(perf_ibs->attr_groups));
> -		perf_ibs->attr_groups[0]	= &perf_ibs->format_group;
> -		perf_ibs->pmu.attr_groups	= perf_ibs->attr_groups;
> +		perf_ibs->attr_groups[i++] = &perf_ibs->format_group;

So this could be:

		*group++ = &perf_ibs->format_group;

>  	}
>  
> +	if (perf_ibs->caps_attrs[0]) {
> +		memset(&perf_ibs->caps_group, 0, sizeof(perf_ibs->caps_group));
> +		perf_ibs->caps_group.name = "caps";
> +		perf_ibs->caps_group.attrs = perf_ibs->caps_attrs;
> +
> +		perf_ibs->attr_groups[i++] = &perf_ibs->caps_group;

Similar here.

> +	}

Add a terminating NULL pointer for groups here.

> +
> +	if (i)

	if (perf_ibs->attr_groups[0])
		...
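
Putting the above together, the attribute/group setup could then read roughly
like this (sketch only):

	const struct attribute_group **group = perf_ibs->attr_groups;

	if (perf_ibs->format_attrs[0]) {
		perf_ibs->format_group.name  = "format";
		perf_ibs->format_group.attrs = perf_ibs->format_attrs;
		*group++ = &perf_ibs->format_group;
	}

	if (perf_ibs->caps_attrs[0]) {
		perf_ibs->caps_group.name  = "caps";
		perf_ibs->caps_group.attrs = perf_ibs->caps_attrs;
		*group++ = &perf_ibs->caps_group;
	}

	*group = NULL;	/* terminate the group list */

	if (perf_ibs->attr_groups[0])
		perf_ibs->pmu.attr_groups = perf_ibs->attr_groups;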

> +		perf_ibs->pmu.attr_groups = perf_ibs->attr_groups;
> +
>  	ret = perf_pmu_register(&perf_ibs->pmu, name, -1);
>  	if (ret) {
>  		perf_ibs->pcpu = NULL;
> @@ -758,6 +776,7 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
>  static __init void perf_ibs_fetch_prepare(void)
>  {
>  	struct attribute **format_attrs = perf_ibs_fetch.format_attrs;
> +	struct attribute **caps_attr = perf_ibs_fetch.caps_attrs;

Now I see why it was renamed, ok.

>  
>  	/*
>  	 * Some chips fail to reset the fetch count when it is written; instead
> @@ -773,6 +792,7 @@ static __init void perf_ibs_fetch_prepare(void)
>  	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
>  		perf_ibs_fetch.config_mask |= IBS_FETCH_L3MISSONLY;
>  		*format_attrs++ = &fetch_l3missonly.attr.attr;
> +		*caps_attr++ = &zen4_ibs_extensions.attr.attr;
>  	}
>  
>  	perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
> @@ -781,6 +801,7 @@ static __init void perf_ibs_fetch_prepare(void)
>  static __init void perf_ibs_op_prepare(void)
>  {
>  	struct attribute **format_attrs = perf_ibs_op.format_attrs;
> +	struct attribute **caps_attr = perf_ibs_op.caps_attrs;
>  
>  	if (ibs_caps & IBS_CAPS_OPCNT) {
>  		perf_ibs_op.config_mask |= IBS_OP_CNT_CTL;
> @@ -796,6 +817,7 @@ static __init void perf_ibs_op_prepare(void)
>  	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
>  		perf_ibs_op.config_mask |= IBS_OP_L3MISSONLY;
>  		*format_attrs++ = &op_l3missonly.attr.attr;
> +		*caps_attr++ = &zen4_ibs_extensions.attr.attr;
>  	}

Also add a termination for caps here.

-Robert

>  
>  	perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
> -- 
> 2.27.0
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 1/6] perf/amd/ibs: Add support for L3 miss filtering
  2022-04-25  4:43 ` [PATCH 1/6] perf/amd/ibs: Add support for L3 miss filtering Ravi Bangoria
  2022-04-26  9:18   ` Robert Richter
@ 2022-04-26 10:07   ` Peter Zijlstra
  2022-04-26 11:30     ` Ravi Bangoria
  1 sibling, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2022-04-26 10:07 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp, irogers,
	yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, rrichter, santosh.shukla

On Mon, Apr 25, 2022 at 10:13:18AM +0530, Ravi Bangoria wrote:
> IBS L3 miss filtering works by tagging an instruction on IBS counter
> overflow and generating an NMI if the tagged instruction causes an L3
> miss. Samples without an L3 miss are discarded and counter is reset
> with random value (between 1-15 for fetch pmu and 1-127 for op pmu).
> This helps in reducing sampling overhead when user is interested only
> in such samples. One of the use case of such filtered samples is to
> feed data to page-migration daemon in tiered memory systems.
> 
> Add support for L3 miss filtering in IBS driver via new pmu attribute
> "l3missonly". Example usage:
> 
>   # perf record -a -e ibs_op/l3missonly=1/ --raw-samples sleep 5
> 
> Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
> ---
>  arch/x86/events/amd/ibs.c         | 42 ++++++++++++++++++++++---------
>  arch/x86/include/asm/perf_event.h |  3 +++
>  2 files changed, 33 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
> index 9739019d4b67..a5303d62060c 100644
> --- a/arch/x86/events/amd/ibs.c
> +++ b/arch/x86/events/amd/ibs.c
> @@ -520,16 +520,12 @@ static void perf_ibs_read(struct perf_event *event) { }
>  
>  PMU_FORMAT_ATTR(rand_en,	"config:57");
>  PMU_FORMAT_ATTR(cnt_ctl,	"config:19");
> +PMU_EVENT_ATTR_STRING(l3missonly, fetch_l3missonly, "config:59");
> +PMU_EVENT_ATTR_STRING(l3missonly, op_l3missonly, "config:16");
>  
> -static struct attribute *ibs_fetch_format_attrs[] = {
> -	&format_attr_rand_en.attr,
> -	NULL,
> -};
> -
> -static struct attribute *ibs_op_format_attrs[] = {
> -	NULL,	/* &format_attr_cnt_ctl.attr if IBS_CAPS_OPCNT */
> -	NULL,
> -};
> +/* size = nr attrs plus NULL at the end */
> +static struct attribute *ibs_fetch_format_attrs[3];
> +static struct attribute *ibs_op_format_attrs[3];
>  
>  static struct perf_ibs perf_ibs_fetch = {
>  	.pmu = {
> @@ -759,9 +755,9 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
>  	return ret;
>  }
>  
> -static __init void perf_event_ibs_init(void)
> +static __init void perf_ibs_fetch_prepare(void)
>  {
> -	struct attribute **attr = ibs_op_format_attrs;
> +	struct attribute **format_attrs = perf_ibs_fetch.format_attrs;
>  
>  	/*
>  	 * Some chips fail to reset the fetch count when it is written; instead
> @@ -773,11 +769,22 @@ static __init void perf_event_ibs_init(void)
>  	if (boot_cpu_data.x86 == 0x19 && boot_cpu_data.x86_model < 0x10)
>  		perf_ibs_fetch.fetch_ignore_if_zero_rip = 1;
>  
> +	*format_attrs++ = &format_attr_rand_en.attr;
> +	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
> +		perf_ibs_fetch.config_mask |= IBS_FETCH_L3MISSONLY;
> +		*format_attrs++ = &fetch_l3missonly.attr.attr;
> +	}
> +
>  	perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
> +}
> +
> +static __init void perf_ibs_op_prepare(void)
> +{
> +	struct attribute **format_attrs = perf_ibs_op.format_attrs;
>  
>  	if (ibs_caps & IBS_CAPS_OPCNT) {
>  		perf_ibs_op.config_mask |= IBS_OP_CNT_CTL;
> -		*attr++ = &format_attr_cnt_ctl.attr;
> +		*format_attrs++ = &format_attr_cnt_ctl.attr;
>  	}
>  
>  	if (ibs_caps & IBS_CAPS_OPCNTEXT) {
> @@ -786,7 +793,18 @@ static __init void perf_event_ibs_init(void)
>  		perf_ibs_op.cnt_mask    |= IBS_OP_MAX_CNT_EXT_MASK;
>  	}
>  
> +	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
> +		perf_ibs_op.config_mask |= IBS_OP_L3MISSONLY;
> +		*format_attrs++ = &op_l3missonly.attr.attr;
> +	}
> +
>  	perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
> +}

Right, so Greg told us to stop doing silly things like this and use
.is_visible, also see commits like:

  b7c9b3927337 ("perf/x86/intel: Use ->is_visible callback for default group")

There's quite a bit of that in the intel driver and some in the x86
core code too. Please have a look.
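
For reference, the ->is_visible approach would look roughly like this for the
new bits (sketch only, names and placement may differ):

	static umode_t
	zen4_ibs_extensions_is_visible(struct kobject *kobj, struct attribute *attr, int i)
	{
		return ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS ? attr->mode : 0;
	}

	static struct attribute *fetch_l3missonly_attrs[] = {
		&fetch_l3missonly.attr.attr,
		NULL,
	};

	static struct attribute_group group_fetch_l3missonly = {
		.name		= "format",
		.attrs		= fetch_l3missonly_attrs,
		.is_visible	= zen4_ibs_extensions_is_visible,
	};

	static const struct attribute_group *fetch_attr_update[] = {
		&group_fetch_l3missonly,
		NULL,
	};

	/* in the init path: */
	perf_ibs_fetch.pmu.attr_update = fetch_attr_update;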

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 3/6] perf/tool/amd/ibs: Warn about sampling period skew
  2022-04-25  4:43 ` [PATCH 3/6] perf/tool/amd/ibs: Warn about sampling period skew Ravi Bangoria
@ 2022-04-26 10:09   ` Robert Richter
  2022-04-26 11:43     ` Ravi Bangoria
  0 siblings, 1 reply; 22+ messages in thread
From: Robert Richter @ 2022-04-26 10:09 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: peterz, acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, santosh.shukla

On 25.04.22 10:13:20, Ravi Bangoria wrote:

> @@ -29,3 +31,32 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
>  
>  	free(env.cpuid);
>  }
> +
> +void arch_evsel__warn_ambiguity(struct evsel *evsel, struct perf_event_attr *attr)

Have an 'ibs_' string in the name?

> +{
> +	struct perf_env *env = evsel__env(evsel);
> +	struct perf_pmu *evsel_pmu = evsel__find_pmu(evsel);
> +	struct perf_pmu *ibs_fetch_pmu = perf_pmu__find("ibs_fetch");
> +	struct perf_pmu *ibs_op_pmu = perf_pmu__find("ibs_op");
> +	static int warned_once;
> +
> +	if (warned_once || !perf_env__cpuid(env) || !env->cpuid ||
> +	    !strstarts(env->cpuid, "AuthenticAMD") || !evsel_pmu)
> +		return;
> +
> +	if (ibs_fetch_pmu && ibs_fetch_pmu->type == evsel_pmu->type) {
> +		if (attr->config & (1ULL << 59)) {
> +			pr_warning(
> +"WARNING: Hw internally resets sampling period when L3 Miss Filtering is enabled\n"
> +"and tagged operation does not cause L3 Miss. This causes sampling period skew.\n");
> +			warned_once = 1;
> +		}
> +	} else if (ibs_op_pmu && ibs_op_pmu->type == evsel_pmu->type) {
> +		if (attr->config & (1ULL << 16)) {
> +			pr_warning(
> +"WARNING: Hw internally resets sampling period when L3 Miss Filtering is enabled\n"
> +"and tagged operation does not cause L3 Miss. This causes sampling period skew.\n");

Avoid duplicate code. You could move it to the end of the function and return
early if the config bit is not set.
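
E.g. (sketch):

	if (ibs_fetch_pmu && ibs_fetch_pmu->type == evsel_pmu->type) {
		if (!(attr->config & (1ULL << 59)))
			return;
	} else if (ibs_op_pmu && ibs_op_pmu->type == evsel_pmu->type) {
		if (!(attr->config & (1ULL << 16)))
			return;
	} else {
		return;
	}

	pr_warning(
"WARNING: Hw internally resets sampling period when L3 Miss Filtering is enabled\n"
"and tagged operation does not cause L3 Miss. This causes sampling period skew.\n");
	warned_once = 1;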

> +			warned_once = 1;
> +		}
> +	}
> +}

-Robert

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 4/6] perf/tool: Parse non-cpu pmu capabilities
  2022-04-25  4:43 ` [PATCH 4/6] perf/tool: Parse non-cpu pmu capabilities Ravi Bangoria
@ 2022-04-26 10:37   ` Robert Richter
  2022-04-26 11:53     ` Ravi Bangoria
  0 siblings, 1 reply; 22+ messages in thread
From: Robert Richter @ 2022-04-26 10:37 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: peterz, acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, santosh.shukla

On 25.04.22 10:13:21, Ravi Bangoria wrote:

> diff --git a/tools/perf/Documentation/perf.data-file-format.txt b/tools/perf/Documentation/perf.data-file-format.txt
> index f56d0e0fbff6..dea3acb36558 100644
> --- a/tools/perf/Documentation/perf.data-file-format.txt
> +++ b/tools/perf/Documentation/perf.data-file-format.txt
> @@ -435,6 +435,24 @@ struct {
>  	} [nr_pmu];
>  };
>  
> +	HEADER_PMU_CAPS = 32,
> +
> +	List of pmu capabilities (except cpu pmu which is already
> +	covered by HEADER_CPU_PMU_CAPS)
> +
> +struct {
> +	u32 nr_pmus;
> +	struct {
> +		u8 core_type;	/* For hybrid topology */
> +		char pmu_name[];
> +		u16 nr_caps;
> +		struct {
> +			char name[];
> +			char value[];
> +		} [nr_caps];
> +	} [nr_pmus];
> +};
> +

This looks quite complex and special-purpose.

Why not just reuse struct nr_cpu_pmu_caps (id 28)? Rename it and
introduce macros for backwards compatibility if needed.

pmu_name is already encoded in sysfs and core_type could be a caps
value?

-Robert

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 1/6] perf/amd/ibs: Add support for L3 miss filtering
  2022-04-26  9:18   ` Robert Richter
@ 2022-04-26 11:25     ` Ravi Bangoria
  0 siblings, 0 replies; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-26 11:25 UTC (permalink / raw)
  To: Robert Richter
  Cc: peterz, acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, santosh.shukla, Ravi Bangoria


On 26-Apr-22 2:48 PM, Robert Richter wrote:
> On 25.04.22 10:13:18, Ravi Bangoria wrote:
>> IBS L3 miss filtering works by tagging an instruction on IBS counter
>> overflow and generating an NMI if the tagged instruction causes an L3
>> miss. Samples without an L3 miss are discarded and counter is reset
>> with random value (between 1-15 for fetch pmu and 1-127 for op pmu).
>> This helps in reducing sampling overhead when user is interested only
>> in such samples. One of the use case of such filtered samples is to
>> feed data to page-migration daemon in tiered memory systems.
>>
>> Add support for L3 miss filtering in IBS driver via new pmu attribute
>> "l3missonly". Example usage:
>>
>>   # perf record -a -e ibs_op/l3missonly=1/ --raw-samples sleep 5
>>
>> Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
>> ---
>>  arch/x86/events/amd/ibs.c         | 42 ++++++++++++++++++++++---------
>>  arch/x86/include/asm/perf_event.h |  3 +++
>>  2 files changed, 33 insertions(+), 12 deletions(-)
>>
>> diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
>> index 9739019d4b67..a5303d62060c 100644
>> --- a/arch/x86/events/amd/ibs.c
>> +++ b/arch/x86/events/amd/ibs.c
>> @@ -520,16 +520,12 @@ static void perf_ibs_read(struct perf_event *event) { }
>>  
>>  PMU_FORMAT_ATTR(rand_en,	"config:57");
>>  PMU_FORMAT_ATTR(cnt_ctl,	"config:19");
>> +PMU_EVENT_ATTR_STRING(l3missonly, fetch_l3missonly, "config:59");
>> +PMU_EVENT_ATTR_STRING(l3missonly, op_l3missonly, "config:16");
>>  
>> -static struct attribute *ibs_fetch_format_attrs[] = {
>> -	&format_attr_rand_en.attr,
>> -	NULL,
>> -};
>> -
>> -static struct attribute *ibs_op_format_attrs[] = {
>> -	NULL,	/* &format_attr_cnt_ctl.attr if IBS_CAPS_OPCNT */
>> -	NULL,
>> -};
>> +/* size = nr attrs plus NULL at the end */
>> +static struct attribute *ibs_fetch_format_attrs[3];
>> +static struct attribute *ibs_op_format_attrs[3];
> 
> Define a macro for the array size.

Apart from defining the size of the above arrays, such a macro would not be
used anywhere else, so I don't see the need for it.

> 
>>  
>>  static struct perf_ibs perf_ibs_fetch = {
>>  	.pmu = {
>> @@ -759,9 +755,9 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
>>  	return ret;
>>  }
>>  
>> -static __init void perf_event_ibs_init(void)
>> +static __init void perf_ibs_fetch_prepare(void)
> 
> Since this actually initializes the pmu, let's call that
> perf_ibs_fetch_init().

Sure

> 
> For low-level init functions it would be good to keep track of the return
> code even if it is not evaluated later. So these kinds of functions should
> return an error code.

Sure

> 
>>  {
>> -	struct attribute **attr = ibs_op_format_attrs;
>> +	struct attribute **format_attrs = perf_ibs_fetch.format_attrs;
> 
> I think we could keep this short here with 'attr'.
> 
>>  
>>  	/*
>>  	 * Some chips fail to reset the fetch count when it is written; instead
>> @@ -773,11 +769,22 @@ static __init void perf_event_ibs_init(void)
>>  	if (boot_cpu_data.x86 == 0x19 && boot_cpu_data.x86_model < 0x10)
>>  		perf_ibs_fetch.fetch_ignore_if_zero_rip = 1;
>>  
>> +	*format_attrs++ = &format_attr_rand_en.attr;
>> +	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
>> +		perf_ibs_fetch.config_mask |= IBS_FETCH_L3MISSONLY;
>> +		*format_attrs++ = &fetch_l3missonly.attr.attr;
>> +	}
> 
> You should also write the terminating NULL pointer here, though the
> memory is preinitialized to zero.

That seems unnecessary.

> 
>> +
>>  	perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
>> +}
>> +
>> +static __init void perf_ibs_op_prepare(void)
>> +{
>> +	struct attribute **format_attrs = perf_ibs_op.format_attrs;
>>  
>>  	if (ibs_caps & IBS_CAPS_OPCNT) {
>>  		perf_ibs_op.config_mask |= IBS_OP_CNT_CTL;
>> -		*attr++ = &format_attr_cnt_ctl.attr;
>> +		*format_attrs++ = &format_attr_cnt_ctl.attr;
>>  	}
>>  
>>  	if (ibs_caps & IBS_CAPS_OPCNTEXT) {
>> @@ -786,7 +793,18 @@ static __init void perf_event_ibs_init(void)
>>  		perf_ibs_op.cnt_mask    |= IBS_OP_MAX_CNT_EXT_MASK;
>>  	}
>>  
>> +	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
>> +		perf_ibs_op.config_mask |= IBS_OP_L3MISSONLY;
>> +		*format_attrs++ = &op_l3missonly.attr.attr;
>> +	}
>> +
>>  	perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
>> +}
> 
> Same for this function: *_init(), error code, attrs, terminating NULL.
> 
>> +
>> +static __init void perf_event_ibs_init(void)
>> +{
>> +	perf_ibs_fetch_prepare();
>> +	perf_ibs_op_prepare();
>>  
>>  	register_nmi_handler(NMI_LOCAL, perf_ibs_nmi_handler, 0, "perf_ibs");
>>  	pr_info("perf: AMD IBS detected (0x%08x)\n", ibs_caps);
> 
> The function is now small enough to be squashed into amd_ibs_init().

It's small enough, but it still makes sense to keep this function, as there is
an empty version of it when CONFIG_PERF_EVENTS and CONFIG_CPU_SUP_AMD are not
both enabled.
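
I.e. the stub in the corresponding #else branch, roughly:

	#else /* defined (CONFIG_PERF_EVENTS) && defined (CONFIG_CPU_SUP_AMD) */

	static __init void perf_event_ibs_init(void) { }

	#endif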

Thanks for the review,
Ravi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 5/6] perf/tool/amd/ibs: Support new IBS bits in raw trace dump
  2022-04-25  4:43 ` [PATCH 5/6] perf/tool/amd/ibs: Support new IBS bits in raw trace dump Ravi Bangoria
@ 2022-04-26 11:27   ` Robert Richter
  2022-04-26 13:34     ` Ravi Bangoria
  0 siblings, 1 reply; 22+ messages in thread
From: Robert Richter @ 2022-04-26 11:27 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: peterz, acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, santosh.shukla

On 25.04.22 10:13:22, Ravi Bangoria wrote:

> @@ -71,11 +74,12 @@ union ibs_op_data {
>  union ibs_op_data2 {
>  	__u64 val;
>  	struct {
> -		__u64	data_src:3,	/* 0-2: data source */
> +		__u64	data_src_lo:3,	/* 0-2: data source low */
>  			reserved0:1,	/* 3: reserved */
>  			rmt_node:1,	/* 4: destination node */
>  			cache_hit_st:1,	/* 5: cache hit state */
> -			reserved1:57;	/* 5-63: reserved */
> +			data_src_hi:2,	/* 6-7: data source high */
> +			reserved1:56;	/* 8-63: reserved */

Good catch, bit 63 was not defined before.

>  	};
>  };

> @@ -279,6 +328,9 @@ bool evlist__has_amd_ibs(struct evlist *evlist)
>  		pmu_mapping += strlen(pmu_mapping) + 1 /* '\0' */;
>  	}
>  
> +	if (perf_env__find_pmu_cap(env, 0, "ibs_op", "zen4_ibs_extensions"))
> +		zen4_ibs_extensions = 1;
> +

This caps check should be moved to the ibs_op and ibs_fetch pmu-specific
code. Use the env of the respective pmu in the sample decoding.

-Robert

>  	if (ibs_fetch_type || ibs_op_type) {
>  		if (!cpu_family)
>  			parse_cpuid(env);

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 6/6] perf/tool/amd/ibs: Fix comment
  2022-04-25  4:43 ` [PATCH 6/6] perf/tool/amd/ibs: Fix comment Ravi Bangoria
@ 2022-04-26 11:27   ` Robert Richter
  0 siblings, 0 replies; 22+ messages in thread
From: Robert Richter @ 2022-04-26 11:27 UTC (permalink / raw)
  To: Ravi Bangoria
  Cc: peterz, acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, santosh.shukla

On 25.04.22 10:13:23, Ravi Bangoria wrote:
> s/IBS Op Data 2/IBS Op Data 1/ for MSR 0xc0011035.
> 
> Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>

Looks good.

-Robert

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 1/6] perf/amd/ibs: Add support for L3 miss filtering
  2022-04-26 10:07   ` Peter Zijlstra
@ 2022-04-26 11:30     ` Ravi Bangoria
  0 siblings, 0 replies; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-26 11:30 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp, irogers,
	yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, rrichter, santosh.shukla,
	Ravi Bangoria



On 26-Apr-22 3:37 PM, Peter Zijlstra wrote:
> On Mon, Apr 25, 2022 at 10:13:18AM +0530, Ravi Bangoria wrote:
>> IBS L3 miss filtering works by tagging an instruction on IBS counter
>> overflow and generating an NMI if the tagged instruction causes an L3
>> miss. Samples without an L3 miss are discarded and counter is reset
>> with random value (between 1-15 for fetch pmu and 1-127 for op pmu).
>> This helps in reducing sampling overhead when user is interested only
>> in such samples. One of the use case of such filtered samples is to
>> feed data to page-migration daemon in tiered memory systems.
>>
>> Add support for L3 miss filtering in IBS driver via new pmu attribute
>> "l3missonly". Example usage:
>>
>>   # perf record -a -e ibs_op/l3missonly=1/ --raw-samples sleep 5
>>
>> Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
>> ---
>>  arch/x86/events/amd/ibs.c         | 42 ++++++++++++++++++++++---------
>>  arch/x86/include/asm/perf_event.h |  3 +++
>>  2 files changed, 33 insertions(+), 12 deletions(-)
>>
>> diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
>> index 9739019d4b67..a5303d62060c 100644
>> --- a/arch/x86/events/amd/ibs.c
>> +++ b/arch/x86/events/amd/ibs.c
>> @@ -520,16 +520,12 @@ static void perf_ibs_read(struct perf_event *event) { }
>>  
>>  PMU_FORMAT_ATTR(rand_en,	"config:57");
>>  PMU_FORMAT_ATTR(cnt_ctl,	"config:19");
>> +PMU_EVENT_ATTR_STRING(l3missonly, fetch_l3missonly, "config:59");
>> +PMU_EVENT_ATTR_STRING(l3missonly, op_l3missonly, "config:16");
>>  
>> -static struct attribute *ibs_fetch_format_attrs[] = {
>> -	&format_attr_rand_en.attr,
>> -	NULL,
>> -};
>> -
>> -static struct attribute *ibs_op_format_attrs[] = {
>> -	NULL,	/* &format_attr_cnt_ctl.attr if IBS_CAPS_OPCNT */
>> -	NULL,
>> -};
>> +/* size = nr attrs plus NULL at the end */
>> +static struct attribute *ibs_fetch_format_attrs[3];
>> +static struct attribute *ibs_op_format_attrs[3];
>>  
>>  static struct perf_ibs perf_ibs_fetch = {
>>  	.pmu = {
>> @@ -759,9 +755,9 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
>>  	return ret;
>>  }
>>  
>> -static __init void perf_event_ibs_init(void)
>> +static __init void perf_ibs_fetch_prepare(void)
>>  {
>> -	struct attribute **attr = ibs_op_format_attrs;
>> +	struct attribute **format_attrs = perf_ibs_fetch.format_attrs;
>>  
>>  	/*
>>  	 * Some chips fail to reset the fetch count when it is written; instead
>> @@ -773,11 +769,22 @@ static __init void perf_event_ibs_init(void)
>>  	if (boot_cpu_data.x86 == 0x19 && boot_cpu_data.x86_model < 0x10)
>>  		perf_ibs_fetch.fetch_ignore_if_zero_rip = 1;
>>  
>> +	*format_attrs++ = &format_attr_rand_en.attr;
>> +	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
>> +		perf_ibs_fetch.config_mask |= IBS_FETCH_L3MISSONLY;
>> +		*format_attrs++ = &fetch_l3missonly.attr.attr;
>> +	}
>> +
>>  	perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
>> +}
>> +
>> +static __init void perf_ibs_op_prepare(void)
>> +{
>> +	struct attribute **format_attrs = perf_ibs_op.format_attrs;
>>  
>>  	if (ibs_caps & IBS_CAPS_OPCNT) {
>>  		perf_ibs_op.config_mask |= IBS_OP_CNT_CTL;
>> -		*attr++ = &format_attr_cnt_ctl.attr;
>> +		*format_attrs++ = &format_attr_cnt_ctl.attr;
>>  	}
>>  
>>  	if (ibs_caps & IBS_CAPS_OPCNTEXT) {
>> @@ -786,7 +793,18 @@ static __init void perf_event_ibs_init(void)
>>  		perf_ibs_op.cnt_mask    |= IBS_OP_MAX_CNT_EXT_MASK;
>>  	}
>>  
>> +	if (ibs_caps & IBS_CAPS_ZEN4IBSEXTENSIONS) {
>> +		perf_ibs_op.config_mask |= IBS_OP_L3MISSONLY;
>> +		*format_attrs++ = &op_l3missonly.attr.attr;
>> +	}
>> +
>>  	perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
>> +}
> 
> Right, so Greg told us to stop doing silly things like this and use
> .is_visible, also see commits like:
> 
>   b7c9b3927337 ("perf/x86/intel: Use ->is_visible callback for default group")
> 
> There's quite a bit of that in the intel driver and some in the x86
> core code too. Please have a look.

Sure.

Thanks for the review,
Ravi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2/6] perf/amd/ibs: Advertise zen4_ibs_extensions as pmu capability attribute
  2022-04-26  9:57   ` Robert Richter
@ 2022-04-26 11:40     ` Ravi Bangoria
  0 siblings, 0 replies; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-26 11:40 UTC (permalink / raw)
  To: Robert Richter
  Cc: peterz, acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, santosh.shukla, Ravi Bangoria



On 26-Apr-22 3:27 PM, Robert Richter wrote:
> On 25.04.22 10:13:19, Ravi Bangoria wrote:
>> PMU driver can advertise certain feature via capability attribute('caps'
>> sysfs directory) which can be consumed by userspace tools like perf. Add
>> zen4_ibs_extensions capability attribute for IBS pmus. This attribute
>> will be enabled when CPUID_Fn8000001B_EAX[11] is set.
>>
>> Without patch:
>>
>>   $ ls /sys/bus/event_source/devices/ibs_op/caps
>>   ls: cannot access '/sys/.../ibs_op/caps': No such file or directory
>>
>> With patch:
>>
>>   $ ls /sys/bus/event_source/devices/ibs_op/caps
>>   zen4_ibs_extensions
>>
>> Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
>> ---
>>  arch/x86/events/amd/ibs.c | 34 ++++++++++++++++++++++++++++------
>>  1 file changed, 28 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
>> index a5303d62060c..54e12bd7843e 100644
>> --- a/arch/x86/events/amd/ibs.c
>> +++ b/arch/x86/events/amd/ibs.c
>> @@ -95,8 +95,10 @@ struct perf_ibs {
>>  	struct cpu_perf_ibs __percpu	*pcpu;
>>  
>>  	struct attribute		**format_attrs;
>> +	struct attribute		**caps_attrs;
>>  	struct attribute_group		format_group;
>> -	const struct attribute_group	*attr_groups[2];
>> +	struct attribute_group		caps_group;
>> +	const struct attribute_group	*attr_groups[3];
> 
> Also add a macro for max groups.

Same as previous patch, except defining size of the array, it won't
be used anywhere else.

> 
>>  
>>  	u64				(*get_count)(u64 config);
>>  };
>> @@ -522,10 +524,13 @@ PMU_FORMAT_ATTR(rand_en,	"config:57");
>>  PMU_FORMAT_ATTR(cnt_ctl,	"config:19");
>>  PMU_EVENT_ATTR_STRING(l3missonly, fetch_l3missonly, "config:59");
>>  PMU_EVENT_ATTR_STRING(l3missonly, op_l3missonly, "config:16");
>> +PMU_EVENT_ATTR_STRING(zen4_ibs_extensions, zen4_ibs_extensions, "1");
>>  
>>  /* size = nr attrs plus NULL at the end */
>>  static struct attribute *ibs_fetch_format_attrs[3];
>>  static struct attribute *ibs_op_format_attrs[3];
>> +static struct attribute *ibs_fetch_caps_attrs[2];
>> +static struct attribute *ibs_op_caps_attrs[2];
>>  
>>  static struct perf_ibs perf_ibs_fetch = {
>>  	.pmu = {
>> @@ -548,6 +553,7 @@ static struct perf_ibs perf_ibs_fetch = {
>>  	.offset_mask		= { MSR_AMD64_IBSFETCH_REG_MASK },
>>  	.offset_max		= MSR_AMD64_IBSFETCH_REG_COUNT,
>>  	.format_attrs		= ibs_fetch_format_attrs,
>> +	.caps_attrs		= ibs_fetch_caps_attrs,
>>  
>>  	.get_count		= get_ibs_fetch_count,
>>  };
>> @@ -574,6 +580,7 @@ static struct perf_ibs perf_ibs_op = {
>>  	.offset_mask		= { MSR_AMD64_IBSOP_REG_MASK },
>>  	.offset_max		= MSR_AMD64_IBSOP_REG_COUNT,
>>  	.format_attrs		= ibs_op_format_attrs,
>> +	.caps_attrs		= ibs_op_caps_attrs,
>>  
>>  	.get_count		= get_ibs_op_count,
>>  };
>> @@ -728,6 +735,7 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
>>  {
>>  	struct cpu_perf_ibs __percpu *pcpu;
>>  	int ret;
>> +	int i = 0;
> 
> 'group'? Use a pointer to iterate over groups here same as for attrs?
> 
> 	struct attribute_group **group = &perf_ibs->attr_groups;

Sure.

> 
>>  
>>  	pcpu = alloc_percpu(struct cpu_perf_ibs);
>>  	if (!pcpu)
>> @@ -736,16 +744,26 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
>>  	perf_ibs->pcpu = pcpu;
>>  
>>  	/* register attributes */
>> +	memset(&perf_ibs->attr_groups, 0, sizeof(perf_ibs->attr_groups));
> 
> With a termination below this could be dropped.

I think we can just remove this line. perf_ibs points to either perf_ibs_fetch
or perf_ibs_op, both of which are global variables and would be preinitialized
with 0s.

> 
>>  	if (perf_ibs->format_attrs[0]) {
>>  		memset(&perf_ibs->format_group, 0, sizeof(perf_ibs->format_group));
>> -		perf_ibs->format_group.name	= "format";
>> -		perf_ibs->format_group.attrs	= perf_ibs->format_attrs;
>> +		perf_ibs->format_group.name = "format";
>> +		perf_ibs->format_group.attrs = perf_ibs->format_attrs;
>>  
>> -		memset(&perf_ibs->attr_groups, 0, sizeof(perf_ibs->attr_groups));
>> -		perf_ibs->attr_groups[0]	= &perf_ibs->format_group;
>> -		perf_ibs->pmu.attr_groups	= perf_ibs->attr_groups;
>> +		perf_ibs->attr_groups[i++] = &perf_ibs->format_group;
> 
> So this could be:
> 
> 		*groups++ = &perf_ibs->format_group
> 
>>  	}
>>  
>> +	if (perf_ibs->caps_attrs[0]) {
>> +		memset(&perf_ibs->caps_group, 0, sizeof(perf_ibs->caps_group));
>> +		perf_ibs->caps_group.name = "caps";
>> +		perf_ibs->caps_group.attrs = perf_ibs->caps_attrs;
>> +
>> +		perf_ibs->attr_groups[i++] = &perf_ibs->caps_group;
> 
> Similar here.
> 
>> +	}
> 
> Add a terminating NULL pointer for groups here.
> 
>> +
>> +	if (i)
> 
> 	if (perf_ibs->attr_groups[0])
> 		...
> 
>> +		perf_ibs->pmu.attr_groups = perf_ibs->attr_groups;
>> +
>>  	ret = perf_pmu_register(&perf_ibs->pmu, name, -1);
>>  	if (ret) {
>>  		perf_ibs->pcpu = NULL;
>> @@ -758,6 +776,7 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
>>  static __init void perf_ibs_fetch_prepare(void)
>>  {
>>  	struct attribute **format_attrs = perf_ibs_fetch.format_attrs;
>> +	struct attribute **caps_attr = perf_ibs_fetch.caps_attrs;
> 
> Now I see why it was renamed, ok.

Yup.

Thanks for the review,
Ravi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 3/6] perf/tool/amd/ibs: Warn about sampling period skew
  2022-04-26 10:09   ` Robert Richter
@ 2022-04-26 11:43     ` Ravi Bangoria
  0 siblings, 0 replies; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-26 11:43 UTC (permalink / raw)
  To: Robert Richter
  Cc: peterz, acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, santosh.shukla, Ravi Bangoria



On 26-Apr-22 3:39 PM, Robert Richter wrote:
> On 25.04.22 10:13:20, Ravi Bangoria wrote:
> 
>> @@ -29,3 +31,32 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
>>  
>>  	free(env.cpuid);
>>  }
>> +
>> +void arch_evsel__warn_ambiguity(struct evsel *evsel, struct perf_event_attr *attr)
> 
> Have an 'ibs_' string in the name?

No, this is not IBS-specific. Any arch can define its own version.

> 
>> +{
>> +	struct perf_env *env = evsel__env(evsel);
>> +	struct perf_pmu *evsel_pmu = evsel__find_pmu(evsel);
>> +	struct perf_pmu *ibs_fetch_pmu = perf_pmu__find("ibs_fetch");
>> +	struct perf_pmu *ibs_op_pmu = perf_pmu__find("ibs_op");
>> +	static int warned_once;
>> +
>> +	if (warned_once || !perf_env__cpuid(env) || !env->cpuid ||
>> +	    !strstarts(env->cpuid, "AuthenticAMD") || !evsel_pmu)
>> +		return;
>> +
>> +	if (ibs_fetch_pmu && ibs_fetch_pmu->type == evsel_pmu->type) {
>> +		if (attr->config & (1ULL << 59)) {
>> +			pr_warning(
>> +"WARNING: Hw internally resets sampling period when L3 Miss Filtering is enabled\n"
>> +"and tagged operation does not cause L3 Miss. This causes sampling period skew.\n");
>> +			warned_once = 1;
>> +		}
>> +	} else if (ibs_op_pmu && ibs_op_pmu->type == evsel_pmu->type) {
>> +		if (attr->config & (1ULL << 16)) {
>> +			pr_warning(
>> +"WARNING: Hw internally resets sampling period when L3 Miss Filtering is enabled\n"
>> +"and tagged operation does not cause L3 Miss. This causes sampling period skew.\n");
> 
> Avoid duplicate code. You could move it to the end of the function and
> early return if the config bit is not set.

Sure.

Thanks,
Ravi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 4/6] perf/tool: Parse non-cpu pmu capabilities
  2022-04-26 10:37   ` Robert Richter
@ 2022-04-26 11:53     ` Ravi Bangoria
  0 siblings, 0 replies; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-26 11:53 UTC (permalink / raw)
  To: Robert Richter
  Cc: peterz, acme, mingo, mark.rutland, jolsa, namhyung, tglx, bp,
	irogers, yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, santosh.shukla, Ravi Bangoria



On 26-Apr-22 4:07 PM, Robert Richter wrote:
> On 25.04.22 10:13:21, Ravi Bangoria wrote:
> 
>> diff --git a/tools/perf/Documentation/perf.data-file-format.txt b/tools/perf/Documentation/perf.data-file-format.txt
>> index f56d0e0fbff6..dea3acb36558 100644
>> --- a/tools/perf/Documentation/perf.data-file-format.txt
>> +++ b/tools/perf/Documentation/perf.data-file-format.txt
>> @@ -435,6 +435,24 @@ struct {
>>  	} [nr_pmu];
>>  };
>>  
>> +	HEADER_PMU_CAPS = 32,
>> +
>> +	List of pmu capabilities (except cpu pmu which is already
>> +	covered by HEADER_CPU_PMU_CAPS)
>> +
>> +struct {
>> +	u32 nr_pmus;
>> +	struct {
>> +		u8 core_type;	/* For hybrid topology */
>> +		char pmu_name[];
>> +		u16 nr_caps;
>> +		struct {
>> +			char name[];
>> +			char value[];
>> +		} [nr_caps];
>> +	} [nr_pmus];
>> +};
>> +
> 
> This looks quite a bit complex and special.
> 
> Why not just reusing struct nr_cpu_pmu_caps (id 28)? Rename it and
> introduce macros for backwards compatability if needed.

No. HEADER_CPU_PMU_CAPS (id 28) is designed only for the CPU pmu and cannot
store data for more than one pmu. The design proposed in this patch is
generic and can be used to save the capabilities of multiple pmus within a
single header. Something like:

struct {
    nr_pmus=2,

    [0] = struct {
          core_type = 0,
          pmu_name = "ibs_fetch"
          nr_caps = 2,
          [0] = { .name = "cap1", .value = "value1" }
          [1] = { .name = "cap2", .value = "value2" }
    },

    [1] = struct {
          core_type = 0,
          pmu_name = "ibs_op"
          nr_caps = 3,
          [0] = { .name = "cap1", .value = "value1" }
          [1] = { .name = "cap2", .value = "value2" }
          [2] = { .name = "cap3", .value = "value3" }
    },
}

> 
> pmu_name is already encoded in sysfs and core_type could be a caps
> value?

pmu_name is needed to map a capability to its pmu. And core_type is needed
when the same pmu is supported by multiple core types (like P-core / E-core
on Intel platforms) but has different capabilities.

Thanks,
Ravi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 5/6] perf/tool/amd/ibs: Support new IBS bits in raw trace dump
  2022-04-26 11:27   ` Robert Richter
@ 2022-04-26 13:34     ` Ravi Bangoria
  0 siblings, 0 replies; 22+ messages in thread
From: Ravi Bangoria @ 2022-04-26 13:34 UTC (permalink / raw)
  To: Robert Richter
  Cc: peterz, mingo, mark.rutland, namhyung, tglx, bp, irogers,
	yao.jin, james.clark, leo.yan, kan.liang, ak, eranian,
	like.xu.linux, x86, linux-perf-users, linux-kernel, sandipan.das,
	ananth.narayan, kim.phillips, santosh.shukla, Ravi Bangoria,
	acme, jolsa



On 26-Apr-22 4:57 PM, Robert Richter wrote:
> On 25.04.22 10:13:22, Ravi Bangoria wrote:
> 
>> @@ -71,11 +74,12 @@ union ibs_op_data {
>>  union ibs_op_data2 {
>>  	__u64 val;
>>  	struct {
>> -		__u64	data_src:3,	/* 0-2: data source */
>> +		__u64	data_src_lo:3,	/* 0-2: data source low */
>>  			reserved0:1,	/* 3: reserved */
>>  			rmt_node:1,	/* 4: destination node */
>>  			cache_hit_st:1,	/* 5: cache hit state */
>> -			reserved1:57;	/* 5-63: reserved */
>> +			data_src_hi:2,	/* 6-7: data source high */
>> +			reserved1:56;	/* 8-63: reserved */
> 
> Good catch, bit 63 was not defined before.

Thanks!

> 
>>  	};
>>  };
> 
>> @@ -279,6 +328,9 @@ bool evlist__has_amd_ibs(struct evlist *evlist)
>>  		pmu_mapping += strlen(pmu_mapping) + 1 /* '\0' */;
>>  	}
>>  
>> +	if (perf_env__find_pmu_cap(env, 0, "ibs_op", "zen4_ibs_extensions"))
>> +		zen4_ibs_extensions = 1;
>> +
> 
> This caps check should be moved to the ibs_op and ibs_fetch pmu-specific
> code. Use the env of the respective pmu in the sample decoding.

IIRC, we don't populate perf_pmu at 'perf report' time, as we might not be
running 'perf report' on the same machine or even the same arch.

Thanks for the review,
Ravi

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2022-04-26 13:35 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-25  4:43 [PATCH 0/6] perf/amd: Zen4 IBS extensions support Ravi Bangoria
2022-04-25  4:43 ` [PATCH 1/6] perf/amd/ibs: Add support for L3 miss filtering Ravi Bangoria
2022-04-26  9:18   ` Robert Richter
2022-04-26 11:25     ` Ravi Bangoria
2022-04-26 10:07   ` Peter Zijlstra
2022-04-26 11:30     ` Ravi Bangoria
2022-04-25  4:43 ` [PATCH 2/6] perf/amd/ibs: Advertise zen4_ibs_extensions as pmu capability attribute Ravi Bangoria
2022-04-26  9:57   ` Robert Richter
2022-04-26 11:40     ` Ravi Bangoria
2022-04-25  4:43 ` [PATCH 3/6] perf/tool/amd/ibs: Warn about sampling period skew Ravi Bangoria
2022-04-26 10:09   ` Robert Richter
2022-04-26 11:43     ` Ravi Bangoria
2022-04-25  4:43 ` [PATCH 4/6] perf/tool: Parse non-cpu pmu capabilities Ravi Bangoria
2022-04-26 10:37   ` Robert Richter
2022-04-26 11:53     ` Ravi Bangoria
2022-04-25  4:43 ` [PATCH 5/6] perf/tool/amd/ibs: Support new IBS bits in raw trace dump Ravi Bangoria
2022-04-26 11:27   ` Robert Richter
2022-04-26 13:34     ` Ravi Bangoria
2022-04-25  4:43 ` [PATCH 6/6] perf/tool/amd/ibs: Fix comment Ravi Bangoria
2022-04-26 11:27   ` Robert Richter
2022-04-25 20:32 ` [PATCH 0/6] perf/amd: Zen4 IBS extensions support Peter Zijlstra
2022-04-26  7:00   ` Ravi Bangoria
