linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/2] perf header: Support HYBRID_TOPOLOGY and HYBRID_CPU_PMU_CAPS
@ 2021-05-11  5:30 Jin Yao
  2021-05-11  5:30 ` [PATCH v3 1/2] perf header: Support HYBRID_TOPOLOGY feature Jin Yao
  2021-05-11  5:30 ` [PATCH v3 2/2] perf header: Support HYBRID_CPU_PMU_CAPS feature Jin Yao
  0 siblings, 2 replies; 5+ messages in thread
From: Jin Yao @ 2021-05-11  5:30 UTC (permalink / raw)
  To: acme, jolsa, peterz, mingo, alexander.shishkin
  Cc: Linux-kernel, ak, kan.liang, yao.jin, Jin Yao

AlderLake uses a hybrid architecture that combines Golden Cove cores
(core cpus) and Gracemont cores (atom cpus). It would be useful to let the
user know the hybrid topology, so the HYBRID_TOPOLOGY feature is added to
the header to indicate which cpus are core cpus and which cpus are atom
cpus.

A hybrid platform may have several cpu pmus, such as "cpu_core" and
"cpu_atom". The HYBRID_CPU_PMU_CAPS feature is created in the perf header
to support multiple cpu pmus.
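
For illustration only (not part of these patches), a minimal C sketch of
how a hybrid core PMU can be detected from sysfs. The exact sysfs path and
the assumption that sysfs is mounted at /sys are mine; the patches
themselves go through perf's pmu-hybrid helpers instead:

  #include <stdio.h>

  /*
   * Probe the "cpus" file exposed by a cpu PMU, e.g.
   * /sys/devices/cpu_core/cpus. Returns 1 if the PMU is present.
   */
  static int hybrid_pmu_present(const char *pmu_name)
  {
  	char path[256];
  	FILE *fp;

  	snprintf(path, sizeof(path), "/sys/devices/%s/cpus", pmu_name);
  	fp = fopen(path, "r");
  	if (!fp)
  		return 0;
  	fclose(fp);
  	return 1;
  }

In this sketch, a platform would be considered hybrid when both
hybrid_pmu_present("cpu_core") and hybrid_pmu_present("cpu_atom") return 1.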

v3:
---
- For "[PATCH v3 1/2] perf header: Support HYBRID_TOPOLOGY feature",
  update HEADER_HYBRID_TOPOLOGY format in perf.data-file-format.txt.

- For "[PATCH v3 2/2] perf header: Support HYBRID_CPU_PMU_CAPS feature",
  Don't extend the original CPU_PMU_CAPS feature to support hybrid cpu pmus.
  Instead, create a new 'HYBRID_CPU_PMU_CAPS' feature in the header.

v2:
---
- In "perf header: Support HYBRID_TOPOLOGY feature", don't use the n->map
  to print the cpu list, just use n->cpus.

- Separate hybrid CPU_PMU_CAPS support into two patches:
  perf header: Write hybrid CPU_PMU_CAPS
  perf header: Process hybrid CPU_PMU_CAPS

- Add some words to perf.data-file-format.txt for HYBRID_TOPOLOGY and
  hybrid CPU_PMU_CAPS.


Jin Yao (2):
  perf header: Support HYBRID_TOPOLOGY feature
  perf header: Support HYBRID_CPU_PMU_CAPS feature

 .../Documentation/perf.data-file-format.txt   |  33 +++
 tools/perf/util/cputopo.c                     |  80 ++++++
 tools/perf/util/cputopo.h                     |  13 +
 tools/perf/util/env.c                         |  12 +
 tools/perf/util/env.h                         |  16 ++
 tools/perf/util/header.c                      | 250 ++++++++++++++++--
 tools/perf/util/header.h                      |   2 +
 tools/perf/util/pmu-hybrid.h                  |  11 +
 8 files changed, 398 insertions(+), 19 deletions(-)

-- 
2.17.1



* [PATCH v3 1/2] perf header: Support HYBRID_TOPOLOGY feature
  2021-05-11  5:30 [PATCH v3 0/2] perf header: Support HYBRID_TOPOLOGY and HYBRID_CPU_PMU_CAPS Jin Yao
@ 2021-05-11  5:30 ` Jin Yao
  2021-05-11  5:30 ` [PATCH v3 2/2] perf header: Support HYBRID_CPU_PMU_CAPS feature Jin Yao
  1 sibling, 0 replies; 5+ messages in thread
From: Jin Yao @ 2021-05-11  5:30 UTC (permalink / raw)
  To: acme, jolsa, peterz, mingo, alexander.shishkin
  Cc: Linux-kernel, ak, kan.liang, yao.jin, Jin Yao

It would be useful to let the user know the hybrid topology.
Add the HYBRID_TOPOLOGY feature to the header to indicate
which cpus are core cpus and which cpus are atom cpus.

With this patch:

For perf.data generated on a hybrid platform, perf report
prints the hybrid cpu list.

  root@otcpl-adl-s-2:~# perf report --header-only -I
  ...
  # hybrid cpu system:
  # cpu_core cpu list : 0-15
  # cpu_atom cpu list : 16-23

For perf.data generated on a non-hybrid platform, perf report
lists HYBRID_TOPOLOGY among the missing features.

  root@kbl-ppc:~# perf report --header-only -I
  ...
  # missing features: TRACING_DATA BRANCH_STACK GROUP_DESC AUXTRACE STAT CLOCKID DIR_FORMAT COMPRESSED CLOCK_DATA HYBRID_TOPOLOGY
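
For reference, a minimal sketch (not part of this patch) of how a consumer
could walk the new perf_env fields after process_hybrid_topology() has run;
it only uses the hybrid_node fields added below:

  static void dump_hybrid_topology(struct perf_env *env, FILE *fp)
  {
  	int i;

  	for (i = 0; i < env->nr_hybrid_nodes; i++) {
  		struct hybrid_node *n = &env->hybrid_nodes[i];

  		/* e.g. "cpu_core: 0-15" and "cpu_atom: 16-23" */
  		fprintf(fp, "%s: %s\n", n->pmu_name, n->cpus);
  	}
  }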

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
---
 .../Documentation/perf.data-file-format.txt   | 17 ++++
 tools/perf/util/cputopo.c                     | 80 +++++++++++++++++
 tools/perf/util/cputopo.h                     | 13 +++
 tools/perf/util/env.c                         |  6 ++
 tools/perf/util/env.h                         |  7 ++
 tools/perf/util/header.c                      | 87 +++++++++++++++++++
 tools/perf/util/header.h                      |  1 +
 tools/perf/util/pmu-hybrid.h                  | 11 +++
 8 files changed, 222 insertions(+)

diff --git a/tools/perf/Documentation/perf.data-file-format.txt b/tools/perf/Documentation/perf.data-file-format.txt
index 9ee96640744e..fbee9e580ee4 100644
--- a/tools/perf/Documentation/perf.data-file-format.txt
+++ b/tools/perf/Documentation/perf.data-file-format.txt
@@ -402,6 +402,23 @@ struct {
 	u64 clockid_time_ns;
 };
 
+	HEADER_HYBRID_TOPOLOGY = 30,
+
+Indicates the hybrid CPUs. The format of the data is as below.
+
+struct {
+	u32 nr;
+	struct {
+		char pmu_name[];
+		char cpus[];
+	} [nr]; /* Variable length records */
+};
+
+Example:
+  hybrid cpu system:
+  cpu_core cpu list : 0-15
+  cpu_atom cpu list : 16-23
+
 	other bits are reserved and should ignored for now
 	HEADER_FEAT_BITS	= 256,
 
diff --git a/tools/perf/util/cputopo.c b/tools/perf/util/cputopo.c
index 1b52402a8923..ec77e2a7b3ca 100644
--- a/tools/perf/util/cputopo.c
+++ b/tools/perf/util/cputopo.c
@@ -12,6 +12,7 @@
 #include "cpumap.h"
 #include "debug.h"
 #include "env.h"
+#include "pmu-hybrid.h"
 
 #define CORE_SIB_FMT \
 	"%s/devices/system/cpu/cpu%d/topology/core_siblings_list"
@@ -351,3 +352,82 @@ void numa_topology__delete(struct numa_topology *tp)
 
 	free(tp);
 }
+
+static int load_hybrid_node(struct hybrid_topology_node *node,
+			    struct perf_pmu *pmu)
+{
+	const char *sysfs;
+	char path[PATH_MAX];
+	char *buf = NULL, *p;
+	FILE *fp;
+	size_t len = 0;
+
+	node->pmu_name = strdup(pmu->name);
+	if (!node->pmu_name)
+		return -1;
+
+	sysfs = sysfs__mountpoint();
+	if (!sysfs)
+		goto err;
+
+	snprintf(path, PATH_MAX, CPUS_TEMPLATE_CPU, sysfs, pmu->name);
+	fp = fopen(path, "r");
+	if (!fp)
+		goto err;
+
+	if (getline(&buf, &len, fp) <= 0) {
+		fclose(fp);
+		goto err;
+	}
+
+	p = strchr(buf, '\n');
+	if (p)
+		*p = '\0';
+
+	fclose(fp);
+	node->cpus = buf;
+	return 0;
+
+err:
+	zfree(&node->pmu_name);
+	free(buf);
+	return -1;
+}
+
+struct hybrid_topology *hybrid_topology__new(void)
+{
+	struct perf_pmu *pmu;
+	struct hybrid_topology *tp = NULL;
+	u32 nr, i = 0;
+
+	nr = perf_pmu__hybrid_pmu_num();
+	if (nr == 0)
+		return NULL;
+
+	tp = zalloc(sizeof(*tp) + sizeof(tp->nodes[0]) * nr);
+	if (!tp)
+		return NULL;
+
+	tp->nr = nr;
+	perf_pmu__for_each_hybrid_pmu(pmu) {
+		if (load_hybrid_node(&tp->nodes[i], pmu)) {
+			hybrid_topology__delete(tp);
+			return NULL;
+		}
+		i++;
+	}
+
+	return tp;
+}
+
+void hybrid_topology__delete(struct hybrid_topology *tp)
+{
+	u32 i;
+
+	for (i = 0; i < tp->nr; i++) {
+		zfree(&tp->nodes[i].pmu_name);
+		zfree(&tp->nodes[i].cpus);
+	}
+
+	free(tp);
+}
diff --git a/tools/perf/util/cputopo.h b/tools/perf/util/cputopo.h
index 6201c3790d86..d9af97177068 100644
--- a/tools/perf/util/cputopo.h
+++ b/tools/perf/util/cputopo.h
@@ -25,10 +25,23 @@ struct numa_topology {
 	struct numa_topology_node	nodes[];
 };
 
+struct hybrid_topology_node {
+	char		*pmu_name;
+	char		*cpus;
+};
+
+struct hybrid_topology {
+	u32				nr;
+	struct hybrid_topology_node	nodes[];
+};
+
 struct cpu_topology *cpu_topology__new(void);
 void cpu_topology__delete(struct cpu_topology *tp);
 
 struct numa_topology *numa_topology__new(void);
 void numa_topology__delete(struct numa_topology *tp);
 
+struct hybrid_topology *hybrid_topology__new(void);
+void hybrid_topology__delete(struct hybrid_topology *tp);
+
 #endif /* __PERF_CPUTOPO_H */
diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
index 9130f6fad8d5..744ae87b5bfa 100644
--- a/tools/perf/util/env.c
+++ b/tools/perf/util/env.c
@@ -202,6 +202,12 @@ void perf_env__exit(struct perf_env *env)
 	for (i = 0; i < env->nr_memory_nodes; i++)
 		zfree(&env->memory_nodes[i].set);
 	zfree(&env->memory_nodes);
+
+	for (i = 0; i < env->nr_hybrid_nodes; i++) {
+		zfree(&env->hybrid_nodes[i].pmu_name);
+		zfree(&env->hybrid_nodes[i].cpus);
+	}
+	zfree(&env->hybrid_nodes);
 }
 
 void perf_env__init(struct perf_env *env __maybe_unused)
diff --git a/tools/perf/util/env.h b/tools/perf/util/env.h
index ca249bf5e984..e5e5deebe68d 100644
--- a/tools/perf/util/env.h
+++ b/tools/perf/util/env.h
@@ -37,6 +37,11 @@ struct memory_node {
 	unsigned long	*set;
 };
 
+struct hybrid_node {
+	char	*pmu_name;
+	char	*cpus;
+};
+
 struct perf_env {
 	char			*hostname;
 	char			*os_release;
@@ -59,6 +64,7 @@ struct perf_env {
 	int			nr_pmu_mappings;
 	int			nr_groups;
 	int			nr_cpu_pmu_caps;
+	int			nr_hybrid_nodes;
 	char			*cmdline;
 	const char		**cmdline_argv;
 	char			*sibling_cores;
@@ -77,6 +83,7 @@ struct perf_env {
 	struct numa_node	*numa_nodes;
 	struct memory_node	*memory_nodes;
 	unsigned long long	 memory_bsize;
+	struct hybrid_node	*hybrid_nodes;
 #ifdef HAVE_LIBBPF_SUPPORT
 	/*
 	 * bpf_info_lock protects bpf rbtrees. This is needed because the
diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
index 02b13c7a23be..ebf4203b36b8 100644
--- a/tools/perf/util/header.c
+++ b/tools/perf/util/header.c
@@ -932,6 +932,40 @@ static int write_clock_data(struct feat_fd *ff,
 	return do_write(ff, data64, sizeof(*data64));
 }
 
+static int write_hybrid_topology(struct feat_fd *ff,
+				 struct evlist *evlist __maybe_unused)
+{
+	struct hybrid_topology *tp;
+	int ret;
+	u32 i;
+
+	tp = hybrid_topology__new();
+	if (!tp)
+		return -ENOENT;
+
+	ret = do_write(ff, &tp->nr, sizeof(u32));
+	if (ret < 0)
+		goto err;
+
+	for (i = 0; i < tp->nr; i++) {
+		struct hybrid_topology_node *n = &tp->nodes[i];
+
+		ret = do_write_string(ff, n->pmu_name);
+		if (ret < 0)
+			goto err;
+
+		ret = do_write_string(ff, n->cpus);
+		if (ret < 0)
+			goto err;
+	}
+
+	ret = 0;
+
+err:
+	hybrid_topology__delete(tp);
+	return ret;
+}
+
 static int write_dir_format(struct feat_fd *ff,
 			    struct evlist *evlist __maybe_unused)
 {
@@ -1623,6 +1657,18 @@ static void print_clock_data(struct feat_fd *ff, FILE *fp)
 		    clockid_name(clockid));
 }
 
+static void print_hybrid_topology(struct feat_fd *ff, FILE *fp)
+{
+	int i;
+	struct hybrid_node *n;
+
+	fprintf(fp, "# hybrid cpu system:\n");
+	for (i = 0; i < ff->ph->env.nr_hybrid_nodes; i++) {
+		n = &ff->ph->env.hybrid_nodes[i];
+		fprintf(fp, "# %s cpu list : %s\n", n->pmu_name, n->cpus);
+	}
+}
+
 static void print_dir_format(struct feat_fd *ff, FILE *fp)
 {
 	struct perf_session *session;
@@ -2849,6 +2895,46 @@ static int process_clock_data(struct feat_fd *ff,
 	return 0;
 }
 
+static int process_hybrid_topology(struct feat_fd *ff,
+				   void *data __maybe_unused)
+{
+	struct hybrid_node *nodes, *n;
+	u32 nr, i;
+
+	/* nr nodes */
+	if (do_read_u32(ff, &nr))
+		return -1;
+
+	nodes = zalloc(sizeof(*nodes) * nr);
+	if (!nodes)
+		return -ENOMEM;
+
+	for (i = 0; i < nr; i++) {
+		n = &nodes[i];
+
+		n->pmu_name = do_read_string(ff);
+		if (!n->pmu_name)
+			goto error;
+
+		n->cpus = do_read_string(ff);
+		if (!n->cpus)
+			goto error;
+	}
+
+	ff->ph->env.nr_hybrid_nodes = nr;
+	ff->ph->env.hybrid_nodes = nodes;
+	return 0;
+
+error:
+	for (i = 0; i < nr; i++) {
+		free(nodes[i].pmu_name);
+		free(nodes[i].cpus);
+	}
+
+	free(nodes);
+	return -1;
+}
+
 static int process_dir_format(struct feat_fd *ff,
 			      void *_data __maybe_unused)
 {
@@ -3117,6 +3203,7 @@ const struct perf_header_feature_ops feat_ops[HEADER_LAST_FEATURE] = {
 	FEAT_OPR(COMPRESSED,	compressed,	false),
 	FEAT_OPR(CPU_PMU_CAPS,	cpu_pmu_caps,	false),
 	FEAT_OPR(CLOCK_DATA,	clock_data,	false),
+	FEAT_OPN(HYBRID_TOPOLOGY,	hybrid_topology,	true),
 };
 
 struct header_print_data {
diff --git a/tools/perf/util/header.h b/tools/perf/util/header.h
index 2aca71763ecf..3f12ec0eb84e 100644
--- a/tools/perf/util/header.h
+++ b/tools/perf/util/header.h
@@ -45,6 +45,7 @@ enum {
 	HEADER_COMPRESSED,
 	HEADER_CPU_PMU_CAPS,
 	HEADER_CLOCK_DATA,
+	HEADER_HYBRID_TOPOLOGY,
 	HEADER_LAST_FEATURE,
 	HEADER_FEAT_BITS	= 256,
 };
diff --git a/tools/perf/util/pmu-hybrid.h b/tools/perf/util/pmu-hybrid.h
index d0fa7bc50a76..2b186c26a43e 100644
--- a/tools/perf/util/pmu-hybrid.h
+++ b/tools/perf/util/pmu-hybrid.h
@@ -19,4 +19,15 @@ struct perf_pmu *perf_pmu__find_hybrid_pmu(const char *name);
 bool perf_pmu__is_hybrid(const char *name);
 char *perf_pmu__hybrid_type_to_pmu(const char *type);
 
+static inline int perf_pmu__hybrid_pmu_num(void)
+{
+	struct perf_pmu *pmu;
+	int num = 0;
+
+	perf_pmu__for_each_hybrid_pmu(pmu)
+		num++;
+
+	return num;
+}
+
 #endif /* __PMU_HYBRID_H */
-- 
2.17.1



* [PATCH v3 2/2] perf header: Support HYBRID_CPU_PMU_CAPS feature
  2021-05-11  5:30 [PATCH v3 0/2] perf header: Support HYBRID_TOPOLOGY and HYBRID_CPU_PMU_CAPS Jin Yao
  2021-05-11  5:30 ` [PATCH v3 1/2] perf header: Support HYBRID_TOPOLOGY feature Jin Yao
@ 2021-05-11  5:30 ` Jin Yao
  2021-05-14  8:16   ` Jiri Olsa
  1 sibling, 1 reply; 5+ messages in thread
From: Jin Yao @ 2021-05-11  5:30 UTC (permalink / raw)
  To: acme, jolsa, peterz, mingo, alexander.shishkin
  Cc: Linux-kernel, ak, kan.liang, yao.jin, Jin Yao

Perf already supports the CPU_PMU_CAPS feature to display a list
of cpu PMU capabilities. But a hybrid platform may have several
cpu pmus (such as "cpu_core" and "cpu_atom"), and the CPU_PMU_CAPS
feature is hard to extend to multiple cpu pmus while staying
compatible with the case of an old perf data file read by a new
perf tool.

So, for better compatibility, a new HYBRID_CPU_PMU_CAPS feature
is created in the header.

For perf.data generated on a hybrid platform:

  root@otcpl-adl-s-2:~# perf report --header-only -I

  # cpu_core pmu capabilities: branches=32, max_precise=3, pmu_name=alderlake_hybrid
  # cpu_atom pmu capabilities: branches=32, max_precise=3, pmu_name=alderlake_hybrid
  # missing features: TRACING_DATA BRANCH_STACK GROUP_DESC AUXTRACE STAT CLOCKID DIR_FORMAT COMPRESSED CPU_PMU_CAPS CLOCK_DATA

For perf.data generated on a non-hybrid platform:

  root@kbl-ppc:~# perf report --header-only -I

  # cpu pmu capabilities: branches=32, max_precise=3, pmu_name=skylake
  # missing features: TRACING_DATA BRANCH_STACK GROUP_DESC AUXTRACE STAT CLOCKID DIR_FORMAT COMPRESSED CLOCK_DATA HYBRID_TOPOLOGY HYBRID_CPU_PMU_CAPS
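
For reference, a minimal sketch (not part of this patch) of how the parsed
per-PMU data could be consumed after process_hybrid_cpu_pmu_caps() has
filled perf_env; it only uses the hybrid_cpc_node fields added below:

  static void dump_hybrid_cpu_pmu_caps(struct perf_env *env, FILE *fp)
  {
  	int i;

  	for (i = 0; i < env->nr_hybrid_cpc_nodes; i++) {
  		struct hybrid_cpc_node *n = &env->hybrid_cpc_nodes[i];

  		/* one summary line per hybrid cpu pmu */
  		fprintf(fp, "%s: %d capabilities, max_branches=%u\n",
  			n->pmu_name, n->nr_cpu_pmu_caps, n->max_branches);
  	}
  }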

Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
---
 .../Documentation/perf.data-file-format.txt   |  16 ++
 tools/perf/util/env.c                         |   6 +
 tools/perf/util/env.h                         |   9 +
 tools/perf/util/header.c                      | 163 ++++++++++++++++--
 tools/perf/util/header.h                      |   1 +
 5 files changed, 176 insertions(+), 19 deletions(-)

diff --git a/tools/perf/Documentation/perf.data-file-format.txt b/tools/perf/Documentation/perf.data-file-format.txt
index fbee9e580ee4..e6ff8c898ada 100644
--- a/tools/perf/Documentation/perf.data-file-format.txt
+++ b/tools/perf/Documentation/perf.data-file-format.txt
@@ -419,6 +419,22 @@ Example:
   cpu_core cpu list : 0-15
   cpu_atom cpu list : 16-23
 
+	HEADER_HYBRID_CPU_PMU_CAPS = 31,
+
+	A list of hybrid CPU PMU capabilities.
+
+struct {
+	u32 nr_pmu;
+	struct {
+		u32 nr_cpu_pmu_caps;
+		{
+			char	name[];
+			char	value[];
+		} [nr_cpu_pmu_caps];
+		char pmu_name[];
+	} [nr_pmu];
+};
+
 	other bits are reserved and should ignored for now
 	HEADER_FEAT_BITS	= 256,
 
diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
index 744ae87b5bfa..1bea5b29b12d 100644
--- a/tools/perf/util/env.c
+++ b/tools/perf/util/env.c
@@ -208,6 +208,12 @@ void perf_env__exit(struct perf_env *env)
 		zfree(&env->hybrid_nodes[i].cpus);
 	}
 	zfree(&env->hybrid_nodes);
+
+	for (i = 0; i < env->nr_hybrid_cpc_nodes; i++) {
+		zfree(&env->hybrid_cpc_nodes[i].cpu_pmu_caps);
+		zfree(&env->hybrid_cpc_nodes[i].pmu_name);
+	}
+	zfree(&env->hybrid_cpc_nodes);
 }
 
 void perf_env__init(struct perf_env *env __maybe_unused)
diff --git a/tools/perf/util/env.h b/tools/perf/util/env.h
index e5e5deebe68d..6824a7423a2d 100644
--- a/tools/perf/util/env.h
+++ b/tools/perf/util/env.h
@@ -42,6 +42,13 @@ struct hybrid_node {
 	char	*cpus;
 };
 
+struct hybrid_cpc_node {
+	int		nr_cpu_pmu_caps;
+	unsigned int    max_branches;
+	char            *cpu_pmu_caps;
+	char            *pmu_name;
+};
+
 struct perf_env {
 	char			*hostname;
 	char			*os_release;
@@ -65,6 +72,7 @@ struct perf_env {
 	int			nr_groups;
 	int			nr_cpu_pmu_caps;
 	int			nr_hybrid_nodes;
+	int			nr_hybrid_cpc_nodes;
 	char			*cmdline;
 	const char		**cmdline_argv;
 	char			*sibling_cores;
@@ -84,6 +92,7 @@ struct perf_env {
 	struct memory_node	*memory_nodes;
 	unsigned long long	 memory_bsize;
 	struct hybrid_node	*hybrid_nodes;
+	struct hybrid_cpc_node	*hybrid_cpc_nodes;
 #ifdef HAVE_LIBBPF_SUPPORT
 	/*
 	 * bpf_info_lock protects bpf rbtrees. This is needed because the
diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
index ebf4203b36b8..f7f2f026bb00 100644
--- a/tools/perf/util/header.c
+++ b/tools/perf/util/header.c
@@ -49,6 +49,7 @@
 #include "cputopo.h"
 #include "bpf-event.h"
 #include "clockid.h"
+#include "pmu-hybrid.h"
 
 #include <linux/ctype.h>
 #include <internal/lib.h>
@@ -1459,18 +1460,14 @@ static int write_compressed(struct feat_fd *ff __maybe_unused,
 	return do_write(ff, &(ff->ph->env.comp_mmap_len), sizeof(ff->ph->env.comp_mmap_len));
 }
 
-static int write_cpu_pmu_caps(struct feat_fd *ff,
-			      struct evlist *evlist __maybe_unused)
+static int write_per_cpu_pmu_caps(struct feat_fd *ff, struct perf_pmu *pmu,
+				  bool write_pmu)
 {
-	struct perf_pmu *cpu_pmu = perf_pmu__find("cpu");
 	struct perf_pmu_caps *caps = NULL;
 	int nr_caps;
 	int ret;
 
-	if (!cpu_pmu)
-		return -ENOENT;
-
-	nr_caps = perf_pmu__caps_parse(cpu_pmu);
+	nr_caps = perf_pmu__caps_parse(pmu);
 	if (nr_caps < 0)
 		return nr_caps;
 
@@ -1478,7 +1475,7 @@ static int write_cpu_pmu_caps(struct feat_fd *ff,
 	if (ret < 0)
 		return ret;
 
-	list_for_each_entry(caps, &cpu_pmu->caps, list) {
+	list_for_each_entry(caps, &pmu->caps, list) {
 		ret = do_write_string(ff, caps->name);
 		if (ret < 0)
 			return ret;
@@ -1488,9 +1485,49 @@ static int write_cpu_pmu_caps(struct feat_fd *ff,
 			return ret;
 	}
 
+	if (write_pmu) {
+		ret = do_write_string(ff, pmu->name);
+		if (ret < 0)
+			return ret;
+	}
+
 	return ret;
 }
 
+static int write_cpu_pmu_caps(struct feat_fd *ff,
+			      struct evlist *evlist __maybe_unused)
+{
+	struct perf_pmu *cpu_pmu = perf_pmu__find("cpu");
+
+	if (!cpu_pmu)
+		return -ENOENT;
+
+	return write_per_cpu_pmu_caps(ff, cpu_pmu, false);
+}
+
+static int write_hybrid_cpu_pmu_caps(struct feat_fd *ff,
+				     struct evlist *evlist __maybe_unused)
+{
+	struct perf_pmu *pmu;
+	u32 nr_pmu = perf_pmu__hybrid_pmu_num();
+	int ret;
+
+	if (nr_pmu == 0)
+		return -ENOENT;
+
+	ret = do_write(ff, &nr_pmu, sizeof(nr_pmu));
+	if (ret < 0)
+		return ret;
+
+	perf_pmu__for_each_hybrid_pmu(pmu) {
+		ret = write_per_cpu_pmu_caps(ff, pmu, true);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
 static void print_hostname(struct feat_fd *ff, FILE *fp)
 {
 	fprintf(fp, "# hostname : %s\n", ff->ph->env.hostname);
@@ -1962,18 +1999,28 @@ static void print_compressed(struct feat_fd *ff, FILE *fp)
 		ff->ph->env.comp_level, ff->ph->env.comp_ratio);
 }
 
-static void print_cpu_pmu_caps(struct feat_fd *ff, FILE *fp)
+static void print_per_cpu_pmu_caps(FILE *fp, int nr_caps, char *cpu_pmu_caps,
+				   char *pmu_name)
 {
-	const char *delimiter = "# cpu pmu capabilities: ";
-	u32 nr_caps = ff->ph->env.nr_cpu_pmu_caps;
-	char *str;
+	const char *delimiter;
+	char *str, buf[128];
 
 	if (!nr_caps) {
-		fprintf(fp, "# cpu pmu capabilities: not available\n");
+		if (!pmu_name)
+			fprintf(fp, "# cpu pmu capabilities: not available\n");
+		else
+			fprintf(fp, "# %s pmu capabilities: not available\n", pmu_name);
 		return;
 	}
 
-	str = ff->ph->env.cpu_pmu_caps;
+	if (!pmu_name)
+		scnprintf(buf, sizeof(buf), "# cpu pmu capabilities: ");
+	else
+		scnprintf(buf, sizeof(buf), "# %s pmu capabilities: ", pmu_name);
+
+	delimiter = buf;
+
+	str = cpu_pmu_caps;
 	while (nr_caps--) {
 		fprintf(fp, "%s%s", delimiter, str);
 		delimiter = ", ";
@@ -1983,6 +2030,24 @@ static void print_cpu_pmu_caps(struct feat_fd *ff, FILE *fp)
 	fprintf(fp, "\n");
 }
 
+static void print_cpu_pmu_caps(struct feat_fd *ff, FILE *fp)
+{
+	print_per_cpu_pmu_caps(fp, ff->ph->env.nr_cpu_pmu_caps,
+			       ff->ph->env.cpu_pmu_caps, NULL);
+}
+
+static void print_hybrid_cpu_pmu_caps(struct feat_fd *ff, FILE *fp)
+{
+	struct hybrid_cpc_node *n;
+
+	for (int i = 0; i < ff->ph->env.nr_hybrid_cpc_nodes; i++) {
+		n = &ff->ph->env.hybrid_cpc_nodes[i];
+		print_per_cpu_pmu_caps(fp, n->nr_cpu_pmu_caps,
+				       n->cpu_pmu_caps,
+				       n->pmu_name);
+	}
+}
+
 static void print_pmu_mappings(struct feat_fd *ff, FILE *fp)
 {
 	const char *delimiter = "# pmu mappings: ";
@@ -3088,8 +3153,9 @@ static int process_compressed(struct feat_fd *ff,
 	return 0;
 }
 
-static int process_cpu_pmu_caps(struct feat_fd *ff,
-				void *data __maybe_unused)
+static int process_per_cpu_pmu_caps(struct feat_fd *ff, int *nr_cpu_pmu_caps,
+				    char **cpu_pmu_caps,
+				    unsigned int *max_branches)
 {
 	char *name, *value;
 	struct strbuf sb;
@@ -3103,7 +3169,7 @@ static int process_cpu_pmu_caps(struct feat_fd *ff,
 		return 0;
 	}
 
-	ff->ph->env.nr_cpu_pmu_caps = nr_caps;
+	*nr_cpu_pmu_caps = nr_caps;
 
 	if (strbuf_init(&sb, 128) < 0)
 		return -1;
@@ -3125,12 +3191,12 @@ static int process_cpu_pmu_caps(struct feat_fd *ff,
 			goto free_value;
 
 		if (!strcmp(name, "branches"))
-			ff->ph->env.max_branches = atoi(value);
+			*max_branches = atoi(value);
 
 		free(value);
 		free(name);
 	}
-	ff->ph->env.cpu_pmu_caps = strbuf_detach(&sb, NULL);
+	*cpu_pmu_caps = strbuf_detach(&sb, NULL);
 	return 0;
 
 free_value:
@@ -3142,6 +3208,64 @@ static int process_cpu_pmu_caps(struct feat_fd *ff,
 	return -1;
 }
 
+static int process_cpu_pmu_caps(struct feat_fd *ff,
+				void *data __maybe_unused)
+{
+	int ret;
+
+	ret = process_per_cpu_pmu_caps(ff, &ff->ph->env.nr_cpu_pmu_caps,
+				       &ff->ph->env.cpu_pmu_caps,
+				       &ff->ph->env.max_branches);
+	return ret;
+}
+
+static int process_hybrid_cpu_pmu_caps(struct feat_fd *ff,
+				       void *data __maybe_unused)
+{
+	struct hybrid_cpc_node *nodes;
+	u32 nr_pmu, i;
+	int ret;
+
+	if (do_read_u32(ff, &nr_pmu))
+		return -1;
+
+	if (!nr_pmu) {
+		pr_debug("hybrid cpu pmu capabilities not available\n");
+		return 0;
+	}
+
+	nodes = zalloc(sizeof(*nodes) * nr_pmu);
+	if (!nodes)
+		return -ENOMEM;
+
+	for (i = 0; i < nr_pmu; i++) {
+		struct hybrid_cpc_node *n = &nodes[i];
+
+		ret = process_per_cpu_pmu_caps(ff, &n->nr_cpu_pmu_caps,
+					       &n->cpu_pmu_caps,
+					       &n->max_branches);
+		if (ret)
+			goto err;
+
+		n->pmu_name = do_read_string(ff);
+		if (!n->pmu_name)
+			goto err;
+	}
+
+	ff->ph->env.nr_hybrid_cpc_nodes = nr_pmu;
+	ff->ph->env.hybrid_cpc_nodes = nodes;
+	return 0;
+
+err:
+	for (i = 0; i < nr_pmu; i++) {
+		free(nodes[i].cpu_pmu_caps);
+		free(nodes[i].pmu_name);
+	}
+
+	free(nodes);
+	return ret;
+}
+
 #define FEAT_OPR(n, func, __full_only) \
 	[HEADER_##n] = {					\
 		.name	    = __stringify(n),			\
@@ -3204,6 +3328,7 @@ const struct perf_header_feature_ops feat_ops[HEADER_LAST_FEATURE] = {
 	FEAT_OPR(CPU_PMU_CAPS,	cpu_pmu_caps,	false),
 	FEAT_OPR(CLOCK_DATA,	clock_data,	false),
 	FEAT_OPN(HYBRID_TOPOLOGY,	hybrid_topology,	true),
+	FEAT_OPR(HYBRID_CPU_PMU_CAPS,	hybrid_cpu_pmu_caps,	false),
 };
 
 struct header_print_data {
diff --git a/tools/perf/util/header.h b/tools/perf/util/header.h
index 3f12ec0eb84e..ae6b1cf19a7d 100644
--- a/tools/perf/util/header.h
+++ b/tools/perf/util/header.h
@@ -46,6 +46,7 @@ enum {
 	HEADER_CPU_PMU_CAPS,
 	HEADER_CLOCK_DATA,
 	HEADER_HYBRID_TOPOLOGY,
+	HEADER_HYBRID_CPU_PMU_CAPS,
 	HEADER_LAST_FEATURE,
 	HEADER_FEAT_BITS	= 256,
 };
-- 
2.17.1



* Re: [PATCH v3 2/2] perf header: Support HYBRID_CPU_PMU_CAPS feature
  2021-05-11  5:30 ` [PATCH v3 2/2] perf header: Support HYBRID_CPU_PMU_CAPS feature Jin Yao
@ 2021-05-14  8:16   ` Jiri Olsa
  2021-05-14  8:25     ` Jin, Yao
  0 siblings, 1 reply; 5+ messages in thread
From: Jiri Olsa @ 2021-05-14  8:16 UTC (permalink / raw)
  To: Jin Yao
  Cc: acme, jolsa, peterz, mingo, alexander.shishkin, Linux-kernel, ak,
	kan.liang, yao.jin

On Tue, May 11, 2021 at 01:30:03PM +0800, Jin Yao wrote:

SNIP

> diff --git a/tools/perf/Documentation/perf.data-file-format.txt b/tools/perf/Documentation/perf.data-file-format.txt
> index fbee9e580ee4..e6ff8c898ada 100644
> --- a/tools/perf/Documentation/perf.data-file-format.txt
> +++ b/tools/perf/Documentation/perf.data-file-format.txt
> @@ -419,6 +419,22 @@ Example:
>    cpu_core cpu list : 0-15
>    cpu_atom cpu list : 16-23
>  
> +	HEADER_HYBRID_CPU_PMU_CAPS = 31,
> +
> +	A list of hybrid CPU PMU capabilities.
> +
> +struct {
> +	u32 nr_pmu;
> +	struct {
> +		u32 nr_cpu_pmu_caps;
> +		{
> +			char	name[];
> +			char	value[];
> +		} [nr_cpu_pmu_caps];
> +		char pmu_name[];
> +	} [nr_pmu];
> +};

when I saw it's similar to the previous one I thought we could have
one big hybrid feature.. but that would probably be more complex and
we might not be able to reuse the code as much as you did


>  free_value:
> @@ -3142,6 +3208,64 @@ static int process_cpu_pmu_caps(struct feat_fd *ff,
>  	return -1;
>  }
>  
> +static int process_cpu_pmu_caps(struct feat_fd *ff,
> +				void *data __maybe_unused)
> +{
> +	int ret;
> +
> +	ret = process_per_cpu_pmu_caps(ff, &ff->ph->env.nr_cpu_pmu_caps,
> +				       &ff->ph->env.cpu_pmu_caps,
> +				       &ff->ph->env.max_branches);
> +	return ret;

why the 'ret' var? could be just:

   return process_per_cpu_pmu_caps(...

> +}
> +
> +static int process_hybrid_cpu_pmu_caps(struct feat_fd *ff,
> +				       void *data __maybe_unused)
> +{
> +	struct hybrid_cpc_node *nodes;
> +	u32 nr_pmu, i;
> +	int ret;
> +
> +	if (do_read_u32(ff, &nr_pmu))
> +		return -1;
> +
> +	if (!nr_pmu) {
> +		pr_debug("hybrid cpu pmu capabilities not available\n");
> +		return 0;
> +	}
> +
> +	nodes = zalloc(sizeof(*nodes) * nr_pmu);
> +	if (!nodes)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < nr_pmu; i++) {
> +		struct hybrid_cpc_node *n = &nodes[i];
> +
> +		ret = process_per_cpu_pmu_caps(ff, &n->nr_cpu_pmu_caps,
> +					       &n->cpu_pmu_caps,
> +					       &n->max_branches);
> +		if (ret)
> +			goto err;
> +
> +		n->pmu_name = do_read_string(ff);
> +		if (!n->pmu_name)

should you set 'ret = -1' in here?

other than this both patches look good to me

thanks,
jirka

> +			goto err;
> +	}
> +
> +	ff->ph->env.nr_hybrid_cpc_nodes = nr_pmu;
> +	ff->ph->env.hybrid_cpc_nodes = nodes;
> +	return 0;
> +
> +err:
> +	for (i = 0; i < nr_pmu; i++) {
> +		free(nodes[i].cpu_pmu_caps);

SNIP



* Re: [PATCH v3 2/2] perf header: Support HYBRID_CPU_PMU_CAPS feature
  2021-05-14  8:16   ` Jiri Olsa
@ 2021-05-14  8:25     ` Jin, Yao
  0 siblings, 0 replies; 5+ messages in thread
From: Jin, Yao @ 2021-05-14  8:25 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: acme, jolsa, peterz, mingo, alexander.shishkin, Linux-kernel, ak,
	kan.liang, yao.jin

Hi Jiri,

On 5/14/2021 4:16 PM, Jiri Olsa wrote:
> On Tue, May 11, 2021 at 01:30:03PM +0800, Jin Yao wrote:
> 
> SNIP
> 
>> diff --git a/tools/perf/Documentation/perf.data-file-format.txt b/tools/perf/Documentation/perf.data-file-format.txt
>> index fbee9e580ee4..e6ff8c898ada 100644
>> --- a/tools/perf/Documentation/perf.data-file-format.txt
>> +++ b/tools/perf/Documentation/perf.data-file-format.txt
>> @@ -419,6 +419,22 @@ Example:
>>     cpu_core cpu list : 0-15
>>     cpu_atom cpu list : 16-23
>>   
>> +	HEADER_HYBRID_CPU_PMU_CAPS = 31,
>> +
>> +	A list of hybrid CPU PMU capabilities.
>> +
>> +struct {
>> +	u32 nr_pmu;
>> +	struct {
>> +		u32 nr_cpu_pmu_caps;
>> +		{
>> +			char	name[];
>> +			char	value[];
>> +		} [nr_cpu_pmu_caps];
>> +		char pmu_name[];
>> +	} [nr_pmu];
>> +};
> 
> when I saw it's similar to the previous one I thought we could have
> one big hybrid feature.. but that would be probably more complex and
> we might not be able to reuse the code as much as you did
> 

Yes. Actually I had the same idea before but as you said the code would be more complex.

> 
>>   free_value:
>> @@ -3142,6 +3208,64 @@ static int process_cpu_pmu_caps(struct feat_fd *ff,
>>   	return -1;
>>   }
>>   
>> +static int process_cpu_pmu_caps(struct feat_fd *ff,
>> +				void *data __maybe_unused)
>> +{
>> +	int ret;
>> +
>> +	ret = process_per_cpu_pmu_caps(ff, &ff->ph->env.nr_cpu_pmu_caps,
>> +				       &ff->ph->env.cpu_pmu_caps,
>> +				       &ff->ph->env.max_branches);
>> +	return ret;
> 
> why the 'ret' var? could be just:
> 
>     return process_per_cpu_pmu_caps(...
> 

OK, I will fix it in v4.

>> +}
>> +
>> +static int process_hybrid_cpu_pmu_caps(struct feat_fd *ff,
>> +				       void *data __maybe_unused)
>> +{
>> +	struct hybrid_cpc_node *nodes;
>> +	u32 nr_pmu, i;
>> +	int ret;
>> +
>> +	if (do_read_u32(ff, &nr_pmu))
>> +		return -1;
>> +
>> +	if (!nr_pmu) {
>> +		pr_debug("hybrid cpu pmu capabilities not available\n");
>> +		return 0;
>> +	}
>> +
>> +	nodes = zalloc(sizeof(*nodes) * nr_pmu);
>> +	if (!nodes)
>> +		return -ENOMEM;
>> +
>> +	for (i = 0; i < nr_pmu; i++) {
>> +		struct hybrid_cpc_node *n = &nodes[i];
>> +
>> +		ret = process_per_cpu_pmu_caps(ff, &n->nr_cpu_pmu_caps,
>> +					       &n->cpu_pmu_caps,
>> +					       &n->max_branches);
>> +		if (ret)
>> +			goto err;
>> +
>> +		n->pmu_name = do_read_string(ff);
>> +		if (!n->pmu_name)
> 
> should you set 'ret = -1' in here?
> 

Yes, I should add 'ret = -1' before 'n->pmu_name = do_read_string(ff);'.
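
Roughly, the fixed loop body in v4 would then be (sketch only):

  		ret = process_per_cpu_pmu_caps(ff, &n->nr_cpu_pmu_caps,
  					       &n->cpu_pmu_caps,
  					       &n->max_branches);
  		if (ret)
  			goto err;

  		ret = -1;
  		n->pmu_name = do_read_string(ff);
  		if (!n->pmu_name)
  			goto err;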

> other than this both patches look good to me
> 

Thanks, I will prepare v4 soon.

Thanks
Jin Yao

> thanks,
> jirka
> 
>> +			goto err;
>> +	}
>> +
>> +	ff->ph->env.nr_hybrid_cpc_nodes = nr_pmu;
>> +	ff->ph->env.hybrid_cpc_nodes = nodes;
>> +	return 0;
>> +
>> +err:
>> +	for (i = 0; i < nr_pmu; i++) {
>> +		free(nodes[i].cpu_pmu_caps);
> 
> SNIP
> 

