* [PATCH v4 0/3] JSON output for perf stat
@ 2022-05-25  5:38 Ian Rogers
  2022-05-25  5:38 ` [PATCH v4 1/3] perf test: Add checking for perf stat CSV output Ian Rogers
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Ian Rogers @ 2022-05-25  5:38 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Kan Liang, Zhengjun Xing, Sandipan Das, Claire Jensen,
	Alyssa Ross, Like Xu, James Clark, Florian Fischer, linux-kernel,
	linux-perf-users, Claire Jensen
  Cc: Stephane Eranian, Ian Rogers

Parsing the CSV or text output of perf stat can be problematic when
new output is added (new columns in the CSV format). JSON names each
value and so simplifies the job of parsing. Add a JSON output option
to perf-stat, then add a unit test that parses and validates the
output.
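
As a rough illustration (not part of these patches), the per-line JSON
output can be consumed with a few lines of Python; this sketch assumes a
perf binary built with the new -j option and relies on perf stat writing
its output to stderr:

  import json
  import subprocess

  # Run perf stat with JSON output; each non-empty stderr line is one
  # JSON object whose keys are described in the documentation patch.
  proc = subprocess.run(['perf', 'stat', '-j', 'true'],
                        capture_output=True, text=True)
  for line in proc.stderr.splitlines():
      if not line.strip():
          continue
      rec = json.loads(line)
      # Metric-only continuation lines may carry no "event" key.
      if 'event' in rec:
          print(rec['event'], rec['counter-value'])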

This is a resend of two v2 patches:
https://lore.kernel.org/lkml/20210813220754.2104922-1-cjense@google.com/
https://lore.kernel.org/lkml/20210813220936.2105426-1-cjense@google.com/
with a few formatting changes and improvements to the linter.

The CSV test/linter is also added to ensure that CSV output doesn't regress:
https://lore.kernel.org/lkml/20210813192108.2087512-1-cjense@google.com/

v4. Makes some minor fixes to the JSON linter.

v3. Tidies up some of the CSV code, including a potential memory
    overrun in the os.nfields setup caught by sanitizers. To
    facilitate this, an AGGR_MAX value is added. v3 also adds the CSV
    testing.

v2. Fixes the system wide no aggregation test so that it is skipped
    when perf_event_paranoid is too restrictive. It also makes the
    counter-value check handle the "<not counted>" and "<not supported>"
    cases.

Claire Jensen (3):
  perf test: Add checking for perf stat CSV output.
  perf stat: Add JSON output option
  perf test: Json format checking

 tools/perf/Documentation/perf-stat.txt        |  21 +
 tools/perf/builtin-stat.c                     |   6 +
 .../tests/shell/lib/perf_csv_output_lint.py   |  48 +++
 .../tests/shell/lib/perf_json_output_lint.py  |  94 +++++
 tools/perf/tests/shell/stat+csv_output.sh     | 147 +++++++
 tools/perf/tests/shell/stat+json_output.sh    | 147 +++++++
 tools/perf/util/stat-display.c                | 384 +++++++++++++-----
 tools/perf/util/stat.c                        |   1 +
 tools/perf/util/stat.h                        |   2 +
 9 files changed, 744 insertions(+), 106 deletions(-)
 create mode 100644 tools/perf/tests/shell/lib/perf_csv_output_lint.py
 create mode 100644 tools/perf/tests/shell/lib/perf_json_output_lint.py
 create mode 100755 tools/perf/tests/shell/stat+csv_output.sh
 create mode 100755 tools/perf/tests/shell/stat+json_output.sh

-- 
2.36.1.124.g0e6072fb45-goog



* [PATCH v4 1/3] perf test: Add checking for perf stat CSV output.
  2022-05-25  5:38 [PATCH v4 0/3] JSON output for perf stat Ian Rogers
@ 2022-05-25  5:38 ` Ian Rogers
  2022-05-25 11:07   ` Arnaldo Carvalho de Melo
  2022-05-25  5:38 ` [PATCH v4 2/3] perf stat: Add JSON output option Ian Rogers
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Ian Rogers @ 2022-05-25  5:38 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Kan Liang, Zhengjun Xing, Sandipan Das, Claire Jensen,
	Alyssa Ross, Like Xu, James Clark, Florian Fischer, linux-kernel,
	linux-perf-users, Claire Jensen
  Cc: Stephane Eranian

From: Claire Jensen <cjense@google.com>

Counts expected fields for various commands. No testing added for
summary mode since it is broken.

An example of the summary output is:

         summary,263831,,instructions:u,1435072,100.0,0.46,insn per cycle
,,,,,1.37,stalled cycles per insn

This should be:

         summary,263831,,instructions:u,1435072,100.0,0.46,insn per cycle
         summary,,,,,,1.37,stalled cycles per insn

The output has 7 fields when it should have 8. Additionally, the newline
spacing is wrong, so it was excluded from testing until a fix is made.
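
For reference, and purely as an illustration mirroring check_event() in
the shell test added below (assuming python3 is available), the linter
can also be run by hand:

  perf stat -x, -e cpu-clock true 2>&1 | \
    python3 tools/perf/tests/shell/lib/perf_csv_output_lint.py --event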

Signed-off-by: Claire Jensen <cjense@google.com>
---
 .../tests/shell/lib/perf_csv_output_lint.py   |  48 ++++++
 tools/perf/tests/shell/stat+csv_output.sh     | 147 ++++++++++++++++++
 2 files changed, 195 insertions(+)
 create mode 100644 tools/perf/tests/shell/lib/perf_csv_output_lint.py
 create mode 100755 tools/perf/tests/shell/stat+csv_output.sh

diff --git a/tools/perf/tests/shell/lib/perf_csv_output_lint.py b/tools/perf/tests/shell/lib/perf_csv_output_lint.py
new file mode 100644
index 000000000000..714f283cfb1b
--- /dev/null
+++ b/tools/perf/tests/shell/lib/perf_csv_output_lint.py
@@ -0,0 +1,48 @@
+#!/usr/bin/python
+# SPDX-License-Identifier: GPL-2.0
+
+import argparse
+import sys
+
+# Basic sanity check of perf CSV output as specified in the man page.
+# Currently just checks the number of fields per line in output.
+
+ap = argparse.ArgumentParser()
+ap.add_argument('--no-args', action='store_true')
+ap.add_argument('--interval', action='store_true')
+ap.add_argument('--system-wide-no-aggr', action='store_true')
+ap.add_argument('--system-wide', action='store_true')
+ap.add_argument('--event', action='store_true')
+ap.add_argument('--per-core', action='store_true')
+ap.add_argument('--per-thread', action='store_true')
+ap.add_argument('--per-die', action='store_true')
+ap.add_argument('--per-node', action='store_true')
+ap.add_argument('--per-socket', action='store_true')
+ap.add_argument('--separator', default=',', nargs='?')
+args = ap.parse_args()
+
+Lines = sys.stdin.readlines()
+
+def check_csv_output(exp):
+  for line in Lines:
+    if 'failed' not in line:
+      count = line.count(args.separator)
+      if count != exp:
+        sys.stdout.write(''.join(Lines))
+        raise RuntimeError(f'wrong number of fields. expected {exp} in {line}')
+
+try:
+  if args.no_args or args.system_wide or args.event:
+    expected_items = 6
+  elif args.interval or args.per_thread or args.system_wide_no_aggr:
+    expected_items = 7
+  elif args.per_core or args.per_socket or args.per_node or args.per_die:
+    expected_items = 8
+  else:
+    ap.print_help()
+    raise RuntimeError('No checking option specified')
+  check_csv_output(expected_items)
+
+except:
+  sys.stdout.write('Test failed for input: ' + ''.join(Lines))
+  raise
diff --git a/tools/perf/tests/shell/stat+csv_output.sh b/tools/perf/tests/shell/stat+csv_output.sh
new file mode 100755
index 000000000000..82c25e9c7f21
--- /dev/null
+++ b/tools/perf/tests/shell/stat+csv_output.sh
@@ -0,0 +1,147 @@
+#!/bin/bash
+# perf stat CSV output linter
+# SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+# Tests various perf stat CSV output commands for the
+# correct number of fields and the CSV separator set to ','.
+
+set -e
+
+pythonchecker=$(dirname $0)/lib/perf_csv_output_lint.py
+if [ "x$PYTHON" == "x" ]
+then
+	if which python3 > /dev/null
+	then
+		PYTHON=python3
+	elif which python > /dev/null
+	then
+		PYTHON=python
+	else
+		echo Skipping test, python not detected please set environment variable PYTHON.
+		exit 2
+	fi
+fi
+
+# Return true if perf_event_paranoid is > $1 and not running as root.
+function ParanoidAndNotRoot()
+{
+	 [ $(id -u) != 0 ] && [ $(cat /proc/sys/kernel/perf_event_paranoid) -gt $1 ]
+}
+
+check_no_args()
+{
+	echo -n "Checking CSV output: no args "
+	perf stat -x, true 2>&1 | $PYTHON $pythonchecker --no-args
+	echo "[Success]"
+}
+
+check_system_wide()
+{
+	echo -n "Checking CSV output: system wide "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	perf stat -x, -a true 2>&1 | $PYTHON $pythonchecker --system-wide
+	echo "[Success]"
+}
+
+check_system_wide_no_aggr()
+{
+	echo -n "Checking CSV output: system wide "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	echo -n "Checking CSV output: system wide no aggregation "
+	perf stat -x, -A -a --no-merge true 2>&1 | $PYTHON $pythonchecker --system-wide-no-aggr
+	echo "[Success]"
+}
+
+check_interval()
+{
+	echo -n "Checking CSV output: interval "
+	perf stat -x, -I 1000 true 2>&1 | $PYTHON $pythonchecker --interval
+	echo "[Success]"
+}
+
+
+check_event()
+{
+	echo -n "Checking CSV output: event "
+	perf stat -x, -e cpu-clock true 2>&1 | $PYTHON $pythonchecker --event
+	echo "[Success]"
+}
+
+check_per_core()
+{
+	echo -n "Checking CSV output: per core "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	perf stat -x, --per-core -a true 2>&1 | $PYTHON $pythonchecker --per-core
+	echo "[Success]"
+}
+
+check_per_thread()
+{
+	echo -n "Checking CSV output: per thread "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	perf stat -x, --per-thread -a true 2>&1 | $PYTHON $pythonchecker --per-thread
+	echo "[Success]"
+}
+
+check_per_die()
+{
+	echo -n "Checking CSV output: per die "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	perf stat -x, --per-die -a true 2>&1 | $PYTHON $pythonchecker --per-die
+	echo "[Success]"
+}
+
+check_per_node()
+{
+	echo -n "Checking CSV output: per node "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	perf stat -x, --per-node -a true 2>&1 | $PYTHON $pythonchecker --per-node
+	echo "[Success]"
+}
+
+check_per_socket()
+{
+	echo -n "Checking CSV output: per socket "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	perf stat -x, --per-socket -a true 2>&1 | $PYTHON $pythonchecker --per-socket
+	echo "[Success]"
+}
+
+check_no_args
+check_system_wide
+check_system_wide_no_aggr
+check_interval
+check_event
+check_per_core
+check_per_thread
+check_per_die
+check_per_node
+check_per_socket
+exit 0
-- 
2.36.1.124.g0e6072fb45-goog



* [PATCH v4 2/3] perf stat: Add JSON output option
  2022-05-25  5:38 [PATCH v4 0/3] JSON output for perf stat Ian Rogers
  2022-05-25  5:38 ` [PATCH v4 1/3] perf test: Add checking for perf stat CSV output Ian Rogers
@ 2022-05-25  5:38 ` Ian Rogers
  2022-05-31 22:46   ` Namhyung Kim
  2022-05-25  5:38 ` [PATCH v4 3/3] perf test: Json format checking Ian Rogers
  2022-05-25 11:18 ` [PATCH v4 0/3] JSON output for perf stat Arnaldo Carvalho de Melo
  3 siblings, 1 reply; 11+ messages in thread
From: Ian Rogers @ 2022-05-25  5:38 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Kan Liang, Zhengjun Xing, Sandipan Das, Claire Jensen,
	Alyssa Ross, Like Xu, James Clark, Florian Fischer, linux-kernel,
	linux-perf-users, Claire Jensen
  Cc: Stephane Eranian

From: Claire Jensen <cjense@google.com>

CSV output is tricky to format and changes to the column layout can
easily break parsers. The new JSON-formatted output names each field
with consistent and informative keys, making the output straightforward
to parse.

CSV output example:

1.20,msec,task-clock:u,1204272,100.00,0.697,CPUs utilized
0,,context-switches:u,1204272,100.00,0.000,/sec
0,,cpu-migrations:u,1204272,100.00,0.000,/sec
70,,page-faults:u,1204272,100.00,58.126,K/sec

JSON output example:

{"counter-value" : "3805.723968", "unit" : "msec", "event" :
"cpu-clock", "event-runtime" : 3805731510100.00, "pcnt-running"
: 100.00, "metric-value" : 4.007571, "metric-unit" : "CPUs utilized"}
{"counter-value" : "6166.000000", "unit" : "", "event" :
"context-switches", "event-runtime" : 3805723045100.00, "pcnt-running"
: 100.00, "metric-value" : 1.620191, "metric-unit" : "K/sec"}
{"counter-value" : "466.000000", "unit" : "", "event" :
"cpu-migrations", "event-runtime" : 3805727613100.00, "pcnt-running"
: 100.00, "metric-value" : 122.447136, "metric-unit" : "/sec"}
{"counter-value" : "208.000000", "unit" : "", "event" :
"page-faults", "event-runtime" : 3805726799100.00, "pcnt-running"
: 100.00, "metric-value" : 54.654516, "metric-unit" : "/sec"}
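
As an illustration only, each of the lines above is a standalone JSON
object, so it can be loaded directly, e.g. with Python's json module:

  import json

  # The cpu-migrations example line from above.
  line = ('{"counter-value" : "466.000000", "unit" : "", "event" : '
          '"cpu-migrations", "event-runtime" : 3805727613100.00, '
          '"pcnt-running" : 100.00, "metric-value" : 122.447136, '
          '"metric-unit" : "/sec"}')
  rec = json.loads(line)
  print(rec['event'], float(rec['counter-value']))  # cpu-migrations 466.0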

Also add documentation for the JSON option.
There is some tidy-up of the CSV code, including a potential memory
overrun in the os.nfields setup. To facilitate this, an AGGR_MAX value
is added.

Signed-off-by: Claire Jensen <cjense@google.com>
---
 tools/perf/Documentation/perf-stat.txt |  21 ++
 tools/perf/builtin-stat.c              |   6 +
 tools/perf/util/stat-display.c         | 384 ++++++++++++++++++-------
 tools/perf/util/stat.c                 |   1 +
 tools/perf/util/stat.h                 |   2 +
 5 files changed, 308 insertions(+), 106 deletions(-)

diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt
index 8d1cde00b8d6..f9cdfd912b05 100644
--- a/tools/perf/Documentation/perf-stat.txt
+++ b/tools/perf/Documentation/perf-stat.txt
@@ -570,6 +570,27 @@ Additional metrics may be printed with all earlier fields being empty.
 
 include::intel-hybrid.txt[]
 
+JSON FORMAT
+-----------
+
+With -j, perf stat is able to print out a JSON format output
+that can be used for parsing.
+
+- timestamp : optional usec time stamp in fractions of second (with -I)
+- optional aggregate options:
+		- core : core identifier (with --per-core)
+		- die : die identifier (with --per-die)
+		- socket : socket identifier (with --per-socket)
+		- node : node identifier (with --per-node)
+		- thread : thread identifier (with --per-thread)
+- counter-value : counter value
+- unit : unit of the counter value or empty
+- event : event name
+- variance : optional variance if multiple values are collected (with -r)
+- runtime : run time of counter
+- metric-value : optional metric value
+- metric-unit : optional unit of metric
+
 SEE ALSO
 --------
 linkperf:perf-top[1], linkperf:perf-list[1]
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 7e6cc8bdf061..b6972e18688f 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -1259,6 +1259,8 @@ static struct option stat_options[] = {
 		    "Merge identical named hybrid events"),
 	OPT_STRING('x', "field-separator", &stat_config.csv_sep, "separator",
 		   "print counts with custom separator"),
+	OPT_BOOLEAN('j', "json-output", &stat_config.json_output,
+		   "print counts in JSON format"),
 	OPT_CALLBACK('G', "cgroup", &evsel_list, "name",
 		     "monitor event in cgroup name only", parse_stat_cgroups),
 	OPT_STRING(0, "for-each-cgroup", &stat_config.cgroup_list, "name",
@@ -1445,6 +1447,7 @@ static aggr_cpu_id_get_t aggr_mode__get_aggr(enum aggr_mode aggr_mode)
 	case AGGR_GLOBAL:
 	case AGGR_THREAD:
 	case AGGR_UNSET:
+	case AGGR_MAX:
 	default:
 		return NULL;
 	}
@@ -1469,6 +1472,7 @@ static aggr_get_id_t aggr_mode__get_id(enum aggr_mode aggr_mode)
 	case AGGR_GLOBAL:
 	case AGGR_THREAD:
 	case AGGR_UNSET:
+	case AGGR_MAX:
 	default:
 		return NULL;
 	}
@@ -1619,6 +1623,7 @@ static aggr_cpu_id_get_t aggr_mode__get_aggr_file(enum aggr_mode aggr_mode)
 	case AGGR_GLOBAL:
 	case AGGR_THREAD:
 	case AGGR_UNSET:
+	case AGGR_MAX:
 	default:
 		return NULL;
 	}
@@ -1639,6 +1644,7 @@ static aggr_get_id_t aggr_mode__get_id_file(enum aggr_mode aggr_mode)
 	case AGGR_GLOBAL:
 	case AGGR_THREAD:
 	case AGGR_UNSET:
+	case AGGR_MAX:
 	default:
 		return NULL;
 	}
diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
index 606f09b09226..2bbd11446fa9 100644
--- a/tools/perf/util/stat-display.c
+++ b/tools/perf/util/stat-display.c
@@ -28,15 +28,21 @@
 static void print_running(struct perf_stat_config *config,
 			  u64 run, u64 ena)
 {
-	if (config->csv_output) {
-		fprintf(config->output, "%s%" PRIu64 "%s%.2f",
-					config->csv_sep,
-					run,
-					config->csv_sep,
-					ena ? 100.0 * run / ena : 100.0);
-	} else if (run != ena) {
+
+	double enabled_percent = 100;
+
+	if (run != ena)
+		enabled_percent = 100 * run / ena;
+	if (config->json_output)
+		fprintf(config->output,
+			"\"event-runtime\" : %lu, \"pcnt-running\" : %.2f, ",
+			run, enabled_percent);
+	else if (config->csv_output)
+		fprintf(config->output,
+			"%s%" PRIu64 "%s%.2f", config->csv_sep,
+			run, config->csv_sep, enabled_percent);
+	else if (run != ena)
 		fprintf(config->output, "  (%.2f%%)", 100.0 * run / ena);
-	}
 }
 
 static void print_noise_pct(struct perf_stat_config *config,
@@ -44,7 +50,9 @@ static void print_noise_pct(struct perf_stat_config *config,
 {
 	double pct = rel_stddev_stats(total, avg);
 
-	if (config->csv_output)
+	if (config->json_output)
+		fprintf(config->output, "\"variance\" : %.2f, ", pct);
+	else if (config->csv_output)
 		fprintf(config->output, "%s%.2f%%", config->csv_sep, pct);
 	else if (pct)
 		fprintf(config->output, "  ( +-%6.2f%% )", pct);
@@ -66,7 +74,11 @@ static void print_cgroup(struct perf_stat_config *config, struct evsel *evsel)
 {
 	if (nr_cgroups) {
 		const char *cgrp_name = evsel->cgrp ? evsel->cgrp->name  : "";
-		fprintf(config->output, "%s%s", config->csv_sep, cgrp_name);
+
+		if (config->json_output)
+			fprintf(config->output, "\"cgroup\" : \"%s\", ", cgrp_name);
+		else
+			fprintf(config->output, "%s%s", config->csv_sep, cgrp_name);
 	}
 }
 
@@ -74,69 +86,123 @@ static void print_cgroup(struct perf_stat_config *config, struct evsel *evsel)
 static void aggr_printout(struct perf_stat_config *config,
 			  struct evsel *evsel, struct aggr_cpu_id id, int nr)
 {
+
+
+	if (config->json_output && !config->interval)
+		fprintf(config->output, "{");
+
 	switch (config->aggr_mode) {
 	case AGGR_CORE:
-		fprintf(config->output, "S%d-D%d-C%*d%s%*d%s",
-			id.socket,
-			id.die,
-			config->csv_output ? 0 : -8,
-			id.core,
-			config->csv_sep,
-			config->csv_output ? 0 : 4,
-			nr,
-			config->csv_sep);
+		if (config->json_output) {
+			fprintf(config->output,
+				"\"core\" : \"S%d-D%d-C%d\", \"aggregate-number\" : %d, ",
+				id.socket,
+				id.die,
+				id.core,
+				nr);
+		} else {
+			fprintf(config->output, "S%d-D%d-C%*d%s%*d%s",
+				id.socket,
+				id.die,
+				config->csv_output ? 0 : -8,
+				id.core,
+				config->csv_sep,
+				config->csv_output ? 0 : 4,
+				nr,
+				config->csv_sep);
+		}
 		break;
 	case AGGR_DIE:
-		fprintf(config->output, "S%d-D%*d%s%*d%s",
-			id.socket,
-			config->csv_output ? 0 : -8,
-			id.die,
-			config->csv_sep,
-			config->csv_output ? 0 : 4,
-			nr,
-			config->csv_sep);
+		if (config->json_output) {
+			fprintf(config->output,
+				"\"die\" : \"S%d-D%d\", \"aggregate-number\" : %d, ",
+				id.socket,
+				id.die,
+				nr);
+		} else {
+			fprintf(config->output, "S%d-D%*d%s%*d%s",
+				id.socket,
+				config->csv_output ? 0 : -8,
+				id.die,
+				config->csv_sep,
+				config->csv_output ? 0 : 4,
+				nr,
+				config->csv_sep);
+		}
 		break;
 	case AGGR_SOCKET:
-		fprintf(config->output, "S%*d%s%*d%s",
-			config->csv_output ? 0 : -5,
-			id.socket,
-			config->csv_sep,
-			config->csv_output ? 0 : 4,
-			nr,
-			config->csv_sep);
-			break;
+		if (config->json_output) {
+			fprintf(config->output,
+				"\"socket\" : \"S%d\", \"aggregate-number\" : %d, ",
+				id.socket,
+				nr);
+		} else {
+			fprintf(config->output, "S%*d%s%*d%s",
+				config->csv_output ? 0 : -5,
+				id.socket,
+				config->csv_sep,
+				config->csv_output ? 0 : 4,
+				nr,
+				config->csv_sep);
+		}
+		break;
 	case AGGR_NODE:
-		fprintf(config->output, "N%*d%s%*d%s",
-			config->csv_output ? 0 : -5,
-			id.node,
-			config->csv_sep,
-			config->csv_output ? 0 : 4,
-			nr,
-			config->csv_sep);
-			break;
+		if (config->json_output) {
+			fprintf(config->output, "\"node\" : \"N%d\", \"aggregate-number\" : %d, ",
+				id.node,
+				nr);
+		} else {
+			fprintf(config->output, "N%*d%s%*d%s",
+				config->csv_output ? 0 : -5,
+				id.node,
+				config->csv_sep,
+				config->csv_output ? 0 : 4,
+				nr,
+				config->csv_sep);
+		}
+		break;
 	case AGGR_NONE:
-		if (evsel->percore && !config->percore_show_thread) {
-			fprintf(config->output, "S%d-D%d-C%*d%s",
-				id.socket,
-				id.die,
-				config->csv_output ? 0 : -3,
-				id.core, config->csv_sep);
-		} else if (id.cpu.cpu > -1) {
-			fprintf(config->output, "CPU%*d%s",
-				config->csv_output ? 0 : -7,
-				id.cpu.cpu, config->csv_sep);
+		if (config->json_output) {
+			if (evsel->percore && !config->percore_show_thread) {
+				fprintf(config->output, "\"core\" : \"S%d-D%d-C%d\"",
+					id.socket,
+					id.die,
+					id.core);
+			} else if (id.core > -1) {
+				fprintf(config->output, "\"cpu\" : \"%d\", ",
+					id.cpu.cpu);
+			}
+		} else {
+			if (evsel->percore && !config->percore_show_thread) {
+				fprintf(config->output, "S%d-D%d-C%*d%s",
+					id.socket,
+					id.die,
+					config->csv_output ? 0 : -3,
+					id.core, config->csv_sep);
+			} else if (id.core > -1) {
+				fprintf(config->output, "CPU%*d%s",
+					config->csv_output ? 0 : -7,
+					id.cpu.cpu, config->csv_sep);
+			}
 		}
 		break;
 	case AGGR_THREAD:
-		fprintf(config->output, "%*s-%*d%s",
-			config->csv_output ? 0 : 16,
-			perf_thread_map__comm(evsel->core.threads, id.thread),
-			config->csv_output ? 0 : -8,
-			perf_thread_map__pid(evsel->core.threads, id.thread),
-			config->csv_sep);
+		if (config->json_output) {
+			fprintf(config->output, "\"thread\" : \"%s-%d\", ",
+				perf_thread_map__comm(evsel->core.threads, id.thread),
+				perf_thread_map__pid(evsel->core.threads, id.thread));
+		} else {
+			fprintf(config->output, "%*s-%*d%s",
+				config->csv_output ? 0 : 16,
+				perf_thread_map__comm(evsel->core.threads, id.thread),
+				config->csv_output ? 0 : -8,
+				perf_thread_map__pid(evsel->core.threads, id.thread),
+				config->csv_sep);
+		}
 		break;
 	case AGGR_GLOBAL:
 	case AGGR_UNSET:
+	case AGGR_MAX:
 	default:
 		break;
 	}
@@ -234,6 +300,31 @@ static void print_metric_csv(struct perf_stat_config *config __maybe_unused,
 	fprintf(out, "%s%s%s%s", config->csv_sep, vals, config->csv_sep, skip_spaces(unit));
 }
 
+static void print_metric_json(struct perf_stat_config *config __maybe_unused,
+			     void *ctx,
+			     const char *color __maybe_unused,
+			     const char *fmt __maybe_unused,
+			     const char *unit, double val)
+{
+	struct outstate *os = ctx;
+	FILE *out = os->fh;
+
+	fprintf(out, "\"metric-value\" : %f, ", val);
+	fprintf(out, "\"metric-unit\" : \"%s\"", unit);
+	if (!config->metric_only)
+		fprintf(out, "}");
+}
+
+static void new_line_json(struct perf_stat_config *config, void *ctx)
+{
+	struct outstate *os = ctx;
+
+	fputc('\n', os->fh);
+	if (os->prefix)
+		fprintf(os->fh, "%s", os->prefix);
+	aggr_printout(config, os->evsel, os->id, os->nr);
+}
+
 /* Filter out some columns that don't work well in metrics only mode */
 
 static bool valid_only_metric(const char *unit)
@@ -300,6 +391,27 @@ static void print_metric_only_csv(struct perf_stat_config *config __maybe_unused
 	fprintf(out, "%s%s", vals, config->csv_sep);
 }
 
+static void print_metric_only_json(struct perf_stat_config *config __maybe_unused,
+				  void *ctx, const char *color __maybe_unused,
+				  const char *fmt,
+				  const char *unit, double val)
+{
+	struct outstate *os = ctx;
+	FILE *out = os->fh;
+	char buf[64], *vals, *ends;
+	char tbuf[1024];
+
+	if (!valid_only_metric(unit))
+		return;
+	unit = fixunit(tbuf, os->evsel, unit);
+	snprintf(buf, sizeof(buf), fmt, val);
+	ends = vals = skip_spaces(buf);
+	while (isdigit(*ends) || *ends == '.')
+		ends++;
+	*ends = 0;
+	fprintf(out, "{\"metric-value\" : \"%s\"}", vals);
+}
+
 static void new_line_metric(struct perf_stat_config *config __maybe_unused,
 			    void *ctx __maybe_unused)
 {
@@ -318,10 +430,13 @@ static void print_metric_header(struct perf_stat_config *config,
 	    os->evsel->priv != os->evsel->evlist->selected->priv)
 		return;
 
-	if (!valid_only_metric(unit))
+	if (!valid_only_metric(unit) && !config->json_output)
 		return;
 	unit = fixunit(tbuf, os->evsel, unit);
-	if (config->csv_output)
+
+	if (config->json_output)
+		fprintf(os->fh, "\"unit\" : \"%s\"", unit);
+	else if (config->csv_output)
 		fprintf(os->fh, "%s%s", unit, config->csv_sep);
 	else
 		fprintf(os->fh, "%*s ", config->metric_only_len, unit);
@@ -367,14 +482,28 @@ static void abs_printout(struct perf_stat_config *config,
 
 	aggr_printout(config, evsel, id, nr);
 
-	fprintf(output, fmt, avg, config->csv_sep);
+	if (config->json_output)
+		fprintf(output, "\"counter-value\" : \"%f\", ", avg);
+	else
+		fprintf(output, fmt, avg, config->csv_sep);
+
+	if (config->json_output) {
+		if (evsel->unit) {
+			fprintf(output, "\"unit\" : \"%s\", ",
+				evsel->unit);
+		}
+	} else {
+		if (evsel->unit)
+			fprintf(output, "%-*s%s",
+				config->csv_output ? 0 : config->unit_width,
+				evsel->unit, config->csv_sep);
+	}
 
-	if (evsel->unit)
-		fprintf(output, "%-*s%s",
-			config->csv_output ? 0 : config->unit_width,
-			evsel->unit, config->csv_sep);
 
-	fprintf(output, "%-*s", config->csv_output ? 0 : 25, evsel__name(evsel));
+	if (config->json_output)
+		fprintf(output, "\"event\" : \"%s\", ", evsel__name(evsel));
+	else
+		fprintf(output, "%-*s", config->csv_output ? 0 : 25, evsel__name(evsel));
 
 	print_cgroup(config, evsel);
 }
@@ -416,34 +545,30 @@ static void printout(struct perf_stat_config *config, struct aggr_cpu_id id, int
 		.nr = nr,
 		.evsel = counter,
 	};
-	print_metric_t pm = print_metric_std;
+	print_metric_t pm;
 	new_line_t nl;
 
-	if (config->metric_only) {
-		nl = new_line_metric;
-		if (config->csv_output)
-			pm = print_metric_only_csv;
-		else
-			pm = print_metric_only;
-	} else
-		nl = new_line_std;
-
-	if (config->csv_output && !config->metric_only) {
-		static int aggr_fields[] = {
-			[AGGR_GLOBAL] = 0,
-			[AGGR_THREAD] = 1,
+	if (config->csv_output) {
+		static const int aggr_fields[AGGR_MAX] = {
 			[AGGR_NONE] = 1,
+			[AGGR_GLOBAL] = 0,
 			[AGGR_SOCKET] = 2,
 			[AGGR_DIE] = 2,
 			[AGGR_CORE] = 2,
+			[AGGR_THREAD] = 1,
+			[AGGR_UNSET] = 0,
+			[AGGR_NODE] = 0,
 		};
 
-		pm = print_metric_csv;
-		nl = new_line_csv;
-		os.nfields = 3;
-		os.nfields += aggr_fields[config->aggr_mode];
-		if (counter->cgrp)
-			os.nfields++;
+		pm = config->metric_only ? print_metric_only_csv : print_metric_csv;
+		nl = config->metric_only ? new_line_metric : new_line_csv;
+		os.nfields = 3 + aggr_fields[config->aggr_mode] + (counter->cgrp ? 1 : 0);
+	} else if (config->json_output) {
+		pm = config->metric_only ? print_metric_only_json : print_metric_json;
+		nl = config->metric_only ? new_line_metric : new_line_json;
+	} else {
+		pm = config->metric_only ? print_metric_only : print_metric_std;
+		nl = config->metric_only ? new_line_metric : new_line_std;
 	}
 
 	if (!config->no_csv_summary && config->csv_output &&
@@ -458,10 +583,15 @@ static void printout(struct perf_stat_config *config, struct aggr_cpu_id id, int
 		}
 		aggr_printout(config, counter, id, nr);
 
-		fprintf(config->output, "%*s%s",
-			config->csv_output ? 0 : 18,
-			counter->supported ? CNTR_NOT_COUNTED : CNTR_NOT_SUPPORTED,
-			config->csv_sep);
+		if (config->json_output) {
+			fprintf(config->output, "\"counter-value\" : \"%s\", ",
+					counter->supported ? CNTR_NOT_COUNTED : CNTR_NOT_SUPPORTED);
+		} else {
+			fprintf(config->output, "%*s%s",
+				config->csv_output ? 0 : 18,
+				counter->supported ? CNTR_NOT_COUNTED : CNTR_NOT_SUPPORTED,
+				config->csv_sep);
+		}
 
 		if (counter->supported) {
 			if (!evlist__has_hybrid(counter->evlist)) {
@@ -471,21 +601,32 @@ static void printout(struct perf_stat_config *config, struct aggr_cpu_id id, int
 			}
 		}
 
-		fprintf(config->output, "%-*s%s",
-			config->csv_output ? 0 : config->unit_width,
-			counter->unit, config->csv_sep);
+		if (config->json_output) {
+			fprintf(config->output, "\"unit\" : \"%s\", ", counter->unit);
+		} else {
+			fprintf(config->output, "%-*s%s",
+				config->csv_output ? 0 : config->unit_width,
+				counter->unit, config->csv_sep);
+		}
 
-		fprintf(config->output, "%*s",
-			config->csv_output ? 0 : -25, evsel__name(counter));
+		if (config->json_output) {
+			fprintf(config->output, "\"event\" : \"%s\", ",
+				evsel__name(counter));
+		} else {
+			fprintf(config->output, "%*s",
+				 config->csv_output ? 0 : -25, evsel__name(counter));
+		}
 
 		print_cgroup(config, counter);
 
-		if (!config->csv_output)
+		if (!config->csv_output && !config->json_output)
 			pm(config, &os, NULL, NULL, "", 0);
 		print_noise(config, counter, noise);
 		print_running(config, run, ena);
 		if (config->csv_output)
 			pm(config, &os, NULL, NULL, "", 0);
+		else if (config->json_output)
+			pm(config, &os, NULL, NULL, "", 0);
 		return;
 	}
 
@@ -500,12 +641,15 @@ static void printout(struct perf_stat_config *config, struct aggr_cpu_id id, int
 	if (config->csv_output && !config->metric_only) {
 		print_noise(config, counter, noise);
 		print_running(config, run, ena);
+	} else if (config->json_output && !config->metric_only) {
+		print_noise(config, counter, noise);
+		print_running(config, run, ena);
 	}
 
 	perf_stat__print_shadow_stats(config, counter, uval,
 				first_shadow_cpu_map_idx(config, counter, &id),
 				&out, &config->metric_events, st);
-	if (!config->csv_output && !config->metric_only) {
+	if (!config->csv_output && !config->metric_only && !config->json_output) {
 		print_noise(config, counter, noise);
 		print_running(config, run, ena);
 	}
@@ -1004,8 +1148,12 @@ static void print_metric_headers(struct perf_stat_config *config,
 	struct outstate os = {
 		.fh = config->output
 	};
+	bool first = true;
+
+		if (config->json_output && !config->interval)
+			fprintf(config->output, "{");
 
-	if (prefix)
+	if (prefix && !config->json_output)
 		fprintf(config->output, "%s", prefix);
 
 	if (!config->csv_output && !no_indent)
@@ -1025,6 +1173,9 @@ static void print_metric_headers(struct perf_stat_config *config,
 		os.evsel = counter;
 		out.ctx = &os;
 		out.print_metric = print_metric_header;
+		if (!first && config->json_output)
+			fprintf(config->output, ", ");
+		first = false;
 		out.new_line = new_line_metric;
 		out.force_header = true;
 		perf_stat__print_shadow_stats(config, counter, 0,
@@ -1033,6 +1184,8 @@ static void print_metric_headers(struct perf_stat_config *config,
 					      &config->metric_events,
 					      &rt_stat);
 	}
+	if (config->json_output)
+		fprintf(config->output, "}");
 	fputc('\n', config->output);
 }
 
@@ -1048,10 +1201,18 @@ static void print_interval(struct perf_stat_config *config,
 	if (config->interval_clear)
 		puts(CONSOLE_CLEAR);
 
-	if (!config->iostat_run)
-		sprintf(prefix, "%6lu.%09lu%s", (unsigned long) ts->tv_sec, ts->tv_nsec, config->csv_sep);
-
-	if ((num_print_interval == 0 && !config->csv_output) || config->interval_clear) {
+	if (!config->iostat_run && !config->json_output)
+		sprintf(prefix, "%6lu.%09lu%s", (unsigned long) ts->tv_sec,
+				 ts->tv_nsec, config->csv_sep);
+	if (!config->iostat_run && config->json_output && !config->metric_only)
+		sprintf(prefix, "{\"interval\" : %lu.%09lu, ", (unsigned long)
+				 ts->tv_sec, ts->tv_nsec);
+	if (!config->iostat_run && config->json_output && config->metric_only)
+		sprintf(prefix, "{\"interval\" : %lu.%09lu}", (unsigned long)
+				 ts->tv_sec, ts->tv_nsec);
+
+	if ((num_print_interval == 0 && !config->csv_output && !config->json_output)
+			 || config->interval_clear) {
 		switch (config->aggr_mode) {
 		case AGGR_NODE:
 			fprintf(output, "#           time node   cpus");
@@ -1091,12 +1252,19 @@ static void print_interval(struct perf_stat_config *config,
 					fprintf(output, "             counts %*s events\n", unit_width, "unit");
 			}
 		case AGGR_UNSET:
+		case AGGR_MAX:
 			break;
 		}
 	}
 
-	if ((num_print_interval == 0 || config->interval_clear) && metric_only)
+	if ((num_print_interval == 0 || config->interval_clear)
+			 && metric_only && !config->json_output)
 		print_metric_headers(config, evlist, " ", true);
+	if ((num_print_interval == 0 || config->interval_clear)
+			 && metric_only && config->json_output) {
+		fprintf(output, "{");
+		print_metric_headers(config, evlist, " ", true);
+	}
 	if (++num_print_interval == 25)
 		num_print_interval = 0;
 }
@@ -1110,7 +1278,7 @@ static void print_header(struct perf_stat_config *config,
 
 	fflush(stdout);
 
-	if (!config->csv_output) {
+	if (!config->csv_output && !config->json_output) {
 		fprintf(output, "\n");
 		fprintf(output, " Performance counter stats for ");
 		if (_target->bpf_str)
@@ -1303,6 +1471,9 @@ void evlist__print_counters(struct evlist *evlist, struct perf_stat_config *conf
 			num_print_iv = 0;
 		if (config->aggr_mode == AGGR_GLOBAL && prefix && !config->iostat_run)
 			fprintf(config->output, "%s", prefix);
+
+		if (config->json_output && !config->metric_only)
+			fprintf(config->output, "}");
 	}
 
 	switch (config->aggr_mode) {
@@ -1341,12 +1512,13 @@ void evlist__print_counters(struct evlist *evlist, struct perf_stat_config *conf
 			}
 		}
 		break;
+	case AGGR_MAX:
 	case AGGR_UNSET:
 	default:
 		break;
 	}
 
-	if (!interval && !config->csv_output)
+	if (!interval && !config->csv_output && !config->json_output)
 		print_footer(config);
 
 	fflush(config->output);
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index 37ea2d044708..0882b4754fcf 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -401,6 +401,7 @@ process_counter_values(struct perf_stat_config *config, struct evsel *evsel,
 		aggr->ena += count->ena;
 		aggr->run += count->run;
 	case AGGR_UNSET:
+	case AGGR_MAX:
 	default:
 		break;
 	}
diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h
index b5aeb8e6d34b..668250022f8c 100644
--- a/tools/perf/util/stat.h
+++ b/tools/perf/util/stat.h
@@ -57,6 +57,7 @@ enum aggr_mode {
 	AGGR_THREAD,
 	AGGR_UNSET,
 	AGGR_NODE,
+	AGGR_MAX
 };
 
 enum {
@@ -121,6 +122,7 @@ struct perf_stat_config {
 	bool			 no_inherit;
 	bool			 identifier;
 	bool			 csv_output;
+	bool			 json_output;
 	bool			 interval_clear;
 	bool			 metric_only;
 	bool			 null_run;
-- 
2.36.1.124.g0e6072fb45-goog



* [PATCH v4 3/3] perf test: Json format checking
  2022-05-25  5:38 [PATCH v4 0/3] JSON output for perf stat Ian Rogers
  2022-05-25  5:38 ` [PATCH v4 1/3] perf test: Add checking for perf stat CSV output Ian Rogers
  2022-05-25  5:38 ` [PATCH v4 2/3] perf stat: Add JSON output option Ian Rogers
@ 2022-05-25  5:38 ` Ian Rogers
  2022-05-25 11:18 ` [PATCH v4 0/3] JSON output for perf stat Arnaldo Carvalho de Melo
  3 siblings, 0 replies; 11+ messages in thread
From: Ian Rogers @ 2022-05-25  5:38 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Kan Liang, Zhengjun Xing, Sandipan Das, Claire Jensen,
	Alyssa Ross, Like Xu, James Clark, Florian Fischer, linux-kernel,
	linux-perf-users, Claire Jensen
  Cc: Stephane Eranian

From: Claire Jensen <cjense@google.com>

Add field checking tests for perf stat JSON output.
Sanity check that the expected number of fields is present, that the
expected keys are present and that they have the correct values.
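
As an example (illustrative only, matching check_event() in the shell
test added below and assuming python3 is available), the checker can be
invoked by hand as:

  perf stat -j -e cpu-clock true 2>&1 | \
    python3 tools/perf/tests/shell/lib/perf_json_output_lint.py --event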

Signed-off-by: Claire Jensen <cjense@google.com>
---
 .../tests/shell/lib/perf_json_output_lint.py  |  94 +++++++++++
 tools/perf/tests/shell/stat+json_output.sh    | 147 ++++++++++++++++++
 2 files changed, 241 insertions(+)
 create mode 100644 tools/perf/tests/shell/lib/perf_json_output_lint.py
 create mode 100755 tools/perf/tests/shell/stat+json_output.sh

diff --git a/tools/perf/tests/shell/lib/perf_json_output_lint.py b/tools/perf/tests/shell/lib/perf_json_output_lint.py
new file mode 100644
index 000000000000..19477a5f1558
--- /dev/null
+++ b/tools/perf/tests/shell/lib/perf_json_output_lint.py
@@ -0,0 +1,94 @@
+#!/usr/bin/python
+# SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+# Basic sanity check of perf JSON output as specified in the man page.
+
+import argparse
+import sys
+import json
+
+ap = argparse.ArgumentParser()
+ap.add_argument('--no-args', action='store_true')
+ap.add_argument('--interval', action='store_true')
+ap.add_argument('--system-wide-no-aggr', action='store_true')
+ap.add_argument('--system-wide', action='store_true')
+ap.add_argument('--event', action='store_true')
+ap.add_argument('--per-core', action='store_true')
+ap.add_argument('--per-thread', action='store_true')
+ap.add_argument('--per-die', action='store_true')
+ap.add_argument('--per-node', action='store_true')
+ap.add_argument('--per-socket', action='store_true')
+args = ap.parse_args()
+
+Lines = sys.stdin.readlines()
+
+def isfloat(num):
+  try:
+    float(num)
+    return True
+  except ValueError:
+    return False
+
+
+def isint(num):
+  try:
+    int(num)
+    return True
+  except ValueError:
+    return False
+
+def is_counter_value(num):
+  return isfloat(num) or num == '<not counted>' or num == '<not supported>'
+
+def check_json_output(expected_items):
+  if expected_items != -1:
+    for line in Lines:
+      if 'failed' not in line:
+        count = 0
+        count = line.count(',')
+        if count != expected_items and count == 1 and 'metric-value' in line:
+          # Events that generate >1 metric may have isolated metric values.
+          continue
+        if count != expected_items:
+          raise RuntimeError(f'wrong number of fields. counted {count} expected {expected_items}'
+                             f' in \'{line}\'')
+  checks = {
+      'aggregate-number': lambda x: isfloat(x),
+      'core': lambda x: True,
+      'counter-value': lambda x: is_counter_value(x),
+      'cgroup': lambda x: True,
+      'cpu': lambda x: isint(x),
+      'die': lambda x: True,
+      'event': lambda x: True,
+      'event-runtime': lambda x: isfloat(x),
+      'interval': lambda x: isfloat(x),
+      'metric-unit': lambda x: True,
+      'metric-value': lambda x: isfloat(x),
+      'node': lambda x: True,
+      'pcnt-running': lambda x: isfloat(x),
+      'socket': lambda x: True,
+      'thread': lambda x: True,
+      'unit': lambda x: True,
+  }
+  input = '[\n' + ','.join(Lines) + '\n]'
+  for item in json.loads(input):
+    for key, value in item.items():
+      if key not in checks:
+        raise RuntimeError(f'Unexpected key: key={key} value={value}')
+      if not checks[key](value):
+        raise RuntimeError(f'Check failed for: key={key} value={value}')
+
+
+try:
+  if args.no_args or args.system_wide or args.event:
+    expected_items = 6
+  elif args.interval or args.per_thread or args.system_wide_no_aggr:
+    expected_items = 7
+  elif args.per_core or args.per_socket or args.per_node or args.per_die:
+    expected_items = 8
+  else:
+    # If no option is specified, don't check the number of items.
+    expected_items = -1
+  check_json_output(expected_items)
+except:
+  print('Test failed for input:\n' + '\n'.join(Lines))
+  raise
diff --git a/tools/perf/tests/shell/stat+json_output.sh b/tools/perf/tests/shell/stat+json_output.sh
new file mode 100755
index 000000000000..7748b677f2f9
--- /dev/null
+++ b/tools/perf/tests/shell/stat+json_output.sh
@@ -0,0 +1,147 @@
+#!/bin/bash
+# perf stat JSON output linter
+# SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+# Checks various perf stat JSON output commands for the
+# correct number of fields.
+
+set -e
+
+pythonchecker=$(dirname $0)/lib/perf_json_output_lint.py
+if [ "x$PYTHON" == "x" ]
+then
+	if which python3 > /dev/null
+	then
+		PYTHON=python3
+	elif which python > /dev/null
+	then
+		PYTHON=python
+	else
+		echo Skipping test, python not detected please set environment variable PYTHON.
+		exit 2
+	fi
+fi
+
+# Return true if perf_event_paranoid is > $1 and not running as root.
+function ParanoidAndNotRoot()
+{
+	 [ $(id -u) != 0 ] && [ $(cat /proc/sys/kernel/perf_event_paranoid) -gt $1 ]
+}
+
+check_no_args()
+{
+	echo -n "Checking json output: no args "
+	perf stat -j true 2>&1 | $PYTHON $pythonchecker --no-args
+	echo "[Success]"
+}
+
+check_system_wide()
+{
+	echo -n "Checking json output: system wide "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	perf stat -j -a true 2>&1 | $PYTHON $pythonchecker --system-wide
+	echo "[Success]"
+}
+
+check_system_wide_no_aggr()
+{
+	echo -n "Checking json output: system wide "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	echo -n "Checking json output: system wide no aggregation "
+	perf stat -j -A -a --no-merge true 2>&1 | $PYTHON $pythonchecker --system-wide-no-aggr
+	echo "[Success]"
+}
+
+check_interval()
+{
+	echo -n "Checking json output: interval "
+	perf stat -j -I 1000 true 2>&1 | $PYTHON $pythonchecker --interval
+	echo "[Success]"
+}
+
+
+check_event()
+{
+	echo -n "Checking json output: event "
+	perf stat -j -e cpu-clock true 2>&1 | $PYTHON $pythonchecker --event
+	echo "[Success]"
+}
+
+check_per_core()
+{
+	echo -n "Checking json output: per core "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	perf stat -j --per-core -a true 2>&1 | $PYTHON $pythonchecker --per-core
+	echo "[Success]"
+}
+
+check_per_thread()
+{
+	echo -n "Checking json output: per thread "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	perf stat -j --per-thread -a true 2>&1 | $PYTHON $pythonchecker --per-thread
+	echo "[Success]"
+}
+
+check_per_die()
+{
+	echo -n "Checking json output: per die "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	perf stat -j --per-die -a true 2>&1 | $PYTHON $pythonchecker --per-die
+	echo "[Success]"
+}
+
+check_per_node()
+{
+	echo -n "Checking json output: per node "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	perf stat -j --per-node -a true 2>&1 | $PYTHON $pythonchecker --per-node
+	echo "[Success]"
+}
+
+check_per_socket()
+{
+	echo -n "Checking json output: per socket "
+	if ParanoidAndNotRoot 0
+	then
+		echo "[Skip] parnoia and not root"
+		return
+	fi
+	perf stat -j --per-socket -a true 2>&1 | $PYTHON $pythonchecker --per-socket
+	echo "[Success]"
+}
+
+check_no_args
+check_system_wide
+check_system_wide_no_aggr
+check_interval
+check_event
+check_per_core
+check_per_thread
+check_per_die
+check_per_node
+check_per_socket
+exit 0
-- 
2.36.1.124.g0e6072fb45-goog



* Re: [PATCH v4 1/3] perf test: Add checking for perf stat CSV output.
  2022-05-25  5:38 ` [PATCH v4 1/3] perf test: Add checking for perf stat CSV output Ian Rogers
@ 2022-05-25 11:07   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 11+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-05-25 11:07 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Kan Liang, Zhengjun Xing, Sandipan Das,
	Claire Jensen, Alyssa Ross, Like Xu, James Clark,
	Florian Fischer, linux-kernel, linux-perf-users, Claire Jensen,
	Stephane Eranian

Em Tue, May 24, 2022 at 10:38:12PM -0700, Ian Rogers escreveu:
> From: Claire Jensen <cjense@google.com>
> 
> Counts expected fields for various commands. No testing added for
> summary mode since it is broken.
> 
> An example of the summary output is:
> 
>          summary,263831,,instructions:u,1435072,100.0,0.46,insn per cycle
> ,,,,,1.37,stalled cycles per insn
> 
> This should be:
> 
>          summary,263831,,instructions:u,1435072,100.0,0.46,insn per cycle
>          summary,,,,,,1.37,stalled cycles per insn
> 
> The output has 7 fields when it should have 8. Additionally, the newline
> spacing is wrong, so it was excluded from testing until a fix is made.

s/parnoia/paranoid/ on the [Skip] lines, or am I missing something? :-)

I'll fix this up here.

- Arnaldo
 
> Signed-off-by: Claire Jensen <cjense@google.com>
> ---
>  .../tests/shell/lib/perf_csv_output_lint.py   |  48 ++++++
>  tools/perf/tests/shell/stat+csv_output.sh     | 147 ++++++++++++++++++
>  2 files changed, 195 insertions(+)
>  create mode 100644 tools/perf/tests/shell/lib/perf_csv_output_lint.py
>  create mode 100755 tools/perf/tests/shell/stat+csv_output.sh
> 
> diff --git a/tools/perf/tests/shell/lib/perf_csv_output_lint.py b/tools/perf/tests/shell/lib/perf_csv_output_lint.py
> new file mode 100644
> index 000000000000..714f283cfb1b
> --- /dev/null
> +++ b/tools/perf/tests/shell/lib/perf_csv_output_lint.py
> @@ -0,0 +1,48 @@
> +#!/usr/bin/python
> +# SPDX-License-Identifier: GPL-2.0
> +
> +import argparse
> +import sys
> +
> +# Basic sanity check of perf CSV output as specified in the man page.
> +# Currently just checks the number of fields per line in output.
> +
> +ap = argparse.ArgumentParser()
> +ap.add_argument('--no-args', action='store_true')
> +ap.add_argument('--interval', action='store_true')
> +ap.add_argument('--system-wide-no-aggr', action='store_true')
> +ap.add_argument('--system-wide', action='store_true')
> +ap.add_argument('--event', action='store_true')
> +ap.add_argument('--per-core', action='store_true')
> +ap.add_argument('--per-thread', action='store_true')
> +ap.add_argument('--per-die', action='store_true')
> +ap.add_argument('--per-node', action='store_true')
> +ap.add_argument('--per-socket', action='store_true')
> +ap.add_argument('--separator', default=',', nargs='?')
> +args = ap.parse_args()
> +
> +Lines = sys.stdin.readlines()
> +
> +def check_csv_output(exp):
> +  for line in Lines:
> +    if 'failed' not in line:
> +      count = line.count(args.separator)
> +      if count != exp:
> +        sys.stdout.write(''.join(Lines))
> +        raise RuntimeError(f'wrong number of fields. expected {exp} in {line}')
> +
> +try:
> +  if args.no_args or args.system_wide or args.event:
> +    expected_items = 6
> +  elif args.interval or args.per_thread or args.system_wide_no_aggr:
> +    expected_items = 7
> +  elif args.per_core or args.per_socket or args.per_node or args.per_die:
> +    expected_items = 8
> +  else:
> +    ap.print_help()
> +    raise RuntimeError('No checking option specified')
> +  check_csv_output(expected_items)
> +
> +except:
> +  sys.stdout.write('Test failed for input: ' + ''.join(Lines))
> +  raise
> diff --git a/tools/perf/tests/shell/stat+csv_output.sh b/tools/perf/tests/shell/stat+csv_output.sh
> new file mode 100755
> index 000000000000..82c25e9c7f21
> --- /dev/null
> +++ b/tools/perf/tests/shell/stat+csv_output.sh
> @@ -0,0 +1,147 @@
> +#!/bin/bash
> +# perf stat CSV output linter
> +# SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
> +# Tests various perf stat CSV output commands for the
> +# correct number of fields and the CSV separator set to ','.
> +
> +set -e
> +
> +pythonchecker=$(dirname $0)/lib/perf_csv_output_lint.py
> +if [ "x$PYTHON" == "x" ]
> +then
> +	if which python3 > /dev/null
> +	then
> +		PYTHON=python3
> +	elif which python > /dev/null
> +	then
> +		PYTHON=python
> +	else
> +		echo Skipping test, python not detected please set environment variable PYTHON.
> +		exit 2
> +	fi
> +fi
> +
> +# Return true if perf_event_paranoid is > $1 and not running as root.
> +function ParanoidAndNotRoot()
> +{
> +	 [ $(id -u) != 0 ] && [ $(cat /proc/sys/kernel/perf_event_paranoid) -gt $1 ]
> +}
> +
> +check_no_args()
> +{
> +	echo -n "Checking CSV output: no args "
> +	perf stat -x, true 2>&1 | $PYTHON $pythonchecker --no-args
> +	echo "[Success]"
> +}
> +
> +check_system_wide()
> +{
> +	echo -n "Checking CSV output: system wide "
> +	if ParanoidAndNotRoot 0
> +	then
> +		echo "[Skip] parnoia and not root"
> +		return
> +	fi
> +	perf stat -x, -a true 2>&1 | $PYTHON $pythonchecker --system-wide
> +	echo "[Success]"
> +}
> +
> +check_system_wide_no_aggr()
> +{
> +	echo -n "Checking CSV output: system wide "
> +	if ParanoidAndNotRoot 0
> +	then
> +		echo "[Skip] parnoia and not root"
> +		return
> +	fi
> +	echo -n "Checking CSV output: system wide no aggregation "
> +	perf stat -x, -A -a --no-merge true 2>&1 | $PYTHON $pythonchecker --system-wide-no-aggr
> +	echo "[Success]"
> +}
> +
> +check_interval()
> +{
> +	echo -n "Checking CSV output: interval "
> +	perf stat -x, -I 1000 true 2>&1 | $PYTHON $pythonchecker --interval
> +	echo "[Success]"
> +}
> +
> +
> +check_event()
> +{
> +	echo -n "Checking CSV output: event "
> +	perf stat -x, -e cpu-clock true 2>&1 | $PYTHON $pythonchecker --event
> +	echo "[Success]"
> +}
> +
> +check_per_core()
> +{
> +	echo -n "Checking CSV output: per core "
> +	if ParanoidAndNotRoot 0
> +	then
> +		echo "[Skip] parnoia and not root"
> +		return
> +	fi
> +	perf stat -x, --per-core -a true 2>&1 | $PYTHON $pythonchecker --per-core
> +	echo "[Success]"
> +}
> +
> +check_per_thread()
> +{
> +	echo -n "Checking CSV output: per thread "
> +	if ParanoidAndNotRoot 0
> +	then
> +		echo "[Skip] parnoia and not root"
> +		return
> +	fi
> +	perf stat -x, --per-thread -a true 2>&1 | $PYTHON $pythonchecker --per-thread
> +	echo "[Success]"
> +}
> +
> +check_per_die()
> +{
> +	echo -n "Checking CSV output: per die "
> +	if ParanoidAndNotRoot 0
> +	then
> +		echo "[Skip] parnoia and not root"
> +		return
> +	fi
> +	perf stat -x, --per-die -a true 2>&1 | $PYTHON $pythonchecker --per-die
> +	echo "[Success]"
> +}
> +
> +check_per_node()
> +{
> +	echo -n "Checking CSV output: per node "
> +	if ParanoidAndNotRoot 0
> +	then
> +		echo "[Skip] parnoia and not root"
> +		return
> +	fi
> +	perf stat -x, --per-node -a true 2>&1 | $PYTHON $pythonchecker --per-node
> +	echo "[Success]"
> +}
> +
> +check_per_socket()
> +{
> +	echo -n "Checking CSV output: per socket "
> +	if ParanoidAndNotRoot 0
> +	then
> +		echo "[Skip] parnoia and not root"
> +		return
> +	fi
> +	perf stat -x, --per-socket -a true 2>&1 | $PYTHON $pythonchecker --per-socket
> +	echo "[Success]"
> +}
> +
> +check_no_args
> +check_system_wide
> +check_system_wide_no_aggr
> +check_interval
> +check_event
> +check_per_core
> +check_per_thread
> +check_per_die
> +check_per_node
> +check_per_socket
> +exit 0
> -- 
> 2.36.1.124.g0e6072fb45-goog

-- 

- Arnaldo


* Re: [PATCH v4 0/3] JSON output for perf stat
  2022-05-25  5:38 [PATCH v4 0/3] JSON output for perf stat Ian Rogers
                   ` (2 preceding siblings ...)
  2022-05-25  5:38 ` [PATCH v4 3/3] perf test: Json format checking Ian Rogers
@ 2022-05-25 11:18 ` Arnaldo Carvalho de Melo
  2022-05-25 11:21   ` Arnaldo Carvalho de Melo
  3 siblings, 1 reply; 11+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-05-25 11:18 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Kan Liang, Zhengjun Xing, Sandipan Das,
	Claire Jensen, Alyssa Ross, Like Xu, James Clark,
	Florian Fischer, linux-kernel, linux-perf-users, Claire Jensen,
	Stephane Eranian

Em Tue, May 24, 2022 at 10:38:11PM -0700, Ian Rogers escreveu:
> Parsing the CSV or text output of perf stat can be problematic when
> new output is added (new columns in the CSV format). JSON names each
> value and so simplifies the job of parsing. Add a JSON output option
> to perf-stat, then add a unit test that parses and validates the
> output.
> 
> This is a resend of two v2 patches:
> https://lore.kernel.org/lkml/20210813220754.2104922-1-cjense@google.com/
> https://lore.kernel.org/lkml/20210813220936.2105426-1-cjense@google.com/
> with a few formatting changes and improvements to the linter.
> 
> The CSV test/linter is also added to ensure that CSV output doesn't regress:
> https://lore.kernel.org/lkml/20210813192108.2087512-1-cjense@google.com/

So, the JSON test is failing:

⬢[acme@toolbox perf]$ perf test -v JSON
Couldn't bump rlimit(MEMLOCK), failures may take place when creating BPF maps, etc
 90: perf stat JSON output linter                                    :
--- start ---
test child forked, pid 2626229
Checking json output: no args [Success]
Checking json output: system wide [Skip] parnoia and not root
Checking json output: system wide [Skip] parnoia and not root
Checking json output: interval Test failed for input:
{"interval" : 0.000506453, "counter-value" : "0.212360", "unit" : "msec", "event" : "task-clock:u", "event-runtime" : 212360, "pcnt-running" : 100.00, "metric-value" : 0.000212, "metric-unit" : "CPUs utilized"}

{"interval" : 0.000506453, "counter-value" : "0.000000", "unit" : "", "event" : "context-switches:u", "event-runtime" : 212360, "pcnt-running" : 100.00, "metric-value" : 0.000000, "metric-unit" : "/sec"}

{"interval" : 0.000506453, "counter-value" : "0.000000", "unit" : "", "event" : "cpu-migrations:u", "event-runtime" : 212360, "pcnt-running" : 100.00, "metric-value" : 0.000000, "metric-unit" : "/sec"}

{"interval" : 0.000506453, "counter-value" : "45.000000", "unit" : "", "event" : "page-faults:u", "event-runtime" : 212360, "pcnt-running" : 100.00, "metric-value" : 211.904313, "metric-unit" : "K/sec"}

{"interval" : 0.000506453, "counter-value" : "143761.000000", "unit" : "", "event" : "cycles:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 0.676968, "metric-unit" : "GHz"}

{"interval" : 0.000506453, "counter-value" : "456.000000", "unit" : "", "event" : "stalled-cycles-frontend:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 0.317193, "metric-unit" : "frontend cycles idle"}

{"interval" : 0.000506453, "counter-value" : "11639.000000", "unit" : "", "event" : "stalled-cycles-backend:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 8.096076, "metric-unit" : "backend cycles idle"}

{"interval" : 0.000506453, "counter-value" : "150684.000000", "unit" : "", "event" : "instructions:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 1.048156, "metric-unit" : "insn per cycle"}

{"interval" : 0.000506453, "metric-value" : 0.077241, "metric-unit" : "stalled cycles per insn"}

{"interval" : 0.000506453, "counter-value" : "29735.000000", "unit" : "", "event" : "branches:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 140.021661, "metric-unit" : "M/sec"}

{"interval" : 0.000506453, "counter-value" : "<not counted>", "unit" : "", "event" : "branch-misses:u", "event-runtime" : 0, "pcnt-running" : 0.00, "metric-value" : 0.000000, "metric-unit" : ""}

Traceback (most recent call last):
  File "/var/home/acme/git/perf/./tools/perf/tests/shell/lib/perf_json_output_lint.py", line 91, in <module>
    check_json_output(expected_items)
  File "/var/home/acme/git/perf/./tools/perf/tests/shell/lib/perf_json_output_lint.py", line 52, in check_json_output
    raise RuntimeError(f'wrong number of fields. counted {count} expected {expected_items}'
RuntimeError: wrong number of fields. counted 2 expected 7 in '{"interval" : 0.000506453, "metric-value" : 0.077241, "metric-unit" : "stalled cycles per insn"}
'
test child finished with -1
---- end ----
perf stat JSON output linter: FAILED!
⬢[acme@toolbox perf]$

So please check this and resubmit.

My system is a fedora 35 silverblue toolbox.

⬢[acme@toolbox perf]$ rpm -q python3
python3-3.10.4-1.fc35.x86_64

- Arnaldo


* Re: [PATCH v4 0/3] JSON output for perf stat
  2022-05-25 11:18 ` [PATCH v4 0/3] JSON output for perf stat Arnaldo Carvalho de Melo
@ 2022-05-25 11:21   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 11+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-05-25 11:21 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Kan Liang, Zhengjun Xing, Sandipan Das,
	Claire Jensen, Alyssa Ross, Like Xu, James Clark,
	Florian Fischer, linux-kernel, linux-perf-users, Claire Jensen,
	Stephane Eranian

Em Wed, May 25, 2022 at 08:18:27AM -0300, Arnaldo Carvalho de Melo escreveu:
> Em Tue, May 24, 2022 at 10:38:11PM -0700, Ian Rogers escreveu:
> > Parsing the CSV or text output of perf stat can be problematic when
> > new output is added (new columns in the CSV format). JSON names each
> > value and so simplifies the job of parsing. Add a JSON output option
> > to perf-stat, then add a unit test that parses and validates the
> > output.
> > 
> > This is a resend of two v2 patches:
> > https://lore.kernel.org/lkml/20210813220754.2104922-1-cjense@google.com/
> > https://lore.kernel.org/lkml/20210813220936.2105426-1-cjense@google.com/
> > with a few formatting changes and improvements to the linter.
> > 
> > The CSV test/linter is also added to ensure that CSV output doesn't regress:
> > https://lore.kernel.org/lkml/20210813192108.2087512-1-cjense@google.com/
> 
> So, the JSON test is failing:
> 
> ⬢[acme@toolbox perf]$ perf test -v JSON
> Couldn't bump rlimit(MEMLOCK), failures may take place when creating BPF maps, etc
>  90: perf stat JSON output linter                                    :
> --- start ---
> test child forked, pid 2626229
> Checking json output: no args [Success]
> Checking json output: system wide [Skip] paranoia and not root
> Checking json output: system wide [Skip] paranoia and not root
> Checking json output: interval Test failed for input:
> {"interval" : 0.000506453, "counter-value" : "0.212360", "unit" : "msec", "event" : "task-clock:u", "event-runtime" : 212360, "pcnt-running" : 100.00, "metric-value" : 0.000212, "metric-unit" : "CPUs utilized"}
> 
> {"interval" : 0.000506453, "counter-value" : "0.000000", "unit" : "", "event" : "context-switches:u", "event-runtime" : 212360, "pcnt-running" : 100.00, "metric-value" : 0.000000, "metric-unit" : "/sec"}
> 
> {"interval" : 0.000506453, "counter-value" : "0.000000", "unit" : "", "event" : "cpu-migrations:u", "event-runtime" : 212360, "pcnt-running" : 100.00, "metric-value" : 0.000000, "metric-unit" : "/sec"}
> 
> {"interval" : 0.000506453, "counter-value" : "45.000000", "unit" : "", "event" : "page-faults:u", "event-runtime" : 212360, "pcnt-running" : 100.00, "metric-value" : 211.904313, "metric-unit" : "K/sec"}
> 
> {"interval" : 0.000506453, "counter-value" : "143761.000000", "unit" : "", "event" : "cycles:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 0.676968, "metric-unit" : "GHz"}
> 
> {"interval" : 0.000506453, "counter-value" : "456.000000", "unit" : "", "event" : "stalled-cycles-frontend:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 0.317193, "metric-unit" : "frontend cycles idle"}
> 
> {"interval" : 0.000506453, "counter-value" : "11639.000000", "unit" : "", "event" : "stalled-cycles-backend:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 8.096076, "metric-unit" : "backend cycles idle"}
> 
> {"interval" : 0.000506453, "counter-value" : "150684.000000", "unit" : "", "event" : "instructions:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 1.048156, "metric-unit" : "insn per cycle"}
> 
> {"interval" : 0.000506453, "metric-value" : 0.077241, "metric-unit" : "stalled cycles per insn"}
> 
> {"interval" : 0.000506453, "counter-value" : "29735.000000", "unit" : "", "event" : "branches:u", "event-runtime" : 217290, "pcnt-running" : 100.00, "metric-value" : 140.021661, "metric-unit" : "M/sec"}
> 
> {"interval" : 0.000506453, "counter-value" : "<not counted>", "unit" : "", "event" : "branch-misses:u", "event-runtime" : 0, "pcnt-running" : 0.00, "metric-value" : 0.000000, "metric-unit" : ""}
> 
> Traceback (most recent call last):
>   File "/var/home/acme/git/perf/./tools/perf/tests/shell/lib/perf_json_output_lint.py", line 91, in <module>
>     check_json_output(expected_items)
>   File "/var/home/acme/git/perf/./tools/perf/tests/shell/lib/perf_json_output_lint.py", line 52, in check_json_output
>     raise RuntimeError(f'wrong number of fields. counted {count} expected {expected_items}'
> RuntimeError: wrong number of fields. counted 2 expected 7 in '{"interval" : 0.000506453, "metric-value" : 0.077241, "metric-unit" : "stalled cycles per insn"}
> '
> test child finished with -1
> ---- end ----
> perf stat JSON output linter: FAILED!
> ⬢[acme@toolbox perf]$
> 
> So please check this and resubmit.

I kept the first patch in the series, so please just check the JSON
ones.

- Arnaldo
 
> My system is a fedora 35 silverblue toolbox.
> 
> ⬢[acme@toolbox perf]$ rpm -q python3
> python3-3.10.4-1.fc35.x86_64
> 
> - Arnaldo

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v4 2/3] perf stat: Add JSON output option
  2022-05-25  5:38 ` [PATCH v4 2/3] perf stat: Add JSON output option Ian Rogers
@ 2022-05-31 22:46   ` Namhyung Kim
  2022-05-31 23:13     ` Ian Rogers
  0 siblings, 1 reply; 11+ messages in thread
From: Namhyung Kim @ 2022-05-31 22:46 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Kan Liang,
	Zhengjun Xing, Sandipan Das, Claire Jensen, Alyssa Ross, Like Xu,
	James Clark, Florian Fischer, linux-kernel, linux-perf-users,
	Claire Jensen, Stephane Eranian

Hi Ian,

On Tue, May 24, 2022 at 10:38 PM Ian Rogers <irogers@google.com> wrote:
>
> From: Claire Jensen <cjense@google.com>
>
> CSV output is tricky to format and column layout changes are susceptible
> to breaking parsers. New JSON-formatted output has variable names to
> identify fields that are consistent and informative, making
> the output parseable.
>
> CSV output example:
>
> 1.20,msec,task-clock:u,1204272,100.00,0.697,CPUs utilized
> 0,,context-switches:u,1204272,100.00,0.000,/sec
> 0,,cpu-migrations:u,1204272,100.00,0.000,/sec
> 70,,page-faults:u,1204272,100.00,58.126,K/sec
>
> JSON output example:
>
> {"counter-value" : "3805.723968", "unit" : "msec", "event" :
> "cpu-clock", "event-runtime" : 3805731510100.00, "pcnt-running"
> : 100.00, "metric-value" : 4.007571, "metric-unit" : "CPUs utilized"}
> {"counter-value" : "6166.000000", "unit" : "", "event" :
> "context-switches", "event-runtime" : 3805723045100.00, "pcnt-running"
> : 100.00, "metric-value" : 1.620191, "metric-unit" : "K/sec"}
> {"counter-value" : "466.000000", "unit" : "", "event" :
> "cpu-migrations", "event-runtime" : 3805727613100.00, "pcnt-running"
> : 100.00, "metric-value" : 122.447136, "metric-unit" : "/sec"}
> {"counter-value" : "208.000000", "unit" : "", "event" :
> "page-faults", "event-runtime" : 3805726799100.00, "pcnt-running"
> : 100.00, "metric-value" : 54.654516, "metric-unit" : "/sec"}
>
> Also added documentation for JSON option.
> There is some tidy up of CSV code including a potential memory overrun
> in the os.nfields set up. To facilitate this an AGGR_MAX value is added.
>
> Signed-off-by: Claire Jensen <cjense@google.com>

Your sign-off as well?

Anyway, I think there are places to clean up this part of code
more but that's not a part of the work.  Maybe I need to find
some time to do that later.

Acked-by: Namhyung Kim <namhyung@kernel.org>

Thanks,
Namhyung
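
Because each record is a self-contained JSON object on its own line, a consumer looks fields up by name and is not broken when new fields are added, unlike positional CSV parsing. A minimal sketch of such a consumer (hypothetical and not part of this series; it assumes the -j/--json-output flag the series adds and that the records go to stderr, as perf stat's default output does):

  import json
  import subprocess

  # Run perf stat with JSON output on a short workload
  # (assumes -j/--json-output as added by this series).
  proc = subprocess.run(['perf', 'stat', '-j', 'sleep', '0.1'],
                        capture_output=True, text=True)

  for line in proc.stderr.splitlines():
      if not line.strip():
          continue
      rec = json.loads(line)
      # Metric-only records carry no 'event' key, hence the fallback.
      event = rec.get('event', '<metric only>')
      value = rec.get('counter-value', rec.get('metric-value'))
      print(f'{event}: {value}')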

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v4 2/3] perf stat: Add JSON output option
  2022-05-31 22:46   ` Namhyung Kim
@ 2022-05-31 23:13     ` Ian Rogers
  2022-05-31 23:18       ` Ian Rogers
  2022-08-10 14:03       ` Arnaldo Carvalho de Melo
  0 siblings, 2 replies; 11+ messages in thread
From: Ian Rogers @ 2022-05-31 23:13 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Kan Liang,
	Zhengjun Xing, Sandipan Das, Claire Jensen, Alyssa Ross, Like Xu,
	James Clark, Florian Fischer, linux-kernel, linux-perf-users,
	Claire Jensen, Stephane Eranian

On Tue, May 31, 2022 at 3:46 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> Hi Ian,
>
> On Tue, May 24, 2022 at 10:38 PM Ian Rogers <irogers@google.com> wrote:
> >
> > From: Claire Jensen <cjense@google.com>
> >
> > CSV output is tricky to format and column layout changes are susceptible
> > to breaking parsers. New JSON-formatted output has variable names to
> > identify fields that are consistent and informative, making
> > the output parseable.
> >
> > CSV output example:
> >
> > 1.20,msec,task-clock:u,1204272,100.00,0.697,CPUs utilized
> > 0,,context-switches:u,1204272,100.00,0.000,/sec
> > 0,,cpu-migrations:u,1204272,100.00,0.000,/sec
> > 70,,page-faults:u,1204272,100.00,58.126,K/sec
> >
> > JSON output example:
> >
> > {"counter-value" : "3805.723968", "unit" : "msec", "event" :
> > "cpu-clock", "event-runtime" : 3805731510100.00, "pcnt-running"
> > : 100.00, "metric-value" : 4.007571, "metric-unit" : "CPUs utilized"}
> > {"counter-value" : "6166.000000", "unit" : "", "event" :
> > "context-switches", "event-runtime" : 3805723045100.00, "pcnt-running"
> > : 100.00, "metric-value" : 1.620191, "metric-unit" : "K/sec"}
> > {"counter-value" : "466.000000", "unit" : "", "event" :
> > "cpu-migrations", "event-runtime" : 3805727613100.00, "pcnt-running"
> > : 100.00, "metric-value" : 122.447136, "metric-unit" : "/sec"}
> > {"counter-value" : "208.000000", "unit" : "", "event" :
> > "page-faults", "event-runtime" : 3805726799100.00, "pcnt-running"
> > : 100.00, "metric-value" : 54.654516, "metric-unit" : "/sec"}
> >
> > Also added documentation for JSON option.
> > There is some tidy up of CSV code including a potential memory overrun
> > in the os.nfields set up. To facilitate this an AGGR_MAX value is added.
> >
> > Signed-off-by: Claire Jensen <cjense@google.com>
>
> Your sign-off as well?

Doh:
Signed-off-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

> Anyway, I think there are places to clean up this part of code
> more but that's not a part of the work.  Maybe I need to find
> some time to do that later.
>
> Acked-by: Namhyung Kim <namhyung@kernel.org>
>
> Thanks,
> Namhyung

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v4 2/3] perf stat: Add JSON output option
  2022-05-31 23:13     ` Ian Rogers
@ 2022-05-31 23:18       ` Ian Rogers
  2022-08-10 14:03       ` Arnaldo Carvalho de Melo
  1 sibling, 0 replies; 11+ messages in thread
From: Ian Rogers @ 2022-05-31 23:18 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Kan Liang,
	Zhengjun Xing, Sandipan Das, Claire Jensen, Alyssa Ross, Like Xu,
	James Clark, Florian Fischer, linux-kernel, linux-perf-users,
	Claire Jensen, Stephane Eranian

On Tue, May 31, 2022 at 4:13 PM Ian Rogers <irogers@google.com> wrote:
>
> On Tue, May 31, 2022 at 3:46 PM Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > Hi Ian,
> >
> > On Tue, May 24, 2022 at 10:38 PM Ian Rogers <irogers@google.com> wrote:
> > >
> > > From: Claire Jensen <cjense@google.com>
> > >
> > > CSV output is tricky to format and column layout changes are susceptible
> > > to breaking parsers. New JSON-formatted output has variable names to
> > > identify fields that are consistent and informative, making
> > > the output parseable.
> > >
> > > CSV output example:
> > >
> > > 1.20,msec,task-clock:u,1204272,100.00,0.697,CPUs utilized
> > > 0,,context-switches:u,1204272,100.00,0.000,/sec
> > > 0,,cpu-migrations:u,1204272,100.00,0.000,/sec
> > > 70,,page-faults:u,1204272,100.00,58.126,K/sec
> > >
> > > JSON output example:
> > >
> > > {"counter-value" : "3805.723968", "unit" : "msec", "event" :
> > > "cpu-clock", "event-runtime" : 3805731510100.00, "pcnt-running"
> > > : 100.00, "metric-value" : 4.007571, "metric-unit" : "CPUs utilized"}
> > > {"counter-value" : "6166.000000", "unit" : "", "event" :
> > > "context-switches", "event-runtime" : 3805723045100.00, "pcnt-running"
> > > : 100.00, "metric-value" : 1.620191, "metric-unit" : "K/sec"}
> > > {"counter-value" : "466.000000", "unit" : "", "event" :
> > > "cpu-migrations", "event-runtime" : 3805727613100.00, "pcnt-running"
> > > : 100.00, "metric-value" : 122.447136, "metric-unit" : "/sec"}
> > > {"counter-value" : "208.000000", "unit" : "", "event" :
> > > "page-faults", "event-runtime" : 3805726799100.00, "pcnt-running"
> > > : 100.00, "metric-value" : 54.654516, "metric-unit" : "/sec"}
> > >
> > > Also added documentation for JSON option.
> > > There is some tidy up of CSV code including a potential memory overrun
> > > in the os.nfields set up. To facilitate this an AGGR_MAX value is added.
> > >
> > > Signed-off-by: Claire Jensen <cjense@google.com>
> >
> > Your sign-off as well?
>
> Doh:
> Signed-off-by: Ian Rogers <irogers@google.com>
>
> Thanks,
> Ian

Actually, this was fixed in V5:
https://lore.kernel.org/lkml/20220526224515.4088240-1-irogers@google.com/

Thanks,
Ian

> > Anyway, I think there are places to clean up this part of code
> > more but that's not a part of the work.  Maybe I need to find
> > some time to do that later.
> >
> > Acked-by: Namhyung Kim <namhyung@kernel.org>
> >
> > Thanks,
> > Namhyung

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v4 2/3] perf stat: Add JSON output option
  2022-05-31 23:13     ` Ian Rogers
  2022-05-31 23:18       ` Ian Rogers
@ 2022-08-10 14:03       ` Arnaldo Carvalho de Melo
  1 sibling, 0 replies; 11+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-08-10 14:03 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Namhyung Kim, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Kan Liang, Zhengjun Xing,
	Sandipan Das, Claire Jensen, Alyssa Ross, Like Xu, James Clark,
	Florian Fischer, linux-kernel, linux-perf-users, Claire Jensen,
	Stephane Eranian

Em Tue, May 31, 2022 at 04:13:59PM -0700, Ian Rogers escreveu:
> On Tue, May 31, 2022 at 3:46 PM Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > Hi Ian,
> >
> > On Tue, May 24, 2022 at 10:38 PM Ian Rogers <irogers@google.com> wrote:
> > >
> > > From: Claire Jensen <cjense@google.com>
> > >
> > > CSV output is tricky to format and column layout changes are susceptible
> > > to breaking parsers. New JSON-formatted output has variable names to
> > > identify fields that are consistent and informative, making
> > > the output parseable.
> > >
> > > CSV output example:
> > >
> > > 1.20,msec,task-clock:u,1204272,100.00,0.697,CPUs utilized
> > > 0,,context-switches:u,1204272,100.00,0.000,/sec
> > > 0,,cpu-migrations:u,1204272,100.00,0.000,/sec
> > > 70,,page-faults:u,1204272,100.00,58.126,K/sec
> > >
> > > JSON output example:
> > >
> > > {"counter-value" : "3805.723968", "unit" : "msec", "event" :
> > > "cpu-clock", "event-runtime" : 3805731510100.00, "pcnt-running"
> > > : 100.00, "metric-value" : 4.007571, "metric-unit" : "CPUs utilized"}
> > > {"counter-value" : "6166.000000", "unit" : "", "event" :
> > > "context-switches", "event-runtime" : 3805723045100.00, "pcnt-running"
> > > : 100.00, "metric-value" : 1.620191, "metric-unit" : "K/sec"}
> > > {"counter-value" : "466.000000", "unit" : "", "event" :
> > > "cpu-migrations", "event-runtime" : 3805727613100.00, "pcnt-running"
> > > : 100.00, "metric-value" : 122.447136, "metric-unit" : "/sec"}
> > > {"counter-value" : "208.000000", "unit" : "", "event" :
> > > "page-faults", "event-runtime" : 3805726799100.00, "pcnt-running"
> > > : 100.00, "metric-value" : 54.654516, "metric-unit" : "/sec"}
> > >
> > > Also added documentation for JSON option.
> > > There is some tidy up of CSV code including a potential memory overrun
> > > in the os.nfields set up. To facilitate this an AGGR_MAX value is added.
> > >
> > > Signed-off-by: Claire Jensen <cjense@google.com>
> >
> > Your sign-off as well?
> 
> Doh:
> Signed-off-by: Ian Rogers <irogers@google.com>

Added it (b4 did it really) and also added the fixup below.

- Arnaldo

diff --git a/tools/perf/util/stat-display.c b/tools/perf/util/stat-display.c
index 33e0ee9633291fd5..b82844cb0ce77845 100644
--- a/tools/perf/util/stat-display.c
+++ b/tools/perf/util/stat-display.c
@@ -35,7 +35,7 @@ static void print_running(struct perf_stat_config *config,
 		enabled_percent = 100 * run / ena;
 	if (config->json_output)
 		fprintf(config->output,
-			"\"event-runtime\" : %lu, \"pcnt-running\" : %.2f, ",
+			"\"event-runtime\" : %" PRIu64 ", \"pcnt-running\" : %.2f, ",
 			run, enabled_percent);
 	else if (config->csv_output)
 		fprintf(config->output,
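
The %lu specifier is only correct where unsigned long happens to be 64 bits; PRIu64 from <inttypes.h> matches u64 everywhere. On the consumer side, the observable effect of the fix is simply that event-runtime stays a plain JSON integer, which a reader can sanity-check. A minimal sketch of such a check (hypothetical, not part of the series; it assumes the post-fix output with one JSON object per non-blank line on stdin):

  import json
  import sys

  # Flag records whose event-runtime or pcnt-running look malformed.
  for raw in sys.stdin:
      line = raw.strip()
      if not line:
          continue
      rec = json.loads(line)
      runtime = rec.get('event-runtime')
      pcnt = rec.get('pcnt-running')
      if runtime is not None and not isinstance(runtime, int):
          print(f'non-integer event-runtime: {line}', file=sys.stderr)
      if pcnt is not None and not 0.0 <= pcnt <= 100.0:
          print(f'pcnt-running out of range: {line}', file=sys.stderr)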

^ permalink raw reply related	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2022-08-10 14:04 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-25  5:38 [PATCH v4 0/3] JSON output for perf stat Ian Rogers
2022-05-25  5:38 ` [PATCH v4 1/3] perf test: Add checking for perf stat CSV output Ian Rogers
2022-05-25 11:07   ` Arnaldo Carvalho de Melo
2022-05-25  5:38 ` [PATCH v4 2/3] perf stat: Add JSON output option Ian Rogers
2022-05-31 22:46   ` Namhyung Kim
2022-05-31 23:13     ` Ian Rogers
2022-05-31 23:18       ` Ian Rogers
2022-08-10 14:03       ` Arnaldo Carvalho de Melo
2022-05-25  5:38 ` [PATCH v4 3/3] perf test: Json format checking Ian Rogers
2022-05-25 11:18 ` [PATCH v4 0/3] JSON output for perf stat Arnaldo Carvalho de Melo
2022-05-25 11:21   ` Arnaldo Carvalho de Melo
