* [PATCH v2 1/1] Add field checking tests for perf stat JSON output.
@ 2021-08-13 22:09 Claire Jensen
2021-08-31 19:46 ` Jiri Olsa
0 siblings, 1 reply; 3+ messages in thread
From: Claire Jensen @ 2021-08-13 22:09 UTC (permalink / raw)
To: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
namhyung, yao.jin, song, andi, adrian.hunter, kan.liang,
james.clark, alexander.antonov, changbin.du, liuqi115, irogers,
eranian, linux-kernel, linux-perf-users, clairej735
Cc: Claire Jensen
Count the number of fields per output line to make sure the expected fields are present.
Signed-off-by: Claire Jensen <cjense@google.com>
---
.../tests/shell/lib/perf_json_output_lint.py | 48 ++++++++
tools/perf/tests/shell/stat+json_output.sh | 114 ++++++++++++++++++
2 files changed, 162 insertions(+)
create mode 100644 tools/perf/tests/shell/lib/perf_json_output_lint.py
create mode 100644 tools/perf/tests/shell/stat+json_output.sh
diff --git a/tools/perf/tests/shell/lib/perf_json_output_lint.py b/tools/perf/tests/shell/lib/perf_json_output_lint.py
new file mode 100644
index 000000000000..45d9163e7423
--- /dev/null
+++ b/tools/perf/tests/shell/lib/perf_json_output_lint.py
@@ -0,0 +1,48 @@
+#!/usr/bin/python
+
+from __future__ import print_function
+import argparse
+import sys
+
+# Basic sanity check of perf JSON output as specified in the man page.
+# Currently just checks the number of fields per line in output.
+
+ap = argparse.ArgumentParser()
+ap.add_argument('--no-args', action='store_true')
+ap.add_argument('--interval', action='store_true')
+ap.add_argument('--all-cpus-no-aggr', action='store_true')
+ap.add_argument('--all-cpus', action='store_true')
+ap.add_argument('--event', action='store_true')
+ap.add_argument('--per-core', action='store_true')
+ap.add_argument('--per-thread', action='store_true')
+ap.add_argument('--per-die', action='store_true')
+ap.add_argument('--per-node', action='store_true')
+ap.add_argument('--per-socket', action='store_true')
+args = ap.parse_args()
+
+Lines = sys.stdin.readlines()
+ch = ','
+
+
+def check_json_output(exp):
+  for line in Lines:
+    if 'failed' not in line:
+      count = 0
+      count = line.count(ch)
+      if count != exp:
+        sys.stdout.write(''.join(Lines))
+        raise RuntimeError('wrong number of fields. counted {0}'
+                           ' expected {1} in {2}\n'.format(count, exp, line))
+
+
+try:
+  if args.no_args or args.all_cpus or args.event:
+    check_json_output(6)
+  if args.interval or args.per_thread:
+    check_json_output(7)
+  if args.per_core or args.per_socket or args.per_node or args.per_die:
+    check_json_output(8)
+
+except:
+  sys.stdout.write('Test failed for input:\n' + ''.join(Lines))
+  raise
diff --git a/tools/perf/tests/shell/stat+json_output.sh b/tools/perf/tests/shell/stat+json_output.sh
new file mode 100644
index 000000000000..8a772badae45
--- /dev/null
+++ b/tools/perf/tests/shell/stat+json_output.sh
@@ -0,0 +1,114 @@
+#!/bin/bash
+# perf stat JSON output linter
+# SPDX-License-Identifier: GPL-2.0
+# Checks various perf stat JSON output commands for the
+# correct number of fields.
+
+set -e
+set -x
+
+pythonchecker=$(dirname $0)/lib/perf_json_output_lint.py
+file="/proc/sys/kernel/perf_event_paranoid"
+paranoia=$(cat "$file" | grep -o -E '[0-9]+')
+
+check_no_args()
+{
+  perf stat -j sleep 1 2>&1 | \
+  python $pythonchecker --no-args
+}
+
+if [ $paranoia -gt 0 ];
+then
+  echo check_all_cpus test skipped because of paranoia level.
+else
+  check_all_cpus()
+  {
+    perf stat -j -a sleep 1 2>&1 | \
+    python $pythonchecker --all-cpus
+  }
+  check_all_cpus
+fi
+
+check_interval()
+{
+  perf stat -j -I 1000 sleep 1 2>&1 | \
+  python $pythonchecker --interval
+}
+
+check_all_cpus_no_aggr()
+{
+  perf stat -j -A -a --no-merge sleep 1 2>&1 | \
+  python $pythonchecker --all-cpus-no-aggr
+}
+
+check_event()
+{
+  perf stat -j -e cpu-clock sleep 1 2>&1 | \
+  python $pythonchecker --event
+}
+
+if [ $paranoia -gt 0 ];
+then
+  echo check_per_core test skipped because of paranoia level.
+else
+  check_per_core()
+  {
+    perf stat -j --per-core -a sleep 1 2>&1 | \
+    python $pythonchecker --per-core
+  }
+  check_per_core
+fi
+
+if [ $paranoia -gt 0 ];
+then
+  echo check_per_thread test skipped because of paranoia level.
+else
+  check_per_thread()
+  {
+    perf stat -j --per-thread -a sleep 1 2>&1 | \
+    python $pythonchecker --per-thread
+  }
+  check_per_thread
+fi
+
+if [ $paranoia -gt 0 ];
+then
+  echo check_per_die test skipped because of paranoia level.
+else
+  check_per_die()
+  {
+    perf stat -j --per-die -a sleep 1 2>&1 | \
+    python $pythonchecker --per-die
+  }
+  check_per_die
+fi
+
+if [ $paranoia -gt 0 ];
+then
+  echo check_per_node test skipped because of paranoia level.
+else
+  check_per_node()
+  {
+    perf stat -j --per-node -a sleep 1 2>&1 | \
+    python $pythonchecker --per-node
+  }
+  check_per_node
+fi
+
+if [ $paranoia -gt 0 ];
+then
+  echo check_per_socket test skipped because of paranoia level.
+else
+  check_per_socket()
+  {
+    perf stat -j --per-socket -a sleep 1 2>&1 | \
+    python $pythonchecker --per-socket
+  }
+  check_per_socket
+fi
+
+check_no_args
+check_interval
+check_all_cpus_no_aggr
+check_event
+exit 0
--
2.33.0.rc1.237.g0d66db33f3-goog
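For reviewers skimming the linter above: it never parses the JSON, it only counts commas per line and compares against a per-mode expectation. A minimal sketch of that heuristic is below; the field names in the sample line are illustrative assumptions loosely based on the perf-stat man page description, not output captured from any specific perf version.

```python
# Illustrative sketch of the comma-count heuristic used by
# perf_json_output_lint.py. The field names here are assumptions for
# demonstration only; a real `perf stat -j` line may differ.
sample = ('{"counter-value" : "309.000000", "unit" : "msec", '
          '"event" : "cpu-clock", "event-runtime" : 309, '
          '"pcnt-running" : 100.00, "metric-value" : 1.0, '
          '"metric-unit" : "CPUs utilized"}')

# Seven key/value fields are separated by six commas, which is the
# expected count the linter checks for in the --no-args case.
print(sample.count(','))  # prints 6
```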
* Re: [PATCH v2 1/1] Add field checking tests for perf stat JSON output.
2021-08-13 22:09 [PATCH v2 1/1] Add field checking tests for perf stat JSON output Claire Jensen
@ 2021-08-31 19:46 ` Jiri Olsa
2022-01-03 14:53 ` Arnaldo Carvalho de Melo
From: Jiri Olsa @ 2021-08-31 19:46 UTC (permalink / raw)
To: Claire Jensen
Cc: peterz, mingo, acme, mark.rutland, alexander.shishkin, namhyung,
yao.jin, song, andi, adrian.hunter, kan.liang, james.clark,
alexander.antonov, changbin.du, liuqi115, irogers, eranian,
linux-kernel, linux-perf-users, clairej735
On Fri, Aug 13, 2021 at 10:09:37PM +0000, Claire Jensen wrote:
> Counts number of fields to make sure expected fields are present.
>
> Signed-off-by: Claire Jensen <cjense@google.com>
> ---
> .../tests/shell/lib/perf_json_output_lint.py | 48 ++++++++
> tools/perf/tests/shell/stat+json_output.sh | 114 ++++++++++++++++++
> 2 files changed, 162 insertions(+)
> create mode 100644 tools/perf/tests/shell/lib/perf_json_output_lint.py
> create mode 100644 tools/perf/tests/shell/stat+json_output.sh
this one needs to have exec priv, right?
>
> diff --git a/tools/perf/tests/shell/lib/perf_json_output_lint.py b/tools/perf/tests/shell/lib/perf_json_output_lint.py
> new file mode 100644
> index 000000000000..45d9163e7423
> --- /dev/null
SNIP
> diff --git a/tools/perf/tests/shell/stat+json_output.sh b/tools/perf/tests/shell/stat+json_output.sh
> new file mode 100644
> index 000000000000..8a772badae45
> --- /dev/null
> +++ b/tools/perf/tests/shell/stat+json_output.sh
> @@ -0,0 +1,114 @@
> +#!/bin/bash
> +# perf stat JSON output linter
> +# SPDX-License-Identifier: GPL-2.0
> +# Checks various perf stat JSON output commands for the
> +# correct number of fields.
> +
> +set -e
> +set -x
> +
> +pythonchecker=$(dirname $0)/lib/perf_json_output_lint.py
> +file="/proc/sys/kernel/perf_event_paranoid"
> +paranoia=$(cat "$file" | grep -o -E '[0-9]+')
> +
> +check_no_args()
> +{
> + perf stat -j sleep 1 2>&1 | \
hum, is this based on some other change? I don't see -j option in perf stat
jirka
> + python $pythonchecker --no-args
> +}
> +
> +if [ $paranoia -gt 0 ];
> +then
> + echo check_all_cpus test skipped because of paranoia level.
> +else
> + check_all_cpus()
> + {
> + perf stat -j -a 2>&1 sleep 1 | \
> + python $pythonchecker --all-cpus
> + }
SNIP
* Re: [PATCH v2 1/1] Add field checking tests for perf stat JSON output.
2021-08-31 19:46 ` Jiri Olsa
@ 2022-01-03 14:53 ` Arnaldo Carvalho de Melo
From: Arnaldo Carvalho de Melo @ 2022-01-03 14:53 UTC (permalink / raw)
To: Claire Jensen, Jiri Olsa
Cc: peterz, mingo, mark.rutland, alexander.shishkin, namhyung,
yao.jin, song, andi, adrian.hunter, kan.liang, james.clark,
alexander.antonov, changbin.du, liuqi115, irogers, eranian,
linux-kernel, linux-perf-users, clairej735
Em Tue, Aug 31, 2021 at 09:46:10PM +0200, Jiri Olsa escreveu:
> On Fri, Aug 13, 2021 at 10:09:37PM +0000, Claire Jensen wrote:
> > Counts number of fields to make sure expected fields are present.
> >
> > Signed-off-by: Claire Jensen <cjense@google.com>
> > ---
> > .../tests/shell/lib/perf_json_output_lint.py | 48 ++++++++
> > tools/perf/tests/shell/stat+json_output.sh | 114 ++++++++++++++++++
> > 2 files changed, 162 insertions(+)
> > create mode 100644 tools/perf/tests/shell/lib/perf_json_output_lint.py
> > create mode 100644 tools/perf/tests/shell/stat+json_output.sh
>
> this one needs to have exec priv, right?
>
> >
> > diff --git a/tools/perf/tests/shell/lib/perf_json_output_lint.py b/tools/perf/tests/shell/lib/perf_json_output_lint.py
> > new file mode 100644
> > index 000000000000..45d9163e7423
> > --- /dev/null
>
> SNIP
>
> > diff --git a/tools/perf/tests/shell/stat+json_output.sh b/tools/perf/tests/shell/stat+json_output.sh
> > new file mode 100644
> > index 000000000000..8a772badae45
> > --- /dev/null
> > +++ b/tools/perf/tests/shell/stat+json_output.sh
> > @@ -0,0 +1,114 @@
> > +#!/bin/bash
> > +# perf stat JSON output linter
> > +# SPDX-License-Identifier: GPL-2.0
> > +# Checks various perf stat JSON output commands for the
> > +# correct number of fields.
> > +
> > +set -e
> > +set -x
> > +
> > +pythonchecker=$(dirname $0)/lib/perf_json_output_lint.py
> > +file="/proc/sys/kernel/perf_event_paranoid"
> > +paranoia=$(cat "$file" | grep -o -E '[0-9]+')
> > +
> > +check_no_args()
> > +{
> > + perf stat -j sleep 1 2>&1 | \
>
> hum, is this based on some other change? I don't see -j option in perf stat
Yeah, while testing it I stumbled on both problems reported by Jiri:
⬢[acme@toolbox perf]$ perf test -v linter
Couldn't bump rlimit(MEMLOCK), failures may take place when creating BPF maps, etc
87: perf stat JSON output linter :
--- start ---
test child forked, pid 1826327
sh: line 1: ./tools/perf/tests/shell/stat+json_output.sh: Permission denied
test child finished with -1
---- end ----
perf stat JSON output linter: FAILED!
⬢[acme@toolbox perf]$
⬢[acme@toolbox perf]$ ls -la tools/perf/tests/shell/stat+json_output.sh
-rw-r--r--. 1 acme acme 2113 Jan 3 11:51 tools/perf/tests/shell/stat+json_output.sh
⬢[acme@toolbox perf]$ chmod +x tools/perf/tests/shell/stat+json_output.sh
⬢[acme@toolbox perf]$ perf test -v linter
Couldn't bump rlimit(MEMLOCK), failures may take place when creating BPF maps, etc
87: perf stat JSON output linter :
--- start ---
test child forked, pid 1826350
++ dirname ./tools/perf/tests/shell/stat+json_output.sh
+ pythonchecker=./tools/perf/tests/shell/lib/perf_json_output_lint.py
+ file=/proc/sys/kernel/perf_event_paranoid
++ cat /proc/sys/kernel/perf_event_paranoid
++ grep -o -E '[0-9]+'
+ paranoia=2
+ '[' 2 -gt 0 ']'
+ echo check_all_cpus test skipped because of paranoia level.
check_all_cpus test skipped because of paranoia level.
+ '[' 2 -gt 0 ']'
+ echo check_all_cpus test skipped because of paranoia level.
check_all_cpus test skipped because of paranoia level.
+ '[' 2 -gt 0 ']'
+ echo check_all_cpus test skipped because of paranoia level.
check_all_cpus test skipped because of paranoia level.
+ '[' 2 -gt 0 ']'
+ echo check_per_die test skipped because of paranoia level.
check_per_die test skipped because of paranoia level.
+ '[' 2 -gt 0 ']'
+ echo check_per_node test skipped because of paranoia level.
check_per_node test skipped because of paranoia level.
+ '[' 2 -gt 0 ']'
+ echo check_per_socket test skipped because of paranoia level.
check_per_socket test skipped because of paranoia level.
+ check_no_args
+ perf stat -j sleep 1
+ python ./tools/perf/tests/shell/lib/perf_json_output_lint.py --no-args
Error: unknown switch `j'
Usage: perf stat [<options>] [<command>]
-a, --all-cpus system-wide collection from all CPUs
-A, --no-aggr disable CPU count aggregation
-B, --big-num print large numbers with thousands' separators
-b, --bpf-prog <bpf-prog-id>
stat events on existing bpf program id
-C, --cpu <cpu> list of cpus to monitor in system-wide
-D, --delay <n> ms to wait before starting measurement after program start (-1: start with events disabled)
-d, --detailed detailed run - start a lot of events
-e, --event <event> event selector. use 'perf list' to list available events
-G, --cgroup <name> monitor event in cgroup name only
-g, --group put the counters into a counter group
-I, --interval-print <n>
print counts at regular interval in ms (overhead is possible for values <= 100ms)
-i, --no-inherit child tasks do not inherit counters
-M, --metrics <metric/metric group list>
monitor specified metrics or metric groups (separated by ,)
-n, --null null run - dont start any counters
-o, --output <file> output file name
-p, --pid <pid> stat events on existing process id
-r, --repeat <n> repeat command and print average + stddev (max: 100, forever: 0)
-S, --sync call sync() before starting a run
-t, --tid <tid> stat events on existing thread id
-T, --transaction hardware transaction statistics
-v, --verbose be more verbose (show counter open errors, etc)
-x, --field-separator <separator>
print counts with custom separator
--all-kernel Configure all used events to run in kernel space.
--all-user Configure all used events to run in user space.
--append append to the output file
--bpf-attr-map <attr-map-path>
path to perf_event_attr map
--bpf-counters use bpf program to count events
--control <fd:ctl-fd[,ack-fd] or fifo:ctl-fifo[,ack-fifo]>
Listen on ctl-fd descriptor for command to control measurement ('enable': enable events, 'disable': disable events).
Optionally send control command completion ('ack\n') to ack-fd descriptor.
Alternatively, ctl-fifo / ack-fifo will be opened and used as ctl-fd / ack-fd.
--filter <filter>
event filter
--for-each-cgroup <name>
expand events for each cgroup
--interval-clear clear screen in between new interval
--interval-count <n>
print counts for fixed number of times
--iostat[=<default>]
measure I/O performance metrics provided by arch/platform
--log-fd <n> log output to fd, instead of stderr
--metric-no-group
don't group metric events, impacts multiplexing
--metric-no-merge
don't try to share events between metrics in a group
--metric-only Only print computed metrics. No raw values
--no-csv-summary don't print 'summary' for CSV summary output
--no-merge Do not merge identical named events
--per-core aggregate counts per physical processor core
--per-die aggregate counts per processor die
--per-node aggregate counts per numa node
--per-socket aggregate counts per processor socket
--per-thread aggregate counts per thread
--percore-show-thread
Use with 'percore' event qualifier to show the event counts of one hardware thread by sum up total hardware threads of same physical core
--post <command> command to run after to the measured command
--pre <command> command to run prior to the measured command
--quiet don't print output (useful with record)
--scale Use --no-scale to disable counter scaling for multiplexing
--smi-cost measure SMI cost
--summary print summary for interval mode
--table display details about each run (only with -r option)
--td-level <n> Set the metrics level for the top-down statistics (0: max level)
--timeout <n> stop workload and print counts after a timeout period in ms (>= 10ms)
--topdown measure top-down statistics
Test failed for input:
Error: unknown switch `j'
Usage: perf stat [<options>] [<command>]
SNIP
Traceback (most recent call last):
File "/var/home/acme/git/perf/./tools/perf/tests/shell/lib/perf_json_output_lint.py", line 40, in <module>
check_json_output(6)
File "/var/home/acme/git/perf/./tools/perf/tests/shell/lib/perf_json_output_lint.py", line 34, in check_json_output
raise RuntimeError('wrong number of fields. counted {0}'
RuntimeError: wrong number of fields. counted 0 expected 6 in Error: unknown switch `j'
test child finished with -1
---- end ----
perf stat JSON output linter: FAILED!
⬢[acme@toolbox perf]$
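As an aside on the failure above: because the linter counts commas rather than parsing, it would also miscount whenever a field value itself contains a comma. A stricter variant, not part of this patch and sketched here only as a possible follow-up, could parse each line and count keys instead (assuming each output line is one well-formed JSON object):

```python
import json

# Hedged sketch (not part of the patch): parse each JSON line and count
# its keys, which is robust to commas embedded inside field values.
def count_fields(line):
  return len(json.loads(line))

# Hypothetical line with a comma inside a value; a comma count would
# see three commas, but there are only three fields.
line = '{"event" : "cpu-clock, total", "counter-value" : "1.0", "unit" : "msec"}'
print(count_fields(line))  # prints 3
```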