From: Numfor Mbiziwo-Tiapo <nums@google.com>
To: peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
alexander.shishkin@linux.intel.com, jolsa@redhat.com,
namhyung@kernel.org, songliubraving@fb.com, mbd@fb.com
Cc: linux-kernel@vger.kernel.org, irogers@google.com,
eranian@google.com, Numfor Mbiziwo-Tiapo <nums@google.com>
Subject: [PATCH] Fix perf stat repeat segfault
Date: Wed, 10 Jul 2019 13:45:40 -0700
Message-ID: <20190710204540.176495-1-nums@google.com>
When perf stat is called with event groups and the repeat option,
the cpu ids are stored again on every iteration of the repeat, when
they should only be stored on the first iteration. The repeated
stores overflow the id buffer and cause a segfault.
This can be reproduced by building perf (from the tip directory):
make -C tools/perf
and then running:
tools/perf/perf stat -e '{cycles,instructions}' -r 10 ls
Since run_idx tracks the current iteration of the repeat, storing
the cpu ids only on the first iteration (when run_idx < 1) fixes
this issue.
Signed-off-by: Numfor Mbiziwo-Tiapo <nums@google.com>
---
tools/perf/builtin-stat.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 63a3afc7f32b..92d6694367e4 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -378,9 +378,10 @@ static void workload_exec_failed_signal(int signo __maybe_unused, siginfo_t *inf
workload_exec_errno = info->si_value.sival_int;
}
-static bool perf_evsel__should_store_id(struct perf_evsel *counter)
+static bool perf_evsel__should_store_id(struct perf_evsel *counter, int run_idx)
{
- return STAT_RECORD || counter->attr.read_format & PERF_FORMAT_ID;
+ return (STAT_RECORD || counter->attr.read_format & PERF_FORMAT_ID)
+ && run_idx < 1;
}
static bool is_target_alive(struct target *_target,
@@ -503,7 +504,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
if (l > stat_config.unit_width)
stat_config.unit_width = l;
- if (perf_evsel__should_store_id(counter) &&
+ if (perf_evsel__should_store_id(counter, run_idx) &&
perf_evsel__store_ids(counter, evsel_list))
return -1;
}
--
2.22.0.410.gd8fdbe21b5-goog
Thread overview: 12+ messages
2019-07-10 20:45 Numfor Mbiziwo-Tiapo [this message]
2019-07-11 4:44 ` [PATCH] Fix perf stat repeat segfault Ravi Bangoria
2019-07-14 20:44 ` Jiri Olsa
2019-07-14 20:55 ` Jiri Olsa
2019-07-14 21:36 ` Stephane Eranian
2019-07-15 7:59 ` Jiri Olsa
2019-07-15 8:14 ` Stephane Eranian
2019-07-15 8:31 ` Jiri Olsa
2019-07-15 14:21 ` [PATCH] perf stat: Fix segfault for event group in repeat mode Jiri Olsa
2019-07-16 18:48 ` Arnaldo Carvalho de Melo
2019-07-23 21:49 ` [tip:perf/urgent] " tip-bot for Jiri Olsa
2019-07-11 17:21 [PATCH] Fix perf stat repeat segfault Numfor Mbiziwo-Tiapo