From: Namhyung Kim <namhyung@kernel.org>
To: Jiri Olsa <jolsa@redhat.com>
Cc: Wei Li <liwei391@huawei.com>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Mark Rutland <mark.rutland@arm.com>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Andi Kleen <ak@linux.intel.com>,
Alexey Budankov <alexey.budankov@linux.intel.com>,
Adrian Hunter <adrian.hunter@intel.com>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
linux-kernel <linux-kernel@vger.kernel.org>,
linux-arm-kernel@lists.infradead.org,
Li Bin <huawei.libin@huawei.com>
Subject: Re: [PATCH 1/2] perf stat: Fix segfault when counting armv8_pmu events
Date: Wed, 23 Sep 2020 23:15:06 +0900
Message-ID: <CAM9d7cgT4qLH0mPM1nTRa-FYwjMOc4LOCUD_X0r21hdUUVLpRA@mail.gmail.com>
In-Reply-To: <20200923140747.GN2893484@krava>
On Wed, Sep 23, 2020 at 11:08 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Wed, Sep 23, 2020 at 10:49:52PM +0900, Namhyung Kim wrote:
> > On Wed, Sep 23, 2020 at 2:44 PM Jiri Olsa <jolsa@redhat.com> wrote:
> > >
> > > On Tue, Sep 22, 2020 at 11:13:45AM +0800, Wei Li wrote:
> > > > When executing perf stat with armv8_pmu events with a workload, it will
> > > > report a segfault as result.
> > >
> > > please share the perf stat command line you see that segfault for
> >
> > It seems the description in the patch 0/2 already has it:
> >
> > [root@localhost hulk]# tools/perf/perf stat -e armv8_pmuv3_0/ll_cache_rd/,armv8_pmuv3_0/ll_cache_miss_rd/ ls > /dev/null
> > Segmentation fault
>
> yeah, I found it, but I can't reproduce it.. I see the issue from
> patch 2, but I'm not sure what the problem is so far
I think the problem is that armv8_pmu has a cpumask,
and the user requested per-task events.
The code tried to open the event with a dummy cpu map
since it's not a cpu event, but the pmu has a cpumask
which is passed to the evsel. So there's confusion somewhere
about whether it should use evsel->cpus or a dummy map.
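
To illustrate the kind of mismatch I mean, here is a minimal,
self-contained sketch. It is not the actual perf code; every name
in it (dummy_map_nr, pmu_cpumask_nr, counts) is made up for the
example. It just shows how sizing a buffer from one cpu map while
iterating another runs past the end of the allocation:

/*
 * Minimal, self-contained illustration of the mismatch described
 * above.  This is NOT the perf code; every name here (dummy_map_nr,
 * pmu_cpumask_nr, counts) is made up for the example.
 */
#include <stdio.h>
#include <stdlib.h>

struct counts { unsigned long long val; };

int main(void)
{
	/*
	 * Per-task open path: the event is not a cpu event, so a
	 * "dummy" cpu map with a single entry is used and the counts
	 * buffer is sized from it.
	 */
	int dummy_map_nr = 1;
	struct counts *counts = calloc(dummy_map_nr, sizeof(*counts));

	/*
	 * But the PMU advertised a cpumask (say 4 CPUs) and that map
	 * ended up on the evsel.  If the read/aggregate loop iterates
	 * the evsel's cpu map instead of the dummy one, indices 1..3
	 * point past the end of 'counts'.
	 */
	int pmu_cpumask_nr = 4;

	for (int cpu = 0; cpu < pmu_cpumask_nr; cpu++) {
		if (cpu >= dummy_map_nr) {
			printf("index %d is out of bounds for a buffer of "
			       "%d entries -- this is where it would crash\n",
			       cpu, dummy_map_nr);
			continue;
		}
		counts[cpu].val = 0;	/* in-bounds access is fine */
	}

	free(counts);
	return 0;
}

Either way, the open path and the read path need to agree on which
map the buffers are sized from, evsel->cpus or the dummy map.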
Thanks
Namhyung
Thread overview: 15+ messages
2020-09-22 3:13 [PATCH 0/2] perf stat: Unbreak perf stat with ARMv8 PMU events Wei Li
2020-09-22 3:13 ` [PATCH 1/2] perf stat: Fix segfault when counting armv8_pmu events Wei Li
2020-09-22 19:23 ` Andi Kleen
2020-09-22 19:50 ` Andi Kleen
2020-09-24 14:14 ` liwei (GF)
2020-09-23 5:44 ` Jiri Olsa
2020-09-23 13:49 ` Namhyung Kim
2020-09-23 14:07 ` Jiri Olsa
2020-09-23 14:15 ` Namhyung Kim [this message]
2020-09-23 20:19 ` Jiri Olsa
2020-09-24 14:36 ` Namhyung Kim
2020-09-25 21:01 ` Jiri Olsa
2020-10-02 8:59 ` Jiri Olsa
2020-10-06 6:51 ` Song Bao Hua (Barry Song)
2020-09-22 3:13 ` [PATCH 2/2] perf stat: Unbreak perf stat with " Wei Li