From: Song Liu <songliubraving@fb.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: open list <linux-kernel@vger.kernel.org>,
Kernel Team <Kernel-team@fb.com>,
Arnaldo Carvalho de Melo <acme@redhat.com>,
Jiri Olsa <jolsa@kernel.org>,
Alexey Budankov <alexey.budankov@linux.intel.com>,
Namhyung Kim <namhyung@kernel.org>, Tejun Heo <tj@kernel.org>
Subject: Re: [PATCH v9] perf: Sharing PMU counters across compatible events
Date: Fri, 10 Jan 2020 17:37:45 +0000 [thread overview]
Message-ID: <CF654118-59C1-46AA-B9DB-CA14D9FFACF7@fb.com> (raw)
In-Reply-To: <20200110125944.GJ2844@hirez.programming.kicks-ass.net>
Hi Peter,
Thanks for your review!
> On Jan 10, 2020, at 4:59 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Dec 17, 2019 at 09:59:48AM -0800, Song Liu wrote:
>
> This is starting to look good, find a few comments below.
>
>> include/linux/perf_event.h | 13 +-
>> kernel/events/core.c | 363 ++++++++++++++++++++++++++++++++-----
>> 2 files changed, 332 insertions(+), 44 deletions(-)
>>
>> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
>> index 6d4c22aee384..45a346ee33d2 100644
>> --- a/include/linux/perf_event.h
>> +++ b/include/linux/perf_event.h
>> @@ -547,7 +547,9 @@ enum perf_event_state {
>> PERF_EVENT_STATE_ERROR = -2,
>> PERF_EVENT_STATE_OFF = -1,
>> PERF_EVENT_STATE_INACTIVE = 0,
>> - PERF_EVENT_STATE_ACTIVE = 1,
>> + /* the hw PMC is enabled, but this event is not counting */
>> + PERF_EVENT_STATE_ENABLED = 1,
>> + PERF_EVENT_STATE_ACTIVE = 2,
>> };
>
> It's probably best to extend the comment above instead of adding a
> comment for one of the states.
Will update.
>
>>
>> struct file;
>
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 4ff86d57f9e5..7d4b6ac46de5 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -1657,6 +1657,181 @@ perf_event_groups_next(struct perf_event *event)
>> event = rb_entry_safe(rb_next(&event->group_node), \
>> typeof(*event), group_node))
>>
>> +static inline bool perf_event_can_share(struct perf_event *event)
>> +{
>> + /* only share hardware counting events */
>> + return !is_sampling_event(event);
>> + return !is_software_event(event) && !is_sampling_event(event);
>
> One of those return statements is too many; I'm thinking you meant to
> only have the second.
Exactly! The first return statement is a leftover from my VM tests. Sorry for the confusion.
>
>> +}
>> +
[...]
>> + active_count = event->dup_active_count;
>> + perf_event_exit_dup_master(event);
>> +
>> + if (!count)
>> + return;
>> +
>> + if (count == 1) {
>> + /* no more sharing */
>> + new_master->dup_master = NULL;
>> + } else {
>> + perf_event_init_dup_master(new_master);
>> + new_master->dup_active_count = active_count;
>> + }
>> +
>> + if (active_count) {
>
> Would it make sense to do something like:
>
> new_master->hw.idx = event->hw.idx;
>
> That should ensure x86_schedule_events() can use the fast path;
> after all, we're adding back the 'same' event. If we do this; this wants
> a comment though.
I think this makes sense for x86, but maybe not as much for other architectures.
For example, it is most likely a no-op for RISC-V. Maybe we can add a new API
to struct pmu, like "void (*copy_hw_config)(struct perf_event *from, struct perf_event *to)".
For x86, it would be something like:

static void x86_copy_hw_config(struct perf_event *from, struct perf_event *to)
{
	to->hw.idx = from->hw.idx;
}
>
>> + WARN_ON_ONCE(event->pmu->add(new_master, PERF_EF_START));
>
> For consistency that probably ought to be:
>
> new_master->pmu->add(new_master, PERF_EF_START);
Will fix.
>
>> + if (new_master->state == PERF_EVENT_STATE_INACTIVE)
>> + new_master->state = PERF_EVENT_STATE_ENABLED;
>
> If this really should not be perf_event_set_state() we need a comment
> explaining why -- I think I see, but it's still early and I've not had
> nearly enough tea to wake me up.
Will add comment.
[...]
>>
>> @@ -2242,9 +2494,9 @@ static void __perf_event_disable(struct perf_event *event,
>> }
>>
>> if (event == event->group_leader)
>> - group_sched_out(event, cpuctx, ctx);
>> + group_sched_out(event, cpuctx, ctx, true);
>> else
>> - event_sched_out(event, cpuctx, ctx);
>> + event_sched_out(event, cpuctx, ctx, true);
>>
>> perf_event_set_state(event, PERF_EVENT_STATE_OFF);
>> }
>
> So the above event_sched_out(.remove_dup) is very inconsistent with the
> below ctx_resched(.event_add_dup).
[...]
>> @@ -2810,7 +3069,7 @@ static void __perf_event_enable(struct perf_event *event,
>> if (ctx->task)
>> WARN_ON_ONCE(task_ctx != ctx);
>>
>> - ctx_resched(cpuctx, task_ctx, get_event_type(event));
>> + ctx_resched(cpuctx, task_ctx, get_event_type(event), event);
>> }
>>
>> /*
>
> We basically need:
>
> * perf_event_setup_dup() after add_event_to_ctx(), but before *sched_in()
> - perf_install_in_context()
> - perf_event_enable()
> - inherit_event()
>
> * perf_event_remove_dup() after *sched_out(), but before list_del_event()
> - perf_remove_from_context()
> - perf_event_disable()
>
> AFAICT we can do that without changing *sched_out() and ctx_resched(),
> with probably less lines changed over all.
We currently need these changes to sched_out() and ctx_resched() because we
only do setup_dup() and remove_dup() when the whole ctx is scheduled out.
Maybe that is not really necessary? I am not sure whether the simpler code
would need more reschedules. Let me take a closer look...
>
>> @@ -4051,6 +4310,9 @@ static void __perf_event_read(void *info)
>>
>> static inline u64 perf_event_count(struct perf_event *event)
>> {
>> + if (event->dup_master == event)
>> + return local64_read(&event->master_count) +
>> + atomic64_read(&event->master_child_count);
>
> Wants {}
Will fix.
Thanks again,
Song
Thread overview: 5+ messages
2019-12-17 17:59 [PATCH v9] perf: Sharing PMU counters across compatible events Song Liu
2020-01-02 18:45 ` Song Liu
2020-01-10 12:59 ` Peter Zijlstra
2020-01-10 17:37 ` Song Liu [this message]
2020-01-16 23:59 ` Song Liu