From: Mark Rutland <>
To: Peter Zijlstra <>
	Alexander Shishkin <>,
	Arnaldo Carvalho de Melo <>,
	Ingo Molnar <>, Will Deacon <>
Subject: Re: [PATCH] perf: fix pmu::filter_match for SW-led groups
Date: Mon, 4 Jul 2016 19:05:35 +0100	[thread overview]
Message-ID: <20160704180534.GD9048@leverpostej> (raw)
In-Reply-To: <>

On Sat, Jul 02, 2016 at 06:40:25PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 14, 2016 at 04:10:41PM +0100, Mark Rutland wrote:
> > However, pmu::filter_match is only called for the leader of each event
> > group. When the leader is a SW event, we do not filter the groups, and
> > may fail at pmu::add time, and when this happens we'll give up on
> > scheduling any event groups later in the list until they are rotated
> > ahead of the failing group.
> Ha! indeed.
> > I've tried to find a better way of handling this (without needing to walk the
> > siblings list), but so far I'm at a loss. At least it's "only" O(n) in the size
> > of the sibling list we were going to walk anyway.
> > 
> > I suspect that at a more fundamental level, I need to stop sharing a
> > perf_hw_context between HW PMUs (i.e. replace task_struct::perf_event_ctxp with
> > something that can handle multiple HW PMUs). From previous attempts I'm not
> > sure if that's going to be possible.
> > 
> > Any ideas appreciated!
> So I think I have half-cooked ideas.
> One of the problems I've been wanting to solve for a long time is that
> the per-cpu flexible list has priority over the per-task flexible list.
> I would like them to rotate together.

Makes sense.

> One of the ways I was looking at getting that done is a virtual runtime
> scheduler (just like cfs). The tricky point is merging two virtual
> runtime trees. But I think that should be doable if we sort the trees on
> lag.
> In any case, the relevance to your question is that once we have a tree,
> we can play games with order; that is, if we first order on PMU-id and
> only second on lag, we get whole subtree clusters specific for a PMU.

Hmm... I'm not sure how that helps in this case. Wouldn't we still need
to walk the sibling list to get the HW PMU-id in the case of a SW group?

For the heterogeneous case we'd need a different sort order per-cpu
(well, per microarchitecture), which sounds like we're going to have to
fully sort the events every time they move between CPUs. :/

> Lots of details missing in that picture, but I think something along
> those lines might get us what we want.

Perhaps! Hopefully I'm just missing those details above. :)

I also had another thought about solving the SW-led group case: if the
leader had a reference to the group's HW PMU (of which there should only
be one), we can filter on that alone, and can also use that in
group_sched_in rather than the ctx->pmu, avoiding the issue that
ctx->pmu is not the same as the group's HW PMU.

I'll have a play with that approach in the mean time.



Thread overview: 8+ messages
2016-06-14 15:10 Mark Rutland
2016-07-02 16:40 ` Peter Zijlstra
2016-07-04 18:05   ` Mark Rutland [this message]
2016-07-05  8:35     ` Peter Zijlstra
2016-07-05  9:44       ` Mark Rutland
2016-07-05 12:04         ` Peter Zijlstra
2016-07-05 12:52           ` Mark Rutland
2016-07-07  8:31 ` [tip:perf/core] perf/core: Fix " tip-bot for Mark Rutland

