* Re: Perf hotplug lockup in v4.9-rc8
2016-12-09 13:59 ` Peter Zijlstra
@ 2016-12-12 11:46 ` Will Deacon
2016-12-12 12:42 ` Peter Zijlstra
2017-01-11 14:59 ` Mark Rutland
2017-01-14 12:28 ` [tip:perf/urgent] perf/core: Fix sys_perf_event_open() vs. hotplug tip-bot for Peter Zijlstra
2 siblings, 1 reply; 17+ messages in thread
From: Will Deacon @ 2016-12-12 11:46 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Mark Rutland, linux-kernel, Ingo Molnar,
Arnaldo Carvalho de Melo, Thomas Gleixner,
Sebastian Andrzej Siewior, jeremy.linton
On Fri, Dec 09, 2016 at 02:59:00PM +0100, Peter Zijlstra wrote:
> On Wed, Dec 07, 2016 at 07:34:55PM +0100, Peter Zijlstra wrote:
>
> > @@ -2352,6 +2357,28 @@ perf_install_in_context(struct perf_event_context *ctx,
> > return;
> > }
> > raw_spin_unlock_irq(&ctx->lock);
> > +
> > + raw_spin_lock_irq(&task->pi_lock);
> > + if (!(task->state == TASK_RUNNING || task->state == TASK_WAKING)) {
> > + /*
> > + * XXX horrific hack...
> > + */
> > + raw_spin_lock(&ctx->lock);
> > + if (task != ctx->task) {
> > + raw_spin_unlock(&ctx->lock);
> > + raw_spin_unlock_irq(&task->pi_lock);
> > + goto again;
> > + }
> > +
> > + add_event_to_ctx(event, ctx);
> > + raw_spin_unlock(&ctx->lock);
> > + raw_spin_unlock_irq(&task->pi_lock);
> > + return;
> > + }
> > + raw_spin_unlock_irq(&task->pi_lock);
> > +
> > + cond_resched();
> > +
> > /*
> > * Since !ctx->is_active doesn't mean anything, we must IPI
> > * unconditionally.
>
> So while I went back and forth trying to make that less ugly, I figured
> there was another problem.
>
> Imagine the cpu_function_call() hitting the 'right' cpu, but not finding
> the task current. It will then continue to install the event in the
> context. However, that doesn't stop another CPU from pulling the task in
> question from our rq and scheduling it elsewhere.
>
> This all led me to the below patch... Now it has a rather large comment,
> and while it represents my current thinking on the matter, I'm not at
> all sure it's entirely correct. I got my brain in a fair twist while
> writing it.
>
> Please think carefully about it.
>
> ---
> kernel/events/core.c | 70 +++++++++++++++++++++++++++++++++++-----------------
> 1 file changed, 48 insertions(+), 22 deletions(-)
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 6ee1febdf6ff..7d9ae461c535 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -2252,7 +2252,7 @@ static int __perf_install_in_context(void *info)
> struct perf_event_context *ctx = event->ctx;
> struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
> struct perf_event_context *task_ctx = cpuctx->task_ctx;
> - bool activate = true;
> + bool reprogram = true;
> int ret = 0;
>
> raw_spin_lock(&cpuctx->ctx.lock);
> @@ -2260,27 +2260,26 @@ static int __perf_install_in_context(void *info)
> raw_spin_lock(&ctx->lock);
> task_ctx = ctx;
>
> - /* If we're on the wrong CPU, try again */
> - if (task_cpu(ctx->task) != smp_processor_id()) {
> - ret = -ESRCH;
> - goto unlock;
> - }
> + reprogram = (ctx->task == current);
>
> /*
> - * If we're on the right CPU, see if the task we target is
> - * current, if not we don't have to activate the ctx, a future
> - * context switch will do that for us.
> + * If the task is running, it must be running on this CPU,
> + * otherwise we cannot reprogram things.
> + *
> + * If its not running, we don't care, ctx->lock will
> + * serialize against it becoming runnable.
> */
> - if (ctx->task != current)
> - activate = false;
> - else
> - WARN_ON_ONCE(cpuctx->task_ctx && cpuctx->task_ctx != ctx);
> + if (task_curr(ctx->task) && !reprogram) {
> + ret = -ESRCH;
> + goto unlock;
> + }
>
> + WARN_ON_ONCE(reprogram && cpuctx->task_ctx && cpuctx->task_ctx != ctx);
> } else if (task_ctx) {
> raw_spin_lock(&task_ctx->lock);
> }
>
> - if (activate) {
> + if (reprogram) {
> ctx_sched_out(ctx, cpuctx, EVENT_TIME);
> add_event_to_ctx(event, ctx);
> ctx_resched(cpuctx, task_ctx);
> @@ -2331,13 +2330,36 @@ perf_install_in_context(struct perf_event_context *ctx,
> /*
> * Installing events is tricky because we cannot rely on ctx->is_active
> * to be set in case this is the nr_events 0 -> 1 transition.
> + *
> + * Instead we use task_curr(), which tells us if the task is running.
> + * However, since we use task_curr() outside of rq::lock, we can race
> + * against the actual state. This means the result can be wrong.
> + *
> + * If we get a false positive, we retry, this is harmless.
> + *
> + * If we get a false negative, things are complicated. If we are after
> + * perf_event_context_sched_in() ctx::lock will serialize us, and the
> + * value must be correct. If we're before, it doesn't matter since
> + * perf_event_context_sched_in() will program the counter.
> + *
> + * However, this hinges on the remote context switch having observed
> + * our task->perf_event_ctxp[] store, such that it will in fact take
> + * ctx::lock in perf_event_context_sched_in().
> + *
> + * We do this by task_function_call(), if the IPI fails to hit the task
> + * we know any future context switch of task must see the
> + * perf_event_ctpx[] store.
> */
> -again:
> +
> /*
> - * Cannot use task_function_call() because we need to run on the task's
> - * CPU regardless of whether its current or not.
> + * This smp_mb() orders the task->perf_event_ctxp[] store with the
> + * task_cpu() load, such that if the IPI then does not find the task
> + * running, a future context switch of that task must observe the
> + * store.
> */
> - if (!cpu_function_call(task_cpu(task), __perf_install_in_context, event))
> + smp_mb();
> +again:
> + if (!task_function_call(task, __perf_install_in_context, event))
> return;
I'm trying to figure out whether or not the barriers implied by the IPI
are sufficient here, or whether we really need the explicit smp_mb().
Certainly, arch_send_call_function_single_ipi has to order the publishing
of the remote work before the signalling of the interrupt, but the comment
above refers to "the task_cpu() load" and I can't see that after your
diff.
What are you trying to order here?
Will
>
> raw_spin_lock_irq(&ctx->lock);
> @@ -2351,12 +2373,16 @@ perf_install_in_context(struct perf_event_context *ctx,
> raw_spin_unlock_irq(&ctx->lock);
> return;
> }
> - raw_spin_unlock_irq(&ctx->lock);
> /*
> - * Since !ctx->is_active doesn't mean anything, we must IPI
> - * unconditionally.
> + * If the task is not running, ctx->lock will avoid it becoming so,
> + * thus we can safely install the event.
> */
> - goto again;
> + if (task_curr(task)) {
> + raw_spin_unlock_irq(&ctx->lock);
> + goto again;
> + }
> + add_event_to_ctx(event, ctx);
> + raw_spin_unlock_irq(&ctx->lock);
> }
>
> /*
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Perf hotplug lockup in v4.9-rc8
2016-12-12 11:46 ` Will Deacon
@ 2016-12-12 12:42 ` Peter Zijlstra
2016-12-22 8:45 ` Peter Zijlstra
0 siblings, 1 reply; 17+ messages in thread
From: Peter Zijlstra @ 2016-12-12 12:42 UTC (permalink / raw)
To: Will Deacon
Cc: Mark Rutland, linux-kernel, Ingo Molnar,
Arnaldo Carvalho de Melo, Thomas Gleixner,
Sebastian Andrzej Siewior, jeremy.linton
On Mon, Dec 12, 2016 at 11:46:40AM +0000, Will Deacon wrote:
> > @@ -2331,13 +2330,36 @@ perf_install_in_context(struct perf_event_context *ctx,
> > /*
> > * Installing events is tricky because we cannot rely on ctx->is_active
> > * to be set in case this is the nr_events 0 -> 1 transition.
> > + *
> > + * Instead we use task_curr(), which tells us if the task is running.
> > + * However, since we use task_curr() outside of rq::lock, we can race
> > + * against the actual state. This means the result can be wrong.
> > + *
> > + * If we get a false positive, we retry, this is harmless.
> > + *
> > + * If we get a false negative, things are complicated. If we are after
> > + * perf_event_context_sched_in() ctx::lock will serialize us, and the
> > + * value must be correct. If we're before, it doesn't matter since
> > + * perf_event_context_sched_in() will program the counter.
> > + *
> > + * However, this hinges on the remote context switch having observed
> > + * our task->perf_event_ctxp[] store, such that it will in fact take
> > + * ctx::lock in perf_event_context_sched_in().
> > + *
> > + * We do this by task_function_call(), if the IPI fails to hit the task
> > + * we know any future context switch of task must see the
> > + * perf_event_ctpx[] store.
> > */
> > +
> > /*
> > + * This smp_mb() orders the task->perf_event_ctxp[] store with the
> > + * task_cpu() load, such that if the IPI then does not find the task
> > + * running, a future context switch of that task must observe the
> > + * store.
> > */
> > + smp_mb();
> > +again:
> > + if (!task_function_call(task, __perf_install_in_context, event))
> > return;
>
> I'm trying to figure out whether or not the barriers implied by the IPI
> are sufficient here, or whether we really need the explicit smp_mb().
> Certainly, arch_send_call_function_single_ipi has to order the publishing
> of the remote work before the signalling of the interrupt, but the comment
> above refers to "the task_cpu() load" and I can't see that after your
> diff.
>
> What are you trying to order here?
I suppose something like this:
CPU0                        CPU1                        CPU2

                            (current == t)

t->perf_event_ctxp[] = ctx;
smp_mb();
cpu = task_cpu(t);

                            switch(t, n);
                                                        migrate(t, 2);
                                                        switch(p, t);

                                                        ctx = t->perf_event_ctxp[]; // must not be NULL

smp_function_call(cpu, ..);

                            generic_exec_single()
                              func();
                                spin_lock(ctx->lock);
                                if (task_curr(t)) // false

                                add_event_to_ctx();
                                spin_unlock(ctx->lock);

                                                        perf_event_context_sched_in();
                                                          spin_lock(ctx->lock);
                                                          // sees event
Where between setting the perf_event_ctxp[] and sending the IPI the task
moves away and the IPI misses, and while the new CPU is in the middle of
scheduling in t, it hasn't yet passed through perf_event_sched_in(), but
when it does, it _must_ observe the ctx value we stored.
My thinking was that the IPI itself is not sufficient since when it
misses the task, nothing then guarantees we see the store. However, if
we order the store and the task_cpu() load, then any context
switching/migrating involved with changing that value, should ensure we
see our prior store.
Of course, even now writing this, I'm still confused.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Perf hotplug lockup in v4.9-rc8
2016-12-12 12:42 ` Peter Zijlstra
@ 2016-12-22 8:45 ` Peter Zijlstra
2016-12-22 14:00 ` Peter Zijlstra
0 siblings, 1 reply; 17+ messages in thread
From: Peter Zijlstra @ 2016-12-22 8:45 UTC (permalink / raw)
To: Will Deacon
Cc: Mark Rutland, linux-kernel, Ingo Molnar,
Arnaldo Carvalho de Melo, Thomas Gleixner,
Sebastian Andrzej Siewior, jeremy.linton, Boqun Feng,
Paul McKenney
On Mon, Dec 12, 2016 at 01:42:28PM +0100, Peter Zijlstra wrote:
> On Mon, Dec 12, 2016 at 11:46:40AM +0000, Will Deacon wrote:
> > > @@ -2331,13 +2330,36 @@ perf_install_in_context(struct perf_event_context *ctx,
> > > /*
> > > * Installing events is tricky because we cannot rely on ctx->is_active
> > > * to be set in case this is the nr_events 0 -> 1 transition.
> > > + *
> > > + * Instead we use task_curr(), which tells us if the task is running.
> > > + * However, since we use task_curr() outside of rq::lock, we can race
> > > + * against the actual state. This means the result can be wrong.
> > > + *
> > > + * If we get a false positive, we retry, this is harmless.
> > > + *
> > > + * If we get a false negative, things are complicated. If we are after
> > > + * perf_event_context_sched_in() ctx::lock will serialize us, and the
> > > + * value must be correct. If we're before, it doesn't matter since
> > > + * perf_event_context_sched_in() will program the counter.
> > > + *
> > > + * However, this hinges on the remote context switch having observed
> > > + * our task->perf_event_ctxp[] store, such that it will in fact take
> > > + * ctx::lock in perf_event_context_sched_in().
> > > + *
> > > + * We do this by task_function_call(), if the IPI fails to hit the task
> > > + * we know any future context switch of task must see the
> > > + * perf_event_ctpx[] store.
> > > */
> > > +
> > > /*
> > > + * This smp_mb() orders the task->perf_event_ctxp[] store with the
> > > + * task_cpu() load, such that if the IPI then does not find the task
> > > + * running, a future context switch of that task must observe the
> > > + * store.
> > > */
> > > + smp_mb();
> > > +again:
> > > + if (!task_function_call(task, __perf_install_in_context, event))
> > > return;
> >
> > I'm trying to figure out whether or not the barriers implied by the IPI
> > are sufficient here, or whether we really need the explicit smp_mb().
> > Certainly, arch_send_call_function_single_ipi has to order the publishing
> > of the remote work before the signalling of the interrupt, but the comment
> > above refers to "the task_cpu() load" and I can't see that after your
> > diff.
> >
> > What are you trying to order here?
>
> I suppose something like this:
>
>
> CPU0 CPU1 CPU2
>
> (current == t)
>
> t->perf_event_ctxp[] = ctx;
> smp_mb();
> cpu = task_cpu(t);
>
> switch(t, n);
> migrate(t, 2);
> switch(p, t);
>
> ctx = t->perf_event_ctxp[]; // must not be NULL
>
So I think I can cast the above into a test like:
W[x] = 1        W[y] = 1        R[z] = 1
mb              mb              mb
R[y] = 0        W[z] = 1        R[x] = 0
Where x is the perf_event_ctxp[], y is our task's cpu and z is our task
being placed on the rq of cpu2.
See also commit: 8643cda549ca ("sched/core, locking: Document
Program-Order guarantees"). Independent of which CPU initiates the
migration between CPU1 and CPU2, there is ordering between the CPUs.
This would then translate into something like:
C C-peterz

{
}

P0(int *x, int *y)
{
	int r1;

	WRITE_ONCE(*x, 1);
	smp_mb();
	r1 = READ_ONCE(*y);
}

P1(int *y, int *z)
{
	WRITE_ONCE(*y, 1);
	smp_mb();
	WRITE_ONCE(*z, 1);
}

P2(int *x, int *z)
{
	int r1;
	int r2;

	r1 = READ_ONCE(*z);
	smp_mb();
	r2 = READ_ONCE(*x);
}

exists
(0:r1=0 /\ 2:r1=1 /\ 2:r2=0)
Which evaluates into:
Test C-peterz Allowed
States 7
0:r1=0; 2:r1=0; 2:r2=0;
0:r1=0; 2:r1=0; 2:r2=1;
0:r1=0; 2:r1=1; 2:r2=1;
0:r1=1; 2:r1=0; 2:r2=0;
0:r1=1; 2:r1=0; 2:r2=1;
0:r1=1; 2:r1=1; 2:r2=0;
0:r1=1; 2:r1=1; 2:r2=1;
No
Witnesses
Positive: 0 Negative: 7
Condition exists (0:r1=0 /\ 2:r1=1 /\ 2:r2=0)
Observation C-peterz Never 0 7
Hash=661589febb9e41b222d8acae1fd64e25
And the strong and weak model agree.
> smp_function_call(cpu, ..);
>
> generic_exec_single()
> func();
> spin_lock(ctx->lock);
> if (task_curr(t)) // false
>
> add_event_to_ctx();
> spin_unlock(ctx->lock);
>
> perf_event_context_sched_in();
> spin_lock(ctx->lock);
> // sees event
>
>
> Where between setting the perf_event_ctxp[] and sending the IPI the task
> moves away and the IPI misses, and while the new CPU is in the middle of
> scheduling in t, it hasn't yet passed through perf_event_sched_in(), but
> when it does, it _must_ observe the ctx value we stored.
>
> My thinking was that the IPI itself is not sufficient since when it
> misses the task, nothing then guarantees we see the store. However, if
> we order the store and the task_cpu() load, then any context
> switching/migrating involved with changing that value, should ensure we
> see our prior store.
>
> Of course, even now writing this, I'm still confused.
On IRC you said:
: I think it's similar to the "ISA2" litmus test, only the first reads-from edge is an IPI and the second is an Unlock->Lock
In case the IPI misses, we cannot use the IPI itself for anything I'm
afraid, also per the above we don't need to.
: the case I'm more confused by is if CPU2 takes the ctx->lock before CPU1
: I'm guessing that's prevented by the way migration works?
So same scenario but CPU2 takes the ctx->lock first. In that case it
will not observe our event and do nothing. CPU1 will then acquire
ctx->lock, this then implies ordering against CPU2, which means it
_must_ observe task_curr() && task != current and it too will not do
anything but we'll loop and try the whole thing again.
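[Editor's note: that unlock/lock argument can also be cast as a litmus test. The sketch below is a reconstruction, not something posted in the thread; it models ctx->lock with the spinlock primitives that later versions of the herd Linux-kernel model understand, with `curr` standing for t being current on CPU2 and `event` for the event being in the context. Litmus tests have no branches, so the event write is unconditional here.]

```
C C-ctx-unlock-lock

{
}

P0(spinlock_t *l, int *event, int *curr)
{
	int r0;

	spin_lock(l);			/* the install side */
	r0 = READ_ONCE(*curr);		/* task_curr(t) */
	WRITE_ONCE(*event, 1);		/* add_event_to_ctx() */
	spin_unlock(l);
}

P1(spinlock_t *l, int *event, int *curr)
{
	int r1;

	WRITE_ONCE(*curr, 1);		/* t becomes current on CPU2 */
	spin_lock(l);			/* perf_event_context_sched_in() */
	r1 = READ_ONCE(*event);
	spin_unlock(l);
}

exists
(0:r0=0 /\ 1:r1=0)
```

The exists clause is the bad case: the installer concluding the task is not running while sched-in also misses the event. Whichever CPU takes the lock second must observe the other's prior write through the unlock->lock ordering, so this outcome should be forbidden, matching the prose argument above.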
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Perf hotplug lockup in v4.9-rc8
2016-12-22 8:45 ` Peter Zijlstra
@ 2016-12-22 14:00 ` Peter Zijlstra
2016-12-22 16:33 ` Paul E. McKenney
0 siblings, 1 reply; 17+ messages in thread
From: Peter Zijlstra @ 2016-12-22 14:00 UTC (permalink / raw)
To: Will Deacon
Cc: Mark Rutland, linux-kernel, Ingo Molnar,
Arnaldo Carvalho de Melo, Thomas Gleixner,
Sebastian Andrzej Siewior, jeremy.linton, Boqun Feng,
Paul McKenney
On Thu, Dec 22, 2016 at 09:45:09AM +0100, Peter Zijlstra wrote:
> On Mon, Dec 12, 2016 at 01:42:28PM +0100, Peter Zijlstra wrote:
> > > What are you trying to order here?
> >
> > I suppose something like this:
> >
> >
> > CPU0 CPU1 CPU2
> >
> > (current == t)
> >
> > t->perf_event_ctxp[] = ctx;
> > smp_mb();
> > cpu = task_cpu(t);
> >
> > switch(t, n);
> > migrate(t, 2);
> > switch(p, t);
> >
> > ctx = t->perf_event_ctxp[]; // must not be NULL
> >
>
> So I think I can cast the above into a test like:
>
> W[x] = 1 W[y] = 1 R[z] = 1
> mb mb mb
> R[y] = 0 W[z] = 1 R[x] = 0
>
> Where x is the perf_event_ctxp[], y is our task's cpu and z is our task
> being placed on the rq of cpu2.
>
> See also commit: 8643cda549ca ("sched/core, locking: Document
> Program-Order guarantees"), Independent of which cpu initiates the
> migration between CPU1 and CPU2 there is ordering between the CPUs.
I think that when we assume RCpc locks, the above CPU1 mb ends up being
something like an smp_wmb() (i.e. non-transitive). CPU2 needs to do a
context switch between observing the task on its runqueue and getting to
switching in perf-events for the task, which keeps that a full mb.
Now, if only this model would have locks in ;-)
> This would then translate into something like:
>
> C C-peterz
>
> {
> }
>
> P0(int *x, int *y)
> {
> int r1;
>
> WRITE_ONCE(*x, 1);
> smp_mb();
> r1 = READ_ONCE(*y);
> }
>
> P1(int *y, int *z)
> {
> WRITE_ONCE(*y, 1);
> smp_mb();
And this modified to: smp_wmb()
> WRITE_ONCE(*z, 1);
> }
>
> P2(int *x, int *z)
> {
> int r1;
> int r2;
>
> r1 = READ_ONCE(*z);
> smp_mb();
> r2 = READ_ONCE(*x);
> }
>
> exists
> (0:r1=0 /\ 2:r1=1 /\ 2:r2=0)
Still results in the same outcome.
If however we change P2's barrier into a smp_rmb() it does become
possible, but as said above, there's a context switch in between which
implies a full barrier so no worries.
Similarly if I replace all accesses to z with smp_store_release() and
smp_load_acquire().
Of course, it's entirely possible the litmus test doesn't reflect
reality; I still find it somewhat hard to write these things.
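[Editor's note: for reference, the smp_store_release()/smp_load_acquire() variant described above would look something like the following. This is a reconstruction of the described change, not a test that was posted; the `/* was: */` comments mark the only differences from the earlier test.]

```
C C-peterz-relacq

{
}

P0(int *x, int *y)
{
	int r1;

	WRITE_ONCE(*x, 1);
	smp_mb();
	r1 = READ_ONCE(*y);
}

P1(int *y, int *z)
{
	WRITE_ONCE(*y, 1);
	smp_store_release(z, 1);	/* was: smp_mb(); WRITE_ONCE(*z, 1); */
}

P2(int *x, int *z)
{
	int r1;
	int r2;

	r1 = smp_load_acquire(z);	/* was: READ_ONCE(*z); smp_mb(); */
	r2 = READ_ONCE(*x);
}

exists
(0:r1=0 /\ 2:r1=1 /\ 2:r2=0)
```

Peter reports a similar result for this variant in the mail above.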
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Perf hotplug lockup in v4.9-rc8
2016-12-22 14:00 ` Peter Zijlstra
@ 2016-12-22 16:33 ` Paul E. McKenney
0 siblings, 0 replies; 17+ messages in thread
From: Paul E. McKenney @ 2016-12-22 16:33 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Will Deacon, Mark Rutland, linux-kernel, Ingo Molnar,
Arnaldo Carvalho de Melo, Thomas Gleixner,
Sebastian Andrzej Siewior, jeremy.linton, Boqun Feng
On Thu, Dec 22, 2016 at 03:00:10PM +0100, Peter Zijlstra wrote:
> On Thu, Dec 22, 2016 at 09:45:09AM +0100, Peter Zijlstra wrote:
> > On Mon, Dec 12, 2016 at 01:42:28PM +0100, Peter Zijlstra wrote:
>
> > > > What are you trying to order here?
> > >
> > > I suppose something like this:
> > >
> > >
> > > CPU0 CPU1 CPU2
> > >
> > > (current == t)
> > >
> > > t->perf_event_ctxp[] = ctx;
> > > smp_mb();
> > > cpu = task_cpu(t);
> > >
> > > switch(t, n);
> > > migrate(t, 2);
> > > switch(p, t);
> > >
> > > ctx = t->perf_event_ctxp[]; // must not be NULL
> > >
> >
> > So I think I can cast the above into a test like:
> >
> > W[x] = 1 W[y] = 1 R[z] = 1
> > mb mb mb
> > R[y] = 0 W[z] = 1 R[x] = 0
> >
> > Where x is the perf_event_ctxp[], y is our task's cpu and z is our task
> > being placed on the rq of cpu2.
> >
> > See also commit: 8643cda549ca ("sched/core, locking: Document
> > Program-Order guarantees"), Independent of which cpu initiates the
> > migration between CPU1 and CPU2 there is ordering between the CPUs.
>
> I think that when we assume RCpc locks, the above CPU1 mb ends up being
> something like an smp_wmb() (ie. non transitive). CPU2 needs to do a
> context switch between observing the task on its runqueue and getting to
> switching in perf-events for the task, which keeps that a full mb.
>
> Now, if only this model would have locks in ;-)
Yeah, we are slow. ;-)
But you should be able to emulate them with xchg_acquire() and
smp_store_release().
Thanx, Paul
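[Editor's note: Paul's lock emulation could be sketched roughly as below. This is an illustration, not from the thread; litmus tests have no loops, so the xchg_acquire() stands in for an uncontended spin_lock(), and the unlock value 2 on P0 lets the exists clause pick out executions where P1 took the lock after P0 released it.]

```
C C-emulated-lock

{
}

P0(int *lock, int *data)
{
	int r0;

	r0 = xchg_acquire(lock, 1);	/* spin_lock(); acquired if r0 == 0 */
	WRITE_ONCE(*data, 1);
	smp_store_release(lock, 2);	/* spin_unlock(); 2 marks "P0 was here" */
}

P1(int *lock, int *data)
{
	int r0;
	int r1;

	r0 = xchg_acquire(lock, 1);	/* spin_lock() */
	r1 = READ_ONCE(*data);
	smp_store_release(lock, 0);	/* spin_unlock() */
}

exists
(1:r0=2 /\ 1:r1=0)
```

If P1's acquisition reads the value 2, it acquired via P0's release, so it must also observe the data store; the outcome should therefore be forbidden, giving the same unlock->lock ordering a native spinlock would.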
> > This would then translate into something like:
> >
> > C C-peterz
> >
> > {
> > }
> >
> > P0(int *x, int *y)
> > {
> > int r1;
> >
> > WRITE_ONCE(*x, 1);
> > smp_mb();
> > r1 = READ_ONCE(*y);
> > }
> >
> > P1(int *y, int *z)
> > {
> > WRITE_ONCE(*y, 1);
> > smp_mb();
>
> And this modified to: smp_wmb()
>
> > WRITE_ONCE(*z, 1);
> > }
> >
> > P2(int *x, int *z)
> > {
> > int r1;
> > int r2;
> >
> > r1 = READ_ONCE(*z);
> > smp_mb();
> > r2 = READ_ONCE(*x);
> > }
> >
> > exists
> > (0:r1=0 /\ 2:r1=1 /\ 2:r2=0)
>
> Still results in the same outcome.
>
> If however we change P2's barrier into a smp_rmb() it does become
> possible, but as said above, there's a context switch in between which
> implies a full barrier so no worries.
>
> Similar if I replace everything z with smp_store_release() and
> smp_load_acquire().
>
>
> Of course, it's entirely possible the litmus test doesn't reflect
> reality; I still find it somewhat hard to write these things.
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Perf hotplug lockup in v4.9-rc8
2016-12-09 13:59 ` Peter Zijlstra
2016-12-12 11:46 ` Will Deacon
@ 2017-01-11 14:59 ` Mark Rutland
2017-01-11 16:03 ` Peter Zijlstra
2017-01-14 12:28 ` [tip:perf/urgent] perf/core: Fix sys_perf_event_open() vs. hotplug tip-bot for Peter Zijlstra
2 siblings, 1 reply; 17+ messages in thread
From: Mark Rutland @ 2017-01-11 14:59 UTC (permalink / raw)
To: Peter Zijlstra
Cc: linux-kernel, Ingo Molnar, Arnaldo Carvalho de Melo,
Thomas Gleixner, Sebastian Andrzej Siewior, jeremy.linton,
Will Deacon
Hi Peter,
Sorry for the delay; this fell into my backlog over the holiday.
On Fri, Dec 09, 2016 at 02:59:00PM +0100, Peter Zijlstra wrote:
> So while I went back and forth trying to make that less ugly, I figured
> there was another problem.
>
> Imagine the cpu_function_call() hitting the 'right' cpu, but not finding
> the task current. It will then continue to install the event in the
> context. However, that doesn't stop another CPU from pulling the task in
> question from our rq and scheduling it elsewhere.
>
> This all led me to the below patch... Now it has a rather large comment,
> and while it represents my current thinking on the matter, I'm not at
> all sure it's entirely correct. I got my brain in a fair twist while
> writing it.
>
> Please think carefully about it.
FWIW, I've given the below a spin on a few systems, and with it applied
my reproducer no longer triggers the issue.
Unfortunately, most of the ordering concerns have gone over my head. :/
> @@ -2331,13 +2330,36 @@ perf_install_in_context(struct perf_event_context *ctx,
> /*
> * Installing events is tricky because we cannot rely on ctx->is_active
> * to be set in case this is the nr_events 0 -> 1 transition.
> + *
> + * Instead we use task_curr(), which tells us if the task is running.
> + * However, since we use task_curr() outside of rq::lock, we can race
> + * against the actual state. This means the result can be wrong.
> + *
> + * If we get a false positive, we retry, this is harmless.
> + *
> + * If we get a false negative, things are complicated. If we are after
> + * perf_event_context_sched_in() ctx::lock will serialize us, and the
> + * value must be correct. If we're before, it doesn't matter since
> + * perf_event_context_sched_in() will program the counter.
> + *
> + * However, this hinges on the remote context switch having observed
> + * our task->perf_event_ctxp[] store, such that it will in fact take
> + * ctx::lock in perf_event_context_sched_in().
Sorry if I'm being thick here, but which store are we describing above?
i.e. which function, how does that relate to perf_install_in_context()?
I haven't managed to wrap my head around why this matters. :/
Thanks,
Mark.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Perf hotplug lockup in v4.9-rc8
2017-01-11 14:59 ` Mark Rutland
@ 2017-01-11 16:03 ` Peter Zijlstra
2017-01-11 16:26 ` Mark Rutland
2017-01-11 19:51 ` Peter Zijlstra
0 siblings, 2 replies; 17+ messages in thread
From: Peter Zijlstra @ 2017-01-11 16:03 UTC (permalink / raw)
To: Mark Rutland
Cc: linux-kernel, Ingo Molnar, Arnaldo Carvalho de Melo,
Thomas Gleixner, Sebastian Andrzej Siewior, jeremy.linton,
Will Deacon
On Wed, Jan 11, 2017 at 02:59:20PM +0000, Mark Rutland wrote:
> Hi Peter,
>
> Sorry for the delay; this fell into my backlog over the holiday.
>
> On Fri, Dec 09, 2016 at 02:59:00PM +0100, Peter Zijlstra wrote:
> > So while I went back and forth trying to make that less ugly, I figured
> > there was another problem.
> >
> > Imagine the cpu_function_call() hitting the 'right' cpu, but not finding
> > the task current. It will then continue to install the event in the
> > context. However, that doesn't stop another CPU from pulling the task in
> > question from our rq and scheduling it elsewhere.
> >
> > This all led me to the below patch... Now it has a rather large comment,
> > and while it represents my current thinking on the matter, I'm not at
> > all sure it's entirely correct. I got my brain in a fair twist while
> > writing it.
> >
> > Please think carefully about it.
>
> FWIW, I've given the below a spin on a few systems, and with it applied
> my reproducer no longer triggers the issue.
>
> Unfortunately, most of the ordering concerns have gone over my head. :/
>
> > @@ -2331,13 +2330,36 @@ perf_install_in_context(struct perf_event_context *ctx,
> > /*
> > * Installing events is tricky because we cannot rely on ctx->is_active
> > * to be set in case this is the nr_events 0 -> 1 transition.
> > + *
> > + * Instead we use task_curr(), which tells us if the task is running.
> > + * However, since we use task_curr() outside of rq::lock, we can race
> > + * against the actual state. This means the result can be wrong.
> > + *
> > + * If we get a false positive, we retry, this is harmless.
> > + *
> > + * If we get a false negative, things are complicated. If we are after
> > + * perf_event_context_sched_in() ctx::lock will serialize us, and the
> > + * value must be correct. If we're before, it doesn't matter since
> > + * perf_event_context_sched_in() will program the counter.
> > + *
> > + * However, this hinges on the remote context switch having observed
> > + * our task->perf_event_ctxp[] store, such that it will in fact take
> > + * ctx::lock in perf_event_context_sched_in().
>
> Sorry if I'm being thick here, but which store are we describing above?
> i.e. which function, how does that relate to perf_install_in_context()?
The only store to perf_event_ctxp[] of interest is the initial one in
find_get_context().
> I haven't managed to wrap my head around why this matters. :/
See the scenario from:
https://lkml.kernel.org/r/20161212124228.GE3124@twins.programming.kicks-ass.net
It's installing the first event on 't', which, concurrently with the
install, gets migrated to a third CPU.
CPU0                        CPU1                        CPU2

                            (current == t)

t->perf_event_ctxp[] = ctx;
smp_mb();
cpu = task_cpu(t);

                            switch(t, n);
                                                        migrate(t, 2);
                                                        switch(p, t);

                                                        ctx = t->perf_event_ctxp[]; // must not be NULL

smp_function_call(cpu, ..);

                            generic_exec_single()
                              func();
                                spin_lock(ctx->lock);
                                if (task_curr(t)) // false

                                add_event_to_ctx();
                                spin_unlock(ctx->lock);

                                                        perf_event_context_sched_in();
                                                          spin_lock(ctx->lock);
                                                          // sees event
So it's CPU0's store of t->perf_event_ctxp[] that must not go 'missing'.
Because if CPU2's load of that variable were to observe NULL, it would
not try to schedule the ctx and we'd have a task running without its
counter, which would be 'bad'.
As long as we observe !NULL, we'll acquire ctx->lock. If we acquire it
first and do not see the event yet, then CPU0 must observe task_curr()
and retry. If the install happens first, then we must see the event on
sched-in and all is well.
In any case, I'll try and write a proper Changelog for this...
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Perf hotplug lockup in v4.9-rc8
2017-01-11 16:03 ` Peter Zijlstra
@ 2017-01-11 16:26 ` Mark Rutland
2017-01-11 19:51 ` Peter Zijlstra
1 sibling, 0 replies; 17+ messages in thread
From: Mark Rutland @ 2017-01-11 16:26 UTC (permalink / raw)
To: Peter Zijlstra
Cc: linux-kernel, Ingo Molnar, Arnaldo Carvalho de Melo,
Thomas Gleixner, Sebastian Andrzej Siewior, jeremy.linton,
Will Deacon
On Wed, Jan 11, 2017 at 05:03:58PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 11, 2017 at 02:59:20PM +0000, Mark Rutland wrote:
> > On Fri, Dec 09, 2016 at 02:59:00PM +0100, Peter Zijlstra wrote:
> > > + * If we get a false negative, things are complicated. If we are after
> > > + * perf_event_context_sched_in() ctx::lock will serialize us, and the
> > > + * value must be correct. If we're before, it doesn't matter since
> > > + * perf_event_context_sched_in() will program the counter.
> > > + *
> > > + * However, this hinges on the remote context switch having observed
> > > + * our task->perf_event_ctxp[] store, such that it will in fact take
> > > + * ctx::lock in perf_event_context_sched_in().
> >
> > Sorry if I'm being thick here, but which store are we describing above?
> > i.e. which function, how does that relate to perf_install_in_context()?
>
> The only store to perf_event_ctxp[] of interest is the initial one in
> find_get_context().
Ah, I see. I'd missed the rcu_assign_pointer() when looking around for
an assignment.
> > I haven't managed to wrap my head around why this matters. :/
>
> See the scenario from:
>
> https://lkml.kernel.org/r/20161212124228.GE3124@twins.programming.kicks-ass.net
>
> Its installing the first event on 't', which concurrently with the
> install gets migrated to a third CPU.
I was completely failing to consider that this was the installation of
the first event; I should have read the existing comment. Things make a
lot more sense now.
> CPU0                        CPU1                        CPU2
>
>                             (current == t)
>
> t->perf_event_ctxp[] = ctx;
> smp_mb();
> cpu = task_cpu(t);
>
>                             switch(t, n);
>                                                         migrate(t, 2);
>                                                         switch(p, t);
>
>                                                         ctx = t->perf_event_ctxp[]; // must not be NULL
>
> smp_function_call(cpu, ..);
>
>                             generic_exec_single()
>                               func();
>                                 spin_lock(ctx->lock);
>                                 if (task_curr(t)) // false
>
>                                 add_event_to_ctx();
>                                 spin_unlock(ctx->lock);
>
>                                                         perf_event_context_sched_in();
>                                                           spin_lock(ctx->lock);
>                                                           // sees event
>
>
>
> So it's CPU0's store of t->perf_event_ctxp[] that must not go 'missing'.
> Because if CPU2's load of that variable were to observe NULL, it would
> not try to schedule the ctx and we'd have a task running without its
> counter, which would be 'bad'.
>
> As long as we observe !NULL, we'll acquire ctx->lock. If we acquire it
> first and do not see the event yet, then CPU0 must observe task_curr()
> and retry. If the install happens first, then we must see the event on
> sched-in and all is well.
I think I follow now. Thanks for bearing with me!
> In any case, I'll try and write a proper Changelog for this...
If it's just the commit message and/or comments changing, feel free to
add:
Tested-by: Mark Rutland <mark.rutland@arm.com>
Thanks,
Mark.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Perf hotplug lockup in v4.9-rc8
2017-01-11 16:03 ` Peter Zijlstra
2017-01-11 16:26 ` Mark Rutland
@ 2017-01-11 19:51 ` Peter Zijlstra
1 sibling, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2017-01-11 19:51 UTC (permalink / raw)
To: Mark Rutland
Cc: linux-kernel, Ingo Molnar, Arnaldo Carvalho de Melo,
Thomas Gleixner, Sebastian Andrzej Siewior, jeremy.linton,
Will Deacon
On Wed, Jan 11, 2017 at 05:03:58PM +0100, Peter Zijlstra wrote:
>
> In any case, I'll try and write a proper Changelog for this...
This is what I came up with; most of it should look familiar, as it's
copy/pasted bits from this thread.
---
Subject: perf: Fix sys_perf_event_open() vs hotplug
From: Peter Zijlstra <peterz@infradead.org>
Date: Fri, 9 Dec 2016 14:59:00 +0100
There is a problem with installing an event in a task that is 'stuck'
on an offline CPU.
Blocked tasks are not disassociated from offlined CPUs; after all, a
blocked task doesn't run and doesn't require a CPU. Only on wakeup do
we amend the situation and place the task on an available CPU.
If we hit such a task with perf_install_in_context() we'll loop until
either that task wakes up or the CPU comes back online; if the task's
waking depends on the event being installed, we're stuck.
While looking into this issue, I also spotted another problem: if we
hit a task with perf_install_in_context() that is in the middle of
being migrated (that is, we observe the old CPU before sending the IPI,
but run the IPI on the old CPU while the task is already running on
the new CPU), things also go sideways.
Rework things to rely on task_curr() -- outside of rq->lock -- which
is rather tricky. Imagine the following scenario where we're trying to
install the first event into our task 't':
CPU0 CPU1 CPU2
(current == t)
t->perf_event_ctxp[] = ctx;
smp_mb();
cpu = task_cpu(t);
switch(t, n);
migrate(t, 2);
switch(p, t);
ctx = t->perf_event_ctxp[]; // must not be NULL
smp_function_call(cpu, ..);
generic_exec_single()
func();
spin_lock(ctx->lock);
if (task_curr(t)) // false
add_event_to_ctx();
spin_unlock(ctx->lock);
perf_event_context_sched_in();
spin_lock(ctx->lock);
// sees event
So it's CPU0's store of t->perf_event_ctxp[] that must not go 'missing'.
Because if CPU2's load of that variable were to observe NULL, it would
not try to schedule the ctx and we'd have a task running without its
counter, which would be 'bad'.
As long as we observe !NULL, we'll acquire ctx->lock. If we acquire it
first and not see the event yet, then CPU0 must observe task_curr()
and retry. If the install happens first, then we must see the event on
sched-in and all is well.
I think we can translate the first part (until the 'must not be NULL')
of the scenario to a litmus test like:
C C-peterz
{
}
P0(int *x, int *y)
{
int r1;
WRITE_ONCE(*x, 1);
smp_mb();
r1 = READ_ONCE(*y);
}
P1(int *y, int *z)
{
WRITE_ONCE(*y, 1);
smp_store_release(z, 1);
}
P2(int *x, int *z)
{
int r1;
int r2;
r1 = smp_load_acquire(z);
smp_mb();
r2 = READ_ONCE(*x);
}
exists
(0:r1=0 /\ 2:r1=1 /\ 2:r2=0)
Where:
x is perf_event_ctxp[],
y is our task's CPU, and
z is our task being placed on the rq of CPU2.
The P0 smp_mb() is the one added by this patch, ordering the store to
perf_event_ctxp[] from find_get_context() and the load of task_cpu()
in task_function_call().
The smp_store_release/smp_load_acquire model the RCpc locking of the
rq->lock and the smp_mb() of P2 is the context switch switching from
whatever CPU2 was running to our task 't'.
This litmus test evaluates into:
Test C-peterz Allowed
States 7
0:r1=0; 2:r1=0; 2:r2=0;
0:r1=0; 2:r1=0; 2:r2=1;
0:r1=0; 2:r1=1; 2:r2=1;
0:r1=1; 2:r1=0; 2:r2=0;
0:r1=1; 2:r1=0; 2:r2=1;
0:r1=1; 2:r1=1; 2:r2=0;
0:r1=1; 2:r1=1; 2:r2=1;
No
Witnesses
Positive: 0 Negative: 7
Condition exists (0:r1=0 /\ 2:r1=1 /\ 2:r2=0)
Observation C-peterz Never 0 7
Hash=e427f41d9146b2a5445101d3e2fcaa34
And the strong and weak model agree.
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: jeremy.linton@arm.com
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reported-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20161209135900.GU3174@twins.programming.kicks-ass.net
---
kernel/events/core.c | 70 ++++++++++++++++++++++++++++++++++-----------------
1 file changed, 48 insertions(+), 22 deletions(-)
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2249,7 +2249,7 @@ static int __perf_install_in_context(vo
struct perf_event_context *ctx = event->ctx;
struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
struct perf_event_context *task_ctx = cpuctx->task_ctx;
- bool activate = true;
+ bool reprogram = true;
int ret = 0;
raw_spin_lock(&cpuctx->ctx.lock);
@@ -2257,27 +2257,26 @@ static int __perf_install_in_context(vo
raw_spin_lock(&ctx->lock);
task_ctx = ctx;
- /* If we're on the wrong CPU, try again */
- if (task_cpu(ctx->task) != smp_processor_id()) {
- ret = -ESRCH;
- goto unlock;
- }
+ reprogram = (ctx->task == current);
/*
- * If we're on the right CPU, see if the task we target is
- * current, if not we don't have to activate the ctx, a future
- * context switch will do that for us.
+ * If the task is running, it must be running on this CPU,
+ * otherwise we cannot reprogram things.
+ *
+ * If its not running, we don't care, ctx->lock will
+ * serialize against it becoming runnable.
*/
- if (ctx->task != current)
- activate = false;
- else
- WARN_ON_ONCE(cpuctx->task_ctx && cpuctx->task_ctx != ctx);
+ if (task_curr(ctx->task) && !reprogram) {
+ ret = -ESRCH;
+ goto unlock;
+ }
+ WARN_ON_ONCE(reprogram && cpuctx->task_ctx && cpuctx->task_ctx != ctx);
} else if (task_ctx) {
raw_spin_lock(&task_ctx->lock);
}
- if (activate) {
+ if (reprogram) {
ctx_sched_out(ctx, cpuctx, EVENT_TIME);
add_event_to_ctx(event, ctx);
ctx_resched(cpuctx, task_ctx);
@@ -2328,13 +2327,36 @@ perf_install_in_context(struct perf_even
/*
* Installing events is tricky because we cannot rely on ctx->is_active
* to be set in case this is the nr_events 0 -> 1 transition.
+ *
+ * Instead we use task_curr(), which tells us if the task is running.
+ * However, since we use task_curr() outside of rq::lock, we can race
+ * against the actual state. This means the result can be wrong.
+ *
+ * If we get a false positive, we retry, this is harmless.
+ *
+ * If we get a false negative, things are complicated. If we are after
+ * perf_event_context_sched_in() ctx::lock will serialize us, and the
+ * value must be correct. If we're before, it doesn't matter since
+ * perf_event_context_sched_in() will program the counter.
+ *
+ * However, this hinges on the remote context switch having observed
+ * our task->perf_event_ctxp[] store, such that it will in fact take
+ * ctx::lock in perf_event_context_sched_in().
+ *
+ * We do this by task_function_call(), if the IPI fails to hit the task
+ * we know any future context switch of task must see the
+ * perf_event_ctpx[] store.
*/
-again:
+
/*
- * Cannot use task_function_call() because we need to run on the task's
- * CPU regardless of whether its current or not.
+ * This smp_mb() orders the task->perf_event_ctxp[] store with the
+ * task_cpu() load, such that if the IPI then does not find the task
+ * running, a future context switch of that task must observe the
+ * store.
*/
- if (!cpu_function_call(task_cpu(task), __perf_install_in_context, event))
+ smp_mb();
+again:
+ if (!task_function_call(task, __perf_install_in_context, event))
return;
raw_spin_lock_irq(&ctx->lock);
@@ -2348,12 +2370,16 @@ perf_install_in_context(struct perf_even
raw_spin_unlock_irq(&ctx->lock);
return;
}
- raw_spin_unlock_irq(&ctx->lock);
/*
- * Since !ctx->is_active doesn't mean anything, we must IPI
- * unconditionally.
+ * If the task is not running, ctx->lock will avoid it becoming so,
+ * thus we can safely install the event.
*/
- goto again;
+ if (task_curr(task)) {
+ raw_spin_unlock_irq(&ctx->lock);
+ goto again;
+ }
+ add_event_to_ctx(event, ctx);
+ raw_spin_unlock_irq(&ctx->lock);
}
/*
* [tip:perf/urgent] perf/core: Fix sys_perf_event_open() vs. hotplug
2016-12-09 13:59 ` Peter Zijlstra
2016-12-12 11:46 ` Will Deacon
2017-01-11 14:59 ` Mark Rutland
@ 2017-01-14 12:28 ` tip-bot for Peter Zijlstra
2 siblings, 0 replies; 17+ messages in thread
From: tip-bot for Peter Zijlstra @ 2017-01-14 12:28 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, hpa, peterz, jolsa, tglx, mingo, acme,
mark.rutland, will.deacon, eranian, alexander.shishkin, torvalds,
bigeasy, vincent.weaver, acme
Commit-ID: 63cae12bce9861cec309798d34701cf3da20bc71
Gitweb: http://git.kernel.org/tip/63cae12bce9861cec309798d34701cf3da20bc71
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Fri, 9 Dec 2016 14:59:00 +0100
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Sat, 14 Jan 2017 10:56:10 +0100
perf/core: Fix sys_perf_event_open() vs. hotplug
There is a problem with installing an event in a task that is 'stuck'
on an offline CPU.
Blocked tasks are not disassociated from offlined CPUs; after all, a
blocked task doesn't run and doesn't require a CPU. Only on wakeup do
we amend the situation and place the task on an available CPU.
If we hit such a task with perf_install_in_context() we'll loop until
either that task wakes up or the CPU comes back online; if the task's
waking depends on the event being installed, we're stuck.
While looking into this issue, I also spotted another problem: if we
hit a task with perf_install_in_context() that is in the middle of
being migrated (that is, we observe the old CPU before sending the IPI,
but run the IPI on the old CPU while the task is already running on
the new CPU), things also go sideways.
Rework things to rely on task_curr() -- outside of rq->lock -- which
is rather tricky. Imagine the following scenario where we're trying to
install the first event into our task 't':
CPU0 CPU1 CPU2
(current == t)
t->perf_event_ctxp[] = ctx;
smp_mb();
cpu = task_cpu(t);
switch(t, n);
migrate(t, 2);
switch(p, t);
ctx = t->perf_event_ctxp[]; // must not be NULL
smp_function_call(cpu, ..);
generic_exec_single()
func();
spin_lock(ctx->lock);
if (task_curr(t)) // false
add_event_to_ctx();
spin_unlock(ctx->lock);
perf_event_context_sched_in();
spin_lock(ctx->lock);
// sees event
So it's CPU0's store of t->perf_event_ctxp[] that must not go 'missing'.
Because if CPU2's load of that variable were to observe NULL, it would
not try to schedule the ctx and we'd have a task running without its
counter, which would be 'bad'.
As long as we observe !NULL, we'll acquire ctx->lock. If we acquire it
first and not see the event yet, then CPU0 must observe task_curr()
and retry. If the install happens first, then we must see the event on
sched-in and all is well.
I think we can translate the first part (until the 'must not be NULL')
of the scenario to a litmus test like:
C C-peterz
{
}
P0(int *x, int *y)
{
int r1;
WRITE_ONCE(*x, 1);
smp_mb();
r1 = READ_ONCE(*y);
}
P1(int *y, int *z)
{
WRITE_ONCE(*y, 1);
smp_store_release(z, 1);
}
P2(int *x, int *z)
{
int r1;
int r2;
r1 = smp_load_acquire(z);
smp_mb();
r2 = READ_ONCE(*x);
}
exists
(0:r1=0 /\ 2:r1=1 /\ 2:r2=0)
Where:
x is perf_event_ctxp[],
y is our task's CPU, and
z is our task being placed on the rq of CPU2.
The P0 smp_mb() is the one added by this patch, ordering the store to
perf_event_ctxp[] from find_get_context() and the load of task_cpu()
in task_function_call().
The smp_store_release/smp_load_acquire model the RCpc locking of the
rq->lock and the smp_mb() of P2 is the context switch switching from
whatever CPU2 was running to our task 't'.
This litmus test evaluates into:
Test C-peterz Allowed
States 7
0:r1=0; 2:r1=0; 2:r2=0;
0:r1=0; 2:r1=0; 2:r2=1;
0:r1=0; 2:r1=1; 2:r2=1;
0:r1=1; 2:r1=0; 2:r2=0;
0:r1=1; 2:r1=0; 2:r2=1;
0:r1=1; 2:r1=1; 2:r2=0;
0:r1=1; 2:r1=1; 2:r2=1;
No
Witnesses
Positive: 0 Negative: 7
Condition exists (0:r1=0 /\ 2:r1=1 /\ 2:r2=0)
Observation C-peterz Never 0 7
Hash=e427f41d9146b2a5445101d3e2fcaa34
And the strong and weak model agree.
Reported-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Will Deacon <will.deacon@arm.com>
Cc: jeremy.linton@arm.com
Link: http://lkml.kernel.org/r/20161209135900.GU3174@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/events/core.c | 70 +++++++++++++++++++++++++++++++++++-----------------
1 file changed, 48 insertions(+), 22 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index ab15509..72ce7d6 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2249,7 +2249,7 @@ static int __perf_install_in_context(void *info)
struct perf_event_context *ctx = event->ctx;
struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
struct perf_event_context *task_ctx = cpuctx->task_ctx;
- bool activate = true;
+ bool reprogram = true;
int ret = 0;
raw_spin_lock(&cpuctx->ctx.lock);
@@ -2257,27 +2257,26 @@ static int __perf_install_in_context(void *info)
raw_spin_lock(&ctx->lock);
task_ctx = ctx;
- /* If we're on the wrong CPU, try again */
- if (task_cpu(ctx->task) != smp_processor_id()) {
- ret = -ESRCH;
- goto unlock;
- }
+ reprogram = (ctx->task == current);
/*
- * If we're on the right CPU, see if the task we target is
- * current, if not we don't have to activate the ctx, a future
- * context switch will do that for us.
+ * If the task is running, it must be running on this CPU,
+ * otherwise we cannot reprogram things.
+ *
+ * If its not running, we don't care, ctx->lock will
+ * serialize against it becoming runnable.
*/
- if (ctx->task != current)
- activate = false;
- else
- WARN_ON_ONCE(cpuctx->task_ctx && cpuctx->task_ctx != ctx);
+ if (task_curr(ctx->task) && !reprogram) {
+ ret = -ESRCH;
+ goto unlock;
+ }
+ WARN_ON_ONCE(reprogram && cpuctx->task_ctx && cpuctx->task_ctx != ctx);
} else if (task_ctx) {
raw_spin_lock(&task_ctx->lock);
}
- if (activate) {
+ if (reprogram) {
ctx_sched_out(ctx, cpuctx, EVENT_TIME);
add_event_to_ctx(event, ctx);
ctx_resched(cpuctx, task_ctx);
@@ -2328,13 +2327,36 @@ perf_install_in_context(struct perf_event_context *ctx,
/*
* Installing events is tricky because we cannot rely on ctx->is_active
* to be set in case this is the nr_events 0 -> 1 transition.
+ *
+ * Instead we use task_curr(), which tells us if the task is running.
+ * However, since we use task_curr() outside of rq::lock, we can race
+ * against the actual state. This means the result can be wrong.
+ *
+ * If we get a false positive, we retry, this is harmless.
+ *
+ * If we get a false negative, things are complicated. If we are after
+ * perf_event_context_sched_in() ctx::lock will serialize us, and the
+ * value must be correct. If we're before, it doesn't matter since
+ * perf_event_context_sched_in() will program the counter.
+ *
+ * However, this hinges on the remote context switch having observed
+ * our task->perf_event_ctxp[] store, such that it will in fact take
+ * ctx::lock in perf_event_context_sched_in().
+ *
+ * We do this by task_function_call(), if the IPI fails to hit the task
+ * we know any future context switch of task must see the
+ * perf_event_ctpx[] store.
*/
-again:
+
/*
- * Cannot use task_function_call() because we need to run on the task's
- * CPU regardless of whether its current or not.
+ * This smp_mb() orders the task->perf_event_ctxp[] store with the
+ * task_cpu() load, such that if the IPI then does not find the task
+ * running, a future context switch of that task must observe the
+ * store.
*/
- if (!cpu_function_call(task_cpu(task), __perf_install_in_context, event))
+ smp_mb();
+again:
+ if (!task_function_call(task, __perf_install_in_context, event))
return;
raw_spin_lock_irq(&ctx->lock);
@@ -2348,12 +2370,16 @@ again:
raw_spin_unlock_irq(&ctx->lock);
return;
}
- raw_spin_unlock_irq(&ctx->lock);
/*
- * Since !ctx->is_active doesn't mean anything, we must IPI
- * unconditionally.
+ * If the task is not running, ctx->lock will avoid it becoming so,
+ * thus we can safely install the event.
*/
- goto again;
+ if (task_curr(task)) {
+ raw_spin_unlock_irq(&ctx->lock);
+ goto again;
+ }
+ add_event_to_ctx(event, ctx);
+ raw_spin_unlock_irq(&ctx->lock);
}
/*