From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S938522AbdAKQEN (ORCPT );
	Wed, 11 Jan 2017 11:04:13 -0500
Received: from merlin.infradead.org ([205.233.59.134]:38762 "EHLO
	merlin.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S938503AbdAKQEL (ORCPT );
	Wed, 11 Jan 2017 11:04:11 -0500
Date: Wed, 11 Jan 2017 17:03:58 +0100
From: Peter Zijlstra
To: Mark Rutland
Cc: linux-kernel@vger.kernel.org, Ingo Molnar,
	Arnaldo Carvalho de Melo, Thomas Gleixner,
	Sebastian Andrzej Siewior, jeremy.linton@arm.com,
	Will Deacon
Subject: Re: Perf hotplug lockup in v4.9-rc8
Message-ID: <20170111160358.GA3081@twins.programming.kicks-ass.net>
References: <20161207135217.GA25605@leverpostej>
 <20161207175347.GB13840@leverpostej>
 <20161207183455.GQ3124@twins.programming.kicks-ass.net>
 <20161209135900.GU3174@twins.programming.kicks-ass.net>
 <20170111145920.GB26344@leverpostej>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170111145920.GB26344@leverpostej>
User-Agent: Mutt/1.5.23.1 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jan 11, 2017 at 02:59:20PM +0000, Mark Rutland wrote:
> Hi Peter,
>
> Sorry for the delay; this fell into my backlog over the holiday.
>
> On Fri, Dec 09, 2016 at 02:59:00PM +0100, Peter Zijlstra wrote:
> > So while I went back and forth trying to make that less ugly, I figured
> > there was another problem.
> >
> > Imagine the cpu_function_call() hitting the 'right' cpu, but not finding
> > the task current. It will then continue to install the event in the
> > context. However, that doesn't stop another CPU from pulling the task in
> > question from our rq and scheduling it elsewhere.
> >
> > This all lead me to the below patch.. Now it has a rather large comment,
> > and while it represents my current thinking on the matter, I'm not at
> > all sure its entirely correct. I got my brain in a fair twist while
> > writing it.
> >
> > Please as to carefully think about it.
>
> FWIW, I've given the below a spin on a few systems, and with it applied
> my reproducer no longer triggers the issue.
>
> Unfortunately, most of the ordering concerns have gone over my head. :/
>
> > @@ -2331,13 +2330,36 @@ perf_install_in_context(struct perf_event_context *ctx,
> >  	/*
> >  	 * Installing events is tricky because we cannot rely on ctx->is_active
> >  	 * to be set in case this is the nr_events 0 -> 1 transition.
> > +	 *
> > +	 * Instead we use task_curr(), which tells us if the task is running.
> > +	 * However, since we use task_curr() outside of rq::lock, we can race
> > +	 * against the actual state. This means the result can be wrong.
> > +	 *
> > +	 * If we get a false positive, we retry, this is harmless.
> > +	 *
> > +	 * If we get a false negative, things are complicated. If we are after
> > +	 * perf_event_context_sched_in() ctx::lock will serialize us, and the
> > +	 * value must be correct. If we're before, it doesn't matter since
> > +	 * perf_event_context_sched_in() will program the counter.
> > +	 *
> > +	 * However, this hinges on the remote context switch having observed
> > +	 * our task->perf_event_ctxp[] store, such that it will in fact take
> > +	 * ctx::lock in perf_event_context_sched_in().
>
> Sorry if I'm being thick here, but which store are we describing above?
> i.e. which function, how does that relate to perf_install_in_context()?
The only store to perf_event_ctxp[] of interest is the initial one in
find_get_context().

> I haven't managed to wrap my head around why this matters. :/

See the scenario from:

  https://lkml.kernel.org/r/20161212124228.GE3124@twins.programming.kicks-ass.net

It's installing the first event on 't', which, concurrently with the
install, gets migrated to a third CPU.


        CPU0                    CPU1                    CPU2

                                (current == t)

        t->perf_event_ctxp[] = ctx;
        smp_mb();
        cpu = task_cpu(t);

                                switch(t, n);
                                                        migrate(t, 2);
                                switch(p, t);

                                                        ctx = t->perf_event_ctxp[]; // must not be NULL

        smp_function_call(cpu, ..);

                                generic_exec_single()
                                  func();
                                    spin_lock(ctx->lock);
                                    if (task_curr(t)) // false

                                    add_event_to_ctx();
                                    spin_unlock(ctx->lock);

                                                        perf_event_context_sched_in();
                                                          spin_lock(ctx->lock);
                                                          // sees event


So it's CPU0's store of t->perf_event_ctxp[] that must not go 'missing'.
Because if CPU2's load of that variable were to observe NULL, it would
not try to schedule the ctx and we'd have a task running without its
counter, which would be 'bad'.

As long as we observe !NULL, we'll acquire ctx->lock. If we acquire it
first and not see the event yet, then CPU0 must observe task_running()
and retry. If the install happens first, then we must see the event on
sched-in and all is well.

In any case, I'll try and write a proper Changelog for this...
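
To spell the installer side out as code: it boils down to roughly the
below. This is a condensed sketch of the retry flow described above,
not the actual hunk; the cpu-only path and error handling are elided,
and the helpers named are the existing ones in kernel/events/core.c:

	again:
		/* cross call to whatever CPU we think the task runs on */
		if (!task_function_call(task, __perf_install_in_context, event))
			return;	/* IPI found the task current and installed it */

		raw_spin_lock_irq(&ctx->lock);
		if (task_curr(task)) {
			/*
			 * False positive (or the task started running again
			 * elsewhere): harmless, drop the lock and retry the
			 * cross call.
			 */
			raw_spin_unlock_irq(&ctx->lock);
			goto again;
		}
		/*
		 * Not running; since we hold ctx->lock, any concurrent
		 * sched-in of this context must observe the event once
		 * we've added it.
		 */
		add_event_to_ctx(event, ctx);
		raw_spin_unlock_irq(&ctx->lock);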
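
And the remote sched-in side that must observe that ctxp[] store, i.e.
the "// must not be NULL" load in the diagram, is roughly (simplified
from __perf_event_task_sched_in()):

		for_each_task_context_nr(ctxn) {
			ctx = task->perf_event_ctxp[ctxn];
			if (!ctx)	/* observing NULL here is the 'bad' case */
				continue;
			/* takes ctx->lock and programs the events it finds */
			perf_event_context_sched_in(ctx, task);
		}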