Date: Tue, 9 Jan 2018 12:53:46 -0500
From: Steven Rostedt
To: Peter Zijlstra
Cc: Vince Weaver, Ingo Molnar, linux-kernel@vger.kernel.org, Ingo Molnar,
 Arnaldo Carvalho de Melo, Thomas Gleixner
Subject: Re: perf: perf_fuzzer quickly locks up on 4.15-rc7
Message-ID: <20180109125346.17cad4a5@vmware.local.home>
In-Reply-To: <20180109161400.GC2369@hirez.programming.kicks-ass.net>
References: <20180108173005.lkglqrixb2ota6g2@gmail.com>
 <20180109102507.GG6176@hirez.programming.kicks-ass.net>
 <20180109132602.GA2369@hirez.programming.kicks-ass.net>
 <20180109151253.GK6176@hirez.programming.kicks-ass.net>
 <20180109161400.GC2369@hirez.programming.kicks-ass.net>

On Tue, 9 Jan 2018 17:14:00 +0100
Peter Zijlstra wrote:

> On Tue, Jan 09, 2018 at 04:12:53PM +0100, Peter Zijlstra wrote:
> 
> > In any case, I found yet another lockdep splat, trying to figure out wth
> > to do about that.
> 
> And of course, testing this one yields yet another lockdep splat..
> onwards to #3 :/
> 
> ---
> Subject: perf: Fix another perf,trace,cpuhp lock inversion
> From: Peter Zijlstra
> Date: Tue Jan 9 17:07:59 CET 2018
> 
> Lockdep complained about:
> 
>   perf_trace_init()
> #0    mutex_lock(&event_mutex)
>         perf_trace_event_init()
>           perf_trace_event_reg()
>             tp_event->class->reg() := tracepoint_probe_register
> #1            mutex_lock(&tracepoints_mutex)
>                 trace_point_add_func()
> #2                static_key_enable()
> 
> 
> 
> #2  do_cpu_up()
>       perf_event_init_cpu()
> #3      mutex_lock(&pmus_lock)
> #4      mutex_lock(&ctx->mutex)
> 
> 
>   perf_ioctl()
> #4    ctx = perf_event_ctx_lock()
>       _perf_ioctl()
>         ftrace_profile_set_filter()
> #0        mutex_lock(&event_mutex)
> 
> 
> Fudge it for now by noting that the tracepoint state does not depend
> on the event <-> context relation. Ugly though :/
> 

Looking at ftrace_profile_set_filter(), I see it starts with:

	mutex_lock(&event_mutex);

How much of a big deal would it be if we moved taking event_mutex into
perf_ioctl(), and then made ftrace_profile_set_filter() not take
event_mutex itself? That ioctl path is the only place the function is
used. Would that work?
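To make the proposed ordering concrete, here is a tiny userspace sketch
(illustration only, not part of the patch; pthread mutexes and the names
ctx_mutex, set_filter() and ioctl_path() are stand-ins, not kernel code).
The point is just that every path touching both locks would take
event_mutex first and ctx->mutex second, which removes the
ctx->mutex -> event_mutex edge from the cycle above:

	#include <pthread.h>
	#include <stdio.h>

	/* stand-ins for the kernel's event_mutex and perf_event_context::mutex */
	static pthread_mutex_t event_mutex = PTHREAD_MUTEX_INITIALIZER;
	static pthread_mutex_t ctx_mutex   = PTHREAD_MUTEX_INITIALIZER;

	/* analogue of ftrace_profile_set_filter(): caller already holds event_mutex */
	static void set_filter(void)
	{
		printf("filter set while the caller holds event_mutex\n");
	}

	/* analogue of perf_ioctl() with the proposed ordering */
	static void *ioctl_path(void *arg)
	{
		(void)arg;
		pthread_mutex_lock(&event_mutex);   /* outer lock, taken first */
		pthread_mutex_lock(&ctx_mutex);     /* perf_event_ctx_lock() */
		set_filter();
		pthread_mutex_unlock(&ctx_mutex);
		pthread_mutex_unlock(&event_mutex);
		return NULL;
	}

	int main(void)
	{
		pthread_t t;

		pthread_create(&t, NULL, ioctl_path, NULL);
		pthread_join(t, NULL);
		return 0;
	}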
Below is a patch (totally untested, not even compiled).

-- Steve

Signed-off-by: Steven Rostedt (VMware)
---
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4df5b695bf0d..9fac7ac14b32 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4741,9 +4741,11 @@ static long perf_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	struct perf_event_context *ctx;
 	long ret;
 
+	mutex_lock(&event_mutex);
 	ctx = perf_event_ctx_lock(event);
 	ret = _perf_ioctl(event, cmd, arg);
 	perf_event_ctx_unlock(event, ctx);
+	mutex_unlock(&event_mutex);
 
 	return ret;
 }
diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index 61e7f0678d33..46c2e5d20662 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -2256,6 +2256,7 @@ static int ftrace_function_set_filter(struct perf_event *event,
 }
 #endif /* CONFIG_FUNCTION_TRACER */
 
+/* Must have event_mutex held */
 int ftrace_profile_set_filter(struct perf_event *event, int event_id,
 			      char *filter_str)
 {
@@ -2263,17 +2264,14 @@ int ftrace_profile_set_filter(struct perf_event *event, int event_id,
 	struct event_filter *filter;
 	struct trace_event_call *call;
 
-	mutex_lock(&event_mutex);
+	lockdep_assert_held(&event_mutex);
 
 	call = event->tp_event;
-
-	err = -EINVAL;
 	if (!call)
-		goto out_unlock;
+		return -EINVAL;
 
-	err = -EEXIST;
 	if (event->filter)
-		goto out_unlock;
+		return -EEXIST;
 
 	err = create_filter(call, filter_str, false, &filter);
 	if (err)
@@ -2288,9 +2286,6 @@ int ftrace_profile_set_filter(struct perf_event *event, int event_id,
 	if (err || ftrace_event_is_function(call))
 		__free_filter(filter);
 
-out_unlock:
-	mutex_unlock(&event_mutex);
-
 	return err;
 }