Date: Thu, 9 Jan 2020 13:24:43 +0100
From: Jiri Olsa
To: Jann Horn
Cc: Andi Kleen, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
    Mark Rutland, Alexander Shishkin, Namhyung Kim, kernel list
Subject: Re: "perf ftrace" segfault because ->cpus!=NULL but ->all_cpus==NULL
Message-ID: <20200109122443.GF52936@krava>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jan 09, 2020 at 12:37:14PM +0100, Jann Horn wrote:
> I was clumsily trying to use "perf ftrace" from git master (I might
> very well be using it wrong), and it's falling over with a NULL deref.
> I don't really understand the perf code, but it looks to me like it
> might be related to Andi Kleen's refactoring that introduced
> evlist->all_cpus?
>
> I think the problem is that evlist__close() assumes that ->cpus!=NULL
> implies ->all_cpus!=NULL, but perf_evlist__propagate_maps() doesn't
> set ->all_cpus if the evlist is empty.
>
> Here's the crash I encountered:
>
> root@vm:~# /linux-llvm/tools/perf/perf ftrace -a -T kmem_cache_alloc
> failed to reset ftrace
> Segmentation fault (core dumped)
> root@vm:~# gdb /linux-llvm/tools/perf/perf core
> [...]
> Core was generated by `/linux-llvm/tools/perf/perf ftrace -a -T
> kmem_cache_alloc'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  perf_cpu_map__cpu (cpus=0x0, idx=idx@entry=0) at cpumap.c:250
> [...]
> (gdb) list
> 245             return cpus;
> 246     }
> 247
> 248     int perf_cpu_map__cpu(const struct perf_cpu_map *cpus, int idx)
> 249     {
> 250             if (idx < cpus->nr)
> 251                     return cpus->map[idx];
> 252
> 253             return -1;
> 254     }
> (gdb) print cpus
> $1 = (const struct perf_cpu_map *) 0x0
> (gdb) bt
> #0  perf_cpu_map__cpu (cpus=0x0, idx=idx@entry=0) at cpumap.c:250
> #1  0x00005579cbf38731 in evlist__close
> (evlist=evlist@entry=0x5579cd1c22c0) at util/evlist.c:1222
> #2  0x00005579cbf388f6 in evlist__delete (evlist=0x5579cd1c22c0) at
> util/evlist.c:152
> #3  0x00005579cbf389c1 in evlist__delete (evlist=) at
> util/evlist.c:148
> #4  0x00005579cbe9e3b5 in cmd_ftrace (argc=0, argv=0x7ffe818a16e0) at
> builtin-ftrace.c:520
> #5  0x00005579cbf2287d in run_builtin (p=0x5579cc2a9740
> , argc=4, argv=0x7ffe818a16e0) at perf.c:312
> #6  0x00005579cbe96baa in handle_internal_command
> (argv=0x7ffe818a16e0, argc=4) at perf.c:364
> #7  run_argv (argcp=, argv=) at perf.c:408
> #8  main (argc=4, argv=0x7ffe818a16e0) at perf.c:538
> (gdb) frame 1
> #1  0x00005579cbf38731 in evlist__close
> (evlist=evlist@entry=0x5579cd1c22c0) at util/evlist.c:1222
> 1222            evlist__for_each_cpu(evlist, i, cpu) {
> (gdb) print evlist
> $2 = (struct evlist *) 0x5579cd1c22c0
> (gdb) print *evlist
> $3 = {core = {entries = {next = 0x5579cd1c22c0, prev =
> 0x5579cd1c22c0}, nr_entries = 0, has_user_cpus = false, cpus =
> 0x5579cd1b8fe0, all_cpus = 0x0, threads = 0x5579cd1c2d60,
> nr_mmaps = 0, mmap_len = 0, pollfd = {nr = 0, nr_alloc = 0,
> nr_autogrow = 64, entries = 0x0, priv = 0x0}, heads = {{first = 0x0}
> }, mmap = 0x0, mmap_ovw = 0x0,
> mmap_first = 0x0, mmap_ovw_first = 0x0}, nr_groups = 0, enabled =
> false, id_pos = 0, is_pos = 0, combined_sample_type = 0,
> bkw_mmap_state = BKW_MMAP_NOTREADY, workload = {cork_fd = 0,
> pid = -1}, mmap = 0x0, overwrite_mmap = 0x0, selected = 0x0, stats
> = {total_period = 0, total_non_filtered_period = 0, total_lost = 0,
> total_lost_samples = 0, total_aux_lost = 0, total_aux_partial =
> 0, total_invalid_chains = 0, nr_events = {0
> }, nr_non_filtered_samples = 0, nr_lost_warned = 0,
> nr_unknown_events = 0, nr_invalid_chains = 0,
> nr_unknown_id = 0, nr_unprocessable_samples = 0,
> nr_auxtrace_errors = {0, 0}, nr_proc_map_timeout = 0}, env = 0x0,
> trace_event_sample_raw = 0x0, first_sample_time = 0,
> last_sample_time = 0, thread = {th = 0, done = 0}}
> (gdb)

oops, ftrace is using an evlist without events ;-) so all_cpus
will not get initialized..

Andi, should we initialize evlist::all_cpus with the first cpus
we get, like in the patch below?

thanks,
jirka


---
diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c
index ae9e65aa2491..5b9f2ca50591 100644
--- a/tools/lib/perf/evlist.c
+++ b/tools/lib/perf/evlist.c
@@ -164,6 +164,9 @@ void perf_evlist__set_maps(struct perf_evlist *evlist,
 		evlist->threads = perf_thread_map__get(threads);
 	}
 
+	if (!evlist->all_cpus && cpus)
+		evlist->all_cpus = perf_cpu_map__get(cpus);
+
 	perf_evlist__propagate_maps(evlist);
 }