From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 26 May 2023 14:53:49 -0700
In-Reply-To: <20230526215410.2435674-1-irogers@google.com>
Message-Id: <20230526215410.2435674-15-irogers@google.com>
Mime-Version: 1.0
References: <20230526215410.2435674-1-irogers@google.com>
X-Mailer: git-send-email 2.41.0.rc0.172.g3f132b7071-goog
Subject: [PATCH v4 14/35] perf evlist: Remove __evlist__add_default
From: Ian Rogers
To: Suzuki K Poulose, Mike Leach, Leo Yan, John Garry, Will Deacon,
    James Clark, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
    Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers,
    Adrian Hunter, Kajol Jain, Jing Zhang, Kan Liang, Zhengjun Xing,
    Ravi Bangoria, Madhavan Srinivasan, Athira Rajeev, Ming Wang,
    Huacai Chen, Sandipan Das, Dmitrii Dolgov <9erthalion6@gmail.com>,
    Sean Christopherson, Ali Saidi, Rob Herring, Thomas Richter,
    Kang Minchul, linux-kernel@vger.kernel.org, coresight@lists.linaro.org,
    linux-arm-kernel@lists.infradead.org, linux-perf-users@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

__evlist__add_default adds a cycles event to a typically empty evlist
and was extended for hybrid with evlist__add_default_hybrid, as more
than 1 PMU was necessary. Rather than have dedicated logic for the
cycles event, this change switches to parsing 'cycles:P' which will
handle wildcarding the PMUs appropriately for hybrid.

Signed-off-by: Ian Rogers
Reviewed-by: Kan Liang
---
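For illustration only, not part of the change itself: the pattern the
callers below now follow is roughly this sketch. The helper name
add_default_cycles_event() is made up for this note; parse_event() and
perf_event_paranoid_check() are the existing perf tool helpers used in
the hunks that follow.

        /*
         * Build the default cycles event through the event parser instead
         * of hand-crafting a perf_event_attr. "cycles:P" asks for the most
         * precise cycles available; the "u" modifier restricts sampling to
         * user space when kernel profiling is not permitted.
         */
        static int add_default_cycles_event(struct evlist *evlist)
        {
                bool can_profile_kernel = perf_event_paranoid_check(1);

                return parse_event(evlist, can_profile_kernel ? "cycles:P" : "cycles:Pu");
        }

On hybrid systems the parser wildcards "cycles" across the hybrid core
PMUs, so no per-PMU loop is needed any more.
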
"cycles:P" : "cycles:Pu"); + if (err) goto out; - } } if (rec->opts.target.tid && !rec->opts.no_inherit_set) diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c index 48ee49e95c5e..27a7f068207d 100644 --- a/tools/perf/builtin-top.c +++ b/tools/perf/builtin-top.c @@ -1653,10 +1653,12 @@ int cmd_top(int argc, const char **argv) if (annotate_check_args(&top.annotation_opts) < 0) goto out_delete_evlist; - if (!top.evlist->core.nr_entries && - evlist__add_default(top.evlist) < 0) { - pr_err("Not enough memory for event selector list\n"); - goto out_delete_evlist; + if (!top.evlist->core.nr_entries) { + bool can_profile_kernel = perf_event_paranoid_check(1); + int err = parse_event(top.evlist, can_profile_kernel ? "cycles:P" : "cycles:Pu"); + + if (err) + goto out_delete_evlist; } status = evswitch__init(&top.evswitch, top.evlist, stderr); diff --git a/tools/perf/util/evlist-hybrid.c b/tools/perf/util/evlist-hybrid.c index 0f59c80f27b2..64f78d06fe19 100644 --- a/tools/perf/util/evlist-hybrid.c +++ b/tools/perf/util/evlist-hybrid.c @@ -16,31 +16,6 @@ #include #include -int evlist__add_default_hybrid(struct evlist *evlist, bool precise) -{ - struct evsel *evsel; - struct perf_pmu *pmu; - __u64 config; - struct perf_cpu_map *cpus; - - perf_pmu__for_each_hybrid_pmu(pmu) { - config = PERF_COUNT_HW_CPU_CYCLES | - ((__u64)pmu->type << PERF_PMU_TYPE_SHIFT); - evsel = evsel__new_cycles(precise, PERF_TYPE_HARDWARE, - config); - if (!evsel) - return -ENOMEM; - - cpus = perf_cpu_map__get(pmu->cpus); - evsel->core.cpus = cpus; - evsel->core.own_cpus = perf_cpu_map__get(cpus); - evsel->pmu_name = strdup(pmu->name); - evlist__add(evlist, evsel); - } - - return 0; -} - bool evlist__has_hybrid(struct evlist *evlist) { struct evsel *evsel; diff --git a/tools/perf/util/evlist-hybrid.h b/tools/perf/util/evlist-hybrid.h index 4b000eda6626..0cded76eb344 100644 --- a/tools/perf/util/evlist-hybrid.h +++ b/tools/perf/util/evlist-hybrid.h @@ -7,7 +7,6 @@ #include "evlist.h" #include -int evlist__add_default_hybrid(struct evlist *evlist, bool precise); bool evlist__has_hybrid(struct evlist *evlist); #endif /* __PERF_EVLIST_HYBRID_H */ diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c index 9dfa977193b3..63f8821a5395 100644 --- a/tools/perf/util/evlist.c +++ b/tools/perf/util/evlist.c @@ -93,8 +93,15 @@ struct evlist *evlist__new(void) struct evlist *evlist__new_default(void) { struct evlist *evlist = evlist__new(); + bool can_profile_kernel; + int err; + + if (!evlist) + return NULL; - if (evlist && evlist__add_default(evlist)) { + can_profile_kernel = perf_event_paranoid_check(1); + err = parse_event(evlist, can_profile_kernel ? 
"cycles:P" : "cycles:Pu"); + if (err) { evlist__delete(evlist); evlist = NULL; } @@ -237,19 +244,6 @@ static void evlist__set_leader(struct evlist *evlist) perf_evlist__set_leader(&evlist->core); } -int __evlist__add_default(struct evlist *evlist, bool precise) -{ - struct evsel *evsel; - - evsel = evsel__new_cycles(precise, PERF_TYPE_HARDWARE, - PERF_COUNT_HW_CPU_CYCLES); - if (evsel == NULL) - return -ENOMEM; - - evlist__add(evlist, evsel); - return 0; -} - static struct evsel *evlist__dummy_event(struct evlist *evlist) { struct perf_event_attr attr = { diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h index 5e7ff44f3043..664c6bf7b3e0 100644 --- a/tools/perf/util/evlist.h +++ b/tools/perf/util/evlist.h @@ -100,13 +100,6 @@ void evlist__delete(struct evlist *evlist); void evlist__add(struct evlist *evlist, struct evsel *entry); void evlist__remove(struct evlist *evlist, struct evsel *evsel); -int __evlist__add_default(struct evlist *evlist, bool precise); - -static inline int evlist__add_default(struct evlist *evlist) -{ - return __evlist__add_default(evlist, true); -} - int evlist__add_attrs(struct evlist *evlist, struct perf_event_attr *attrs, size_t nr_attrs); int __evlist__add_default_attrs(struct evlist *evlist, diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c index 8c8f371ea2b5..1df8f967d2eb 100644 --- a/tools/perf/util/evsel.c +++ b/tools/perf/util/evsel.c @@ -316,48 +316,6 @@ struct evsel *evsel__new_idx(struct perf_event_attr *attr, int idx) return evsel; } -static bool perf_event_can_profile_kernel(void) -{ - return perf_event_paranoid_check(1); -} - -struct evsel *evsel__new_cycles(bool precise __maybe_unused, __u32 type, __u64 config) -{ - struct perf_event_attr attr = { - .type = type, - .config = config, - .exclude_kernel = !perf_event_can_profile_kernel(), - }; - struct evsel *evsel; - - event_attr_init(&attr); - - /* - * Now let the usual logic to set up the perf_event_attr defaults - * to kick in when we return and before perf_evsel__open() is called. - */ - evsel = evsel__new(&attr); - if (evsel == NULL) - goto out; - - arch_evsel__fixup_new_cycles(&evsel->core.attr); - - evsel->precise_max = true; - - /* use asprintf() because free(evsel) assumes name is allocated */ - if (asprintf(&evsel->name, "cycles%s%s%.*s", - (attr.precise_ip || attr.exclude_kernel) ? ":" : "", - attr.exclude_kernel ? "u" : "", - attr.precise_ip ? 
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 8c8f371ea2b5..1df8f967d2eb 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -316,48 +316,6 @@ struct evsel *evsel__new_idx(struct perf_event_attr *attr, int idx)
         return evsel;
 }
 
-static bool perf_event_can_profile_kernel(void)
-{
-        return perf_event_paranoid_check(1);
-}
-
-struct evsel *evsel__new_cycles(bool precise __maybe_unused, __u32 type, __u64 config)
-{
-        struct perf_event_attr attr = {
-                .type = type,
-                .config = config,
-                .exclude_kernel = !perf_event_can_profile_kernel(),
-        };
-        struct evsel *evsel;
-
-        event_attr_init(&attr);
-
-        /*
-         * Now let the usual logic to set up the perf_event_attr defaults
-         * to kick in when we return and before perf_evsel__open() is called.
-         */
-        evsel = evsel__new(&attr);
-        if (evsel == NULL)
-                goto out;
-
-        arch_evsel__fixup_new_cycles(&evsel->core.attr);
-
-        evsel->precise_max = true;
-
-        /* use asprintf() because free(evsel) assumes name is allocated */
-        if (asprintf(&evsel->name, "cycles%s%s%.*s",
-                     (attr.precise_ip || attr.exclude_kernel) ? ":" : "",
-                     attr.exclude_kernel ? "u" : "",
-                     attr.precise_ip ? attr.precise_ip + 1 : 0, "ppp") < 0)
-                goto error_free;
-out:
-        return evsel;
-error_free:
-        evsel__delete(evsel);
-        evsel = NULL;
-        goto out;
-}
-
 int copy_config_terms(struct list_head *dst, struct list_head *src)
 {
         struct evsel_config_term *pos, *tmp;
@@ -1131,10 +1089,6 @@ void __weak arch_evsel__set_sample_weight(struct evsel *evsel)
         evsel__set_sample_bit(evsel, WEIGHT);
 }
 
-void __weak arch_evsel__fixup_new_cycles(struct perf_event_attr *attr __maybe_unused)
-{
-}
-
 void __weak arch__post_evsel_config(struct evsel *evsel __maybe_unused,
                                     struct perf_event_attr *attr __maybe_unused)
 {
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index df8928745fc6..429b172cc94d 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -243,8 +243,6 @@ static inline struct evsel *evsel__newtp(const char *sys, const char *name)
 }
 #endif
 
-struct evsel *evsel__new_cycles(bool precise, __u32 type, __u64 config);
-
 #ifdef HAVE_LIBTRACEEVENT
 struct tep_event *event_format__new(const char *sys, const char *name);
 #endif
@@ -312,7 +310,6 @@ void __evsel__reset_sample_bit(struct evsel *evsel, enum perf_event_sample_forma
 void evsel__set_sample_id(struct evsel *evsel, bool use_sample_identifier);
 
 void arch_evsel__set_sample_weight(struct evsel *evsel);
-void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr);
 void arch__post_evsel_config(struct evsel *evsel, struct perf_event_attr *attr);
 
 int evsel__set_filter(struct evsel *evsel, const char *filter);
-- 
2.41.0.rc0.172.g3f132b7071-goog