From: James Clark <james.clark@arm.com>
To: Ian Rogers <irogers@google.com>, Namhyung Kim <namhyung@kernel.org>
Cc: "Peter Zijlstra" <peterz@infradead.org>,
	"Ingo Molnar" <mingo@redhat.com>,
	"Arnaldo Carvalho de Melo" <acme@kernel.org>,
	"Mark Rutland" <mark.rutland@arm.com>,
	"Alexander Shishkin" <alexander.shishkin@linux.intel.com>,
	"Jiri Olsa" <jolsa@kernel.org>,
	"Adrian Hunter" <adrian.hunter@intel.com>,
	"Suzuki K Poulose" <suzuki.poulose@arm.com>,
	"Mike Leach" <mike.leach@linaro.org>,
	"John Garry" <john.g.garry@oracle.com>,
	"Will Deacon" <will@kernel.org>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Darren Hart" <dvhart@infradead.org>,
	"Davidlohr Bueso" <dave@stgolabs.net>,
	"André Almeida" <andrealmeid@igalia.com>,
	"Kan Liang" <kan.liang@linux.intel.com>,
	"K Prateek Nayak" <kprateek.nayak@amd.com>,
	"Sean Christopherson" <seanjc@google.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Kajol Jain" <kjain@linux.ibm.com>,
	"Athira Rajeev" <atrajeev@linux.vnet.ibm.com>,
	"Andrew Jones" <ajones@ventanamicro.com>,
	"Alexandre Ghiti" <alexghiti@rivosinc.com>,
	"Atish Patra" <atishp@rivosinc.com>,
	"Steinar H. Gunderson" <sesse@google.com>,
	"Yang Jihong" <yangjihong1@huawei.com>,
	"Yang Li" <yang.lee@linux.alibaba.com>,
	"Changbin Du" <changbin.du@huawei.com>,
	"Sandipan Das" <sandipan.das@amd.com>,
	"Ravi Bangoria" <ravi.bangoria@amd.com>,
	"Paran Lee" <p4ranlee@gmail.com>,
	"Nick Desaulniers" <ndesaulniers@google.com>,
	"Huacai Chen" <chenhuacai@kernel.org>,
	"Yanteng Si" <siyanteng@loongson.cn>,
	linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
	coresight@lists.linaro.org, linux-arm-kernel@lists.infradead.org,
	bpf@vger.kernel.org, "Leo Yan" <leo.yan@linux.dev>
Subject: Re: [PATCH v3 3/8] perf arm-spe/cs-etm: Directly iterate CPU maps
Date: Mon, 19 Feb 2024 10:16:51 +0000	[thread overview]
Message-ID: <b8bb3b4c-9774-e4e8-5705-5a6e01f68eb6@arm.com> (raw)
In-Reply-To: <CAP-5=fXzqF--KUOo1awmxDewupF-r_a2=yFC75tuGasNE-WpXg@mail.gmail.com>



On 17/02/2024 01:33, Ian Rogers wrote:
> On Fri, Feb 16, 2024 at 5:02 PM Namhyung Kim <namhyung@kernel.org> wrote:
>>
>> On Fri, Feb 2, 2024 at 3:41 PM Ian Rogers <irogers@google.com> wrote:
>>>
>>> Rather than iterate all CPUs and see if they are in CPU maps, directly
>>> iterate the CPU map. Similarly make use of the intersect function
>>> taking care for when "any" CPU is specified. Switch
>>> perf_cpu_map__has_any_cpu_or_is_empty to more appropriate
>>> alternatives.
>>>
>>> Signed-off-by: Ian Rogers <irogers@google.com>
>>> ---
>>>  tools/perf/arch/arm/util/cs-etm.c    | 114 ++++++++++++---------------
>>>  tools/perf/arch/arm64/util/arm-spe.c |   4 +-
>>>  2 files changed, 51 insertions(+), 67 deletions(-)
>>>
>>> diff --git a/tools/perf/arch/arm/util/cs-etm.c b/tools/perf/arch/arm/util/cs-etm.c
>>> index 77e6663c1703..07be32d99805 100644
>>> --- a/tools/perf/arch/arm/util/cs-etm.c
>>> +++ b/tools/perf/arch/arm/util/cs-etm.c
>>> @@ -197,38 +197,37 @@ static int cs_etm_validate_timestamp(struct auxtrace_record *itr,
>>>  static int cs_etm_validate_config(struct auxtrace_record *itr,
>>>                                   struct evsel *evsel)
>>>  {
>>> -       int i, err = -EINVAL;
>>> +       int idx, err = 0;
>>>         struct perf_cpu_map *event_cpus = evsel->evlist->core.user_requested_cpus;
>>> -       struct perf_cpu_map *online_cpus = perf_cpu_map__new_online_cpus();
>>> -
>>> -       /* Set option of each CPU we have */
>>> -       for (i = 0; i < cpu__max_cpu().cpu; i++) {
>>> -               struct perf_cpu cpu = { .cpu = i, };
>>> +       struct perf_cpu_map *intersect_cpus;
>>> +       struct perf_cpu cpu;
>>>
>>> -               /*
>>> -                * In per-cpu case, do the validation for CPUs to work with.
>>> -                * In per-thread case, the CPU map is empty.  Since the traced
>>> -                * program can run on any CPUs in this case, thus don't skip
>>> -                * validation.
>>> -                */
>>> -               if (!perf_cpu_map__has_any_cpu_or_is_empty(event_cpus) &&
>>> -                   !perf_cpu_map__has(event_cpus, cpu))
>>> -                       continue;
>>> +       /*
>>> +        * Set option of each CPU we have. In per-cpu case, do the validation
>>> +        * for CPUs to work with. In per-thread case, the CPU map has the "any"
>>> +        * CPU value. Since the traced program can run on any CPUs in this case,
>>> +        * thus don't skip validation.
>>> +        */
>>> +       if (!perf_cpu_map__has_any_cpu(event_cpus)) {
>>> +               struct perf_cpu_map *online_cpus = perf_cpu_map__new_online_cpus();
>>>
>>> -               if (!perf_cpu_map__has(online_cpus, cpu))
>>> -                       continue;
>>> +               intersect_cpus = perf_cpu_map__intersect(event_cpus, online_cpus);
>>> +               perf_cpu_map__put(online_cpus);
>>> +       } else {
>>> +               intersect_cpus = perf_cpu_map__new_online_cpus();
>>> +       }
>>
>> Would it be ok if any of these operations fail?  I believe the
>> cpu map functions work well with NULL already.
> 
> If the allocation fails then the loop below won't iterate (the map
> will be empty). The map is released and not used elsewhere in the
> code. An allocation failure here won't cause the code to crash, but
> there are other places where the code assumes what the properties of
> having done this function are and they won't be working as intended.
> It's not uncommon to see ENOMEM to just be abort for this reason.
> 
> Thanks,
> Ian
> 
>> Thanks,
>> Namhyung
>>

Reviewed-by: James Clark <james.clark@arm.com>

About the out of memory case, I don't have a strong preference. I doubt
much of the code is tested for or resilient to it, and the behaviour is
probably already unpredictable.

>>>
>>> -               err = cs_etm_validate_context_id(itr, evsel, i);
>>> +       perf_cpu_map__for_each_cpu_skip_any(cpu, idx, intersect_cpus) {
>>> +               err = cs_etm_validate_context_id(itr, evsel, cpu.cpu);
>>>                 if (err)
>>> -                       goto out;
>>> -               err = cs_etm_validate_timestamp(itr, evsel, i);
>>> +                       break;
>>> +
>>> +               err = cs_etm_validate_timestamp(itr, evsel, cpu.cpu);
>>>                 if (err)
>>> -                       goto out;
>>> +                       break;
>>>         }
>>>
>>> -       err = 0;
>>> -out:
>>> -       perf_cpu_map__put(online_cpus);
>>> +       perf_cpu_map__put(intersect_cpus);
>>>         return err;
>>>  }
>>>
>>> @@ -435,7 +434,7 @@ static int cs_etm_recording_options(struct auxtrace_record *itr,
>>>          * Also the case of per-cpu mmaps, need the contextID in order to be notified
>>>          * when a context switch happened.
>>>          */
>>> -       if (!perf_cpu_map__has_any_cpu_or_is_empty(cpus)) {
>>> +       if (!perf_cpu_map__is_any_cpu_or_is_empty(cpus)) {
>>>                 evsel__set_config_if_unset(cs_etm_pmu, cs_etm_evsel,
>>>                                            "timestamp", 1);
>>>                 evsel__set_config_if_unset(cs_etm_pmu, cs_etm_evsel,
>>> @@ -461,7 +460,7 @@ static int cs_etm_recording_options(struct auxtrace_record *itr,
>>>         evsel->core.attr.sample_period = 1;
>>>
>>>         /* In per-cpu case, always need the time of mmap events etc */
>>> -       if (!perf_cpu_map__has_any_cpu_or_is_empty(cpus))
>>> +       if (!perf_cpu_map__is_any_cpu_or_is_empty(cpus))
>>>                 evsel__set_sample_bit(evsel, TIME);
>>>
>>>         err = cs_etm_validate_config(itr, cs_etm_evsel);
>>> @@ -533,45 +532,31 @@ static size_t
>>>  cs_etm_info_priv_size(struct auxtrace_record *itr __maybe_unused,
>>>                       struct evlist *evlist __maybe_unused)
>>>  {
>>> -       int i;
>>> +       int idx;
>>>         int etmv3 = 0, etmv4 = 0, ete = 0;
>>>         struct perf_cpu_map *event_cpus = evlist->core.user_requested_cpus;
>>> -       struct perf_cpu_map *online_cpus = perf_cpu_map__new_online_cpus();
>>> -
>>> -       /* cpu map is not empty, we have specific CPUs to work with */
>>> -       if (!perf_cpu_map__has_any_cpu_or_is_empty(event_cpus)) {
>>> -               for (i = 0; i < cpu__max_cpu().cpu; i++) {
>>> -                       struct perf_cpu cpu = { .cpu = i, };
>>> +       struct perf_cpu_map *intersect_cpus;
>>> +       struct perf_cpu cpu;
>>>
>>> -                       if (!perf_cpu_map__has(event_cpus, cpu) ||
>>> -                           !perf_cpu_map__has(online_cpus, cpu))
>>> -                               continue;
>>> +       if (!perf_cpu_map__has_any_cpu(event_cpus)) {
>>> +               /* cpu map is not "any" CPU , we have specific CPUs to work with */
>>> +               struct perf_cpu_map *online_cpus = perf_cpu_map__new_online_cpus();
>>>
>>> -                       if (cs_etm_is_ete(itr, i))
>>> -                               ete++;
>>> -                       else if (cs_etm_is_etmv4(itr, i))
>>> -                               etmv4++;
>>> -                       else
>>> -                               etmv3++;
>>> -               }
>>> +               intersect_cpus = perf_cpu_map__intersect(event_cpus, online_cpus);
>>> +               perf_cpu_map__put(online_cpus);
>>>         } else {
>>> -               /* get configuration for all CPUs in the system */
>>> -               for (i = 0; i < cpu__max_cpu().cpu; i++) {
>>> -                       struct perf_cpu cpu = { .cpu = i, };
>>> -
>>> -                       if (!perf_cpu_map__has(online_cpus, cpu))
>>> -                               continue;
>>> -
>>> -                       if (cs_etm_is_ete(itr, i))
>>> -                               ete++;
>>> -                       else if (cs_etm_is_etmv4(itr, i))
>>> -                               etmv4++;
>>> -                       else
>>> -                               etmv3++;
>>> -               }
>>> +               /* Event can be "any" CPU so count all online CPUs. */
>>> +               intersect_cpus = perf_cpu_map__new_online_cpus();
>>>         }
>>> -
>>> -       perf_cpu_map__put(online_cpus);
>>> +       perf_cpu_map__for_each_cpu_skip_any(cpu, idx, intersect_cpus) {
>>> +               if (cs_etm_is_ete(itr, cpu.cpu))
>>> +                       ete++;
>>> +               else if (cs_etm_is_etmv4(itr, cpu.cpu))
>>> +                       etmv4++;
>>> +               else
>>> +                       etmv3++;
>>> +       }
>>> +       perf_cpu_map__put(intersect_cpus);
>>>
>>>         return (CS_ETM_HEADER_SIZE +
>>>                (ete   * CS_ETE_PRIV_SIZE) +
>>> @@ -813,16 +798,15 @@ static int cs_etm_info_fill(struct auxtrace_record *itr,
>>>         if (!session->evlist->core.nr_mmaps)
>>>                 return -EINVAL;
>>>
>>> -       /* If the cpu_map is empty all online CPUs are involved */
>>> -       if (perf_cpu_map__has_any_cpu_or_is_empty(event_cpus)) {
>>> +       /* If the cpu_map has the "any" CPU all online CPUs are involved */
>>> +       if (perf_cpu_map__has_any_cpu(event_cpus)) {
>>>                 cpu_map = online_cpus;
>>>         } else {
>>>                 /* Make sure all specified CPUs are online */
>>> -               for (i = 0; i < perf_cpu_map__nr(event_cpus); i++) {
>>> -                       struct perf_cpu cpu = { .cpu = i, };
>>> +               struct perf_cpu cpu;
>>>
>>> -                       if (perf_cpu_map__has(event_cpus, cpu) &&
>>> -                           !perf_cpu_map__has(online_cpus, cpu))
>>> +               perf_cpu_map__for_each_cpu(cpu, i, event_cpus) {
>>> +                       if (!perf_cpu_map__has(online_cpus, cpu))
>>>                                 return -EINVAL;
>>>                 }
>>>
>>> diff --git a/tools/perf/arch/arm64/util/arm-spe.c b/tools/perf/arch/arm64/util/arm-spe.c
>>> index 51ccbfd3d246..0b52e67edb3b 100644
>>> --- a/tools/perf/arch/arm64/util/arm-spe.c
>>> +++ b/tools/perf/arch/arm64/util/arm-spe.c
>>> @@ -232,7 +232,7 @@ static int arm_spe_recording_options(struct auxtrace_record *itr,
>>>          * In the case of per-cpu mmaps, sample CPU for AUX event;
>>>          * also enable the timestamp tracing for samples correlation.
>>>          */
>>> -       if (!perf_cpu_map__has_any_cpu_or_is_empty(cpus)) {
>>> +       if (!perf_cpu_map__is_any_cpu_or_is_empty(cpus)) {
>>>                 evsel__set_sample_bit(arm_spe_evsel, CPU);
>>>                 evsel__set_config_if_unset(arm_spe_pmu, arm_spe_evsel,
>>>                                            "ts_enable", 1);
>>> @@ -265,7 +265,7 @@ static int arm_spe_recording_options(struct auxtrace_record *itr,
>>>         tracking_evsel->core.attr.sample_period = 1;
>>>
>>>         /* In per-cpu case, always need the time of mmap events etc */
>>> -       if (!perf_cpu_map__has_any_cpu_or_is_empty(cpus)) {
>>> +       if (!perf_cpu_map__is_any_cpu_or_is_empty(cpus)) {
>>>                 evsel__set_sample_bit(tracking_evsel, TIME);
>>>                 evsel__set_sample_bit(tracking_evsel, CPU);
>>>
>>> --
>>> 2.43.0.594.gd9cf4e227d-goog
>>>


Thread overview: 40+ messages

2024-02-02 23:40 [PATCH v3 0/8] Clean up libperf cpumap's empty function Ian Rogers
2024-02-02 23:40 ` [PATCH v3 1/8] libperf cpumap: Add any, empty and min helpers Ian Rogers
2024-02-02 23:40 ` [PATCH v3 2/8] libperf cpumap: Ensure empty cpumap is NULL from alloc Ian Rogers
2024-02-17  0:25   ` Namhyung Kim
2024-02-17  0:52     ` Ian Rogers
2024-02-19 10:38       ` James Clark
2024-02-02 23:40 ` [PATCH v3 3/8] perf arm-spe/cs-etm: Directly iterate CPU maps Ian Rogers
2024-02-17  1:01   ` Namhyung Kim
2024-02-17  1:33     ` Ian Rogers
2024-02-19 10:16       ` James Clark [this message]
2024-02-02 23:40 ` [PATCH v3 4/8] perf intel-pt/intel-bts: Switch perf_cpu_map__has_any_cpu_or_is_empty use Ian Rogers
2024-02-02 23:40 ` [PATCH v3 5/8] perf cpumap: Clean up use of perf_cpu_map__has_any_cpu_or_is_empty Ian Rogers
2024-02-02 23:40 ` [PATCH v3 6/8] perf arm64 header: Remove unnecessary CPU map get and put Ian Rogers
2024-02-02 23:40 ` [PATCH v3 7/8] perf stat: Remove duplicate cpus_map_matched function Ian Rogers
2024-02-02 23:40 ` [PATCH v3 8/8] perf cpumap: Use perf_cpu_map__for_each_cpu when possible Ian Rogers
2024-02-14 22:02 ` [PATCH v3 0/8] Clean up libperf cpumap's empty function Ian Rogers
2024-02-17  1:04   ` Namhyung Kim
2024-03-07 23:47     ` Namhyung Kim
2024-03-18 21:37       ` Arnaldo Carvalho de Melo
2024-03-19  4:18         ` Namhyung Kim
