From: Anshuman Khandual <anshuman.khandual@arm.com>
To: Suzuki K Poulose <suzuki.poulose@arm.com>,
	Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, will@kernel.org,
	catalin.marinas@arm.com, Mark Brown <broonie@kernel.org>,
	James Clark <james.clark@arm.com>, Rob Herring <robh@kernel.org>,
	Marc Zyngier <maz@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	linux-perf-users@vger.kernel.org
Subject: Re: [PATCH V11 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
Date: Fri, 9 Jun 2023 09:30:59 +0530
Message-ID: <78cb22e2-c46e-d62d-fefc-b7963737499e@arm.com>
In-Reply-To: <290b577c-4740-d2e2-d236-c8bbe2f907b9@arm.com>



On 6/8/23 15:43, Suzuki K Poulose wrote:
> On 06/06/2023 11:34, Anshuman Khandual wrote:
>>
>>
>> On 6/5/23 17:35, Mark Rutland wrote:
>>> On Wed, May 31, 2023 at 09:34:23AM +0530, Anshuman Khandual wrote:
>>>> This enables support for branch stack sampling events in the ARMV8 PMU,
>>>> checking has_branch_stack() on the event inside 'struct arm_pmu' callbacks,
>>>> although these branch stack helpers armv8pmu_branch_XXXXX() are just dummy
>>>> functions for now. While here, this also defines arm_pmu's sched_task()
>>>> callback with armv8pmu_sched_task(), which resets the branch record buffer
>>>> on a sched_in.
>>>
>>> This generally looks good, but I have a few comments below.
>>>
>>> [...]
>>>
>>>> +static inline bool armv8pmu_branch_valid(struct perf_event *event)
>>>> +{
>>>> +    WARN_ON_ONCE(!has_branch_stack(event));
>>>> +    return false;
>>>> +}
>>>
>>> IIUC this is for validating the attr, so could we please name this
>>> armv8pmu_branch_attr_valid() ?
>>
>> Sure, will change the name and update the call sites.
>>
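
i.e. something like this (same placeholder body as the helper quoted above,
just renamed):

static inline bool armv8pmu_branch_attr_valid(struct perf_event *event)
{
        WARN_ON_ONCE(!has_branch_stack(event));
        return false;
}
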
>>>
>>> [...]
>>>
>>>> +static int branch_records_alloc(struct arm_pmu *armpmu)
>>>> +{
>>>> +    struct pmu_hw_events *events;
>>>> +    int cpu;
>>>> +
>>>> +    for_each_possible_cpu(cpu) {
> 
> Shouldn't this be supported_cpus ? i.e.
>     for_each_cpu(cpu, &armpmu->supported_cpus) {
> 
> 
>>>> +        events = per_cpu_ptr(armpmu->hw_events, cpu);
>>>> +        events->branches = kzalloc(sizeof(struct branch_records), GFP_KERNEL);
>>>> +        if (!events->branches)
>>>> +            return -ENOMEM;
> 
> Do we need to free the already allocated branches here (on failure) ?

This gets fixed in the next patch via per-cpu allocation. I will
move and fold that code block in here. The updated function will
look like the following.

static int branch_records_alloc(struct arm_pmu *armpmu)
{
        struct branch_records __percpu *records;
        int cpu;

        records = alloc_percpu_gfp(struct branch_records, GFP_KERNEL);
        if (!records)
                return -ENOMEM;

        /*
         * FIXME: Memory allocated via 'records' gets completely
         * handed out to the per-cpu hw_events here and never needs
         * to be freed later. Hence losing access to the local
         * 'records' handle is acceptable. Otherwise this allocation
         * handle would have to be saved somewhere.
         */
        for_each_possible_cpu(cpu) {
                struct pmu_hw_events *events_cpu;
                struct branch_records *records_cpu;

                events_cpu = per_cpu_ptr(armpmu->hw_events, cpu);
                records_cpu = per_cpu_ptr(records, cpu);
                events_cpu->branches = records_cpu;
        }
        return 0;
}

Regarding the cpumask argument in for_each_cpu().

- hw_events is a __percpu pointer in struct arm_pmu

	- pmu->hw_events = alloc_percpu_gfp(struct pmu_hw_events, GFP_KERNEL)


- 'records' above is being allocated via alloc_percpu_gfp()

	- records = alloc_percpu_gfp(struct branch_records, GFP_KERNEL)

If the 'armpmu->supported_cpus' mask were used instead of the possible-CPU
mask, wouldn't there be some dangling per-cpu branch_records areas that
remain unassigned ? Assigning all of them back into hw_events should be
harmless.

> 
>>>> +    }
> 
> 
> May be:
>     int ret = 0;
> 
>     for_each_cpu(cpu, &armpmu->supported_cpus) {
>         events = per_cpu_ptr(armpmu->hw_events, cpu);
>         events->branches = kzalloc(sizeof(struct branch_records), GFP_KERNEL);
>
>         if (!events->branches) {
>             ret = -ENOMEM;
>             break;
>         }
>     }
> 
>     if (!ret)
>         return 0;
> 
>     for_each_cpu(cpu, &armpmu->supported_cpus) {
>         events = per_cpu_ptr(armpmu->hw_events, cpu);
>         if (!events->branches)
>             break;
>         kfree(events->branches);
>     }
>     return ret;
>     
>>>> +    return 0;
>>>
>>> This leaks memory if any allocation fails, and the next patch replaces this
>>> code entirely.
>>
>> Okay.
>>
>>>
>>> Please add this once in a working state. Either use the percpu allocation
>>> trick in the next patch from the start, or have this kzalloc() with a
>>> corresponding kfree() in an error path.
>>
>> I will change branch_records_alloc() as suggested in the next patch's thread
>> and fold those changes here in this patch.
>>
>>>
>>>>   }
>>>>     static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>>>> @@ -1145,12 +1162,24 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>>>>       };
>>>>       int ret;
>>>>   +    ret = armv8pmu_private_alloc(cpu_pmu);
>>>> +    if (ret)
>>>> +        return ret;
>>>> +
>>>>       ret = smp_call_function_any(&cpu_pmu->supported_cpus,
>>>>                       __armv8pmu_probe_pmu,
>>>>                       &probe, 1);
>>>>       if (ret)
>>>>           return ret;
>>>>   +    if (arm_pmu_branch_stack_supported(cpu_pmu)) {
>>>> +        ret = branch_records_alloc(cpu_pmu);
>>>> +        if (ret)
>>>> +            return ret;
>>>> +    } else {
>>>> +        armv8pmu_private_free(cpu_pmu);
>>>> +    }
>>>
>>> I see from the next patch that "private" is four ints, so please just add that
>>> to struct arm_pmu under an ifdef CONFIG_ARM64_BRBE. That'll simplify this, and
>>> if we end up needing more space in future we can consider factoring it out.
>>
>> struct arm_pmu {
>>     ........................................
>>          /* Implementation specific attributes */
>>          void            *private;
>> }
>>
>> The private pointer here creates an abstraction for a given PMU implementation
>> to hide attribute details without making them known to the core arm_pmu layer.
>> Although adding an ifdef CONFIG_ARM64_BRBE solves the problem as mentioned
>> above, it does break that abstraction. Currently the arm_pmu layer is aware
>> of 'branch records' but not of BRBE in particular, which the driver adds
>> later on. I suggest we should not break that abstraction.
>>
>> Instead, a global 'static struct brbe_hw_attr' in drivers/perf/arm_brbe.c
>> can be initialized into arm_pmu->private during armv8pmu_branch_probe(),
>> which will also solve the allocation/free problem. Similarly, helpers
>> armv8pmu_task_ctx_alloc()/free() could be defined to manage the task context
>> cache, i.e. arm_pmu->pmu.task_ctx_cache, independently.
>>
>> But then armv8pmu_task_ctx_alloc() can be called after the PMU probe confirms
>> arm_pmu->has_branch_stack.
>>
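
A rough sketch of that idea, just for illustration (the struct contents and
the exact armv8pmu_branch_probe() signature are placeholders here, not the
final driver layout):

/* drivers/perf/arm_brbe.c */
static struct brbe_hw_attr brbe_hw_attr;

void armv8pmu_branch_probe(struct arm_pmu *armpmu)
{
        ...
        /* No runtime allocation needed, point private at the static instance */
        armpmu->private = &brbe_hw_attr;
}
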
>>>
>>>> +
>>>>       return probe.present ? 0 : -ENODEV;
>>>>   }
>>>
>>> It also seems odd to check probe.present *after* checking
>>> arm_pmu_branch_stack_supported().
>>
>> I will reorganize as suggested below.
>>
>>>
>>> With the allocation removed I think this can be written more clearly as:
>>>
>>> | static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>>> | {
>>> |         struct armv8pmu_probe_info probe = {
>>> |                 .pmu = cpu_pmu,
>>> |                 .present = false,
>>> |         };
>>> |         int ret;
>>> |
>>> |         ret = smp_call_function_any(&cpu_pmu->supported_cpus,
>>> |                                     __armv8pmu_probe_pmu,
>>> |                                     &probe, 1);
>>> |         if (ret)
>>> |                 return ret;
>>> |
>>> |         if (!probe.present)
>>> |                 return -ENODEV;
>>> |
>>> |         if (arm_pmu_branch_stack_supported(cpu_pmu))
>>> |                 ret = branch_records_alloc(cpu_pmu);
>>> |
>>> |         return ret;
>>> | }
> 
> Could we not simplify this as below and keep the abstraction, since we
> already have it ?

No, there is an allocation dependency before the smp call context (explained below).
 
> 
>>> | static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
>>> | {
>>> |         struct armv8pmu_probe_info probe = {
>>> |                 .pmu = cpu_pmu,
>>> |                 .present = false,
>>> |         };
>>> |         int ret;
>>> |
>>> |         ret = smp_call_function_any(&cpu_pmu->supported_cpus,
>>> |                                     __armv8pmu_probe_pmu,
>>> |                                     &probe, 1);
>>> |         if (ret)
>>> |                 return ret;
>>> |         if (!probe.present)
>>> |                 return -ENODEV;
>>> |
>>> |         if (!arm_pmu_branch_stack_supported(cpu_pmu))
>>> |                 return 0;
>>> |
>>> |         ret = armv8pmu_private_alloc(cpu_pmu);

This needs to be allocated before each supported PMU gets probed via
__armv8pmu_probe_pmu() inside the smp_call_function_any() callback, which
unfortunately cannot do memory allocation as it runs in atomic context.
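
For reference, the required ordering is just what the hunk quoted further
above already does, with the constraint annotated:

        /* Process context, so a normal (sleeping) allocation is allowed */
        ret = armv8pmu_private_alloc(cpu_pmu);
        if (ret)
                return ret;

        /* Runs the callback in atomic context: it must not sleep or allocate */
        ret = smp_call_function_any(&cpu_pmu->supported_cpus,
                                    __armv8pmu_probe_pmu, &probe, 1);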

>>> |         if (ret)
>>> |                 return ret;
>>> |
>>> |         ret = branch_records_alloc(cpu_pmu);
>>> |         if (ret)
>>> |                 armv8pmu_private_free(cpu_pmu);
>>> |
>>> |         return ret;
>>> | }

Changing the abstraction this late in the development phase would cause too
much code churn, which should be avoided IMHO.

