From: Linu Cherian <linuc.decode@gmail.com>
To: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel <linux-arm-kernel@lists.infradead.org>,
	Coresight ML <coresight@lists.linaro.org>,
	Mathieu Poirier <mathieu.poirier@linaro.org>,
	Linu Cherian <lcherian@marvell.com>,
	Mike Leach <mike.leach@linaro.org>
Subject: Re: [PATCH v4 0/2] Make sysFS functional on topologies with per core sink
Date: Mon, 26 Oct 2020 10:03:57 +0530	[thread overview]
Message-ID: <CAAHhmWhzc2+K3uOLbKEngLbjzPW9KqipjwGpyk9CFA0UNa4Lxg@mail.gmail.com> (raw)
In-Reply-To: <2bd65f2d-5660-10b3-f51f-448221d78d3d@arm.com>

Hi Suzuki,

On Mon, Oct 5, 2020 at 4:52 PM Suzuki K Poulose <suzuki.poulose@arm.com> wrote:
>
> Hi Linu,
>
> On 09/04/2020 03:41 AM, Linu Cherian wrote:
> > This patch series tries to fix the sysfs breakage on topologies
> > with per core sink.
> >
> > Changes since v3:
> > - References to coresight_get_enabled_sink in the perf interface
> >    have been removed, and it is marked as deprecated in a new patch.
> > - To avoid changes to coresight_find_sink, for ease of maintenance,
> >    a search function specific to sysfs usage has been added.
> > - Sysfs being the only user of coresight_get_enabled_sink,
> >    the reset option is removed as well.
>
> Have you tried running perf with the --per-thread option? I believe
> this will be impacted as well, as we choose a single sink at the
> moment and it may not be reachable from the other CPUs where the
> event may be scheduled, eventually losing trace for the duration
> the task is scheduled on a different CPU.
>
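
(For context, the per-thread mode discussed here would be exercised with
something along the lines of:

    perf record -e cs_etm// --per-thread -- <workload>

where a single AUX buffer follows the traced task rather than there being
one buffer per CPU, so the sink picked at setup time has to be usable from
whichever CPU the task ends up running on.)
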
> Please could you try this patch and see if it helps? I have lightly
> tested this on a fast model.

We are seeing some issues while testing with this patch.
The problem is that the buffer allocation for the sink always happens on the
first core in the CPU mask, and this doesn't match the core on which the
event is started. Please see below for additional comments.
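
For reference, a rough sketch of the flow we believe is at play, paraphrasing
the tail end of etm_setup_aux() rather than quoting it exactly:

	/*
	 * Sketch, not the literal upstream code: by this point "sink" is
	 * whatever was resolved while walking the event's CPU mask,
	 * starting from its first CPU.
	 */
	cpu = cpumask_first(mask);
	if (cpu >= nr_cpu_ids)
		goto err;

	/* The single per-session sink buffer is then tied to that sink. */
	event_data->snk_config =
		sink_ops(sink)->alloc_buffer(sink, event, pages,
					     nr_pages, overwrite);

If the event then starts on a core whose per-core sink is a different
device, that buffer no longer matches the sink actually being driven, which
is the mismatch described above.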


>
> ---8>---
>
> coresight: etm-perf: Allow an event to use multiple sinks
>
> When there are multiple sinks on the system, in the absence
> of a specified sink, it is quite possible that a default sink
> for an ETM could be different from that of another ETM (e.g., on
> systems with per-CPU sinks). However, we do not support having
> multiple sinks for an event yet. This patch allows the event to
> use the default sinks on the ETMs where it is scheduled, as
> long as the sinks are of the same type.
>
> e.g., if we have a 1x1 topology with per-CPU ETRs, the event can
> use the per-CPU ETR for the session. However, if the sinks
> are of different types, e.g., TMC-ETR on one and a custom sink
> on another, the event will only trace on the first detected
> sink (just like we have today).
>
> Cc: Linu Cherian <lcherian@marvell.com>
> Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
> Cc: Mike Leach <mike.leach@linaro.org>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
>   .../hwtracing/coresight/coresight-etm-perf.c  | 69 +++++++++++++------
>   1 file changed, 49 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
> index c2c9b127d074..19fe38010474 100644
> --- a/drivers/hwtracing/coresight/coresight-etm-perf.c
> +++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
> @@ -204,14 +204,28 @@ static void etm_free_aux(void *data)
>         schedule_work(&event_data->work);
>   }
>
> +/*
> + * When an event could be scheduled on more than one CPU, we have to make
> + * sure that the sinks are of the same type, so that the sink buffer can
> + * be reused.
> + */
> +static bool sinks_match(struct coresight_device *a, struct coresight_device *b)
> +{
> +       if (!a || !b)
> +               return false;
> +       return (sink_ops(a) == sink_ops(b)) &&
> +              (a->subtype.sink_subtype == b->subtype.sink_subtype);
> +}
> +
>   static void *etm_setup_aux(struct perf_event *event, void **pages,
>                            int nr_pages, bool overwrite)
>   {
>         u32 id;
>         int cpu = event->cpu;
>         cpumask_t *mask;
> -       struct coresight_device *sink;
> +       struct coresight_device *sink = NULL;
>         struct etm_event_data *event_data = NULL;
> +       bool sink_forced = false;
>
>         event_data = alloc_event_data(cpu);
>         if (!event_data)
> @@ -222,6 +236,7 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
>         if (event->attr.config2) {
>                 id = (u32)event->attr.config2;
>                 sink = coresight_get_sink_by_id(id);
> +               sink_forced = true;
>         }
>
>         mask = &event_data->mask;
> @@ -235,7 +250,7 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
>          */
>         for_each_cpu(cpu, mask) {
>                 struct list_head *path;
> -               struct coresight_device *csdev;
> +               struct coresight_device *csdev, *cpu_sink;
>
>                 csdev = per_cpu(csdev_src, cpu);
>                 /*
> @@ -243,33 +258,42 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
>                  * the mask and continue with the rest. If ever we try to trace
>                  * on this CPU, we handle it accordingly.
>                  */
> -               if (!csdev) {
> -                       cpumask_clear_cpu(cpu, mask);
> -                       continue;
> -               }
> -
> +               if (!csdev)
> +                       goto clear_cpu;
>                 /*
> -                * No sink provided - look for a default sink for one of the
> -                * devices. At present we only support topology where all CPUs
> -                * use the same sink [N:1], so only need to find one sink. The
> -                * coresight_build_path later will remove any CPU that does not
> -                * attach to the sink, or if we have not found a sink.
> +                * No sink provided - look for a default sink for all the devices.
> +                * We support multiple sinks only if all the default sinks
> +                * are of the same type, so that the sink buffer can be shared
> +                * as the event moves around. As before, we don't trace on a
> +                * CPU if we can't find a suitable sink.
>                  */
> -               if (!sink)
> -                       sink = coresight_find_default_sink(csdev);
> +               if (!sink_forced) {
> +                       cpu_sink = coresight_find_default_sink(csdev);
> +                       if (!cpu_sink)
> +                               goto clear_cpu;
> +                       /* First sink for this event */
> +                       if (!sink) {
> +                               sink = cpu_sink;
> +                       } else if (!sinks_match(cpu_sink, sink)) {
> +                               goto clear_cpu;
> +                       }
>


With the per-thread option, cpu_sink always ends up being the sink of core 0,
since all cores are enabled in the CPU mask.
So it feels like we need to take into account the core on which the event
gets started (the core on which the process executes) when doing the
buffer allocation.
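
A minimal illustration of why it is always core 0 here, assuming the usual
alloc_event_data() behaviour (sketch, not the exact code):

	/* A per-thread session comes in with event->cpu == -1 ... */
	if (event->cpu == -1)
		cpumask_copy(mask, cpu_present_mask);	/* every core */
	else
		cpumask_set_cpu(event->cpu, mask);

	/*
	 * ... so the first CPU of the mask is always 0, regardless of
	 * where the traced task actually gets to run first.
	 */
	cpu = cpumask_first(mask);

Hence the suggestion to tie the buffer to the CPU on which the event is
actually started, rather than to the first CPU of the mask.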

On a related note, I had another question on this.
Don't we also need to address the case where multiple threads are forked
within a process?

Thanks.


