From: Arnaldo Carvalho de Melo <acme@kernel.org>
To: Ian Rogers <irogers@google.com>
Cc: Namhyung Kim <namhyung@kernel.org>, Jiri Olsa <jolsa@kernel.org>,
	Ingo Molnar <mingo@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Adrian Hunter <adrian.hunter@intel.com>,
	linux-perf-users@vger.kernel.org,
	Song Liu <songliubraving@fb.com>, Hao Luo <haoluo@google.com>,
	bpf@vger.kernel.org
Subject: Re: [PATCH v4] perf tools: Get a perf cgroup more portably in BPF
Date: Mon, 26 Sep 2022 14:11:59 +0100
Message-ID: <YzGlH51Idwet/NE9@kernel.org>
In-Reply-To: <CAP-5=fWwNwtocMR3s1Su2k2vZAwL4yhX19UGZ4i0dMXFDFFBJA@mail.gmail.com>

On Fri, Sep 23, 2022 at 09:45:19AM -0700, Ian Rogers wrote:
> On Thu, Sep 22, 2022 at 11:32 PM Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > The value of perf_event_cgrp_id can differ across kernel configurations.
> > To be portable as CO-RE programs, the BPF code needs to get the cgroup
> > subsys id using the bpf_core_enum_value() helper.
> >
> > Suggested-by: Ian Rogers <irogers@google.com>
> > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> 
> Reviewed-by: Ian Rogers <irogers@google.com>
> 
> It would be good to get this into perf/urgent; does it need a Fixes tag for that?

I got it into the perf/urgent branch.

- Arnaldo
 
> Thanks,
> Ian
> 
> > ---
> > v4 changes)
> >  * add a missing check in the off_cpu
> >
> > v3 changes)
> >  * check compiler features for enum value
> >
> > v2 changes)
> >  * fix off_cpu.bpf.c too
> >  * get perf_subsys_id only once
> >
> >  tools/perf/util/bpf_skel/bperf_cgroup.bpf.c | 11 ++++++++++-
> >  tools/perf/util/bpf_skel/off_cpu.bpf.c      | 18 ++++++++++++++----
> >  2 files changed, 24 insertions(+), 5 deletions(-)
> >
> > diff --git a/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> > index 292c430768b5..8e7520e273db 100644
> > --- a/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> > +++ b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> > @@ -48,6 +48,7 @@ const volatile __u32 num_cpus = 1;
> >
> >  int enabled = 0;
> >  int use_cgroup_v2 = 0;
> > +int perf_subsys_id = -1;
> >
> >  static inline int get_cgroup_v1_idx(__u32 *cgrps, int size)
> >  {
> > @@ -58,7 +59,15 @@ static inline int get_cgroup_v1_idx(__u32 *cgrps, int size)
> >         int level;
> >         int cnt;
> >
> > -       cgrp = BPF_CORE_READ(p, cgroups, subsys[perf_event_cgrp_id], cgroup);
> > +       if (perf_subsys_id == -1) {
> > +#if __has_builtin(__builtin_preserve_enum_value)
> > +               perf_subsys_id = bpf_core_enum_value(enum cgroup_subsys_id,
> > +                                                    perf_event_cgrp_id);
> > +#else
> > +               perf_subsys_id = perf_event_cgrp_id;
> > +#endif
> > +       }
> > +       cgrp = BPF_CORE_READ(p, cgroups, subsys[perf_subsys_id], cgroup);
> >         level = BPF_CORE_READ(cgrp, level);
> >
> >         for (cnt = 0; i < MAX_LEVELS; i++) {
> > diff --git a/tools/perf/util/bpf_skel/off_cpu.bpf.c b/tools/perf/util/bpf_skel/off_cpu.bpf.c
> > index c4ba2bcf179f..38e3b287dbb2 100644
> > --- a/tools/perf/util/bpf_skel/off_cpu.bpf.c
> > +++ b/tools/perf/util/bpf_skel/off_cpu.bpf.c
> > @@ -94,6 +94,8 @@ const volatile bool has_prev_state = false;
> >  const volatile bool needs_cgroup = false;
> >  const volatile bool uses_cgroup_v1 = false;
> >
> > +int perf_subsys_id = -1;
> > +
> >  /*
> >   * Old kernel used to call it task_struct->state and now it's '__state'.
> >   * Use BPF CO-RE "ignored suffix rule" to deal with it like below:
> > @@ -119,11 +121,19 @@ static inline __u64 get_cgroup_id(struct task_struct *t)
> >  {
> >         struct cgroup *cgrp;
> >
> > -       if (uses_cgroup_v1)
> > -               cgrp = BPF_CORE_READ(t, cgroups, subsys[perf_event_cgrp_id], cgroup);
> > -       else
> > -               cgrp = BPF_CORE_READ(t, cgroups, dfl_cgrp);
> > +       if (!uses_cgroup_v1)
> > +               return BPF_CORE_READ(t, cgroups, dfl_cgrp, kn, id);
> > +
> > +       if (perf_subsys_id == -1) {
> > +#if __has_builtin(__builtin_preserve_enum_value)
> > +               perf_subsys_id = bpf_core_enum_value(enum cgroup_subsys_id,
> > +                                                    perf_event_cgrp_id);
> > +#else
> > +               perf_subsys_id = perf_event_cgrp_id;
> > +#endif
> > +       }
> >
> > +       cgrp = BPF_CORE_READ(t, cgroups, subsys[perf_subsys_id], cgroup);
> >         return BPF_CORE_READ(cgrp, kn, id);
> >  }
> >
> > --
> > 2.37.3.998.g577e59143f-goog
> >

-- 

- Arnaldo
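
For reference, the CO-RE enum-value relocation that the commit message above
describes can be exercised in a minimal standalone BPF program. The sketch
below is illustrative only and not part of the patch: the program name, the
raw_tp attach point and the resolve_perf_subsys_id() helper are made up for
the example, and it assumes a vmlinux.h generated with bpftool plus the libbpf
headers that provide bpf_core_enum_value() and BPF_CORE_READ().

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

int perf_subsys_id = -1;

static inline int resolve_perf_subsys_id(void)
{
	if (perf_subsys_id == -1) {
#if __has_builtin(__builtin_preserve_enum_value)
		/* Relocated at load time to the running kernel's value
		 * of perf_event_cgrp_id. */
		perf_subsys_id = bpf_core_enum_value(enum cgroup_subsys_id,
						     perf_event_cgrp_id);
#else
		/* Older compilers: fall back to the build-time constant,
		 * which may not match the running kernel. */
		perf_subsys_id = perf_event_cgrp_id;
#endif
	}
	return perf_subsys_id;
}

/* Hypothetical attach point, only to give the helper somewhere to run. */
SEC("raw_tp/sched_switch")
int print_perf_cgroup(void *ctx)
{
	struct task_struct *t = (void *)bpf_get_current_task();
	struct cgroup *cgrp;
	int idx = resolve_perf_subsys_id();

	cgrp = BPF_CORE_READ(t, cgroups, subsys[idx], cgroup);
	bpf_printk("perf cgroup id: %llu", BPF_CORE_READ(cgrp, kn, id));
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

The point of the relocation is that the numeric value of perf_event_cgrp_id
depends on which cgroup controllers the kernel was built with, so a value
baked in at compile time from one vmlinux.h may index the wrong subsys[] slot
on another kernel; bpf_core_enum_value() defers the lookup to load time.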

Thread overview: 4+ messages
2022-09-23  6:32 [PATCH v4] perf tools: Get a perf cgroup more portably in BPF Namhyung Kim
2022-09-23 16:45 ` Ian Rogers
2022-09-26 13:11   ` Arnaldo Carvalho de Melo [this message]
2022-09-26 13:06 ` Arnaldo Carvalho de Melo
