linux-kernel.vger.kernel.org archive mirror
* [PATCHSET v4 0/4] perf stat: Enable BPF counters with --for-each-cgroup
@ 2021-06-25  7:18 Namhyung Kim
  2021-06-25  7:18 ` [PATCH 1/4] perf tools: Add read_cgroup_id() function Namhyung Kim
                   ` (4 more replies)
  0 siblings, 5 replies; 21+ messages in thread
From: Namhyung Kim @ 2021-06-25  7:18 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Jiri Olsa
  Cc: Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen, Ian Rogers,
	Stephane Eranian, Song Liu

Hello,

This adds BPF support for --for-each-cgroup to handle many cgroup
events on big machines.  You can use the --bpf-counters option to
enable the new behavior.
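
For example (the cgroup names here are just illustrative):

  $ perf stat -a --bpf-counters --for-each-cgroup A,B,C \
        -e cycles,instructions sleep 1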

 * changes in v4
  - convert cgrp_readings to a per-cpu array map
  - remove now-unused cpu_idx map
  - move common functions to a header file
  - reuse bpftool bootstrap binary
  - fix build error in the cgroup code
  
 * changes in v3
  - support cgroup hierarchy with ancestor ids
  - add and trigger raw_tp BPF program
  - add a build rule for vmlinux.h

 * changes in v2
  - remove incorrect use of BPF_F_PRESERVE_ELEMS
  - add missing map elements after lookup
  - handle cgroup v1

The basic idea is to use a single set of per-cpu events to count the
events of interest and aggregate them per cgroup.  I used the bperf
mechanism to run a BPF program on cgroup-switches and save the
results in a matching map element for the given cgroups.

Without this, we need separate events for each cgroup, which creates
unnecessary multiplexing overhead (and PMU reprogramming) when tasks
in different cgroups are switched.  I saw this make a big difference
on 256-cpu machines with hundreds of cgroups.
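
To put rough numbers on it (illustrative only): with 4 events, 100
cgroups and 256 cpus, opening separate events needs 4 * 100 * 256 =
102400 event FDs, while this series opens a single set of 4 * 256 =
1024 events (plus one cgroup-switches event per cpu) and does the
per-cgroup aggregation in BPF.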

Actually this is what I wanted to do in the kernel [1], but we can
do the job using BPF!


Thanks,
Namhyung


[1] https://lore.kernel.org/lkml/20210413155337.644993-1-namhyung@kernel.org/


Namhyung Kim (4):
  perf tools: Add read_cgroup_id() function
  perf tools: Add cgroup_is_v2() helper
  perf tools: Move common bpf functions to bpf_counter.h
  perf stat: Enable BPF counter with --for-each-cgroup

 tools/perf/Makefile.perf                    |  17 +-
 tools/perf/util/Build                       |   1 +
 tools/perf/util/bpf_counter.c               |  57 +---
 tools/perf/util/bpf_counter.h               |  52 ++++
 tools/perf/util/bpf_counter_cgroup.c        | 299 ++++++++++++++++++++
 tools/perf/util/bpf_skel/bperf_cgroup.bpf.c | 191 +++++++++++++
 tools/perf/util/cgroup.c                    |  46 +++
 tools/perf/util/cgroup.h                    |  12 +
 8 files changed, 622 insertions(+), 53 deletions(-)
 create mode 100644 tools/perf/util/bpf_counter_cgroup.c
 create mode 100644 tools/perf/util/bpf_skel/bperf_cgroup.bpf.c

-- 
2.32.0.93.g670b81a890-goog



* [PATCH 1/4] perf tools: Add read_cgroup_id() function
  2021-06-25  7:18 [PATCHSET v4 0/4] perf stat: Enable BPF counters with --for-each-cgroup Namhyung Kim
@ 2021-06-25  7:18 ` Namhyung Kim
  2021-07-01 17:59   ` Arnaldo Carvalho de Melo
  2021-06-25  7:18 ` [PATCH 2/4] perf tools: Add cgroup_is_v2() helper Namhyung Kim
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 21+ messages in thread
From: Namhyung Kim @ 2021-06-25  7:18 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Jiri Olsa
  Cc: Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen, Ian Rogers,
	Stephane Eranian, Song Liu

read_cgroup_id() reads the cgroup id of the given cgroup from a file
handle obtained with name_to_handle_at(2).  It'll be used by the
bperf cgroup stat code later.
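
For illustration, here is the same trick as a minimal standalone
program (the cgroup path is made up and error handling is trimmed):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	struct {
		struct file_handle fh;
		uint64_t cgroup_id;
	} handle = { .fh.handle_bytes = sizeof(uint64_t) };
	int mount_id;

	/* on cgroupfs, the encoded file handle is the 64-bit cgroup id */
	if (name_to_handle_at(AT_FDCWD, "/sys/fs/cgroup/perf_event/mycgrp",
			      &handle.fh, &mount_id, 0) < 0)
		return 1;

	printf("cgroup id: %llu\n", (unsigned long long)handle.cgroup_id);
	return 0;
}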

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 tools/perf/util/cgroup.c | 25 +++++++++++++++++++++++++
 tools/perf/util/cgroup.h |  9 +++++++++
 2 files changed, 34 insertions(+)

diff --git a/tools/perf/util/cgroup.c b/tools/perf/util/cgroup.c
index f24ab4585553..ef18c988c681 100644
--- a/tools/perf/util/cgroup.c
+++ b/tools/perf/util/cgroup.c
@@ -45,6 +45,31 @@ static int open_cgroup(const char *name)
 	return fd;
 }
 
+#ifdef HAVE_FILE_HANDLE
+int read_cgroup_id(struct cgroup *cgrp)
+{
+	char path[PATH_MAX + 1];
+	char mnt[PATH_MAX + 1];
+	struct {
+		struct file_handle fh;
+		uint64_t cgroup_id;
+	} handle;
+	int mount_id;
+
+	if (cgroupfs_find_mountpoint(mnt, PATH_MAX + 1, "perf_event"))
+		return -1;
+
+	scnprintf(path, PATH_MAX, "%s/%s", mnt, cgrp->name);
+
+	handle.fh.handle_bytes = sizeof(handle.cgroup_id);
+	if (name_to_handle_at(AT_FDCWD, path, &handle.fh, &mount_id, 0) < 0)
+		return -1;
+
+	cgrp->id = handle.cgroup_id;
+	return 0;
+}
+#endif  /* HAVE_FILE_HANDLE */
+
 static struct cgroup *evlist__find_cgroup(struct evlist *evlist, const char *str)
 {
 	struct evsel *counter;
diff --git a/tools/perf/util/cgroup.h b/tools/perf/util/cgroup.h
index 162906f3412a..707adbe25123 100644
--- a/tools/perf/util/cgroup.h
+++ b/tools/perf/util/cgroup.h
@@ -38,4 +38,13 @@ struct cgroup *cgroup__find(struct perf_env *env, uint64_t id);
 
 void perf_env__purge_cgroups(struct perf_env *env);
 
+#ifdef HAVE_FILE_HANDLE
+int read_cgroup_id(struct cgroup *cgrp);
+#else
+static inline int read_cgroup_id(struct cgroup *cgrp __maybe_unused)
+{
+	return -1;
+}
+#endif  /* HAVE_FILE_HANDLE */
+
 #endif /* __CGROUP_H__ */
-- 
2.32.0.93.g670b81a890-goog



* [PATCH 2/4] perf tools: Add cgroup_is_v2() helper
  2021-06-25  7:18 [PATCHSET v4 0/4] perf stat: Enable BPF counters with --for-each-cgroup Namhyung Kim
  2021-06-25  7:18 ` [PATCH 1/4] perf tools: Add read_cgroup_id() function Namhyung Kim
@ 2021-06-25  7:18 ` Namhyung Kim
  2021-06-29 15:51   ` Ian Rogers
  2021-06-25  7:18 ` [PATCH 3/4] perf tools: Move common bpf functions to bpf_counter.h Namhyung Kim
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 21+ messages in thread
From: Namhyung Kim @ 2021-06-25  7:18 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Jiri Olsa
  Cc: Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen, Ian Rogers,
	Stephane Eranian, Song Liu

The cgroup_is_v2() helper checks whether the given subsystem is
mounted on cgroup v2 or not.  It'll be used by the BPF cgroup code
later.
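
In this series it ends up being used like this (see the last patch):

	if (cgroup_is_v2("perf_event") > 0)
		skel->bss->use_cgroup_v2 = 1;

Note that it returns 1 for v2, 0 for v1 and -1 on error (mountpoint
not found or statfs() failure), so a plain boolean check would treat
the error case as true.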

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 tools/perf/util/cgroup.c | 19 +++++++++++++++++++
 tools/perf/util/cgroup.h |  2 ++
 2 files changed, 21 insertions(+)

diff --git a/tools/perf/util/cgroup.c b/tools/perf/util/cgroup.c
index ef18c988c681..e819a4f30fc2 100644
--- a/tools/perf/util/cgroup.c
+++ b/tools/perf/util/cgroup.c
@@ -9,6 +9,7 @@
 #include <linux/zalloc.h>
 #include <sys/types.h>
 #include <sys/stat.h>
+#include <sys/statfs.h>
 #include <fcntl.h>
 #include <stdlib.h>
 #include <string.h>
@@ -70,6 +71,24 @@ int read_cgroup_id(struct cgroup *cgrp)
 }
 #endif  /* HAVE_FILE_HANDLE */
 
+#ifndef CGROUP2_SUPER_MAGIC
+#define CGROUP2_SUPER_MAGIC  0x63677270
+#endif
+
+int cgroup_is_v2(const char *subsys)
+{
+	char mnt[PATH_MAX + 1];
+	struct statfs stbuf;
+
+	if (cgroupfs_find_mountpoint(mnt, PATH_MAX + 1, subsys))
+		return -1;
+
+	if (statfs(mnt, &stbuf) < 0)
+		return -1;
+
+	return (stbuf.f_type == CGROUP2_SUPER_MAGIC);
+}
+
 static struct cgroup *evlist__find_cgroup(struct evlist *evlist, const char *str)
 {
 	struct evsel *counter;
diff --git a/tools/perf/util/cgroup.h b/tools/perf/util/cgroup.h
index 707adbe25123..1549ec2fd348 100644
--- a/tools/perf/util/cgroup.h
+++ b/tools/perf/util/cgroup.h
@@ -47,4 +47,6 @@ static inline int read_cgroup_id(struct cgroup *cgrp __maybe_unused)
 }
 #endif  /* HAVE_FILE_HANDLE */
 
+int cgroup_is_v2(const char *subsys);
+
 #endif /* __CGROUP_H__ */
-- 
2.32.0.93.g670b81a890-goog



* [PATCH 3/4] perf tools: Move common bpf functions to bpf_counter.h
  2021-06-25  7:18 [PATCHSET v4 0/4] perf stat: Enable BPF counters with --for-each-cgroup Namhyung Kim
  2021-06-25  7:18 ` [PATCH 1/4] perf tools: Add read_cgroup_id() function Namhyung Kim
  2021-06-25  7:18 ` [PATCH 2/4] perf tools: Add cgroup_is_v2() helper Namhyung Kim
@ 2021-06-25  7:18 ` Namhyung Kim
  2021-06-30 18:28   ` Song Liu
  2021-07-01 19:09   ` Arnaldo Carvalho de Melo
  2021-06-25  7:18 ` [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup Namhyung Kim
  2021-06-27 15:29 ` [PATCHSET v4 0/4] perf stat: Enable BPF counters " Namhyung Kim
  4 siblings, 2 replies; 21+ messages in thread
From: Namhyung Kim @ 2021-06-25  7:18 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Jiri Olsa
  Cc: Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen, Ian Rogers,
	Stephane Eranian, Song Liu

Some helper functions will be used for cgroup counting too.
Move them to a header file for sharing.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 tools/perf/util/bpf_counter.c | 52 -----------------------------------
 tools/perf/util/bpf_counter.h | 52 +++++++++++++++++++++++++++++++++++
 2 files changed, 52 insertions(+), 52 deletions(-)

diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
index 974f10e356f0..1af81e882eb6 100644
--- a/tools/perf/util/bpf_counter.c
+++ b/tools/perf/util/bpf_counter.c
@@ -7,12 +7,8 @@
 #include <unistd.h>
 #include <sys/file.h>
 #include <sys/time.h>
-#include <sys/resource.h>
 #include <linux/err.h>
 #include <linux/zalloc.h>
-#include <bpf/bpf.h>
-#include <bpf/btf.h>
-#include <bpf/libbpf.h>
 #include <api/fs/fs.h>
 #include <perf/bpf_perf.h>
 
@@ -37,13 +33,6 @@ static inline void *u64_to_ptr(__u64 ptr)
 	return (void *)(unsigned long)ptr;
 }
 
-static void set_max_rlimit(void)
-{
-	struct rlimit rinf = { RLIM_INFINITY, RLIM_INFINITY };
-
-	setrlimit(RLIMIT_MEMLOCK, &rinf);
-}
-
 static struct bpf_counter *bpf_counter_alloc(void)
 {
 	struct bpf_counter *counter;
@@ -297,33 +286,6 @@ struct bpf_counter_ops bpf_program_profiler_ops = {
 	.install_pe = bpf_program_profiler__install_pe,
 };
 
-static __u32 bpf_link_get_id(int fd)
-{
-	struct bpf_link_info link_info = {0};
-	__u32 link_info_len = sizeof(link_info);
-
-	bpf_obj_get_info_by_fd(fd, &link_info, &link_info_len);
-	return link_info.id;
-}
-
-static __u32 bpf_link_get_prog_id(int fd)
-{
-	struct bpf_link_info link_info = {0};
-	__u32 link_info_len = sizeof(link_info);
-
-	bpf_obj_get_info_by_fd(fd, &link_info, &link_info_len);
-	return link_info.prog_id;
-}
-
-static __u32 bpf_map_get_id(int fd)
-{
-	struct bpf_map_info map_info = {0};
-	__u32 map_info_len = sizeof(map_info);
-
-	bpf_obj_get_info_by_fd(fd, &map_info, &map_info_len);
-	return map_info.id;
-}
-
 static bool bperf_attr_map_compatible(int attr_map_fd)
 {
 	struct bpf_map_info map_info = {0};
@@ -385,20 +347,6 @@ static int bperf_lock_attr_map(struct target *target)
 	return map_fd;
 }
 
-/* trigger the leader program on a cpu */
-static int bperf_trigger_reading(int prog_fd, int cpu)
-{
-	DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
-			    .ctx_in = NULL,
-			    .ctx_size_in = 0,
-			    .flags = BPF_F_TEST_RUN_ON_CPU,
-			    .cpu = cpu,
-			    .retval = 0,
-		);
-
-	return bpf_prog_test_run_opts(prog_fd, &opts);
-}
-
 static int bperf_check_target(struct evsel *evsel,
 			      struct target *target,
 			      enum bperf_filter_type *filter_type,
diff --git a/tools/perf/util/bpf_counter.h b/tools/perf/util/bpf_counter.h
index d6d907c3dcf9..185555a9c1db 100644
--- a/tools/perf/util/bpf_counter.h
+++ b/tools/perf/util/bpf_counter.h
@@ -3,6 +3,10 @@
 #define __PERF_BPF_COUNTER_H 1
 
 #include <linux/list.h>
+#include <sys/resource.h>
+#include <bpf/bpf.h>
+#include <bpf/btf.h>
+#include <bpf/libbpf.h>
 
 struct evsel;
 struct target;
@@ -76,4 +80,52 @@ static inline int bpf_counter__install_pe(struct evsel *evsel __maybe_unused,
 
 #endif /* HAVE_BPF_SKEL */
 
+static inline void set_max_rlimit(void)
+{
+	struct rlimit rinf = { RLIM_INFINITY, RLIM_INFINITY };
+
+	setrlimit(RLIMIT_MEMLOCK, &rinf);
+}
+
+static inline __u32 bpf_link_get_id(int fd)
+{
+	struct bpf_link_info link_info = {0};
+	__u32 link_info_len = sizeof(link_info);
+
+	bpf_obj_get_info_by_fd(fd, &link_info, &link_info_len);
+	return link_info.id;
+}
+
+static inline __u32 bpf_link_get_prog_id(int fd)
+{
+	struct bpf_link_info link_info = {0};
+	__u32 link_info_len = sizeof(link_info);
+
+	bpf_obj_get_info_by_fd(fd, &link_info, &link_info_len);
+	return link_info.prog_id;
+}
+
+static inline __u32 bpf_map_get_id(int fd)
+{
+	struct bpf_map_info map_info = {0};
+	__u32 map_info_len = sizeof(map_info);
+
+	bpf_obj_get_info_by_fd(fd, &map_info, &map_info_len);
+	return map_info.id;
+}
+
+/* trigger the leader program on a cpu */
+static inline int bperf_trigger_reading(int prog_fd, int cpu)
+{
+	DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
+			    .ctx_in = NULL,
+			    .ctx_size_in = 0,
+			    .flags = BPF_F_TEST_RUN_ON_CPU,
+			    .cpu = cpu,
+			    .retval = 0,
+		);
+
+	return bpf_prog_test_run_opts(prog_fd, &opts);
+}
+
 #endif /* __PERF_BPF_COUNTER_H */
-- 
2.32.0.93.g670b81a890-goog



* [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup
  2021-06-25  7:18 [PATCHSET v4 0/4] perf stat: Enable BPF counters with --for-each-cgroup Namhyung Kim
                   ` (2 preceding siblings ...)
  2021-06-25  7:18 ` [PATCH 3/4] perf tools: Move common bpf functions to bpf_counter.h Namhyung Kim
@ 2021-06-25  7:18 ` Namhyung Kim
  2021-06-30 18:47   ` Song Liu
  2021-06-30 18:50   ` Arnaldo Carvalho de Melo
  2021-06-27 15:29 ` [PATCHSET v4 0/4] perf stat: Enable BPF counters " Namhyung Kim
  4 siblings, 2 replies; 21+ messages in thread
From: Namhyung Kim @ 2021-06-25  7:18 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Jiri Olsa
  Cc: Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen, Ian Rogers,
	Stephane Eranian, Song Liu

Recently bperf was added to use BPF to count perf events for various
purposes.  This is an extension of that approach, targeting cgroup
usage.

Unlike the other bperf modes, it doesn't share the events with other
processes, but it avoids creating unnecessary events (and the
overhead of multiplexing) for each monitored cgroup within the perf
session.

When --for-each-cgroup is used with --bpf-counters, it will open a
cgroup-switches event per cpu internally and attach the new BPF
program to read the given perf_events and to aggregate the results
for cgroups.  The program is only invoked when a task is switched to
a task in a different cgroup.
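
For reference, the flat indexing the BPF program relies on (derived
from the code below): the 'events' perf-event array is keyed by
"event_idx * num_cpus + cpu", and the per-cpu 'cgrp_readings' array
by "cgroup_idx * num_events + event_idx".  For example, with 2 events
and 3 cgroups, the reading of event 0 for cgroup 1 lives in slot
1 * 2 + 0 = 2.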

Cc: Song Liu <songliubraving@fb.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 tools/perf/Makefile.perf                    |  17 +-
 tools/perf/util/Build                       |   1 +
 tools/perf/util/bpf_counter.c               |   5 +
 tools/perf/util/bpf_counter_cgroup.c        | 299 ++++++++++++++++++++
 tools/perf/util/bpf_skel/bperf_cgroup.bpf.c | 191 +++++++++++++
 tools/perf/util/cgroup.c                    |   2 +
 tools/perf/util/cgroup.h                    |   1 +
 7 files changed, 515 insertions(+), 1 deletion(-)
 create mode 100644 tools/perf/util/bpf_counter_cgroup.c
 create mode 100644 tools/perf/util/bpf_skel/bperf_cgroup.bpf.c

diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index e47f04e5b51e..b03a803d466d 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -1015,6 +1015,7 @@ SKEL_OUT := $(abspath $(OUTPUT)util/bpf_skel)
 SKEL_TMP_OUT := $(abspath $(SKEL_OUT)/.tmp)
 SKELETONS := $(SKEL_OUT)/bpf_prog_profiler.skel.h
 SKELETONS += $(SKEL_OUT)/bperf_leader.skel.h $(SKEL_OUT)/bperf_follower.skel.h
+SKELETONS += $(SKEL_OUT)/bperf_cgroup.skel.h
 
 ifdef BUILD_BPF_SKEL
 BPFTOOL := $(SKEL_TMP_OUT)/bootstrap/bpftool
@@ -1032,7 +1033,21 @@ $(SKEL_TMP_OUT)/%.bpf.o: util/bpf_skel/%.bpf.c $(LIBBPF) | $(SKEL_TMP_OUT)
 	$(QUIET_CLANG)$(CLANG) -g -O2 -target bpf -Wall -Werror $(BPF_INCLUDE) \
 	  -c $(filter util/bpf_skel/%.bpf.c,$^) -o $@ && $(LLVM_STRIP) -g $@
 
-$(SKEL_OUT)/%.skel.h: $(SKEL_TMP_OUT)/%.bpf.o | $(BPFTOOL)
+VMLINUX_BTF_PATHS ?= $(if $(O),$(O)/vmlinux)				\
+		     $(if $(KBUILD_OUTPUT),$(KBUILD_OUTPUT)/vmlinux)	\
+		     ../../vmlinux					\
+		     /sys/kernel/btf/vmlinux				\
+		     /boot/vmlinux-$(shell uname -r)
+VMLINUX_BTF ?= $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS))))
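+# the first existing candidate wins; set VMLINUX_H to reuse a
+# pre-generated header instead of running bpftool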
+
+$(SKEL_OUT)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL)
+ifeq ($(VMLINUX_H),)
+	$(QUIET_GEN)$(BPFTOOL) btf dump file $< format c > $@
+else
+	$(Q)cp "$(VMLINUX_H)" $@
+endif
+
+$(SKEL_OUT)/%.skel.h: $(SKEL_TMP_OUT)/%.bpf.o $(SKEL_OUT)/vmlinux.h | $(BPFTOOL)
 	$(QUIET_GENSKEL)$(BPFTOOL) gen skeleton $< > $@
 
 bpf-skel: $(SKELETONS)
diff --git a/tools/perf/util/Build b/tools/perf/util/Build
index 95e15d1035ab..700d635448ff 100644
--- a/tools/perf/util/Build
+++ b/tools/perf/util/Build
@@ -140,6 +140,7 @@ perf-y += clockid.o
 perf-$(CONFIG_LIBBPF) += bpf-loader.o
 perf-$(CONFIG_LIBBPF) += bpf_map.o
 perf-$(CONFIG_PERF_BPF_SKEL) += bpf_counter.o
+perf-$(CONFIG_PERF_BPF_SKEL) += bpf_counter_cgroup.o
 perf-$(CONFIG_BPF_PROLOGUE) += bpf-prologue.o
 perf-$(CONFIG_LIBELF) += symbol-elf.o
 perf-$(CONFIG_LIBELF) += probe-file.o
diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
index 1af81e882eb6..79c19cb8bf2d 100644
--- a/tools/perf/util/bpf_counter.c
+++ b/tools/perf/util/bpf_counter.c
@@ -18,6 +18,7 @@
 #include "evsel.h"
 #include "evlist.h"
 #include "target.h"
+#include "cgroup.h"
 #include "cpumap.h"
 #include "thread_map.h"
 
@@ -740,6 +741,8 @@ struct bpf_counter_ops bperf_ops = {
 	.destroy    = bperf__destroy,
 };
 
+extern struct bpf_counter_ops bperf_cgrp_ops;
+
 static inline bool bpf_counter_skip(struct evsel *evsel)
 {
 	return list_empty(&evsel->bpf_counter_list) &&
@@ -757,6 +760,8 @@ int bpf_counter__load(struct evsel *evsel, struct target *target)
 {
 	if (target->bpf_str)
 		evsel->bpf_counter_ops = &bpf_program_profiler_ops;
+	else if (cgrp_event_expanded && target->use_bpf)
+		evsel->bpf_counter_ops = &bperf_cgrp_ops;
 	else if (target->use_bpf || evsel->bpf_counter ||
 		 evsel__match_bpf_counter_events(evsel->name))
 		evsel->bpf_counter_ops = &bperf_ops;
diff --git a/tools/perf/util/bpf_counter_cgroup.c b/tools/perf/util/bpf_counter_cgroup.c
new file mode 100644
index 000000000000..327f97a23a84
--- /dev/null
+++ b/tools/perf/util/bpf_counter_cgroup.c
@@ -0,0 +1,299 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2019 Facebook */
+/* Copyright (c) 2021 Google */
+
+#include <assert.h>
+#include <limits.h>
+#include <unistd.h>
+#include <sys/file.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <linux/err.h>
+#include <linux/zalloc.h>
+#include <linux/perf_event.h>
+#include <api/fs/fs.h>
+#include <perf/bpf_perf.h>
+
+#include "affinity.h"
+#include "bpf_counter.h"
+#include "cgroup.h"
+#include "counts.h"
+#include "debug.h"
+#include "evsel.h"
+#include "evlist.h"
+#include "target.h"
+#include "cpumap.h"
+#include "thread_map.h"
+
+#include "bpf_skel/bperf_cgroup.skel.h"
+
+static struct perf_event_attr cgrp_switch_attr = {
+	.type = PERF_TYPE_SOFTWARE,
+	.config = PERF_COUNT_SW_CGROUP_SWITCHES,
+	.size = sizeof(cgrp_switch_attr),
+	.sample_period = 1,
+	.disabled = 1,
+};
+
+static struct evsel *cgrp_switch;
+static struct bperf_cgroup_bpf *skel;
+
+#define FD(evt, cpu) (*(int *)xyarray__entry(evt->core.fd, cpu, 0))
+
+static int bperf_load_program(struct evlist *evlist)
+{
+	struct bpf_link *link;
+	struct evsel *evsel;
+	struct cgroup *cgrp, *leader_cgrp;
+	__u32 i, cpu;
+	int nr_cpus = evlist->core.all_cpus->nr;
+	int total_cpus = cpu__max_cpu();
+	int map_size, map_fd;
+	int prog_fd, err;
+
+	skel = bperf_cgroup_bpf__open();
+	if (!skel) {
+		pr_err("Failed to open cgroup skeleton\n");
+		return -1;
+	}
+
+	skel->rodata->num_cpus = total_cpus;
+	skel->rodata->num_events = evlist->core.nr_entries / nr_cgroups;
+
+	BUG_ON(evlist->core.nr_entries % nr_cgroups != 0);
+
+	/* we need one copy of events per cpu for reading */
+	map_size = total_cpus * evlist->core.nr_entries / nr_cgroups;
+	bpf_map__resize(skel->maps.events, map_size);
+	bpf_map__resize(skel->maps.cgrp_idx, nr_cgroups);
+	/* previous result is saved in a per-cpu array */
+	map_size = evlist->core.nr_entries / nr_cgroups;
+	bpf_map__resize(skel->maps.prev_readings, map_size);
+	/* cgroup result needs all events (per-cpu) */
+	map_size = evlist->core.nr_entries;
+	bpf_map__resize(skel->maps.cgrp_readings, map_size);
+
+	set_max_rlimit();
+
+	err = bperf_cgroup_bpf__load(skel);
+	if (err) {
+		pr_err("Failed to load cgroup skeleton\n");
+		goto out;
+	}
+
+	if (cgroup_is_v2("perf_event") > 0)
+		skel->bss->use_cgroup_v2 = 1;
+
+	err = -1;
+
+	cgrp_switch = evsel__new(&cgrp_switch_attr);
+	if (evsel__open_per_cpu(cgrp_switch, evlist->core.all_cpus, -1) < 0) {
+		pr_err("Failed to open cgroup switches event\n");
+		goto out;
+	}
+
+	for (i = 0; i < nr_cpus; i++) {
+		link = bpf_program__attach_perf_event(skel->progs.on_cgrp_switch,
+						      FD(cgrp_switch, i));
+		if (IS_ERR(link)) {
+			pr_err("Failed to attach cgroup program\n");
+			err = PTR_ERR(link);
+			goto out;
+		}
+	}
+
+	/*
+	 * Update cgrp_idx map from cgroup-id to event index.
+	 */
+	cgrp = NULL;
+	i = 0;
+
+	evlist__for_each_entry(evlist, evsel) {
+		if (cgrp == NULL || evsel->cgrp == leader_cgrp) {
+			leader_cgrp = evsel->cgrp;
+			evsel->cgrp = NULL;
+
+			/* open single copy of the events w/o cgroup */
+			err = evsel__open_per_cpu(evsel, evlist->core.all_cpus, -1);
+			if (err) {
+				pr_err("Failed to open first cgroup events\n");
+				goto out;
+			}
+
+			map_fd = bpf_map__fd(skel->maps.events);
+			for (cpu = 0; cpu < nr_cpus; cpu++) {
+				int fd = FD(evsel, cpu);
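+				/* flat index: evsel idx * max cpus + cpu */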
+				__u32 idx = evsel->idx * total_cpus +
+					evlist->core.all_cpus->map[cpu];
+
+				err = bpf_map_update_elem(map_fd, &idx, &fd,
+							  BPF_ANY);
+				if (err < 0) {
+					pr_err("Failed to update perf_event fd\n");
+					goto out;
+				}
+			}
+
+			evsel->cgrp = leader_cgrp;
+		}
+		evsel->supported = true;
+
+		if (evsel->cgrp == cgrp)
+			continue;
+
+		cgrp = evsel->cgrp;
+
+		if (read_cgroup_id(cgrp) < 0) {
+			pr_err("Failed to get cgroup id\n");
+			err = -1;
+			goto out;
+		}
+
+		map_fd = bpf_map__fd(skel->maps.cgrp_idx);
+		err = bpf_map_update_elem(map_fd, &cgrp->id, &i, BPF_ANY);
+		if (err < 0) {
+			pr_err("Failed to update cgroup index map\n");
+			goto out;
+		}
+
+		i++;
+	}
+
+	/*
+	 * bperf uses BPF_PROG_TEST_RUN to get accurate readings.  Check
+	 * whether the kernel supports it.
+	 */
+	prog_fd = bpf_program__fd(skel->progs.trigger_read);
+	err = bperf_trigger_reading(prog_fd, 0);
+	if (err) {
+		pr_debug("The kernel does not support test_run for raw_tp BPF programs.\n"
+			 "Therefore, --for-each-cgroup might show inaccurate readings\n");
+	}
+
+out:
+	return err;
+}
+
+static int bperf_cgrp__load(struct evsel *evsel, struct target *target)
+{
+	static bool bperf_loaded = false;
+
+	evsel->bperf_leader_prog_fd = -1;
+	evsel->bperf_leader_link_fd = -1;
+
+	if (!bperf_loaded && bperf_load_program(evsel->evlist))
+		return -1;
+
+	bperf_loaded = true;
+	/* just to bypass bpf_counter_skip() */
+	evsel->follower_skel = (struct bperf_follower_bpf *)skel;
+
+	return 0;
+}
+
+static int bperf_cgrp__install_pe(struct evsel *evsel, int cpu, int fd)
+{
+	/* nothing to do */
+	return 0;
+}
+
+/*
+ * trigger the leader prog on each cpu, so the cgrp_readings map could get
+ * the latest results.
+ */
+static int bperf_cgrp__sync_counters(struct evlist *evlist)
+{
+	int i, cpu;
+	int nr_cpus = evlist->core.all_cpus->nr;
+	int prog_fd = bpf_program__fd(skel->progs.trigger_read);
+
+	for (i = 0; i < nr_cpus; i++) {
+		cpu = evlist->core.all_cpus->map[i];
+		bperf_trigger_reading(prog_fd, cpu);
+	}
+
+	return 0;
+}
+
+static int bperf_cgrp__enable(struct evsel *evsel)
+{
+	skel->bss->enabled = 1;
+	return 0;
+}
+
+static int bperf_cgrp__disable(struct evsel *evsel)
+{
+	if (evsel->idx)
+		return 0;
+
+	bperf_cgrp__sync_counters(evsel->evlist);
+
+	skel->bss->enabled = 0;
+	return 0;
+}
+
+static int bperf_cgrp__read(struct evsel *evsel)
+{
+	struct evlist *evlist = evsel->evlist;
+	int i, cpu, nr_cpus = evlist->core.all_cpus->nr;
+	int total_cpus = cpu__max_cpu();
+	struct perf_counts_values *counts;
+	struct bpf_perf_event_value *values;
+	int reading_map_fd, err = 0;
+	__u32 idx;
+
+	if (evsel->idx)
+		return 0;
+
+	bperf_cgrp__sync_counters(evsel->evlist);
+
+	values = calloc(total_cpus, sizeof(*values));
+	if (values == NULL)
+		return -ENOMEM;
+
+	reading_map_fd = bpf_map__fd(skel->maps.cgrp_readings);
+
+	evlist__for_each_entry(evlist, evsel) {
+		idx = evsel->idx;
+		err = bpf_map_lookup_elem(reading_map_fd, &idx, values);
+		if (err) {
+			pr_err("bpf map lookup falied: idx=%u, event=%s, cgrp=%s\n",
+			       idx, evsel__name(evsel), evsel->cgrp->name);
+			goto out;
+		}
+
+		for (i = 0; i < nr_cpus; i++) {
+			cpu = evlist->core.all_cpus->map[i];
+
+			counts = perf_counts(evsel->counts, i, 0);
+			counts->val = values[cpu].counter;
+			counts->ena = values[cpu].enabled;
+			counts->run = values[cpu].running;
+		}
+	}
+
+out:
+	free(values);
+	return err;
+}
+
+static int bperf_cgrp__destroy(struct evsel *evsel)
+{
+	if (evsel->idx)
+		return 0;
+
+	bperf_cgroup_bpf__destroy(skel);
+	evsel__delete(cgrp_switch);  // it'll destroy on_switch progs too
+
+	return 0;
+}
+
+struct bpf_counter_ops bperf_cgrp_ops = {
+	.load       = bperf_cgrp__load,
+	.enable     = bperf_cgrp__enable,
+	.disable    = bperf_cgrp__disable,
+	.read       = bperf_cgrp__read,
+	.install_pe = bperf_cgrp__install_pe,
+	.destroy    = bperf_cgrp__destroy,
+};
diff --git a/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
new file mode 100644
index 000000000000..292c430768b5
--- /dev/null
+++ b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
@@ -0,0 +1,191 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+// Copyright (c) 2021 Facebook
+// Copyright (c) 2021 Google
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+#define MAX_LEVELS  10  // max cgroup hierarchy level: arbitrary
+#define MAX_EVENTS  32  // max events per cgroup: arbitrary
+
+// NOTE: many of the maps and global data will be modified before loading
+//       from the userspace (perf tool) using the skeleton helpers.
+
+// single set of global perf events to measure
+struct {
+	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(int));
+	__uint(max_entries, 1);
+} events SEC(".maps");
+
+// from cgroup id to event index
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(key_size, sizeof(__u64));
+	__uint(value_size, sizeof(__u32));
+	__uint(max_entries, 1);
+} cgrp_idx SEC(".maps");
+
+// per-cpu event snapshots to calculate delta
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(struct bpf_perf_event_value));
+} prev_readings SEC(".maps");
+
+// aggregated event values for each cgroup (per-cpu)
+// will be read from the user-space
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(struct bpf_perf_event_value));
+} cgrp_readings SEC(".maps");
+
+const volatile __u32 num_events = 1;
+const volatile __u32 num_cpus = 1;
+
+int enabled = 0;
+int use_cgroup_v2 = 0;
+
+static inline int get_cgroup_v1_idx(__u32 *cgrps, int size)
+{
+	struct task_struct *p = (void *)bpf_get_current_task();
+	struct cgroup *cgrp;
+	register int i = 0;
+	__u32 *elem;
+	int level;
+	int cnt;
+
+	cgrp = BPF_CORE_READ(p, cgroups, subsys[perf_event_cgrp_id], cgroup);
+	level = BPF_CORE_READ(cgrp, level);
+
+	for (cnt = 0; i < MAX_LEVELS; i++) {
+		__u64 cgrp_id;
+
+		if (i > level)
+			break;
+
+		// convert cgroup-id to a map index
+		cgrp_id = BPF_CORE_READ(cgrp, ancestor_ids[i]);
+		elem = bpf_map_lookup_elem(&cgrp_idx, &cgrp_id);
+		if (!elem)
+			continue;
+
+		cgrps[cnt++] = *elem;
+		if (cnt == size)
+			break;
+	}
+
+	return cnt;
+}
+
+static inline int get_cgroup_v2_idx(__u32 *cgrps, int size)
+{
+	register int i = 0;
+	__u32 *elem;
+	int cnt;
+
+	for (cnt = 0; i < MAX_LEVELS; i++) {
+		__u64 cgrp_id = bpf_get_current_ancestor_cgroup_id(i);
+
+		if (cgrp_id == 0)
+			break;
+
+		// convert cgroup-id to a map index
+		elem = bpf_map_lookup_elem(&cgrp_idx, &cgrp_id);
+		if (!elem)
+			continue;
+
+		cgrps[cnt++] = *elem;
+		if (cnt == size)
+			break;
+	}
+
+	return cnt;
+}
+
+static int bperf_cgroup_count(void)
+{
+	register __u32 idx = 0;  // to have it in a register to pass BPF verifier
+	register int c = 0;
+	struct bpf_perf_event_value val, delta, *prev_val, *cgrp_val;
+	__u32 cpu = bpf_get_smp_processor_id();
+	__u32 cgrp_idx[MAX_LEVELS];
+	int cgrp_cnt;
+	__u32 key, cgrp;
+	long err;
+
+	if (use_cgroup_v2)
+		cgrp_cnt = get_cgroup_v2_idx(cgrp_idx, MAX_LEVELS);
+	else
+		cgrp_cnt = get_cgroup_v1_idx(cgrp_idx, MAX_LEVELS);
+
+	for ( ; idx < MAX_EVENTS; idx++) {
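+		// constant loop bound for the BPF verifier; the real limit
+		// (num_events) is set via the skeleton before loading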
+		if (idx == num_events)
+			break;
+
+		// XXX: do not pass idx directly (for verifier)
+		key = idx;
+		// this is per-cpu array for diff
+		prev_val = bpf_map_lookup_elem(&prev_readings, &key);
+		if (!prev_val) {
+			val.counter = val.enabled = val.running = 0;
+			bpf_map_update_elem(&prev_readings, &key, &val, BPF_ANY);
+
+			prev_val = bpf_map_lookup_elem(&prev_readings, &key);
+			if (!prev_val)
+				continue;
+		}
+
+		// read from global perf_event array
+		key = idx * num_cpus + cpu;
+		err = bpf_perf_event_read_value(&events, key, &val, sizeof(val));
+		if (err)
+			continue;
+
+		if (enabled) {
+			delta.counter = val.counter - prev_val->counter;
+			delta.enabled = val.enabled - prev_val->enabled;
+			delta.running = val.running - prev_val->running;
+
+			for (c = 0; c < MAX_LEVELS; c++) {
+				if (c == cgrp_cnt)
+					break;
+
+				cgrp = cgrp_idx[c];
+
+				// aggregate the result by cgroup
+				key = cgrp * num_events + idx;
+				cgrp_val = bpf_map_lookup_elem(&cgrp_readings, &key);
+				if (cgrp_val) {
+					cgrp_val->counter += delta.counter;
+					cgrp_val->enabled += delta.enabled;
+					cgrp_val->running += delta.running;
+				} else {
+					bpf_map_update_elem(&cgrp_readings, &key,
+							    &delta, BPF_ANY);
+				}
+			}
+		}
+
+		*prev_val = val;
+	}
+	return 0;
+}
+
+// This will be attached to cgroup-switches event for each cpu
+SEC("perf_events")
+int BPF_PROG(on_cgrp_switch)
+{
+	return bperf_cgroup_count();
+}
+
+SEC("raw_tp/sched_switch")
+int BPF_PROG(trigger_read)
+{
+	return bperf_cgroup_count();
+}
+
+char LICENSE[] SEC("license") = "Dual BSD/GPL";
diff --git a/tools/perf/util/cgroup.c b/tools/perf/util/cgroup.c
index e819a4f30fc2..851531102fd6 100644
--- a/tools/perf/util/cgroup.c
+++ b/tools/perf/util/cgroup.c
@@ -18,6 +18,7 @@
 #include <regex.h>
 
 int nr_cgroups;
+bool cgrp_event_expanded;
 
 /* used to match cgroup name with patterns */
 struct cgroup_name {
@@ -484,6 +485,7 @@ int evlist__expand_cgroup(struct evlist *evlist, const char *str,
 	}
 
 	ret = 0;
+	cgrp_event_expanded = true;
 
 out_err:
 	evlist__delete(orig_list);
diff --git a/tools/perf/util/cgroup.h b/tools/perf/util/cgroup.h
index 1549ec2fd348..21f7ccc566e1 100644
--- a/tools/perf/util/cgroup.h
+++ b/tools/perf/util/cgroup.h
@@ -17,6 +17,7 @@ struct cgroup {
 };
 
 extern int nr_cgroups; /* number of explicit cgroups defined */
+extern bool cgrp_event_expanded;
 
 struct cgroup *cgroup__get(struct cgroup *cgroup);
 void cgroup__put(struct cgroup *cgroup);
-- 
2.32.0.93.g670b81a890-goog



* Re: [PATCHSET v4 0/4] perf stat: Enable BPF counters with --for-each-cgroup
  2021-06-25  7:18 [PATCHSET v4 0/4] perf stat: Enable BPF counters with --for-each-cgroup Namhyung Kim
                   ` (3 preceding siblings ...)
  2021-06-25  7:18 ` [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup Namhyung Kim
@ 2021-06-27 15:29 ` Namhyung Kim
  2021-06-30  6:19   ` Namhyung Kim
  4 siblings, 1 reply; 21+ messages in thread
From: Namhyung Kim @ 2021-06-27 15:29 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Jiri Olsa
  Cc: Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen, Ian Rogers,
	Stephane Eranian, Song Liu

On Fri, Jun 25, 2021 at 12:18 AM Namhyung Kim <namhyung@kernel.org> wrote:
>
> Hello,
>
> This adds BPF support for --for-each-cgroup to handle many cgroup
> events on big machines.  You can use the --bpf-counters option to
> enable the new behavior.
>
>  * changes in v4
>   - convert cgrp_readings to a per-cpu array map
>   - remove now-unused cpu_idx map
>   - move common functions to a header file
>   - reuse bpftool bootstrap binary
>   - fix build error in the cgroup code
>
>  * changes in v3
>   - support cgroup hierarchy with ancestor ids
>   - add and trigger raw_tp BPF program
>   - add a build rule for vmlinux.h
>
>  * changes in v2
>   - remove incorrect use of BPF_F_PRESERVE_ELEMS
>   - add missing map elements after lookup
>   - handle cgroup v1
>
> The basic idea is to use a single set of per-cpu events to count the
> events of interest and aggregate them per cgroup.  I used the bperf
> mechanism to run a BPF program on cgroup-switches and save the
> results in a matching map element for the given cgroups.
>
> Without this, we need separate events for each cgroup, which creates
> unnecessary multiplexing overhead (and PMU reprogramming) when tasks
> in different cgroups are switched.  I saw this make a big difference
> on 256-cpu machines with hundreds of cgroups.
>
> Actually this is what I wanted to do in the kernel [1], but we can
> do the job using BPF!

Ugh, I found the current kernel bpf verifier doesn't accept the
bpf_get_current_ancestor_cgroup_id() helper.  Will send the fix
to BPF folks.

Thanks,
Namhyung


* Re: [PATCH 2/4] perf tools: Add cgroup_is_v2() helper
  2021-06-25  7:18 ` [PATCH 2/4] perf tools: Add cgroup_is_v2() helper Namhyung Kim
@ 2021-06-29 15:51   ` Ian Rogers
  2021-06-30  6:35     ` Namhyung Kim
  0 siblings, 1 reply; 21+ messages in thread
From: Ian Rogers @ 2021-06-29 15:51 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Arnaldo Carvalho de Melo, Jiri Olsa, Ingo Molnar, Peter Zijlstra,
	LKML, Andi Kleen, Stephane Eranian, Song Liu

On Fri, Jun 25, 2021 at 12:18 AM Namhyung Kim <namhyung@kernel.org> wrote:
>
> The cgroup_is_v2() helper checks whether the given subsystem is
> mounted on cgroup v2 or not.  It'll be used by the BPF cgroup code
> later.
>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
>  tools/perf/util/cgroup.c | 19 +++++++++++++++++++
>  tools/perf/util/cgroup.h |  2 ++
>  2 files changed, 21 insertions(+)
>
> diff --git a/tools/perf/util/cgroup.c b/tools/perf/util/cgroup.c
> index ef18c988c681..e819a4f30fc2 100644
> --- a/tools/perf/util/cgroup.c
> +++ b/tools/perf/util/cgroup.c
> @@ -9,6 +9,7 @@
>  #include <linux/zalloc.h>
>  #include <sys/types.h>
>  #include <sys/stat.h>
> +#include <sys/statfs.h>
>  #include <fcntl.h>
>  #include <stdlib.h>
>  #include <string.h>
> @@ -70,6 +71,24 @@ int read_cgroup_id(struct cgroup *cgrp)
>  }
>  #endif  /* HAVE_FILE_HANDLE */
>
> +#ifndef CGROUP2_SUPER_MAGIC
> +#define CGROUP2_SUPER_MAGIC  0x63677270
> +#endif
> +
> +int cgroup_is_v2(const char *subsys)
> +{
> +       char mnt[PATH_MAX + 1];
> +       struct statfs stbuf;
> +
> +       if (cgroupfs_find_mountpoint(mnt, PATH_MAX + 1, subsys))
> +               return -1;
> +
> +       if (statfs(mnt, &stbuf) < 0)
> +               return -1;
> +
> +       return (stbuf.f_type == CGROUP2_SUPER_MAGIC);
> +}
> +
>  static struct cgroup *evlist__find_cgroup(struct evlist *evlist, const char *str)
>  {
>         struct evsel *counter;
> diff --git a/tools/perf/util/cgroup.h b/tools/perf/util/cgroup.h
> index 707adbe25123..1549ec2fd348 100644
> --- a/tools/perf/util/cgroup.h
> +++ b/tools/perf/util/cgroup.h
> @@ -47,4 +47,6 @@ static inline int read_cgroup_id(struct cgroup *cgrp __maybe_unused)
>  }
>  #endif  /* HAVE_FILE_HANDLE */
>
> +int cgroup_is_v2(const char *subsys);
> +

I think this is okay. It may make sense to have this in
tools/lib/api/fs/fs.h, for example fs__valid_mount is already checking
magic numbers. Perhaps we can avoid a statfs call, but it'd need some
reorganization of the fs.h code.

Acked-by: Ian Rogers <irogers@google.com>

Thanks,
Ian

>  #endif /* __CGROUP_H__ */
> --
> 2.32.0.93.g670b81a890-goog
>


* Re: [PATCHSET v4 0/4] perf stat: Enable BPF counters with --for-each-cgroup
  2021-06-27 15:29 ` [PATCHSET v4 0/4] perf stat: Enable BPF counters " Namhyung Kim
@ 2021-06-30  6:19   ` Namhyung Kim
  0 siblings, 0 replies; 21+ messages in thread
From: Namhyung Kim @ 2021-06-30  6:19 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Jiri Olsa
  Cc: Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen, Ian Rogers,
	Stephane Eranian, Song Liu

On Sun, Jun 27, 2021 at 8:29 AM Namhyung Kim <namhyung@kernel.org> wrote:
>
> On Fri, Jun 25, 2021 at 12:18 AM Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > Hello,
> >
> > This adds BPF support for --for-each-cgroup to handle many cgroup
> > events on big machines.  You can use the --bpf-counters option to
> > enable the new behavior.
> >
> >  * changes in v4
> >   - convert cgrp_readings to a per-cpu array map
> >   - remove now-unused cpu_idx map
> >   - move common functions to a header file
> >   - reuse bpftool bootstrap binary
> >   - fix build error in the cgroup code
> >
> >  * changes in v3
> >   - support cgroup hierarchy with ancestor ids
> >   - add and trigger raw_tp BPF program
> >   - add a build rule for vmlinux.h
> >
> >  * changes in v2
> >   - remove incorrect use of BPF_F_PRESERVE_ELEMS
> >   - add missing map elements after lookup
> >   - handle cgroup v1
> >
> > The basic idea is to use a single set of per-cpu events to count the
> > events of interest and aggregate them per cgroup.  I used the bperf
> > mechanism to run a BPF program on cgroup-switches and save the
> > results in a matching map element for the given cgroups.
> >
> > Without this, we need separate events for each cgroup, which creates
> > unnecessary multiplexing overhead (and PMU reprogramming) when tasks
> > in different cgroups are switched.  I saw this make a big difference
> > on 256-cpu machines with hundreds of cgroups.
> >
> > Actually this is what I wanted to do in the kernel [1], but we can
> > do the job using BPF!
>
> Ugh, I found the current kernel bpf verifier doesn't accept the
> bpf_get_current_ancestor_cgroup_id() helper.  Will send the fix
> to BPF folks.

The fix landed on the bpf-next tree.

Thanks,
Namhyung


* Re: [PATCH 2/4] perf tools: Add cgroup_is_v2() helper
  2021-06-29 15:51   ` Ian Rogers
@ 2021-06-30  6:35     ` Namhyung Kim
  2021-06-30 18:43       ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 21+ messages in thread
From: Namhyung Kim @ 2021-06-30  6:35 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Arnaldo Carvalho de Melo, Jiri Olsa, Ingo Molnar, Peter Zijlstra,
	LKML, Andi Kleen, Stephane Eranian, Song Liu

Hi Ian,

On Tue, Jun 29, 2021 at 8:51 AM Ian Rogers <irogers@google.com> wrote:
>
> On Fri, Jun 25, 2021 at 12:18 AM Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > The cgroup_is_v2() helper checks whether the given subsystem is
> > mounted on cgroup v2 or not.  It'll be used by the BPF cgroup code
> > later.
> >
> > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> > ---
> >  tools/perf/util/cgroup.c | 19 +++++++++++++++++++
> >  tools/perf/util/cgroup.h |  2 ++
> >  2 files changed, 21 insertions(+)
> >
> > diff --git a/tools/perf/util/cgroup.c b/tools/perf/util/cgroup.c
> > index ef18c988c681..e819a4f30fc2 100644
> > --- a/tools/perf/util/cgroup.c
> > +++ b/tools/perf/util/cgroup.c
> > @@ -9,6 +9,7 @@
> >  #include <linux/zalloc.h>
> >  #include <sys/types.h>
> >  #include <sys/stat.h>
> > +#include <sys/statfs.h>
> >  #include <fcntl.h>
> >  #include <stdlib.h>
> >  #include <string.h>
> > @@ -70,6 +71,24 @@ int read_cgroup_id(struct cgroup *cgrp)
> >  }
> >  #endif  /* HAVE_FILE_HANDLE */
> >
> > +#ifndef CGROUP2_SUPER_MAGIC
> > +#define CGROUP2_SUPER_MAGIC  0x63677270
> > +#endif
> > +
> > +int cgroup_is_v2(const char *subsys)
> > +{
> > +       char mnt[PATH_MAX + 1];
> > +       struct statfs stbuf;
> > +
> > +       if (cgroupfs_find_mountpoint(mnt, PATH_MAX + 1, subsys))
> > +               return -1;
> > +
> > +       if (statfs(mnt, &stbuf) < 0)
> > +               return -1;
> > +
> > +       return (stbuf.f_type == CGROUP2_SUPER_MAGIC);
> > +}
> > +
> >  static struct cgroup *evlist__find_cgroup(struct evlist *evlist, const char *str)
> >  {
> >         struct evsel *counter;
> > diff --git a/tools/perf/util/cgroup.h b/tools/perf/util/cgroup.h
> > index 707adbe25123..1549ec2fd348 100644
> > --- a/tools/perf/util/cgroup.h
> > +++ b/tools/perf/util/cgroup.h
> > @@ -47,4 +47,6 @@ static inline int read_cgroup_id(struct cgroup *cgrp __maybe_unused)
> >  }
> >  #endif  /* HAVE_FILE_HANDLE */
> >
> > +int cgroup_is_v2(const char *subsys);
> > +
>
> I think this is okay. It may make sense to have this in
> tools/lib/api/fs/fs.h, for example fs__valid_mount is already checking
> magic numbers. Perhaps we can avoid a statfs call, but it'd need some
> reorganization of the fs.h code.
>
> Acked-by: Ian Rogers <irogers@google.com>

Thanks for your review!

Actually I'm ok with moving it to tools/lib.  Will do it in the next spin,
if it needs one. :)

Thanks,
Namhyung


* Re: [PATCH 3/4] perf tools: Move common bpf functions to bpf_counter.h
  2021-06-25  7:18 ` [PATCH 3/4] perf tools: Move common bpf functions to bpf_counter.h Namhyung Kim
@ 2021-06-30 18:28   ` Song Liu
  2021-07-01 19:09   ` Arnaldo Carvalho de Melo
  1 sibling, 0 replies; 21+ messages in thread
From: Song Liu @ 2021-06-30 18:28 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Arnaldo Carvalho de Melo, Jiri Olsa, Ingo Molnar, Peter Zijlstra,
	LKML, Andi Kleen, Ian Rogers, Stephane Eranian



> On Jun 25, 2021, at 12:18 AM, Namhyung Kim <namhyung@kernel.org> wrote:
> 
> Some helper functions will be used for cgroup counting too.
> Move them to a header file for sharing.
> 
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>

Acked-by: Song Liu <songliubraving@fb.com>


* Re: [PATCH 2/4] perf tools: Add cgroup_is_v2() helper
  2021-06-30  6:35     ` Namhyung Kim
@ 2021-06-30 18:43       ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 21+ messages in thread
From: Arnaldo Carvalho de Melo @ 2021-06-30 18:43 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Ian Rogers, Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML,
	Andi Kleen, Stephane Eranian, Song Liu

On Tue, Jun 29, 2021 at 11:35:17PM -0700, Namhyung Kim wrote:
> Hi Ian,
> 
> On Tue, Jun 29, 2021 at 8:51 AM Ian Rogers <irogers@google.com> wrote:
> >
> > On Fri, Jun 25, 2021 at 12:18 AM Namhyung Kim <namhyung@kernel.org> wrote:
> > >
> > > The cgroup_is_v2() helper checks whether the given subsystem is
> > > mounted on cgroup v2 or not.  It'll be used by the BPF cgroup code
> > > later.
> > >
> > > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> > > ---
> > >  tools/perf/util/cgroup.c | 19 +++++++++++++++++++
> > >  tools/perf/util/cgroup.h |  2 ++
> > >  2 files changed, 21 insertions(+)
> > >
> > > diff --git a/tools/perf/util/cgroup.c b/tools/perf/util/cgroup.c
> > > index ef18c988c681..e819a4f30fc2 100644
> > > --- a/tools/perf/util/cgroup.c
> > > +++ b/tools/perf/util/cgroup.c
> > > @@ -9,6 +9,7 @@
> > >  #include <linux/zalloc.h>
> > >  #include <sys/types.h>
> > >  #include <sys/stat.h>
> > > +#include <sys/statfs.h>
> > >  #include <fcntl.h>
> > >  #include <stdlib.h>
> > >  #include <string.h>
> > > @@ -70,6 +71,24 @@ int read_cgroup_id(struct cgroup *cgrp)
> > >  }
> > >  #endif  /* HAVE_FILE_HANDLE */
> > >
> > > +#ifndef CGROUP2_SUPER_MAGIC
> > > +#define CGROUP2_SUPER_MAGIC  0x63677270
> > > +#endif
> > > +
> > > +int cgroup_is_v2(const char *subsys)
> > > +{
> > > +       char mnt[PATH_MAX + 1];
> > > +       struct statfs stbuf;
> > > +
> > > +       if (cgroupfs_find_mountpoint(mnt, PATH_MAX + 1, subsys))
> > > +               return -1;
> > > +
> > > +       if (statfs(mnt, &stbuf) < 0)
> > > +               return -1;
> > > +
> > > +       return (stbuf.f_type == CGROUP2_SUPER_MAGIC);
> > > +}
> > > +
> > >  static struct cgroup *evlist__find_cgroup(struct evlist *evlist, const char *str)
> > >  {
> > >         struct evsel *counter;
> > > diff --git a/tools/perf/util/cgroup.h b/tools/perf/util/cgroup.h
> > > index 707adbe25123..1549ec2fd348 100644
> > > --- a/tools/perf/util/cgroup.h
> > > +++ b/tools/perf/util/cgroup.h
> > > @@ -47,4 +47,6 @@ static inline int read_cgroup_id(struct cgroup *cgrp __maybe_unused)
> > >  }
> > >  #endif  /* HAVE_FILE_HANDLE */
> > >
> > > +int cgroup_is_v2(const char *subsys);
> > > +
> >
> > I think this is okay. It may make sense to have this in
> > tools/lib/api/fs/fs.h, for example fs__valid_mount is already checking
> > magic numbers. Perhaps we can avoid a statfs call, but it'd need some
> > reorganization of the fs.h code.
> >
> > Acked-by: Ian Rogers <irogers@google.com>
> 
> Thanks for your review!
> 
> Actually I'm ok with moving it to tools/lib.  Will do it in the next spin,
> if it needs one. :)

I think I'll take v4, we can improve this in followup work.

- Arnaldo


* Re: [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup
  2021-06-25  7:18 ` [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup Namhyung Kim
@ 2021-06-30 18:47   ` Song Liu
  2021-06-30 20:09     ` Namhyung Kim
  2021-06-30 18:50   ` Arnaldo Carvalho de Melo
  1 sibling, 1 reply; 21+ messages in thread
From: Song Liu @ 2021-06-30 18:47 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Arnaldo Carvalho de Melo, Jiri Olsa, Ingo Molnar, Peter Zijlstra,
	LKML, Andi Kleen, Ian Rogers, Stephane Eranian



> On Jun 25, 2021, at 12:18 AM, Namhyung Kim <namhyung@kernel.org> wrote:
> 
> Recently bperf was added to use BPF to count perf events for various
> purposes.  This is an extension of that approach, targeting cgroup
> usage.
>
> Unlike the other bperf modes, it doesn't share the events with other
> processes, but it avoids creating unnecessary events (and the
> overhead of multiplexing) for each monitored cgroup within the perf
> session.
>
> When --for-each-cgroup is used with --bpf-counters, it will open a
> cgroup-switches event per cpu internally and attach the new BPF
> program to read the given perf_events and to aggregate the results
> for cgroups.  The program is only invoked when a task is switched to
> a task in a different cgroup.
> 
> Cc: Song Liu <songliubraving@fb.com>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
> tools/perf/Makefile.perf                    |  17 +-
> tools/perf/util/Build                       |   1 +
> tools/perf/util/bpf_counter.c               |   5 +
> tools/perf/util/bpf_counter_cgroup.c        | 299 ++++++++++++++++++++
> tools/perf/util/bpf_skel/bperf_cgroup.bpf.c | 191 +++++++++++++
> tools/perf/util/cgroup.c                    |   2 +
> tools/perf/util/cgroup.h                    |   1 +
> 7 files changed, 515 insertions(+), 1 deletion(-)
> create mode 100644 tools/perf/util/bpf_counter_cgroup.c
> create mode 100644 tools/perf/util/bpf_skel/bperf_cgroup.bpf.c

[...]

> diff --git a/tools/perf/util/bpf_counter_cgroup.c b/tools/perf/util/bpf_counter_cgroup.c
> new file mode 100644
> index 000000000000..327f97a23a84
> --- /dev/null
> +++ b/tools/perf/util/bpf_counter_cgroup.c
> @@ -0,0 +1,299 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/* Copyright (c) 2019 Facebook */

I am not sure whether this ^^^ is accurate. 

> +/* Copyright (c) 2021 Google */
> +
> +#include <assert.h>
> +#include <limits.h>
> +#include <unistd.h>
> +#include <sys/file.h>
> +#include <sys/time.h>
> +#include <sys/resource.h>
> +#include <linux/err.h>
> +#include <linux/zalloc.h>
> +#include <linux/perf_event.h>
> +#include <api/fs/fs.h>
> +#include <perf/bpf_perf.h>
> +
> +#include "affinity.h"
> +#include "bpf_counter.h"
> +#include "cgroup.h"
> +#include "counts.h"
> +#include "debug.h"
> +#include "evsel.h"
> +#include "evlist.h"
> +#include "target.h"
> +#include "cpumap.h"
> +#include "thread_map.h"
> +
> +#include "bpf_skel/bperf_cgroup.skel.h"
> +
> +static struct perf_event_attr cgrp_switch_attr = {
> +	.type = PERF_TYPE_SOFTWARE,
> +	.config = PERF_COUNT_SW_CGROUP_SWITCHES,
> +	.size = sizeof(cgrp_switch_attr),
> +	.sample_period = 1,
> +	.disabled = 1,
> +};
> +
> +static struct evsel *cgrp_switch;
> +static struct bperf_cgroup_bpf *skel;
> +
> +#define FD(evt, cpu) (*(int *)xyarray__entry(evt->core.fd, cpu, 0))
> +
> +static int bperf_load_program(struct evlist *evlist)
> +{
> +	struct bpf_link *link;
> +	struct evsel *evsel;
> +	struct cgroup *cgrp, *leader_cgrp;
> +	__u32 i, cpu;
> +	int nr_cpus = evlist->core.all_cpus->nr;
> +	int total_cpus = cpu__max_cpu();
> +	int map_size, map_fd;
> +	int prog_fd, err;
> +
> +	skel = bperf_cgroup_bpf__open();
> +	if (!skel) {
> +		pr_err("Failed to open cgroup skeleton\n");
> +		return -1;
> +	}
> +
> +	skel->rodata->num_cpus = total_cpus;
> +	skel->rodata->num_events = evlist->core.nr_entries / nr_cgroups;
> +
> +	BUG_ON(evlist->core.nr_entries % nr_cgroups != 0);
> +
> +	/* we need one copy of events per cpu for reading */
> +	map_size = total_cpus * evlist->core.nr_entries / nr_cgroups;
> +	bpf_map__resize(skel->maps.events, map_size);
> +	bpf_map__resize(skel->maps.cgrp_idx, nr_cgroups);
> +	/* previous result is saved in a per-cpu array */
> +	map_size = evlist->core.nr_entries / nr_cgroups;
> +	bpf_map__resize(skel->maps.prev_readings, map_size);
> +	/* cgroup result needs all events (per-cpu) */
> +	map_size = evlist->core.nr_entries;
> +	bpf_map__resize(skel->maps.cgrp_readings, map_size);
> +
> +	set_max_rlimit();
> +
> +	err = bperf_cgroup_bpf__load(skel);
> +	if (err) {
> +		pr_err("Failed to load cgroup skeleton\n");
> +		goto out;
> +	}
> +
> +	if (cgroup_is_v2("perf_event") > 0)
> +		skel->bss->use_cgroup_v2 = 1;
> +
> +	err = -1;
> +
> +	cgrp_switch = evsel__new(&cgrp_switch_attr);
> +	if (evsel__open_per_cpu(cgrp_switch, evlist->core.all_cpus, -1) < 0) {
> +		pr_err("Failed to open cgroup switches event\n");
> +		goto out;
> +	}
> +
> +	for (i = 0; i < nr_cpus; i++) {
> +		link = bpf_program__attach_perf_event(skel->progs.on_cgrp_switch,
> +						      FD(cgrp_switch, i));
> +		if (IS_ERR(link)) {
> +			pr_err("Failed to attach cgroup program\n");
> +			err = PTR_ERR(link);
> +			goto out;
> +		}
> +	}
> +
> +	/*
> +	 * Update cgrp_idx map from cgroup-id to event index.
> +	 */
> +	cgrp = NULL;
> +	i = 0;
> +
> +	evlist__for_each_entry(evlist, evsel) {
> +		if (cgrp == NULL || evsel->cgrp == leader_cgrp) {
> +			leader_cgrp = evsel->cgrp;
> +			evsel->cgrp = NULL;
> +
> +			/* open single copy of the events w/o cgroup */
> +			err = evsel__open_per_cpu(evsel, evlist->core.all_cpus, -1);
> +			if (err) {
> +				pr_err("Failed to open first cgroup events\n");
> +				goto out;
> +			}
> +
> +			map_fd = bpf_map__fd(skel->maps.events);
> +			for (cpu = 0; cpu < nr_cpus; cpu++) {
> +				int fd = FD(evsel, cpu);
> +				__u32 idx = evsel->idx * total_cpus +
> +					evlist->core.all_cpus->map[cpu];
> +
> +				err = bpf_map_update_elem(map_fd, &idx, &fd,
> +							  BPF_ANY);
> +				if (err < 0) {
> +					pr_err("Failed to update perf_event fd\n");
> +					goto out;
> +				}
> +			}
> +
> +			evsel->cgrp = leader_cgrp;
> +		}
> +		evsel->supported = true;
> +
> +		if (evsel->cgrp == cgrp)
> +			continue;
> +
> +		cgrp = evsel->cgrp;
> +
> +		if (read_cgroup_id(cgrp) < 0) {
> +			pr_err("Failed to get cgroup id\n");
> +			err = -1;
> +			goto out;
> +		}
> +
> +		map_fd = bpf_map__fd(skel->maps.cgrp_idx);
> +		err = bpf_map_update_elem(map_fd, &cgrp->id, &i, BPF_ANY);
> +		if (err < 0) {
> +			pr_err("Failed to update cgroup index map\n");
> +			goto out;
> +		}
> +
> +		i++;
> +	}
> +
> +	/*
> > +	 * bperf uses BPF_PROG_TEST_RUN to get accurate readings.  Check
> > +	 * whether the kernel supports it.
> +	 */
> +	prog_fd = bpf_program__fd(skel->progs.trigger_read);
> +	err = bperf_trigger_reading(prog_fd, 0);
> +	if (err) {
> +		pr_debug("The kernel does not support test_run for raw_tp BPF programs.\n"
> +			 "Therefore, --for-each-cgroup might show inaccurate readings\n");

I think this should be a warning, and we should set err = 0 to continue? 

> +	}
> +
> +out:
> +	return err;
> +}
> +

[...]

> +
> +/*
> > + * trigger the leader prog on each cpu, so the cgrp_readings map could get
> + * the latest results.
> + */
> +static int bperf_cgrp__sync_counters(struct evlist *evlist)
> +{
> +	int i, cpu;
> +	int nr_cpus = evlist->core.all_cpus->nr;
> +	int prog_fd = bpf_program__fd(skel->progs.trigger_read);
> +
> +	for (i = 0; i < nr_cpus; i++) {
> +		cpu = evlist->core.all_cpus->map[i];
> +		bperf_trigger_reading(prog_fd, cpu);
> +	}
> +
> +	return 0;
> +}
> +
> +static int bperf_cgrp__enable(struct evsel *evsel)
> +{

Do we need to call bperf_cgrp__sync_counters() before setting enabled to 1? 
If we don't, we may count some numbers before setting enabled to 1, no? 

> +	skel->bss->enabled = 1;
> +	return 0;
> +}

[...]
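
I mean something like this (just a sketch):

	static int bperf_cgrp__enable(struct evsel *evsel)
	{
		if (evsel->idx)
			return 0;

		/* snapshot the counters first, so counts from the
		 * disabled period are not attributed to the cgroups */
		bperf_cgrp__sync_counters(evsel->evlist);

		skel->bss->enabled = 1;
		return 0;
	}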



* Re: [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup
  2021-06-25  7:18 ` [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup Namhyung Kim
  2021-06-30 18:47   ` Song Liu
@ 2021-06-30 18:50   ` Arnaldo Carvalho de Melo
  2021-06-30 20:12     ` Namhyung Kim
  2021-07-01 13:43     ` Arnaldo Carvalho de Melo
  1 sibling, 2 replies; 21+ messages in thread
From: Arnaldo Carvalho de Melo @ 2021-06-30 18:50 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen,
	Ian Rogers, Stephane Eranian, Song Liu

On Fri, Jun 25, 2021 at 12:18:26AM -0700, Namhyung Kim wrote:
> Recently bperf was added to use BPF to count perf events for various
> purposes.  This is an extension of that approach, targeting cgroup
> usage.
>
> Unlike the other bperf modes, it doesn't share the events with other
> processes, but it avoids creating unnecessary events (and the
> overhead of multiplexing) for each monitored cgroup within the perf
> session.
>
> When --for-each-cgroup is used with --bpf-counters, it will open a
> cgroup-switches event per cpu internally and attach the new BPF
> program to read the given perf_events and to aggregate the results
> for cgroups.  The program is only invoked when a task is switched to
> a task in a different cgroup.

I'll take a stab at fixing these:

⬢[acme@toolbox perf]$ make -k CORESIGHT=1 BUILD_BPF_SKEL=1 PYTHON=python3 DEBUG=1 O=/tmp/build/perf -C tools/perf install-bin
make: Entering directory '/var/home/acme/git/perf/tools/perf'
  BUILD:   Doing 'make -j24' parallel build
Warning: Kernel ABI header at 'tools/include/uapi/linux/kvm.h' differs from latest version at 'include/uapi/linux/kvm.h'
diff -u tools/include/uapi/linux/kvm.h include/uapi/linux/kvm.h
Warning: Kernel ABI header at 'tools/include/uapi/linux/mount.h' differs from latest version at 'include/uapi/linux/mount.h'
diff -u tools/include/uapi/linux/mount.h include/uapi/linux/mount.h
Warning: Kernel ABI header at 'tools/arch/x86/include/asm/cpufeatures.h' differs from latest version at 'arch/x86/include/asm/cpufeatures.h'
diff -u tools/arch/x86/include/asm/cpufeatures.h arch/x86/include/asm/cpufeatures.h
Warning: Kernel ABI header at 'tools/arch/x86/include/asm/msr-index.h' differs from latest version at 'arch/x86/include/asm/msr-index.h'
diff -u tools/arch/x86/include/asm/msr-index.h arch/x86/include/asm/msr-index.h
Warning: Kernel ABI header at 'tools/arch/x86/include/uapi/asm/kvm.h' differs from latest version at 'arch/x86/include/uapi/asm/kvm.h'
diff -u tools/arch/x86/include/uapi/asm/kvm.h arch/x86/include/uapi/asm/kvm.h
Warning: Kernel ABI header at 'tools/arch/x86/include/uapi/asm/svm.h' differs from latest version at 'arch/x86/include/uapi/asm/svm.h'
diff -u tools/arch/x86/include/uapi/asm/svm.h arch/x86/include/uapi/asm/svm.h
Warning: Kernel ABI header at 'tools/arch/arm64/include/uapi/asm/kvm.h' differs from latest version at 'arch/arm64/include/uapi/asm/kvm.h'
diff -u tools/arch/arm64/include/uapi/asm/kvm.h arch/arm64/include/uapi/asm/kvm.h
  DESCEND plugins
  GEN     /tmp/build/perf/python/perf.so
  INSTALL trace_plugins
  CC      /tmp/build/perf/util/bpf_counter_cgroup.o
  CC      /tmp/build/perf/util/demangle-java.o
  CC      /tmp/build/perf/util/demangle-rust.o
  CC      /tmp/build/perf/util/jitdump.o
  CC      /tmp/build/perf/util/genelf.o
  CC      /tmp/build/perf/util/genelf_debug.o
  CC      /tmp/build/perf/util/perf-hooks.o
  CC      /tmp/build/perf/util/bpf-event.o
util/bpf_counter_cgroup.c: In function ‘bperf_load_program’:
util/bpf_counter_cgroup.c:96:23: error: comparison of integer expressions of different signedness: ‘__u32’ {aka ‘unsigned int’} and ‘int’ [-Werror=sign-compare]
   96 |         for (i = 0; i < nr_cpus; i++) {
      |                       ^
util/bpf_counter_cgroup.c:125:43: error: comparison of integer expressions of different signedness: ‘__u32’ {aka ‘unsigned int’} and ‘int’ [-Werror=sign-compare]
  125 |                         for (cpu = 0; cpu < nr_cpus; cpu++) {
      |                                           ^
util/bpf_counter_cgroup.c: In function ‘bperf_cgrp__load’:
util/bpf_counter_cgroup.c:178:65: error: unused parameter ‘target’ [-Werror=unused-parameter]
  178 | static int bperf_cgrp__load(struct evsel *evsel, struct target *target)
      |                                                  ~~~~~~~~~~~~~~~^~~~~~
util/bpf_counter_cgroup.c: In function ‘bperf_cgrp__install_pe’:
util/bpf_counter_cgroup.c:195:49: error: unused parameter ‘evsel’ [-Werror=unused-parameter]
  195 | static int bperf_cgrp__install_pe(struct evsel *evsel, int cpu, int fd)
      |                                   ~~~~~~~~~~~~~~^~~~~
util/bpf_counter_cgroup.c:195:60: error: unused parameter ‘cpu’ [-Werror=unused-parameter]
  195 | static int bperf_cgrp__install_pe(struct evsel *evsel, int cpu, int fd)
      |                                                        ~~~~^~~
util/bpf_counter_cgroup.c:195:69: error: unused parameter ‘fd’ [-Werror=unused-parameter]
  195 | static int bperf_cgrp__install_pe(struct evsel *evsel, int cpu, int fd)
      |                                                                 ~~~~^~
util/bpf_counter_cgroup.c: In function ‘bperf_cgrp__enable’:
util/bpf_counter_cgroup.c:219:45: error: unused parameter ‘evsel’ [-Werror=unused-parameter]
  219 | static int bperf_cgrp__enable(struct evsel *evsel)
      |                               ~~~~~~~~~~~~~~^~~~~
cc1: all warnings being treated as errors
make[4]: *** [/var/home/acme/git/perf/tools/build/Makefile.build:96: /tmp/build/perf/util/bpf_counter_cgroup.o] Error 1
make[4]: *** Waiting for unfinished jobs....
make[3]: *** [/var/home/acme/git/perf/tools/build/Makefile.build:139: util] Error 2
make[2]: *** [Makefile.perf:655: /tmp/build/perf/perf-in.o] Error 2
make[1]: *** [Makefile.perf:238: sub-make] Error 2
make: *** [Makefile:113: install-bin] Error 2
make: Leaving directory '/var/home/acme/git/perf/tools/perf'
⬢[acme@toolbox perf]$
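
The fixes these warnings call for look mechanical, e.g. keeping the
loop index type in sync with nr_cpus and annotating the unused
parameters (a sketch, not necessarily the exact shape of the v5
changes):

	static void read_all_cpus(struct evlist *evlist)
	{
		int i, nr_cpus = evlist->core.all_cpus->nr;

		for (i = 0; i < nr_cpus; i++)	/* both int: no sign-compare */
			;	/* real loop body goes here */
	}

	static int bperf_cgrp__install_pe(struct evsel *evsel __maybe_unused,
					  int cpu __maybe_unused,
					  int fd __maybe_unused)
	{
		return 0;
	}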

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup
  2021-06-30 18:47   ` Song Liu
@ 2021-06-30 20:09     ` Namhyung Kim
  2021-07-01 20:16       ` Namhyung Kim
  0 siblings, 1 reply; 21+ messages in thread
From: Namhyung Kim @ 2021-06-30 20:09 UTC (permalink / raw)
  To: Song Liu
  Cc: Arnaldo Carvalho de Melo, Jiri Olsa, Ingo Molnar, Peter Zijlstra,
	LKML, Andi Kleen, Ian Rogers, Stephane Eranian

Hi Song,

On Wed, Jun 30, 2021 at 11:47 AM Song Liu <songliubraving@fb.com> wrote:
>
>
>
> > On Jun 25, 2021, at 12:18 AM, Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > Recently bperf was added to use BPF to count perf events for various
> > purposes.  This is an extension of that approach, targeting cgroup
> > usage.
> >
> > Unlike the other bperf modes, it doesn't share the events with other
> > processes, but it reduces unnecessary events (and the overhead of
> > multiplexing) for each monitored cgroup within the perf session.
> >
> > When --for-each-cgroup is used with --bpf-counters, it will open a
> > cgroup-switches event per cpu internally and attach the new BPF
> > program to read the given perf_events and aggregate the results for
> > cgroups.  The program is only invoked when a task is switched to a
> > task in a different cgroup.
> >
> > Cc: Song Liu <songliubraving@fb.com>
> > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> > ---
> > tools/perf/Makefile.perf                    |  17 +-
> > tools/perf/util/Build                       |   1 +
> > tools/perf/util/bpf_counter.c               |   5 +
> > tools/perf/util/bpf_counter_cgroup.c        | 299 ++++++++++++++++++++
> > tools/perf/util/bpf_skel/bperf_cgroup.bpf.c | 191 +++++++++++++
> > tools/perf/util/cgroup.c                    |   2 +
> > tools/perf/util/cgroup.h                    |   1 +
> > 7 files changed, 515 insertions(+), 1 deletion(-)
> > create mode 100644 tools/perf/util/bpf_counter_cgroup.c
> > create mode 100644 tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
>
> [...]
>
> > diff --git a/tools/perf/util/bpf_counter_cgroup.c b/tools/perf/util/bpf_counter_cgroup.c
> > new file mode 100644
> > index 000000000000..327f97a23a84
> > --- /dev/null
> > +++ b/tools/perf/util/bpf_counter_cgroup.c
> > @@ -0,0 +1,299 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +/* Copyright (c) 2019 Facebook */
>
> I am not sure whether this ^^^ is accurate.

Well, I just copied it from the bpf_counter.c file, which was the base
of this patch.  But now I don't think many lines of code came directly
from the original.

So I'm not sure what I should do.  Do you want me to update the
copyright year to 2021?  Or are you ok with removing the
line altogether?

>
> > +/* Copyright (c) 2021 Google */
> > +
> > +#include <assert.h>
> > +#include <limits.h>
> > +#include <unistd.h>
> > +#include <sys/file.h>
> > +#include <sys/time.h>
> > +#include <sys/resource.h>
> > +#include <linux/err.h>
> > +#include <linux/zalloc.h>
> > +#include <linux/perf_event.h>
> > +#include <api/fs/fs.h>
> > +#include <perf/bpf_perf.h>
> > +
> > +#include "affinity.h"
> > +#include "bpf_counter.h"
> > +#include "cgroup.h"
> > +#include "counts.h"
> > +#include "debug.h"
> > +#include "evsel.h"
> > +#include "evlist.h"
> > +#include "target.h"
> > +#include "cpumap.h"
> > +#include "thread_map.h"
> > +
> > +#include "bpf_skel/bperf_cgroup.skel.h"
> > +
> > +static struct perf_event_attr cgrp_switch_attr = {
> > +     .type = PERF_TYPE_SOFTWARE,
> > +     .config = PERF_COUNT_SW_CGROUP_SWITCHES,
> > +     .size = sizeof(cgrp_switch_attr),
> > +     .sample_period = 1,
> > +     .disabled = 1,
> > +};
> > +
> > +static struct evsel *cgrp_switch;
> > +static struct bperf_cgroup_bpf *skel;
> > +
> > +#define FD(evt, cpu) (*(int *)xyarray__entry(evt->core.fd, cpu, 0))
> > +
> > +static int bperf_load_program(struct evlist *evlist)
> > +{
> > +     struct bpf_link *link;
> > +     struct evsel *evsel;
> > +     struct cgroup *cgrp, *leader_cgrp;
> > +     __u32 i, cpu;
> > +     int nr_cpus = evlist->core.all_cpus->nr;
> > +     int total_cpus = cpu__max_cpu();
> > +     int map_size, map_fd;
> > +     int prog_fd, err;
> > +
> > +     skel = bperf_cgroup_bpf__open();
> > +     if (!skel) {
> > +             pr_err("Failed to open cgroup skeleton\n");
> > +             return -1;
> > +     }
> > +
> > +     skel->rodata->num_cpus = total_cpus;
> > +     skel->rodata->num_events = evlist->core.nr_entries / nr_cgroups;
> > +
> > +     BUG_ON(evlist->core.nr_entries % nr_cgroups != 0);
> > +
> > +     /* we need one copy of events per cpu for reading */
> > +     map_size = total_cpus * evlist->core.nr_entries / nr_cgroups;
> > +     bpf_map__resize(skel->maps.events, map_size);
> > +     bpf_map__resize(skel->maps.cgrp_idx, nr_cgroups);
> > +     /* previous result is saved in a per-cpu array */
> > +     map_size = evlist->core.nr_entries / nr_cgroups;
> > +     bpf_map__resize(skel->maps.prev_readings, map_size);
> > +     /* cgroup result needs all events (per-cpu) */
> > +     map_size = evlist->core.nr_entries;
> > +     bpf_map__resize(skel->maps.cgrp_readings, map_size);
> > +
> > +     set_max_rlimit();
> > +
> > +     err = bperf_cgroup_bpf__load(skel);
> > +     if (err) {
> > +             pr_err("Failed to load cgroup skeleton\n");
> > +             goto out;
> > +     }
> > +
> > +     if (cgroup_is_v2("perf_event") > 0)
> > +             skel->bss->use_cgroup_v2 = 1;
> > +
> > +     err = -1;
> > +
> > +     cgrp_switch = evsel__new(&cgrp_switch_attr);
> > +     if (evsel__open_per_cpu(cgrp_switch, evlist->core.all_cpus, -1) < 0) {
> > +             pr_err("Failed to open cgroup switches event\n");
> > +             goto out;
> > +     }
> > +
> > +     for (i = 0; i < nr_cpus; i++) {
> > +             link = bpf_program__attach_perf_event(skel->progs.on_cgrp_switch,
> > +                                                   FD(cgrp_switch, i));
> > +             if (IS_ERR(link)) {
> > +                     pr_err("Failed to attach cgroup program\n");
> > +                     err = PTR_ERR(link);
> > +                     goto out;
> > +             }
> > +     }
> > +
> > +     /*
> > +      * Update cgrp_idx map from cgroup-id to event index.
> > +      */
> > +     cgrp = NULL;
> > +     i = 0;
> > +
> > +     evlist__for_each_entry(evlist, evsel) {
> > +             if (cgrp == NULL || evsel->cgrp == leader_cgrp) {
> > +                     leader_cgrp = evsel->cgrp;
> > +                     evsel->cgrp = NULL;
> > +
> > +                     /* open single copy of the events w/o cgroup */
> > +                     err = evsel__open_per_cpu(evsel, evlist->core.all_cpus, -1);
> > +                     if (err) {
> > +                             pr_err("Failed to open first cgroup events\n");
> > +                             goto out;
> > +                     }
> > +
> > +                     map_fd = bpf_map__fd(skel->maps.events);
> > +                     for (cpu = 0; cpu < nr_cpus; cpu++) {
> > +                             int fd = FD(evsel, cpu);
> > +                             __u32 idx = evsel->idx * total_cpus +
> > +                                     evlist->core.all_cpus->map[cpu];
> > +
> > +                             err = bpf_map_update_elem(map_fd, &idx, &fd,
> > +                                                       BPF_ANY);
> > +                             if (err < 0) {
> > +                                     pr_err("Failed to update perf_event fd\n");
> > +                                     goto out;
> > +                             }
> > +                     }
> > +
> > +                     evsel->cgrp = leader_cgrp;
> > +             }
> > +             evsel->supported = true;
> > +
> > +             if (evsel->cgrp == cgrp)
> > +                     continue;
> > +
> > +             cgrp = evsel->cgrp;
> > +
> > +             if (read_cgroup_id(cgrp) < 0) {
> > +                     pr_err("Failed to get cgroup id\n");
> > +                     err = -1;
> > +                     goto out;
> > +             }
> > +
> > +             map_fd = bpf_map__fd(skel->maps.cgrp_idx);
> > +             err = bpf_map_update_elem(map_fd, &cgrp->id, &i, BPF_ANY);
> > +             if (err < 0) {
> > +                     pr_err("Failed to update cgroup index map\n");
> > +                     goto out;
> > +             }
> > +
> > +             i++;
> > +     }
> > +
> > +     /*
> > +      * bperf uses BPF_PROG_TEST_RUN to get accurate reading. Check
> > +      * whether the kernel support it
> > +      */
> > +     prog_fd = bpf_program__fd(skel->progs.trigger_read);
> > +     err = bperf_trigger_reading(prog_fd, 0);
> > +     if (err) {
> > +             pr_debug("The kernel does not support test_run for raw_tp BPF programs.\n"
> > +                      "Therefore, --for-each-cgroup might show inaccurate readings\n");
>
> I think this should be a warning, and we should set err = 0 to continue?

Sounds good, will change.
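
Presumably something like this then (a sketch of the suggested change;
pr_warning() is the existing perf debug helper):

	if (err) {
		pr_warning("The kernel does not support test_run for raw_tp BPF programs.\n"
			   "Therefore, --for-each-cgroup might show inaccurate readings\n");
		err = 0;
	}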

>
> > +     }
> > +
> > +out:
> > +     return err;
> > +}
> > +
>
> [...]
>
> > +
> > +/*
> > + * trigger the leader prog on each cpu, so the cgrp_reading map could get
> > + * the latest results.
> > + */
> > +static int bperf_cgrp__sync_counters(struct evlist *evlist)
> > +{
> > +     int i, cpu;
> > +     int nr_cpus = evlist->core.all_cpus->nr;
> > +     int prog_fd = bpf_program__fd(skel->progs.trigger_read);
> > +
> > +     for (i = 0; i < nr_cpus; i++) {
> > +             cpu = evlist->core.all_cpus->map[i];
> > +             bperf_trigger_reading(prog_fd, cpu);
> > +     }
> > +
> > +     return 0;
> > +}
> > +
> > +static int bperf_cgrp__enable(struct evsel *evsel)
> > +{
>
> Do we need to call bperf_cgrp__sync_counters() before setting enabled to 1?
> If we don't, we may include counts from before the events were enabled, no?

Actually it updates prev_readings even when enabled == 0.
So I think it should get the correct counts after setting it to 1,
even without calling bperf_cgrp__sync_counters().
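
For context, the counting logic on the BPF side is roughly the
following.  This is a simplified sketch of bperf_cgroup.bpf.c: the map
and variable names (events, prev_readings, cgrp_readings, num_events,
num_cpus, enabled) come from the patch, but error handling and the
cgroup-id-to-index lookup are elided:

	static int bperf_cgroup_count(void)
	{
		struct bpf_perf_event_value val, *prev_val, *cgrp_val;
		__u32 cpu = bpf_get_smp_processor_id();
		__u32 cgrp = 0;	/* index from the cgrp_idx map, lookup elided */
		__u32 idx, key;

		for (idx = 0; idx < num_events; idx++) {
			key = idx * num_cpus + cpu;
			if (bpf_perf_event_read_value(&events, key, &val,
						      sizeof(val)))
				continue;

			prev_val = bpf_map_lookup_elem(&prev_readings, &idx);
			if (!prev_val)
				continue;

			if (enabled) {
				key = cgrp * num_events + idx;
				cgrp_val = bpf_map_lookup_elem(&cgrp_readings,
							       &key);
				if (cgrp_val)
					cgrp_val->counter += val.counter -
							     prev_val->counter;
			}

			/* prev_readings is refreshed even when enabled == 0 */
			*prev_val = val;
		}
		return 0;
	}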

Thanks,
Namhyung


>
> > +     skel->bss->enabled = 1;
> > +     return 0;
> > +}
>
> [...]
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup
  2021-06-30 18:50   ` Arnaldo Carvalho de Melo
@ 2021-06-30 20:12     ` Namhyung Kim
  2021-07-01 13:43     ` Arnaldo Carvalho de Melo
  1 sibling, 0 replies; 21+ messages in thread
From: Namhyung Kim @ 2021-06-30 20:12 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen,
	Ian Rogers, Stephane Eranian, Song Liu

Hi Arnaldo,

On Wed, Jun 30, 2021 at 11:50 AM Arnaldo Carvalho de Melo
<acme@kernel.org> wrote:
>
> Em Fri, Jun 25, 2021 at 12:18:26AM -0700, Namhyung Kim escreveu:
> > Recently bperf was added to use BPF to count perf events for various
> > purposes.  This is an extension of that approach, targeting cgroup
> > usage.
> >
> > Unlike the other bperf modes, it doesn't share the events with other
> > processes, but it reduces unnecessary events (and the overhead of
> > multiplexing) for each monitored cgroup within the perf session.
> >
> > When --for-each-cgroup is used with --bpf-counters, it will open a
> > cgroup-switches event per cpu internally and attach the new BPF
> > program to read the given perf_events and aggregate the results for
> > cgroups.  The program is only invoked when a task is switched to a
> > task in a different cgroup.
>
> I'll take a stab at fixing these:

Oops, sorry about that.  My build environment didn't catch this.
Will fix it in v5.

Thanks,
Namhyung

>
> ⬢[acme@toolbox perf]$ make -k CORESIGHT=1 BUILD_BPF_SKEL=1 PYTHON=python3 DEBUG=1 O=/tmp/build/perf -C tools/perf install-bin
> make: Entering directory '/var/home/acme/git/perf/tools/perf'
>   BUILD:   Doing 'make -j24' parallel build
> Warning: Kernel ABI header at 'tools/include/uapi/linux/kvm.h' differs from latest version at 'include/uapi/linux/kvm.h'
> diff -u tools/include/uapi/linux/kvm.h include/uapi/linux/kvm.h
> Warning: Kernel ABI header at 'tools/include/uapi/linux/mount.h' differs from latest version at 'include/uapi/linux/mount.h'
> diff -u tools/include/uapi/linux/mount.h include/uapi/linux/mount.h
> Warning: Kernel ABI header at 'tools/arch/x86/include/asm/cpufeatures.h' differs from latest version at 'arch/x86/include/asm/cpufeatures.h'
> diff -u tools/arch/x86/include/asm/cpufeatures.h arch/x86/include/asm/cpufeatures.h
> Warning: Kernel ABI header at 'tools/arch/x86/include/asm/msr-index.h' differs from latest version at 'arch/x86/include/asm/msr-index.h'
> diff -u tools/arch/x86/include/asm/msr-index.h arch/x86/include/asm/msr-index.h
> Warning: Kernel ABI header at 'tools/arch/x86/include/uapi/asm/kvm.h' differs from latest version at 'arch/x86/include/uapi/asm/kvm.h'
> diff -u tools/arch/x86/include/uapi/asm/kvm.h arch/x86/include/uapi/asm/kvm.h
> Warning: Kernel ABI header at 'tools/arch/x86/include/uapi/asm/svm.h' differs from latest version at 'arch/x86/include/uapi/asm/svm.h'
> diff -u tools/arch/x86/include/uapi/asm/svm.h arch/x86/include/uapi/asm/svm.h
> Warning: Kernel ABI header at 'tools/arch/arm64/include/uapi/asm/kvm.h' differs from latest version at 'arch/arm64/include/uapi/asm/kvm.h'
> diff -u tools/arch/arm64/include/uapi/asm/kvm.h arch/arm64/include/uapi/asm/kvm.h
>   DESCEND plugins
>   GEN     /tmp/build/perf/python/perf.so
>   INSTALL trace_plugins
>   CC      /tmp/build/perf/util/bpf_counter_cgroup.o
>   CC      /tmp/build/perf/util/demangle-java.o
>   CC      /tmp/build/perf/util/demangle-rust.o
>   CC      /tmp/build/perf/util/jitdump.o
>   CC      /tmp/build/perf/util/genelf.o
>   CC      /tmp/build/perf/util/genelf_debug.o
>   CC      /tmp/build/perf/util/perf-hooks.o
>   CC      /tmp/build/perf/util/bpf-event.o
> util/bpf_counter_cgroup.c: In function ‘bperf_load_program’:
> util/bpf_counter_cgroup.c:96:23: error: comparison of integer expressions of different signedness: ‘__u32’ {aka ‘unsigned int’} and ‘int’ [-Werror=sign-compare]
>    96 |         for (i = 0; i < nr_cpus; i++) {
>       |                       ^
> util/bpf_counter_cgroup.c:125:43: error: comparison of integer expressions of different signedness: ‘__u32’ {aka ‘unsigned int’} and ‘int’ [-Werror=sign-compare]
>   125 |                         for (cpu = 0; cpu < nr_cpus; cpu++) {
>       |                                           ^
> util/bpf_counter_cgroup.c: In function ‘bperf_cgrp__load’:
> util/bpf_counter_cgroup.c:178:65: error: unused parameter ‘target’ [-Werror=unused-parameter]
>   178 | static int bperf_cgrp__load(struct evsel *evsel, struct target *target)
>       |                                                  ~~~~~~~~~~~~~~~^~~~~~
> util/bpf_counter_cgroup.c: In function ‘bperf_cgrp__install_pe’:
> util/bpf_counter_cgroup.c:195:49: error: unused parameter ‘evsel’ [-Werror=unused-parameter]
>   195 | static int bperf_cgrp__install_pe(struct evsel *evsel, int cpu, int fd)
>       |                                   ~~~~~~~~~~~~~~^~~~~
> util/bpf_counter_cgroup.c:195:60: error: unused parameter ‘cpu’ [-Werror=unused-parameter]
>   195 | static int bperf_cgrp__install_pe(struct evsel *evsel, int cpu, int fd)
>       |                                                        ~~~~^~~
> util/bpf_counter_cgroup.c:195:69: error: unused parameter ‘fd’ [-Werror=unused-parameter]
>   195 | static int bperf_cgrp__install_pe(struct evsel *evsel, int cpu, int fd)
>       |                                                                 ~~~~^~
> util/bpf_counter_cgroup.c: In function ‘bperf_cgrp__enable’:
> util/bpf_counter_cgroup.c:219:45: error: unused parameter ‘evsel’ [-Werror=unused-parameter]
>   219 | static int bperf_cgrp__enable(struct evsel *evsel)
>       |                               ~~~~~~~~~~~~~~^~~~~
> cc1: all warnings being treated as errors
> make[4]: *** [/var/home/acme/git/perf/tools/build/Makefile.build:96: /tmp/build/perf/util/bpf_counter_cgroup.o] Error 1
> make[4]: *** Waiting for unfinished jobs....
> make[3]: *** [/var/home/acme/git/perf/tools/build/Makefile.build:139: util] Error 2
> make[2]: *** [Makefile.perf:655: /tmp/build/perf/perf-in.o] Error 2
> make[1]: *** [Makefile.perf:238: sub-make] Error 2
> make: *** [Makefile:113: install-bin] Error 2
> make: Leaving directory '/var/home/acme/git/perf/tools/perf'
> ⬢[acme@toolbox perf]$

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup
  2021-06-30 18:50   ` Arnaldo Carvalho de Melo
  2021-06-30 20:12     ` Namhyung Kim
@ 2021-07-01 13:43     ` Arnaldo Carvalho de Melo
  2021-07-01 17:10       ` Namhyung Kim
  1 sibling, 1 reply; 21+ messages in thread
From: Arnaldo Carvalho de Melo @ 2021-07-01 13:43 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen,
	Ian Rogers, Stephane Eranian, Song Liu

Em Wed, Jun 30, 2021 at 03:50:12PM -0300, Arnaldo Carvalho de Melo escreveu:
> Em Fri, Jun 25, 2021 at 12:18:26AM -0700, Namhyung Kim escreveu:
> > Recently bperf was added to use BPF to count perf events for various
> > purposes.  This is an extension of that approach, targeting cgroup
> > usage.
> > 
> > Unlike the other bperf modes, it doesn't share the events with other
> > processes, but it reduces unnecessary events (and the overhead of
> > multiplexing) for each monitored cgroup within the perf session.
> > 
> > When --for-each-cgroup is used with --bpf-counters, it will open a
> > cgroup-switches event per cpu internally and attach the new BPF
> > program to read the given perf_events and aggregate the results for
> > cgroups.  The program is only invoked when a task is switched to a
> > task in a different cgroup.
> 
> I'll take a stab at fixing these:

So, I tried some 'make -C tools clean', etc., but I'm now stuck with:

  CLANG   /tmp/build/perf/util/bpf_skel/.tmp/bperf_cgroup.bpf.o
util/bpf_skel/bperf_cgroup.bpf.c:4:10: fatal error: 'vmlinux.h' file not found
#include "vmlinux.h"
         ^~~~~~~~~~~
1 error generated.
make[2]: *** [Makefile.perf:1033: /tmp/build/perf/util/bpf_skel/.tmp/bperf_cgroup.bpf.o] Error 1
make[2]: *** Waiting for unfinished jobs....
  CC      /tmp/build/perf/pmu-events/pmu-events.o
  LD      /tmp/build/perf/pmu-events/pmu-events-in.o

Auto-detecting system features:
...                        libbfd: [ on  ]
...        disassembler-four-args: [ on  ]
...                          zlib: [ on  ]
...                        libcap: [ on  ]
...               clang-bpf-co-re: [ on  ]


  MKDIR   /tmp/build/perf/util/bpf_skel/.tmp//bootstrap/
  MKDIR   /tmp/build/perf/util/bpf_skel/.tmp//bootstrap/libbpf/


Have to go run errands now; I'll put what I have at tmp.perf/core.
Please see if you can reproduce; I use this to build:

    make -k CORESIGHT=1 BUILD_BPF_SKEL=1 PYTHON=python3 DEBUG=1 O=/tmp/build/perf -C tools/perf install-bin

- Arnaldo
 
> ⬢[acme@toolbox perf]$ make -k CORESIGHT=1 BUILD_BPF_SKEL=1 PYTHON=python3 DEBUG=1 O=/tmp/build/perf -C tools/perf install-bin
> make: Entering directory '/var/home/acme/git/perf/tools/perf'
>   BUILD:   Doing 'make -j24' parallel build
> Warning: Kernel ABI header at 'tools/include/uapi/linux/kvm.h' differs from latest version at 'include/uapi/linux/kvm.h'
> diff -u tools/include/uapi/linux/kvm.h include/uapi/linux/kvm.h
> Warning: Kernel ABI header at 'tools/include/uapi/linux/mount.h' differs from latest version at 'include/uapi/linux/mount.h'
> diff -u tools/include/uapi/linux/mount.h include/uapi/linux/mount.h
> Warning: Kernel ABI header at 'tools/arch/x86/include/asm/cpufeatures.h' differs from latest version at 'arch/x86/include/asm/cpufeatures.h'
> diff -u tools/arch/x86/include/asm/cpufeatures.h arch/x86/include/asm/cpufeatures.h
> Warning: Kernel ABI header at 'tools/arch/x86/include/asm/msr-index.h' differs from latest version at 'arch/x86/include/asm/msr-index.h'
> diff -u tools/arch/x86/include/asm/msr-index.h arch/x86/include/asm/msr-index.h
> Warning: Kernel ABI header at 'tools/arch/x86/include/uapi/asm/kvm.h' differs from latest version at 'arch/x86/include/uapi/asm/kvm.h'
> diff -u tools/arch/x86/include/uapi/asm/kvm.h arch/x86/include/uapi/asm/kvm.h
> Warning: Kernel ABI header at 'tools/arch/x86/include/uapi/asm/svm.h' differs from latest version at 'arch/x86/include/uapi/asm/svm.h'
> diff -u tools/arch/x86/include/uapi/asm/svm.h arch/x86/include/uapi/asm/svm.h
> Warning: Kernel ABI header at 'tools/arch/arm64/include/uapi/asm/kvm.h' differs from latest version at 'arch/arm64/include/uapi/asm/kvm.h'
> diff -u tools/arch/arm64/include/uapi/asm/kvm.h arch/arm64/include/uapi/asm/kvm.h
>   DESCEND plugins
>   GEN     /tmp/build/perf/python/perf.so
>   INSTALL trace_plugins
>   CC      /tmp/build/perf/util/bpf_counter_cgroup.o
>   CC      /tmp/build/perf/util/demangle-java.o
>   CC      /tmp/build/perf/util/demangle-rust.o
>   CC      /tmp/build/perf/util/jitdump.o
>   CC      /tmp/build/perf/util/genelf.o
>   CC      /tmp/build/perf/util/genelf_debug.o
>   CC      /tmp/build/perf/util/perf-hooks.o
>   CC      /tmp/build/perf/util/bpf-event.o
> util/bpf_counter_cgroup.c: In function ‘bperf_load_program’:
> util/bpf_counter_cgroup.c:96:23: error: comparison of integer expressions of different signedness: ‘__u32’ {aka ‘unsigned int’} and ‘int’ [-Werror=sign-compare]
>    96 |         for (i = 0; i < nr_cpus; i++) {
>       |                       ^
> util/bpf_counter_cgroup.c:125:43: error: comparison of integer expressions of different signedness: ‘__u32’ {aka ‘unsigned int’} and ‘int’ [-Werror=sign-compare]
>   125 |                         for (cpu = 0; cpu < nr_cpus; cpu++) {
>       |                                           ^
> util/bpf_counter_cgroup.c: In function ‘bperf_cgrp__load’:
> util/bpf_counter_cgroup.c:178:65: error: unused parameter ‘target’ [-Werror=unused-parameter]
>   178 | static int bperf_cgrp__load(struct evsel *evsel, struct target *target)
>       |                                                  ~~~~~~~~~~~~~~~^~~~~~
> util/bpf_counter_cgroup.c: In function ‘bperf_cgrp__install_pe’:
> util/bpf_counter_cgroup.c:195:49: error: unused parameter ‘evsel’ [-Werror=unused-parameter]
>   195 | static int bperf_cgrp__install_pe(struct evsel *evsel, int cpu, int fd)
>       |                                   ~~~~~~~~~~~~~~^~~~~
> util/bpf_counter_cgroup.c:195:60: error: unused parameter ‘cpu’ [-Werror=unused-parameter]
>   195 | static int bperf_cgrp__install_pe(struct evsel *evsel, int cpu, int fd)
>       |                                                        ~~~~^~~
> util/bpf_counter_cgroup.c:195:69: error: unused parameter ‘fd’ [-Werror=unused-parameter]
>   195 | static int bperf_cgrp__install_pe(struct evsel *evsel, int cpu, int fd)
>       |                                                                 ~~~~^~
> util/bpf_counter_cgroup.c: In function ‘bperf_cgrp__enable’:
> util/bpf_counter_cgroup.c:219:45: error: unused parameter ‘evsel’ [-Werror=unused-parameter]
>   219 | static int bperf_cgrp__enable(struct evsel *evsel)
>       |                               ~~~~~~~~~~~~~~^~~~~
> cc1: all warnings being treated as errors
> make[4]: *** [/var/home/acme/git/perf/tools/build/Makefile.build:96: /tmp/build/perf/util/bpf_counter_cgroup.o] Error 1
> make[4]: *** Waiting for unfinished jobs....
> make[3]: *** [/var/home/acme/git/perf/tools/build/Makefile.build:139: util] Error 2
> make[2]: *** [Makefile.perf:655: /tmp/build/perf/perf-in.o] Error 2
> make[1]: *** [Makefile.perf:238: sub-make] Error 2
> make: *** [Makefile:113: install-bin] Error 2
> make: Leaving directory '/var/home/acme/git/perf/tools/perf'
> ⬢[acme@toolbox perf]$

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup
  2021-07-01 13:43     ` Arnaldo Carvalho de Melo
@ 2021-07-01 17:10       ` Namhyung Kim
  0 siblings, 0 replies; 21+ messages in thread
From: Namhyung Kim @ 2021-07-01 17:10 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen,
	Ian Rogers, Stephane Eranian, Song Liu

On Thu, Jul 1, 2021 at 6:43 AM Arnaldo Carvalho de Melo <acme@kernel.org> wrote:
>
> Em Wed, Jun 30, 2021 at 03:50:12PM -0300, Arnaldo Carvalho de Melo escreveu:
> > Em Fri, Jun 25, 2021 at 12:18:26AM -0700, Namhyung Kim escreveu:
> > > Recently bperf was added to use BPF to count perf events for various
> > > purposes.  This is an extension of that approach, targeting cgroup
> > > usage.
> > >
> > > Unlike the other bperf modes, it doesn't share the events with other
> > > processes, but it reduces unnecessary events (and the overhead of
> > > multiplexing) for each monitored cgroup within the perf session.
> > >
> > > When --for-each-cgroup is used with --bpf-counters, it will open a
> > > cgroup-switches event per cpu internally and attach the new BPF
> > > program to read the given perf_events and aggregate the results for
> > > cgroups.  The program is only invoked when a task is switched to a
> > > task in a different cgroup.
> >
> > I'll take a stab at fixing these:
>
> So, I tried some 'make -C tools clean', etc., but I'm now stuck with:
>
>   CLANG   /tmp/build/perf/util/bpf_skel/.tmp/bperf_cgroup.bpf.o
> util/bpf_skel/bperf_cgroup.bpf.c:4:10: fatal error: 'vmlinux.h' file not found
> #include "vmlinux.h"
>          ^~~~~~~~~~~
> 1 error generated.
> make[2]: *** [Makefile.perf:1033: /tmp/build/perf/util/bpf_skel/.tmp/bperf_cgroup.bpf.o] Error 1
> make[2]: *** Waiting for unfinished jobs....
>   CC      /tmp/build/perf/pmu-events/pmu-events.o
>   LD      /tmp/build/perf/pmu-events/pmu-events-in.o
>
> Auto-detecting system features:
> ...                        libbfd: [ on  ]
> ...        disassembler-four-args: [ on  ]
> ...                          zlib: [ on  ]
> ...                        libcap: [ on  ]
> ...               clang-bpf-co-re: [ on  ]
>
>
>   MKDIR   /tmp/build/perf/util/bpf_skel/.tmp//bootstrap/
>   MKDIR   /tmp/build/perf/util/bpf_skel/.tmp//bootstrap/libbpf/
>
>
> Have to go run errands now; I'll put what I have at tmp.perf/core.
> Please see if you can reproduce; I use this to build:
>
>     make -k CORESIGHT=1 BUILD_BPF_SKEL=1 PYTHON=python3 DEBUG=1 O=/tmp/build/perf -C tools/perf install-bin
>
> - Arnaldo

Oh, sorry about that.  I found that the vmlinux.h generation is
misplaced in Makefile.perf.  Will fix it in the next version.
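
Probably by making the .bpf.o objects depend on the generated
vmlinux.h so it is built first; roughly like this (a sketch: the exact
rule and variable names in Makefile.perf may differ):

	$(SKEL_TMP_OUT)/%.bpf.o: util/bpf_skel/%.bpf.c $(LIBBPF) \
				 $(SKEL_OUT)/vmlinux.h | $(SKEL_TMP_OUT)
		$(QUIET_CLANG)$(CLANG) -g -O2 --target=bpf $(BPF_INCLUDE) \
			-c $(filter util/bpf_skel/%.bpf.c,$^) -o $@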

Thanks,
Namhyung

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] perf tools: Add read_cgroup_id() function
  2021-06-25  7:18 ` [PATCH 1/4] perf tools: Add read_cgroup_id() function Namhyung Kim
@ 2021-07-01 17:59   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 21+ messages in thread
From: Arnaldo Carvalho de Melo @ 2021-07-01 17:59 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen,
	Ian Rogers, Stephane Eranian, Song Liu

Em Fri, Jun 25, 2021 at 12:18:23AM -0700, Namhyung Kim escreveu:
> The read_cgroup_id() function reads a cgroup id from a file handle
> obtained with name_to_handle_at(2) for the given cgroup.  It'll be
> used by bperf cgroup stat later.
> 
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
>  tools/perf/util/cgroup.c | 25 +++++++++++++++++++++++++
>  tools/perf/util/cgroup.h |  9 +++++++++
>  2 files changed, 34 insertions(+)
> 
> diff --git a/tools/perf/util/cgroup.c b/tools/perf/util/cgroup.c
> index f24ab4585553..ef18c988c681 100644
> --- a/tools/perf/util/cgroup.c
> +++ b/tools/perf/util/cgroup.c
> @@ -45,6 +45,31 @@ static int open_cgroup(const char *name)
>  	return fd;
>  }
>  
> +#ifdef HAVE_FILE_HANDLE
> +int read_cgroup_id(struct cgroup *cgrp)
> +{
> +	char path[PATH_MAX + 1];
> +	char mnt[PATH_MAX + 1];
> +	struct {
> +		struct file_handle fh;
> +		uint64_t cgroup_id;
> +	} handle;
> +	int mount_id;
> +
> +	if (cgroupfs_find_mountpoint(mnt, PATH_MAX + 1, "perf_event"))
> +		return -1;
> +
> +	scnprintf(path, PATH_MAX, "%s/%s", mnt, cgrp->name);
> +
> +	handle.fh.handle_bytes = sizeof(handle.cgroup_id);
> +	if (name_to_handle_at(AT_FDCWD, path, &handle.fh, &mount_id, 0) < 0)
> +		return -1;
> +
> +	cgrp->id = handle.cgroup_id;
> +	return 0;
> +}
> +#endif  /* HAVE_FILE_HANDLE */
> +
>  static struct cgroup *evlist__find_cgroup(struct evlist *evlist, const char *str)
>  {
>  	struct evsel *counter;
> diff --git a/tools/perf/util/cgroup.h b/tools/perf/util/cgroup.h
> index 162906f3412a..707adbe25123 100644
> --- a/tools/perf/util/cgroup.h
> +++ b/tools/perf/util/cgroup.h
> @@ -38,4 +38,13 @@ struct cgroup *cgroup__find(struct perf_env *env, uint64_t id);
>  
>  void perf_env__purge_cgroups(struct perf_env *env);
>  
> +#ifdef HAVE_FILE_HANDLE
> +int read_cgroup_id(struct cgroup *cgrp);
> +#else
> +int read_cgroup_id(struct cgroup *cgrp)
> +{
> +	return -1;
> +}
> +#endif  /* HAVE_FILE_HANDLE */
> +
>  #endif /* __CGROUP_H__ */
> -- 
> 2.32.0.93.g670b81a890-goog
> 


You forgot the __maybe_unused in the !HAVE_FILE_HANDLE case, and also
the static inline for functions defined in headers.  I'm fixing this
up; the errors are below:
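
The fixed stub presumably ends up looking like this (a sketch based on
the two comments above; only the !HAVE_FILE_HANDLE fallback changes):

	#ifdef HAVE_FILE_HANDLE
	int read_cgroup_id(struct cgroup *cgrp);
	#else
	static inline int read_cgroup_id(struct cgroup *cgrp __maybe_unused)
	{
		return -1;
	}
	#endif  /* HAVE_FILE_HANDLE */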

Alpine clang version 8.0.0 (tags/RELEASE_800/final) (based on LLVM 8.0.0)
Target: x86_64-alpine-linux-musl
Thread model: posix
InstalledDir: /usr/bin
Found candidate GCC installation: /usr/bin/../lib/gcc/x86_64-alpine-linux-musl/8.3.0
Found candidate GCC installation: /usr/lib/gcc/x86_64-alpine-linux-musl/8.3.0
Selected GCC installation: /usr/bin/../lib/gcc/x86_64-alpine-linux-musl/8.3.0
Candidate multilib: .;@m64
Selected multilib: .;@m64
+ rm -rf /tmp/build/perf
+ mkdir /tmp/build/perf
+ make 'ARCH=' 'CROSS_COMPILE=' 'EXTRA_CFLAGS=' -C tools/perf 'O=/tmp/build/perf' 'CC=clang'
make: Entering directory '/git/perf-5.13.0/tools/perf'
  BUILD:   Doing 'make -j24' parallel build
  HOSTCC  /tmp/build/perf/fixdep.o
  HOSTLD  /tmp/build/perf/fixdep-in.o
  LINK    /tmp/build/perf/fixdep
Makefile.config:439: No libdw DWARF unwind found, Please install elfutils-devel/libdw-dev >= 0.158 and/or set LIBDW_DIR
Makefile.config:444: No libdw.h found or old libdw.h found or elfutils is older than 0.138, disables dwarf support. Please install new elfutils-devel/libdw-dev
Makefile.config:563: DWARF support is off, BPF prologue is disabled
Makefile.config:571: No sys/sdt.h found, no SDT events are defined, please install systemtap-sdt-devel or systemtap-sdt-dev
Makefile.config:998: No libbabeltrace found, disables 'perf data' CTF format support, please install libbabeltrace-dev[el]/libbabeltrace-ctf-dev
Makefile.config:1024: No alternatives command found, you need to set JDIR= to point to the root of your Java directory

Auto-detecting system features:
...                         dwarf: [ OFF ]
...            dwarf_getlocations: [ OFF ]
...                         glibc: [ OFF ]
...                        libbfd: [ on  ]
...                libbfd-buildid: [ on  ]
...                        libcap: [ on  ]
...                        libelf: [ on  ]
...                       libnuma: [ on  ]
...        numa_num_possible_cpus: [ on  ]
...                       libperl: [ on  ]
...                     libpython: [ on  ]
...                     libcrypto: [ on  ]
...                     libunwind: [ on  ]
...            libdw-dwarf-unwind: [ OFF ]
...                          zlib: [ on  ]
...                          lzma: [ on  ]
...                     get_cpuid: [ on  ]
...                           bpf: [ on  ]
...                        libaio: [ on  ]
...                       libzstd: [ on  ]
...        disassembler-four-args: [ on  ]


  GEN     /tmp/build/perf/common-cmds.h
  PERF_VERSION = 5.13.gfdf19665b1fb
  MKDIR   /tmp/build/perf/pmu-events/
  MKDIR   /tmp/build/perf/pmu-events/
  MKDIR   /tmp/build/perf/pmu-events/
  HOSTCC  /tmp/build/perf/pmu-events/json.o
  HOSTCC  /tmp/build/perf/pmu-events/jsmn.o
  HOSTCC  /tmp/build/perf/pmu-events/jevents.o
  GEN     perf-archive
  GEN     perf-with-kcore
  GEN     perf-iostat
  CC      /tmp/build/perf/exec-cmd.o
  CC      /tmp/build/perf/help.o
  CC      /tmp/build/perf/pager.o
  CC      /tmp/build/perf/parse-options.o
  CC      /tmp/build/perf/run-command.o
  CC      /tmp/build/perf/sigchain.o
  CC      /tmp/build/perf/subcmd-config.o
  CC      /tmp/build/perf/cpu.o
  CC      /tmp/build/perf/debug.o
  MKDIR   /tmp/build/perf/fd/
  MKDIR   /tmp/build/perf/fs/
  CC      /tmp/build/perf/str_error_r.o
  MKDIR   /tmp/build/perf/fs/
  CC      /tmp/build/perf/fd/array.o
  MKDIR   /tmp/build/perf/fs/
  CC      /tmp/build/perf/fs/fs.o
  CC      /tmp/build/perf/fs/tracing_path.o
  CC      /tmp/build/perf/fs/cgroup.o
  CC      /tmp/build/perf/event-parse.o
  CC      /tmp/build/perf/event-plugin.o
  CC      /tmp/build/perf/trace-seq.o
  CC      /tmp/build/perf/parse-filter.o
  CC      /tmp/build/perf/parse-utils.o
  CC      /tmp/build/perf/kbuffer-parse.o
  CC      /tmp/build/perf/core.o
  CC      /tmp/build/perf/cpumap.o
  GEN     /tmp/build/perf/bpf_helper_defs.h
  CC      /tmp/build/perf/threadmap.o
  CC      /tmp/build/perf/evsel.o
  LD      /tmp/build/perf/fd/libapi-in.o
  CC      /tmp/build/perf/tep_strerror.o
  CC      /tmp/build/perf/plugin_jbd2.o
  CC      /tmp/build/perf/plugin_hrtimer.o
  CC      /tmp/build/perf/event-parse-api.o
  HOSTLD  /tmp/build/perf/pmu-events/jevents-in.o
  CC      /tmp/build/perf/plugin_kvm.o
  CC      /tmp/build/perf/plugin_kmem.o
  LINK    /tmp/build/perf/pmu-events/jevents
  CC      /tmp/build/perf/evlist.o
  CC      /tmp/build/perf/mmap.o
  LD      /tmp/build/perf/fs/libapi-in.o
  CC      /tmp/build/perf/plugin_mac80211.o
  GEN     /tmp/build/perf/pmu-events/pmu-events.c
  CC      /tmp/build/perf/zalloc.o
  LD      /tmp/build/perf/libapi-in.o
  AR      /tmp/build/perf/libapi.a
  CC      /tmp/build/perf/xyarray.o
  CC      /tmp/build/perf/lib.o
  CC      /tmp/build/perf/plugin_sched_switch.o
  LD      /tmp/build/perf/plugin_kmem-in.o
  LD      /tmp/build/perf/plugin_jbd2-in.o
  CC      /tmp/build/perf/plugin_function.o
  LD      /tmp/build/perf/plugin_hrtimer-in.o
  CC      /tmp/build/perf/plugin_futex.o
  CC      /tmp/build/perf/plugin_scsi.o
  CC      /tmp/build/perf/plugin_cfg80211.o
  LD      /tmp/build/perf/plugin_kvm-in.o
  MKDIR   /tmp/build/perf/staticobjs/
  CC      /tmp/build/perf/staticobjs/libbpf.o
  CC      /tmp/build/perf/plugin_xen.o
  MKDIR   /tmp/build/perf/staticobjs/
  CC      /tmp/build/perf/staticobjs/bpf.o
  CC      /tmp/build/perf/plugin_tlb.o
  CC      /tmp/build/perf/staticobjs/nlattr.o
  CC      /tmp/build/perf/staticobjs/btf.o
  LD      /tmp/build/perf/plugin_mac80211-in.o
  CC      /tmp/build/perf/staticobjs/libbpf_errno.o
  CC      /tmp/build/perf/staticobjs/str_error.o
  LD      /tmp/build/perf/plugin_sched_switch-in.o
  LINK    /tmp/build/perf/plugin_jbd2.so
  CC      /tmp/build/perf/staticobjs/netlink.o
  CC      /tmp/build/perf/staticobjs/bpf_prog_linfo.o
  CC      /tmp/build/perf/staticobjs/libbpf_probes.o
  LD      /tmp/build/perf/plugin_function-in.o
  LD      /tmp/build/perf/plugin_cfg80211-in.o
  LD      /tmp/build/perf/plugin_futex-in.o
  LD      /tmp/build/perf/plugin_xen-in.o
  LINK    /tmp/build/perf/plugin_hrtimer.so
  LINK    /tmp/build/perf/plugin_kmem.so
  LD      /tmp/build/perf/plugin_scsi-in.o
  LINK    /tmp/build/perf/plugin_kvm.so
  LINK    /tmp/build/perf/plugin_mac80211.so
  LINK    /tmp/build/perf/plugin_sched_switch.so
  LD      /tmp/build/perf/plugin_tlb-in.o
  LINK    /tmp/build/perf/plugin_futex.so
  LINK    /tmp/build/perf/plugin_function.so
  CC      /tmp/build/perf/staticobjs/xsk.o
  CC      /tmp/build/perf/staticobjs/hashmap.o
  LINK    /tmp/build/perf/plugin_xen.so
  LINK    /tmp/build/perf/plugin_scsi.so
  CC      /tmp/build/perf/staticobjs/btf_dump.o
  LINK    /tmp/build/perf/plugin_tlb.so
  LINK    /tmp/build/perf/plugin_cfg80211.so
  CC      /tmp/build/perf/staticobjs/ringbuf.o
  CC      /tmp/build/perf/staticobjs/strset.o
  CC      /tmp/build/perf/staticobjs/linker.o
  GEN     /tmp/build/perf/libtraceevent-dynamic-list
  CC      /tmp/build/perf/pmu-events/pmu-events.o
  LD      /tmp/build/perf/libsubcmd-in.o
  AR      /tmp/build/perf/libsubcmd.a
  LD      /tmp/build/perf/libperf-in.o
  AR      /tmp/build/perf/libperf.a
  LD      /tmp/build/perf/libtraceevent-in.o
  LINK    /tmp/build/perf/libtraceevent.a
  GEN     /tmp/build/perf/python/perf.so
  CC      /tmp/build/perf/builtin-bench.o
  CC      /tmp/build/perf/builtin-annotate.o
  CC      /tmp/build/perf/builtin-config.o
  CC      /tmp/build/perf/builtin-diff.o
  CC      /tmp/build/perf/builtin-evlist.o
  CC      /tmp/build/perf/builtin-ftrace.o
  CC      /tmp/build/perf/builtin-help.o
  CC      /tmp/build/perf/builtin-sched.o
  CC      /tmp/build/perf/builtin-buildid-list.o
  CC      /tmp/build/perf/builtin-buildid-cache.o
  CC      /tmp/build/perf/builtin-kallsyms.o
  CC      /tmp/build/perf/builtin-list.o
  CC      /tmp/build/perf/builtin-record.o
  CC      /tmp/build/perf/builtin-report.o
  CC      /tmp/build/perf/builtin-stat.o
  CC      /tmp/build/perf/builtin-timechart.o
  CC      /tmp/build/perf/builtin-top.o
  CC      /tmp/build/perf/builtin-script.o
  CC      /tmp/build/perf/builtin-kmem.o
In file included from builtin-top.c:25:
/git/perf-5.13.0/tools/perf/util/cgroup.h:44:35: error: unused parameter 'cgrp' [-Werror,-Wunused-parameter]
int read_cgroup_id(struct cgroup *cgrp)
                                  ^
/git/perf-5.13.0/tools/perf/util/cgroup.h:44:5: error: no previous prototype for function 'read_cgroup_id' [-Werror,-Wmissing-prototypes]
int read_cgroup_id(struct cgroup *cgrp)
    ^
In file included from builtin-stat.c:45:
/git/perf-5.13.0/tools/perf/util/cgroup.h:44:35: error: unused parameter 'cgrp' [-Werror,-Wunused-parameter]
int read_cgroup_id(struct cgroup *cgrp)
                                  ^
/git/perf-5.13.0/tools/perf/util/cgroup.h:44:5: error: no previous prototype for function 'read_cgroup_id' [-Werror,-Wmissing-prototypes]
int read_cgroup_id(struct cgroup *cgrp)
    ^
In file included from builtin-record.c:17:
/git/perf-5.13.0/tools/perf/util/cgroup.h:44:35: error: unused parameter 'cgrp' [-Werror,-Wunused-parameter]
int read_cgroup_id(struct cgroup *cgrp)
                                  ^
/git/perf-5.13.0/tools/perf/util/cgroup.h:44:5: error: no previous prototype for function 'read_cgroup_id' [-Werror,-Wmissing-prototypes]
int read_cgroup_id(struct cgroup *cgrp)
    ^
  CC      /tmp/build/perf/builtin-lock.o
  CC      /tmp/build/perf/builtin-kvm.o
  CC      /tmp/build/perf/builtin-mem.o
  CC      /tmp/build/perf/builtin-data.o
  CC      /tmp/build/perf/builtin-version.o
  CC      /tmp/build/perf/builtin-inject.o
2 errors generated.
make[3]: *** [/git/perf-5.13.0/tools/build/Makefile.build:97: /tmp/build/perf/builtin-record.o] Error 1
make[3]: *** Waiting for unfinished jobs....
2 errors generated.
make[3]: *** [/git/perf-5.13.0/tools/build/Makefile.build:97: /tmp/build/perf/builtin-top.o] Error 1
2 errors generated.
make[3]: *** [/git/perf-5.13.0/tools/build/Makefile.build:97: /tmp/build/perf/builtin-stat.o] Error 1
  LD      /tmp/build/perf/pmu-events/pmu-events-in.o
In file included from /git/perf-5.13.0/tools/perf/util/evsel.c:30:
/git/perf-5.13.0/tools/perf/util/cgroup.h:44:5: error: no previous prototype for function 'read_cgroup_id' [-Werror,-Wmissing-prototypes]


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 3/4] perf tools: Move common bpf functions to bpf_counter.h
  2021-06-25  7:18 ` [PATCH 3/4] perf tools: Move common bpf functions to bpf_counter.h Namhyung Kim
  2021-06-30 18:28   ` Song Liu
@ 2021-07-01 19:09   ` Arnaldo Carvalho de Melo
  2021-07-01 20:11     ` Namhyung Kim
  1 sibling, 1 reply; 21+ messages in thread
From: Arnaldo Carvalho de Melo @ 2021-07-01 19:09 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen,
	Ian Rogers, Stephane Eranian, Song Liu

Em Fri, Jun 25, 2021 at 12:18:25AM -0700, Namhyung Kim escreveu:
> Some helper functions will be used for cgroup counting too.
> Move them to a header file for sharing.
> 
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
>  tools/perf/util/bpf_counter.c | 52 -----------------------------------
>  tools/perf/util/bpf_counter.h | 52 +++++++++++++++++++++++++++++++++++
>  2 files changed, 52 insertions(+), 52 deletions(-)
> 
> diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
> index 974f10e356f0..1af81e882eb6 100644
> --- a/tools/perf/util/bpf_counter.c
> +++ b/tools/perf/util/bpf_counter.c
> @@ -7,12 +7,8 @@
>  #include <unistd.h>
>  #include <sys/file.h>
>  #include <sys/time.h>
> -#include <sys/resource.h>
>  #include <linux/err.h>
>  #include <linux/zalloc.h>
> -#include <bpf/bpf.h>
> -#include <bpf/btf.h>
> -#include <bpf/libbpf.h>
>  #include <api/fs/fs.h>
>  #include <perf/bpf_perf.h>
>  
> @@ -37,13 +33,6 @@ static inline void *u64_to_ptr(__u64 ptr)
>  	return (void *)(unsigned long)ptr;
>  }
>  
> -static void set_max_rlimit(void)
> -{
> -	struct rlimit rinf = { RLIM_INFINITY, RLIM_INFINITY };
> -
> -	setrlimit(RLIMIT_MEMLOCK, &rinf);
> -}
> -
>  static struct bpf_counter *bpf_counter_alloc(void)
>  {
>  	struct bpf_counter *counter;
> @@ -297,33 +286,6 @@ struct bpf_counter_ops bpf_program_profiler_ops = {
>  	.install_pe = bpf_program_profiler__install_pe,
>  };
>  
> -static __u32 bpf_link_get_id(int fd)
> -{
> -	struct bpf_link_info link_info = {0};

Moving this from bpf_counter.c to the header means this code now gets
compiled in places where it wasn't before, as bpf_counter.c is built
only when:

perf-$(CONFIG_PERF_BPF_SKEL) += bpf_counter.o

For instance, this got broken:

  23    33.62 debian:9                      : FAIL clang version 3.8.1-24 (tags/RELEASE_381/final)
    In file included from builtin-stat.c:71:
    /git/perf-5.13.0/tools/perf/util/bpf_counter.h:92:37: error: missing field 'id' initializer [-Werror,-Wmissing-field-initializers]
            struct bpf_link_info link_info = {0};
                                               ^
    /git/perf-5.13.0/tools/perf/util/bpf_counter.h:101:37: error: missing field 'id' initializer [-Werror,-Wmissing-field-initializers]
            struct bpf_link_info link_info = {0};
                                               ^
    /git/perf-5.13.0/tools/perf/util/bpf_counter.h:110:35: error: missing field 'id' initializer [-Werror,-Wmissing-field-initializers]
            struct bpf_map_info map_info = {0};

It's mostly older systems, but I'll fix it anyway.
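
One way to keep those older compilers happy is to drop the {0}
initializers and memset() instead; e.g. for the first helper (a
sketch, not necessarily the fix that was applied):

	static inline __u32 bpf_link_get_id(int fd)
	{
		struct bpf_link_info link_info;
		__u32 link_info_len = sizeof(link_info);

		memset(&link_info, 0, sizeof(link_info));
		bpf_obj_get_info_by_fd(fd, &link_info, &link_info_len);
		return link_info.id;
	}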

> -	__u32 link_info_len = sizeof(link_info);
> -
> -	bpf_obj_get_info_by_fd(fd, &link_info, &link_info_len);
> -	return link_info.id;
> -}
> -
> -static __u32 bpf_link_get_prog_id(int fd)
> -{
> -	struct bpf_link_info link_info = {0};
> -	__u32 link_info_len = sizeof(link_info);
> -
> -	bpf_obj_get_info_by_fd(fd, &link_info, &link_info_len);
> -	return link_info.prog_id;
> -}
> -
> -static __u32 bpf_map_get_id(int fd)
> -{
> -	struct bpf_map_info map_info = {0};
> -	__u32 map_info_len = sizeof(map_info);
> -
> -	bpf_obj_get_info_by_fd(fd, &map_info, &map_info_len);
> -	return map_info.id;
> -}
> -
>  static bool bperf_attr_map_compatible(int attr_map_fd)
>  {
>  	struct bpf_map_info map_info = {0};
> @@ -385,20 +347,6 @@ static int bperf_lock_attr_map(struct target *target)
>  	return map_fd;
>  }
>  
> -/* trigger the leader program on a cpu */
> -static int bperf_trigger_reading(int prog_fd, int cpu)
> -{
> -	DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
> -			    .ctx_in = NULL,
> -			    .ctx_size_in = 0,
> -			    .flags = BPF_F_TEST_RUN_ON_CPU,
> -			    .cpu = cpu,
> -			    .retval = 0,
> -		);
> -
> -	return bpf_prog_test_run_opts(prog_fd, &opts);
> -}
> -
>  static int bperf_check_target(struct evsel *evsel,
>  			      struct target *target,
>  			      enum bperf_filter_type *filter_type,
> diff --git a/tools/perf/util/bpf_counter.h b/tools/perf/util/bpf_counter.h
> index d6d907c3dcf9..185555a9c1db 100644
> --- a/tools/perf/util/bpf_counter.h
> +++ b/tools/perf/util/bpf_counter.h
> @@ -3,6 +3,10 @@
>  #define __PERF_BPF_COUNTER_H 1
>  
>  #include <linux/list.h>
> +#include <sys/resource.h>
> +#include <bpf/bpf.h>
> +#include <bpf/btf.h>
> +#include <bpf/libbpf.h>
>  
>  struct evsel;
>  struct target;
> @@ -76,4 +80,52 @@ static inline int bpf_counter__install_pe(struct evsel *evsel __maybe_unused,
>  
>  #endif /* HAVE_BPF_SKEL */
>  
> +static inline void set_max_rlimit(void)
> +{
> +	struct rlimit rinf = { RLIM_INFINITY, RLIM_INFINITY };
> +
> +	setrlimit(RLIMIT_MEMLOCK, &rinf);
> +}
> +
> +static inline __u32 bpf_link_get_id(int fd)
> +{
> +	struct bpf_link_info link_info = {0};
> +	__u32 link_info_len = sizeof(link_info);
> +
> +	bpf_obj_get_info_by_fd(fd, &link_info, &link_info_len);
> +	return link_info.id;
> +}
> +
> +static inline __u32 bpf_link_get_prog_id(int fd)
> +{
> +	struct bpf_link_info link_info = {0};
> +	__u32 link_info_len = sizeof(link_info);
> +
> +	bpf_obj_get_info_by_fd(fd, &link_info, &link_info_len);
> +	return link_info.prog_id;
> +}
> +
> +static inline __u32 bpf_map_get_id(int fd)
> +{
> +	struct bpf_map_info map_info = {0};
> +	__u32 map_info_len = sizeof(map_info);
> +
> +	bpf_obj_get_info_by_fd(fd, &map_info, &map_info_len);
> +	return map_info.id;
> +}
> +
> +/* trigger the leader program on a cpu */
> +static inline int bperf_trigger_reading(int prog_fd, int cpu)
> +{
> +	DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
> +			    .ctx_in = NULL,
> +			    .ctx_size_in = 0,
> +			    .flags = BPF_F_TEST_RUN_ON_CPU,
> +			    .cpu = cpu,
> +			    .retval = 0,
> +		);
> +
> +	return bpf_prog_test_run_opts(prog_fd, &opts);
> +}
> +
>  #endif /* __PERF_BPF_COUNTER_H */
> -- 
> 2.32.0.93.g670b81a890-goog
> 

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 3/4] perf tools: Move common bpf functions to bpf_counter.h
  2021-07-01 19:09   ` Arnaldo Carvalho de Melo
@ 2021-07-01 20:11     ` Namhyung Kim
  0 siblings, 0 replies; 21+ messages in thread
From: Namhyung Kim @ 2021-07-01 20:11 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML, Andi Kleen,
	Ian Rogers, Stephane Eranian, Song Liu

On Thu, Jul 1, 2021 at 12:09 PM Arnaldo Carvalho de Melo
<acme@kernel.org> wrote:
>
> Em Fri, Jun 25, 2021 at 12:18:25AM -0700, Namhyung Kim escreveu:
> > Some helper functions will be used for cgroup counting too.
> > Move them to a header file for sharing.
> >
> > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> > ---
> >  tools/perf/util/bpf_counter.c | 52 -----------------------------------
> >  tools/perf/util/bpf_counter.h | 52 +++++++++++++++++++++++++++++++++++
> >  2 files changed, 52 insertions(+), 52 deletions(-)
> >
> > diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
> > index 974f10e356f0..1af81e882eb6 100644
> > --- a/tools/perf/util/bpf_counter.c
> > +++ b/tools/perf/util/bpf_counter.c
> > @@ -7,12 +7,8 @@
> >  #include <unistd.h>
> >  #include <sys/file.h>
> >  #include <sys/time.h>
> > -#include <sys/resource.h>
> >  #include <linux/err.h>
> >  #include <linux/zalloc.h>
> > -#include <bpf/bpf.h>
> > -#include <bpf/btf.h>
> > -#include <bpf/libbpf.h>
> >  #include <api/fs/fs.h>
> >  #include <perf/bpf_perf.h>
> >
> > @@ -37,13 +33,6 @@ static inline void *u64_to_ptr(__u64 ptr)
> >       return (void *)(unsigned long)ptr;
> >  }
> >
> > -static void set_max_rlimit(void)
> > -{
> > -     struct rlimit rinf = { RLIM_INFINITY, RLIM_INFINITY };
> > -
> > -     setrlimit(RLIMIT_MEMLOCK, &rinf);
> > -}
> > -
> >  static struct bpf_counter *bpf_counter_alloc(void)
> >  {
> >       struct bpf_counter *counter;
> > @@ -297,33 +286,6 @@ struct bpf_counter_ops bpf_program_profiler_ops = {
> >       .install_pe = bpf_program_profiler__install_pe,
> >  };
> >
> > -static __u32 bpf_link_get_id(int fd)
> > -{
> > -     struct bpf_link_info link_info = {0};
>
> Moving this from bpf_counter.c to the header means this code now gets
> compiled in places where it wasn't before, as bpf_counter.c is built
> only when:
>
> perf-$(CONFIG_PERF_BPF_SKEL) += bpf_counter.o
>
> For instance, this got broken:
>
>   23    33.62 debian:9                      : FAIL clang version 3.8.1-24 (tags/RELEASE_381/final)
>     In file included from builtin-stat.c:71:
>     /git/perf-5.13.0/tools/perf/util/bpf_counter.h:92:37: error: missing field 'id' initializer [-Werror,-Wmissing-field-initializers]
>             struct bpf_link_info link_info = {0};
>                                                ^
>     /git/perf-5.13.0/tools/perf/util/bpf_counter.h:101:37: error: missing field 'id' initializer [-Werror,-Wmissing-field-initializers]
>             struct bpf_link_info link_info = {0};
>                                                ^
>     /git/perf-5.13.0/tools/perf/util/bpf_counter.h:110:35: error: missing field 'id' initializer [-Werror,-Wmissing-field-initializers]
>             struct bpf_map_info map_info = {0};
>
> It's mostly older systems, but I'll fix it anyway.

Thanks a lot for fixing all the messes.
Assuming you've fixed the preparation patches,
I'll just update the last one next time.

Thanks,
Namhyung


>
> > -     __u32 link_info_len = sizeof(link_info);
> > -
> > -     bpf_obj_get_info_by_fd(fd, &link_info, &link_info_len);
> > -     return link_info.id;
> > -}
> > -
> > -static __u32 bpf_link_get_prog_id(int fd)
> > -{
> > -     struct bpf_link_info link_info = {0};
> > -     __u32 link_info_len = sizeof(link_info);
> > -
> > -     bpf_obj_get_info_by_fd(fd, &link_info, &link_info_len);
> > -     return link_info.prog_id;
> > -}
> > -
> > -static __u32 bpf_map_get_id(int fd)
> > -{
> > -     struct bpf_map_info map_info = {0};
> > -     __u32 map_info_len = sizeof(map_info);
> > -
> > -     bpf_obj_get_info_by_fd(fd, &map_info, &map_info_len);
> > -     return map_info.id;
> > -}
> > -
> >  static bool bperf_attr_map_compatible(int attr_map_fd)
> >  {
> >       struct bpf_map_info map_info = {0};
> > @@ -385,20 +347,6 @@ static int bperf_lock_attr_map(struct target *target)
> >       return map_fd;
> >  }
> >
> > -/* trigger the leader program on a cpu */
> > -static int bperf_trigger_reading(int prog_fd, int cpu)
> > -{
> > -     DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
> > -                         .ctx_in = NULL,
> > -                         .ctx_size_in = 0,
> > -                         .flags = BPF_F_TEST_RUN_ON_CPU,
> > -                         .cpu = cpu,
> > -                         .retval = 0,
> > -             );
> > -
> > -     return bpf_prog_test_run_opts(prog_fd, &opts);
> > -}
> > -
> >  static int bperf_check_target(struct evsel *evsel,
> >                             struct target *target,
> >                             enum bperf_filter_type *filter_type,
> > diff --git a/tools/perf/util/bpf_counter.h b/tools/perf/util/bpf_counter.h
> > index d6d907c3dcf9..185555a9c1db 100644
> > --- a/tools/perf/util/bpf_counter.h
> > +++ b/tools/perf/util/bpf_counter.h
> > @@ -3,6 +3,10 @@
> >  #define __PERF_BPF_COUNTER_H 1
> >
> >  #include <linux/list.h>
> > +#include <sys/resource.h>
> > +#include <bpf/bpf.h>
> > +#include <bpf/btf.h>
> > +#include <bpf/libbpf.h>
> >
> >  struct evsel;
> >  struct target;
> > @@ -76,4 +80,52 @@ static inline int bpf_counter__install_pe(struct evsel *evsel __maybe_unused,
> >
> >  #endif /* HAVE_BPF_SKEL */
> >
> > +static inline void set_max_rlimit(void)
> > +{
> > +     struct rlimit rinf = { RLIM_INFINITY, RLIM_INFINITY };
> > +
> > +     setrlimit(RLIMIT_MEMLOCK, &rinf);
> > +}
> > +
> > +static inline __u32 bpf_link_get_id(int fd)
> > +{
> > +     struct bpf_link_info link_info = {0};
> > +     __u32 link_info_len = sizeof(link_info);
> > +
> > +     bpf_obj_get_info_by_fd(fd, &link_info, &link_info_len);
> > +     return link_info.id;
> > +}
> > +
> > +static inline __u32 bpf_link_get_prog_id(int fd)
> > +{
> > +     struct bpf_link_info link_info = {0};
> > +     __u32 link_info_len = sizeof(link_info);
> > +
> > +     bpf_obj_get_info_by_fd(fd, &link_info, &link_info_len);
> > +     return link_info.prog_id;
> > +}
> > +
> > +static inline __u32 bpf_map_get_id(int fd)
> > +{
> > +     struct bpf_map_info map_info = {0};
> > +     __u32 map_info_len = sizeof(map_info);
> > +
> > +     bpf_obj_get_info_by_fd(fd, &map_info, &map_info_len);
> > +     return map_info.id;
> > +}
> > +
> > +/* trigger the leader program on a cpu */
> > +static inline int bperf_trigger_reading(int prog_fd, int cpu)
> > +{
> > +     DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
> > +                         .ctx_in = NULL,
> > +                         .ctx_size_in = 0,
> > +                         .flags = BPF_F_TEST_RUN_ON_CPU,
> > +                         .cpu = cpu,
> > +                         .retval = 0,
> > +             );
> > +
> > +     return bpf_prog_test_run_opts(prog_fd, &opts);
> > +}
> > +
> >  #endif /* __PERF_BPF_COUNTER_H */
> > --
> > 2.32.0.93.g670b81a890-goog
> >
>
> --
>
> - Arnaldo

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup
  2021-06-30 20:09     ` Namhyung Kim
@ 2021-07-01 20:16       ` Namhyung Kim
  0 siblings, 0 replies; 21+ messages in thread
From: Namhyung Kim @ 2021-07-01 20:16 UTC (permalink / raw)
  To: Song Liu
  Cc: Arnaldo Carvalho de Melo, Jiri Olsa, Ingo Molnar, Peter Zijlstra,
	LKML, Andi Kleen, Ian Rogers, Stephane Eranian

On Wed, Jun 30, 2021 at 1:09 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> Hi Song,
>
> On Wed, Jun 30, 2021 at 11:47 AM Song Liu <songliubraving@fb.com> wrote:
> >
> >
> >
> > > On Jun 25, 2021, at 12:18 AM, Namhyung Kim <namhyung@kernel.org> wrote:
> > >
> > > Recently bperf was added to use BPF to count perf events for various
> > > purposes.  This is an extension for the approach and targetting to
> > > cgroup usages.
> > >
> > > Unlike the other bperf, it doesn't share the events with other
> > > processes but it'd reduce unnecessary events (and the overhead of
> > > multiplexing) for each monitored cgroup within the perf session.
> > >
> > > When --for-each-cgroup is used with --bpf-counters, it will open
> > > cgroup-switches event per cpu internally and attach the new BPF
> > > program to read given perf_events and to aggregate the results for
> > > cgroups.  It's only called when task is switched to a task in a
> > > different cgroup.
> > >
> > > Cc: Song Liu <songliubraving@fb.com>
> > > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> > > ---
> > > tools/perf/Makefile.perf                    |  17 +-
> > > tools/perf/util/Build                       |   1 +
> > > tools/perf/util/bpf_counter.c               |   5 +
> > > tools/perf/util/bpf_counter_cgroup.c        | 299 ++++++++++++++++++++
> > > tools/perf/util/bpf_skel/bperf_cgroup.bpf.c | 191 +++++++++++++
> > > tools/perf/util/cgroup.c                    |   2 +
> > > tools/perf/util/cgroup.h                    |   1 +
> > > 7 files changed, 515 insertions(+), 1 deletion(-)
> > > create mode 100644 tools/perf/util/bpf_counter_cgroup.c
> > > create mode 100644 tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> >
> > [...]
> >
> > > diff --git a/tools/perf/util/bpf_counter_cgroup.c b/tools/perf/util/bpf_counter_cgroup.c
> > > new file mode 100644
> > > index 000000000000..327f97a23a84
> > > --- /dev/null
> > > +++ b/tools/perf/util/bpf_counter_cgroup.c
> > > @@ -0,0 +1,299 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +
> > > +/* Copyright (c) 2019 Facebook */
> >
> > I am not sure whether this ^^^ is accurate.
>
> Well, I just copied it from the bpf_counter.c file, which was the base
> of this patch.  At this point I don't think many lines of code came
> directly from the original.
>
> So I'm not sure what I can do.  Do you want me to update the
> copyright year to 2021?  Or are you ok with removing the
> line altogether?
>

> > [...]
> >
> > > +
> > > +/*
> > > + * trigger the leader prog on each cpu, so the cgrp_readings map gets
> > > + * the latest results.
> > > + */
> > > +static int bperf_cgrp__sync_counters(struct evlist *evlist)
> > > +{
> > > +     int i, cpu;
> > > +     int nr_cpus = evlist->core.all_cpus->nr;
> > > +     int prog_fd = bpf_program__fd(skel->progs.trigger_read);
> > > +
> > > +     for (i = 0; i < nr_cpus; i++) {
> > > +             cpu = evlist->core.all_cpus->map[i];
> > > +             bperf_trigger_reading(prog_fd, cpu);
> > > +     }
> > > +
> > > +     return 0;
> > > +}
> > > +
> > > +static int bperf_cgrp__enable(struct evsel *evsel)
> > > +{
> >
> > Do we need to call bperf_cgrp__sync_counters() before setting enabled to 1?
> > If we don't, we may count some numbers before setting enabled to 1, no?
>
> Actually it updates the prev_readings even when enabled = 0.
> So I think it should get the correct counts after setting it to 1,
> even without calling bperf_cgrp__sync_counters().

I thought about this again, and you're right.  Will change.

Thanks,
Namhyung
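
To make the point above concrete, here is a rough sketch of the per-cpu
delta accounting the BPF program would do on a cgroup switch.  The map
and flag names (events, prev_readings, cgrp_readings, enabled) follow
the discussion in this thread, but the function name, the key
computation and the map layouts are simplified assumptions, not the
actual patch.

	static void update_cgroup_counters(__u32 idx, /* evt * nr_cpus + cpu */
					   __u32 key) /* cgrp * nr_evts + evt */
	{
		struct bpf_perf_event_value val, *prev, *cgrp_val;

		/* read the hardware counter for the current cpu */
		if (bpf_perf_event_read_value(&events, BPF_F_CURRENT_CPU,
					      &val, sizeof(val)))
			return;

		prev = bpf_map_lookup_elem(&prev_readings, &idx);
		if (!prev)
			return;

		/* only charge the outgoing cgroup while counting is enabled */
		if (enabled) {
			cgrp_val = bpf_map_lookup_elem(&cgrp_readings, &key);
			if (cgrp_val) {
				cgrp_val->counter += val.counter - prev->counter;
				cgrp_val->enabled += val.enabled - prev->enabled;
				cgrp_val->running += val.running - prev->running;
			}
		}

		/*
		 * prev_readings is updated even when enabled == 0, which is
		 * why the counts are correct right after enabled flips to 1.
		 */
		*prev = val;
	}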

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread

Thread overview: 21+ messages
2021-06-25  7:18 [PATCHSET v4 0/4] perf stat: Enable BPF counters with --for-each-cgroup Namhyung Kim
2021-06-25  7:18 ` [PATCH 1/4] perf tools: Add read_cgroup_id() function Namhyung Kim
2021-07-01 17:59   ` Arnaldo Carvalho de Melo
2021-06-25  7:18 ` [PATCH 2/4] perf tools: Add cgroup_is_v2() helper Namhyung Kim
2021-06-29 15:51   ` Ian Rogers
2021-06-30  6:35     ` Namhyung Kim
2021-06-30 18:43       ` Arnaldo Carvalho de Melo
2021-06-25  7:18 ` [PATCH 3/4] perf tools: Move common bpf functions to bpf_counter.h Namhyung Kim
2021-06-30 18:28   ` Song Liu
2021-07-01 19:09   ` Arnaldo Carvalho de Melo
2021-07-01 20:11     ` Namhyung Kim
2021-06-25  7:18 ` [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup Namhyung Kim
2021-06-30 18:47   ` Song Liu
2021-06-30 20:09     ` Namhyung Kim
2021-07-01 20:16       ` Namhyung Kim
2021-06-30 18:50   ` Arnaldo Carvalho de Melo
2021-06-30 20:12     ` Namhyung Kim
2021-07-01 13:43     ` Arnaldo Carvalho de Melo
2021-07-01 17:10       ` Namhyung Kim
2021-06-27 15:29 ` [PATCHSET v4 0/4] perf stat: Enable BPF counters " Namhyung Kim
2021-06-30  6:19   ` Namhyung Kim
