bpf.vger.kernel.org archive mirror
* [PATCH bpf-next 0/2] samples: bpf: refactor perf_event user program with libbpf bpf_link
@ 2020-03-10  5:51 Daniel T. Lee
  2020-03-10  5:51 ` [PATCH bpf-next 1/2] samples: bpf: move read_trace_pipe to trace_helpers Daniel T. Lee
  2020-03-10  5:51 ` [PATCH bpf-next 2/2] samples: bpf: refactor perf_event user program with libbpf bpf_link Daniel T. Lee
  0 siblings, 2 replies; 6+ messages in thread
From: Daniel T. Lee @ 2020-03-10  5:51 UTC (permalink / raw)
  To: Daniel Borkmann, Alexei Starovoitov; +Cc: Andrii Nakryiko, netdev, bpf

Currently, some samples use ioctl to enable a perf_event and to attach
BPF programs to it. However, libbpf's bpf_program__attach (using
bpf_link) is much more intuitive than the previous ioctl-based method.

bpf_program__attach_perf_event handles both enabling the perf_event and
attaching the BPF program to it, so there is no need to do this directly
with ioctl.

In addition, bpf_link makes the API usage consistent: disabling
(detaching, destroying) each of the attached events is handled by a
single bpf_link__destroy call per link.

To refactor the samples with this libbpf API, their use of bpf_load was
removed and migrated to libbpf. Because read_trace_pipe lives in
bpf_load, several samples could not be migrated to libbpf until this
function was moved to trace_helpers.
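
To illustrate, the per-CPU attach/teardown pattern in the samples changes
roughly as follows (simplified sketch, error handling trimmed; not taken
verbatim from any one sample):

	/* before: attach and enable via ioctl, tear down via ioctl + close */
	ioctl(pmu_fd[i], PERF_EVENT_IOC_SET_BPF, prog_fd);
	ioctl(pmu_fd[i], PERF_EVENT_IOC_ENABLE, 0);
	...
	ioctl(pmu_fd[i], PERF_EVENT_IOC_DISABLE, 0);
	close(pmu_fd[i]);

	/* after: libbpf attaches the program and enables the event */
	struct bpf_link *link = bpf_program__attach_perf_event(prog, pmu_fd[i]);

	if (libbpf_get_error(link))
		/* handle attach failure */;
	...
	/* one call detaches the program and tears down the event */
	bpf_link__destroy(link);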

Daniel T. Lee (2):
  samples: bpf: move read_trace_pipe to trace_helpers
  samples: bpf: refactor perf_event user program with libbpf bpf_link

 samples/bpf/Makefile                        |  8 +--
 samples/bpf/bpf_load.c                      | 20 -------
 samples/bpf/bpf_load.h                      |  1 -
 samples/bpf/sampleip_user.c                 | 58 +++++++++++++--------
 samples/bpf/trace_event_user.c              | 57 +++++++++++++-------
 samples/bpf/tracex1_user.c                  |  1 +
 samples/bpf/tracex5_user.c                  |  1 +
 tools/testing/selftests/bpf/trace_helpers.c | 23 ++++++++
 tools/testing/selftests/bpf/trace_helpers.h |  1 +
 9 files changed, 106 insertions(+), 64 deletions(-)

-- 
2.25.1



* [PATCH bpf-next 1/2] samples: bpf: move read_trace_pipe to trace_helpers
  2020-03-10  5:51 [PATCH bpf-next 0/2] samples: bpf: refactor perf_event user program with libbpf bpf_link Daniel T. Lee
@ 2020-03-10  5:51 ` Daniel T. Lee
  2020-03-10 21:11   ` John Fastabend
  2020-03-10  5:51 ` [PATCH bpf-next 2/2] samples: bpf: refactor perf_event user program with libbpf bpf_link Daniel T. Lee
  1 sibling, 1 reply; 6+ messages in thread
From: Daniel T. Lee @ 2020-03-10  5:51 UTC (permalink / raw)
  To: Daniel Borkmann, Alexei Starovoitov; +Cc: Andrii Nakryiko, netdev, bpf

To reduce the reliance of the trace samples (trace*_user) on bpf_load,
move read_trace_pipe to trace_helpers. With this bpf_load helper moved
elsewhere, the trace samples can be easily migrated to libbpf.

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
---
 samples/bpf/Makefile                        |  4 ++--
 samples/bpf/bpf_load.c                      | 20 ------------------
 samples/bpf/bpf_load.h                      |  1 -
 samples/bpf/tracex1_user.c                  |  1 +
 samples/bpf/tracex5_user.c                  |  1 +
 tools/testing/selftests/bpf/trace_helpers.c | 23 +++++++++++++++++++++
 tools/testing/selftests/bpf/trace_helpers.h |  1 +
 7 files changed, 28 insertions(+), 23 deletions(-)

diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
index 79b0fee6943b..ff0061467dd3 100644
--- a/samples/bpf/Makefile
+++ b/samples/bpf/Makefile
@@ -64,11 +64,11 @@ fds_example-objs := fds_example.o
 sockex1-objs := sockex1_user.o
 sockex2-objs := sockex2_user.o
 sockex3-objs := bpf_load.o sockex3_user.o
-tracex1-objs := bpf_load.o tracex1_user.o
+tracex1-objs := bpf_load.o tracex1_user.o $(TRACE_HELPERS)
 tracex2-objs := bpf_load.o tracex2_user.o
 tracex3-objs := bpf_load.o tracex3_user.o
 tracex4-objs := bpf_load.o tracex4_user.o
-tracex5-objs := bpf_load.o tracex5_user.o
+tracex5-objs := bpf_load.o tracex5_user.o $(TRACE_HELPERS)
 tracex6-objs := bpf_load.o tracex6_user.o
 tracex7-objs := bpf_load.o tracex7_user.o
 test_probe_write_user-objs := bpf_load.o test_probe_write_user_user.o
diff --git a/samples/bpf/bpf_load.c b/samples/bpf/bpf_load.c
index 4574b1939e49..c5ad528f046e 100644
--- a/samples/bpf/bpf_load.c
+++ b/samples/bpf/bpf_load.c
@@ -665,23 +665,3 @@ int load_bpf_file_fixup_map(const char *path, fixup_map_cb fixup_map)
 {
 	return do_load_bpf_file(path, fixup_map);
 }
-
-void read_trace_pipe(void)
-{
-	int trace_fd;
-
-	trace_fd = open(DEBUGFS "trace_pipe", O_RDONLY, 0);
-	if (trace_fd < 0)
-		return;
-
-	while (1) {
-		static char buf[4096];
-		ssize_t sz;
-
-		sz = read(trace_fd, buf, sizeof(buf) - 1);
-		if (sz > 0) {
-			buf[sz] = 0;
-			puts(buf);
-		}
-	}
-}
diff --git a/samples/bpf/bpf_load.h b/samples/bpf/bpf_load.h
index 814894a12974..4fcd258c616f 100644
--- a/samples/bpf/bpf_load.h
+++ b/samples/bpf/bpf_load.h
@@ -53,6 +53,5 @@ extern int map_data_count;
 int load_bpf_file(char *path);
 int load_bpf_file_fixup_map(const char *path, fixup_map_cb fixup_map);
 
-void read_trace_pipe(void);
 int bpf_set_link_xdp_fd(int ifindex, int fd, __u32 flags);
 #endif
diff --git a/samples/bpf/tracex1_user.c b/samples/bpf/tracex1_user.c
index af8c20608ab5..55fddbd08702 100644
--- a/samples/bpf/tracex1_user.c
+++ b/samples/bpf/tracex1_user.c
@@ -4,6 +4,7 @@
 #include <unistd.h>
 #include <bpf/bpf.h>
 #include "bpf_load.h"
+#include "trace_helpers.h"
 
 int main(int ac, char **argv)
 {
diff --git a/samples/bpf/tracex5_user.c b/samples/bpf/tracex5_user.c
index c4ab91c89494..c2317b39e0d2 100644
--- a/samples/bpf/tracex5_user.c
+++ b/samples/bpf/tracex5_user.c
@@ -8,6 +8,7 @@
 #include <bpf/bpf.h>
 #include "bpf_load.h"
 #include <sys/resource.h>
+#include "trace_helpers.h"
 
 /* install fake seccomp program to enable seccomp code path inside the kernel,
  * so that our kprobe attached to seccomp_phase1() can be triggered
diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
index 7f989b3e4e22..4d0e913bbb22 100644
--- a/tools/testing/selftests/bpf/trace_helpers.c
+++ b/tools/testing/selftests/bpf/trace_helpers.c
@@ -4,12 +4,15 @@
 #include <string.h>
 #include <assert.h>
 #include <errno.h>
+#include <fcntl.h>
 #include <poll.h>
 #include <unistd.h>
 #include <linux/perf_event.h>
 #include <sys/mman.h>
 #include "trace_helpers.h"
 
+#define DEBUGFS "/sys/kernel/debug/tracing/"
+
 #define MAX_SYMS 300000
 static struct ksym syms[MAX_SYMS];
 static int sym_cnt;
@@ -86,3 +89,23 @@ long ksym_get_addr(const char *name)
 
 	return 0;
 }
+
+void read_trace_pipe(void)
+{
+	int trace_fd;
+
+	trace_fd = open(DEBUGFS "trace_pipe", O_RDONLY, 0);
+	if (trace_fd < 0)
+		return;
+
+	while (1) {
+		static char buf[4096];
+		ssize_t sz;
+
+		sz = read(trace_fd, buf, sizeof(buf) - 1);
+		if (sz > 0) {
+			buf[sz] = 0;
+			puts(buf);
+		}
+	}
+}
diff --git a/tools/testing/selftests/bpf/trace_helpers.h b/tools/testing/selftests/bpf/trace_helpers.h
index 0383c9b8adc1..25ef597dd03f 100644
--- a/tools/testing/selftests/bpf/trace_helpers.h
+++ b/tools/testing/selftests/bpf/trace_helpers.h
@@ -12,5 +12,6 @@ struct ksym {
 int load_kallsyms(void);
 struct ksym *ksym_search(long key);
 long ksym_get_addr(const char *name);
+void read_trace_pipe(void);
 
 #endif
-- 
2.25.1



* [PATCH bpf-next 2/2] samples: bpf: refactor perf_event user program with libbpf bpf_link
  2020-03-10  5:51 [PATCH bpf-next 0/2] samples: bpf: refactor perf_event user program with libbpf bpf_link Daniel T. Lee
  2020-03-10  5:51 ` [PATCH bpf-next 1/2] samples: bpf: move read_trace_pipe to trace_helpers Daniel T. Lee
@ 2020-03-10  5:51 ` Daniel T. Lee
  2020-03-10 21:33   ` John Fastabend
  1 sibling, 1 reply; 6+ messages in thread
From: Daniel T. Lee @ 2020-03-10  5:51 UTC (permalink / raw)
  To: Daniel Borkmann, Alexei Starovoitov; +Cc: Andrii Nakryiko, netdev, bpf

The bpf_program__attach of libbpf (using bpf_link) is much more intuitive
than the previous method using ioctl.

bpf_program__attach_perf_event handles both enabling the perf_event and
attaching the BPF program to it, so there is no need to do this directly
with ioctl.

In addition, bpf_link makes the API usage consistent: disabling
(detaching, destroying) each of the attached events is handled by a
single bpf_link__destroy call per link.

This commit refactors the samples that attach a BPF program to a
perf_event to use libbpf instead of ioctl. The samples' use of bpf_load
was also removed and migrated to the libbpf API.

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
---
 samples/bpf/Makefile           |  4 +--
 samples/bpf/sampleip_user.c    | 58 ++++++++++++++++++++++------------
 samples/bpf/trace_event_user.c | 57 ++++++++++++++++++++++-----------
 3 files changed, 78 insertions(+), 41 deletions(-)

diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
index ff0061467dd3..424f6fe7ce38 100644
--- a/samples/bpf/Makefile
+++ b/samples/bpf/Makefile
@@ -88,8 +88,8 @@ xdp2-objs := xdp1_user.o
 xdp_router_ipv4-objs := xdp_router_ipv4_user.o
 test_current_task_under_cgroup-objs := bpf_load.o $(CGROUP_HELPERS) \
 				       test_current_task_under_cgroup_user.o
-trace_event-objs := bpf_load.o trace_event_user.o $(TRACE_HELPERS)
-sampleip-objs := bpf_load.o sampleip_user.o $(TRACE_HELPERS)
+trace_event-objs := trace_event_user.o $(TRACE_HELPERS)
+sampleip-objs := sampleip_user.o $(TRACE_HELPERS)
 tc_l2_redirect-objs := bpf_load.o tc_l2_redirect_user.o
 lwt_len_hist-objs := bpf_load.o lwt_len_hist_user.o
 xdp_tx_iptunnel-objs := xdp_tx_iptunnel_user.o
diff --git a/samples/bpf/sampleip_user.c b/samples/bpf/sampleip_user.c
index b0f115f938bc..8a94ff558b17 100644
--- a/samples/bpf/sampleip_user.c
+++ b/samples/bpf/sampleip_user.c
@@ -10,13 +10,11 @@
 #include <errno.h>
 #include <signal.h>
 #include <string.h>
-#include <assert.h>
 #include <linux/perf_event.h>
 #include <linux/ptrace.h>
 #include <linux/bpf.h>
-#include <sys/ioctl.h>
+#include <bpf/bpf.h>
 #include <bpf/libbpf.h>
-#include "bpf_load.h"
 #include "perf-sys.h"
 #include "trace_helpers.h"
 
@@ -25,6 +23,7 @@
 #define MAX_IPS		8192
 #define PAGE_OFFSET	0xffff880000000000
 
+static int map_fd;
 static int nr_cpus;
 
 static void usage(void)
@@ -34,7 +33,8 @@ static void usage(void)
 	printf("       duration   # sampling duration (seconds), default 5\n");
 }
 
-static int sampling_start(int *pmu_fd, int freq)
+static int sampling_start(int *pmu_fd, int freq, struct bpf_program *prog,
+			  struct bpf_link **link)
 {
 	int i;
 
@@ -53,20 +53,22 @@ static int sampling_start(int *pmu_fd, int freq)
 			fprintf(stderr, "ERROR: Initializing perf sampling\n");
 			return 1;
 		}
-		assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_SET_BPF,
-			     prog_fd[0]) == 0);
-		assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_ENABLE, 0) == 0);
+		link[i] = bpf_program__attach_perf_event(prog, pmu_fd[i]);
+		if (link[i] < 0) {
+			fprintf(stderr, "ERROR: Attach perf event\n");
+			return 1;
+		}
 	}
 
 	return 0;
 }
 
-static void sampling_end(int *pmu_fd)
+static void sampling_end(struct bpf_link **link)
 {
 	int i;
 
 	for (i = 0; i < nr_cpus; i++)
-		close(pmu_fd[i]);
+		bpf_link__destroy(link[i]);
 }
 
 struct ipcount {
@@ -128,14 +130,17 @@ static void print_ip_map(int fd)
 static void int_exit(int sig)
 {
 	printf("\n");
-	print_ip_map(map_fd[0]);
+	print_ip_map(map_fd);
 	exit(0);
 }
 
 int main(int argc, char **argv)
 {
+	int prog_fd, *pmu_fd, opt, freq = DEFAULT_FREQ, secs = DEFAULT_SECS;
+	struct bpf_program *prog;
+	struct bpf_object *obj;
+	struct bpf_link **link;
 	char filename[256];
-	int *pmu_fd, opt, freq = DEFAULT_FREQ, secs = DEFAULT_SECS;
 
 	/* process arguments */
 	while ((opt = getopt(argc, argv, "F:h")) != -1) {
@@ -165,36 +170,47 @@ int main(int argc, char **argv)
 	/* create perf FDs for each CPU */
 	nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
 	pmu_fd = malloc(nr_cpus * sizeof(int));
-	if (pmu_fd == NULL) {
-		fprintf(stderr, "ERROR: malloc of pmu_fd\n");
+	link = malloc(nr_cpus * sizeof(struct bpf_link *));
+	if (pmu_fd == NULL || link == NULL) {
+		fprintf(stderr, "ERROR: malloc of pmu_fd/link\n");
 		return 1;
 	}
 
 	/* load BPF program */
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
-	if (load_bpf_file(filename)) {
+	if (bpf_prog_load(filename, BPF_PROG_TYPE_PERF_EVENT, &obj, &prog_fd)) {
 		fprintf(stderr, "ERROR: loading BPF program (errno %d):\n",
 			errno);
-		if (strcmp(bpf_log_buf, "") == 0)
-			fprintf(stderr, "Try: ulimit -l unlimited\n");
-		else
-			fprintf(stderr, "%s", bpf_log_buf);
 		return 1;
 	}
+
+	prog = bpf_program__next(NULL, obj);
+	if (!prog) {
+		printf("finding a prog in obj file failed\n");
+		return 1;
+	}
+
+	map_fd = bpf_object__find_map_fd_by_name(obj, "ip_map");
+	if (map_fd < 0) {
+		printf("finding a ip_map map in obj file failed\n");
+		return 1;
+	}
+
 	signal(SIGINT, int_exit);
 	signal(SIGTERM, int_exit);
 
 	/* do sampling */
 	printf("Sampling at %d Hertz for %d seconds. Ctrl-C also ends.\n",
 	       freq, secs);
-	if (sampling_start(pmu_fd, freq) != 0)
+	if (sampling_start(pmu_fd, freq, prog, link) != 0)
 		return 1;
 	sleep(secs);
-	sampling_end(pmu_fd);
+	sampling_end(link);
 	free(pmu_fd);
+	free(link);
 
 	/* output sample counts */
-	print_ip_map(map_fd[0]);
+	print_ip_map(map_fd);
 
 	return 0;
 }
diff --git a/samples/bpf/trace_event_user.c b/samples/bpf/trace_event_user.c
index 356171bc392b..fb5c7b91e74c 100644
--- a/samples/bpf/trace_event_user.c
+++ b/samples/bpf/trace_event_user.c
@@ -6,22 +6,21 @@
 #include <stdlib.h>
 #include <stdbool.h>
 #include <string.h>
-#include <fcntl.h>
-#include <poll.h>
-#include <sys/ioctl.h>
 #include <linux/perf_event.h>
 #include <linux/bpf.h>
 #include <signal.h>
-#include <assert.h>
 #include <errno.h>
 #include <sys/resource.h>
+#include <bpf/bpf.h>
 #include <bpf/libbpf.h>
-#include "bpf_load.h"
 #include "perf-sys.h"
 #include "trace_helpers.h"
 
 #define SAMPLE_FREQ 50
 
+/* counts, stackmap */
+static int map_fd[2];
+struct bpf_program *prog;
 static bool sys_read_seen, sys_write_seen;
 
 static void print_ksym(__u64 addr)
@@ -137,6 +136,7 @@ static inline int generate_load(void)
 static void test_perf_event_all_cpu(struct perf_event_attr *attr)
 {
 	int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
+	struct bpf_link **link = malloc(nr_cpus * sizeof(struct bpf_link *));
 	int *pmu_fd = malloc(nr_cpus * sizeof(int));
 	int i, error = 0;
 
@@ -151,8 +151,12 @@ static void test_perf_event_all_cpu(struct perf_event_attr *attr)
 			error = 1;
 			goto all_cpu_err;
 		}
-		assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_SET_BPF, prog_fd[0]) == 0);
-		assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_ENABLE) == 0);
+		link[i] = bpf_program__attach_perf_event(prog, pmu_fd[i]);
+		if (link[i] < 0) {
+			printf("bpf_program__attach_perf_event failed\n");
+			error = 1;
+			goto all_cpu_err;
+		}
 	}
 
 	if (generate_load() < 0) {
@@ -161,11 +165,11 @@ static void test_perf_event_all_cpu(struct perf_event_attr *attr)
 	}
 	print_stacks();
 all_cpu_err:
-	for (i--; i >= 0; i--) {
-		ioctl(pmu_fd[i], PERF_EVENT_IOC_DISABLE);
-		close(pmu_fd[i]);
-	}
+	for (i--; i >= 0; i--)
+		bpf_link__destroy(link[i]);
+
 	free(pmu_fd);
+	free(link);
 	if (error)
 		int_exit(0);
 }
@@ -173,6 +177,7 @@ static void test_perf_event_all_cpu(struct perf_event_attr *attr)
 static void test_perf_event_task(struct perf_event_attr *attr)
 {
 	int pmu_fd, error = 0;
+	struct bpf_link *link;
 
 	/* per task perf event, enable inherit so the "dd ..." command can be traced properly.
 	 * Enabling inherit will cause bpf_perf_prog_read_time helper failure.
@@ -185,8 +190,12 @@ static void test_perf_event_task(struct perf_event_attr *attr)
 		printf("sys_perf_event_open failed\n");
 		int_exit(0);
 	}
-	assert(ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd[0]) == 0);
-	assert(ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE) == 0);
+	link = bpf_program__attach_perf_event(prog, pmu_fd);
+	if (link < 0) {
+		printf("bpf_program__attach_perf_event failed\n");
+		close(pmu_fd);
+		int_exit(0);
+	}
 
 	if (generate_load() < 0) {
 		error = 1;
@@ -194,8 +203,7 @@ static void test_perf_event_task(struct perf_event_attr *attr)
 	}
 	print_stacks();
 err:
-	ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
-	close(pmu_fd);
+	bpf_link__destroy(link);
 	if (error)
 		int_exit(0);
 }
@@ -282,7 +290,9 @@ static void test_bpf_perf_event(void)
 int main(int argc, char **argv)
 {
 	struct rlimit r = {RLIM_INFINITY, RLIM_INFINITY};
+	struct bpf_object *obj;
 	char filename[256];
+	int prog_fd;
 
 	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
 	setrlimit(RLIMIT_MEMLOCK, &r);
@@ -295,9 +305,20 @@ int main(int argc, char **argv)
 		return 1;
 	}
 
-	if (load_bpf_file(filename)) {
-		printf("%s", bpf_log_buf);
-		return 2;
+	if (bpf_prog_load(filename, BPF_PROG_TYPE_PERF_EVENT, &obj, &prog_fd))
+		return 1;
+
+	prog = bpf_program__next(NULL, obj);
+	if (!prog) {
+		printf("finding a prog in obj file failed\n");
+		return 1;
+	}
+
+	map_fd[0] = bpf_object__find_map_fd_by_name(obj, "counts");
+	map_fd[1] = bpf_object__find_map_fd_by_name(obj, "stackmap");
+	if (map_fd[0] < 0 || map_fd[1] < 0) {
+		printf("finding a counts/stackmap map in obj file failed\n");
+		return 1;
 	}
 
 	if (fork() == 0) {
-- 
2.25.1



* RE: [PATCH bpf-next 1/2] samples: bpf: move read_trace_pipe to trace_helpers
  2020-03-10  5:51 ` [PATCH bpf-next 1/2] samples: bpf: move read_trace_pipe to trace_helpers Daniel T. Lee
@ 2020-03-10 21:11   ` John Fastabend
  0 siblings, 0 replies; 6+ messages in thread
From: John Fastabend @ 2020-03-10 21:11 UTC (permalink / raw)
  To: Daniel T. Lee, Daniel Borkmann, Alexei Starovoitov
  Cc: Andrii Nakryiko, netdev, bpf

Daniel T. Lee wrote:
> To reduce the reliance of the trace samples (trace*_user) on bpf_load,
> move read_trace_pipe to trace_helpers. With this bpf_load helper moved
> elsewhere, the trace samples can be easily migrated to libbpf.
> 
> Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
> ---
>  samples/bpf/Makefile                        |  4 ++--
>  samples/bpf/bpf_load.c                      | 20 ------------------
>  samples/bpf/bpf_load.h                      |  1 -
>  samples/bpf/tracex1_user.c                  |  1 +
>  samples/bpf/tracex5_user.c                  |  1 +
>  tools/testing/selftests/bpf/trace_helpers.c | 23 +++++++++++++++++++++
>  tools/testing/selftests/bpf/trace_helpers.h |  1 +
>  7 files changed, 28 insertions(+), 23 deletions(-)
> 

Acked-by: John Fastabend <john.fastabend@gmail.com>


* RE: [PATCH bpf-next 2/2] samples: bpf: refactor perf_event user program with libbpf bpf_link
  2020-03-10  5:51 ` [PATCH bpf-next 2/2] samples: bpf: refactor perf_event user program with libbpf bpf_link Daniel T. Lee
@ 2020-03-10 21:33   ` John Fastabend
  2020-03-10 22:49     ` Daniel T. Lee
  0 siblings, 1 reply; 6+ messages in thread
From: John Fastabend @ 2020-03-10 21:33 UTC (permalink / raw)
  To: Daniel T. Lee, Daniel Borkmann, Alexei Starovoitov
  Cc: Andrii Nakryiko, netdev, bpf

Daniel T. Lee wrote:
> The bpf_program__attach of libbpf (using bpf_link) is much more intuitive
> than the previous method using ioctl.
> 
> bpf_program__attach_perf_event handles both enabling the perf_event and
> attaching the BPF program to it, so there is no need to do this directly
> with ioctl.
> 
> In addition, bpf_link makes the API usage consistent: disabling
> (detaching, destroying) each of the attached events is handled by a
> single bpf_link__destroy call per link.
> 
> This commit refactors the samples that attach a BPF program to a
> perf_event to use libbpf instead of ioctl. The samples' use of bpf_load
> was also removed and migrated to the libbpf API.
> 
> Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
> ---

[...]

>  
>  int main(int argc, char **argv)
>  {
> +	int prog_fd, *pmu_fd, opt, freq = DEFAULT_FREQ, secs = DEFAULT_SECS;
> +	struct bpf_program *prog;
> +	struct bpf_object *obj;
> +	struct bpf_link **link;
>  	char filename[256];
> -	int *pmu_fd, opt, freq = DEFAULT_FREQ, secs = DEFAULT_SECS;
>  
>  	/* process arguments */
>  	while ((opt = getopt(argc, argv, "F:h")) != -1) {
> @@ -165,36 +170,47 @@ int main(int argc, char **argv)
>  	/* create perf FDs for each CPU */
>  	nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
>  	pmu_fd = malloc(nr_cpus * sizeof(int));
> -	if (pmu_fd == NULL) {
> -		fprintf(stderr, "ERROR: malloc of pmu_fd\n");
> +	link = malloc(nr_cpus * sizeof(struct bpf_link *));
> +	if (pmu_fd == NULL || link == NULL) {
> +		fprintf(stderr, "ERROR: malloc of pmu_fd/link\n");
>  		return 1;
>  	}
>  
>  	/* load BPF program */
>  	snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
> -	if (load_bpf_file(filename)) {
> +	if (bpf_prog_load(filename, BPF_PROG_TYPE_PERF_EVENT, &obj, &prog_fd)) {
>  		fprintf(stderr, "ERROR: loading BPF program (errno %d):\n",
>  			errno);
> -		if (strcmp(bpf_log_buf, "") == 0)
> -			fprintf(stderr, "Try: ulimit -l unlimited\n");
> -		else
> -			fprintf(stderr, "%s", bpf_log_buf);
>  		return 1;
>  	}
> +
> +	prog = bpf_program__next(NULL, obj);
> +	if (!prog) {
> +		printf("finding a prog in obj file failed\n");
> +		return 1;
> +	}
> +
> +	map_fd = bpf_object__find_map_fd_by_name(obj, "ip_map");
> +	if (map_fd < 0) {
> +		printf("finding a ip_map map in obj file failed\n");
> +		return 1;
> +	}
> +
>  	signal(SIGINT, int_exit);
>  	signal(SIGTERM, int_exit);
>  
>  	/* do sampling */
>  	printf("Sampling at %d Hertz for %d seconds. Ctrl-C also ends.\n",
>  	       freq, secs);
> -	if (sampling_start(pmu_fd, freq) != 0)
> +	if (sampling_start(pmu_fd, freq, prog, link) != 0)
>  		return 1;
>  	sleep(secs);
> -	sampling_end(pmu_fd);
> +	sampling_end(link);
>  	free(pmu_fd);
> +	free(link);

Not really a problem with this patch, but on error we don't free() the
memory while on normal exit there is a free(); it's a bit inconsistent.
How about adding the free() on errors as well?

>  
>  	/* output sample counts */
> -	print_ip_map(map_fd[0]);
> +	print_ip_map(map_fd);
>  
>  	return 0;
>  }

[...]
  
>  static void print_ksym(__u64 addr)
> @@ -137,6 +136,7 @@ static inline int generate_load(void)
>  static void test_perf_event_all_cpu(struct perf_event_attr *attr)
>  {
>  	int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
> +	struct bpf_link **link = malloc(nr_cpus * sizeof(struct bpf_link *));

Need to check if it's NULL? It's not going to be very friendly to
segfault later. Or maybe I'm missing the check.

>  	int *pmu_fd = malloc(nr_cpus * sizeof(int));
>  	int i, error = 0;
>  
> @@ -151,8 +151,12 @@ static void test_perf_event_all_cpu(struct perf_event_attr *attr)
>  			error = 1;
>  			goto all_cpu_err;
>  		}
> -		assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_SET_BPF, prog_fd[0]) == 0);
> -		assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_ENABLE) == 0);
> +		link[i] = bpf_program__attach_perf_event(prog, pmu_fd[i]);
> +		if (link[i] < 0) {
> +			printf("bpf_program__attach_perf_event failed\n");
> +			error = 1;
> +			goto all_cpu_err;
> +		}
>  	}
>  
>  	if (generate_load() < 0) {
> @@ -161,11 +165,11 @@ static void test_perf_event_all_cpu(struct perf_event_attr *attr)
>  	}
>  	print_stacks();
>  all_cpu_err:
> -	for (i--; i >= 0; i--) {
> -		ioctl(pmu_fd[i], PERF_EVENT_IOC_DISABLE);
> -		close(pmu_fd[i]);
> -	}
> +	for (i--; i >= 0; i--)
> +		bpf_link__destroy(link[i]);
> +
>  	free(pmu_fd);
> +	free(link);
>  	if (error)
>  		int_exit(0);
>  }

Thanks,
John


* Re: [PATCH bpf-next 2/2] samples: bpf: refactor perf_event user program with libbpf bpf_link
  2020-03-10 21:33   ` John Fastabend
@ 2020-03-10 22:49     ` Daniel T. Lee
  0 siblings, 0 replies; 6+ messages in thread
From: Daniel T. Lee @ 2020-03-10 22:49 UTC (permalink / raw)
  To: John Fastabend
  Cc: Daniel Borkmann, Alexei Starovoitov, Andrii Nakryiko, netdev, bpf

On Wed, Mar 11, 2020 at 6:34 AM John Fastabend <john.fastabend@gmail.com> wrote:
>
> Daniel T. Lee wrote:
> > The bpf_program__attach of libbpf (using bpf_link) is much more intuitive
> > than the previous method using ioctl.
> >
> > bpf_program__attach_perf_event handles both enabling the perf_event and
> > attaching the BPF program to it, so there is no need to do this directly
> > with ioctl.
> >
> > In addition, bpf_link makes the API usage consistent: disabling
> > (detaching, destroying) each of the attached events is handled by a
> > single bpf_link__destroy call per link.
> >
> > This commit refactors the samples that attach a BPF program to a
> > perf_event to use libbpf instead of ioctl. The samples' use of bpf_load
> > was also removed and migrated to the libbpf API.
> >
> > Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
> > ---
>
> [...]
>
> >
> >  int main(int argc, char **argv)
> >  {
> > +     int prog_fd, *pmu_fd, opt, freq = DEFAULT_FREQ, secs = DEFAULT_SECS;
> > +     struct bpf_program *prog;
> > +     struct bpf_object *obj;
> > +     struct bpf_link **link;
> >       char filename[256];
> > -     int *pmu_fd, opt, freq = DEFAULT_FREQ, secs = DEFAULT_SECS;
> >
> >       /* process arguments */
> >       while ((opt = getopt(argc, argv, "F:h")) != -1) {
> > @@ -165,36 +170,47 @@ int main(int argc, char **argv)
> >       /* create perf FDs for each CPU */
> >       nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
> >       pmu_fd = malloc(nr_cpus * sizeof(int));
> > -     if (pmu_fd == NULL) {
> > -             fprintf(stderr, "ERROR: malloc of pmu_fd\n");
> > +     link = malloc(nr_cpus * sizeof(struct bpf_link *));
> > +     if (pmu_fd == NULL || link == NULL) {
> > +             fprintf(stderr, "ERROR: malloc of pmu_fd/link\n");
> >               return 1;
> >       }
> >
> >       /* load BPF program */
> >       snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
> > -     if (load_bpf_file(filename)) {
> > +     if (bpf_prog_load(filename, BPF_PROG_TYPE_PERF_EVENT, &obj, &prog_fd)) {
> >               fprintf(stderr, "ERROR: loading BPF program (errno %d):\n",
> >                       errno);
> > -             if (strcmp(bpf_log_buf, "") == 0)
> > -                     fprintf(stderr, "Try: ulimit -l unlimited\n");
> > -             else
> > -                     fprintf(stderr, "%s", bpf_log_buf);
> >               return 1;
> >       }
> > +
> > +     prog = bpf_program__next(NULL, obj);
> > +     if (!prog) {
> > +             printf("finding a prog in obj file failed\n");
> > +             return 1;
> > +     }
> > +
> > +     map_fd = bpf_object__find_map_fd_by_name(obj, "ip_map");
> > +     if (map_fd < 0) {
> > +             printf("finding a ip_map map in obj file failed\n");
> > +             return 1;
> > +     }
> > +
> >       signal(SIGINT, int_exit);
> >       signal(SIGTERM, int_exit);
> >
> >       /* do sampling */
> >       printf("Sampling at %d Hertz for %d seconds. Ctrl-C also ends.\n",
> >              freq, secs);
> > -     if (sampling_start(pmu_fd, freq) != 0)
> > +     if (sampling_start(pmu_fd, freq, prog, link) != 0)
> >               return 1;
> >       sleep(secs);
> > -     sampling_end(pmu_fd);
> > +     sampling_end(link);
> >       free(pmu_fd);
> > +     free(link);
>
> Not really a problem with this patch, but on error we don't free() the
> memory while on normal exit there is a free(); it's a bit inconsistent.
> How about adding the free() on errors as well?

I think you're right.
I'll add free() on errors to keep it consistent.
Will apply feedback right away!
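
For example, something along these lines in sampleip_user.c (rough,
untested sketch; the exact cleanup label and ordering may change in the
next revision):

	int err = 0;
	...
	if (sampling_start(pmu_fd, freq, prog, link) != 0) {
		err = 1;
		goto cleanup;
	}
	sleep(secs);
	sampling_end(link);

	/* output sample counts */
	print_ip_map(map_fd);

cleanup:
	free(pmu_fd);
	free(link);
	return err;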

>
> >
> >       /* output sample counts */
> > -     print_ip_map(map_fd[0]);
> > +     print_ip_map(map_fd);
> >
> >       return 0;
> >  }
>
> [...]
>
> >  static void print_ksym(__u64 addr)
> > @@ -137,6 +136,7 @@ static inline int generate_load(void)
> >  static void test_perf_event_all_cpu(struct perf_event_attr *attr)
> >  {
> >       int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
> > +     struct bpf_link **link = malloc(nr_cpus * sizeof(struct bpf_link *));
>
> Need to check if it's NULL? It's not going to be very friendly to
> segfault later. Or maybe I'm missing the check.
>

Also, checking whether it is NULL would be safer.
I'll apply this and send the next version of the patch.
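
Roughly (untested sketch, to be cleaned up in the next version):

	struct bpf_link **link = malloc(nr_cpus * sizeof(struct bpf_link *));
	int *pmu_fd = malloc(nr_cpus * sizeof(int));

	if (!link || !pmu_fd) {
		printf("malloc of link/pmu_fd failed\n");
		free(link);
		free(pmu_fd);
		return;
	}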

> >       int *pmu_fd = malloc(nr_cpus * sizeof(int));
> >       int i, error = 0;
> >
> > @@ -151,8 +151,12 @@ static void test_perf_event_all_cpu(struct perf_event_attr *attr)
> >                       error = 1;
> >                       goto all_cpu_err;
> >               }
> > -             assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_SET_BPF, prog_fd[0]) == 0);
> > -             assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_ENABLE) == 0);
> > +             link[i] = bpf_program__attach_perf_event(prog, pmu_fd[i]);
> > +             if (link[i] < 0) {
> > +                     printf("bpf_program__attach_perf_event failed\n");
> > +                     error = 1;
> > +                     goto all_cpu_err;
> > +             }
> >       }
> >
> >       if (generate_load() < 0) {
> > @@ -161,11 +165,11 @@ static void test_perf_event_all_cpu(struct perf_event_attr *attr)
> >       }
> >       print_stacks();
> >  all_cpu_err:
> > -     for (i--; i >= 0; i--) {
> > -             ioctl(pmu_fd[i], PERF_EVENT_IOC_DISABLE);
> > -             close(pmu_fd[i]);
> > -     }
> > +     for (i--; i >= 0; i--)
> > +             bpf_link__destroy(link[i]);
> > +
> >       free(pmu_fd);
> > +     free(link);
> >       if (error)
> >               int_exit(0);
> >  }
>
> Thanks,
> John

Thank you for your time and effort on the review.

Best,
Daniel


Thread overview: 6+ messages
2020-03-10  5:51 [PATCH bpf-next 0/2] samples: bpf: refactor perf_event user program with libbpf bpf_link Daniel T. Lee
2020-03-10  5:51 ` [PATCH bpf-next 1/2] samples: bpf: move read_trace_pipe to trace_helpers Daniel T. Lee
2020-03-10 21:11   ` John Fastabend
2020-03-10  5:51 ` [PATCH bpf-next 2/2] samples: bpf: refactor perf_event user program with libbpf bpf_link Daniel T. Lee
2020-03-10 21:33   ` John Fastabend
2020-03-10 22:49     ` Daniel T. Lee
