* [PATCH bpf-next 0/7] libbpf: add tracing attach APIs
@ 2019-06-20 23:09 Andrii Nakryiko
  2019-06-20 23:09 ` [PATCH bpf-next 1/7] libbpf: make libbpf_strerror_r agnostic to sign of error Andrii Nakryiko
                   ` (6 more replies)
  0 siblings, 7 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2019-06-20 23:09 UTC (permalink / raw)
  To: andrii.nakryiko, ast, daniel, netdev, bpf, kernel-team; +Cc: Andrii Nakryiko

This patchset adds the following APIs to allow attaching BPF programs to
tracing entities:
- bpf_program__attach_perf_event for attaching to any opened perf event FD,
  allowing users full control;
- bpf_program__attach_kprobe for attaching to kernel probes (both entry and
  return probes);
- bpf_program__attach_uprobe for attaching to user probes (both entry/return);
- bpf_program__attach_tracepoint for attaching to kernel tracepoints;
- bpf_program__attach_raw_tracepoint for attaching to raw kernel tracepoints
  (a wrapper around bpf_raw_tracepoint_open).

This set of APIs makes libbpf more useful for tracing applications.
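
An illustrative usage sketch of the new APIs (not part of the series; the
object file path, section title, and probed kernel function below are made
up, and error handling is trimmed to the essentials):

  #include <errno.h>
  #include <bpf/libbpf.h>

  int attach_example(void)
  {
  	struct bpf_object *obj;
  	struct bpf_program *prog;
  	int prog_fd, pfd, err;

  	err = bpf_prog_load("./my_probe.o", BPF_PROG_TYPE_KPROBE, &obj, &prog_fd);
  	if (err)
  		return err;

  	prog = bpf_object__find_program_by_title(obj, "kprobe/sys_nanosleep");
  	if (!prog) {
  		err = -ENOENT;
  		goto out;
  	}

  	/* on success, the attach APIs return the perf event FD that owns
  	 * the attachment
  	 */
  	pfd = bpf_program__attach_kprobe(prog, false /* retprobe */,
  					 "sys_nanosleep");
  	if (pfd < 0) {
  		err = pfd;
  		goto out;
  	}

  	/* ... run workload, read maps, etc. ... */

  	libbpf_perf_event_disable_and_close(pfd);
  out:
  	bpf_object__close(obj);
  	return err;
  }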

Preparatory patch #1 makes the internal libbpf_strerror_r helper work with
negative error codes, lifting the burden off callers to keep track of the
error sign.
Patch #2 adds attach_perf_event, which is the base for all other APIs.
Patch #3 adds kprobe/uprobe APIs.
Patch #4 adds tracepoint/raw_tracepoint APIs.
Patch #5 converts one existing test to use attach_perf_event.
Patch #6 adds new kprobe/uprobe tests.
Patch #7 converts all the selftests currently using tracepoint to new APIs.

Andrii Nakryiko (7):
  libbpf: make libbpf_strerror_r agnostic to sign of error
  libbpf: add ability to attach/detach BPF to perf event
  libbpf: add kprobe/uprobe attach API
  libbpf: add tracepoint/raw tracepoint attach API
  selftests/bpf: switch test to new attach_perf_event API
  selftests/bpf: add kprobe/uprobe selftests
  selftests/bpf: convert existing tracepoint tests to new APIs

 tools/lib/bpf/libbpf.c                        | 347 ++++++++++++++++++
 tools/lib/bpf/libbpf.h                        |  17 +
 tools/lib/bpf/libbpf.map                      |   6 +
 tools/lib/bpf/str_error.c                     |   2 +-
 .../selftests/bpf/prog_tests/attach_probe.c   | 151 ++++++++
 .../bpf/prog_tests/stacktrace_build_id.c      |  49 +--
 .../bpf/prog_tests/stacktrace_build_id_nmi.c  |  16 +-
 .../selftests/bpf/prog_tests/stacktrace_map.c |  42 +--
 .../bpf/prog_tests/stacktrace_map_raw_tp.c    |  14 +-
 .../bpf/prog_tests/task_fd_query_rawtp.c      |  10 +-
 .../bpf/prog_tests/task_fd_query_tp.c         |  51 +--
 .../bpf/prog_tests/tp_attach_query.c          |  56 +--
 .../selftests/bpf/progs/test_attach_probe.c   |  55 +++
 13 files changed, 650 insertions(+), 166 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/attach_probe.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_attach_probe.c

-- 
2.17.1



* [PATCH bpf-next 1/7] libbpf: make libbpf_strerror_r agnostic to sign of error
  2019-06-20 23:09 [PATCH bpf-next 0/7] libbpf: add tracing attach APIs Andrii Nakryiko
@ 2019-06-20 23:09 ` Andrii Nakryiko
  2019-06-20 23:09 ` [PATCH bpf-next 2/7] libbpf: add ability to attach/detach BPF to perf event Andrii Nakryiko
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2019-06-20 23:09 UTC (permalink / raw)
  To: andrii.nakryiko, ast, daniel, netdev, bpf, kernel-team; +Cc: Andrii Nakryiko

It's often inconvenient to switch the sign of an error value before passing
it into libbpf_strerror_r. It's better for the helper to handle that
automatically.
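
A minimal illustration of the behavior after this change (sketch only;
libbpf_strerror_r is an internal helper, so this applies to callers within
libbpf itself):

  char buf[128];

  /* both calls now produce the same message, e.g. "Invalid argument" */
  libbpf_strerror_r(EINVAL, buf, sizeof(buf));
  libbpf_strerror_r(-EINVAL, buf, sizeof(buf));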

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 tools/lib/bpf/str_error.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/lib/bpf/str_error.c b/tools/lib/bpf/str_error.c
index 00e48ac5b806..b8064eedc177 100644
--- a/tools/lib/bpf/str_error.c
+++ b/tools/lib/bpf/str_error.c
@@ -11,7 +11,7 @@
  */
 char *libbpf_strerror_r(int err, char *dst, int len)
 {
-	int ret = strerror_r(err, dst, len);
+	int ret = strerror_r(err < 0 ? -err : err, dst, len);
 	if (ret)
 		snprintf(dst, len, "ERROR: strerror_r(%d)=%d", err, ret);
 	return dst;
-- 
2.17.1



* [PATCH bpf-next 2/7] libbpf: add ability to attach/detach BPF to perf event
  2019-06-20 23:09 [PATCH bpf-next 0/7] libbpf: add tracing attach APIs Andrii Nakryiko
  2019-06-20 23:09 ` [PATCH bpf-next 1/7] libbpf: make libbpf_strerror_r agnostic to sign of error Andrii Nakryiko
@ 2019-06-20 23:09 ` Andrii Nakryiko
  2019-06-21  0:01   ` Stanislav Fomichev
  2019-06-20 23:09 ` [PATCH bpf-next 3/7] libbpf: add kprobe/uprobe attach API Andrii Nakryiko
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2019-06-20 23:09 UTC (permalink / raw)
  To: andrii.nakryiko, ast, daniel, netdev, bpf, kernel-team; +Cc: Andrii Nakryiko

bpf_program__attach_perf_event allows attaching a BPF program to an existing
perf event, providing the most generic and lowest-level way to attach BPF
programs.

The libbpf_perf_event_disable_and_close API is added to disable and close an
existing perf event by its FD.
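
An illustrative sketch of the intended usage (not part of the patch; the
perf event here is a hypothetical CPU-cycles sampling event and prog is
assumed to be an already loaded BPF_PROG_TYPE_PERF_EVENT program):

  #include <errno.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/perf_event.h>
  #include <bpf/libbpf.h>

  static int attach_to_cycles(struct bpf_program *prog)
  {
  	struct perf_event_attr attr = {
  		.size = sizeof(attr),
  		.type = PERF_TYPE_HARDWARE,
  		.config = PERF_COUNT_HW_CPU_CYCLES,
  		.freq = 1,
  		.sample_freq = 1000,
  	};
  	int pfd, err;

  	pfd = syscall(__NR_perf_event_open, &attr, -1 /* pid */, 0 /* cpu */,
  		      -1 /* group_fd */, PERF_FLAG_FD_CLOEXEC);
  	if (pfd < 0)
  		return -errno;

  	err = bpf_program__attach_perf_event(prog, pfd);
  	if (err) {
  		libbpf_perf_event_disable_and_close(pfd);
  		return err;
  	}
  	/* caller disables and closes pfd when done */
  	return pfd;
  }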

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 tools/lib/bpf/libbpf.c   | 41 ++++++++++++++++++++++++++++++++++++++++
 tools/lib/bpf/libbpf.h   |  4 ++++
 tools/lib/bpf/libbpf.map |  2 ++
 3 files changed, 47 insertions(+)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 8ce3beba8551..2bb1fa008be3 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -32,6 +32,7 @@
 #include <linux/limits.h>
 #include <linux/perf_event.h>
 #include <linux/ring_buffer.h>
+#include <sys/ioctl.h>
 #include <sys/stat.h>
 #include <sys/types.h>
 #include <sys/vfs.h>
@@ -3928,6 +3929,46 @@ int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
 	return 0;
 }
 
+int libbpf_perf_event_disable_and_close(int pfd)
+{
+	int err;
+
+	if (pfd < 0)
+		return 0;
+
+	err = ioctl(pfd, PERF_EVENT_IOC_DISABLE, 0);
+	close(pfd);
+	return err;
+}
+
+int bpf_program__attach_perf_event(struct bpf_program *prog, int pfd)
+{
+	char errmsg[STRERR_BUFSIZE];
+	int bpf_fd, err;
+
+	bpf_fd = bpf_program__fd(prog);
+	if (bpf_fd < 0) {
+		pr_warning("program '%s': can't attach before loaded\n",
+			   bpf_program__title(prog, false));
+		return -EINVAL;
+	}
+	if (ioctl(pfd, PERF_EVENT_IOC_SET_BPF, bpf_fd) < 0) {
+		err = -errno;
+		pr_warning("program '%s': failed to attach to pfd %d: %s\n",
+			   bpf_program__title(prog, false), pfd,
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return err;
+	}
+	if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
+		err = -errno;
+		pr_warning("program '%s': failed to enable pfd %d: %s\n",
+			   bpf_program__title(prog, false), pfd,
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return err;
+	}
+	return 0;
+}
+
 enum bpf_perf_event_ret
 bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
 			   void **copy_mem, size_t *copy_size,
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index d639f47e3110..76db1bbc0dac 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -165,6 +165,10 @@ LIBBPF_API int bpf_program__pin(struct bpf_program *prog, const char *path);
 LIBBPF_API int bpf_program__unpin(struct bpf_program *prog, const char *path);
 LIBBPF_API void bpf_program__unload(struct bpf_program *prog);
 
+LIBBPF_API int libbpf_perf_event_disable_and_close(int pfd);
+LIBBPF_API int bpf_program__attach_perf_event(struct bpf_program *prog,
+					      int pfd);
+
 struct bpf_insn;
 
 /*
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 2c6d835620d2..d27406982b5a 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -172,5 +172,7 @@ LIBBPF_0.0.4 {
 		btf_dump__new;
 		btf__parse_elf;
 		bpf_object__load_xattr;
+		bpf_program__attach_perf_event;
 		libbpf_num_possible_cpus;
+		libbpf_perf_event_disable_and_close;
 } LIBBPF_0.0.3;
-- 
2.17.1



* [PATCH bpf-next 3/7] libbpf: add kprobe/uprobe attach API
  2019-06-20 23:09 [PATCH bpf-next 0/7] libbpf: add tracing attach APIs Andrii Nakryiko
  2019-06-20 23:09 ` [PATCH bpf-next 1/7] libbpf: make libbpf_strerror_r agnostic to sign of error Andrii Nakryiko
  2019-06-20 23:09 ` [PATCH bpf-next 2/7] libbpf: add ability to attach/detach BPF to perf event Andrii Nakryiko
@ 2019-06-20 23:09 ` Andrii Nakryiko
  2019-06-21  0:04   ` Stanislav Fomichev
  2019-06-20 23:09 ` [PATCH bpf-next 4/7] libbpf: add tracepoint/raw tracepoint " Andrii Nakryiko
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2019-06-20 23:09 UTC (permalink / raw)
  To: andrii.nakryiko, ast, daniel, netdev, bpf, kernel-team; +Cc: Andrii Nakryiko

Add the ability to attach BPF programs to kernel and user probes and
retprobes. The implementation depends on perf event support for
kprobes/uprobes.
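
An illustrative usage sketch (not part of the patch; kprobe_prog and
uprobe_prog are assumed to come from an already loaded bpf_object, and the
uprobe binary path and offset below are made up):

  static int attach_probes(struct bpf_program *kprobe_prog,
  			   struct bpf_program *uprobe_prog)
  {
  	int kfd, ufd;

  	kfd = bpf_program__attach_kprobe(kprobe_prog, true /* retprobe */,
  					 "sys_nanosleep");
  	if (kfd < 0)
  		return kfd;

  	/* pid 0 restricts the uprobe to the calling process */
  	ufd = bpf_program__attach_uprobe(uprobe_prog, false /* retprobe */,
  					 0 /* pid */, "/tmp/mybinary",
  					 0x1234 /* func_offset */);
  	if (ufd < 0) {
  		libbpf_perf_event_disable_and_close(kfd);
  		return ufd;
  	}

  	/* ... trigger the probes, read results ... */

  	libbpf_perf_event_disable_and_close(ufd);
  	libbpf_perf_event_disable_and_close(kfd);
  	return 0;
  }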

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 tools/lib/bpf/libbpf.c   | 207 +++++++++++++++++++++++++++++++++++++++
 tools/lib/bpf/libbpf.h   |   8 ++
 tools/lib/bpf/libbpf.map |   2 +
 3 files changed, 217 insertions(+)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 2bb1fa008be3..11329e05530e 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -3969,6 +3969,213 @@ int bpf_program__attach_perf_event(struct bpf_program *prog, int pfd)
 	return 0;
 }
 
+static int parse_uint(const char *buf)
+{
+	int ret;
+
+	errno = 0;
+	ret = (int)strtol(buf, NULL, 10);
+	if (errno) {
+		ret = -errno;
+		pr_debug("failed to parse '%s' as unsigned int\n", buf);
+		return ret;
+	}
+	if (ret < 0) {
+		pr_debug("failed to parse '%s' as unsigned int\n", buf);
+		return -EINVAL;
+	}
+	return ret;
+}
+
+static int parse_uint_from_file(const char* file)
+{
+	char buf[STRERR_BUFSIZE];
+	int fd, ret;
+
+	fd = open(file, O_RDONLY);
+	if (fd < 0) {
+		ret = -errno;
+		pr_debug("failed to open '%s': %s\n", file,
+			 libbpf_strerror_r(ret, buf, sizeof(buf)));
+		return ret;
+	}
+	ret = read(fd, buf, sizeof(buf));
+	close(fd);
+	if (ret < 0) {
+		ret = -errno;
+		pr_debug("failed to read '%s': %s\n", file,
+			libbpf_strerror_r(ret, buf, sizeof(buf)));
+		return ret;
+	}
+	if (ret == 0 || ret >= sizeof(buf)) {
+		buf[sizeof(buf) - 1] = 0;
+		pr_debug("unexpected input from '%s': '%s'\n", file, buf);
+		return -EINVAL;
+	}
+	return parse_uint(buf);
+}
+
+static int determine_kprobe_perf_type(void)
+{
+	const char *file = "/sys/bus/event_source/devices/kprobe/type";
+	return parse_uint_from_file(file);
+}
+
+static int determine_uprobe_perf_type(void)
+{
+	const char *file = "/sys/bus/event_source/devices/uprobe/type";
+	return parse_uint_from_file(file);
+}
+
+static int parse_config_from_file(const char *file)
+{
+	char buf[STRERR_BUFSIZE];
+	int fd, ret;
+
+	fd = open(file, O_RDONLY);
+	if (fd < 0) {
+		ret = -errno;
+		pr_debug("failed to open '%s': %s\n", file,
+			 libbpf_strerror_r(ret, buf, sizeof(buf)));
+		return ret;
+	}
+	ret = read(fd, buf, sizeof(buf));
+	close(fd);
+	if (ret < 0) {
+		ret = -errno;
+		pr_debug("failed to read '%s': %s\n", file,
+			libbpf_strerror_r(ret, buf, sizeof(buf)));
+		return ret;
+	}
+	if (ret == 0 || ret >= sizeof(buf)) {
+		buf[sizeof(buf) - 1] = 0;
+		pr_debug("unexpected input from '%s': '%s'\n", file, buf);
+		return -EINVAL;
+	}
+	if (strncmp(buf, "config:", 7)) {
+		pr_debug("expected 'config:' prefix, found '%s'\n", buf);
+		return -EINVAL;
+	}
+	return parse_uint(buf + 7);
+}
+
+static int determine_kprobe_retprobe_bit(void)
+{
+	const char *file = "/sys/bus/event_source/devices/kprobe/format/retprobe";
+	return parse_config_from_file(file);
+}
+
+static int determine_uprobe_retprobe_bit(void)
+{
+	const char *file = "/sys/bus/event_source/devices/uprobe/format/retprobe";
+	return parse_config_from_file(file);
+}
+
+static int perf_event_open_probe(bool uprobe, bool retprobe, const char* name,
+				 uint64_t offset, int pid)
+{
+	struct perf_event_attr attr = {};
+	char errmsg[STRERR_BUFSIZE];
+	int type, pfd, err;
+
+	type = uprobe ? determine_uprobe_perf_type()
+		      : determine_kprobe_perf_type();
+	if (type < 0) {
+		pr_warning("failed to determine %s perf type: %s\n",
+			   uprobe ? "uprobe" : "kprobe",
+			   libbpf_strerror_r(type, errmsg, sizeof(errmsg)));
+		return type;
+	}
+	if (retprobe) {
+		int bit = uprobe ? determine_uprobe_retprobe_bit()
+				 : determine_kprobe_retprobe_bit();
+
+		if (bit < 0) {
+			pr_warning("failed to determine %s retprobe bit: %s\n",
+				   uprobe ? "uprobe" : "kprobe",
+				   libbpf_strerror_r(bit, errmsg,
+						     sizeof(errmsg)));
+			return bit;
+		}
+		attr.config |= 1 << bit;
+	}
+	attr.size = sizeof(attr);
+	attr.type = type;
+	attr.config1 = (uint64_t)(void *)name; /* kprobe_func or uprobe_path */
+	attr.config2 = offset;		       /* kprobe_addr or probe_offset */
+
+	/* pid filter is meaningful only for uprobes */
+	pfd = syscall(__NR_perf_event_open, &attr,
+		      pid < 0 ? -1 : pid /* pid */,
+		      pid == -1 ? 0 : -1 /* cpu */,
+		      -1 /* group_fd */, PERF_FLAG_FD_CLOEXEC);
+	if (pfd < 0) {
+		err = -errno;
+		pr_warning("%s perf_event_open() failed: %s\n",
+			   uprobe ? "uprobe" : "kprobe",
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return err;
+	}
+	return pfd;
+}
+
+int bpf_program__attach_kprobe(struct bpf_program *prog, bool retprobe,
+			       const char *func_name)
+{
+	char errmsg[STRERR_BUFSIZE];
+	int pfd, err;
+
+	pfd = perf_event_open_probe(false /* uprobe */, retprobe, func_name,
+				    0 /* offset */, -1 /* pid */);
+	if (pfd < 0) {
+		pr_warning("program '%s': failed to create %s '%s' perf event: %s\n",
+			   bpf_program__title(prog, false),
+			   retprobe ? "kretprobe" : "kprobe", func_name,
+			   libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+		return pfd;
+	}
+	err = bpf_program__attach_perf_event(prog, pfd);
+	if (err) {
+		libbpf_perf_event_disable_and_close(pfd);
+		pr_warning("program '%s': failed to attach to %s '%s': %s\n",
+			   bpf_program__title(prog, false),
+			   retprobe ? "kretprobe" : "kprobe", func_name,
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return err;
+	}
+	return pfd;
+}
+
+int bpf_program__attach_uprobe(struct bpf_program *prog, bool retprobe,
+			       pid_t pid, const char *binary_path,
+			       size_t func_offset)
+{
+	char errmsg[STRERR_BUFSIZE];
+	int pfd, err;
+
+	pfd = perf_event_open_probe(true /* uprobe */, retprobe,
+				    binary_path, func_offset, pid);
+	if (pfd < 0) {
+		pr_warning("program '%s': failed to create %s '%s:0x%zx' perf event: %s\n",
+			   bpf_program__title(prog, false),
+			   retprobe ? "uretprobe" : "uprobe",
+			   binary_path, func_offset,
+			   libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+		return pfd;
+	}
+	err = bpf_program__attach_perf_event(prog, pfd);
+	if (err) {
+		libbpf_perf_event_disable_and_close(pfd);
+		pr_warning("program '%s': failed to attach to %s '%s:0x%zx': %s\n",
+			   bpf_program__title(prog, false),
+			   retprobe ? "uretprobe" : "uprobe",
+			   binary_path, func_offset,
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return err;
+	}
+	return pfd;
+}
+
 enum bpf_perf_event_ret
 bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
 			   void **copy_mem, size_t *copy_size,
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 76db1bbc0dac..a7264f06aa5f 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -168,6 +168,14 @@ LIBBPF_API void bpf_program__unload(struct bpf_program *prog);
 LIBBPF_API int libbpf_perf_event_disable_and_close(int pfd);
 LIBBPF_API int bpf_program__attach_perf_event(struct bpf_program *prog,
 					      int pfd);
+LIBBPF_API int bpf_program__attach_kprobe(struct bpf_program *prog,
+					  bool retprobe,
+					  const char *func_name);
+LIBBPF_API int bpf_program__attach_uprobe(struct bpf_program *prog,
+					  bool retprobe,
+					  pid_t pid,
+					  const char *binary_path,
+					  size_t func_offset);
 
 struct bpf_insn;
 
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index d27406982b5a..1a982c2e1751 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -172,7 +172,9 @@ LIBBPF_0.0.4 {
 		btf_dump__new;
 		btf__parse_elf;
 		bpf_object__load_xattr;
+		bpf_program__attach_kprobe;
 		bpf_program__attach_perf_event;
+		bpf_program__attach_uprobe;
 		libbpf_num_possible_cpus;
 		libbpf_perf_event_disable_and_close;
 } LIBBPF_0.0.3;
-- 
2.17.1



* [PATCH bpf-next 4/7] libbpf: add tracepoint/raw tracepoint attach API
  2019-06-20 23:09 [PATCH bpf-next 0/7] libbpf: add tracing attach APIs Andrii Nakryiko
                   ` (2 preceding siblings ...)
  2019-06-20 23:09 ` [PATCH bpf-next 3/7] libbpf: add kprobe/uprobe attach API Andrii Nakryiko
@ 2019-06-20 23:09 ` Andrii Nakryiko
  2019-06-21  0:07   ` Stanislav Fomichev
  2019-06-20 23:09 ` [PATCH bpf-next 5/7] selftests/bpf: switch test to new attach_perf_event API Andrii Nakryiko
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2019-06-20 23:09 UTC (permalink / raw)
  To: andrii.nakryiko, ast, daniel, netdev, bpf, kernel-team; +Cc: Andrii Nakryiko

Add APIs that allow attaching a BPF program to kernel tracepoints. A raw
tracepoint attach API is also added for a uniform per-BPF-program API
surface, but it is mostly a wrapper around the existing
bpf_raw_tracepoint_open call.
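
An illustrative usage sketch (not part of the patch; tp_prog is assumed to
be a loaded BPF_PROG_TYPE_TRACEPOINT program and raw_prog a loaded
BPF_PROG_TYPE_RAW_TRACEPOINT program):

  #include <unistd.h>
  #include <bpf/libbpf.h>

  static int attach_sched_switch(struct bpf_program *tp_prog,
  				 struct bpf_program *raw_prog)
  {
  	int tp_fd, raw_fd;

  	/* perf-event-based tracepoint: returns a perf event FD */
  	tp_fd = bpf_program__attach_tracepoint(tp_prog, "sched", "sched_switch");
  	if (tp_fd < 0)
  		return tp_fd;

  	/* raw tracepoint: returns the FD from bpf_raw_tracepoint_open() */
  	raw_fd = bpf_program__attach_raw_tracepoint(raw_prog, "sched_switch");
  	if (raw_fd < 0) {
  		libbpf_perf_event_disable_and_close(tp_fd);
  		return raw_fd;
  	}

  	/* ... observe events ... */

  	close(raw_fd);
  	libbpf_perf_event_disable_and_close(tp_fd);
  	return 0;
  }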

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 tools/lib/bpf/libbpf.c   | 99 ++++++++++++++++++++++++++++++++++++++++
 tools/lib/bpf/libbpf.h   |  5 ++
 tools/lib/bpf/libbpf.map |  2 +
 3 files changed, 106 insertions(+)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 11329e05530e..cefe67ba160b 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -4176,6 +4176,105 @@ int bpf_program__attach_uprobe(struct bpf_program *prog, bool retprobe,
 	return pfd;
 }
 
+static int determine_tracepoint_id(const char* tp_category, const char* tp_name)
+{
+	char file[PATH_MAX];
+	int ret;
+
+	ret = snprintf(file, sizeof(file),
+		       "/sys/kernel/debug/tracing/events/%s/%s/id",
+		       tp_category, tp_name);
+	if (ret < 0)
+		return -errno;
+	if (ret >= sizeof(file)) {
+		pr_debug("tracepoint %s/%s path is too long\n",
+			 tp_category, tp_name);
+		return -E2BIG;
+	}
+	return parse_uint_from_file(file);
+}
+
+static int perf_event_open_tracepoint(const char* tp_category,
+				      const char* tp_name)
+{
+	struct perf_event_attr attr = {};
+	char errmsg[STRERR_BUFSIZE];
+	int tp_id, pfd, err;
+
+	tp_id = determine_tracepoint_id(tp_category, tp_name);
+	if (tp_id < 0){
+		pr_warning("failed to determine tracepoint '%s/%s' perf ID: %s\n",
+			   tp_category, tp_name,
+			   libbpf_strerror_r(tp_id, errmsg, sizeof(errmsg)));
+		return tp_id;
+	}
+
+	memset(&attr, 0, sizeof(attr));
+	attr.type = PERF_TYPE_TRACEPOINT;
+	attr.size = sizeof(attr);
+	attr.config = tp_id;
+
+	pfd = syscall( __NR_perf_event_open, &attr, -1 /* pid */, 0 /* cpu */,
+			-1 /* group_fd */, PERF_FLAG_FD_CLOEXEC);
+	if (pfd < 0) {
+		err = -errno;
+		pr_warning("tracepoint '%s/%s' perf_event_open() failed: %s\n",
+			   tp_category, tp_name,
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return err;
+	}
+	return pfd;
+}
+
+int bpf_program__attach_tracepoint(struct bpf_program *prog,
+				   const char *tp_category,
+				   const char *tp_name)
+{
+	char errmsg[STRERR_BUFSIZE];
+	int pfd, err;
+
+	pfd = perf_event_open_tracepoint(tp_category, tp_name);
+	if (pfd < 0) {
+		pr_warning("program '%s': failed to create tracepoint '%s/%s' perf event: %s\n",
+			   bpf_program__title(prog, false),
+			   tp_category, tp_name,
+			   libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+		return pfd;
+	}
+	err = bpf_program__attach_perf_event(prog, pfd);
+	if (err) {
+		libbpf_perf_event_disable_and_close(pfd);
+		pr_warning("program '%s': failed to attach to tracepoint '%s/%s': %s\n",
+			   bpf_program__title(prog, false),
+			   tp_category, tp_name,
+			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+		return err;
+	}
+	return pfd;
+}
+
+int bpf_program__attach_raw_tracepoint(struct bpf_program *prog,
+				       const char *tp_name)
+{
+	char errmsg[STRERR_BUFSIZE];
+	int bpf_fd, pfd;
+
+	bpf_fd = bpf_program__fd(prog);
+	if (bpf_fd < 0) {
+		pr_warning("program '%s': can't attach before loaded\n",
+			   bpf_program__title(prog, false));
+		return -EINVAL;
+	}
+	pfd = bpf_raw_tracepoint_open(tp_name, bpf_fd);
+	if (pfd < 0) {
+		pr_warning("program '%s': failed to attach to raw tracepoint '%s': %s\n",
+			   bpf_program__title(prog, false), tp_name,
+			   libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+		return pfd;
+	}
+	return pfd;
+}
+
 enum bpf_perf_event_ret
 bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
 			   void **copy_mem, size_t *copy_size,
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index a7264f06aa5f..bf7020a565c6 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -176,6 +176,11 @@ LIBBPF_API int bpf_program__attach_uprobe(struct bpf_program *prog,
 					  pid_t pid,
 					  const char *binary_path,
 					  size_t func_offset);
+LIBBPF_API int bpf_program__attach_tracepoint(struct bpf_program *prog,
+					      const char *tp_category,
+					      const char *tp_name);
+LIBBPF_API int bpf_program__attach_raw_tracepoint(struct bpf_program *prog,
+						  const char *tp_name);
 
 struct bpf_insn;
 
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 1a982c2e1751..2382fbda4cbb 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -174,6 +174,8 @@ LIBBPF_0.0.4 {
 		bpf_object__load_xattr;
 		bpf_program__attach_kprobe;
 		bpf_program__attach_perf_event;
+		bpf_program__attach_raw_tracepoint;
+		bpf_program__attach_tracepoint;
 		bpf_program__attach_uprobe;
 		libbpf_num_possible_cpus;
 		libbpf_perf_event_disable_and_close;
-- 
2.17.1



* [PATCH bpf-next 5/7] selftests/bpf: switch test to new attach_perf_event API
  2019-06-20 23:09 [PATCH bpf-next 0/7] libbpf: add tracing attach APIs Andrii Nakryiko
                   ` (3 preceding siblings ...)
  2019-06-20 23:09 ` [PATCH bpf-next 4/7] libbpf: add tracepoint/raw tracepoint " Andrii Nakryiko
@ 2019-06-20 23:09 ` Andrii Nakryiko
  2019-06-21  0:08   ` Stanislav Fomichev
  2019-06-20 23:09 ` [PATCH bpf-next 6/7] selftests/bpf: add kprobe/uprobe selftests Andrii Nakryiko
  2019-06-20 23:09 ` [PATCH bpf-next 7/7] selftests/bpf: convert existing tracepoint tests to new APIs Andrii Nakryiko
  6 siblings, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2019-06-20 23:09 UTC (permalink / raw)
  To: andrii.nakryiko, ast, daniel, netdev, bpf, kernel-team; +Cc: Andrii Nakryiko

Use the new bpf_program__attach_perf_event() in a test that previously
relied on direct ioctl manipulation.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../bpf/prog_tests/stacktrace_build_id_nmi.c     | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
index 1c1a2f75f3d8..1bbdb0b82ac5 100644
--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
@@ -17,6 +17,7 @@ static __u64 read_perf_max_sample_freq(void)
 void test_stacktrace_build_id_nmi(void)
 {
 	int control_map_fd, stackid_hmap_fd, stackmap_fd, stack_amap_fd;
+	const char *prog_name = "tracepoint/random/urandom_read";
 	const char *file = "./test_stacktrace_build_id.o";
 	int err, pmu_fd, prog_fd;
 	struct perf_event_attr attr = {
@@ -25,6 +26,7 @@ void test_stacktrace_build_id_nmi(void)
 		.config = PERF_COUNT_HW_CPU_CYCLES,
 	};
 	__u32 key, previous_key, val, duration = 0;
+	struct bpf_program *prog;
 	struct bpf_object *obj;
 	char buf[256];
 	int i, j;
@@ -39,6 +41,10 @@ void test_stacktrace_build_id_nmi(void)
 	if (CHECK(err, "prog_load", "err %d errno %d\n", err, errno))
 		return;
 
+	prog = bpf_object__find_program_by_title(obj, prog_name);
+	if (CHECK(!prog, "find_prog", "prog '%s' not found\n", prog_name))
+		goto close_prog;
+
 	pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
 			 0 /* cpu 0 */, -1 /* group id */,
 			 0 /* flags */);
@@ -47,16 +53,10 @@ void test_stacktrace_build_id_nmi(void)
 		  pmu_fd, errno))
 		goto close_prog;
 
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0);
-	if (CHECK(err, "perf_event_ioc_enable", "err %d errno %d\n",
-		  err, errno))
+	err = bpf_program__attach_perf_event(prog, pmu_fd);
+	if (CHECK(err, "attach_perf_event", "err %d\n", err))
 		goto close_pmu;
 
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);
-	if (CHECK(err, "perf_event_ioc_set_bpf", "err %d errno %d\n",
-		  err, errno))
-		goto disable_pmu;
-
 	/* find map fds */
 	control_map_fd = bpf_find_map(__func__, obj, "control_map");
 	if (CHECK(control_map_fd < 0, "bpf_find_map control_map",
-- 
2.17.1



* [PATCH bpf-next 6/7] selftests/bpf: add kprobe/uprobe selftests
  2019-06-20 23:09 [PATCH bpf-next 0/7] libbpf: add tracing attach APIs Andrii Nakryiko
                   ` (4 preceding siblings ...)
  2019-06-20 23:09 ` [PATCH bpf-next 5/7] selftests/bpf: switch test to new attach_perf_event API Andrii Nakryiko
@ 2019-06-20 23:09 ` Andrii Nakryiko
  2019-06-20 23:09 ` [PATCH bpf-next 7/7] selftests/bpf: convert existing tracepoint tests to new APIs Andrii Nakryiko
  6 siblings, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2019-06-20 23:09 UTC (permalink / raw)
  To: andrii.nakryiko, ast, daniel, netdev, bpf, kernel-team; +Cc: Andrii Nakryiko

Add tests verifying kprobe/kretprobe/uprobe/uretprobe APIs work as
expected.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../selftests/bpf/prog_tests/attach_probe.c   | 151 ++++++++++++++++++
 .../selftests/bpf/progs/test_attach_probe.c   |  55 +++++++
 2 files changed, 206 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/attach_probe.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_attach_probe.c

diff --git a/tools/testing/selftests/bpf/prog_tests/attach_probe.c b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
new file mode 100644
index 000000000000..5cc7e674a513
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
@@ -0,0 +1,151 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+
+ssize_t get_base_addr() {
+	size_t start;
+	char buf[256];
+	FILE *f;
+
+	f = fopen("/proc/self/maps", "r");
+	if (!f)
+		return -errno;
+
+	while (fscanf(f, "%zx-%*x %s %*s\n", &start, buf) == 2) {
+		if (strcmp(buf, "r-xp") == 0) {
+			fclose(f);
+			return start;
+		}
+	}
+
+	fclose(f);
+	return -EINVAL;
+}
+
+void test_attach_probe(void)
+{
+	const char *kprobe_name = "kprobe/sys_nanosleep";
+	const char *kretprobe_name = "kretprobe/sys_nanosleep";
+	const char *uprobe_name = "uprobe/trigger_func";
+	const char *uretprobe_name = "uretprobe/trigger_func";
+	const int kprobe_idx = 0, kretprobe_idx = 1;
+	const int uprobe_idx = 2, uretprobe_idx = 3;
+	const char *file = "./test_attach_probe.o";
+	struct bpf_program *kprobe_prog, *kretprobe_prog;
+	struct bpf_program *uprobe_prog, *uretprobe_prog;
+	struct bpf_object *obj;
+	int err, prog_fd, duration = 0, res;
+	int kprobe_pfd = -1, kretprobe_pfd = -1;
+	int uprobe_pfd = -1, uretprobe_pfd = -1;
+	int results_map_fd;
+	size_t uprobe_offset;
+	ssize_t base_addr;
+
+	base_addr = get_base_addr();
+	if (CHECK(base_addr < 0, "get_base_addr",
+		  "failed to find base addr: %zd", base_addr))
+		return;
+	uprobe_offset = (size_t)&get_base_addr - base_addr;
+
+	/* load programs */
+	err = bpf_prog_load(file, BPF_PROG_TYPE_KPROBE, &obj, &prog_fd);
+	if (CHECK(err, "obj_load", "err %d errno %d\n", err, errno))
+		return;
+
+	kprobe_prog = bpf_object__find_program_by_title(obj, kprobe_name);
+	if (CHECK(!kprobe_prog, "find_probe",
+		  "prog '%s' not found\n", kprobe_name))
+		goto cleanup;
+	kretprobe_prog = bpf_object__find_program_by_title(obj, kretprobe_name);
+	if (CHECK(!kretprobe_prog, "find_probe",
+		  "prog '%s' not found\n", kretprobe_name))
+		goto cleanup;
+	uprobe_prog = bpf_object__find_program_by_title(obj, uprobe_name);
+	if (CHECK(!uprobe_prog, "find_probe",
+		  "prog '%s' not found\n", uprobe_name))
+		goto cleanup;
+	uretprobe_prog = bpf_object__find_program_by_title(obj, uretprobe_name);
+	if (CHECK(!uretprobe_prog, "find_probe",
+		  "prog '%s' not found\n", uretprobe_name))
+		goto cleanup;
+
+	/* load maps */
+	results_map_fd = bpf_find_map(__func__, obj, "results_map");
+	if (CHECK(results_map_fd < 0, "find_results_map",
+		  "err %d\n", results_map_fd))
+		goto cleanup;
+
+	kprobe_pfd = bpf_program__attach_kprobe(kprobe_prog,
+						false /* retprobe */,
+						"sys_nanosleep");
+	if (CHECK(kprobe_pfd < 0, "attach_kprobe", "err %d\n", kprobe_pfd))
+		goto cleanup;
+
+	kretprobe_pfd = bpf_program__attach_kprobe(kretprobe_prog,
+						   true /* retprobe */,
+						   "sys_nanosleep");
+	if (CHECK(kretprobe_pfd < 0, "attach_kretprobe",
+		  "err %d\n", kretprobe_pfd))
+		goto cleanup;
+
+	uprobe_pfd = bpf_program__attach_uprobe(uprobe_prog,
+						false /* retprobe */,
+						0 /* self pid */,
+						"/proc/self/exe",
+						uprobe_offset);
+	if (CHECK(uprobe_pfd < 0, "attach_uprobe", "err %d\n", uprobe_pfd))
+		goto cleanup;
+
+	uretprobe_pfd = bpf_program__attach_uprobe(uretprobe_prog,
+						   true /* retprobe */,
+						   -1 /* any pid */,
+						   "/proc/self/exe",
+						   uprobe_offset);
+	if (CHECK(uretprobe_pfd < 0, "attach_uretprobe",
+		  "err %d\n", uretprobe_pfd))
+		goto cleanup;
+
+	/* trigger & validate kprobe && kretprobe */
+	usleep(1);
+
+	err = bpf_map_lookup_elem(results_map_fd, &kprobe_idx, &res);
+	if (CHECK(err, "get_kprobe_res",
+		  "failed to get kprobe res: %d\n", err))
+		goto cleanup;
+	if (CHECK(res != kprobe_idx + 1, "check_kprobe_res",
+		  "wrong kprobe res: %d\n", res))
+		goto cleanup;
+
+	err = bpf_map_lookup_elem(results_map_fd, &kretprobe_idx, &res);
+	if (CHECK(err, "get_kretprobe_res",
+		  "failed to get kretprobe res: %d\n", err))
+		goto cleanup;
+	if (CHECK(res != kretprobe_idx + 1, "check_kretprobe_res",
+		  "wrong kretprobe res: %d\n", res))
+		goto cleanup;
+
+	/* trigger & validate uprobe & uretprobe */
+	get_base_addr();
+
+	err = bpf_map_lookup_elem(results_map_fd, &uprobe_idx, &res);
+	if (CHECK(err, "get_uprobe_res",
+		  "failed to get uprobe res: %d\n", err))
+		goto cleanup;
+	if (CHECK(res != uprobe_idx + 1, "check_uprobe_res",
+		  "wrong uprobe res: %d\n", res))
+		goto cleanup;
+
+	err = bpf_map_lookup_elem(results_map_fd, &uretprobe_idx, &res);
+	if (CHECK(err, "get_uretprobe_res",
+		  "failed to get uretprobe res: %d\n", err))
+		goto cleanup;
+	if (CHECK(res != uretprobe_idx + 1, "check_uretprobe_res",
+		  "wrong uretprobe res: %d\n", res))
+		goto cleanup;
+
+cleanup:
+	libbpf_perf_event_disable_and_close(kprobe_pfd);
+	libbpf_perf_event_disable_and_close(kretprobe_pfd);
+	libbpf_perf_event_disable_and_close(uprobe_pfd);
+	libbpf_perf_event_disable_and_close(uretprobe_pfd);
+	bpf_object__close(obj);
+}
diff --git a/tools/testing/selftests/bpf/progs/test_attach_probe.c b/tools/testing/selftests/bpf/progs/test_attach_probe.c
new file mode 100644
index 000000000000..7a7c5cd728c8
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_attach_probe.c
@@ -0,0 +1,55 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2017 Facebook
+
+#include <linux/ptrace.h>
+#include <linux/bpf.h>
+#include "bpf_helpers.h"
+
+struct {
+	int type;
+	int max_entries;
+	int *key;
+	int *value;
+} results_map SEC(".maps") = {
+	.type = BPF_MAP_TYPE_ARRAY,
+	.max_entries = 4,
+};
+
+SEC("kprobe/sys_nanosleep")
+int handle_sys_nanosleep_entry(struct pt_regs *ctx)
+{
+	const int key = 0, value = 1;
+
+	bpf_map_update_elem(&results_map, &key, &value, 0);
+	return 0;
+}
+
+SEC("kretprobe/sys_nanosleep")
+int handle_sys_getpid_return(struct pt_regs *ctx)
+{
+	const int key = 1, value = 2;
+
+	bpf_map_update_elem(&results_map, &key, &value, 0);
+	return 0;
+}
+
+SEC("uprobe/trigger_func")
+int handle_uprobe_entry(struct pt_regs *ctx)
+{
+	const int key = 2, value = 3;
+
+	bpf_map_update_elem(&results_map, &key, &value, 0);
+	return 0;
+}
+
+SEC("uretprobe/trigger_func")
+int handle_uprobe_return(struct pt_regs *ctx)
+{
+	const int key = 3, value = 4;
+
+	bpf_map_update_elem(&results_map, &key, &value, 0);
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
+__u32 _version SEC("version") = 1;
-- 
2.17.1



* [PATCH bpf-next 7/7] selftests/bpf: convert existing tracepoint tests to new APIs
  2019-06-20 23:09 [PATCH bpf-next 0/7] libbpf: add tracing attach APIs Andrii Nakryiko
                   ` (5 preceding siblings ...)
  2019-06-20 23:09 ` [PATCH bpf-next 6/7] selftests/bpf: add kprobe/uprobe selftests Andrii Nakryiko
@ 2019-06-20 23:09 ` Andrii Nakryiko
  6 siblings, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2019-06-20 23:09 UTC (permalink / raw)
  To: andrii.nakryiko, ast, daniel, netdev, bpf, kernel-team; +Cc: Andrii Nakryiko

Convert existing tests that attach to tracepoints to use the
bpf_program__attach_tracepoint API instead.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../bpf/prog_tests/stacktrace_build_id.c      | 49 +++-------------
 .../selftests/bpf/prog_tests/stacktrace_map.c | 42 +++-----------
 .../bpf/prog_tests/stacktrace_map_raw_tp.c    | 14 ++++-
 .../bpf/prog_tests/task_fd_query_rawtp.c      | 10 +++-
 .../bpf/prog_tests/task_fd_query_tp.c         | 51 +++++------------
 .../bpf/prog_tests/tp_attach_query.c          | 56 ++++++-------------
 6 files changed, 65 insertions(+), 157 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id.c
index 3aab2b083c71..9ef3b66f3644 100644
--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id.c
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id.c
@@ -4,10 +4,11 @@
 void test_stacktrace_build_id(void)
 {
 	int control_map_fd, stackid_hmap_fd, stackmap_fd, stack_amap_fd;
+	const char *prog_name = "tracepoint/random/urandom_read";
 	const char *file = "./test_stacktrace_build_id.o";
-	int bytes, efd, err, pmu_fd, prog_fd, stack_trace_len;
-	struct perf_event_attr attr = {};
+	int err, pmu_fd, prog_fd, stack_trace_len;
 	__u32 key, previous_key, val, duration = 0;
+	struct bpf_program *prog;
 	struct bpf_object *obj;
 	char buf[256];
 	int i, j;
@@ -20,42 +21,14 @@ void test_stacktrace_build_id(void)
 	if (CHECK(err, "prog_load", "err %d errno %d\n", err, errno))
 		goto out;
 
-	/* Get the ID for the sched/sched_switch tracepoint */
-	snprintf(buf, sizeof(buf),
-		 "/sys/kernel/debug/tracing/events/random/urandom_read/id");
-	efd = open(buf, O_RDONLY, 0);
-	if (CHECK(efd < 0, "open", "err %d errno %d\n", efd, errno))
+	prog = bpf_object__find_program_by_title(obj, prog_name);
+	if (CHECK(!prog, "find_prog", "prog '%s' not found\n", prog_name))
 		goto close_prog;
 
-	bytes = read(efd, buf, sizeof(buf));
-	close(efd);
-	if (CHECK(bytes <= 0 || bytes >= sizeof(buf),
-		  "read", "bytes %d errno %d\n", bytes, errno))
+	pmu_fd = bpf_program__attach_tracepoint(prog, "random", "urandom_read");
+	if (CHECK(pmu_fd < 0, "attach_tp", "err %d\n", pmu_fd))
 		goto close_prog;
 
-	/* Open the perf event and attach bpf progrram */
-	attr.config = strtol(buf, NULL, 0);
-	attr.type = PERF_TYPE_TRACEPOINT;
-	attr.sample_type = PERF_SAMPLE_RAW | PERF_SAMPLE_CALLCHAIN;
-	attr.sample_period = 1;
-	attr.wakeup_events = 1;
-	pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
-			 0 /* cpu 0 */, -1 /* group id */,
-			 0 /* flags */);
-	if (CHECK(pmu_fd < 0, "perf_event_open", "err %d errno %d\n",
-		  pmu_fd, errno))
-		goto close_prog;
-
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0);
-	if (CHECK(err, "perf_event_ioc_enable", "err %d errno %d\n",
-		  err, errno))
-		goto close_pmu;
-
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);
-	if (CHECK(err, "perf_event_ioc_set_bpf", "err %d errno %d\n",
-		  err, errno))
-		goto disable_pmu;
-
 	/* find map fds */
 	control_map_fd = bpf_find_map(__func__, obj, "control_map");
 	if (CHECK(control_map_fd < 0, "bpf_find_map control_map",
@@ -133,8 +106,7 @@ void test_stacktrace_build_id(void)
 	 * try it one more time.
 	 */
 	if (build_id_matches < 1 && retry--) {
-		ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
-		close(pmu_fd);
+		libbpf_perf_event_disable_and_close(pmu_fd);
 		bpf_object__close(obj);
 		printf("%s:WARN:Didn't find expected build ID from the map, retrying\n",
 		       __func__);
@@ -152,10 +124,7 @@ void test_stacktrace_build_id(void)
 	      "err %d errno %d\n", err, errno);
 
 disable_pmu:
-	ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
-
-close_pmu:
-	close(pmu_fd);
+	libbpf_perf_event_disable_and_close(pmu_fd);
 
 close_prog:
 	bpf_object__close(obj);
diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_map.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_map.c
index 2bfd50a0d6d1..df0716e69b96 100644
--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_map.c
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_map.c
@@ -4,50 +4,25 @@
 void test_stacktrace_map(void)
 {
 	int control_map_fd, stackid_hmap_fd, stackmap_fd, stack_amap_fd;
+	const char *prog_name = "tracepoint/sched/sched_switch";
+	int efd, err, prog_fd, stack_trace_len;
 	const char *file = "./test_stacktrace_map.o";
-	int bytes, efd, err, pmu_fd, prog_fd, stack_trace_len;
-	struct perf_event_attr attr = {};
 	__u32 key, val, duration = 0;
+	struct bpf_program *prog;
 	struct bpf_object *obj;
-	char buf[256];
 
 	err = bpf_prog_load(file, BPF_PROG_TYPE_TRACEPOINT, &obj, &prog_fd);
 	if (CHECK(err, "prog_load", "err %d errno %d\n", err, errno))
 		return;
 
-	/* Get the ID for the sched/sched_switch tracepoint */
-	snprintf(buf, sizeof(buf),
-		 "/sys/kernel/debug/tracing/events/sched/sched_switch/id");
-	efd = open(buf, O_RDONLY, 0);
-	if (CHECK(efd < 0, "open", "err %d errno %d\n", efd, errno))
+	prog = bpf_object__find_program_by_title(obj, prog_name);
+	if (CHECK(!prog, "find_prog", "prog '%s' not found\n", prog_name))
 		goto close_prog;
 
-	bytes = read(efd, buf, sizeof(buf));
-	close(efd);
-	if (bytes <= 0 || bytes >= sizeof(buf))
-		goto close_prog;
-
-	/* Open the perf event and attach bpf progrram */
-	attr.config = strtol(buf, NULL, 0);
-	attr.type = PERF_TYPE_TRACEPOINT;
-	attr.sample_type = PERF_SAMPLE_RAW | PERF_SAMPLE_CALLCHAIN;
-	attr.sample_period = 1;
-	attr.wakeup_events = 1;
-	pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
-			 0 /* cpu 0 */, -1 /* group id */,
-			 0 /* flags */);
-	if (CHECK(pmu_fd < 0, "perf_event_open", "err %d errno %d\n",
-		  pmu_fd, errno))
+	efd = bpf_program__attach_tracepoint(prog, "sched", "sched_switch");
+	if (CHECK(efd < 0, "attach_tp", "err %d\n", efd))
 		goto close_prog;
 
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0);
-	if (err)
-		goto disable_pmu;
-
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);
-	if (err)
-		goto disable_pmu;
-
 	/* find map fds */
 	control_map_fd = bpf_find_map(__func__, obj, "control_map");
 	if (control_map_fd < 0)
@@ -96,8 +71,7 @@ void test_stacktrace_map(void)
 disable_pmu:
 	error_cnt++;
 disable_pmu_noerr:
-	ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
-	close(pmu_fd);
+	close(efd);
 close_prog:
 	bpf_object__close(obj);
 }
diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
index 1f8387d80fd7..4d14a08b1d99 100644
--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
+++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c
@@ -3,18 +3,24 @@
 
 void test_stacktrace_map_raw_tp(void)
 {
+	const char *prog_name = "tracepoint/sched/sched_switch";
 	int control_map_fd, stackid_hmap_fd, stackmap_fd;
 	const char *file = "./test_stacktrace_map.o";
-	int efd, err, prog_fd;
 	__u32 key, val, duration = 0;
+	int efd = -1, err, prog_fd;
+	struct bpf_program *prog;
 	struct bpf_object *obj;
 
 	err = bpf_prog_load(file, BPF_PROG_TYPE_RAW_TRACEPOINT, &obj, &prog_fd);
 	if (CHECK(err, "prog_load raw tp", "err %d errno %d\n", err, errno))
 		return;
 
-	efd = bpf_raw_tracepoint_open("sched_switch", prog_fd);
-	if (CHECK(efd < 0, "raw_tp_open", "err %d errno %d\n", efd, errno))
+	prog = bpf_object__find_program_by_title(obj, prog_name);
+	if (CHECK(!prog, "find_prog", "prog '%s' not found\n", prog_name))
+		goto close_prog;
+
+	efd = bpf_program__attach_raw_tracepoint(prog, "sched_switch");
+	if (CHECK(efd < 0, "attach_raw_tp", "err %d\n", efd))
 		goto close_prog;
 
 	/* find map fds */
@@ -55,5 +61,7 @@ void test_stacktrace_map_raw_tp(void)
 close_prog:
 	error_cnt++;
 close_prog_noerr:
+	if (efd >= 0)
+		close(efd);
 	bpf_object__close(obj);
 }
diff --git a/tools/testing/selftests/bpf/prog_tests/task_fd_query_rawtp.c b/tools/testing/selftests/bpf/prog_tests/task_fd_query_rawtp.c
index 958a3d88de99..6ad73cb8c7e3 100644
--- a/tools/testing/selftests/bpf/prog_tests/task_fd_query_rawtp.c
+++ b/tools/testing/selftests/bpf/prog_tests/task_fd_query_rawtp.c
@@ -3,9 +3,11 @@
 
 void test_task_fd_query_rawtp(void)
 {
+	const char *prog_name = "tracepoint/raw_syscalls/sys_enter";
 	const char *file = "./test_get_stack_rawtp.o";
 	__u64 probe_offset, probe_addr;
 	__u32 len, prog_id, fd_type;
+	struct bpf_program *prog;
 	struct bpf_object *obj;
 	int efd, err, prog_fd;
 	__u32 duration = 0;
@@ -15,8 +17,12 @@ void test_task_fd_query_rawtp(void)
 	if (CHECK(err, "prog_load raw tp", "err %d errno %d\n", err, errno))
 		return;
 
-	efd = bpf_raw_tracepoint_open("sys_enter", prog_fd);
-	if (CHECK(efd < 0, "raw_tp_open", "err %d errno %d\n", efd, errno))
+	prog = bpf_object__find_program_by_title(obj, prog_name);
+	if (CHECK(!prog, "find_prog", "prog '%s' not found\n", prog_name))
+		goto close_prog;
+
+	efd = bpf_program__attach_raw_tracepoint(prog, "sys_enter");
+	if (CHECK(efd < 0, "attach_raw_tp", "err %d\n", efd))
 		goto close_prog;
 
 	/* query (getpid(), efd) */
diff --git a/tools/testing/selftests/bpf/prog_tests/task_fd_query_tp.c b/tools/testing/selftests/bpf/prog_tests/task_fd_query_tp.c
index f9b70e81682b..034870692636 100644
--- a/tools/testing/selftests/bpf/prog_tests/task_fd_query_tp.c
+++ b/tools/testing/selftests/bpf/prog_tests/task_fd_query_tp.c
@@ -1,15 +1,16 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <test_progs.h>
 
-static void test_task_fd_query_tp_core(const char *probe_name,
+static void test_task_fd_query_tp_core(const char *tp_category,
 				       const char *tp_name)
 {
+	const char *prog_name = "tracepoint/sched/sched_switch";
 	const char *file = "./test_tracepoint.o";
-	int err, bytes, efd, prog_fd, pmu_fd;
-	struct perf_event_attr attr = {};
 	__u64 probe_offset, probe_addr;
 	__u32 len, prog_id, fd_type;
+	int err, prog_fd, pmu_fd;
 	struct bpf_object *obj = NULL;
+	struct bpf_program *prog;
 	__u32 duration = 0;
 	char buf[256];
 
@@ -17,37 +18,13 @@ static void test_task_fd_query_tp_core(const char *probe_name,
 	if (CHECK(err, "bpf_prog_load", "err %d errno %d\n", err, errno))
 		goto close_prog;
 
-	snprintf(buf, sizeof(buf),
-		 "/sys/kernel/debug/tracing/events/%s/id", probe_name);
-	efd = open(buf, O_RDONLY, 0);
-	if (CHECK(efd < 0, "open", "err %d errno %d\n", efd, errno))
+	prog = bpf_object__find_program_by_title(obj, prog_name);
+	if (CHECK(!prog, "find_prog", "prog '%s' not found\n", prog_name))
 		goto close_prog;
-	bytes = read(efd, buf, sizeof(buf));
-	close(efd);
-	if (CHECK(bytes <= 0 || bytes >= sizeof(buf), "read",
-		  "bytes %d errno %d\n", bytes, errno))
-		goto close_prog;
-
-	attr.config = strtol(buf, NULL, 0);
-	attr.type = PERF_TYPE_TRACEPOINT;
-	attr.sample_type = PERF_SAMPLE_RAW;
-	attr.sample_period = 1;
-	attr.wakeup_events = 1;
-	pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
-			 0 /* cpu 0 */, -1 /* group id */,
-			 0 /* flags */);
-	if (CHECK(err, "perf_event_open", "err %d errno %d\n", err, errno))
-		goto close_pmu;
 
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0);
-	if (CHECK(err, "perf_event_ioc_enable", "err %d errno %d\n", err,
-		  errno))
-		goto close_pmu;
-
-	err = ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);
-	if (CHECK(err, "perf_event_ioc_set_bpf", "err %d errno %d\n", err,
-		  errno))
-		goto close_pmu;
+	pmu_fd = bpf_program__attach_tracepoint(prog, tp_category, tp_name);
+	if (CHECK(pmu_fd < 0, "attach_tp", "err %d\n", pmu_fd))
+		goto close_prog;
 
 	/* query (getpid(), pmu_fd) */
 	len = sizeof(buf);
@@ -62,11 +39,11 @@ static void test_task_fd_query_tp_core(const char *probe_name,
 		  fd_type, buf))
 		goto close_pmu;
 
-	close(pmu_fd);
+	libbpf_perf_event_disable_and_close(pmu_fd);
 	goto close_prog_noerr;
 
 close_pmu:
-	close(pmu_fd);
+	libbpf_perf_event_disable_and_close(pmu_fd);
 close_prog:
 	error_cnt++;
 close_prog_noerr:
@@ -75,8 +52,6 @@ static void test_task_fd_query_tp_core(const char *probe_name,
 
 void test_task_fd_query_tp(void)
 {
-	test_task_fd_query_tp_core("sched/sched_switch",
-				   "sched_switch");
-	test_task_fd_query_tp_core("syscalls/sys_enter_read",
-				   "sys_enter_read");
+	test_task_fd_query_tp_core("sched", "sched_switch");
+	test_task_fd_query_tp_core("syscalls", "sys_enter_read");
 }
diff --git a/tools/testing/selftests/bpf/prog_tests/tp_attach_query.c b/tools/testing/selftests/bpf/prog_tests/tp_attach_query.c
index fb095e5cd9af..5e129eb3eb47 100644
--- a/tools/testing/selftests/bpf/prog_tests/tp_attach_query.c
+++ b/tools/testing/selftests/bpf/prog_tests/tp_attach_query.c
@@ -6,9 +6,9 @@ void test_tp_attach_query(void)
 	const int num_progs = 3;
 	int i, j, bytes, efd, err, prog_fd[num_progs], pmu_fd[num_progs];
 	__u32 duration = 0, info_len, saved_prog_ids[num_progs];
+	const char *prog_name = "tracepoint/sched/sched_switch";
 	const char *file = "./test_tracepoint.o";
 	struct perf_event_query_bpf *query;
-	struct perf_event_attr attr = {};
 	struct bpf_object *obj[num_progs];
 	struct bpf_prog_info prog_info;
 	char buf[256];
@@ -27,19 +27,19 @@ void test_tp_attach_query(void)
 		  "read", "bytes %d errno %d\n", bytes, errno))
 		return;
 
-	attr.config = strtol(buf, NULL, 0);
-	attr.type = PERF_TYPE_TRACEPOINT;
-	attr.sample_type = PERF_SAMPLE_RAW | PERF_SAMPLE_CALLCHAIN;
-	attr.sample_period = 1;
-	attr.wakeup_events = 1;
-
 	query = malloc(sizeof(*query) + sizeof(__u32) * num_progs);
 	for (i = 0; i < num_progs; i++) {
+		struct bpf_program *prog;
+
 		err = bpf_prog_load(file, BPF_PROG_TYPE_TRACEPOINT, &obj[i],
 				    &prog_fd[i]);
 		if (CHECK(err, "prog_load", "err %d errno %d\n", err, errno))
 			goto cleanup1;
 
+		prog = bpf_object__find_program_by_title(obj[i], prog_name);
+		if (CHECK(!prog, "find_prog", "prog '%s' not found\n", prog_name))
+			goto cleanup1;
+
 		bzero(&prog_info, sizeof(prog_info));
 		prog_info.jited_prog_len = 0;
 		prog_info.xlated_prog_len = 0;
@@ -51,32 +51,10 @@ void test_tp_attach_query(void)
 			goto cleanup1;
 		saved_prog_ids[i] = prog_info.id;
 
-		pmu_fd[i] = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
-				    0 /* cpu 0 */, -1 /* group id */,
-				    0 /* flags */);
-		if (CHECK(pmu_fd[i] < 0, "perf_event_open", "err %d errno %d\n",
-			  pmu_fd[i], errno))
+		pmu_fd[i] = bpf_program__attach_tracepoint(prog, "sched",
+							   "sched_switch");
+		if (CHECK(pmu_fd[i] < 0, "attach_tp", "err %d\n", pmu_fd[i]))
 			goto cleanup2;
-		err = ioctl(pmu_fd[i], PERF_EVENT_IOC_ENABLE, 0);
-		if (CHECK(err, "perf_event_ioc_enable", "err %d errno %d\n",
-			  err, errno))
-			goto cleanup3;
-
-		if (i == 0) {
-			/* check NULL prog array query */
-			query->ids_len = num_progs;
-			err = ioctl(pmu_fd[i], PERF_EVENT_IOC_QUERY_BPF, query);
-			if (CHECK(err || query->prog_cnt != 0,
-				  "perf_event_ioc_query_bpf",
-				  "err %d errno %d query->prog_cnt %u\n",
-				  err, errno, query->prog_cnt))
-				goto cleanup3;
-		}
-
-		err = ioctl(pmu_fd[i], PERF_EVENT_IOC_SET_BPF, prog_fd[i]);
-		if (CHECK(err, "perf_event_ioc_set_bpf", "err %d errno %d\n",
-			  err, errno))
-			goto cleanup3;
 
 		if (i == 1) {
 			/* try to get # of programs only */
@@ -86,7 +64,7 @@ void test_tp_attach_query(void)
 				  "perf_event_ioc_query_bpf",
 				  "err %d errno %d query->prog_cnt %u\n",
 				  err, errno, query->prog_cnt))
-				goto cleanup3;
+				goto cleanup2;
 
 			/* try a few negative tests */
 			/* invalid query pointer */
@@ -95,7 +73,7 @@ void test_tp_attach_query(void)
 			if (CHECK(!err || errno != EFAULT,
 				  "perf_event_ioc_query_bpf",
 				  "err %d errno %d\n", err, errno))
-				goto cleanup3;
+				goto cleanup2;
 
 			/* no enough space */
 			query->ids_len = 1;
@@ -104,7 +82,7 @@ void test_tp_attach_query(void)
 				  "perf_event_ioc_query_bpf",
 				  "err %d errno %d query->prog_cnt %u\n",
 				  err, errno, query->prog_cnt))
-				goto cleanup3;
+				goto cleanup2;
 		}
 
 		query->ids_len = num_progs;
@@ -113,21 +91,19 @@ void test_tp_attach_query(void)
 			  "perf_event_ioc_query_bpf",
 			  "err %d errno %d query->prog_cnt %u\n",
 			  err, errno, query->prog_cnt))
-			goto cleanup3;
+			goto cleanup2;
 		for (j = 0; j < i + 1; j++)
 			if (CHECK(saved_prog_ids[j] != query->ids[j],
 				  "perf_event_ioc_query_bpf",
 				  "#%d saved_prog_id %x query prog_id %x\n",
 				  j, saved_prog_ids[j], query->ids[j]))
-				goto cleanup3;
+				goto cleanup2;
 	}
 
 	i = num_progs - 1;
 	for (; i >= 0; i--) {
- cleanup3:
-		ioctl(pmu_fd[i], PERF_EVENT_IOC_DISABLE);
  cleanup2:
-		close(pmu_fd[i]);
+		libbpf_perf_event_disable_and_close(pmu_fd[i]);
  cleanup1:
 		bpf_object__close(obj[i]);
 	}
-- 
2.17.1



* Re: [PATCH bpf-next 2/7] libbpf: add ability to attach/detach BPF to perf event
  2019-06-20 23:09 ` [PATCH bpf-next 2/7] libbpf: add ability to attach/detach BPF to perf event Andrii Nakryiko
@ 2019-06-21  0:01   ` Stanislav Fomichev
  2019-06-21  4:28     ` Andrii Nakryiko
  0 siblings, 1 reply; 16+ messages in thread
From: Stanislav Fomichev @ 2019-06-21  0:01 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: andrii.nakryiko, ast, daniel, netdev, bpf, kernel-team

On 06/20, Andrii Nakryiko wrote:
> bpf_program__attach_perf_event allows attaching a BPF program to an existing
> perf event, providing the most generic and lowest-level way to attach BPF
> programs.
> 
> The libbpf_perf_event_disable_and_close API is added to disable and close an
> existing perf event by its FD.
> 
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
>  tools/lib/bpf/libbpf.c   | 41 ++++++++++++++++++++++++++++++++++++++++
>  tools/lib/bpf/libbpf.h   |  4 ++++
>  tools/lib/bpf/libbpf.map |  2 ++
>  3 files changed, 47 insertions(+)
> 
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 8ce3beba8551..2bb1fa008be3 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -32,6 +32,7 @@
>  #include <linux/limits.h>
>  #include <linux/perf_event.h>
>  #include <linux/ring_buffer.h>
> +#include <sys/ioctl.h>
>  #include <sys/stat.h>
>  #include <sys/types.h>
>  #include <sys/vfs.h>
> @@ -3928,6 +3929,46 @@ int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
>  	return 0;
>  }
>  
[..]
> +int libbpf_perf_event_disable_and_close(int pfd)
nit: why not call it libbpf_perf_event_detach[_and_close]?
It's usually attach/detach.

> +{
> +	int err;
> +
> +	if (pfd < 0)
> +		return 0;
> +
> +	err = ioctl(pfd, PERF_EVENT_IOC_DISABLE, 0);
> +	close(pfd);
> +	return err;
> +}
> +
> +int bpf_program__attach_perf_event(struct bpf_program *prog, int pfd)
> +{
> +	char errmsg[STRERR_BUFSIZE];
> +	int bpf_fd, err;
> +
> +	bpf_fd = bpf_program__fd(prog);
> +	if (bpf_fd < 0) {
> +		pr_warning("program '%s': can't attach before loaded\n",
> +			   bpf_program__title(prog, false));
> +		return -EINVAL;
> +	}
> +	if (ioctl(pfd, PERF_EVENT_IOC_SET_BPF, bpf_fd) < 0) {
> +		err = -errno;
> +		pr_warning("program '%s': failed to attach to pfd %d: %s\n",
> +			   bpf_program__title(prog, false), pfd,
> +			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
> +		return err;
> +	}
> +	if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
> +		err = -errno;
> +		pr_warning("program '%s': failed to enable pfd %d: %s\n",
> +			   bpf_program__title(prog, false), pfd,
> +			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
> +		return err;
> +	}
> +	return 0;
> +}
> +
>  enum bpf_perf_event_ret
>  bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
>  			   void **copy_mem, size_t *copy_size,
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index d639f47e3110..76db1bbc0dac 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -165,6 +165,10 @@ LIBBPF_API int bpf_program__pin(struct bpf_program *prog, const char *path);
>  LIBBPF_API int bpf_program__unpin(struct bpf_program *prog, const char *path);
>  LIBBPF_API void bpf_program__unload(struct bpf_program *prog);
>  
> +LIBBPF_API int libbpf_perf_event_disable_and_close(int pfd);
> +LIBBPF_API int bpf_program__attach_perf_event(struct bpf_program *prog,
> +					      int pfd);
> +
>  struct bpf_insn;
>  
>  /*
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index 2c6d835620d2..d27406982b5a 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -172,5 +172,7 @@ LIBBPF_0.0.4 {
>  		btf_dump__new;
>  		btf__parse_elf;
>  		bpf_object__load_xattr;
> +		bpf_program__attach_perf_event;
>  		libbpf_num_possible_cpus;
> +		libbpf_perf_event_disable_and_close;
>  } LIBBPF_0.0.3;
> -- 
> 2.17.1
> 


* Re: [PATCH bpf-next 3/7] libbpf: add kprobe/uprobe attach API
  2019-06-20 23:09 ` [PATCH bpf-next 3/7] libbpf: add kprobe/uprobe attach API Andrii Nakryiko
@ 2019-06-21  0:04   ` Stanislav Fomichev
  2019-06-21  4:34     ` Andrii Nakryiko
  0 siblings, 1 reply; 16+ messages in thread
From: Stanislav Fomichev @ 2019-06-21  0:04 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: andrii.nakryiko, ast, daniel, netdev, bpf, kernel-team

On 06/20, Andrii Nakryiko wrote:
> Add the ability to attach BPF programs to kernel and user probes and
> retprobes. The implementation depends on perf event support for
> kprobes/uprobes.
> 
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
>  tools/lib/bpf/libbpf.c   | 207 +++++++++++++++++++++++++++++++++++++++
>  tools/lib/bpf/libbpf.h   |   8 ++
>  tools/lib/bpf/libbpf.map |   2 +
>  3 files changed, 217 insertions(+)
> 
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 2bb1fa008be3..11329e05530e 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -3969,6 +3969,213 @@ int bpf_program__attach_perf_event(struct bpf_program *prog, int pfd)
>  	return 0;
>  }
>  
> +static int parse_uint(const char *buf)
> +{
> +	int ret;
> +
> +	errno = 0;
> +	ret = (int)strtol(buf, NULL, 10);
> +	if (errno) {
> +		ret = -errno;
> +		pr_debug("failed to parse '%s' as unsigned int\n", buf);
> +		return ret;
> +	}
> +	if (ret < 0) {
> +		pr_debug("failed to parse '%s' as unsigned int\n", buf);
> +		return -EINVAL;
> +	}
> +	return ret;
> +}
> +
> +static int parse_uint_from_file(const char* file)
> +{
> +	char buf[STRERR_BUFSIZE];
> +	int fd, ret;
> +
> +	fd = open(file, O_RDONLY);
> +	if (fd < 0) {
> +		ret = -errno;
> +		pr_debug("failed to open '%s': %s\n", file,
> +			 libbpf_strerror_r(ret, buf, sizeof(buf)));
> +		return ret;
> +	}
> +	ret = read(fd, buf, sizeof(buf));
> +	close(fd);
> +	if (ret < 0) {
> +		ret = -errno;
Is -errno still valid here after a close(fd) above? Do we have any
guarantee of errno preservation when we do another syscall?

> +		pr_debug("failed to read '%s': %s\n", file,
> +			libbpf_strerror_r(ret, buf, sizeof(buf)));
> +		return ret;
> +	}
> +	if (ret == 0 || ret >= sizeof(buf)) {
> +		buf[sizeof(buf) - 1] = 0;
> +		pr_debug("unexpected input from '%s': '%s'\n", file, buf);
> +		return -EINVAL;
> +	}
> +	return parse_uint(buf);
> +}
> +
> +static int determine_kprobe_perf_type(void)
> +{
> +	const char *file = "/sys/bus/event_source/devices/kprobe/type";
> +	return parse_uint_from_file(file);
> +}
> +
> +static int determine_uprobe_perf_type(void)
> +{
> +	const char *file = "/sys/bus/event_source/devices/uprobe/type";
> +	return parse_uint_from_file(file);
> +}
> +
> +static int parse_config_from_file(const char *file)
> +{
> +	char buf[STRERR_BUFSIZE];
> +	int fd, ret;
> +
> +	fd = open(file, O_RDONLY);
> +	if (fd < 0) {
> +		ret = -errno;
> +		pr_debug("failed to open '%s': %s\n", file,
> +			 libbpf_strerror_r(ret, buf, sizeof(buf)));
> +		return ret;
> +	}
> +	ret = read(fd, buf, sizeof(buf));
> +	close(fd);
> +	if (ret < 0) {
> +		ret = -errno;
> +		pr_debug("failed to read '%s': %s\n", file,
> +			libbpf_strerror_r(ret, buf, sizeof(buf)));
> +		return ret;
> +	}
> +	if (ret == 0 || ret >= sizeof(buf)) {
> +		buf[sizeof(buf) - 1] = 0;
> +		pr_debug("unexpected input from '%s': '%s'\n", file, buf);
> +		return -EINVAL;
> +	}
> +	if (strncmp(buf, "config:", 7)) {
> +		pr_debug("expected 'config:' prefix, found '%s'\n", buf);
> +		return -EINVAL;
> +	}
> +	return parse_uint(buf + 7);
> +}
> +
> +static int determine_kprobe_retprobe_bit(void)
> +{
> +	const char *file = "/sys/bus/event_source/devices/kprobe/format/retprobe";
> +	return parse_config_from_file(file);
> +}
> +
> +static int determine_uprobe_retprobe_bit(void)
> +{
> +	const char *file = "/sys/bus/event_source/devices/uprobe/format/retprobe";
> +	return parse_config_from_file(file);
> +}
> +
> +static int perf_event_open_probe(bool uprobe, bool retprobe, const char* name,
> +				 uint64_t offset, int pid)
> +{
> +	struct perf_event_attr attr = {};
> +	char errmsg[STRERR_BUFSIZE];
> +	int type, pfd, err;
> +
> +	type = uprobe ? determine_uprobe_perf_type()
> +		      : determine_kprobe_perf_type();
> +	if (type < 0) {
> +		pr_warning("failed to determine %s perf type: %s\n",
> +			   uprobe ? "uprobe" : "kprobe",
> +			   libbpf_strerror_r(type, errmsg, sizeof(errmsg)));
> +		return type;
> +	}
> +	if (retprobe) {
> +		int bit = uprobe ? determine_uprobe_retprobe_bit()
> +				 : determine_kprobe_retprobe_bit();
> +
> +		if (bit < 0) {
> +			pr_warning("failed to determine %s retprobe bit: %s\n",
> +				   uprobe ? "uprobe" : "kprobe",
> +				   libbpf_strerror_r(bit, errmsg,
> +						     sizeof(errmsg)));
> +			return bit;
> +		}
> +		attr.config |= 1 << bit;
> +	}
> +	attr.size = sizeof(attr);
> +	attr.type = type;
> +	attr.config1 = (uint64_t)(void *)name; /* kprobe_func or uprobe_path */
> +	attr.config2 = offset;		       /* kprobe_addr or probe_offset */
> +
> +	/* pid filter is meaningful only for uprobes */
> +	pfd = syscall(__NR_perf_event_open, &attr,
> +		      pid < 0 ? -1 : pid /* pid */,
> +		      pid == -1 ? 0 : -1 /* cpu */,
> +		      -1 /* group_fd */, PERF_FLAG_FD_CLOEXEC);
> +	if (pfd < 0) {
> +		err = -errno;
> +		pr_warning("%s perf_event_open() failed: %s\n",
> +			   uprobe ? "uprobe" : "kprobe",
> +			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
> +		return err;
> +	}
> +	return pfd;
> +}
> +
> +int bpf_program__attach_kprobe(struct bpf_program *prog, bool retprobe,
> +			       const char *func_name)
> +{
> +	char errmsg[STRERR_BUFSIZE];
> +	int pfd, err;
> +
> +	pfd = perf_event_open_probe(false /* uprobe */, retprobe, func_name,
> +				    0 /* offset */, -1 /* pid */);
> +	if (pfd < 0) {
> +		pr_warning("program '%s': failed to create %s '%s' perf event: %s\n",
> +			   bpf_program__title(prog, false),
> +			   retprobe ? "kretprobe" : "kprobe", func_name,
> +			   libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
> +		return pfd;
> +	}
> +	err = bpf_program__attach_perf_event(prog, pfd);
> +	if (err) {
> +		libbpf_perf_event_disable_and_close(pfd);
> +		pr_warning("program '%s': failed to attach to %s '%s': %s\n",
> +			   bpf_program__title(prog, false),
> +			   retprobe ? "kretprobe" : "kprobe", func_name,
> +			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
> +		return err;
> +	}
> +	return pfd;
> +}
> +
> +int bpf_program__attach_uprobe(struct bpf_program *prog, bool retprobe,
> +			       pid_t pid, const char *binary_path,
> +			       size_t func_offset)
> +{
> +	char errmsg[STRERR_BUFSIZE];
> +	int pfd, err;
> +
> +	pfd = perf_event_open_probe(true /* uprobe */, retprobe,
> +				    binary_path, func_offset, pid);
> +	if (pfd < 0) {
> +		pr_warning("program '%s': failed to create %s '%s:0x%zx' perf event: %s\n",
> +			   bpf_program__title(prog, false),
> +			   retprobe ? "uretprobe" : "uprobe",
> +			   binary_path, func_offset,
> +			   libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
> +		return pfd;
> +	}
> +	err = bpf_program__attach_perf_event(prog, pfd);
> +	if (err) {
> +		libbpf_perf_event_disable_and_close(pfd);
> +		pr_warning("program '%s': failed to attach to %s '%s:0x%zx': %s\n",
> +			   bpf_program__title(prog, false),
> +			   retprobe ? "uretprobe" : "uprobe",
> +			   binary_path, func_offset,
> +			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
> +		return err;
> +	}
> +	return pfd;
> +}
> +
>  enum bpf_perf_event_ret
>  bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
>  			   void **copy_mem, size_t *copy_size,
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index 76db1bbc0dac..a7264f06aa5f 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -168,6 +168,14 @@ LIBBPF_API void bpf_program__unload(struct bpf_program *prog);
>  LIBBPF_API int libbpf_perf_event_disable_and_close(int pfd);
>  LIBBPF_API int bpf_program__attach_perf_event(struct bpf_program *prog,
>  					      int pfd);
> +LIBBPF_API int bpf_program__attach_kprobe(struct bpf_program *prog,
> +					  bool retprobe,
> +					  const char *func_name);
> +LIBBPF_API int bpf_program__attach_uprobe(struct bpf_program *prog,
> +					  bool retprobe,
> +					  pid_t pid,
> +					  const char *binary_path,
> +					  size_t func_offset);
>  
>  struct bpf_insn;
>  
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index d27406982b5a..1a982c2e1751 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -172,7 +172,9 @@ LIBBPF_0.0.4 {
>  		btf_dump__new;
>  		btf__parse_elf;
>  		bpf_object__load_xattr;
> +		bpf_program__attach_kprobe;
>  		bpf_program__attach_perf_event;
> +		bpf_program__attach_uprobe;
>  		libbpf_num_possible_cpus;
>  		libbpf_perf_event_disable_and_close;
>  } LIBBPF_0.0.3;
> -- 
> 2.17.1
> 

* Re: [PATCH bpf-next 4/7] libbpf: add tracepoint/raw tracepoint attach API
  2019-06-20 23:09 ` [PATCH bpf-next 4/7] libbpf: add tracepoint/raw tracepoint " Andrii Nakryiko
@ 2019-06-21  0:07   ` Stanislav Fomichev
  2019-06-21  4:42     ` Andrii Nakryiko
  0 siblings, 1 reply; 16+ messages in thread
From: Stanislav Fomichev @ 2019-06-21  0:07 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: andrii.nakryiko, ast, daniel, netdev, bpf, kernel-team

On 06/20, Andrii Nakryiko wrote:
> Add APIs allowing to attach BPF program to kernel tracepoints. Raw
> tracepoint attach API is also added for uniform per-BPF-program API,
> but is mostly a wrapper around existing bpf_raw_tracepoint_open call.
> 
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
>  tools/lib/bpf/libbpf.c   | 99 ++++++++++++++++++++++++++++++++++++++++
>  tools/lib/bpf/libbpf.h   |  5 ++
>  tools/lib/bpf/libbpf.map |  2 +
>  3 files changed, 106 insertions(+)
> 
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 11329e05530e..cefe67ba160b 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -4176,6 +4176,105 @@ int bpf_program__attach_uprobe(struct bpf_program *prog, bool retprobe,
>  	return pfd;
>  }
>  
> +static int determine_tracepoint_id(const char* tp_category, const char* tp_name)
> +{
> +	char file[PATH_MAX];
> +	int ret;
> +
> +	ret = snprintf(file, sizeof(file),
> +		       "/sys/kernel/debug/tracing/events/%s/%s/id",
> +		       tp_category, tp_name);
> +	if (ret < 0)
> +		return -errno;
> +	if (ret >= sizeof(file)) {
> +		pr_debug("tracepoint %s/%s path is too long\n",
> +			 tp_category, tp_name);
> +		return -E2BIG;
> +	}
> +	return parse_uint_from_file(file);
> +}
> +
> +static int perf_event_open_tracepoint(const char* tp_category,
> +				      const char* tp_name)
> +{
> +	struct perf_event_attr attr = {};
> +	char errmsg[STRERR_BUFSIZE];
> +	int tp_id, pfd, err;
> +
[..]
> +	tp_id = determine_tracepoint_id(tp_category, tp_name);
Why not assign to attr.config directly here?
You can move all other constants to the initialization as well:

struct perf_event_attr attr = {
	.type = PERF_TYPE_TRACEPOINT,
	.size = sizeof(struct perf_event_attr),
};

attr.config = determine_tracepoint_id(...);

(I guess that's a matter of style, but something to consider).

> +	if (tp_id < 0){
> +		pr_warning("failed to determine tracepoint '%s/%s' perf ID: %s\n",
> +			   tp_category, tp_name,
> +			   libbpf_strerror_r(tp_id, errmsg, sizeof(errmsg)));
> +		return tp_id;
> +	}
> +
[..]
> +	memset(&attr, 0, sizeof(attr));
Not needed since you do attr = {}; above?

> +	attr.type = PERF_TYPE_TRACEPOINT;
> +	attr.size = sizeof(attr);
> +	attr.config = tp_id;
> +
> +	pfd = syscall( __NR_perf_event_open, &attr, -1 /* pid */, 0 /* cpu */,
> +			-1 /* group_fd */, PERF_FLAG_FD_CLOEXEC);
> +	if (pfd < 0) {
> +		err = -errno;
> +		pr_warning("tracepoint '%s/%s' perf_event_open() failed: %s\n",
> +			   tp_category, tp_name,
> +			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
> +		return err;
> +	}
> +	return pfd;
> +}
> +
> +int bpf_program__attach_tracepoint(struct bpf_program *prog,
> +				   const char *tp_category,
> +				   const char *tp_name)
> +{
> +	char errmsg[STRERR_BUFSIZE];
> +	int pfd, err;
> +
> +	pfd = perf_event_open_tracepoint(tp_category, tp_name);
> +	if (pfd < 0) {
> +		pr_warning("program '%s': failed to create tracepoint '%s/%s' perf event: %s\n",
> +			   bpf_program__title(prog, false),
> +			   tp_category, tp_name,
> +			   libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
> +		return pfd;
> +	}
> +	err = bpf_program__attach_perf_event(prog, pfd);
> +	if (err) {
> +		libbpf_perf_event_disable_and_close(pfd);
> +		pr_warning("program '%s': failed to attach to tracepoint '%s/%s': %s\n",
> +			   bpf_program__title(prog, false),
> +			   tp_category, tp_name,
> +			   libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
> +		return err;
> +	}
> +	return pfd;
> +}
> +
> +int bpf_program__attach_raw_tracepoint(struct bpf_program *prog,
> +				       const char *tp_name)
> +{
> +	char errmsg[STRERR_BUFSIZE];
> +	int bpf_fd, pfd;
> +
> +	bpf_fd = bpf_program__fd(prog);
> +	if (bpf_fd < 0) {
> +		pr_warning("program '%s': can't attach before loaded\n",
> +			   bpf_program__title(prog, false));
> +		return -EINVAL;
> +	}
> +	pfd = bpf_raw_tracepoint_open(tp_name, bpf_fd);
> +	if (pfd < 0) {
> +		pr_warning("program '%s': failed to attach to raw tracepoint '%s': %s\n",
> +			   bpf_program__title(prog, false), tp_name,
> +			   libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
> +		return pfd;
> +	}
> +	return pfd;
> +}
> +
>  enum bpf_perf_event_ret
>  bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
>  			   void **copy_mem, size_t *copy_size,
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index a7264f06aa5f..bf7020a565c6 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -176,6 +176,11 @@ LIBBPF_API int bpf_program__attach_uprobe(struct bpf_program *prog,
>  					  pid_t pid,
>  					  const char *binary_path,
>  					  size_t func_offset);
> +LIBBPF_API int bpf_program__attach_tracepoint(struct bpf_program *prog,
> +					      const char *tp_category,
> +					      const char *tp_name);
> +LIBBPF_API int bpf_program__attach_raw_tracepoint(struct bpf_program *prog,
> +						  const char *tp_name);
>  
>  struct bpf_insn;
>  
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index 1a982c2e1751..2382fbda4cbb 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -174,6 +174,8 @@ LIBBPF_0.0.4 {
>  		bpf_object__load_xattr;
>  		bpf_program__attach_kprobe;
>  		bpf_program__attach_perf_event;
> +		bpf_program__attach_raw_tracepoint;
> +		bpf_program__attach_tracepoint;
>  		bpf_program__attach_uprobe;
>  		libbpf_num_possible_cpus;
>  		libbpf_perf_event_disable_and_close;
> -- 
> 2.17.1
> 

* Re: [PATCH bpf-next 5/7] selftests/bpf: switch test to new attach_perf_event API
  2019-06-20 23:09 ` [PATCH bpf-next 5/7] selftests/bpf: switch test to new attach_perf_event API Andrii Nakryiko
@ 2019-06-21  0:08   ` Stanislav Fomichev
  2019-06-21  4:50     ` Andrii Nakryiko
  0 siblings, 1 reply; 16+ messages in thread
From: Stanislav Fomichev @ 2019-06-21  0:08 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: andrii.nakryiko, ast, daniel, netdev, bpf, kernel-team

On 06/20, Andrii Nakryiko wrote:
> Use new bpf_program__attach_perf_event() in test previously relying on
> direct ioctl manipulations.
Maybe use the new detach/disable routine at the end of
test_stacktrace_build_id_nmi as well?
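
I.e. roughly (sketch; assuming the cleanup labels currently do the
ioctl(PERF_EVENT_IOC_DISABLE) + close() pair by hand):

	libbpf_perf_event_disable_and_close(pmu_fd);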

> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
>  .../bpf/prog_tests/stacktrace_build_id_nmi.c     | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
> index 1c1a2f75f3d8..1bbdb0b82ac5 100644
> --- a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
> +++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
> @@ -17,6 +17,7 @@ static __u64 read_perf_max_sample_freq(void)
>  void test_stacktrace_build_id_nmi(void)
>  {
>  	int control_map_fd, stackid_hmap_fd, stackmap_fd, stack_amap_fd;
> +	const char *prog_name = "tracepoint/random/urandom_read";
>  	const char *file = "./test_stacktrace_build_id.o";
>  	int err, pmu_fd, prog_fd;
>  	struct perf_event_attr attr = {
> @@ -25,6 +26,7 @@ void test_stacktrace_build_id_nmi(void)
>  		.config = PERF_COUNT_HW_CPU_CYCLES,
>  	};
>  	__u32 key, previous_key, val, duration = 0;
> +	struct bpf_program *prog;
>  	struct bpf_object *obj;
>  	char buf[256];
>  	int i, j;
> @@ -39,6 +41,10 @@ void test_stacktrace_build_id_nmi(void)
>  	if (CHECK(err, "prog_load", "err %d errno %d\n", err, errno))
>  		return;
>  
> +	prog = bpf_object__find_program_by_title(obj, prog_name);
> +	if (CHECK(!prog, "find_prog", "prog '%s' not found\n", prog_name))
> +		goto close_prog;
> +
>  	pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
>  			 0 /* cpu 0 */, -1 /* group id */,
>  			 0 /* flags */);
> @@ -47,16 +53,10 @@ void test_stacktrace_build_id_nmi(void)
>  		  pmu_fd, errno))
>  		goto close_prog;
>  
> -	err = ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0);
> -	if (CHECK(err, "perf_event_ioc_enable", "err %d errno %d\n",
> -		  err, errno))
> +	err = bpf_program__attach_perf_event(prog, pmu_fd);
> +	if (CHECK(err, "attach_perf_event", "err %d\n", err))
>  		goto close_pmu;
>  
> -	err = ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);
> -	if (CHECK(err, "perf_event_ioc_set_bpf", "err %d errno %d\n",
> -		  err, errno))
> -		goto disable_pmu;
> -
>  	/* find map fds */
>  	control_map_fd = bpf_find_map(__func__, obj, "control_map");
>  	if (CHECK(control_map_fd < 0, "bpf_find_map control_map",
> -- 
> 2.17.1
> 

* Re: [PATCH bpf-next 2/7] libbpf: add ability to attach/detach BPF to perf event
  2019-06-21  0:01   ` Stanislav Fomichev
@ 2019-06-21  4:28     ` Andrii Nakryiko
  0 siblings, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2019-06-21  4:28 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann, Networking,
	bpf, Kernel Team

On Thu, Jun 20, 2019 at 5:01 PM Stanislav Fomichev <sdf@fomichev.me> wrote:
>
> On 06/20, Andrii Nakryiko wrote:
> > bpf_program__attach_perf_event allows to attach BPF program to existing
> > perf event, providing most generic and most low-level way to attach BPF
> > programs.
> >
> > libbpf_perf_event_disable_and_close API is added to disable and close
> > existing perf event by its FD.
> >
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > ---
> >  tools/lib/bpf/libbpf.c   | 41 ++++++++++++++++++++++++++++++++++++++++
> >  tools/lib/bpf/libbpf.h   |  4 ++++
> >  tools/lib/bpf/libbpf.map |  2 ++
> >  3 files changed, 47 insertions(+)
> >
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 8ce3beba8551..2bb1fa008be3 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -32,6 +32,7 @@
> >  #include <linux/limits.h>
> >  #include <linux/perf_event.h>
> >  #include <linux/ring_buffer.h>
> > +#include <sys/ioctl.h>
> >  #include <sys/stat.h>
> >  #include <sys/types.h>
> >  #include <sys/vfs.h>
> > @@ -3928,6 +3929,46 @@ int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
> >       return 0;
> >  }
> >
> [..]
> > +int libbpf_perf_event_disable_and_close(int pfd)
> nit: why not call it libbpf_perf_event_detach[_and_close]?
> It's usually attach/detach.

I think detach is actually confusing for perf events. Here's what you
do for tracing:

1. open perf event
2. enable perf event
3. attach BPF program to perf event
...
4. <is there a way to detach BPF program?>
5. disable perf event
6. close perf event

So for the event we have open/close and enable/disable; for the BPF
program we have attach (and, right now, only auto-detach on close).

It seems like there should be an explicit "detach this BPF program from
the perf event without killing the event itself" operation, but I haven't
found one.

But my point is that for the event itself, open/close and enable/disable
seem very complementary.
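
To make it concrete, the whole lifecycle from user space today looks
roughly like this (just a sketch, attr setup and error handling
omitted):

	pfd = syscall(__NR_perf_event_open, &attr, pid, cpu,
		      -1 /* group_fd */, PERF_FLAG_FD_CLOEXEC); /* 1. open */
	ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0);       /* 2. enable */
	ioctl(pfd, PERF_EVENT_IOC_SET_BPF, bpf_fd); /* 3. attach BPF program */
	/* 4. no explicit detach today */
	ioctl(pfd, PERF_EVENT_IOC_DISABLE, 0);      /* 5. disable */
	close(pfd);                                 /* 6. close */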


>
> > +{
> > +     int err;
> > +
> > +     if (pfd < 0)
> > +             return 0;
> > +
> > +     err = ioctl(pfd, PERF_EVENT_IOC_DISABLE, 0);
> > +     close(pfd);
> > +     return err;
> > +}
> > +
> > +int bpf_program__attach_perf_event(struct bpf_program *prog, int pfd)
> > +{
> > +     char errmsg[STRERR_BUFSIZE];
> > +     int bpf_fd, err;
> > +
> > +     bpf_fd = bpf_program__fd(prog);
> > +     if (bpf_fd < 0) {
> > +             pr_warning("program '%s': can't attach before loaded\n",
> > +                        bpf_program__title(prog, false));
> > +             return -EINVAL;
> > +     }
> > +     if (ioctl(pfd, PERF_EVENT_IOC_SET_BPF, bpf_fd) < 0) {
> > +             err = -errno;
> > +             pr_warning("program '%s': failed to attach to pfd %d: %s\n",
> > +                        bpf_program__title(prog, false), pfd,
> > +                        libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
> > +             return err;
> > +     }
> > +     if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
> > +             err = -errno;
> > +             pr_warning("program '%s': failed to enable pfd %d: %s\n",
> > +                        bpf_program__title(prog, false), pfd,
> > +                        libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
> > +             return err;
> > +     }
> > +     return 0;
> > +}
> > +
> >  enum bpf_perf_event_ret
> >  bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
> >                          void **copy_mem, size_t *copy_size,
> > diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> > index d639f47e3110..76db1bbc0dac 100644
> > --- a/tools/lib/bpf/libbpf.h
> > +++ b/tools/lib/bpf/libbpf.h
> > @@ -165,6 +165,10 @@ LIBBPF_API int bpf_program__pin(struct bpf_program *prog, const char *path);
> >  LIBBPF_API int bpf_program__unpin(struct bpf_program *prog, const char *path);
> >  LIBBPF_API void bpf_program__unload(struct bpf_program *prog);
> >
> > +LIBBPF_API int libbpf_perf_event_disable_and_close(int pfd);
> > +LIBBPF_API int bpf_program__attach_perf_event(struct bpf_program *prog,
> > +                                           int pfd);
> > +
> >  struct bpf_insn;
> >
> >  /*
> > diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> > index 2c6d835620d2..d27406982b5a 100644
> > --- a/tools/lib/bpf/libbpf.map
> > +++ b/tools/lib/bpf/libbpf.map
> > @@ -172,5 +172,7 @@ LIBBPF_0.0.4 {
> >               btf_dump__new;
> >               btf__parse_elf;
> >               bpf_object__load_xattr;
> > +             bpf_program__attach_perf_event;
> >               libbpf_num_possible_cpus;
> > +             libbpf_perf_event_disable_and_close;
> >  } LIBBPF_0.0.3;
> > --
> > 2.17.1
> >

* Re: [PATCH bpf-next 3/7] libbpf: add kprobe/uprobe attach API
  2019-06-21  0:04   ` Stanislav Fomichev
@ 2019-06-21  4:34     ` Andrii Nakryiko
  0 siblings, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2019-06-21  4:34 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann, Networking,
	bpf, Kernel Team

On Thu, Jun 20, 2019 at 5:04 PM Stanislav Fomichev <sdf@fomichev.me> wrote:
>
> On 06/20, Andrii Nakryiko wrote:
> > Add ability to attach to kernel and user probes and retprobes.
> > Implementation depends on perf event support for kprobes/uprobes.
> >
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > ---
> >  tools/lib/bpf/libbpf.c   | 207 +++++++++++++++++++++++++++++++++++++++
> >  tools/lib/bpf/libbpf.h   |   8 ++
> >  tools/lib/bpf/libbpf.map |   2 +
> >  3 files changed, 217 insertions(+)
> >
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 2bb1fa008be3..11329e05530e 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -3969,6 +3969,213 @@ int bpf_program__attach_perf_event(struct bpf_program *prog, int pfd)
> >       return 0;
> >  }
> >
> > +static int parse_uint(const char *buf)
> > +{
> > +     int ret;
> > +
> > +     errno = 0;
> > +     ret = (int)strtol(buf, NULL, 10);
> > +     if (errno) {
> > +             ret = -errno;
> > +             pr_debug("failed to parse '%s' as unsigned int\n", buf);
> > +             return ret;
> > +     }
> > +     if (ret < 0) {
> > +             pr_debug("failed to parse '%s' as unsigned int\n", buf);
> > +             return -EINVAL;
> > +     }
> > +     return ret;
> > +}
> > +
> > +static int parse_uint_from_file(const char* file)
> > +{
> > +     char buf[STRERR_BUFSIZE];
> > +     int fd, ret;
> > +
> > +     fd = open(file, O_RDONLY);
> > +     if (fd < 0) {
> > +             ret = -errno;
> > +             pr_debug("failed to open '%s': %s\n", file,
> > +                      libbpf_strerror_r(ret, buf, sizeof(buf)));
> > +             return ret;
> > +     }
> > +     ret = read(fd, buf, sizeof(buf));
> > +     close(fd);
> > +     if (ret < 0) {
> > +             ret = -errno;
> Is -errno still valid here after a close(fd) above? Do we have any
> guarantee of errno preservation when we do another syscall?

Good catch! No, close() can change errno. Fixed. Also fixed for
parse_config_from_file below.

>
> > +             pr_debug("failed to read '%s': %s\n", file,
> > +                     libbpf_strerror_r(ret, buf, sizeof(buf)));
> > +             return ret;
> > +     }
> > +     if (ret == 0 || ret >= sizeof(buf)) {
> > +             buf[sizeof(buf) - 1] = 0;
> > +             pr_debug("unexpected input from '%s': '%s'\n", file, buf);
> > +             return -EINVAL;
> > +     }
> > +     return parse_uint(buf);
> > +}
> > +
> > +static int determine_kprobe_perf_type(void)
> > +{
> > +     const char *file = "/sys/bus/event_source/devices/kprobe/type";
> > +     return parse_uint_from_file(file);
> > +}
> > +
> > +static int determine_uprobe_perf_type(void)
> > +{
> > +     const char *file = "/sys/bus/event_source/devices/uprobe/type";
> > +     return parse_uint_from_file(file);
> > +}
> > +
> > +static int parse_config_from_file(const char *file)
> > +{
> > +     char buf[STRERR_BUFSIZE];
> > +     int fd, ret;
> > +
> > +     fd = open(file, O_RDONLY);
> > +     if (fd < 0) {
> > +             ret = -errno;
> > +             pr_debug("failed to open '%s': %s\n", file,
> > +                      libbpf_strerror_r(ret, buf, sizeof(buf)));
> > +             return ret;
> > +     }
> > +     ret = read(fd, buf, sizeof(buf));
> > +     close(fd);
> > +     if (ret < 0) {
> > +             ret = -errno;
> > +             pr_debug("failed to read '%s': %s\n", file,
> > +                     libbpf_strerror_r(ret, buf, sizeof(buf)));
> > +             return ret;
> > +     }
> > +     if (ret == 0 || ret >= sizeof(buf)) {
> > +             buf[sizeof(buf) - 1] = 0;
> > +             pr_debug("unexpected input from '%s': '%s'\n", file, buf);
> > +             return -EINVAL;
> > +     }
> > +     if (strncmp(buf, "config:", 7)) {
> > +             pr_debug("expected 'config:' prefix, found '%s'\n", buf);
> > +             return -EINVAL;
> > +     }
> > +     return parse_uint(buf + 7);
> > +}
> > +

<snip>

* Re: [PATCH bpf-next 4/7] libbpf: add tracepoint/raw tracepoint attach API
  2019-06-21  0:07   ` Stanislav Fomichev
@ 2019-06-21  4:42     ` Andrii Nakryiko
  0 siblings, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2019-06-21  4:42 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann, Networking,
	bpf, Kernel Team

On Thu, Jun 20, 2019 at 5:07 PM Stanislav Fomichev <sdf@fomichev.me> wrote:
>
> On 06/20, Andrii Nakryiko wrote:
> > Add APIs allowing to attach BPF program to kernel tracepoints. Raw
> > tracepoint attach API is also added for uniform per-BPF-program API,
> > but is mostly a wrapper around existing bpf_raw_tracepoint_open call.
> >
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > ---
> >  tools/lib/bpf/libbpf.c   | 99 ++++++++++++++++++++++++++++++++++++++++
> >  tools/lib/bpf/libbpf.h   |  5 ++
> >  tools/lib/bpf/libbpf.map |  2 +
> >  3 files changed, 106 insertions(+)
> >
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 11329e05530e..cefe67ba160b 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -4176,6 +4176,105 @@ int bpf_program__attach_uprobe(struct bpf_program *prog, bool retprobe,
> >       return pfd;
> >  }
> >
> > +static int determine_tracepoint_id(const char* tp_category, const char* tp_name)
> > +{
> > +     char file[PATH_MAX];
> > +     int ret;
> > +
> > +     ret = snprintf(file, sizeof(file),
> > +                    "/sys/kernel/debug/tracing/events/%s/%s/id",
> > +                    tp_category, tp_name);
> > +     if (ret < 0)
> > +             return -errno;
> > +     if (ret >= sizeof(file)) {
> > +             pr_debug("tracepoint %s/%s path is too long\n",
> > +                      tp_category, tp_name);
> > +             return -E2BIG;
> > +     }
> > +     return parse_uint_from_file(file);
> > +}
> > +
> > +static int perf_event_open_tracepoint(const char* tp_category,
> > +                                   const char* tp_name)
> > +{
> > +     struct perf_event_attr attr = {};
> > +     char errmsg[STRERR_BUFSIZE];
> > +     int tp_id, pfd, err;
> > +
> [..]
> > +     tp_id = determine_tracepoint_id(tp_category, tp_name);
> Why not assign to attr.config directly here?

It's used in a few places in the error-handling branch, so it would look
a bit weird and make the lines longer.


> You can move all other constants to the initialization as well:
>
> struct perf_event_attr attr = {
>         .type = PERF_TYPE_TRACEPOINT,
>         .size = sizeof(struct perf_event_attr),
> };
>
> attr.config = determine_tracepoint_id(...);
>
> (I guess that's a matter of style, but something to consider).

Yeah. It seems like explicit initialization of each member of
attribute structs is prevalent in libbpf.c. I also don't want to have
some fields initialized at the declaration site and others in code.
Better to group all initialization together.

>
> > +     if (tp_id < 0){
> > +             pr_warning("failed to determine tracepoint '%s/%s' perf ID: %s\n",
> > +                        tp_category, tp_name,
> > +                        libbpf_strerror_r(tp_id, errmsg, sizeof(errmsg)));
> > +             return tp_id;
> > +     }
> > +
> [..]
> > +     memset(&attr, 0, sizeof(attr));
> Not needed since you do attr = {}; above?

Yep, removed.

>
> > +     attr.type = PERF_TYPE_TRACEPOINT;
> > +     attr.size = sizeof(attr);
> > +     attr.config = tp_id;
> > +
> > +     pfd = syscall( __NR_perf_event_open, &attr, -1 /* pid */, 0 /* cpu */,
> > +                     -1 /* group_fd */, PERF_FLAG_FD_CLOEXEC);
> > +     if (pfd < 0) {
> > +             err = -errno;
> > +             pr_warning("tracepoint '%s/%s' perf_event_open() failed: %s\n",
> > +                        tp_category, tp_name,
> > +                        libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
> > +             return err;
> > +     }
> > +     return pfd;
> > +}
> > +

<snip>

* Re: [PATCH bpf-next 5/7] selftests/bpf: switch test to new attach_perf_event API
  2019-06-21  0:08   ` Stanislav Fomichev
@ 2019-06-21  4:50     ` Andrii Nakryiko
  0 siblings, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2019-06-21  4:50 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann, Networking,
	bpf, Kernel Team

On Thu, Jun 20, 2019 at 5:08 PM Stanislav Fomichev <sdf@fomichev.me> wrote:
>
> On 06/20, Andrii Nakryiko wrote:
> > Use new bpf_program__attach_perf_event() in test previously relying on
> > direct ioctl manipulations.
> Maybe use the new detach/disable routine at the end of
> test_stacktrace_build_id_nmi as well?
>

yeah, totally, missed that.

> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > ---
> >  .../bpf/prog_tests/stacktrace_build_id_nmi.c     | 16 ++++++++--------
> >  1 file changed, 8 insertions(+), 8 deletions(-)
> >

<snip>
